Artificial intelligence allows computers to emulate aspects of human problem-solving by learning from historical data and reasoning about potential future events. This section examines three classic techniques from AI and machine learning: A* search, decision tree pruning, and the minimax algorithm.

A* search

A* (pronounced "A-star") is a graph traversal and path search method utilised in various computer science disciplines due to its completeness, optimality, and efficiency.

A major practical drawback of A* is its memory use, since it keeps all generated nodes in memory. Thus, in practical travel-routing systems, it is often outperformed by algorithms that can pre-process the graph to achieve higher performance, as well as by memory-bounded techniques; nonetheless, in many circumstances A* is still the best answer. The algorithm was first published in 1968 by Peter Hart, Nils Nilsson, and Bertram Raphael of Stanford Research Institute (now SRI International). It can be seen as an extension of Dijkstra's algorithm: A* achieves better performance by using heuristics to guide its search.

Unlike Dijkstra's algorithm, A* finds only the shortest path from a source to a single specified goal, not the shortest-path tree from the source to all possible goals. This is a necessary trade-off for using a goal-directed heuristic: since Dijkstra's method constructs the entire shortest-path tree, every node is effectively a goal, and no goal-directed heuristic is possible.
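The algorithm can be sketched in a few lines of Python. The adjacency-list layout, the toy graph, and the zero heuristic below are illustrative assumptions, not part of the algorithm's specification; any admissible heuristic `h` can be substituted.

```python
import heapq

def a_star(graph, start, goal, h):
    """A* search. `graph` maps a node to [(neighbour, edge_cost), ...];
    `h(n)` is an admissible heuristic estimating the cost from n to goal."""
    open_heap = [(h(start), 0, start)]          # entries are (f = g + h, g, node)
    best_g = {start: 0}                         # cheapest known cost to each node
    parent = {start: None}
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:                        # reconstruct the path back to start
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1], g
        if g > best_g.get(node, float("inf")):
            continue                            # stale heap entry, skip it
        for neighbour, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(neighbour, float("inf")):
                best_g[neighbour] = g2
                parent[neighbour] = node
                heapq.heappush(open_heap, (g2 + h(neighbour), g2, neighbour))
    return None, float("inf")                   # goal unreachable

# Toy graph; with h = 0 the search degenerates to Dijkstra's algorithm.
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)], "C": [("D", 1)]}
path, cost = a_star(graph, "A", "D", lambda n: 0)
print(path, cost)  # ['A', 'B', 'C', 'D'] 3
```

With an informative heuristic (for example, straight-line distance on a road network), A* expands far fewer nodes than Dijkstra while still returning the optimal path, provided the heuristic never overestimates the true cost.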

Decision tree pruning

In machine learning and search algorithms, pruning is a data compression approach that decreases the size of decision trees by removing sections of the tree that are non-critical or redundant for classification. As a result, pruning reduces the final classifier's complexity, improving predictive accuracy by reducing overfitting.

In a decision tree algorithm, the question of the optimal size of the final tree arises. However, it is difficult to determine when the algorithm should stop growing the tree, as it is impossible to predict whether adding a single additional node will significantly reduce error. This issue is referred to as the horizon effect. A popular method is to expand the tree until each node contains a small number of instances and then prune away nodes that contribute no further information.

Pruning should reduce the size of a learning tree without diminishing its predictive accuracy, as measured on a cross-validation set. There are a variety of tree-pruning approaches that use different measures to optimise performance.
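One common post-pruning approach, reduced-error pruning, can be sketched as follows. The tree layout (nested dicts over binary features), the toy tree, and the validation data below are assumptions made purely for illustration: a subtree is collapsed to its majority-class leaf whenever doing so does not hurt accuracy on the held-out validation set.

```python
def predict(node, x):
    """Follow the tree down to a leaf; a leaf is just a class label."""
    while isinstance(node, dict):
        node = node["left"] if x[node["feature"]] == 0 else node["right"]
    return node

def accuracy(node, data):
    return sum(predict(node, x) == y for x, y in data) / len(data)

def prune(node, val):
    """Reduced-error pruning, bottom-up: replace a subtree by its
    majority-class leaf if validation accuracy does not decrease."""
    if not isinstance(node, dict) or not val:
        return node
    node["left"]  = prune(node["left"],  [(x, y) for x, y in val if x[node["feature"]] == 0])
    node["right"] = prune(node["right"], [(x, y) for x, y in val if x[node["feature"]] == 1])
    leaf_acc = sum(y == node["majority"] for _, y in val) / len(val)
    if leaf_acc >= accuracy(node, val):
        return node["majority"]               # collapse the whole subtree
    return node

# Toy tree: the inner split on feature 1 fits noise and should be pruned away.
tree = {"feature": 0, "majority": 1, "right": 1,
        "left": {"feature": 1, "majority": 0, "left": 0, "right": 1}}
val = [((0, 0), 0), ((0, 1), 0), ((1, 0), 1)]  # (features, label) pairs
pruned = prune(tree, val)
print(pruned["left"])  # the noisy subtree collapses to the leaf 0
```

Real libraries offer more principled variants; scikit-learn, for instance, implements minimal cost-complexity pruning, which trades training accuracy against tree size via a penalty parameter rather than a validation set.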

Minimax algorithm

In artificial intelligence, decision theory, game theory, statistics, and philosophy, minimax is a decision rule for minimising the worst-case (maximum) loss. When dealing with gains, maximising the smallest gain is referred to as "maximin". The rule has been extended to more complex games and to general decision-making under uncertainty.

A simplified version of the minimax method is described below for games such as tic-tac-toe, in which each player can win, lose, or draw. If player A can win in a single move, that is their best move. However, if player B knows that one move will lead to a position in which player A can win in one move, while another move will lead to a position in which player A can achieve at most a draw, then the best option for player B is the one leading to the draw. Determining the "best" move late in the game is straightforward; the minimax method finds the optimal move by working backwards from the game's conclusion.
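This backward reasoning can be sketched for tic-tac-toe. The board encoding (a flat list of nine cells) and the scoring convention (+1 for an X win, -1 for an O win, 0 for a draw) are assumptions chosen for illustration.

```python
def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    lines = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]
    for a, b, c in lines:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Score the position for X (+1 win, 0 draw, -1 loss), assuming both
    sides play optimally; `player` is the side to move."""
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if " " not in board:
        return 0                                 # board full: draw
    scores = []
    for i, cell in enumerate(board):
        if cell == " ":
            board[i] = player                    # try the move...
            scores.append(minimax(board, "O" if player == "X" else "X"))
            board[i] = " "                       # ...then undo it
    return max(scores) if player == "X" else min(scores)

def best_move(board, player):
    """Pick the legal move with the best minimax value for `player`."""
    better = max if player == "X" else min
    def score(i):
        board[i] = player
        s = minimax(board, "O" if player == "X" else "X")
        board[i] = " "
        return s
    return better((i for i, c in enumerate(board) if c == " "), key=score)

# X has two in a row on the top edge; the winning move is cell 2.
board = list("XX OO    ")
print(best_move(board, "X"))  # 2
```

For games with larger state spaces, the same recursion is made tractable with a depth cutoff plus a heuristic evaluation function, and with alpha-beta pruning to skip branches that cannot affect the result.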
