Machine learning enables computers to learn from historical data and make predictions about future events. This section looks at several fascinating techniques from evolutionary machine learning: fitness approximation, the weasel program, and HyperNEAT.

Fitness approximation

The objective of fitness approximation is to approximate the objective or fitness function using data from numerical simulations or physical experiments. The machine learning models used for this purpose are called meta-models or surrogates, and evolutionary optimization based on approximated fitness evaluations is also referred to as surrogate-assisted evolutionary optimization. Within evolutionary optimization, fitness approximation is a subfield of data-driven evolutionary optimization.

In many real-world optimization problems, including engineering ones, optimization cost is driven mainly by the number of fitness function evaluations required to reach a satisfactory solution. Exploiting the knowledge accumulated during optimization is therefore essential to obtaining practical optimization algorithms, and it is natural to use that knowledge to build a model of the fitness function that helps decide which solutions are worth a real evaluation.
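As a rough illustration, here is a minimal sketch of this idea in Python. An archive of truly evaluated points feeds a simple radial-basis-function (RBF) interpolator that acts as the surrogate; each generation, candidates are pre-screened by the surrogate, and only the most promising few receive a real, expensive evaluation. The quadratic toy fitness function and all parameter values are illustrative assumptions, not a specific published algorithm.

import numpy as np

rng = np.random.default_rng(0)

def expensive_fitness(x):
    # Stand-in for a costly numerical simulation or physical experiment.
    return np.sum(x ** 2)

def rbf_predict(x, archive_x, archive_y, gamma=1.0):
    # The surrogate: interpolate fitness from archived true evaluations.
    d2 = np.sum((archive_x - x) ** 2, axis=1)
    w = np.exp(-gamma * d2)
    return np.dot(w, archive_y) / (np.sum(w) + 1e-12)

dim, pop, true_evals_per_gen = 5, 20, 3
parent = rng.normal(size=dim)
archive_x = parent[None, :]
archive_y = np.array([expensive_fitness(parent)])

for gen in range(30):
    # Generate candidate solutions by Gaussian mutation of the parent.
    candidates = parent + 0.3 * rng.normal(size=(pop, dim))
    # Pre-screen all candidates cheaply with the surrogate...
    scores = [rbf_predict(c, archive_x, archive_y) for c in candidates]
    # ...and spend real evaluations only on the most promising ones.
    for i in np.argsort(scores)[:true_evals_per_gen]:
        archive_x = np.vstack([archive_x, candidates[i]])
        archive_y = np.append(archive_y, expensive_fitness(candidates[i]))
    parent = archive_x[np.argmin(archive_y)]

print("best fitness found:", archive_y.min())

In practice, expensive_fitness would wrap the simulation or experiment, and the surrogate would be retrained and validated against fresh true evaluations as the archive grows.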

Weasel program

The weasel program, often known as Dawkins' weasel, is a thought experiment illustrated through several computer simulations. Its objective is to show that the mechanism driving evolutionary systems, random variation combined with non-random cumulative selection, does not depend on chance alone. Richard Dawkins devised the thought experiment and created the initial simulation; other programmers have since written several subsequent versions.

The program aims to illustrate how small character changes can be preserved and accumulated over time, producing a meaningful phrase in a relatively short amount of time. Incremental changes accumulate whenever there is a mechanism that selects among them, such as a person choosing desirable traits (artificial selection) or an environmental fitness criterion (natural selection). Because each child inherits a copy of its parent's traits, reproducing systems tend to preserve traits across generations. Selection acts on the copying errors among the children, causing some phrases to "survive" and others to "die".
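A minimal version of the weasel program, sketched below in Python, makes the mechanism concrete. The target phrase is the traditional one from Hamlet; the population size and per-character mutation rate are illustrative choices rather than Dawkins' exact values.

import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(phrase):
    # Count the characters that already match the target.
    return sum(a == b for a, b in zip(phrase, TARGET))

def mutate(phrase, rate=0.05):
    # Each child is a copy of the parent with occasional copying errors.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in phrase)

parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
generation = 0
while parent != TARGET:
    generation += 1
    children = [mutate(parent) for _ in range(100)]
    # Cumulative selection: the child closest to the target "survives".
    parent = max(children, key=fitness)

print(f"reached the target in {generation} generations")

Even though each individual mutation is random, a run typically reaches the 28-character target within a few hundred generations, because cumulative selection preserves every partial match instead of waiting for the whole phrase to appear by chance out of the 27^28 possible phrases.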

HyperNEAT

Kenneth Stanley's Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) is a generative encoding that evolves artificial neural networks (ANNs) using the NeuroEvolution of Augmenting Topologies (NEAT) algorithm.

The representation hypothesis behind HyperNEAT is that a good representation of an artificial neural network should be able to describe its connectivity pattern concisely. This type of description is called an encoding. The encoding used by HyperNEAT, compositional pattern-producing networks (CPPNs), can describe regular patterns such as symmetry, repetition, and repetition with variation, so HyperNEAT can evolve neural networks with these properties. The main implication of this capability is that HyperNEAT can efficiently evolve very large neural networks. These networks closely resemble connectivity patterns in the brain (repetitive, with many regularities and a few irregularities) and are generally much larger than those produced by previous neural learning techniques.

HyperNEAT's other distinctive and essential characteristic is that it can see the geometry of the problem domain. It is odd to think about, yet most neuroevolution algorithms (and neural learning algorithms in general) are utterly blind to domain geometry. For example, when a checkers board position is fed into an artificial neural network, the network has no idea which squares are adjacent to which, even though it must work that out if it is ever to understand the board's geometry. In contrast, when humans play checkers, we immediately perceive the board's geometry; we do not need to deduce it from countless example games. HyperNEAT works the same way: it genuinely perceives the geometry of its inputs (and outputs) and can exploit that geometry to considerably improve learning. Technically speaking, HyperNEAT computes the connectivity of its neural networks as a function of their geometry.
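A minimal sketch of this indirect, geometry-aware encoding is shown below. A small fixed function stands in for an evolved CPPN: it is queried with the coordinates of every pair of neurons on a grid (the substrate) and returns the weight of the connection between them. The particular function, grid, and threshold here are illustrative assumptions, not Stanley's implementation.

import math

def cppn(x1, y1, x2, y2):
    # Stand-in for an evolved CPPN: composing simple symmetric
    # functions yields regular, symmetric connectivity patterns.
    return math.sin(x1 - x2) * math.exp(-((y1 - y2) ** 2))

# The "substrate": a 3x3 grid of neurons placed at 2-D coordinates.
coords = [(x / 2.0, y / 2.0) for x in range(-1, 2) for y in range(-1, 2)]

weights = {}
for (x1, y1) in coords:
    for (x2, y2) in coords:
        w = cppn(x1, y1, x2, y2)
        # Express only sufficiently strong connections.
        if abs(w) > 0.2:
            weights[((x1, y1), (x2, y2))] = w

print(f"{len(weights)} connections generated from "
      f"{len(coords) ** 2} queried neuron pairs")

Because the connectivity is generated by a compact function of neuron coordinates rather than listed connection by connection, the same encoding scales to substrates with millions of potential connections, and any regularity in the function (symmetry, repetition) appears directly in the network's connectivity.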
