Radial basis function (RBF) networks are widely used artificial neural networks for function approximation. They are distinguished from other neural networks by their universal approximation property and faster learning rate. Function approximation, time series forecasting, classification, and system control are just a few of the applications of RBF networks.

Broomhead and Lowe introduced the RBF network in 1988. Because RBF networks have only one hidden layer, their optimization objectives converge much faster. Moreover, despite having only one hidden layer, RBF networks have been shown to be universal approximators.

The network's output is a linear combination of the radial basis functions of the inputs and the neuron parameters. Broomhead and Lowe, both researchers at the Royal Signals and Radar Establishment, first proposed RBF networks in their 1988 paper.

RBF networks have numerous applications, including function approximation, interpolation, classification, and time series prediction. These serve various industrial objectives, such as stock price prediction, anomaly detection in data, and fraud detection in financial transactions.

Architecture

An RBF network has three layers: 

  • an input layer, 
  • a hidden layer, and 
  • an output layer. 

The hidden layer consists of hidden neurons whose activation function is Gaussian. The hidden layer generates a signal corresponding to an input vector presented at the input layer, and the network produces its response to this signal.
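As a concrete illustration, the following is a minimal sketch of this forward pass in Python with NumPy. The function name rbf_forward and the Gaussian width parameter gamma are our own choices for illustration, not part of any standard API.

    import numpy as np

    def rbf_forward(x, centres, gamma, weights):
        """Forward pass of a minimal RBF network.

        x       : (d,)   input feature vector (input layer)
        centres : (m, d) centres of the m Gaussian hidden neurons
        gamma   : width of the Gaussian, phi(r) = exp(-gamma * r**2)
        weights : (m, k) linear output weights for k output nodes
        """
        # Hidden layer: each neuron fires according to the Gaussian of
        # its squared distance to the input vector.
        sq_dists = np.sum((centres - x) ** 2, axis=1)
        hidden = np.exp(-gamma * sq_dists)
        # Output layer: a plain linear combination of the hidden activations.
        return hidden @ weights

For example, rbf_forward(np.zeros(2), np.array([[0.0, 0.0], [1.0, 1.0]]), 1.0, np.ones((2, 1))) evaluates a two-neuron network at the origin.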

Design Considerations

In contrast to the multilayer perceptron (MLP), RBF networks are local approximators of nonlinear input-output mappings. Their main advantages are a shorter training phase and lower sensitivity to the order in which training data are presented. However, achieving a smooth mapping often requires many radial basis functions to span the input space, which hampers many practical applications.

The RBF network has only one hidden layer, and the number and nature of the basis functions can be decided online while learning takes place. The number of neurons in the input layer equals the dimension of the feature vector; similarly, the number of output-layer nodes corresponds to the number of classes.

RBF Networks and Genetic Algorithms

In the standard training procedure, the structure of an RBF network is chosen by trial and error, and the network parameters are determined in two stages: the first obtains the centres of the hidden-layer nodes with the k-means clustering algorithm; the second calculates the connection weights by simple linear regression.
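A sketch of this two-stage procedure, assuming scikit-learn's KMeans for stage one and an ordinary least-squares solve for stage two (the function name train_rbf and the width parameter gamma are illustrative):

    import numpy as np
    from sklearn.cluster import KMeans

    def train_rbf(X, y, m, gamma):
        # Stage 1: obtain the m hidden-node centres by k-means clustering.
        centres = KMeans(n_clusters=m, n_init=10).fit(X).cluster_centers_
        # Build the hidden-layer design matrix of Gaussian activations.
        sq_dists = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
        H = np.exp(-gamma * sq_dists)
        # Stage 2: connection weights by simple linear (least-squares) regression.
        weights, *_ = np.linalg.lstsq(H, y, rcond=None)
        return centres, weights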

Genetic algorithms (GAs), on the other hand, which serve as the foundation for developing the new method, are iterative stochastic methods that begin with a random population of candidate solutions. Individuals with the best traits are selected for reproduction, and their "chromosomes" are passed down to the next generation. New individuals are created by genetic "operators", such as crossover and mutation, that randomly combine and perturb existing ones.
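As a rough sketch of these mechanics (a generic GA loop, not the specific method the text alludes to), the code below evolves bitstring chromosomes under a caller-supplied fitness function; the population size, generation count, and mutation rate are assumed values:

    import random

    def genetic_search(fitness, pop_size=20, n_genes=8, generations=50):
        # Start from a random population of bitstring "chromosomes".
        pop = [[random.randint(0, 1) for _ in range(n_genes)]
               for _ in range(pop_size)]
        for _ in range(generations):
            # Selection: the fittest half is chosen for reproduction.
            pop.sort(key=fitness, reverse=True)
            parents = pop[: pop_size // 2]
            children = []
            while len(parents) + len(children) < pop_size:
                a, b = random.sample(parents, 2)
                # Crossover operator: splice two parent chromosomes.
                cut = random.randrange(1, n_genes)
                child = a[:cut] + b[cut:]
                # Mutation operator: occasionally flip a random gene.
                if random.random() < 0.1:
                    i = random.randrange(n_genes)
                    child[i] = 1 - child[i]
                children.append(child)
            pop = parents + children
        return max(pop, key=fitness)

In a structure-selection setting, the chromosome could encode, say, the number of hidden nodes, with fitness given by validation error of the trained network.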

Conclusion

Multilayer perceptrons and radial basis function networks are both universal approximators, realized as layered feed-forward nonlinear networks. It is therefore unsurprising that an RBF network can always successfully mimic a given MLP, and vice versa. RBF networks have been applied to many problems, though not as many as MLPs; examples include image processing, speech recognition, time-series analysis, adaptive equalization, radar point-source location, and medical diagnosis.

