A deep belief network (DBN) is a stack of layered Restricted Boltzmann Machines (RBMs). It is a generative model that Geoffrey Hinton proposed in 2006.

A deep belief network is often described as a set of stacked RBMs: it is composed of many smaller, unsupervised neural networks. What all deep belief networks have in common is that, even though successive layers are connected, there are no links between units within the same layer.

Geoff Hinton, one of the originators of the approach, describes stacked RBMs as being trained in a "greedy", layer-wise manner and deep belief networks as models that "extract a deep hierarchical representation of training data." In general, this unsupervised machine learning model shows how engineers can build less hand-structured, more robust systems in which little of the data is labelled and the model has to assemble its results from random sampling and iterative training.

Purpose

A DBN can be used for unsupervised learning problems such as reducing the dimensionality of features, and for supervised learning tasks such as building classification or regression models. Training a DBN involves two processes:

  • layer-by-layer training and 
  • fine-tuning. 

Layer-by-layer training refers to the unsupervised training of each RBM in turn. Fine-tuning, by contrast, uses error back-propagation to adjust the parameters of the whole DBN after the unsupervised training is complete.
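As a rough illustration of this two-stage recipe, the sketch below stacks scikit-learn's BernoulliRBM estimators and fits a logistic-regression head on top. The layer sizes and hyperparameters are illustrative assumptions, and the supervised stage here only trains the classifier head rather than back-propagating through the whole stack, so it approximates, rather than reproduces, full fine-tuning.

# A minimal sketch, assuming scikit-learn is available: greedy layer-by-layer
# pretraining of two RBMs, followed by a supervised classifier on the features.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline

X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale pixel intensities to [0, 1] for the Bernoulli units

dbn_like = Pipeline([
    ("rbm1", BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=15, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=15, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),  # supervised stage: head only, not full backprop
])
dbn_like.fit(X, y)
print("training accuracy:", dbn_like.score(X, y))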

Learning process

When trained on a set of examples without supervision, a DBN can learn to probabilistically reconstruct its inputs; the layers then act as feature detectors. After this learning step, a DBN can be trained further with supervision to perform classification. A DBN can also be viewed as a composition of simple, unsupervised networks such as RBMs or autoencoders, in which the hidden layer of each sub-network serves as the visible layer of the next. An RBM itself is an undirected, energy-based model with a "visible" input layer, a "hidden" layer, and connections between the layers but not within them.
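To make the RBM building block concrete, here is a compact NumPy sketch of a binary RBM showing its energy function and one contrastive-divergence (CD-1) update. The class, layer sizes, and learning rate are illustrative assumptions, not a reference implementation.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_visible, n_hidden):
        self.W = rng.normal(0, 0.01, size=(n_visible, n_hidden))  # weights between the two layers
        self.b = np.zeros(n_visible)  # visible biases
        self.c = np.zeros(n_hidden)   # hidden biases

    def energy(self, v, h):
        # E(v, h) = -v.b - h.c - v.W.h ; lower energy means a more probable configuration
        return -v @ self.b - h @ self.c - v @ self.W @ h

    def sample_h(self, v):
        p = sigmoid(v @ self.W + self.c)   # P(h=1 | v); no hidden-hidden connections
        return p, (rng.random(p.shape) < p).astype(float)

    def sample_v(self, h):
        p = sigmoid(h @ self.W.T + self.b)  # P(v=1 | h); no visible-visible connections
        return p, (rng.random(p.shape) < p).astype(float)

    def cd1_update(self, v0, lr=0.1):
        # Contrastive divergence: positive phase on the data, negative phase
        # on a one-step Gibbs reconstruction.
        ph0, h0 = self.sample_h(v0)
        pv1, v1 = self.sample_v(h0)
        ph1, _ = self.sample_h(v1)
        self.W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
        self.b += lr * (v0 - v1)
        self.c += lr * (ph0 - ph1)

# Tiny usage example on a single binary vector
rbm = RBM(n_visible=6, n_hidden=3)
v = np.array([1, 0, 1, 1, 0, 0], dtype=float)
for _ in range(100):
    rbm.cd1_update(v)
_, h = rbm.sample_h(v)
print("energy of (v, h):", rbm.energy(v, h))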

How does it work?

A greedy, layer-by-layer algorithm is used to train deep belief networks. Building the network up one layer at a time, it learns the top-down generative weights; these weights determine how the variables in one layer depend on the variables in the layer above. To draw samples from a trained DBN, several steps of Gibbs sampling are run on the top two hidden layers. This stage effectively draws a sample from the RBM defined by those two layers.

A single pass of ancestral sampling through the rest of the model is then used to draw a sample from the visible units. Conversely, the values of the hidden variables in every layer can be inferred with a single, bottom-up pass. Greedy pretraining begins with an observed data vector in the bottom layer, and fine-tuning is then used to adjust the generative weights in the opposite, top-down direction.
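The following NumPy sketch illustrates the two directions described above, with randomly initialized placeholder weights standing in for a pretrained model: Gibbs sampling in the top RBM followed by a single top-down ancestral pass to generate a visible sample, and a bottom-up pass to infer hidden activities from data. Layer sizes are assumptions and biases are omitted for brevity.

import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bernoulli(p):
    return (rng.random(p.shape) < p).astype(float)

# Placeholder generative weights: visible(8) - h1(6) - h2(4) - h3(3), top pair is the RBM
W1 = rng.normal(0, 0.1, (8, 6))
W2 = rng.normal(0, 0.1, (6, 4))
W3 = rng.normal(0, 0.1, (4, 3))

# 1) Several steps of Gibbs sampling between the top two hidden layers
h2 = bernoulli(np.full(4, 0.5))
for _ in range(50):
    h3 = bernoulli(sigmoid(h2 @ W3))
    h2 = bernoulli(sigmoid(h3 @ W3.T))

# 2) Single top-down ancestral pass through the remaining layers to the visible units
h1 = bernoulli(sigmoid(h2 @ W2.T))
v = bernoulli(sigmoid(h1 @ W1.T))
print("generated visible sample:", v)

# 3) Bottom-up recognition pass: infer hidden activities from an observed data vector
v_obs = bernoulli(np.full(8, 0.5))
h1_up = sigmoid(v_obs @ W1)
h2_up = sigmoid(h1_up @ W2)
print("inferred top-layer activities:", sigmoid(h2_up @ W3))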

Conclusion

DBNs are built by stacking numerous independent, unsupervised networks and using the hidden layer of each network as the input to the next; typically, RBMs or autoencoders serve in this capacity. The final objective is a fast, unsupervised training procedure that relies on contrastive divergence for each sub-network.

In addition, DBNs were a response to the problems encountered when training classic deep, layered neural networks, such as slow learning, getting stuck in local minima owing to poor parameter selection, and requiring very large training datasets. Overall, there are many exciting ways to use and implement DBNs in real-world situations and applications (e.g., electroencephalography, drug discovery).

