Explanation-Based Learning (EBL) is a strategy for improving supervised learning by exploiting available domain knowledge. As a result, the speed of learning, the confidence of learning, the precision of the learned concept, or a combination of these can be improved.
It is a method by which an intelligent system learns by observing examples. EBL systems are distinguished by their ability to produce justified generalisations from single training examples.
Explanation-based learning can learn from a single training example. Rather than requiring many examples, it focuses on extracting a general model from one specific instance, such as a single game situation in Ludo.
A programme that learns to play chess by watching others play is an example of EBL with a perfect domain theory. A chess position containing an essential feature such as "forced loss of the black queen in two moves" also contains many irrelevant details, such as how the pawns are scattered across the board. EBL can examine a single training example, identify its essential features, and form a valid generalisation.
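A minimal sketch of this idea, using an invented feature encoding of a position rather than a real chess representation: only the features that the explanation actually refers to are kept in the learned rule, so any new position sharing those essentials matches, whatever its irrelevant details.

```python
# Toy sketch: from ONE labelled example, keep only the features the
# explanation depends on; everything else is treated as irrelevant.
# The position encoding and the explanation are illustrative assumptions,
# not a real chess engine.

example = {
    "black_queen_attacked": True,      # essential
    "queen_escape_squares": 0,         # essential
    "pawns_on_a7_b6_h5": True,         # irrelevant detail
    "move_number": 23,                 # irrelevant detail
}
label = "forced_loss_of_black_queen_in_two_moves"

# Features the explanation (the proof of the label) actually mentions.
explanation_features = {"black_queen_attacked", "queen_escape_squares"}

# The learned rule keeps only the essential features; it now covers every
# position with the same essentials, regardless of pawn placement etc.
learned_rule = {f: example[f] for f in explanation_features}

def rule_applies(position: dict) -> bool:
    return all(position.get(f) == v for f, v in learned_rule.items())

# A new position with different irrelevant details still matches.
new_position = {"black_queen_attacked": True, "queen_escape_squares": 0,
                "pawns_on_a7_b6_h5": False, "move_number": 7}
print(rule_applies(new_position))  # True
```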
A domain theory is perfect or complete if it contains, in principle, all the knowledge needed to answer any question about the domain. For example, the rules of chess are the domain theory for the game of chess: knowing the rules, one can in principle deduce the best move in any position. In practice, however, the combinatorial explosion makes this deduction infeasible. EBL uses training examples to make such deductions from a domain theory tractable.
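The following toy sketch shows a domain theory written as Horn-style rules together with a naive backward chainer that deduces conclusions purely from the theory. The rules here are invented for illustration; a realistic theory such as the full rules of chess is far larger, and the exhaustive search this chainer performs is exactly what becomes combinatorially infeasible.

```python
# Minimal backward chainer over a toy Horn-clause "domain theory".
# The rule contents are illustrative assumptions, not real chess knowledge.

DOMAIN_THEORY = {
    # head: list of alternative bodies (conjunctions of sub-goals)
    "loses_queen": [["queen_attacked", "no_escape"],
                    ["queen_pinned", "attacker_defended"]],
    "no_escape":   [["all_flight_squares_covered"]],
}

def prove(goal, facts, theory=DOMAIN_THEORY):
    """Return True if `goal` follows from `facts` using `theory` (naive search)."""
    if goal in facts:
        return True
    for body in theory.get(goal, []):
        if all(prove(sub, facts, theory) for sub in body):
            return True
    return False

facts = {"queen_attacked", "all_flight_squares_covered"}
print(prove("loses_queen", facts))  # True, but the search cost grows with theory size
```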
An EBL system works by finding an explanation, a proof, of each training example from the system's database of domain theory. By adding a generalised form of this short proof to the domain-theory database, the system can quickly recognise new cases that resemble the training example. Minton studied the method's main drawback, the utility problem: with so many learned proof rules, the cost of matching them against new cases can outweigh the speedup they provide.
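A minimal sketch of this compilation step, again over an invented toy theory: the explanation of one training example is reduced to its leaf facts, and that set is stored as a macro rule which classifies later cases with a single subset test instead of a search through the theory.

```python
# Sketch of the core EBL step over a toy domain theory. Names and rule
# contents are illustrative assumptions.

DOMAIN_THEORY = {
    "loses_queen": [["queen_attacked", "no_escape"]],
    "no_escape":   [["all_flight_squares_covered"]],
}

def explain(goal, facts, theory):
    """Return the leaf facts used to prove `goal`, or None if no proof exists."""
    if goal in facts:
        return [goal]
    for body in theory.get(goal, []):
        leaves, ok = [], True
        for sub in body:
            sub_leaves = explain(sub, facts, theory)
            if sub_leaves is None:
                ok = False
                break
            leaves += sub_leaves
        if ok:
            return leaves
    return None

# One training example is enough to build the macro rule.
training_facts = {"queen_attacked", "all_flight_squares_covered"}
macro_rule = frozenset(explain("loses_queen", training_facts, DOMAIN_THEORY))

def classify(facts):
    # The compiled rule is a single subset test: no search through the theory.
    return macro_rule <= facts

print(classify({"queen_attacked", "all_flight_squares_covered", "pawn_on_h5"}))  # True

# Utility problem (Minton): with thousands of such macro rules, the cost of
# matching them all against each new case can outweigh the search they save.
```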
Natural language processing (NLP) is a particularly suitable application domain for EBL. Here, a rich domain theory, i.e. a natural language grammar, is specialised to a particular application or language usage with the help of a treebank (the training examples). Rayner pioneered this line of work. Its first successful industrial application was a commercial natural-language interface to relational databases.
Furthermore, several large-scale natural language parsing systems have addressed the utility problem by discarding the original grammar (domain theory) and using specialised LR-parsing techniques instead, achieving enormous speedups at the expense of coverage but not of disambiguation. EBL-like techniques have also been applied to surface generation, the converse of parsing.
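As a rough illustration of such grammar specialisation (the tree format and categories below are invented, and real systems compile into LR tables rather than this toy representation), each treebank tree can be flattened into one specialised rule, and only those rules are used at parse time.

```python
# Illustrative sketch: flatten each observed parse tree into a specialised
# rule (root -> sequence of leaf categories) and keep only those rules.

treebank = [
    ("S", [("NP", ["Det", "N"]), ("VP", ["V", ("NP", ["Det", "N"])])]),
]

def frontier(node):
    """Collect the leaf categories of a (possibly nested) tree node."""
    if isinstance(node, str):          # pre-terminal category
        return [node]
    _, children = node
    leaves = []
    for child in children:
        leaves += frontier(child)
    return leaves

# Compile the treebank into flat, specialised rules.
specialised_rules = {(root, tuple(frontier((root, children))))
                     for root, children in treebank}

print(specialised_rules)
# {('S', ('Det', 'N', 'V', 'Det', 'N'))}
# Parsing with only these flattened rules trades coverage (unseen structures
# fail) for speed, much as described above.
```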