Belief revision is the process of changing one's beliefs in light of new information. It is a significant challenge in AI research and a popular topic in theoretical computer science and logic. How should new information be incorporated into an existing knowledge base? What should happen when the new information contradicts something the system already believes? An intelligent system should be able to handle both of these situations.

Essential Issue

When building AI, it is preferable to emulate human reasoning as closely as possible, to achieve a realistic, yet artificial, intellect. This includes the ability to devise a strategy for solving a particular problem, and the ability to alter or abandon that plan in favour of a new one. The agent's environment should therefore be regarded as being as dynamic and complex as the world it represents. As a result, an agent's beliefs may become inconsistent and need to be changed, which makes belief revision a critical topic in modern AI.

Typically, two types of belief change are distinguished:

  • Update: the new information describes the current situation, while the old beliefs refer to a past one; the world itself has changed. Modifying the existing beliefs to reflect that change is known as updating.
  • Revision: the old beliefs and the new information both refer to the same, unchanged situation. A conflict between them suggests that the old information is less reliable than the new, and revision incorporates the new information into the body of beliefs without introducing inconsistency.

Both kinds of change are governed by a principle of minimal change: the beliefs before and after the modification should differ as little as possible. For update, this principle formalizes inertia (facts unaffected by the change persist); for revision, it requires keeping as much of the old information as possible. The sketch below contrasts the two operations on a small propositional example.
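The difference is easiest to see concretely. The following minimal Python sketch is not from the article: it assumes Dalal-style revision (models of the new formula closest to the knowledge base as a whole) and Winslett-style update (closest models computed separately for each model of the knowledge base), with Hamming distance standing in for "minimal change". The atoms, formulas, and function names are all illustrative.

    # Illustrative sketch: Dalal-style revision vs Winslett-style update,
    # with Hamming distance as the (assumed) notion of minimal change.
    from itertools import product

    ATOMS = ("book", "magazine")  # hypothetical atoms

    def models(formula, n=len(ATOMS)):
        """All truth assignments (bool tuples) satisfying `formula`."""
        return {w for w in product((False, True), repeat=n) if formula(w)}

    def hamming(w, v):
        return sum(a != b for a, b in zip(w, v))

    def revise(kb, mu):
        """Revision: mu-models globally closest to the KB."""
        kb_m, mu_m = models(kb), models(mu)
        dist = {w: min(hamming(w, v) for v in kb_m) for w in mu_m}
        best = min(dist.values())
        return {w for w, d in dist.items() if d == best}

    def update(kb, mu):
        """Update: closest mu-models, computed per KB model, then unioned."""
        kb_m, mu_m = models(kb), models(mu)
        result = set()
        for v in kb_m:
            best = min(hamming(w, v) for w in mu_m)
            result |= {w for w in mu_m if hamming(w, v) == best}
        return result

    # KB: exactly one of the book and the magazine is on the table.
    kb = lambda w: w[0] != w[1]
    # New information: the book is on the table.
    mu = lambda w: w[0]

    print(revise(kb, mu))  # {(True, False)}: the magazine is concluded absent
    print(update(kb, mu))  # {(True, False), (True, True)}: its status is lost

Revision treats the new information as a correction about a static world, so it keeps the old conclusion that the magazine is absent; update treats it as a report of change, so each possible old state is moved to the nearest new one and the magazine's status becomes unknown.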

Assumptions

Assumptions are frequently made to help an AI system function more efficiently and with less complexity. They allow planning and problem-solving to remain simple and elegant. However, realizing such systems is typically challenging, since the assumptions needed to keep them tractable are narrower than the complex reality they are meant to model. The inconsistency of human reasoning is the first thing to keep in mind when trying to model intelligence. Consider, for example, an expert system whose statements are represented in classical logic: experts on a subject do not always agree with one another, and reasoning with their combined statements can become inconsistent, so such contradictions must be handled when drawing conclusions from their views.
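To see why inconsistency is a problem for classical logic specifically: an inconsistent set of premises classically entails every formula (ex falso quodlibet). The brute-force checker below is a hypothetical illustration, not anything the article specifies.

    # Illustrative sketch: classical entailment trivializes under inconsistency.
    from itertools import product

    def entails(premises, conclusion, n_atoms):
        """Classical entailment: the conclusion holds in every model of the
        premises (vacuously true when the premises have no model at all)."""
        return all(conclusion(w)
                   for w in product((False, True), repeat=n_atoms)
                   if all(p(w) for p in premises))

    # Two experts who flatly disagree about atom 0:
    experts = [lambda w: w[0], lambda w: not w[0]]

    # The combined knowledge base now entails everything, even about atom 1:
    print(entails(experts, lambda w: w[1], 2))      # True
    print(entails(experts, lambda w: not w[1], 2))  # True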

Although the experts' intentions are likely good, an entity interacting with an agent may not be well-intentioned. Inconsistencies can also be caused by malicious actors who deliberately mislead the agent or feed it false information. For instance, when modelling computer security with multiagent systems, some agents may stand in for hackers or other attackers. Another fundamental feature of the real world is that it is ever-changing: adaptability is essential for agents operating in dynamic contexts, where inconsistencies can arise at any time. Programming such multiagent systems is challenging, since it requires considering a wide variety of factors and potential outcomes.

Transmutations

Williams was the first to employ transmutations for general iterated belief revision. While the preference ordering is inverted (the lower a model sits, the more plausible it is), the degree of plausibility attached to a revising formula is direct (the higher the degree, the more firmly it is believed). She demonstrated transmutations with two revision operators, conditionalization and adjustment, both of which work on numerical preference orderings.
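The article does not give the formulas, but conditionalization in the style of Spohn's ordinal conditional functions can be sketched compactly. In the sketch below, `kappa` maps each world to a non-negative rank (lower means more plausible), and revising by a formula `phi` with degree `d` shifts ranks so that `phi` ends up believed with firmness `d`. The worlds and names are illustrative assumptions, not the article's notation.

    # Illustrative sketch: Spohn-style (phi, d)-conditionalization
    # on a numerical ranking of worlds.
    def formula_rank(kappa, phi):
        """Rank of a formula: the rank of its most plausible model."""
        return min(k for w, k in kappa.items() if phi(w))

    def conditionalize(kappa, phi, d):
        """Afterwards the best phi-world has rank 0 and the best
        non-phi-world has rank d, so phi is believed with degree d."""
        k_phi = formula_rank(kappa, phi)
        k_neg = formula_rank(kappa, lambda w: not phi(w))
        return {w: (k - k_phi) if phi(w) else (k - k_neg + d)
                for w, k in kappa.items()}

    # A ranking over four worlds (pairs of atoms); lower = more plausible.
    kappa = {(False, False): 0, (False, True): 1,
             (True, False): 2, (True, True): 3}

    # Revise by "atom 0 holds" with degree 2:
    print(conditionalize(kappa, lambda w: w[0], 2))
    # {(False, False): 2, (False, True): 3, (True, False): 0, (True, True): 1}

Adjustment operates on the same kind of numerical ordering but, roughly speaking, shifts ranks more conservatively; both are transmutations in Williams' sense.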

Ranked Revision

Ranked revision starts with a ranked model, which assigns a non-negative number to every model; as in a preference ordering, lower numbers indicate greater plausibility. Unlike a preference ordering, however, this ranking is assumed not to be changed by revision. What each revision changes instead is the current set of models (representing the knowledge base) and the rank of the current situation.
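The article describes this operator only loosely, so the following Python sketch is one plausible reading: the ranking is fixed once; new information consistent with the current model set simply shrinks that set, while inconsistent information makes the agent jump to the most plausible models of the new formula and record their rank as the current rank. The worlds, ranking, and helper names are illustrative.

    # Illustrative sketch: iterated revision under a fixed ranking.
    def ranked_revise(ranking, current, cur_rank, phi):
        """Revise (current model set, current rank) by phi. Consistent
        input: intersect and keep the rank. Otherwise: fall back to the
        lowest-ranked phi-models and adopt their rank."""
        consistent = {w for w in current if phi(w)}
        if consistent:
            return consistent, cur_rank
        phi_models = {w for w in ranking if phi(w)}
        new_rank = min(ranking[w] for w in phi_models)
        return {w for w in phi_models if ranking[w] == new_rank}, new_rank

    # A fixed ranking over worlds (pairs of atoms); the KB starts at rank 0.
    ranking = {(False, False): 0, (False, True): 1,
               (True, False): 1, (True, True): 2}
    current, r = {w for w, k in ranking.items() if k == 0}, 0

    current, r = ranked_revise(ranking, current, r, lambda w: w[0])
    print(current, r)  # {(True, False)} 1: inconsistent input forces a jump
    current, r = ranked_revise(ranking, current, r, lambda w: not w[1])
    print(current, r)  # {(True, False)} 1: consistent input, rank unchanged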
