Non-monotonic logics are designed to capture defeasible inference: reasoning in which conclusions are drawn tentatively and may be revised in light of new information. Most formal logics studied to date have a monotonic entailment relation, which means that adding a formula to a theory never shrinks its set of conclusions.
Monotonicity implies that learning a new piece of knowledge can never reduce what is known. A monotonic logic therefore cannot handle reasoning tasks such as default reasoning, abductive reasoning, reasoning about one's own knowledge, and belief revision.
Objective
Non-monotonic reasoning is an essential component of the logical approach to Artificial Intelligence (AI). Researchers in non-monotonic logic have been keen to keep default reasoning separate from any statistical or empirical interpretation. Default reasoning requires two facilities: one that allows conclusions to be withdrawn in the face of fresh refuting evidence, and another that prevents conclusions from being retracted when the new evidence is merely irrelevant. Commonsense reasoning involves both expectation-evoking and explanation-evoking default rules.
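The two facilities can be illustrated with a minimal Python sketch. The predicate names (`bird`, `penguin`) and the `flies` helper are hypothetical, chosen only to show that a default conclusion survives irrelevant additions but is withdrawn by refuting evidence:

```python
# A default rule "birds fly" fires unless the knowledge base (kb)
# contains refuting evidence that this bird is an exception.

def flies(kb, bird):
    """Default conclusion: birds fly, unless known to be a penguin."""
    return (bird, "bird") in kb and (bird, "penguin") not in kb

kb = {("tweety", "bird")}
assert flies(kb, "tweety")        # default conclusion drawn

kb.add(("tweety", "yellow"))      # new but irrelevant evidence
assert flies(kb, "tweety")        # conclusion is NOT retracted

kb.add(("tweety", "penguin"))     # fresh refuting evidence
assert not flies(kb, "tweety")    # conclusion withdrawn: non-monotonic
```

Note that adding `("tweety", "yellow")` leaves the conclusion intact, while adding `("tweety", "penguin")` removes it, which is exactly the behaviour a monotonic entailment relation cannot exhibit.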
The contrast between causal and evidential defaults helps us discern which rules should and should not be invoked. While the full potential of this distinction can be realised only in systems sensitive to the relative strengths of default rules, causality still offers a great deal to systems that are not. Furthermore, the significant computational advantages of causal reasoning can be obtained with only an approximate characterization of rule strength. Bayesian analysis can serve as a guideline for developing more precise logical systems.
Abductive reasoning
Abductive reasoning is the process of deriving a sufficient explanation for the known facts. Because the most plausible explanation is not necessarily the correct one, abductive logic must be non-monotonic. For example, the most plausible explanation for seeing wet grass is that it rained; nevertheless, this explanation must be retracted when it is discovered that the actual cause of the wet grass was a sprinkler. Since the old explanation (it rained) is withdrawn upon the addition of new information (a sprinkler was activated), any logic that models explanation is non-monotonic.
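The wet-grass example can be sketched in Python. The candidate causes, their plausibility scores, and the `best_explanation` helper are all illustrative assumptions, meant only to show the preferred explanation being displaced by a newly observed fact:

```python
# Candidate explanations for an observation, ranked by an assumed
# plausibility score (illustrative numbers, not real statistics).
CAUSES = {"wet_grass": [("rain", 0.7), ("sprinkler", 0.3)]}

def best_explanation(observation, facts):
    """Pick the preferred explanation; a directly observed cause
    overrides the default plausibility ranking."""
    candidates = CAUSES[observation]
    observed = [cause for cause, _ in candidates if cause in facts]
    if observed:
        return observed[0]
    return max(candidates, key=lambda cp: cp[1])[0]

assert best_explanation("wet_grass", set()) == "rain"
# Learning that the sprinkler ran retracts the earlier explanation:
assert best_explanation("wet_grass", {"sprinkler"}) == "sprinkler"
```

Adding the fact `"sprinkler"` changes which conclusion the reasoner holds, so the set of conclusions does not grow monotonically with the facts.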
Conclusion
A logic cannot be monotonic if it contains formulas stating that something is not known. When a previously unknown fact is learned, the formula asserting that it is unknown must be removed. This removal, triggered by an addition, violates the monotonicity criterion. Autoepistemic logic is a logic for this kind of reasoning about one's own knowledge.
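A minimal sketch of this removal-by-addition effect, using a negation-as-failure style rule (the proposition names and the `conclusions` helper are hypothetical):

```python
# "unknown(p)" is concluded exactly when p is absent from the
# knowledge base, so adding p removes a previous conclusion.

PROPOSITIONS = ("rain", "snow")

def conclusions(kb):
    """Return the known facts plus an unknown(p) formula for each
    proposition not present in the knowledge base."""
    known = set(kb)
    unknown = {f"unknown({p})" for p in PROPOSITIONS if p not in known}
    return known | unknown

assert "unknown(rain)" in conclusions(set())
# Learning "rain" removes the formula stating it is unknown:
assert "unknown(rain)" not in conclusions({"rain"})
```

The second assertion shows the non-monotonic step: the conclusion set after adding `"rain"` is not a superset of the one before.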
Furthermore, belief revision is the process of modifying beliefs to accommodate a new belief that may be inconsistent with the old ones. If the new belief is to be accepted, some old beliefs must be withdrawn to maintain consistency. Because of this retraction in response to acquiring a new belief, any logic for belief revision is non-monotonic. Belief revision differs from paraconsistent logic, which tolerates inconsistency rather than striving to eliminate it.
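Belief revision can be sketched as follows, a deliberately simplified model in which beliefs are strings, negation is marked with a leading `~`, and the `revise` helper is a hypothetical name:

```python
# Incorporating a new belief retracts any old belief that directly
# contradicts it, keeping the belief set consistent.

def revise(beliefs, new_belief):
    """Add new_belief, removing its direct negation if present."""
    if new_belief.startswith("~"):
        negation = new_belief[1:]
    else:
        negation = "~" + new_belief
    return (beliefs - {negation}) | {new_belief}

beliefs = {"~raining", "cold"}
beliefs = revise(beliefs, "raining")
assert beliefs == {"raining", "cold"}   # "~raining" retracted
```

Real belief-revision operators (such as those in the AGM framework) must also decide *which* of several conflicting beliefs to give up; this sketch only handles a direct contradiction, but it is enough to exhibit the retraction that makes the logic non-monotonic.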