In computational learning theory, stability, also known as algorithmic stability, describes how a machine learning algorithm's output is affected by small changes to its input, i.e. its training data.
When the training set is minimally altered, the predictions of a stable learning algorithm do not change significantly. For example, consider a machine learning system trained to identify handwritten letters of the alphabet. The algorithm is given a training set of 1000 handwritten letters and their labels ("A" to "Z"). This training set can be perturbed by omitting one example, leaving just 999 handwritten letters and their labels. If the learning process is stable, the 1000-element and 999-element training sets produce similar classifiers.
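The perturbation described above can be sketched in code. The following is a minimal, hypothetical illustration (not the original article's experiment): it trains a simple nearest-centroid classifier on a synthetic two-class data set of 1000 points and again on the same set with one point removed, then counts how many test predictions differ between the two classifiers. All names and data here are invented for the sketch.

```python
import random

random.seed(0)

# Synthetic two-class data standing in for the handwritten-letter example.
def make_point(label):
    center = 0.0 if label == "A" else 3.0
    return (random.gauss(center, 1.0), label)

train = [make_point(random.choice("AB")) for _ in range(1000)]
test = [make_point(random.choice("AB")) for _ in range(200)]

def centroids(data):
    # Mean feature value per class label.
    sums, counts = {}, {}
    for x, y in data:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(cents, x):
    # Assign the label whose centroid is closest.
    return min(cents, key=lambda y: abs(x - cents[y]))

full = centroids(train)          # trained on all 1000 examples
reduced = centroids(train[1:])   # trained on 999 (one example removed)

disagreements = sum(
    predict(full, x) != predict(reduced, x) for x, _ in test
)
print(f"predictions that changed: {disagreements} / {len(test)}")
```

For a stable procedure such as this one, the disagreement count is at or near zero: removing a single training point barely shifts the class centroids.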
Since stability is a characteristic of the learning process rather than of the data being learned, it can be examined across a wide range of learning problems, from language learning to inverse problems in physics and engineering. The study of stability gained importance in computational learning theory in the 2000s, after it was shown that stability and generalization are related. Researchers demonstrated that certain forms of stability guarantee good generalization for large classes of learning algorithms, notably empirical risk minimization algorithms.
Goal
When designing a machine learning system, a central goal is to ensure that the learning algorithm performs well on new examples after being trained on a limited number of them. Deriving generalization bounds for supervised learning algorithms was a major advance of the 1990s. Historically, the way to guarantee generalization was to prove that an algorithm was consistent, by showing uniform convergence of empirical quantities to their means.
One of the central problems in computational learning theory is bounding the generalization error of learning algorithms. Until recently, most research focused on uniform a-priori bounds, which guarantee that the difference between the training error and the test error is small simultaneously for every hypothesis in a given class. These bounds are typically expressed in terms of combinatorial quantities such as the VC dimension. More recent work has sought tighter estimates of the effective size of the search space (for instance, via covering numbers) or has examined how the algorithm arrives at its solution (e.g. the margin it achieves). Other perspectives, such as the observed VC dimension, exist as well, but all of these approaches characterize learning through the structure of the hypothesis space.
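As a concrete illustration of such a uniform a-priori bound, one standard form of the classical Vapnik–Chervonenkis result (the exact constants vary between sources) states that for a hypothesis class $H$ of VC dimension $d$ and a training sample of size $n$, with probability at least $1-\delta$,

```latex
R(h) \;\le\; \hat{R}_n(h) \;+\; \sqrt{\frac{d\left(\ln\frac{2n}{d} + 1\right) + \ln\frac{4}{\delta}}{n}}
\qquad \text{for all } h \in H,
```

where $R(h)$ is the true (test) risk and $\hat{R}_n(h)$ the empirical (training) risk. Note that the bound depends only on the combinatorial quantity $d$, not on which hypothesis the algorithm actually returns.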
Conclusion
In the 2000s, researchers in computational learning theory introduced a different route to generalization bounds: stability analysis. Stability is a property of the learning process rather than a direct attribute of the hypothesis space H, so it can be evaluated even for algorithms whose hypothesis spaces have unbounded or undefined VC dimension, such as nearest neighbour.
Furthermore, the learned function of a stable algorithm does not change considerably when the training set is minimally perturbed, for example by removing a single example. The Cross-Validation Leave-One-Out (CVloo) criterion assesses a learning algorithm's stability with respect to the loss function via a leave-one-out error measure. Stability analysis, then, is sensitivity analysis applied to machine learning.
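The leave-one-out procedure behind CVloo can be sketched as follows. This is a minimal illustrative example, not the formal CVloo definition: for each training point $z_i$, a 1-nearest-neighbour classifier is trained on the remaining points and evaluated on the held-out $z_i$, and the leave-one-out error rate is the fraction of misclassified held-out points. The data and helper names are invented for the sketch.

```python
import random

random.seed(1)

# Synthetic one-dimensional data: two well-separated Gaussian clusters.
def sample(label):
    center = -2.0 if label == 0 else 2.0
    return (random.gauss(center, 1.0), label)

data = [sample(i % 2) for i in range(60)]

def nn_predict(train, x):
    # 1-NN: return the label of the closest training point.
    return min(train, key=lambda p: abs(p[0] - x))[1]

# Leave-one-out: for each z_i, train on S \ {z_i} and test on z_i.
loo_errors = 0
for i, (x, y) in enumerate(data):
    rest = data[:i] + data[i + 1:]
    if nn_predict(rest, x) != y:
        loo_errors += 1

loo_error_rate = loo_errors / len(data)
print(f"leave-one-out error rate: {loo_error_rate:.3f}")
```

A stable algorithm is one for which each held-out prediction barely differs from the prediction of the model trained on the full set, so the leave-one-out error tracks the training error closely.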