Ray Solomonoff founded the theory of universal inductive inference in the early 1960s. It combines the Principle of Multiple Explanations and Occam's razor in a formal mathematical way. 

To predict the next observation, the theory considers every computable theory that accurately accounts for the previous observations, with shorter theories receiving more weight. This approach is also the foundation of Marcus Hutter's universal artificial intelligence, which uses it to determine the expected value of an action.

Ray Solomonoff formalized Occam's razor for induction in theoretical computer science and probability theory. Solomonoff's induction, in its simplest form, calculates the posterior probability of any computable theory given a set of observed data. This posterior probability follows from Bayes' rule and a universal prior, that is, a prior that assigns a nonzero probability to every computable theory.
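The combination of Bayes' rule with a length-based prior can be illustrated with a toy sketch. This is not Solomonoff's actual construction (which enumerates all programs on a universal machine); here a tiny hand-picked hypothesis class, with assumed bit-lengths, stands in for the space of computable theories, and each hypothesis receives the prior weight 2^-length:

```python
# Toy sketch of Solomonoff-style induction (illustrative only).
# Hypothetical hypothesis class: (name, length_in_bits, predictor),
# where predictor(history) -> probability that the next bit is 1.
hypotheses = [
    ("always-1",  2, lambda h: 1.0),
    ("always-0",  2, lambda h: 0.0),
    ("alternate", 4, lambda h: 0.0 if (h and h[-1] == 1) else 1.0),
    ("uniform",   6, lambda h: 0.5),
]

def posterior(data):
    """Bayes' rule with a 2^-length prior: shorter 'programs' weigh more."""
    weights = {}
    for name, length, predict in hypotheses:
        p = 2.0 ** -length              # universal-style prior weight
        hist = []
        for bit in data:                # likelihood of the observed bits
            q = predict(hist)
            p *= q if bit == 1 else (1.0 - q)
            hist.append(bit)
        weights[name] = p
    total = sum(weights.values())
    return {n: w / total for n, w in weights.items()} if total else weights

post = posterior([1, 0, 1, 0, 1])
best = max(post, key=post.get)
# "alternate" explains the data exactly and dominates the posterior;
# hypotheses contradicted by the data drop to posterior zero.
```

Note how the prior alone does not decide anything: "always-1" is shorter than "alternate", but its likelihood is zero once a 0 is observed, so Bayes' rule eliminates it.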

Solomonoff's completeness

The most interesting property of Solomonoff's induction is that it is complete. In essence, the completeness theorem bounds the expected cumulative error made by predictions based on Solomonoff's induction. The error can be measured either by the Kullback–Leibler divergence or by the square of the difference between the induction's prediction and the probability assigned by the true (random) process generating the data.
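The notion of "cumulative error" can be made concrete with a small experiment. The sketch below does not implement Solomonoff's predictor; it uses a simple Laplace-rule Bayesian estimator (an assumption made for illustration) and accumulates the squared difference between its prediction and the true bit probability, in the spirit of the error measure the completeness theorem bounds:

```python
import random

random.seed(0)

TRUE_P = 0.8                 # the (random) process: P(next bit = 1)

ones = zeros = 0
cumulative_sq_error = 0.0
for _ in range(1000):
    # Laplace-rule estimate of P(next bit = 1) from the bits seen so far
    prediction = (ones + 1) / (ones + zeros + 2)
    cumulative_sq_error += (prediction - TRUE_P) ** 2
    bit = 1 if random.random() < TRUE_P else 0
    if bit:
        ones += 1
    else:
        zeros += 1
# The cumulative squared error stays small and grows ever more slowly:
# almost all of it is incurred on the first few observations.
```

A completeness-style bound says exactly this kind of thing: the total, summed over all time, of such per-step errors is finite (and controlled by the complexity of the true process), so the predictor's per-step error must vanish.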

Solomonoff's uncomputability

Solomonoff also showed that his induction is uncomputable. He showed that completeness and computability cannot go together: any complete theory of induction must be uncomputable. The proof is a game between the predictor and an adversarial environment.

Applications

Although Solomonoff's inductive inference is not computable, a family of similar AIXI-based algorithms can run on modern computers. The more computing power they receive, the closer their predictions come to those of inductive inference (Solomonoff's inductive inference is their mathematical limit). A separate line of work on inductive inference builds on E. Mark Gold's 1967 model of learning in the limit. 
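The idea behind such resource-bounded approximations can be sketched with a toy model. Here "programs" are simply repeating bit patterns (an assumption standing in for a real universal machine), and the compute budget `max_len` caps the pattern length considered; raising the budget brings the predictor closer to what unbounded enumeration would give:

```python
from itertools import product

def predict_next(data, max_len):
    """Weight every repeating pattern of length <= max_len that reproduces
    `data` by 2^-length; return the weighted probability of a 1 next."""
    num = den = 0.0
    for length in range(1, max_len + 1):
        for pattern in product([0, 1], repeat=length):
            generated = [pattern[i % length] for i in range(len(data) + 1)]
            if generated[:len(data)] == list(data):
                w = 2.0 ** -length      # shorter "programs" weigh more
                den += w
                num += w * generated[len(data)]
    return num / den if den else 0.5    # no matching program: fall back to 0.5

# The period-3 sequence 1,1,0,1,1,0,... is only predicted correctly once
# the budget is large enough to cover period-3 "programs".
data = [1, 1, 0, 1, 1, 0, 1, 1]
# predict_next(data, 2) -> 0.5 (budget too small, no pattern fits)
# predict_next(data, 3) -> 0.0 (pattern 1,1,0 found; next bit is 0)
```

The real AIXI-family approximations bound program length and running time on an actual universal machine, but the shape of the trade-off is the same: more resources, more hypotheses covered, predictions closer to the uncomputable limit.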

Turing machine

The theory of automata and computation provides a third mathematical approach to inductive reasoning. Here, inductive Turing machines represent a further step in the development of computer science: they provide better models for modern computers and computer networks. Moreover, they form an essential class of super-recursive algorithms because they satisfy all the conditions in the definition of an algorithm. Namely, each inductive Turing machine is an effective method: given an initial state, a clear list of well-defined instructions moves it through a precise series of successive states and eventually yields a result. 

Inductive Turing machine

The difference between a Turing machine and an inductive Turing machine is that a Turing machine has to stop to deliver its result, whereas an inductive Turing machine can sometimes deliver a result without stopping. Stephen Kleene was willing to call a procedure that runs forever without stopping a calculation procedure or algorithm, but only if it eventually exhibits "some object." Inductive Turing machines meet this condition because they produce their results after a finite number of steps; however, they do not always indicate at which step the result was reached.

Conclusion

Some researchers assume that the computations of inductive Turing machines run forever or take an infinite amount of time. In fact, the rules of an inductive Turing machine specify when a computation, whether it stops or not, gives a result: the machine has an output that changes over time, and when this output stops changing, its final value is the result of the computation. It is essential to know that some papers misdescribe this rule.
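The output-stabilization rule can be illustrated with a toy Python sketch (this is not a full inductive Turing machine, just a loop that keeps rewriting an output register). The register changes only finitely many times; its stable value is the result, even though no step announces "done":

```python
def inductive_run(steps):
    """Successively refine a guess at the integer square root of 10**6.
    The output register changes finitely many times, then stabilizes;
    that stable value is the result, per the inductive-machine rule."""
    output_history = []
    guess = 0
    for _ in range(steps):                  # the loop itself never signals completion
        if (guess + 1) ** 2 <= 10 ** 6:
            guess += 1                      # output register rewritten
        output_history.append(guess)
    return output_history

history = inductive_run(2000)
# After step 1000 the register never changes again: the result is 1000,
# but an observer watching the run cannot tell, at step 1000, that the
# output has already stabilized.
```

This is the point the misdescribed papers miss: the computation is not "infinite" in any mysterious sense; the result exists after finitely many steps, and only the *announcement* of the result may be missing.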

The main differences between regular Turing machines and simple inductive Turing machines lie in how they work. 

  • The first difference is that even simple inductive Turing machines can do much more than standard Turing machines. 
  • The second difference is that a traditional Turing machine always tells you when the result is reached (by stopping or coming to a final state). 

A simple inductive Turing machine, by contrast, sometimes tells you when the result is achieved and sometimes does not (precisely in the cases where a traditional Turing machine is helpless). 

Furthermore, people tend to assume that a computer will stop, or otherwise signal, when it is done. In practice, users often have to decide for themselves whether a computation's results are what they need. Standard desktop programs like word processors and spreadsheets spend most of their time waiting in event loops and don't end until the user tells them to.
