Computational models of cognition have become widespread, but most of them cover only a narrow slice of cognition: they typically focus on problem solving and reasoning, or treat perception and motivation as separate modules.

Dietrich Dörner developed the Psi theory, one of the first architectures to address cognition more broadly. By combining motivation and emotion with perception and reasoning, and by including grounded neuro-symbolic representations, Psi theory works toward a more complete understanding of the mind. It provides a conceptual framework that connects perception and memory, language and mental representation, reasoning and motivation, emotion and cognition, and autonomy and social behaviour.

What is Psi theory?

Psi theory treats a cognitive system as a structure of relationships and dependencies that maintains a homeostatic balance in a constantly changing environment. It also holds that declarative, procedural, and tacit knowledge can all be represented uniformly as hierarchical networks of nodes, which can store both local and global representations. The system's activity spreads through these networks in a controlled, directional way.
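To make the node-network idea concrete, the sketch below shows directional spreading activation over a small hierarchy of nodes. It is a minimal illustration under simple assumptions; the Node class, link weights, decay value, and example nodes are hypothetical and not taken from any Psi or MicroPsi codebase.

```python
# Minimal, illustrative sketch of directional spreading activation over a
# node network, in the spirit of Psi theory's hierarchical networks.
# All names (Node, spread, the example nodes) are hypothetical.

class Node:
    def __init__(self, name):
        self.name = name
        self.activation = 0.0
        self.links = []          # list of (target_node, weight) pairs

    def link_to(self, target, weight):
        self.links.append((target, weight))


def spread(nodes, decay=0.5, steps=3):
    """Propagate activation along weighted, directed links for a few steps."""
    for _ in range(steps):
        incoming = {n: 0.0 for n in nodes}
        for n in nodes:
            for target, weight in n.links:
                incoming[target] += n.activation * weight
        for n in nodes:
            # new activation = decayed old activation + weighted input, capped at 1
            n.activation = min(1.0, n.activation * decay + incoming[n])


# Example: a tiny hierarchy where a sensor-grounded node activates a concept.
sensor = Node("red-blob")        # lowest level: sensor-grounded node
feature = Node("red")            # intermediate feature
concept = Node("apple")          # higher-level concept
sensor.link_to(feature, 0.9)
feature.link_to(concept, 0.8)

sensor.activation = 1.0
spread([sensor, feature, concept])
print(concept.name, round(concept.activation, 2))
```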

Nature of Psi theory

Psi approaches problem solving with neuro-symbolic models. Its representations are perceptual symbol systems, meaning that declarative and procedural descriptions are entirely grounded in interaction contexts. The approach uses hierarchical spreading-activation networks, with the lowest level of the hierarchy addressing the sensor and motor systems. Psi differs significantly from cognitive architectures such as ACT-R and Soar in its focus on emotion, motivation, and interaction. Its representations also differ: rather than using separate symbolic and sub-symbolic formalisms, Psi represents both kinds of information within a single formalism.
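As a rough illustration of that unified formalism, the sketch below uses a single node type that carries both a symbolic label and a numeric activation, with part-whole links grounding a higher-level schema in sensor-level nodes. The link name "sub" echoes terminology often used in descriptions of Psi, but the code itself is a hypothetical simplification, not MicroPsi code.

```python
# Simplified illustration of a unified neuro-symbolic representation:
# one node type carries both a symbolic label and a graded activation,
# and part-whole ("sub") links ground higher-level schemas in sensor-level
# nodes. This sketch is hypothetical and not taken from MicroPsi.

from dataclasses import dataclass, field

@dataclass
class PsiNode:
    label: str                                 # symbolic side: a conceptual label
    activation: float = 0.0                    # sub-symbolic side: graded activation
    sub: list = field(default_factory=list)    # part-whole links (towards sensors)

def ground(node, depth=0):
    """Walk a schema down its part-whole links to its sensor-level parts."""
    indent = "  " * depth
    print(f"{indent}{node.label} (activation={node.activation})")
    for part in node.sub:
        ground(part, depth + 1)

# A "cup" schema decomposed into parts, bottoming out in sensor nodes.
handle_sensor = PsiNode("handle-edge-sensor", activation=0.7)
rim_sensor = PsiNode("rim-curve-sensor", activation=0.4)
handle = PsiNode("handle", sub=[handle_sensor])
body = PsiNode("body", sub=[rim_sensor])
cup = PsiNode("cup", sub=[handle, body])

ground(cup)
```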

What is MicroPsi?

Joscha Bach built the MicroPsi cognitive architecture at the Humboldt University of Berlin and at the Institute of Cognitive Science at the University of Osnabrück. MicroPsi adds taxonomies, inheritance, and linguistic labelling to the representations of Psi theory, and its spreading-activation networks support neural learning, planning, and associative retrieval.
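The following sketch illustrates what a taxonomy-and-inheritance layer of this kind might look like: concepts carry linguistic labels, "is-a" links form a taxonomy, and properties missing on a concept are inherited from its ancestors. All class and method names are hypothetical and do not reflect MicroPsi's actual API.

```python
# Illustrative sketch of a taxonomy with inheritance and linguistic labels,
# in the spirit of the representational layer MicroPsi adds on top of
# Psi-style node networks. Names here are hypothetical, not MicroPsi's API.

class Concept:
    def __init__(self, label, parent=None, **properties):
        self.label = label           # linguistic labelling
        self.parent = parent         # "is-a" link forming the taxonomy
        self.properties = properties

    def lookup(self, key):
        """Return a property, inheriting from ancestors when missing locally."""
        if key in self.properties:
            return self.properties[key]
        if self.parent is not None:
            return self.parent.lookup(key)
        return None

animal = Concept("animal", moves=True)
bird = Concept("bird", parent=animal, flies=True)
penguin = Concept("penguin", parent=bird, flies=False)   # local override

print(penguin.lookup("flies"))   # False: overridden locally
print(penguin.lookup("moves"))   # True: inherited from "animal"
```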

The first generation of MicroPsi, developed between 2003 and 2009, is written in Java and provides a framework for editing and simulating software agents built from spreading-activation networks, along with a graphics engine for visualisation.

MicroPsi and the Psi theory

MicroPsi turns the Psi theory into a cognitive architecture that researchers can compare with other approaches. It includes a development and simulation framework, written in Java, for building multi-agent systems based on the Psi theory, and it has also been used as an architecture for controlling robots. MicroPsi 2, a newer version written in Python, currently serves as a knowledge-representation tool.

Conclusion

Because the Psi theory depends on many interacting components, it is difficult to test experimentally, and most of its predictions and claims are descriptive rather than quantitative. At the same time, the theory can serve as a specification of how a cognitive architecture should work. OpenPsi, a simplified implementation of the Psi theory within the OpenCog cognitive architecture, provides interfaces to Hanson Robotics robots that researchers can use to model emotions.
