Machine learning (ML) is changing modern life at a rapid pace. Medicine, finance, and transportation are among the many fields poised for transformation by the proliferation of machine learning models that can outperform humans at the same tasks. These algorithms make decisions with significant consequences for people's health and happiness: identifying whether a blip on a scan could be cancerous, applying the brakes in an autonomous vehicle, or awarding a loan. One challenge facing the creators of these algorithms is that modern AI solves problems in mysterious ways, the so-called black-box problem. Because these models refine themselves autonomously, in ways too idiosyncratic and complex for humans to follow, it is often impossible for a model's user, or even its creator, to explain the model's decisions.

Because of the transformative promise of AI at scale and the conspicuous lack of satisfying explanations for AI decision-making, ML researchers face growing political, ethical, economic, and scientific pressure to solve the black-box problem. The result is a sub-field called explainable artificial intelligence (XAI).

XAI for psychologists

There is a large family of explainable AI models that do not incorporate a ‘deep’ architecture. Models such as linear regression, decision trees, and many types of clustering describe the relationships between variables in mathematically simple terms: as linear or monotonic functions, or as the outcome of explicit decision thresholds and rule-based logic.
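
As a rough sketch of what such a model looks like in practice, the snippet below fits a shallow decision tree and prints its learned thresholds as plain if/else rules; the dataset (scikit-learn's bundled Iris data) and the depth limit are illustrative assumptions rather than choices taken from this article.

```python
# A minimal sketch of an interpretable model: a shallow decision tree whose
# learned splits can be printed as human-readable rules. The dataset and
# depth limit are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
X, y = iris.data, iris.target

# A depth-2 tree: every prediction can be traced to two threshold comparisons.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text renders the learned thresholds as plain if/else rules.
print(export_text(tree, feature_names=iris.feature_names))
```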

While these models are interpretable, in many realistic settings their performance does not compete with that of deep neural networks. There is a tradeoff between interpretability and performance, matching the intuition that deep learning models perform well precisely because they capture complicated relationships that cannot be represented in simple terms.
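
The toy comparison below, which uses an assumed two-moons dataset and off-the-shelf scikit-learn models rather than any benchmark reported in this article, is one way to see that tradeoff: a linear classifier summarized by two coefficients versus a small neural network that fits the curved boundary.

```python
# Toy illustration (an assumption made for demonstration, not a result reported
# here) of the interpretability/performance tradeoff: a linear classifier and a
# small neural network compared on data with a non-linear decision boundary.
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=2000, noise=0.25, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable: the decision boundary is fully described by two coefficients.
linear = LogisticRegression().fit(X_train, y_train)

# Opaque but flexible: thousands of weights with no simple summary.
mlp = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000,
                    random_state=0).fit(X_train, y_train)

print("logistic regression accuracy:", round(linear.score(X_test, y_test), 3))
print("neural network accuracy:     ", round(mlp.score(X_test, y_test), 3))
```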

Artificial cognition

Given the need for satisfying explanations of black-box behavior, the appetite within computer science for a science of machine behavior, and cognitive psychology's rich history of developing models of the mind through experimentation, we advance a hybrid discipline called Artificial Cognition, a term first coined by Ritter et al.

Artificial Cognition can be thought of as a branch of the Machine Behavior movement toward XAI, unique in its emphasis on cognitive models that are inferred from data elicited via experimentation rather than observed directly. Psychologists may recognize the distinction as similar to the transformation psychology underwent in the 1950s, when early cognitivism challenged the then-dominant view that mental processes were inadmissible as topics of scientific study; psychology at the time was limited to behaviorism and its models of stimulus and response.

Artificial Cognition can be a sub-discipline dedicated to inferring causal behavioral models of AI systems within a domain-general scientific framework.

Artificial minds

One of the fundamental tools of experimental psychology is inferential statistics. Every psychology undergraduate receives extensive training in how to infer behavior in the population from patterns in a sample. The assumption underlying the pervasive use of inferential statistics is that there is a fundamental commonality between minds in the population: our minds are alike, so your behavior in the laboratory informs my behavior in the world. Most psychologists would agree that the shared architecture of our brains justifies this assumption, and that it has held up well under observation.

However, the assumption of a common mind is untenable in artificial intelligence, where the diversity of “brains” is matched only by the ingenuity of computer scientists. It does not make sense to infer the behavior of one algorithm from the behavior of other algorithms with explicitly different architectures.

Artificial Cognition will be more akin to the psychology of individual differences, where deviations from the mean are treated as signal rather than noise. When known variables guide decision-making, as evidenced by performance that varies with stimulus intensity, classic psychophysical methods can provide detailed descriptions of ability.
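
The sketch below suggests how such a probe might be scripted, under the assumption that a model's accuracy has been measured (here, simulated) at several stimulus intensities: a logistic psychometric function is fitted to recover a threshold and a slope, the standard summaries of sensitivity. The simulated responses and the particular functional form are illustrative assumptions, not a method prescribed here.

```python
# A hedged sketch of a psychophysics-style probe: accuracy at several stimulus
# intensities (simulated here) is fitted with a logistic psychometric function
# to estimate a detection threshold and slope.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, threshold, slope, guess=0.5, lapse=0.02):
    """Logistic psychometric function bounded between the guess rate and 1 - lapse."""
    return guess + (1.0 - guess - lapse) / (1.0 + np.exp(-slope * (x - threshold)))

# Stimulus "intensity" (e.g., signal-to-noise ratio) and simulated proportion correct.
intensity = np.linspace(0.0, 1.0, 11)
rng = np.random.default_rng(0)
proportion_correct = (psychometric(intensity, threshold=0.4, slope=12.0)
                      + rng.normal(0.0, 0.02, intensity.size))

# Fit threshold and slope: the model's detection threshold and its sensitivity.
(threshold_hat, slope_hat), _ = curve_fit(
    lambda x, t, s: psychometric(x, t, s), intensity, proportion_correct, p0=[0.5, 8.0]
)
print(f"estimated threshold: {threshold_hat:.2f}, estimated slope: {slope_hat:.1f}")
```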
