
Can machines infer a human being's goals and help accomplish them? Researchers at MIT have been exploring this question in recent work.

One of the most critical components of this engineering task is building an understanding that is cognizant of a very human trait: our mistakes. In a classic experiment on human social intelligence by psychologists Felix Warneken and Michael Tomasello, adults visibly struggle with a task while toddlers observe them. Having inferred the adults' goals from watching their failed attempts, the toddlers offer to help.

Just as toddlers infer goals from mistakes, the MIT researchers concluded that machines, too, need a form of social intelligence: to help us instinctively, they must understand when our actions and plans go wrong.

In this quest to capture social intelligence in machines, researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Department of Brain and Cognitive Sciences have written an algorithm capable of understanding goals and plans, even when those plans might fail. This is exciting because such developments could eventually lead to improved assistive technologies such as caretaking robots and digital assistants like Siri and Alexa.

“This ability to account for mistakes could be crucial for building machines that robustly infer and act in our interests,” says Tan Zhi-Xuan, PhD student in MIT’s Department of Electrical Engineering and Computer Science (EECS) and the lead author on a new paper about the research. “Otherwise, AI systems might wrongly infer that, since we failed to achieve our higher-order goals, those goals weren't desired after all. We've seen what happens when algorithms feed on our reflexive and unplanned usage of social media, leading us down paths of dependency and polarization. Ideally, the algorithms of the future will recognize our mistakes, bad habits, and irrationalities and help us avoid, rather than reinforce, them.” 

The team built the algorithm on Gen, a new AI programming platform recently developed at MIT. The platform combines symbolic AI planning with Bayesian inference, which provides a principled way to update beliefs under uncertainty as new data arrive. This inference technology is commonly used in financial risk evaluation, diagnostic testing and election forecasting.
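To make the Bayesian idea concrete, the sketch below shows goal inference as a simple belief update: a probability is kept for each candidate goal and revised after every observed action. This is only an illustration in Python, not the Gen platform's actual API (Gen is a separate probabilistic programming system), and the goal names and likelihood values are hypothetical.

```python
# Minimal sketch of Bayesian goal inference (illustrative only; not the Gen API).
# Hypothetical goals and likelihoods: P(goal) is updated after each observed action.

def bayes_update(prior, likelihoods):
    """Return the posterior P(goal | action) from P(goal) and P(action | goal)."""
    unnormalized = {g: prior[g] * likelihoods[g] for g in prior}
    total = sum(unnormalized.values())
    return {g: p / total for g, p in unnormalized.items()}

# Uniform prior over two hypothetical goals.
belief = {"make_coffee": 0.5, "make_tea": 0.5}

# Observed actions, each with an assumed likelihood under every goal.
observations = [
    {"make_coffee": 0.9, "make_tea": 0.4},   # agent walks toward the kettle
    {"make_coffee": 0.8, "make_tea": 0.1},   # agent reaches for the coffee jar
]

for action_likelihoods in observations:
    belief = bayes_update(belief, action_likelihoods)
    print(belief)  # belief shifts toward "make_coffee" with each observation
```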

The team's inspiration, as with many artificial intelligence tools, came from humans, particularly how we plan: only partially, filling in or adapting the gaps as we go ahead with execution, and revising as needed. Taking a cue from this approach, the team's algorithm, Sequential Inverse Plan Search (SIPS), forms only partial plans at each step when inferring a user's goal and prunes unlikely plans early on.
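The sketch below illustrates that idea in the same toy Python style as above: each surviving goal hypothesis is scored against a short-horizon partial plan rather than a complete one, and low-probability goals are pruned before any further planning. It is a loose, hedged analogue of SIPS, not the authors' implementation; the goals, states, threshold and stand-in "planner" are all hypothetical.

```python
# Hedged sketch of sequential goal inference with partial planning and pruning,
# loosely in the spirit of SIPS; not the authors' implementation.

PRUNE_THRESHOLD = 0.05  # drop goal hypotheses whose probability falls below this

def short_horizon_plan(state, goal):
    """Hypothetical stand-in for a symbolic planner: propose the next action an
    agent pursuing `goal` would most likely take from `state`."""
    return "toward_" + goal

def step_likelihood(observed_action, state, goal, match=0.9, mismatch=0.1):
    """Score how well the observed action matches the partial plan for `goal`."""
    return match if observed_action == short_horizon_plan(state, goal) else mismatch

def sips_like_update(belief, state, observed_action):
    # Reweight each surviving goal hypothesis by the partial-plan likelihood.
    belief = {g: p * step_likelihood(observed_action, state, g) for g, p in belief.items()}
    total = sum(belief.values())
    # Prune unlikely goals early instead of planning each one to completion.
    belief = {g: p / total for g, p in belief.items() if p / total >= PRUNE_THRESHOLD}
    total = sum(belief.values())
    return {g: p / total for g, p in belief.items()}

belief = {"kitchen": 1 / 3, "office": 1 / 3, "garden": 1 / 3}
for state, action in [("hall", "toward_kitchen"), ("hall", "toward_kitchen")]:
    belief = sips_like_update(belief, state, action)
    print(belief)  # unlikely goals are dropped as evidence accumulates
```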

“One of our early insights was that if you want to infer someone’s goals, you don’t need to think further ahead than they do. We realized this could be used not just to speed up goal inference, but also to infer intended goals from actions that are too shortsighted to succeed, leading us to shift from scaling up algorithms to exploring ways to resolve more fundamental limitations of current AI systems,” says Vikash Mansinghka, a principal research scientist at MIT and one of Tan Zhi-Xuan's co-advisors, along with Joshua Tenenbaum, MIT professor in brain and cognitive sciences. “This is part of our larger moonshot — to reverse-engineer 18-month-old human common sense.” 

The CSAIL algorithm was 75% accurate in inferring goals and performed up to 150 times faster than the existing Bayesian Inverse Reinforcement Learning (BIRL) method. BIRL learns a user's objectives, values and rewards by observing their full behaviour, and uses them to predict plans and policies in advance.
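For contrast with the step-by-step update above, the toy sketch below scores complete trajectories under each candidate objective, in the spirit of inverse-RL-style inference. It is not the BIRL algorithm itself: real BIRL also has to solve for near-optimal policies under each candidate reward, which is a major source of its extra cost. The objectives, rewards and rationality parameter here are assumptions for illustration.

```python
# Hedged contrast sketch: score a *complete* trajectory under each candidate
# objective (illustrative only; not the actual BIRL algorithm).

import math

def trajectory_log_likelihood(trajectory, reward_fn, beta=2.0):
    """Assume a Boltzmann-rational agent: higher-reward trajectories are more likely."""
    return beta * sum(reward_fn(step) for step in trajectory)

# Hypothetical candidate objectives, expressed as per-step reward functions.
candidates = {
    "wants_coffee": lambda step: 1.0 if step == "kitchen" else 0.0,
    "wants_quiet":  lambda step: 1.0 if step == "office" else 0.0,
}

trajectory = ["hall", "kitchen", "kitchen"]  # the full observed behaviour

log_scores = {name: trajectory_log_likelihood(trajectory, r) for name, r in candidates.items()}
z = max(log_scores.values())
weights = {name: math.exp(s - z) for name, s in log_scores.items()}
total = sum(weights.values())
posterior = {name: w / total for name, w in weights.items()}
print(posterior)  # posterior over objectives given the whole trajectory
```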

“AI is in the process of abandoning the ‘standard model’ where a fixed, known objective is given to the machine,” says Stuart Russell, the Smith-Zadeh Professor of Engineering at the University of California at Berkeley. “Instead, the machine knows that it doesn't know what we want, which means that research on how to infer goals and preferences from human behaviour becomes a central topic in AI. This paper takes that goal seriously; in particular, it is a step towards modelling — and hence inverting — the actual process by which humans generate behaviour from goals and preferences."

So far, the researchers have explored the inference system only on relatively small planning problems with fixed sets of goals. In the future, the team plans to explore more complex hierarchies of goals and plans.

“Though this work represents only a small initial step, my hope is that this research will lay some of the philosophical and conceptual groundwork necessary to build machines that truly understand human goals, plans and values,” says Xuan. “This basic approach of modelling humans as imperfect reasoners feels very promising. It now allows us to infer when plans are mistaken, and perhaps it will eventually allow us to infer when people hold mistaken beliefs, assumptions, and guiding principles as well.”
