The Trolley Problem in AI
Rahul De, Professor of Information Systems at IIM Bangalore, opened the webinar by explaining the Trolley Problem in AI. The Trolley Problem draws its origin from philosophy and is widely discussed in academic journals, books and research literature. The problem is this: a trolley is hurtling down a track at high speed. A person standing by the track has access to a switch; if he pulls it, the trolley veers off the original track and moves to the right. The dilemma – five people are standing at the end of the original track, while one person is standing on the right-hand track. Should the switch be pulled or not? Is it better to sacrifice one person or five?
This conundrum forms a critical aspect of the research by De and his student Sai Dattathrani, which focuses on the output problem of AI – systems that build metaphorical trolleys like this all the time. He cited the example of Tesla's self-driving cars: if one of these cars were out of control on a highway, what would the machine choose – to hit pedestrians, or to swerve sharply into a wall and risk the driver? This is not an imagined problem. In a scenario like this, the AI has to decide in a matter of seconds what the call should be, and based on this decision the outcome is deemed good or bad. These ethical dilemmas apply to real-world scenarios and lead to concerns that include bias, accuracy, consequences, legal responsibility, explainability and more.
De went on to explain that his research, however, was focused more on the processes that lead to the outcome, which involve ethics at the deepest level. Citing examples from ERP systems, automated ticketing and healthcare, De explained how erroneous decisions made there propagate bias and affect the discourse on AI's efficacy. The change must therefore be effected at the level where the machine is taught to address the problem through processes. He is even exploring the development of AI systems to address ethical concerns, their impact on humans, and the stretching of technology's capabilities.
Sai Dattathrani, doctoral student in Information Systems at IIM Bangalore, elaborated further on the research done with Prof. De. They started out by studying capabilities and consequences, and examined the differences between traditional systems like ERP and AI-based systems like chatbots and robots. Traditional systems support humans in their decision making, but AI makes decisions for humans, and this capability affects AI design. In traditional systems, users' views are embedded, whereas AI systems learn and adapt from the data fed to them. AI systems can even emulate identities and emotions, and these developments can lead to a range of consequences. This has allowed De and Dattathrani to develop a conceptual framework that threads in factors like sense of purpose, power to choose, and intentions of actors. They have even deployed these factors in a research setting – AI-based detection of cancer – where they were able to track the impact of AI-based systems and ethics in both the design and user phases.
Understanding the Problem, Then Applying AI
Falaah Arif Khan, Artist in Residence at the Montreal AI Ethics Institute, spoke about the fundamental character of human decision making: decisions are based not on pure objectivity but on the context of society, culture and politics. It is imperative to first understand what the problem is, and how AI can solve it without perpetuating the same or similar biases. Within the AI field, there are different kinds of data – image classification, for instance, is largely objective, whereas student grade allocation is far more subjective.
The broader discourse that links engineers, lawyers and policy makers alike is this: if there is no right answer, all models are wrong, but some models can be useful. By taking some data from a domain like credit scoring, an engineer can create an algorithm that does credit modelling – but this conflates the engineer's experience with that of a finance professional. Merely building a model that can do financial modelling does not mean the engineer understands the basics and principles of financial modelling. When creating models for a social context, experts from that field are needed to spell out the limitations of what the technology can and cannot do. Biased data is a reflection of the data pool itself – ethical AI is all about removing bias from data, or at least mitigating it.
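To make the idea of mitigating bias in the data pool concrete, one simple and widely used technique is sample reweighting: giving under-represented groups larger weights so they count equally during model training. The sketch below is a hypothetical illustration (the `group` labels, the toy dataset, and the `reweight` helper are all invented for this example, not something presented in the webinar):

```python
from collections import Counter

def reweight(samples):
    """Assign each sample an inverse-frequency weight so that every
    group contributes equally in aggregate, a minimal sketch of one
    bias-mitigation technique (sample reweighting)."""
    counts = Counter(s["group"] for s in samples)
    n_groups = len(counts)
    total = len(samples)
    # Each group's weights sum to total / n_groups, regardless of its size.
    return [total / (n_groups * counts[s["group"]]) for s in samples]

# Hypothetical, skewed data pool: four samples from group A, one from B.
data = [{"group": "A"}] * 4 + [{"group": "B"}]
weights = reweight(data)
# Group A's four weights and group B's single weight now sum to the
# same amount, so a model trained with these weights no longer sees
# group B as one-fifth as important as group A.
```

Reweighting does not remove bias from the underlying data, which is why practitioners usually frame it, as Khan does, as mitigation rather than removal.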
There are extremes when it comes to technology – some people are techno-optimists while others are tech-averse. A middle ground is needed for a nuanced discussion, and fixes need to be applied as the discourse continues.