As AI becomes ubiquitous, ethics is an important consideration, and will soon be a critical one. We understand this, but do we quite comprehend what lies behind these considerations? To demystify this complexity, we had Amit Sethi from IIT Bombay in conversation with Jibu Elias, the Content Lead for INDIAai.

Building an Ethical Framework

AI technology is progressing at a tremendous pace, and even to play catch-up, policymakers will need to adopt a multi-disciplinary approach. In general, there's a lack of understanding of the legal framework among AI scientists, and sensitization is required. In every segment, be it government or industry, there's a lot of excitement about what AI can do for mankind, but equally, there are damaging misconceptions doing the rounds. These misconceptions sway between two extremes. At one end, there's a prevailing belief that AI will (magically?) become conscious as it starts to reward itself. At the other, the cynical lot mistakenly dismiss AI as dumb, or not as smart as the name suggests. The truth, as always, is somewhere in between. There is no self-conscious AI, and it is likely to stay that way for at least 20–30 years.

Any AI model is a mathematical formula, albeit a complex one. It's difficult to comprehend the impact when parameters are changed, and AI scientists often struggle to draw this correlation. Interestingly, the flexibility the model offers exists precisely because there are complex parameters at work.

Let's take the example of a self-driving car, where multiple AI systems are at play simultaneously. Simplistically put, one recognizes road signage; a second recognizes boundaries, including pedestrians and other vehicles; and a third is a lidar system. On top of all this is a control system that interacts with the others and makes decisions in real time. People usually think that AI is self-improving, but that's rarely the case. Moreover, when there is self-improvement, the mathematical formula essentially changes, and for regulated industries such as healthcare, any change would require fresh approvals, as the output may be adversely impacted.
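As a rough illustration (not the architecture of any real vehicle), the control system's job of fusing the subsystems' outputs can be sketched in Python. The `Perception` type, its field names, and the 5-metre braking threshold are all invented for this example:

```python
from dataclasses import dataclass

@dataclass
class Perception:
    sign: str          # output of the sign-recognition model
    obstacle_m: float  # distance to nearest pedestrian/vehicle, from the boundary model
    lidar_clear: bool  # lidar free-space check

def control_decision(p: Perception) -> str:
    """Hypothetical real-time controller fusing the three subsystems."""
    if not p.lidar_clear or p.obstacle_m < 5.0:
        return "brake"          # safety signals override everything else
    if p.sign == "stop":
        return "stop"           # obey recognized signage
    return "cruise"

print(control_decision(Perception(sign="speed_60", obstacle_m=12.0, lidar_clear=True)))
# → cruise
```

The point is that the controller is just another deterministic formula over the perception models' outputs; changing any upstream model changes the behavior of the whole system.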

The Lifecycle of an AI Model

The lifecycle of an AI model has four phases:

  • Economic need: Why are we building this model? Will it lower cost, result in faster turnaround, or yield higher accuracy? At this stage, the intention of the person building the model is very important. For example, is the bot being created to hack into systems, or to ensure that cybersecurity breaches are taken care of? One intent is malevolent and the other benevolent. Already at the economic-need stage, it can be determined whether the effort is ethical or not.
  • Due diligence & model formulation: Much of this is supervised learning, where you feed in hundreds of thousands of data points to train the model to predict the output. In this phase, one needs to ask if there are sufficient inputs to make a prediction. There are no perfect models, but is there a reasonable chance that this one will be better than what existed earlier and help achieve the desired result faster, cheaper, or more accurately? This is where the questions regarding data come in: Did we collect enough data? Was there diversity in data collection? Let's take the example of face-recognition software. Has it taken into consideration the conditions in which it will be deployed? Suppose an Israeli company, for example, has built an AI-enabled tool, but the model was predominantly tested on Caucasians. Did they claim it is ready for the world population? Has there been adequate training, testing, and validation for the world population, or is the claim based on narrow data sets? And at our end, did we ask these specific questions when we procured the solution? India is diverse, and for any face-recognition software to work here, this has to be a critical consideration.
  • Data preparation, training & validation: AI models used to detect cancer in India may show different results than on US test data. In India, people visit government hospitals at a very late stage, and there are other complications due to habits such as chewing tobacco. The data collected on Indian patients (the manifestation of cancer) differ from their American counterparts. In addition, vendors may not follow the same lab practices, and this can disturb the model. While procuring these AI-enabled solutions, the right questions need to be asked: what kind of adaptations are required for Indian conditions?
  • Packaging the AI model as a product or service, and monitoring & audit: This stage is often overlooked, but think of any complex machinery that requires an audit process. Take another example: howitzer guns procured from Switzerland. These guns would have been tested under specific weather conditions, but will they function as accurately in Ladakh? To address this, we run a small pilot, and only if they work well in Ladakh is the decision taken to import them in greater quantities. AI models require a similar approach to testing and validation.
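The due-diligence questions above, such as whether the data was collected with enough diversity, can be turned into a simple automated check. Here is a minimal sketch, where the records, the `skin_tone` field, and the 10% threshold are all hypothetical:

```python
from collections import Counter

def audit_diversity(records, field, min_share=0.05):
    """Flag categories whose share of the dataset falls below min_share.

    `records` is a list of dicts; `field` (e.g. a hypothetical 'skin_tone'
    or 'region' label) is the attribute whose coverage we want to check.
    """
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {cat: n / total for cat, n in counts.items()
            if n / total < min_share}

# Toy dataset skewed toward one group, mirroring the face-recognition example
data = [{"skin_tone": "light"}] * 95 + [{"skin_tone": "dark"}] * 5
print(audit_diversity(data, "skin_tone", min_share=0.10))
# → {'dark': 0.05}
```

A real audit would of course look at many more attributes (lighting conditions, age, geography), but even a check this crude would have surfaced the "predominantly tested on Caucasians" problem before deployment.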

Assessing Performance

At every step, a human is involved, and there is interactivity between the model and the human. The human is responsible for deciding if the output is stable enough to be passed on to the next stage. Did the human interpret it correctly before taking it forward?

If, at the monitoring & audit stage, it is detected that insufficient due diligence was conducted at one of the earlier stages and the output is misleading or inadequate, then the entire process has to be retraced and repeated from the stage where the anomaly was introduced.
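The retrace rule can be sketched in a few lines, using hypothetical stage names for the four lifecycle phases described earlier:

```python
# Hypothetical labels for the four lifecycle phases
STAGES = ["economic_need", "due_diligence", "data_prep_training", "packaging_audit"]

def rerun_from(failed_stage):
    """Return the stages that must be repeated once an audit finds an
    anomaly introduced at `failed_stage`: that stage and everything after it."""
    return STAGES[STAGES.index(failed_stage):]

print(rerun_from("due_diligence"))
# → ['due_diligence', 'data_prep_training', 'packaging_audit']
```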

Privacy-related Challenges

It's a tricky ask: there is a need to balance privacy with leg-room for innovation, and paranoia about the former can stymie the latter. Indians, in general, do not view privacy through the same lens as Westerners, as long as it does not result in social shaming, denial of insurance, loss of employment, and the like; but that does not mean this crucial aspect can be disregarded. There is such a thing as a reasonable level of privacy concern, and that is why anonymization and data encryption are imperative, so that the individual cannot be identified. For example, looking at an anonymized CT-scan image, one cannot identify the person; the AI model sees only the encrypted data. Our country has a huge shortage of medical practitioners, and AI can be used to address gaps that would otherwise have taken many years to close. It can be effectively used to reduce the burden on doctors.
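As an illustration of the anonymization step, here is a minimal sketch that replaces direct identifiers with salted one-way hashes before a record reaches any model. The field names and in-code salt are assumptions for the example; a production system would need proper key management and stronger de-identification guarantees:

```python
import hashlib

def anonymize(record, id_fields=("name", "patient_id")):
    """Replace direct identifiers with salted one-way hashes so that
    the AI model never sees who the scan belongs to."""
    salt = "hospital-secret-salt"  # assumption: in practice, stored outside the dataset
    out = dict(record)
    for f in id_fields:
        if f in out:
            out[f] = hashlib.sha256((salt + str(out[f])).encode()).hexdigest()[:12]
    return out

# Toy record: identifiers are hashed, the clinical payload is untouched
scan = {"patient_id": "MH-1042", "name": "A. Kumar", "ct_pixels": "<pixel data>"}
print(anonymize(scan))
```

Because the hash is deterministic, records from the same patient still link together for longitudinal analysis, yet the name and ID themselves are never exposed to the model.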

Many startups are doing great work in the healthcare vertical, and what's needed is ecosystem support, funding, and enabling policies. Policy-making should not be burdensome, and while the process needs to evolve, due diligence cannot be compromised. For example, if approval has been given for Ver 1, it cannot follow automatically for Ver 1.1. Statistically relevant tests need to be conducted to validate that the new model works with both new and old data and gives better results. For approval, well-informed processes need to be put in place.
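One possible "statistically relevant test" for comparing Ver 1 and Ver 1.1 is an exact sign test (the idea behind McNemar's test) on the held-out cases where the two versions disagree. The numbers below are invented for illustration:

```python
from math import comb

def sign_test_p(old_correct, new_correct):
    """One-sided exact sign test on cases where the two model versions disagree.

    old_correct / new_correct: parallel lists of booleans over the same
    held-out cases. Returns the p-value for the null 'new is no better than old'.
    """
    b = sum(1 for o, n in zip(old_correct, new_correct) if n and not o)  # new wins
    c = sum(1 for o, n in zip(old_correct, new_correct) if o and not n)  # old wins
    n = b + c
    # P(X >= b) for X ~ Binomial(n, 0.5); if the versions never disagree, p = 1
    return sum(comb(n, k) for k in range(b, n + 1)) / 2 ** n if n else 1.0

# Invented held-out results: Ver 1.1 fixes 25 of Ver 1's 30 misses
old = [True] * 70 + [False] * 30
new = [True] * 70 + [True] * 25 + [False] * 5
print(sign_test_p(old, new) < 0.05)
# → True (the improvement is unlikely to be chance)
```

Only when such a test shows a genuine improvement on both old and new data should Ver 1.1 inherit anything like Ver 1's approval.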

Assigning Responsibility & Informed Decision-making

Doctors cannot blindly depend upon AI. They have to be educated on the parameters for which the model has been tested and those for which it has not. For instance, there are different types of breast cancer, such as metastatic breast cancer, ductal carcinoma in situ, invasive ductal carcinoma, triple-negative breast cancer, and inflammatory breast cancer; has the model been tested and validated for all these types? This knowledge is critical for making informed decisions and knowing to what extent AI-led findings can be depended upon.

It took a crisis to show us what can be achieved in a remarkably short time, how policies can be reworked, and how ecosystem players rally around a common goal.

Surely, the same can work for AI policymaking too, creating a viable framework that treats ethical considerations with all seriousness.
