As AI becomes ubiquitous, ethics is an important consideration and will soon be a critical one. We understand this, but do we quite comprehend what lies behind these considerations? To demystify this complexity, we had Amit Sethi from IIT Bombay in conversation with Jibu Elias, the Content Lead for INDIAai.
The technology (AI) is progressing at a tremendous pace, and even to play catch-up, policy-makers will need to adopt a multi-disciplinary approach. In general, there is a lack of understanding of the legal framework among AI scientists, and sensitization is required. In every segment, be it government or industry, there is a lot of excitement about what AI can do for humankind, but equally, there are damaging misconceptions doing the rounds. These misconceptions sway between two extremes. At one end, there is a prevailing belief that AI will (magically?) become conscious as it starts to reward itself. At the other, the cynics dismiss AI as dumb, not nearly as smart as the name suggests. The truth, as always, is somewhere in between. There is no self-conscious AI, and it is likely to stay that way for at least 20–30 years.
Any AI model is a mathematical formula, albeit a complex one. It is difficult to comprehend the impact when parameters are changed, and AI scientists often struggle to draw this correlation. Interestingly, the flexibility the model affords exists precisely because so many complex parameters are at work.
Let's take the example of a self-driving car. Multiple AI systems are at play simultaneously. Simplistically put, one recognizes road signage; a second recognizes lane boundaries, pedestrians, and other vehicles; and a third is the lidar system. On top of all this sits a control system that interacts with the others and makes decisions in real time. People often assume that AI is self-improving, but that is rarely the case. Moreover, when there is self-improvement, the underlying mathematical formula changes, and for regulated industries such as healthcare, any change would require fresh approvals, as the output may be adversely impacted.
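The architecture described above can be sketched in a few lines of code. This is a toy illustration, not an actual autonomous-driving stack: the subsystem outputs, field names, and control policy are all illustrative assumptions. The point it makes is the one in the text: separate perception models feed a control module that makes the real-time decision, and none of the models rewrites itself at runtime.

```python
from dataclasses import dataclass, field

# Illustrative outputs of the three perception subsystems named above.
@dataclass
class Observation:
    sign: str                      # from the road-sign recognizer
    obstacles: list = field(default_factory=list)  # pedestrians, vehicles
    lidar_range_m: float = 100.0   # nearest lidar return, in metres

def decide(obs: Observation) -> str:
    """Toy control policy: the control system reads every subsystem's
    output and picks one action. The perception models themselves are
    fixed formulas; only their inputs change from moment to moment."""
    if obs.obstacles or obs.lidar_range_m < 5.0:
        return "brake"
    if obs.sign == "stop":
        return "stop"
    return "cruise"

print(decide(Observation(sign="speed_60", lidar_range_m=40.0)))
# clear road, no close lidar return: the controller chooses "cruise"
```

A real control system would fuse these signals probabilistically rather than with if-statements, but the separation of concerns is the same: perception modules report, the controller decides.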
The lifecycle of an AI model has four phases. At every phase, a human is involved and interacts with the model; the human is responsible for deciding whether the output is stable enough to be passed on to the next stage. Did the human interpret the output correctly before taking it forward?
If, at the Monitoring & Audit stage, it is detected that an earlier stage lacked due diligence and produced misleading or inadequate output, then the entire process has to be retraced and repeated from the stage where the anomaly was introduced.
It's a tricky ask. There is a need to balance privacy against leg-room for innovation; paranoia about the former can stymie the latter. While Indians, in general, do not view privacy through the same lens as Westerners (as long as it does not result in social shaming, denial of insurance, loss of employment, and so on), that does not mean we can disregard this crucial aspect. There is such a thing as a reasonable level of privacy concern, and that is why anonymization and data encryption are imperative, so that the principal cannot be identified. For example, looking at an anonymized CT-scan image, one cannot identify the individual; the AI model sees only the de-identified data. Our country has a huge shortage of medical practitioners, and AI can be used to address gaps that would otherwise have taken many years to close. It can be effectively used to reduce the burden on doctors.
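A minimal sketch of the de-identification step described above, assuming a simple record-based pipeline: direct identifiers are dropped, and a record key is replaced with a one-way pseudonym before the record reaches the model. The field names and the hashing choice are illustrative assumptions, not a prescribed scheme.

```python
import hashlib

# Fields that directly identify the principal (illustrative list).
DIRECT_IDENTIFIERS = {"name", "phone", "address"}

def anonymize(record: dict) -> dict:
    """Strip direct identifiers; pseudonymize the record key with a
    one-way hash so records can still be linked without revealing
    who the patient is."""
    out = {}
    for key, value in record.items():
        if key in DIRECT_IDENTIFIERS:
            continue  # the model never sees these fields
        if key == "patient_id":
            out[key] = hashlib.sha256(str(value).encode()).hexdigest()[:12]
        else:
            out[key] = value
    return out

clean = anonymize({"patient_id": 1041, "name": "Jane Doe",
                   "scan": "ct_scan.dcm"})
print("name" in clean)  # → False
```

In practice, true anonymization also has to guard against re-identification from quasi-identifiers (age, pin code, rare diagnoses), which is why it is treated as a discipline in itself rather than a one-line filter.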
Many startups are doing great work in the healthcare vertical, and what's needed is ecosystem support, funding, and enabling policies. Policy-making should not be burdensome, and while the process needs to evolve, due diligence cannot be compromised. For example, if approval has been given for Ver 1, it cannot follow automatically for Ver 1.1; statistically relevant tests need to be conducted to validate that the new model works with both new and old data and gives better results. Well-informed approval processes need to be put in place.
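The Ver 1 vs. Ver 1.1 check above amounts to a regression gate on held-out data. Here is a deliberately minimal sketch of that idea; the accuracy metric, the data, and the pass criterion are all illustrative assumptions (a real approval process would use a proper significance test, such as McNemar's, on much larger held-out sets).

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the held-out labels."""
    return sum(p == y for p, y in zip(predictions, labels)) / len(labels)

def may_inherit_approval(v1_preds, v11_preds, labels):
    """Toy gate: Ver 1.1 passes only if it is at least as accurate as
    Ver 1 on the same held-out labels. A real process would also test
    on old AND new data and check statistical significance."""
    return accuracy(v11_preds, labels) >= accuracy(v1_preds, labels)

labels    = [1, 0, 1, 1, 0, 1]
v1_preds  = [1, 0, 0, 1, 0, 1]   # 5/6 correct
v11_preds = [1, 0, 1, 1, 0, 1]   # 6/6 correct
print(may_inherit_approval(v1_preds, v11_preds, labels))  # → True
```

The point of the sketch is the asymmetry: Ver 1.1 must positively demonstrate non-regression; approval is never carried forward by default.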
Doctors cannot blindly depend upon AI. They have to be educated on the parameters for which the model has been tested and, equally, on those for which it has not. For instance, there are different types of breast cancer, such as Metastatic Breast Cancer, Ductal Carcinoma in Situ, Invasive Ductal Carcinoma, Triple Negative Breast Cancer, and Inflammatory Breast Cancer. Has the model been tested and validated for all of these types? This knowledge is critical for making informed decisions about the extent to which AI-led findings can be relied upon.
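The question raised above ("what has the model actually been validated on?") can be made concrete as a coverage check. A sketch follows; the subtype names come from the text, while the idea of a machine-readable "validated set" attached to a model is an assumption for illustration.

```python
# Subtypes named in the text.
SUBTYPES = {
    "Metastatic Breast Cancer",
    "Ductal Carcinoma in Situ",
    "Invasive Ductal Carcinoma",
    "Triple Negative Breast Cancer",
    "Inflammatory Breast Cancer",
}

def untested_subtypes(validated: set) -> set:
    """Subtypes for which the model's output should not be relied on,
    because validation never covered them."""
    return SUBTYPES - validated

gaps = untested_subtypes({"Invasive Ductal Carcinoma",
                          "Ductal Carcinoma in Situ"})
print(sorted(gaps))
# the three subtypes the (hypothetical) validation did not cover
```

Surfacing this gap list alongside every prediction is one concrete way to turn "doctors must be educated on the model's limits" into something a clinical interface can actually display.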
It took a crisis to show us what can be achieved in a remarkably short time, how policies can be reworked, and how the ecosystem players rally around to work towards a common goal.
Surely the same can work for AI policymaking too, creating a viable framework that treats ethical considerations with all seriousness.