
Ever since the pandemic, the role of artificial intelligence (AI) in the healthcare and medicine industry has grown rapidly worldwide. This advancement can benefit humankind, but only if these systems are built with ethics and human rights at their core. To ensure that AI continues to benefit humankind, the World Health Organization (WHO) has published new guidance that lays down six principles to limit the risks.

“Like all new technology, artificial intelligence…can also be misused and cause harm,” warned Tedros Adhanom Ghebreyesus, Director-General of the World Health Organization (WHO).

The report, Ethics and governance of artificial intelligence for health, sheds light on the fact that AI is already an intrinsic part of healthcare in several wealthy countries, where it is deployed to improve the speed and accuracy of diagnosis and screening, assist in clinical care, strengthen research and drug development, and support various public health interventions.

For patients, the technology offers better control over their own healthcare; for poorer countries, it provides an opportunity to bridge gaps in access to health services.

However, the WHO cautions the reader against overestimating AI's benefits, especially at the expense of the core investments and strategies required to achieve universal health coverage. The report elaborates on risks such as unethical collection of health data, AI bias, and threats to patient safety, cybersecurity and the environment.

Overall, it gives a solemn warning that systems trained on one type of dataset, for example data from individuals in high-income countries, may prove ineffective for individuals in low- and middle-income countries. The WHO therefore strongly advocates that AI systems be designed to reflect the vast diversity of socio-economic and cultural settings, and of the stages and quality of healthcare, and be accompanied by digital skills training and community engagement.

This last point is especially crucial for healthcare workers, who need to be trained in digital literacy.

Of the six guiding principles laid down by the WHO, the first is, of course, protecting human autonomy: people must remain in control of healthcare systems and medical decisions, with AI serving only as an aid.

Secondly, AI designers should protect privacy and confidentiality. Patients should be informed and should provide valid informed consent through appropriate legal frameworks. 

The third principle calls for appropriate regulatory systems for safety, accuracy and efficacy, including measures of quality control to promote human wellbeing and public interest. 

The fourth principle propounds that all required information be made public or well documented before the AI technology is designed or deployed, so that the wider public can understand the technology and transparency is ensured.

The fifth principle urges creators, developers and users to promote AI that is inclusive, responsive and sustainable for all people, irrespective of age, gender, ethnicity or other characteristics protected under human rights codes. 

The final principle urges designers, developers and users to transparently assess applications during actual use to determine whether AI responds adequately and appropriately to expectations and requirements. 


DISCLAIMER

The information provided on this page has been procured through secondary sources. In case you would like to suggest any update, please write to us at support.ai@mail.nasscom.in