Artificial Intelligence (AI) has become a ubiquitous part of daily life, and this rise in AI applications has stimulated significant interest from diverse stakeholders. Over the last few decades, AI has emerged as a primary data science function by harnessing advanced algorithms and ever-increasing computing power. AI, and specifically Machine Learning (ML), holds great potential for India from two perspectives: a) technology capability and b) addressing the challenges that the country and its society are facing. India’s main strengths are its technology-skilled workforce and the diversity of its data sources and applications. ML can blend these complementary strengths to build data-driven, large-scale solutions tailored to India’s needs. AI for Social Good (henceforth AI4SG) is becoming common in various fields of application and is also gaining attention within the AI community1. Applications of AI4SG in India include, but are not restricted to, tools for reducing disaster risk2, maintaining a database of cough recordings from COVID-positive patients to deal more efficiently with future pandemics3, predicting human-wildlife conflict for enhanced decision making4, improving preparedness to deal with diabetes5, predicting groundwater levels6, and optimizing solar grids for efficient power generation.

In the last few years, various frameworks for the design, development, and dissemination of ethical AI4SG have emerged7. Upcoming AI4SG applications seem to be capturing and supporting decisions at a breadth that was earlier thought impossible and/or unaffordable. Nonetheless, the understanding of what constitutes AI4SG is still very limited and needs to be enhanced to make the most of the potential opportunities. What makes AI socially good in theory might not hold in application, which affects its scalability and widespread uptake. Many prototypes either phase out soon after launch or fail to scale beyond their initial successes, owing to avoidable failures and missed opportunities.

A few lessons learned from past and ongoing AI4SG efforts:

a) Falsifiability and incremental deployment: The credibility of an AI4SG application demands that it adhere to the principle of beneficence, or at least the principle of nonmaleficence7. Falsifiability requires specifying the conditions under which the application counts as fully functional, through a detailed specification and the empirical testing of its critical requirement(s). However, we cannot know whether a particular AI4SG application is safe without testing it in all probable contexts; in such cases, the map of testing the application would be comparable to its territory of dissemination. Despite this uncertainty about overall success, it is still possible to know when a given critical requirement is not implemented or is failing to work properly.
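The pairing of falsifiability with incremental deployment can be sketched as an automated gate: a critical requirement is stated as a testable condition, and rollout stops at the first deployment stage where that condition is falsified. The function names, the accuracy threshold, and the toy rule-based predictor below are hypothetical illustrations, not part of any real AI4SG system.

```python
# A minimal sketch of falsifiability as a rollout gate. A critical
# requirement is expressed as an empirical test that a single failing
# stage can falsify; assumed names and thresholds are illustrative.

def check_critical_requirement(predict, test_cases, min_accuracy=0.95):
    """Return (passed, accuracy) for one stage's test suite.

    Falling below the threshold falsifies the claim that the
    requirement is met in this context.
    """
    correct = sum(1 for x, expected in test_cases if predict(x) == expected)
    accuracy = correct / len(test_cases)
    return accuracy >= min_accuracy, accuracy

def incremental_rollout(predict, staged_test_suites):
    """Deploy stage by stage; halt at the first falsified stage."""
    deployed = []
    for stage, cases in staged_test_suites:
        passed, _ = check_critical_requirement(predict, cases)
        if not passed:
            break  # requirement falsified in this context: stop here
        deployed.append(stage)
    return deployed

# Usage: a toy predictor that works in the pilot context but not beyond.
stages = [
    ("pilot", [(1, True), (-1, False)]),
    ("district", [(2, True), (-3, True)]),  # fails: predict(-3) is False
]
rolled_out = incremental_rollout(lambda x: x >= 0, stages)
```

The design choice here is that a failed test halts further dissemination rather than merely logging a warning, which matches the incremental-deployment principle in the text.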

b) Privacy protection and subject consent: Respect for privacy is a necessary component of human dignity, and personal information can be seen as constitutive of an individual; therefore, deprivatising records without that individual’s consent is potentially a violation of human dignity8. The conception of individual privacy as a fundamental right underlies the recent debate on “personal” and “non-personal” data and the formation of a Data Protection Authority in India9. For AI4SG in India, it is particularly important to emphasize users’ consent to the use of personal data, a requirement that is likely to be broadened to non-personal data as well.
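One concrete way to operationalise subject consent is purpose-bound gating: a record enters a pipeline only if the subject’s recorded consent covers the intended purpose. The field names (`consent_purposes`, `subject_id`) and purpose labels below are invented for illustration and do not reflect any actual data-protection schema.

```python
# A minimal sketch of purpose-bound consent gating. Records whose
# subjects did not consent to the given purpose never reach the
# model. Field names are hypothetical.

def consented_records(records, purpose):
    """Keep only records whose subject consented to this purpose."""
    return [r for r in records if purpose in r.get("consent_purposes", [])]

# Usage: only the first subject consented to "research".
records = [
    {"subject_id": "a1", "consent_purposes": ["research"], "value": 10},
    {"subject_id": "b2", "consent_purposes": [], "value": 20},
]
usable = consented_records(records, "research")
```

Filtering before any processing, rather than after, keeps non-consented records out of every downstream step.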

c) Safeguards against the manipulation of predictors: The predictive power of AI4SG is at risk from the manipulation of input data and from excessive reliance on non-causal indicators. Left unchecked, data manipulation in AI4SG can deliver disastrous outputs that breach the principle of justice.
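A basic safeguard of this kind is input validation: refuse to predict when a feature falls outside the range observed at training time, a cheap first line of defence against manipulated inputs. The feature names and bounds below are invented for illustration, and a real system would combine this with stronger anomaly detection.

```python
# A minimal sketch of an input-validation safeguard against manipulated
# predictors. Bounds and feature names are hypothetical assumptions.

TRAINING_BOUNDS = {"rainfall_mm": (0.0, 500.0), "wells_reported": (0, 10_000)}

def validate_input(features, bounds=TRAINING_BOUNDS):
    """Return the names of features that look manipulated (missing
    or outside the range seen during training)."""
    suspicious = []
    for name, (lo, hi) in bounds.items():
        value = features.get(name)
        if value is None or not (lo <= value <= hi):
            suspicious.append(name)
    return suspicious

def safe_predict(predict, features):
    """Run the model only on validated inputs; otherwise flag for
    human review instead of emitting a prediction."""
    suspicious = validate_input(features)
    if suspicious:
        return {"status": "rejected", "suspicious": suspicious}
    return {"status": "ok", "prediction": predict(features)}

# Usage: an implausible rainfall value is flagged, not predicted on.
flagged = safe_predict(lambda f: 1, {"rainfall_mm": 9999.0, "wells_reported": 5})
</n```

Rejecting suspicious inputs for review, rather than silently clipping them, keeps a human in the loop when manipulation is suspected.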

d) Situational fairness: Bias can creep into an algorithm even before its data is collected, and at several stages of building it: when framing the problem, when collecting the data, and when preparing the data. Designers must therefore sanitize the data before using it in AI to maintain situational fairness, while remaining sensitive to the factors that ensure inclusiveness.
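Part of this sanitisation can be sketched in code: drop protected attributes before training, and flag remaining features that correlate strongly with them, since such features can act as proxies and reintroduce bias. The column names and the 0.8 threshold below are illustrative assumptions, not a prescription.

```python
# A minimal sketch of pre-training sanitisation: remove protected
# columns and flag possible proxy features. Names and the threshold
# are hypothetical.

def correlation(xs, ys):
    """Pearson correlation, pure Python (0.0 when a column is constant)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def sanitize(columns, protected, proxy_threshold=0.8):
    """Drop protected columns; report kept columns that act as proxies."""
    kept = {k: v for k, v in columns.items() if k not in protected}
    proxies = [
        k for k, v in kept.items()
        if any(abs(correlation(v, columns[p])) >= proxy_threshold
               for p in protected)
    ]
    return kept, proxies

# Usage: "pincode" perfectly tracks the protected column, so it is
# flagged as a proxy even after "gender" itself is dropped.
columns = {"gender": [0, 1, 0, 1], "pincode": [0, 1, 0, 1], "income": [1, 2, 2, 1]}
kept, proxies = sanitize(columns, protected={"gender"})
```

Flagged proxies are reported rather than silently dropped, since whether a correlated feature is legitimate is a judgment call for the designers.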

e) Contextualized intervention: One of the challenges for AI4SG projects is to design interventions that balance current and future benefits, which mostly pertains to an intertemporal choice interdependency. An efficient receiver-contextualized intervention is important: it attains the right level of disruption and respects autonomy through optionality. Hence, it is crucial to involve the targeted users in the design and dissemination of any AI-based system.

f) Human-friendly semanticisation: In most AI applications, designers have the technical capacity to automate meaning- and sense-creation (semanticisation), but doing so carelessly can lead to mistrust or unfairness. It is therefore important to differentiate between tasks that should and should not be delegated to an AI: AI should facilitate human-friendly semanticisation, but should not provide it itself. AI4SG designers should not hinder people’s ability to semanticise, that is, to make sense of something, as this can affect the success of the AI-based application.

In all, it is important to remember that AI has real consequences and will sometimes produce unintended outcomes10. Designers therefore need to explore all possible perspectives to address the challenge of accountability, and to do their best to position themselves to be proactive against, and responsive to, undesirable outcomes.

Sources of Article

1. Hager, G. D., Drobnis, A., Fang, F., Ghani, R., Greenwald, A., Lyons, T., & Parkes, D. C., et al. (2017). Artificial intelligence for social good, 24–24.
2. https://ai.google/social-good/
3. https://analyticsindiamag.com/how-this-researcher-is-fighting-the-pandemic-with-his-initiative-coughagainstcovid/
4. https://www.wildlifeconservationtrust.org/can-artificial-intelligence-predict-human-wildlife-conflict/
5. https://arogyaworld.org/arogya-world-wins-google-ai-for-social-good-ai4sg/
6. NITI Aayog report.
7. Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., et al. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707.
8. Floridi, L. (2016). On human dignity as a foundation for the right to privacy. Philosophy & Technology, 29(4), 307–312. https://doi.org/10.1007/s13347-016-0220-8
9. The Print: https://theprint.in/theprint-essential/non-personal-data-social-media-what-new-data-protection-bill-could-look-like/776389/
10. Moore, J. (2019). AI for not bad. Frontiers in Big Data. https://doi.org/10.3389/fdata.2019.00032
