In this multi-part series, I explain and assess the National Strategy on AI, published by NITI Aayog, the Government of India's policy think tank, to achieve the goals of #AIforAll. Each article assesses one dimension of the national strategy in light of state-of-the-art academic scholarship. This assessment is timely because academic and industry reports indicate that the advent of AI will bring substantial social and economic changes to how work in India is organized. This series intends to inform business professionals in India about our nation's stance in the global debates on AI and the future of work, and how their work may be affected.

The advent of AI is often called the Fourth Industrial Revolution, with automation and unemployment among its predicted consequences. The pace of new technology, coupled with the Great Resignation wave, has created fear of unemployment among workers with medium skills and fragmentable jobs. However, labor theory scholars argue that mere digitization of work is unlikely to result in its dramatic disappearance. In the World Development Report 2019, experts at the World Bank find that despite the hype, the deployment of AI to replace traditional work and generate profit has been marginal. The adoption of AI at work is determined by sociopolitical factors, human agency, and how well the new systems manage the conflicting demands of their stakeholders. Nevertheless, global trends emerge with spillover effects for India: increased demand for skilled labor and decreased demand for unskilled work, increased pressure for professional development, and considerable growth of the gig economy. In this context, where technology blurs the lines between organizations and their people, business professionals benefit from a deeper understanding of AI's policy implications. Today we explore systemic consideration 1 in the NITI Aayog report: understanding the AI system's functioning for safe and reliable deployment.

Algorithmic decision-making systems (ADS) have three constituents: the algorithm, its parameters, and training data. The algorithm is the engine, the parameters are the steering wheel, and the training data is the fuel. An ADS learns by adjusting its parameters from training data, and this process is constrained by the examples it encounters in the training dataset. When algorithms encounter real-life cases that were missing or misrepresented in the training data, they behave unpredictably. In doing so, they perpetuate biases that may have existed in the training data. The common belief that algorithmic decisions are objective because they are based on data thus falls by the wayside. Algorithms are powerful, no doubt, but they are impenetrable code ‘black boxes’ often designed to privilege business interests over individual privacy or ethical concerns. This makes it possible for unscrupulous individuals in authority to make questionable choices, offer inadequate explanations, and avoid punishment with the excuse that ‘the algorithm did it’. Even when users are aware of biases and possible exploitation, the complexity of the algorithms makes them illegible to non-experts, and the biases nearly impossible to locate or correct. As algorithms become more capable, they may take over decision making entirely from human users, with unintended negative consequences. The tragic crash of Air France flight 447 is a cautionary example: after the aircraft's airspeed sensors produced inconsistent and dangerous readings, the confused handover between the automated flight systems and the human pilots ended in disaster.
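The point about skewed training data can be illustrated with a deliberately tiny sketch. The "loan approval" groups and labels below are invented for illustration, and the majority-label rule is a crude stand-in for how an ADS adjusts its parameters: the model's behavior on an underrepresented group is fixed by the one example it saw, and it has no rule at all for a group absent from training.

```python
from collections import Counter

# Toy "training data": loan-approval decisions in which one group is
# underrepresented. All names and labels are illustrative, not from
# any real dataset.
training_data = [
    ("urban", "approve"), ("urban", "approve"), ("urban", "approve"),
    ("urban", "deny"),
    ("rural", "deny"),  # a single rural example
]

def train(examples):
    """'Learn' the majority label per group -- a crude stand-in for an
    ADS adjusting its parameters from training data."""
    by_group = {}
    for group, label in examples:
        by_group.setdefault(group, Counter())[label] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

model = train(training_data)
print(model["urban"])   # well-represented group: learned from 4 examples
print(model["rural"])   # one skewed example fixes the outcome for everyone rural
print(model.get("tribal", "no rule learned"))  # unseen group: undefined behavior
```

The last line is the unpredictability the paragraph describes: for inputs missing from training data, the system either fails outright or falls back on whatever default its designers chose.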

Increasing the reliability and trustworthiness of AI algorithms is thus a priority for socioeconomic welfare. Here are some recommendations from scholars at the AI Now Institute at New York University.

  1. Avoid ‘black box’ algorithms for critical public services: This requires setting a standard for accountability and transparency in algorithms used in critical services such as justice, healthcare, and education. Algorithms should be implemented only after detailed documentation of their operation, and clear accountability for different aspects of the code, has been provided. Pre-trained models from third-party vendors must be avoided, and in-house models must be trained extensively. Models should be available for public auditing, testing, and review, and subject to predetermined standards of accountability. Since algorithms operate in diverse institutional domains, officials with backgrounds in sociology, law, administration, HR, and related fields must be introduced to them and assigned decision-making authority in case the algorithms malfunction.
  2. Make training data publicly accessible: Many algorithms are developed and trained by AI engineers and sold online as pre-trained models ready for use. Since algorithms update themselves based on training data, developers of pre-trained models must make the training data publicly available. They should also document the methods used for testing and the steps taken to ensure that spurious correlations in the data are not propagated as biases. Public organizations must continue to monitor the use of AI algorithms across contexts through academically rigorous processes accessible to the public: this is crucial in high-stakes domains like health and justice, and wherever marginalized groups are affected.
  3. Deploy a cross-disciplinary approach when exploring and mitigating bias problems: Biases in public systems have historic, structural origins and require interdisciplinary research. Rather than looking for one-shot fixes, leveraging domain expertise to determine the reasons behind specific practices and the steps to resolve them may yield meaningful long-term solutions. This may require a deeper assessment of gender stereotypes (such as associating ‘male’ with ‘doctor’ and ‘female’ with ‘nurse’), ethical codes of conduct, and evolving labor regulations, among other issues.
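The gender-stereotype example in point 3 can be made concrete with toy word vectors. The numbers below are invented for illustration (real embeddings such as word2vec or GloVe are learned from large text corpora), but they show the mechanism: the stereotype surfaces as a lopsided similarity between occupation words and gendered words, which auditors can measure.

```python
import math

# Invented 2-dimensional word vectors; real embeddings have hundreds
# of dimensions and are learned from text, stereotypes included.
vectors = {
    "doctor": [0.9, 0.1],
    "nurse":  [0.1, 0.9],
    "male":   [0.8, 0.2],
    "female": [0.2, 0.8],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# The learned stereotype shows up as asymmetric similarities.
print(cosine(vectors["male"], vectors["doctor"]))    # high
print(cosine(vectors["female"], vectors["doctor"]))  # markedly lower
print(cosine(vectors["female"], vectors["nurse"]))   # high again
```

Measuring such asymmetries is only the detection step; as the recommendation notes, deciding what counts as a harmful association and how to correct it requires domain experts, not just engineers.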

In India, AI is being adopted rapidly across sectors like manufacturing and healthcare. The benefits are immense, but so are the risks. Policymakers must be cautious before integrating AI into existing systems because, without adequate deliberation, they may be unaware of the broader implications of algorithms embedded in legacy systems. There is an urgent need for algorithmic transparency, accountability, and explainability in our strategy to implement AI across our public institutions.

Sources of Article

National AI Strategy document
