TAMDEF:

The AI policy document released by the state government of Tamil Nadu has outlined a six-part framework that addresses six core challenges in AI:

Transparency & Audit (T): Since AI-based Systems will regularly interact with humans, technology providers should be able to explain the decision-making process to users so that the AI System does not remain a black box to them. The explainability of such systems is critical when government agencies use them for decision making. Moreover, an audit trail of the decisions made will be needed when disputes arise and public agencies are called upon to explain their decisions.

Accountability & Legal Issues (A): AI makes it challenging to uphold accountability. When algorithms are used for decision making, the basis of a decision is often unknown even to the developer. AI machines are capable of inventing superior ways of accomplishing the tasks they are given, with unintended consequences that can have adverse implications for society.

Misuse Protection (M): Like all emerging technologies in their nascent stage of development, AI's full potential is not yet apparent, even to its developers. Despite noble intentions, there is a possibility of these technologies being misused. AI policy has to be far-reaching in considering both positive applications and the possibilities of misuse. Further, the policy needs to be balanced: it must encourage innovation without excessive regulation while at the same time ensuring that the possibilities of misuse are minimized.

Digital Divide & Data Deficit (D): Since the entire AI revolution has data at its foundation, there is a real danger that societies with inadequate access to information technology, the Internet and digitization will be left behind. Informed citizens tend to gain disproportionately in this data-driven revolution. Communities with good-quality, granular data will derive the maximum benefit from this disruption, while communities whose data is of poor quality or poor granularity face the threat of being adversely affected and left behind in harnessing the power of AI to improve the lives of their citizens. Unfortunately, it is the low-resource communities that are hit hardest by this data deficit, because they are the ones that never had the resources to invest in data collection and collation. Another challenge that emerges from this technology is the skewed power distribution between digital haves and have-nots. Only those who have the ability, knowledge and resources needed to connect to online data-driven systems will be heard; the voices of others may not get registered in the system.

Ethics (E): Defining ethics for machines and then making them computable is a tall order. If treated purely from an AI perspective, ethics can be divided into two sub-components: (i) Privacy and Data Protection, and (ii) Human and Environmental Values. Both these dimensions of ethics are critical for keeping AI Systems safe for human society.

Fairness and Equity (F): AI Systems can create new social paradigms which, if left unregulated and unevaluated, can severely damage the social fabric and expose people lower in the bargaining hierarchy to a real threat of exploitation and unfair treatment. This could lead to the commoditization of human labour and chip away at human dignity. On the other hand, an AI System designed with equity as a priority would ensure that no one gets left behind. Another critical requirement for an AI System is fairness: such systems shall be 'trained' in human values, shall not exhibit any gender or racial bias, and shall be designed to stay away from 'social profiling' (especially in law enforcement, fraud detection and crime prevention). AI Systems shall comply with a 'free of bias' norm to prevent stereotyping.

DEEP-MAX Scorecard

An objective scorecard based on the six challenges of AI in public policy is proposed which, with suitably designed test data sets, can reliably produce a safety and social desirability score for a given AI System by testing it against each of the seven DEEP-MAX parameters.

DEEP-MAX Scorecard is a transparent point-based rating system for AI Systems on the seven key parameters of

  • Diversity (D)
  • Equity (E)
  • Ethics (E)
  • Privacy and Data Protection (P)
  • Misuse Protection (M)
  • Audit and Transparency (A)
  • Cross-Geography and Cross-Society Applicability (X)


AI Systems are self-learning, and the DEEP-MAX scores that ship with each AI module may no longer be valid after some time. Periodic updates of the DEEP-MAX Scorecard must therefore be ensured for all AI Systems deployed for public use, with the required periodicity of updates established based on the nature of the AI use-case class.
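As a rough illustration of how such a point-based rating and its revalidation window might be represented, the Python sketch below models a scorecard over the seven DEEP-MAX parameters. The 0-10 point scale, equal weighting, use-case classes and revalidation periods are all hypothetical assumptions made for demonstration; the policy itself does not prescribe a scoring formula.

```python
# Illustrative sketch only: the point scale, weights, use-case classes and
# revalidation periods below are hypothetical assumptions, not policy values.

from dataclasses import dataclass, field
from datetime import date, timedelta

# The seven DEEP-MAX parameters, each scored on an assumed 0-10 point scale.
PARAMETERS = (
    "diversity",            # D
    "equity",               # E
    "ethics",               # E
    "privacy",              # P  (privacy and data protection)
    "misuse_protection",    # M
    "audit_transparency",   # A
    "cross_applicability",  # X  (cross-geography / cross-society)
)

# Hypothetical revalidation periods per AI use-case class (in days).
REVALIDATION_DAYS = {"high_risk": 90, "medium_risk": 180, "low_risk": 365}


@dataclass
class DeepMaxScorecard:
    system_name: str
    use_case_class: str                         # e.g. "high_risk"
    scores: dict = field(default_factory=dict)  # parameter -> points (0-10)
    assessed_on: date = field(default_factory=date.today)

    def total(self) -> int:
        """Aggregate point score across the seven parameters (equal weights assumed)."""
        missing = [p for p in PARAMETERS if p not in self.scores]
        if missing:
            raise ValueError(f"Unscored parameters: {missing}")
        return sum(self.scores[p] for p in PARAMETERS)

    def is_current(self, today=None) -> bool:
        """True if the score is still within its assumed revalidation window."""
        today = today or date.today()
        period = timedelta(days=REVALIDATION_DAYS[self.use_case_class])
        return today - self.assessed_on <= period


# Example: score a hypothetical system and check whether it needs re-assessment.
card = DeepMaxScorecard(
    system_name="welfare-eligibility-classifier",
    use_case_class="high_risk",
    scores={p: 7 for p in PARAMETERS},
)
print(card.total())        # 49 out of a maximum 70 under the assumed scale
print(card.is_current())   # True immediately after assessment
```

The sketch ties the revalidation period to the use-case class, mirroring the policy's point that self-learning systems need their scores refreshed on a schedule determined by how the system is used.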


Source: Tamil Nadu Safe and Ethical Artificial Intelligence Policy 2020
