The potential benefits of AI in any society are huge. But we also understand the criticality of regulating AI; left unregulated, it can lead to disastrous consequences. The word "regulation", though, is rather vague at times. There's a hard-touch approach to regulation, and there's a light-touch one: press too hard and innovation is asphyxiated; keep it too light and the dark web's residents can cut loose. To address this, the EU has come up with a framework (which builds on an earlier coordinated plan) to set a direction for AI developers and consumers so that undesirable consequences can be avoided. https://ec.europa.eu/commission/presscorner/detail/en/IP_21_1682
Biometric identification systems have incredible potential to disrupt legacies, and the word "disrupt" can carry both positive and negative connotations. We've seen in states such as Telangana, Tamil Nadu, and others how these systems can enable attendance-taking with great accuracy and time saved. But this technology is so powerful that its deleterious applications are equally lucrative and attractive for certain sections of society. Examples are rife: social scoring by governments, exploitation of the vulnerabilities of children, use of subliminal techniques, and so on.

The framework outlines a limited set of harmful uses of AI that infringe on the fundamental rights of EU citizens. This kind of risk is "unacceptable" and therefore banned. The second category is "high risk". The guiding principle at the centre of this framework is TRUST, treated as a must-have criterion rather than something merely good to have; that's why the framework is discerning about this category as well. High-risk applications have to meet mandatory requirements that cover "the quality of data sets used; technical documentation and record-keeping; transparency and the provision of information to users; human oversight; and robustness, accuracy, and cybersecurity." Any breach will allow national authorities to have access to the information needed to investigate whether the use of the AI system complied with EU law. Similarly, applications posing limited and minimal risk have been tabulated, with guiding principles on how they can be made to function in a trustworthy manner.
The classification has been done based on:
· the extent of the use and its intended purpose.
· the number of potentially affected persons.
· the outcome and the irreversibility of harm.
· the extent to which existing Union legislation provides for effective measures to prevent or substantially minimize those risks.
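The four criteria above can be thought of as inputs to a tiering decision. The framework itself does not prescribe any algorithm, so the sketch below is purely illustrative: the names (`RiskTier`, `AISystemProfile`, `classify_system`), the fields, and the thresholds are my own hypothetical assumptions, used only to make the tier logic concrete.

```python
# Illustrative sketch only: the EU framework prescribes no algorithm.
# All names, fields, and thresholds here are hypothetical assumptions.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g. social scoring)
    HIGH = "high"                  # mandatory conformity assessment
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # largely unregulated


@dataclass
class AISystemProfile:
    banned_practice: bool        # e.g. subliminal techniques, social scoring
    in_high_risk_area: bool      # use falls in a listed sensitive area
    affected_persons: int        # scale of potential impact
    harm_irreversible: bool      # can the outcome be undone?
    interacts_with_humans: bool  # e.g. chatbots, deepfakes


def classify_system(p: AISystemProfile) -> RiskTier:
    """Map a system profile onto the four risk tiers (hypothetical logic)."""
    if p.banned_practice:
        return RiskTier.UNACCEPTABLE
    # Hypothetical combination of the scale and irreversibility criteria:
    if p.in_high_risk_area and (p.harm_irreversible or p.affected_persons > 10_000):
        return RiskTier.HIGH
    if p.interacts_with_humans:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

A social-scoring system would land in the banned tier regardless of scale, while a widely deployed hiring tool would fall into the high-risk tier and trigger the conformity requirements discussed earlier.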
The problem is that all these things look great on paper, but when it comes to legislation and implementation, there are grey areas. Real-time remote biometric identification in public places poses a threat to fundamental rights; it is therefore prohibited in principle, but there are exceptions, such as fighting serious crime. This implies that cops can still use it. There are good cops… and there are bad cops.
But the good thing is that developers of AI systems classified as "high-risk" will have to undergo a conformity assessment: are such systems trustworthy with respect to data quality, documentation and traceability, transparency, human oversight, accuracy, and robustness?
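One way to picture such an assessment is as a checklist over the requirement areas named above. This is a hypothetical sketch, not any official schema: the list entries paraphrase the framework's requirement areas, and `assess_conformity` and its evidence dictionary are invented for illustration.

```python
# Hypothetical self-check sketch; names and structure are assumptions,
# not an official EU schema.
CONFORMITY_CHECKS = [
    "quality of data sets used",
    "technical documentation and record-keeping",
    "transparency and provision of information to users",
    "human oversight measures",
    "robustness, accuracy and cybersecurity",
]


def assess_conformity(evidence: dict) -> tuple:
    """Return (passed, failing requirement areas) for a system's evidence."""
    failures = [c for c in CONFORMITY_CHECKS if not evidence.get(c, False)]
    return (not failures, failures)
```

A provider passing every check would get an empty failure list; any missing area is surfaced by name, which mirrors how a breach gives national authorities a concrete thread to investigate.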
The secret sauce is really about compliance, and the EU's Member States carry the key responsibility. Each State "should designate one or more national competent authorities to supervise the application and implementation, as well as carry out market surveillance activities." The overarching national supervisory authority will represent its country on the European Artificial Intelligence Board, which in turn is tasked with facilitating implementation and supporting standardisation to the extent possible.
Transparency, accountability, responsibility, and the like are things we understand. Every framework for responsible AI has these principles built in right at the design stage, and yet there are black boxes. We also know that such instances can be minimised but not done away with altogether, much as we might desire. What this framework does is tighten the loose ends by making definitions inclusive and specific; and, in case of breaches, national authorities will have access to the information needed for investigation.
Even importers of AI systems will have to ensure that the appropriate conformity assessments (as per EU requirements) have been carried out by the foreign provider and that the system bears the required marking. On the one hand this may seem constricting, but looked at in greater depth, it opens up a new line of innovative thinking that is deeply focussed on human-centric AI, with huge scope for testing and experimentation as well. AI regulation is still an emerging field, and this would set a high benchmark even for countries that trade with the EU.