“As leaders, it is incumbent on all of us to make sure we are building a world in which every individual has an opportunity to thrive. Understanding what AI can do and how it fits into your strategy is the beginning, not the end, of that process”.

— Andrew Ng, Founder & CEO, Landing AI 

When people talk about AI today, they’re mostly talking about machine learning: a sub-field of computer science that dates back to at least the 1950s. The methods popular today for building recommendation engines, spam classifiers or traffic predictors are not fundamentally different from the algorithms invented decades ago.

So why all the interest and investment now? The simple answer is data. Fitbits, GPS traces, images and credit card purchases have turned every human being on the planet into a living, breathing data warehouse.

Today’s debate is no longer about whether AI will participate in society - it is a complex discussion about how we should employ this collection of technologies to run our communities, businesses and governments in effective yet human-centred ways.

The era of smart technology 

AI has made our lives faster and more efficient in more ways than we may realise.

When we order food from Swiggy or Zomato, a backend algorithm returns restaurants and dishes in response to our query using fuzzy text matching and geo-location filtering. When we book a cab from Ola or Uber, the algorithm forecasts demand and alerts nearby cabs, thereby decreasing the expected time of arrival.
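To make that concrete, here is a minimal Python sketch of how such a search backend might combine fuzzy text matching with geo-location filtering. The restaurant data, the 5 km radius and the similarity threshold are invented for illustration; this is not any platform’s actual code.

    # Toy restaurant search: fuzzy text matching + geo-location filtering.
    # All data and thresholds below are made up for illustration.
    from difflib import SequenceMatcher
    from math import radians, sin, cos, asin, sqrt

    RESTAURANTS = [
        {"name": "Biryani Blues", "lat": 12.9716, "lon": 77.5946},
        {"name": "Burger Barn", "lat": 12.9352, "lon": 77.6245},
        {"name": "Biriyani House", "lat": 13.0827, "lon": 80.2707},  # different city
    ]

    def haversine_km(lat1, lon1, lat2, lon2):
        # Great-circle distance between two points, in kilometres.
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * 6371 * asin(sqrt(a))

    def search(query, user_lat, user_lon, radius_km=5.0):
        # Keep only nearby restaurants, then rank by how closely the name matches the query.
        results = []
        for r in RESTAURANTS:
            if haversine_km(user_lat, user_lon, r["lat"], r["lon"]) > radius_km:
                continue  # geo-location filter: too far away to deliver
            score = SequenceMatcher(None, query.lower(), r["name"].lower()).ratio()
            if score > 0.5:  # fuzzy match: tolerates spellings like "biriyani"
                results.append((score, r["name"]))
        return [name for _, name in sorted(results, reverse=True)]

    print(search("biriyani", 12.9716, 77.5946))  # -> ['Biryani Blues']

A real service would layer many more ranking signals (ratings, delivery times, past orders) on top of this basic retrieval step.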

When we stream Netflix, Prime or Hotstar, AI is used to create teasers, highlights, recaps and trailers that can boost viewership - no more sifting through hours of content to decide what we want to watch.

Be it route optimisation in Google Maps, recommended feeds on Instagram or AI chatbots in banking, healthcare and e-commerce portals, we now get a fast, highly personalised interface wherever we go.

The need to regulate AI

While AI has transformed convenience across several industries, it is not without its shortcomings - the digital economy has a significant impact on labour, identity, and human rights. 

India is gradually catching on to the potential harms posed by AI - as is evident from “OK Computer”, the Indian absurdist sci-fi TV series featuring Radhika Apte and Jackie Shroff. Set in 2031, the show revolves around a self-driving taxi being hacked and ordered to kill an anonymous human victim. We may not have to wait another nine years, though; something similar has already happened, at a time when confidence in autonomous vehicles was at an all-time high. In March 2018, a self-driving Uber struck and killed Elaine Herzberg, a pedestrian in Arizona. Uber did not face criminal charges, with prosecutors finding “there was no basis for criminal liability”.

This doesn’t necessarily mean the AI became sentient and developed homicidal tendencies - only that certain crucial safety flaws and algorithmic biases were overlooked in its design. Even more troubling, the exact cause of the failure is hard to ascertain: owing to the complexity of machine learning models, their reasoning is not always interpretable to humans, creating a black-box effect.

Understanding how AI works

To understand this better, let’s back up a bit - how exactly do machine learning models work? 

In his classic 1962 essay, Artificial Intelligence: A Frontier of Automation, the IBM researcher Arthur Samuel wrote: “Programming for computations is, at best, a difficult task, not primarily because of any inherent complexity in the computer itself, but rather, because of the need to spell out every minute step of the process in the most exasperating detail. Computers, as any programmer will tell you, are giant morons, not giant brains”.

His basic idea was this: instead of telling the computer the exact steps required to solve a problem, show it examples of the problem and let it figure out how to solve it itself. Essentially, much like a human being, the machine could now “learn” from its experiences.
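As a toy illustration of that idea (far simpler than Samuel’s checkers program or any real system), the sketch below is never told what makes a message spam; it is shown a handful of labelled examples, invented here for illustration, and derives its own rule from them.

    # Learning from examples instead of hand-written rules.
    # The tiny labelled dataset below is fabricated for illustration.
    from collections import Counter

    examples = [
        ("win a free prize now", "spam"),
        ("claim your free lottery prize", "spam"),
        ("meeting rescheduled to monday", "ham"),
        ("lunch with the project team", "ham"),
    ]

    # "Training": count how often each word appears under each label.
    word_counts = {"spam": Counter(), "ham": Counter()}
    for text, label in examples:
        word_counts[label].update(text.split())

    def predict(text):
        # Label new text by whichever class its words were seen with more often.
        scores = {label: sum(counts[w] for w in text.split())
                  for label, counts in word_counts.items()}
        return max(scores, key=scores.get)

    print(predict("free prize inside"))       # -> spam
    print(predict("team meeting on monday"))  # -> ham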

How can AI be biased?

Any algorithm is only as good as its training data, and no training data is without bias - not even data generated through automation. For instance, Amazon's book store experienced a "cataloging error" in which 57,310 books lost their "sales ranking", a number used to surface books in its recommendation algorithm. Search terms that suddenly returned no results included "lgbtq", "pride", and "queer".

The books affected were reported to include between "dozens" and "hundreds" of titles with LGBTQIA+ themes, often labeled as "adult material" while similar books featuring heterosexual characters remained at the top of the sales ranking. This is a result of AI content moderation - something you might have increasingly noticed on Instagram or Twitter, where the algorithm deactivates accounts or takes down whatever it considers inappropriate. This has had a profound effect on the lives and work of individuals in different parts of the world, including Myanmar, Australia, Europe and China, as well as in marginalized communities in the U.S. and elsewhere. Users are kept in the dark, voices that should be heard are silenced by automation, and that must change.
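A hypothetical sketch of that mechanism: once an automated classifier attaches an "adult" flag to a catalogue entry, a downstream search function can silently drop it, with no notice to readers or authors. The catalogue, flags and titles below are invented and are not Amazon's actual system or data.

    # Toy catalogue with flags a content-moderation model might have assigned.
    # All entries and flags are fabricated for illustration.
    catalogue = [
        {"title": "Pride and Joy: LGBTQ Family Stories", "adult_flag": True, "sales_rank": 120},  # mislabelled
        {"title": "A Conventional Romance Novel", "adult_flag": False, "sales_rank": 450},
        {"title": "Queer Histories", "adult_flag": True, "sales_rank": 75},                        # mislabelled
    ]

    def search(query):
        # Return matching titles, excluding anything flagged as "adult material".
        hits = [b for b in catalogue
                if query.lower() in b["title"].lower() and not b["adult_flag"]]
        return [b["title"] for b in sorted(hits, key=lambda b: b["sales_rank"])]

    print(search("pride"))    # -> []  (the mislabelled book simply never appears)
    print(search("romance"))  # -> ['A Conventional Romance Novel']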

Along the same lines, the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) predictive policing tool was found to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate of white defendants.

If this were a human authority, there would still be the hope of a meaningful dialogue to better understand the reasoning behind the classification - but with AI, defendants rarely have an opportunity to challenge their assessments. The results are usually shared with the defendant’s attorney, but the calculations that transformed the underlying data into a score are rarely revealed.

Speaking of authoritative AI, IBM’s Watson for Oncology is being implemented in Manipal Hospitals in India. As a note, Watson has come under fire globally for being a human-driven engine masquerading as an artificial intelligence. Concerns that have been voiced include the lack of an independent study, the lack of follow-up to understand whether the recommendations helped patients, and the fact that the solution is trained on data that does not reflect the diversity of cancer patients across the world.

Here’s where things can get really dystopian. The recruitment-technology firm HireVue, used by companies such as Microsoft, Oracle, IBM, Nike, Accenture and Goldman Sachs, offers an AI-powered service that automatically analyzes an applicant's tone, word choice and facial movements during video interviews to predict their skill and demeanor on the job. (Candidates are encouraged to smile for best results.) The system uses a candidate's computer or cellphone camera to analyze their facial movements, word choice and speaking voice before ranking them against other applicants based on an automatically generated “employability” score. As a result, applicants who deviate from the “traditional” - including people who don't speak English as a native language or who are disabled - are likely to get lower scores.

Ever experienced the frustration of interacting with an AI chatbot when your enquiry didn’t fit within the very narrow parameters the bot was trained for, or when you didn’t use the precise language needed to communicate it? Now consider the complexities that could arise from KYC-compliance AI systems if the data sources are incorrect, or the efficacy of a fraud-detection AI system without the right kind of data. According to Accenture’s Senior Partner and Financial Services Sector Leader Rishi Aurora, “A key challenge is the availability of the right data. Data is the lifeblood of AI, and any vulnerability arising from unverified information is a serious concern for businesses. Structured mechanisms for collecting, validating, standardizing, correlating, archiving and distributing AI relevant data is crucial”.

Incorporating ethical practices into AI technologies

How do you make Responsible AI the norm rather than the exception? A good place to start is by recognising the various cognitive biases that can influence the data that governs your machine learning models. It’s important to understand that machine learning algorithms are just that - algorithms. Underneath the statistics and programming are equations simply trying to maximize or minimize an objective. At the end of the day, it’s “garbage in, garbage out”.
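A toy demonstration of that point: a model that only tries to reproduce historical decisions will also reproduce whatever bias those decisions contain. The hiring records below are fabricated purely for illustration.

    # "Garbage in, garbage out": fitting to biased labels reproduces the bias.
    # All records below are fabricated for illustration.
    history = [
        # (years_experience, gender, was_hired) - past human decisions, not ground truth
        (5, "M", 1), (6, "M", 1), (4, "M", 1), (3, "M", 0),
        (5, "F", 0), (6, "F", 0), (4, "F", 1), (7, "F", 0),
    ]

    def hire_rate(records, gender):
        outcomes = [hired for _, g, hired in records if g == gender]
        return sum(outcomes) / len(outcomes)

    # The "model": predict the historical hire rate for candidates of this group.
    def predict(gender):
        return hire_rate(history, gender)

    print(f"learned P(hire | male):   {predict('M'):.2f}")  # 0.75
    print(f"learned P(hire | female): {predict('F'):.2f}")  # 0.25
    # Minimising error against biased labels simply encodes the bias.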

What does this mean, then? Do we avoid the adoption and integration of AI systems altogether? Just as a knife can be a dangerous weapon, there is not much you could do in the kitchen without one. We cannot turn back the clock and stop AI innovation, just as we didn’t stop chemistry by banning chemical weapons or stop biological research by banning biological weapons. Nor would we want to - which is why frameworks addressing ethical practices in deploying AI technologies are essential. The earliest known AI “framework” dates back to 1942, when Isaac Asimov devised a set of rules called “The Three Laws of Robotics”:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov later added a Zeroth Law, which precedes the others: a robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Developing an AI framework

Developing an AI framework in India raises a number of interesting questions. For example:

  • How do you think AI should behave in India? Would this be different in another context? What societal norms are specific to India that you think AI should reflect?
  • Should AI be allowed to make autonomous decisions? Are there situations where AI should never take autonomous decisions? Should there always be a kill switch? Should a human always be in the loop?
  • Should AI be given personhood?
  • Should children be allowed to use AI?
  • Who should be held responsible if harm arises from a decision taken or augmented by an AI?
  • Can we trust AI? Is there a way for programmers to develop AI that demonstrates trustworthiness?

Principles, guidelines and legal provisions have emerged at the international level as ethical frameworks for guiding the development and use of AI. They have come from civil society, academia, industry and standards-setting bodies, and are beginning to emerge from governments. Some of these include:

● Standards-Setting Bodies

○ IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems and its Ethically Aligned Design

○ NITI Aayog - Responsible AI for All

○ UNI Global Union - Top 10 Principles for Ethical AI

● Government

○ Digital Services Act

○ Digital Markets Act

○ The EU AI Act

● Companies

○ Google

○ Microsoft

○ Accenture

AI will radically transform and disrupt our world, but the right ethical choices can make it a force for good for humanity. Until governments, businesses and academia start bringing codes of ethics into the AI discussion, there is no anchor for integrating smart tech into our lives - at least not one that avoids daily hysteria and controversies about AI turning genocidal or committing financial fraud. An international regulatory model is essential for the responsible design, development and deployment of AI.

If you’re curious about how you can secure your data while using various AI-powered applications, consider checking out these browser plug-ins:

https://chrome.google.com/webstore/detail/i-dont-care-about-cookies/

https://tosdr.org

Additionally, here are a few alternative privacy-centric services worth checking out:

https://startpage.com/

https://www.saymine.com

https://signal.org/en/

https://duckduckgo.com/spread 

https://proton.me/mail

https://www.eder.io 

Sources of Article

https://www.techtarget.com/searchenterpriseai/feature/How-to-solve-the-black-box-AI-problem-through-transparency?amp=1

