The importance of Responsible AI in AI ecosystems is familiar to us all. However, it is equally critical to maintain it throughout the life of AI models. To explore the idea thoroughly, the need for it, and the challenges, Jibu Elias, Content & Research Lead at INDIAai, spoke with Shea Brown, Founder & CEO of BABL AI Inc.

Shea’s company, BABL AI Inc., guides companies in building non-discriminating, bias-free algorithms. His vision is to establish a regulatory body that can audit the AI algorithms used by various companies, verifying that they comply with industry standards and do not infringe on people’s rights.

Shea believes that, owing to the lack of standardisation and uniformity in testing AI algorithms, the practice of algorithmic auditing is much needed.

Jibu: What is algorithmic auditing, and how is it achieved?

Shea: Algorithmic auditing has two facets, or roles. One is internal teams periodically testing algorithms to check that they perform as expected under variable conditions and that they are free of bias. The other is testing those algorithms that play a significant role in people’s lives. In the current AI landscape, where regulation is lacking, the need of the hour is an external third party that can assure your customers that they can trust you and have faith that you are doing the right thing.
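As a concrete illustration (not part of the interview), one common check an internal team might run when testing an algorithm for bias is the "four-fifths rule": flag any group whose selection rate falls below 80% of the highest group's rate. The function names and data format here are hypothetical, a minimal sketch of the idea:

```python
from collections import defaultdict

def selection_rates(outcomes):
    """Compute per-group selection rates from (group, selected) pairs."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_flags(outcomes, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times
    the highest group's rate (the 'four-fifths rule')."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: (r / best) < threshold for g, r in rates.items()}

# Hypothetical hiring outcomes: group A selected 8/10, group B selected 4/10.
outcomes = ([("A", True)] * 8 + [("A", False)] * 2
            + [("B", True)] * 4 + [("B", False)] * 6)
print(disparate_impact_flags(outcomes))  # group B is flagged
```

A real audit would go well beyond a single ratio (confidence intervals, intersectional groups, multiple fairness definitions), but a simple disparity check like this is a typical starting point.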

Shea stressed that training all AI developers to be sensitive to bias, fairness, consent, rights, and good practices is critical to auditing.

Jibu: Is algorithmic auditing going to build people’s and companies’ trust in AI?

Shea: Auditing is just one piece of the puzzle, but a very critical piece of the bigger picture. For responsible innovation, affirming trust is unavoidable. Verification and testing start early in the lifecycle of an algorithm: everything from the inception of the idea to who decides the appropriateness of the algorithm is a critical factor. Even the data collection methods should be fair and consent-based, and should not invade anyone’s privacy. We need a regulatory ecosystem where consumers can rest assured that someone is guarding their data and watching these algorithms. So auditing is not the only way to build trust amongst consumers and stakeholders, but it is critical.

Jibu: What challenges do you think auditing will face in making its place in the ecosystem?

Shea: One challenge auditing faces is a lack of trust from companies, which worry that their data and inside information could be exposed during the auditing process. Non-Disclosure Agreements and black-box testing can win companies’ faith in the process.

Jibu: Shea, how far are we from proper regulations being implemented? 

Shea: “In the US, we are pretty close to achieving that state where individual cities or states will have their own regulations,” said Shea. For example, there is a proposed regulation in New York that, if passed, would require any algorithm used for hiring to be tested for potential bias.

“Another development worth noting is the Algorithmic Accountability Act in the US, proposed in 2019 but not yet passed, which would mandate an algorithmic impact assessment for any organization that uses an algorithm affecting more than a certain number of people,” shared Shea. These are a few promising steps toward building trustworthy AI ecosystems, though progress has been slow.

Jibu: How are the big players in the AI arena going to react to the whole idea of auditing? Would it be only internal auditing for them, or is it going to be third-party algorithmic auditing for them too?

Shea: I feel confident that companies will gradually come to terms with the need for audits and consulting, shaping a better and stronger algorithmic auditing community. The gap in AI-knowledgeable talent needs to be bridged to achieve the desired results in this direction.

Shea further explained that algorithmic auditing might face resistance from some of the big AI players. When it comes to AI algorithms, the developer is the one who knows the algorithm best, so it can be difficult for companies to let an external third party come in and judge fairness and performance. However, “I believe we are all in this together, and the final goal is to have fairer, higher-quality, bias-free, non-discriminating algorithms,” said Shea.

Jibu: Any message for students aspiring to algorithmic auditing as a job role, Shea?

Shea: In terms of studies, social science combined with technical AI knowledge can empower you to tackle blind spots that pop up during auditing, testing, and verification work, and will make you a good auditor. Shea shared that his initial studies were in astrophysics, but he was drawn to making a mark in AI. A deep understanding of ML and mathematics is important; however, a deep understanding of ethics is critical to gaining the right perspective.

