A handful of individuals are changing the landscape of artificial intelligence (AI) at a global level, and Shameek Kundu is one of them. His string of achievements is so long that it is hard to introduce him in a single breath.

Currently, he heads the Financial Services vertical at Truera, a startup dedicated to building trust in AI. He is also a member of the Singapore government's Advisory Council on the Ethical Use of AI and Data, as well as an expert member of the Global Partnership on AI. Previously, he served on several AI governance practitioner forums at the Bank of England/FCA and the Monetary Authority of Singapore.

With so much expertise in the field, there is no one better placed to understand the challenges that could limit the growth of AI. Jibu Elias, Research & Content Head at IndiaAI, caught up with Kundu to learn more about his current roles and responsibilities, the significance of trust in AI, and more.

Tryst with AI

Kundu starts off the chat by sharing that there is "a little bit of method to the madness" in his career trajectory. About nine years ago, he was the Group Chief Data Officer at Standard Chartered Bank, Singapore. It was during this stint that he realised the role of data and algorithms would only grow with time. His second realisation was that even people with data science degrees often have a limited understanding of the data and algorithms that increasingly drive decision-making.

"A lot of my work in the last decade has centred around trust in data and algorithms. Most of it was internally focused during my banking career, but from 2018-19, I started working with regulatory bodies as part of my role – in particular, the Monetary Authority of Singapore and the IMDA Privacy Commissioner. Eventually, this also led me to work with the Bank of England on the same subject," he added.

Having worked so extensively on AI regulation in financial services, Kundu believes the biggest challenge facing the industry today is the risk that AI simply won't take off. That is partly due to regulatory and ethical concerns, but also because everything about it is difficult – from getting the data together, to ensuring systems scale up, to finding the right talent and skills.

"My biggest fear is that it may become another case of overpromise and under-delivery, for all the reasons I just mentioned," he added.

A deep dive into the regulatory landscape 

Although Singapore is a small country, it is active in the AI space, believes Kundu. He works with the IMDA, which he calls the equivalent of MeitY; it has recently introduced a centre on digital trust. Its focus is not purely on regulating the use of data and AI, but also on leveraging them in a safe and responsible manner.

"The IMDA has started a testing facility, where you can go through the entire process – a test – that will get you the trust mark. The Monetary Authority of Singapore, which can be regarded as an RBI equivalent, has something similar. There, I was involved as one of the co-authors of the 2018 regulatory guideline on Fairness, Ethics, Accountability, and Transparency," he shared.

Further, Kundu mentioned that Singapore is well aware of its strengths and weaknesses – which is why it believes that, to create impact, it must collaborate heavily with other agencies.

But is there a right amount of regulation? He answers the burning question: "I don't think anybody would like the absence of regulation; that ship has sailed. In fact, you'll find several technology folks welcoming regulatory guardrails of some kind. Nobody I have talked to has denied the need to have fairness requirements in place – it's just a question of how explicit they must be."

While several questions appear to be unique to AI, they are not, reveals Kundu. For instance, in financial services, the question of how transparent to be with customers about how a decision was reached applies just as much to existing decisions where AI automation is not being leveraged. His advice is that regulatory guidelines should be principle-based.

"You have to embed AI governance by design – it's not a good idea to bolt it on at the end of the process. Plus, the integration at the initial stage must be as painless and automated as possible," he added.

Why trust is important for AI 

If there is one factor intrinsically linked with AI and other technologies, it is trust. Trust comes from transparency and reliable testing, believes Kundu. Certain parts are entirely human-oriented – for instance, deciding whether a particular model should be used in a particular area at all.

"Like the European Union says, do not use social scoring for most applications. No machine can tell you whether you should or should not use social scoring, but if you do, you must look at it through a lens of fairness and robustness. Now, for these technical aspects, Truera provides the software to test and monitor – but we are very clear that there's a strong role for the human to use this output in deciding whether the model is trustworthy or not," he added.

Explaining further, Kundu says that this is easier to apply to traditional machine learning models built on structured data.

"We are beginning to look at how some of this can be applied to large language generation models," he concluded, adding that this kind of technology should not be industry-specific. It all boils down to implementation.
