It is an undeniable fact that Fractal played a crucial role in laying the foundations of India's AI ecosystem, with its co-founder and current Group Chief Executive Srikanth Velamakanni being one of the pioneers in this arena.

On a personal note, speaking with Srikanth for INDIAai often felt more like a conversation with a philosopher and logician than with a corporate leader. It could be that his perceptions and ideas are deeply rooted in mathematics, or his habit of reading around 50 books a year. Either way, Srikanth's thoughts and beliefs are characterised in equal measure by clarity, simplicity, and depth.

The journey to predict human behaviour

Fractal, or Fractal Analytics as it was known earlier, was founded by Srikanth Velamakanni and Pranay Agrawal in 2000. "We were interested in maths and human behaviour; as a result, we started asking how we could use maths to understand the world better and help people make better decisions, and that's how Fractal began," recalled Srikanth.

"Initially, we started with risk models to predict consumer behaviour on things such as credit default, and we soon expanded to how people generally behave during a recession period," he added. 

Soon after Fractal's founding, in 2001, the world witnessed a recession. It resulted in an eight-month economic downturn, mostly affecting the US, European countries and Asian economic powerhouses such as Japan. During this period, Fractal mastered its trade of predicting consumer behaviour, building accurate models of how consumer behaviour and decision-making change during hard times.

"It's all in the data; you can accurately predict consumer behaviour if you have enough data and you know the data thoroughly," Srikanth pointed out.

It was in this same period that Fractal partnered with the then-upcoming HDFC Bank to build its risk infrastructure. The risk models made by Fractal became crucial to the bank's process of approving credit card applications as well as acquiring new customers from the open market. These models, alongside the bank's strong process orientation, helped the bank thrive even during the 2008 global economic crisis.

In 2004, a major shift happened in Fractal's and Srikanth's life. "We wanted to test ourselves globally, and so we moved to the US," said Srikanth.

"However, we then realised that the world had actually moved on quite a bit from an analytics and AI standpoint, and it took us another six months to a year to catch up with the rest of the world back then."

According to Srikanth, some of the technology they came across was very new not just to India but to the world. Even the ones they thought were new in India were quite outdated there. "It was an insightful experience, and in the next few years we were able to build a great business around the US and the world, and that's the first ten years of Fractal," recollects Srikanth.

For Fractal, the next ten years were about raising capital, building a strong team and creating products such as Qure.ai, which has been extremely successful in healthcare. Other companies from Fractal's stable include Cuddle.ai, an AI-powered, voice-enabled business analytics platform that automatically detects patterns in enterprise data, and Eugenie.ai, which focuses on anomaly detection using AI.

"We are fortunate to have the trust of the world's biggest clients, including many of the Fortune 500 companies, and they see Fractal as a strategic partner," said Srikanth.

"So when you solve the toughest problems for the most demanding clients, it becomes easier to serve the rest of the world," added Srikanth.

Today, Fractal has a workforce of over 2,000 people and is the largest AI player in the country. The company has also raised $325 million in capital.

"Starting in analytics 20 years ago was like the proverbial selling-shoes-in-Africa situation," said Srikanth. "It feels great to be in this space as AI is now everywhere, and India has woken up to the potential of the technology, with many AI startups doing amazing work. It feels good to have played a part in starting this industry and watching it grow."

When it comes to AI, Fractal and Srikanth have a unique approach. "Firstly, what we think about at Fractal is how AI can enable better decisions. For example, Qure.ai helps radiologists make better diagnoses, which in turn helps doctors make better decisions," noted Srikanth.

"Secondly, for us, AI is not just about automation; in most of the processes we think about, it is about augmenting humans to make better decisions," he explained. "In short, it is about using AI to enable better decisions, either automated or augmented."

"We think smart algorithms are not enough to enable decisions; rather, they are one step of the bigger process, especially in places where a human being is making decisions."

"We have to understand how human behaviour works as well; to help the radiologist, the doctor or the CEO make the right decision, we have to understand them too, and that is where we combine AI and human behaviour to solve problems."

The path to building responsible AI systems for the future

AI's growing prominence in human decision-making, along with the rise of automated decision-making systems, has led to more controversies. From parole allocation algorithms in the US to the A-level exam grading system in the UK, AI systems have been at the centre of controversies as a result of the severe biases they have demonstrated.

"This is a very serious problem," stated Srikanth. "AI is not bias-free because it learns from data, and the data contains the biases we humans have had all these years."

"Essentially, data is nothing but a record of human activity and human decisions from the past, and AI is learning from that data, assuming most of it is true; if it is learning from biased data, then it will continue to reinforce those biases."
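The point above can be illustrated with a minimal sketch. The data and the "model" here are hypothetical: a trivial rule learned from past lending decisions that favoured group "A" over group "B" will carry that bias forward, standing in for any model that picks up group membership as a predictive feature.

```python
# Minimal sketch (hypothetical data) of how a model trained on biased
# historical decisions reproduces that bias: past approvals favoured
# group "A" over group "B" even at the same income level.
from collections import defaultdict

history = [
    # (group, income_band, approved)
    ("A", "high", True), ("A", "high", True), ("A", "low", True),
    ("B", "high", False), ("B", "high", True), ("B", "low", False),
]

# "Learn" approval rates per group from the historical record.
counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
for group, _, approved in history:
    counts[group][0] += int(approved)
    counts[group][1] += 1

def predict(group):
    # Approve whenever the group's historical approval rate is >= 50%.
    approved, total = counts[group]
    return approved / total >= 0.5

print(predict("A"))  # True  - past bias carried forward
print(predict("B"))  # False - denied regardless of individual merit
```

Nothing in the rule refers to merit; the model simply replays the historical pattern, which is exactly the reinforcement loop described above.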

AI is not guaranteed to give ethical decisions or answers every time just because it is more accurate. Even we human beings are biased in many ways that we don't generally notice.

"When it comes to AI, we can't just sit back and give it a free pass because human beings are biased," noted Srikanth.

"I do believe that we need to have high standards for AI, because when a machine gets biased, millions of decisions get affected; machine errors scale so much worse than human errors. Hence the responsibility for a machine to get it right is very important," he added.

One way to overcome these challenges is to build responsible and ethical AI systems. 

"AI needs to be responsible, and remember there are so many ethical questions, from trolley problems to abortion, on which humans have different perceptions; there is no one right answer, and teaching these answers to machines is very hard."

"Importantly, sometimes it is not about right vs wrong; it is about what is more important than the other," said Srikanth, delving into the philosophical and ethical conundrums arising from machines making decisions.

"We certainly cannot code human values into AI, but we can make an AI learn how human beings actually make decisions, and what our implicit priorities are in the way we decide things," explained Srikanth.

"Can we make AI observe human behaviours and learn from them so that it can mimic human-like actions? Many systems that can do this exist right now, and that could be one way to go forward."

However, as any follower of AI ethics can attest, these aren't simple black-and-white puzzles to solve.

"Assuming that we can teach AI some of the values we have as humans, the next question is: how do we create the processes that deliver these kinds of systems? The first suggestion is to design these systems the right way, by clearly focusing on what we intend to do; defining the problem matters the most," explained Srikanth, swapping the philosopher's cap for a logician's.

"When you are designing a system, you can design it in a way to guarantee the outcome you want."

According to Srikanth, a parole-recommending algorithm can be designed with a number of outcomes in mind. It can be designed to grant parole to people who will commit fewer crimes in the future, it can be designed to grant parole to people who have the least chance of getting arrested while on parole, or it can be designed to give equal opportunity to all races.
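The choice of design objective can be made concrete with a small sketch. The candidate pool, groups and risk scores below are entirely hypothetical; the point is only that the same pool yields different decisions under different stated objectives.

```python
# Hypothetical sketch: the same candidate pool selected under two
# different design objectives for a parole-style decision system.
candidates = [
    # (name, group, predicted_reoffence_risk)
    ("p1", "X", 0.10), ("p2", "X", 0.20), ("p3", "X", 0.30),
    ("p4", "Y", 0.15), ("p5", "Y", 0.40), ("p6", "Y", 0.50),
]

def lowest_risk(pool, k):
    """Objective 1: grant parole to the k candidates least likely to reoffend."""
    return sorted(pool, key=lambda c: c[2])[:k]

def equal_opportunity(pool, k):
    """Objective 2: grant parole to the lowest-risk candidates per group,
    splitting the k slots equally across groups."""
    groups = sorted({c[1] for c in pool})
    per_group = k // len(groups)
    chosen = []
    for g in groups:
        members = sorted((c for c in pool if c[1] == g), key=lambda c: c[2])
        chosen.extend(members[:per_group])
    return chosen

print([c[0] for c in lowest_risk(candidates, 4)])        # ['p1', 'p4', 'p2', 'p3']
print([c[0] for c in equal_opportunity(candidates, 4)])  # ['p1', 'p2', 'p4', 'p5']
```

Neither objective is "the" right one; which to encode is exactly the design intention that, as the interview notes, has to be identified before implementation.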

"Once we identify the intention, the second part is implementing the design the right way: what data to use and what data should not be used," Srikanth points out.

"And finally, there is the question of accountability. Who should be responsible for what? If a self-driving car gets into an accident, who is responsible? The car company, the user, the programmer, the tyre manufacturer?"

"If we do all these three things right, and we can teach computers to learn our values, we have a sense of responsible AI. Right now, it is a challenge to accomplish, but what we can do is make incremental progress towards it through systems, processes, audit mechanisms and accountability mechanisms."

Another contested talking point around responsible AI is the question of legal and policy frameworks.

"We do need a framework; most of it can be guidelines. If there is a common understanding across the world on something, it will be helpful. Even if it is not law, people or the workforce can hold a company accountable. When it comes to an enforceable framework, it has to be clear and strong laws without any wiggle room."

A good example of this kind of enforceable law, according to Srikanth, is the US Fair Credit Reporting Act, which predates AI systems. Under the act, loan approval cannot discriminate based on age or race. Explainability in US insurance models is another example, as these models must be approved by the state.

"We need to have responsible and ethical AI guidelines, and laws; otherwise, in their absence, people will do what is best for them, which is maximising profit. Fairness doesn't come automatically to corporations," explained Srikanth, sounding more like an ethicist, which is itself a rare thing in the highly competitive corporate world.
