Diversity and inclusivity may be oft-used terms, but their implementation is far from easy, more so in a digital world that leans increasingly on new-age technologies like machine learning (ML) and artificial intelligence (AI). While the adoption of tech has made life much simpler, it has also brought with it a host of challenges that demand a strategic approach. That includes making sure that applications dependent on AI and other such technologies do not spread any form of bias or alienate certain communities, so that the experience is positive for all.

That’s exactly what Tulsee Doshi, Product Lead for Google’s ML Fairness and Responsible AI effort, brings forth with her experience. Her consistent efforts to ensure that each product fits the requirements of the individual, without perpetuating any biases or stereotypes, have been applauded the world over.

In an exclusive chat with INDIAai's Content Lead Jibu Elias, Doshi spills the beans about her AI journey, deconstructs fair ML, and discusses the challenges that surface while ensuring diversity and inclusivity.

An illustrious AI journey

Doshi was always inclined towards STEM, and spent a large chunk of her formative years engaging in academic and extracurricular pursuits in the space. It was her undying passion and drive to make a difference that took her to Stanford University to learn more about the field from a highly skilled faculty.

“I realised I loved computer science, but I also wanted to learn many more things. That’s why I ended up majoring in a course called Symbolic Systems. It is a unique program for undergraduates and graduates that integrates knowledge from diverse fields of study. I studied linguistics, philosophy and psychology, and how computer science influences other fields. I learnt all about how individuals interact with technology, and how technology affects the way we build relationships with each other and with technology itself. What are the ethical implications of the technology we are building? How do we think about decision-making, and what role does technology play in that?” she says.

The course also teaches students about societal constructs, says Doshi. She believes that a combination of technical skills and a broader societal understanding is the need of the hour, a combination she also deploys in her current role at Google.

“I had a background in AI, and I studied machine learning in my undergraduate and master’s courses. But when I joined Google as a product manager, I hadn’t really thought about how I should utilise AI and build that skill set. All that changed when I became a product manager for the YouTube recommendation system four years ago. I started thinking about ways to build a great user experience for recommendations. What does it mean to show users recommendations that are valuable to them?”

Diving deep into responsible AI and fair ML

Doshi deconstructs the meaning of fair ML: “I think it boils down to the fact that you want ML to work for everyone. It can manifest in different ways, but it’s really about preventing harm and providing experiences that support diverse needs. Sometimes this can be providing equal opportunities to different users. At times, it’s about preventing stereotypes. It really depends on the product use case, but if you really focus on how we build ML for everyone, then that ends up becoming fair ML.”
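
Doshi’s mention of “providing equal opportunities” echoes a well-known fairness criterion from the ML literature, equality of opportunity, which asks whether a classifier’s true positive rate is the same across user groups. The sketch below is purely illustrative and is not a description of Google’s internal tooling; it assumes binary labels, binary predictions, and a known group attribute for each user:

```python
from collections import defaultdict

def true_positive_rates(y_true, y_pred, groups):
    """Per-group true positive rate: P(pred=1 | label=1, group)."""
    hits = defaultdict(int)       # correctly flagged positives per group
    positives = defaultdict(int)  # actual positives per group
    for label, pred, group in zip(y_true, y_pred, groups):
        if label == 1:
            positives[group] += 1
            if pred == 1:
                hits[group] += 1
    return {g: hits[g] / positives[g] for g in positives}

def equal_opportunity_gap(y_true, y_pred, groups):
    """Largest TPR difference between any two groups; 0 means equal opportunity."""
    rates = true_positive_rates(y_true, y_pred, groups)
    return max(rates.values()) - min(rates.values())

# Toy data: the model misses more qualified users in group "b".
y_true = [1, 1, 0, 1, 1, 0, 1, 1]
y_pred = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(equal_opportunity_gap(y_true, y_pred, groups))  # ~0.67: group "b" is underserved
```

A gap near zero means qualified users in every group are served equally well by this particular metric; as Doshi notes, different product use cases may call for different notions of fairness altogether.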

She sheds light on the many challenges that exist in the present ecosystem.

“Today, I lead this team that’s focused on responsible AI and ML fairness, and I handle the product efforts there. What I do is bridge the gap between a lot of amazing work that’s happening on the research side and the policy conversations around responsible AI. One thing I see in any of the products I work with is how important it is to apply research in the right context for the product. We learn a lot about responsible AI from ongoing research, but building a complete user experience requires building on top of those research foundations. We need to understand what kinds of effects different approaches have on the product and experience. We work directly with products across the world to understand how we can build in fairness and transparency, and ensure our product is doing the best it can,” explains Doshi.

Consequences of unfair systems

“I think the consequences can be huge, and it really depends on the system. Let’s take an example: risk assessment systems are being used in the justice system, where algorithms and AI systems determine whether someone should get bail or be released from jail. Those systems can have a direct impact on your life; they can have a direct impact on your ability to work, and they can have a direct impact on what the next 10 years of your life could look like. They can literally determine the course and direction of your future,” says Doshi.

In the case of consumer products, it’s about the quality of the experience and the potential risks and harms of each product, feels Doshi.

“Are users able to use the product in the same way other users are? Are they able to get value from the product in the way other users are? Are their needs met by the product? What is the product representing and perpetuating in terms of stereotypes or harm?” she says.

Fair ML is critical, feels Doshi, because if a system perpetuates a bias against a particular set of users, it gets internalised in no time. Unfortunately, this creates a cycle of unconscious bias, which might not have a direct impact in the short term, but has long-term repercussions on the functioning of society.

“I think the impact is obvious at times, and sometimes it is hard to see, but it is equally or even more impactful for certain communities,” she adds.

Addressing pre-existing challenges in startups

Startups have a similar responsibility, believes Doshi. 

“Even if it’s a small company, who are the people who represent it? What are the viewpoints represented in that company?

“No matter the size of the organisation, it’s important to identify potential problems early on and address them immediately. The other thing is to seek external expertise. Even if you’re a small organisation, there are experts, maybe not in machine learning but in civil rights, human rights, fairness and justice, whom you can consult to identify where things can go well and where they can be potentially harmful,” she asserts, adding that the way forward is to build training resources that address common pitfalls and concerns.

In many startups, it’s all about “moving fast and breaking things”, feels Doshi, but it is precisely at that initial stage that they need to be more mindful.

“Most startups want to launch quickly; they want to get their product out in the world and let people start using it. For them, it’s about figuring out the responsible stuff later. The more we can push for intentionality in that development process, the better it will be. Let’s be careful from the start: what is the impact going to be, and who are we going to affect? I think that is the kind of critical shift we want to make from a cultural perspective, and I think that will drive all of this, to begin with,” she adds.

Establishing a legal framework

“I do believe in regulations, and I feel legal frameworks will help here. There are hundreds of startups and companies, and even within a company, you can’t expect everyone to have the same ethical framework or the same thinking about the world. I do think regulations and a systematised understanding can push one to action; they can push organisations to do the right work, and also make it easier across organisations to have consistency and understanding. I will say the big caveat is that the regulation needs to be done right. If we try to take one big hammer and say we’re going to create some general regulation that’s going to apply to all ML systems, I don’t think we are going to do it right. I think we are going to come out with problems on the other side. Different ML systems have different concerns, and the regulation may or may not tackle them. We may still end up harming users without realising it or doing anything about it. Different industries need different kinds of regulations, different types of systems need different types of regulations, and it’s about figuring out how legal frameworks can be nuanced enough to address these concerns,” shares Doshi.

Women in tech

While there’s an increasing number of women taking on the mantle in tech fields, there’s still a long way to go. Doshi has a motivational message for all of them: “Tech needs you now more than ever. This is our time to really bring perspective and diversity of experience.”

Doshi also believes that more women will be inclined towards tech if school systems start introducing important concepts early on. Recalling her school years, she says she didn’t experience computer science until high school.

“I think the more we can introduce tech concepts early on, in computer science and the ethics of technology, the bigger the push to make more people comfortable,” she remarks.

The last word

There’s a growing concern that moral norms and ethical frameworks might hamper innovation, but Doshi feels it’s completely dependent on what innovation means to people. 

“One thing that I see is an increasing number of people pushing for innovation at the expense of particular communities. If you say, ‘I will make this product work for a certain set of users’, that is innovation. I think we sometimes talk about responsible AI as a constraint that will slow down progress, rather than something that is innovative in itself. Most people don’t understand that it is actually developing an experience that is good for a set of users,” she explains.

The key is to strike a balance when developing a product, but there are certain trade-offs one shouldn’t make under any circumstances, says Doshi.

“Sometimes, it means building a product in a certain way that works for certain kinds of users, and then the expansion keeps happening. I think we can still be innovative; I don’t think we always have to take the huge hammer for every problem. We just have to be thoughtful about the process; that still allows you plenty of innovation and time,” she concludes.
