Like any technology, AI has both positive and negative sides. Researchers, analysts and developers worldwide are working out how it can be used for good. It was in this context that NASSCOM formulated the Responsible AI Resource Kit. Responsible AI brings many AI practices together and makes them more reasonable and trustworthy.

The Resource Kit is the culmination of a collaboration between NASSCOM and leading industry partners to seed the adoption of responsible AI at scale. The Resource Kit comprises sector-agnostic tools and guidance to enable businesses to leverage AI to grow and scale confidently by prioritizing user trust and safety. 

“With NASSCOM, we brought together an industry-agnostic framework that anybody can leverage to develop responsible AI practices”, said Akbar Mohammed, Architect, Fractal Analytics, a leading contributor to the development of the Responsible AI Resource Kit, in a conversation with INDIAai.

Developing the Resource Kit 

The formulation of the Resource Kit was a long process. Multiple points of view were analyzed to frame the responsible AI principles. The major challenge was making the Resource Kit adaptable to every industry.

To formulate the Resource Kit successfully, the developers had to think from a practitioner's perspective rather than through an organizational lens. “India is the first nation to think about practicing responsible AI”, said Sagar Shah, Client Partner, Fractal Analytics. “In the coming months, countries worldwide, including the United States, will introduce new bills. Indian developers and organizations will be affected by them. We are trying to make organizations aware of this possible transformation”.

Being responsible 

Being consciously responsible is a challenging task. Even when leading organizations want to be more responsible, the analysts and developers doing the day-to-day work might not be diligent about the idea. According to Sagar, to ensure the responsible practice of AI, they defined behaviors that express the responsible AI principles.

“For instance, human centricity is a behavior which ensures that humans are thought about first. Another is explainability, which ensures that nothing is hidden in a black box”, says Sagar.

The Responsible AI Resource Kit enables these behaviors. The kit includes a guidebook that NASSCOM has launched, along with training courses that will be introduced soon.

The Resource Kit also emphasizes the benefits of adopting responsible AI rather than dwelling on the harms that could occur if it is not adopted. The developers hope to see these principles and practices taken up across thousands of companies in India in the coming months.

A joint venture 

In the long run, governments and organizations will have to adopt responsible AI practices, and industry partners will have to join hands and support each other to get there. Post-implementation, the results might also differ from initial expectations and call for amendments. The Resource Kit will therefore be continuously enhanced.

According to Fractal's contributors, it is essential to understand that, in the future, companies could fail because they neglected, or wrongly adopted, responsible AI practices.

