
The pandemic has been a reckoning for Artificial Intelligence (AI). Companies, communities and countries have all invested in massive AI projects, and AI start-ups have attracted record-breaking investment. Needless to say, AI is set to become an intrinsic part of modern life at a grand scale - from healthcare to banking, from education to governance.

That is why industry leader Google has laid out ways to build value-based AI that benefits businesses. It recommends that organisations assess AI systems both when they are performing well and when they are not, to ensure more reliable, secure products. This keeps the focus on building accountable products.

Further, Google suggests that AI systems should be designed to be well-founded, trustworthy and effective, especially for end users, while following general best practices for software systems along with practices that account for the novelty of machine learning (ML).

These are the pearls of wisdom that Google shared for building and sustaining responsible AI systems:

  • Keep a human-centric approach - "The way actual users experience your system is essential to assessing the true impact of its predictions, recommendations, and decisions," says the official blog. Coherence and control are paramount to a good user experience, so creators should keep both in mind while designing.
  • Assure appropriate assistance - When a single answer from a system or model can satisfy a multitude of users, it is appropriate to produce just one answer. That is not the case for most systems, however, so systems should offer users a few options to choose from.
  • Use multiple metrics - To understand the trade-offs between different errors and experiences, use several appropriate metrics that capture the context and goals of the system (a short sketch of this idea follows the list).
  • Examine raw data - Systems reflect the data they are trained on, so it is important to ensure that the right data is used for training. Use accurate data that represents the end users.
  • Acknowledge limitations - A system is only about as effective as its training, so creators should clearly define the scope and coverage of that training. Wherever limitations are encountered, users should be informed about them.
  • Keep improving - For the best testing practices, learn from software engineering and quality engineering to ensure that the AI system is working as intended and can be trusted.
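
To illustrate the point about multiple metrics, here is a minimal Python sketch (not from Google's post; the labels, predictions and user segments are hypothetical placeholders) showing how a classifier might be reported with several metrics and a per-segment breakdown rather than a single score:

    # Minimal sketch: evaluate a classifier with several metrics and a
    # per-segment breakdown. All data below is hypothetical.
    import numpy as np
    from sklearn.metrics import accuracy_score, precision_score, recall_score

    y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])   # ground-truth labels
    y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 0])   # model predictions
    groups = np.array(["a", "a", "a", "b", "b",          # hypothetical user segments
                       "b", "a", "b", "a", "b"])

    # No single number captures every trade-off, so report several metrics together.
    print("accuracy :", accuracy_score(y_true, y_pred))
    print("precision:", precision_score(y_true, y_pred))
    print("recall   :", recall_score(y_true, y_pred))

    # Per-segment breakdown: check that quality holds across the user groups
    # the system is meant to serve, not just on average.
    for g in np.unique(groups):
        mask = groups == g
        print(f"group {g}: accuracy = {accuracy_score(y_true[mask], y_pred[mask]):.2f}")

Reporting metrics per user segment, as in the loop above, is one simple way to surface the kind of error and experience trade-offs the guidance refers to.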
