Venkatesh Krishnamoorthy leads on policy issues around personal data protection, privacy, cybersecurity, cloud adoption, and emerging technologies in India. At BSA, one of his key focus areas is to promote cross-border data flows, enable data transfers, and push back against data localization mandates. He has spent a decade in roles at the intersection of business, government, and society.

According to BSA, who are the key stakeholders responsible for ensuring the development of responsible AI models? 

The global nature of our technology ecosystem demands a globally consistent and coordinated policy response to mitigate risks and foster innovation. Policymakers and AI ecosystem stakeholders play a crucial role in achieving this objective by promoting dialogue and establishing a shared vision for a risk-based policy approach.

The advent of AI-enabled tools has prompted questions about how malicious actors might exploit artificial intelligence to exacerbate misinformation, create and spread disinformation, and ultimately undermine trust in institutions. Addressing these risks is a shared responsibility: responsible AI model development relies on several key stakeholders, including policymakers, organisations that develop AI, AI developers, AI deployers, and other actors across the AI value chain.

What, in your opinion, are the most pressing ethical dilemmas that arise due to AI? How can we address them? 

The proliferation of AI across industries is prompting questions about its design and use and the steps that can be taken to account for any potential risks. Risks around bias, discrimination, and fairness are key concerns. Bias in AI can perpetuate or even exacerbate existing social inequalities, leading to unfair treatment of certain groups. For instance, bias in an AI model developed to improve access to credit and housing in historically marginalised communities can instead deepen the very disparities it was meant to reduce.

BSA published a document that sets forth an AI Bias Risk Management Framework that organisations can use to perform impact assessments to identify and mitigate potential risks of bias that may emerge throughout an AI system's lifecycle. Organisations developing and using AI systems can voluntarily leverage the BSA framework to prevent or minimise bias.  
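To make the idea of a bias impact assessment concrete, here is a minimal, purely illustrative sketch — not drawn from BSA's framework — of one metric such an assessment might compute: the demographic parity gap, i.e. the largest difference in favourable-outcome rates between groups affected by a model's decisions. The function name, data, and threshold are all hypothetical.

```python
from collections import defaultdict

def demographic_parity_difference(decisions, groups):
    """Largest gap in favourable-outcome rate between any two groups.

    decisions: list of 0/1 model outcomes (e.g. 1 = loan approved)
    groups:    list of group labels, aligned with decisions
    """
    totals = defaultdict(int)
    favourable = defaultdict(int)
    for d, g in zip(decisions, groups):
        totals[g] += 1
        favourable[g] += d
    rates = {g: favourable[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Hypothetical audit data: approval outcomes for two demographic groups.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_difference(decisions, groups)
print(f"demographic parity gap: {gap:.2f}")  # flag for review if above a chosen threshold
```

In practice such a metric would be one input among many in an impact assessment, alongside qualitative review of training data, intended use, and deployment context.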

Addressing risks requires a comprehensive strategy that includes risk evaluation in the design phase of AI systems, rigorous testing during development, and continuous monitoring after deployment to address emerging risks. These assessments help document key design and deployment decisions, fostering transparency and accountability. By integrating responsible AI practices, companies can build trust, enhance governance, and ensure that AI technologies are developed and used in a manner that is ethical, accountable, and aligned with societal values.

At what stage should we integrate responsible AI notions into the developmental process? 

AI-enabled software is helping businesses in every sector leverage the value of data to drive digital transformation. From manufacturers deploying AI to design innovative products to small companies relying on automated translation capabilities to grow their customer base, AI creates new opportunities to solve complex everyday challenges. Hence, it is critical to integrate responsible AI notions at all stages of the AI lifecycle: design, development, and deployment.

During the design phase, it is crucial to embed ethical considerations and risk management strategies into the AI systems' architecture to ensure they align with societal values and legal standards. This proactive approach helps mitigate potential biases and ethical dilemmas before they materialise. In the development phase, companies must implement rigorous testing and validation processes, including formal policies and executive oversight, to ensure the AI operates as intended and adheres to defined ethical guidelines. Finally, in the deployment phase, continuous monitoring and evaluation are necessary to address emerging risks and ensure ongoing compliance with ethical standards. By integrating responsible AI practices at each stage, companies can build trust, enhance governance, and support the development of AI technologies that are both innovative and socially responsible. 

How are companies upholding the notions of responsible AI in developed models? 

BSA members are at the forefront of the responsible development of AI, providing trusted software solutions that enable organisations to harness the power of AI in critical areas such as health care, defence and infrastructure, and education. BSA members are implementing a variety of strategies and frameworks that ensure ethical and accountable AI development and deployment. To manage AI risks effectively, our member companies are adopting robust AI governance measures. These include implementing comprehensive risk management programs that identify and mitigate potential risks, assigning clear roles and responsibilities, establishing formal policies, and conducting impact assessments for high-risk AI applications.  

Transparency is another critical component of responsible AI. BSA advocates for the use of watermarks or other disclosure methods to help consumers distinguish between human-generated and AI-generated content, which is essential in combating misinformation. Additionally, BSA supports the Content Authenticity Initiative (CAI), which promotes an open standard for content provenance and authenticity. This initiative aims to enhance transparency by providing secure, indelible provenance for digital content, helping consumers assess the trustworthiness of the content they encounter. In data privacy, we encourage companies to adopt comprehensive consumer privacy laws that protect personal data while allowing for its legitimate use in AI applications.  

Through these multifaceted efforts, companies are not only enhancing the capabilities and competitiveness of their AI systems but also ensuring that these technologies are developed and used ethically, responsibly, and in accordance with societal values. 

Can you explain the importance of nations working together to promote multistakeholder dialogue and develop a shared vision for creating trustworthy AI policy solutions? 

The global policy landscape around AI is now beginning to take shape, and it is crucial for policymakers around the world to harmonise their regulatory approaches to AI. The global nature of today's technology ecosystem demands coordinated policy responses to foster innovation. Countries should work together to promote multistakeholder dialogue and develop a shared vision for a risk-based policy approach to addressing common AI challenges and advancing norms around responsible AI governance. Global partners should also agree on common AI terminology and taxonomy, including building on ongoing work in the EU-US Trade and Technology Council.

What is the AI Policy Solutions Framework developed by BSA? How can it help government and companies learn more about transparency, corporate governance, and globally harmonised frameworks and policies in AI? 

BSA's AI Policy Solutions is a comprehensive approach to ensuring the responsible design, development, and deployment of AI. The framework includes principles for policymakers to best address risks, along with suggestions for developers and deployers of AI systems. It supports creating policies and practices that promote AI transparency, accountability, and innovation, addressing critical issues such as corporate governance, privacy protection, and global harmonisation.

From financial services to healthcare, AI is increasingly leveraged to improve customer experiences, enhance competitiveness, and solve previously intractable problems. BSA's AI Policy Solutions offers a roadmap for lawmakers to set a robust national policy framework with a strong focus on risk assessment throughout the software product lifecycle, which spurs the adoption of AI-related tools that are infused throughout the economy.   

  
