Rajiv Avacharmal is a Corporate Vice President at the Center for Data Science and Artificial Intelligence (CDSAi) at New York Life Insurance Company. He has developed and implemented various AI model risk frameworks.

Rajiv was a founding member of the Model Risk team at MUFG Union Bank, where he spearheaded an enterprise-wide initiative to identify and govern 60+ AI/ML models, significantly enhancing model transparency, compliance, and performance.

INDIAai interviewed Rajiv Avacharmal to get his perspective on AI.

Can you provide an overview of AI and Model Risk Management roles within financial services?

Financial institutions have increasingly adopted AI and machine learning models in recent years to improve their services, streamline operations, and make data-driven decisions. However, using these sophisticated models also introduces new risks and challenges that must be carefully managed. This is where the role of AI and Model Risk Management comes in.

As an AI and Model Risk Management professional in the financial services industry, my primary responsibility is to ensure that the organization's AI and machine learning models are developed, deployed, and monitored responsibly and competently. This involves working closely with cross-functional teams, including data scientists, business stakeholders, and compliance experts, to establish and maintain a robust AI and model risk management governance framework. By effectively managing the risks associated with AI and machine learning models, AI and Model Risk Management professionals play a critical role in ensuring that financial institutions can leverage these technologies to drive innovation and improve services while maintaining the trust and confidence of their customers and stakeholders.

As an experienced AI and Model Risk Management professional, could you elaborate on the key goals and strategies for ensuring responsible AI practices within organizations?

From my experience in AI and Model Risk Management, the key goals for ensuring responsible and effective AI practices within organizations include:

  • Mitigating potential biases and errors.
  • Enhancing model interpretability and explainability.
  • Safeguarding data privacy and security.
  • Establishing clear lines of accountability.

To achieve these goals, organizations should adopt strategies such as developing comprehensive AI governance frameworks, involving diverse stakeholders in the AI development process, providing training and education on AI best practices, conducting thorough testing and validation of AI systems, fostering a culture of transparency and open communication, and collaborating with industry peers and regulators. By prioritizing these goals and strategies, organizations can build robust and reliable AI systems, mitigate potential risks, and maximize AI's benefits while maintaining their stakeholders' trust.

How do you assess and mitigate the risks associated with AI and GenAI models, particularly in insurance and financial services?

Assessing and mitigating risks in AI and GenAI models involves a comprehensive approach that encompasses several key measures:

  • Rigorous Validation: The foundation of risk mitigation begins with rigorous validation of AI models, employing advanced statistical techniques and simulations to ensure models perform reliably under diverse and unpredictable conditions. For GenAI, which often involves novel data generation, it’s crucial to validate the authenticity and accuracy of outputs, safeguarding against potential misrepresentations or errors.
  • Continuous Monitoring: To adapt to the dynamic nature of financial markets and evolving data inputs, continuous monitoring systems must be implemented. These systems are designed to track model performance in real time and quickly identify deviations from expected patterns, allowing immediate corrective actions (a simplified drift-monitoring sketch appears after this answer).
  • Bias Detection and Correction: Fairness in AI applications is critical, particularly in the financial services and banking sectors, where decisions impact real lives. Comprehensive bias detection and correction mechanisms should be integrated throughout the model development and deployment phases, utilizing diversified data sets and algorithmic fairness techniques to produce equitable outcomes (a simple fairness check is sketched after this answer).
  • Emphasizing Explainability: Transparency in AI decision-making processes is not just a regulatory requirement but a necessity for trust and accountability. Explainable AI techniques that illuminate how decisions are made should be promoted wherever possible, with a clear understanding of the trade-offs involved, making complex models more accessible and understandable to all stakeholders.
  • Regulatory Compliance: Staying abreast of and compliant with regulatory requirements is essential. This involves proactive engagement with regulatory developments to ensure that all deployed models adhere to current laws and ethical standards and are prepared for upcoming changes.
  • Collaborative Risk Management: Effective risk management in AI requires collaboration across disciplines, bringing together data scientists, IT security, compliance teams, and business leaders.

Through these strategies, financial services organizations can mitigate risks and enhance the reliability and effectiveness of AI and GenAI applications, ensuring they serve their intended purpose without unintended consequences.
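
To make the continuous-monitoring point above concrete, here is a minimal sketch (my own illustration, not a prescription from the interview) of a drift check based on the population stability index (PSI). The bucket count, the synthetic score distributions, and the 0.10/0.25 thresholds are illustrative assumptions; real limits are set by each institution's model risk policy.

```python
import numpy as np

def population_stability_index(expected, actual, n_buckets=10):
    """Compare the baseline (development) score distribution with recent
    production scores; a larger PSI indicates a larger shift (drift)."""
    # bucket edges come from the baseline distribution
    edges = np.percentile(expected, np.linspace(0, 100, n_buckets + 1))
    actual = np.clip(actual, edges[0], edges[-1])  # keep scores inside the baseline range

    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)

    # guard against empty buckets before taking logs
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

# hypothetical score samples: development-time baseline vs. recent production
baseline = np.random.default_rng(0).beta(2.0, 5.0, size=10_000)
production = np.random.default_rng(1).beta(2.5, 5.0, size=5_000)

psi = population_stability_index(baseline, production)
if psi > 0.25:
    print(f"PSI={psi:.3f}: significant drift, escalate for model review")
elif psi > 0.10:
    print(f"PSI={psi:.3f}: moderate drift, monitor closely")
else:
    print(f"PSI={psi:.3f}: distribution stable")
```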
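Similarly, as a hedged illustration of the bias-detection point, the following sketch computes a demographic parity difference, i.e., the gap in positive-decision rates across groups. The decisions, group labels, and the 0.10 tolerance are purely hypothetical; in practice this would be one of several fairness metrics reviewed during validation.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Largest gap in positive-outcome (e.g., approval) rate across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

# hypothetical model decisions and group membership
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group  = np.array(["A", "A", "A", "B", "B", "B", "B", "A", "A", "B"])

gap = demographic_parity_difference(y_pred, group)
print(f"approval-rate gap across groups: {gap:.2f}")
# a gap above an agreed tolerance (e.g., 0.10) would prompt deeper review,
# re-weighting of training data, or other mitigation before deployment
```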

What are some common challenges in implementing AI risk assessment methodologies, and how can they be addressed?

Some common challenges include data privacy concerns, the limited interpretability of complex models, and the need for specialized skills and resources. These can be addressed by adopting privacy-preserving techniques such as federated learning (sketched below), investing in explainable AI research, and building diverse teams with AI and risk management expertise. Collaboration with industry partners and staying up to date with the latest research can also help overcome these challenges.
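
As a rough illustration of the federated-learning idea mentioned above, the sketch below simulates federated averaging for a simple logistic-regression model, where institutions share only model weights rather than raw customer records. The client data, learning rate, and number of rounds are hypothetical assumptions; this is a conceptual sketch, not a production recipe.

```python
import numpy as np

def local_update(global_w, X, y, lr=0.1, epochs=20):
    """One client's training on its own private data; only weights leave the client."""
    w = global_w.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))   # logistic-regression predictions
        grad = X.T @ (preds - y) / len(y)      # gradient of the log loss
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """Federated averaging: weight each client's update by its sample count."""
    updates = [local_update(global_w, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(updates, axis=0, weights=sizes)

# two hypothetical institutions with private data sets that are never pooled
rng = np.random.default_rng(42)
clients = []
for n in (200, 120):
    X = rng.normal(size=(n, 3))
    y = (X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.5, size=n) > 0).astype(float)
    clients.append((X, y))

w = np.zeros(3)
for _ in range(10):            # ten communication rounds between clients and server
    w = federated_round(w, clients)
print("federated model weights:", np.round(w, 3))
```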

How can organizations ensure compliance with regulatory standards and industry best practices when developing and deploying AI solutions?

Ensuring compliance requires a proactive approach that involves staying informed about the latest regulatory developments, conducting regular audits and assessments, and maintaining detailed documentation of AI systems and processes. Engaging with regulators, participating in industry forums, and leveraging compliance tools and frameworks can help organizations navigate the complex regulatory landscape and align with best practices.

What qualities are essential for team members in AI governance and risk management?

Some essential qualities include strong analytical and problem-solving skills, a deep understanding of AI technologies and their implications, excellent communication and collaboration abilities, and a commitment to responsible AI and risk management practices. A diverse team with expertise in domains such as data science, risk management, legal, and compliance is also essential.

Looking ahead, what do you envision as the future of AI and data science within the insurance and financial services sector?

The future of AI and data science in financial services is inspiring. It can transform the industry through personalized products, improved risk assessment, and enhanced customer experiences. However, this innovation must be balanced with responsible AI practices and robust governance frameworks to ensure fair and transparent use of these technologies. Collaboration between industry, academia, and regulators will be key to shaping the future of AI in insurance.

What advice would you give to organizations looking to establish robust AI governance frameworks and risk management protocols?

I recommend clearly understanding the organization's AI goals and associated risks. Engage with organizational stakeholders to build consensus and buy-in for AI risk management practices. Invest in building a strong foundation of AI governance and risk management skills within the team. Continuously monitor and assess the performance of AI systems, and be prepared to adapt and evolve the governance framework as needed. Finally, foster a culture of transparency, accountability, and collaboration to ensure the successful implementation of robust AI practices.
