Infosys has been selected as an inaugural member of the AI Safety Institute Consortium, created by the National Institute of Standards and Technology (NIST), an agency of the United States Department of Commerce. Alongside the consortium's other members, Infosys will help develop guidelines and standards for AI security and responsible AI adoption.
The consortium brings together over 200 organisations to develop science-based and empirically backed guidelines and standards for AI measurement and policy, laying the foundation for AI safety across the world.
The consortium aims to create a collaborative environment that enables the development and responsible use of safe and trustworthy AI, focusing on advanced AI systems such as the most capable foundation models.
Infosys was invited to the consortium based on its pioneering work in AI security and in operationalising AI governance under the overarching umbrella of the Infosys Topaz Responsible AI Suite - a set of more than ten offerings built around the Scan, Shield, and Steer framework. The framework monitors and protects AI models and systems from risks and threats while enabling businesses to apply AI responsibly.
The offerings across the framework combine accelerators and solutions designed to drive responsible AI adoption across enterprises. As part of the consortium, the company will bring its deep expertise, capabilities, and point of view to the table to support the safe development and deployment of generative AI.