IndiaAI IBD invites Expressions of Interest (EoI) from organizations with expertise in AI governance, risk mitigation, and ethical AI deployment to develop practical tools and frameworks that enhance trust, transparency, and security in AI applications.
IndiaAI has identified five core themes under the Safe and Trusted AI initiative, calling for collaborative proposals to address AI governance challenges effectively.
Ensuring the authenticity and traceability of AI-generated content is critical in combating misinformation and upholding digital trust. Proposed solutions may incorporate advanced algorithms to distinguish AI-generated content from human-created materials, embedding imperceptible but secure markers for content provenance tracking. Furthermore, sophisticated detection mechanisms may be developed to identify synthetic media, preventing the proliferation of harmful and/or illegal materials. Integrating robust testing capabilities will ensure the accuracy and effectiveness of AI authentication mechanisms, thereby enhancing transparency and user trust.
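One possible shape for the "imperceptible but secure markers" described above is a minimal sketch, assuming a least-significant-bit (LSB) watermark over raw pixel bytes. All names here are illustrative; production provenance systems (e.g. signed C2PA-style manifests) are considerably more robust to compression and tampering.

```python
# Illustrative sketch only: hiding a provenance marker in the least
# significant bits of raw pixel bytes (1 bit per byte). Function names
# and the carrier format are hypothetical, not from the EoI.

def embed_marker(pixels: bytes, marker: bytes) -> bytes:
    """Hide `marker` in the LSBs of `pixels`, MSB-first per byte."""
    bits = [(byte >> i) & 1 for byte in marker for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("carrier too small for marker")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the LSB
    return bytes(out)

def extract_marker(pixels: bytes, marker_len: int) -> bytes:
    """Recover `marker_len` bytes from the LSBs of `pixels`."""
    out = bytearray()
    for i in range(marker_len):
        byte = 0
        for carrier_byte in pixels[i * 8:(i + 1) * 8]:
            byte = (byte << 1) | (carrier_byte & 1)
        out.append(byte)
    return bytes(out)

carrier = bytes(range(32))            # stand-in for pixel data
tagged = embed_marker(carrier, b"AI")
assert extract_marker(tagged, 2) == b"AI"
```

Because each carrier byte changes by at most 1, the marker is imperceptible in typical 8-bit image data, while a verifier holding the extraction routine can still recover it.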
Ethical AI frameworks play a crucial role in shaping AI systems that align with fundamental human values, promoting fairness, accountability, and transparency. Organizations are encouraged to develop structured frameworks that adhere to global best practices, such as the IEEE Global Initiative on Ethics of Autonomous Systems and the European Commission’s Ethics Guidelines for Trustworthy AI. The Ethical AI frameworks that are proposed should have the capability to guide developers, policymakers, and enterprises in implementing AI solutions that mitigate biases, avoid discrimination, and uphold ethical decision-making standards.
The widespread adoption of AI necessitates robust risk assessment and management frameworks to identify, evaluate, and mitigate AI-specific risks across various sectors. Proposed tools may include methodologies for systematic risk classification (e.g., high-risk vs. low-risk AI applications), vulnerability management, and risk-scoring mechanisms to assess ethical, legal, and technical risks. By establishing a clear taxonomy and interoperable guidelines, such frameworks will enhance AI system accountability and compliance with national policies and regulations.
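As a sketch of what a risk-classification methodology might look like, the snippet below scores an AI application on weighted factors and maps the result to a coarse high-risk/low-risk tier. The factor names, weights, and threshold are illustrative assumptions, not IndiaAI definitions.

```python
# Hypothetical risk-scoring rubric: weighted factor ratings (0.0-1.0)
# aggregated into a score, then mapped to a coarse risk tier.
WEIGHTS = {"safety_impact": 0.4, "legal_exposure": 0.3,
           "data_sensitivity": 0.2, "autonomy_level": 0.1}

def risk_score(factors: dict[str, float]) -> float:
    """Weighted average of factor ratings, each in [0.0, 1.0]."""
    return sum(WEIGHTS[k] * factors[k] for k in WEIGHTS)

def risk_tier(score: float) -> str:
    """Assumed cutoff of 0.5 between the two tiers."""
    return "high-risk" if score >= 0.5 else "low-risk"

# Example: a hypothetical loan-approval model.
loan_model = {"safety_impact": 0.3, "legal_exposure": 0.9,
              "data_sensitivity": 0.8, "autonomy_level": 0.6}
print(risk_tier(risk_score(loan_model)))  # high-risk under these weights
```

A real framework would need a documented taxonomy behind each factor and calibrated thresholds, but the same score-then-tier structure supports the interoperable, auditable classification the theme asks for.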
AI systems, particularly those integrated into critical infrastructure and essential services, must be resilient against high-stress scenarios such as cyberattacks, data breaches, and natural disasters. Stress-testing tools should incorporate simulation-based evaluations and stress metrics to assess AI robustness under adverse conditions. These tools must provide actionable insight into system vulnerabilities, enabling proactive risk mitigation strategies that keep AI systems operational and secure in high-stakes situations.
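The simulation-based evaluation idea can be sketched as a tiny harness that perturbs a model's inputs with increasing noise and records how accuracy degrades. The toy threshold "model" and the Gaussian noise schedule are assumptions for illustration; real stress tools would also simulate outages, adversarial inputs, and load spikes.

```python
# Illustrative stress-test harness: measure accuracy of a stand-in
# model as input noise grows. Names and the noise schedule are
# hypothetical, not part of the EoI.
import random

def model(x: float) -> int:
    """Stand-in for a deployed classifier: threshold at 0.5."""
    return 1 if x > 0.5 else 0

def stress_curve(samples, labels, noise_levels, seed=0):
    """Map each noise level to accuracy under Gaussian perturbation."""
    rng = random.Random(seed)
    curve = {}
    for sigma in noise_levels:
        hits = sum(model(x + rng.gauss(0, sigma)) == y
                   for x, y in zip(samples, labels))
        curve[sigma] = hits / len(samples)
    return curve

xs = [i / 99 for i in range(100)]
ys = [model(x) for x in xs]
for sigma, acc in stress_curve(xs, ys, [0.0, 0.1, 0.3]).items():
    print(f"noise={sigma:.1f}  accuracy={acc:.2f}")
```

The resulting curve is exactly the kind of "stress metric" the theme mentions: a quantified view of how quickly performance collapses, pointing to where hardening effort should go.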
The proliferation of deepfake technology presents a significant challenge in curbing misinformation and digital fraud. To counter this, AI-driven deepfake detection solutions should leverage deep learning algorithms, provenance-based authentication, and inference-based analysis to identify manipulated media. Proposed tools should integrate seamlessly into web browsers, social media platforms, and other relevant channels to enable real-time content verification, thereby preserving the integrity of India's digital information ecosystem.
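The provenance-based authentication route mentioned above can be sketched minimally: a publisher signs media bytes, and a verifier (for instance, a browser extension) checks the tag before trusting the content. The shared-key HMAC below is a deliberate simplification; deployed systems would use public-key signatures over signed manifests rather than a shared secret.

```python
# Minimal provenance-authentication sketch using Python's stdlib hmac.
# The key, function names, and media bytes are illustrative only.
import hashlib
import hmac

KEY = b"publisher-secret"  # hypothetical signing key (shared-key simplification)

def sign_media(media: bytes) -> str:
    """Publisher side: produce an authentication tag for the media bytes."""
    return hmac.new(KEY, media, hashlib.sha256).hexdigest()

def verify_media(media: bytes, tag: str) -> bool:
    """Verifier side: constant-time check that the tag matches the bytes."""
    return hmac.compare_digest(sign_media(media), tag)

frame = b"\x89PNG...original pixels"
tag = sign_media(frame)
assert verify_media(frame, tag)                  # authentic content passes
assert not verify_media(frame + b"tamper", tag)  # manipulated content fails
```

Any pixel-level manipulation invalidates the tag, so a real-time verifier can flag altered media without needing to model the manipulation itself.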
The deadline for application submission has been extended to 28th February 2025, giving organizations, companies, and institutions a fresh opportunity to participate. Selected entities will co-develop innovative tools and frameworks with collaborative partners, unlocking new possibilities in AI innovation.
Submit Your EoI Here
For any questions, reach out at: fellow2.gpai-india@meity.gov.in
Organizations with expertise in AI governance, risk management, and ethical AI practices are invited to contribute by submitting proposals that align with the aforementioned Safe and Trusted AI themes.