Artificial intelligence (AI) seems to present solutions to many challenges across different domains. However, there is now a widespread understanding of the range of potential risks and harms to people and the planet that AI can produce if conceived, designed, and governed in irresponsible ways. In response to this, many proposals, frameworks and laws have been advanced for the responsible development and use of AI systems. In tandem, more and more AI ‘solutions’ are emerging around the world, which attempt to contribute to the public good, whilst upholding best-practice standards of responsibility.

From January to October 2023, the Global Partnership on Artificial Intelligence (GPAI)’s Working Group on Responsible AI undertook the Scaling Responsible AI Solutions (SRAIS) project and published a report on its findings. The project matched teams from five countries, each at a different stage of maturity in developing an AI solution, with experts and specialists in responsible AI (RAI). The teams received tailored mentoring aimed at supporting them both to integrate best-practice standards of RAI and to scale their AI solutions.

Teams gained an opportunity to collaborate with GPAI experts, draw on insights from various perspectives, define and adopt RAI performance metrics, and showcase their results. The overall objective of the project was to produce tangible outcomes towards scaling RAI.

At the GPAI Summit 2023, organized by the Ministry of Electronics and Information Technology (MeitY), the teams presented the challenges they faced and the solutions they developed during the project. The teams are:

  • Wysdom Smart AI Analytics Tools — Team from Canada
  • COMPREHENSIV: A Digital Platform for at Home Universal Primary Health Care, Data Life Cycle Management Challenges and Strategies — Team from India
  • Jalisco’s AI Forest Mapping System — Team from Mexico
  • Particip.ai One — Team from Germany
  • ergoCub: Wearables and Robotics for Assessment, Prediction and Reduction of Biomechanical Risk in the Workplace — Team from Italy

The objective of the GPAI experts was to advise, support and learn from teams across the world that have developed an AI solution and want to scale it in a responsible way.

Wysdom Smart AI Analytics Tools — Team from Canada

Wysdom is a Canadian company offering chatbot analytics to clients. Wysdom.ai uses AI to evaluate intent-based conversational bots, gaining insights into their performance and behavior and addressing challenges with them in order to improve user experiences. Wysdom’s analytics focus on helping companies optimize conversational flow and bot responsiveness.

The GPAI mentors agreed that it would be important for Wysdom to think much more systematically about how their application may intersect with RAI considerations. However, the mentors also noted at the outset of the process that, in doing so, Wysdom is well positioned to take advantage of a significant opportunity: to assist companies with the responsible adoption of generative AI and to contribute to the dissemination and implementation of RAI principles. They identified this as a valuable focus for the mentoring programme. As such, the mentors and the Wysdom team agreed that the key focus of their participation in the SRAIS project should be to guide the use of the Wysdom chatbot analytics system for the identification and mitigation of RAI risks.
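The report does not describe Wysdom’s proprietary analytics in technical detail. Purely as an illustration of the kind of RAI risk identification discussed here, the hypothetical Python sketch below scans chatbot transcripts for two example risks (leaked personal data and repeated intent-matching failures); all patterns, thresholds and field names are invented for the example.

```python
import re
from dataclasses import dataclass, field

# Hypothetical illustration only: Wysdom's actual analytics are proprietary.
# Flags two example RAI risks in chatbot transcripts: PII surfacing in logs,
# and conversations where the bot repeatedly failed to resolve the user's intent.

EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
PHONE_RE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")
FALLBACKS = {"sorry, i didn't understand", "can you rephrase that"}

@dataclass
class RiskReport:
    pii_turns: list = field(default_factory=list)   # indices of turns containing PII
    unresolved: bool = False                          # bot repeatedly missed the intent

def screen_transcript(turns: list[dict]) -> RiskReport:
    """turns: [{'speaker': 'user' | 'bot', 'text': str}, ...]"""
    report = RiskReport()
    fallback_count = 0
    for i, turn in enumerate(turns):
        if EMAIL_RE.search(turn["text"]) or PHONE_RE.search(turn["text"]):
            report.pii_turns.append(i)               # personal data in the log
        if turn["speaker"] == "bot" and any(f in turn["text"].lower() for f in FALLBACKS):
            fallback_count += 1                       # bot fallback response
    report.unresolved = fallback_count >= 2           # arbitrary illustrative threshold
    return report

if __name__ == "__main__":
    demo = [
        {"speaker": "user", "text": "My email is jane@example.com, please update my plan"},
        {"speaker": "bot", "text": "Sorry, I didn't understand. Can you rephrase that?"},
        {"speaker": "bot", "text": "Sorry, I didn't understand."},
    ]
    print(screen_transcript(demo))
```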

COMPREHENSIV: At-Home Universal Primary Health Care — Team from India

COMPREHENSIV is a smartphone application designed to be used by trained field personnel to screen for and manage a wide range of early-stage disease conditions in real time, based on images of the conditions along with other relevant images (such as of living situations) that provide context on sociodemographic, economic, WASH (Water, Sanitation, Habitat and Hygiene), nutrition and disability status. It is designed to be easily operated by any trained field worker, including those with low digital literacy.

In preliminary discussions, the GPAI team identified a range of areas to focus on with Hi Rapid Labs with respect to scaling COMPREHENSIV responsibly. It was identified that there should be greater consideration of how the tool complements the work of existing healthcare workers and enhances their capabilities. There was also a sense that the project should clearly highlight the social benefit of its specific uses of AI, and that these uses should be understandable in the settings where the tool is (or will be) used. The use of AI within the tool should also be transparent, and its parameters and outcomes explainable and able to be understood by users, especially those living in resource-constrained settings. Multiple multimedia communication pathways, beyond traditional research publications, were discussed as potential solutions.

Jalisco’s AI Forest Mapping System — Team from Mexico

The western part of Mexico’s Jalisco region is a deforestation hotspot. This project aims to develop AI models that can monitor and detect illegal deforestation as early as possible, enabling authorities to identify the most critical and urgent restoration and conservation needs and to respond more rapidly. The region in question is part of the avocado production belt, and the illegal planting of avocado and other crops in forested areas is a serious challenge for the government, especially given Jalisco’s importance as a highly biodiverse area and its value in terms of carbon stocks.
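The report does not specify the models behind Jalisco’s system. As a minimal, hypothetical sketch of one common approach to early deforestation detection, the snippet below compares a vegetation index (NDVI) between two co-registered satellite scenes and flags pixels with a sharp drop; the threshold and demo arrays are invented for illustration.

```python
import numpy as np

# Hypothetical sketch: flag candidate deforestation pixels by comparing the
# NDVI (Normalised Difference Vegetation Index) of two co-registered scenes.
# NDVI = (NIR - Red) / (NIR + Red); healthy forest scores high, cleared land low.

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    return (nir - red) / np.clip(nir + red, 1e-6, None)

def flag_forest_loss(before: dict, after: dict, drop_threshold: float = 0.3) -> np.ndarray:
    """Return a boolean mask of pixels whose NDVI dropped sharply between dates.

    before/after: {'nir': array, 'red': array} reflectance bands of equal shape.
    drop_threshold: minimum NDVI decrease to flag (illustrative value).
    """
    loss = ndvi(before["nir"], before["red"]) - ndvi(after["nir"], after["red"])
    return loss > drop_threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    shape = (4, 4)
    before = {"nir": rng.uniform(0.5, 0.8, shape), "red": rng.uniform(0.05, 0.15, shape)}
    after = {"nir": before["nir"].copy(), "red": before["red"].copy()}
    after["nir"][0, :2], after["red"][0, :2] = 0.2, 0.25   # simulate a cleared patch
    print(flag_forest_loss(before, after))
```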

During the SRAIS project, the team found that:

  • the robustness of the AI models should be improved;
  • an easy-to-use UI needs to be developed;
  • the interface should be translated into the different languages spoken by the indigenous communities;
  • an adequate complaints and dispute resolution mechanism needs to be included; and
  • the application, reports and dashboard need to be tailored to the requirements of different stakeholders.

Particip.ai One — Team from Germany

Particip.ai One is a telephone voicebot-based participation and feedback platform, intended for use by citizens, employees and consumers to voice their feedback and participate in decision-making processes. The project aims to improve citizens’ ability to participate in government processes and to foster government accountability; to bolster employees’ ability to provide internal feedback within their organizations; and to enable customers to offer feedback and suggestions for the improvement of goods and services.

At the outset of the mentoring process, the team identified scalability challenges as mostly infrastructural, alongside the need to sustain user engagement and reinforce data protection and security protocols. The key challenges they saw related to accommodating a growing user base and user load, as well as handling larger volumes of data and effectively managing and analyzing the increasing volume of feedback. Achieving this at scale requires greater infrastructural and computational capacity. The team also felt that as the tool is scaled to more users, it will be important to find ways of sustaining user engagement, proposing strategies such as personalized experiences, gamification and targeted notifications, though it is important to note that such strategies have also been associated with risks and harms to users in some contexts. Finally, the team pointed to the need to customize the system to the diverse needs of different organizations and communities, which will present an increasing challenge as it is scaled to different contexts.

ergoCub: Wearables and Robotics for Assessment, Prediction and Reduction of Biomechanical Risk in the Workplace — Team from Italy

ergoCub aims to apply embodied AI technologies to prevent musculoskeletal disorders among workers, and for use in healthcare settings. ergoCub notes that musculoskeletal disorders have been identified as the most common occupational disease globally. This team, based in Italy, seeks to intervene in this problem through the development of wearable technologies as well as humanoid robots. The intention is to monitor and predict workers’ physical conditions and provide real-time alerts using sensor-equipped suits, and to enable proactive and preventative risk management, including by delegating certain high-risk tasks to humanoid machines or enabling “collaboration” between workers and robots.
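The report does not detail how ergoCub’s wearables score biomechanical risk. The following hypothetical sketch only illustrates the general pattern described here: aggregating sensor readings from a wearable into a risk score and raising a real-time alert when it crosses a threshold. All field names, weights and thresholds are invented for the example.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical sketch only: field names, weights and thresholds are invented;
# ergoCub's actual risk models are not described at this level in the report.

@dataclass
class SensorWindow:
    trunk_flexion_deg: list[float]   # trunk bend angles from the suit's IMUs
    load_kg: list[float]             # estimated handled load
    lift_rate_per_min: float         # lifting frequency in the window

def risk_score(w: SensorWindow) -> float:
    """Combine posture, load and frequency into a 0-1 score (illustrative weights)."""
    posture = min(mean(w.trunk_flexion_deg) / 90.0, 1.0)
    load = min(mean(w.load_kg) / 25.0, 1.0)
    frequency = min(w.lift_rate_per_min / 10.0, 1.0)
    return 0.4 * posture + 0.4 * load + 0.2 * frequency

def maybe_alert(w: SensorWindow, threshold: float = 0.7) -> str | None:
    """Return a real-time alert message if the window exceeds the risk threshold."""
    score = risk_score(w)
    if score >= threshold:
        return f"High biomechanical risk ({score:.2f}): suggest rest or task rotation"
    return None

if __name__ == "__main__":
    window = SensorWindow(trunk_flexion_deg=[55, 70, 80], load_kg=[18, 20, 22], lift_rate_per_min=8)
    print(maybe_alert(window))
```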

At the outset of the mentoring process, the mentors identified a number of RAI concerns and considerations for ergoCub that went beyond privacy and individual data protection. The mentors suggested that the mentoring process should focus on developing a comprehensive set of tailored RAI indicators to guide the development and scaling of wearables in healthcare and industry, encompassing, inter alia, considerations around proportionality and appropriateness, equity and non-discrimination, informed consent, job quality and decent work, stakeholder participation, consultation and co-design, and surveillance and data governance. It was noted that there was a need to find “common ground” for RAI indicators across different use cases in healthcare settings and industry, given their differences.

Recommendations

In the report, the team presents its key findings along with recommendations for policymakers.

The full report is available on the GPAI website.

