Today, AI is embedded in every walk of life. In business, governance, medicine, and beyond, AI helps people and organizations augment human intelligence. However, the need is not just to develop and advance AI technology but also to share best practices for its responsible development, deployment, and use.

Since AI does not produce inherently moral machines, AI algorithms and systems can amplify bias and discrimination around gender, race, and political or economic leanings, despite their creators’ good intentions. Further concerns include the possible autonomy of machines over humans and how AI’s burdens and benefits are distributed. Only by embedding ethical principles into AI processes can we build systems that are trustworthy and unbiased.

Over the past few years, the World Economic Forum has been working on a project to advance ethics in AI technology and has embarked on a series of case studies featuring organizations that have made meaningful contributions to the progress of technology ethics. The first case study in this series on responsible innovation featured Microsoft. The second features IBM, which is well underway on its mission to develop ethical AI technology.

IBM has established an AI Ethics Board to discuss, advise on, and guide the ethical development and deployment of AI across the organization. The board guided the creation of IBM’s “Principles for Trust and Transparency” and “Pillars of Trust”.

Three core principles guide IBM’s approach to data and AI:

  • The purpose of AI is to augment human intelligence. AI is often perceived as an adversary to human labour, taking away people’s jobs. However, a harmonious relationship is possible when AI is treated as a tool that helps people accomplish their tasks.
  • Data and insights belong to their creator. Through this principle, IBM keeps ownership of data with the people who created it, even after it has been processed by IBM’s AI. This matters for security, privacy, government access, and cross-border data flows.
  • New technology, including AI systems, must be transparent and explainable. This principle provides assurance that AI works in ways that make sense to people. Companies should be transparent about client-data ownership and address bias proactively.

Built on these principles are IBM’s Pillars of Trust. Each pillar acts as a mid-level principle focusing on the larger picture of building trustworthy AI:

  • Explainability – AI-powered decisions should be accompanied by the explanations and reasoning behind them.
  • Fairness – AI models must not treat different groups, such as genders or races, inequitably.
  • Robustness – AI systems must be robust enough to cope with attacks that target the weaknesses of AI algorithms.
  • Transparency – AI systems must be transparent so that people can understand and evaluate how they work.
  • Privacy – People have the right to have their sensitive data protected and to be notified of when and how their data is being used.

The company backs its commitment to these pillars with open-source toolkits that support each of them. Every toolkit has an extensive website describing its contents and uses, as well as a GitHub repository that showcases the open-source algorithms.
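As an illustration, one of these toolkits, AI Fairness 360 (AIF360), exposes group-fairness metrics through a Python API. The sketch below is a minimal example, not IBM’s prescribed workflow: it assumes AIF360 is installed (pip install aif360) and uses a tiny hypothetical dataset to compute two common fairness metrics.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Tiny hypothetical dataset: "sex" is the protected attribute
# (1 = privileged group) and "label" is the outcome (1 = favourable).
df = pd.DataFrame({
    "sex":   [1, 1, 1, 0, 0, 0],
    "label": [1, 1, 0, 1, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Disparate impact: ratio of favourable-outcome rates
# (unprivileged / privileged); 1.0 indicates parity.
print("Disparate impact:", metric.disparate_impact())

# Statistical parity difference: difference of favourable-outcome
# rates (unprivileged - privileged); 0.0 indicates parity.
print("Statistical parity difference:",
      metric.statistical_parity_difference())
```

In this toy data the privileged group receives favourable outcomes twice as often, so the script reports a disparate impact of 0.5, flagging exactly the kind of unequal treatment the Fairness pillar targets.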

The GitHub repositories clearly demonstrate that IBM is committed not just to developing tools for itself but to helping the entire industry adopt trustworthy and responsible AI. They have an active community of followers: over 1,300 people have already forked the code for their own work, and thousands of others have starred it.
