The need for explainability in real-world ML systems has been brought to light through stakeholder interviews and regulatory frameworks.

A radiologist using a machine-learning model that highlights suspected ailments in X-rays needs to know how much weight to assign its recommendations. However, modern machine-learning models are so vast and sophisticated that even the scientists who create them may not fully comprehend how the models produce predictions. As a result, researchers develop techniques known as saliency methods to describe model behaviour.
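As a concrete illustration, below is a minimal sketch of one of the simplest saliency methods, a vanilla gradient map. It assumes a trained PyTorch image classifier (`model`) and a preprocessed input tensor (`image`); both names are placeholders introduced here for illustration, not code from the research described in this article.

```python
import torch

def vanilla_gradient_saliency(model, image, target_class):
    """Per-pixel importance via the gradient of the class score w.r.t. the input."""
    model.eval()
    image = image.detach().clone().requires_grad_(True)  # shape [1, C, H, W]
    score = model(image)[0, target_class]  # logit for the class of interest
    score.backward()                       # d(score) / d(pixel) for every pixel
    # Aggregate absolute gradients over colour channels -> one value per pixel.
    return image.grad.abs().max(dim=1).values.squeeze(0)
```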

Saliency cards

Researchers developed a tool to assist users in selecting the appropriate saliency method for their work: saliency cards, which provide standardised documentation of how a method works, including its strengths and shortcomings, along with explanations that help users interpret its output accurately. Each card summarises a machine-learning saliency technique in terms of 10 user-centric characteristics.
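To make the structure of a card more tangible, here is a hypothetical sketch of a saliency card as a simple data record. The field names and example values are illustrative assumptions introduced here; they are not the paper's exact ten attributes.

```python
from dataclasses import dataclass, field

@dataclass
class SaliencyCard:
    """Hypothetical card layout; field names are illustrative, not the paper's."""
    method: str
    summary: str                                        # how the method works
    strengths: list[str] = field(default_factory=list)
    limitations: list[str] = field(default_factory=list)
    attributes: dict[str, str] = field(default_factory=dict)  # user-centric traits

example_card = SaliencyCard(
    method="Integrated Gradients",
    summary="Averages gradients along a path from a meaningless baseline to the input.",
    strengths=["attributions sum to the change in the model's output"],
    limitations=["results depend on the choice of baseline"],
    attributes={"determinism": "deterministic", "model_access": "requires gradients"},
)
```

A practitioner could then filter or sort a collection of such cards by the characteristics that matter most for their task.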

Experts in artificial intelligence and related fields agreed that the cards help them swiftly compare alternative approaches and select the most appropriate one for a given task. Choosing the right approach paints a clearer picture of the model's behaviour, allowing users to interpret its results more confidently. In short, saliency cards are a framework for describing and comparing saliency methods.

Optimal solution

Saliency methods are a type of machine-learning interpretability technique that calculates how relevant each input feature is to a model's output. Previously, researchers assessed saliency methods using the concept of fidelity: how faithfully a method replicates the model's decision-making process. Choosing the "wrong" method, however, can have serious consequences. For example, one saliency method, integrated gradients, compares the relevance of image features to a meaningless baseline; the features that matter most relative to that baseline are the ones most important to the model's prediction.
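Below is a minimal sketch of integrated gradients under the same assumptions as before (a PyTorch classifier `model`, an input tensor `image`, and an all-zero "meaningless" baseline); it is an illustrative implementation, not the authors' code.

```python
import torch

def integrated_gradients(model, image, target_class, steps=50):
    """Attribute the prediction by averaging gradients along a baseline-to-input path."""
    model.eval()
    image = image.detach()
    baseline = torch.zeros_like(image)     # e.g. an all-black, "meaningless" image
    total_grads = torch.zeros_like(image)
    for alpha in torch.linspace(0.0, 1.0, steps):
        # Interpolate between the baseline and the real input.
        point = (baseline + alpha * (image - baseline)).requires_grad_(True)
        score = model(point)[0, target_class]
        score.backward()
        total_grads += point.grad
    # Average path gradient, scaled by how far each feature is from the baseline.
    return (image - baseline) * total_grads / steps
```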

Evaluation

Once they had developed several cards, the team conducted a user study with eight domain experts, ranging from computer scientists to a radiologist who was unfamiliar with machine learning. During interviews, every participant said that the succinct descriptions helped them prioritise attributes and compare techniques. The interviews also revealed a few surprises. For example, researchers frequently assume clinicians want a crisp saliency map that focuses on a specific object in a medical image; however, the clinicians in this study preferred some noise in medical images to help them reduce uncertainty.

Conclusion

Due to the quick pace of development, the researchers found that users struggle to keep up with the strengths and limitations of new methods and, as a result, choose methods for non-principled reasons (e.g., popularity). Furthermore, despite a growing number of evaluation measures, existing evaluations of saliency methods (e.g., fidelity) presuppose universal desiderata that do not account for diverse user needs. In response, the researchers developed saliency cards, which provide transparent documentation of how saliency methods operate and how they perform across a battery of evaluation metrics.
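As one example of a fidelity-style check (an illustrative assumption, not necessarily one of the paper's metrics), a common approach is to mask the most salient pixels of a per-pixel saliency map and measure how much the model's confidence drops; a faithful map should cause a large drop.

```python
import torch

def deletion_check(model, image, saliency, target_class, k=500):
    """Mask the k most salient pixels and report the drop in predicted confidence."""
    flat = saliency.flatten()
    top = flat.topk(k).indices                 # indices of the most salient pixels
    mask = torch.ones_like(flat)
    mask[top] = 0.0                            # zero out those pixels
    masked = image * mask.view(1, 1, *saliency.shape)
    with torch.no_grad():
        before = torch.softmax(model(image), dim=1)[0, target_class]
        after = torch.softmax(model(masked), dim=1)[0, target_class]
    return (before - after).item()             # larger drop -> more faithful map
```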

Through nine semi-structured interviews with users from various backgrounds, including researchers, radiologists, and computational biologists, the researchers found that saliency cards provide a detailed vocabulary for discussing individual methods and allow for a more systematic selection of task-appropriate methods. Using saliency cards, researchers can also conduct a more organised analysis of the research landscape to uncover opportunities for new methods and evaluation metrics that address unmet user needs.
