Researchers at MIT have devised a technique that helps a user understand a machine-learning model's reasoning and how it compares to human reasoning.

In machine learning, understanding why a model makes a particular decision is essential for judging whether that decision can be trusted. For example, a machine-learning model might correctly predict that a skin lesion is cancerous, but it might have done so by picking up on an unrelated blip in the clinical photo rather than the lesion itself.

Experts do have tools to help them understand a model's reasoning, but these methods explain only one decision at a time, and each explanation has to be evaluated by hand. Because models are typically trained on millions of data points, it is nearly impossible for a person to review enough decisions to spot patterns.

What is Shared Interest?

Researchers at MIT and IBM have created a method that lets users quickly collect, sort, and score these individual explanations to see how a machine-learning model behaves.

The researchers call their method "Shared Interest"; it uses quantifiable metrics to compare how well a model's reasoning matches a person's.

Shared Interest could make it easier for users to spot troubling trends in a model's decision-making; for example, the model may be easily confused by distracting, irrelevant features such as background objects in images. By aggregating these insights, a user can quickly and quantitatively determine whether a model is trustworthy and ready to be deployed in a real-world setting.

Aligning humans and AI

Shared Interest builds on saliency methods, which are widely used techniques for showing how a machine-learning model arrived at a particular decision. If the model is classifying images, a saliency method highlights the parts of the image the model considered important when it made its decision, typically as a heatmap overlaid on the original image. If the model classified the image as a dog and the heatmap highlights the dog's head, those pixels were important to the model when it made that decision.
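To make that concrete, here is a minimal sketch of one common way such a heatmap can be produced: a gradient-based saliency map for an image classifier in PyTorch. The model, weights, and input below are placeholders for illustration only; Shared Interest itself is agnostic to which saliency method is used.

```python
# Minimal sketch of a gradient-based saliency map for a PyTorch image classifier.
# The model, weights, and input are stand-ins; Shared Interest works with
# whichever saliency method the practitioner already uses.
import torch
import torchvision.models as models

model = models.resnet18(weights=None)        # untrained stand-in classifier
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)   # stand-in input image
logits = model(image)
predicted_class = logits.argmax(dim=1).item()

# Back-propagate the predicted class score to the input pixels.
logits[0, predicted_class].backward()

# Saliency = largest absolute gradient across colour channels, one value per pixel.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)  # shape (224, 224)
```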

Shared Interest works by comparing this saliency output with ground-truth data. In an image dataset, ground-truth data are typically human-made annotations that outline the relevant parts of each image; in the dog example, the box would surround the entire dog. When evaluating an image classification model, Shared Interest compares the model-generated saliency data and the human-generated ground-truth data for the same image to see how well they align.
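As a rough illustration of that comparison, the sketch below reduces both a saliency map and a human annotation to binary masks and computes overlap-style scores between them. The threshold, mask shapes, and function name are assumptions made for this example, not the paper's exact implementation.

```python
# Rough sketch: binarise the saliency map, compare it with the human annotation
# mask, and report coverage-style scores. Threshold and shapes are illustrative.
import numpy as np

def alignment_scores(saliency: np.ndarray, ground_truth: np.ndarray,
                     threshold: float = 0.5) -> dict:
    """Compare a model saliency map with a human ground-truth mask."""
    s = saliency >= threshold           # model's salient region as a binary mask
    g = ground_truth.astype(bool)       # human annotation, e.g. a bounding box

    intersection = np.logical_and(s, g).sum()
    union = np.logical_or(s, g).sum()

    return {
        "iou": float(intersection / union) if union else 0.0,
        # how much of the human-annotated region the model also used
        "ground_truth_coverage": float(intersection / g.sum()) if g.sum() else 0.0,
        # how much of the model's salient region lies inside the annotation
        "saliency_coverage": float(intersection / s.sum()) if s.sum() else 0.0,
    }

# Toy example: the annotation is the left half of a 4x4 image, while the
# model's saliency covers only the top-left quadrant.
gt = np.zeros((4, 4)); gt[:, :2] = 1
sal = np.zeros((4, 4)); sal[:2, :2] = 1
print(alignment_scores(sal, gt))   # IoU 0.5, ground-truth coverage 0.5, saliency coverage 1.0
```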

Several metrics quantify how aligned (or misaligned) each decision is, and every decision is then sorted into one of eight categories, ranging from fully human-aligned to wholly distracted. When working with text-based data, the method highlights important words rather than image regions.
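Building on the scores above, one way to picture the categorisation step is to bucket each decision by how its overlap scores relate, as in the hedged sketch below. The cut-offs and labels are placeholders for illustration, not the paper's precise eight categories.

```python
# Illustrative follow-on: bucket a decision by how its coverage scores relate.
# Cut-offs and labels are placeholders, not the paper's exact eight categories.
def categorise(scores: dict, high: float = 0.9, low: float = 0.1) -> str:
    if scores["iou"] >= high:
        return "human aligned"        # saliency and annotation coincide
    if scores["iou"] <= low:
        return "distracted"           # model relies on unrelated regions
    if scores["saliency_coverage"] >= high:
        return "focuses on a subset of the annotation"
    if scores["ground_truth_coverage"] >= high:
        return "uses the annotation plus surrounding context"
    return "partially aligned"
```

Fed the toy scores from the previous sketch, whose saliency lies entirely inside the annotation, this would return the subset label.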

Analysis

The researchers demonstrated Shared Interest through three case studies.

  • The first case study used Shared Interest to help a dermatologist decide whether he could trust a machine-learning model designed to help diagnose cancer from photos of skin lesions. Shared Interest made it easy for the dermatologist to see examples of when the model was right and when it was wrong. He concluded that he could not trust the model because it made too many predictions based on image artefacts rather than actual lesions.
  • The second case study showed how researchers can use Shared Interest to evaluate a saliency method by surfacing problems with the model that were previously unknown. Their approach let the researcher examine thousands of correct and incorrect decisions in a fraction of the time manual inspection would have taken.
  • In the third case study, the researchers used Shared Interest to dig deeper into a specific image classification example. By altering the ground-truth region of the image, they could perform a what-if analysis to determine which parts of the image were most important for confident predictions (see the sketch after this list).
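A minimal sketch of that what-if idea, under assumed toy masks and region names: hold the model's saliency map fixed and recompute its overlap with alternative ground-truth regions, to see which region the alignment, and hence the model's confident prediction, really depends on.

```python
# Hedged sketch of the "what-if" analysis: keep the model's saliency map fixed
# and recompute its overlap with alternative ground-truth regions. The 4x4
# masks and region names are toy placeholders.
import numpy as np

def iou(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum() / union) if union else 0.0

saliency_map = np.zeros((4, 4))
saliency_map[:2, :2] = 1                 # model attends to the top-left quadrant

def region_mask(rows, cols, shape=(4, 4)):
    m = np.zeros(shape)
    m[rows, cols] = 1
    return m

candidate_regions = {
    "whole left half": region_mask(slice(None), slice(0, 2)),
    "top-left quadrant": region_mask(slice(0, 2), slice(0, 2)),
    "background only": region_mask(slice(2, 4), slice(2, 4)),
}

# Alignment rises or falls depending on which region is treated as ground
# truth, hinting at which parts of the image the prediction relies on.
for name, region in candidate_regions.items():
    print(f"{name}: IoU = {iou(saliency_map, region):.2f}")
```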

Conclusion

The researchers hope to apply Shared Interest to other forms of data in the future, including the tabular data found in medical records. They also intend to use Shared Interest to strengthen existing saliency methods. The researchers believe this work will spur further research into quantifying machine-learning model behaviour in ways humans can understand.

For more information, refer to the article.

