Yilun Zhou and Julie Shah of MIT CSAIL and Marco Tulio Ribeiro of Microsoft Research have created a mathematical framework, Explanation Summary (ExSum), to evaluate and quantify how understandable the explanations of machine-learning models really are. The framework could help researchers surface insights into model behaviour that would be missed if they assessed only a handful of individual explanations in an attempt to understand the whole model.
Many machine learning models deployed in real-life use cases in the financial, judicial, and even medical domains are considered "black boxes": when it comes to explainability, even the researchers who build these models cannot fully decipher how they make predictions. As a result, practitioners often rely on explanation methods that describe individual model decisions, and those methods come with limitations of their own, which the researchers set out to address.
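To make the idea concrete, here is a minimal sketch of one such local explanation method: it explains a single prediction of a small text classifier by scoring each word according to how much the predicted probability changes when that word is removed (occlusion). The toy data, model, and scoring function are illustrative assumptions for this article, not the specific methods studied in the paper.

```python
# A minimal sketch of a "local" explanation method: it explains one prediction
# of a small text classifier by measuring how the predicted probability changes
# when each word is removed (occlusion). Toy data and model are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "the movie was wonderful and moving",
    "a wonderful, heartfelt performance",
    "the plot was dull and the acting terrible",
    "terrible pacing, a dull and boring film",
]
train_labels = [1, 1, 0, 0]  # 1 = positive, 0 = negative

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

def occlusion_explanation(text: str) -> list[tuple[str, float]]:
    """Score each word by the drop in P(positive) when that word is removed."""
    words = text.split()
    base = model.predict_proba([text])[0, 1]
    scores = []
    for i, word in enumerate(words):
        reduced = " ".join(words[:i] + words[i + 1:])
        scores.append((word, base - model.predict_proba([reduced])[0, 1]))
    return sorted(scores, key=lambda s: abs(s[1]), reverse=True)

# Explain a single prediction: words with large positive scores pushed the model
# toward "positive". This says nothing about the model's behaviour in general,
# which is exactly the gap the MIT/Microsoft framework is meant to address.
print(occlusion_explanation("a wonderful film with terrible pacing"))
```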
"With this framework, we can have a very clear picture of not only what we know about the model from these local explanations, but more importantly, what we don't know about it," says Yilun Zhou, an electrical engineering and computer science graduate student in the CSAIL and lead author of a paper presenting this framework.
A lot of research has been done, and much more is underway, to build frameworks like this for the future. Before delving into the solution, however, it helps to understand the problem itself.
Because of long-standing worries about "black box" models, many businesses remain skeptical of machine learning. The term "black box" refers to models so complicated that they are difficult for humans to understand. In health care, for instance, where many decisions are matters of life and death, a lack of interpretability in prediction algorithms can be dangerous and undermines trust in those models. There has, however, been a recent explosion of research in explainable machine learning aimed at overcoming these challenges.
It is worth noting that much of AI's current success comes from low-stakes applications such as Amazon purchase recommendations, Facebook face detection, Twitter sentiment analysis, and machine translation in Google Translate. AI/ML is rarely deployed with the same confidence in high-stakes applications where lives are on the line.
"Everybody understands that explainability is important, especially with things like GDPR in Europe. I think it's not just a western thing. Even a company that develops software for a bank in Europe today has to have explainability because the customer in Europe can go back and sue the bank according to the GDPR regulations. So it's something that everybody has to deal with," said Prof Vineeth N Balasubramaniam from IIT-H in an earlier conversation with INDIAai.
Most people add a layer of explainability to an ML model that is already deployed. "Now these are called post-hoc methods where you first predict; then you come up with a layer of explainability to reason. The problem now is that it becomes an issue of accountability. So if something goes wrong, which team do you blame: the team that built the machine learning model or the team that built the explainability model? So it's a big legal problem going forward," he further added.
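As an illustration of such a post-hoc layer, the sketch below trains a model first and only afterwards computes permutation feature importance on held-out data as a separate explanation step. The dataset and model choice are illustrative assumptions, not tied to any specific deployment discussed above.

```python
# A minimal sketch of a post-hoc explainability layer: the model is trained
# (and could already be deployed) first, and the explanation is computed
# afterwards by a separate component, without changing the model itself.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box" model, trained as usual.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# The post-hoc explanation layer: shuffle one feature at a time and measure how
# much held-out accuracy drops; larger drops mean the feature mattered more.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

top = result.importances_mean.argsort()[::-1][:5]
for i in top:
    print(f"{X.columns[i]:<25} {result.importances_mean[i]:.3f}")
```

Because the explanation is produced by a component separate from the predictor, the accountability question raised above applies directly: a flaw could originate in the model, in the explanation layer, or in the mismatch between the two.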
To conclude, discussion around ethical AI, explainable AI (XAI), and AI bias has gained a lot of traction recently, yet spotting loopholes or problems inside a black box remains hard. In even a simple case, if programmers don't know which elements of the input are weighed and analysed to produce the output, it becomes harder for them to filter out improper content or measure bias. The age of black-box ML models is drawing to a close, and the industry is coming to terms with that fact.
Source: MIT