One of the significant challenges in building public trust in AI has been the lack of transparency about the rationale a machine learning program follows in arriving at a particular conclusion. Experts have termed this the AI black box: automated decision-making programs, built on machine learning over big data, map a user's features onto a class that predicts traits such as credit risk or health status without exposing the reasons. This opacity has also fuelled concerns about AI bias, which has caused a wide uproar.
However, in a bid to overcome this challenge, Google's cloud computing division introduced a new system called Explainable AI earlier this week in London. The search giant is hoping to leverage it to close the gap on competitors Microsoft and Amazon, which have been dominating the cloud computing sector.
According to Google, Explainable AI is a set of tools and frameworks to help developers build interpretable and inclusive machine learning models and deploy them with confidence. With it, you can understand feature attributions in AutoML Tables and AI Platform and visually investigate model behaviour using the What-If Tool. It further simplifies model governance through continuous evaluation of models managed on AI Platform.
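As an illustration of the kind of workflow Google describes, the following is a minimal sketch of loading examples into the What-If Tool from a notebook. It assumes the witwidget package is installed and uses a hypothetical classify function as a stand-in for a deployed model; a real setup would point the tool at an actual model or serving endpoint.

```python
# Minimal sketch: inspecting model behaviour with the What-If Tool in a notebook.
# Assumes the `witwidget` package is installed; `classify` is a hypothetical
# prediction function mapping a list of examples to per-class score lists.
import tensorflow as tf
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

def classify(examples):
    # Placeholder standing in for a real model; a production setup would
    # call the deployed model here and return its class probabilities.
    return [[0.5, 0.5] for _ in examples]

# A handful of toy examples with two numeric features.
examples = [
    tf.train.Example(features=tf.train.Features(feature={
        "age": tf.train.Feature(float_list=tf.train.FloatList(value=[float(a)])),
        "income": tf.train.Feature(float_list=tf.train.FloatList(value=[float(i)])),
    }))
    for a, i in [(25, 40000), (47, 92000), (63, 31000)]
]

# Build the tool's configuration around the examples and the predict function,
# then render the interactive widget for visual investigation.
config = WitConfigBuilder(examples).set_custom_predict_fn(classify)
WitWidget(config, height=600)
```

From the widget, individual examples can be edited and re-scored, which is the "visually investigate model behaviour" step the announcement refers to; feature attributions, by contrast, are produced by the explanation features of AutoML Tables and AI Platform themselves.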
Speaking to the BBC, Prof Andrew Moore, who leads Google's Cloud AI division, stated that the era of black-box machine learning is behind us.
“With the new Explainable AI tools where we’re able to help data scientists do strong diagnoses of what’s going on. But we have not got to the point where there’s a full explanation of what’s happening. For example, many of the questions about whether one thing is causing something or correlated with something - those are closer to philosophical questions than things that we can purely use technology for,” he added.