In recent years, the hype around “AI-powered Industry 4.0” has reached a fever pitch. But as we enter a new decade, we have yet to witness any concrete indicators of the arrival of the next industrial revolution, despite notable advances in machine learning and other AI technologies.

Today, artificial intelligence technologies have penetrated most aspects of our daily lives, and this has motivated many enterprises and organisations, small and large, to adopt AI in their business processes with mixed success. Some estimates even suggest, as noted by MathWorks’ Michael Agostini in his MATLAB Expo 2019 keynote, that around 80-90 per cent of these AI initiatives end up failing. This phenomenon raises an important question: why is AI struggling even after being hailed as one of the most powerful transformative agents of human civilisation?

The curious case of New Zealand’s dairy industry’s AI deployment 

An excellent answer to the above question comes from New Zealand, as pointed out by Michael Agostini in his keynote. The Industrial Information and Control Centre (I2C2), a joint research institute between Auckland University of Technology and the University of Auckland, was established to improve process simulation and control in New Zealand’s dairy industry. The I2C2 institute’s industrial partner Fonterra, the largest producer of milk powder in the country, set out to use AI to optimise the process of converting raw milk into milk powder.

Converting raw milk into milk powder is a long, continuous process that can take up to three days: milk is poured in constantly at one end while powder emerges at the other. As a consequence, if there is a defect in the quality of the milk or any other component added, it takes three whole days to find out about it, and the entire batch of milk powder has to be discarded.

The expectation was to create an algorithm that could predict the quality of the product in real time. For that, three plants were chosen, with six years of historical data comprising millions of data points. However, when the final product was deployed, it failed miserably. “The developers took all the data and naively fed it into an AI model, building algorithms, and they ended up with garbage,” noted Agostini.

On further examination, the researchers found that even though the plants were similar, with two of them being identical, they operated in entirely different ways. In fact, each plant’s operating state changed year after year.

Furthermore, even after trying to address these issues by narrowing the scope to one plant’s data from a single year, the algorithm still failed to deliver the desired result. Digging deeper, the researchers discovered a bias in the training data with regard to the bulk density of the milk powder, a key measure in quality assessment.
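The kind of bias the researchers ran into can be illustrated with a minimal, entirely hypothetical sketch: if the historical extract used for training over-represents one part of a key variable’s range (here a made-up “bulk density” column standing in for Fonterra’s real measurements), a simple coverage check reveals how much of live production the model has never actually seen. The numbers and distributions below are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the historical plant records: the "bulk density"
# column in the training extract is skewed toward high-density batches,
# while live production spans a wider operating range.
train_density = rng.normal(loc=0.62, scale=0.02, size=5000)  # biased extract
live_density = rng.normal(loc=0.55, scale=0.06, size=5000)   # live operations

def coverage(train: np.ndarray, live: np.ndarray) -> float:
    """Fraction of live observations that fall inside the range
    the model ever saw during training."""
    lo, hi = train.min(), train.max()
    return float(np.mean((live >= lo) & (live <= hi)))

print(f"train mean density: {train_density.mean():.3f}")
print(f"live mean density:  {live_density.mean():.3f}")
print(f"live samples inside training range: {coverage(train_density, live_density):.1%}")
```

A check this simple is exactly the kind of sanity test a domain expert would insist on before any modelling starts, because they know which variables, such as bulk density, drive quality in the first place.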

The rise of the domain expert

The case study points to the fact that effective deployment of AI in any enterprise depends on more than good programmers and data scientists. It also requires domain experts: in this case, someone with deep knowledge of how the dairy industry and its conversion processes work.

While the IT expert understands the nuances of machine learning systems, the algorithm families needed to solve a particular problem, and how to tune an algorithm to achieve the desired accuracy, it is the domain expert’s job to bring domain-specific knowledge, such as the source and usability of datasets and the quality of the algorithm’s recommendations.

“Today developing an AI system requires a deep understanding of the domain within which the system will operate,” Jeff McGehee, senior data scientist and IoT practice lead at Very, told The Enterprise Project. “Experts in developing AI systems will rarely be experts in the actual domain of the system. Domain experts can provide critical insights that will make an AI system perform its best,” he added.

Understanding the limitations of machine learning

In many quarters, AI or machine learning is seen as a magical tool that offers a one-size-fits-all solution to the challenges and problems faced by any organisation. This perception often leads many of us to overlook the reality that “machine learning algorithms, whether they transcribe speech or identify animals in photographs, are little more than reflections of the correlations between inputs and outputs in their training data.”

A draft paper from a round-table held at the University of California, Berkeley on ‘AI and Domain Knowledge: Implications of the Limits of Statistical Inference’ explains that in the case of speech transcription, “the system knows which letters arranged in a particular order match the sounds a human voice is making because it has seen the same or very similar pairings of letters and sounds in its training data. In the case of identifying an animal from a photo, the system produces the word ‘dog’ when presented with a photograph whose pixels constitute an image of a canine because it has been trained on a dataset that includes many photographs with similar pixel arrangements paired with the word ‘dog.’ This kind of analysis, where ML algorithms infer what to do based on a statistical analysis of data, powers most of the headline-writing AI achievements and ‘AI-powered’ software features that have become commonplace.”

The biggest limitation of ML today is that it cannot handle inputs that are statistically eccentric compared to the examples it has seen in its training data. As a result, today’s AI systems work reliably only in decision-making domains with two features. “Firstly, they must be non-dynamic, in that they must not frequently present an AI system with situations or information that it hasn’t seen before. Second, they must be narrow, meaning that the universe of possible inputs and outputs must be able to be comprehensively described by training data,” states the draft paper.
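This failure mode is easy to demonstrate with a toy sketch. Below, a cubic polynomial (a stand-in for any statistical model, chosen here for simplicity) is fitted to a narrow slice of a sine curve. Inside the regime it was trained on, predictions are excellent; on a “statistically eccentric” input far outside the training data, the same model produces garbage. All numbers here are illustrative, not drawn from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Train on a narrow slice of the input space: y = sin(x) for x in [0, 1].
x_train = rng.uniform(0.0, 1.0, 200)
y_train = np.sin(x_train)

# A cubic polynomial fits the training slice almost perfectly...
coeffs = np.polyfit(x_train, y_train, deg=3)
in_dist_err = abs(np.polyval(coeffs, 0.5) - np.sin(0.5))

# ...but on an input far outside the training distribution,
# the prediction is wildly wrong.
out_dist_err = abs(np.polyval(coeffs, 8.0) - np.sin(8.0))

print(f"error at x=0.5 (seen regime):   {in_dist_err:.5f}")
print(f"error at x=8.0 (unseen regime): {out_dist_err:.2f}")
```

Nothing in the fitted model “knows” it is extrapolating; only someone who understands the problem domain knows which inputs the system will actually face, and whether the training data covers them.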

Why domain expertise is crucial 

At present, AI thrives in non-dynamic and narrow problem domains, and those are often unique to a particular industry. It should also be noted that “the critical underpinning of firms most successful in using AI to solve these kinds of problems is expertise in the relevant problem and how to use AI to solve it, rather than technical robustness or sophistication,” notes the draft paper, indicating the value of domain expertise over technological know-how.

“If you look at any AI solution, there is a domain part of it, and there is a software part of it. We can only find value in the intersection of all these things together,” says Nataraju Vusirikala, head of the Bosch Center for AI in India.

Organisations that have successfully leveraged AI often do not have the best AI talent or the most cutting-edge machine learning models. Rather, their domain expertise gives them a clearer picture of the business problems in their niche, as well as the knowledge of what data is required to build the comprehensive datasets needed to solve them.

A good example is the success story of Biota, a firm that helps oil and gas companies determine the size, interconnectivity and other valuable characteristics of shale gas or oil deposits with minimal drilling. To achieve this, the company takes DNA samples from microorganisms living in the hydrocarbons and runs them against its massive datasets of information about other deposits around the country. Biota’s AI implementation reaffirms that only organisations with deep expertise in the particular problem domain, in this case the energy industry, will be able to design and build successful solutions, in contrast to Fonterra’s first attempt.

“Today, innovations are increasingly taking place at these boundaries. Gone are the days when you can take a particular technology and apply it across industries successfully,” points out Nataraju. “We are seeing computer programmers, along with scientists, astrophysicists, or even agriculturists coming together to solve some of the biggest challenges in their field.”

“Without domain expertise, we are blindly applying algorithms, and it becomes a garbage-in, garbage-out situation,” he noted.

Expertise in the problem domains that AI is meant to solve, rather than algorithmic sophistication or other technical prowess, remains the central determinant of AI’s success. Today, the greatest positive impact of AI will come from those who learn how to apply AI methods to their domains, more than from the development of ever more sophisticated AI techniques.
