You have probably encountered examples of AI failing to identify an animal or an object correctly. First, let's decode how a machine sees and identifies objects: deep neural networks, or multi-layered systems, are built to process images and other data through mathematical modelling. While they are capable of providing sophisticated results, they can also make some rather obvious errors, some of which can have serious repercussions.
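To make the "multi-layered system" idea concrete, here is a minimal sketch, assuming a small image classifier built with PyTorch; the layer sizes, the random input, and the ten output classes are illustrative assumptions, not any particular system discussed in the article.

```python
import torch
import torch.nn as nn

# A tiny multi-layered network: each layer applies learned weights (matrix
# multiplications) followed by a simple non-linearity. Sizes are arbitrary.
model = nn.Sequential(
    nn.Flatten(),             # turn a 28x28 grayscale image into 784 numbers
    nn.Linear(28 * 28, 128),  # first layer of weights
    nn.ReLU(),                # non-linearity between layers
    nn.Linear(128, 10),       # final layer: one score per possible class
)

image = torch.rand(1, 1, 28, 28)   # stand-in for a photo the system must identify
scores = model(image)              # forward pass: mathematical modelling end to end
print("predicted class:", scores.argmax(dim=1).item())
```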
In a paper published in Nature Machine Intelligence, philosopher Cameron Buckner of the University of Houston suggests that "common assumptions about the cause behind these supposed malfunctions may be mistaken". Buckner said it is critical to understand the source of the apparent failures caused by adversarial examples – cases in which a deep neural network misjudges images or other data when confronted with information outside the training inputs used to build the network. Such examples are rare, and they are called "adversarial" because they are often created or discovered by another machine learning network – a sort of brinksmanship in the machine learning world between ever more sophisticated methods to create adversarial examples and ever more sophisticated methods to detect and avoid them.
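The article describes adversarial examples being produced by competing networks; a simpler and widely used illustration of how one can be created is the Fast Gradient Sign Method, sketched below. This is not Buckner's method or any system from the paper: the toy model, the random placeholder image, and the epsilon step size are all illustrative assumptions.

```python
import torch
import torch.nn as nn

# Toy stand-in for a trained image classifier (untrained here, purely illustrative).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

image = torch.rand(1, 1, 28, 28, requires_grad=True)  # placeholder "photo"
label = model(image).argmax(dim=1)                    # the class the model currently picks

# Gradient of the loss with respect to the input pixels.
loss = nn.CrossEntropyLoss()(model(image), label)
loss.backward()

# Nudge every pixel a small step in the direction that most increases the loss.
epsilon = 0.1
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)

# The perturbation is barely visible to a human eye, yet it can change the prediction.
print("before:", label.item(), "after:", model(adversarial).argmax(dim=1).item())
```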
“Some of these adversarial events could instead be artifacts, and we need to better know what they are in order to know how reliable these networks are,” Buckner said.
Such artifacts could arise from an interaction between what the network is asked to process and the actual patterns in the data. "Understanding the implications of adversarial examples requires exploring a third possibility: that at least some of these patterns are artifacts," Buckner wrote. " … Thus, there are presently both costs in simply discarding these patterns and dangers in using them naively."
Adversarial events that cause these machine learning systems to make mistakes aren't necessarily the result of intentional malfeasance, but that is where the highest risk lies – malicious actors could dupe systems built on otherwise reliable networks. For instance, security systems and traffic light setups that use facial recognition technology could be compromised.
Contrary to earlier assumptions, there are some naturally occurring adversarial examples – instances in which a machine learning system misinterprets data through an unanticipated interaction rather than through an error in the data. They are rare and can be discovered only through the use of artificial intelligence. But they are real, and they reinforce the need to rethink how researchers approach these anomalies, or artifacts.