A study by York University highlights how deep-network models take potentially dangerous ‘shortcuts’ when solving complex recognition tasks. The study, ‘Deep learning models fail to capture the configural nature of human shape perception’, was published in the journal iScience.
The study employed novel visual stimuli called “Frankensteins” to explore how the human brain and deep convolutional neural networks (DCNNs) process holistic, configural object properties. According to James Elder, who holds the York Research Chair in Human and Computer Vision and is Co-Director of York’s Centre for AI & Society, Frankensteins are objects that have been taken apart and put back together the wrong way around. As a result, he added, they have all the right local features, but in the wrong places.
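The study's exact stimulus-construction procedure isn't reproduced here, but the general idea behind a Frankenstein is easy to demonstrate. Below is a minimal Python sketch (an illustration of the concept, not the authors' code) that preserves every local part of a toy image while breaking its global configuration by swapping the halves.

```python
import numpy as np

def frankenstein(image: np.ndarray) -> np.ndarray:
    """Swap the top and bottom halves of an image array.

    Each half is unchanged internally (local features preserved),
    but the overall arrangement of parts is wrong (configuration broken).
    """
    h = image.shape[0] // 2
    top, bottom = image[:h], image[h:2 * h]
    return np.concatenate([bottom, top], axis=0)

img = np.arange(64, dtype=np.uint8).reshape(8, 8)  # toy stand-in for an image
scrambled = frankenstein(img)

# Same local content, different global arrangement: the multiset of
# pixel values is identical, only their positions differ.
assert sorted(img.ravel()) == sorted(scrambled.ravel())
assert not np.array_equal(img, scrambled)
```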
The researchers found that while the human visual system is confused by Frankensteins, DCNNs are not, revealing an insensitivity to configural object properties. Elder stated that their results explain why deep AI models fail under certain conditions and point to the need to consider tasks beyond object recognition in order to understand visual processing in the brain.
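To make the notion of configural insensitivity concrete, here is a hedged sketch of one way it could be probed, not the study's actual protocol: compare a pretrained network's output distribution on an intact image and on its part-scrambled counterpart. The resnet50 model from torchvision stands in for the DCNNs tested, and the file names intact.png and frankenstein.png are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing for torchvision classifiers.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# A pretrained ResNet-50, used here only as an example DCNN.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

def class_probs(path: str) -> torch.Tensor:
    """Return the network's class-probability vector for one image file."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return F.softmax(model(x), dim=1).squeeze(0)

# Hypothetical file names: an object and its Frankenstein counterpart.
p_intact = class_probs("intact.png")
p_scrambled = class_probs("frankenstein.png")

# If the two distributions are nearly identical, the network's decision
# is effectively blind to the configural change that confuses humans.
kl = F.kl_div(p_scrambled.log(), p_intact, reduction="sum")
print(f"Top class, intact:    {p_intact.argmax().item()}")
print(f"Top class, scrambled: {p_scrambled.argmax().item()}")
print(f"KL(intact || scrambled): {kl.item():.4f}")
```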
“These deep models tend to take ‘shortcuts’ when solving complex recognition tasks,” Elder explained. “While these shortcuts may work in many cases, they can be dangerous in some of the real-world AI applications we are currently working on with our industry and government partners.”
For instance, one such application is traffic video safety systems. The objects in a busy traffic scene, the vehicles, bicycles and pedestrians, obstruct each other and arrive at the eye of a driver as a jumble of disconnected fragments.
According to the researchers, modifications to training and architecture aimed at making networks more brain-like did not lead to configural processing, and none of the networks was able to accurately predict trial-by-trial human object judgements.