Researchers at Johns Hopkins University have discovered that neurons in visual area V4, a region of the visual cortex tuned for orientation, spatial frequency, and colour, represent 3D shape fragments, not just the 2D shapes assumed for the last 40 years. The Johns Hopkins researchers also found that artificial neurons show a strikingly similar response, which presumably helps both natural and artificial vision systems detect solid, 3D objects. They observed this similarity at an early stage (layer 3) of AlexNet, an advanced computer vision network.
"I was surprised to see strong, clear signals for 3D shape as early as V4," said Ed Connor, a neuroscience professor and director of the Zanvyl Krieger Mind/Brain Institute. "But I never would have guessed in a million years that you would see the same thing happening in AlexNet, which is only trained to translate 2D photographs into object labels."
The finding is a major step for artificial intelligence (AI) researchers working to emulate human capabilities such as vision. Deep networks like AlexNet have achieved major feats in object recognition with the help of high-capacity graphics processing units (GPUs), technology initially developed for gaming, and massive training sets fed by the trillions of images and videos on the internet.
Connor and his team found that natural and artificial neurons responded to the same database of images in remarkably similar ways: the response patterns in V4 and in AlexNet layer 3 were nearly identical. Why such a striking similarity?
Perhaps because deep networks like AlexNet were designed in part by emulating the multi-stage visual networks in the brain, Connor suggested.
"Artificial networks are the most promising current models for understanding the brain. Conversely, the brain is the best source of strategies for bringing artificial intelligence closer to natural intelligence," Connor said.
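The kind of comparison the team describes, matching a biological neuron's responses against an artificial unit's responses to the same images, can be sketched with a simple correlation measure. This is a minimal illustration only: the response values below are hypothetical, and the study's actual analysis pipeline is not specified here.

```python
import numpy as np

def response_similarity(natural, artificial):
    """Pearson correlation between two neurons' responses to
    the same set of images (one response value per image)."""
    natural = np.asarray(natural, dtype=float)
    artificial = np.asarray(artificial, dtype=float)
    return float(np.corrcoef(natural, artificial)[0, 1])

# Hypothetical firing rates for a V4 neuron and activations for
# an AlexNet layer-3 unit across the same five stimulus images.
v4_responses = [12.0, 30.5, 8.2, 25.1, 19.4]
alexnet_unit = [0.10, 0.42, 0.05, 0.35, 0.22]

print(round(response_similarity(v4_responses, alexnet_unit), 3))
```

A correlation near 1.0 would indicate that the two units rank the images in nearly the same way, the sort of correspondence the researchers report between V4 and AlexNet layer 3.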