Presenter: Prof. Manan Suri

Moderator: Jibu Elias 

The next generation of AI is expected to enter new frontiers equivalent to several traits of human cognition, such as interpretation and autonomous adaptation. With AI solutions powered by deep neural networks hitting their limitations, next-generation AI should focus on addressing shortcomings such as "brittleness", in order to give AI traits like abstract reasoning and the ability to generalise about the world, as well as to automate ordinary human activities.

In this context, a new stage in hardware development needs to be explored and developed. Neuromorphic Computing, often described as hardware design that mimics the information-processing architecture of the human central nervous system, is one such technology. Neuromorphic Computing sits at the intersection of diverse disciplines, including neuroscience, machine learning, microelectronics, and computer architecture.

Prof. Suri leads the NVM & Neuromorphic Hardware Research Group at IIT-Delhi. He is an Assistant Professor with the Department of Electrical Engineering and the founder of CYRAN AI Solutions (an IIT-Delhi start-up). His R&D focus areas include semiconductor AI, Neuromorphic and NVM hardware.

Data Dissection and Collection

Data is not just voluminous or large. One should take a step back and dissect data and its origin; volume is just one aspect. Data rests on four pillars – building a digital empire, making machines more intelligent, enhancing technology, and reflecting the world we live in. It is also important to dissect approaches into broad categories such as neuromorphic and neuromimetic.

A range of verticals today is responsible for generating reams of data – IoT sensors, Industry 4.0, retail, finance and social media are some examples. It is estimated that on the order of 10²¹ bytes of data is being generated today – roughly 62,000 billion GB. Moore's Law states that as the VLSI/semiconductor chip industry progresses, the number of transistors/computational devices keeps increasing, roughly doubling every couple of years. Through a positive feedback loop, sensors, storage and computation have become economical. We have an abundance of data-generation devices, and the cost of collecting data has become quite low. AI is now reaching a state where expectations go beyond functionality: once the job gets done, there are higher expectations such as ethics and sustainability.
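As a quick sanity check on these figures (assuming the estimate refers to bytes, which the talk does not state explicitly), the two numbers are consistent:

```python
# Quick sanity check on the data-volume figures quoted above.
# Assumption: the estimate refers to bytes of data generated.
bytes_generated = 62e21            # ~62 zettabytes, on the order of 10**21 bytes
gigabytes = bytes_generated / 1e9  # 1 GB = 10**9 bytes
print(f"{gigabytes:,.0f} GB")      # 62,000,000,000,000 GB = 62,000 billion GB
```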

Tech should functionally do your job and also be sustainable. This is where hardware becomes important: the energy cost of managing so much data has an implicit effect on the environment. While we talk about NLP, AI, ML and deep networks computing giga/tera operations per second, the energy costs start becoming huge, especially for the environment.

We need to take inspiration from nature. Time and again, nature has shown that it can get the job done in an energy-efficient manner.

Suppose you simulate the brain of a cat – it would require something like 128,000 processors, each with 1 GB of memory. Studies show you need at least 100 petaflops to emulate a human brain, and to power such a cluster you would need a mini data centre. There is something inherent in how mammalian brains process information that makes them low-power and energy-efficient. Researchers and the industry at large should explore bio-inspired neuromorphic structures, as we will be looking into the sustainability domain very soon.
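To get a feel for that scale, here is the back-of-envelope arithmetic behind those figures (illustrative only; the cluster configuration is as quoted in the talk):

```python
# Back-of-envelope arithmetic for the brain-emulation figures quoted above.
processors = 128_000
memory_per_processor_gb = 1
total_memory_tb = processors * memory_per_processor_gb / 1024
human_brain_flops = 100e15        # 100 petaflops = 1e17 operations/second

print(f"Cat-brain cluster memory: ~{total_memory_tb:.0f} TB")
print(f"Human-brain estimate: {human_brain_flops:.0e} FLOPS")
```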

Mammalian brains are structured in a way that lets them run at low power levels, and this structure is what we should aspire to build into our machines. The more we get into AI, the more we tread into the territory of sustainability. Neuromorphic systems lie at the intersection of the best of computer science, computational neuroscience and nanoelectronics. While biomimetic systems are an exact replica of biology down to precise values, bio-inspired systems aim to look at biological systems and model artificial systems on them.

Brains are massively parallel computational blocks in a densely connected 3D structure – a fantastic piece of engineering. A human brain is really small – at any given point, it occupies no more than about 2 litres of volume. It is low power and immune to noise and variability: despite environmental and intrinsic noise, computational functionality continues regardless. When we build digital or electronic systems, variability is addressed through quality control, which ensures every piece is identical. Natural architectures and computational methodologies, however, are very different. The von Neumann model assumes building blocks that are highly deterministic, predictable, well behaved, not noise-prone, and rule-following. Natural systems are far from this: they are intuitive and learn from their surroundings constantly. The brain doesn't connect to a cloud; it computes on the edge.

What Makes Neuromorphic Systems So Powerful?

Storage and processing are not isolated functions in the brain. In traditional digital computing, storage and processing are separate: one set of hardware stores the data, while another does the computation. When these boundaries start getting mixed – when both blocks can do both functions – the result is extremely powerful. In biology, the analogues are neurons and synapses: think of a neuron as a CPU and a synapse as memory. In biology these functions mix; they are parallel, tightly integrated and arranged in a 3D mesh.
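To make the analogy concrete, here is a minimal sketch (with arbitrary assumed parameters, not hardware from the talk) of a leaky integrate-and-fire neuron, where the synaptic weights are simultaneously the stored state and the operands of the computation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs = 8
# Synaptic weights: the stored "memory" that also takes part in every computation.
weights = rng.uniform(0.0, 0.5, n_inputs)

v = 0.0           # membrane potential (the neuron's internal state)
leak = 0.9        # leak factor applied each timestep
threshold = 1.0   # firing threshold

for t in range(20):
    spikes_in = rng.random(n_inputs) < 0.3          # random binary input spike train
    v = leak * v + weights @ spikes_in.astype(float)  # integrate weighted inputs
    if v >= threshold:
        print(f"t={t}: output spike")
        v = 0.0                                      # reset after firing
```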

India’s Work in Neuromorphic Computing

AI is not perfect, and may even be hyped – but hardware is very real, and neuromorphic hardware is part of it. Neuromorphic hardware is no hype: we are looking at highly specialised systems for new computation techniques, with low footprint and security, going beyond what general-purpose GPUs and CPUs offer. Some notable examples of industry leading by example are the Google TPU, NVIDIA GPUs with 54bn+ transistors, Cerebras, and Intel's Pohoiki Springs. This is taking AI in a meaningful direction for sustainability and performance.

We always talk about man-made structures like the Eiffel Tower or the Burj Khalifa – the latter about 800 m tall with more than 160 floors, and both considered civil engineering marvels. In nanoscale memory technology, 3D semiconductor flash stacks nearly 128 "floors" of data structures, packed and neatly engineered within a few hundred microns. Semiconductor hardware is no hype – this is what we need to focus on for a richer future in AI.

Nanostructures like phase-change memory, magnetic memory and memristors are being used to build neuromorphic systems, and we have pioneered them as well. Indian research groups are doing a great job – we have been working on bio-inspired neural networks since as early as 2010. We have built phase-change-memory- and conductive-bridge-memory-based artificial nanoelectronic synapses, and used them to realise unsupervised learning for image and auditory processing tasks. The designs are inspired by the structures of the retina and the cochlea. Once we had figured out the synaptic structures, we went on to build neurons.
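For context, a common unsupervised rule used with such nanoscale synapses is spike-timing-dependent plasticity (STDP); the sketch below shows its generic textbook form (an illustration with assumed parameter values, not the group's exact scheme):

```python
import math

# Generic textbook STDP rule: the weight change depends on the relative timing
# of pre- and post-synaptic spikes. Parameter values below are assumptions.
A_PLUS, A_MINUS, TAU_MS = 0.05, 0.055, 20.0

def stdp_dw(t_pre_ms: float, t_post_ms: float) -> float:
    """Return the synaptic weight change for one pre/post spike pair."""
    dt = t_post_ms - t_pre_ms
    if dt > 0:   # pre fires before post: strengthen (potentiation)
        return A_PLUS * math.exp(-dt / TAU_MS)
    return -A_MINUS * math.exp(dt / TAU_MS)  # otherwise weaken (depression)

print(stdp_dw(10.0, 15.0))  # positive: potentiation
print(stdp_dw(15.0, 10.0))  # negative: depression
```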

Testing systems for this hardware are also being built at IIT Delhi. We came up with a non-volatile inference accelerator using MRAM technology, which was used for low-power healthcare applications such as wearables. The benefits of low-power hardware extend to socially relevant sectors like healthcare and education.
