AI is widely expected to power the next wave of innovation, and according to industry experts, the AI future we have been waiting for is already here. However, the immense possibilities of AI and ML need a hardware boost, or they will hit a roadblock: as the application areas grow, so do the demands on computational power.

The biggest challenges around AI hardware revolve around CPU, GPU, memory, network, and storage IOPS.

So far, only a few giants in the market, such as Google, IBM, and Amazon, have achieved this level of capability. Compute-heavy technologies are hitting hindrances because of hardware limitations: as compute requirements increase, more capable hardware is needed. Increasing compute power means fitting more transistors onto a chip, but there are physical limits to shrinking logic gates. Below roughly 5 nm, quantum tunnelling lets electrons leak across a gate even when it is switched off, distorting how the transistor works.

Another aspect of this roadblock is Moore's law, which states that the number of transistors in a dense integrated circuit doubles about every two years; the compute demands of AI workloads, however, are growing far faster than that. Hence, we face the question of how to leverage AI innovations cost-effectively while matching the ever-increasing compute power requirement. Part of the answer lies in technologies like 3D stacking, which place more components on the same die or combine them in the same multi-chip module or system.

The goal is to achieve cost efficiency and performance excellence in AI and ML applications. At the same time, we must keep our ML algorithms flexible so they can adapt to changing hardware.

This means that AI/ML applications cannot work in silos and require close sync with semiconductor companies. According to a report by McKinsey & Company, "by 2025, AI-related semiconductors could account for almost 20 percent of all demand, which would translate into about $67 billion in revenue."

Semiconductor players are racing to enable this future AI/ML market with cutthroat innovation.

Moving ahead, let's touch upon another aspect critical to the progress of the AI/ML ecosystem: training and inference.

Training, in this context, refers to the process of creating an ML model. It involves a framework, such as PyTorch or TensorFlow, and a training dataset; data scientists and engineers use the dataset to fit a model for a particular use case. Inference refers to using the trained model to make predictions on new data: the model applies what it learned during training, but its weights are no longer being updated.
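To make the distinction concrete, here is a minimal PyTorch sketch. The model, data, and hyperparameters are hypothetical placeholders, not any production workload; the point is only to show where training ends and inference begins.

```python
import torch
import torch.nn as nn

# A small illustrative classifier with random data.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Training: weights are updated from labelled examples.
features = torch.randn(32, 20)          # a batch of training inputs
labels = torch.randint(0, 2, (32,))     # their known labels
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()                      # compute gradients
    optimizer.step()                     # adjust the weights

# Inference: the trained model predicts on new, unseen data;
# no gradients are computed and no weights change.
model.eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 20)).argmax(dim=1)
```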

Many tech giants are already well ahead in bringing these chip and semiconductor innovations together with AI/ML algorithms.

The need of the hour is to focus on improving hardware capabilities: computational power, cost efficiency, cloud and edge computing, faster insights, new materials, and new architectures.

Let's see how the industry leaders plan to overcome the hindrances discussed at the start of this article.

Amazon Web Services (AWS), in an attempt to reduce latency and thereby increase throughput, has come up with the Inferentia chip. It is a custom chip built to accelerate machine learning inference workloads and optimize their cost.

Each AWS Inferentia chip contains four NeuronCores. Each NeuronCore implements a high-performance systolic array matrix multiply engine, which massively speeds up typical deep learning operations such as convolution and transformers. NeuronCores are also equipped with a large on-chip cache, which helps cut down on external memory accesses, dramatically reducing latency and increasing throughput.
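To build intuition for what a systolic-array matrix-multiply engine does, here is a simplified, software-only sketch in Python/NumPy. It is purely illustrative and is not NeuronCore's actual design: in real hardware, all processing elements update in parallel each cycle, whereas this simulation steps through them sequentially.

```python
import numpy as np

def systolic_matmul(A, B):
    """Simulate an output-stationary systolic array computing C = A @ B.

    Each processing element (PE) at grid position (i, j) accumulates C[i, j].
    Values of A stream in from the left and values of B from the top, skewed
    in time so that matching operands meet at the right PE.
    """
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m))
    # Enough time steps for all skewed operands to pass through the array.
    for t in range(n + m + k - 2):
        for i in range(n):
            for j in range(m):
                s = t - i - j          # which operand pair reaches PE(i, j) now
                if 0 <= s < k:
                    C[i, j] += A[i, s] * B[s, j]
    return C

# Sanity check against NumPy's matmul.
A, B = np.random.rand(4, 6), np.random.rand(6, 5)
assert np.allclose(systolic_matmul(A, B), A @ B)
```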

AWS Inferentia supports popular machine-learning frameworks like TensorFlow, PyTorch, and MXNet through the AWS Neuron SDK. Used in Amazon Alexa, AWS Inferentia delivers 25% lower end-to-end latency and 30% lower cost than GPU-based instances for Alexa's text-to-speech workloads.
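As a rough illustration of the workflow, the sketch below assumes the torch-neuron package and its trace API as documented for Inf1 instances, plus a stock torchvision model; treat it as an assumption-laden outline rather than a definitive recipe.

```python
import torch
import torch_neuron  # AWS Neuron SDK extension for PyTorch (Inf1 instances)
from torchvision import models

# Take a stock model and compile it for Inferentia with the Neuron tracer.
model = models.resnet50(pretrained=True).eval()
example = torch.zeros(1, 3, 224, 224)
model_neuron = torch.neuron.trace(model, example_inputs=[example])

# The compiled artifact can be saved and later loaded on an Inf1 instance,
# where inference calls run on the NeuronCores instead of CPU/GPU.
model_neuron.save("resnet50_neuron.pt")
```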

Google's Cloud TPU is an AI-accelerator application-specific integrated circuit (ASIC) developed specifically for neural network machine learning, particularly with Google's own TensorFlow software.

Google Cloud TPU comes with the following offerings:

  • Cloud TPU v2 - 180 teraflops / 64 GB High Bandwidth Memory (HBM)
  • Cloud TPU v3 - 420 teraflops / 128 GB HBM
  • Cloud TPU v2 Pod - 11.5 petaflops / 4 TB HBM / 2-D toroidal mesh network
  • Cloud TPU v3 Pod - 100+ petaflops / 32 TB HBM / 2-D toroidal mesh network
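From the software side, TensorFlow exposes TPUs through a distribution strategy. The sketch below shows the standard TF2 connection pattern; the TPU name "my-tpu" and the tiny Keras model are placeholders for illustration only.

```python
import tensorflow as tf

# Connect to a Cloud TPU ("my-tpu" is a hypothetical resource name).
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():
    # Variables created here are replicated across the TPU cores.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )
# model.fit(...) would then run the training steps on the TPU.
```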

NVIDIA is one of the leaders and innovators in the AI compute hardware market. Its graphics processing units (GPUs), originally designed for high-end media work and gaming, are today used to accelerate both training and inference of the deep learning systems that are crucial for applications like computer vision, voice recognition, and natural language processing (NLP).

The NVIDIA® T4 GPU accelerates diverse cloud workloads, including high-performance computing, deep learning training and inference, machine learning, data analytics, and graphics. Based on the NVIDIA Turing™ architecture, the T4 is optimized for mainstream computing environments and features multi-precision Turing Tensor Cores and new RT Cores, and it performs especially well when combined with accelerated, containerized software stacks from NGC.
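One common way software exploits multi-precision Tensor Cores is mixed-precision training. Below is a minimal sketch using PyTorch's automatic mixed precision (AMP) API; it assumes a CUDA-capable GPU such as the T4, and the model and data are placeholders.

```python
import torch

device = "cuda"  # requires an NVIDIA GPU with CUDA support
model = torch.nn.Linear(1024, 1024).to(device)
optimizer = torch.optim.Adam(model.parameters())
scaler = torch.cuda.amp.GradScaler()

inputs = torch.randn(64, 1024, device=device)
targets = torch.randn(64, 1024, device=device)

for _ in range(10):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():      # run eligible ops in FP16 on Tensor Cores
        loss = torch.nn.functional.mse_loss(model(inputs), targets)
    scaler.scale(loss).backward()        # scale the loss to avoid FP16 underflow
    scaler.step(optimizer)
    scaler.update()
```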

With AI advancements, hardware requirements will change in terms of compute, memory, storage, and networking, and that will translate into different demand patterns. Hardware is therefore a key element in delivering the results expected of leading AI/ML applications.

In an endeavor to innovate and progress, change is critical, and this change can be brought about collectively by researchers, enthusiasts, and tech leaders. Given today's hardware limits, it is vital to work collaboratively with the other stakeholders and industries involved, such as semiconductor companies. No one size fits all, and not everyone using AI/ML needs an immediate transformation: companies should assess their needs, skilling requirements, tools, and cost and energy estimates before making any drastic change.

