With the growth of AI, hardware is fashionable again after years of software being the centre of attraction. The statistics are indicative of the trend, too; McKinsey has estimated that hardware, such as head nodes, inference accelerators, and training accelerators, will account for 40-50% of the total value captured by AI vendors. Moreover, as chip components approach the scale of individual atoms, it has become impossible to keep pace with Gordon Moore's prediction for the semiconductor industry: doubling the number of transistors, and thus the processing power, of a given chip every two years is now prohibitively expensive and technically difficult. With Moore's law losing its force, continued improvements in training and inference must come from innovation in AI hardware itself.
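Moore's prediction is simple compound doubling, which a short calculation makes concrete. The sketch below is purely illustrative; the function name and the 1971 Intel 4004 starting figure are our own choices, not from the source:

```python
# Illustrative only: Moore's law modelled as a doubling every two years.
def transistors_after(years: float, base: int) -> int:
    """Project a transistor count assuming it doubles every two years."""
    return int(base * 2 ** (years / 2))

# Starting from Intel's 4004 (roughly 2,300 transistors in 1971),
# five decades of doubling would predict a count in the tens of billions,
# broadly in line with today's largest chips.
print(transistors_after(50, 2_300))
```

The point of the arithmetic is that each further doubling now demands near-atomic feature sizes, which is exactly where the economics described above break down.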

While components such as Central Processing Units (CPUs) and Graphics Processing Units (GPUs) have become part of common parlance, further innovations are happening in the field. Here we give an insight into the most valuable innovations in AI hardware.

  • Application-specific integrated circuit (ASIC): An ASIC is an integrated circuit chip customized for a particular use rather than intended for general-purpose use. For example, a chip designed to run in a digital voice recorder is an ASIC.
  • Field Programmable Gate Array (FPGA): A field-programmable gate array is an integrated circuit designed to be configured by a customer or a designer after manufacturing – hence the term "field-programmable". FPGAs are semiconductor devices built around a matrix of configurable logic blocks connected via programmable interconnects.
  • Chips for Edge Computing: Traditionally, most AI applications have resided in the cloud for both training and inference, given its scale advantage. However, inference at the edge is becoming increasingly common for applications where latency on the order of microseconds is mission critical. With self-driving cars, for instance, the decision to brake or accelerate must occur with near-zero latency, making inference at the edge the optimal option. Edge computing is also emerging as the favoured choice for applications where privacy and data bandwidth are paramount, such as AI-enabled CT-scan diagnostics.
  • Neuromorphic hardware: Custom-designed neuromorphic chips excel at constraint satisfaction problems, which require evaluating a large number of potential solutions to identify the one or few that satisfy specific constraints. They’ve also been shown to rapidly identify the shortest paths in graphs and perform approximate image searches, as well as mathematically optimizing specific objectives over time in real-world optimization problems. Intel’s 14-nanometer Loihi chip — its flagship neuromorphic computing hardware — contains over 2 billion transistors and 130,000 artificial neurons with 130 million synapses.
  • Quantum Hardware: Computers that perform quantum computations are known as quantum computers. IBM Q System One, a 20-qubit machine introduced by IBM in January 2019, is the world's first circuit-based commercial quantum computer. Google has developed Foxtail, Bristlecone, and most recently Sycamore, the quantum processor with which it claimed in 2019 to have performed a computation beyond the practical reach of classical machines.
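The constraint-satisfaction workload described in the neuromorphic bullet above can be sketched in ordinary Python: evaluate many candidate solutions and keep the few that satisfy every constraint. The brute-force map-colouring example below (all names and data are our own, illustrative choices) shows the shape of the search that chips like Loihi aim to replace with massively parallel, event-driven evaluation:

```python
# Illustrative only: a brute-force constraint-satisfaction search,
# the class of problem neuromorphic hardware is designed to accelerate.
from itertools import product

def solve_map_colouring(regions, borders, colours):
    """Return every colour assignment in which no two bordering
    regions share a colour."""
    solutions = []
    for assignment in product(colours, repeat=len(regions)):
        colour_of = dict(zip(regions, assignment))
        # Keep the assignment only if every border constraint holds.
        if all(colour_of[a] != colour_of[b] for a, b in borders):
            solutions.append(colour_of)
    return solutions

regions = ["A", "B", "C"]
borders = [("A", "B"), ("B", "C")]
print(len(solve_map_colouring(regions, borders, ["red", "green"])))  # 2
```

On a conventional CPU this search grows exponentially with the number of regions; the appeal of neuromorphic designs is evaluating such constraints in parallel across many artificial neurons rather than one candidate at a time.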

