Exascale Supercomputing
Supercomputers are synonymous with high-performance computing. Realized using high-speed processing chips, accelerators serving as co-processors for graphics and other specialized loads, interconnects of extremely low latency, vector or parallel processing architectures, or distributed processing environments, supercomputers have been a gateway to innovation. With rapid developments in semiconductor technology, coupled with enhancements in system software, processor stacking over multiple cores, energy efficiency gains and optimized power management, the compute power realized has increased manyfold over the years. The driving force behind these developments has been the need to solve huge number-crunching tasks, analyze gigantic amounts of data in real time and simulate complex processes. Classic examples of such tasks are simulating a nuclear test, mapping the human brain, building safe autonomous vehicles, generating breathtaking graphics for motion pictures, developing futuristic models of a state's economy, discovering new and effective drugs for cancer and other life-threatening diseases, and climate modelling to enable accurate prediction of climate change and its impacts.
Even as supercomputers have continued to be deployed for such tasks, their complexity and magnitude have become so daunting that finding efficient solutions to these scientific and engineering problems requires compute power far higher than what currently available supercomputers can provide. It is here that the project of building an exascale-level supercomputer of unprecedented capability was conceived.
An exascale supercomputer sports a computing power 1,000 times greater than a machine running at one petaflop (a thousand trillion floating-point operations per second), and more than twice that of Japan's 'Fugaku', which topped the global rankings until mid-2022. Fugaku delivers over 400 petaflops and is used for climate modelling, energy efficiency studies and life sciences. An exascale computer, by contrast, delivers a quintillion floating-point operations per second (one exaflop), that is 10^18, or a billion billion, operations per second. The enormity of this number can be gauged from a few comparisons: the Milky Way galaxy is about 1 quintillion kilometres across; it would take roughly 40 years for 1 quintillion gallons of water to flow over Niagara Falls; and it would take every person on Earth calculating round the clock for about four years to do what an exascale computer does in a single second.
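To put these comparisons on a firmer footing, the arithmetic can be checked in a few lines of Python; the world population of roughly 8 billion and the rate of one hand calculation per person per second are illustrative assumptions, not figures from the TOP500 reports.

```python
# Sanity-checking the exascale comparisons quoted above.
EXAFLOPS = 1e18        # operations per second at exascale (10**18)
PETAFLOPS = 1e15       # operations per second at petascale

print(EXAFLOPS / PETAFLOPS)               # 1000.0 -> exascale is 1000x a 1-petaflop machine

# "Every person on Earth calculating non-stop for ~4 years to match one exascale-second"
population = 8e9                          # assumed ~8 billion people
ops_per_person_per_second = 1             # assumed: one calculation per second each
seconds_needed = EXAFLOPS / (population * ops_per_person_per_second)
print(seconds_needed / (3600 * 24 * 365)) # ~3.96 years
```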
Besides raw exaflop computing power, an exascale supercomputer must also deliver energy efficiency, scalability and reliability. Power management, in particular, is crucial to realizing efficient exascale systems.
TOP 500 and GREEN 500 Supercomputers worldwide
Fujitsu's 'Fugaku' of Japan had topped the TOP500 list of the world's fastest supercomputers for two consecutive years before being displaced by 'Frontier', the first exascale supercomputer. In the 60th edition of the list, published in November 2022, 'Frontier' holds the top position with 'Fugaku' in second place. The ten fastest systems from the November 2022 list, along with an earlier ranking from the Fugaku era, can be seen in the table alongside.
Also shown alongside are some of the top GREEN 500 supercomputers, whose ranking does not necessarily match their position on the TOP500 list. The Green rankings have gained importance because these systems are power guzzlers, and mechanisms had to be devised to make them energy efficient through optimized power management. At the top of the GREEN 500 list is 'Henri' at the Flatiron Institute, New York, with an energy efficiency of 65.09 gflops/W, which incidentally ranked only 405th on the TOP500 list of November 2022. The Frontier test and development system (Frontier TDS) follows in second place at 62.68 gflops/W, while the full Frontier system, the TOP500 leader, stands sixth with 52.23 gflops/W.
It may be noted that the rankings in these lists are highly dynamic, as new systems surface and older ones slip down or retire. The 60th edition of the TOP500 list, released at the Supercomputing Conference in November 2022, shows Frontier as still the only truly exascale system on the list.
Rankings from the TOP 500 and GREEN 500 lists

Rank  TOP 500 (Nov 2022)        Rmax (pflops)   TOP 500 (earlier list)     GREEN 500 (Nov 2022)   Eff. (gflops/W)
1.    Frontier, USA             1,102           Fugaku, Japan              Henri, USA             65.09 (405)
2.    Fugaku, Japan             442             Summit, USA                Frontier TDS, USA      62.68 (32)
3.    Lumi, Finland             309.1           Sierra, USA                Adastra, France        58.02 (11)
4.    Leonardo, Italy           174.7           Sunway TaihuLight, China   Setonix-GPU            56.98 (15)
5.    Summit, USA               148.8           Tianhe-2A, China           Dardel-GPU             56.49 (68)
6.    Sierra, USA               94.6            Frontera, USA              Frontier, USA          52.23 (1)
7.    Sunway TaihuLight, China  93              Lassen, USA                Lumi, Finland          51.63 (3)
8.    Perlmutter, USA           64.6            Pangea, Spain              Atos THX-AB            41.41 (159)
9.    Selene, USA               63.4            Jupiter, Japan             MN-3                   40.90 (359)
10.   Tianhe-2A, China          61.4            SuperMUC, Germany          Champollion            38.55 (331)

(Figures in brackets in the last column are the systems' rankings on the TOP 500 list.)
Exascale Supercomputing Project of the USA - the driver of exascale computing
The first known exascale supercomputing project was launched by the USA in 2016 as a major initiative in high-performance computing and the next step beyond petascale supercomputers: a US $1.7 billion, seven-year R&D effort. The objective was to deliver a capable exascale ecosystem comprising hardware, system software and breakthrough solutions to the most critical challenges in scientific discovery, energy efficiency, economic competitiveness and national security. The expected completion date was 2022.
This was a multi-laboratory effort supported by the US Department of Energy's Office of Science and the National Nuclear Security Administration, with focus on:
· Development of high-performance, vendor technology-built systems, including the processor stack, high-speed low-latency interconnects and server platforms,
· Creation and deployment of a vertically integrated software stack, built over the years and optimized for exascale computing systems,
· Development of exascale-ready solutions to currently intractable problems in science and industry, including simulations and innovative data modelling.
Over a hundred top-class R&D specialists have worked on these areas of focus. With Oak Ridge National Laboratory as the host, the other laboratories involved were Argonne, Lawrence Berkeley, Lawrence Livermore, Los Alamos and Sandia, with HPE, Cray and AMD as collaborating industry partners.
While research and development work continues, the first exascale supercomputer, 'Frontier', has been commissioned and officially reported by Oak Ridge National Laboratory to have clocked 1.102 exaflops on the Linpack benchmark. The goal has been to provide the computational power necessary for solving complex scientific, engineering and big-data problems that are beyond the capability of present-day supercomputers. The system is already being used to model the life span of a nuclear reactor, uncover disease genetics, and deploy Artificial Intelligence for data analytics and modelling. Other potential areas of application are cancer research, drug discovery, nuclear fusion, research on exotic materials, super-efficient aircraft engines, stellar explosions and the like.
The size and magnitude of 'Frontier' can be gauged from the following figures, as obtained from the Oak Ridge National Laboratory project report:
· Built on the HPE Cray EX235a architecture
· 9,408 AMD CPUs, 37,632 GPUs and 8.7 million cores in a hybrid CPU-GPU processing configuration
· Each node contains one optimized 3rd-generation AMD EPYC 64-core 2 GHz 'Trento' CPU, four AMD Instinct MI250X GPU accelerators and 4 terabytes of flash memory
· Orion, a single parallel file system with 700 petabytes of storage
· Cray Slingshot interconnect, an Ethernet-based fabric with a bandwidth of 12.8 Tbps
· Semiconductors based on 6 nm technology, designed and fabricated by TSMC, Taiwan, executing double-precision floating-point operations
· 74 liquid-cooled cabinets, each weighing about 3,600 kg and housing 64 blades with two nodes per blade, with 90 miles of cables altogether
· A total system weight of approximately 1 million pounds, occupying about half an acre of data center floor space
· Around 21 MW of power consumption, for an energy efficiency of 52.23 gflops/W (a quick cross-check of these figures is sketched below)
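As noted in the last item, the headline numbers can be cross-checked against each other; the 21.1 MW value used below is an assumption refining the roughly 21 MW quoted above, so the result is approximate.

```python
# Cross-check: Linpack performance divided by power draw gives energy efficiency.
rmax_exaflops = 1.102               # Rmax from the Linpack benchmark, as quoted above
power_mw = 21.1                     # assumed power draw (~21 MW is quoted above)

rmax_gflops = rmax_exaflops * 1e9   # 1 exaflop/s = 1e9 gigaflop/s
power_watts = power_mw * 1e6
print(rmax_gflops / power_watts)    # ~52.2 gflops/W, consistent with the Green500 figure
```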
The USA has altogether 126 supercomputers on the TOP500 list, the second largest number after China. As of November 2022, it had six among the top ten supercomputers globally, with Frontier occupying the top position. Developments are underway to bring out two more exascale supercomputers, 'Aurora' at Argonne and 'El Capitan' at Lawrence Livermore National Laboratory.
Developments in other Countries
As the utility of supercomputers has been established globally, a number of countries have focused R&D programmes and have invested substantial effort and resources in building capability in this area, seeking solutions to scientific and engineering problems that are beyond the reach of current supercomputers.
China has a major programme, with 162 of the top 500 supercomputers coming from Chinese research laboratories and industries engaged in high-end scientific and engineering problems in life sciences, climate modelling, energy efficiency, new materials, manufacturing and financial modelling. Sunway TaihuLight at the National Supercomputing Centre, Wuxi, and Tianhe-2A at the National Supercomputer Center, Guangzhou, are two major centres of supercomputing development whose systems figure in the top ten. Though not officially submitted to the TOP500, China is reported to be building 10 exascale supercomputers by 2025, with the first two already operational: Sunway OceanLight at 1.3 exaflops peak / 1.05 exaflops sustained (2.3 exaflops theoretical capability) and Tianhe-3 at 1.7 exaflops peak / 1.3 exaflops sustained on the Linpack benchmark.
Japan has been building capability in the field to work on climate modelling, genetic research for new drug discovery, artificial intelligence and manufacturing. It has 33 of the top 500 supercomputers on the list. In fact, its Fugaku supercomputer, installed at the RIKEN Center for Computational Science, Kobe, stood at the top of the TOP500 list for two consecutive years until 2022, and the technical advances it represents bring it close to exascale capability in theoretical peak performance.
The European Union, with countries like Germany, Italy, Finland, France and Spain, is engaged in supercomputing development supporting new applications of Artificial Intelligence, big data and cloud computing for scientific, engineering and manufacturing fields. The EU had a total of 131 systems on the TOP500 list as last reported in November 2022. These include Germany's SuperMUC-NG at the Leibniz Supercomputing Centre, Garching, Spain's MareNostrum at the Barcelona Supercomputing Center, Italy's Leonardo at the CINECA supercomputing consortium and, most recently, Finland's Lumi, the biggest supercomputer yet in Europe, which uses the same architecture as Frontier.
Other countries with notable developments in supercomputing include South Korea, Brazil and India, to name a few.
India, for example, is running a government-supported, US $750 million National Supercomputing Mission (NSM), anchored at the Centre for Development of Advanced Computing (C-DAC), Pune, with the objective of building a strong ecosystem for exascale supercomputing capability covering advanced semiconductor processor chips, low-latency interconnects, server platforms, system software and algorithms, and of developing applications in computational biology, climate modelling, seismic data processing and AI-driven modelling. It has already installed several petascale supercomputers of the PARAM series at premier Indian research and academic institutions and aims to build a truly exascale supercomputer in the coming years. India had three supercomputers on the TOP500 list as of mid-2022, which reduced to two, PARAM Siddhi-AI and Mihir, in the list released in November 2022. Several R&D and academic institutions use such high-performance computing systems, notable ones being the National Centre for Medium Range Weather Forecasting, Noida, the Indian Institute of Tropical Meteorology, Pune, the Indian Institute of Science, Bengaluru, and the Indian Institute of Technology Bombay, Mumbai.
Technologies behind Exascale computing
The growing demand for computational power for wide-ranging scientific and engineering number-crunching and big-data problems, coupled with advances in simulation tools, data analytics, modelling, machine learning and Artificial Intelligence, is enabling breakthroughs in these fields now that exascale supercomputing power is available.
The key components of supercomputing systems technology are the miniature but very powerful semiconductor processor and memory chips, extremely low latency interconnects and low footprint system software.
As for processor chips, Intel and AMD have been the primary providers, with Intel contributing to about 75% of the top 500 supercomputers and AMD catching up, while IBM has a share in the supercomputers it builds. Line widths in semiconductor designs have been approaching atomic scales, down to 5 nm, with the capability to manufacture such chips in extremely controlled environments, where Taiwan Semiconductor Manufacturing Company (TSMC) has taken the lead.
For interconnects, Ethernet and InfiniBand have been by far the most popular networks used in exascale-class supercomputing, in a shift away from proprietary networks.
The main manufacturers associated with supercomputers are IBM, HPE, Cray, Nvidia, Fujitsu, NEC and Silicon Graphics. The architectures adopted by different manufacturers include vector processing, tightly coupled parallel processing, and commodity clusters in a distributed environment.
Artificial Intelligence and exascale computing are driving each other's development in a mutually reinforcing cycle: the growing demands of AI push the need for exascale computing on one hand, while the development of exascale computing drives advances in AI on the other, for example to:
· Provide the computational power necessary to build bigger, better and more accurate AI models,
· Provide the computational power to perform improved data analysis in areas dealing with exceptionally large amounts of data, as in healthcare, to determine patterns and make predictions,
· Provide the computational power for real-time analysis of large data sets in areas such as finance and security,
· Provide the computational power necessary to perform the large-scale simulations required in AI-driven applications such as robotics.
In this context, the HPL-MxP benchmark has evolved to highlight the convergence of high-performance computing with Artificial Intelligence through mixed-precision arithmetic. Frontier, the fastest exascale supercomputer, was measured at 7.9 exaflops on this benchmark. A minimal sketch of the underlying idea is given below.
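The mixed-precision idea behind HPL-MxP can be illustrated with a small NumPy sketch: solve a dense linear system in low precision, then recover double-precision accuracy by iterative refinement. The matrix size, its conditioning and the use of float32 (standing in for the FP16/TF32 arithmetic of GPU tensor cores) are assumptions for illustration; a real HPL-MxP run is a distributed, accelerator-based LU solve, not this toy loop.

```python
# Toy mixed-precision solve with iterative refinement (the idea behind HPL-MxP).
import numpy as np

rng = np.random.default_rng(0)
n = 500
A64 = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix (assumed)
b64 = rng.standard_normal(n)

# "Fast" low-precision solve; a real code would factor once and reuse the factors.
A32 = A64.astype(np.float32)
x = np.linalg.solve(A32, b64.astype(np.float32)).astype(np.float64)

# Refine using residuals computed in double precision.
for _ in range(5):
    r = b64 - A64 @ x                                 # high-precision residual
    dx = np.linalg.solve(A32, r.astype(np.float32))   # cheap low-precision correction
    x = x + dx.astype(np.float64)

print(np.linalg.norm(b64 - A64 @ x) / np.linalg.norm(b64))   # ~1e-15 relative residual
```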
Given the sheer size of exascale systems, the emphasis in exascale computing development has been not only on computing power but also on energy efficiency, scalability and reliability.
Further, as a spin-off, developments in exascale computing are driving advances in related areas such as memory and storage technologies, low-latency interconnects, system software and novel algorithms. Supercomputing capacity is also increasingly being delivered through the cloud by Microsoft, Google and Amazon.
Traditionally, supercomputers have been associated with research and development at national laboratories and universities, addressing a range of complex scientific problems. Lately they have also moved to address the needs of industry, in areas such as oil exploration (simulating reservoirs for efficient resource extraction), financial modelling (simulation of economic scenarios, risk assessment and prediction) and personalized content delivery (such as search and online advertising), where supercomputers manage heavy workloads to deliver real-time services.
These and many other complex and demanding scientific and commercial applications are ideal candidates for solution using exascale-level computing. They are discussed below.
Exascale computing driving key scientific and commercial applications
Scientific Applications
Computational Biology
Computational biology has emerged as an important field of science, using biological data to develop algorithms that help us understand biological systems more accurately and reduce the time needed to sequence a genome. Whether in molecular modelling, designing new drugs or sequencing genomes, huge amounts of data must be analyzed and accurate models developed to unlock the secrets of human biology and find medicines for deadly diseases. As this data grows exponentially by the day, extremely high computing power is required to process it in reasonable time.
It is here that exascale computers promise to handle data-crunching problems for which current supercomputers take unacceptably long. Specific areas of use include:
· Faster genome sequencing to speed up the process of drug discovery,
· Improved modelling and simulation of molecular and cellular interactions to provide deeper insights into the underlying biology, leading to the development of newer drugs,
· Analysis of individual genome data to help identify specific genetic mutations that may be causing a disease, and thereby help develop personalized medicines,
· Big data analytics on genetic data to extract meaningful insights applicable to drug discovery,
· Molecular modelling, a computational technique used to study the behavior of biological molecules, such as proteins, over time. Currently this process is limited by the time scales that can be studied and the size of the biological system that can be modelled. With exascale computing, researchers will be able to perform larger, longer and more accurate simulations, providing more detailed information about the behavior of biological molecules that can be used to design new drugs, understand disease mechanisms and develop better treatments (a toy simulation step is sketched after this list).
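As noted in the last item, simulated time is bought one tiny step at a time. The toy velocity-Verlet loop below, with an O(N^2) Lennard-Jones force evaluation, shows why: the particle count, time step and reduced units are arbitrary assumptions, and production molecular-dynamics codes add neighbour lists, parallel domain decomposition and millions of particles.

```python
# Toy molecular-dynamics step: velocity-Verlet integration of Lennard-Jones particles.
import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces, O(N^2) in the number of particles."""
    forces = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[i] - pos[j]
            r2 = d @ d
            inv6 = (sigma * sigma / r2) ** 3
            fij = 24 * eps * (2 * inv6 ** 2 - inv6) / r2 * d   # force on i from j
            forces[i] += fij
            forces[j] -= fij
    return forces

grid = np.arange(4) * 1.5                      # 4x4x4 lattice, spacing 1.5 sigma (assumed)
pos = np.array([[x, y, z] for x in grid for y in grid for z in grid], dtype=float)
vel = np.zeros_like(pos)
dt, mass = 1e-3, 1.0                           # reduced units, illustrative only

forces = lj_forces(pos)
for _ in range(100):                           # each pass advances one tiny time step
    pos += vel * dt + 0.5 * (forces / mass) * dt ** 2
    new_forces = lj_forces(pos)
    vel += 0.5 * (forces + new_forces) / mass * dt
    forces = new_forces

print("mean particle speed after 100 steps:", np.linalg.norm(vel, axis=1).mean())
```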
Particle Physics
This subject deals with the study of interactions among subatomic particles - protons, neutrons and electrons - under different environments, which may occur naturally or be simulated under controlled conditions. In either case, understanding the phenomena requires very large amounts of data to be analyzed.
Consider, for example, the following two situations:
· Nuclear reactions taking place in fission or fusion environments for the generation of nuclear energy. Designing these environments requires knowledge of parameters such as the amount of energy released, the size and type of the nuclear charge, safety requirements, the storage and active life of the processes involved, and the expected impact. This generates enormous amounts of data, of the order of terabytes to petabytes every second, to be studied and optimized. Exascale computing is ideally suited to simulating these environments, analyzing the data and optimizing the design in short times.
· CERN's Large Hadron Collider (LHC) is another example: its experiments typically produce about 1 petabyte of data per second. Analyzing this data to fully understand the physics of billions of particle collisions engages an estimated 7 million CPU cores spread over about 170 sites across the globe in a distributed environment. Solving such gigantic and complex problems calls for exascale-class computers to understand and simulate particle collisions with accuracy and speed, optimize collider designs, discover new particles, and run algorithms that identify new patterns and relationships (the scale of the data-reduction problem is sketched below).
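The data-reduction challenge in the item above can be made concrete with a few lines of Python: the 1 PB/s rate is the figure quoted above, while the event-energy distribution and selection threshold are purely illustrative assumptions, not CERN's actual trigger criteria.

```python
# Rough arithmetic on the raw data rate, plus a toy "trigger"-style event filter.
import numpy as np

raw_rate_pb_per_second = 1.0
print(raw_rate_pb_per_second * 86_400, "PB of raw data per day at 1 PB/s")

rng = np.random.default_rng(2)
event_energy = rng.exponential(scale=50.0, size=1_000_000)  # pseudo event energies (assumed)
threshold = 300.0                                           # assumed selection threshold
kept = event_energy > threshold
print(f"toy filter keeps {kept.mean():.3%} of events")      # ~0.25% survive for full analysis
```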
Radio Astronomy
As part of astronomical studies, the field has experienced an explosion of data collected from giant radio telescopes the world over, used to image galaxies of enormous size and discover distant planets.
Exascale computers have the potential to play a critical role in Radio Astronomy for more advanced data processing and analysis at faster speeds.
Following are some examples:
· The largest radio telescope project globally, the Square Kilometre Array (SKA), is expected to generate over an exabyte of data every day from celestial observations, all of which must be processed,
· Image reconstruction, as in interferometry, to create high-resolution images of galaxies (a toy sketch follows this list),
· Simulation to understand the behavior of radio telescopes and of the radio universe,
· Data reduction, to identify and extract the important information from the data,
· Use of Artificial Intelligence and Machine Learning algorithms to analyze the data generated by radio telescopes at a more granular level and identify new patterns, relationships and anomalies in astronomical studies.
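A toy version of the image-reconstruction item above: in interferometry the sky and the measured visibilities are related by a Fourier transform, and incomplete uv-coverage yields a "dirty" image that must later be deconvolved. The grid size, source positions and 15% sampling fraction below are arbitrary assumptions.

```python
# Toy interferometric imaging: FFT a model sky, sample part of the uv-plane, invert.
import numpy as np

n = 256
sky = np.zeros((n, n))
sky[100, 120] = 1.0                       # two point sources (arbitrary positions and fluxes)
sky[180, 60] = 0.6

visibilities = np.fft.fft2(sky)           # ideal visibilities over the full uv-plane

rng = np.random.default_rng(3)
uv_mask = rng.random((n, n)) < 0.15       # the array samples only ~15% of the uv-plane
dirty_image = np.fft.ifft2(visibilities * uv_mask).real

print(dirty_image.shape, round(dirty_image.max(), 3))   # deconvolution (e.g. CLEAN) comes next
```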
Climate Modelling & Weather Forecasting
Building accurate climate models requires knowledge of a large number of interacting parameters: for example, weather-cycle changes, environmental pollution from industrial activity, stubble burning and automobiles, changes in earth temperatures, and spatial and temporal climatic variations. As the data emanating from these parameters is generally huge, processing it with currently available supercomputers is time consuming and at times not sufficient to build accurate models.
Exascale-level computers could help process these data sets in reasonable time to build better climate models that give insight into how the environment is affected by human activities, estimate future environmental degradation and point towards corrective actions that would bring about measured improvements. Exascale computers thus help with:
· More detailed and accurate climate simulations, enabling better predictions of future climate change and its impact,
· Weather forecasting, which similarly involves solving in real time the complex mathematical equations governing weather processes; exascale computers can analyze these in good time so that weather can be forecast more accurately over medium to long ranges and across distributed geographical areas (a minimal stencil computation of the kind such models use is sketched below).
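The kind of equation-solving referred to in the last item can be illustrated with a minimal finite-difference sketch: explicit time-stepping of 2-D heat diffusion on a coarse grid. Grid size, diffusivity and time step are assumptions chosen only to satisfy the stability condition; real atmosphere and ocean models couple many such equations on global 3-D grids, which is what demands exascale power.

```python
# Minimal explicit finite-difference stencil: 2-D heat diffusion with periodic boundaries.
import numpy as np

nx = ny = 100
dx, dt, alpha = 1.0, 0.2, 1.0        # stability requires dt <= dx**2 / (4 * alpha) = 0.25

T = np.zeros((ny, nx))
T[45:55, 45:55] = 100.0              # initial hot patch

for _ in range(500):
    laplacian = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
                 np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4 * T) / dx ** 2
    T += alpha * dt * laplacian      # explicit Euler update of the temperature field

print("peak temperature after 500 steps:", round(T.max(), 2))   # the hot spot has diffused
```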
Exascale computing driving AI applications
There are a number of areas in which Artificial Intelligence is already used and where exascale computing will provide the computational resources to train large-scale AI models, leading to new applications such as:
· Large scale training of deep neural networks,
· Improved natural language processing for speech recognition, information retrieval and machine translation,
· Enhanced computer vision and image recognition,
· More efficient Robotics and Autonomous systems,
· Improved fraud detection and enhanced cyber security,
· Improved predictive maintenance and quality control in manufacturing
These are just a few examples of exascale computing driving major applications whose solutions have thus far been constrained by the inadequacy of available supercomputing power.
Commercial Applications
Beyond this, exascale computing can also address a number of applications in the commercial domain. Notable among these are:
· Financial modelling, including complex financial simulations, risk assessments and predictions,
· AI training of large models in areas such as robotics and autonomous systems,
· Oil and gas, for reservoir simulation and optimization, helping to extract resources more efficiently,
· Healthcare, for accelerating the discovery of new drugs, high-resolution medical imaging and personalized medicines against dreadful diseases like cancer,
· Cyber security, for large-scale data analysis and simulations to prevent and respond to cyber-attacks,
· Engineering industries, for better preventive maintenance through advanced scheduling, tool-fatigue monitoring and sensor data analysis.
Financial Modelling
A state's economic system typically comprises interconnected regulators, traders, investors and exchanges. Financial systems tend to be complex because fast-varying factors generate huge amounts of data to be analyzed in real time. As an example, a single trading system generates 10-20 terabytes of data every day; over a three-year period that amounts to roughly 20 petabytes available for meaningful and more accurate modelling. Exascale computers are expected to model financial processes that have not been possible with present-day supercomputers because of their inadequate computing power and memory capacity (a toy Monte Carlo scenario simulation is sketched below).
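The scenario simulation mentioned above can be sketched with a toy Monte Carlo run: many one-day price paths under geometric Brownian motion yield a value-at-risk estimate. The drift, volatility, portfolio value and path count are arbitrary assumptions; production risk engines simulate far richer models over millions of scenarios, which is where the computing demand comes from.

```python
# Toy Monte Carlo scenario simulation: 1-day 99% value-at-risk under geometric Brownian motion.
import numpy as np

rng = np.random.default_rng(4)
portfolio_value = 1_000_000.0               # assumed portfolio size
mu, sigma = 0.05, 0.2                       # assumed annual drift and volatility
dt = 1 / 252                                # one trading day
n_paths = 100_000

z = rng.standard_normal(n_paths)
returns = np.exp((mu - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z) - 1
losses = -portfolio_value * returns
var_99 = np.quantile(losses, 0.99)
print(f"1-day 99% VaR ~ {var_99:,.0f}")     # loss not exceeded on 99% of simulated days
```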
Following are example applications:
· Enable processing of large amounts of financial data, leading to the recognition of patterns and thereby more accurate predictions of emerging market scenarios and investment opportunities,
· Permit more accurate and exhaustive simulation of the complex financial scenarios encountered in real economies, enabling better risk management and sound decision support,
· Facilitate balancing and optimization of large portfolios, taking into consideration factors like market conditions, asset correlations and other risk metrics,
· Enable development of more sophisticated algorithms for trading in financial markets, leading to more timely decisions and thereby greater profitability,
· Closely monitor financial markets and detect attempts at fraud or manipulation, thereby maintaining trust and ensuring the stability and fairness of the financial system.
Cyber Security
Amid the digital transformation of global society, cyber security assumes significant importance: protecting intellectual property from being surreptitiously taken, ensuring data privacy and preventing fraud in physical, financial and corporate systems. More often than not, technology itself is the recourse for securing cyber systems, and here exascale computing has the potential to play a significant role in ensuring cyber security and preventing cyber threats.
Some examples are given here:
· Allow processing of large amounts of network data in real time, discovering patterns that enable faster and more accurate detection of cyber threats (a toy anomaly-detection sketch follows this list),
· Provide high computational resources to train and run large scale Machine Learning models which can be used to detect cyber threats in real time,
· Permit fast scanning of large networks and systems to identify any vulnerability that could be exploited by cyber criminals to their gains,
· Enable faster and more secure encryption/decryption of sensitive data using highly complex codes to make deciphering extremely difficult,
· With increasing illegal cryptocurrency transactions taking place, support the development of blockchain-based solutions that can realize secure and tamper-proof systems.
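A toy version of the pattern-based threat detection in the first item of this list: flag minutes whose network traffic deviates sharply from a rolling baseline. The traffic statistics, injected burst and alert threshold are illustrative assumptions, not a production detector.

```python
# Toy anomaly detection on per-minute network traffic using a rolling z-score.
import numpy as np

rng = np.random.default_rng(5)
traffic = rng.normal(100, 10, size=1440)            # assumed per-minute request counts for a day
traffic[900:905] += 120                             # injected burst (e.g. exfiltration or DDoS)

window = 60
anomalies = []
for t in range(window, len(traffic)):
    baseline = traffic[t - window:t]
    z = (traffic[t] - baseline.mean()) / (baseline.std() + 1e-9)
    if z > 4:                                       # assumed alert threshold
        anomalies.append(t)

print(anomalies)                                    # minutes ~900-904 are flagged
```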
Industrial Applications
Autonomous systems are increasingly popular in the defence, space and engineering industries as completely independent systems that can operate on their own under varying situations. As this field evolves and new application requirements emerge, exascale computing is expected to play an effective role in developing advanced and safer autonomous systems.
Following are some example application areas:
· Provide necessary computational resources to train and run large scale AI models that can perceive, reason and act in complex environments,
· Enable simulation and testing of autonomous systems at a much larger scale with greater detail than is currently possible leading to improved safety and reliability,
· Support development of systems that can make real-time decisions using large amounts of data, enabling operation in complex, dynamic environments,
· Make possible integration of real time data from multiple sources such as cameras, LIDAR, radar and other sensors to provide more complete and accurate situational assessment of the fast-changing environment,
· Support development of autonomous systems that can plan safe and efficient paths through complex environments and execute precise control actions to follow optimized routes (a toy grid-based path planner is sketched below).
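The path-planning item above can be illustrated with a compact A* search on a small occupancy grid; the map, start and goal are arbitrary assumptions. Real autonomous systems plan in continuous, dynamic 3-D environments and re-plan in real time, which is where the heavy simulation and training workloads arise.

```python
# Toy A* path planner on a small occupancy grid ('#' cells are obstacles).
import heapq

GRID = [
    "..........",
    "..####....",
    "..#..#.##.",
    "..#..#..#.",
    ".....#..#.",
    ".#####..#.",
    "........#.",
]

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan-distance heuristic
    open_set = [(h(start), 0, start, [start])]                # (f-score, cost so far, cell, path)
    seen = set()
    while open_set:
        _, g, pos, path = heapq.heappop(open_set)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        r, c = pos
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != "#":
                heapq.heappush(open_set,
                               (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]))
    return None

print(astar(GRID, (0, 0), (6, 7)))   # a shortest obstacle-free route, or None if unreachable
```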
In an evolving Industrie 4.0 environment, manufacturing industries will see exascale computing serving to build safe, efficient and reliable machines. Predictive maintenance, for example, aims to forecast when a machine is likely to fail and to schedule maintenance before the failure occurs. This is achieved by analyzing vast and varied data from sensors and other documented sources on a running machine, processed and analyzed in real time, to achieve higher levels of efficiency, reliability and cost-effectiveness as well as reduced downtime and maintenance costs (a toy sketch of this idea follows).
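A minimal sketch of that predictive-maintenance idea: fit a linear trend to a slowly degrading sensor reading and estimate when it will cross a failure threshold. The simulated vibration signal, threshold and units are illustrative assumptions; real systems fuse many sensors and use far more sophisticated models.

```python
# Toy predictive maintenance: extrapolate a sensor's linear trend to a failure threshold.
import numpy as np

rng = np.random.default_rng(6)
hours = np.arange(500)
vibration = 2.0 + 0.004 * hours + rng.normal(0, 0.05, size=hours.size)  # assumed slow degradation
failure_threshold = 5.0                                                  # assumed failure limit

slope, intercept = np.polyfit(hours, vibration, 1)       # linear trend of the sensor readings
hours_to_failure = (failure_threshold - intercept) / slope - hours[-1]
print(f"schedule maintenance in ~{hours_to_failure:.0f} operating hours")
```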
Other uses to which exascale computing could advantageously be put in industry are:
· Simulation and modelling of complex industrial processes to reduce waste and improve efficiency,
· Big data analytics on increasing data that is generated and processed in real time providing valuable insights into the various industrial processes and systems,
· Train large machine learning models allowing development of advanced AI based systems for numerous industrial applications,
· Help secure cyber systems, given the increasing use of connected devices in manufacturing, by analyzing data in real time.
In the oil and gas industry, as another example, it can help with accurate reservoir simulation and optimization and with more efficient extraction of resources.
In healthcare, exascale computing is poised to greatly accelerate the discovery of new drugs, improve medical imaging, determine individual ailments so as to develop personalized medicines, and support complex and huge genomic analyses as the data emerges.
The full potential of exascale computing in the industrial sector is yet to be fully explored, and new applications and possibilities are likely to emerge as we move to end-to-end automation of machinery using IoT devices, robots and the like in the Industrie 4.0 environment.
Some Market Estimates
Market assessments of exascale supercomputing systems have been published in market research reports. The market is divided into commercial, government and research segments, with much of the demand originating from government agencies and from research in scientific fields.
According to a report from Technavio, the global supercomputer market will grow by US $12.5 billion, from US $8.5 billion to about US $21 billion, during 2021-2025, at a CAGR of about 20%. Increasing use of Artificial Intelligence, machine learning and the cloud is cited as the main reason for the growth.
Some reports break the market down region-wise, application-wise, by operating system or by processor. According to Precedence Research, the market will grow to US $21.9 billion by 2030. The US accounts for about 25% of this market, with 46% coming from Asia Pacific, cited as the fastest-growing and most vibrant region for the supercomputing market; Europe accounts for a 16% share, and the rest comes from the remainder of the globe.
Of the top 500 supercomputers, the largest number, 162, are from China, with about 126 from the US and 131 from the European Union. India currently has two supercomputers on this list.
Processor-wise, the market is captured by Intel, AMD and IBM, the processor being a key component of a supercomputer, with Intel providing about 75% of the top 500 systems, AMD currently making gains, and IBM having a share in the supercomputers it builds.
Operating system-wise, the market is divided between Linux and Unix. The leading players are Cray, Atos, Fujitsu, IBM, Silicon Graphics, Nvidia, NEC and HP Enterprise.
The market is also broken down by architecture type: vector processing (for lower-power versions), tightly coupled parallel processing (for high-speed versions) or commodity clusters.
Exascale computing - What lies ahead
Exascale computing is a significant milestone in the evolution of supercomputing technology. Besides serving major, hitherto intractable challenges in science and engineering, it is creating avenues for the development of powerful processing chips approaching atomic-level dimensions, memory and storage technologies, low-latency interconnects, lower-footprint system software, new architectures and algorithms, and energy-efficient designs that could lead to even more powerful and efficient supercomputing systems.
Growing demand for AI workloads is increasingly driving the need for exascale computing powers, while development of exascale computing itself is supporting advancements in AI and their innovative usage in service and manufacturing industry and daily life.
Notwithstanding, exascale computing is not the ultimate realization of supercomputing, and there is always a space for new developments and advancements in this area. As an example, intensive research and development in Quantum computing is taking place which has the potential to greatly surpass exascale computing in terms of computing speed and efficiency. In fact, both can coexist and complement each other as they serve different types of problems. Exascale computing, for example, is better suited for tasks like simulation and data modelling applications, whereas Quantum computing is ideal for tasks like factorization and optimization.
Conclusion
Supercomputers, described as a gateway to innovation, have served numerous scientific, engineering and commercial fields that deal with enormous amounts of data and number-crunching problems. As the complexity and magnitude of problems in these fields have increased, exascale supercomputers, delivering on the order of a quintillion floating-point operations per second, far beyond the power of earlier supercomputers, have come about to solve them.
Applications in scientific areas such as climate modelling, genomic sequencing and drug discovery, nuclear simulation and radio astronomy, and in commercial areas such as financial modelling, cyber security, breathtaking graphics for motion pictures and the Industrie 4.0 environment, are being served, with their extremely heavy workloads managed to deliver real-time services.
Growing demand for AI workloads has also been driving the need for exascale computing, while development of exascale computing is itself supporting several advancements in innovative usage of AI.
Exascale computing is not the ultimate in supercomputing as new developments like Quantum computing have the potential to surpass exascale computing even as both will co-exist and complement each other in their respective areas of strength.
References
i. Report from Oak Ridge National Laboratory: https://www.ornl.gov/news/frontier-supercomputer-debuts-world’s-fastest-breaking-exascale-barrier
ii. TOP500 list: https://www.top500.org
iii. Exascale computing, Wikipedia
iv. India's National Supercomputing Mission: https://nsmindia.in
v. Supercomputer developments in China: https://www.tomshardware.com/news/two-chinese-exascale supercomputers
vi. About the Exascale Computing Project: https://www.exascaleproject.org/about
vii. https://www.hpe.com/us/en/newsroom/blog-post/2022/10/exascale-supercomputing-is-here-and-it-will-change-the-world
viii. https://www.anl.gov/article/the-age-of-exascale-and-the-future-of-supercomputing