Computing systems capable of at least one exaFLOPS (10^18 double-precision floating-point operations per second)
The HPE Frontier system, assembled at the Oak Ridge Leadership Computing Facility, holds the title of the world's first exascale supercomputer.
Exascale computing refers to computing systems capable of performing at least 10^18 IEEE 754 double-precision (64-bit) operations (multiplications and/or additions) per second, i.e., one exaFLOPS. This metric is a fundamental measure of supercomputer performance. [1]
Exascale computing is a significant achievement in computer engineering. Its primary appeal is the promise of substantially improved throughput for a wide range of scientific applications, such as more accurate predictions in weather forecasting, climate modeling, and personalized medicine. [2] Exascale processing power is also estimated to match the operational capacity of the human brain at the neural level, a target once pursued by the now-defunct Human Brain Project. [3] For years there has been an intense race among nations to be the first to build an exascale computer; the competition is adjudicated and ranked within the closely watched TOP500 list. [4] [5] [6] [7]
The world's first publicly acknowledged exascale computer, Frontier, was announced in 2022. [8] As of November 2024, the world's fastest exascale supercomputer is Lawrence Livermore National Laboratory's El Capitan. [9]
A newer exascale supercomputer, JUPITER, [10] was inaugurated in Germany in 2025. Although it occupies 4th position in the overall global ranking, it holds the number-one position on the Green500 ranking: the system runs entirely on renewable energy and combines cutting-edge cooling with energy-reuse features, making it the world's most energy-efficient supercomputer. [11]
Definitions
Floating point operations per second (FLOPS) are a fundamental, if limited, measure of computer performance, and the most common yardstick for supercomputers. FLOPS can be recorded at various levels of precision; however, the standard used by the TOP500 supercomputer list counts 64-bit (double-precision floating-point format) operations per second, benchmarked with the High Performance LINPACK (HPLinpack) test, an industry-standard benchmark that remains ubiquitous despite well-known criticisms. [12] [1]
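To make the metric concrete, the sketch below times a dense double-precision solve, the kernel that HPLinpack is built around, and converts the elapsed time into a sustained FLOP rate. This is a minimal illustration, not the HPL benchmark itself; the operation count 2/3·n³ + 2·n² is the conventional HPL figure for solving Ax = b, and the matrix size n is an arbitrary choice.

```python
# Minimal sketch (not the HPL benchmark itself): time a dense 64-bit
# LU-based solve and convert the elapsed time into a sustained FLOP rate.
import time
import numpy as np

def estimate_flops(n: int = 4096) -> float:
    rng = np.random.default_rng(0)
    a = rng.standard_normal((n, n))        # float64 (double precision) by default
    b = rng.standard_normal(n)

    start = time.perf_counter()
    x = np.linalg.solve(a, b)              # LU factorization + triangular solves
    elapsed = time.perf_counter() - start

    ops = (2.0 / 3.0) * n**3 + 2.0 * n**2  # conventional HPL operation count
    return ops / elapsed

if __name__ == "__main__":
    rate = estimate_flops()
    print(f"~{rate / 1e9:.1f} GFLOPS sustained ({rate / 1e18:.2e} exaFLOPS)")
```

On a typical workstation this lands in the tens to hundreds of GFLOPS; an exascale system sustains 10^18 FLOPS on the full HPL run, roughly seven orders of magnitude more.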
A distributed computing system had in fact breached the 1 exaFLOPS barrier before Frontier. However, the conventional understanding of the metric refers to single, cohesive computing systems. Similarly, other supercomputers had previously exceeded 1 exaFLOPS using alternative precision measures, which do not satisfy the criteria for exascale computing under the established standard metric. [1] HPLinpack is arguably not a good general measure of a supercomputer's utility in real-world applications, yet despite these recognized shortcomings it remains the widely accepted standard for performance measurement. [13] [14]
Technological challenges
Enabling applications to fully exploit the capabilities of exascale computing systems is widely acknowledged to be difficult. [15] Developing data-intensive applications that can thrive on exascale platforms requires the creation and adoption of new, effective programming paradigms and robust runtime systems. [16] The Folding@home project, the first to break the computational barrier in a distributed sense, took a different approach: it operated on a client–server model network architecture, dispatching fragments of work to hundreds of thousands of individual client machines. This decentralized methodology, while not a "single system" in the conventional sense, showcased the power of collective effort; the pattern is sketched below. [17] [18]
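As an illustration of that client–server work-unit pattern, the following sketch splits a job into independent fragments, hands them to simulated clients, and aggregates the results. It is purely hypothetical: the real Folding@home protocol, work-unit format, and infrastructure are far more elaborate, and threads stand in here for remote machines.

```python
# Hypothetical sketch of the client-server work-unit pattern used by
# distributed computing projects such as Folding@home. Threads stand in
# for remote client machines; the real protocol is far more elaborate.
import queue
import threading

work_units: "queue.Queue[list[int]]" = queue.Queue()
results: "queue.Queue[tuple[int, int]]" = queue.Queue()

def client(client_id: int) -> None:
    """Simulated client: fetch work units until none remain, report results."""
    while True:
        try:
            unit = work_units.get_nowait()
        except queue.Empty:
            return
        # Stand-in for the real computation (e.g., a folding trajectory).
        partial = sum(x * x for x in unit)
        results.put((client_id, partial))

# "Server" side: split one large job into independent fragments.
job = list(range(100_000))
chunk_size = 1_000
for i in range(0, len(job), chunk_size):
    work_units.put(job[i:i + chunk_size])

threads = [threading.Thread(target=client, args=(c,)) for c in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

total = 0
while not results.empty():
    _, partial = results.get()
    total += partial
print(f"aggregated result from 8 simulated clients: {total}")
```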
History
The journey to exascale was a gradual ascent. The first petascale computer, a machine capable of 10^15 FLOPS, became operational in 2008. [19] At a supercomputing conference in 2009, Computerworld projected that exascale implementation would be achieved by 2018. [20] However, by June 2014 a noticeable stagnation in the growth of the TOP500 supercomputer list led many observers to question whether exascale systems would materialize by the ambitious target of 2020. [21]
These initial projections proved premature: exascale computing, in its strictest definition, was not realized by 2018. Nevertheless, in that year the Summit OLCF-4 supercomputer performed 1.8×10^18 calculations per second, using an alternative metric, while analyzing genomic information. [22] The team responsible was awarded the Gordon Bell Prize at the 2018 ACM/IEEE Supercomputing Conference. [citation needed]
The exaFLOPS barrier, in the broader distributed sense, was first broken in March 2020 by the Folding@home distributed computing network, in an effort dedicated to coronavirus research. [23] [18] [24] [25] [26]
In June 2020, [27] the Japanese supercomputer Fugaku achieved 1.42 exaFLOPS, measured using the alternative HPL-AI benchmark.
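HPL-AI credits mixed-precision arithmetic: the heavy work is done in low precision and the answer is refined to double-precision accuracy. The sketch below shows the underlying idea with classic iterative refinement, factoring in float32 and correcting residuals in float64. It is only a conceptual analogue: the benchmark itself runs the factorization in fp16 with a more sophisticated refinement scheme, and the matrix here is an artificial well-conditioned test case.

```python
# Conceptual sketch of mixed-precision iterative refinement, the idea
# behind HPL-AI: do the expensive factorization in low precision, then
# recover double-precision accuracy through cheap residual corrections.
# float32 stands in for fp16, which NumPy's dense solver does not support.
import numpy as np

def mixed_precision_solve(a64, b64, iters=5):
    a32 = a64.astype(np.float32)                        # low-precision copy
    x = np.linalg.solve(a32, b64.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b64 - a64 @ x                               # residual in full fp64
        # In practice the low-precision LU factors are reused; re-solving
        # keeps this sketch NumPy-only.
        dx = np.linalg.solve(a32, r.astype(np.float32)).astype(np.float64)
        x += dx                                         # correction step
    return x

rng = np.random.default_rng(1)
n = 500
a = rng.standard_normal((n, n)) + n * np.eye(n)         # well-conditioned test matrix
b = rng.standard_normal(n)
x = mixed_precision_solve(a, b)
print("residual norm:", np.linalg.norm(a @ x - b))      # near fp64 machine precision
```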
The year 2022 marked a pivotal moment with the announcement of the world's first publicly acknowledged exascale computer, Frontier, which demonstrated an Rmax of 1.102 exaFLOPS in June 2022. [8] As of November 2024, El Capitan holds the title of the world's fastest supercomputer, at 1.742 exaFLOPS. [9]
Development
The pursuit of exascale computing has been a concerted, often competitive, international endeavor, with various nations pouring significant resources into its development.
United States
In 2008, two organizations within the US Department of Energy, the Office of Science and the National Nuclear Security Administration, funded the Institute for Advanced Architectures to foster the development of an exascale supercomputer; Sandia National Laboratory and Oak Ridge National Laboratory were also brought into the effort, tasked with contributing to the designs of future exascale systems. [28] The technology was expected to be applied in computation-intensive research areas, including basic research, engineering, earth science, biology, materials science, energy issues, and national security. [29]
In January 2012, Intel purchased the InfiniBand product line from QLogic for US$125 million in order to fulfill its promise of developing exascale technology by 2018. [30]
By 2012, the United States had allocated $126 million for the development of exascale computing. [31]
In February 2013, [32] the Intelligence Advanced Research Projects Activity (IARPA) started the Cryogenic Computer Complexity (C3) program, which envisioned a new generation of superconducting supercomputers operating at exascale speeds and based on superconducting logic. In December 2014, IARPA announced multi-year contracts with IBM, Raytheon BBN Technologies, and Northrop Grumman to develop the foundational technologies for the C3 program. [33]
On July 29, 2015, President Barack Obama signed an executive order creating the National Strategic Computing Initiative, which called for the accelerated development of an exascale system and funded research into post-semiconductor computing. [34] The Exascale Computing Project (ECP), born of this initiative, aimed to build an exascale computer by 2021. [35]
On March 18, 2019, the United States Department of Energy and Intel announced that the first exaFLOPS supercomputer, Aurora, would be operational at Argonne National Laboratory by the end of 2022. The machine was to be delivered to Argonne by Intel and Cray (now part of Hewlett Packard Enterprise), and was expected to use Intel Xe GPGPUs alongside a future Xeon Scalable CPU, at a projected cost of US$600 million. [36] [37]
On May 7, 2019, the U.S. Department of Energy announced a contract with Cray (now Hewlett Packard Enterprise) to build the Frontier supercomputer at Oak Ridge National Laboratory. Frontier was anticipated to be operational in 2022 [38] and, with a projected performance exceeding 1.5 exaFLOPS, to be the world's most powerful computer at that time. [39]
On March 4, 2020, the U.S. Department of Energy announced a contract with Hewlett Packard Enterprise and AMD to build the El Capitan supercomputer, at an estimated cost of US$600 million, for installation at the Lawrence Livermore National Laboratory (LLNL). Its primary, though not exclusive, purpose is nuclear weapons modeling. El Capitan had first been announced in August 2019, when the DOE and LLNL disclosed the purchase of a Shasta supercomputer from Cray. The machine was projected to be operational in early 2023 with a performance of 2 exaFLOPS, integrating AMD CPUs and GPUs, with four Radeon Instinct GPUs per EPYC Zen 4 CPU, to accelerate artificial intelligence tasks, while consuming approximately 40 MW of electrical power. [40] [41]
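Taking the quoted figures at face value, a back-of-the-envelope calculation gives the implied energy efficiency; this is illustrative arithmetic only, not a measured Green500 result.

```python
# Illustrative arithmetic from the quoted El Capitan figures:
# 2 exaFLOPS of projected performance against ~40 MW of power draw.
performance_flops = 2e18   # 2 exaFLOPS (projected)
power_watts = 40e6         # ~40 MW

flops_per_watt = performance_flops / power_watts
print(f"{flops_per_watt / 1e9:.0f} GFLOPS per watt")   # -> 50 GFLOPS/W
```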
By May 2022, the United States had brought its first exascale supercomputer, Frontier, online. In June 2024, Argonne National Laboratory's Aurora became the country's second such machine, followed five months later by El Capitan achieving operational status. As of November 2024, the United States remains the sole nation with multiple operational exascale supercomputers.
Japan
In Japan, the RIKEN Advanced Institute for Computational Science began planning an exascale system in 2013, aiming for deployment by 2020, with a design constraint that the system consume less than 30 megawatts. [42] In 2014, Fujitsu was awarded a contract by RIKEN to develop the next-generation supercomputer, the successor to the K computer. The successor, named Fugaku, targeted a performance of at least 1 exaFLOPS and full operation by 2021. In 2015, at the International Supercomputing Conference, Fujitsu announced that the supercomputer would use processors implementing the ARMv8 architecture with extensions co-designed with ARM Limited. [43] Fugaku was partially brought into operation in June 2020 [27] and achieved 1.42 exaFLOPS (using fp16 with fp64 precision) in the HPL-AI benchmark, becoming the first supercomputer to achieve 1 exaFLOPS under that metric. [44] Named after Mount Fuji, Japan's tallest peak, Fugaku held the No. 1 ranking on the TOP500 list announced on November 17, 2020, with a calculation speed of 442 quadrillion calculations per second, or 0.442 exaFLOPS. [45]
China
As of June 2022, China had two systems in the top ten of the world's fastest supercomputers. According to the national plan for the next generation of high-performance computers and the head of the school of computing at the National University of Defense Technology (NUDT), China was to develop an exascale computer during its 13th Five-Year-Plan period (2016–2020), with the expectation that it would enter service in the latter half of 2020. [46] The government of Tianjin Binhai New Area, NUDT, and the National Supercomputing Center in Tianjin have been collaborating on the project; following Tianhe-1 and Tianhe-2, the exascale successor is expected to be named Tianhe-3. As of 2023, China is reported to possess two operational exascale computers, Tianhe-3 (Xingyi) [47] and Sunway OceanLight, with a third under construction. Neither machine currently appears on the official TOP500 list. [48] [49]
European Union & United Kingdom
See also: Supercomputing in Europe
The European Union initiated several projects in 2011 aimed at developing technologies and software for exascale computing: the CRESTA project (Collaborative Research into Exascale Systemware, Tools and Applications), [50] the DEEP project (Dynamical ExaScale Entry Platform), [51] and the Mont-Blanc project. [52] A major European initiative focused on the transition to exascale is the MaX (Materials at the Exascale) project. [53] The Energy oriented Centre of Excellence (EoCoE) exploits exascale technologies to support research and applications in carbon-free energy. [54]
In 2015, the Scalable, Energy-Efficient, Resilient and Transparent Software Adaptation (SERT) project, a research collaboration between the University of Manchester and the STFC Daresbury Laboratory in Cheshire, received approximately £1 million from the United Kingdom's Engineering and Physical Sciences Research Council (EPSRC) under the Software for the Future II programme. The project commenced in March 2015, in partnership with the Numerical Analysis Group (NAG), Cluster Vision, and the Science and Technology Facilities Council (STFC). [55]
On September 28, 2018, the European High-Performance Computing Joint Undertaking (EuroHPC JU) was formally established by the EU. The EuroHPC JU set an ambitious goal to construct an exascale supercomputer by 2022/2023. The undertaking was designed to be jointly funded by its public members, with a total budget of approximately €1 billion. The EU’s direct financial contribution to this endeavor was €486 million. [56] [57]
In March 2023, the government of the United Kingdom announced a £900 million investment in the development of an exascale computer. [58] The project was cancelled in August 2024. [59]
Taiwan
In June 2017, Taiwan's National Center for High-Performance Computing began work toward designing and building the first Taiwanese exascale supercomputer, funding the creation of a new intermediary supercomputer based on a full technology transfer from the Fujitsu corporation of Japan, which was then building the fastest A.I.-based supercomputer in Japan. [60] [61] [62] [63] [64] Beyond this official initiative, several independent efforts across Taiwan have focused on advancing exascale supercomputing technology. Notably, the Foxconn Corporation recently designed and built the largest and fastest supercomputer in Taiwan, engineered to serve as a stepping stone toward the design and construction of a state-of-the-art exascale supercomputer. [65] [66] [67] [68]
India
In 2012, the Indian Government proposed committing US$2.5 billion to supercomputing research during its 12th five-year plan period (2012–2017), with the project to be managed by the Indian Institute of Science (IISc) in Bangalore. [69] It was subsequently revealed that India planned to develop a supercomputer with processing power in the exaFLOPS range. [70] The machine is intended to be developed by C-DAC within five years of official approval. [71] These future supercomputers are planned to rely on microprocessors developed indigenously by C-DAC in India. [72] In a 2023 presentation, C-DAC outlined its plans for an indigenously developed exascale supercomputer, to be named Param Shankh, powered by an indigenous 96-core, ARM architecture-based processor nicknamed AUM (ॐ). [73]