Asian Scientist Magazine (Oct. 12, 2022) — By Jill Arul and Hannan Azmir — As a rule of the trade, researchers are constantly working towards bigger, better and faster tools. This is especially true when it comes to high performance computing (HPC)—a field that underpins a variety of modern research endeavors from climate studies to biomedical sciences.
For the last 14 years, the sector has been operating at petascale—beginning with IBM’s Roadrunner in 2008, capable of a sustained performance of 1.026 quadrillion floating point operations per second, or 1.026 petaFLOPS. The next major milestone, operating at a quintillion FLOPS—known as exascale—has just been officially reached. Looking ahead, exascale computing is expected to drive research across the globe with speedy and efficient calculations.
Just this year, the TOP500 list of the world’s fastest supercomputers revealed the Frontier system at Oak Ridge National Laboratory in the United States to be the first ‘true’ exascale machine—achieving a LINPACK performance of 1.102 exaFLOPS. However, it is important to note that other machines, like Japan’s Fugaku, have operated at exascale, albeit on alternative benchmarks rather than the most commonly applied LINPACK benchmark.
While the Frontier system is considered the first true exascale machine at the moment, it may not be the only one for long as Asia’s supercomputing centers leap towards sustained exascale performance.
More than a drop in the ocean
Operating at 93 petaFLOPS, China’s Sunway TaihuLight at the National Supercomputing Center in Wuxi remains in the top 10 of the world’s most powerful supercomputers. However, it seems that China has more in the works. The nation is thought to be operating two exascale supercomputers, neither of which has been officially disclosed.
The successor to the Sunway TaihuLight, known as the Sunway OceanLight, is reported to have reached a peak performance of 1.3 exaFLOPS when benchmarked in March 2022.
According to available research, the machine is already in use and plays a starring role in a recent project designed to approach brain-scale AI where the number of parameters is similar to the number of synapses in the human brain. In fact, the project is the first to target training brain-scale models on an entire exascale supercomputer, revealing the full potential of the machine.
So far, it has been reported that the largest tested configuration of the OceanLight system accessed 107,520 nodes for an impressive 41.93 million cores across 105 cabinets.
Meanwhile, at the National University of Defense Technology, China holds another supercomputer, Tianhe-3, potentially capable of performing at roughly the same speeds. Similarly deployed for training deep neural networks, Tianhe-3 operates on fully domestic architecture. Specifically, the machine is based on the FeiTeng line of processors from Phytium. As China continues to forge ahead as a leading HPC center, it does so independently and relies on native architectures to build bigger and faster machines.
The importance of application
Operating at a LINPACK benchmark of 442 petaFLOPS, Japan’s Fugaku supercomputer previously held the top spot on the TOP500 list from June 2020 to November 2021. Housed at the RIKEN Center for Computational Science and jointly developed by Fujitsu and RIKEN, the machine has played a significant role in research in Japan—leading to breakthroughs in medicine and climate modeling.
For example, it was announced that with Fugaku, researchers were able to develop a new artificial intelligence (AI) system capable of swiftly predicting tsunami flooding on regular computers. In January 2021, the supercomputer was also used to simulate the movement of molecules and determine how proteins interact with roughly 2,000 existing drugs to find an effective treatment for COVID-19.
In such applications, supercomputers can operate at reduced precision to increase performance while maintaining sufficient accuracy—a mode in which Fugaku has achieved performance above one exaFLOPS.
“Typically, there are hundreds, even thousands, of jobs running on Fugaku,” explained Professor Satoshi Matsuoka, director of the RIKEN Center for Computational Science, in an interview with Supercomputing Asia. “It’s hard to determine if it’s operating at full exascale at any one time, but it’s definitely operating at a level that’s expected of exascale machines.”
Significantly, Matsuoka describes the Fugaku supercomputer as a general-purpose machine—with portions of it available to researchers and commercial users all over the world for a variety of uses. As such, individual projects usually take up only a fraction of the machine and may not demonstrate its full power. However, while meeting the needs of researchers clearly matters more than the symbolic goal of hitting specific benchmarks, it remains important to occasionally demonstrate the limits of the supercomputer.
“We must not only test the limits of the methodology, but also determine how fast the machine goes at a full scale run with half or more of it in use,” said Matsuoka.
Having pushed the envelope of Fugaku’s capabilities, the team at RIKEN has begun working on a more powerful successor slated to launch by the end of the decade.
“This machine will be a considerable effort involving all the major supercomputing stakeholders in Japan, as well as entities and major companies abroad,” shared Matsuoka.
Working together for exascale success
Leaps in supercomputing are also happening in Southeast Asia. At the end of 2021, the National Supercomputing Center (NSCC) Singapore and National University Health System (NUHS) announced an agreement that would lead to Singapore’s third supercomputer. The new machine, named PRESCIENCE, is a petascale supercomputer dedicated to providing better healthcare and improving the information infrastructure of Singapore’s public healthcare system.
An abundance of patient data is generated and kept by public health systems globally, with an estimated average of 50 petabytes of data produced annually. Hospitals are therefore in a prime position to apply machine learning and AI to this data, gaining insight into patient outcomes and identifying the best course of treatment when a patient’s health is deteriorating. NUHS plans to do just that with PRESCIENCE.
The current collaborative agreement with NUHS is part of the NSCC’s larger roadmap towards growing and maturing Singapore’s HPC research landscape, which includes moving towards exascale computing.
The NSCC has already made headway in working with supercomputing centers in Japan, Finland and Australia in a long-term bid to build up the nation’s own HPC resources and provide much-needed access to established exascale computing.
The road ahead
Earlier this year, India’s National Supercomputing Mission (NSM) announced the deployment of “PARAM Ganga”, a new petascale supercomputer in the “PARAM” (“supreme” in Sanskrit) series of machines at the Indian Institute of Technology (IIT) Roorkee. Operating at 1.66 petaFLOPS, this machine is one of a whopping nine new supercomputing systems slated to launch this year.
The NSM, launched in 2015, has deployed 15 systems across the nation since its inception. With many more machines planned, the mission reflects the Indian government’s ambitious goal of developing its own supercomputing systems built with parts designed and manufactured in India. Developing indigenous technology is part of the government’s bigger goal of furthering supercomputing research and boosting India’s research capabilities across the public and private sectors.
Upscaling from petascale to exascale computing via the PARAM series is the next step forward for India. This goal is inching ever closer with the planned launch of the PARAM SHANKH exascale system in 2024, which would overtake India’s current most powerful machine, the PARAM Siddhi-AI, which clocks in at a LINPACK performance of 4.62 petaFLOPS.
Between now and the PARAM SHANKH launch, India’s researchers and engineers are working on developing and improving native architecture to support exascale computing. International partnerships with Intel Foundry Services and Taiwan Semiconductor Manufacturing Company are further driving India’s progress towards exascale computing.
As Asia continues to make great strides, so too does the rest of the world. Over in the EU, Finland’s LUMI system now occupies the third spot on the TOP500 list, clocking in at a LINPACK performance of 151.9 petaFLOPS. Beyond speed, LUMI and the US’s Frontier system are also focused on power efficiency, with both reaching the top three of the accompanying Green500 list.
However, in testing and ranking the limits of supercomputers worldwide, it is important to remember the reason for their existence—not just to compete for speed, but to aid researchers efficiently and effectively as they collaborate to seek solutions to the world’s most pressing challenges.
Accelerating computing and modeling at CityU
A new HPC cluster built by global edge-to-cloud company Hewlett Packard Enterprise (HPE) has recently arrived on campus at the City University of Hong Kong (CityU). Named CityU Burgundy, this supercomputing cluster delivers nearly ten times the computing speed of the university’s previous HPC resources to advance research discoveries from biomedicine to behavioral science. CityU Burgundy features HPE Apollo 2000 and HPE Apollo 6500 Gen10 Plus systems, which are purpose-built and density-optimized platforms designed for demanding HPC and AI applications. The new cluster is also equipped with 328 AMD EPYC™ 7742 processors, 56 Nvidia V100 Tensor Core graphics processing units (GPUs) and eight Nvidia A100 80-gigabyte Tensor Core GPUs. Together, these components offer more efficient image analysis, modeling and simulation capabilities that are key in AI and machine learning.
Besides the substantial boost in speed, CityU Burgundy is also set to support inclusive and collaborative research. It serves as a centralized facility, consolidating the HPC resources into a single location to enable easier access for various stakeholders while reducing space requirements for data centers.
Biomedical researchers are already using the new HPC cluster to integrate genetic and environmental data from diverse populations. Such endeavors seek to better understand the complexity of chronic diseases and explore novel diagnostic and treatment approaches.
With the improved GPU capabilities and lower latency, CityU Burgundy’s resources are also being applied to other disciplines that traditionally were not use cases for HPC—including new data visualization projects, analytics for public policy and consumer behavioral science.
“The new HPC cluster brings us one step closer to building Hong Kong’s most powerful HPC platform for academia while achieving operational efficiency and reduced costs,” said Dr. Dominic Chien, senior scientific officer (HPC), in a press release. “HPC plays a critical role in helping us build and support a world-class research team to continue making scientific breakthroughs for humankind.”
Leveling up Singapore’s quantum ecosystem
As a thriving hub for digital innovation, Singapore has further invested in its quantum sector, launching three national platforms to take the country’s technological prowess to new heights. The three initiatives are the National Quantum-Safe Network (NQSN), National Quantum Computing Hub (NQCH) and National Quantum Fabless Foundry (NQFF).
Under Singapore’s Research, Innovation and Enterprise 2020 (RIE2020) plan, the Quantum Engineering Programme of the National Research Foundation Singapore (NRF) has pledged at least S$23.5 million to support these platforms for up to three and a half years.
Onboard the NQSN project are over 15 private and government collaborators, including the Infocomm Media Development Authority (IMDA), with the initiative led by the Centre for Quantum Technologies (CQT) teams at the National University of Singapore, as well as Nanyang Technological University, Singapore. The NQSN will conduct nationwide trials of quantum-safe communication technologies, aiming to achieve robust network security to safeguard sensitive data and critical infrastructure.
Together with A*STAR’s Institute of High Performance Computing (IHPC), CQT teams are also involved in the NQCH initiative to develop quantum computing hardware and middleware. In support, the National Supercomputing Center (NSCC) Singapore will host a quantum computing facility and provide resources to explore industrial applications in finance, logistics, chemistry and more.
Meanwhile, the NQFF at A*STAR’s Institute of Materials Research and Engineering (IMRE) will develop microfabrication techniques to manufacture component materials and quantum devices for computation, communication and sensing applications.
“The launch of the three national platforms signals the intent and ambition of Singapore to build upon our past investments in quantum technologies, and take it further through close industry development with our partner agencies,” NRF CEO Professor Low Teck Seng told the press.
By fostering talent development and forging public-private partnerships, these national quantum initiatives are envisioned to create an enabling environment for quantum innovation across Singapore.
Scaling the supercomputing sector
Steady growth is on the horizon for the HPC industry in the coming decade, after global revenue hit US$42 billion in 2021. Market research and consultancy firm Emergen Research has forecast a 6.2-percent compound annual growth rate (CAGR), pushing the market size to US$71.8 billion by 2030.
Meanwhile, according to technology research and advisory company Technavio, the Asia-Pacific region is expected to account for 49 percent of the global HPC market’s growth from 2021 to 2026. Technavio also predicted more significant progress, pegging the CAGR at 11.31 percent and an overall revenue increase of US$27.15 billion over the forecast period.
In particular, the cloud computing and data center segments are likely to become key drivers of the HPC sector’s growth. Cloud computing can potentially encourage HPC adoption among small and medium-sized enterprises by connecting them to otherwise high-cost and inaccessible supercomputing resources.
With big data analytics on the rise, supercomputer servers and data centers are needed to store and process voluminous amounts of data—ultimately to generate meaningful insights and accelerate digital transformation across industries. As such, this segment is also predicted to gain further ground among enterprises and governments, spurring substantial HPC market growth over the decade.
This article was first published in the print version of Supercomputing Asia, July 2022.
—
Copyright: Asian Scientist Magazine. Illustration: Shelly Liew