
AsianScientist (Feb. 13, 2019) – The film begins ordinarily enough, with a couple picnicking on the green lawn of a Chicago park. But then the camera pulls back, the couple getting smaller, to an area ten meters across, then 100 meters, then a kilometer… and so on until it reveals a vista 10²⁴ meters across, a view wide enough to take in entire clusters of galaxies.
Boomeranging back to the couple on the grass, the camera next zooms in on the man’s hand, then a layer of skin, then a single cell, then a DNA molecule. The dizzying journey finally ends at a field of view an infinitesimal 10⁻¹⁶ meters across; at this magnification, and if such a camera did indeed exist, we would be able to see the quarks within a proton within an atom.
Powers of Ten, a 1977 short film from Charles and Ray Eames—designers better known for their iconic chairs—crams 40 orders of magnitude of spatial scale into a mere nine minutes. Mind-bending as this is, consider that the entire scientific enterprise already spans this spectrum of sizes, whether it is the interstellar considerations of astronomy and cosmology, the planetary proportions of geology and earth systems, the human-sized domains of physiology and medicine, the microscopic worlds of genetics and cell biology, or the subatomic ‘spooky’ realms of quantum mechanics.
Of course, natural phenomena, as well as the methods and equipment scientists use to understand them, vary widely across different spatial scales. But just about every branch of science is now generating more data than ever before, with an accompanying demand for ever more sophisticated computing power to make sense of it. Supercomputers, then, look set to be a common thread running through science at all scales.
The radio star
Probing the vastness of the universe is a project called the Square Kilometer Array (SKA), an effort—also enormous in terms of its own physical size and international scope—to build the world’s largest radio telescope, with a collecting area of more than one square kilometer.
Composed of telescope arrays in two locations—Western Australia’s Murchison Shire and South Africa’s Karoo region—the SKA will let researchers survey the cosmos faster and at higher resolution and sensitivity than ever before.
“The SKA is going to be essentially a step change in radio astronomy observatories in terms of capabilities,” said Dr. Miles Deegan, a high-performance computing (HPC) and data analytics architecture specialist at the SKA Organization, headquartered at Jodrell Bank Observatory in the UK.
Once the SKA—currently in design phase—starts producing scientifically interesting data in the mid-2020s, researchers will use its superior sky-scanning abilities to study some of the biggest questions in astrophysics, potentially transforming our understanding of the universe, said Deegan.
For instance, by using the SKA to measure the effect of gravity on pulsars that orbit black holes, researchers may be able to detect gravitational waves at frequencies different from those previously observed through the Nobel Prize-winning efforts of the Laser Interferometer Gravitational-Wave Observatory—work that would further test the limits of Einstein’s theory of general relativity.
Not all about the FLOPS
The computational needs associated with ground-breaking astrophysics research are immense—the first phase of the SKA alone is expected to generate 160 terabytes of raw data per second, approximately five times the global internet traffic in 2015. Processing this barrage of data requires complex supercomputing hardware, software and analytics, which are being designed by the SKA’s Science Data Processor Consortium, comprising researchers from institutions in 11 countries.
The two planned supercomputing facilities in Cape Town, South Africa, and Perth, Australia are likely to have a combined processing power of around 260 petaFLOPS, said Deegan. But acquiring more FLOPS isn’t the answer to the SKA’s computing challenges, which have more to do with input-output, memory bandwidth and data management, Deegan emphasized.
“It’s about the sheer volume of data—capturing that data to start with, being able to ingest it all in time, and managing it through various levels of storage and buffers. That’s where the complexity is; the FLOPS are not an afterthought, but it’s not all about the FLOPS,” he said.
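To get a feel for that volume, here is a rough, purely illustrative back-of-the-envelope sketch in Python, based on the 160 terabytes per second figure quoted above (the real instrument will reduce the data heavily on site before anything is stored):

    # Rough scale of the SKA phase-one raw data stream, using the 160 TB/s figure above.
    # Purely illustrative: the real system reduces data heavily before storage.
    raw_rate_bytes_per_second = 160e12     # 160 terabytes per second
    seconds_per_day = 86_400

    bytes_per_day = raw_rate_bytes_per_second * seconds_per_day
    print(f"~{bytes_per_day / 1e18:.1f} exabytes of raw data per day")   # ~13.8 EB/day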
The two supercomputing centers will churn out multiple types of science ‘data products,’ such as pulsar timing information, said Deegan. But the process doesn’t end there—for the data to be fully analyzed and exploited, it then needs to be disseminated to an international network of researchers.
“We need to take about 300 petabytes per annum per telescope and distribute that data to a worldwide network of SKA regional centers… that’s another challenge—being able to move vast amounts of data [over] intercontinental distances and distributing to a big network of collaborating institutes,” said Deegan.
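The delivery side can be sized up in the same rough way. Assuming the 300 petabytes per year per telescope flows out at a perfectly steady rate, with no allowance for protocol overheads, retransmissions or downtime, a simple estimate suggests the sustained intercontinental bandwidth involved:

    # Hypothetical sustained bandwidth needed to move 300 PB per year from one telescope,
    # assuming a perfectly even flow with no overheads or downtime.
    petabytes_per_year = 300
    bits_per_year = petabytes_per_year * 1e15 * 8
    seconds_per_year = 365 * 24 * 3600

    sustained_gbit_per_s = bits_per_year / seconds_per_year / 1e9
    print(f"~{sustained_gbit_per_s:.0f} Gbit/s sustained, per telescope")   # roughly 76 Gbit/s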
The SKA will likely adopt a model similar to how European particle physics research organisation CERN moves its data internationally via a worldwide grid infrastructure, he added.
Digging deep for neutrinos
Sprawled across two of the world’s most remote desert locations, the SKA’s antenna fields would be right at home in a science fiction movie. But one doesn’t have to look too far to find another research facility rivalling it in scale and strangeness—just one kilometer beneath Mount Ikeno, near the city of Hida, Japan.
There, the Super-Kamiokande (Super-K) detector—a 40-meter-deep steel cylinder filled with 50,000 tons of ultrapure water and lined with more than 11,000 photomultiplier tubes—deals with research at the other end of the spatial scale: it was built to detect neutrinos, one of the most abundant subatomic particles in the universe.
Neutrinos arrive at the detector from various sources, said Professor Masayuki Nakahata, director of the Kamioka Observatory, which hosts Super-K. About a third of the approximately 30 neutrino events Super-K records every day are atmospheric neutrinos, produced when cosmic rays collide with atoms in the atmosphere, while the remaining two thirds are solar neutrinos, generated by fusion reactions in the sun, said Nakahata. These observations have helped researchers make discoveries about the fundamental properties of neutrinos, including Nobel Prize-winning work showing that the particles do indeed have mass.
The subterranean facility, which is operated by an international consortium of some 40 research institutes, is also being upgraded to improve its chances of detecting neutrinos emitted by a third source—supernovae, said Nakahata. Such events, which shed light on how massive stars die and seed the universe with heavy elements, are much rarer and have thus far been detected just once: in 1987, when Super-K’s predecessor, Kamiokande, picked up neutrinos from a supernova in the nearby Large Magellanic Cloud. With the upgrade, Super-K is expected to detect neutrinos from supernovae in faraway galaxies, as well as ‘supernova relic neutrinos’ emitted by stars that exploded in the distant past.
“By observing those supernova neutrinos, we are able to understand the history of the universe—how heavy massive stars exploded and distributed various elements to the universe,” said Nakahata.
Blink and you’ll miss it
Despite their abundance, neutrinos are notoriously difficult to detect, since their minuscule mass and lack of electric charge allow them to pass through matter unobstructed, like quantum-scale ninjas. But neutrinos traversing Super-K’s huge volume of ultrapure water occasionally do bump into nuclei or electrons, generating charged, high-energy particles that move faster than the speed of light in water.
When this happens, the particles emit a type of radiation called Cherenkov light, which is picked up by the detector’s photomultiplier tubes as ring-shaped images. These signals are funneled into Super-K’s experiment analysis system, a Fujitsu HPC setup that includes a data processing system, an 85-server cluster and a high-speed distributed file system located at the underground site.
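The geometry of those rings follows from two standard textbook relations: a charged particle radiates Cherenkov light only if its speed β (as a fraction of the speed of light) exceeds 1/n, where n ≈ 1.33 is the refractive index of water, and the light is emitted on a cone whose half-angle θ satisfies cos θ = 1/(nβ). The Python sketch below is purely illustrative, not Super-K’s analysis code; it computes the resulting threshold energy for an electron and the cone angle for a near-light-speed particle:

    import math

    # Cherenkov light is emitted only when a charged particle's speed (beta, as a
    # fraction of c) exceeds the phase velocity of light in the medium, i.e. beta > 1/n.
    n_water = 1.33                    # approximate refractive index of water
    electron_rest_energy_mev = 0.511  # electron rest energy in MeV

    beta_threshold = 1.0 / n_water
    gamma_threshold = 1.0 / math.sqrt(1.0 - beta_threshold ** 2)
    threshold_mev = gamma_threshold * electron_rest_energy_mev
    print(f"Electron threshold (total energy): {threshold_mev:.2f} MeV")   # ~0.78 MeV

    # Half-angle of the Cherenkov cone for a highly relativistic particle: cos(theta) = 1/(n*beta).
    beta = 0.999
    theta_deg = math.degrees(math.acos(1.0 / (n_water * beta)))
    print(f"Cherenkov cone half-angle: {theta_deg:.0f} degrees")            # ~41 degrees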
“We use the computing resources to count the number of rings or observed particles, reconstruct the interaction positions and the energies of each particle, and identify the type of particles from the observed signal,” said Associate Professor Yoshinari Hayato, who manages computing at Kamioka Observatory.
Further, researchers studying neutrino oscillation—a quantum mechanical phenomenon (discovered in part at Super-K) in which neutrinos switch back and forth between different identities—also need to generate large volumes of simulated data, a computationally intensive process, added Hayato.
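Oscillation is commonly introduced through the two-flavor approximation, in which the probability that a neutrino has changed flavor after travelling a distance L is P = sin²(2θ) · sin²(1.27 Δm² L / E), with L in kilometers, E in GeV and Δm² in eV². The toy Python function below simply illustrates that formula; the function name and the parameter values are representative textbook choices, not Super-K’s fitted results:

    import math

    def two_flavor_oscillation_probability(l_km, e_gev, sin2_2theta, delta_m2_ev2):
        """Standard two-flavor neutrino oscillation probability (toy illustration)."""
        return sin2_2theta * math.sin(1.27 * delta_m2_ev2 * l_km / e_gev) ** 2

    # Example: an atmospheric neutrino crossing roughly the Earth's diameter.
    # Parameter values are representative textbook numbers, not fitted results.
    p = two_flavor_oscillation_probability(l_km=12_700, e_gev=1.0,
                                           sin2_2theta=1.0, delta_m2_ev2=2.5e-3)
    print(f"Flavor-change probability: {p:.2f}")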
Since supernova bursts could happen at any moment, Super-K’s top HPC priority is stability, said Hayato—the system cannot afford to blink, or it may miss the chance to record a rare event.
“[Supernovae] are known to burst once in decades, and the duration of the burst is just a few seconds… therefore, the number of neutrinos is limited and we need to maximize the opportunity to observe them,” he said.
Supercomputing and serendipity
The already-cavernous Super-K will soon be dwarfed by a successor ten times its volume. Kamioka Observatory’s planned Hyper-Kamiokande (Hyper-K) neutrino detector, scheduled for completion in the mid-2020s, will allow researchers to address some of physics’ most fundamental questions. One of its main goals is to detect a phenomenon in neutrinos known as charge–parity symmetry violation, which could explain why there is more matter than antimatter in the universe.
While official specs are not yet available for Hyper-K’s supercomputing infrastructure, Nakahata and Hayato expect it to require orders of magnitude more compute power than Super-K. This will allow it to handle data from the new detector’s more than 40,000 photomultiplier tubes (quadruple the number in Super-K), as well as accommodate higher-precision measurements, said Hayato.
Because of the complex nature of the computing infrastructure involved, international scientific collaborations on the scale of the SKA, Super-K and Hyper-K will not only further the science itself, but may also result in new advances in HPC and info-communications technology, said Deegan, referring to the SKA.
“A lot of scientific and technical breakthroughs are often serendipity. But I think the size and complexity of [the SKA] is such that we will have to come up with new ways of doing things. I think [this is] not just on the technology aspects, but also [in terms of] worldwide collaboration… just learning new ways of doing complex science projects,” said Deegan.
This article was first published in the print version of Supercomputing Asia, January 2019.
———
Copyright: Asian Scientist Magazine.