An Eye On AI

Backed by immense computing power, breakthroughs in AI are transforming multiple facets of society, from the way we deliver patient care to how we harness renewable energy resources like solar power.

AsianScientist (May 16, 2024) – In 1982, a young David Hasselhoff sits in a self-driving car and gives it instructions with just a voice prompt. The car—Knight Industries Two Thousand, or KITT—was arguably one of the first popular television depictions of artificial intelligence (AI), responding to Hasselhoff's character like a modern-day Alexa. What was once a lofty sci-fi dream has been made possible by advances in AI, backed by the breakneck speeds of high-performance computing (HPC).

Today, enormous amounts of computing power are used to build generative AI (GenAI) models, which are trained on terabytes of data and have parameters in the billions—with some models reaching the trillion-parameter mark.
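
For a rough sense of that scale, a quick back-of-the-envelope calculation helps. The sketch below is generic arithmetic about model size, not the specification of any particular model.

```python
# Back-of-the-envelope: merely storing the weights of a trillion-parameter
# model in 16-bit floating point takes about 2 terabytes -- before counting
# optimizer state, activations or the training data itself.
params = 1e12            # one trillion parameters
bytes_per_param = 2      # fp16 uses 2 bytes per parameter
weight_bytes = params * bytes_per_param
print(f"~{weight_bytes / 1e12:.0f} TB of weights at fp16")
```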

At the same time, HPC is also becoming more democratized as cloud computing creates easier access to innovative AI tools and gives enterprises the ability to create their own AI-driven solutions.

Coming full circle, supercomputers are leveling up their own capabilities, powering AI-driven design tools that accelerate the development of increasingly sophisticated semiconductor chips.

 

BATTLE OF THE BOTS

Since OpenAI’s ChatGPT tool exploded onto the scene in late 2022, a flurry of large language models (LLMs) has been released as tech giants duke it out to create GenAI capable of accomplishing more complex tasks with greater accuracy.

Across the Pacific, China's GenAI space is booming. Many Chinese tech firms have built their own proprietary LLMs to give their existing products and services a GenAI-powered refresh, as well as to provide AI-based business solutions on the cloud. As of October 2023, the country's tech sector had produced at least 130 LLMs—40 percent of the global total.

In March 2023, Baidu, one of China's largest internet companies, released its own chatbot, Ernie Bot. The chatbot has since gained traction, amassing 45 million users after it was opened to the general public five months later. With a voice prompt, Ernie Bot can create a TV commercial, solve complex geometry problems and even write a martial arts novel filled with twists and turns.

Tapping its existing user base, Baidu is using Ernie to redesign and rebuild its products and services, including its search engine, Baidu Maps and the cloud computing service Baidu Cloud. According to a Baidu representative, the Ernie foundation model was built on trillions of data points and hundreds of billions of knowledge points.

In October 2023, Baidu unveiled the latest iteration of its chatbot, Ernie 4.0. At the launch, Baidu CEO Robin Li demonstrated how text- and voice-responsive AI assistants can provide customized search results, navigate a city and add subtitles to videos on a cloud drive.

Beyond products for individual consumers, Baidu has created Qianfan, a Model-as-a-Service (MaaS) cloud platform for AI models, targeted at enterprises across diverse sectors including finance, marketing and media.

According to Li, this business model contrasts with other public cloud services that focus on providing computing power and storage: Qianfan also offers businesses access to foundation models built by both Baidu and third parties. Users can fine-tune these preinstalled LLMs with their own proprietary data, creating solutions tailored to their needs.
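
For readers curious what such fine-tuning looks like in practice, here is a minimal sketch using the open-source Hugging Face stack on a small public model. It illustrates the general pattern of adapting a pretrained foundation model with proprietary text; it does not reflect Qianfan's actual API, and the model name and example data are stand-ins.

```python
# Minimal fine-tuning sketch: adapt a small pretrained language model to
# a company's own text. Generic Hugging Face pattern, NOT Qianfan's API.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "distilgpt2"  # small public stand-in for a foundation model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical proprietary data: a few domain-specific snippets.
corpus = [
    "Q: What is our refund window? A: 30 days from delivery.",
    "Q: Which regions do we ship to? A: Southeast Asia and Oceania.",
]
dataset = Dataset.from_dict({"text": corpus}).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tuned-model", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    # mlm=False gives standard next-token (causal) language modeling
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("tuned-model")  # the tailored model, ready to serve
```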

Baidu is not the only contender in the LLM arena. The rise of GenAI in China has been dubbed the “war of a hundred models” by Jie Jiang, Vice President of Chinese tech giant Tencent, which released its own model, Hunyuan, in September 2023. Developed with businesses in mind, Hunyuan is available to Chinese enterprises via Tencent’s cloud platform. The company has also integrated Hunyuan into its own products, such as popular mobile messaging app WeChat.

Another player in the space is Alibaba, famed for popular e-commerce platforms like Taobao. In April 2023, Alibaba joined the GenAI frenzy with Tongyi Qianwen, upgrading it to version 2.0 that October. Hundreds of millions of Taobao shoppers now have a personal assistant that converses with them to provide tailored product recommendations.

Alibaba is also collaborating with New Zealand-based metaverse company Futureverse to train Futureverse's text-to-music generation model, JEN-1, on Alibaba's updated Platform for Artificial Intelligence (PAI).

 

THE SUPERPOWERED SEARCH FOR DRUGS

AI is also guiding biotech companies in the quest for the next groundbreaking medical treatment, a traditionally taxing and costly process. Typically, scientists first go on a scavenger hunt for a molecular target in the human body. After setting their sights on a target protein or gene, they may screen millions of chemical compounds before landing on a few promising hits. These then need to be optimized in the lab before experimental testing in animal models. These stages of drug discovery—all completed before first-in-human trials can begin—can take up to six years and cost over US$400 million to yield one viable drug candidate.

To accelerate the drug discovery process, AI can take over the heavy lifting for many of these steps. In fact, the world's first fully AI-generated small molecule drug, developed by Hong Kong-headquartered biotech company Insilico Medicine, is currently in Phase 2 clinical trials to evaluate its effectiveness, having cleared human safety trials in mid-2023. This path to human trials took less than 30 months at just one-tenth of the typical cost.

To design the drug—aimed at a chronic lung disease called idiopathic pulmonary fibrosis—Insilico used a full suite of AI-driven tools to tackle steps from target discovery to compound generation. In particular, the company created a GenAI drug design engine called Chemistry42, which churns out never-before-seen molecular structures within days. The fully automated platform is powered by NVIDIA V100 Tensor Core GPUs and can be deployed both in the cloud and on site.

Meanwhile, in Japan, the RIKEN Center for Computational Science (R-CCS) and Fujitsu are developing a next-generation IT drug discovery technology with the help of Asia’s fastest supercomputer, Fugaku.

A key aspect of designing optimized drugs is making sure they bind effectively to their target proteins, which makes modeling drug-protein interactions a crucial step in the process. However, proteins are incredibly flexible, toggling between many different conformations and often undergoing significant structural changes when bound to other molecules.
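
One common way such structural changes are quantified is the root-mean-square deviation (RMSD) between two conformations. The snippet below is an illustrative sketch on synthetic, pre-aligned coordinates; it is not the RIKEN or Fujitsu method, just a standard metric for comparing protein structures.

```python
# Illustrative sketch: measuring a conformational change with RMSD, the
# root-mean-square deviation between matching atoms in two structures.
# Coordinates here are synthetic and assumed pre-aligned (no Kabsch fit).
import numpy as np

rng = np.random.default_rng(1)
unbound = rng.normal(size=(100, 3))   # hypothetical C-alpha coordinates
# Simulate the structural shift that occurs when a drug molecule binds.
bound = unbound + rng.normal(scale=0.5, size=(100, 3))

rmsd = np.sqrt(np.mean(np.sum((bound - unbound) ** 2, axis=1)))
print(f"RMSD between conformations: {rmsd:.2f} (coordinate units)")
```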

The R-CCS and Fujitsu collaboration addresses the challenge of protein flexibility by combining Fujitsu’s deep learning and RIKEN’s AI drug discovery simulation technologies. By the end of 2026, the project aims to deliver technology that can analyze drug-protein complexes and predict large-scale structural changes in molecules with high speed and accuracy.

 

PETASCALE POWER FOR PHYSICIANS

Besides supporting biomedical research, HPC is also bringing these discoveries from bench to bedside. Singapore's newest petascale supercomputer, Prescience, powers AI models designed to tackle the country's healthcare needs. The fruit of a collaboration between the National University Health System (NUHS) and the National Supercomputing Centre (NSCC) Singapore, Prescience's infrastructure is housed at the National University Hospital (NUH) and has been up and running since July 2023. Because the system is on premises, the massive patient datasets used for training never leave the hospital, removing the need to de-identify them—speeding up model training and enhancing patient data protection.

Packed with multiple NVIDIA DGX A100 compute nodes for the GPU horsepower to handle colossal amounts of data, Prescience is tailored for training LLMs such as RUSSELL—a ChatGPT equivalent for healthcare professionals. Apart from automating administrative tasks like summarizing clinical notes and writing referral letters, RUSSELL also contains NUHS protocols, medical information and rosters to support clinicians in daily tasks.

To help doctors better plan patient treatment and optimize resource allocation, researchers are also using Prescience to build a patient trajectory prediction model. Fed doctors' notes and test results from a patient's emergency visit and first day of inpatient admission, the model estimates the patient's length of stay. Importantly, it also surfaces the factors behind each prediction in a form doctors can easily understand.
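
As a flavor of how such a model might be structured, the sketch below trains a simple length-of-stay regressor and ranks the factors driving its predictions. The features, data and model choice are synthetic stand-ins; the actual NUHS model is not publicly specified.

```python
# Hypothetical sketch: predict length of stay and explain which inputs
# matter. Synthetic data; NOT the actual NUHS model.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Invented structured features, as might be extracted from notes and tests.
X = np.column_stack([
    rng.integers(20, 90, n),     # age in years
    rng.normal(7.5, 2.0, n),     # white blood cell count
    rng.integers(0, 2, n),       # admitted via emergency (0/1)
    rng.normal(1.0, 0.3, n),     # serum creatinine
])
names = ["age", "wbc_count", "emergency_admission", "creatinine"]
# Synthetic target: length of stay in days, loosely tied to the features.
y = 2 + 0.05 * X[:, 0] + 0.4 * X[:, 1] + 1.5 * X[:, 2] + rng.normal(0, 1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

# Explainability: rank which inputs drive the prediction so clinicians
# can sanity-check the model's reasoning.
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, score in sorted(zip(names, imp.importances_mean),
                          key=lambda p: -p[1]):
    print(f"{name}: {score:.3f}")
```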

Beyond helping clinicians streamline workflows, Prescience is also helping dental patients attain picture-perfect smiles. Through the Smart Monitoring and Intelligent Learning for Enhancing oral health (SMILE AI) project, Singapore’s National University Centre for Oral Health (NUCOHS) has been collecting hundreds of dental images to build machine learning models to speed up the routine task of tooth charting and predict the risk of gum disease.

Using X-rays of the upper and lower jaw, NUCOHS’s gum disease prediction model aims to stratify patients by their risk of disease. In a push toward preventive healthcare, the model could be implemented on a population level, allowing dentists in the wider community to intervene before disease onset.

“These models are expected to support both clinicians and patients to achieve better outcomes as well as reduce wait times and costs,” said Professor Ngiam Kee Yuan, NUHS Group Chief Technology Officer, in an interview with Supercomputing Asia.

 

CLIMATE CRYSTAL BALLS

Even with such medical advances, healthcare systems in some regions face added strain from record-breaking heatwaves, which have led to widespread hospitalizations and even deaths. One such scorcher swept across Asia in 2023, with many countries logging temperatures soaring past 40°C and China's Xinjiang region hitting a searing 52.2°C in July. Such extreme weather events have become more frequent due to climate change, cutting agricultural yields and flooding communities.

To help mitigate damage from such adverse events, AI-driven global climate models provide projections that can aid the design of suitable counter-strategies. That said, such global models—which broadly divide Earth into 3D grid cells 150 to 280 km across—often lack detailed information on regional climates.

To overcome these limitations, the Pawsey Supercomputing Centre (PSC) in Australia is creating high-resolution 3 km grid models for Western Australia (WA), a global biodiversity hotspot. Pawsey’s effort is part of the Climate Science Initiative (CSI), a multi-institutional partnership which also includes the WA Department of Water and Environmental Regulation, Murdoch University and the New South Wales Government.
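
Some rough arithmetic shows why that jump in resolution is so computationally demanding. The figures below use an approximate land area for WA and count only horizontal cells, ignoring vertical levels and time steps.

```python
# Back-of-the-envelope: horizontal grid cells needed to cover Western
# Australia (~2.5 million km^2, a rough figure) at each resolution.
area_wa_km2 = 2.5e6

for cell_km in (280, 150, 3):
    cells = area_wa_km2 / cell_km**2
    print(f"{cell_km:>3} km grid: ~{cells:,.0f} cells")

# Moving from a 150 km to a 3 km grid multiplies the horizontal cell
# count by (150 / 3) ** 2 = 2,500 -- before vertical levels and shorter
# time steps push the cost up further.
```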

“With finer resolution, we will be able to more accurately predict when and where adverse climate events, such as bushfires and floods, will impact the region. We can also develop better tools to predict the impact of those events,” said Mark Stickells, PSC Executive Director, in an interview with Supercomputing Asia.

The project is an ambitious one. The team aims to deliver comprehensive climate projections stretching 75 years into the future, running simulations that span the years 1950 to 2100. With two future climate scenarios and two modeling configurations per simulation, this entails an immense amount of computing power.

Taking on this gargantuan task is Pawsey's Setonix, the most powerful supercomputer in the Southern Hemisphere, which performs in a single second calculations that would take humans 1.5 billion years to complete. Setonix is dedicating 40 million core hours and 1.54 petabytes of storage space to CSI—one of the supercomputer's largest allocations.
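
That comparison is easy to sanity-check. Assuming one calculation per human per second (an illustrative assumption, not Pawsey's official benchmark), 1.5 billion years of human effort corresponds to tens of quadrillions of operations per second.

```python
# Sanity check: if one human performs one calculation per second, how many
# operations does 1.5 billion human-years represent?
seconds_per_year = 365.25 * 24 * 3600      # ~3.16e7 seconds
ops = 1.5e9 * seconds_per_year             # ~4.7e16 operations
print(f"~{ops:.1e} ops per Setonix-second, i.e. ~{ops / 1e15:.0f} petaflops")
```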

 

STRIVING FOR SILICON SUCCESS

Even as HPC makes predictions to safeguard our future, it also advances the hardware that powers our present. Semiconductors keep the technological world running, from pocket-sized smartphones to massive supercomputing centers. For example, Japan's Fugaku is powered by 158,976 Fujitsu-designed A64FX semiconductor chips working in tandem. Each system-on-chip contains 48 computing cores plus two or four assistant cores, serving as a powerful HPC-tailored processor.

Rapid advances in AI call for more computing power under tight deadlines, and chipmakers are constantly innovating to meet this demand through more advanced silicon chip designs and more efficient manufacturing workflows. Today’s semiconductors and supercomputers exist in a symbiotic relationship, with AI stepping in to assist its own makers.

Some of the world's largest chip manufacturers, such as Taiwan Semiconductor Manufacturing Company and Samsung, have leveraged electronic design automation (EDA) to streamline their processes. These Asian chipmakers have partnered with EDA company Synopsys, whose AI-driven tools help human engineers figure out where to lay out billions of transistors on tiny pieces of silicon. This blueprint is critical, as the exact placement of transistors affects a chip's performance.
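
To see why placement is so hard, it helps to shrink the problem down. The toy sketch below places just six connected cells on a grid using simulated annealing, a classic placement heuristic; it is a hypothetical illustration, not how Synopsys's tools work, and real designs involve billions of transistors.

```python
# Toy placement: put connected cells close together on a grid to minimize
# total wire length, via simulated annealing. A hypothetical illustration.
import math
import random

random.seed(0)
GRID = 4                                            # 4x4 legal positions
nets = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 5)]  # connected pairs

def wirelength(pos):
    # Total Manhattan distance over all connected cell pairs.
    return sum(abs(pos[a][0] - pos[b][0]) + abs(pos[a][1] - pos[b][1])
               for a, b in nets)

# Start from a random placement of six cells on distinct grid sites.
pos = random.sample([(x, y) for x in range(GRID) for y in range(GRID)], 6)

temp = 5.0
while temp > 0.01:
    i, j = random.sample(range(6), 2)
    cand = list(pos)
    cand[i], cand[j] = cand[j], cand[i]             # propose swapping cells
    delta = wirelength(cand) - wirelength(pos)
    # Always accept improvements; accept worse moves with falling probability.
    if delta < 0 or random.random() < math.exp(-delta / temp):
        pos = cand
    temp *= 0.995                                   # cool the schedule

print("final wirelength:", wirelength(pos))
```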

With chips becoming increasingly complex, engineers often spend months on manual, iterative experiments to land on the best designs for different goals. EDA narrows down the design options so engineers can focus on the most promising ones—reducing experimental workload and time taken. Synopsys also provides its EDA software, Synopsys DSO.ai™, on Microsoft's cloud computing platform Azure, allowing companies to leverage HPC for faster, better results.

With HPC and AI support, chipmakers are creating next-generation chips that offer greater speed and consume less power. However, silicon's success is not limitless. The material transmits light and conducts electricity poorly, making it less than ideal for optoelectronic devices like solar cells. At present, the energy conversion efficiency of pure crystalline silicon solar cells has a theoretical limit of 29 percent.

In search of a promising alternative, many scientists are turning to perovskites, a class of crystalline compounds with excellent light absorption properties. By layering perovskites on top of silicon, a perovskite-silicon tandem (PST) solar cell can absorb different wavelengths of light, leading to a higher theoretical efficiency of 43 percent. That said, building a stable and efficient PST solar cell is incredibly challenging, as there are approximately 572 possible permutations in a tandem device stack.

In a study published in Nature Energy, researchers from South Korea's Chonnam National University led an international collaboration to fabricate a solar cell by stacking two different crystalline structures (or polymorphs) of the perovskite cesium lead iodide (CsPbI3). CsPbI3 has four different polymorphs, two of which are light-absorbing and promising for solar cells. However, the light-absorbing polymorphs can easily convert to non-light-absorbing ones at room temperature, compromising solar cell efficiency.

Through computational simulations run on the Roar supercomputer at Pennsylvania State University, US, the team found that bringing the two light-absorbing polymorphs of CsPbI3 together could form a stable atomic interface without distortion. This property allowed the researchers to create a solar cell with a high efficiency of almost 22 percent, which was stably maintained after 200 hours of storage under ambient conditions.

With such advancements, HPC has evolved from the first 3-megaflop supercomputer in 1964 to the exascale supercomputers we have today. In parallel, the chess- and checkers-playing AI programs of the early 1950s have given way to LLM-powered chatbots. Hand in hand, HPC and AI will no doubt continue to make leaps in the decades ahead.

This article was first published in the print version of Supercomputing Asia, January 2024.

Copyright: Asian Scientist Magazine.

Disclaimer: This article does not necessarily reflect the views of AsianScientist or its staff.

Pei Ling received her PhD in Biomedical Sciences from the Icahn School of Medicine at Mount Sinai, USA and her BSc in Biochemistry & Molecular Biology from Brown University, USA. She is currently a research fellow at the Institute of Molecular and Cell Biology and a freelance science writer.
