Pitfalls of past predictions
Around one minute before the 2011 Great East Japan Earthquake was felt, the nationwide Earthquake Early Warning system—made possible by over 1,000 seismometers scattered across the country—notified millions of the impending quake. Minutes later, at 2:49 PM, the Japan Meteorological Agency (JMA) issued the first tsunami warning.
Despite these timely warnings for both hazards, the JMA critically underestimated the earthquake's magnitude and the tsunami's height. The agency initially estimated the magnitude at 7.9 and the tsunami heights at three to six meters. Based on these estimates, some residents of Iwate Prefecture chose to remain in place, trusting that the ten-meter seawalls would protect them. Only 30 minutes after the earthquake was the tsunami height revised upward to more accurate estimates of six to ten meters.
Ultimately, the seawalls offered little protection against the towering tsunami, whose waves streamed over and partially damaged the walls. By then, most power and communications systems had failed, and the revised information did not reach most of the public. Indeed, subsequent surveys by the Japanese government showed that nearly half of the population in the affected areas did not receive any information about the tsunami, and up to 70 percent did not receive the revised estimates of the tsunami heights.
Historically, coastal tsunami predictions were made by matching the observed earthquake and tsunami conditions against databases of scenarios simulated in advance, selecting the most similar precomputed scenario, and then gradually refining the prediction as offshore tsunami observations poured in. These observations come from ocean-bottom pressure gauges connected to seafloor cables or to sea-surface buoys, which provide real-time data before tsunamis reach coastlines.
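The database-matching step can be sketched in a few lines. The snippet below is a minimal illustration of the idea, not the JMA's actual system: all scenario values, gauge readings, and names are invented for the example, and the match is a simple least-squares comparison of offshore gauge signatures against precomputed scenarios.

```python
import numpy as np

# Hypothetical precomputed database: each row holds the offshore gauge
# signature (peak wave heights, in meters, at three gauges) of one
# simulated tsunami scenario. All numbers are illustrative.
scenario_db = np.array([
    [0.2, 0.5, 0.3],   # scenario 0: small event
    [1.1, 2.4, 1.8],   # scenario 1: moderate event
    [3.0, 6.2, 4.9],   # scenario 2: large event
])
# Precomputed coastal tsunami height (meters) for each scenario.
coastal_heights = np.array([0.4, 2.1, 7.5])

def predict_coastal_height(offshore_obs):
    """Select the precomputed scenario whose offshore signature best
    matches the incoming gauge readings (least-squares misfit) and
    return that scenario's precomputed coastal height."""
    misfit = np.sum((scenario_db - offshore_obs) ** 2, axis=1)
    best = int(np.argmin(misfit))
    return best, float(coastal_heights[best])

# As real-time offshore data arrive, the match is simply re-run,
# which is how the prediction gets gradually adjusted.
scenario, height = predict_coastal_height(np.array([2.8, 6.0, 5.1]))
print(scenario, height)  # -> 2 7.5
```

The appeal of this approach is speed: the expensive simulations happen ahead of time, and the real-time step is only a lookup, which is also why its accuracy is bounded by how well the precomputed scenarios cover the actual event.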
However, successfully consolidating and analyzing all these data is easier said than done.
“In the case of a megathrust earthquake, it is difficult to accurately estimate the tsunami source because of limited observation data,” noted Professor Yusuke Oishi, research principal at Fujitsu’s Artificial Intelligence Laboratory Research Unit, in an interview with Supercomputing Asia. “For example, several different results of tsunami source analysis were reported for the 2011 Great East Japan Earthquake.”
To accurately predict an incoming tsunami, Oishi added, models need highly detailed inputs for the target area, such as the specific locations and even shapes of buildings, as well as high spatial resolution. To forecast a tsunami's impact, simulations must also account for factors like the time between the earthquake and the tsunami's arrival.
“Since the analysis is performed at high resolution for a long time, the computational amount is large,” he added.
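A rough back-of-envelope calculation shows why. All of the numbers below (domain size, grid spacing, time step, per-cell cost) are assumed purely for illustration and are not Fujitsu's or the JMA's figures, but they convey the scale of resolving individual buildings over hours of simulated time.

```python
# Illustrative arithmetic: cost of a high-resolution, long-duration
# coastal inundation simulation. All parameters are assumptions.
domain_km2 = 100.0        # assumed coastal domain: 10 km x 10 km
cell_m = 3.0              # assumed grid spacing fine enough for buildings
cells = domain_km2 * 1e6 / cell_m**2   # number of grid cells

dt_s = 0.1                # assumed time step for numerical stability
duration_s = 6 * 3600     # simulate six hours of inundation
steps = duration_s / dt_s

flops_per_cell_step = 100  # assumed cost of updating one cell once
total_flops = cells * steps * flops_per_cell_step
print(f"{total_flops:.2e}")  # -> 2.40e+14
```

Even with these modest assumptions, the count lands in the hundreds of trillions of floating-point operations for a single scenario, and shrinking the grid spacing or widening the domain multiplies it further, which is why such runs were tied to large-scale supercomputers.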
Accordingly, such simulations and analyses relied either on large-scale supercomputers, such as Fugaku's predecessor, the K computer, or on crude database searches, and their sheer scale made them difficult to implement and operate under real-life conditions.