
China’s New Supercomputer Puts the US Even Further Behind


Source: African Scientific Institute

This week, China’s Sunway TaihuLight officially became the fastest supercomputer in the world. The previous champ? Also from China. What used to be an arms race for supercomputing primacy among technological nations has turned into a blowout.


The Sunway TaihuLight is indeed a monster: theoretical peak performance of 125 petaflops, 10,649,600 cores, and 1.31 petabytes of primary memory. That’s not just “big.” Former Indiana Pacers center Rik Smits is big. This is, like, mountain big. Jupiter big.
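To get a feel for those figures, a quick back-of-envelope calculation (using only the numbers quoted above) shows what each of those 10.6 million cores contributes on average:

```python
# Rough per-core arithmetic from the article's quoted specs.
PEAK_FLOPS = 125e15        # 125 petaflops, theoretical peak
CORES = 10_649_600         # total cores
MEMORY_BYTES = 1.31e15     # 1.31 petabytes of primary memory

per_core_gflops = PEAK_FLOPS / CORES / 1e9
per_core_mb = MEMORY_BYTES / CORES / 1e6

print(f"~{per_core_gflops:.1f} gigaflops per core")   # ~11.7
print(f"~{per_core_mb:.0f} MB of memory per core")    # ~123
```

Roughly 12 gigaflops and 123 megabytes per core: each individual core is modest, and the machine's power comes almost entirely from how many of them run at once.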


But TaihuLight’s abilities are matched only by the ambition that drove its creation. Fifteen years ago, China claimed zero of the top 500 supercomputers in the world. Today, it not only has more than everyone else—including the United States—but its best machine boasts speeds five times faster than the best the US can muster. And, in a first, it achieves those speeds with entirely Chinese-made chips.


Think of TaihuLight, then, not in terms of power but of significance. It’s loaded with it, not only for what it can do, but how it does it.


The Super Supercomputer


If you think of a supercomputer as a souped-up version of what you’re playing EVE Online with at home, well, as it turns out you’re not entirely wrong. “At one level they’re not very different from your desktop system,” says Michael Papka, director of the Argonne Leadership Computing Facility (home to Mira, the world’s sixth-fastest supercomputer). “They have a processor that looks very similar to the one in a laptop or desktop system—there’s just a lot of them connected together.”


Your MacBook, for example, uses four cores; Mira harnesses just under 800,000. It uses them to simulate and study everything from weather patterns to the origins of the universe. The faster the supercomputer, the more precise the models and simulations.


On that basis alone, TaihuLight is a singular accomplishment. Its 10.6 million cores are more than three times the previous leader, China’s Tianhe-2, and nearly 20 times the fastest U.S. supercomputer, Titan, at Oak Ridge National Laboratory. “It’s running very high rates of execution speed, very good efficiency, and very good power efficiency,” says University of Tennessee computer scientist Jack Dongarra. “It’s really quite impressive.”


If anyone’s qualified to say so, it’s Dongarra. He created the benchmark that TOP500, the organization that still ranks supercomputers today, first used to compare them in 1993, and he published the first independent evaluation of TaihuLight’s capabilities.


Still, hardware’s not everything. Because supercomputers run specialized tasks, they require specialized software. “You can use a factory as an example,” says Papka. “A lot of people are working on putting a car together at the same time, but they’re all working in a coordinated manner. People who write programs for supercomputers have to get all of the pieces working together.”
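Papka’s factory analogy can be sketched in a few lines. This toy example splits one job into chunks, lets several workers process them simultaneously, and then combines the partial results—illustrative only; real supercomputer codes coordinate via frameworks like MPI, not Python’s `multiprocessing`:

```python
# Toy sketch of coordinated parallel work: divide, compute, combine.
from multiprocessing import Pool

def partial_sum(chunk):
    # Each "worker on the line" handles its own piece of the job.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    # Divide the work among 4 cores (strided so chunks are equal-sized).
    chunks = [data[i::4] for i in range(4)]
    with Pool(4) as pool:
        # Coordination step: gather every partial result into one answer.
        total = sum(pool.map(partial_sum, chunks))
    print(total)  # same answer as the sequential sum(x * x for x in data)
```

The hard part at supercomputer scale is exactly this coordination—keeping millions of cores fed with work and merging their results—which is why the software matters as much as the hardware.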
 



TaihuLight passes that test, too. In fact, three of the six finalists for a prestigious high-performance computing award are applications built to run on TaihuLight. Aside from relatively slow memory—a conscious trade-off to save money and reduce power consumption—this rig is ready to go to work. “This is not a stunt machine,” says Dongarra. And it’s years ahead of anything the US has.


A Command Line Lead


TaihuLight is faster than anything scheduled to come online in the US until 2018, when three Department of Energy sites will each receive a machine expected to range from 150 to 200 petaflops. That’s ahead of where China is now—but two years is half an eternity in computer-time. That the lead has gotten so large galls some lawmakers for reasons both political and practical. Legislation exists calling for a supercomputer funding boost, but has spent the last year mired in the Senate.


“Massive domestic gains in computing power are necessary to address the national security, scientific, and health care challenges of the future,” says Rep. Randy Hultgren, a Republican from Illinois whose American Super Computing Leadership Act has twice been passed by the House of Representatives. “It is increasingly evident that America is losing our lead.” Meanwhile the DOE is working on innovating with the budget it has.
The other significant TaihuLight achievement stings US interests even more, because it’s political. China’s last champ, Tianhe-2, had Intel inside. But in February of 2015, the Department of Commerce, citing national security concerns—supercomputers excel at crunching metadata for the NSA and its foreign equivalents—banned the sale of Intel Xeon processors to Chinese supercomputer labs.


Rather than slow the rate of Chinese supercomputer technology, the move appears to have had quite the opposite effect. “I believe the Chinese government put more research funding into the projects to develop and put in place indigenous processors,” Dongarra says. “The result of that, in some sense, is this machine today.”


A Race Worth Winning


Broadly, it’s true that better supercomputers benefit the whole world, assuming scientists get to work on them. It doesn’t exactly matter what flavor the chips are. “On some level, it’s a trophy that you put on your mantel,” Dongarra says. “But what’s more important is what kind of science it does, what kind of discoveries you make.”


TaihuLight’s stewards tell Dongarra that they’re putting all that power toward advanced manufacturing, Earth-system modeling and weather forecasting, life science, and big data analytics. That sounds like a broad range, but it’s just a small slice of what supercomputers are capable of. “Each time we make an increase, we can add more science to the problem,” Papka says. “For the foreseeable future, until we can model the real world on a quark-for-quark basis, we’ll need more powerful computers.”


And those computers are coming—especially if the US gets serious about catching up.