The Never Ending Quest To Simulate Doomsday

How scientists learned to stop worrying and simulate the bomb.

By Eric Betz | Monday, August 13, 2018

The bomb arrived in pieces. Workers assembled the device behind steel-reinforced concrete walls in the desert, mating radioactive materials with high explosives. It was called Kearsarge.

And on a hot August day in 1988, a crew lowered the bomb through a hole drilled thousands of feet into the Nevada Test Site, then entombed it beneath millions of pounds of sand.

Thirty miles away, Los Alamos Director Siegfried Hecker sat nervously in the control room. Seven top Soviet nuclear scientists watched intently. What if the bomb fizzled? Hecker wondered. What would happen to America’s nuclear deterrent?

Officials had negotiated this Joint Verification Experiment for years. The United States and Soviet Union had long conducted test explosions of the bigger weapons in their arsenals, both to make sure they really worked and as a show of force. The adversaries were willing to permanently stop blowing up the biggest bombs, but first scientists needed a way to verify violations. Each country would test its monitoring techniques on the other side’s bomb. If today’s nuclear test went well, it might be among the last.

The detonation order went out. Kearsarge exploded with 10 times more energy than Hiroshima. Vital signs from the bomb raced up cables as they vaporized. One hundred thousand raw data points fed into computers, eventually confirming theory with reality. The earth shook. Ninety miles away in Las Vegas, lamps danced over pool tables at the Tropicana.

An above-ground nuclear bomb test in 1957.
Omikron/Science Source

Oh, thank God, Hecker thought. Later, his Soviet counterpart congratulated him over lunch. Their eyes met. It was like looking in the mirror. “The world . . . would never be the same,” says Hecker, whose job was first held by Robert Oppenheimer, father of the atomic bomb.

In the years that followed, the Cold War would end, and so would the days of shaking the desert.

In 1992, President George H.W. Bush reluctantly signed a nine-month moratorium on nuclear weapons tests. For generations, mutual assured destruction had been the cornerstone of military might. Testing showed the world that a nuclear strike, by anyone, would be suicide. Without it, scientists needed a new way to prove America’s arsenal was safe and reliable. They had intended the bombs to last only 10 to 15 years — and some were already decades old. And because scientists had long depended on explosive tests over theoretical models, they didn’t fully understand the physics of the bombs. Now they’d have to predict how aging radioactive components might change the performance of a geriatric weapon.

High-performance computers had been a staple at weapons labs since the Manhattan Project in the 1940s. So to scientists, they were the obvious way forward. If they couldn’t blow up nukes anymore, scientists would simulate the detonations. But first, they’d need computers 10,000 times faster than any the world had seen. The labs that invented the Atomic Age had to fast-forward the digital age.

How A Nuke Works
Nuclear warheads are like avocados. They’re similarly shaped with an inner core, called a pit. The bomb’s typically grapefruit-sized pit is often hollow and lined with plutonium. Instead of delicious green fruit surrounding it, the warhead has high explosives aimed inward, to create an implosion. This squeezes the plutonium pit until it’s so dense that particles start smashing into plutonium nuclei, literally splitting atoms and unleashing their incredible energy. That simple design worked for Fat Man, detonated over Nagasaki in 1945. But today’s stockpiled warheads are thermonuclear devices, commonly called H-bombs because they use hydrogen. These have a secondary stage — like a second pit next to the plutonium pit. As the first pit erupts in a nuclear explosion, its radiation bounces off the hardened shell of the second pit and reflects back inward. The first blast ignites nuclear fusion within the secondary pit, making the blast much bigger and more powerful.
Alex Wellerstein/Nuclearsecrecy.com
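The arithmetic behind that energy release fits in a few lines. As a rough sketch — assuming the textbook figure of about 200 MeV released per fission, and the standard definition of one kiloton of TNT as 4.184 × 10¹² joules — you can estimate how little plutonium actually has to split:

```python
# Back-of-envelope: how much plutonium must fission to yield one kiloton?
# Assumptions: ~200 MeV per fission (a standard textbook figure),
# 1 kiloton of TNT = 4.184e12 joules.
MEV_TO_J = 1.602e-13
E_PER_FISSION = 200 * MEV_TO_J        # joules per split nucleus
KILOTON = 4.184e12                    # joules in one kiloton of TNT

fissions = KILOTON / E_PER_FISSION    # nuclei that must split
moles = fissions / 6.022e23           # divide by Avogadro's number
grams = moles * 239                   # Pu-239 molar mass, grams per mole

print(f"~{grams:.0f} g of plutonium fissioned per kiloton")
```

By this estimate, only about 50 grams of fully fissioned plutonium delivers a kiloton — which is why a grapefruit-sized pit carries city-destroying energy.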

And now, amid rising geopolitical tensions, nuclear weapons designers are once again trying to spark a new technological revolution. The U.S. is spending $1 trillion to modernize its aging nuclear weapons arsenal, from subs and jets to revitalized warheads, with billions more dollars spent on pushing the limits of supercomputing. Old competitions have been renewed. And a new rival has emerged: China. As the two superpowers race to build the first machine as powerful as the human brain, they’ll also help improve weather forecasts and medical treatments. But, as in the past two decades, that new technology will emerge in service of the true goal: refurbishing and maintaining our nuclear bombs.

A Boomless Bomb

America’s modern nuclear program is the brainchild of an engineer-turned-bureaucrat named Vic Reis. He ran the Defense Advanced Research Projects Agency (DARPA) — the military’s research agency — under President Bush, and then in 1993 President Bill Clinton tapped him to oversee defense research at the Department of Energy. With the former Soviet Union in shambles, a debate raged over the bomb’s future. The weapons labs and military wanted to resume testing, but others wanted to extend the ban forever.

In an unassuming memo, Reis proposed a middle ground. The way he saw it, America had already exploded 1,000 nuclear bombs. A few more wouldn’t reveal much about existing weapons. The important thing, to stay ahead militarily, was to create a program that truly challenged the labs. Maintaining a deterrent would require scientific superiority. Reis called the new program Science Based Stockpile Stewardship. If nuclear weapons research wasn’t based on physical tests, then what?

To figure it out, Reis rallied top scientists and directors from the three weapons labs — Los Alamos and Sandia in New Mexico, and Lawrence Livermore in California. They gathered around a whiteboard and started to build the new testless program. Funding levels would stay the same as they were with nuclear testing, roughly $4 billion to $5 billion per year, but instead they’d take turns building the world’s largest computers and only pretend to blow up bombs. As a result, all our nuclear tests since 1992 have been simulated ones.

But not everyone approved. Many old-school scientists didn’t think computer models could replace tests. The very idea violated basic notions of the scientific method — hypothesize and test. “Our weapons designers were extremely skeptical, even to the point of being very negative,” Hecker says. “As the director, I had to come in and say, ‘Well that’s just too damn bad.’ ”

A field of craters left over from underground detonations in Yucca Flat, Nevada, demonstrates America’s propensity for nuclear testing during the Cold War.
Los Alamos National Laboratory/Science Source

Bob Webster, who runs Los Alamos’ weapons program, says real-life testing had made it comparatively easy to study bombs at the right temperature, density, pressure and more. So even with computer-only blasts, they’d need physical experiments — including explosives and multibillion-dollar laser facilities — to feed real numbers into their simulations, and to use as a check on their results.

The approach was a daunting one. “It wasn’t clear it would work,” Webster says. The weapons labs would be tasked with certifying the stockpile was healthy each year. If a specific weapons system had major problems, the nation might have to retire it, or scramble to rebuild the nuclear facilities that make it — even resume testing in an extreme scenario.

There were two major hurdles: We still didn’t really understand plutonium, and we didn’t have enough computational horsepower.

Survival 101

Hollywood loves to blow up its hometown in disaster flicks. But when the RAND Corporation, a non-profit think tank, studied the consequences of a real-world nuclear attack, it found the aftermath extends far beyond Tinseltown.

A nuclear blast at the nearby Port of Long Beach — specifically, a 150-kiloton explosion — kills some 60,000 people instantly around the critical global shipping center. But chaos ensues as 6 million people flee LA, global shipping finds new routes, insurance providers go under and the West Coast searches for new gasoline supplies.

Zach Bush

Even at the Cold War’s peak — with John F. Kennedy suggesting families build fallout shelters — many Americans doubted we’d ever be nuked. And if it happened, most assumed we’d die in the global Armageddon.

So, how do you prepare people for an improbable and seemingly unsurvivable disaster? That’s still a problem for emergency managers, says Alex Wellerstein, a nuclear bomb historian at the Stevens Institute of Technology in New Jersey.

A couple of years ago, he pitched a project to re-examine how to talk to Americans about preparedness and the risk of nuclear threats. At first, his team struggled to attract interest — until 2017. After North Korea’s missile tests, suddenly even the Federal Emergency Management Agency was asking for training help. But Americans — or their government agencies — still aren’t as prepared as they should be.

Wellerstein hopes his study can help. The goal isn’t to scare people, he says. It’s to see nuclear bombs as real things rather than Hollywood stand-ins for the apocalypse. And that includes telling people that if you’re beyond the main blast radius, the biggest immediate threats you’ll face are windows breaking and things falling off the ceiling. “Going under your desk will probably [improve] your chances of surviving considerably,” he says.

It’s Elemental

Plutonium doesn’t exist in nature. Humans invented it in the 1940s, and harnessing the deadly metal was perhaps the Manhattan Project’s greatest challenge. Oppenheimer called it a “terrible substance.” It’s such a tricky material to work with, his scientists struggled even to agree on its density. “Plutonium is by far the most complicated element on the periodic table,” Hecker says.

No one knew what happens to plutonium as it ages, and that meant no one knew how long our nuclear weapons would work. It’s not just a matter of replacing it, either, because America’s only plutonium factory stopped production in 1989 after toxic waste leaks. And plutonium wasn’t the only thing getting older. Warheads include a vast array of complex metals and electronic parts, any one of which could cause problems with age. Weapons scientists charged with making sure the old bombs still work compare their situation to storing a vintage car for 40 years without ever starting it, but still making sure it’d work on the first turn of the key.

In the 1990s, researchers realized that answering all their questions would require significant advances in fundamental materials science and physics. To do that, they’d need better computers to test how those aging components alter a nuclear explosion — and they had to be fast enough to spit out the answers in a useful time frame, too.

These machines would have to be staggeringly powerful compared with existing technology. Moore’s law, formulated in 1965, famously holds that computing power doubles roughly every 18 months. Researchers estimated they’d need to double that pace.

Technology needed a new direction. Luckily, researchers already had a viable alternative.

Getting Up to Speed

For decades, supercomputers had solved one problem at a time. “We like to think of a pipeline,” explains supercomputer pioneer Jack Dongarra of the University of Tennessee, Knoxville. “You start at one end, and you go along a line until you complete it.”

When Reis directed DARPA, the agency was pumping money into a revolutionary architecture called massively parallel computing. As opposed to that single pipeline, parallel processing tackles multiple tasks at the same time. Each gets fed to an individual processor that solves its designated chunk of the overall question. (See “Parallel Powers”)
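The divide-and-conquer idea can be sketched in a few lines of Python. This is a toy illustration — the function names are made up for the example, threads merely stand in for the thousands of independent processors in a real machine, and weapons codes split physics equations rather than sums — but the pattern of chunk, solve, combine is the same:

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    """One worker's designated chunk of the overall problem."""
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=4):
    # Split the problem into one chunk per worker...
    step = n // workers
    chunks = [(i * step, n if i == workers - 1 else (i + 1) * step)
              for i in range(workers)]
    # ...solve the chunks concurrently, then combine the partial answers.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

print(parallel_sum(1_000_000))  # same answer as one long "pipeline" sum
```

The payoff comes when each chunk is expensive: four processors grinding on four chunks finish in roughly a quarter of the single-pipeline time.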

“There had always been this feeling that people could do multiple things simultaneously, and that would let you do things faster,” says Dongarra, who also co-founded the “Top500” list that serves as the semiofficial ranking of the world’s fastest computers. But as the Cold War ended, massively parallel computers were largely confined to university and industry research labs, smaller in scale than the government needed.

“There weren’t any problems that said, ‘I’ve got to have a massively parallel computer by X years,’ ” Reis says. Faced with just such a problem in the form of the aging nuclear stockpile, he took matters into his own hands and launched the Accelerated Strategic Computing Initiative (ASCI) in 1995.

Previous supercomputers were made with custom-ordered parts, but ASCI would build machines entirely from common computer chips and components, available off the shelf. The key was in getting them all to work together to solve problems simultaneously. The decision to use off-the-shelf parts proved revolutionary, says Horst Simon, deputy director of Lawrence Berkeley National Laboratory. It allowed tech companies like IBM and Intel to sell government-funded advances back to the public. “Eventually, the technological transition [in civilian computers] would have happened,” he says. “But it wouldn’t have happened as rapidly as it did without ASCI’s investment.”

In 1996, Intel finished the project’s first supercomputer, called ASCI Red. It was the first machine to break the so-called teraflop barrier by making 1 trillion calculations per second. With it, Sandia easily owned the world’s fastest computer.

Four years later, IBM’s ASCI White at Lawrence Livermore surpassed ASCI Red. And for another decade, that’s how it went. Along with Japan, the national labs traded bragging rights for the world’s fastest computer, until China suddenly took the lead in 2010.

It was a Sputnik moment for supercomputing. The U.S. government tried to slow China’s dominance by banning U.S. chip sales to such supercomputer projects, citing their use in “nuclear explosive activities.” But China spent billions deploying its own tech, until just one Chinese supercomputer — the Sunway TaihuLight — could nearly outperform every U.S. weapons lab machine combined. The U.S. finally reclaimed the top spot this June with Oak Ridge National Laboratory’s new Summit supercomputer. (See “The Chain Saw, the Beaver and the Ant”)

Despite the back and forth, scientists say it’s not a race. The world’s big questions — like, will our nukes still work? — simply demand faster and faster supercomputers. It’s a natural progression, not a competition. However, those same experts will point out the importance of being first. And the possibilities go beyond improved nuclear simulations.

“These computers help build things and help answer questions and help look into the future,” Dongarra says. “If you have the fastest computer, you’ll be able to do those things with much faster turnaround.”

The Chain Saw, the Beaver and the Ant

Supercomputer history can fit into three eras, says Horst Simon, deputy director of Lawrence Berkeley National Laboratory. “If you want to chop down the Amazon rainforest, you can have one chain saw, 100 beavers, or you can use 1 million ants,” he says.

The chain saw represents early supercomputers — expensive and high-powered, but capable of felling only one tree at a time. Beavers take longer per tree, but chew on 100 simultaneously, so they’re ultimately more productive. Those are the massively parallel supercomputers, which dominated until recently.

Now the industry is pushing toward ants — the tiny components of exascale computers. They’re abundant, use little energy and can accomplish the task more quickly.

One Billion Billion

That’s why the two superpowers are racing toward the next step for supercomputers, called exascale. These computers will make 1 billion billion calculations — 1,000,000,000,000,000,000 — every second. The U.S. should have its first exascale computer online at Argonne National Laboratory near Chicago in 2021, and China will likely also have one around the same time. Either country could be first.
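A billion billion is hard to picture. A two-line estimate puts it in human terms (the world population here is a rough 2018 figure, assumed for the sake of the comparison):

```python
EXA = 10**18                      # exascale: calculations per second
WORLD_POP = 7.6e9                 # rough 2018 world population (assumption)
SECONDS_PER_YEAR = 365 * 24 * 3600

# If every person on Earth did one calculation per second,
# how long would one exascale-second of work take them?
seconds = EXA / WORLD_POP
years = seconds / SECONDS_PER_YEAR
print(f"about {years:.1f} years")
```

In other words, an exascale machine does in one second what the entire human population, calculating nonstop, would need more than four years to finish.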

But over the past five years, the Top500 list has revealed a troubling trend: Supercomputers aren’t improving as quickly. For five decades, chip components have shrunk by half every 18 to 24 months. Now Moore’s law may finally end. As chips get smaller, running them gets more expensive. Researchers say we’ve reached a technological turning point like the one that transformed pipeline processors into massively parallel machines two decades ago.

It can’t come soon enough for nuclear weapons scientists and researchers. John Sarrao oversees some 700 nuclear weapons researchers as associate director for theory, simulation and computation at Los Alamos. He says scientists already have problems that only an exascale computer can solve.

For Sarrao, understanding plutonium aging is high on the list. In 2007, a major report suggested the plutonium in warheads should age gracefully, lasting 85 to 100 years. But not all scientists agree. Hecker, now a Stanford University professor, led some of the last major research into plutonium aging around the time of that report. He disagreed with its conclusions, and now the former lab director is volunteering at Los Alamos, working once again on plutonium aging.

Sarrao hopes exascale computers can help. Current computers still can’t run highly detailed plutonium models that capture the element’s microstructure.


Exascale machines won’t just build better bombs. They’ll simulate extremely fine-scale phenomena, like the intricacies of ocean currents or blood flow through the body. Supercomputers even underpin weather models, so better tech means better forecasts. These advances eventually reach the public. Twenty years ago, Intel’s ASCI Red reigned as the world’s fastest computer with its trillion calculations per second. Last year, Intel introduced a desktop computer chip with that much power. If history repeats, the decades to come will see an exascale computer in every pocket.

“Exascale is not the end of the race,” Dongarra says. As long as the Nevada Test Site remains quiet, and simulated bombs are preferred to real ones, scientists and governments will make sure computing power keeps improving. “Who has the fastest computer is something like a trophy on a mantel,” he says. “The real question is what kind of science are we doing on these things.”

Oak Ridge National Laboratory unveiled its Summit supercomputer, currently the fastest in the world, in June.
Oak Ridge National Laboratory
