This story was originally published by Bulletin of the Atomic Scientists and appears here as part of the Climate Desk collaboration.

Long before Vinton Cerf and Martin Hellman changed the world with their inventions, they were young assistant professors at Stanford University who became fast friends. Chances are that you relied on their innovations today.

Cerf is considered one of the “fathers of the internet” for having invented, along with Robert Kahn, the internet’s protocols and architecture, known as Transmission Control Protocol/Internet Protocol (TCP/IP).

Hellman is seen as a “father of public key cryptography” for having invented, along with Whitfield Diffie and Ralph Merkle, the technology that protects monetary transactions on the internet every day.

More than 50 years and two technological revolutions later, the friendship between Vint and Marty — as they know each other — endures. This is despite, or perhaps because of, their sometimes different views. You see, while they do not always agree, they both enjoy a good intellectual debate, especially when the humans they sought to bring together with their inventions face existential threats.

Not long after giving the world public key cryptography, Hellman switched his focus from encryption to efforts that might avoid nuclear war. “What’s the point of developing new algorithms if there’s not likely to be anybody around in 50 to 100 years?” Hellman recalls thinking at the time. He did not then envision that cybersecurity would also become an existential concern, or what it is today: a potential escalatory step toward nuclear threats that could lead to nuclear use.

Sometime after TCP/IP provided the foundation for the internet, Cerf joined Google as its chief internet evangelist and vice-president, where he leads efforts to extend the internet, via global policy development, to the billions of people around the world who still lack access. Among other projects, he has also had a hand in supporting NASA’s effort to build an interplanetary internet, which operates today.

The world has changed in dramatic ways since Cerf and Hellman met 50 years ago. Yet the foundation of their friendship — good intellectual debates — has not. On a recent private phone call with each other, the two friends discussed the National Academies of Sciences, Engineering, and Medicine’s project seeking to answer the question, “Should the U.S. use quantitative methods to assess the risks of nuclear war and nuclear terrorism?” While both agree that the U.S. needs to understand the risk of nuclear war, they disagree about whether a quantitative analysis is necessary. What follows are their thoughts, presented here for Bulletin readers.

Susan D'Agostino

The quantitative argument

Martin Hellman: When the risk is highly uncertain, how do you determine who’s right?


Is the risk of nuclear deterrence failing acceptable? Former secretary of defense James Schlesinger thought so. In a 2009 interview, he stated that the U.S. would need a strong nuclear deterrent “more or less in perpetuity.” In contrast, former secretary of defense Robert McNamara stated in the 2003 documentary The Fog of War that “the indefinite combination of human fallibility and nuclear weapons will destroy nations.” So, is the risk of failure acceptable or unacceptable? When nuclear risk is stated in generalities, it is difficult to determine who is right.

In the 1970s, such questions were, at best, of passing interest to me. My research in cryptography consumed me. That wasn’t all bad since it led to the invention of public key cryptography and the foundation of much of modern cybersecurity. But, slowly and painfully, I came to see that my over-focus on career and logic was killing my marriage. Then, in 1981, Ronald Reagan’s assumption of the presidency brought the nuclear threat into sharp focus.

As explained in a book that my wife, Dorothie, and I wrote, I realized that it wasn’t smart to neglect risks either to my marriage or to the planet (and there was a surprising connection between the two). I shifted my research from information security to international security, with a focus on the risk of nuclear deterrence failing. Almost as soon as I looked at that question with new eyes, I saw that the risk of nuclear devastation was unacceptably high. I have continued to develop that line of thinking and here I summarize my current perspective.

This article uses a simple quantitative estimate to show that the risk of a full-scale nuclear war is unacceptably high, and that a child born today may well have less-than-even odds of living out his or her natural life without experiencing the destruction of civilization in a nuclear war.

Some, including my friend and colleague, Vinton Cerf, prefer a qualitative analysis for reasons he explains in his companion article. Others argue that a quantitative estimate of the risk of a full-scale nuclear war is not possible because such an event has never occurred. They are right in the limited sense that it is not possible to determine whether the risk of a nuclear war is one per cent per year or two per cent per year. But it is possible to place upper and lower bounds on it.

If someone were to propose that the risk were one per cent per day, I’d rule that out as far too high because then nuclear war would be almost certain within the next year. Similarly, if someone were to suggest that the probability were one in a million per year, I’d consider that too low because it would imply that nuclear deterrence as currently practised could work for approximately a million years. Considering the historical record of nuclear near-misses, some of which are detailed in Vint’s article below, a million years is far too optimistic.
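To make the arithmetic behind those two extremes explicit, here is a minimal back-of-the-envelope sketch in Python. Treating each day or year as an independent trial with a constant failure rate is my own simplifying assumption, not part of the argument above.

```python
# Back-of-the-envelope arithmetic for the two extreme rates discussed above.
# Assumes each period is an independent trial with a constant failure rate.

def cumulative_risk(p_per_period: float, periods: int) -> float:
    """Probability of at least one failure over the given number of periods."""
    return 1 - (1 - p_per_period) ** periods

# At one per cent per day, nuclear war would be almost certain within a year.
print(cumulative_risk(0.01, 365))  # ~0.97

# At one in a million per year, the expected time to a failure (1/p)
# is roughly a million years.
print(1 / 1e-6)  # 1,000,000 years
```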

(Of course, if humanity survives for another million years, major events will change the risk appreciably. Here, I seek only to estimate the risk over the next year, during which time such changes will be minimal. Extrapolating that annualized estimate to the next several decades also is reasonable, especially given the estimate’s large uncertainty bounds.)

These two extreme cases — nuclear catastrophe either in the next year or in a million years — establish initial upper and lower bounds on the risk.

Next, I sought to narrow the range. In my estimation, and based on my extensive study of nuclear risks, 10 per cent per year is also an upper bound since we have survived approximately 60 years of nuclear deterrence without the use of any nuclear weapons in warfare, much less a full-scale exchange. Similarly, 0.1 per cent per year seems too low because that would imply that current policies could be continued for approximately 1,000 years before civilization would be expected to be destroyed.
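The implicit arithmetic behind these tighter bounds can be sketched the same way; the constant, independent annual risk is again my simplifying assumption.

```python
# Why roughly 60 failure-free years argue against a 10 per cent annual risk,
# and why 0.1 per cent per year implies on the order of 1,000 years of survival.

def survival_probability(p_per_year: float, years: int) -> float:
    """Probability of getting through the given number of years with no failure."""
    return (1 - p_per_year) ** years

print(survival_probability(0.10, 60))  # ~0.002: surviving 60 years would have required great luck
print(1 / 0.001)                       # expected time to failure at 0.1% per year: ~1,000 years
```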

Over that time period, and subject to the above caveat about the risk changing over time, I extrapolate from past events and estimate that we would expect on the order of 10 major crises comparable to the 1962 Cuban missile crisis; 100 lesser crises comparable to the 1995-96 Taiwan Strait Crisis, the 2008 Russo-Georgian War, or the ongoing conflict in Ukraine that started in 2014; plus a large number of other events that could lead to nuclear threats and therefore, potentially, to nuclear use.

If you agree with my reasoning that the risk of a full-scale nuclear war is less than 10 per cent per year but greater than 0.1 per cent per year, that leaves one per cent per year, the geometric midpoint of those bounds, as the order-of-magnitude estimate, meaning that it is accurate only to within a factor of 10. By the same logic, that one per cent per year estimate really spans a range from roughly 0.3 to three per cent per year, half an order of magnitude in either direction.

A risk of one per cent per year would accumulate to worse-than-even odds over the lifetime of a child born today. Even if someone were to estimate that the lower bound should be 0.1 per cent per year, that would be unacceptably high — that child would have an almost 10 per cent risk of experiencing nuclear devastation over his or her lifetime.
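Both the order-of-magnitude midpoint and the lifetime odds can be checked with a short sketch. The 90-year lifespan is my illustrative figure, and the years are again treated as independent trials at a constant rate.

```python
import math

def lifetime_risk(p_per_year: float, years: int) -> float:
    """Chance of at least one full-scale nuclear war over a lifetime,
    treating each year as an independent trial (a simplification)."""
    return 1 - (1 - p_per_year) ** years

# The order-of-magnitude midpoint of the 0.1%-10% range is their geometric mean,
# and half an order of magnitude on either side gives the roughly 0.3%-3% band.
midpoint = math.sqrt(0.001 * 0.10)
print(midpoint)                                            # 0.01, i.e. one per cent per year
print(midpoint / math.sqrt(10), midpoint * math.sqrt(10))  # ~0.003 to ~0.03

# Compounded over an assumed 90-year lifetime:
print(lifetime_risk(0.01, 90))   # ~0.60: worse-than-even odds
print(lifetime_risk(0.001, 90))  # ~0.09: approaching a 10 per cent lifetime risk
```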

The above arguments show the importance of not only estimating the risk of a full-scale nuclear war, but also establishing a maximum acceptable level for that risk. If someone argues that 0.1 per cent per year or some other value is an acceptable level of risk, society can then judge whether or not it agrees.

This quantitative approach avoids the ambiguity of non-quantitative arguments such as Schlesinger’s and McNamara’s. When the risk is highly uncertain, how do you determine who’s right? A quantitative approach, even to an order of magnitude as done here, requires both proponents and opponents of nuclear deterrence to justify their positions in ways that let others more easily decide whom to believe.

I hope you will agree with either my quantitative approach or Vint’s qualitative approach, both of which conclude that the risk of a nuclear war is unacceptably high and risk reduction measures are urgently needed. For those who accept neither of our approaches, I have two questions:

First, what evidence supports the belief that the risk of nuclear deterrence failing is currently at an acceptable level?

Second, can we responsibly bet humanity’s existence on a strategy for which the risk of failure is totally unknown?


The qualitative argument

Vinton Cerf: Numbers are not needed for people to see the unacceptable risk we face.

Although I am not an expert on nuclear conflict, my good friend, Martin Hellman, has drawn me into a serious discussion about the risk of nuclear war. Along with Robert Kahn, I am the co-inventor of the internet’s architecture and core protocols. I would prefer for humanity to endure and not obliterate itself. Both Hellman and I consider nuclear deterrence — threatening to destroy civilization in an effort to preserve the peace — to be untenable as a long-run strategy. But we differ on the need to quantify the risk of deterrence failing. In these two companion articles, we explain our different thinking.

I prefer a qualitative approach because most people relate to it better. Many are confused by mathematical arguments. While numbers do not lie, some human beings do. Quantitative estimates run the risk, real or perceived, of being twisted to support whatever conclusion is desired.

Instead, I prefer to rely on qualitative arguments like one that Marty has devised: Imagine that a man wearing a TNT vest were to sit down next to you and tell you that he wasn’t a suicide bomber. Rather, there are two buttons for setting off his explosive vest: one was in the White House with Trump for the last four years and was recently handed to Biden; the other is with Putin in Moscow. You’d still get away as fast as you could! Why, then, has society “sat here” for decades assuming that, just because the Earth’s explosive vest has not yet gone off, it never will? A qualitative argument like that will convince far more people than any mathematical reasoning.

But, most fundamentally, I prefer a qualitative approach because there are too many examples of sheer luck averting the use of nuclear weapons. Numbers are not needed for people to see the unacceptable risk we face.

As one example, during the 1962 Cuban missile crisis, American destroyers attacked three Soviet submarines near Cuba and forced them to surface. No American, not even President Kennedy or his military advisers, knew that each of those submarines carried a nuclear torpedo. According to an officer on one of those submarines, its captain gave orders to arm the nuclear torpedo, but was talked down. The captain’s order makes more sense when one remembers that the last he had heard before submerging was that World War III seemed imminent; he was under attack; and surfacing would be a humiliating defeat. Fortunately, luck won out: the captain suffered humiliation, but civilization was not devastated.

Some may object that such Cold War incidents should not be used as guides in today’s very different world. Yet, in June 1999, at the start of NATO’s peacekeeping mission in Kosovo, an American general gave orders that his British subordinate feared were extremely dangerous. Their memoirs agree that a heated argument ensued, which ended with the British general telling the American, “Sir, I’m not starting World War III for you.”

More recently, on Jan. 13, 2018, Hawaiians received the following emergency alert: “BALLISTIC MISSILE THREAT INBOUND TO HAWAII. SEEK IMMEDIATE SHELTER. THIS IS NOT A DRILL.” Fortunately, it was a false alarm and did not cause a response that might have been misinterpreted by our adversaries.

As of September 2020, it is estimated that there are 13,410 nuclear weapons in the world, with 91 per cent of those divided between the United States and Russia. Speaking purely qualitatively, I see no scenario in which so many weapons are needed, even under the theory of Mutually Assured Destruction, or MAD.

Marty argues that risk-reduction efforts will languish until the risk of nuclear war is shown to be clearly unacceptable. Society’s lack of action, or even concern, might seem to justify his thinking. But, even if a study were to produce an unacceptably high numerical risk value, some would still dispute the numbers or find them unintuitive.

As I read Marty’s analyses, I am struck by the thought that any positive risk of a global nuclear exchange is simply unacceptable. The mere existence of nuclear weapons and the possibility that their availability might lead to their use seems self-defeating. As Ronald Reagan and Mikhail Gorbachev stated, “A nuclear war cannot be won and must never be fought.”

We would do well to ask ourselves how we might accomplish risk reduction regardless of the quantified risk. If sufficient people desire risk reduction, regardless of their agreement or disagreement as to its quantification, we might succeed in making the world a safer place. Is there any other logical course of action?

