This story was originally published by The Bulletin of the Atomic Scientists and appears here as part of the Climate Desk collaboration.

Italian physicist Enrico Fermi had a knack for back-of-the-envelope calculations. In a famous lunch-time conversation in 1950, Fermi used his knowledge of astronomy and probability to highlight a problem: If intelligent life exists elsewhere in the galaxy and if long-distance space travel is achievable, then Earth should have been visited by aliens by now.

So, Fermi asked his colleagues: “Where are they?”

Despite tantalizing hints, such as the inexplicable sightings by US Navy pilots recently reported in the New York Times, there is still no reliable evidence of alien life, either on our humble planet or elsewhere in this infinite universe.

The discrepancy between the expectation of intelligent alien life and the absence of any evidence for it is known as the Fermi Paradox.

The paradox is made more acute by recent studies that show that there are billions of planets in this galaxy alone, and that water is common in our solar system and beyond. Given these facts, it seems likely that there are many worlds where life could arise, and that some alien life forms will have advanced to the point where they can travel between the stars.

So, to reiterate: Where are they? And why haven’t they contacted us?

There are many conceivable answers to the Fermi Paradox. The Zoo Hypothesis posits that aliens have accorded Earth a park-like status because human beings are unready for contact. Star Trek fans will recognize this as the Prime Directive, which prohibits Starfleet crews from revealing themselves to species that have not yet achieved warp-speed technology.

Then there is the Rare Earth Hypothesis, which claims that the conditions necessary for life are extremely uncommon. For example, while there are many planets that exist within the Goldilocks Zone around a star—where temperatures are neither too hot nor too cold, but just right for liquid water, the presence of which could be a precondition for life—few such planets have a large single moon generating tides and therefore intertidal zones like those that hosted Earth’s first lifeforms.

Another possibility is the Great Filter (a term coined by economist Robin Hanson of George Mason University), which postulates that all life has to make it past an extremely exacting challenge that renders survival improbable. For example, abiogenesis—the gradual process whereby the first self-replicating molecules become increasingly complex as the result of randomly occurring chemical reactions—might be an extremely low-probability event. No matter how favorable the other conditions for life might be, the chances of it ever starting might be so very low as to make what happened here on Earth nearly unique.

But it is also possible that abiogenesis is a common event. Indeed, life appeared on Earth almost as soon as the surface was cool enough to allow for liquid water.

Yet another possible reason for the absence of intelligent life out there—and hence for why no one’s contacted us—involves asteroid strikes, which are random but inevitable events in a universe full of gravity and orbits. Sixty-six million years ago, an asteroid strike changed Earth’s climate, wiping out the mighty dinosaurs (except for the birds) but sparing a few of the small but adaptable mammals.

Self-inflicted climate change has frequently been identified as a possible Great Filter. According to this theory, any intelligent lifeform will consume vast amounts of energy as it develops technologies. Since harnessing energy always results in some kind of pollution, the planet’s ecosystem will eventually be degraded to the point where it imperils the polluting species.

With this in mind, consider anthropogenic climate change. Our species has increased Earth’s average temperature by only slightly more than 1 degree Celsius (1.8 degrees Fahrenheit), yet we are seeing increasingly frequent and severe floods, droughts, and forest fires, as well as melting sea ice, crumbling glaciers, sea level rise, ocean acidification, and widespread biodiversity loss.

With atmospheric carbon dioxide levels at 415 parts per million and rising, we are on track to shoot far past the 2-degree Celsius increase (3.6 degrees Fahrenheit) that scientists have identified as the safe outer limit for preserving our civilization (and some researchers warn that even that 2-degree figure is far too optimistic to be considered safe).

Add in all the known and unknown feedback loops and tipping points—such as the possible release of the vast stores of methane trapped in the now-melting Arctic—and the future of our species is looking rather bleak.

Somewhere out there in the vastness of space, other forms of intelligent life likely faced similar problems; some might have been able to develop cleaner energy sources from the start, or switch to them before calamity struck. There is still an outside chance that humanity could do this—though we are running out of time, fast.

Still, the universe will always be a dangerous place, and this opens up another possible answer to the Fermi Paradox: every species may have to surmount not one, but a series of pass-fail survival tests before it becomes advanced enough to be detectable on a galactic scale. Taken together, these tests could reduce the statistical odds of long-term survival to almost zero. This insight is based on Lotka’s Curve, named after A.J. Lotka, the early twentieth-century Polish-American mathematical biologist who identified it.

Lotka’s Curve explains why in any specialized field—from scientific publishing to aerial combat to the game of golf—only a very few individuals consistently win, while everyone else mostly loses. This is because contenders are constantly being put through a series of trials with binary outcomes. Your paper is published, or it is not. A golfer wins the Masters, or he does not. A species survives an existential threat, or it does not. There are no middle outcomes.

Image caption: Originally applied only to scientific publishing, Lotka’s Law states that the number of authors publishing multiple papers goes down dramatically as the number of papers goes up, a dynamic since found in many different fields. Applied to the survival of species, it may explain why no intelligent life has contacted us: very few intelligent species, or perhaps none, have overcome every obstacle in their path. Image courtesy of Tim Bates, under Creative Commons License.

Lotka showed that the number of contenders who survive a given number of trials goes roughly as an inverse power of that number. Winning once might not be particularly difficult, but winning consistently is very, very hard.
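To make the shape of that fall-off concrete, here is a minimal sketch of the arithmetic (a sketch only, not something from the article; the exponent of 2, Lotka’s original value for authorship counts, and the pool of 1,000 single-trial survivors are illustrative assumptions):

```python
# Minimal sketch: survivors thin out as an inverse power of the number of
# trials, in the spirit of Lotka's Curve. The exponent (2) and the pool of
# 1,000 single-trial survivors are illustrative assumptions, not figures
# from the article.

def survivors(n_trials: int, base_pool: float = 1000.0, exponent: float = 2.0) -> float:
    """Contenders still standing after n_trials, if survival falls off as an
    inverse power of the number of trials."""
    return base_pool / n_trials ** exponent

for n in (1, 2, 5, 10, 20):
    print(f"after {n:2d} trials: ~{survivors(n):7.1f} contenders remain")
```

Out of 1,000 contenders who clear one trial, only a couple are still standing after twenty; raise the exponent, or treat the trials as independent pass-fail events, and the thinning is harsher still.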

Lotka’s insight consequently points the way to another possible answer to Fermi’s puzzle. Our universe might generously allow for the possibility of life, yet ruthlessly cull it as soon as it emerges—again, and again, and again. There is no single Great Filter, just the merciless statistical odds against long-term survival.

The human species is presently faced with several possible survival filters. Some, such as asteroid impacts, arise randomly; others are self-inflicted, such as nuclear weapons, anthropogenic climate change and, perhaps quite soon, runaway artificial intelligence.

One last filter could be that intelligent space-faring aliens might decide to terminate our war-mongering species before we become too dangerous to the rest of the universe. (Ironically, such an encounter would also prove that it is, in fact, possible for a species to survive long enough to develop the technology to travel between the stars.)

Our response to this daunting list of hazards should not be fatalism, but a thoughtful examination of what it takes to stay ahead of the odds in a tough game where the house almost always wins.

For half a century, arms control treaties have helped humanity to avoid nuclear war. Countries are now cooperating on the asteroid threat, with the first deflection test mission planned for 2021. And scientists and entrepreneurs are racing to develop and implement technologies that could, potentially, get us through the climate change filter—even if politicians are unable to exercise foresight on this issue.

We know that human beings have the capacity for intelligent foresight and large-scale cooperation. It cannot be pure luck that our species has survived as long as it has.

But now, we need to raise our game. Are we an exceptional species, or just another flash in the cosmic pan?


To address the authors' final line: we are definitely not an exceptional species. Exceptionalism (perceiving ourselves as superior to other life forms) is part of the Western ideology pushing us towards extinction. It is a form of narcissism. Other cultures (nature-based cultures in particular, such as North American Indigenous Peoples) do not have this feature: they see themselves as equal to and in relationship with other species. Based on their record of living in harmony with their environment for tens of thousands of years, theirs is probably a better worldview when it comes to long-term survival.

Sounds like Jean-Jacques Rousseau and his "Noble Savage" romanticism. Tell it to the exterminated megafauna. Or to all the woods on the Stoney Nation when the price of lumber spiked in the early 1990s; they clear-cut so aggressively that there was bad erosion into the river. That kind of ended my belief in the natural and inherent environmentalism of First Nations cultures.

Then there's the theory that truly advanced - psychologically, emotionally, morally advanced - species simply don't see any point in endless growth and expansion into the universe, or the manipulation of energies that can be detected at cosmic distances, (presumably so that vast levels of comfort and luxury can be enjoyed by a limited population).

Maybe they've realized that exponential growth would eventually convert every gram of matter in the universe into a heaving sea of flesh (Isaac Asimov, in a science-fact essay in the 1950s calculated some 600 years at then-common growth rates, the global population of 2 billion doubling every 40 years, to have 64 trillion people), and you've got to stop sometime, so why not stop at the one planet you've already got? And they did, and lived happily ever after, so we can't detect them.

Any other supposition is actually very hard to believe. If you, say, declared the Earth to be "full" today, with 7 billion people, and could hold the growth rate down to one doubling every 25,000 years - so that we had 25,000 years to find and fill up another Earth, and assuming you could find one around every star in the galaxy - you'd fill the galaxy in under one million years. The bad news: you now have only 25,000 years to fill another galaxy. Alas, if the speed of light is a limit, the nearest one is 2 million years away. SF writers call it "The Light-Speed Cage": exponential expansion very soon requires spreading faster than the speed of light.

So exponential expansion - typical of life from bacteria to us - is impossible in the long run, not even at 1/500th of the current doubling rate of roughly once every 50 years. It does not, and physically cannot, work over astronomical stretches of time. Any species with over 100,000 years of civilization has overcome the urge to do so, or died of resource exhaustion. Period.
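As a rough sanity check on the doubling arithmetic in the comments above, here is a small sketch (not from the original text; the round figure of 300 billion star systems, each assumed to hold one Earth's worth of people, is an illustrative assumption):

```python
from math import log2

def doublings_needed(start: float, target: float) -> float:
    """Number of population doublings required to grow from start to target."""
    return log2(target / start)

# Asimov's 1950s scenario: 2 billion people, doubling every 40 years,
# growing to roughly 64 trillion.
asimov_years = doublings_needed(2e9, 64e12) * 40
print(f"2 billion -> 64 trillion: ~{asimov_years:.0f} years")  # ~600 years

# The galaxy-filling scenario: 7 billion people, one doubling per 25,000
# years, and ~300 billion star systems each holding one Earth's worth of
# people (the star count is an assumed round number).
galaxy_capacity = 7e9 * 300e9
galaxy_years = doublings_needed(7e9, galaxy_capacity) * 25_000
print(f"galaxy filled in ~{galaxy_years / 1e6:.2f} million years")  # just under 1 million
```

Under these assumptions, both of the comment's figures hold up: about 600 years to reach tens of trillions of people at mid-century growth rates, and a bit under a million years to fill the galaxy even at one doubling per 25,000 years.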