Q&A: Existential and catastrophic risk to humanity

Jun 5, 2017

By UCLA Samueli Newsroom

In March, the B. John Garrick Institute for the Risk Sciences at UCLA held the first Colloquium on Catastrophic and Existential Risk. Some 40 scholars from around the world gathered to discuss catastrophic risks facing humans, including risks that threaten the entire future of humanity. Topics discussed included climate change, an asteroid strike, global pandemics, artificial general intelligence and nuclear and biological terror. But the conference attendees’ main goal was to start a discussion on how to quantify those risks and how to provide sound information to policy makers. During the colloquium, UCLA Engineering sat down with three of the colloquium’s featured speakers to get their thoughts on those risks to humanity and what to do about them.

Panelists

Albert Carnesale is UCLA Chancellor Emeritus and a professor of mechanical and aerospace engineering, and public policy. Carnesale, also a senior fellow with the Garrick Institute, focuses his research and teaching on public policy issues having substantial scientific and technological dimensions.

Seán Ó hÉigeartaigh is the executive director of the Centre for the Study of Existential Risk, an interdisciplinary research center at the University of Cambridge dedicated to the study and mitigation of risks that could lead to human extinction or the collapse of civilization.

Christine Peterson is the co-founder and past president of the Foresight Institute, a think tank and public interest organization based in the San Francisco Bay Area that is focused on emerging world-shaping technologies. Peterson studies and writes on coming powerful technologies, especially nanotechnology and life extension.

Following are some excerpts that have been condensed and edited from that conversation.

What is existential risk and what is catastrophic risk?

Carnesale: Existential risk, in the extreme case, means eliminating the human species on Earth. Literally, the very existence of life on this planet. That’s the extreme case. Everything from that on down to something that would perhaps kill tens of thousands of people qualifies, in different people’s minds, as catastrophic risk.

And some don’t have to kill tens of thousands of people immediately. You can imagine a financial disaster: no one dies from the disaster itself, but over time it would affect many people, and a lot of them might die. Or something that happens to the agricultural system. It wouldn’t kill suddenly, but it would affect a lot of people, and many of them might die.

Why is it important to study these types of risks?

Carnesale: The reason it’s important to study this stuff is so that hopefully we can find ways to reduce these risks, and make it less likely that these risks will occur. And if they do occur, that their consequences would be reduced. So people are studying them in order to offer better advice on how to reduce those risks.

Ó hÉigeartaigh: When we’re talking particularly about risks at the extreme scale, we’re often talking about uncertain scenarios, or low-probability scenarios, or speculative scenarios. I think it’s tremendously important to look at which of these are scientifically plausible, and which ones we have good reason to think might come about on a timescale that we might care about. And then which ones we can say are sufficiently unlikely that we don’t need to think about them that much, or simply don’t look plausible and can be relegated to the realm of science fiction for the moment.

By doing this kind of analysis, we can better focus our attention on what we really should care about, and where we should invest our resources in trying to mitigate or prevent them, to the extent that we can.

What technologies under development are also a possible concern?

Peterson: Probably the technology change that’s gotten the most attention at this meeting is the prospect of artificial general intelligence. This creates tremendous opportunity – think of the diseases that could be cured, the economic advances, and tackling tough problems like environmental issues.

But there’s also a concern: Is this something we can control? Is this something that would take over the world in some way?

What we’ve been trying to do at this conference is come up with numbers (on potential threats), whether it’s the timeframe, the likelihood, the costs to prevent things, the payoff of preventing problems. These things are matters of tremendous debate and they’re very unclear. But we have to try. Without numbers, you can’t make decisions.

What do you think poses the biggest risk to civilization?

Carnesale: The most easily identifiable is climate change. But there is a lot of uncertainty there. It’s the classic example for the insurance analogy.

There’s no question that carbon dioxide in the atmosphere increases global warming. You don’t have to have much more than a high school education with a chemistry course to know that. Now you start to ask the questions: How much of an effect does it have? Over what timescale? When you’re trying to predict 50 years ahead, how will human consumption and different energy sources change? How much of the CO2 will be absorbed by the oceans? Where might there be a tipping point? That’s where the uncertainties are.

But the fact that you don’t know everything doesn’t mean you don’t know anything. We do know that more carbon dioxide in the atmosphere is going to push things in the wrong direction.

I think that’s the best example of where the scientific community is fairly well agreed that if we don’t do anything, we’re going to have a big problem. You can argue about what we should do and how fast we should do it and the like. But the idea of “let’s wait, it’s only a theory” doesn’t work.

Peterson: I divide these catastrophic risks into natural and man-made.

Of the natural ones, the ones that have me the most nervous are pandemics. Looking at history, we can say with some confidence that these will come. Hopefully, we all remember the so-called Spanish Flu of 1918. There are more people now. We live in bigger cities. Are we well prepared? I don’t think we are. Of course, it will hit developing nations harder. But it could be very bad all over.

Looking at man-made problems, in the near term there’s the cyberattack issue. You can think about a cyberattack in terms of the Internet of Things, and that gets a lot of press. For example, there was a hotel where guests were locked out of their rooms. There was a hospital that was threatened. Those are real concerns, but I would say they are not currently catastrophic risks.

But I think the electrical grid is a catastrophic risk. A cyberattack on the grid could cause a lot of damage and large numbers of fatalities. From what I’ve been able to find out, it’s not being addressed quickly enough. It could take decades to fix, and that’s not acceptable because the threat is real right now.

How do you get people to think about long-term challenges and have a long-term view of things? Not just policy makers, but people in general.

Peterson: As Seán said, one way for a lay person to think about it is insurance. We know the odds of our house burning down are not high, but we buy insurance anyway.

Some of these risks are high-probability, some are low-probability. But even for the low-probability ones, you want that insurance policy. And that’s what we’ve gathered at the colloquium to work on: What’s the best insurance policy for humanity on each of these issues? And how can we get people to buy in, to pay the money it takes to insure against each of them?

Ó hÉigeartaigh: We’re doing an awful lot to make the world a better place. Global poverty is going down. Quality of life is going up, particularly in the developed world. But we’re also degrading the planet’s resources at a faster rate than can possibly be sustained.

We have a hell of a responsibility, and we do need to take that seriously. This community has done amazing things in reducing the risk of a global nuclear war. I think our responsibility going forward is to get onto sustainable trajectories in terms of energy production, and resource production and use.

And we need to be thinking carefully and responsibly about the way we develop new sciences and technologies. They will be more powerful, for better and worse.

What’s next for the conference attendees?

Ó hÉigeartaigh: You need to start putting numbers to these risks, and the science alone doesn’t necessarily do that. We don’t have infinite resources, and policy makers certainly don’t have infinite resources. They need some sort of information to figure out how to allocate them.

I think it’s so important that John Garrick and the Garrick Institute have come in and said, “All that science, all that philosophy, all that long-term thinking: that’s fine. But turn it into something solid that we can present to decision makers so that the world can be changed.”

Carnesale: Let me just add that’s almost an ideal description of the difference between science and engineering. Fundamentally, engineering is taking knowledge and applying it for the benefit of mankind. Where John (Garrick) works is at the intersection: “How do I take all this stuff and apply it so I can reduce the risk?” It isn’t just understanding the risk for the sake of understanding it in some abstract way.

In order to do that, you’ve got to have numbers in the science. You need numbers in the economics. You somehow need to include numbers on the feasibility, on what it will take for decision makers to do this. Actually applying it and getting things done is exactly what engineering is supposed to be.

Peterson: I can see at this very first colloquium that it is going to make a difference. The group of people here, these are the people who needed to speak with each other.

You can tell from the types of interactions that have been happening that there’s this hunger (to do more), from the conceptual and theoretical folks to the people with heavy numbers expertise, who have a lot of experience in the risk and decision-making areas.

It’s very difficult, what we’re trying to do here, but it’s certainly worth it to try.
