Lessons for Artificial Intelligence from Other Global Risks

by Seth Baum, Robert de Neufville, Tony Barrett, and Gary Ackerman

21 November 2019

It has become clear in recent years that AI poses important global risks. The study of AI risk is relatively new, but it can potentially learn a lot from the study of similar, better-studied risks. GCRI's new paper draws lessons for the study of AI risk from four other risks: biotechnology, nuclear weapons, global warming, and asteroids. The paper is co-authored by GCRI's Seth Baum, Robert de Neufville, and Tony Barrett, along with GCRI Senior Advisor Gary Ackerman. It will be published in a new CRC Press collection edited by Maurizio Tinnirello titled The Global Politics of Artificial Intelligence.

Each of the four other risks offers valuable insights for the study of AI risk. Biotechnology and AI are both risky technologies with many beneficial applications. Episodes like the 1975 Asilomar Conference on Recombinant DNA Molecules and the ongoing debate over gain-of-function research show how controversies about the development and use of risky technologies could play out. Nuclear weapons and AI are both potentially of paramount strategic importance to major military powers. The initial race to build nuclear weapons shows what a race to build AI could be like. Global warming and AI risk are both in part the product of the profit-seeking of powerful global corporations. The fossil fuel industry's attempts to downplay the dangers of global warming show one path corporate AI development could take. Finally, asteroid risk and AI risk are both risks of the highest severity. The history of asteroid risk management shows that policymakers can learn to take even risks with a high "giggle factor" seriously.

The paper draws several important overarching lessons for AI from the four global risks it surveys. First, the extreme severity of global risks may not be sufficient to motivate action to reduce the risks. Second, how people perceive global risks is influenced by both their incentives and their cultural and intellectual orientations. These influences may be especially strong when the size of the risk is uncertain. Third, the success of efforts to address global risks often depends on whether they have the support of people who stand to lose from those efforts. Fourth, the risks themselves and efforts to address them are often heavily shaped by broader social and political conditions.

The paper also demonstrates the value of learning lessons for global catastrophic risk from other risks.

Academic citation:
Seth D. Baum, Robert de Neufville, Anthony M. Barrett, and Gary Ackerman, 2022. Lessons for artificial intelligence from other global risks. In Maurizio Tinnirello (editor), The Global Politics of Artificial Intelligence. Boca Raton: CRC Press, pages 103-131.

Download Preprint PDF
View The Global Politics of Artificial Intelligence

Image credits:
Computer chip: Aler Kiv
Influenza virus: US Centers for Disease Control and Prevention
Nuclear weapon explosion: US National Nuclear Security Administration Nevada Field Office
Asteroid: NASA
Smoke stacks: Frank J. Aleksandrowicz
