Lessons for Artificial Intelligence from Other Global Risks

by Seth Baum, Robert de Neufville, Tony Barrett, and Gary Ackerman

21 November 2019


Download Preprint PDF

It has become clear in recent years that AI poses important global risks. The study of AI risk is relatively new, but it can potentially learn a lot from the study of similar, better-studied risks. GCRI’s new paper applies lessons from four other risks to the study of AI risk: biotechnology, nuclear weapons, global warming, and asteroids. The paper is co-authored by GCRI’s Seth Baum, Robert de Neufville, and Tony Barrett, along with GCRI Senior Advisor Gary Ackerman. It will be published in a new CRC Press collection edited by Maurizio Tinnirello titled The Global Politics of Artificial Intelligence.

The study of each of the four other risks contains valuable insights for the study of AI risk. Biotechnology and AI are both risky technologies with many beneficial applications. Episodes like the 1975 Asilomar Conference on Recombinant DNA Molecules and the ongoing debate over gain-of-function research show how controversies about the development and use of risky technologies could play out. Nuclear weapons and AI are both potentially of paramount strategic importance to major military powers. The initial race to build nuclear weapons shows what a race to build AI could be like. Global warming and AI risk are both in part the product of the profit-seeking of powerful global corporations. The fossil fuel industry’s attempts to downplay the dangers of global warming show one path corporate AI development could take. Finally, asteroid risk and AI risk are both risks of the highest severity. The history of asteroid risk management shows that policy makers can learn to take even risks that have a high “giggle factor” seriously.

The paper draws several important overarching lessons for AI from the four global risks it surveys. First, the extreme severity of global risks may not be sufficient to motivate action to reduce the risks. Second, how people perceive global risks is influenced by both their incentives and their cultural and intellectual orientations. These influences may be especially strong when the size of the risk is uncertain. Third, the success of efforts to address global risks often depends on whether they have the support of people who stand to lose from those efforts. Fourth, the risks themselves and efforts to address them are often heavily shaped by broader social and political conditions.

The paper also demonstrates the value of learning lessons for global catastrophic risk from other risks. This is one reason why GCRI has always emphasized studying multiple global catastrophic risks. Another reason is that studying multiple risks allows cross-risk evaluation and prioritization.

Academic citation:
Seth D. Baum, Robert de Neufville, Anthony M. Barrett, and Gary Ackerman, 2022. Lessons for artificial intelligence from other global risks. In Maurizio Tinnirello (editor), The Global Politics of Artificial Intelligence. Boca Raton: CRC Press, pages 103-131.

Download Preprint PDF
View The Global Politics of Artificial Intelligence

Image credits:
Computer chip: Aler Kiv
Influenza virus: US Centers for Disease Control and Prevention
Nuclear weapon explosion: US National Nuclear Security Administration Nevada Field Office
Asteroid: NASA
Smoke stacks: Frank J. Aleksandrowicz

Recent Publications from GCRI

Climate Change, Uncertainty, and Global Catastrophic Risk
Assessing the Risk of Takeover Catastrophe from Large Language Models
On the Intrinsic Value of Diversity
