GCRI at SRA 2016 Annual Meeting

28 September 2016

Global Catastrophic Risk Session
Society for Risk Analysis 2016 Annual Meeting
11-15 December, San Diego.

Part of GCRI’s ongoing SRA presence.

Symposium: Current and Future Global Catastrophic Risks
Time: Wednesday 14 December, 10:30-12:10
Chair: Anthony Barrett

Title: Technology Forecasting for Analyzing Future Global Catastrophic Risks
Author: Anthony Barrett, Global Catastrophic Risk Institute (with S Baum)

Title: Nuclear Winter: Science and Policy
Author: Michael Frankel, Johns Hopkins University Applied Physics Laboratory (with J Scouras)

Title: Nuclear Autumn, Deterrence, Crisis Stability and Adversary Models, Tying Them Together To Address A Global Catastrophic Risk
Author: John Lathrop, Innovative Decisions, Inc.

Title: Value alignment for advanced machine learning systems as an existential priority
Author: Andrew Critch, Machine Intelligence Research Institute (with J Taylor, P LaVictoire)

Title: Artificial General Intelligence Risk Analysis
Author: Roman Yampolskiy, University of Louisville & Global Catastrophic Risk Institute

***

Symposium: Current and Future Global Catastrophic Risks
Chair: Anthony Barrett

Title: Technology Forecasting for Analyzing Future Global Catastrophic Risks
Author: Anthony Barrett, Global Catastrophic Risk Institute (with S Baum)
Emerging technologies in several domains, including artificial intelligence (AI) and synthetic biology, are becoming increasingly powerful. While these technologies offer great benefits, they also pose hazards of accident, misuse or unintended consequences that could result in global catastrophe at some point in the future. Such risks can be significantly reduced with enough foresight and advance warning, but they can also be difficult to characterize due to the general challenges of technological and long-term forecasting. For some technologies, the key factor may simply be when the technology is invented or becomes available; for others, it may be performance or affordability in a specific context. In this work, we primarily present an initial set of graphical and quantitative models of the future development of AI technologies, as well as intervention options that could affect the risks, derived from the published literature. We also contrast this with forecasting of other technologies, such as those in synthetic biology. Finally, we discuss general issues such as the evaluation of forecasting performance and the integration of forecasting models into risk and decision analysis.
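As a rough illustration of the flavor of quantitative model the abstract points to, and not the authors' actual analysis, a minimal Monte Carlo sketch might represent an uncertain technology-arrival date as a probability distribution and read off arrival probabilities at various decision horizons. Every distribution and parameter value below is invented for illustration.

```python
import math
import random

# Toy Monte Carlo forecast of when a technology becomes available.
# Every distribution and parameter here is a hypothetical placeholder,
# not a value from the talk.

def sample_arrival_year(baseline=2016, median_delay=40.0, sigma=0.5):
    """Sample an arrival year: the delay from `baseline` is lognormal
    with median `median_delay` years and log-scale spread `sigma`."""
    return baseline + random.lognormvariate(math.log(median_delay), sigma)

def prob_arrival_before(year, n=100_000):
    """Monte Carlo estimate of P(arrival year < `year`)."""
    return sum(sample_arrival_year() < year for _ in range(n)) / n

if __name__ == "__main__":
    for horizon in (2030, 2050, 2100):
        print(f"P(arrival < {horizon}) = {prob_arrival_before(horizon):.3f}")
```

Even a sketch this simple makes the abstract's point concrete: what matters for risk analysis is not a single predicted date but how much probability mass falls before a given decision horizon.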

Title: Nuclear Winter: Science and Policy
Author: Michael Frankel, Johns Hopkins University Applied Physics Laboratory (with J Scouras)
Climate concerns presently stand very much at the intersection of science and public policy. How to manage perceived warming is the stuff of impassioned ideological-cum-scientific debate. Much less publicized is another component of the climate risk universe, one which has gained renewed attention in some corners of the academic scientific community: there is no dispute over its anthropogenic origin, and it perversely leads to global cooling. That is the prospect that a “limited” nuclear war, confined to a “modest” exchange of weapons between regional powers such as India and Pakistan, would have physical effects far beyond the geographical boundaries of the conflict. Government interest in the “original” nuclear winter scenario, associated with a large arsenal exchange between the Cold War superpowers, seemed to wane in the 1990s after a decade or so, coinciding with the precipitous drop in the deployed arsenals of the US and Russia and with the changed political circumstances following the demise of the Soviet Union. But predictions made with more modern calculational tools now assert that such a local regional engagement, casting a pall of smoke and soot that would spread around the globe, intercepting sunlight, precipitating a nuclear winter-like agricultural catastrophe, and stripping the ozone layer, would ultimately cause the deaths of billions of human beings situated far from the contending powers. We will review the uncertainties associated with these predictions and discuss available risk management policies.

Title: Nuclear Autumn, Deterrence, Crisis Stability and Adversary Models, Tying Them Together To Address A Global Catastrophic Risk
Author: John Lathrop, Innovative Decisions, Inc.
We take a decision-aiding approach to global catastrophic risks (GCRs). That is, we describe an approach to aiding decisions that address a GCR, as opposed to other approaches that focus on assessing, describing or understanding a GCR. We do that by identifying the strategically significant links in the probabilistic causal network from initiation, observables and risk-addressing decisions to consequences. We model that causal network with what in some cases may be a very approximate, conceptual probabilistic risk assessment (PRA). We exercise that model to achieve three goals: 1) develop insights into the problems of deciding among risk-addressing actions; 2) make the problem, and the linkages from decisions to events to consequences, more vivid and salient, to encourage action; 3) guide and encourage further research and development of processes to address that GCR. A key theme in this work is epistemological modesty, i.e., being explicitly aware of what we don’t know and can’t know, and of the consequences of that lack of knowledge for addressing the GCR. A key example: we have no way of knowing initiation rates for wars or terrorist actions, so aiding risk-addressing decisions in those cases must account for that lack of knowledge. We describe the framework by applying it to the specific GCR of an inter-hegemon nuclear exchange causing a global catastrophe termed a “Nuclear Autumn,” that is, a partial but still extremely devastating version of the Nuclear Winter projected as a consequence of a superpower nuclear exchange. That example combines concepts of deterrence, crisis stability and adversary models to model the network from initiation, observables and risk-addressing decisions to consequences. We then use that example as a basis for discussing concepts for a strategically coherent approach to addressing the risks of the several GCRs we face, e.g. inter-hegemon nuclear exchanges, pandemics, unbounded bio-WMD terrorist attacks, and climate change.
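As a hedged sketch of what a “very approximate, conceptual PRA” over such a causal network could look like (a toy model written for this post, not Lathrop’s framework; every node, probability, and decision lever is invented), one can chain conditional probabilities from initiation through escalation to consequence, then sweep the unknowable initiation rate to test whether the ranking of risk-addressing decisions is robust.

```python
# Toy conceptual PRA over a minimal causal chain:
#   crisis initiation -> escalation to nuclear exchange -> "Nuclear Autumn".
# Every node, probability, and decision lever below is invented for
# illustration; this is not the talk's actual model.

def p_catastrophe(p_init, p_escalate, p_autumn):
    """Annual catastrophe probability as a product of conditional probabilities."""
    return p_init * p_escalate * p_autumn

def compare_decisions(p_init, p_autumn=0.5):
    """Rank hypothetical risk-addressing decisions under one assumed
    (unknowable) crisis-initiation rate."""
    escalation_given_decision = {   # hypothetical effect sizes
        "status quo": 0.30,
        "crisis hotline": 0.15,
        "de-alerting agreement": 0.05,
    }
    return {d: p_catastrophe(p_init, p_esc, p_autumn)
            for d, p_esc in escalation_given_decision.items()}

if __name__ == "__main__":
    # Epistemological modesty: the initiation rate cannot be known, so
    # sweep it and check whether the decision ranking is robust.
    for p_init in (0.001, 0.01, 0.05):
        print(p_init, compare_decisions(p_init))
```

Sweeping the initiation rate operationalizes the epistemological-modesty theme: such a model cannot say how likely a crisis is, but it can still indicate which decision dominates across the plausible range.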

Title: Value alignment for advanced machine learning systems as an existential priority
Author: Andrew Critch, Machine Intelligence Research Institute (with J Taylor, P LaVictoire)
I will present some arguments that value alignment research for advanced machine learning systems should be considered a top priority for mitigating existential risks, and that such research is possible and actionable today. I will also give an overview of technical problems that I believe are currently tractable and relevant to mitigating existential risks from highly capable and autonomous AGI systems, and some progress that has been made on tackling them.

Title: Artificial General Intelligence Risk Analysis
Author: Roman Yampolskiy, University of Louisville & Global Catastrophic Risk Institute
Many scientists, futurologists and philosophers have predicted that humanity will achieve a technological breakthrough and create Artificial General Intelligence (AGI). It has been suggested that AGI may be a positive or a negative factor in global catastrophic risk. In order to mitigate a dangerous AGI system, it is important to understand how the system came to be in such a state. In this talk, I will survey, classify and analyze a number of pathways that might lead to the arrival of dangerous AGI.
