GCRI at SRA 2021 Annual Meeting

10 November 2021

Global Catastrophic Risk Session and Poster Presentation
Society for Risk Analysis 2021 Annual Meeting
5-9 December, Washington, DC.

Part of GCRI’s ongoing SRA presence.

Global Catastrophic Risk Presentations

SRA Late Breaking Poster Session
Date: 6 December 2021
Time: 2:00 – 4:00 pm EST

Title: Military Artificial Intelligence and Global Catastrophic Risk
Authors: Uliana Certan* and Seth Baum, Global Catastrophic Risk Institute

Title: Moral Circle Expansion as a Means of Advancing Management of Global Catastrophic Risks
Authors: Manon Gouiran,* Swiss Center for Affective Sciences, Dakota Norris,* University of Saskatchewan, and Seth Baum, Global Catastrophic Risk Institute

Title: Policy Attention to Extreme Catastrophic Risk: The Curious Case of Near-Earth Objects
Authors: Aaron Martin,* Rutgers University, and Seth Baum, Global Catastrophic Risk Institute

*These individuals are part of GCRI’s 2021 Advising and Collaboration Program and GCRI’s inaugural 2021 Fellowship Program.

***

SRA Late Breaking Poster Session
Date: 6 December 2021
Time: 2:00 – 4:00 pm EST

Title: Military Artificial Intelligence and Global Catastrophic Risk
Authors: Uliana Certan* and Seth Baum, Global Catastrophic Risk Institute

Rationale/Background: This presentation studies the intersection of two domains: artificial intelligence (AI) and global catastrophic risk (GCR). AI is a class of emerging technology with diverse applications, including for military affairs. GCR is a class of risk corresponding to the most extreme high-severity risk – risk to the survival of global civilization. GCR is of intellectual and normative significance due to its extreme severity. Military affairs are commonly implicated in GCR, most evidently regarding risks from nuclear and biological weapons. As AI is progressively integrated into military roles, operations, and systems, its ramifications for extreme risk and security become increasingly complex and controversial. It is therefore important to investigate how AI may be affecting current military GCRs and potentially creating new ones.

Approach: We are surveying the range of ways in which military applications of AI can affect GCR. Specific applications include: (1) nuclear weapons systems, including delivery vehicles, countermeasures to detect and destroy delivery vehicles, and the cybersecurity of nuclear weapons systems; (2) biological weapon systems, including research and development of new pathogens and their dispersal; and (3) autonomous weapons with conventional munitions, including their direct impacts and their effects on strategic stability. We additionally study (4) the role of militaries in the development of new and riskier forms of AI technology. For each of these, we consider the risk itself as well as potential risk management options.

Results and Discussion: Initial analysis finds relatively limited effects of military AI on GCR. Numerous military applications of AI are relevant to GCR, but their impact does not appear to be significant. AI may reduce the risk of inadvertent nuclear war by improving nuclear attack detection systems, but detection may be limited more by sensor physics than by the AI processing of the information the sensors produce. Also, although autonomous weapons constitute a new class of weapon, their primary implications seem to be concentrated on smaller-scale battlefield operations, not GCR. Further, we expect to find that AI-facilitated advances in creating designer pathogens and poison-producing microbes are more pertinent to targeted attacks than to GCR. We caution that these findings are tentative and subject to change as the analysis continues.

Management/Policy Implications: The research presented here has two types of management/policy implications. The first implication concerns military policy, in particular how militaries incorporate AI technology to achieve security objectives and warfighting advantages, while reducing risks, including GCRs. By surveying the GCR associated with the range of military applications of AI, this research will inform policy decisions on the responsible incorporation of AI into military operations and systems. The second implication concerns GCR management. Initiatives to reduce GCR depend on analysis of which GCRs and GCR factors are most significant, in order to prioritize their focus. This research informs GCR management decisions by clarifying the various ways in which military AI may affect GCR and what accompanying risk management options may be available.

Title: Moral Circle Expansion as a Means of Advancing Management of Global Catastrophic Risks
Authors: Manon Gouiran,* Swiss Center for Affective Sciences, Dakota Norris,* University of Saskatchewan, and Seth Baum, Global Catastrophic Risk Institute

Rationale/Background: How does one advance the management of risks when the people or institutions that need to manage them are not motivated to do so? This is a common situation, especially for risks in which the harms are distributed widely across space and time. It is a particular challenge for global catastrophic risks (GCRs). GCRs are risks of the most extreme severity, with harms that accrue to a large portion of the global human population or even culminate in the collapse of global civilization. GCRs are inherently global in scope and could have major consequences for future generations. This extreme severity makes them an important class of risk. However, they can go under-addressed by actors who are more motivated to address local-scale risks.

Approach: We are exploring the potential for moral circle expansion (MCE) as a means of advancing the management of risks in general and GCRs in particular. An agent’s moral circle is the scope of what they have moral concern for. An agent with a small moral circle cares about themselves and few others; an agent with a large moral circle cares about the world at large. MCE is an established topic in philosophy and moral psychology. We are considering its application to risk management. Specifically, we are developing a decision-analytic framework for the relative effectiveness of MCE for risk management and assessing that effectiveness through (1) a synthesis of relevant literature in moral psychology and (2) analysis of a case study in environmental politics involving the Te Urewera rainforest in New Zealand.

Results and Discussion: MCE shows potential as a means of advancing the management of GCRs and other risks, though the available evidence is inconclusive. Moral psychology research on MCE has mainly focused on describing people’s moral circles; new studies would be needed to evaluate whether and how moral circles can be expanded. The Te Urewera case shows that moral circles can expand in a variety of ways, with complex implications for risk management. The case involves greater concern for nature resulting, somewhat counterintuitively, in a reduction of risks to humans. MCE has appeal as a potentially durable motivator of risk management, though it also raises questions about the appropriateness of changing other people’s moral views. Meanwhile, non-MCE approaches retain appeal.

Management/Policy Implications: MCE merits consideration within the overall portfolio of options for advancing the management of GCRs and other risks in which relevant actors are not motivated to address the risks. Significant uncertainties remain regarding the effectiveness and appropriateness of MCE, so it would not be appropriate to initiate large-scale MCE programs at this time. Instead, we recommend an agenda of research to reduce the uncertainties, dialog to assess the appropriateness of MCE, and pilot programs to provide real-world experience. This should be done in parallel with work on other risk management approaches, such as the development of policies with co-benefits that appeal to relevant actors.

Title: Policy Attention to Extreme Catastrophic Risk: The Curious Case of Near-Earth Objects
Authors: Aaron Martin,* Rutgers University, and Seth Baum, Global Catastrophic Risk Institute

Rationale/Background: Extreme catastrophic risks such as nuclear war, pandemic disease, or supervolcano eruption are those that are high severity and low probability, with the potential to curtail human civilization or even threaten its existence. It is often proposed that policymaking processes are unable and unwilling to address extreme, speculative global catastrophic risks due to psychological underestimation of the risk and institutional disincentive to address it. We formalize this idea as the extreme risk neglect hypothesis (ERNH). The ERNH posits that the more extreme a risk is in terms of high severity and low probability, the less policy attention it will get relative to the attention warranted by the size of the risk as measured by the product of its probability and severity.
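As a rough formal sketch (our notation, not taken from the abstract): let a risk have probability $p$ and severity $s$, so that its size is the expected harm $E = p \, s$, and let $A(p, s)$ denote the policy attention it receives. The ERNH can then be read as predicting that the ratio of actual to warranted attention falls as the risk becomes more extreme, i.e.,

\[
\frac{A(p, s)}{p \, s} \ \text{decreases as } s \text{ increases and } p \text{ decreases, e.g., along curves of constant } p \, s .
\]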

Approach: We evaluate the ERNH through a case study of policy on the risk from near-Earth objects (NEOs, i.e., asteroids and comets whose orbits bring them close to Earth). Specifically, we explore the history of NEO policy in the United States, with emphasis on Congressional legislation directing activities by NASA and other federal agencies to address NEO risk. These policy documents concentrate on three areas: detection, preparedness, and deflection or disruption. We focus our analysis on detection policy (arguably the most active policy area), which commonly specifies the size range of NEOs agencies should seek to detect: the more extreme low-frequency, high-severity portions of the risk, corresponding to larger NEOs, or the more moderate-frequency, moderate-severity portions of the risk, corresponding to more moderately sized NEOs.

Results and Discussion: U.S. NEO policy is inconsistent with the ERNH. Instead, the more extreme segment of NEO risk, corresponding to larger NEOs, has received the bulk of the policy attention allocated to the issue. The first formal policy for NEO risk in the U.S., H.R. 5649 in 1990, focuses on action to detect “large” asteroids, which a 1992 NASA report specifies as asteroids of diameter > 1 km. NEOs of this diameter collide with Earth roughly once per 100,000 years with an explosive force of 125,000 Mt TNT. Over time, U.S. NEO policy has gradually expanded to include smaller NEOs. Most recently, in 2017, it was extended to NEOs as small as 140 m in diameter, corresponding to an impact interval of 20,000 years and an explosive force of 300 Mt TNT.
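For readers curious where impact energies of this scale come from, a back-of-envelope kinetic energy estimate reproduces the order of magnitude. The assumptions here (a stony NEO of density $\rho \approx 3000$ kg/m$^3$ striking at $v \approx 20$ km/s) are ours, not from the abstract:

\[
E = \tfrac{1}{2} m v^{2} = \tfrac{1}{2}\left(\rho \cdot \tfrac{4}{3}\pi r^{3}\right) v^{2} .
\]

For a 140 m NEO ($r = 70$ m), $m \approx 4.3 \times 10^{9}$ kg and $E \approx 8.6 \times 10^{17}$ J, or roughly 200 Mt TNT (1 Mt $\approx 4.184 \times 10^{15}$ J), the same order as the ~300 Mt cited above; the same formula with $r = 500$ m gives on the order of $10^{5}$ Mt, consistent with the figure for kilometer-scale NEOs.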

Management/Policy Implications: Policy neglect of extreme catastrophic risk is not inevitable, as demonstrated by U.S. NEO policy. Efforts to promote policy attention to extreme catastrophic risk should move beyond the low-probability, high-severity nature of these risks and instead consider a wider range of risk attributes. For example, NEO risk is distinctive in that the more extreme portion of the risk has a less speculative and more robust scientific basis – large NEOs are easier for astronomers to detect. Therefore, policymaking on extreme catastrophic risk may benefit from scientific resolution of uncertainties. Beyond risk characteristics, policy contexts, the position of experts, and risk management norms are also critical factors. Precise causal explanations for policy attention to or neglect of extreme catastrophic risks, however, are a topic for future research.

