Ethics
Concern about global catastrophic risk often stems from an ethical regard for all those around the world and in the future who may be harmed or killed by global catastrophe. Other related issues also raise profound ethical challenges.
An Introduction to Ethics
Ethics is the study of right and wrong, good and bad. It is about what we should do and how we should live our lives. It is fundamental to the choices each of us makes as individuals and to the collective choices we make as a society: the directions we choose to go. The study of ethics can help us refine our own thinking and provide guidance on our activities and the issues we face. This holds true for all issues, and it certainly holds true for global catastrophic risk.
A major argument for prioritizing global catastrophic risk reduction derives from the ethics of equality. It starts with the view that we should value everyone equally, regardless of where or when they live. It then notes the potential for global catastrophe to cause harm at an extreme scale, potentially even into the distant future. If one truly does care about everyone equally, then even a small reduction in the probability of global catastrophe holds massive value for the world. From this perspective, reducing global catastrophic risk should be a top priority for individuals and society.
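To make the scale of this argument concrete, here is a minimal expected-value sketch. The numbers are purely illustrative assumptions for the sake of the arithmetic, not estimates of the actual risk or of the value at stake:

```python
# Back-of-the-envelope expected-value arithmetic with illustrative numbers.
# Both figures below are assumptions for the sake of the example, not estimates.

people_at_stake = 8_000_000_000  # roughly today's world population, ignoring future generations
risk_reduction = 1 / 10_000      # a "small" cut in the probability of global catastrophe

# Valuing everyone equally, the expected benefit of the risk reduction is
# the probability change times the number of people at stake.
expected_benefit = risk_reduction * people_at_stake
print(f"Expected benefit: {expected_benefit:,.0f} person-equivalents")  # -> 800,000
```

Even counting only the present generation, a one-in-ten-thousand reduction in probability corresponds to an expected benefit on the order of hundreds of thousands of lives; counting future generations makes the figure far larger.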
A different perspective on ethics points to a strategic approach to reducing global catastrophic risk. This perspective recognizes that people hold a wide range of ethical views: while almost everyone agrees that global catastrophe would be bad, many people are not especially motivated to reduce the risk. Efforts to reduce the risk should therefore consider what people do care about and connect it to global catastrophic risk. For example, someone concerned about the cost of living in their local community may support housing policies that also reduce risks from climate change.
Some aspects of global catastrophic risk involve other challenging ethical issues. For example, artificial intelligence raises questions of how the technology should be designed, by whom, and to what ends. Global catastrophic risk is part of this, but it is only one part. An appreciation of the broader landscape of AI ethics is valuable for understanding the full range of AI issues and for figuring out how global catastrophic risk fits in.
An essential ethical question is what kind of world we want to live in. Disagreement over this can be a driver of global catastrophic risk. Throughout the Cold War, disagreement between societies favoring communism and those favoring capitalism brought a risk of nuclear war. Similar disagreements continue to underlie nuclear war risk and to thwart progress on other risks such as climate change and AI. There is more to these disagreements than ethics, but ethics is a major part of them.
Finally, there are profound ethical issues in what could happen if humanity manages to avoid global catastrophe. If catastrophe is avoided, then humanity can pursue good outcomes at potentially massive scales, into the distant future and even into outer space. This raises questions of which outcomes should be pursued. Rapid technological change lends a potential urgency to these questions even while posing new risks. The stakes are immense and the issues merit careful scrutiny.
Featured GCRI Publications on Ethics
Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
The concept of global catastrophe is almost always regarded as something bad that happens to humans, but moral philosophy often considers that bad things can also happen to nonhumans. This paper, published in the journal Science and Engineering Ethics, surveys the wide range of ideas about the intrinsic moral value of nonhumans.
A major approach in AI ethics is to use social choice, in which the AI is designed to act according to the aggregate views of society. This paper, published in the journal AI & Society, shows that the normative basis of AI social choice ethics is weak because there is no single aggregate ethical view of society; the sketch below gives a flavor of why aggregation is not straightforward.
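As a minimal illustration (not taken from the paper itself; the voters, options, and rankings are hypothetical), two standard aggregation rules applied to the same individual rankings can disagree about what "society" prefers:

```python
# Minimal social choice sketch: two standard aggregation rules applied to the
# same individual rankings disagree about what "society" prefers.
# The voters, options, and rankings are hypothetical, for illustration only.
from collections import Counter

# Seven voters rank three options from most to least preferred.
rankings = (
    [["A", "B", "C"]] * 3 +  # 3 voters: A first, then B, then C
    [["B", "C", "A"]] * 2 +  # 2 voters: B first, then C, then A
    [["C", "B", "A"]] * 2    # 2 voters: C first, then B, then A
)

# Plurality rule: only first-place votes count.
plurality_winner = Counter(r[0] for r in rankings).most_common(1)[0][0]

# Borda count: 2 points for first place, 1 for second, 0 for third.
borda_scores = Counter()
for ranking in rankings:
    for points, option in enumerate(reversed(ranking)):
        borda_scores[option] += points
borda_winner = borda_scores.most_common(1)[0][0]

print("Plurality winner:", plurality_winner)  # -> A
print("Borda winner:", borda_winner)          # -> B
```

Because the outcome depends on the choice of aggregation rule, "the aggregate views of society" is not a single well-defined object.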
Full List of GCRI Publications on Ethics
Baum, Seth D. and Andrea Owe, forthcoming. On the intrinsic value of diversity. Inquiry, DOI 10.1080/0020174X.2024.2367247.
Baum, Seth D., forthcoming. Manipulating aggregate societal values to bias AI social choice ethics. AI and Ethics, DOI 10.1007/s43681-024-00495-6.
Owe, Andrea, Seth D. Baum, and Mark Coeckelbergh, 2022. Nonhuman value: A survey of the intrinsic valuation of natural and artificial nonhuman entities. Science and Engineering Ethics, vol. 28, no. 5, article 38, DOI 10.1007/s11948-022-00388-z.
Owe, Andrea, 2023. Greening the universe: The case for ecocentric space expansion. In James S. J. Schwartz, Linda Billings, and Erika Nesvold (editors), Reclaiming Space: Progressive and Multicultural Visions of Space Exploration. Oxford: Oxford University Press, pages 325-336, DOI 10.1093/oso/9780197604793.003.0027.
Baum, Seth D. and Andrea Owe, 2023. From AI for people to AI for the world and the universe. AI & Society, vol. 38, no. 2 (April), pages 679-680, DOI 10.1007/s00146-022-01402-5.
Owe, Andrea and Seth D. Baum, 2021. The ethics of sustainability for artificial intelligence. In Philipp Wicke, Marta Ziosi, João Miguel Cunha, and Angelo Trotta (editors), Proceedings of the 1st International Conference on AI for People: Towards Sustainable AI (CAIP 2021), Bologna, pages 1-17, DOI 10.4108/eai.20-11-2021.2314105.
Baum, Seth D. and Andrea Owe, 2023. Artificial intelligence needs environmental ethics. Ethics, Policy, & Environment, vol. 26, no. 1, pages 139-143, DOI 10.1080/21550085.2022.2076538.
Owe, Andrea and Seth D. Baum, 2021. Moral consideration of nonhumans in the ethics of artificial intelligence. AI and Ethics, vol. 1, no. 4 (November), pages 517-528, DOI 10.1007/s43681-021-00065-0.
Baum, Seth D., Stuart Armstrong, Timoteus Ekenstedt, Olle Häggström, Robin Hanson, Karin Kuhlemann, Matthijs M. Maas, James D. Miller, Markus Salmela, Anders Sandberg, Kaj Sotala, Phil Torres, Alexey Turchin, and Roman V. Yampolskiy, 2019. Long-term trajectories of human civilization. Foresight, vol. 21, no. 1, pages 53-83, DOI 10.1108/FS-04-2018-0037.
Baum, Seth D., 2020. Social choice ethics in artificial intelligence. AI & Society, vol. 35, no. 1 (March), pages 165-176, DOI 10.1007/s00146-017-0760-1.
Baum, Seth D., 2016. The ethics of outer space: A consequentialist perspective. In James S.J. Schwartz and Tony Milligan (editors), The Ethics of Space Exploration. Berlin: Springer, pages 109-123.
Baum, Seth D., 2015. The far future argument for confronting catastrophic threats to humanity: Practical significance and alternatives. Futures, vol. 72 (September), pages 86-96, DOI 10.1016/j.futures.2015.03.001.
Baum, Seth, 2014. The lesson of Lake Toba. Bulletin of the Atomic Scientists, 21 October.
Baum, Seth and Grant Wilson, 2013. The ethics of global catastrophic risk from dual-use bioengineering. Ethics in Biology, Engineering and Medicine, vol. 4, no. 1, pages 59-72, DOI 10.1615/EthicsBiologyEngMed.2013007629.
Baum, Seth, 2013. Making the universe a better place. Current Exchange/Technophilic Magazine, Spring, pages 22-23.