Artificial Intelligence Needs Environmental Ethics

by Seth D. Baum and Andrea Owe

16 November 2021

Artificial intelligence is an interdisciplinary topic. As such, it benefits from contributions from a wide range of disciplines. This short paper calls for greater contributions from the discipline of environmental ethics and presents several types of contributions that environmental ethicists can make.

First, environmental ethicists can raise the profile of the environmental dimensions of AI. For example, discussions of the ethics of autonomous vehicles have thus far focused mainly on “trolley problem” scenarios in which the vehicle must decide whom to harm in a crash. Less attention has been paid to the environmental impacts of autonomous vehicles, even though these impacts are arguably much more important. Environmental ethicists could make the moral case for attention to this and other environmental issues involving AI.

Second, environmental ethicists can help analyze novel ethical situations involving AI. These situations specifically involve artificial versions of phenomena that have long been studied in environmental ethics research. For example, AI and related technology could result in the creation of artificial life and artificial ecosystems. Environmental ethicists have often argued that “natural” life and ecosystems have important moral value. Perhaps the same reasoning would apply to artificial life and ecosystems, or perhaps it would not due to their artificiality. Environmental ethicists can help evaluate these sorts of novel ethical issues. Such work is especially important because existing work on AI ethics has focused more narrowly on issues centered on humans; environmental ethicists can help make the case for a broader scope.

Third, environmental ethicists can provide valuable perspectives on the future orientation of certain AI issues. Within the communities working on AI, there is a divide between those focused on near-term AI issues and those focused on long-term AI issues. Global catastrophic risks associated with AI are often linked to the long-term issues. Similar debates exist on environmental issues, given the long-term nature of major environmental problems such as global warming, natural resource depletion, and biodiversity loss. Environmental ethicists have made considerable progress on the ethics of the future, progress that can be applied to debates about AI.

The paper builds on prior GCRI research and the authors’ experience as environmental ethicists working on AI. “Moral consideration of nonhumans in the ethics of artificial intelligence” documents the tendency for work on AI ethics to focus on humans and calls for more robust attention to nonhumans. “Reconciliation between factions focused on near-term and long-term artificial intelligence” describes the debate between those favoring attention to near-term AI issues and those favoring attention to long-term AI issues. “Artificial intelligence, systemic risks, and sustainability” analyzes risks associated with near-term applications of AI in sectors related to environmental sustainability, such as agriculture and forestry.

Academic citation:
Baum, Seth D. and Andrea Owe, 2023. Artificial intelligence needs environmental ethics. Ethics, Policy, & Environment, vol. 26, no. 1, pages 139-143, DOI 10.1080/21550085.2022.2076538.

Download Preprint PDF

View in Ethics, Policy, & Environment

Image credit: Buckyball Design
