From AI for People to AI for the World and the Universe

by Seth D. Baum and Andrea Owe

1 December 2021

Work on the ethics of artificial intelligence often focuses on the value of AI to human populations. This is seen, for example, in initiatives on AI for People. These initiatives do well to identify some important AI ethics issues, but they fall short by neglecting the ethical importance of nonhumans. This short paper calls for AI ethics to better account for nonhumans, such as by giving initiatives names like “AI for the World” or “AI for the Universe”. The paper is part of a collection “AI for People” to be published as a special issue of the journal AI & Society.

The paper grounds its arguments in fundamental moral philosophy concepts. As the paper explains, humans are not the only entities with morally relevant attributes, such as the ability to experience pleasure and pain or the possibility of having a life worth living. The fact that nonhuman entities possess morally relevant attributes is a strong reason to morally value them.

For AI, the stakes can be quite high. Modern AI systems use a lot of energy and other resources, with significant environmental impacts. Additionally, AI technology can be used to address environmental issues, though if not used carefully, it can end up doing more harm than good. Finally, advanced future AI systems, such as runaway superintelligence, could lead to outcomes whose moral value is highly sensitive to how the AI systems account for the moral value of nonhumans.

The paper proposes a twofold effort. First, moral philosophy work on AI should recognize the moral importance of nonhumans and explore their implications for AI ethics. Switching to names like “AI for the World” or “AI for the Universe” is one way to start in this direction. Second, computer science work on AI ethics should develop techniques that enable AI systems to account for the moral value of nonhumans. One possibility is to explore proxy schemes as an alternative to existing observational approaches, which infer the values of moral subjects and align AI systems to those values.

The paper contributes to a significant line of GCRI research on environmental ethics and AI. Moral consideration of nonhumans in the ethics of artificial intelligence and The ethics of sustainability for artificial intelligence document the tendency for work on AI ethics to focus on humans and call for more robust attention to nonhumans. Artificial intelligence, systemic risks, and sustainability analyzes risks associated with near-term applications of AI in sectors related to environmental sustainability such as agriculture and forestry. Social choice ethics in artificial intelligence discusses how to handle nonhumans within common AI ethics paradigms. Finally, Artificial intelligence needs environmental ethics calls for environmental ethicists to contribute their perspectives to AI ethics.

Academic citation:

Baum, Seth D. and Andrea Owe, 2023. From AI for people to AI for the world and the universe. AI & Society, vol. 38, no. 2 (April), pages 679-680, DOI 10.1007/s00146-022-01402-5.

