The Case for Long-Term Corporate Governance of AI

8 November 2021

This article makes the case for long-term corporate governance of AI, emphasizing three main points. First, the long-term corporate governance of AI, which the authors define as the corporate governance of AI that could affect the long-term future, is an important area of long-term AI governance. Second, the corporate governance of AI has been relatively neglected by communities that focus on long-term AI issues. Third, there are tractable steps these communities could take to improve the long-term corporate governance of AI.

The article is authored by Seth Baum of GCRI and Jonas Schuett of the Legal Priorities Project. It builds on prior work by Baum and Schuett, including "Corporate governance of artificial intelligence in the public interest" (co-authored with Peter Cihon) and "AI certification: Advancing ethical practice by reducing information asymmetries" (co-authored with Peter Cihon and Moritz Kleinaltenkamp).

The article is available on the Effective Altruism Forum.

Image credit: Max Bender
