Superintelligence Skepticism as a Political Tool

by Seth Baum

24 August 2018

For decades, there have been efforts to exploit uncertainty about science and technology for political purposes. This practice traces to the tobacco industry’s effort to sow doubt about the link between tobacco and cancer, and it can be seen today in skepticism about climate change and other major risks. This paper analyzes the possibility that the same could happen for the potential future artificial intelligence technology known as superintelligence.

Artificial superintelligence is AI that is much smarter than humans. Current AI is not superintelligent. Some people believe that superintelligence can be built, and that if built, it would have extreme consequences, which could be either good or bad depending on its design. However, other people are skeptical of these claims, and of the claim that this issue is important enough to merit attention today. This skepticism could be the basis for politicized skepticism such as exists for other issues.

The paper examines current superintelligence skepticism and finds that it is sometimes used politically, though not nearly to the same extent as for issues like climate change. Some AI researchers appear to profess superintelligence skepticism in order to protect the reputation and funding of their field. Some AI technology corporations show hints of politicized skepticism, but not to any significant extent. However, if superintelligence skepticism were politicized, it could be quite successful, in part because of the difficulty of resolving uncertainty about this possible future technology.

The paper is part of an ongoing effort by the Global Catastrophic Risk Institute to accelerate the study of the social and policy dimensions of AI by leveraging insights from other fields. Other examples include the paper On the promotion of safe and socially beneficial artificial intelligence, which draws on environmental psychology to study how to motivate AI researchers to pursue socially beneficial AI designs, and ongoing research modeling the risk of artificial superintelligence (see this, this, and this), which leverages risk analysis techniques that GCRI previously used for the risk of nuclear war. This capacity to draw on other fields speaks to the value of GCRI’s cross-risk approach to the study of global catastrophic risk.

Academic citation:
Seth D. Baum, 2018. Superintelligence skepticism as a political tool. Information, vol. 9, no. 9, article 209, DOI 10.3390/info9090209.

View in Information

Image credit: Melissa Thomas Baum

