Our Final Invention: Is AI the Defining Issue for Humanity?

by

11 October 2013

This article discusses the risk of AI catastrophe through a review of James Barrat’s book Our Final Invention.

The article begins as follows:

Humanity today faces incredible threats and opportunities: climate change, nuclear weapons, biotechnology, nanotechnology, and much, much more. But some people argue that these things are all trumped by one: artificial intelligence (AI). To date, this argument has been confined mainly to science fiction and a small circle of scholars and enthusiasts. Enter documentarian James Barrat, whose new book Our Final Invention states the case for (and against) AI in clear, plain language.

Disclosure: I know Barrat personally. He sent me a free advance copy in the hope that I would write a review. The book also cites research of mine. And I am an unpaid Research Advisor to the Machine Intelligence Research Institute, which is discussed heavily in the book. But while I have some incentive to say nice things, I will not be sparing in what (modest) criticism I have.

The central idea is hauntingly simple. Intelligence could be the key trait that sets humans apart from other species. We’re certainly not the strongest beasts in the jungle, but thanks to our smarts (and our capable hands) we came out on top. Now, our dominance is threatened by creatures of our own creation. Computer scientists may now be in the process of building AI with greater-than-human intelligence (“superintelligence”). Such AI could become so powerful that it would either solve all our problems or kill us all, depending on how it’s designed.

The remainder of the article is available in Scientific American or in the PDF archive.


This blog post is dated 11 October 2013. It was published as part of a website overhaul and backdated to reflect the time of publication of the work referenced here.
