View in Project Syndicate

This article discusses the risk posed by advanced AI and argues that it should not be dismissed as mere science fiction.

The article begins as follows:

NEW YORK – Recent advances in artificial intelligence have been nothing short of dramatic. AI is transforming nearly every sector of society, from transportation to medicine to defense. So it is worth considering what will happen when it becomes even more advanced than it already is.

The apocalyptic view is that AI-driven machines will outsmart humanity, take over the world, and kill us all. This scenario crops up often in science fiction, and it is easy enough to dismiss, given that humans remain firmly in control. But many AI experts take the apocalyptic perspective seriously, and they are right to do so. The rest of society should as well.

To understand what is at stake, consider the distinction between “narrow AI” and “artificial general intelligence” (AGI). Narrow AI can operate only in one or a few domains at a time, so while it may outperform humans in select tasks, it remains under human control.

The remainder of the article is available at Project Syndicate.

The article was reprinted in The New Times (Rwanda), Khaleej Times (United Arab Emirates), World Economic Forum, MarketWatch, Japan Times, Asia Times, Médias24 (Morocco), Times of Oman, Khmer Times, Taipei Times, and Dagens Perspektiv (Oslo).


This blog post was published on 16 May 2018 as part of a website overhaul and backdated to reflect the publication date of the work referenced here.