Deep Learning and the Sociology of Human-Level Artificial Intelligence

by Seth Baum

18 June 2020

The study of artificial intelligence has a long history of contributions from critical outside perspectives, such as work by philosopher Hubert Dreyfus. Following in this tradition is a new book by sociologist Harry Collins, Artifictional Intelligence: Against Humanity’s Surrender to Computers. I was invited to review the book for the journal Metascience.

The main focus of the book is on nuances of human sociology, especially language, and their implications for AI. This is a worthy contribution, all the more so because social science perspectives are underrepresented in the study of AI relative to perspectives from computer science, cognitive science, and philosophy. On the other hand, the book falls short in its treatment of the AI techniques it addresses. A better book for that is Rebooting AI by Gary Marcus and Ernest Davis; I would recommend it to readers outside the field of computer science who would like to understand the computer science of AI.

Artifictional Intelligence argues that deep learning—the current dominant AI technique—cannot master human language because it is based on statistical pattern recognition over large datasets, whereas language often addresses novel situations for which data is scarce or absent. (Rebooting AI also makes this argument.) Artifictional Intelligence shows this via some clever and entertaining experiments, such as using Google Translate to translate certain phrases from English to another language and then back into English. For example, “I field at short leg”, an expression from cricket, is more successfully translated to and from Afrikaans (“I field on short leg”) than Chinese (“I am in the short leg field”), which makes sense given the geography of cricket. (The translations listed here are from the time of writing this blog post. The translations constantly change as the Google Translate algorithm is updated and as it processes more data.)
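For readers who would like to try this kind of round-trip translation test themselves, here is a minimal sketch. It assumes the google-cloud-translate Python package (v2 client interface) is installed and that Google Cloud credentials are configured; the phrase, the language codes ("af" for Afrikaans, "zh-CN" for simplified Chinese), and the helper function name are illustrative choices, and the output will vary as Google updates its translation models.

```python
# Rough sketch of the round-trip translation test described above.
# Assumes the google-cloud-translate package (v2 interface) and valid
# Google Cloud credentials; results will change as the models change.
from google.cloud import translate_v2 as translate

client = translate.Client()

def round_trip(text, intermediate_language):
    """Translate English text to an intermediate language and back."""
    there = client.translate(text, source_language="en",
                             target_language=intermediate_language)
    back = client.translate(there["translatedText"],
                            source_language=intermediate_language,
                            target_language="en")
    return back["translatedText"]

phrase = "I field at short leg"  # a cricket expression
for lang in ("af", "zh-CN"):     # Afrikaans, Chinese (simplified)
    print(lang, "->", round_trip(phrase, lang))
```

The interesting comparison is how much of the original cricket idiom survives the round trip through each language, which is the behavior the book's experiments probe.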

The book further argues that for an AI to achieve human-level language ability, it would need to be embedded in human society. Only then would it master the nuances of human language. The book draws on Collins’s experience as a sociologist studying communities of gravitational wave physicists. Collins participated in imitation games in which he tried to pass himself off as a gravitational wave physicist, analogous to the well-known Turing test for AI. Collins attributes his own success at these games to his extensive time embedded in gravitational wave physics communities. This experience, as well as his understanding of the relevant sociology, prompts Collins to conclude that an AI would need to be similarly embedded in order to reach human-level ability in language.

One serious problem with the book is that it consistently treats human-level AI as a scientific endeavor without considering its ethical and societal implications. Collins wishes the field of AI were more like the field of gravitational wave physics in its narrow focus on big scientific breakthroughs. That is bad advice. The field of AI needs more attention to its ethical and societal implications, not less. AI has profound ethical and societal implications given its many current and potential future applications. AI experts need to participate in efforts to address these matters in order to ensure that these efforts are based on a sound understanding of the technology.

Academic citation:
Baum, Seth D., 2020. Deep learning and the sociology of human-level artificial intelligence. Metascience, vol. 29, no. 2 (July), pages 313-317, DOI 10.1007/s11016-020-00510-6.


