Artificial Interdisciplinarity: Artificial Intelligence for Research on Complex Societal Problems

by Seth Baum

28 July 2020

Baum, Seth D., 2021. Artificial interdisciplinarity: Artificial intelligence for research on complex societal problems. Philosophy & Technology, vol. 34, no. S1 (November), pages 45-63, DOI 10.1007/s13347-020-00416-5.

Download Preprint PDF

One major challenge in making progress on global catastrophic risk is its interdisciplinarity. Understanding how best to address the risk requires input from risk analysis, public policy, social science, ethics, and a variety of other fields pertaining to specific risks, such as astronomy for asteroid risk and computer science for artificial intelligence (AI) risk. Working across all these disparate fields is a very difficult challenge for human minds. This paper explores the use of AI to help with the cognitive challenge of interdisciplinary research so as to advance progress on global catastrophic risk and other complex societal problems. It coins the term “artificial interdisciplinarity” to refer to AI systems that help with interdisciplinary research.

While all areas of research can be cognitively difficult, interdisciplinary research poses several distinct challenges. First, it is often difficult to bridge divides between different academic disciplines due to their differences in terminology, paradigms or ways of thinking, and views on what makes for good research. Second, the quantity of literature of relevance to complex interdisciplinary topics can be overwhelmingly large, much too much for any one researcher to master. Third, it is difficult to conduct peer review of interdisciplinary research manuscripts and funding proposals because reviewers often lack expertise across all the disparate disciplines included in the research. Finally, insights from the study of one interdisciplinary topic are not readily transferred to the study of other, similar interdisciplinary topics because of the psychological “distance” between the topics.

Current AI systems already help with some of these challenges. Search engines such as Google Scholar and Semantic Scholar help identify relevant literature and expert reviewers across disciplines. Recommendation engines do the same; for example, the project http://x-risk.net uses a custom artificial neural network to recommend literature on catastrophic risk. Machine learning tools are also being used for “automated content analysis” to map the literature on specific topics. All of these tools facilitate interdisciplinary research, but they are constrained by the fundamental limitations of current AI techniques, in particular the inability of machine learning to handle causal relationships, hierarchies, and open-ended environments.
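
To make the idea concrete, here is a minimal sketch of a literature recommender of the general kind described above. It uses TF-IDF and cosine similarity from scikit-learn rather than the custom neural network behind x-risk.net, and the titles and abstract snippets are invented placeholders, not real publications.

```python
# Minimal sketch of a cross-disciplinary literature recommender.
# Illustrative only: TF-IDF plus cosine similarity, with invented titles
# and abstracts; not the approach used by x-risk.net.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = {
    "Asteroid impact frequency estimates":
        "Observational survey of near-Earth objects and impact frequency estimates.",
    "Verification challenges in machine learning":
        "Formal verification methods for deep neural networks and autonomous systems.",
    "Cost-benefit analysis of planetary defense":
        "Risk analysis and public policy options for asteroid deflection missions.",
}

titles = list(abstracts.keys())
vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(abstracts.values())

def recommend(query: str, top_k: int = 2):
    """Rank the stored abstracts by cosine similarity to the query text."""
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_matrix).ravel()
    ranked = scores.argsort()[::-1][:top_k]
    return [(titles[i], round(float(scores[i]), 3)) for i in ranked]

# Query phrased in policy terms; results are ranked purely by textual similarity.
print(recommend("public policy for near-Earth object risk"))
```

Replacing the TF-IDF vectors with embeddings from a pretrained language model is a common way to make such recommendations less sensitive to discipline-specific vocabulary, which matters for interdisciplinary queries.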

Future “artificial interdisciplinarity” systems could add more value if they can improve at certain key tasks, including the interpretation of texts, the translation of language and ideas from one discipline to another, and the transfer of insight from one topic to another. Each of these is an active area of AI research. For example, the publisher Elsevier has sponsored the project ScienceIE to work on interpretation. The field of AI has major lines of work dedicated to translation across human languages and to transfer learning. Progress on these fronts may require breakthroughs beyond current AI paradigms, but it would be of high value to understanding and addressing global catastrophic risk and other interdisciplinary societal problems.
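
As a hedged illustration of the transfer idea, the sketch below trains a simple relevance classifier on texts about one catastrophic risk (asteroid impact) and then updates it with only two labeled examples from another (AI risk), so the second topic starts from the weights learned on the first. The texts, labels, and the choice of scikit-learn's SGDClassifier are illustrative assumptions; the paper does not prescribe any particular technique, and serious systems would likely build on pretrained language models.

```python
# Toy illustration of transferring a learned model from one risk topic to
# another. All texts, labels, and settings are invented placeholders.
import numpy as np
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**12, alternate_sign=False)
classes = np.array([0, 1])  # 0 = not relevant to risk analysis, 1 = relevant

# Source topic: asteroid risk, with several labeled texts.
texts_a = [
    "impact probability estimates for near-Earth objects",   # relevant
    "deflection mission design and cost-benefit analysis",   # relevant
    "crater formation geology of ancient impacts",           # not relevant
    "telescope survey scheduling software",                  # not relevant
]
labels_a = np.array([1, 1, 0, 0])

# Target topic: AI risk, with only two labeled texts.
texts_b = [
    "failure modes of advanced machine learning systems",    # relevant
    "benchmark leaderboard results for image classifiers",   # not relevant
]
labels_b = np.array([1, 0])

clf = SGDClassifier(random_state=0)
# Learn on the source topic first, then continue training on the target
# topic, so the target-topic model starts from the source-topic weights.
clf.partial_fit(vectorizer.transform(texts_a), labels_a, classes=classes)
clf.partial_fit(vectorizer.transform(texts_b), labels_b)

# Apply the adapted classifier to an unlabeled text from the target topic.
print(clf.predict(vectorizer.transform(["robustness of machine learning systems"])))
```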

Over the long term, it is not hard to imagine some future AI that can accomplish all the cognitive tasks of interdisciplinary research. An advanced artificial general intelligence (AGI) may be able to think at least as well as humans across the full range of cognitive tasks. Such an AI would presumably also be very capable of doing interdisciplinary research. Indeed, some current projects seeking to build AGI are motivated by the cognitive difficulty of interdisciplinary research for human minds. On the other hand, advanced AGI may not be available any time soon and may itself pose major risks, even if it is designed as an “oracle” that can only answer questions that humans pose to it.

This paper builds on several prior lines of GCRI research. All of our research is interdisciplinary, providing us with experience in the cognitive challenges addressed in the paper. We specialize in the transfer of insights across issues, such as in our papers “Lessons for artificial intelligence from other global risks” and “On the promotion of safe and socially beneficial artificial intelligence”. Our prior work also cuts across near-term, medium-term, and long-term AI, such as our papers “Medium-term artificial intelligence and society” and “Reconciliation between factions focused on near-term and long-term artificial intelligence”. Finally, our knowledge of the motivations of current AGI projects derives from our paper “A survey of artificial general intelligence projects for ethics, risk, and policy”.


Stockholm Stadsbiblioteket photo credit: Gunnar Ridderström
