GCRI Receives $250,000 Donation for AI Research and Outreach

15 December 2018

We are delighted to announce that GCRI has received a $250,000 donation from Gordon Irlam for GCRI’s research and outreach on artificial intelligence. The donation will mainly fund Seth Baum and Robert de Neufville during 2019.

The donation is a major first step toward GCRI’s goal of raising $1.5 million to enable the organization to start scaling up. Our next fundraising priority is to bring GCRI Director of Research Tony Barrett on full-time, and possibly also one other senior hire whom we can only discuss privately.

Regarding his donation, Irlam states:

“GCRI does solid and important work on vitally important topics and is one of the only US organizations working on these issues. They have done this work in the past on a very small budget. Advanced AI will have a profound effect on society. It is important that this effect be beneficial. My giving to GCRI is in the hope that they can scale up their research, and scale up their research outreach, so that societal and corporate policies and responses to artificial general intelligence are shaped appropriately.”

All of us at GCRI are grateful for this donation and excited for the work it will enable us to do.

Here is a summary of the specific research and outreach projects funded by this donation:

Corporate governance of AI: Following GCRI’s recent publications on AI skepticism and misinformation, this project seeks to improve how the for-profit sector handles AI risks. It will begin with outreach to people at AI companies and may include further research on strategies for improving corporate governance of AI.

National security dimensions of AI: This project conducts research and outreach on the risks associated with national security and military involvement in AI. The project builds on GCRI’s recent success in outreach to the US national security community on AI, as well as our backgrounds in AI and national security.

Anthropocentrism in AI ethics: This project evaluates the extent to which AI ethics favors humans, develops proposals for how AI ethics should handle questions of human favoritism, and conducts outreach to improve the state of AI ethics conversations. The project extends recent GCRI research on Social choice ethics in artificial intelligence.

Prospects for collective action on AI: This project assesses how to promote positive interactions between different AI groups and avoid dangerous forms of competition, such as races in which groups cut corners on safety in order to build AI first. The project applies GCRI’s expertise on social science topics such as the governance of common-pool resources.

Governance of AI and global catastrophic risk: This project draws on prior scholarship and experience on risk governance to develop general insights and strategies for the governance of global catastrophic risk, with emphasis on AI risk.

Support for the AI and global catastrophic risk talent pools: Finally, this project involves GCRI identifying, training, and mentoring people seeking to become more active in work on AI and global catastrophic risk. The project will support GCRI’s efforts to scale up and will also support the wider AI and global catastrophic risk community.
