CHAI aims to reorient the foundations of AI research toward the development of provably beneficial systems. At present, it is not possible to specify a formula for human values in any form that we know would provably benefit humanity if that formula were installed as the objective of a powerful AI system. In short, any initial formal specification of human values is bound to be wrong in important ways, which means we need some way to represent uncertainty in the objectives of AI systems. This way of formulating objectives stands in contrast to the standard model of AI, in which the system's objective is assumed to be known completely and correctly.

Therefore, much of CHAI's research effort to date has focused on developing and communicating a new model of AI development, in which AI systems should be uncertain of their objectives and deferential to humans in light of that uncertainty. However, our interests extend to a variety of other problems in the development of provably beneficial AI systems. Our areas of greatest focus so far have been the foundations of rational agency and causality, value alignment and inverse reinforcement learning, human-robot cooperation, multi-agent perspectives and applications, and models of bounded or imperfect rationality. Other areas of interest to our mission include adversarial training and testing for ML systems, various AI capabilities, topics in cognitive science, ethics for AI and AI development, robust inference and planning, security problems and solutions, and transparency and interpretability methods.
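The intuition behind this new model can be illustrated with a toy decision problem, in the spirit of off-switch-game analyses. The sketch below is purely illustrative (the payoffs, probabilities, and function names are our own assumptions, not CHAI's formalism): a system that is uncertain whether a proposed action actually serves the human's objective finds, by its own expected-value calculation, that deferring to the human weakly dominates acting unilaterally.

```python
# A toy, purely illustrative model (our assumptions, not CHAI's formalism)
# of why objective uncertainty makes deference rational. An action pays +1
# if it matches the human's objective and -1 if it does not; the system
# knows only the probability p_good that the action is good.

def expected_value_act(p_good: float) -> float:
    """Act unilaterally: payoff +1 with probability p_good, else -1."""
    return p_good * 1.0 + (1.0 - p_good) * (-1.0)

def expected_value_defer(p_good: float) -> float:
    """Defer to the human, who (assumed rational) permits the action
    only when it is actually good, so the -1 outcome never occurs."""
    return p_good * 1.0 + (1.0 - p_good) * 0.0

def should_defer(p_good: float) -> bool:
    """Deference weakly dominates acting, and strictly dominates
    whenever the system is at all uncertain (p_good < 1)."""
    return expected_value_defer(p_good) >= expected_value_act(p_good)

# Under any degree of uncertainty, the expected-value calculation
# itself recommends deferring to the human.
for p in (0.2, 0.6, 0.99):
    assert should_defer(p)
```

The design point this sketch captures: deference is not hard-coded as a rule; it falls out of ordinary decision-theoretic reasoning once the objective itself is treated as uncertain.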

In addition to purely academic work, CHAI strives to produce intellectual outputs for general audiences. We also advise governments and international organizations on policies relevant to ensuring that AI technologies benefit society, and offer insight on a variety of individual-scale and societal-scale risks from AI, such as those pertaining to autonomous weapons, the future of employment, and public health and safety.

Below is a list of CHAI's publications since we began operating in 2016. Many of our publications are collaborations with other AI research groups; we view collaborations as key to integrating our perspectives into mainstream AI research.


1. Overviews

1.1. Books

1.2. Overviews of societal-scale risks from AI

1.3. Overviews of beneficial AI applications

  • Anca Dragan, Andrew Alleyne, Frank Allgöwer, Aaron Ames, Saurabh Amin, James Anderson, Anuradha Annaswamy, Panos Antsaklis, Neda Bagheri, Hamsa Balakrishnan, Bassam Bamieh, John Baras, Margret Bauer, Alexandre Bayen, Paul Bogdan, Steven Brunton, Francesco Bullo, Etienne Burdet, Joel Burdick, Laurent Burlion, Carlos Canudas de Wit, Ming Cao, Christos Cassandras, Aranya Chakrabortty, Giacomo Como, Marie Csete, Fabrizio Dabbene, Munther Dahleh, Amritam Das, Eyal Dassau, Claudio De Persis, Mario di Bernardo, Stefano Di Cairano, Dimos Dimarogonas, Florian Dörfler, John Doyle, Francis Doyle III, Magnus Egerstedt, Johan Eker, Sarah Fay, Dimitar Filev, Angela Fontan, Elisa Franco, Masayuki Fujita, Mario Garcia-Sanz, Dennice Gayme, WPMH Heemels, João Hespanha, Sandra Hirche, Anette Hosoi, Jonathan How, Gabriela Hug, Marija Ilić, Hideaki Ishii, Ali Jadbabaie, Matin Jafarian, Samuel Qing-Shan Jia, Tor Johansen, Karl Johansson, Dalton Jones, Mustafa Khammash, Pramod Khargonekar, Mykel Kochenderfer, Andreas Krause, Anthony Kuh, Dana Kulić, Françoise Lamnabhi-Lagarrigue, Naomi Leonard, Frederick Leve, Na Li, Steven Low, John Lygeros, Iven Mareels, Sonia Martinez, Nikolai Matni, Tommaso Menara, Katja Mombaur, Kevin Moore, Richard Murray, Toru Namerikawa, Angelia Nedich, Sandeep Neema, Mariana Netto, Timothy O’Leary, Marcia O’Malley, Lucy Pao, Antonis Papachristodoulou, George Pappas, Philip Paré, Thomas Parisini, Fabio Pasqualetti, Marco Pavone, Akshay Rajhans, Gireeja Ranade, Anders Rantzer, Lillian Ratliff, J Anthony Rossiter, Dorsa Sadigh, Tariq Samad, Henrik Sandberg, Sri Sarma, Luca Schenato, Jacquelien Scherpen, Angela Schoellig, Rodolphe Sepulchre, Jeff Shamma, Robert Shorten, Bruno Sinopoli, Koushil Sreenath, Jakob Stoustrup, Jing Sun, Paulo Tabuada, Emma Tegling, Dawn Tilbury, Claire Tomlin, Jana Tumova, Kevin Wise, Dan Work, Junaid Zafar, Melanie Zeilinger. 2023. Control for Societal-scale Challenges: Road Map 2030. IEEE Control Systems Society Publication
  • Jonathan Stray. 2022. Designing Recommender Systems to Depolarize. First Monday
  • Raphael Taiwo Aruleba, Tayo Alex Adekiya, Nimibofa Ayawei, George Obaido, Kehinde Aruleba, Ibomoiye Domor Mienye, Idowu Aruleba, Blessing Ogbuokiri. 2022. COVID-19 diagnosis: a review of rapid antigen, RT-PCR and artificial intelligence methods. Bioengineering 9 (4), 153
  • Jocelyn Maclure, Stuart Russell. 2021. AI for Humanity: The Global Challenges. Reflections on Artificial Intelligence for Humanity

2. Core topics

2.1. Foundations of rational agency & causality

2.2. Value alignment and inverse reinforcement learning

2.3. Human-robot cooperation

2.4. Multi-agent perspectives and applications

2.5. Models of bounded or imperfect rationality

2.6. Models of human cognition

3. Other topics

3.1. Adversarial training and testing

3.2. AI capabilities, uncategorized

3.3. Cognitive science, uncategorized

3.4. Ethics for AI and AI development

3.5. Robust inference, learning, and planning

3.6. Security problems and solutions

3.7. Transparency & interpretability