SODAS Lecture: AI Alignment and Human Normativity

The Centre for Social Data Science (SODAS) is pleased to announce its Spring Lecture Series 2018. The theme of the series is Social Data Sciences. Speaking across domains as diverse as computational privacy, legal systems, international finance, and big data infrastructures, our speakers will highlight the challenges we face with these new social data configurations and the methodological innovations we need to foster in order to understand and intervene in them.

Lectures take place in Building 35, Floor 3, Room 20 (35.3.20) on the CSS Campus, University of Copenhagen, from 11.00am to 12.30pm.


The fourth speaker is Gillian Hadfield, Richard L. and Antoinette Schamoi Kirtland Professor of Law and Professor of Economics, School of Law, University of Southern California.

AI Alignment and Human Normativity

Abstract

Much of the discussion of AI safety and alignment focuses on how we should regulate AI: what norms AIs should observe. Less attention is paid to the question of how we can build AI systems that are capable of observing human norms. In this talk, I’ll explore two dimensions of this question of how to design AI systems that can interface with human normative systems. The first looks at how human systems deal with the inevitable incompleteness of the contracts intended to specify how human agents should perform. Here we see that human systems depend heavily on external institutional structures, which provide normative information about what is preferred behavior, and on internal cognitive structures, which can read and use that external institutional information to fill out incomplete contracts. AI, we argue, will similarly need the ability to read the external normative environment and convert that information into inevitably incomplete reward structures. In the second part of the talk, I’ll present an example of why developing aligned AI systems will require much more sophisticated models of human normative systems than we currently possess. This example demonstrates why we should expect human normative systems to contain “silly rules” (rules with little functional content) that support the stability of equilibria in which important rules are enforced. The point is that human normative systems are more complex than a simple focus on a handful of ethical puzzles or normative debates would suggest. AI systems that lack models tracking this complexity will, we argue, struggle to integrate into human societies.

Short Bio

Gillian Hadfield is a leading proponent of the reform and redesign of legal systems for a rapidly changing world facing tremendous challenges from globalization and technology. Her extensive research examines how to make law more accessible, effective, and capable of fulfilling its role in balancing innovation, growth, and fairness. Hadfield is a member of the World Economic Forum’s Global Future Council on the Future of Technology, Values and Policy, and co-curates the Forum’s Transformation Map for Justice and Legal Infrastructure. She was appointed in 2017 to the American Bar Association’s Commission on the Future of Legal Education, serves as Director of the USC Center for Law and Social Science, and is a member of the World Justice Project’s Research Consortium. She also serves as an advisor to The Hague Institute for Innovation of Law, LegalZoom, and other legal tech startups. Hadfield holds a J.D. from Stanford Law School and a Ph.D. in economics from Stanford University.

Further Info: https://gillianhadfield.com