Workshop: Algorithmic Fairness

Workshop organized by the Responsible AI project at University of Copenhagen.

Attendance is free but please send an email to Sune Holm (suneh@ifro.ku.dk) if you plan to attend.

Programme

Find abstracts below.

08.30 - 09.00 Coffee and croissants
09.00 - 10.00 Procedural and Outcome Algorithmic Fairness
Sune Holm, University of Copenhagen
10.15 - 11.15 On the Apparent Conflict Between Individual and Group Fairness
Reuben Binns, Oxford University
11.30 - 12.30 What Law, and When: Legally Contextualizing Algorithmic Discrimination and De-biasing Design
Jacob Livingston Slosser, University of Copenhagen
12.30 - 13.30 Lunch
13.30 - 14.30 Discrimination and Fairness in Government Algorithmic Profiling
Frej Klem Thomsen, The Danish Institute for Human Rights
14.45 - 15.45 Algorithm-Based Sentencing and Discrimination
Kasper Lippert-Rasmussen, Aarhus University
16.00 - 17.00 Sentencing and Algorithmic Transparency
Jesper Ryberg, Roskilde University
17.30 Drinks and dinner at Radio

Abstracts

Sune Holm: Procedural and Outcome Algorithmic Fairness
When is an algorithm fair? In the wake of several cases in which algorithmic prediction-based decisions have been shown to disparately affect socially salient groups, recent work on fairness in machine learning has mainly been concerned with problems of outcome fairness. In this paper I discuss a recent attempt to make the case for procedural fairness, where procedural fairness is defined as a matter of which features an algorithm is permitted to use to make prediction-based decisions. I first consider empirical and theoretical approaches to determining which features a procedurally fair algorithm should be permitted to deploy. I then turn to the more general question of the relationship between procedural and outcome algorithmic fairness. Can algorithmic fairness be reduced purely to a matter of procedure or outcome? Or does it involve some kind of equilibrium between the two?
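
To make the distinction concrete, the two notions can be sketched roughly in code: procedural fairness constrains which features a decision rule may consult, while outcome fairness audits how decisions fall across socially salient groups. The sketch below is only illustrative and is not drawn from the paper; the dataset, feature names, and decision rule are hypothetical.

```python
# Illustrative sketch (hypothetical data, features, and rule): procedural fairness
# restricts the feature set a decision rule may use; outcome fairness audits the
# resulting decision rates per group.

applicants = [
    {"income": 52000, "debt": 4000, "group": "A"},
    {"income": 31000, "debt": 9000, "group": "B"},
    {"income": 47000, "debt": 2000, "group": "B"},
    {"income": 58000, "debt": 7000, "group": "A"},
]

PERMITTED_FEATURES = {"income", "debt"}  # procedural constraint: "group" may not be used

def decide(applicant):
    # Decision rule restricted to the permitted features only.
    usable = {k: v for k, v in applicant.items() if k in PERMITTED_FEATURES}
    return usable["income"] - usable["debt"] > 40000

def approval_rate(records, group):
    # Outcome-fairness audit: positive-decision rate within one group.
    members = [r for r in records if r["group"] == group]
    return sum(decide(r) for r in members) / len(members)

print("group A rate:", approval_rate(applicants, "A"))
print("group B rate:", approval_rate(applicants, "B"))
```

A procedurally fair rule in this narrow sense may still produce unequal rates across groups, which is the kind of tension the abstract asks about.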

Reuben Binns: On the Apparent Conflict Between Individual and Group Fairness
A distinction has been drawn in fair machine learning research between `group' and `individual' fairness measures. Many technical research papers assume that both are important but conflicting, and propose ways to minimise the trade-offs between these measures. This paper argues that this apparent conflict is based on a misconception. It draws on discussions within fair machine learning research, and from political and legal philosophy, to argue that individual and group fairness are not fundamentally in conflict. First, it outlines accounts of egalitarian fairness which encompass plausible motivations for both group and individual fairness, thereby suggesting that there need be no conflict in principle. Second, it considers the concept of individual justice from legal philosophy and jurisprudence, which seems similar to, but actually contradicts, the notion of individual fairness proposed in the fair machine learning literature. The conclusion is that the apparent conflict between individual and group fairness is an artefact of the blunt application of fairness measures rather than a matter of conflicting principles. In practice, the conflict may be resolved by a nuanced consideration of the sources of `unfairness' in a particular deployment context, and the carefully justified application of measures to mitigate it.
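
For readers unfamiliar with the measures at issue, the following rough sketch contrasts a typical group fairness measure (a demographic parity gap) with a typical individual fairness check (similar individuals should receive similar scores). It is not taken from the paper; the data, similarity metric, and thresholds are hypothetical and chosen only for illustration.

```python
# Illustrative sketch (hypothetical data and thresholds): one group fairness
# measure versus one individual fairness check.

from itertools import combinations

people = [
    {"id": 1, "features": (0.9, 0.1), "group": "A", "score": 0.80},
    {"id": 2, "features": (0.9, 0.2), "group": "B", "score": 0.35},
    {"id": 3, "features": (0.2, 0.8), "group": "A", "score": 0.30},
    {"id": 4, "features": (0.3, 0.7), "group": "B", "score": 0.25},
]

def parity_gap(records, threshold=0.5):
    # Group fairness: difference in positive-decision rates between the two groups.
    def rate(group):
        members = [r for r in records if r["group"] == group]
        return sum(r["score"] >= threshold for r in members) / len(members)
    return abs(rate("A") - rate("B"))

def individual_violations(records, sim_eps=0.2, score_eps=0.1):
    # Individual fairness: pairs with similar features should get similar scores.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return [(a["id"], b["id"]) for a, b in combinations(records, 2)
            if dist(a["features"], b["features"]) <= sim_eps
            and abs(a["score"] - b["score"]) > score_eps]

print("demographic parity gap:", parity_gap(people))
print("individual-fairness violations:", individual_violations(people))
```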

Jacob Livingston Slosser: What Law, and When: Legally Contextualizing Algorithmic Discrimination and De-biasing Design
With an ever-increasing amount of information and complexity, decisions are increasingly made automatically or semi-automatically by algorithms. But algorithms, which build rules from statistics, notoriously reproduce patterns of past inequalities and transform them into weights for the purpose of new decisions. Algorithmic discrimination is the phenomenon that arises when computer applications that are supposed to operate in a way that is equal for all turn out to be biased. De-biasing software is designed to neutralize the statistics that reproduce structural inequalities and help create algorithmic discrimination. However, the implementation of algorithms and accompanying de-biasing systems is not so straightforward. What kind of legal principles should determine the content of de-biasing software so as to alleviate the legal responsibility of corporations and public institutions for algorithmic discrimination? And how do those principles adapt to context-specific discrimination that is justified given a specific algorithm or factual context?

For example, some algorithms are used for medical purposes, helping to diagnose diseases or allocate costs, among other things. Some of these algorithms need to be designed with equal outcomes in mind, but some, by necessity, do not. Diagnostic algorithms, for instance, may be designed specifically for people of a given gender, age, or race. These may be perfectly legal, since they are intended to achieve a legitimate aim: the best possible prognosis for some disease that may have different prognoses or likelihoods depending on sex, age, or race. Other kinds of intentional or unintentional discrimination may not be legal: for instance, outcomes based on a protected class or proxies thereof, with no underlying medical reasoning behind the distinctions. An algorithm may or may not be biased depending on what the reasons for the bias are, and depending on what the legal and factual context is.

In this paper we focus only on unintended bias and the regulation of such bias by asking about both the positive and negative obligations under European human rights law and EU law. We ask, first, whether and how unintended bias that leads to discriminatory practices may be addressed legally; i.e. under what circumstances would it be possible to challenge a practice as illegal, and how could a person who is the victim of algorithmic discrimination redress that harm? And second, whether (and if so, under what conditions) a private corporation or public institution that uses or runs an algorithmic system may be held responsible for discriminatory practices even when it has sought to avoid such practices by implementing de-biasing software in its algorithms. We argue that the frequent calls for more ethical approaches to AI, which often mention discrimination, are misguided because there can be no catch-all solution to the problem of what, in a specific context, constitutes equality. Instead, based on the most recent trends in equality jurisprudence, we propose how de-biasing software could be vetted against existing legal standards of equality.
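
As a rough indication of what "de-biasing software" can mean in the simplest case, the sketch below applies a standard reweighing step (not necessarily the approach discussed in the paper): training examples are reweighted so that group membership and the favourable label become statistically independent in the weighted data. The records and labels are hypothetical.

```python
# Illustrative sketch (hypothetical data): a standard reweighing de-biasing step.
# Each example receives the weight expected-frequency / observed-frequency for
# its (group, label) combination, making group and label independent once weighted.

from collections import Counter

training = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "A", "label": 0}, {"group": "B", "label": 1},
    {"group": "B", "label": 0}, {"group": "B", "label": 0},
]

n = len(training)
group_counts = Counter(r["group"] for r in training)
label_counts = Counter(r["label"] for r in training)
joint_counts = Counter((r["group"], r["label"]) for r in training)

def weight(record):
    g, y = record["group"], record["label"]
    expected = (group_counts[g] / n) * (label_counts[y] / n)  # under independence
    observed = joint_counts[(g, y)] / n                       # in the raw data
    return expected / observed

for r in training:
    print(r, round(weight(r), 3))
```

Whether such a technical adjustment also satisfies a legal standard of equality is, of course, precisely the question the paper raises.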

Frej Klem Thomsen: Discrimination and Fairness in Government Algorithmic Profiling
My ambitions in this paper are fairly modest. I aim to introduce the problem of discriminatory algorithmic profiling by clarifying how algorithmic profiling works and how it can be discriminatory, and to present some of the work on fair machine learning that data and computer scientists have developed. My main argument will be that how one ought to evaluate criteria of fairness and non-discrimination for algorithmic profiling depends heavily on which background theory of what makes discrimination wrong one adopts. In section two, I first say a little about algorithmic profiling and the process of employing machine learning to train a profiling model. While these topics are interesting in their own right, my main reason for discussing them is that some points of clarification will be of use in the later discussion of discrimination. In section three, I present the four ways in which algorithmic profiling can be discriminatory, along with some of the work on fair machine learning that data and computer scientists have developed. Next, in section four, I present the three arguably most prominent current accounts of wrongful discrimination. Finally, I illustrate some of the ways in which the need to explain wrongful discriminatory algorithmic profiling can inform theory, as well as the implications these accounts have for what one must think about fairness and discrimination in algorithmic profiling.
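
As an informal illustration of algorithmic profiling in its simplest form (not drawn from the paper), the sketch below "trains" a profiling model by tallying observed outcome rates for each combination of profile attributes; the attributes and records are hypothetical.

```python
# Illustrative sketch (hypothetical attributes and records): the simplest kind of
# profiling model, a risk score learned as the historical outcome rate per profile.

from collections import defaultdict

history = [
    {"age_band": "18-25", "prior_contacts": "yes", "outcome": 1},
    {"age_band": "18-25", "prior_contacts": "yes", "outcome": 0},
    {"age_band": "18-25", "prior_contacts": "no",  "outcome": 0},
    {"age_band": "26-40", "prior_contacts": "no",  "outcome": 0},
    {"age_band": "26-40", "prior_contacts": "yes", "outcome": 1},
]

def train(records, attributes):
    # "Training" here is just tallying outcome rates per attribute profile.
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        profile = tuple(r[a] for a in attributes)
        totals[profile] += 1
        positives[profile] += r["outcome"]
    return {p: positives[p] / totals[p] for p in totals}

model = train(history, ["age_band", "prior_contacts"])
print(model[("18-25", "yes")])  # predicted risk for this profile
```

Even this toy model shows how past patterns, including unequal ones, flow directly into the scores assigned to new cases.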

Kasper Lippert-Rasmussen: Algorithm-Based Sentencing and Discrimination
US courts are increasingly using actuarial recidivism risk prediction instruments in estimating offenders’ dangerousness and, thus, the warranted severity of the punishment. Some argue that this so-called evidence-based sentencing bypasses well-known biases in non-actuarially based recidivism risk assessments. However, I argue that, in the present US context, evidence-based sentencing is discriminatory. I argue that it is quite likely to be indirectly discriminatory and that its endorsement might also be directly discriminatory. Finally, I argue that insofar as it amounts to unfair discrimination – whether direct or indirect – against African Americans, this might have radical implications for the US penal system in general, to wit, that it too, like most other legal systems and for reasons that have so far gone unnoticed, is unfairly discriminatory against African Americans or other relevantly similar groups.
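
As a rough illustration of the kind of audit that often underlies claims of indirect discrimination in actuarial risk prediction (not drawn from the paper), the sketch below compares false positive rates across groups for a hypothetical risk instrument; the scores, outcomes, and threshold are invented.

```python
# Illustrative sketch (hypothetical scores and outcomes): auditing a risk
# instrument for error-rate disparities between groups.

cases = [
    # (group, risk_score, reoffended)
    ("A", 0.8, False), ("A", 0.7, True), ("A", 0.6, False), ("A", 0.3, False),
    ("B", 0.9, True),  ("B", 0.4, False), ("B", 0.3, False), ("B", 0.2, False),
]

THRESHOLD = 0.5  # scores at or above this are treated as "high risk"

def false_positive_rate(group):
    # Share of non-reoffenders in the group who were nonetheless flagged high risk.
    non_reoffenders = [(g, s, y) for g, s, y in cases if g == group and not y]
    flagged = [c for c in non_reoffenders if c[1] >= THRESHOLD]
    return len(flagged) / len(non_reoffenders)

for group in ("A", "B"):
    print(group, "false positive rate:", false_positive_rate(group))
```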

Jesper Ryberg: Sentencing and Algorithmic Transparency
Algorithmic transparency is a topic that has attracted increasing attention in academic discussions of the use of AI in criminal justice practice. Roughly put, the worry is that the introduction of algorithms into the sentencing process may undermine the scrutability of, and insight into, judicial decision-making. However, there is a striking contrast between the repeated emphasis on the significance of transparency and the lack of attempts to answer precisely what the transparency challenge consists in and to devise viable solutions. This paper is a modest attempt at bringing research forward in this respect by examining, on the one hand, various reasons why transparency regarding the factors underlying sentencing decisions is important and, on the other, how the need for transparency can be satisfied.