Colloquium

Gabbrielle Johnson (Claremont McKenna College)

Proxies Aren't Intentional; They're Intentional 

This paper concerns the Proxy Problem: machine learning programs often use seemingly innocuous features as proxies for socially sensitive attributes, posing various challenges for the creation of ethical algorithms. I argue that to address this problem, we must first settle a prior question: what does it mean for an algorithm that has access only to seemingly neutral features to be using those features as “proxies” for, and so to be making decisions on the basis of, protected-class features? Borrowing resources from philosophy of mind and language, I argue that the answer depends on an explanatory criterion: an algorithm reasons on the basis of protected-class features when those protected classes explain why the algorithm picks out the individuals it does. This criterion rules out standard theories of proxy discrimination in law and political theory, which rely either on overly intellectual views of the intentions of the agents involved or on overly deflationary views that reduce proxy use to mere statistical correlation. Instead, the explanatory criterion highlights two distinct ways an algorithm can reason on the basis of proxies: either the proxies themselves are meaningfully about the protected classes, highlighting a new kind of intentional content for philosophical theories of mind and language, or the algorithm explicitly represents the protected-class features themselves, in which case proxy discrimination becomes regular, run-of-the-mill discrimination.