SF DA's office is adpoting an old definition of algorithmic fairness

On Thursday afternoon, it was reported that the San Francisco DA’s office will adopt a new machine-learning toolkit to help remove ‘bias’ when prosecutors decide whether to charge someone with a crime. The toolkit, developed by Stanford’s Computational Policy Lab, uses machine learning to strip case files of any information that might allow a prosecutor to infer a defendant’s race: names, physical characteristics like eye and hair color, and location names.
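Neither the reporting nor the announcement spells out how the redaction works under the hood, so the sketch below is only an illustration of the general idea of automated redaction, not the lab’s method. The word lists, placeholder tokens, and redact function are all hypothetical, and a production system would presumably rely on trained named-entity models rather than hard-coded vocabularies.

```python
import re

# Hypothetical vocabularies for illustration only; a real tool would use
# trained named-entity recognition, not hand-written lists like these.
NAMES = ["John Doe", "Jane Roe"]
NEIGHBORHOODS = ["Bayview", "Tenderloin"]
DESCRIPTORS = r"\b(black|white|brown|blonde?|blue|green|hazel)\s+(hair|eyes?|skin)\b"


def redact(narrative: str) -> str:
    """Replace potentially race-identifying tokens with neutral placeholders."""
    for name in NAMES:
        narrative = narrative.replace(name, "[PERSON]")
    for hood in NEIGHBORHOODS:
        narrative = narrative.replace(hood, "[LOCATION]")
    return re.sub(DESCRIPTORS, "[DESCRIPTOR]", narrative, flags=re.IGNORECASE)


print(redact("Officers stopped John Doe, a man with black hair, near Bayview."))
# -> "Officers stopped [PERSON], a man with [DESCRIPTOR], near [LOCATION]."
```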

It’s notable that this tool, unlike so many others deployed in the carceral system, isn’t a decision system. It doesn’t deal in risk assessment, prediction, or prescription. It just automatically removes information that might let a prosecutor be implicitly racist. It approaches the line of identifying racist decision-making (think bias audits), but instead simply changes the rules of the decision-making itself.

In this way, the tool is more of a policy choice than an application of AI, and could easily be described without mentioning machine learning at all: prosecutors will receive no information on the descriptive characteristics of defendants, police officers, or crime locations involved in their cases.

Policies like this are intuitive. Many people’s conception of “bias” in the carceral system is of an orchestra of small decisions that, in concert, converge into systemic racism. In that framing, a policy like this could help dramatically: reducing racism in any of those smaller decisions should change how systemic racism operates. I don’t deny that it might help, but the effect is likely to be very small, and the approach is misleading in its focus.

As a computational researcher, I can’t help but see this policy choice in terms of how algorithmic fairness researchers have dealt with “sensitive attributes” like race or gender in algorithms. As James Johndrow and Kristian Lum explain, there is a camp in the fairness literature that “assumes a model will be fair if the protected variable(s) are omitted from the analysis”1, an argument now generally rejected wholesale. Removing “protected” variables such as race or gender doesn’t remove the other data that algorithms learn to use as proxies, like zip code or income.
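To make the proxy problem concrete, here is a small simulation on entirely synthetic data; nothing below comes from any real caseload, and the feature names, correlation strengths, and coefficients are invented for illustration. A classifier that never sees race still scores the two groups very differently, because zip code stands in for it.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Synthetic population: `race` is never shown to the model,
# but `zip_code` is strongly correlated with it (a proxy).
race = rng.integers(0, 2, n)                              # 0 or 1, hidden from the model
zip_code = np.where(rng.random(n) < 0.9, race, 1 - race)  # aligned with race 90% of the time
income = rng.normal(50 + 10 * (1 - race), 15, n)          # another correlated feature

# Historical outcome labels that themselves reflect disparate treatment by race.
label = (rng.random(n) < 0.2 + 0.3 * race).astype(int)

# Train only on the "non-protected" features.
X = np.column_stack([zip_code, income])
model = LogisticRegression().fit(X, label)
scores = model.predict_proba(X)[:, 1]

# The model never saw race, yet its scores differ sharply by race.
print(f"mean score, race=0: {scores[race == 0].mean():.2f}")
print(f"mean score, race=1: {scores[race == 1].mean():.2f}")
```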

Instead, modern algorithmic fairness research borrows the term “disparate impact” from legal scholars2. Disparate impact definitions usually declare that a system is “fair” if the accuracy of its predictions is similar across groups, like racial or gender identities. For example, an algorithm that predicts mood from facial expressions might be considered ‘fair’ if the distribution of wrong predictions is similar between black and white people.
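As a rough sketch of how a check like that is typically operationalized, the snippet below computes accuracy separately for two groups and reports the gap. The group_accuracy helper and the toy data are my own illustration rather than a standard API, and real audits draw on a range of metrics beyond plain accuracy.

```python
import numpy as np

def group_accuracy(y_true: np.ndarray, y_pred: np.ndarray, group: np.ndarray) -> dict:
    """Accuracy of the predictions, computed separately for each group label."""
    return {str(g): float((y_pred[group == g] == y_true[group == g]).mean())
            for g in np.unique(group)}

# Toy example: predictions for ten cases split across two groups.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 1, 1, 0])
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

per_group = group_accuracy(y_true, y_pred, group)
print(per_group)                                  # {'A': 0.8, 'B': 0.6}
gap = max(per_group.values()) - min(per_group.values())
print(f"accuracy gap between groups: {gap:.2f}")  # 0.20
```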

This new policy from the SF DA’s office is effectively in the first camp, claiming that by removing protected variables, the model – the DA office’s prosecutorial system – will be more fair. We know this isn’t true even in purely mathematical systems; why would it be any different in a DA’s office?

Even if prosecutors don’t see the race or neighborhood of a defendant, there will inevitably be “correlated variables” presented in a case. For example, any drug charge that lands on a San Francisco prosecutor’s desk is 19 times more likely to be for a black defendant than for a defendant of any other race, even though rates of drug use across racial groups are similar3. Removing characteristics like hair color or neighborhood won’t change that arrest disparity, and it won’t change the over-criminalization of black communities by police.

Because so much of the racism in the policing system happens before a case ever reaches prosecution, this policy also misses where truly progressive DA offices should be focusing their resources. If a DA’s office wants to make real, lasting change in over-policed communities, it should pay close attention to those protected characteristics instead of erasing them. By knowing who defendants are and where they come from, prosecutors can actively fight to direct them toward restorative justice and state services rather than incarceration. They can understand where communities are hurting the most and where over-policing is at its worst.