Data School

A GLIMPSE INTO THE DIGITAL POORHOUSE

Review of the “Privacyrede 2019” by Virginia Eubanks at TivoliVredenburg, Utrecht, 16-01-2019

Although Virginia Eubanks describes herself as a “hard-won optimist”, it is difficult not to feel a tinge, or even a wave, of technological pessimism when she lays out the devastating consequences of the automation of various social services across the United States. One of the first examples she gives in her lecture on the social consequences of algorithmic decision-making is that of a mother of two boys, who lost access to much-needed cancer treatment and died exactly one day before that access was restored. The automated system that replaced her personal healthcare worker registered an error somewhere in the process of filling out and submitting forms and shut her out of her healthcare program for a supposed “failure to cooperate”. The notification letter provided no further explanation or indication of what specific error had been made. This experiment in systems automation caused millions of people in the state of Indiana to lose access to the government-funded Medicaid program, and the state’s governor canceled the contract with the company responsible, IBM, three years into the planned ten-year period. IBM successfully sued the state, and hundreds of millions of dollars that could have been invested in restoring the healthcare system were instead used to compensate the company that had caused this devastating mess in the first place.

Both in her talk and in the book on which it was based, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (St. Martin’s Press, 2018), Eubanks offers more such cases that show the deeply troubling and far-reaching effects of replacing direct human decision-making with algorithmic tools in areas like healthcare, childhood services, and social housing. What is particular about the cases she studies is that the tools in question are often neither poorly designed nor built by incompetent or malicious engineers. Something ominous lurks behind her claim that these tools are “some of the best we have available right now”. The tools are generally designed to be neutral; they are meant to avoid the various biases that almost inevitably come into play when humans make the decisions. Racism and sexism come to mind immediately here, but Eubanks points especially to the hatred of the poor that pervades North American culture. This hatred, she argues, shapes how America’s social services are organized: not to emancipate people from poverty, but to make moral judgments about who is “worthy” of such emancipation and who “deserves” to remain poor. What is emerging now is what she calls a “digital poorhouse”, in which algorithms and automated systems are the key actants making those judgments.

The problem with the algorithmic tools that have seen increasingly widespread implementation since the late 1960s, then, is not (or not only) that they make the pre-existing systemic biases caused by racism, sexism, and classism worse: the dataset itself is biased. Eubanks demonstrates that the algorithmic tools she investigated are neutral, but the data they work with still suffer from the same problems we have been familiar with for decades. If the only data available about potential child neglect cases come from families who have requested government-funded childhood services, any predictive algorithm, no matter how unbiased, will confuse parenting while poor with poor parenting. If lower-class African-American and Black families are 350% more likely to be referred to child protection by their neighbors than any other racial or ethnic group, the outcomes of any algorithmic analysis of those referrals will inevitably be racist. Never mind that this limited dataset will contain hardly any data from relatively affluent middle-class families, meaning the algorithm could never detect cases of child abuse in households above a certain income threshold. It seems that Elie Wiesel’s famous adage holds true in the realm of algorithmic bias as well: “We must take sides. Neutrality helps the oppressor, never the victim.”

Eubanks is no neo-Luddite. She does not argue that we should do away with automated systems entirely, but she does caution that politicians and designers should not be so eager to replace hands-on social workers with algorithmic tools, because human-to-human labor is so much more than sheer data processing. Finally, she notes once more that the tools she discusses were designed according to a “progressive” methodology (transparent, unbiased, well-intentioned) but still yielded undesirable outcomes. We should therefore design against bias (instead of merely without bias) and perhaps ask an additional set of questions: Is the data itself collected in a biased or coercive way? Is it possible to stop destructive systems after they have been implemented? Can we remedy the harm done by misguided algorithmic tools? I find it questionable whether Eubanks’ optimism is entirely warranted, with companies like IBM more powerful than ever and authoritarian states like China expanding their surveillance programs to Orwellian proportions. She does, however, offer some guidance on how to make things better than they are now. She implores us to try.

Dennis Jansen, Utrecht University

18 January 2019
