On 23rd June 2020, the Coalition for Critical Technology published an open letter titled “Abolish the #TechToPrisonPipeline” to Springer, protesting the forthcoming publication of a research paper titled "A Deep Neural Network Model to Predict Criminality Using Image Processing." The paper was to appear in the Springer Nature book series Transactions on Computational Science and Computational Intelligence. The open letter called on the review committee at Springer to "publicly rescind the offer for the publication of this specific study, along with an explanation of the criteria used to evaluate it."

The coalition includes renowned AI researchers such as Yoshua Bengio and Yann LeCun, who, along with Geoffrey Hinton, are considered the fathers of modern AI, alongside hundreds of researchers from universities such as MIT, Harvard, and Princeton, and companies such as Google and Facebook.

They pointed out that the upcoming publication warrants a collective response because it is emblematic of a larger body of computational research that claims to identify or predict "criminality" using biometric and criminal legal data. In response to the open letter, Springer stated that it would not publish the paper and that it "was rejected after a thorough peer-review process," reported MIT Technology Review.

However, in the broader scheme of things, this is just another example of a more general trend that has emerged in data science and machine learning, where researchers use "socially-contingent data to try and predict or classify complex human behaviour," as pointed out by James Vincent of The Verge.

The open letter comes at a time when selective brutality by law enforcement agencies towards racial minorities in the United States has reached a boiling point, triggering numerous anti-racist protests across the country.

According to the now-withdrawn press release on the controversial research paper from Harrisburg University, the model can "predict if someone is a criminal based solely on a picture of their face," with "80 per cent accuracy and with no racial bias." The coalition points out that these claims, made by a team at Harrisburg University that included a former NYPD officer, rest on unsound scientific premises, research, and methods that have been debunked by numerous studies across disciplines.

Unfortunately, time and time again, these discredited claims keep resurfacing, the latest hiding behind the facade of new and purportedly neutral statistical methods such as machine learning. The coalition also warns that more and more governments are set to embrace machine learning and artificial intelligence as "a means of depoliticising state violence and reasserting the legitimacy of the carceral state, often amid significant social upheaval." This comes despite massive protests from scholars and community organisers against the use of AI technologies by law enforcement, particularly facial recognition.

In fact, "part of the appeal of machine learning is that it is highly malleable — correlations useful for prediction or detection can be rationalised with any number of plausible causal mechanisms. Yet the way these studies are ultimately represented and interpreted is profoundly shaped by the political economy of data science and their contexts of use," noted the coalition. 

Furthermore, they pointed out that "machine learning programs are not neutral; research agendas and the data sets they work with often inherit dominant cultural beliefs about the world. These research agendas reflect the incentives and perspectives of those in the privileged position of developing machine learning models and the data on which they rely. The uncritical acceptance of default assumptions inevitably leads to discriminatory design in algorithmic systems, reproducing ideas which normalise social hierarchies and legitimise violence against marginalised groups."

Threats posed by AI-based crime prediction systems

With more and more governments across the globe taking an authoritarian approach, and more surveillance and predictive policing initiatives getting the green light as a result of the COVID-19 pandemic, it is imperative that we examine the critical shortcomings of present-day AI-based crime prediction, as presented by the coalition.

Firstly, the data generated by the criminal justice system cannot be used to "identify criminals" or predict criminal behaviour. In response to the Harrisburg University researchers' claim, the coalition points out that there is no way to develop a system that predicts or identifies "criminality" without racial bias, because the category of "criminality" itself is racially biased.

"Research of this nature — and its accompanying claims to accuracy — rest on the assumption that data regarding criminal arrest and conviction can serve as reliable, neutral indicators of underlying criminal activity. Yet these records are far from neutral," the coalition argued. 

We also have to consider the fact that these data reflect whom police choose to arrest, how judges decide to rule, and which people are granted longer or more lenient sentences. These processes, as has been proved time and time again, are biased against underprivileged communities across the globe. In the case of the United States, studies have shown that people of colour are treated more harshly than similarly situated white people at every stage of the legal system, which results in severe distortions in the data.
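
To make the point concrete, here is a minimal synthetic sketch (our illustration, not part of the coalition's letter): two groups behave identically, but one is "detected" far more often, and a classifier trained on the resulting arrest records learns to score that group as more "criminal". The 5 per cent base rate, the detection rates, and the neighbourhood-style proxy feature are all arbitrary assumptions made for the example.

```python
# Illustrative sketch: arrest labels encode who was policed, not who offended.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, size=n)              # 0 = group A, 1 = group B
true_behaviour = rng.random(n) < 0.05           # identical 5% base rate in both groups

# Over-policing stand-in: the same behaviour is far more likely to be recorded
# as an arrest for group B than for group A.
detection_rate = np.where(group == 1, 0.60, 0.15)
arrested = true_behaviour & (rng.random(n) < detection_rate)

# A proxy feature correlated with group membership (e.g. neighbourhood).
proxy = (group + rng.normal(0, 0.3, size=n)).reshape(-1, 1)

model = LogisticRegression().fit(proxy, arrested)
scores = model.predict_proba(proxy)[:, 1]

print(f"mean predicted score, group A: {scores[group == 0].mean():.3f}")
print(f"mean predicted score, group B: {scores[group == 1].mean():.3f}")
# Both groups offend at the same true rate, yet group B receives markedly
# higher "criminality" scores, because the labels reflect enforcement.
```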

Secondly, technical measures of "fairness" distract from fundamental issues regarding an algorithm's validity, the coalition points out. "Machine learning scholars are rarely trained in the critical methods, frameworks, and language necessary to interrogate the cultural logics and implicit assumptions underlying their models," noted the open letter. "Many efforts to deal with the ethical stakes of algorithmic systems have centred mathematical definitions of fairness that are grounded in narrow notions of bias and accuracy."
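
A narrow fairness audit can be "passed" without touching this validity problem. The sketch below is our own illustration, not taken from the letter: it treats biased arrest labels as ground truth, and a model that simply reproduces them scores perfectly on per-group accuracy and true-positive rate, even though it flags one group roughly four times as often.

```python
# Illustrative sketch: per-group metrics computed against biased labels
# say nothing about whether the labels themselves are valid.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, size=n)

# Biased "ground truth": arrests recorded at four times the rate for group 1,
# while actual behaviour is assumed identical across groups.
arrested = rng.random(n) < np.where(group == 1, 0.20, 0.05)

# A model that simply reproduces the biased labels.
predictions = arrested.copy()

for g in (0, 1):
    mask = group == g
    accuracy = (predictions[mask] == arrested[mask]).mean()
    flag_rate = predictions[mask].mean()
    print(f"group {g}: per-group accuracy = {accuracy:.2f}, flagged = {flag_rate:.2%}")

# Every per-group accuracy is 1.00, yet group 1 is flagged ~4x as often:
# the fairness arithmetic never questions where the labels came from.
```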

Finally, crime-prediction technology itself reproduces injustices and causes real harm. "Recent instances of algorithmic bias across race, class, and gender have revealed a structural propensity of machine learning systems to amplify historic forms of discrimination, and have spawned renewed interest in the ethics of technology and its role in society." There are profound political implications when crime prediction technologies are integrated into real-world applications, which go beyond the frame of "tech ethics" as currently defined.
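
The amplification the coalition describes can be seen in a toy feedback loop. The sketch below is purely illustrative and rests on assumptions of ours, not claims from the letter: two districts with identical true crime rates, and a "hotspot" rule that reallocates patrols superlinearly towards recorded arrests. It shows how a small historic skew in policing can snowball once predictions feed back into deployment.

```python
# Illustrative sketch of a predictive-policing feedback loop.
import numpy as np

true_crime_rate = np.array([0.05, 0.05])   # two districts, identical true rates
patrol_share = np.array([0.45, 0.55])      # small historic skew in policing

for year in range(10):
    # Recorded arrests reflect enforcement presence, not just behaviour.
    arrests = true_crime_rate * patrol_share * 1_000
    # "Hotspot" allocation: next year's patrols chase recorded arrests,
    # weighted superlinearly (the 1.5 exponent is an arbitrary assumption).
    weights = arrests ** 1.5
    patrol_share = weights / weights.sum()
    print(f"year {year + 1}: patrol share = {np.round(patrol_share, 2)}")

# The districts never differ in behaviour, yet almost all patrols -- and
# therefore almost all future "crime data" -- end up concentrated in one.
```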

In conclusion, the coalition argues that AI applications that claim to predict criminality based on physical characteristics are a part of a legacy of long-discredited pseudosciences such as physiognomy and phrenology, which were and are used by academics, law enforcement specialists, and politicians to advocate for oppressive policing and prosecutorial tactics in poor and racialised communities. 

On the other hand, the hard work put in by activists and scholars is beginning to gain public recognition. In recent weeks, major tech companies such as IBM, Amazon and Microsoft have announced that they will stop collaborating with law enforcement organisations to deploy facial recognition technologies. However, as many have pointed out, technology, whether AI or biotechnology, is just another tool with enormous potential, and the impact it generates depends entirely on the humans who operate it.


[Disclaimer: The author of this article is one of the signatories of the open letter “Abolish the #TechToPrisonPipeline”. We believe in promoting diverse views and opinions, and they need not always conform to our editorial positions.]

Sources of Article

Image from the paper "Automated Inference on Criminality Using Face Images" (Xiaolin Wu, McMaster University, and Xi Zhang, Shanghai Jiao Tong University, 21 November 2016)
