Racism in predictive justice: the issues with algorithmic policing

A new article by Will Douglas Heaven, senior AI editor at the MIT Technology Review, calls for an end to the use of AI-powered predictive policing and justice. The article examines a number of ways that race feeds into AI algorithms and how this can harm minorities. It argues that current AI systems, when applied to justice, end up reinforcing existing systemic racism and may even amplify it, as judgements produced by a supposedly objective system lend further weight to the biases already present in the data.

Heaven therefore suggests that until AI can be made genuinely objective, it should not be used in such consequential decision-making, particularly as discussions about racism and bias in the justice system continue in the US and globally.

Visit the MIT Technology Review to read the full argument.

Brought to you by ICLR.