For those who either fear or welcome the world of Philip K. Dick’s Minority Report, we’re getting there, and it’s time to take stock. We aren’t yet talking about actual clairvoyance of crimes and criminals, or about preventive detention based on algorithms. But the theory that crime happens not randomly but in “patterned ways,” combined with confidence that big data can predict all kinds of social behavior and phenomena, has taken hold in cities looking to spend their federal policing grants on shiny things. This is true even though crime is decreasing overall (and, as we see below, although violent crime periodically spikes back up, predictive policing is least effective against it).

And while there are legal limits on law enforcement’s direct use of some data-appending products, we’re finding that agencies may use data aggregators to get around even the most rigorous civil rights protections.

Not everyone is excited. Here are the most important reasons why:

  1. Policing algorithms reinforce systemic racism 

The simplest iteration of this argument is: most of the data folded into predictive policing comes from police, and much of the rest comes from community members. Racism undeniably exists across both populations, and “AI algorithms are only able to analyze the data we give them . . . if human police officers have a racial bias and unintentionally feed skewed reports to a predictive policing system, it may see a threat where there isn’t one.” In fact, Anna Johnson, writing for VentureBeat about the failure of predictive policing in New Orleans, argues that city’s experience basically proved that biased input creates biased results.

  2. Predictive crime analytics produce huge numbers of false positives

Kaiser Fung, founder of Principal Analytics Prep, has a plainspoken and often bitingly funny blog where last month he devoted two posts to “the absurdity of predictive policing.”

One thing Fung points out is that certain crimes are “statistically rare” (even if they seem to happen a lot). A predictive model has to generate many more red flags (targets to be investigated) than actual instances of the crime occurring in order to be “accurate.”

“Let’s say the model points the finger at 1 percent of the list,” he writes. “That would mean 1,000 potential burglars. Since there should be only 770 burglars, the first thing we know is that at least 230 innocent people would be flagged by this model.” That’s a lot of suspects. How many of them will be pressured into confessing to something they didn’t do, or, at a minimum, have their lives painfully disrupted?
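
To make the arithmetic concrete, here is a minimal sketch of Fung’s point using the figures implied by his quote: a 1 percent flag rate, 770 expected burglars, and a watch list of 100,000 people (the list size is our inference from “1 percent . . . 1,000,” not a number Fung states directly).

```python
# Back-of-the-envelope arithmetic behind the burglary example.
# Assumed figures: a 100,000-person list (inferred, since 1% of it is 1,000)
# and 770 people on it who will actually commit burglary.

list_size = 100_000
flagged = int(0.01 * list_size)   # people the model points the finger at
true_burglars = 770

# Even in the impossible best case where every real burglar is flagged,
# the rest of the flagged people are innocent.
min_false_positives = flagged - true_burglars
best_case_precision = true_burglars / flagged

print(f"Flagged: {flagged}")                                          # 1000
print(f"Innocent people flagged (best case): {min_false_positives}")  # 230
print(f"Best-case precision: {best_case_precision:.0%}")              # 77%
```

And that is the impossible best case: no real model catches every burglar, so the number of innocent people flagged would be higher than 230 and the precision lower.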

  3. Attributing crime prevention to predictive systems is meaningless: you can’t identify things that didn’t happen

This is a particularly devastating observation from Fung’s posts about predictive policing. If you flag an area or individual as “at risk” and then police that area or individual, you may or may not have prevented anything. You can’t prove the prediction was accurate in the first place, and Fung finds it absurd that sales reps of these systems basically say, “Look, it flagged 1,000 people, and subsequently, none of these people committed burglary! Amazing! Genius! Wow!” They can get away with claiming virtually 100% accuracy through this embarrassing rhetorical sleight of hand. Call it statistical or technological illiteracy. It’s also deeply cynical on the part of those promoting the systems.
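
To see how hollow the claim is, here is a toy simulation (ours, not Fung’s) in which people are flagged completely at random, with zero predictive skill, and the vendor’s accounting still reports near-perfect “accuracy.” The population and burglary figures reuse the assumed numbers from the sketch above.

```python
import random

# Toy simulation: a zero-skill model judged by the vendor's own rhetoric.
# Assumed numbers: 100,000 residents, 770 future burglars, 1,000 flags.

random.seed(0)
population = 100_000
true_burglars = 770
flags = 1_000

# Mark who would actually commit burglary, then flag people at random --
# a "model" with no predictive power at all.
would_burgle = set(random.sample(range(population), true_burglars))
flagged = set(random.sample(range(population), flags))

# Vendor accounting: every flagged person who does not go on to burgle
# counts as a crime "prevented," and the counterfactual can never be checked.
claimed_successes = len(flagged - would_burgle)
print(f"Claimed 'accuracy': {claimed_successes / flags:.1%}")  # ~99% by chance alone
```

Roughly 99 percent of the random flags “succeed” under this accounting, simply because most people were never going to commit burglary in the first place.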

  4. Predictive analytics falls apart when trying to predict violent crimes or terrorism

One area where predictive policing does seem to at least . . . predict the risk of crime is property crime. When it comes to literally anything more dreadful than burglary, though, the technology doesn’t have much to say in its favor. Timme Bisgaard Munk of the University of Copenhagen’s school of information science wrote a scathing review in 2017 entitled “100,000 false positives for every real terrorist: Why anti-terror algorithms don’t work,” and the title does justice to the article. In particular, Munk points out that predictive analytics of terrorist attack risk borrows from prediction work around credit card fraud. But terrorism is “categorically less predictable” than credit card fraud. In general, violent crime is the least predictable kind of crime.
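
Munk’s title is a textbook base-rate problem. Here is an illustrative calculation with assumed inputs (a screened population of 10 million, a single real plotter, and a classifier that is optimistically 99 percent sensitive and 99 percent specific); these numbers are ours, not Munk’s, but they show how a ratio on the order of 100,000 false positives per real terrorist falls out of the arithmetic.

```python
# Illustrative base-rate arithmetic (assumed numbers, not Munk's calculation):
# screen 10 million people for one actual plotter with a classifier that is
# optimistically 99% sensitive and 99% specific.

population = 10_000_000
plotters = 1
sensitivity = 0.99   # chance the system flags a real plotter
specificity = 0.99   # chance it correctly ignores an innocent person

true_positives = plotters * sensitivity                        # ~1
false_positives = (population - plotters) * (1 - specificity)  # ~100,000

print(f"False positives: {false_positives:,.0f}")
print(f"False positives per real plotter: {false_positives / true_positives:,.0f}")
```

Because the crime is vanishingly rare, even an unrealistically good classifier buries the one real hit under roughly a hundred thousand innocent people.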

  5. Predictive policing is mostly hype to make a frightened public trust the police

After reviewing many studies and analyses, Munk concluded that European agencies’ choice of predictive policing programs is based more on pacifying the public, particularly a European public frightened of terrorism, than on evidence that the programs work. “The purchase and application of these programs,” Munk wrote in the 2017 article, “is based on a political acceptance of the ideology that algorithms can solve social problems by preventing a possible future.” This is striking because there is no evidence, certainly no scientific evidence, that predictive counter-terrorism delivers on its promises. And in a more general sense, there’s no consensus that any predictive policing technology works.

  6. There’s no such thing as neutral tech

We read a powerful post by Rick Jones, an attorney at Neighborhood Defender Service of Harlem and president of the National Association of Criminal Defense Lawyers. The post is obviously written from the point of view of a public defender, and written to highlight public suspicion of policing technology. But a sound argument is a sound argument. Jones reminds us “that seemingly innocuous or objective technologies are not, and are instead subject to the same biases and disparities that exist throughout the rest of our justice system.” Jones may be leaning on a “garbage in/garbage out” metaphor that doesn’t precisely describe what happens when algorithms and data sets synthesize new knowledge “greater than the sum of its parts.” But the upshot is the same: de-colonizing that data, and “removing” bias from its inputs and practitioners, would need to be proactive at a minimum, and even then may not be adequate.

  7. Guess what data these programs rely on? Data from previously over-policed neighborhoods

Attorney Jones specifically discusses a system called “PredPol,” which uses data on the location, time, and nature of crimes to mark “high-risk” areas for future crime. It calls those areas “hot spots,” a stunning display of unoriginality. And speaking of unoriginal, PredPol literally uses the very data that policing, and specifically over-policing, has generated. It’s basically incestuous data collection that demonstrates the very thing it needs to prove in order to justify more over-policing. It’s a “feedback loop” that “enables police to advance discriminatory practices behind the presumed objectivity of technology.”
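
To make the feedback loop concrete, here is a toy simulation. It is not PredPol’s actual model, and the neighborhoods, patrol shares, and starting numbers are all assumptions: two neighborhoods have identical true crime, but one starts with a slightly larger recorded-crime history, each week the “hot spot” gets the bigger patrol share, and police record only what they are present to see.

```python
# Toy feedback-loop sketch (assumed numbers, not PredPol's algorithm):
# identical true crime in A and B, but A starts with a small head start
# in recorded crime from past over-policing.

TRUE_WEEKLY_CRIMES = 100                 # identical in both neighborhoods
recorded = {"A": 105.0, "B": 100.0}      # assumed historical disparity

for week in range(52):
    # The "predictive" step: whichever place has more recorded crime
    # is flagged as the hot spot and gets 70% of patrols.
    hot_spot = max(recorded, key=recorded.get)
    shares = {n: (0.7 if n == hot_spot else 0.3) for n in recorded}
    for n in recorded:
        # Police record only the offenses they are around to observe,
        # so recorded crime tracks patrol presence, not true crime.
        recorded[n] += TRUE_WEEKLY_CRIMES * shares[n]

print({n: int(total) for n, total in recorded.items()})
```

The initially over-policed neighborhood stays the hot spot every single week, and its recorded-crime total pulls further and further ahead even though the underlying crime never differs. That is the loop Jones is describing, dressed up as objective prediction.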