Sanchez argued that predictive policing systems are built on “dirty data” compiled over decades of police misconduct, and that no current technology can resolve that problem. Her testimony drew on a detailed study published by the AI Now Institute last year showing that predictive policing systems are inherently biased. During the hearing, Sanchez described predictive policing as little more than a method for automating corruption.

AI Now warned US regulators last year that predictive policing was a problem, and Sanchez’s message to the international audience hasn’t changed much.

Why are these systems so dangerous? Simply put, a long history of corrupt police practices has produced a pool of untrustworthy data. For example, while researching the Chicago Police Department (CPD) – an agency that settles an average of one misconduct suit every other day – AI Now identified a pipeline between police corruption and biased AI predictions.

AI Now’s warnings have so far been largely ignored. A few US jurisdictions have put a stop to predictive policing, and there are mutterings from the UK and Europe about “pausing” its use in some areas. Yet the use of both predictive policing and facial recognition by law enforcement continues to rise globally.

Read the full transcript of Andrea Nill Sanchez’s remarks here.