University researchers are collaborating with police departments to create AI algorithms designed to minimize bias and improve operational efficiency.

When Yao Xie began her Ph.D. journey at the Georgia Institute of Technology, she envisioned focusing on machine learning and algorithms to tackle real-world challenges. Seven years later, her research took an unexpected turn: a partnership with the Atlanta Police Department.

“After discussions with them, I was surprised by how I could help address their challenges,” said Xie, now a professor in industrial engineering.

Xie utilized artificial intelligence to assist the department in optimizing resource allocation and establishing a fair policing framework devoid of racial and economic biases.

She is part of a growing cohort of academics collaborating with local law enforcement to explore the potential of AI in policing while addressing the associated challenges.

These initiatives have taken various forms. Researchers at the University of Texas at Dallas partnered with the FBI and the National Institute of Standards and Technology to evaluate police officers’ facial recognition capabilities against AI algorithms. At Carnegie Mellon University, researchers developed AI tools to analyze images where a suspect’s face is obscured by a mask or other objects.

Dartmouth College researchers created algorithms to interpret low-quality images, such as blurry license plate numbers. Meanwhile, researchers from the Illinois Institute of Technology collaborated with the Chicago Police Department to develop algorithms for assessing potentially high-risk individuals.

These projects are part of a multi-year, $3.1 million initiative from the National Institute of Justice aimed at fostering partnerships between educational institutions and law enforcement, focusing on areas such as public safety video analysis, DNA analysis, gunshot detection, and crime forecasting. Recently, the spotlight has shifted towards AI applications.

“It’s definitely a trend; there’s a genuine need, but challenges remain, particularly in ensuring trust and reliability in AI algorithm outcomes,” Xie noted. “[Our project] affects everyone in Atlanta: How can we guarantee fair treatment for citizens and eliminate hidden disparities in the system’s design?”

Addressing Ethical Concerns

The Atlanta Police Department first approached Xie in 2017, seeking academic assistance in developing algorithms applicable to police data. Her seven-year collaboration, which concluded in June, resulted in three significant projects:

  1. Analyzing police reports to identify “crime linkages,” where the same offender is involved in multiple incidents, creating algorithms to sift through over 10 million cases to enhance efficiency.
  2. Redesigning police districts, which often have uneven officer distribution. An algorithm was created to optimize zoning divisions for improved response times and to prevent over-policing in certain areas.
  3. Assessing “neighborhood integrity” to ensure equitable service levels for all residents while incorporating fairness considerations into the police response system.

“I have friends who said, ‘I could never work with the police,’ due to their mistrust, which is a challenge that AI might help address,” she explained. “We can pinpoint the sources of mistrust. If officers are not being fair, it could be intentional or unintentional. Analyzing the data could reveal gaps and facilitate improvements.”

At Florida Polytechnic University, Vice President and CFO Allen Bottorff is also navigating the complexities of collaborating with law enforcement while prioritizing bias awareness. The university announced in June its partnership with the Lakeland Sheriff’s Department to establish a unit focused on AI-assisted cybercrime. A select group of students will work within the sheriff’s office to understand how criminals exploit AI for cybercrimes, identity theft, and extortion.

The university is also developing AI algorithms for various applications, including detecting deepfakes that could deceive victims into believing they are interacting with a loved one rather than a criminal. Florida Polytechnic is considering creating an “AI toolkit,” according to Bottorff, which would compile and prioritize data for officers, ensuring they have all necessary actionable information before leaving their patrol cars.

Bottorff believes this partnership aligns perfectly with his institution’s mission. “We approach higher education and STEM differently; we aim for applied learning, helping students understand how to operate in real-world settings rather than merely learning theoretical concepts,” he stated. “It’s about engaging in practical situations in less controlled environments.”

While universities strive to collaborate with police to reduce biases in policing, they must also be vigilant about the biases inherent in AI itself, ensuring that it does not lead to over-policing in specific neighborhoods or unfair targeting of certain demographics. Experts have highlighted that AI often relies on limited online information, which may disproportionately affect marginalized communities.

Bottorff suggested that one potential solution is to develop open-source data devoid of inherent biases, a research initiative Florida Polytechnic is exploring.

“The key question would be, ‘Does this data contain bias or not?’ but more importantly, ‘If it’s 35 percent biased, I need to reconsider,’” he explained.

Duncan Purves, an associate professor of philosophy at the University of Florida, has spent the past three years, supported by a grant from the National Science Foundation, investigating ethical predictive policing, a field he says has "numerous issues," including the persistent problem of racial bias.

This initiative culminated in the establishment of guidelines for ethical predictive policing. Purves emphasized that institutions collaborating with law enforcement—especially in the AI domain, which has faced scrutiny for bias—must prioritize ethics alongside the development and application of new technologies.

“Police departments are eager to engage in practices that won’t provoke public backlash, but many lack the knowledge to do so,” he noted. “They want to demonstrate their commitment to ethics, but they’re not ethicists—they’re law enforcement. This presents an opportunity for academics to exert influence over technology implementation, and I’ve found police to be receptive to this approach.”