Police use of facial recognition technology soars in Minnesota
A Hennepin County Sheriff's Office spokesperson said the technology is exclusively used in criminal investigations, not surveillance
By Libor Jany
MINNEAPOLIS — The growing popularity of facial recognition among local law enforcement in Minnesota has renewed public debate about how, when and why the powerful technology is deployed.
Since 2018, police have run nearly 1,000 searches through the Hennepin County Sheriff’s Office’s facial recognition system, with more than half of those searches coming this year alone, according to new county figures.
And while the Twin Cities still lag behind other jurisdictions in using the technology, its increasing use here has caught the attention of civil liberties advocates, who say it threatens privacy and is discriminatory. The county records also reveal that the Minneapolis Police Department was using the technology in 2018, when a spokeswoman denied that was happening.
The county figures offer a glimpse into the scope of police use of the technology, which employs machine-learning algorithms to automatically detect human faces in images from surveillance cameras, social media and other sources, and compare them against a countywide mug shot database.
They show that outside agencies used the sheriff’s facial recognition platform 516 times through the first nine months of 2020, far more than in any previous year. The Sheriff’s Office processed 308 such requests in all of last year, up from 18 in 2015.
The program’s users range from the obvious — St. Paul police, with 83 requests — to the obscure — the state Department of Commerce, which used facial recognition as part of an insurance fraud investigation. Regional drug task forces were also regular clients.
Among federal agencies, the Drug Enforcement Administration has used the system 14 times, according to the figures, the FBI six times, Homeland Security twice, the U.S. Postal Inspection Service once, and the Bureau of Alcohol, Tobacco, Firearms and Explosives 10 times, all in the past two years.
The agency’s biggest client, the Minneapolis Police Department, has for years deflected questions about its use of the technology. In 2018, a spokeswoman told the Star Tribune that the department had no plans to use the technology, in response to questions for a story about a City Council member’s proposal to restrict its use.
The county records show that MPD investigators used the system’s software 237 times between Oct. 1, 2015, and Sept. 28, 2020. The county figures, obtained by the Star Tribune through a data practices request in September, reflect every request from an outside agency to use facial recognition software, but don’t provide details about the underlying cases.
Munira Mohamed, a policy associate with the state branch of the American Civil Liberties Union, said there is a lack of transparency around police use of facial recognition, despite nagging concerns over privacy and false matches.
Study after study, she said, has shown the technology is particularly problematic in identifying people of color, women, and the “young, old, trans, and LGBTQ — basically anyone who’s not a white man.”
Mohamed said the group has been working with Minneapolis Council Member Steve Fletcher and other officials to draft a citywide moratorium on facial recognition, at least until new standards governing its use are set.
“It’s such an opaque dark universe of stuff, you never really know what sort of technology is being used, you never really know how it’s being funded and … how it’s being deployed,” she said, adding that the proposed ban comes after Minneapolis lawmakers passed a resolution pledging to protect citizens’ privacy. She said that the moratorium, an early version of which is expected later this month, may not address third-party facial recognition platforms.
At the same time, she said, efforts to address its use at the state level are still in their infancy.
The office of then-Hennepin County Sheriff Rich Stanek obtained the technology in 2012, an effort first uncovered years later in a lengthy court battle by Tony Webster, a local investigative journalist and privacy advocate.
According to internal documents obtained by Webster, the agency uses software from Cognitec, a German R&D firm. Like most such systems, Cognitec’s software works by analyzing a person’s unique facial measurements — noting, for instance, the space between the eyes or the contour of the lips — and breaking them down into long strings of numbers called “feature vectors” or “faceprints” to create a virtual map, which can be compared against the county’s database of more than 1.4 million mug shots.
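Conceptually, comparing a "faceprint" against a mug shot database reduces to a nearest-neighbor search over those feature vectors. The sketch below is a minimal illustration of that idea, not Cognitec's actual, proprietary algorithm; the tiny vector dimensions, the cosine-similarity metric, and the match threshold are all assumptions chosen for clarity.

```python
import numpy as np

def best_match(probe, gallery, threshold=0.6):
    """Return (index, score) of the gallery faceprint most similar to
    the probe, or (None, score) if nothing clears the threshold.

    Illustrative only: real faceprints have hundreds of dimensions,
    and real systems use tuned, proprietary matchers.
    """
    # Normalize every vector so dot products equal cosine similarity.
    probe = probe / np.linalg.norm(probe)
    gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    scores = gallery @ probe  # one similarity score per mug shot
    idx = int(np.argmax(scores))
    score = float(scores[idx])
    return (idx, score) if score >= threshold else (None, score)

# Toy example with made-up 2-D "faceprints": the probe vector is
# closest to gallery entry 0.
gallery = np.array([[1.0, 0.0], [0.0, 1.0]])
probe = np.array([0.9, 0.1])
print(best_match(probe, gallery))
```

A score below the threshold returns no match at all, which mirrors the point investigators make later in this story: a similarity score is a lead to be corroborated, not an identification in itself.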
In many ways, facial recognition tools are already part of everyday life, transforming the way people check in at airports, unlock smartphones or tag their friends in photos on social media.
As facial recognition becomes more powerful, cities must try to balance its potential to improve public services with the ability to cause harm, says Shobita Parthasarathy, a public policy professor at the University of Michigan. After all, she says, the technology’s reliance on algorithms, not humans, gives it the illusion of objectivity.
“Technologies are the product of the society that builds them, and because our society has biases, our technologies will also have biases, full stop,” said Parthasarathy, who recently co-authored a study on the emergence of facial recognition in schools.
And yet, she adds, there’s no federal statute on the technology, whose use is governed by a patchwork of state and local laws.
Parthasarathy said she worries not only about the potential expansion of government surveillance, but about the long-term psychological toll on people of knowing they’re being watched every time they leave their homes.
“The more you can surveil, the more you can say, ‘Oh, you’re doing something that’s a deviant behavior,’ the more you can say, ‘Oh you’re doing something that’s technically incorrect,’ ” she said.
In all, more than 100 different law enforcement departments and agencies had accessed the sheriff’s program.
Jeremy Zoss, a spokesman for Sheriff Dave Hutchinson, said the agency itself had used the technology 73 times over the past five years, though it was not clear how many of those cases led to arrests. He added that a positive identification through facial recognition alone doesn’t constitute probable cause and would require more legwork before an arrest could be made.
He said a “small number of trained users” from the agency’s Criminal Intelligence Division have access to the system, which costs $22,500 a year to operate.
Zoss said the technology is used exclusively in criminal and death investigations and is not connected to surveillance cameras, and that “there is no ability to operate any ‘choke points,’ nor can it be used for active surveillance.”
He disputed a report by BuzzFeed News from earlier this year that said the Sheriff’s Office ran hundreds of facial recognition searches through Clearview AI, a controversial start-up that has amassed a database of billions of photos scraped from Facebook, YouTube and Google. He said that in June 2019 a crime analyst from another agency who was assigned to the office signed up for a 30-day free trial version of the software, but that it “has not been used by anyone (with the Sheriff’s Office) since this trial period.”
Many in law enforcement have defended the technology as too important a tool to ignore in an increasingly wired world. With the help of facial recognition, even a grainy image captured on a security camera or social-media account can lead investigators to a suspect, as it did in the case of the man who brutally assaulted an elderly man after an argument aboard a Metro Transit bus last winter.
The defendant in that case, Leroy Davis-Miles, was recently sentenced to more than 13 years in prison for his role in the attack.
(c)2020 the Star Tribune (Minneapolis)