This article is part of the On Tech newsletter. You can sign up here to receive it weekdays.
Hi, everyone! It’s nice to be back after a short break since Friday’s newsletter, a few days that felt like 30 years. Today I want to dig back into the ongoing debate over the use of facial recognition technology by the police.
Civil rights advocates and some researchers are adamant that software that seeks to identify people from image databases should be banned outright, or at least in some uses, because it too often misidentifies people with darker skin and contributes to police bias against Black communities. Proponents and some law enforcement officials insist that the technology is a helpful crime-fighting tool.
James Tate, a member of Detroit’s City Council, had to make a call on whether the Police Department should be allowed to use facial recognition software. He was among a 6-to-3 majority that approved a contract extension for the software in September after a heated debate.
There is not much middle ground to be found between opponents and proponents of the technology. But Tate told me he believed the facial recognition software — with appropriate guardrails, including multiple steps for approval recently imposed by city officials — was an imperfect but potentially effective tool among other methods for law enforcement in Detroit.
“This is a balancing act,” Tate said. “It’s not just a bright line.”
The balancing act that Detroit and other U.S. cities have struggled with is whether and how to use facial recognition technology that many law enforcement officials say is critical to public safety, but that is subject to few accuracy requirements and is prone to misuse.
My colleague Kashmir Hill reported that Detroit police officers wrongly arrested a Michigan man, Robert Julian-Borchak Williams, for shoplifting early this year, based on flawed police work that relied on a faulty facial recognition match.
“It’s terrible what happened to Mr. Williams,” Tate told me.
“What I don’t want to do,” he continued, “is hamper any effort to get justice for people who have lost loved ones” to violent crime. “I’ve lived in Detroit my entire life and seen crime be a major issue my entire life.”
Tate, who is Black, said he had heard from Black constituents who opposed facial recognition software and called his vote a betrayal. But he said he still believed that, with oversight, law enforcement would be better off using facial recognition software than not.
That’s the position of facial recognition proponents: that the technology’s success in helping to solve cases makes up for its flaws, and that appropriate guardrails can make a difference. It’s a tricky argument, because it’s difficult to know whether criminals might have been identified without the technology, whether the restrictions actually work and whether the time and money spent on the software might be better spent on alternatives.
My colleague Kash has also talked about how people tend to believe that computers spit out the “right” answers. The fine print about the limits of facial recognition technology is sometimes overlooked.
Phil Mayor, an attorney at the American Civil Liberties Union of Michigan, which represented Williams, said facial recognition software was hopelessly prone to misuse and wasn’t worth the risk, or the harm it had already caused people like Williams, who was arrested in front of his family.
Tate said the city of Detroit sought to rectify some of those problems with a policy passed last year. The new guidelines limited the Police Department’s use of facial recognition software to more serious crimes, required multiple approvals to use the software and mandated reports to a civilian oversight board on how often facial recognition software was used. (The policy wasn’t in place when the police first charged Williams in August 2019. Mayor said even before the policy was put in effect, the Detroit Police Department made assurances that there were multiple layers of protection from faulty facial recognition matches.)
Tate said he made a mistake by voting in 2017 to approve the Police Department’s initial contract for facial recognition software without such checks in place.
He also said he had learned his lesson. When election officials asked to use cameras to monitor ballot drop boxes before the recent election, he said he asked whether there were policies about who could access the cameras and what happened to the data. When they said no, Tate said he voted against it, but it passed anyway.
Facebook uses data to make the wrong point
This should be a moment for reflection on how internet information machines operate. Instead, Facebook is arguing about data.
On Tuesday, Facebook released cherry-picked data intended to show that the posts appearing most often in people’s news feeds were not from the hyperpartisan political extremes but rather tamer stuff, like mainstream news articles and heartwarming animal posts from a site called The Dodo.
Facebook periodically steps in like this to counter the idea that the most popular material on its site is from the shouty people, particularly right-wing political figures and commentators. It remains true that political partisans are among those that generate the most engagement — comments, shares, likes and other reactions — from users on the app. Facebook is arguing that’s not the most important measure of what is popular.
But what gets people engaged on Facebook matters. The messages that can make Americans thrilled or angry enough to hit the “like” or “angry” icons, or to type “the president has no shame!!!” in the comments? Welp, those are important. They tell us something about how Facebook works, and perhaps how humans work, too.
Facebook has been so invested in getting people deeply engaged on its site that three years ago, the company began prioritizing posts that generated significant interactions. Facebook imagined that we would write a kind comment or have similarly “meaningful interactions” on a friend’s engagement announcement. It turned out that many of our interactions on Facebook were with shouty political commentators.
Facebook executives and data scientists are debating how “popularity” is defined on the site, but the company’s time might be better spent reflecting on what it means that many people see NBC News articles, for instance, yet are more motivated to interact with rants, whether they are falsely claiming voter fraud or accusing the president of faking his coronavirus infection. Does Facebook feel good about this? Should it ditch the reaction buttons or rejigger how it circulates posts to turn down the partisan temperature?
Those would be useful conversations to have. Instead we have duels over data.
Before we go …
- Silence from Q: The Washington Post writes that President Trump’s election defeat is a “crisis of faith” moment for believers in the sprawling, baseless QAnon conspiracy theory, which casts him as a savior. My colleague Kevin Roose also notes that Q, the pseudonymous message board user whose posts have fueled the conspiracy theory, has not posted since Mr. Trump’s election loss.
- Black Friday is nothing compared with Singles’ Day: Wednesday’s edition of the wildly popular annual Chinese online shopping holiday known as Singles’ Day is a moment for delivery couriers to draw attention to their low wages and grueling working conditions, my colleague Vivian Wang reports.
- Being extremely online seems extremely exhausting: My colleague Taylor Lorenz takes us inside the brain of Hasan Piker, the 29-year-old leftist political commentator who got a popularity jolt on Twitch with his slightly chaotic, marathon live streams of election coverage. He logged 80 hours of live election webcasts just this week!
Hugs to this
This little goat likes to have its tummy tickled.
We want to hear from you. Tell us what you think of this newsletter and what else you’d like us to explore. You can reach us at ontech@nytimes.com.
If you don’t already get this newsletter in your inbox, please sign up here.