This article is part of the On Tech newsletter. You can sign up here to receive it weekdays.
January’s riot at the U.S. Capitol showed the damage that can result when millions of people believe an election was stolen despite no evidence of widespread fraud.
The Election Integrity Partnership, a coalition of online information researchers, this week published a comprehensive analysis of the false narrative that the presidential contest was stolen and recommended ways to avoid a repeat.
Internet companies weren’t solely to blame for the fiction of a stolen election, but the report concluded that they were hubs where false narratives were incubated, reinforced and cemented. I’m going to summarize here three of the report’s intriguing suggestions for how companies such as Facebook, YouTube and Twitter can change to help create a healthier climate of information about elections and everything else.
One broad point: It can feel as if the norms and behaviors of people online are immutable and inevitable, but they’re not. Digital life is still relatively new, and what’s good or toxic is the result of deliberate choices by companies and all of us. We can fix what’s broken. And as another threat against the Capitol this week shows, it’s imperative we get this right.
1) A higher bar for people with the most influence and the repeat offenders: Kim Kardashian can change more minds than your dentist. And research about the 2020 election has shown that a relatively small number of prominent organizations and people, including President Donald Trump, played an outsize role in establishing the myth of a rigged vote.
Currently, sites like Facebook and YouTube mostly consider the substance of a post or video, divorced from the messenger, when determining whether it violates their policies. World leaders are given more leeway than the rest of us, and other prominent people sometimes get a pass when they break the companies’ guidelines.
This doesn’t make sense.
If internet companies did nothing else, it would make a big difference if they changed how they treat the influential people who are most responsible for spreading falsehoods or twisted facts — and who tend to do so again and again.
The EIP researchers suggested three changes: create stricter rules for influential people; prioritize faster decisions on prominent accounts that have broken the rules before; and escalate consequences for habitual superspreaders of bogus information.
YouTube has long had such a “three strikes” system for accounts that repeatedly break its rules, and Twitter recently adopted versions of this system for posts that it considers misleading about elections or coronavirus vaccinations.
The hard part, though, is not necessarily making policies. It’s enforcing them when doing so could trigger a backlash.
2) Internet companies should tell us what they’re doing and why: Big websites like Facebook and Twitter have detailed guidelines about what’s not allowed — for example, threatening others with violence or selling drugs.
But internet companies often apply their policies inconsistently and don’t always provide clear reasons when people’s posts are flagged or deleted. The EIP report suggested that online companies do more to inform people about their guidelines and share evidence to support why a post broke the rules.
3) More visibility and accountability for internet companies’ decisions: News organizations have reported on Facebook’s own research identifying ways that its computer recommendations steered some to fringe ideas and made people more polarized. But Facebook and other internet companies mostly keep such analyses a secret.
The EIP researchers suggested that internet companies make public their research into misinformation and their assessments of attempts to counter it. That could improve people’s understanding of how these information systems work.
The report also suggested a change that journalists and researchers have long wanted: ways for outsiders to see posts that have been deleted by the internet companies or labeled false. This would allow accountability for the decisions that internet companies make.
There are no easy fixes to building Americans’ trust in a shared set of facts, particularly when internet sites enable lies to travel farther and faster than the truth. But the EIP recommendations show we do have options and a path forward.
Before we go …
- Amazon goes big(ger) in New York: My colleagues Matthew Haag and Winnie Hu wrote about Amazon opening more warehouses in New York neighborhoods and suburbs to make faster deliveries. A related On Tech newsletter from 2020: Why Amazon needs more package hubs closer to where people live.

- Our homes are always watching: Law enforcement officials have increasingly sought videos from internet-connected doorbell cameras to help solve crimes, but The Washington Post writes that the cameras have sometimes been a risk to them, too. In Florida, a man saw F.B.I. agents coming through his home camera and opened fire, killing two people.

- Square is buying Jay-Z’s streaming music service: Yes, the company that lets the flea market vendor swipe your credit card is going to own a streaming music company. No, it doesn’t make sense. (Square said it’s about finding new ways for musicians to make money.)
Hugs to this
A kitty cat wouldn’t budge from the roof of a train in London for about two and a half hours. Here are way too many silly jokes about the train-surfing cat. (Or maybe JUST ENOUGH SILLY JOKES?)
We want to hear from you. Tell us what you think of this newsletter and what else you’d like us to explore. You can reach us at ontech@nytimes.com.
If you don’t already get this newsletter in your inbox, please sign up here.