Twitter announced this week that it will stop enforcing its previous policy that targeted COVID-19 misinformation on its platform.
“Eagle-eyed” Twitter users noticed a quiet change to the platform’s policy, CBS News reported. The update to Twitter’s online rules marks a big shift from its now-former stance against COVID-19 misinformation:
Effective November 23, 2022, Twitter is no longer enforcing the COVID-19 misleading information policy.
The previous policy dated back to March of 2020 when Twitter told its users it wanted to “make it easy to find credible information” and “limit the spread of potentially harmful and misleading content.” To that end, Twitter rolled out special warnings and labels it applied to tweets that went against “authoritative sources of global and local public health information.”
Twitter users who appeared to get their medical degree from social media didn’t like the effort. It made it harder for them to spread their theories about things ranging from essential oils to animal dewormers to disinfectants being “more effective” and “safer” than the COVID-19 vaccine.
But public health experts, naturally, applauded the effort. U.S. Surgeon General Vivek H. Murthy called such misinformation “a serious threat to public health.”
That’s common sense. What’s curious to me is why any reasonable person would want COVID-19 misinformation circulating in the first place.
Twitter told us years ago what it watched for.
In a blog post dating back to March 2020, Twitter listed the types of content it was on the lookout for, content that would prompt it to require users to remove tweets. The list included the following:
- Statements intended to influence others to “violate recommended COVID-19 guidance from global or local health authorities to decrease someone’s likelihood of exposure to COVID-19”
- Misleading claims that unharmful but ineffective methods are cures or absolute treatments for COVID-19
- Descriptions of harmful treatments or preventative measures known to be ineffective or that are shared out of context to mislead
- Denial of established scientific facts about transmission during the incubation period or transmission guidance from global and local health authorities
- False or misleading information that would allow the reader to diagnose themselves
- Unverified claims that have the potential to incite people to action, could lead to the destruction or damage of critical infrastructure or could lead to widespread panic/social unrest
- Tweets offering the sale or facilitation of non-prescription treatments or cures for COVID-19
- Specific and unverified claims made by people impersonating a government or health official or organization, including parody accounts
- Claims that specific groups or nationalities are never susceptible to COVID-19
Those are things that any reasonable person should want flagged at the very least.
If you can’t see that, you might be part of the very COVID-19 misinformation problem they hoped to fight.
So why would Twitter stop enforcement?
I’m not saying Twitter was always accurate in its effort to prevent COVID-19 misinformation. From what I can tell, no one is saying that. The Washington Post reported that Twitter struggled to fact-check everything. Even worse, it “recently began labeling some factual information” as misinformation. It also banned “scientists and researchers who attempted to warn the public of the long-term harm of covid on the body.”
But that’s no reason to suspend a policy. If anything, that’s all the more reason to work harder. To do more. To be better.
There’s a reason that “free speech” doesn’t grant you the right to shout “fire” in a crowded theater.
Anyone who celebrates the suspension of efforts to fight false and unverified information on a platform that wants to be a “public square” should face a lot of scrutiny themselves. There’s something wrong there.