Twitch has updated its policy with specific criteria for identifying “harmful misinformation actors,” and anyone who meets them could be banned from the streaming platform.
Social media and other online platforms were once hailed for disseminating breaking news much faster than traditional media. But, as most people learned over the last couple of years, that speed also lets misinformation spread faster and reach a much wider audience than ever before.
The Amazon-owned streaming giant is the latest platform to introduce new policies that seek to curb the spread of misinformation on its services. “Our goal is to prohibit individuals whose online presence is dedicated to spreading harmful, false information from using Twitch,” the company said in the announcement post. That means not everyone who makes “one-off statements containing misinformation” will be punished.
As mentioned, the policy updates are focused on what Twitch calls “harmful misinformation actors.” The company said it consulted with experts to determine how a Twitch user could be characterized as one.
Twitch will consider users as “harmful misinformation actors” if their channel and other online pages outside Twitch are “dedicated to (1) persistently sharing (2) widely disproven and broadly shared (3) harmful misinformation topics, such as conspiracies that promote violence.” The company said these characteristics were chosen because combining them creates the “highest risk” that could result in real-life dangers. The company also encourages users to report creators who may fit these descriptions to the dedicated email address [email protected].
Twitch’s community guidelines now include a specific section for harmful misinformation actors. The company has also identified types of misinformation content that are likely being persistently peddled by creators.
Twitch noted several times in the announcement that misinformation is “not currently prevalent” on its platform but recognized the harm it could cause. The shortlist includes misinformation targeting protected groups, conspiracy theories about “dangerous treatments,” COVID-19 and vaccine misinformation, content that is tied to and promotes violence, and content that perpetuates “verifiably false claims” about political processes, such as election fraud allegations. The new rules also cover misleading content about public emergencies, such as natural catastrophes and active shootings.
Photo by Marco Verch from Flickr (CC BY 2.0)