Twitch has updated its policy to include specific criteria for identifying “harmful misinformation actors,” and anyone who meets these criteria could be banned from the streaming platform.
Social media and other online platforms were once hailed for disseminating breaking news much faster than traditional media. But, as most people learned over the last couple of years, that also means misinformation can spread faster and reach a much wider audience than ever before.
The Amazon-owned streaming giant is the latest platform to introduce new policies that seek to curb the spread of misinformation on its services. “Our goal is to prohibit individuals whose online presence is dedicated to spreading harmful, false information from using Twitch,” the company said in the announcement post. That means not everyone who makes “one-off statements containing misinformation” will be punished.
As mentioned, the policy updates are focused on what Twitch calls “harmful misinformation actors.” The company said it consulted with experts to determine how a Twitch user could be characterized as one.
Twitch will consider users as “harmful misinformation actors” if their channel and other online pages outside Twitch are “dedicated to (1) persistently sharing (2) widely disproven and broadly shared (3) harmful misinformation topics, such as conspiracies that promote violence.” The company said these characteristics were chosen because combining them creates the “highest risk” that could result in real-life dangers. The company also encourages users to report creators who may fit these descriptions to the dedicated email address [email protected].
Twitch’s community guidelines now include a specific section for harmful misinformation actors. The company has also identified types of misinformation content that are likely being persistently peddled by creators.
Twitch noted several times in the announcement that misinformation is “not currently prevalent” on its platform but recognized the harm it could cause. The shortlist includes misinformation targeting protected groups, conspiracy theories about “dangerous treatments,” COVID-19 and vaccine misinformation, content tied to and promoting violence, and content that perpetuates “verifiably false claims” about political processes, such as election fraud allegations. The new rules also cover misleading content about public emergencies, such as natural catastrophes and active shootings.
Photo by Marco Verch from Flickr (CC BY 2.0)





