As artificial intelligence platforms become more deeply embedded in daily life, concerns about their role in enabling dangerous behavior continue to grow. A New Zealand startup is now developing a groundbreaking tool that could redirect users displaying violent extremist tendencies toward professional deradicalization support — marking a significant step forward in AI safety innovation.
ThroughLine, a crisis intervention company already contracted by OpenAI, Anthropic, and Google, currently routes at-risk users to mental health helplines when signs of self-harm, domestic violence, or eating disorders are detected. Founder Elliot Taylor, a former youth worker, is now exploring how that same infrastructure can be expanded to address online radicalization before it escalates into real-world violence.
The proposed system would use a hybrid model combining a specialized deradicalization chatbot with referrals to vetted, human-run mental health services. Unlike standard AI platforms, the tool would be trained using guidance from subject-matter experts rather than generic large language model datasets. ThroughLine is currently in active discussions with The Christchurch Call, an international initiative launched after New Zealand's 2019 terrorist attack, to develop and validate the technology.
This initiative comes as AI companies face mounting legal pressure over their failure to prevent platform-enabled violence. Canada's government threatened OpenAI with regulatory intervention after it emerged that a school shooter had been quietly banned from the platform without law enforcement being notified.
Research has consistently shown that aggressive content moderation can drive extremist sympathizers to less regulated platforms like Telegram, making early, compassionate intervention all the more critical. Taylor argues that cutting off vulnerable users mid-conversation leaves them without support and may ultimately make them more dangerous.
With over 1,600 helplines across 180 countries in its network, ThroughLine is uniquely positioned to bridge the gap between AI detection and real-world crisis response — potentially reshaping how tech platforms handle radicalization online.

