OpenAI has developed a watermarking system to detect ChatGPT-generated text, but internal disagreements and concerns about user backlash have delayed its release.
OpenAI's Year-Old Watermarking Tool Faces Internal Disagreements and Potential Financial Impact
The Wall Street Journal reported that OpenAI has had a system for watermarking ChatGPT-created text, and a tool to detect that watermark, ready for roughly a year. The company remains internally divided over whether to release it: doing so would arguably be the responsible course of action, but it could hurt its bottom line.
OpenAI's watermarking is described as subtly adjusting how the model predicts the most probable words and phrases to follow the preceding ones, establishing a pattern that a detector can later recognize. (That's a simplification, but you can check out Google's more in-depth explanation of Gemini's text watermarking for more detail.)
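That description resembles the published "green list" style of statistical text watermarking: bias token selection toward a secretly chosen subset of the vocabulary at each step, then check whether a suspect text over-uses that subset. Below is a minimal, self-contained Python sketch of that general idea, not OpenAI's actual implementation; the vocabulary, hashing scheme, bias value, and detection statistic are all illustrative assumptions.

```python
# Toy sketch of logit-bias ("green list") text watermarking.
# Not OpenAI's scheme; vocabulary, hash seeding, and bias are illustrative.
import hashlib
import math
import random

VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "under", "mat", "rug"]
GREEN_FRACTION = 0.5   # share of the vocab favored at each step
BIAS = 2.0             # logit boost applied to green tokens

def green_list(prev_token: str) -> set[str]:
    """Deterministically pick a 'green' subset of the vocab, keyed on the previous token."""
    digest = hashlib.sha256(prev_token.encode()).digest()
    rng = random.Random(digest)
    k = int(len(VOCAB) * GREEN_FRACTION)
    return set(rng.sample(VOCAB, k))

def sample_next(prev_token: str, logits: dict[str, float]) -> str:
    """Boost green-token logits, then sample from the resulting distribution."""
    greens = green_list(prev_token)
    boosted = {t: l + (BIAS if t in greens else 0.0) for t, l in logits.items()}
    total = sum(math.exp(v) for v in boosted.values())
    probs = [math.exp(boosted[t]) / total for t in VOCAB]
    return random.choices(VOCAB, weights=probs, k=1)[0]

def detect(tokens: list[str]) -> float:
    """Return a z-score: how far the green-token rate exceeds the unwatermarked baseline."""
    hits = sum(1 for prev, cur in zip(tokens, tokens[1:]) if cur in green_list(prev))
    n = len(tokens) - 1
    expected = n * GREEN_FRACTION
    variance = n * GREEN_FRACTION * (1 - GREEN_FRACTION)
    return (hits - expected) / math.sqrt(variance)

if __name__ == "__main__":
    # Generate a short watermarked sequence from flat (uniform) logits.
    tokens = ["the"]
    for _ in range(200):
        tokens.append(sample_next(tokens[-1], {t: 0.0 for t in VOCAB}))
    print(f"detector z-score: {detect(tokens):.2f}")  # large positive => likely watermarked
```

The detector only works statistically, which is why such a scheme needs "enough" text, and why paraphrasing or translation can weaken the signal.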
The company has found this to be "99.9% effective" at making AI text detectable when there is enough of it, a potential boon for teachers trying to deter students from handing writing assignments over to AI. The system reportedly does not affect the quality of the chatbot's text output while making AI-generated content far easier to verify. In a survey the company commissioned, "people worldwide supported the idea of an AI detection tool by a margin of four to one," the Journal wrote.
OpenAI Weighs User Backlash and Circumvention Risks in Watermarking ChatGPT Texts
However, OpenAI is concerned that watermarking could drive users away. According to The Verge, nearly 30% of surveyed ChatGPT users told the company they would use the software less if watermarking were implemented.
The Journal reported that some employees raised further concerns, including that the watermark could be easily circumvented, for example by bouncing the text between languages with Google Translate, or by having ChatGPT add emojis and then deleting them.
Despite this, employees who back the tool still consider the method effective. In response to persistent user concerns, however, the article suggests OpenAI may try approaches that are "potentially less controversial among users but unproven."

