Innovation

OpenAI’s New Tool Detects AI Content with 99.9% Accuracy, But Won’t Be Released

12 August 2024 | Zaker Adham

Summary

OpenAI has developed a highly accurate tool capable of detecting AI-generated content with 99.9% precision. Despite its effectiveness, the company has decided not to release it to the public.


The California-based company has hinted at this technology for some time, suggesting it was still in development. However, insiders revealed to the Wall Street Journal (WSJ) that the tool has been ready for months. The hesitation to release it stems from concerns that it might reduce the appeal of OpenAI's products.


As AI adoption increases, detecting AI-generated content has become a significant challenge. Legislators have proposed laws requiring AI developers to include watermarks and other identifiers in AI content, but these measures have not been widely adopted.


This issue is particularly pressing in fields like education, where a recent study found that 60% of middle- and high-school students use AI for schoolwork.


According to insiders, OpenAI solved this detection challenge over a year ago but has chosen not to make the tool publicly available. “It’s just a matter of pressing a button,” said one source.


OpenAI argues that delaying the release is necessary to protect users, citing “important risks.” A company spokesperson told WSJ, “We believe the deliberate approach we’ve taken is necessary given the complexities involved and its likely impact on the broader ecosystem beyond OpenAI.”


The company also fears that if the technology is widely available, bad actors could find ways to circumvent it. Additionally, a survey last year found that 70% of ChatGPT users opposed the tool, and roughly one-third said they would switch to a competitor if it were implemented.


Senior executives have so far held the tool back, saying it is not ready for public launch. In a recent meeting, top executives concluded that the tool, which works by watermarking model outputs, was too controversial and that alternative solutions should be explored.
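OpenAI has not published the details of its watermarking method, so the following is only a minimal toy sketch of how output watermarking and detection can work in principle, loosely modeled on published "green list" schemes for language models. All names here (`VOCAB`, `green_set`, `detect`, `generate_watermarked`) are illustrative, not OpenAI's API:

```python
import hashlib
import random

VOCAB = list(range(1000))  # toy vocabulary of token ids
GREEN_FRACTION = 0.5       # fraction of the vocab marked "green" at each step

def green_set(prev_token: int) -> set:
    # Derive a pseudorandom "green list" from the previous token, so the
    # generator and the detector agree without sharing any hidden state.
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    k = int(len(VOCAB) * GREEN_FRACTION)
    return set(rng.sample(VOCAB, k))

def generate_watermarked(length: int, start: int = 0) -> list:
    # Toy "generator": at every step, only pick tokens from the green list.
    # A real model would merely bias its sampling toward green tokens.
    rng = random.Random(42)
    tokens = [start]
    for _ in range(length):
        tokens.append(rng.choice(sorted(green_set(tokens[-1]))))
    return tokens

def detect(tokens: list) -> float:
    # Count how many tokens fall in their step's green list and return a
    # z-score against the null hypothesis of unwatermarked (random) text.
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_set(prev))
    n = len(tokens) - 1
    expected = n * GREEN_FRACTION
    variance = n * GREEN_FRACTION * (1 - GREEN_FRACTION)
    return (hits - expected) / (variance ** 0.5)
```

In a scheme like this, watermarked text scores a z-value far above what chance allows, while ordinary text hovers near zero; it also hints at why the detector is controversial, since anyone holding the secret derivation rule can flag a user's text.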


OpenAI’s competitors, including Google, have faced similar challenges. Google has developed a comparable tool, SynthID, for watermarking output from its Gemini LLM, but it has not been widely released to the public either.


For AI to function effectively within legal frameworks and overcome growing challenges, it needs to integrate enterprise blockchain systems that ensure data quality and ownership. This approach can keep data secure while guaranteeing its immutability. For more insights on this emerging technology, check out CoinGeek’s coverage on why enterprise blockchain will be the backbone of AI.