Key Takeaways
- Watermarking reportedly makes AI-generated text detectable with 99.9% accuracy
- OpenAI has been working on watermarking tech for about two years
- Implementing watermarking could drive users away from ChatGPT in favor of other models
It’s fair to say that there’s one group of people who aren’t too pleased about the rise of generative AI, and that’s educators. The impressive abilities of AI chatbots mean students can use them to generate entire assignments in a matter of seconds. Detecting that kind of cheating isn’t easy, and proving it reliably is even harder.
However, as reported in the Wall Street Journal, OpenAI has been working on a method of watermarking ChatGPT-generated content so that it can be detected with an accuracy of 99.9%. According to internal documents viewed by the Wall Street Journal, the technology has been ready to go for almost a year and could quickly be deployed in ChatGPT if OpenAI chose to do so. A blog post on OpenAI’s website confirms that the company has been working on text watermarking.
Watermarking would make ChatGPT-generated content detectable to a select few
The tech tweaks ChatGPT’s output in a recognizable way
Detecting AI-generated content is challenging. There are plenty of sites out there that claim to do so, but they suffer from poor accuracy and a significant rate of false positives. OpenAI launched its own AI classifier in January 2023, but by July 2023 the product had been pulled due to a low rate of accuracy.
Watermarking takes a different approach by adding a unique signature to ChatGPT-generated text. If you have the right detection technology, you can look for this signature to determine whether the text has been generated by ChatGPT. The internal documents seen by the Wall Street Journal claim that the watermarking detection is 99.9% accurate.
The technique involves subtly changing how tokens are selected when generating text. When an AI chatbot outputs text, it chooses each token (a word or fragment of a word) based on the probability that it is the best next choice. With watermarked text, the tokens would be selected in a slightly different way that would leave a clear indication that the text was generated by ChatGPT. These changes would not be obvious to a human reader, however.
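To make the idea concrete, here is a toy sketch of one published watermarking scheme (not necessarily OpenAI's, whose details are undisclosed): the previous token seeds a pseudo-random "green" subset of the vocabulary, generation prefers green tokens, and a detector counts how often each token falls in its predecessor's green list. The vocabulary, parameters, and function names below are all hypothetical.

```python
import hashlib
import random

# Hypothetical toy vocabulary; a real system works on model logits
# over tens of thousands of tokens.
VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "under", "mat", "rug"]
GREEN_FRACTION = 0.5  # fraction of the vocabulary favored at each step

def green_list(prev_token: str) -> set:
    """Seed an RNG with a hash of the previous token and pick a
    pseudo-random 'green' subset of the vocabulary."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * GREEN_FRACTION)))

def watermarked_choice(prev_token: str, candidates: list) -> str:
    """Prefer a candidate from the green list when one exists; a real
    implementation would instead add a small bias to green logits."""
    green = green_list(prev_token)
    preferred = [c for c in candidates if c in green]
    return (preferred or candidates)[0]

def detect(tokens: list) -> float:
    """Fraction of tokens that land in their predecessor's green list.
    Watermarked text scores well above the ~0.5 expected by chance."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_list(prev))
    return hits / max(1, len(tokens) - 1)
```

Because only the detector knows how the green lists are derived, readers see ordinary text while the key holder can test for the statistical bias, which is why detection would be available only "to a select few."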
The method is not foolproof; in a blog post on the OpenAI website, the company admits that it can be circumvented by methods such as using a different AI model to paraphrase the text, running it through translation software, or inserting and then deleting special characters.
OpenAI is concerned watermarking could impact specific groups
It may also see ChatGPT lose users to other models
According to insiders who spoke to the Wall Street Journal, OpenAI has been working on the project for around two years, and the watermarking tech has been ready to go for about a year. So why hasn’t OpenAI released it into the wild?
There are a number of reasons why OpenAI appears to be keeping the watermarking technique under wraps for now. The first is that, according to OpenAI’s blog post, “the text watermarking method has the potential to disproportionately impact some groups.” The blog post states that the technology could stigmatize the use of AI for non-native speakers who use ChatGPT to help them generate text. However, there may be a more compelling reason why we don’t have access to this anti-cheating tech for ChatGPT.
In 2023, an OpenAI survey found that nearly 30% of ChatGPT users would use ChatGPT less if it used watermarking and other AI chatbots didn’t. In other words, if ChatGPT introduced tech designed to stop people from cheating, those people would most likely move to a rival AI chatbot that didn’t have any watermarking tech.
It seems that although watermarking would make life a little easier for educators, ultimately it would probably hurt ChatGPT’s bottom line. Google has developed the SynthID technology for watermarking text and video in Google Gemini, but unless every other AI chatbot follows suit, we may not see watermarking for ChatGPT for some time, if at all.