OpenAI has developed a tool designed to automatically watermark AI-generated content, but company leadership remains divided over whether to make it publicly available.
As reported by The Wall Street Journal, the tool has been in development for two years and can label text produced by OpenAI's large language models (LLMs).
Sources familiar with the situation indicate that the tool works by subtly altering how tokens are selected during generation, akin to Google's SynthID for Text. These small biases leave a detectable statistical pattern in the output, known as a watermark.
Internal documents reveal that OpenAI's tool can detect its watermarks with 99.9% accuracy, provided ChatGPT produces a sufficient amount of new text.
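The mechanism described above can be illustrated with a toy sketch. Note that OpenAI has not published its method; the code below assumes a "green-list" scheme in the style of published academic watermarking work, where a hash of the previous token partitions the vocabulary and the sampler is biased toward the "green" half. Detection then checks whether green tokens appear far more often than chance would predict. All names, constants, and the toy vocabulary here are hypothetical.

```python
import hashlib
import math
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # toy stand-in for an LLM vocabulary
GREEN_FRACTION = 0.5   # fraction of the vocabulary marked "green" at each step
BIAS = 4.0             # logit boost applied to green tokens (assumed value)

def green_list(prev_token: str) -> set:
    """Deterministically partition the vocabulary using a hash of the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    k = int(len(VOCAB) * GREEN_FRACTION)
    return set(rng.sample(VOCAB, k))

def generate(n_tokens: int, seed: int = 0) -> list:
    """Sample tokens with a bias toward the green list (stand-in for an LLM sampler)."""
    rng = random.Random(seed)
    out = ["<s>"]
    for _ in range(n_tokens):
        greens = green_list(out[-1])
        # Uniform base logits; the watermark adds BIAS to every green token.
        weights = [math.exp(BIAS) if t in greens else 1.0 for t in VOCAB]
        out.append(rng.choices(VOCAB, weights=weights, k=1)[0])
    return out[1:]

def detect(tokens: list) -> float:
    """z-score of the green-token count; large positive values indicate a watermark."""
    hits = sum(1 for prev, tok in zip(["<s>"] + tokens[:-1], tokens)
               if tok in green_list(prev))
    n = len(tokens)
    mean = n * GREEN_FRACTION
    var = n * GREEN_FRACTION * (1 - GREEN_FRACTION)
    return (hits - mean) / math.sqrt(var)
```

This also shows why "a sufficient amount of new text" matters: the detector's z-score grows with the square root of the number of tokens, so short snippets cannot be flagged with high confidence even when every token carries the bias.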
Although the tool is reportedly ready for release, it has been stuck in internal debate for the past two years.
One significant concern is that the tool could deter people from using ChatGPT.
In an April 2023 survey the company conducted among loyal ChatGPT users, nearly one-third of respondents said the watermarking technology would put them off, largely because it could expose cheating and plagiarism.