OpenAI Says It Can Now Detect Images Spawned by Its Software Most of the Time

WSJ via MSN: “AI is getting better at recognizing its own work. OpenAI on Tuesday is launching a new tool that can detect whether an image was created using the company’s text-to-image generator, DALL-E 3. OpenAI officials said that the tool is highly accurate in detecting DALL-E 3 images, but that small changes to a picture can confuse it, reflecting how artificial-intelligence companies are playing catch-up in the ability to track their own technology. A surge of fake images and other media created using generative AI has created confusion about what is and isn’t real, and fueled discussion about the way images are affecting election campaigns in 2024. Policymakers are concerned that voters are increasingly encountering AI-created images online, and the wide availability of tools like DALL-E 3 makes it possible to create such content even faster. Other AI startups and tech companies are also building tools to help…”