New OpenAI Tool Can Detect Dall-E 3 AI Images With 98% Accuracy

The rapid proliferation of generative artificial intelligence is making it hard to know what’s real on the internet. More than any other company, ChatGPT maker OpenAI is driving this change. It has promised tools to identify AI content, and it’s finally delivering: developers can now apply to help test its new image detection classifier, which OpenAI says identifies up to 98% of Dall-E 3 fakes. There are still plenty of ways to sneak an image past the detector, however.
Currently, your best bet for identifying an AI image from OpenAI’s Dall-E 3 generator is to check the metadata. The company says it started adding C2PA (Coalition for Content Provenance and Authenticity) metadata to all Dall-E images earlier this year, marking them as AI content. The Sora video generator will also embed this metadata when it becomes available to the public. The C2PA metadata survives some, but not all, image manipulation and editing techniques. OpenAI says it has also joined the C2PA steering committee to improve the standard.
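To see what that check looks like in practice, here is a minimal Python sketch. It assumes the Content Authenticity Initiative’s open-source c2patool CLI is installed for reading the full manifest, and falls back to a crude scan for the “c2pa” JUMBF box label in the raw bytes; it is an illustration, not a robust verifier.

```python
import subprocess
import sys


def has_c2pa_marker(path: str) -> bool:
    """Crude heuristic: scan the raw bytes for the 'c2pa' JUMBF box label.

    C2PA manifests are embedded in JPEGs as JUMBF boxes labeled 'c2pa'.
    A byte scan is not a real parser, but it is enough to flag files
    that appear to carry a manifest.
    """
    with open(path, "rb") as f:
        return b"c2pa" in f.read()


def inspect(path: str) -> None:
    """Prefer the CAI's c2patool CLI, which prints the full manifest
    (signer, claims, edit history); otherwise fall back to the byte scan."""
    try:
        result = subprocess.run(
            ["c2patool", path], capture_output=True, text=True, check=True
        )
        print(result.stdout)
    except (FileNotFoundError, subprocess.CalledProcessError):
        print("c2patool unavailable or no manifest found.")
        print("Raw 'c2pa' marker present:", has_c2pa_marker(path))


if __name__ == "__main__":
    inspect(sys.argv[1])
```

Note that a check like this finds nothing if the metadata has been stripped, which is exactly the weakness described below: a screenshot or a re-encode through a metadata-scrubbing pipeline leaves no manifest behind.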
Metadata is good, but it’s far from a silver bullet. Soon, you might be able to plug images into OpenAI’s image classifier to see whether they are a product of AI. In a new blog post, OpenAI says its internal testing shows the classifier can identify up to 98% of AI content with a false positive rate of just 0.5%. The classifier can allegedly maintain high detection rates even when images are edited, for example by cropping, changing saturation, or compressing the data.
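For concreteness, those two figures are just the true positive and false positive rates measured over a labeled test set. Here is a short sketch of that bookkeeping, with a hypothetical detect() callable standing in for OpenAI’s classifier, which has not been publicly released:

```python
def evaluate_detector(detect, samples):
    """Measure detection rate (TPR) and false positive rate (FPR).

    `detect` is any callable returning True when it judges an image to be
    AI-generated; `samples` is an iterable of (image, is_ai) pairs.
    """
    tp = fp = n_ai = n_real = 0
    for image, is_ai in samples:
        flagged = detect(image)
        if is_ai:
            n_ai += 1
            tp += flagged   # true positive: AI image correctly flagged
        else:
            n_real += 1
            fp += flagged   # false positive: real image wrongly flagged
    tpr = tp / n_ai     # 0.98  -> "identifies up to 98% of AI images"
    fpr = fp / n_real   # 0.005 -> "flags only 0.5% of real photos"
    return tpr, fpr
```

The trade-off between the two rates is set by the classifier’s decision threshold, so OpenAI’s reported numbers describe one operating point, not the detector’s behavior across the board.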
Those numbers sound great, but in practice it could still be too easy to evade detection. More significant changes to an image, like adjusting the hue or adding noise, can derail the detector. OpenAI only mentioned the biggest problem in passing: the detector is only good at identifying AI images from Dall-E. Feed it an image from Midjourney or Stable Diffusion, and the detection rate drops to 5-10%.
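To illustrate the gap between the two kinds of edits, here is a rough Pillow/NumPy sketch of the manipulations the article lists, both those the classifier reportedly tolerates (cropping, saturation, compression) and those that can reportedly defeat it (hue shifts, noise). These are stand-in transforms for illustration, not OpenAI’s actual robustness test suite.

```python
import io

import numpy as np
from PIL import Image, ImageEnhance


def perturb(path: str) -> dict:
    """Produce edited variants of an image, mirroring the edits the
    article mentions. Illustrative only."""
    img = Image.open(path).convert("RGB")
    w, h = img.size
    variants = {}

    # Edits the classifier reportedly tolerates:
    variants["cropped"] = img.crop((w // 10, h // 10, w, h))
    variants["desaturated"] = ImageEnhance.Color(img).enhance(0.5)
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=30)  # heavy JPEG compression
    buf.seek(0)
    variants["compressed"] = Image.open(buf)

    # Edits that can reportedly derail it:
    hue, sat, val = img.convert("HSV").split()
    hue = hue.point(lambda p: (p + 64) % 256)  # rotate hue ~90 degrees
    variants["hue_shifted"] = Image.merge("HSV", (hue, sat, val)).convert("RGB")

    arr = np.asarray(img, dtype=np.int16)
    noise = np.random.randint(-25, 26, arr.shape)  # additive uniform noise
    variants["noisy"] = Image.fromarray(
        np.clip(arr + noise, 0, 255).astype("uint8")
    )
    return variants
```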
That Dall-E-only limitation raises many questions. Would someone who wanted to create deceptive images even use an “official” tool like Dall-E? Publicly available AI generators have guardrails in place to limit malicious use. Someone with nefarious intent would more likely modify an existing AI model to ignore those guardrails. Could that change the output enough that detectors like this one fail to flag the fakes?
Unfortunately, no one knows how pervasive or easily detectable AI fakes will be. The models are constantly improving, and OpenAI has only started working on its content classifier. It is inviting researchers and journalism nonprofits to join the testing program to help it assess the tool’s effectiveness. It’s good to see OpenAI try to solve this problem, but it might have been nice to have working AI detectors before everyone could conjure up their wildest dreams with a text prompt.