AI is advancing rapidly, and something new comes along almost every day. But there's a flip side to this too: AI-generated content is shaping up to be one of the next big problems we have to deal with, since it can easily be used to spread misinformation and fake news and provoke people. Keeping that in mind, OpenAI, one of the leading names in the AI space, has introduced tools that identify content produced by its DALL-E AI image generator.
To help curb the misuse of AI-generated content, OpenAI has not only developed a tool that identifies images produced by DALL-E but has also rolled out enhanced watermarking methods to flag AI-generated content. The company says this watermarking is tamper-resistant.
The new classifier is built specifically to identify content created by DALL-E. The company claims the tool offers strong accuracy and reliability: it correctly identifies about 98% of DALL-E-generated images, and it keeps working even when images have been cropped, compressed, or had their saturation altered.
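OpenAI hasn't released a public API for the classifier, so any code here is purely illustrative. Still, the robustness claim is easy to picture as a test harness: apply the kinds of edits the tool is supposed to survive and check whether detection still holds. In the Python sketch below, `is_dalle_generated` and `sample.png` are hypothetical placeholders; the image edits themselves use the real Pillow library.

```python
import io

from PIL import Image, ImageEnhance


def perturb(img: Image.Image) -> dict[str, Image.Image]:
    """Apply the kinds of edits the classifier is claimed to survive."""
    w, h = img.size
    variants = {}
    # Crop to the central 70% of the image.
    variants["cropped"] = img.crop(
        (int(w * 0.15), int(h * 0.15), int(w * 0.85), int(h * 0.85))
    )
    # Re-encode as a heavily compressed JPEG.
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=30)
    buf.seek(0)
    variants["compressed"] = Image.open(buf)
    # Boost saturation by 50%.
    variants["saturated"] = ImageEnhance.Color(img.convert("RGB")).enhance(1.5)
    return variants


def is_dalle_generated(img: Image.Image) -> bool:
    # Hypothetical stand-in for OpenAI's classifier, which is only
    # available to selected testers; no public API is assumed here.
    raise NotImplementedError("plug in classifier access here")


if __name__ == "__main__":
    original = Image.open("sample.png")  # hypothetical test image
    for name, variant in perturb(original).items():
        try:
            print(name, is_dalle_generated(variant))
        except NotImplementedError:
            print(name, "classifier access not available")
```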
One downside is that the tool only works well on DALL-E content. For images generated by other platforms, such as Midjourney, it flags only about 5-10%.
These steps underline OpenAI's commitment to the cause: the company has joined the Coalition for Content Provenance and Authenticity (C2PA) alongside tech giants such as Microsoft and Adobe.
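C2PA works by embedding signed provenance metadata ("Content Credentials") directly in the image file. As a rough illustration of what that looks like at the byte level, here's a minimal Python sketch that checks whether a JPEG carries C2PA data by scanning its APP11 segments for the manifest marker. `dalle_image.jpg` is a hypothetical file name, and this is only a presence check: a real verifier, such as the open-source c2patool, parses and cryptographically validates the manifest rather than just spotting the marker bytes.

```python
import struct


def has_c2pa_manifest(path: str) -> bool:
    """Heuristic check: scan a JPEG's APP11 segments for C2PA JUMBF data.

    C2PA metadata is carried in JPEG APP11 marker segments as JUMBF
    boxes labeled "c2pa"; this sketch only looks for those telltale
    bytes rather than parsing or verifying the full manifest.
    """
    with open(path, "rb") as f:
        data = f.read()
    if not data.startswith(b"\xff\xd8"):
        return False  # not a JPEG
    i = 2
    while i + 4 <= len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        if marker == 0xFF:  # fill byte, skip
            i += 1
            continue
        if marker in (0xD9, 0xDA):  # end of image, or start of scan data
            break
        (length,) = struct.unpack(">H", data[i + 2 : i + 4])
        segment = data[i + 4 : i + 2 + length]
        if marker == 0xEB and b"c2pa" in segment:  # APP11 with C2PA label
            return True
        i += 2 + length
    return False


if __name__ == "__main__":
    print(has_c2pa_manifest("dalle_image.jpg"))  # hypothetical file
```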
That’s not all. OpenAI has also added watermarking to Voice Engine, its text-to-speech platform currently in limited preview. Both tools are still works in progress and are being refined.
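OpenAI hasn't said how the Voice Engine watermark is implemented. For a feel of how audio watermarking can work in principle, here's a minimal sketch of one classic textbook technique, spread-spectrum watermarking: a low-amplitude pseudorandom signal derived from a secret key is mixed into the audio, and detection correlates the audio against the same keyed signal. Every name and parameter here (key, strength, thresholds) is illustrative and not OpenAI's method.

```python
import numpy as np

SAMPLE_RATE = 16_000  # Hz, illustrative


def keyed_chips(key: int, n: int) -> np.ndarray:
    """Derive a +/-1 pseudorandom chip sequence from a secret key."""
    rng = np.random.default_rng(key)
    return rng.choice([-1.0, 1.0], size=n)


def embed(audio: np.ndarray, key: int, strength: float = 0.005) -> np.ndarray:
    """Mix a low-amplitude keyed sequence into the signal."""
    return audio + strength * keyed_chips(key, len(audio))


def detect(audio: np.ndarray, key: int, z_threshold: float = 5.0) -> bool:
    """Correlate against the keyed sequence; a high z-score means marked."""
    chips = keyed_chips(key, len(audio))
    z = (chips @ audio) / (np.std(audio) * np.sqrt(len(audio)))
    return z > z_threshold


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in for 3 seconds of synthesized speech (random noise here).
    speech = 0.1 * rng.standard_normal(3 * SAMPLE_RATE)
    marked = embed(speech, key=1234)
    print("marked audio detected:", detect(marked, key=1234))  # True
    print("clean audio detected: ", detect(speech, key=1234))  # False
```

Without the key, the watermark is statistically indistinguishable from faint noise, which is what makes such schemes hard to strip; production systems add further tricks to survive compression and re-recording.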