Meta to label AI-generated images on FB, Instagram & Threads: Know more
Meta will soon start labelling AI-generated images on Facebook, Instagram and Threads.
Meta already applies an “Imagined with AI” watermark to images created using the Meta AI feature.
Meta is also adding a feature for people to disclose that they're sharing AI-generated video or audio so that the company can add a label to it.
Have you ever scrolled through your social media feed and seen an image that made you stop and wonder whether it was too perfect, or even surreal? In today’s digital age, where visual content dominates our online interactions, differentiating between authentic and artificially generated images has become increasingly challenging. Well, the good news is that Meta has announced it will soon start labelling AI-generated images on Facebook, Instagram and Threads.
“In the coming months, we will label images that users post to Facebook, Instagram and Threads when we can detect industry standard indicators that they are AI-generated,” Meta announced in a blog post.
Meta already applies an “Imagined with AI” watermark to images created with its own Meta AI feature, and it will now also label AI-generated images made with tools from Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock.
Meta explained that while other companies have begun integrating such signals into their image-generating systems, they have not yet done so at the same scale for AI tools that generate audio and video. As a result, Meta cannot detect those signals or label audio and video content from other companies.
To address this gap, Meta is adding a feature that lets people disclose when they are sharing AI-generated video or audio, so the company can add a label to it.
When someone shares a video that looks real or audio that sounds real but was digitally created or altered, they will be required to use this disclosure feature to let Meta know. If they don’t, they may face penalties.
“If we determine that digitally created or altered image, video or audio content creates a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label if appropriate, so people have more information and context,” the company explained.
Meta further stated that it is not yet possible to identify all AI-generated content, since people can strip out the invisible markers. To close that gap, the company is exploring various approaches, including classifiers that can automatically detect AI-generated content even when it carries no invisible markers.
Ayushi Jain
Tech news writer by day, BGMI player by night. Combining my passion for tech and gaming to bring you the latest in both worlds.