Meta Rolls Out Labels for AI-Generated Content

Meta is proactively identifying and labeling AI-generated images on its platforms. This initiative comes in response to concerns about potential misuse of AI for spreading disinformation in the upcoming 2024 elections.

Meta Enhances AI Transparency and Security with Labels

In the coming months, Meta will begin adding “Created with AI” labels to images generated with tools from Google, Microsoft, OpenAI, Adobe, Midjourney, and Shutterstock, according to a blog post published Tuesday by Nick Clegg, Meta’s president of global affairs.

Meta already applies a similar label, “Imagined with AI,” to photorealistic images created with its own generative AI tools.

Clegg said that Meta is working with AI tool developers to implement common technical standards. This will allow Meta to apply labels across Facebook, Instagram, and Threads in all languages supported by each app.
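Clegg’s post does not spell out the markers themselves, but provenance standards of this kind generally work by embedding a machine-readable signal in the image file’s metadata that a platform can check on upload. The sketch below is only a toy illustration of that idea, assuming files that carry the IPTC “trainedAlgorithmicMedia” digital-source-type value or a C2PA (Content Credentials) manifest; it is not Meta’s actual detection pipeline, which also relies on invisible watermarks and other signals.

```python
# Toy heuristic: scan an image file's raw bytes for metadata markers that
# several AI image generators embed in their outputs. Illustrative only --
# a real provenance check parses the metadata structures properly and
# combines them with watermark detection.

def looks_ai_generated(path: str) -> bool:
    """Return True if the file contains a known AI-provenance marker."""
    markers = (
        b"trainedAlgorithmicMedia",  # IPTC digital source type for AI-generated media
        b"c2pa",                     # C2PA / Content Credentials manifest label
    )
    with open(path, "rb") as fh:
        data = fh.read()
    return any(marker in data for marker in markers)

if __name__ == "__main__":
    import sys
    for image_path in sys.argv[1:]:
        verdict = "AI-provenance marker found" if looks_ai_generated(image_path) else "no marker found"
        print(f"{image_path}: {verdict}")
```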

Online information experts, lawmakers, and tech executives have raised concerns about the potential misuse of AI tools to spread false information.

Meta’s move addresses these concerns, aiming to prevent such misuse ahead of the 2024 elections in the US and other countries.

Meta’s own Oversight Board has also previously criticized the company’s “incoherent” policies on media manipulation.

The Importance of Label Transparency and Tips for Users

The new industry-standard markers that will let Meta apply labels do not yet cover AI-generated video and audio.

Meta is adding a feature that lets users disclose when video or audio content they share was created with AI. Clegg said this disclosure will be required, and that users who fail to label realistic, digitally altered video or audio may face penalties.

Meta may add more prominent labels to digitally created or altered images, videos, or audio. The company prioritizes labeling high-risk content that could materially deceive the public on important issues.

Meta is also working to prevent users from removing invisible watermarks from AI-generated images. Clegg stresses that this work matters because deceptive use of AI-generated content is likely to increase.
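Clegg does not describe the watermarking scheme itself. The basic idea of an invisible watermark can, however, be shown with a deliberately simplified sketch: hide a bit pattern in the least significant bits of pixel values, where it is imperceptible to viewers but readable by software. The code below is that toy version, not the robust, removal-resistant watermarking Meta is referring to.

```python
# Toy invisible watermark: hide a short bit pattern in the least significant
# bit of grayscale pixel values. Changing the LSB shifts a pixel by at most 1,
# so the mark is invisible to viewers but trivially readable by software.
# Real AI-provenance watermarks are designed to survive cropping, compression,
# and deliberate removal attempts; this sketch is not.

def embed(pixels: list[int], bits: list[int]) -> list[int]:
    """Write each bit into the least significant bit of the corresponding pixel."""
    marked = list(pixels)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & ~1) | bit
    return marked

def extract(pixels: list[int], length: int) -> list[int]:
    """Read the first `length` least significant bits back out."""
    return [p & 1 for p in pixels[:length]]

if __name__ == "__main__":
    image = [120, 121, 119, 200, 201, 198, 64, 65]  # stand-in for pixel data
    watermark = [1, 0, 1, 1]                        # stand-in for an "AI-generated" tag
    marked = embed(image, watermark)
    assert extract(marked, len(watermark)) == watermark
    print("original:", image)
    print("marked:  ", marked)
```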

He anticipates that individuals and organizations seeking to mislead others will try to circumvent safeguards, highlighting the ongoing need for vigilance.

Clegg advises users to check the credibility of the account posting the content and to look for unnatural details when judging whether something may be AI-generated.

Meta Expands “Take it Down” Anti-Extortion Tool

The “Take it Down” anti-extortion tool, supported by the National Center for Missing & Exploited Children, is expanding to reach more users.

The Take it Down tool helps teens protect their intimate images by generating a unique identifier, or hash, from each image, which participating platforms can use to find and remove matching copies. The tool, launched last year in English and Spanish, is expanding to 25 additional languages and countries.
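The “unique identifier” is a hash: a fingerprint computed from the image itself, so the image never has to be uploaded for platforms to recognize copies of it. The sketch below uses SHA-256 purely as a stand-in; the actual service uses its own hashing scheme, and in practice perceptual hashes are preferred so that re-encoded copies still match.

```python
# Minimal sketch of hash-based matching, assuming SHA-256 as a stand-in for
# the hashing scheme the Take it Down service actually uses. Only the hash is
# shared with platforms, which compare it against hashes of uploaded content.
import hashlib

def image_fingerprint(path: str) -> str:
    """Return a hex digest that identifies this exact file."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches(candidate_path: str, reported_hashes: set[str]) -> bool:
    """Check an uploaded file against the set of reported fingerprints."""
    return image_fingerprint(candidate_path) in reported_hashes
```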

The Take it Down announcement comes after Meta CEO Mark Zuckerberg, along with other social media company leaders, met with U.S. President Joe Biden to discuss ways to combat the spread of online misinformation.

With these moves, Meta aims to increase transparency and security around AI-generated content and to protect users from misleading information.

Also Read: ClimateGPT: Open-Source AI Platform Tackles Climate Disinformation