Meta to Label Content as 'Made with AI' Starting May

CIO Insider Team | Saturday, 6 April, 2024

Starting in May, Meta will add a 'Made with AI' label and amend its policy on AI-generated content; the guidelines will apply to content on Facebook, Instagram, and Threads.

Meta has announced that it will begin designating more audio, video, and image content as artificial intelligence (AI)-generated, acknowledging that its present policy is "too narrow." Although it did not elaborate on its detection method, labels will be applied either when users disclose that they used AI tools or when Meta detects "industry standard AI image indicators."

Given the rapid advances in AI and the ease with which media can be manipulated into highly convincing deepfakes, Meta's Oversight Board asked the company in February to quickly review its approach to altered media.

The board's warning came amid concerns over the widespread exploitation of AI-powered applications to spread disinformation on the platforms during a crucial election year, both domestically and internationally.

Meta's new "Made with AI" labels will identify video, audio, and image content that has been generated or modified with AI. In addition, content judged to have a high potential for deceiving the public will carry a more prominent label.

These new labeling measures are tied to an agreement reached in February by major tech companies and AI players to crack down on manipulated content intended to mislead voters.

As part of that agreement, Meta, Google, and OpenAI committed to watermarking images created by their AI applications using a common standard.

According to Meta, the rollout will take place in two stages: AI-generated content will begin to be labeled in May 2024, and in July the company will stop removing altered media solely on the basis of the previous policy.

Under the new guideline, content altered by artificial intelligence will remain on the platform unless it violates other Community Standards, such as those prohibiting hate speech or voter interference.

