Meta · Community Guidelines · February 12, 2024
Meta is expanding its misinformation policies for Facebook, Instagram and Threads to cover AI-generated content. The updated Community Guidelines now include a subsection titled “Content Digitally Created or Altered that May Mislead”. The company promises to add informative labels to artificially produced content that creates a particularly high risk of misleading people on a matter of public importance.
The changes to Facebook’s and Instagram’s policy documents were traced by the Platform Governance Archive, a database that automatically tracks policy changes and stores policy documents for 18 platforms. It is hosted by the Lab Platform Governance, Media and Technology at the Center for Media, Communication, and Information Research (ZeMKI), University of Bremen.
Changes in Facebook’s Community Guidelines tracked by Platform Governance Archive.
These changes follow the company’s announcement that it will label AI-created images on Facebook, Instagram and Threads. Meta has labelled content generated through its Meta AI feature since the feature’s launch; now the company aims to collaborate with industry partners to establish common technical standards for identifying AI-generated content of all kinds.
This raises questions about the transparency of AI technology and calls for action from other industry actors, as the boundary between human-made and synthetic content becomes increasingly blurred. In July 2023, Meta was one of several AI companies that made voluntary commitments to the White House to implement measures against AI-related harms in the online environment.
In January, at the World Economic Forum in Davos, Nick Clegg, Meta’s president of global affairs, described AI watermarking as “the most urgent task” for regulators today.
Meanwhile, the European Commission has been pushing very large online platforms and search engines to start labelling AI-generated content, which may soon become a requirement under the Digital Services Act.