Meta is introducing a new global policy that requires advertisers to disclose digital manipulation in political and social issue advertisements on its platforms.
Starting in the new year, advertisers will have to disclose when a political ad contains images, video, or audio that has been digitally created or altered, including with generative AI tools.
This move targets ads that depict a real person saying or doing something they did not say or do, portray a realistic-looking person or event that does not exist, or present real events in an altered or misleading form.
Once an advertiser discloses an alteration, Meta will label the ad and record the disclosure in its Ad Library for public scrutiny. Ads submitted without the required disclosure will be rejected, and repeat offenders may face penalties.
To maintain the integrity of its platform, Meta will continue to remove any content, AI-generated or not, that violates its policies.
In tandem, independent fact-checkers will continue to assess viral content for misinformation, flagging AI-generated or digitally altered material that could mislead users.
As digital technologies evolve, Meta’s policy adjustment reflects a growing commitment to transparency and the responsible use of AI.
By implementing this policy, Meta is acknowledging the profound impact of AI on the political landscape and helping users judge the authenticity of the political content they encounter.