Government and nonprofits · November 8, 2023

Helping People Understand When AI or Digital Methods Are Used in Political or Social Issue Ads


We’re requiring advertisers to disclose when they digitally create or alter a political or social issue ad in certain cases

Updated on January 3, 2024 at 8 am PST


On Thursday, January 11, the new disclosure policy for ads about social issues, elections, and politics will go into effect. As we announced in November, advertisers will be required to disclose when their ad contains a photorealistic image or video, or realistic sounding audio, that was digitally created or altered by AI or other methods to depict scenarios listed in the article below.


Before then, advertisers can begin updating active ads on Tuesday, January 9 at 10 am PST to disclose whether they contain digitally created or altered content.


For more information, visit the Business Help Center. For advertisers and partners using the API, please refer to our Meta for Developers resources.


Originally published on November 8, 2023


We’re announcing a new policy to help people understand when a social issue, election, or political advertisement on Facebook or Instagram has been digitally created or altered, including through the use of AI. This policy will go into effect in the new year and will be required globally.


Advertisers will have to disclose whenever a social issue, electoral, or political ad contains a photorealistic image or video, or realistic sounding audio, that was digitally created or altered to:


  • Depict a real person as saying or doing something they did not say or do; or
  • Depict a realistic-looking person that does not exist or a realistic-looking event that did not happen, or alter footage of a real event that happened; or
  • Depict a realistic event that allegedly occurred, but that is not a true image, video, or audio recording of the event.

Advertisers running these ads do not need to disclose when content is digitally created or altered in ways that are inconsequential or immaterial to the claim, assertion, or issue raised in the ad. This may include adjusting image size, cropping an image, color correction, or image sharpening, unless such changes are consequential or material to the claim, assertion, or issue raised in the ad.


Meta will add information on the ad when an advertiser discloses in the advertising flow that the content is digitally created or altered. This information will also appear in the Ad Library. If we determine that an advertiser doesn’t disclose as required, we will reject the ad, and repeated failure to disclose may result in penalties against the advertiser. We will share additional details about the specific steps advertisers will go through during the ad creation process.


As always, we remove content that violates our policies whether it was created by AI or a person. Our independent fact-checking partners review and rate viral misinformation and we do not allow an ad to run if it’s rated as False, Altered, Partly False, or Missing Context. For example, fact-checking partners can rate content as “Altered” if they determine it was created or edited in ways that could mislead people, including through the use of AI or other digital tools.