Few industries were as quick to embrace generative AI as advertising. Drawn to its potential for speed and scale, agencies and brands alike have rushed to explore how generative tools can streamline creative work and expand what marketing teams can produce.
But while adoption has happened quickly, standards haven’t kept pace. That lack of standardization has left many marketers navigating AI on their own, even amid growing concerns around transparency and trust.
The Interactive Advertising Bureau (IAB) has attempted to tackle this issue with its AI Transparency and Disclosure Framework. These guidelines are designed to give advertisers a common starting point for thinking about AI disclosure and transparency as generative tools become more deeply embedded in marketing processes.
The Current State of Disclosure in Advertising
Even among brands trying to be transparent about AI use, disclosure practices are all over the place. Some campaigns flag even small amounts of AI involvement, while others avoid labeling altogether unless they’re legally required to.
Inconsistent practices have made disclosure a minefield, with both over- and under-labeling carrying risks. Too many disclosures can lead to label fatigue, where audiences see so many that they stop paying attention to them. Under-labeling, on the other hand, creates the risk of what’s called the implied-truth effect: when only some content carries AI labels, people may assume anything without a label is automatically real.
Regulation isn’t settled either. New policies at the state level are pushing toward greater transparency, but these rules are decentralized and differ widely depending on the market. At the federal level, existing consumer protection laws still apply, though AI-specific regulations have been slow to follow.
A Risk-Based Approach to Disclosure
One of the framework’s central ideas is that not every use of AI in advertising carries the same level of risk. AI can assist with small production tasks or generate entire scenes, people, or voices. Treating all of those uses the same would either overwhelm audiences with disclosures or make transparency impractical for marketers.
Instead, the framework focuses on materiality. In practical terms, that means asking whether AI could mislead a reasonable consumer about what is authentic, factual, or human-created in an advertisement.
The threshold is based on consumer impact rather than the number of AI tools used during production. Many uses of AI happen behind the scenes and do not meaningfully change how audiences interpret an ad. In those cases, disclosure is typically unnecessary.
Disclosure becomes important when AI plays a central role in creating the content audiences actually experience. In those situations, transparency helps prevent confusion.
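To make the materiality test concrete, here is a minimal sketch of that decision logic in TypeScript. The field names and example categories are illustrative assumptions of ours; the framework itself expresses this test in prose, not code.

```typescript
// Illustrative model of a single AI use within an ad campaign.
interface AiUseInAd {
  description: string;
  consumerFacing: boolean;         // does the AI output appear in the ad itself?
  altersPerceivedReality: boolean; // could it change what a reasonable consumer
                                   // believes is authentic, factual, or human-created?
}

// The threshold turns on consumer impact, not on how many AI tools were used.
function requiresDisclosure(use: AiUseInAd): boolean {
  return use.consumerFacing && use.altersPerceivedReality;
}

const examples: AiUseInAd[] = [
  {
    description: "AI-assisted color correction during post-production",
    consumerFacing: true,
    altersPerceivedReality: false, // a traditional-editing equivalent
  },
  {
    description: "Text-to-image product scene standing in for a photo shoot",
    consumerFacing: true,
    altersPerceivedReality: true, // a viewer may assume the scene existed
  },
];

for (const use of examples) {
  console.log(`${use.description}: disclose = ${requiresDisclosure(use)}`);
}
```

Under this reading, the color correction falls below the threshold while the generated product scene clears it, which matches the framework's emphasis on what audiences perceive rather than what tools were used.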
Disclosure by Content Type
The framework also walks through several different content types, including images, video, AI influencers, and synthetic voice. Each section looks slightly different, but the reasoning behind each is the same: whether the use of AI could mislead someone about what they are actually seeing or hearing.
Take generated imagery. If a brand creates a realistic product scene using a text-to-image system instead of photographing it, a viewer might reasonably assume the scene existed in real life. Because that assumption could affect how the ad is interpreted, the framework treats it as something that requires disclosure, not entirely dissimilar to longstanding requirements like “not actual size” or “enlarged to show texture.”
The same reasoning carries over to other formats. AI-generated video, virtual influencers, and cloned voices can all create situations where consumers might misunderstand who or what is behind the message. For instance, a synthetic voice might make it seem as though a real person (e.g., a celebrity) delivered the narration, or a virtual influencer might appear to be a real creator rather than a generated persona. This mirrors longstanding requirements that commercials disclose when paid actors stand in for real testimonials.
On the other hand, many everyday production tasks do not warrant disclosure as they pose very little material risk. Using AI to clean up a product photo, adjust lighting, remove a shadow, or improve image quality is treated much like traditional editing. The same applies when AI helps place a real product photo into a new background, as long as the product itself is represented accurately.
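As a compact summary of the guidance above, the sketch below restates these content-type examples as a lookup table. The format names and rationales are illustrative labels of ours, not terminology defined by the IAB.

```typescript
// Hypothetical summary of the content-type guidance as data.
type DisclosureRule = { example: string; disclose: boolean; rationale: string };

const rulesByFormat: Record<string, DisclosureRule[]> = {
  image: [
    { example: "Text-to-image product scene", disclose: true,
      rationale: "Viewers may assume the scene was photographed" },
    { example: "Shadow removal or lighting cleanup", disclose: false,
      rationale: "Equivalent to traditional editing" },
  ],
  voice: [
    { example: "Synthetic narration that sounds like a real person", disclose: true,
      rationale: "Implies a real speaker delivered the message" },
  ],
  influencer: [
    { example: "Generated persona presented as a creator", disclose: true,
      rationale: "Audiences may believe a real human is behind it" },
  ],
};
```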
How Disclosures Should Actually Appear
Along with identifying the types of content that warrant disclosure, the IAB also offers insight into how advertisers can implement those disclosures in practice.
Text labels are the primary method the IAB recommends. In most cases, the expectation is a clear, plain-language label using terms like “AI-generated.” The goal is straightforward communication that audiences can easily understand without needing to interpret symbols or technical language.
There are situations, however, where a text label may not fit neatly into a piece of creative. In those cases, the framework allows for visual indicators as an alternative. For example, some platforms use a small sparkle or star icon to signal AI involvement, while others apply their own AI indicators automatically. The framework says advertisers may rely on those platform labels, as long as they meet visibility standards.
Regardless of the format, the framework stresses that disclosures must be clear and easy to notice. Visual labels should be readable, high contrast, and visible during the content itself. Audio disclosures should be spoken clearly at a normal pace, and visual equivalents should be included when video is present.
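For a sense of what this could look like in practice, here is a hedged TypeScript sketch that overlays a plain-language label on a piece of web creative. The styling values are our own assumptions chosen to satisfy the readability and contrast guidance, not prescriptions from the framework.

```typescript
// A hypothetical overlay label following the visibility guidance above:
// plain language, readable, high contrast, visible alongside the content.
function addAiDisclosureLabel(creative: HTMLElement, text = "AI-generated"): void {
  const label = document.createElement("span");
  label.textContent = text;

  // High-contrast, legible styling so the label is easy to notice.
  Object.assign(label.style, {
    position: "absolute",
    bottom: "8px",
    left: "8px",
    padding: "2px 8px",
    font: "12px/1.4 sans-serif",
    color: "#ffffff",
    background: "rgba(0, 0, 0, 0.75)",
    borderRadius: "4px",
  });

  // Anchor the label to the creative so it stays on screen with the content.
  creative.style.position = "relative";
  creative.appendChild(label);
}
```

For audio or video placements, a label like this would be paired with a clearly spoken disclosure and kept visible while the AI-generated content plays.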
Learn more about the IAB’s framework and download the report here: https://www.iab.com/guidelines/ai-transparency-and-disclosure-framework/