IAB Sets Disclosure Rules for AI in Advertising

As our expert in demand generation, Milena Traikovich brings a wealth of experience in analytics and performance optimization, helping businesses navigate the complex intersection of marketing and technology. With the rapid integration of generative AI into advertising, the industry is at a critical inflection point, grappling with questions of transparency and trust. The IAB’s new AI Transparency and Disclosure Framework offers a guide, and today, we’ll delve into its practical implications with Milena. We will explore how marketing teams can assess AI’s impact on authenticity, the internal processes needed to establish clear disclosure policies, and how a two-layer approach to transparency can build consumer confidence. We’ll also discuss how proactive adoption of these standards can become a powerful competitive advantage.

The new IAB framework uses a risk-based approach, focusing on consumer impact. How can a marketing team practically assess if AI’s use “materially affects authenticity,” and what specific metrics might they use to measure that potential consumer impact? Please provide a detailed example.

That’s the central question this framework helps us answer. The key isn’t a complex algorithm but a straightforward human judgment: does the use of AI meaningfully change what a consumer believes they are seeing or hearing? Imagine a travel company’s campaign. If they use AI to optimize ad copy for different audiences or to seamlessly edit a video’s color grading, that’s routine work. It doesn’t alter the core truth of the content. However, if that same company creates a photorealistic video of a celebrity endorsing their resort, using a synthetic voice and a digital twin of a person who was never there, that is a material change. The consumer believes a real person is making a real statement, which is fundamentally misleading. The metric here isn’t a number but a qualitative assessment based on a simple checklist: Does this depict an event that really happened? Is this the actual person, rather than a synthetic recreation of their likeness? Is this a statement they actually made? Answering “no” to any of these is a strong signal that you’ve crossed the line and disclosure is necessary to maintain trust.
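To make that checklist concrete, here is a minimal sketch of how a team might record those human judgments in an internal review tool. The class and field names are illustrative, not part of the IAB framework; the answers themselves remain qualitative calls made by people, and the code merely applies the "any no means disclose" rule described above.

```python
from dataclasses import dataclass

@dataclass
class AuthenticityCheck:
    """Records the human judgments from the 'materially affects
    authenticity' checklist. These are review decisions, not
    automated classifications."""
    depicts_real_event: bool        # Does this depict an event that really happened?
    is_actual_person: bool          # Is this the actual person, not a synthetic recreation?
    statement_actually_made: bool   # Is this a statement they actually made?

    def disclosure_recommended(self) -> bool:
        # A "no" to any question is a strong signal that AI use
        # materially affects authenticity and disclosure is needed.
        return not (self.depicts_real_event
                    and self.is_actual_person
                    and self.statement_actually_made)

# Example: the synthetic celebrity endorsement from the travel scenario.
check = AuthenticityCheck(
    depicts_real_event=False,       # the resort visit never happened
    is_actual_person=False,         # a digital twin, not the person
    statement_actually_made=False,  # synthetic voice reading a script
)
print(check.disclosure_recommended())  # True -> flag for disclosure
```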

The framework distinguishes between routine AI uses, like optimization, and scenarios requiring disclosure, like synthetic voices. What’s a step-by-step process for a brand to draw this line, and what internal team members, such as legal or creative, should be involved in that decision?

Creating a clear internal process is absolutely essential to apply this framework consistently. The first step is to conduct an audit of your entire campaign workflow to identify every single touchpoint where AI is used, from initial brainstorming and content creation to media buying and performance analysis. Step two is to categorize these uses. Place them in buckets like “optimization and efficiency,” which would include tasks like AI-assisted editing, versus “substantive content generation,” which involves creating images, voices, or entire video scenes. The third and most critical step is to establish a cross-functional review committee for anything falling into that second bucket. This isn’t a decision marketing can make in a silo. You absolutely need your legal and compliance teams at the table to weigh the risks of being perceived as misleading. Your creative team must also be there to articulate their vision and intent, while brand and PR leaders can speak to the potential impact on public perception. This group would then apply the “materially affects authenticity” test before any such asset goes live, creating a formal record of the decision.
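The three-step process above lends itself to simple tooling. Here is a minimal sketch of how a team might encode the two buckets and the committee-routing rule; the category names, touchpoint labels, and the `needs_committee_review` helper are illustrative assumptions, not prescribed by the framework.

```python
from enum import Enum, auto

class AIUseCategory(Enum):
    OPTIMIZATION_AND_EFFICIENCY = auto()      # e.g. AI-assisted editing, copy optimization
    SUBSTANTIVE_CONTENT_GENERATION = auto()   # e.g. synthetic images, voices, video scenes

# Step one and two: audit campaign touchpoints and place each in a bucket.
campaign_touchpoints = {
    "ad copy optimization per audience": AIUseCategory.OPTIMIZATION_AND_EFFICIENCY,
    "video color grading": AIUseCategory.OPTIMIZATION_AND_EFFICIENCY,
    "synthetic spokesperson video": AIUseCategory.SUBSTANTIVE_CONTENT_GENERATION,
}

def needs_committee_review(category: AIUseCategory) -> bool:
    """Step three: anything in the substantive-generation bucket goes to the
    cross-functional committee (legal, creative, brand/PR) before launch."""
    return category is AIUseCategory.SUBSTANTIVE_CONTENT_GENERATION

for touchpoint, category in campaign_touchpoints.items():
    if needs_committee_review(category):
        print(f"Route to review committee and log the decision: {touchpoint}")
```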

A two-layer disclosure model involves consumer-facing icons and machine-readable metadata like C2PA. How do you see these two layers working together to build consumer trust, and what are the primary technical challenges agencies might face when implementing the machine-readable component across different platforms?

The two-layer model is brilliant because it addresses two different needs simultaneously. The consumer-facing layer—the icons, watermarks, or text labels—is the immediate, easily digestible signal for the average person. It’s a simple, upfront cue that says, “AI was involved in creating this,” which builds trust through direct honesty. The second, machine-readable layer, using standards like C2PA, is for deeper verification. It provides a permanent, technical fingerprint embedded in the asset, allowing platforms, regulators, and even curious consumers to verify its origin and how it was modified. Together, they create a robust system where simple disclosure is backed by verifiable proof. The primary technical challenge for agencies will be interoperability. Right now, the digital advertising ecosystem is incredibly fragmented. An agency might correctly embed C2PA metadata, but it could easily get stripped out as the ad passes through various servers and platforms on its way to the end user. Ensuring that this metadata persists and is correctly interpreted across every social media platform, publisher website, and ad network is a massive undertaking that will require industry-wide collaboration.
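On the machine-readable layer, an agency can at least test for the failure mode described above: metadata that was embedded but stripped in transit. Below is a rough sketch assuming the open-source `c2patool` CLI from the Content Authenticity Initiative is installed; it prints an asset's C2PA manifest store as JSON. The asset path is hypothetical, and the exact shape of the output JSON may vary by tool version, so treat this as a starting point rather than a production verifier.

```python
import json
import subprocess

def read_c2pa_manifest(asset_path: str) -> dict | None:
    """Attempt to read an asset's C2PA manifest store via c2patool.
    Returns None if no manifest is present -- e.g. if an intermediary
    platform stripped the metadata on the way to the end user."""
    try:
        result = subprocess.run(
            ["c2patool", asset_path],
            capture_output=True, text=True, check=True,
        )
        return json.loads(result.stdout)
    except (FileNotFoundError, subprocess.CalledProcessError,
            json.JSONDecodeError):
        return None

manifest = read_c2pa_manifest("campaign_ad.jpg")  # hypothetical asset
if manifest is None:
    # The interoperability problem in practice: the disclosure icon may
    # still render, but the verifiable proof behind it is gone.
    print("No C2PA provenance survived the delivery chain.")
else:
    print("Provenance manifest found for verification.")
```

A practical extension would be running this check on creatives pulled back from each platform where the campaign serves, which turns the interoperability question from a guess into a measurable audit.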

This framework is positioned as a way for marketers to future-proof their AI adoption. Beyond avoiding regulatory issues, could you share a specific example of how proactively adopting these standards could create a competitive advantage or enhance a brand’s relationship with its customers?

Future-proofing goes far beyond just dodging fines; it’s about building brand equity in an era of skepticism. Think of it as turning a compliance requirement into a brand-building opportunity. Imagine two skincare brands launching a new product. One brand uses an AI-generated avatar in its ads without disclosure, hoping to seem cutting-edge. The other brand also uses an AI avatar but embraces the IAB framework, placing a small, clear icon on the ad and building a landing page that says, “We used AI to create our virtual spokesperson, ‘Aura,’ to show you the science behind our product in a new way.” The first brand risks a backlash when consumers discover the avatar isn’t real, making the company seem deceptive. The second brand, however, has proactively framed the narrative. It comes across as innovative, transparent, and respectful of its audience’s intelligence. This honesty builds a much stronger, more resilient relationship with customers who will see them as a trustworthy leader in the space.

What is your forecast for the evolution of AI disclosure standards in advertising over the next five years?

My forecast is that this IAB framework is the foundational layer, and what we’ll see over the next five years is increasing granularity and platform-level enforcement. Right now, the disclosure is often a simple binary signal—AI was or wasn’t used. I predict we will move toward more specific labels, distinguishing between “AI-assisted imagery,” “Fully synthetic character,” or “AI-generated voice.” Secondly, I believe the machine-readable metadata layer, like C2PA, will become a non-negotiable requirement for running ads on major platforms. They will integrate it into their ad approval systems to automate compliance at scale. Finally, I foresee a major push in consumer education. The icons and labels will only be effective if the public understands what they mean, so we’ll likely see industry-wide campaigns designed to build that literacy, turning these disclosures from an insider’s tool into a universally understood signal of authenticity.
