When Do We Need to Tell Customers We’re Using AI?
Dear Will & AiME,
We’re integrating more AI into our marketing, from chatbots to website content. I’m hearing about legal requirements, platform rules, and trust issues around “AI disclosure.” When does a business need to tell consumers it used AI, and what’s the practical approach to this today?
— Director of Product Marketing in Orlando
Short answer 💡
Businesses should disclose AI use when consumers could reasonably be misled—such as interacting with chatbots or viewing AI-generated content—under evolving laws and platform rules. Clear, context-based disclosure helps manage legal risk and build customer trust.
Dear Director of Product Marketing in Orlando,
This is a frequent and complex question. The legal and regulatory landscape is a moving target, comprising a mix of state laws, consumer protection standards, and platform-specific policies. While not yet uniform, clear patterns are emerging, and businesses can take practical steps to reduce legal risk and build trust.
What Is AI Disclosure?
At its core, AI disclosure means informing users or audiences that they are:
Interacting with an AI system (e.g., a chatbot), or
Seeing content that was generated or materially influenced by AI.
Disclosure serves two related purposes:
Consumer protection — Preventing deception about whether a human or machine is on the other side of an interaction.
Transparency — Allowing consumers to understand the role of AI in creating or influencing content that affects decisions they make.
When Is AI Disclosure Legally Required?
There is not yet a single federal law governing AI disclosure. At the state level, however, several emerging laws include express disclosure obligations:
Chatbot transparency — Several states have enacted or are considering laws requiring businesses to disclose when a consumer is interacting with an AI bot, particularly in situations where they might reasonably believe they are communicating with a human.
High-risk & sensitive uses — Emerging laws and regulations are focusing on AI systems deemed "high-risk" or used in sensitive contexts, such as those providing companionship or advice. These often require heightened transparency to protect vulnerable consumers.
Broad disclosure rules — Other state statutes (e.g., Colorado’s AI Act) impose broader transparency obligations for AI systems, which may cover certain consumer interactions depending on how the system is used.
Existing consumer protection laws — Even where there’s no specific AI statute, unfair or deceptive trade practice laws can apply if a business hides AI use in a way that misleads consumers.
How Platform Rules Shape AI Disclosure Requirements
In addition to statutes, platform policies and advertising guidelines often impose AI disclosure requirements:
Social media platforms (e.g., Meta, TikTok, YouTube) require labels or notices for AI‑generated or AI‑edited content, especially when it’s realistic or could affect purchasing or trust.
Some ad networks automatically label generative ad creatives as “AI‑generated” where that content was created using their tools.
These requirements may not be “law,” but they are contractual obligations, and failing to meet them can affect your ability to run ads or post content.
A Practical Framework for AI Disclosure
When interacting directly with users via AI chat tools:
If your chatbot, virtual assistant, or conversational API could be mistaken for a human, a clear statement like “You are interacting with an AI‑powered assistant” is typically advisable, and in some states, required.
When AI generates or alters consumer‑facing content:
If a marketing video, image, or article is materially AI‑generated or AI‑edited, disclose that fact, particularly on platforms that mandate it or where the content could influence consumer decisions (e.g., ads, product content, testimonials).
When AI influences decisions that affect consumers:
If AI contributes to decision‑making (e.g., recommendations, automated offers, price personalization), transparency — ideally in plain language — helps avoid regulatory or consumer backlash. Some consumer privacy frameworks already require explanation if automated decisions are significant.
When using AI voices or avatars in brand or endorsement contexts:
If an AI voice or synthetic performer is used in marketing, consider disclosing that use, especially if it mimics a real person or endorser or could be mistaken for an actual human.
Why AI Disclosure Builds Trust, Not Just Compliance
Being upfront about AI presents an opportunity to build trust and connect with customers:
Customers appreciate clarity about what is automated versus human, particularly when the interaction informs decisions that affect them.
Thoughtful disclosure policies help reinforce brand integrity and can differentiate companies committed to responsible AI use.
AI disclosure is an emerging compliance principle grounded in state law, platform rules, and consumer protection norms. To manage risk and build trust, businesses should evaluate where AI interacts with consumers, assess whether a reasonable person could be misled, and determine how to provide clear and conspicuous notice.
— Will & AiME
Three Takeaways:
A growing number of state laws and platform policies require businesses to disclose AI usage in consumer interactions, particularly where users might reasonably believe they’re interacting with humans.
Disclosure obligations vary by context — chatbots, marketing content, automated decision‑making, and AI influencers may all trigger different requirements.
Proactive disclosure policies not only reduce legal risk but also support trust and transparency with customers, a competitive advantage as AI becomes more common.