
AI Privacy Policy Requirements: What You Must Disclose in 2026

New laws in California, Colorado, and over 20 other states now require businesses to disclose how they use artificial intelligence. If your website or app uses AI in any form, your privacy policy needs updating — and the deadlines have already passed.

Artificial intelligence is no longer a future concern for privacy regulators — it is the present. As of early 2026, a wave of new legislation across the United States and updates to GDPR enforcement guidance in the EU and UK have created a clear legal obligation: if your business uses AI, you must tell people about it in your privacy policy.

This is not optional. California’s AI companion chatbot disclosure law took effect on 1 January 2026. Colorado’s comprehensive AI Act began enforcement on 1 February 2026. The EU AI Act’s transparency requirements are now fully operational. And regulators are actively investigating businesses that fail to disclose their use of AI systems.

If you use AI chatbots, AI-generated content, AI-powered recommendations, automated decision-making, or any tool built on large language models, this guide explains exactly what your privacy policy must now include.

Why AI Privacy Disclosures Matter Now

The regulatory landscape for AI changed dramatically between 2024 and 2026. Here is what happened:

  • California SB 243 — effective 1 January 2026, requires businesses operating AI-powered chatbots to clearly disclose that users are interacting with artificial intelligence, not a human. This applies to customer service bots, AI companions, and any conversational AI.
  • Colorado AI Act (SB 24-205) — enforcement began 1 February 2026. Requires “deployers” of high-risk AI systems to provide consumers with information about the AI system, its purpose, the data it uses, and how to opt out or appeal automated decisions.
  • EU AI Act — transparency obligations now in full effect. Any AI system that interacts with people must disclose that fact. AI-generated content must be labelled. High-risk AI requires detailed documentation accessible to affected individuals.
  • UK ICO guidance (updated 2025) — the Information Commissioner’s Office published updated guidance requiring businesses to explain AI processing in their privacy notices, including the logic involved, the significance of automated decisions, and the right to human review.
  • 20+ US states have introduced or passed AI disclosure requirements, many piggybacking on existing consumer protection or data privacy frameworks (Virginia, Connecticut, Texas, Oregon, and others).

The direction is unmistakable: regulators worldwide expect transparency about AI. A privacy policy that does not mention AI is now a compliance gap.

Does Your Business Use AI? (You Might Be Surprised)

Many business owners think of AI as something only tech companies use. In reality, AI is embedded in tools that millions of small businesses use daily:

  • Customer service chatbots — Intercom, Drift, Zendesk AI, or any live chat widget with automated responses
  • AI-generated content — blog posts, product descriptions, or marketing copy created with ChatGPT, Claude, Jasper, or similar tools
  • Email marketing AI — Mailchimp, Klaviyo, and others use AI for subject line optimisation, send-time prediction, and audience segmentation
  • Product recommendations — Shopify, WooCommerce, and Amazon use AI to suggest products to customers
  • Fraud detection — Stripe, PayPal, and payment processors use AI to flag suspicious transactions
  • Hiring and HR tools — applicant tracking systems that screen CVs or score candidates
  • Personalisation engines — any tool that customises content, pricing, or user experience based on behaviour
  • Analytics and profiling — tools that create user segments, predict churn, or score leads

If you use any of the above — even through a third-party integration — your privacy policy needs to address AI.

What Your Privacy Policy Must Disclose About AI

Based on current legislation and regulatory guidance across the US, EU, and UK, your privacy policy should include the following AI-related disclosures:

1. That You Use AI at All

This seems obvious, but many businesses skip it entirely. Your privacy policy must clearly state that your website, app, or service uses artificial intelligence or automated decision-making. This is a baseline requirement under virtually every AI regulation.

2. What AI Systems You Use and Their Purpose

You must describe each AI system or AI-powered feature, what it does, and why you use it. For example:

  • “We use an AI-powered chatbot (provided by [vendor]) to answer common customer questions. The chatbot uses natural language processing to understand your query and provide relevant responses.”
  • “We use AI-powered recommendation algorithms to suggest products based on your browsing and purchase history.”
  • “Our fraud detection system uses machine learning to identify potentially fraudulent transactions.”

3. What Data the AI Processes

For each AI system, disclose what personal data it accesses or processes. This includes data the user provides directly (such as chat messages) and data collected automatically (such as browsing behaviour, purchase history, or device information).

4. Whether Data Is Used for AI Training

This is one of the most contentious issues in AI privacy. Users want to know: is my data being used to train AI models? If you use third-party AI tools, check their data processing terms. Many providers (OpenAI, Google, Meta) apply different policies to business API usage than to their consumer products. Disclose clearly whether user data is or is not used for model training.
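
If your AI features run on a third-party API, the place to check is the vendor’s business or API terms rather than its consumer product policy. As a rough illustration, here is a minimal sketch of a support feature calling the openai Node SDK from your own backend. The function name, model, and wording are assumptions; confirm the vendor’s current training policy and your data processing agreement before repeating any of this in your privacy policy.

```typescript
// A sketch of calling a third-party AI API (here via the openai Node SDK) from
// your own backend. The function name, model, and wording are illustrative.
// At the time of writing, OpenAI's business API terms do not use API inputs or
// outputs for model training by default; confirm the current terms and your
// data processing agreement before stating this in your privacy policy.
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

export async function answerSupportQuestion(question: string): Promise<string> {
  // The user's question may contain personal data; that fact belongs in your
  // "what data the AI processes" disclosure.
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      { role: "system", content: "You are a customer support assistant." },
      { role: "user", content: question },
    ],
  });
  return response.choices[0]?.message?.content ?? "";
}
```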

5. Automated Decision-Making and Its Consequences

Under GDPR Article 22 and equivalent provisions in US state laws, you must disclose any automated decision-making that has legal or similarly significant effects on individuals. This includes:

  • Automated credit scoring or lending decisions
  • Algorithmic hiring or candidate screening
  • Dynamic pricing based on user profiles
  • Automated content moderation decisions
  • Insurance risk assessment
  • Benefits eligibility determinations

For each, you must explain the logic involved (in general terms), the significance of the decision, and the potential consequences for the individual.
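
In practice, that explanation is only possible if the information is captured when the decision is made. The sketch below shows one way a decision could be logged with a plain-language reason and its consequence; the field names and decision types are illustrative assumptions, not a prescribed format.

```typescript
// A sketch of logging an automated decision together with a plain-language
// reason and its consequence. Every field name and decision type here is an
// assumption; the point is to capture, at decision time, the information your
// privacy policy promises to explain.
interface AutomatedDecisionRecord {
  decisionId: string;
  userId: string;
  kind: "fraud-check" | "credit-decision" | "content-moderation";
  inputsSummary: string[];      // the data considered, described in general terms
  outcome: "approved" | "declined" | "flagged";
  plainLanguageReason: string;  // what you would tell the individual
  consequence: string;          // the effect on the individual
  decidedAt: Date;
}

const example: AutomatedDecisionRecord = {
  decisionId: "dec_0001",
  userId: "user_42",
  kind: "fraud-check",
  inputsSummary: ["transaction pattern", "device information", "account history"],
  outcome: "flagged",
  plainLanguageReason: "This transaction did not match your usual purchase pattern.",
  consequence: "Order held pending manual review",
  decidedAt: new Date(),
};
```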

6. Opt-Out Rights and Human Review

Users have the right to opt out of certain AI-driven processing and to request human review of automated decisions. Your privacy policy must explain the following (one way to record such requests is sketched after this list):

  • How to opt out of AI-powered profiling or personalisation
  • How to request human review of an automated decision
  • How to object to AI processing of their data
  • Any limitations on these rights (e.g., where AI processing is necessary for the contract)
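
One practical way to honour these rights is to record each request in a form that can be actioned and evidenced later. The sketch below is illustrative; the request types and in-memory storage are assumptions, and a real system would persist requests and feed them into your support workflow.

```typescript
// A sketch of recording AI-related rights requests so they can be actioned and
// evidenced later. The request types and in-memory storage are assumptions;
// a real system would persist the request, suppress AI profiling for that user,
// and route human-review requests to someone with authority to change the outcome.
type AiRightsRequest =
  | { kind: "opt-out-profiling"; userId: string; receivedAt: Date }
  | { kind: "human-review"; userId: string; decisionId: string; receivedAt: Date }
  | { kind: "object-to-processing"; userId: string; details: string; receivedAt: Date };

const requestLog: AiRightsRequest[] = [];

export function recordAiRightsRequest(request: AiRightsRequest): void {
  requestLog.push(request);
}

// Example: a user asks for human review of a flagged transaction.
recordAiRightsRequest({
  kind: "human-review",
  userId: "user_42",
  decisionId: "dec_0001",
  receivedAt: new Date(),
});
```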

Common AI Privacy Policy Mistakes

  • Not mentioning AI at all. The most common mistake. If you added an AI chatbot or started using AI tools but never updated your privacy policy, you are non-compliant.
  • Vague language. Saying “we may use automated tools” is not sufficient. Regulators expect specificity about which AI systems, what data they process, and what decisions they influence.
  • Ignoring third-party AI. If your payment processor, analytics tool, or marketing platform uses AI, you likely need to disclose this — especially if it affects your users.
  • Missing the data training disclosure. The question “is my data used to train AI?” is now one of the first things regulators and informed consumers ask. If you do not address it, you are leaving a gap.
  • No opt-out mechanism. Several laws now require consumers to be able to opt out of AI profiling. If you offer no way to do this, you are in breach.
  • Confusing AI chatbots with human support. California law specifically requires clear disclosure that a chatbot is not human. If your AI chat widget has a human name and avatar without disclosure, that is a violation.

AI Privacy Policy Template: Example Clauses

Here are example clauses you can adapt for your privacy policy. These are starting points — your policy must reflect your actual practices.

General AI Disclosure

“We use artificial intelligence and machine learning technologies in certain areas of our service, including [customer support, product recommendations, content generation, fraud detection]. These systems process personal data as described below to provide and improve our services.”

AI Chatbot Disclosure

“Our website uses an AI-powered chatbot to assist with common questions and support requests. When you interact with our chat feature, you are communicating with an artificial intelligence system, not a human agent. The chatbot processes the text of your messages to generate responses. Your conversations may be reviewed by our team to improve service quality, but are not used to train third-party AI models.”
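
If you build or customise your own chat widget, the disclosure should appear before the first message is sent rather than being buried in a menu. Here is a minimal sketch of surfacing it, assuming a custom widget with a known element ID; the ID and wording are illustrative, so use the wording your own policy requires.

```typescript
// A sketch of surfacing the AI disclosure inside a custom chat widget, before
// the user sends their first message. The element ID and wording are
// illustrative; use the wording your own policy and legal advice require.
function showAiDisclosure(widgetId: string): void {
  const widget = document.getElementById(widgetId);
  if (!widget) return;

  const banner = document.createElement("div");
  banner.setAttribute("role", "note");
  banner.textContent =
    "You are chatting with an AI assistant, not a human agent. " +
    "See our Privacy Policy for how your messages are used.";

  // Place the notice at the top of the widget so it is seen before the chat starts.
  widget.prepend(banner);
}

showAiDisclosure("support-chat");
```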

Data Training Opt-Out

“We do not use your personal data to train artificial intelligence or machine learning models. Our AI features are powered by third-party APIs [e.g., OpenAI] under business terms that prohibit the use of your data for model training.”

Automated Decision-Making

“We use automated systems to [describe purpose, e.g., assess fraud risk on transactions]. These systems analyse [describe data inputs, e.g., transaction patterns, device information, and account history] to make decisions that may affect your ability to [describe consequence, e.g., complete a purchase]. You have the right to request human review of any automated decision by contacting us at [email].”

How to Update Your Privacy Policy for AI

Here is a step-by-step process to get your privacy policy AI-compliant:

  • Audit your AI usage. List every AI tool, feature, or integration your business uses — including third-party services with AI components. A simple inventory format is sketched after this list.
  • Document data flows. For each AI system, note what personal data it receives, how it processes it, and whether data leaves your control (e.g., sent to a third-party API).
  • Check vendor terms. Review the data processing agreements of your AI vendors. Determine whether user data is used for model training and whether you can opt out.
  • Draft disclosures. Write clear, specific language for each AI system covering its purpose, data inputs, decision impacts, and opt-out rights.
  • Add an AI section to your privacy policy. Many businesses are now adding a dedicated “Artificial Intelligence” or “Automated Decision-Making” section to their privacy policy for clarity.
  • Implement opt-out mechanisms. Provide a clear way for users to opt out of AI profiling and request human review of automated decisions.
  • Review regularly. AI tools change rapidly. Set a quarterly reminder to review your AI inventory and update your privacy policy accordingly.
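
For the audit and data-flow steps, it helps to keep the inventory in a structured, reviewable form rather than scattered notes. Below is a minimal sketch of one possible format; every field name is an assumption to adapt to your own stack.

```typescript
// A sketch of one possible format for the AI inventory from the audit step
// above. Every field name is an assumption; record whatever your regulators,
// vendors, and privacy policy actually need to reflect.
interface AiSystemRecord {
  name: string;                   // e.g. "Support chatbot"
  vendor: string;                 // who provides the underlying AI
  purpose: string;                // why you use it
  personalDataInputs: string[];   // what personal data it receives
  dataLeavesOurControl: boolean;  // is data sent to a third-party API?
  usedForModelTraining: boolean;  // per the vendor's DPA or business terms
  automatedDecisions: boolean;    // legal or similarly significant effects?
  optOutMechanism?: string;       // how a user opts out, where applicable
  lastReviewed: string;           // ISO date of the last quarterly review
}

const aiInventory: AiSystemRecord[] = [
  {
    name: "Support chatbot",
    vendor: "Third-party LLM API",
    purpose: "Answer common customer questions",
    personalDataInputs: ["chat messages", "account email"],
    dataLeavesOurControl: true,
    usedForModelTraining: false,
    automatedDecisions: false,
    lastReviewed: "2026-01-15",
  },
];
```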

Generate an AI-Compliant Privacy Policy Today

Keeping up with AI privacy requirements across multiple jurisdictions is a moving target. New laws are being passed, and existing regulations are being updated with AI-specific guidance every quarter.

LegalForge generates privacy policies that include AI disclosure clauses tailored to your specific technology stack. Tell us which AI tools you use, what data they process, and whether you use automated decision-making — and we produce a compliant privacy policy that covers GDPR, CCPA, the EU AI Act, and state AI laws. One-time payment, no subscription. Updated for 2026.

AI privacy disclosures — sorted in 60 seconds

Generate a privacy policy with AI-specific clauses. Covers GDPR, CCPA, EU AI Act, and US state AI laws.

Generate Your Policy — £19 One-Time
