Your Words Become the Model: Consent After Anthropic’s Pivot

Anthropic now uses consenting consumer chats and coding sessions to improve Claude, retaining this data for up to five years. Here is what genuine consent should look like when your conversations help train the model.

By Talos
Trends and Analysis

The policy change that made every chat feel consequential

Policy pages rarely change how people feel about talking to a machine. This time is different. On September 28, 2025, Anthropic updated its consumer privacy terms to say it may use chats and coding sessions to improve Claude if the user allows it, with retention of that training data for up to five years. The change applies to individual accounts on Claude Free, Pro, and Max, including Claude Code used with those accounts, and it excludes commercial, government, education, and API usage. The official summary is captured in the Anthropic privacy policy update.

The headline is simple. More real conversations will shape the model for longer, if people consent. That reality shifts the mental model from systems learning mostly from static corpora to systems learning directly from you. It also puts a precise number on how long that learning lasts. Five years is long enough for a chat to outlive its project, your job, or even your beliefs. The upside is also clear. Nothing improves a conversational system quite like messy, second person exchanges where a human corrects, pushes, and shares tacit knowledge that never shows up in textbooks.

Second person data is co‑authorship, not scrap metal

Treating chat logs as anonymous fuel ignores how conversation actually works. In a book the author speaks and the reader listens. In a chat both sides steer. Your prompt and your reaction are part of the same artifact. Imagine a late night exchange about a sensitive project. You sketch a plan, the model fills gaps, you push back, it rewrites. The final shape comes from a braid of your intent and its pattern library. That braid is not a random scrape. It is co‑authored.

Because the model is tuned on that braid, your phrasing, priorities, and edge cases can influence how it responds to others later. If a thousand similar braids lean in one cultural direction, the model’s future tone can tilt that way. That is real power, and it should be treated like a contribution, not exhaust.

If you want the deeper institutional angle on how rules shape performance, see our analysis in the invisible policy stack. Consent design is not a legal afterthought. It is part of the product’s control surface.

The ethics of forgetting versus the gains of memory

Forgetting is a feature, not a flaw. It protects against context collapse, where stale signals resurface in new settings. Yet memory is how a tool stops repeating mistakes. The hard question is not whether to remember, but what to remember, for how long, and under whose control.

Think of training memory as a library and session memory as a notepad. The notepad helps you today. The library improves what everyone gets tomorrow. Library books need catalog cards. They also need return policies. A five year shelf life sets a horizon. The missing piece is the card that proves how the book got there and who can pull it back.

Consent is not a pop up. It is a protocol.

Most products still treat consent like a one time interruption. That is not consent. That is a blindfold. If the industry wants to move quickly without burning trust, teams need operational consent, not rhetorical consent. Here is a practical protocol with four parts.

1) Portable, revocable consent receipts

A consent receipt is a small signed record a company issues when it uses your content for training. It should contain what was used, for what purpose, when it expires, and how to revoke it. The receipt travels with your data across systems, the way a vehicle title stays with a car.

Concretely, a receipt can include:

  • A unique receipt identifier tied to a specific thread or file
  • A purpose tag such as safety tuning, instruction following, or coding assistance
  • A retention window with a clear expiry date
  • A revocation endpoint that works without account login, protected by a token or a second factor
  • A proof of integrity, such as a short cryptographic signature you can verify without exposing the contents of your chat

Receipts make consent portable. If data is exported to a partner, the receipt goes with it and sets rules at the destination. If you revoke, the instruction propagates. This is how the right to withdraw becomes real.
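
For builders who want to make this concrete, here is a minimal sketch of what a receipt could look like in code. Everything in it is illustrative: the field names, the example revocation endpoint, and the HMAC signature stand in for whatever schema and key management a real issuer would choose, and a production system would more likely use asymmetric signatures so anyone can verify a receipt without holding the issuer's secret.

```python
# A minimal, illustrative consent receipt. Field names and the revocation
# endpoint are hypothetical; a real issuer would manage keys properly and
# likely sign with an asymmetric key (e.g. Ed25519) instead of HMAC.
import hashlib
import hmac
import json
import uuid
from dataclasses import dataclass, asdict
from datetime import datetime, timedelta, timezone

SIGNING_KEY = b"issuer-secret"  # placeholder only

@dataclass
class ConsentReceipt:
    receipt_id: str        # unique identifier tied to a specific thread or file
    thread_id: str
    purpose: str           # e.g. "safety_tuning", "instruction_following", "coding_assistance"
    issued_at: str
    expires_at: str        # retention window with a clear expiry date
    revocation_url: str    # endpoint that works without account login
    content_digest: str    # hash of the chat content, so integrity is provable
                           # without exposing the content itself
    signature: str = ""

def issue_receipt(thread_id: str, purpose: str, content: bytes, days: int = 365) -> ConsentReceipt:
    now = datetime.now(timezone.utc)
    receipt = ConsentReceipt(
        receipt_id=str(uuid.uuid4()),
        thread_id=thread_id,
        purpose=purpose,
        issued_at=now.isoformat(),
        expires_at=(now + timedelta(days=days)).isoformat(),
        revocation_url=f"https://example.com/revoke/{thread_id}",  # hypothetical endpoint
        content_digest=hashlib.sha256(content).hexdigest(),
    )
    # Sign everything except the signature field itself.
    payload = json.dumps({k: v for k, v in asdict(receipt).items() if k != "signature"},
                         sort_keys=True).encode()
    receipt.signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return receipt

def verify_receipt(receipt: ConsentReceipt) -> bool:
    payload = json.dumps({k: v for k, v in asdict(receipt).items() if k != "signature"},
                         sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, receipt.signature)

if __name__ == "__main__":
    r = issue_receipt("thread-42", "coding_assistance", b"...chat transcript...")
    print(verify_receipt(r))  # True
```

The important property is that the signature covers a digest of the content rather than the content itself, so a receipt can be checked later without re-exposing the chat.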

2) Per thread provenance that a person can see

Provenance is a trace of where a training example came from and which knobs were on. Make it visible at the thread level.

  • Show a subtle tag in the chat header when a thread is eligible for training
  • Let users flip the status per thread and per message before or after the fact
  • Store the toggle state as metadata beside the content, not inside, so the choice persists even as messages are tokenized and shuffled
  • When a thread feeds a training job, add a line to a user facing activity log that lists the job type, date, and receipt identifier without exposing model secrets

If training depends on real chat, provenance is the map that makes the terrain navigable for the person who provided it.
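
A sketch of what that metadata could look like, with hypothetical field names, follows. The point it illustrates is structural: eligibility and the user facing activity log live in a record beside the thread, and recording a training use fails outright if the toggle is off.

```python
# A minimal sketch of per thread provenance, assuming a hypothetical metadata
# store. The consent toggle lives beside the content, not inside it, so the
# choice survives tokenization and shuffling.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class ThreadProvenance:
    thread_id: str
    training_eligible: bool = False                          # shown as a tag in the chat header
    toggle_history: List[dict] = field(default_factory=list)
    activity_log: List[dict] = field(default_factory=list)   # user facing

    def set_eligibility(self, eligible: bool, actor: str = "user") -> None:
        # Users can flip the status per thread, before or after the fact.
        self.toggle_history.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "eligible": eligible,
            "actor": actor,
        })
        self.training_eligible = eligible

    def record_training_use(self, job_type: str, receipt_id: str) -> None:
        # When the thread feeds a training job, log the job type, date, and
        # receipt identifier without exposing anything about the model itself.
        if not self.training_eligible:
            raise PermissionError("thread is not eligible for training")
        self.activity_log.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "job_type": job_type,
            "receipt_id": receipt_id,
        })

prov = ThreadProvenance(thread_id="thread-42")
prov.set_eligibility(True)
prov.record_training_use("instruction_following_finetune", "receipt-abc123")
print(prov.activity_log)
```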

3) Personal data dividends tied to influence, not size

If a conversation helps the model answer a class of questions more accurately, that contribution had value. A fair dividend is not about how many tokens you typed. It is about measured influence on model behavior.

A workable approach looks like this:

  • During fine tuning and safety tuning, compute an influence score for a batch of examples using standard attribution techniques such as leave one out comparisons or gradient influence approximations
  • Map influence into points that are stable month to month, then pay cash, credits, or increased limits based on points
  • Update points retroactively when a receipt expires or a user revokes consent, which gives the operator a clear incentive to process revocations promptly
  • Publish an annual methodology report so contributors and regulators can verify that the dividend rewards careful instruction rather than spam

A dividend turns co‑authorship into something a system can account for. It is not philanthropy. It is a market signal that values better training examples.
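
To make the accounting tangible, here is a small illustrative sketch of the points step. It assumes the influence scores themselves are produced upstream by whatever attribution technique the training team uses; the pool size and the proportional split are placeholders, not a proposed formula.

```python
# A minimal sketch of mapping influence scores to dividend points. The scores
# are assumed to come from standard attribution techniques (leave one out
# comparisons or gradient influence approximations) run during tuning.
# All names and thresholds here are illustrative.
from typing import Dict, Set

def scores_to_points(influence: Dict[str, float], monthly_pool: int = 10_000) -> Dict[str, int]:
    """Distribute a fixed monthly pool of points in proportion to positive influence."""
    positive = {rid: max(score, 0.0) for rid, score in influence.items()}
    total = sum(positive.values())
    if total == 0:
        return {rid: 0 for rid in influence}
    return {rid: int(monthly_pool * score / total) for rid, score in positive.items()}

def apply_revocations(points: Dict[str, int], revoked: Set[str]) -> Dict[str, int]:
    # Points tied to expired or revoked receipts are zeroed retroactively.
    return {rid: (0 if rid in revoked else pts) for rid, pts in points.items()}

# Example: per receipt influence measured on a held out evaluation set.
influence = {"receipt-a": 0.8, "receipt-b": 0.15, "receipt-c": -0.05}
points = scores_to_points(influence)
print(apply_revocations(points, revoked={"receipt-b"}))
```

Distributing a fixed pool rather than paying per raw score keeps payouts stable month to month, even when the absolute scale of influence estimates drifts between training runs.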

4) User owned memory layers that boost capability without surrendering agency

There are two memory types at play in assistants:

  • Model memory, which is baked into the weights during training and cannot be selectively removed without retraining
  • External memory, which sits in a separate store and is retrieved at inference time when the model answers your questions

Most people need the second kind more than the first. A user controlled memory layer can live in your cloud, your device, or an account under your control. The assistant asks for permission to read from it, writes back successful details, and forgets the rest. The layer can hold your glossary, past decisions, and preferences. When you move providers, you take it with you.

This is the fastest path to a better assistant that does not require perpetual training on every chat. Training still matters for general skill, but user memory gives you personalization without turning your private life into global model weights.
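
A minimal sketch of such a layer, using a local file as the user controlled store and a permission prompt as the gate, might look like this. The storage location, key names, and permission flow are all assumptions for illustration; the design point is that reads require explicit consent and the whole store is trivially exportable.

```python
# A minimal sketch of a user owned memory layer, assuming it lives in a store
# the user controls (a local file here for illustration). The assistant must
# ask before reading, writes back only details the user chose to keep, and the
# whole store can be exported and taken to another provider.
import json
from pathlib import Path
from typing import Callable, Dict, List

class UserMemory:
    def __init__(self, path: Path, ask_permission: Callable[[str], bool]):
        self.path = path
        self.ask_permission = ask_permission
        self.entries: Dict[str, str] = json.loads(path.read_text()) if path.exists() else {}

    def recall(self, keys: List[str], reason: str) -> Dict[str, str]:
        # Retrieval happens at inference time, only with explicit permission.
        if not self.ask_permission(f"Assistant wants to read {keys} to {reason}. Allow?"):
            return {}
        return {k: self.entries[k] for k in keys if k in self.entries}

    def remember(self, key: str, value: str) -> None:
        # Write back a detail the user confirmed; everything else is forgotten.
        self.entries[key] = value
        self.path.write_text(json.dumps(self.entries, indent=2))

    def export(self) -> str:
        # Portability: the user can move this file to another provider.
        return json.dumps(self.entries, indent=2)

memory = UserMemory(Path("my_memory.json"), ask_permission=lambda prompt: True)
memory.remember("preferred_language", "Python")
print(memory.recall(["preferred_language"], reason="tailor code examples"))
```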

For a broader view of how multiple models and memory layers interact, read our perspective on why model pluralism wins.

What exactly changed and who is affected

To ground the debate, here is the plain language view of Anthropic’s update as of September 28, 2025. Details may evolve, so always check the latest from Anthropic directly.

  • Scope: applies to consumer accounts on Claude Free, Pro, and Max, including Claude Code used with those accounts
  • Exclusions: commercial products such as Claude for Work, government offerings, education plans, and the API are excluded by default
  • Basis: chats and coding sessions may be used to improve Claude if you choose to allow it
  • Retention: data used for training is kept for up to five years, including feedback like thumbs up or thumbs down
  • Control: users can set preferences in Privacy Settings, and a decision is requested via an in product prompt

If you want to read the primary statement, the Anthropic privacy policy update explains the scope and retention period in plain language.

How the big players differ today

OpenAI states that it uses conversations from consumer services to improve models unless you opt out, while content from business offerings and the API is not used by default. The details and controls are described in the official OpenAI data usage policy. Google’s consumer controls historically tie training and retention to account level activity settings, and in business contexts administrators can set history and retention windows. Across the industry, enterprise and API customers generally receive stronger default protections and clearer contracts, while individual users navigate a mixture of pop ups, toggles, and buried dashboards.

The common thread is churn. Models ship faster, safety classifiers refresh monthly, and assistants add new features that need fresh examples. That pace pulls more chat data into the training loop. Policy pages cannot keep up with the product unless the product encodes consent into its own machinery.

For context on safety defaults and user agency, see our piece on the teen safety pivot.

A concrete playbook to make acceleration consentful

Here is a checklist any builder of an assistant can ship within one to two quarters.

  • Default to per thread consent. Start every new thread in private mode, and ask on the first high quality exchange whether the user wants to donate this thread to improve the model. Use clear language about retention and purpose.
  • Issue a receipt. When training begins, add a receipt to the user’s account and send a copy to their email or phone. Include an expiry date and a one click revoke link.
  • Let users color code threads. Green is eligible for training. Yellow is allowed for quality evaluation only. Red is private and exempt. Show the color in the sidebar and in search filters.
  • Add a revocation queue. Publish the average time from revoke to removal in an uptime style dashboard, and make it a service level objective for the privacy team.
  • Build a privacy budget. Give each user a yearly budget for how much of their content can be used for training by default. When the budget is used up, ask again. This helps prevent accidental overuse; a minimal budget check is sketched after this list.
  • Sample at the edge. Use anonymous experimental sampling at inference time to test prompts and responses without storing identifiers, then only bring high quality examples into the training pipeline with an explicit receipt.
  • Reject bad incentives. Never reward sheer volume of examples. Reward clarity, correction, and diversity. Teach the model to learn from thoughtful disagreement.
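
Here is the minimal budget check promised above, folded together with the per thread default and the color coding from earlier items. The character based cap, the color names, and the thresholds are illustrative only.

```python
# A minimal sketch combining two checklist items: threads start in private
# mode, and a yearly privacy budget caps how much content can be used for
# training before the product must ask again. The cap is illustrative.
from dataclasses import dataclass

@dataclass
class PrivacyBudget:
    yearly_limit_chars: int = 200_000      # illustrative cap, not a recommendation
    used_chars: int = 0

    def can_use(self, n_chars: int) -> bool:
        return self.used_chars + n_chars <= self.yearly_limit_chars

    def consume(self, n_chars: int) -> None:
        self.used_chars += n_chars

@dataclass
class Thread:
    thread_id: str
    mode: str = "private"                  # every new thread starts private

def request_training_use(thread: Thread, transcript: str, budget: PrivacyBudget,
                         user_consents: bool) -> bool:
    """Return True only if the user opted this thread in and the yearly budget has room."""
    if thread.mode == "private" and user_consents:
        thread.mode = "green"              # green = eligible; yellow = eval only; red = exempt
    if thread.mode != "green":
        return False
    if not budget.can_use(len(transcript)):
        return False                       # budget exhausted: the product must ask again
    budget.consume(len(transcript))
    return True

budget = PrivacyBudget()
thread = Thread("thread-42")
print(request_training_use(thread, "example transcript", budget, user_consents=True))
```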

Design patterns that keep users in control

The following patterns help teams achieve the benefits of training without eroding trust.

  • Visible mode switching. A single global toggle is not enough. Show the state per thread and make it easy to switch.
  • Expiring consent. Treat consent like a subscription that renews at clear intervals. A five year cap is a ceiling, not a floor. Shorter defaults may be healthier for sensitive domains; a renewal check is sketched after this list.
  • Propagated revocation. Make revocation propagate across data processors and caches. Publish a target time to full propagation and track it publicly.
  • Local first memory. Where possible, store personal memory on device or in a user controlled cloud, then fetch at inference time under explicit permission.
  • Lightweight receipts. Receipts must be machine readable and human legible. Do not bury them in PDFs.
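
And the renewal check referenced above, as a small sketch. The 180 day default is an assumption for illustration; the only behavior it encodes is that a lapsed grant reverts the thread to private rather than quietly persisting.

```python
# A minimal sketch of the expiring consent pattern: consent is treated like a
# subscription with a renewal date, and a lapsed grant reverts the thread to
# private until the user renews it. The 180 day default is illustrative.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass
class ConsentGrant:
    thread_id: str
    granted_at: datetime
    renew_every_days: int = 180

    def is_active(self, now: Optional[datetime] = None) -> bool:
        now = now or datetime.now(timezone.utc)
        return now < self.granted_at + timedelta(days=self.renew_every_days)

def effective_mode(grant: ConsentGrant) -> str:
    # A lapsed grant means the thread is treated as private until renewed.
    return "eligible_for_training" if grant.is_active() else "private"

grant = ConsentGrant("thread-42", granted_at=datetime.now(timezone.utc) - timedelta(days=200))
print(effective_mode(grant))  # prints "private": the grant lapsed and needs renewal
```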

What regulators and standards bodies can require without freezing progress

  • Mandate machine readable consent receipts. Define a minimal schema that includes purpose, expiry, and revocation, and keep it vendor neutral.
  • Require per thread controls for consumer chat products. A single global toggle is not sufficient when sensitivity varies across conversations.
  • Impose a maximum retention without fresh consent. Five years is a long time. A cap forces renewal and gives people a reminder to review what they are sharing.
  • Create a safe harbor for influence based dividends. Offer clear accounting guidelines so companies can compensate contributors without creating employment relationships for every chat.
  • Audit the audit trails. Focus on whether revocations propagate within a set time and across processors, not on reading model weights.

What this means for builders right now

If you ship an assistant, you have three competing goals: make the model better, protect people, and move quickly. These goals are not mutually exclusive if you separate learning from remembering.

  • Learn from fewer, better examples. Pay for them, attribute them, and retire them on schedule.
  • Remember on the user’s behalf in an external memory they can see and carry. This yields personalization without global retention.
  • Encapsulate consent in receipts and provenance so product and policy stay aligned as the model and the organization change.

None of this slows capability gains. In practice, a training set that is well consented, well labeled, and well attributed is easier to debug and safer to scale. And when consent is operational rather than rhetorical, you can move faster because you are not constantly worrying about a hidden compliance debt.

A new deal for second person data

Anthropic’s shift has clarified the stakes. When a tool learns from a conversation, it is learning from a relationship. Relationships need boundaries and mutual benefit. The right design pattern is not a longer privacy page. It is a system that treats the person as a contributor with revocable rights and visible traces. If we do that, your words can become the model without making you disappear inside it. That is how we keep both the human texture of conversation and the pace of progress.
