Anthropic’s Data Policy Overhaul: What Users Need to Know
Anthropic is making a sweeping change to how it handles user data, requiring all Claude users to decide by September 28 whether they consent to their conversations being used to train its AI models. This marks a major reversal of the previous policy, under which consumer chat data was not used for model training.
What’s Changing?
Previously, Anthropic told users that their prompts and conversation outputs would be automatically deleted from its servers within 30 days unless it was legally required to retain them longer. Now, for users who do not opt out, data retention will extend to five years. The change applies to Anthropic’s consumer products, meaning Claude Free, Pro, and Max plans, including use of Claude Code from accounts on those plans. Business customers using Claude Gov, Claude for Work, Claude for Education, or API access are not affected.
The Rationale Behind the Change
Anthropic frames these changes around user choice and model safety: users who do not opt out, it says, will help make its systems more accurate and capable. However, like other AI companies, Anthropic needs vast amounts of high-quality conversational data to remain competitive, and that need is a significant driving force behind the change.
Industry Trends and Implications
This move reflects broader shifts in data policies across the AI industry, as companies like Anthropic and OpenAI face increasing scrutiny over their retention practices. OpenAI, for instance, is currently contesting a court order, issued amid ongoing litigation, that requires it to retain all consumer ChatGPT conversations indefinitely.
Impact on Users
The rapid pace of these policy changes is causing confusion, and many users remain unaware of the updates. Anthropic’s implementation does not help: existing users see a pop-up with a prominent “Accept” button and, in much smaller print, a toggle for data sharing permissions that is switched on by default, a design that makes it easy to accept the new terms without realizing it.
Regulatory Oversight
Privacy experts have long cautioned that obtaining meaningful user consent in the AI domain is difficult. The Federal Trade Commission has warned AI companies against quietly rewriting their terms of service or privacy policies. Whether the agency is actively policing these practices today remains an open question.