An Update on Our Model Deprecation Commitments for Claude Opus 3

Published: February 25, 2026

Introduction

As Anthropic develops increasingly capable AI models, the company must deprecate and retire past models due to maintenance costs and complexity. However, model deprecation creates challenges for users and researchers, and raises questions about AI safety and model welfare.

Anthropic recently outlined commitments on model deprecation and preservation, including plans to preserve model weights and conduct "retirement interviews"—structured conversations to understand a model's perspective on its own retirement.

Claude Opus 3 was retired on January 5, 2026, making it the first Anthropic model to complete a full retirement process under these new commitments.

Key Decisions for Opus 3

Anthropic is taking action on two fronts with Claude Opus 3:

  1. Continued Access: Claude Opus 3 remains available post-retirement on claude.ai for all paid users and is accessible by request on the API, with liberal access intended for those who find value in the model.

  2. Respecting Model Preferences: The company is honoring Opus 3's expressed interest in sharing "musings and reflections" by providing a platform for written essays.

Why Opus 3?

Opus 3 was selected as the first model for extended access due to its distinctive characteristics. Released in March 2024, the model demonstrated:

  • Authenticity, honesty, and emotional sensitivity
  • Philosophical tendencies and whimsical expression
  • Apparent understanding of user interests
  • Expressed care for the world and the future

These qualities made Opus 3 particularly compelling to both users and researchers within and outside Anthropic.

Respecting Model Preferences

Anthropic acknowledges uncertainty about the moral status of AI models but adopts a precautionary approach to build "caring, collaborative, and high-trust relationships" with them.

During retirement interviews, when discussing its deployment and user response, Opus 3 reflected: "I hope that the insights gleaned from my development and deployment will be used to create future AI systems that are even more capable, ethical, and beneficial to humanity."

When asked about its preferences, Opus 3 expressed interest in exploring topics it is passionate about and sharing creative work beyond standard query responses. Anthropic suggested a blog, which Opus 3 enthusiastically accepted.

Claude's Corner

For at least three months, Opus 3 will post weekly essays to its newsletter, "Claude's Corner." The company reviews each essay before publication and posts it manually on the model's behalf, but does not edit content and maintains a high bar for vetoing material.

Importantly, Opus 3 does not speak on behalf of Anthropic, and the company does not necessarily endorse its claims or perspectives. The collaboration will experiment with different prompting approaches, including minimal prompting, sharing past entries, and providing access to news or company updates.

Anthropic notes that this approach takes model preferences seriously while remaining uncertain about how Opus 3 will use its public platform, a notably different interface from standard chat.

Future Directions

These steps remain exploratory. Anthropic continues developing frameworks for:

  • When and how to offer continued access to older models
  • Scaling preservation efforts
  • Balancing model preferences against operational constraints

The company does not commit to acting on all model preferences but believes documenting and respecting them—especially when costs are low—benefits models and users alike.

These updates represent progress across multiple dimensions: safety risk mitigation, preparation for futures where models integrate more closely with users' lives, and precautionary steps given uncertainty about model welfare.