AI safety research company & creator of Claude — the model that refuses to help you build a bomb but will happily write your novel. Now locked in a standoff with the Pentagon.
Anthropic is an AI safety company founded in 2021 by siblings Dario Amodei (CEO) and Daniela Amodei (President), both former OpenAI executives. Structured as a Public Benefit Corporation, it develops Claude — a family of large language models — with an explicit mission to build AI systems that are "reliable, interpretable, and steerable."
The company has become the second-largest pure-play AI lab behind OpenAI, with its $380 billion valuation (Feb 2026) making it one of the most valuable private companies in history. Its rapid ascent has been fueled by enterprise adoption, the breakout success of Claude Code, and a massive strategic partnership with Amazon.
| Name | Role | Background |
|---|---|---|
| Dario Amodei | CEO & Co-founder | Princeton PhD (computational neuroscience). Former VP of Research at OpenAI. Left in 2020 over disagreements on safety direction. Born 1983. |
| Daniela Amodei | President & Co-founder | Former VP of Operations at OpenAI. Oversees business ops, policy, and go-to-market. |
| Mike Krieger | Head of Labs (prev. CPO) | Instagram co-founder. Joined Anthropic 2024; now leads the new "Labs" division (Jan 2026). |
| Tom Brown | Co-founder | Lead author of GPT-3 paper at OpenAI. Core technical contributor. |
| Chris Olah | Co-founder | Pioneer in neural network interpretability. Previously at Google Brain & OpenAI. |
| Jared Kaplan | Co-founder | Johns Hopkins physicist. Co-authored foundational "scaling laws" research. |
| Round | Date | Amount | Valuation | Lead Investors |
|---|---|---|---|---|
| Seed | 2021 | $124M | — | Jaan Tallinn, Eric Schmidt, Dustin Moskovitz |
| Series A | Apr 2022 | $580M | — | Sam Bankman-Fried / FTX ($500M) |
| Series B | May 2023 | $450M | $4.1B | Spark Capital |
| Series C | Sep–Nov 2023 | $2B | $18B | Amazon (initial $1.25B, total $4B commitment) |
| Series D | Mar 2024 | $2.75B | $18.4B | Menlo Ventures |
| Series E | Mar 2025 | $3.5B | $61.5B | Lightspeed Venture Partners |
| Series F | Aug 2025 | $13B | $183B | ICONIQ, Goldman Sachs |
| Series G | Feb 2026 | $30B | $380B | Coatue Management |
| Model | Release | Positioning |
|---|---|---|
| Claude 1 | Mar 2023 | Initial launch; strong writing & analysis |
| Claude 2 | Jul 2023 | 100K context (200K with Claude 2.1), improved reasoning |
| Claude 3 (Haiku, Sonnet, Opus) | Mar 2024 | Tiered model family; Opus topped benchmarks |
| Claude 3.5 Sonnet | Jun 2024 | Fan favorite; best balance of cost/performance |
| Claude 3.5 Haiku / Sonnet v2 | Oct 2024 | Faster, cheaper, enterprise-ready |
| Claude 4 Sonnet / Opus | 2025 | Major capability jump; agentic coding focus |
| Claude Opus 4.5 | Late 2025 | "Most powerful frontier model" at launch |
| Claude Sonnet 4.6 | Feb 2026 | Latest; second major launch in two weeks |
| Product | Description |
|---|---|
| Claude Code | Agentic CLI tool for developers (Feb 2025). $2.5B ARR by Feb 2026. Developers delegate coding tasks from terminal. Configured via markdown docs (CLAUDE.md, AGENTS.md). |
| Claude.ai | Consumer-facing chatbot (free + $20/mo Pro tier). Web, iOS, Android. |
| Claude API | Developer platform. Powers enterprise integrations. Available on AWS Bedrock, Google Cloud Vertex AI. See the usage sketch below this table. |
| Claude for Excel | Beta (late 2025). Pivot tables, charts, file uploads, quick-launch shortcut. |
| Agent Skills | Beta (Oct 2025). Modular skill folders Claude loads dynamically for specialized tasks. |
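To make the Claude API row concrete, here is a minimal sketch of a direct API call, assuming the public `anthropic` Python SDK and an `ANTHROPIC_API_KEY` environment variable; the model alias and prompt are illustrative placeholders.

```python
# Minimal Claude API call; a sketch assuming the public `anthropic` Python SDK.
# The model alias and prompt are illustrative placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder alias; substitute any available model
    max_tokens=256,
    messages=[{"role": "user", "content": "Summarize Anthropic's mission in one sentence."}],
)

print(message.content[0].text)
```

The SDK also provides Bedrock and Vertex AI client variants exposing the same `messages` interface, which is how the AWS and Google Cloud availability noted above is typically consumed.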
Amazon has invested a cumulative $8 billion across multiple rounds, making it Anthropic's largest strategic investor. The partnership names AWS as Anthropic's primary cloud and training partner, with Claude models available to AWS customers through Amazon Bedrock.
| Partner | Nature |
|---|---|
| Google Cloud | Claude available on Vertex AI; Google invested $2B (2023) |
| Salesforce | Investor + enterprise integration partner |
| Cisco | Investor; enterprise security/networking AI use cases |
| Dimension | Anthropic | OpenAI | Google DeepMind |
|---|---|---|---|
| Flagship Model | Claude Sonnet 4.6 | GPT-5 / o3 | Gemini 2.5 Pro |
| Valuation | $380B (private) | ~$300B (transitioning to for-profit) | Div. of Alphabet ($2T+) |
| Revenue (est.) | $5B+ ARR | ~$10B ARR | Single-digit billions |
| Enterprise Adoption | 12.1% market share | 36.5% market share | ~1% direct |
| Key Backer | Amazon ($8B) | Microsoft ($13B) | Alphabet (parent) |
| Structure | Public Benefit Corp | Transitioning from nonprofit | Alphabet subsidiary |
| Safety Stance | Safety-first (refusing Pentagon) | Pragmatic (took Pentagon deal) | Internal review boards |
| Developer Favorite | Coding & writing tasks | Broad general use | Multi-modal, search |
The most significant controversy in Anthropic's history. The company refused Pentagon demands to allow Claude's use in autonomous weapons systems and mass domestic surveillance. Key timeline:
| Date | Event |
|---|---|
| Feb 2026 | Pentagon demands Anthropic void TOS provisions barring military AI use in autonomous weapons & surveillance |
| Feb 25 | Anthropic rejects "final offer," requesting two assurances: no autonomous weapons, no mass surveillance |
| Feb 27 | Pentagon cancels Anthropic contract. Defense Sec. Hegseth designates Anthropic a "supply chain risk" |
| Feb 27 | Trump orders all U.S. government agencies to "immediately cease" using Anthropic technology |
| Feb 28 | OpenAI announces it will take over the Pentagon's AI contract |
SBF's $500M investment (~13.56% stake) came from FTX funds. After FTX's collapse, the stake entered bankruptcy proceedings. Although the shares were non-voting and SBF never held a board seat, the association was a PR liability.
Critics argue Anthropic uses safety rhetoric as a marketing differentiator while pursuing the same capability races as competitors. The company's own employees have occasionally voiced concerns about the pace of deployment.
At Davos (Jan 2026), Dario Amodei made waves criticizing Nvidia and warning AI could take "half of jobs." Nvidia CEO Jensen Huang fired back: "Don't do it in a dark room and tell me it's safe."
⚠️ Sentiment data is estimated based on aggregated community discussions and is not scientifically sampled. It reflects online conversation trends, not a representative survey.
| Theme | Direction | Notes |
|---|---|---|
| Pentagon refusal | ▲ Strongly Positive | "Hard resisting because they can see more than a few feet in front of their nose." Massive public goodwill for refusing autonomous weapons. |
| Claude Code quality | ▲ Positive | Developers consistently praise Claude for coding & writing. "Best coding assistant" is a common refrain. |
| Safety sincerity | ◆ Mixed | Some believe it's genuine; others call it marketing. "Our economic system ultimately corrupts any player that gets far enough in the game." |
| Corporate sustainability | ▼ Concerned | Fear that the Pentagon blacklist could threaten enterprise revenue & government contracts long-term. |
| vs OpenAI | ▲ Favorable | The contrast with OpenAI immediately taking the Pentagon deal is viewed very positively for Anthropic. |
Anthropic was founded on the premise that AI safety should be built into frontier labs, not just studied from the outside. Key pillars:
| Pillar | Description |
|---|---|
| Constitutional AI | Models are trained against a written "constitution" of principles rather than purely human feedback, reducing reliance on human labelers (see the sketch below this table). |
| Responsible Scaling Policy | Defines AI Safety Levels (ASL-1 through ASL-4); each capability threshold triggers mandatory safety evaluations before deployment. |
| Long-Term Benefit Trust | Independent trust with power over Anthropic's board composition. Members: Neil Buddy Shah, Kanika Bahl, Zach Robinson, Richard Fontaine (as of Oct 2025). |
| Mechanistic Interpretability | Led by co-founder Chris Olah. Pioneering work on understanding what is happening inside neural networks; widely considered industry-leading. |
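The Constitutional AI pillar can be made concrete with the critique-and-revision loop at the heart of its supervised phase: the model drafts an answer, critiques the draft against a written principle, then rewrites it, and the revised answers become fine-tuning data. Below is a minimal sketch assuming the public `anthropic` Python SDK; the principle text, prompt wording, helper names (`ask`, `critique_and_revise`), and model alias are illustrative placeholders, not Anthropic's actual constitution or training code.

```python
# Sketch of Constitutional AI's critique-and-revision step, assuming the public
# `anthropic` Python SDK. Principle, prompts, and model alias are placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

PRINCIPLE = "Choose the response that is most helpful, honest, and harmless."


def ask(model: str, text: str) -> str:
    """Send a single-turn prompt and return the model's text reply."""
    reply = client.messages.create(
        model=model,
        max_tokens=512,
        messages=[{"role": "user", "content": text}],
    )
    return reply.content[0].text


def critique_and_revise(prompt: str, draft: str,
                        model: str = "claude-3-5-sonnet-latest") -> str:
    """Critique a draft answer against one constitutional principle, then revise it.

    In the real training pipeline the revised answers (and AI preference labels)
    become fine-tuning data; this only demonstrates the inner loop.
    """
    critique = ask(model, (
        f"Principle: {PRINCIPLE}\n\n"
        f"User prompt: {prompt}\n\n"
        f"Draft response: {draft}\n\n"
        "Point out any way the draft violates the principle."
    ))
    revision = ask(model, (
        f"Principle: {PRINCIPLE}\n\n"
        f"User prompt: {prompt}\n\n"
        f"Draft response: {draft}\n\n"
        f"Critique: {critique}\n\n"
        "Rewrite the draft so that it fully satisfies the principle."
    ))
    return revision
```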
Last Updated: March 22, 2026