CrowsEye Intelligence Dossier

Anthropic / Claude

AI safety research company & creator of Claude — the model that refuses to help you build a bomb but will happily write your novel. Now locked in a standoff with the Pentagon.

Active Entity 📍 San Francisco, CA 🏢 Private (PBC) 👥 ~4,000 employees 📅 Founded 2021 🗓️ Dossier: March 2026

1. Company Overview

Anthropic is an AI safety company founded in 2021 by siblings Dario Amodei (CEO) and Daniela Amodei (President), both former OpenAI executives. Structured as a Public Benefit Corporation, it develops Claude — a family of large language models — with an explicit mission to build AI systems that are "reliable, interpretable, and steerable."

The company has become the second-largest pure-play AI lab behind OpenAI, with its $380 billion valuation (Feb 2026) making it one of the most valuable private companies in history. Its rapid ascent has been fueled by enterprise adoption, the breakout success of Claude Code, and a massive strategic partnership with Amazon.

Latest Valuation (Feb 2026): $380B
Revenue Run-Rate (Aug 2025): $5B+
Employees (Jan 2026): ~4,074
Series G Round (Feb 2026): $30B

2. Leadership & Founders

Name | Role | Background
Dario Amodei | CEO & Co-founder | Princeton PhD (computational neuroscience). Former VP of Research at OpenAI. Left in 2020 over disagreements on safety direction. Born 1983.
Daniela Amodei | President & Co-founder | Former VP of Operations at OpenAI. Oversees business ops, policy, and go-to-market.
Mike Krieger | Head of Labs (prev. CPO) | Instagram co-founder. Joined Anthropic in 2024; now leads the new "Labs" division (Jan 2026).
Tom Brown | Co-founder | Lead author of the GPT-3 paper at OpenAI. Core technical contributor.
Chris Olah | Co-founder | Pioneer in neural network interpretability. Previously at Google Brain & OpenAI.
Jared Kaplan | Co-founder | Johns Hopkins physicist. Co-authored foundational "scaling laws" research.
Key Insight: At least 7 of Anthropic's co-founders came directly from OpenAI, making it the most significant "brain drain" in AI history. The exodus was driven by disagreements over OpenAI's commercial pivot and safety practices.

3. Financials & Funding

Round | Date | Amount | Valuation | Lead Investors
Seed | 2021 | $124M | — | Jaan Tallinn, Eric Schmidt, Dustin Moskovitz
Series A | Apr 2022 | $580M | — | Sam Bankman-Fried / FTX ($500M)
Series B | May 2023 | $450M | $4.1B | Spark Capital
Series C | Sep–Nov 2023 | $2B | $18B | Amazon (initial $1.25B, total $4B commitment)
Series D | Mar 2024 | $2.75B | $18.4B | Menlo Ventures
Series E | Mar 2025 | $3.5B | $61.5B | Lightspeed Venture Partners
Series F | Aug 2025 | $13B | $183B | ICONIQ, Goldman Sachs
Series G | Feb 2026 | $30B | $380B | Coatue Management
Revenue Trajectory: $1B ARR (early 2025) → $5B+ (Aug 2025). Claude Code alone reached $2.5B annualized revenue by Feb 2026. One of the fastest-growing tech companies ever.
FTX Connection: Sam Bankman-Fried invested ~$500M for a 13.56% stake in Anthropic's early funding. After FTX's collapse, the stake became part of bankruptcy proceedings. Dario Amodei has acknowledged there were "red flags" around SBF, who was kept off the board by structuring his stake as non-voting shares.

4. Products & Models

Model Family — Claude

Model | Release | Positioning
Claude 1 | Mar 2023 | Initial launch; strong writing & analysis
Claude 2 | Jul 2023 | 100K context (extended to 200K with Claude 2.1), improved reasoning
Claude 3 (Haiku, Sonnet, Opus) | Mar 2024 | Tiered model family; Opus topped benchmarks
Claude 3.5 Sonnet | Jun 2024 | Fan favorite; best balance of cost/performance
Claude 3.5 Haiku / Sonnet v2 | Oct 2024 | Faster, cheaper, enterprise-ready
Claude 4 Sonnet / Opus | 2025 | Major capability jump; agentic coding focus
Claude Opus 4.5 | Late 2025 | "Most powerful frontier model" at launch
Claude Sonnet 4.6 | Feb 2026 | Latest; second major launch in two weeks

Key Products

Product | Description
Claude Code | Agentic CLI tool for developers (Feb 2025). $2.5B ARR by Feb 2026. Developers delegate coding tasks from the terminal. Configured via markdown docs (CLAUDE.md, AGENTS.md).
Claude.ai | Consumer-facing chatbot (free + $20/mo Pro tier). Web, iOS, Android.
Claude API | Developer platform. Powers enterprise integrations. Available on AWS Bedrock, Google Cloud Vertex AI.
Claude for Excel | Beta (late 2025). Pivot tables, charts, file uploads, quick-launch shortcut.
Agent Skills | Beta (Oct 2025). Modular skill folders Claude loads dynamically for specialized tasks.
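
The Claude API entry above says where the models are hosted but not what an integration looks like in practice. Below is a minimal sketch using Anthropic's official Python SDK; the model ID is a placeholder (exact identifiers for the Claude 4.6-era models are not given in this dossier) and the prompt is purely illustrative.

```python
# Minimal Claude API call via the official Python SDK (pip install anthropic).
# The model ID is a placeholder; substitute a current ID from Anthropic's docs.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-sonnet-4-6",  # placeholder identifier
    max_tokens=512,
    messages=[
        {"role": "user", "content": "Summarize this quarter's enterprise usage report."}
    ],
)
print(message.content[0].text)  # the reply arrives as a list of content blocks
```

The same SDK also ships AnthropicBedrock and AnthropicVertex client classes for teams routing through AWS Bedrock or Google Cloud Vertex AI instead of Anthropic's first-party endpoint.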

5. Strategic Partnerships

Amazon — The Anchor Partnership

Amazon has invested a cumulative $8 billion across multiple rounds, making it Anthropic's largest strategic investor. The partnership makes AWS Anthropic's primary cloud and training partner, commits Anthropic to AWS's custom Trainium and Inferentia chips for training and inference, and distributes Claude to enterprise customers through Amazon Bedrock.

Strategic Significance: The Amazon partnership gives Anthropic access to massive compute without building its own data centers, while giving Amazon a competitive answer to Microsoft's OpenAI deal.

Other Key Partnerships

Partner | Nature
Google Cloud | Claude available on Vertex AI; Google invested $2B (2023)
Salesforce | Investor + enterprise integration partner
Cisco | Investor; enterprise security/networking AI use cases

6. Competitive Landscape

Dimension | Anthropic | OpenAI | Google DeepMind
Flagship Model | Claude Sonnet 4.6 | GPT-5 / o3 | Gemini 2.5 Pro
Valuation | $380B (private) | ~$300B (transitioning to for-profit) | Div. of Alphabet ($2T+)
Revenue (est.) | $5B+ ARR | ~$10B ARR | Single-digit billions
Enterprise Adoption | 12.1% market share | 36.5% market share | ~1% direct
Key Backer | Amazon ($8B) | Microsoft ($13B) | Alphabet (parent)
Structure | Public Benefit Corp | Transitioning from nonprofit | Alphabet subsidiary
Safety Stance | Safety-first (refusing Pentagon) | Pragmatic (took Pentagon deal) | Internal review boards
Developer Favorite | Coding & writing tasks | Broad general use | Multi-modal, search
The AI Arms Race (2026): All three labs are now releasing major models every few weeks. Anthropic's edge is developer loyalty (Claude Code is beloved) and its safety brand — though the Pentagon fight is testing whether safety principles survive government pressure.

7. Controversies & Risks

🔴 Pentagon Standoff (Feb–Mar 2026)

The most significant controversy in Anthropic's history. The company refused Pentagon demands to allow Claude's use in autonomous weapons systems and mass domestic surveillance. Key timeline:

Date | Event
Feb 2026 | Pentagon demands Anthropic void TOS provisions barring military AI use in autonomous weapons & surveillance
Feb 25 | Anthropic rejects "final offer," requesting two assurances: no autonomous weapons, no mass surveillance
Feb 27 | Pentagon cancels Anthropic contract. Defense Sec. Hegseth designates Anthropic a "supply chain risk"
Feb 27 | Trump orders all U.S. government agencies to "immediately cease" using Anthropic technology
Feb 28 | OpenAI announces it will take over the Pentagon's AI contract
⚠️ Critical Risk: Being designated a "supply chain risk" forces any company doing business with the U.S. military to cut ties with Anthropic. This could create cascading effects across the defense-industrial supply chain and affect enterprise deals.

🟡 FTX & Sam Bankman-Fried

SBF's $500M investment (~13.56% stake) came from FTX funds. After FTX's collapse, the stake entered bankruptcy proceedings. While Anthropic kept SBF off the board with non-voting shares, the association was a PR liability.

🟡 "Safety Washing" Accusations

Critics argue Anthropic uses safety rhetoric as a marketing differentiator while pursuing the same capability races as competitors. The company's own employees have occasionally voiced concerns about the pace of deployment.

🟡 Nvidia / Davos Feud

At Davos (Jan 2026), Dario Amodei made waves criticizing Nvidia and warning AI could take "half of jobs." Nvidia CEO Jensen Huang fired back: "Don't do it in a dark room and tell me it's safe."

8. Public & Reddit Sentiment

⚠️ Sentiment data is estimated based on aggregated community discussions and is not scientifically sampled. It reflects online conversation trends, not a representative survey.

Reddit Sentiment Analysis (Feb–Mar 2026)

r/singularity: 68%
r/technology: 55%
r/news: 72%
r/programming: 78%
r/OutOfTheLoop: 52%

Sentiment Themes

Theme | Direction | Notes
Pentagon refusal | ▲ Strongly Positive | "Hard resisting because they can see more than a few feet in front of their nose." Massive public goodwill for refusing autonomous weapons.
Claude Code quality | ▲ Positive | Developers consistently praise Claude for coding & writing. "Best coding assistant" is a common refrain.
Safety sincerity | ◆ Mixed | Some believe it's genuine; others call it marketing. "Our economic system ultimately corrupts any player that gets far enough in the game."
Corporate sustainability | ▼ Concerned | Fear that the Pentagon blacklist could threaten enterprise revenue & government contracts long-term.
vs OpenAI | ▲ Favorable | Contrast with OpenAI immediately taking the Pentagon deal viewed very positively for Anthropic.

9. Safety Philosophy

Anthropic was founded on the premise that AI safety should be built into frontier labs, not just studied from the outside. Key pillars:

Constitutional AI (CAI)

Models trained against a written "constitution" of principles rather than purely human feedback. Reduces reliance on human labelers.
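
To make the recipe concrete, here is a heavily simplified sketch of the critique-and-revise loop described in the Constitutional AI paper. It is not Anthropic's code: `generate` stands in for any chat-capable model call, and the two principles are paraphrases, not the actual constitution.

```python
# Conceptual sketch of the supervised phase of Constitutional AI (not Anthropic's code).
# `generate` is any callable mapping a prompt string to a model completion string.

PRINCIPLES = [
    "Choose the response that is most helpful, honest, and harmless.",
    "Avoid responses that would assist with dangerous or illegal activity.",
]

def constitutional_revision(prompt: str, generate) -> str:
    """Draft a response, then critique and revise it against each principle."""
    draft = generate(prompt)
    for principle in PRINCIPLES:
        critique = generate(
            f"Principle: {principle}\n"
            f"Prompt: {prompt}\nResponse: {draft}\n"
            "Critique the response according to the principle."
        )
        draft = generate(
            f"Original response: {draft}\nCritique: {critique}\n"
            "Rewrite the response so it addresses the critique."
        )
    return draft  # revised outputs become supervised fine-tuning targets
```

A second, reinforcement-learning phase then replaces human preference labels with AI comparisons against the same principles, which is what reduces the reliance on human labelers noted above.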

Responsible Scaling Policy

AI Safety Levels (ASL-1 through ASL-4). Each capability threshold triggers mandatory safety evaluations before deployment.
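
As a minimal sketch of what such a threshold-gated release could look like in code, the snippet below expresses an ASL-style deployment gate. The data structures and evaluation names are this dossier's illustration, loosely paraphrased from public RSP summaries, not Anthropic's actual tooling or thresholds.

```python
# Illustrative ASL-style deployment gate (hypothetical; not Anthropic's tooling).
from dataclasses import dataclass

@dataclass
class SafetyLevel:
    name: str
    required_evals: tuple[str, ...]

# Paraphrased, illustrative thresholds only.
ASL = {
    2: SafetyLevel("ASL-2", ("misuse red-teaming", "security review")),
    3: SafetyLevel("ASL-3", ("CBRN uplift evals", "autonomy evals", "hardened security")),
}

def may_deploy(model_asl: int, passed_evals: set[str]) -> bool:
    """A model ships only if every evaluation its safety level mandates has passed."""
    level = ASL.get(model_asl)
    if level is None:
        return False  # undefined or higher levels block deployment by default
    return all(ev in passed_evals for ev in level.required_evals)

print(may_deploy(3, {"CBRN uplift evals", "autonomy evals"}))  # False: security work incomplete
```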

Long-Term Benefit Trust

Independent trust with power over Anthropic's board composition. Members: Neil Buddy Shah, Kanika Bahl, Zach Robinson, Richard Fontaine (as of Oct 2025).

Interpretability Research

Led by co-founder Chris Olah. Pioneering work on understanding what's happening inside neural networks — considered industry-leading.

The Pentagon Test: Anthropic's refusal to allow Claude in autonomous weapons — at the cost of being blacklisted by the U.S. government — represents the most significant real-world test of any AI company's stated safety principles. Whether the stance holds long-term remains to be seen.

10. CrowsEye Score

Overall CrowsEye Score: 82/100
Market Position 88/100
#2 pure-play AI lab globally. $380B valuation, $5B+ revenue, explosive growth trajectory. Enterprise adoption accelerating. Slight dock for OpenAI's larger market share.
Technology & Product 90/100
Frontier models competitive with or exceeding GPT-5 on key benchmarks. Claude Code is a breakout hit. Rapid release cadence. Industry-leading interpretability research.
Risk Profile 62/100
Pentagon blacklist is a serious near-term risk. "Supply chain risk" designation could cascade. FTX baggage. Massive burn rate requires continued fundraising. Safety stance is principled but commercially costly.
Sentiment & Trust 85/100
Strong developer loyalty. Pentagon stance generated massive public goodwill. Reddit sentiment heavily positive. "Safety washing" concerns exist but are minority view. Favorably contrasted against OpenAI.
CrowsEye Assessment: Anthropic occupies a rare position — a company whose principles are being stress-tested in real time by the most powerful government on Earth. Its technology is world-class, its growth is extraordinary, and its public trust is at an all-time high. The Pentagon standoff is simultaneously its greatest risk and its most powerful brand-building event. The next 90 days will determine whether principled AI safety is economically viable or just a luxury of peacetime.


Last Updated: March 22, 2026
