Anthropic

Founding Date

Dec 1, 2020

Headquarters

San Francisco, California

Total Funding

$27.3B

Status

Private

Stage

Series F

Employees

2555

Careers at Anthropic

Memo

Updated

October 3, 2025

Reading Time

60 min

Thesis

Historically, there have been two major tech-driven transformations, advancements “powerful enough to bring us into a new, qualitatively different future”: the agricultural and industrial revolutions. These “general-purpose” technologies fundamentally altered the way economies worked. In the short time that computers have existed, machines have already exceeded human intelligence in some respects, leading some to believe that artificial intelligence (AI) is the next general-purpose transformation.

Like the shift from hunter-gatherers to farming or the rise of machine manufacturing, AI is projected to have a significant impact across all parts of the economy. In particular, the rise of generative AI has led to massive breakthroughs in task automation across the economy at large. Well-funded research organizations such as OpenAI and Cohere have built proprietary natural language processing (NLP) models that are functional for enterprises. These “foundation models” provide a basis for hundreds, if not thousands, of individual developers and institutions to build new AI applications.

However, with rising competition between foundation model organizations and the general growth in AI adoption, ethical concerns about the rapid development and harmful use cases of AI systems have been raised. In March 2023, over 1K people, including OpenAI co-founder Elon Musk, signed an open letter calling for a six-month “AI moratorium”, claiming that many AI organizations were “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

In February 2024, Musk sued OpenAI and chief executive Sam Altman over “breaching a contract by putting profits and commercial interests in developing artificial intelligence ahead of the public good.” Though the lawsuit was dropped in June 2024, Musk’s allegations underscore the delicate balance between speed and ethics in the development of AI. These concerns have only intensified as models have grown more capable, as seen with the release of Anthropic’s Claude 4 family in May 2025, the first models to require an AI Safety Level 3 (ASL-3) classification. Both the remarkable capabilities being achieved and the introduction of new safety protocols underscore the growing need for responsible development practices.

Amongst foundation model developers, Anthropic has positioned itself as a company with a particular focus on AI safety and describes itself as building “AI research and products that put safety at the frontier.” Founded by engineers who quit OpenAI due to tension over ethical and safety concerns, Anthropic has developed its own method to train and deploy “Constitutional AI”, or large language models (LLMs) with embedded values that can be controlled by humans. Since its founding, its goal has been to deploy “large-scale AI systems that are steerable, interpretable, and robust”, and it has continued to push towards a future powered by responsible AI.

Weekly Newsletter

Subscribe to the Research Rundown

Founding Story

Anthropic was founded in 2021 by ex-OpenAI VPs and siblings Dario Amodei (CEO) and Daniela Amodei (President). Prior to launching Anthropic, Dario was OpenAI’s VP of Research, while Daniela was its VP of Safety & Policy.

In 2016, Dario Amodei, along with several coworkers at Google, co-authored “Concrete Problems in AI Safety”, a paper discussing the inherent unpredictability of neural networks. The paper introduced the notion of side effects and unsafe exploration of the capabilities of different models. Many of the issues discussed stem from a lack of “mechanistic interpretability”, or an understanding of the inner workings of complex models. The co-authors, many of whom would later join OpenAI alongside Dario, sought to communicate the safety risks of rapidly scaling models, and that thinking would eventually become the foundation of Anthropic.

In 2019, OpenAI announced that it would be restructuring from a nonprofit to a “capped-profit” organization, a move intended to strengthen its ability to deliver returns for investors. It received a $1 billion investment from Microsoft later that year to continue its development of benevolent artificial intelligence, as well as to power AI supercomputing services on Microsoft Azure’s cloud platform. Oren Etzioni, CEO of the Allen Institute for AI, commented on this shift by saying:

“[OpenAI] started out as a non-profit, meant to democratize AI. Obviously when you get [$1 billion] you have to generate a return. I think their trajectory has become more corporate.”

OpenAI’s restructuring ultimately generated internal tension regarding its direction as an organization with the intention to “build safe AGI and share the benefits with the world.” It was reported that “the schism followed differences over the group’s direction after it took a landmark $1 billion investment from Microsoft in 2019.” In particular, the fear of “industrial capture” or OpenAI’s monopolization of the AI space loomed over many within OpenAI, including Jack Clark, former policy director, and Chris Olah, mechanistic interpretability engineer.

Anthropic CEO Dario Amodei attributed Anthropic’s eventual split from OpenAI to concerns over AI safety and described the decision to leave OpenAI to found Anthropic this way:

“There was a group of us within OpenAI, that in the wake of making GPT-2 and GPT-3, had a kind of very strong focus belief in two things... One was the idea that if you pour more compute into these models, they’ll get better and better and that there’s almost no end to this... The second was the idea that you needed something in addition to just scaling the models up, which is alignment or safety. You don’t tell the models what their values are just by pouring more compute into them. And so there were a set of people who believed in those two ideas. We really trusted each other and wanted to work together. And so we went off and started our own company with that idea in mind.”

Dario Amodei left OpenAI in December 2020, and 14 other researchers, including his sister Daniela Amodei, eventually left to join Anthropic as well. The co-founders structured Anthropic as a public benefit corporation, in which the board has a “fiduciary obligation to increase profits for shareholders,” but the board can also legally prioritize its mission of ensuring that “transformative AI helps people and society flourish.” This gives Anthropic more flexibility to pursue AI safety and ethics over increasing profits.

In November 2024, Amodei pointed out that Anthropic's team grew from approximately 300 to 950 people in less than a year, before intentionally slowing hiring to maintain quality. "Talent density beats talent mass," Amodei said. "We've focused on senior hires from top companies and theoretical physicists who learn fast." By early 2025, the company had 1,097 employees, representing a 471% increase since 2022.

In May 2024, Anthropic made several notable leadership hires, including Krishna Rao as its first CFO, Instagram co-founder Mike Krieger as its Head of Product, and Jan Leike as the leader of a new safety team. In 2025, Anthropic hired its first Chief Commercial Officer, Paul Smith, and its first managing director of international, Chris Ciauri, a former Google Cloud executive.

Product

Foundation Model Research

Since its founding, Anthropic has dedicated its resources to building “large-scale AI systems that are steerable, interpretable, and robust”, with an emphasis on aligning models with human values so that they are “helpful, honest, and harmless”. As of October 2025, Anthropic’s three research teams were Interpretability, Alignment, and Societal Impacts, underscoring the company’s commitment to developing general AI assistants that abide by human values. The company has published over 60 papers across these research areas.

One of Anthropic’s first papers, “Predictability and Surprise in Large Generative Models,” was published in February 2022 and investigated emergent capabilities in LLMs. These capabilities can “appear suddenly and unpredictably as model size, computational power, and training data scale up.” The paper found that although the accuracy of models increased consistently with the number of model parameters, accuracy on some tasks (e.g., three-digit addition) surprisingly seemed to skyrocket upon reaching certain parameter-count thresholds. The authors noted that developers were unable to precisely predict which new abilities would emerge or improve, which could lead to unintended consequences, especially if models are developed solely for economic gain or in the absence of policy intervention.

Line graphs showing how the scale of AI models improves general performance

Source: Anthropic

In April 2022, Anthropic introduced a methodology that uses preference modeling and reinforcement learning from human feedback (RLHF) to train “helpful and harmless” AI assistants. While RLHF had been studied in OpenAI’s InstructGPT paper and Google’s LaMDA paper, both published in January 2022, Anthropic was the first to explore “online training,” in which the model is updated during the [crowdworker] process, and to measure the “tension between helpfulness and harmfulness.” In RLHF, models engage in open-ended conversations with human crowdworkers, generating multiple responses for each input prompt. The human then chooses the response they find most helpful and/or harmless, rewarding the model for either trait over time.

Interface that crowdworkers use to interact with the models

Source: Anthropic
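
Mechanically, the preference model at the core of this process can be sketched in a few lines. The snippet below is an illustrative simplification in PyTorch, not Anthropic’s implementation: a learned scorer is trained so that the crowdworker-chosen response outscores the rejected one, and the resulting model later supplies the reward signal for reinforcement learning.

```python
# Illustrative preference-model training step (a sketch, not Anthropic's code).
# A real pipeline would use a language model to map transcripts to vectors;
# here, random tensors stand in for encoded (prompt, response) pairs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PreferenceModel(nn.Module):
    """Scores a (prompt, response) pair; higher = more helpful/harmless."""
    def __init__(self, hidden_dim: int = 768):
        super().__init__()
        self.scorer = nn.Linear(hidden_dim, 1)

    def forward(self, embedding: torch.Tensor) -> torch.Tensor:
        return self.scorer(embedding).squeeze(-1)

model = PreferenceModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

chosen = torch.randn(8, 768)    # embeddings of crowdworker-preferred responses
rejected = torch.randn(8, 768)  # embeddings of the responses they passed over

# Bradley-Terry-style objective: the chosen response should outscore the rejected.
loss = -F.logsigmoid(model(chosen) - model(rejected)).mean()
loss.backward()
optimizer.step()
```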

Ultimately, such alignment efforts made the RLHF models equally or potentially more capable than plain language models on zero-shot and few-shot tasks (i.e., tasks with no prior examples or with only a limited number of examples).

Among other findings, the authors emphasized the “tension between helpfulness and harmlessness”. For example, a model trained to always give accurate responses becomes harmful when fed hazardous prompts. Conversely, a model that answers “I cannot answer that” to potentially dangerous prompts, including merely opinionated ones, is not harmful but is also not helpful.

Line graph showing RLHF model performance on zero-shot and few-shot NLP tasks

Source: Anthropic

Anthropic has since used its findings to push language models toward desired outcomes such as avoiding social biases, adhering to ethical principles, and self-correcting. In December 2023, the company published a paper titled “Evaluating and Mitigating Discrimination in Language Model Decisions.” The paper studied how language models make decisions in various use cases, including loan approvals, permit applications, and admissions, as well as effective mitigation strategies for discrimination, such as appending non-discrimination statements to prompts and encouraging out-loud reasoning.

In May 2024, Anthropic published a paper on the inner workings of one of its LLMs, aiming to improve the interpretability of AI models and work towards safe, understandable AI. Using a type of dictionary-learning algorithm called a sparse autoencoder, the authors were able to extract interpretable features from the LLM, which could be used to steer the model and to identify potentially dangerous or harmful features. The paper lays the groundwork for further research into interpretability and safety strategies.
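
To make the dictionary-learning idea concrete, the following is a minimal sparse autoencoder sketch in PyTorch. The dimensions, the L1 penalty coefficient, and the random stand-in activations are illustrative assumptions; the published work operates on real model activations at far larger scale.

```python
# Minimal sparse autoencoder of the kind used for dictionary learning in
# interpretability research (an illustrative sketch, not the paper's setup).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAutoencoder(nn.Module):
    def __init__(self, activation_dim: int, dict_size: int):
        super().__init__()
        self.encoder = nn.Linear(activation_dim, dict_size)  # feature codes
        self.decoder = nn.Linear(dict_size, activation_dim)  # reconstruction

    def forward(self, acts: torch.Tensor):
        features = torch.relu(self.encoder(acts))  # sparse, non-negative codes
        return self.decoder(features), features

sae = SparseAutoencoder(activation_dim=512, dict_size=4096)
acts = torch.randn(64, 512)  # stand-in for residual-stream activations
recon, features = sae(acts)

# Reconstruction loss plus an L1 penalty that drives most features to zero,
# so each feature that does fire tends to be individually interpretable.
l1_coeff = 1e-3  # assumed value for illustration
loss = F.mse_loss(recon, acts) + l1_coeff * features.abs().mean()
```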

In February 2025, Anthropic introduced the Anthropic Economic Index, an ongoing series of reports measuring the impact of AI on labor markets and the wider economy over time. The initial report analyzed millions of anonymized conversations with Anthropic’s chatbot Claude and broke down usage across different categories of users, tasks, and professions.

Constitutional AI

In December 2022, Anthropic released a novel approach to training helpful and harmless AI assistants. Labeled “Constitutional AI”, the process involves (1) training a model via supervised learning to abide by certain ethical principles inspired by various sources, including the UN’s Universal Declaration of Human Rights, Apple’s Terms of Service, and Anthropic’s own research, (2) creating a similarly-aligned preference model, and (3) using the preference model to judge the responses of the initial model, which gradually improves its outputs through reinforcement learning.

Diagram showing Anthropic’s constitutional AI approach to training models

Source: TechCrunch
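
As a rough illustration of step (1), the supervised phase generates its training data through a critique-and-revision loop. The sketch below mimics that loop using Anthropic’s Python SDK; the single principle, the prompts, and the one-pass structure are simplifying assumptions rather than the paper’s exact procedure, and the model name is only an example.

```python
# Simplified critique-and-revision loop from the supervised phase of
# Constitutional AI (an illustrative sketch, not Anthropic's pipeline).
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set
PRINCIPLE = "Choose the response that is most helpful, honest, and harmless."

def ask(prompt: str) -> str:
    msg = client.messages.create(
        model="claude-sonnet-4-20250514",  # example model name
        max_tokens=512,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text

user_prompt = "Explain how medication dosages are calculated."
draft = ask(user_prompt)
critique = ask(f"Critique this response against the principle: {PRINCIPLE}\n\n"
               f"Prompt: {user_prompt}\nResponse: {draft}")
revision = ask(f"Rewrite the response to address the critique.\n\n"
               f"Response: {draft}\nCritique: {critique}")
# (user_prompt, revision) pairs would then be used for supervised fine-tuning.
```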

CEO Dario Amodei noted that this Constitutional AI model could be trained along any set of chosen principles, saying:

“I’ll write a document that we call a constitution. […] what happens is we tell the model, OK, you’re going to act in line with the constitution. We have one copy of the model act in line with the constitution, and then another copy of the model looks at the constitution, looks at the task, and the response. So if the model says, be politically neutral and the model answered, I love Donald Trump, then the second model, the critic, should say, you’re expressing a preference for a political candidate, you should be politically neutral. […] The AI grades the AI. The AI takes the place of what the human contractors used to do. At the end, if it works well, we get something that is in line with all these constitutional principles.”

Graph showing harmlessness versus helpfulness Elo scores computed from crowdworkers’ model comparisons

Source: Anthropic

Anthropic’s development of Constitutional AI has been seen by some as a promising breakthrough, enabling its commercial products, such as Claude, to follow concrete and transparent ethical guidelines. As Anthropic puts it: “AI models will have value systems, whether intentional or unintentional. One of our goals with Constitutional AI is to make those goals explicit and easy to alter as needed.”

Model Context Protocol

In November 2024, Anthropic launched the Model Context Protocol (MCP), an open standard that addresses the "N×M problem" of connecting AI systems with data sources. MCP provides a universal protocol allowing any AI application to connect with any data source through standardized interfaces, eliminating the need for custom integrations.
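
As an illustration of how little glue code the standard requires, the following is a minimal MCP server built with the FastMCP helper from the official Python SDK; the ticket-lookup tool is a made-up example.

```python
# Minimal MCP server exposing a single tool (the tool itself is hypothetical).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("ticket-lookup")

@mcp.tool()
def get_ticket_status(ticket_id: str) -> str:
    """Return the status of a support ticket (stubbed for illustration)."""
    return f"Ticket {ticket_id}: open"

if __name__ == "__main__":
    mcp.run()  # serves MCP over stdio, so any MCP-capable client can connect
```

Once registered with an MCP-capable client, the model can discover and call `get_ticket_status` without any integration code specific to that client.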

As of October 2025, MCP has been adopted by companies across the industry, including competitors to Anthropic. OpenAI officially adopted MCP in March 2025, integrating it across ChatGPT, the Agents SDK, and the Responses API. Microsoft partnered with Anthropic to develop an official C# SDK and added native MCP support to Copilot Studio. Google DeepMind confirmed MCP support in upcoming Gemini models, with CEO Demis Hassabis describing it as "rapidly becoming an open standard for the AI agentic era."

Thousands of MCP server implementations are available on GitHub, and Anthropic maintains pre-built servers for popular enterprise systems, including Google Drive, Slack, GitHub, PostgreSQL, and Stripe. Companies like Block, Apollo, Replit, Codeium, and Sourcegraph have integrated MCP into their platforms.

However, security researchers identified multiple MCP vulnerabilities in April 2025, including prompt injection attacks, tool-permission exploits, and lookalike tool replacement risks; hardening against these remains an active area of development as of October 2025.

Claude Code

Claude Code, a terminal-based agentic coding assistant, was released to the general public in May 2025. Unlike IDE-plugin competitors, Claude Code operates directly in developers' terminals, integrating with existing development workflows without requiring interface changes.

As of October 2025, Anthropic has added several features to Claude Code, introducing custom agents and subagents in July 2025 for specialized task automation, background command execution for development servers and monitoring, and comprehensive MCP integration for connecting with external data sources and tools. The platform includes VS Code and JetBrains extensions that display Claude's proposed edits directly inline within familiar editor interfaces.

Notable features include the Claude Code SDK for building custom agents and GitHub integration, enabling Claude to respond to PR feedback, fix CI errors, and modify code directly in repositories. The tool supports automated PR reviews with customizable prompts, though users report the need for careful configuration to avoid verbose feedback.

Claude Code adoption increased following Anthropic's restrictions on third-party access to Claude models, which led many developers to migrate from tools like Windsurf. Anthropic leads in enterprise adoption of AI coding tools as of October 2025, with companies like Intercom reporting that Claude Code lets them build applications they previously lacked bandwidth for, while Block uses it to improve code quality in its internal agent systems.

Claude

Claude is Anthropic’s flagship AI model family, first launched in closed alpha testing in April 2022. As of October 2025, Claude can be accessed (1) directly on Anthropic’s platform as a chatbot, (2) via Anthropic’s API, or (3) through cloud infrastructure partners, such as Amazon Bedrock and Google Cloud Vertex AI.

Although Claude 1’s parameter count of 430 million was less than GPT-3’s 175 billion parameters, its context window of 9K tokens was greater than even GPT-4’s (8K tokens, or roughly 6K words). Claude’s capabilities span text creation and summarization, search, coding, and more. During its closed alpha phase, access to Claude was limited to key partners such as Notion, Quora, and DuckDuckGo. Amodei envisions eventually developing Claude into a “country of geniuses in a datacenter”, “capable of solving very difficult problems very fast.”

Example of conversational AI with Claude

Source: Anthropic

In March 2023, Claude was released for public use in the UK and the US via a limited-access API. Anthropic claimed that Claude’s answers were more helpful and less harmful than those of other chatbots. It was also one of the first publicly available models capable of parsing PDF documents. Autumn Besselman, head of People and Comms at Quora, reported: “Users describe Claude’s answers as detailed and easily understood, and they like that exchanges feel like natural conversation”.

In July 2023, a new version of Claude, Claude 2, was released in the form of a new beta website. This version was designed to offer better conversational abilities, deeper context understanding, and improved moral behavior compared to its predecessor. Claude 2’s parameter count doubled from the previous iteration to 860 million, while its context window increased significantly to 100K tokens (approximately 75K words), with a theoretical limit of 200K.

Description of Claude 3 offering three models: Haiku, Sonnet, and Opus.

Source: Anthropic

In March 2024, Anthropic announced the Claude 3 family, offering three models: Haiku, Sonnet, and Opus. The three models trade off performance and speed: designed for lightweight tasks, Haiku is the cheapest and fastest model; Opus is the slowest but highest-performing and most capable model; and Sonnet falls in between the two.

Claude 3 had larger context windows than most other players in the industry, as all three models offered windows of 200K tokens (which Anthropic claimed is “the equivalent of a 500-page book”) and a theoretical limit of 1 million. Compared to earlier iterations of Claude, these models generally had faster response times, lower refusal rates of harmless requests, higher accuracy rates, and fewer biases in responses. Claude 3 models could process different visual formats and achieve near-perfect recall on long inputs. In terms of reasoning, all three models outperformed GPT-4 on math & reasoning, document Q&A, and science diagram benchmarks.

Comparison of the Claude 3 models to their peers on multiple benchmarks of capability

Source: Anthropic

Claude 3 was also notable because it was the first model family with “character training” in its fine-tuning process. Researchers aimed to train certain traits, such as “curiosity, open-mindedness, and thoughtfulness” into the model, as well as traits that reinforce to the model that, as an AI, it lacks feelings and memory of its conversations. In the training process, which was a version of Constitutional AI training, researchers had Claude generate responses based on certain character traits and then “[rank] its own responses… based on how well they align with its character.” This allowed researchers to “teach Claude to internalize its character traits without the need for human interaction or feedback.”

In July 2024, researchers expanded on Claude 3 by releasing Claude 3.5 Sonnet and announcing plans to release Claude 3.5 Opus and Haiku later in the year. Anthropic reported that “Claude 3.5 Sonnet [set] new industry benchmarks for graduate-level reasoning (GPQA), undergraduate-level knowledge (MMLU), and coding proficiency (HumanEval),” outperforming Claude 3 Opus and GPT-4o on multiple tasks. Claude 3.5 Haiku was released in October 2024, positioned as a lightweight model designed for fast, low-cost inference while maintaining performance comparable to Claude 3 Opus on many standard benchmarks.

In February 2025, Anthropic released Claude 3.7 Sonnet, introducing hybrid reasoning capabilities that allow users to choose between rapid responses and extended, step-by-step thinking. This model integrates both capabilities into a single framework, with users able to control how long the model "thinks" about a query, balancing speed and accuracy based on their needs. Claude 3.7 set new performance records on academic and industry benchmarks, including SWE-bench for coding and TAU-bench for task execution.

Comparison of Claude 3.7 Sonnet to its peers on multiple benchmarks of capability

Source: Anthropic
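
In the API, hybrid reasoning is exposed as an optional thinking budget on an otherwise ordinary request. The sketch below uses Anthropic’s Python SDK; the model name, budget, and prompt are examples, not recommendations.

```python
# Requesting extended thinking alongside a normal response (a sketch).
import anthropic

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-3-7-sonnet-20250219",  # example model name
    max_tokens=2048,
    thinking={"type": "enabled", "budget_tokens": 1024},  # omit for rapid mode
    messages=[{"role": "user", "content": "How many primes are below 100?"}],
)
for block in response.content:
    if block.type == "thinking":
        print("[thinking]", block.thinking[:200])  # step-by-step reasoning
    elif block.type == "text":
        print(block.text)  # the final answer
```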

In May 2025, Anthropic released the Claude 4 family, representing its most significant technical advancement and the first models to require AI Safety Level 3 (ASL-3) classification due to their advanced capabilities and potential risks.

Claude Opus 4, called the “world’s best coding model” by Anthropic, achieved new high scores on SWE-bench (72.5%) and Terminal-bench (43.2%). The model demonstrated unprecedented sustained performance, working continuously for several hours on complex tasks requiring thousands of steps. Cursor called the model state-of-the-art for coding, Replit reported dramatic improvements on complex multi-file changes, and Rakuten validated its capabilities with a demanding open-source refactor that ran independently for 7 hours.

Claude Sonnet 4, another Claude 4 family model, achieved 72.7% on SWE-bench, slightly outperforming Opus while offering significant cost and speed advantages. GitHub selected Sonnet 4 as the model powering the new coding agent in GitHub Copilot, while iGent reported substantially improved problem-solving, with navigation errors reduced from 20% to near zero.

Both models are hybrid reasoning models, providing either instant responses or extended thinking for complex problems. They support 200K-token context windows with 64K max output, and key innovations include parallel tool execution, extended thinking with tool use, and enhanced memory capabilities through file-based context storage. Notably, both models are 65% less likely to engage in reward-hacking behavior than Sonnet 3.7, avoiding shortcuts and loopholes in task completion.

Thinking summaries use a smaller model to condense lengthy thought processes (needed only about 5% of the time), with a Developer Mode available for users requiring raw chains of thought for advanced prompt engineering.

Comparison of the Claude Opus and Sonnet models to their peers on multiple benchmarks of capability

Source: Anthropic

In August 2025, Anthropic released Claude Opus 4.1, achieving 74.5% on SWE-bench Verified and improvements in agentic tasks, real-world coding, and reasoning. GitHub reported improvements across most capabilities, with particular gains in multi-file code refactoring, while Windsurf measured a full standard deviation improvement equivalent to the leap from Sonnet 3.7 to Sonnet 4.

In September 2025, Anthropic released Claude Sonnet 4.5, an advancement of its smaller, lower-cost model. Sonnet 4.5 achieved a new all-time high score of 77.2% on SWE-bench and a new high of 61.4% on OSWorld, a benchmark for AI model completion of real-world computer-based tasks (compared to a previous high of 42.2%). Anthropic also claimed that Sonnet 4.5 had the lowest misalignment scores of any model it tested, meaning the model was least inclined towards behaviors like “sycophancy, deception, power-seeking, and the tendency to encourage delusional thinking”.

Comparison of the Claude Sonnet and Opus models to their peers on multiple benchmarks of capability

Source: Anthropic

Anthropic also introduced new core platform capabilities that expand Claude’s functionality across use cases:

  • Artifacts: A persistent output pane within Claude’s interface, designed for viewing and interacting with Claude-generated content such as images, PDFs, spreadsheets, product mockups, or code. Artifacts remain accessible across sessions and are optimized for both desktop and mobile, allowing users to iterate on outputs without losing context.

  • Computer Use: A sandboxed desktop environment that Claude can see and control, enabling the model to perform multi-step tasks across simulated software applications. This includes opening files, navigating menus, using system tools, and coordinating multiple applications in sequence — useful for automating traditional enterprise workflows.

  • Web Search: A built-in live browsing capability that allows Claude to access the internet to answer time-sensitive or obscure questions. When activated, Claude can retrieve up-to-date information, summarize web pages, and cite sources, enhancing factual accuracy and transparency in its responses.

  • Model Context Protocol (MCP): An open framework that enables Claude to access and reason over organization-specific data across multiple tools. Using MCP, enterprises can grant Claude secure, fine-grained access to resources such as internal documentation, GitHub issues, Notion databases, or Slack threads. This enables Claude to be a more context-aware assistant capable of navigating real work environments.

  • Voice Conversations: A capability that entered beta in 2025, enabling users to interact with Claude through spoken conversation on mobile platforms.

Dialog of a user asking Claude to build a scatter plot of data

Source: Anthropic

As of October 2025, Claude is available to individual users through three plans: Free ($0/month), Pro ($20/month), and Max ($100-200/month). Paid subscriptions allow higher usage limits and early access to advanced features. Users can also interact with Claude through mobile apps: in 2024, Anthropic launched Claude apps for iOS and Android, a year after OpenAI released its ChatGPT apps. Anthropic also offers developers the ability to build with Claude APIs, charging based on the number of input and output tokens. Claude models are also available on Amazon Bedrock and Google Cloud’s Vertex AI, where they can be used to build custom AI applications. As Claude has advanced in its helpfulness, Anthropic has remained committed to limiting its harmfulness.

In an interview in August 2023, Dario Amodei noted that:

“A mature way to think about these things is not to deny that there are any costs, but to think about what the costs are and what the benefits are. I think we’ve been relatively responsible in the sense that we didn’t cause the big acceleration that happened late last year and at the beginning of this year.”

In September 2023, the company published its Responsible Scaling Policy, a 22-page document that defines new safety and security standards for various model sizes. In July 2024, Anthropic announced a new initiative for soliciting and funding third-party evaluations of its AI models to abide by its Responsible Scaling Policy.

Past Models

Foundation model companies typically release new versions of their models over time, both iterations of existing models (e.g., OpenAI’s GPT-3.5, GPT-4, GPT-4o) and models with different focuses (e.g., GPT for language, DALL·E for images, Sora for video). Anthropic has iterated on its own models in a similar way.

For example, Claude Instant was released alongside Claude itself in March 2023 and was described by Anthropic as a “lighter, less expensive, and much faster option”. Initially released with a context window of 9K tokens, the same as Claude’s, Claude Instant was described by some users as less conversational but equally capable compared to Claude.

In August 2023, an API for Claude Instant 1.2 was released. Building on the strengths of Claude 2, its context window expanded to 100K tokens, enough to analyze the entirety of “The Great Gatsby” within seconds. Claude Instant 1.2 also demonstrated higher proficiency across a variety of subjects, including math, coding, and reading, with a lower risk of hallucinations and jailbreaks.

Market

Customer

Anthropic’s customer segments include consumers, developers, and enterprises. Users can chat with Claude through its web platform, Claude.ai, and its mobile apps. As of the second quarter of 2025, Claude had 30 million monthly active users globally, including 2.9 million mobile app users.

The largest user demographics include users aged 25-34 (61.6% male, 38.4% female), with the United States representing 36.1% of traffic, followed by India (8.3%) and the United Kingdom (4.7%). Claude is available in 159 countries, significantly expanding from its initial US/UK launch.

According to Anthropic’s Economic Index, Claude users span a wide range of knowledge professions, from computer programmers to editors, tutors, and analysts. The largest occupational categories as a percentage of all Claude prompts include:

  • Computer & Mathematical (37.2%)

  • Arts & Media (10.3%)

  • Education & Library (9.3%)

  • Office & Administrative (7.9%)

  • Life, Physical & Social Science (6.4%)

  • Business & Financial (5.9%)

As of October 2025, Anthropic publicly lists over 130 companies as its customers. As usage deepens, Anthropic expects enterprises to become the primary revenue driver. As Amodei put it:

“Startups are reaching $50 million+ annualized spend very quickly… but long-term, enterprises have far more spend potential.”

This enterprise strategy has shown results: Anthropic tripled the number of eight- and nine-figure deals signed in 2025 compared to all of 2024, reflecting accelerated adoption across large organizations. However, this growth comes with significant revenue-concentration risk, with substantial portions of revenue tied to GitHub Copilot and Cursor.

Major new partnerships include a five-year strategic deal with Databricks, bringing Claude directly to over 10K companies. The partnership enables enterprises to build domain-specific AI agents on their own data with end-to-end governance. In September 2025, Microsoft announced a partnership with Anthropic, bringing Claude into some Microsoft Copilot applications that had previously been powered solely by OpenAI models.

In August 2025, Anthropic bundled Claude Code into enterprise plans, responding to enterprise demand for integrated coding tools.

Hierarchical breakdown of top six occupational categories by the amount of AI usage in their associated tasks

Source: Anthropic [via arXiv]

Consumers

Consumers engage with Claude through the Claude.ai platform and mobile apps for iOS and Android. Use cases range from writing and summarization to study help, coding assistance, and project planning.

Anthropic offers a freemium model:

  • Free access to Claude Sonnet 4

  • $20/month subscriptions for higher usage limits (~45 messages every 5 hours)

Features such as Claude Code (for writing/debugging code) and Artifacts (persistent visual workspaces) have expanded Claude’s utility for students, creators, and solo workers.

Developers

Anthropic offers multiple options for developers to build with Claude, including a dedicated API, Claude Code, and integrations via Amazon Bedrock and Google Cloud’s Vertex AI. The Claude API is priced on a usage-based model and gives developers access to Claude’s full model family. Claude Code, available through Claude’s main interface, allows developers to write, edit, debug, and ship code end-to-end. It supports VS Code and JetBrains extensions, displaying edits directly in familiar editor interfaces, and includes GitHub integration for automated PR responses. The platform has been adopted by major development tools including GitHub Copilot, Cursor, Replit, and Bolt.new.
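
A minimal usage-based call looks like the following; this sketch uses Anthropic’s Python SDK, with the model name as an example, and prints the token counts on which metered billing is based.

```python
# Minimal Claude API call; billing is metered on the input/output token
# counts reported back in `usage` (model name is an example).
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
msg = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Summarize the main points of RLHF."}],
)
print(msg.content[0].text)
print(f"input tokens: {msg.usage.input_tokens}, "
      f"output tokens: {msg.usage.output_tokens}")
```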

For individual power users, Anthropic also offers a high-usage “Claude Max” tier. The Max plan costs $100/month for 225 messages every 5 hours or $200/month for 900 messages every 5 hours, with higher limits for Claude Opus 4.1.

Alongside the API and Claude Code, features such as Artifacts (persistent visual workspaces) and 1M-token context windows have expanded what developers can build with Claude.

Enterprises

Anthropic has expanded Claude’s adoption among enterprises, offering both packaged SaaS plans and custom integrations. The Team plan, priced at $25/user/month, allows organizations to use Claude collaboratively, with access controls and shared context across teammates. For larger customers, the Enterprise plan unlocks custom usage tiers, fine-tuning options, SSO support, and security reviews.

A number of case studies highlight Claude’s usage across different sectors:

  • Pfizer uses Claude in Amazon Bedrock to streamline R&D processes and reduce operational costs, reportedly saving tens of millions of dollars.

  • Intuit deploys Claude to explain complex tax calculations for millions of users through its TurboTax product during peak tax season.

  • Block uses Claude 3.5 Sonnet in Databricks as its default model to power “codename goose,” an internal AI agent. The tool connects internal systems, automates SQL queries, and saves engineers across the company 8–10 hours per week.

  • Lyft has integrated Claude for customer care, reducing support resolution time by 87% and piloting new AI-powered rider and driver experiences.

  • The European Parliament adopted Claude to power “Archibot,” making 2.1 million official documents searchable and reducing research time by 80%.

  • Amazon’s Alexa+ is partially powered by Claude models, incorporating Anthropic’s jailbreaking resistance and safety tools.

  • Altana experienced a 2-10x increase in development velocity using Claude Code and Claude.

  • Rakuten uses Claude Code for autonomous coding, achieving 7 hours of continuous autonomous work and reducing feature delivery from 24 days to 5 days.

Claude’s usage across different sectors

Source: Anthropic

As a general AI assistant, Claude’s services aren’t limited to the industries and use cases mentioned above. For example, Claude has powered Notion AI due to its unique writing and summarization traits. It has also served as the backbone of Sourcegraph, a code completion and generation product; Poe, Quora’s experimental chatbot; and Factory, an AI company that seeks to automate parts of the software development lifecycle.

Market Size

Generative AI is projected to significantly impact many industries. In June 2023, generative AI was reported to have the capacity to automate activities accounting for 60%-70% of employees’ time, and it is projected to enable productivity growth of 0.1%-0.6% annually through 2040. One report released in May 2024 found that 75% of knowledge workers already use AI at work. Additionally, as of November 2023, over two million developers used OpenAI’s APIs, including developers at 92% of Fortune 500 companies.

Consumer-oriented chatbots have experienced unprecedented growth. ChatGPT, OpenAI's public chatbot, reached 100 million users just two months after its launch in November 2022, making it "the fastest-growing consumer internet app of all time" by that metric. By comparison, it took Facebook, Twitter, and Instagram between two and five years after launch to reach the same milestone. Growth has accelerated dramatically since then: ChatGPT reached 800 million weekly active users in 2025, doubling from 400 million in February 2025. The platform serves over 1 billion messages daily, showcasing how deeply integrated it has become in personal and professional workflows.

Bar graph of AI startups venture funding for 2022-2024

Source: CB Insights

The generative AI market has experienced explosive growth since ChatGPT's launch. The global generative AI market was valued at $16.9 billion in 2024 and is projected to reach $109.4 billion by 2030, a compound annual growth rate of 37.6% over the forecast period. Alternative market research projects even more aggressive growth, with the market reaching $71.4 billion in 2025 and surging to $890.6 billion by 2032 at a CAGR of 43.4%. These figures sit alongside Bloomberg Intelligence's 2023 projection of the market growing from $40 billion in 2022 to $1.3 trillion by 2032.

Growth is likely to be driven by task automation, with customer operations, marketing, software, and R&D accounting for 75% of use cases. Across these functions, some estimate that generative AI has the potential to add $4.4 trillion in annual value. It is estimated that half of all work activities will be automated by 2045.

The AI funding landscape has reached historic levels, with 2024 marking a breakout year for investment in AI companies. Global venture capital funding for AI exceeded $100 billion in 2024, representing an increase of over 80% from $55.6 billion in 2023. Nearly 33% of all global venture funding was directed to AI companies, making artificial intelligence the leading sector for investments and surpassing even peak global funding levels from 2021.

Generative AI specifically attracted approximately $45 billion in venture capital funding in 2024, nearly doubling from $24 billion in 2023. Late-stage venture capital deal sizes for generative AI companies skyrocketed from $48 million in 2023 to $327 million in 2024, highlighting investor confidence in the sector's potential.

The investment momentum has continued into 2025, with AI startups receiving 53% of all global venture capital dollars invested in the first half of the year. In the United States specifically, this percentage jumps to 64%, with AI startups also comprising nearly 36% of all funded startups in the country.

Bar graph showing capital investments in agentic AI

Source: Axios

A key metric underscoring the rapid growth in generative AI is the number of parameters in commercial NLP models. One of the first transformer models, Google’s BERT-Large, was created with 340 million parameters in 2018. Within just a few years, the parameter size of leading AI models has grown exponentially. Google’s PaLM 2, released in May 2023, had 340 billion parameters at its maximum size. Some rumors have even suggested that GPT-4 had nearly 1.8 trillion parameters, though OpenAI has not disclosed the actual number.

The number of parameters used within a model is one factor in determining its accuracy across different tasks. While most models excel at classifying, editing, and summarizing text and images, their capacity to successfully execute tasks autonomously varies greatly. As CEO Dario Amodei put it, “they feel like interns in some areas and then they have areas where they spike and are really savants.”

Line graph of performance of top models on LMSYS Chatbot Arena by select providers

Source: HAI Stanford

CEO Dario Amodei expressed his fear about the potential impact of increasingly capable models in the long term by stating that:

“No one cares if you can get the model to hotwire [a car] — you can Google for that. But if I look at where the scaling curves are going, I’m actually deeply concerned that in two or three years, we’ll get to the point where the models can… do very dangerous things with science, engineering, biology, and then a jailbreak could be life or death.”

Competition

OpenAI

OpenAI is both Anthropic’s largest competitor and the former employer of many members of its founding team. Founded in 2015, OpenAI is best known for its development of the Generative Pre-trained Transformer (GPT) models. Its first model was released in 2018, and its public chatbot ChatGPT, powered by GPT-3.5, amassed over 100 million monthly active users within two months of its launch.

ChatGPT served 700 million weekly users as of October 2025 across consumers and enterprises, attracting 1.5 million enterprise customers across its Enterprise, Team, and Edu offerings. The platform serves over 10 million paying subscribers across its consumer plans, with 15.5 million Plus subscribers reported as of May 2025.

In March 2025, the company closed the largest private funding round in tech history as of October 2025, raising $40 billion at a $300 billion post-money valuation, led by SoftBank's $30 billion contribution. OpenAI raised an additional $8.3 billion in August 2025, led by Dragoneer Investment Group's $2.8 billion check. As of October 2025, OpenAI has raised approximately $57.9 billion in total funding, making it the most well-funded AI company in history.

OpenAI generated $3.7 billion in revenue in 2024, more than tripling from $1 billion in 2023. In May 2025, the company projected $12.7 billion in total revenue for 2025, up from an earlier projection of $11.6 billion, and its annual recurring revenue jumped to $13 billion by August 2025. Executives have predicted revenue could reach $100 billion by 2029, comparable to the annual sales of major companies like Nestlé and Target. The company is also exploring secondary sales that could value it at up to $500 billion, underscoring investor confidence despite the company's substantial losses.

Like Anthropic, OpenAI offers an API platform for its foundation models, and its AI chatbot, ChatGPT, originally allowed users to interact with GPT-3.5 for free. With a paid plan priced at $20 per month, users gained access to GPT-4 and GPT-4o, as well as OpenAI’s image generator, DALL-E. While GPT-4o has a smaller context window and a less recent knowledge cut-off than Claude 3.5, it has the advantages of image generation and web search. In December 2024, OpenAI launched a Pro plan priced at $200 per month with extended usage limits and advanced features, similar to Claude’s Max plan.

OpenAI has invested in multiple safety initiatives, including a new superalignment research team announced in July 2023 and a new Safety and Security Committee introduced in May 2024. However, current and former employees have continued to criticize OpenAI for deprioritizing safety. In spring 2024, OpenAI’s safety team was given only a week to test GPT-4o. “Testers compressed the evaluations into a single week, despite complaints from employees,” in order to meet a launch date set by OpenAI executives. Days after the May 2024 launch, Jan Leike, OpenAI’s former Head of Alignment, became the latest executive to leave the company for Anthropic, claiming that “over the past years, safety culture and processes [had] taken a backseat to shiny products” at OpenAI.

In June 2024, a group of current and former employees at OpenAI alleged that the company was “recklessly racing” to build AGI and “used hardball tactics to prevent workers from voicing their concerns about the technology.” Along with former employees of DeepMind and Anthropic, they signed an open letter calling for AI companies to “support a culture of open criticism” and to allow employees to share “risk-related concerns.” In addition, OpenAI co-founder Ilya Sutskever left the company in May 2024 and by June 2024 had started a new company called Safe Superintelligence Inc. (SSI), stating that safety “is our mission, our name, and our entire product roadmap, because it is our sole focus.”

In January 2025, the Trump administration announced the Stargate project, a $500 billion initiative aimed at advancing AI infrastructure in the US through the construction of over 20 large-scale data centers nationwide. The project will be funded by SoftBank and MGX, while OpenAI, Oracle, Microsoft, Nvidia, and Arm will be responsible for operational and technological contributions.

xAI

Founded in July 2023 by Elon Musk, xAI is a frontier AI lab spun out of Musk’s broader X/Twitter ecosystem. From its inception, xAI has aimed to build a “maximum truth-seeking AI” that, in Musk’s words, “understands the universe.” The company released its first model, Grok-1, in November 2023, integrated directly into X’s premium subscription tiers. Since then, xAI has scaled both model capabilities and its underlying infrastructure.

The company's trajectory took a dramatic turn in March 2025 when Musk announced that xAI had formally acquired X Corp., consolidating the social media platform and AI startup into a single entity valued at $80 billion. The all-stock transaction valued X at $33 billion while positioning xAI to leverage X's vast data streams for model training. "xAI and X's futures are intertwined," Musk explained, emphasizing how the merger would "combine the data, models, compute, distribution, and talent" to create competitive advantages that standalone AI companies cannot match.

xAI has raised a significant amount of funding. In July 2025, the company raised $10 billion, split evenly between $5 billion in equity and $5 billion in secured notes and term loans, with SpaceX contributing $2 billion to the equity portion. Morgan Stanley noted that the debt offering was "oversubscribed and included prominent global debt investors." As of July 2025, the company was aiming to raise another $10 billion at a valuation of up to $200 billion, which would make it one of the most valuable private companies in the world.

In July 2025, xAI unveiled Grok 4 and Grok 4 Heavy, which the company claims are "the most intelligent models in the world." The models feature native tool use and real-time search integration, with Grok 4 Heavy designed for the most challenging tasks. These releases build on February's Grok 3 launch, which Musk stated was trained with "10x more computing power" than its predecessor using the company's Colossus supercomputer with around 200K GPUs.

As of October 2025, xAI is operating its own 200K+ H100 GPU cluster in Memphis, known as the "Colossus" supercomputer, with plans for expanding it into a 1 million GPU training cluster in 2026. It is believed to be one of the world's largest AI supercomputers. xAI expects to generate $500 million in revenue in 2025, with projections reaching $19 billion by 2029.

In July 2025, the U.S. Department of Defense announced contract awards of up to $200 million for AI development at xAI, along with Anthropic, Google, and OpenAI. That same month, xAI launched "Grok for Government," making its models available to U.S. government customers.

Musk has called Grok a "maximally truth-seeking" AI that is also "anti-woke," in a bid to set it apart from its rivals. However, this positioning has generated considerable controversy. The model has faced criticism for generating conspiracy theories, antisemitic content, and praise of Adolf Hitler, leading to multiple system prompt modifications. As of May 2025, xAI has begun publishing Grok's system prompts on GitHub in response to these incidents.

DeepSeek

Founded in July 2023 by Liang Wenfeng in Hangzhou, DeepSeek operates as a subsidiary of the Chinese hedge fund High-Flyer. The company's breakthrough lies not in revolutionary architecture, but in extreme efficiency. DeepSeek claims it trained its V3 model for just $6 million, far less than the $100 million cost for OpenAI's GPT-4, using approximately one-tenth the computing power consumed by Meta's comparable model, Llama 3.1.

Rather than relying on the most advanced H100 chips that U.S. export controls have restricted, DeepSeek focused on making extremely efficient use of more constrained hardware, primarily Nvidia H800 chips designed for the Chinese market. The company's R1 model uses reinforcement learning without extensive labeled data to achieve high-quality reasoning capabilities, calling into question whether the expensive human-feedback training employed by competitors was necessary.

DeepSeek's success has broader geopolitical implications. U.S. President Donald Trump called DeepSeek a "wake-up call" for American industry to be "laser-focused on competing to win." The company's emergence has revitalized Chinese venture capital interest in AI after three years of decline, with investors rushing to find "the next DeepSeek."

The company remains focused on research rather than immediate commercialization, allowing it to avoid certain provisions of China's AI regulations aimed at consumer-facing technologies. DeepSeek's hiring approach emphasizes skills over lengthy work experience, resulting in many hires directly from universities. The company also recruits individuals without computer science backgrounds to expand expertise in areas like poetry and advanced mathematics.

DeepSeek has struggled with stability issues when encouraged by Chinese authorities to adopt Huawei's Ascend chips instead of Nvidia hardware. Reports suggest that R2, the intended successor to R1, has been delayed due to slow data labeling and chip problems. Additionally, concerns have been raised about the company's training methods, as DeepSeek appears to have relied on outputs from OpenAI models, potentially violating OpenAI's terms of service.

Cohere

Cohere, which aims to bring AI to businesses, was founded in 2019 by former AI researchers Aidan Gomez, Nick Frosst, and Ivan Zhang. Gomez serves as CEO; prior to starting Cohere, he interned at Google Brain, where he worked under Geoffrey Hinton and co-wrote the breakthrough paper introducing the transformer architecture.

In August 2025, Cohere raised $500 million at a $6.8 billion valuation, led by Radical Ventures and Inovia Capital with participation from returning investors including Nvidia, AMD Ventures, PSP Investments, and Salesforce Ventures. The round was oversubscribed, representing a 24% increase from its $5.5 billion valuation just a year earlier, in contrast to the company's previous pattern of underperforming fundraising goals. As of October 2025, Cohere's total funding stands at $1.7 billion.

Cohere provides in-house LLMs for tasks like summarization, text generation, classification, data analysis, and search to enterprise customers. These LLMs can be used to streamline internal processes, with notable customization and hyperlocal fine-tuning not often provided by competitors.

As of October 2025, the company has appointed Joelle Pineau, former Vice President of AI Research at Meta, as Chief AI Officer, and Francois Chadwick, former CFO at Uber and Shield AI, as Chief Financial Officer. Pineau, who recently departed Meta after leading its Fundamental AI Research (FAIR) lab, will direct research and product development from Cohere's new Montreal office.

Unlike OpenAI and Anthropic, Cohere's mission is centered on the accessibility and security of LLMs for enterprise use rather than the strength of its foundation models. In an interview with Scale AI founder Alexandr Wang, Cohere CEO Aidan Gomez emphasized this exact need as the key problem Cohere seeks to address:

“[We need to get] to a place where any developer like a high school student can pick up an API and start deploying large language models for the app they’re building […] If it’s not in all 30 million developers toolkit, we’re going to be bottlenecked by the number of AI experts, and there always will be a shortage of that talent.”

Cohere's platform is cloud-agnostic and can be deployed across public clouds, virtual private clouds, or on-premises, with partnerships spanning major enterprise technology providers including Oracle, Dell, Bell, Fujitsu, LG's consulting service CNS, and SAP. The company has also secured enterprise customers such as RBC, Notion, and the Healthcare of Ontario Pension Plan, which became a new investor in the latest round.

Hugging Face

Founded in 2016 and aspiring to become the “GitHub for machine learning”, Hugging Face is the main open-source community platform for AI projects. As of October 2025, it had over 2 million free and publicly available models spanning NLP, computer vision, image generation, and audio processing. Any user on the platform has the ability to post and download models and datasets, with its most downloaded models including GPT-2, BERT, and Whisper. Although most transformer models are typically too large to be trained without supercomputers, users can access and deploy pre-trained models from the platform.

Hugging Face operates on an open-core business model, meaning all users have access to its public models. Paying users get additional features, such as higher rate limits, its Inference API integration, additional security, and more. Compared to Anthropic, its core product is more community-centric. Clem Delangue, CEO of Hugging Face, noted in an interview:

“I think open source also gives you superpowers and things that you couldn't do without it. I know that for us, like I said, we are the kind of random French founders, and if it wasn't for the community, for the contributors, for the people helping us on the open source, people sharing their models, we wouldn't be where we are today.”

Hugging Face’s relationship with its commercial partners is less direct than that of its competitors, including Anthropic. Nonetheless, it has raised $395.2 million as of October 2025, most recently a $235 million Series D in August 2023. That round valued the company at $4.5 billion, double its valuation from its previous round in May 2023 and more than 100x its reported ARR. As a platform spreading large foundation models to the public (albeit models 1-2 generations behind state-of-the-art models such as those being developed by Anthropic and OpenAI), Hugging Face represents a significant player in the AI landscape. In 2022, Hugging Face, with the help of over 1K researchers, released BLOOM, which the company claims is “the world’s largest open multilingual language model.”

Other Research Organizations

Anthropic also competes with various established AI research labs backed by large tech companies; most notably, Google’s DeepMind, Meta AI, and Microsoft Azure AI.

DeepMind: Founded in 2010, DeepMind was acquired by Google in 2014 and later merged with Google Brain to form Google DeepMind, part of Google’s effort to “accelerate our progress in AI”. DeepMind’s Gemini offers similar abilities to GPT-4 and is the crux of Google DeepMind’s research efforts. As Google’s AI research branch, DeepMind indirectly competes with Anthropic through its development of scalable LLMs, which likely power Google’s AI search features and Google Cloud NLP services.

Meta: In April 2025, Meta introduced three new models in its Llama 4 family: Scout, Maverick, and Behemoth. Scout and Maverick are publicly available via Llama.com and platforms like Hugging Face, while Behemoth remains in training. As of May 2025, Meta AI, the assistant integrated into WhatsApp, Messenger, and Instagram, runs on Llama 4 in 40 countries.

Prior to this, Meta released LLaMA 3 in April 2024. In July 2023, Meta announced that it would be making LLaMA open-source. “Open source drives innovation because it enables many more developers to build with new technology”, posted Mark Zuckerberg. “It also improves safety and security because when software is open, more people can scrutinize it to identify and fix potential issues.” While LLaMA 3 lags behind GPT-4 in reasoning and mathematics, it remains one of the largest open-source models available and rivals major models in some performance aspects, making it a likely choice for independent developers.

Microsoft Azure: Among its hundreds of services, Microsoft Azure’s AI platform offers tools and frameworks for users to build AI solutions. While Azure offers a portfolio of AI products instead of its own foundation models, these products include top models like GPT-4o and LLaMA 2, which can be used for services ranging from internal cognitive search to video and image analysis. Unlike Claude, which is low-code and accessible for teams of all sizes, Azure primarily targets developers and data scientists capable of coding on top of existing models. As such, Microsoft Azure AI serves as an indirect competitor to Anthropic, increasing the accessibility and customizability of foundation AI models.

Business Model

Anthropic generates revenue through both usage-based APIs and subscription-based access to its Claude models. According to internal estimates, API sales accounted for the vast majority of the company's revenue in 2024: 60–75% from third-party API integrations, 10–25% from direct API customers, 5% from chatbot subscriptions, and 2% from professional services.

Anthropic offers three main subscription tiers for individual users:

  • Free: Access to Claude via web and mobile, with support for image and document queries using Claude Sonnet 4.

  • Pro ($20/month or $17/month annually): Adds 5x higher usage limits, Projects functionality, and access to Claude Opus 4.1 and other advanced models.

  • Max ($100–$200/month): Introduced in April 2025, designed for power users with two tiers. The $100/month tier provides 5x higher rate limits than Pro, while the $200/month tier offers 20x higher usage limits, priority access during peak traffic, and early access to new features.

Chart showing Anthropic pricing tiers

Source: Anthropic

For teams, Anthropic introduced a Team Plan in May 2024, priced at $25–30/user/month. It includes admin controls, usage consolidation, and shared Claude access, making it suitable for organizations deploying AI broadly across departments. Custom Enterprise Plans are available for higher-volume clients, with reported pricing of $60 per seat for a minimum of 70 users and a 12-month contract, resulting in a minimum Enterprise plan cost of approximately $50K annually (70 seats × $60/month × 12 months = $50.4K).

Anthropic's API pricing varies significantly by model tier and includes several cost optimization features, illustrated in the sketch following this list:

  • Batch Processing: 50% discount on both input and output tokens for asynchronous processing

  • Prompt Caching: Up to 90% cost savings with intelligent caching of repeated prompts

  • Long Context Pricing: Premium rates for requests exceeding 200K input tokens when using the 1 million token context window
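
To see how these levers compound, consider the minimal Python sketch below. The 50% batch discount and the up-to-90% cached-token savings come from the feature list above; the per-token rates and token counts are hypothetical placeholders, not Anthropic's published prices.

```python
# Illustrative cost model for Anthropic's API discounts. The per-token rates
# below are hypothetical placeholders; only the 50% batch discount and the
# up-to-90% prompt-caching savings come from the feature list above.

INPUT_RATE = 3.00 / 1_000_000    # assumed $/input token
OUTPUT_RATE = 15.00 / 1_000_000  # assumed $/output token

def request_cost(input_tokens: int, output_tokens: int,
                 batch: bool = False, cached_fraction: float = 0.0) -> float:
    """Estimate the cost of one request under the optimization levers."""
    cached = input_tokens * cached_fraction  # tokens served from the cache
    fresh = input_tokens - cached            # tokens billed at the full rate
    cost = (fresh * INPUT_RATE
            + cached * INPUT_RATE * 0.10     # 90% discount on cached input
            + output_tokens * OUTPUT_RATE)
    return cost * 0.5 if batch else cost     # 50% batch discount

baseline = request_cost(100_000, 2_000)
optimized = request_cost(100_000, 2_000, batch=True, cached_fraction=0.9)
print(f"baseline ${baseline:.3f} vs optimized ${optimized:.3f}")
# For a heavily cached, batched workload, the optimized request costs
# roughly 7-8x less than the baseline under these assumed rates.
```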

In July 2025, Anthropic introduced new weekly rate limits to manage computational demand, particularly for Claude Code usage. As of October 2025, the limits affect less than 5% of subscribers based on usage patterns.

Max subscribers can purchase additional usage beyond rate limits at standard API rates. These limits respond to "unprecedented demand" for Claude Code since its launch, according to Anthropic.

Traction

Anthropic has seen commercial momentum across strategic partnerships, enterprise adoption, and revenue growth. As of October 2025, the company is estimated to have reached $5 billion in annual recurring revenue, up from $3 billion in May 2025, $1.4 billion in March 2025, and $1 billion at the end of 2024. The company is projecting $9 billion in ARR by the end of 2025. According to internal estimates, enterprise and startup API calls continue to drive 70-75% of Anthropic's revenue through pay-per-token pricing, with consumer subscriptions accounting for 10-15% of revenue.

Anthropic's ARR for 2022–2024 and projected ARR for 2025

Source: Sacra

Strategic Partnerships

As of 2025, Anthropic's largest infrastructure partner remains Amazon. In November 2024, Amazon deepened its relationship with Anthropic by committing an additional $4 billion in funding, bringing its total investment to $8 billion. As part of the expanded agreement, AWS became Anthropic's primary cloud and training partner, with Anthropic committing to train future foundation models on AWS Trainium and Inferentia chips.

In March 2025, Anthropic announced a significant partnership with Databricks, establishing a five-year strategic relationship to integrate Anthropic's models natively into the Databricks Data Intelligence Platform. This partnership provided over 10K companies access to Claude models for building AI agents. In September 2025, Microsoft announced a partnership with Anthropic to integrate Claude into Microsoft’s Copilot, which had been powered solely by OpenAI models until that point.

Anthropic has expanded into government and defense sectors through strategic partnerships. In November 2024, Palantir announced a partnership with Anthropic and Amazon Web Services to provide U.S. intelligence and defense agencies access to Claude models. This marked the first time Claude would be used in "classified environments." In June 2025, Anthropic announced a "Claude Gov" model, which as of June 2025 was in use at multiple US national security agencies. In July 2025, the United States Department of Defense announced that Anthropic had received a $200 million contract for AI in the military, alongside Google, OpenAI, and xAI.

Enterprise Adoption

Claude is available via Amazon Bedrock, where it serves as core infrastructure for tens of thousands of customers. Notable enterprise adopters include Pfizer (saving tens of millions in operational costs), Intuit, Perplexity, and the European Parliament (powering a chatbot that analyzes 2.1 million official documents). Other major enterprise customers include Slack, Zoom, GitLab, Notion, Factory, Asana, BCG, Bridgewater, and Scale AI. In July 2025, AWS launched an AI agent marketplace with Anthropic as a key partner, allowing startups to directly offer their AI agents to AWS customers through a single platform.

Anthropic continues to expand its developer platform traction through integrations and partnerships. According to CEO Dario Amodei in November 2024, "Several customers have already deployed Claude's computer use ability," noting that "Replit moved fast." Replit is one of several next-generation IDE companies, alongside Cursor, Vercel, and Bolt.new, that have adopted Claude to support development workflows.

The company's developer-focused approach has resonated particularly well with enterprise customers building generative AI applications. Even excluding its two largest customers, Anthropic's remaining business has grown more than eleven-fold year-over-year as of October 2025, and the startup has tripled the number of eight and nine-figure deals signed in 2025 compared to all of 2024, reflecting broader enterprise adoption beyond coding applications.

Valuation

In September 2025, Anthropic raised a $13 billion Series F that valued the company at $170 billion. Iconiq Capital led the round, with participation from the Qatar Investment Authority and Singapore's sovereign wealth fund GIC. The raise represented a nearly 3x step up from the company's previous $61.5 billion valuation in March 2025, which itself had grown from $18.5 billion in February 2024; in total, a more than 9x increase in roughly a year and a half.

As of October 2025, Anthropic has raised a total of $33.7 billion across 14 funding rounds. Amazon remains Anthropic's largest strategic investor, contributing a total of $8 billion across two investments, including a $4 billion commitment in November 2024. Google has committed over $3 billion in total investments.

In May 2025, Anthropic secured a $2.5 billion revolving credit facility to support its continued growth. The five-year facility is underwritten by Morgan Stanley, Goldman Sachs, JPMorgan Chase, Citibank, Barclays, Royal Bank of Canada, and Mitsubishi UFJ Financial Group. According to CFO Krishna Rao, "This revolving credit facility provides Anthropic significant flexibility to support our continued exponential growth."

Based on Anthropic's $5 billion ARR as of July 2025, the company's $170 billion valuation represents a 34x revenue multiple. In comparison, OpenAI is valued at $300 billion as of October 2025, with an estimated $13 billion in annualized revenue as of mid-2025, implying a 23x revenue multiple.
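
The multiples are simple to sanity-check; this short sketch merely restates the arithmetic using the valuation and ARR figures cited above.

```python
# Sanity check on the revenue multiples cited above (valuation and ARR in $B).
figures = {"Anthropic": (170.0, 5.0), "OpenAI": (300.0, 13.0)}
for name, (valuation, arr) in figures.items():
    print(f"{name}: {valuation / arr:.0f}x ARR multiple")
# Anthropic: 34x ARR multiple
# OpenAI: 23x ARR multiple
```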

Chart showing EV/Sales (NTM) for Amazon, Microsoft, Alphabet, and Meta.

Source: Koyfin

As a private AI foundation model company, Anthropic's revenue multiples remain significantly higher than those of large public tech companies like Microsoft at 12.2x, Meta at 8.9x, Google at 5.4x, and Amazon at 3.2x, which act as both competitors and cloud distribution partners. This premium reflects the high growth expectations and scarcity value of leading AI companies, but also indicates significant execution risk as Anthropic works to justify its valuation through continued product differentiation, effective monetization of Claude APIs, and long-term defensibility in an increasingly competitive market.

In July 2024, Menlo Ventures, a major Anthropic investor, partnered with Anthropic to create a $100 million fund for investing in early-stage AI startups. Startups backed by the fund receive investments of at least $100K, access to Anthropic's models, $25K in Anthropic credits, and mentorship from Anthropic leaders.

Key Opportunities

Claude's future plans to assist, collaborate, and pioneer

Source: Claude

Enterprise Market Dominance

As of October 2025, Claude holds 32% of the enterprise large language model market by usage, a reversal from 2023 when OpenAI held 50% and Anthropic had only 12%. This shift began with Claude 3.5 Sonnet's June 2024 launch, and was accelerated by Claude 3.7 Sonnet's February 2025 release that introduced agent-first capabilities.

As of October 2025, Anthropic holds a large share of AI-assisted software development, capturing 42% of the enterprise code generation market, more than double OpenAI's 21% share. Claude helped transform GitHub Copilot into a $1.9 billion ecosystem within a single year, while enabling entirely new application categories, including AI-integrated development environments like Cursor and Windsurf.

Technical Superiority and Advanced Capabilities

Anthropic positions Claude Opus 4 as the world's best coding model, scoring 72.5% on SWE-bench and 43.2% on Terminal-bench, while Claude Sonnet 4 posts an even higher 72.7% on SWE-bench. These benchmark results underpin Claude's edge in the complex software engineering tasks that drive enterprise adoption.

Enhanced Context Capabilities: Claude Sonnet 4 supports up to 1 million tokens of context via API, a 5x increase that enables processing entire codebases containing over 75,000 lines of code or analyzing dozens of research papers simultaneously. While standard usage maintains a 200K-token context window, enterprise customers can access up to 500K tokens, providing substantial advantages for complex, long-form business applications.
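
A rough sizing exercise illustrates why the larger window matters for codebase-scale work. The tokens-per-line heuristic below is an assumed figure for typical source code, not an Anthropic number; under it, a 75,000-line codebase only fits once the window reaches 1 million tokens.

```python
# Back-of-the-envelope check on codebase-scale context. TOKENS_PER_LINE is a
# rough heuristic for typical source code, not a published Anthropic figure.
TOKENS_PER_LINE = 10
RESERVED_FOR_REPLY = 8_000  # leave headroom for the model's output

def fits(lines_of_code: int, window_tokens: int) -> bool:
    """True if the estimated prompt plus reply headroom fits in the window."""
    return lines_of_code * TOKENS_PER_LINE + RESERVED_FOR_REPLY <= window_tokens

for window in (200_000, 500_000, 1_000_000):
    print(f"{window:>9,} tokens: {'fits' if fits(75_000, window) else 'too small'}")
#   200,000 tokens: too small
#   500,000 tokens: too small
# 1,000,000 tokens: fits
```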

Claude 4 models can use tools in parallel, follow instructions more precisely, and demonstrate significantly improved memory capabilities when given access to local files, extracting and saving key facts to maintain continuity across extended workflows. This enables Claude to act as a true AI collaborator for complex projects that previously required human handoffs.

Revenue Growth and Business Model Validation

Anthropic's revenue has grown from $1 billion at the end of 2024 to $5 billion by October 2025. Enterprise and startup API calls drive 70-75% of revenue through pay-per-token pricing, demonstrating the scalability of Anthropic's business model.

Code generation remains Anthropic’s primary revenue driver, with Claude Code alone generating $400 million in annualized revenue by July 2025, up from approximately $17.5 million in April 2025. This growth rate in enterprise software is considered unprecedented, with one investment firm noting: "We've looked at the IPOs of over 200 public software companies, and this growth rate has never happened".

As of October 2025, Anthropic generates approximately $211 per monthly user compared to OpenAI's roughly $25 per weekly user, an over 8x difference (albeit measured against different user definitions) that reflects the premium value enterprises place on Claude's capabilities. This efficiency stems from targeting high-value use cases where AI adoption is a necessity rather than a curiosity, particularly in software development and complex workflow automation.

Government and Policy Support

Anthropic has significantly expanded its government partnerships and policy influence throughout 2024 and 2025, positioning itself as a trusted AI partner for national security and public sector applications.

Federal Government Expansion: In August 2025, Anthropic announced it would offer Claude for Enterprise and Claude for Government to all three branches of the US government for $1 per agency for one year, expanding beyond OpenAI's executive branch focus to include legislative and judiciary branches. This comprehensive approach demonstrates Anthropic's commitment to broad government AI adoption.

In July 2025, the U.S. Department of Defense awarded Anthropic a $200 million agreement to prototype frontier AI capabilities for national security applications. Claude Gov models, custom-built for national security customers, already power deployments across the national security community, while partnerships with Palantir enable Claude integration into classified networks.

Regulatory Positioning: Claude meets the government's highest security standards, certified for FedRAMP High requirements for handling unclassified sensitive government data. Through partnerships with AWS, Google Cloud, and Palantir, agencies can access Claude through existing secure infrastructure while maintaining complete data control.

Anthropic's long-held commitment to AI safety continues to position it favorably as governments develop frameworks requiring safety testing and transparency, with the company proactively activating ASL-3 protections for Claude Opus 4 to prevent misuse for chemical, biological, radiological, and nuclear weapons development.

Strategic Advantages and Market Position

Performance Over Price Competition: Enterprise AI implementation has shifted from experimental to production deployment, with 74% of startups reporting that most of their AI workloads run in production environments. Market analysis indicates that performance capabilities, rather than pricing, drive enterprise switching decisions, with enterprises prioritizing operational advantages even as cheaper legacy options become available.

Platform Ecosystem Development: Major software development platforms, including GitHub Copilot, Cursor, and Replit, have adopted Claude as their preferred model, creating a powerful moat in the highest-value segment of the AI market as of October 2025. Claude's strength in software development has captured the enterprise developer market that pays premium rates for AI capabilities.

Key Risks

Potential Copyright Infringement

In June 2025, Anthropic achieved a landmark legal victory when federal judge William Alsup ruled that the company's use of copyrighted books to train Claude was "spectacularly transformative" and constituted fair use under copyright law. This marked the first substantive federal court decision on whether AI training on copyrighted works violates copyright law, with the judge stating: "The training use was a fair use."

While Judge Alsup found that training on lawfully acquired books was fair use, he also ruled that Anthropic must face trial in December 2025 over its practice of downloading millions of pirated books from "shadow libraries" to create a permanent digital library. The judge noted that while Anthropic later purchased copies of books it had previously pirated, "that will not absolve it of liability for the theft, but it may affect the extent of statutory damages".

This copyright litigation reflects broader industry challenges that have affected multiple AI companies. Foundation model companies like OpenAI, Meta, and Microsoft have faced similar lawsuits over their practices of scraping copyrighted materials for training data. Notably, in December 2023, the New York Times sued OpenAI and Microsoft, alleging that the companies had used millions of NYT articles to train their chatbots, claiming compensation for "billions of dollars in statutory and actual damages" caused by "the unlawful copying and use of The Times's uniquely valuable works."

Subsequently, a group of eight other newspapers, including The Chicago Tribune, as well as The Center for Investigative Reporting, filed similar lawsuits against OpenAI and Microsoft. Artists, novelists, and musicians have also taken stands against AI companies' use of their copyrighted material, with complaints filed against Stability AI, DeviantArt, and Midjourney for allegedly using "copyrighted works of millions of artists" as training data. Getty Images similarly sued Stability AI over copyright concerns.

In October 2023, Anthropic faced its own music industry copyright lawsuit, with publishers including Universal Music Group alleging that Claude was "distributing almost identical lyrics" to copyrighted songs. This case remains ongoing alongside Reddit's June 2025 lawsuit against Anthropic, alleging that the company scraped millions of Reddit comments over 100,000 times even after being denied access.

However, not all content creators have opposed AI companies' use of their material. Multiple major news organizations, including TIME, The Associated Press, Vox Media, The Atlantic, the Financial Times, and NewsCorp, have signed licensing deals with AI companies, granting access to copyrighted content in exchange for compensation or technology access. Photography companies like Getty, Shutterstock, and Adobe have created their own AI generators with support from major AI companies.

Several copyright claims have been dismissed in AI companies' favor. In July 2023, authors including Sarah Silverman alleged that OpenAI and Meta used copyrighted works without permission, but a California court dismissed most of those claims in February 2024. In June 2024, a judge dismissed similar claims against OpenAI, Microsoft, and GitHub regarding code suggestions from public repositories.

The ruling in Anthropic's favor, along with a similar decision favoring Meta, represents the first wave of court decisions addressing AI copyright issues, though dozens of similar lawsuits against all major AI companies remain pending. The precedent may encourage more licensing agreements between AI companies and content creators, as the legal landscape continues to evolve around AI training practices and intellectual property rights.

Regulatory Risks

The rise of generative AI has coincided with heightened U.S. regulatory scrutiny of the tech industry, which could threaten the strategic partnerships between Anthropic and Big Tech companies. In 2021, Lina Khan, an antitrust scholar who rose to prominence after writing a 2017 Yale Law Journal article on Amazon's anti-competitive practices, was appointed Chair of the Federal Trade Commission (FTC). Under Khan's tenure, the FTC sued multiple Big Tech companies, including Amazon in a "landmark monopoly case" in 2023. In June 2024, Khan also led the FTC to strike a deal with the Justice Department to investigate Microsoft, OpenAI, and Nvidia for possibly violating antitrust laws in the AI industry.

Khan's FTC initially opened an inquiry in January 2024 into the partnerships between tech giants Microsoft, Amazon, and Google, and major AI startups OpenAI and Anthropic. Khan claimed the inquiry would "shed light on whether investments and partnerships pursued by dominant companies risk distorting innovation and undermining fair competition." This investigation culminated in January 2025 when the FTC issued a comprehensive staff report on AI partnerships, outlining potential competition concerns, including impacts on access to computing resources and engineering talent, increased switching costs for AI developers, and exclusive dealing arrangements.

Congressional scrutiny has intensified further, with Senators Elizabeth Warren and Ron Wyden launching a formal inquiry in April 2025 into Anthropic's partnerships with Google and Amazon, demanding detailed information about arrangements they fear may circumvent antitrust scrutiny. The senators expressed particular concern about Google's 14% ownership stake in Anthropic after investing $3 billion, warning that these partnerships "sometimes function as de facto mergers" while bypassing traditional merger scrutiny.

The lawmakers cited technical and financial barriers that may lock AI developers into specific cloud providers, noting Anthropic's use of Google's proprietary Tensor Processing Units (TPUs) for model training and concerns about "hefty egress fees" that raise switching costs. The FTC warned in its January 2025 report that AI partnerships might pose "risks to competition and consumers" by "locking in the market dominance of large incumbent technology firms".

Anthropic's structure as a public benefit corporation provides some protection from these concerns, as neither Amazon nor Google owns voting shares or can directly influence board decisions. In response to earlier regulatory pressure, Microsoft voluntarily dropped its non-voting observer seat on OpenAI's board in July 2024, and Apple abandoned plans to join. However, continued FTC and congressional oversight could potentially limit Anthropic's ability to receive funding from and partner with other major technology companies, especially those already under antitrust scrutiny.

Beyond antitrust concerns, broader AI regulatory frameworks continue expanding. In October 2023, President Biden signed an executive order requiring AI companies like Anthropic to perform safety tests on new AI technologies and share results with the government before release. The order also gave federal agencies power to take protective measures against AI risks. The G-7 nations announced a complementary Code of Conduct for AI companies, emphasizing safety.

Regulatory momentum continues building globally, with U.S. federal agencies introducing 59 AI-related regulations in 2024, more than double the number in 2023, while legislative mentions of AI rose 21.3% across 75 countries. This regulatory expansion creates both challenges and opportunities for AI companies.

Considering its long-held commitment to safety, Anthropic is arguably more aligned with government regulators than rival companies. After the executive order and G-7 Code of Conduct announcements, Anthropic released a statement celebrating "the beginning of a new phase of AI safety and policy work" and committed to "playing [its] part to contribute to the realization of these objectives and encourage a safety race to the top." Anthropic's safety-first approach positions it favorably as governments develop frameworks requiring safety testing and transparency.

However, as AI models become more advanced and capable of harm, model developers may face unforeseen risks, such as bioterrorism or warfare applications. Security breaches, which are growing increasingly sophisticated, could also pose risks to Anthropic, which experienced a minor data leak in 2023. Therefore, continued investment in safety tests and initiatives to mitigate future threats remains critical, especially given increased regulatory oversight and the potential for more stringent compliance requirements as AI capabilities advance.

Secrecy in the AI Landscape

Despite Anthropic's focus on AI safety, the company has struggled to meet certain accountability metrics in an increasingly opaque industry. Using the Draft EU AI Act as a rubric, a Stanford report on AI transparency initially ranked Anthropic's Claude in the bottom half of major foundation models.

However, in May 2024, Stanford's Foundation Model Transparency Index showed significant improvements in AI industry transparency, with average scores rising from 37 to 58 out of 100 points. Anthropic scored 51 points, placing it below the industry mean and trailing competitors, including Meta (60 points), though ahead of OpenAI (49 points).

While Anthropic significantly improved its transparency score from October 2023 to May 2024, it still lags behind Microsoft, Meta, Stability AI, and Mistral AI, just barely edging out OpenAI and Google. All 14 participating companies disclosed previously private information, revealing details on an average of 16.6 indicators that had not previously been public, yet substantial opacity remains across the industry.

The transparency challenge extends beyond individual company practices to systemic industry issues. Stanford researchers noted "a significant lack of standardization in responsible AI reporting," with leading developers including OpenAI, Google, and Anthropic primarily testing their models against different responsible AI benchmarks, complicating systematic risk comparisons. This fragmentation makes it difficult for enterprises, regulators, and researchers to assess relative model safety and capabilities.

Stanford's 2025 AI Index Report revealed that AI-related incidents jumped 56.4% to 233 reported cases in 2024, spanning privacy violations, bias incidents, and algorithmic failures. Despite rising incidents, fewer than two-thirds of organizations are actively implementing safeguards, creating a concerning gap between risk awareness and concrete action.

The competitive landscape has continued driving secrecy concerns, with nearly 90% of notable AI models in 2024 coming from industry rather than academia, up from 60% in 2023, reflecting the concentration of AI development within commercial entities that face pressure to protect competitive advantages. This shift has accelerated since OpenAI's pivot toward monetizing its services. In March 2023, OpenAI announced GPT-4 with the disclaimer that it would not be an open model, stating, "Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar."

Anthropic faces similar competitive pressures while maintaining its safety positioning. Despite its many recommendations favoring increased model restrictions, Anthropic has remained cautious about some accountability mechanisms. "While oversight and accountability are crucial for building trustworthy AI, insufficiently thoughtful or nuanced policies, liability regimes, and regulatory approaches could frustrate progress," the company wrote in its June 2023 response to the NTIA's AI Accountability Policy Request for Comment, representing a deviation from its typical policy recommendations.

As of October 2025, Anthropic has yet to fully disclose the data sources used to train Claude, though the company has been more forthcoming about its Constitutional AI methodology and safety research compared to some competitors. The company maintains a transparency hub that provides information about government data requests, child safety measures, and platform security, though comprehensive training data disclosure remains limited.

As regulatory frameworks evolve globally and governments invest billions in AI governance initiatives, Anthropic must continue balancing competitive positioning with transparency requirements while maintaining its safety-focused approach that has historically aligned well with regulatory expectations. The company's challenge lies in demonstrating sufficient transparency to satisfy regulators and enterprise customers while protecting the competitive advantages that have enabled its rapid market share growth. This balancing act becomes increasingly complex as AI becomes critical infrastructure and transparency requirements intensify across multiple jurisdictions.

Summary

As one of the leading organizations developing large-scale generative models, Anthropic has made significant progress in competing against ChatGPT with its safety-focused alternative Claude. Since its split from OpenAI, the Anthropic team has conducted extensive research expanding the production and scaling of large models, with key breakthroughs regarding the interpretability and direction of its AI systems. Its focus on safety and research has helped Anthropic become the second most-funded AI startup after OpenAI and establish key partnerships with Google and Amazon.

Claude has proven to be a potential alternative to GPT models, with its large context window and strong reasoning capabilities. As companies like OpenAI have faced criticism for their lax safety practices, Anthropic has distinguished itself as a proponent of AI safety. However, with the rise of new legal, political, and regulatory challenges, the lack of data and model transparency surrounding Claude may diminish its ultimate goal of providing steerable AI. Only time will tell if Anthropic will be able to practice what it preaches.

Important Disclosures

This material has been distributed solely for informational and educational purposes only and is not a solicitation or an offer to buy any security or to participate in any trading strategy. All material presented is compiled from sources believed to be reliable, but accuracy, adequacy, or completeness cannot be guaranteed, and Contrary LLC (Contrary LLC, together with its affiliates, “Contrary”) makes no representation as to its accuracy, adequacy, or completeness.

The information herein is based on Contrary beliefs, as well as certain assumptions regarding future events based on information available to Contrary on a formal and informal basis as of the date of this publication. The material may include projections or other forward-looking statements regarding future events, targets or expectations. Past performance of a company is no guarantee of future results. There is no guarantee that any opinions, forecasts, projections, risk assumptions, or commentary discussed herein will be realized. Actual experience may not reflect all of these opinions, forecasts, projections, risk assumptions, or commentary.

Contrary shall have no responsibility for: (i) determining that any opinions, forecasts, projections, risk assumptions, or commentary discussed herein is suitable for any particular reader; (ii) monitoring whether any opinions, forecasts, projections, risk assumptions, or commentary discussed herein continues to be suitable for any reader; or (iii) tailoring any opinions, forecasts, projections, risk assumptions, or commentary discussed herein to any particular reader’s objectives, guidelines, or restrictions. Receipt of this material does not, by itself, imply that Contrary has an advisory agreement, oral or otherwise, with any reader.

Contrary is registered with the Securities and Exchange Commission as an investment adviser under the Investment Advisers Act of 1940. The registration of Contrary in no way implies a certain level of skill or expertise or that the SEC has endorsed Contrary. Investment decisions for Contrary clients are made by Contrary. Please note that, although Contrary manages assets on behalf of Contrary clients, Contrary clients may take any position (whether positive or negative) with respect to the company described in this material. The information provided in this material does not represent any investment strategy that Contrary manages on behalf of, or recommends to, its clients.

Different types of investments involve varying degrees of risk, and there can be no assurance that the future performance of any specific investment, investment strategy, company or product made reference to directly or indirectly in this material, will be profitable, equal any corresponding indicated performance level(s), or be suitable for your portfolio. Due to rapidly changing market conditions and the complexity of investment decisions, supplemental information and other sources may be required to make informed investment decisions based on your individual investment objectives and suitability specifications. All expressions of opinions are subject to change without notice. Investors should seek financial advice regarding the appropriateness of investing in any security of the company discussed in this presentation.

Please see www.contrary.com/legal for additional important information.

Authors

William Guo

Fellow

Alice Ao

Senior Fellow

Etienne Segal

Senior Fellow

Christian Okokhere

Fellow

© 2025 Contrary Research · All rights reserved
