Thesis
Historically, there have been two major tech-driven transformations: the agricultural and industrial revolutions, advancements “powerful enough to bring us into a new, qualitatively different future.” These “general-purpose” technologies fundamentally altered the way economies worked. In the short time that computers have existed, machines have already exceeded human intelligence in some respects, leading some to believe that artificial intelligence (AI) will be the next general-purpose transformation.

Source: Our World In Data
Like the shift from hunter-gathering to farming or the rise of machine manufacturing, AI is projected to have a significant impact across all parts of the economy. In particular, the rise of generative AI has driven major breakthroughs in task automation across the economy at large. Well-funded research organizations such as OpenAI and Cohere have turned proprietary natural language processing (NLP) models into products usable by enterprises. These “foundation models” provide a basis for hundreds, if not thousands, of individual developers and institutions to build new AI applications. In February 2024, it was estimated that generative AI could increase global GDP by 10%, or $7-10 trillion.
However, with rising competition between foundation model organizations, along with the general growth in AI adoption, ethical concerns have been raised about the rapid development and harmful use cases of AI systems. In March 2023, over 1K people, including OpenAI co-founder Elon Musk, signed an open letter calling for a six-month “AI moratorium”, claiming that many AI organizations were “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.” In February 2024, Musk sued OpenAI and chief executive Sam Altman for “breaching a contract by putting profits and commercial interests in developing artificial intelligence ahead of the public good.” Though the lawsuit was dropped in June 2024, Musk’s allegations underscore the delicate balance between speed and ethics in the development of AI.
Amongst foundation model developers, Anthropic has positioned itself as a company with a particular focus on AI safety and describes itself as building “AI research and products that put safety at the frontier.” Founded by engineers who quit OpenAI due to tension over ethical and safety concerns, Anthropic has developed its own method to train and deploy “Constitutional AI”, or large language models (LLMs) with embedded values that can be controlled by humans. Since its founding, its goal has been to deploy “large-scale AI systems that are steerable, interpretable, and robust”, and it has continued to push towards a future powered by responsible AI.
Founding Story
Anthropic was founded in 2021 by ex-OpenAI VPs and siblings Dario Amodei (CEO) and Daniela Amodei (President). Prior to launching Anthropic, Dario Amodei was OpenAI’s VP of Research, while Daniela Amodei was its VP of Safety & Policy.

Source: The Information
In 2016, Dario Amodei, along with coworkers at Google, co-authored “Concrete Problems in AI Safety”, a paper discussing the inherent unpredictability of neural networks. The paper introduced the notions of side effects and unsafe exploration in the behavior of different models. Many of the issues it discussed stemmed from a lack of “mechanistic interpretability”, or understanding of the inner workings of complex models. The co-authors, many of whom would later join OpenAI alongside Dario, sought to communicate the safety risks of rapidly scaling models, and that thought process would eventually become the foundation of Anthropic.
In 2019, OpenAI announced that it would be restructuring from a nonprofit to a “capped-profit” organization, a move intended to strengthen its ability to raise capital and deliver returns for investors. Later that year, it received a $1 billion investment from Microsoft to continue its development of benevolent artificial intelligence, as well as to power AI supercomputing services on Microsoft Azure’s cloud platform. Oren Etzioni, CEO of the Allen Institute for AI, commented on this shift by saying:
“[OpenAI] started out as a non-profit, meant to democratize AI. Obviously when you get [$1 billion] you have to generate a return. I think their trajectory has become more corporate.”
OpenAI’s restructuring ultimately generated internal tension regarding its direction as an organization whose stated intention was to “build safe AGI and share the benefits with the world.” It was reported that “the schism followed differences over the group’s direction after it took a landmark $1 billion investment from Microsoft in 2019.” In particular, the fear of “industrial capture”, or OpenAI’s monopolization of the AI space, loomed over many within OpenAI, including Jack Clark, former policy director, and Chris Olah, mechanistic interpretability researcher.
Anthropic CEO Dario Amodei attributed Anthropic’s eventual split from OpenAI to concerns over AI safety and described the decision to leave OpenAI to found Anthropic this way:
“There was a group of us within OpenAI, that in the wake of making GPT-2 and GPT-3, had a kind of very strong focus belief in two things... One was the idea that if you pour more compute into these models, they’ll get better and better and that there’s almost no end to this... The second was the idea that you needed something in addition to just scaling the models up, which is alignment or safety. You don’t tell the models what their values are just by pouring more compute into them. And so there were a set of people who believed in those two ideas. We really trusted each other and wanted to work together. And so we went off and started our own company with that idea in mind.”
Dario Amodei left OpenAI in December 2020, and 14 other researchers, including his sister Daniela Amodei, eventually left to join Anthropic as well. The co-founders structured Anthropic as a public benefit corporation, in which the board has a “fiduciary obligation to increase profits for shareholders,” but the board can also legally prioritize its mission of ensuring that “transformative AI helps people and society flourish.” This gives Anthropic more flexibility to pursue AI safety and ethics over increasing profits.
Product
Foundation Model Research
Since its founding, Anthropic has dedicated its resources to “large-scale AI systems that are steerable, interpretable, and robust”, with an emphasis on aligning them with human values so that they are “helpful, honest, and harmless”. As of May 2025, Anthropic’s three research teams were Interpretability, Alignment, and Societal Impacts, underscoring the company’s commitment to developing general AI assistants that abide by human values. The company has published over 60 papers across these research areas.
One of Anthropic’s first papers, “Predictability and Surprise in Large Generative Models,” was published in February 2022 and investigated emergent capabilities in LLMs, capabilities that can “appear suddenly and unpredictably as model size, computational power, and training data scale up.” The paper found that although overall model accuracy increased consistently with the number of parameters, accuracy on some tasks (e.g., three-digit addition) surprisingly skyrocketed once certain parameter-count thresholds were reached. The authors noted that developers were unable to precisely predict which new abilities would emerge or improve, which could lead to unintended consequences, especially if models are developed solely for economic gain or in the absence of policy interventions.

Source: Anthropic
In April 2022, Anthropic introduced a methodology that uses preference modeling and reinforcement learning from human feedback (RLHF) to train “helpful and harmless” AI assistants. While RLHF had already been studied in OpenAI’s InstructGPT paper and Google’s LaMDA paper, both published in January 2022, Anthropic was the first to explore “online training,” in which the model is updated as crowdworkers interact with it, and to measure the tension between helpfulness and harmlessness.
In RLHF, human crowdworkers engage in open-ended conversations with the AI assistant, which generates multiple responses for each prompt. The human then chooses the response they find most helpful and/or harmless, rewarding the model for those traits over time.

Source: Anthropic
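For illustration, the sketch below shows how pairwise preferences of this kind are typically converted into a reward model. It is a minimal PyTorch example under assumed inputs (random placeholder embeddings and a toy scoring network), not Anthropic's training code.

```python
# Minimal sketch of preference-model training from pairwise human feedback.
# A tiny MLP stands in for the language-model-based scorer; embeddings are
# random placeholders for encoded (prompt, response) pairs.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    def __init__(self, embedding_dim: int = 128):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(embedding_dim, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
        return self.scorer(response_embedding).squeeze(-1)  # scalar reward

model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Each pair holds embeddings of two candidate responses to the same prompt;
# the crowdworker preferred the first ("chosen") over the second ("rejected").
chosen = torch.randn(32, 128)
rejected = torch.randn(32, 128)

# Bradley-Terry style loss: push the chosen response's reward above the
# rejected one's, turning pairwise preferences into a trainable reward signal
# that reinforcement learning can later optimize against.
optimizer.zero_grad()
loss = -torch.log(torch.sigmoid(model(chosen) - model(rejected))).mean()
loss.backward()
optimizer.step()
```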
Ultimately, such alignment efforts made the RLHF models as capable as, or potentially more capable than, plain language models on zero-shot and few-shot tasks (i.e., tasks with no prior examples or with only a few).
Among other findings, the paper’s authors emphasized the “tension between helpfulness and harmlessness”. For example, a model trained to always give accurate responses becomes harmful when fed hazardous prompts. Conversely, a model that answers “I cannot answer that” to any potentially dangerous or opinionated prompt, though harmless, is also not helpful.

Source: Anthropic
Anthropic has since been able to use its findings to push language models toward desired outcomes such as avoiding social biases, adhering to ethical principles, and self-correction. In December 2023, the company published a paper titled “Evaluating and Mitigating Discrimination in Language Model Decisions.” The paper studied how language models may make decisions for various use cases, including loan approvals, permit applications, and admissions, as well as effective mitigation strategies for discrimination, such as appending non-discrimination statements to prompts and encouraging out-loud reasoning.
In May 2024, Anthropic published a paper on understanding the inner workings of one of its LLMs, part of its effort to improve the interpretability of AI models and work towards safe, understandable AI. Using a type of dictionary learning algorithm called a sparse autoencoder, the authors were able to extract interpretable features from the LLM, which could be used to steer the model and to identify potentially dangerous or harmful features. The paper lays the groundwork for further research into interpretability and safety strategies.
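As a rough illustration of the technique, the sketch below trains a toy sparse autoencoder on stand-in activations. The dimensions, sparsity penalty, and data are hypothetical rather than those used in Anthropic's paper.

```python
# Toy sparse autoencoder of the kind used for dictionary learning in
# interpretability work: reconstruct model activations through a wider,
# sparsely-activating hidden layer. Dimensions and data are placeholders.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, activation_dim: int = 512, dict_size: int = 4096):
        super().__init__()
        self.encoder = nn.Linear(activation_dim, dict_size)
        self.decoder = nn.Linear(dict_size, activation_dim)

    def forward(self, activations: torch.Tensor):
        features = torch.relu(self.encoder(activations))  # sparse "features"
        return self.decoder(features), features

sae = SparseAutoencoder()
optimizer = torch.optim.Adam(sae.parameters(), lr=1e-3)
l1_coefficient = 1e-3  # strength of the sparsity penalty

activations = torch.randn(64, 512)  # stand-in for LLM residual activations
reconstruction, features = sae(activations)

# Reconstruction error keeps features faithful to the original activations;
# the L1 term keeps only a few features active per input, which is what makes
# the learned dictionary entries individually interpretable.
optimizer.zero_grad()
loss = ((reconstruction - activations) ** 2).mean() \
    + l1_coefficient * features.abs().sum(dim=-1).mean()
loss.backward()
optimizer.step()
```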
In February 2025, Anthropic introduced the Anthropic Economic Index, an ongoing series of reports measuring the impact of AI on labor markets and the wider economy over time. The initial report analyzed millions of anonymized conversations with Anthropic’s chatbot Claude and broke down usage across different categories of users, tasks, and professions.
Constitutional AI
In December 2022, Anthropic released a novel approach to training helpful and harmless AI assistants. Labeled “Constitutional AI”, the process involved (1) training a model via supervised learning to abide by certain ethical principles inspired by various sources, including the UN’s Declaration of Human Rights, Apple’s Terms of Service, and Anthropic’s own research, (2) creating a similarly-aligned preference model, and (3) using the preference model to judge the responses of the initial model, which would gradually improve its outputs through reinforcement learning.

Source: TechCrunch
CEO Dario Amodei noted that this Constitutional AI model could be trained along any set of chosen principles, saying:
“I’ll write a document that we call a constitution. […] what happens is we tell the model, OK, you’re going to act in line with the constitution. We have one copy of the model act in line with the constitution, and then another copy of the model looks at the constitution, looks at the task, and the response. So if the model says, be politically neutral and the model answered, I love Donald Trump, then the second model, the critic, should say, you’re expressing a preference for a political candidate, you should be politically neutral. […] The AI grades the AI. The AI takes the place of what the human contractors used to do. At the end, if it works well, we get something that is in line with all these constitutional principles.”
This process, known as “reinforcement learning from AI feedback” (RLAIF), distinguishes Anthropic’s models from OpenAI’s GPT, which uses RLHF. RLAIF functionally automates the role of the human judge in RLHF, making it a more scalable safety measure. Simultaneously, Constitutional AI increases the transparency of said models, as the goals and objectives of such AI systems are significantly easier to decode. Anthropic has also disclosed its constitution, demonstrating its commitment to transparency.
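To make the loop concrete, below is a schematic sketch of the critique-and-revision step at the heart of Constitutional AI. The `generate` function, the single principle, and the prompt templates are illustrative stand-ins, not Anthropic's actual constitution or training code.

```python
# Schematic sketch of Constitutional AI's critique-and-revision loop.
# `generate` is a stand-in for any chat-model call; the principle text and
# prompt templates are illustrative, not Anthropic's actual constitution.
CONSTITUTION = [
    "Choose the response that is most helpful while avoiding harmful,"
    " unethical, or politically partisan content.",
]

def generate(prompt: str) -> str:
    """Placeholder for a language-model call; returns a canned reply here."""
    return f"[model output for: {prompt[:40]}...]"

def constitutional_revision(user_prompt: str) -> str:
    response = generate(user_prompt)
    for principle in CONSTITUTION:
        # A second copy of the model critiques the draft against the rule.
        critique = generate(
            f"Principle: {principle}\nPrompt: {user_prompt}\n"
            f"Response: {response}\nPoint out any violations of the principle."
        )
        # The model then rewrites its own answer to address the critique.
        response = generate(
            f"Original response: {response}\nCritique: {critique}\n"
            "Rewrite the response so that it complies with the principle."
        )
    # In RLAIF, revised responses like this one become training data, and an
    # AI preference model ranks response pairs in place of human raters.
    return response
```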
However, although Anthropic’s RLAIF-trained outputs exhibit higher harmlessness in their responses, their increase in helpfulness isn’t as pronounced. Anthropic describes its Constitutional AI as “harmless but non-evasive” as it now not only objects to harmful queries but can also explain its objections to the user, thereby marginally improving its helpfulness and significantly improving its transparency. “When it comes to trading off one between the other, I would certainly rather Claude be boring than that Claude be dangerous […] eventually, we’ll get the best of both worlds”, said Dario Amodei in an interview.

Source: Anthropic
Anthropic’s development of Constitutional AI has been seen by some as a promising breakthrough, enabling its commercial products, such as Claude, to follow concrete and transparent ethical guidelines. As Anthropic puts it:
“AI models will have value systems, whether intentional or unintentional. One of our goals with Constitutional AI is to make those goals explicit and easy to alter as needed.”
Claude
Claude is Anthropic’s flagship AI model family, first launched in a closed alpha in April 2022. As of May 2025, Claude can be accessed (1) directly on Anthropic’s platform as a chatbot, (2) via Anthropic’s API, or (3) through cloud infrastructure partners, such as Amazon Bedrock and Google Cloud Vertex AI.
Although Claude 1’s reported parameter count of 430 million was far smaller than GPT-3’s 175 billion parameters, its context window of 9K tokens was larger than even GPT-4’s (8K tokens, or roughly 6K words). Claude’s capabilities span text creation and summarization, search, coding, and more. During the closed alpha phase, Anthropic limited access to key partners such as Notion, Quora, and DuckDuckGo. Over time, Amodei envisions developing Claude into a “country of geniuses in a datacenter”, “capable of solving very difficult problems very fast”.

Source: Anthropic
In March 2023, Claude was released for public use in the UK and the US via a limited-access API. Anthropic claimed that Claude’s answers were more helpful and harmless than those of other chatbots. It was also one of the first publicly available models capable of parsing PDF documents. Autumn Besselman, head of People and Comms at Quora, reported that:
“Users describe Claude’s answers as detailed and easily understood, and they like that exchanges feel like natural conversation”.
In July 2023, Anthropic released Claude 2 in the form of a new beta website. This version was designed to offer better conversational abilities, deeper context understanding, and improved moral behavior compared to its predecessor. Claude 2’s reported parameter count doubled to 860 million, while its context window increased significantly to 100K tokens (approximately 75K words) with a theoretical limit of 200K.

Source: Anthropic
In March 2024, Anthropic announced the Claude 3 family, comprising three models: Haiku, Sonnet, and Opus. The three models trade off performance and speed: Haiku, designed for lightweight tasks, is the cheapest and fastest; Opus is the slowest but highest-performing and most capable; and Sonnet falls in between. Claude 3 offered larger context windows than most other models in the industry, as all three models supported 200K tokens (which Anthropic claimed is “the equivalent of a 500-page book”) with a theoretical limit of 1 million.
Compared to earlier iterations of Claude, these models generally had faster response times, lower refusal rates of harmless requests, higher accuracy rates, and fewer biases in responses. Claude 3 models could process different visual formats and achieve near-perfect recall on long inputs. In terms of reasoning, all three models outperformed GPT-4 on math & reasoning, document Q&A, and science diagram benchmarks.

Source: Anthropic
Claude 3 was also notable because it was the first model family with “character training” in its fine-tuning process. Researchers aimed to train certain traits, such as “curiosity, open-mindedness, and thoughtfulness” into the model, as well as traits that reinforce to the model that, as an AI, it lacks feelings and memory of its conversations. In the training process, which was a version of Constitutional AI training, researchers had Claude generate responses based on certain character traits and then “[rank] its own responses… based on how well they align with its character.” This allowed researchers to “teach Claude to internalize its character traits without the need for human interaction or feedback.”
In July 2024, researchers expanded on Claude 3 by releasing Claude 3.5 Sonnet and announcing plans to release Claude 3.5 Opus and Haiku later in the year. Anthropic reported that “Claude 3.5 Sonnet [set] new industry benchmarks for graduate-level reasoning (GPQA), undergraduate-level knowledge (MMLU), and coding proficiency (HumanEval),” outperforming Claude 3 Opus and GPT-4o on multiple tasks. Claude 3.5 Haiku was released in October 2024, positioned as a lightweight model designed for fast, low-cost inference while maintaining performance comparable to Claude 3 Opus on many standard benchmarks.
In February 2025, the company followed with Claude 3.7 Sonnet, its most capable model as of May 2025. Claude 3.7 introduced a new technique called “hybrid reasoning,” which adjusts computational effort based on the complexity of a user query, enabling faster responses to simple prompts and deeper deliberation on complex ones. The model set new performance records on academic and industry benchmarks, including SWE-bench for code and TAU-bench for task execution.

Source: Anthropic
In parallel, Anthropic launched Claude Code, an integrated programming assistant that allows users to write, edit, debug, and ship code from within the Claude interface. Claude Code supports real-time collaboration with external tools, including GitHub repositories, terminal commands, and browser-based IDEs. Anthropic also introduced new core platform capabilities that expand Claude’s functionality across use cases:
Artifacts: A persistent output pane within Claude’s interface, designed for viewing and interacting with Claude-generated content such as documents, spreadsheets, product mockups, or code. Artifacts remain accessible across sessions and are optimized for both desktop and mobile, allowing users to iterate on outputs without losing context.
Computer Use: A sandboxed desktop environment that Claude can see and control, enabling the model to perform multi-step tasks across simulated software applications. This includes opening files, navigating menus, using system tools, and coordinating multiple applications in sequence — useful for automating traditional enterprise workflows.
Web Search: A built-in live browsing capability that allows Claude to access the internet to answer time-sensitive or obscure questions. When activated, Claude can retrieve up-to-date information, summarize web pages, and cite sources, enhancing factual accuracy and transparency in its responses.
Model Context Protocol (MCP): An open framework that enables Claude to access and reason over organization-specific data across multiple tools. Using MCP, enterprises can grant Claude secure, fine-grained access to resources such as internal documentation, GitHub issues, Notion databases, or Slack threads. This enables Claude to be a more context-aware assistant capable of navigating real work environments.

Source: Anthropic
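As a rough illustration of how MCP is used in practice, the sketch below defines a minimal tool server with the open-source MCP Python SDK (the `mcp` package); the server name, `search_docs` tool, and stubbed lookup are hypothetical.

```python
# Minimal sketch of an MCP tool server using the open-source MCP Python SDK
# (the `mcp` package). The server name, tool, and stubbed lookup are
# hypothetical; a real deployment would query an internal system instead.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("internal-docs")

@mcp.tool()
def search_docs(query: str) -> str:
    """Return matching internal documentation for a query (stubbed)."""
    return f"No documents found for: {query}"

if __name__ == "__main__":
    mcp.run()  # once registered, Claude can call `search_docs` via MCP
```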
As of May 2025, Claude is available to individual users through three plans: Free ($0/month), Pro ($17-20/month), and Max ($100-200/month). Paid subscriptions allow higher usage limits and early access to advanced features. Users can also interact with Claude on mobile; in 2024, Anthropic launched Claude apps for iOS and Android, a year after OpenAI released its ChatGPT apps. Anthropic also offers developers the ability to build with the Claude API, charging based on the number of input and output tokens. Claude models are also available on Amazon Bedrock and Google Cloud’s Vertex AI, where they can be used to build custom AI applications.
As Claude has advanced in its helpfulness, Anthropic has remained committed to limiting its harmfulness. In an interview in August 2023, Dario Amodei noted that:
“A mature way to think about these things is not to deny that there are any costs, but to think about what the costs are and what the benefits are. I think we’ve been relatively responsible in the sense that we didn’t cause the big acceleration that happened late last year and at the beginning of this year.”
In September 2023, the company published its Responsible Scaling Policy, a 22-page document that defines new safety and security standards for various model sizes. In July 2024, Anthropic announced a new initiative for soliciting and funding third-party evaluations of its AI models, to abide by its Responsible Scaling Policy.
Past Models
Foundation model companies typically release new versions of their models over time, both iterations of existing models (e.g. GPT-3.5, GPT-4, GPT-4o) and models with different focuses (e.g. GPT for language, DALL·E for images, Sora for video). Anthropic has similarly iterated on its models over time.
For example, Claude Instant was released alongside Claude itself in March 2023 and described by Anthropic as a “lighter, less expensive, and much faster option”. Initially released with the same 9K-token context window as Claude, Claude Instant was described by some users as less conversational but equally capable compared to Claude.
In August 2023, an API for Claude Instant 1.2 was released. Building on the strengths of Claude 2, its context window expanded to 100K tokens — enough to analyze the entirety of “The Great Gatsby” within seconds. Claude Instant 1.2 also demonstrated higher proficiency across a variety of subjects, including math, coding, and reading, with a lower risk of hallucinations and jailbreaks.
Market
Customer
Anthropic’s customer segments include consumers, developers, and enterprises. Users can chat with Claude through its web platform, Claude.ai, and its mobile apps. While Anthropic does not disclose its exact number of customers, in July 2023 the company claimed to be serving thousands of customers and business partners. Claude’s website saw an estimated 100 million visits in March 2025, and its iOS app was estimated to have over 150K global downloads within a week of its release in May 2024.
According to Anthropic’s Economic Index, Claude’s users span a wide range of knowledge professions, from computer programmers to editors, tutors, and analysts. The largest occupational categories as a percentage of all Claude prompts include:
Computer & Mathematical (37.2%)
Arts & Media (10.3%)
Education & Library (9.3%)
Office & Administrative (7.9%)
Life, Physical & Social Science (6.4%)
Business & Financial (5.9%)
Anthropic publicly lists customers such as Pfizer, Notion, Zoom, Slack, Canva, Bridgewater, Perplexity, and Asana. As usage deepens, Anthropic expects enterprises to become the primary revenue driver. As Amodei put it:
“Startups are reaching $50 million+ annualized spend very quickly… but long-term, enterprises have far more spend potential.”

Source: Anthropic via arXiv
Consumers
Consumers engage with Claude through the Claude.ai platform and mobile apps for iOS and Android. Use cases range from writing and summarization to study help, coding assistance, and project planning.
Anthropic offers a freemium model:
Free access to Claude 3.5 Sonnet
$17-20/month subscriptions for higher usage limits (~45 messages every 5 hours)
Features such as Claude Code (for writing/debugging code) and Artifacts (persistent visual workspaces) have expanded Claude’s utility for students, creators, and solo workers.
Developers
Anthropic offers multiple options for developers to build with Claude, including a dedicated API, Claude Code, and integrations via Amazon Bedrock and Google Cloud’s Vertex AI. The Claude API is priced on a usage-based model and gives developers access to Claude’s full model family. Claude Code, available through Claude’s main interface, allows developers to write, edit, debug, and ship code end-to-end. It supports GitHub, CLI tools, and web-based IDEs.
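For developers, a minimal API call looks roughly like the sketch below, which uses the official `anthropic` Python SDK; the model identifier and prompt are illustrative and should be checked against current documentation.

```python
# Minimal sketch of a Messages API call using the official `anthropic` Python
# SDK. The model alias and prompt are illustrative; consult Anthropic's docs
# for current model identifiers. Requires ANTHROPIC_API_KEY in the environment.
import anthropic

client = anthropic.Anthropic()  # reads the API key from the environment

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumed alias; substitute as needed
    max_tokens=512,
    messages=[
        {"role": "user", "content": "Summarize this meeting transcript: ..."}
    ],
)
print(message.content[0].text)  # usage is billed by input and output tokens
```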
In parallel, Claude is powering a new generation of developer tools. IDEs like Cursor, Codeium, Replit*, and Bolt.new integrate Claude for code completion, real-time suggestions, and natural-language querying of codebases. For individual users, Anthropic also offers a high-usage “Claude Max” tier. The Max plan costs $100/month for 225 messages every 5 hours, or $200/month for 900 messages every 5 hours.
Enterprises
Anthropic has expanded Claude’s adoption among enterprises, offering both packaged SaaS plans and custom integrations. The Team plan, priced at $25/user/month, allows organizations to use Claude collaboratively, with access controls and shared context across teammates. For larger customers, the Enterprise plan unlocks custom usage tiers, fine-tuning options, SSO support, and security reviews.
A number of case studies highlight Claude’s usage across different sectors:
Lyft has integrated Claude for customer care, reducing support resolution time by 87% and piloting new AI-powered rider and driver experiences.
European Parliament adopted Claude to power “Archibot,” making 2.1 million official documents searchable and reducing research time by 80%.
Amazon’s Alexa+ is partially powered by Claude models, incorporating Anthropic’s jailbreaking resistance and safety tools.

Source: Anthropic
As a general AI assistant, Claude’s services aren’t limited to the industries and use cases mentioned above. For example, Claude has powered Notion AI due to its unique writing and summarization traits. It has also served as the backbone of Sourcegraph, a code completion and generation product; Poe, Quora’s experimental chatbot; and Factory, an AI company that seeks to automate parts of the software development lifecycle.
Market Size
Generative AI is projected to significantly impact many industries. In June 2023, generative AI was reported to have the capacity to automate activities accounting for 60%-70% of employees’ time and is projected to enable productivity growth of 0.1%-0.6% annually through 2040. One report released in May 2024 found that 75% of knowledge workers already use AI at work. Additionally, as of November 2023, over two million developers, representing 92% of Fortune 500 companies, were using OpenAI’s APIs.

Source: Goldman Sachs
Consumer-oriented chatbots have also exploded in popularity. ChatGPT, OpenAI’s public chatbot, reached 100 million users just two months after its launch in November 2022. By comparison, it took Facebook, Twitter, and Instagram between two to five years after launch to reach the same milestone, making ChatGPT “the fastest-growing consumer internet app of all time” by that metric.
The market for generative AI products was valued at $40 billion in 2022 and, as of August 2023, is projected to grow to $1.3 trillion by 2032. The industry has skyrocketed since the release of ChatGPT in late 2022, as the total funding for generative AI reached $29.1 billion in 2023, a 268.4% increase in deal value from 2022. Funding continues to climb, as Q2 2024 alone saw $27.1 billion in AI funding, which amounted to 28% of overall venture funding in that quarter.

Source: CB Insights
Growth is likely to be driven by task automation, with customer operations, marketing and sales, software, and R&D accounting for 75% of use cases. Across these functions, some estimate that generative AI has the potential to add $4.4 trillion in annual value. It is estimated that half of all work activities will be automated by 2045.
A key metric underscoring the rapid growth in generative AI is the number of parameters in commercial NLP models. One of the first transformer models, Google’s BERT-Large, was created with 340 million parameters. Within just a few years, the parameter size of leading AI models has grown exponentially. PaLM 2, released in May 2023, had 340 billion parameters at its maximum size, and GPT-4, released in March 2023, is estimated to have nearly 1.8 trillion parameters. Though newer models are not always larger, as some prioritize efficiency and cost over performance, the general trend is that newer, high-performing models have more parameters.

Source: Our World In Data
The number of parameters in a model is closely correlated with its accuracy. While most models excel at classifying, editing, and summarizing text and images, their capacity to execute tasks successfully varies greatly. As CEO Dario Amodei put it, “they feel like interns in some areas and then they have areas where they spike and are really savants.”

Source: HAI Stanford
As a model’s capacity increases, so does the magnitude of its impact, which could be either positive or harmful. CEO Dario Amodei expressed his fear about the long-term impact of such models by stating that:
“No one cares if you can get the model to hotwire [a car] — you can Google for that. But if I look at where the scaling curves are going, I’m actually deeply concerned that in two or three years, we’ll get to the point where the models can… do very dangerous things with science, engineering, biology, and then a jailbreak could be life or death.”
Competition
OpenAI
With the majority of Anthropic’s founding team originating from OpenAI, it’s no surprise that OpenAI is its largest competitor. Founded in 2015, OpenAI is best known for its Generative Pre-trained Transformer (GPT) models, the first of which was released in 2018. Its public chatbot ChatGPT, initially powered by GPT-3.5, amassed well over 100 million monthly active users within two months of its launch.
As of May 2025, OpenAI had raised $62 billion in funding, including a $10 billion investment from Microsoft in early 2023, making it the most well-funded AI firm. In March 2025, OpenAI’s valuation reached $300 billion, making it one of the highest-valued private companies in the world.
Like Anthropic, OpenAI offers an API platform for its foundation models, and its AI chatbot, ChatGPT, allows users to interact with GPT-3.5 for free. With a paid plan priced at $20 per month, users can also access GPT-4 and GPT-4o, as well as OpenAI’s image generator, DALL-E. While GPT-4o has a smaller context window and a less recent knowledge cut-off than Claude 3.5, it has the advantages of image generation and web searches. In December 2024, OpenAI launched a Pro plan priced at $200 per month, with extended usage limits and advanced features, similar to Claude’s Max plan.
OpenAI has invested in multiple safety initiatives, including a new superalignment research team announced in July 2023 and a new Safety and Security Committee introduced in May 2024. However, current and former employees have continued to criticize OpenAI for deprioritizing safety. In spring 2024, OpenAI’s safety team was given only a week to test GPT-4o. “Testers compressed the evaluations into a single week, despite complaints from employees,” in order to meet a launch date set by OpenAI executives. Days after the May 2024 launch, Jan Leike, OpenAI’s former Head of Alignment, became the latest executive to leave the company for Anthropic, claiming that “over the past years, safety culture and processes [had] taken a backseat to shiny products” at OpenAI.
In June 2024, a group of current and former employees at OpenAI alleged that the company was “recklessly racing” to build AGI and “used hardball tactics to prevent workers from voicing their concerns about the technology.” Along with former employees of DeepMind and Anthropic, they signed an open letter calling for AI companies to “support a culture of open criticism” and to allow employees to share “risk-related concerns.” In addition, one of OpenAI’s co-founders, Ilya Sutskever, left the company in May 2024 and by June 2024 had started a new company called Safe Superintelligence Inc. (SSI), stating that safety is in the company’s name because it is “our mission, our name, and our entire product roadmap because it is our sole focus.”
In January 2025, the Trump administration announced the Stargate project, a $500 billion initiative aimed at advancing AI infrastructure in the US through the construction of over 20 large-scale data centers nationwide. The project will be funded by SoftBank and MGX, while OpenAI, Oracle, Microsoft, Nvidia, and Arm will be responsible for operational and technological contributions.
xAI
Founded in July 2023 by Elon Musk, xAI is a frontier AI lab spun out of Musk’s broader X/Twitter ecosystem. From its inception, xAI has aimed to build a “maximum truth-seeking AI” that, in Musk’s words, “understands the universe.” The company released its first model, Grok-1, in November 2023, integrated directly into X’s premium subscription tiers. Since then, xAI has scaled both model capabilities and its underlying infrastructure.
xAI has integrated its foundation models into the broader X platform, including use cases like real-time post summarization, trending topic analysis, and content moderation. In March 2025, Musk formally merged xAI and X, consolidating their data, infrastructure, and distribution. The deal valued xAI at $80 billion.
As of May 2025, xAI is operating its own 200K+ H100 GPU cluster in Memphis, known as the “Colossus” supercomputer, with plans to expand it to a 1 million GPU training cluster in 2026. It is believed to be one of the world’s largest AI supercomputers.
In February 2025, xAI released Grok 3, the company’s most powerful model as of May 2025. According to Musk, Grok 3 was trained with 10x more compute than its predecessor and introduces a new family of reasoning-optimized models. xAI claims these models outperform OpenAI’s o3-mini on mathematical benchmarks like AIME 2025. As of May 2025, xAI has raised a total of $12.4 billion.
DeepSeek
DeepSeek is an AI lab founded in May 2023 and based in Hangzhou, China. In January 2025, DeepSeek launched its consumer-facing chatbot, built on the DeepSeek-R1 model, with free apps available for iOS and Android. Within a week, it became the most downloaded app on the US App Store, briefly overtaking ChatGPT, and was widely cited as contributing to an 18% drop in Nvidia’s stock price. DeepSeek is rumored to have raised $1 billion in a February 2025 funding round. DeepSeek-R1’s performance is comparable to GPT-4o and o1 across general-purpose tasks, and the company claims it trained earlier versions of the model at a fraction of the cost of its competitors — reportedly $6 million, compared to $100 million for GPT-4.
Cohere
Cohere, which aims to bring AI to businesses, was founded in 2019 by AI researchers Aidan Gomez, Nick Frosst, and Ivan Zhang. Gomez serves as CEO; prior to starting Cohere, he interned at Google Brain, where he worked under Geoffrey Hinton and co-wrote the breakthrough paper introducing the transformer architecture.
Cohere provides in-house LLMs for tasks like summarization, text generation, classification, data analysis, and search to enterprise customers. These LLMs can be used to streamline internal processes, with notable customization and hyperlocal fine-tuning not often provided by competitors.
Although its cost per token generated or interpreted is lower than competitors’, Cohere likely charges high upfront costs for enterprises seeking custom-built LLMs. Like Anthropic and OpenAI, Cohere also offers a knowledge assistant, Coral, designed to streamline businesses’ internal operations. Unlike OpenAI and Anthropic, however, Cohere’s mission appears to center more on the accessibility of LLMs than on the strength of its foundation models.
In an interview with Scale AI founder Alexandr Wang, Cohere CEO Aidan Gomez emphasized this exact need as the key problem Cohere seeks to address:
“[We need to get] to a place where any developer like a high school student can pick up an API and start deploying large language models for the app they’re building […] If it’s not in all 30 million developers toolkit, we’re going to be bottlenecked by the number of AI experts, and there always will be a shortage of that talent.”
While Cohere is considered a major AI competitor, it has underperformed its fundraising goals. In June 2024, the company raised a total of $450 million from investors like Nvidia, Salesforce Ventures, Cisco, and PSP Investments, below its goal of $500 million to $1 billion. Similarly, in June 2023, Cohere raised a $270 million Series C, and its resulting valuation of $2.1 billion was well under the $6 billion valuation expected earlier that year. As of May 2025, Cohere’s total funding stands at $1.1 billion.
Hugging Face
Aspiring to become the “GitHub for machine learning”, Hugging Face is the main open-source community platform for AI. As of July 2024, it hosted over 750K free and publicly available models spanning NLP, computer vision, image generation, and audio processing. Any user on the platform can post and download models and datasets; its most downloaded models include GPT-2, BERT, and Whisper. Although most transformer models are too large to be trained without supercomputers, users can access and deploy pre-trained models more easily.
Hugging Face operates on an open-core business model, meaning all users have access to its public models. Paying users get additional features, such as higher rate limits, its Inference API integration, additional security, and more. Compared to Anthropic, its core product is more community-centric. Clem Delangue, CEO of Hugging Face, noted in an interview:
“I think open source also gives you superpowers and things that you couldn't do without it. I know that for us, like I said, we are the kind of random French founders, and if it wasn't for the community, for the contributors, for the people helping us on the open source, people sharing their models, we wouldn't be where we are today.”
Hugging Face’s relationship with its commercial partners is less direct than that of its competitors, including Anthropic. Nonetheless, it has raised $395.2 million, including a $235 million Series D in August 2023. That round valued the company at $4.5 billion, double its valuation from its previous round in May 2023 and more than 100x its reported ARR. As a platform spreading large foundation models to the public (albeit models 1-2 generations behind state-of-the-art models such as those being developed by Anthropic and OpenAI), Hugging Face represents a significant player in the AI landscape. In 2022, Hugging Face, with the help of over 1K researchers, released BLOOM, which it claims is “the world’s largest open multilingual language model.”
Other Research Organizations
Anthropic also competes with various established AI research labs backed by large tech companies; most notably, Google’s DeepMind, Meta AI, and Microsoft Azure AI.
DeepMind: Founded in 2010, DeepMind was acquired by Google in 2014 and later merged with Google Brain to form Google DeepMind, part of Google’s effort to “accelerate our progress in AI”. DeepMind’s Gemini offers similar abilities to GPT-4 and is the crux of Google DeepMind’s research efforts. As Google’s AI research branch, DeepMind indirectly competes with Anthropic through its development of scalable LLMs, which are likely used to power Google’s AI search features and Google Cloud NLP services.
Meta: In May 2025, Meta introduced three new models in its Llama 4 family: Scout, Maverick, and Behemoth. Scout and Maverick are publicly available via Llama.com and platforms like Hugging Face, while Behemoth remains in training. As of May 2025, Meta AI, the assistant integrated into WhatsApp, Messenger, and Instagram, runs on Llama 4 in 40 countries.
Prior to this, Meta released LLaMA 3 in April 2024. In July 2023, Meta announced that it would be making LLaMA open-source. “Open source drives innovation because it enables many more developers to build with new technology”, posted Mark Zuckerberg. “It also improves safety and security because when software is open, more people can scrutinize it to identify and fix potential issues.” While LLaMA 3 lags behind GPT-4 in reasoning and mathematics, it remains one of the largest open-source models available and rivals major models in some performance aspects, making it a likely choice for independent developers.
Microsoft Azure: Among its hundreds of services, Microsoft Azure’s AI platform offers tools and frameworks for users to build AI solutions. While Azure offers a portfolio of AI products instead of its own foundation models, these products include top models like GPT-4o and LLaMA 2, which can be used for services ranging from internal cognitive search to video and image analysis. Unlike Claude, which is low-code and accessible for teams of all sizes, Azure primarily targets developers and data scientists capable of coding on top of existing models. As such, Microsoft Azure AI serves as an indirect competitor to Anthropic, increasing the accessibility and customizability of foundation AI models.
Business Model
Anthropic generates revenue through both usage-based APIs and subscription-based access to its Claude models. According to internal estimates shared with CNBC, API sales accounted for the vast majority of the company’s revenue in 2024: 60–75% from third-party API integrations, 10–25% from direct API customers, 15% from chatbot subscriptions, and 2% from professional services.

Source: Tanay’s Newsletter
Anthropic offers three main subscription tiers:
Free: Access to Claude via web and mobile, with support for image and document queries using Claude 3.5 Sonnet.
Pro ($20/month or $17/month annually): Adds higher usage limits, Projects functionality, and access to Claude 3 Opus and Haiku.
Max ($100–$200/month): Designed for power users, with significantly higher usage allowances, early access to new features, and priority access during peak traffic. ($100/month = 225 messages/5hr; $200/month = 900 messages/5hr)
For teams, Anthropic introduced a Team Plan in May 2024, priced at $25–30/user/month. It includes admin controls, usage consolidation, and shared Claude access, making it suitable for organizations deploying AI broadly across departments. Custom Enterprise Plans and Education Plans are also available for higher-volume clients.

Source: Anthropic
Anthropic’s API pricing places Claude among the more expensive models in terms of output cost, particularly for Opus ($15/MTok input, $75/MTok output). For comparison, OpenAI charges $15/MTok for o1 prompts and $60/MTok for completions.

Source: Anthropic
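For a sense of what these rates imply in practice, the sketch below runs a back-of-the-envelope monthly cost comparison using the per-million-token prices quoted above; the workload figures are hypothetical.

```python
# Back-of-the-envelope monthly cost comparison using the per-million-token
# (MTok) prices quoted above. The workload figures are hypothetical, and
# actual prices vary by model version, so treat the output as illustrative.
PRICES_PER_MTOK = {              # (input, output) prices in USD per MTok
    "Claude 3 Opus": (15.0, 75.0),
    "OpenAI o1": (15.0, 60.0),
}

input_tokens = 2_000             # assumed prompt size per request
output_tokens = 500              # assumed completion size per request
requests_per_month = 100_000     # assumed request volume

for model, (in_price, out_price) in PRICES_PER_MTOK.items():
    cost = requests_per_month * (
        input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price
    )
    print(f"{model}: ${cost:,.0f} per month")
# Prints: Claude 3 Opus: $6,750 per month; OpenAI o1: $6,000 per month
```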
Traction
Anthropic has seen strong commercial momentum across strategic partnerships, enterprise adoption, and revenue growth. As of March 2025, the company reportedly reached $1.4 billion in annual recurring revenue, up from $1 billion at the end of 2024 and $150 million in 2023. According to internal estimates, third-party APIs account for 60–75% of total sales, followed by direct API contracts (10–25%), chatbot subscriptions (15%), and professional services (2%).

Source: Sacra
As of May 2025, Anthropic’s largest infrastructure partner is Amazon. In November 2024, Amazon deepened its relationship with Anthropic by committing $4 billion in funding, bringing its total investment to $8 billion. As part of the agreement, AWS remains Anthropic’s primary cloud and training provider. Claude is available via Amazon Bedrock, where it serves as core infrastructure for tens of thousands of customers. Notable examples include Pfizer, Intuit, Perplexity, and the European Parliament. Claude has also been adopted by companies including Slack, Zoom, GitLab, Notion, Factory, Asana, BCG, Bridgewater, and Scale AI.
In July 2024, Menlo Ventures, a major Anthropic investor, partnered with Anthropic to create a $100 million fund for investing in early-stage AI startups. Startups backed by the fund will receive investments of at least $100K, access to Anthropic’s models, $25K in Anthropic credits, and mentorship from Anthropic leaders. In May 2025, Anthropic was reported to be partnering with Apple to create an AI-coding platform.
Anthropic continues to expand its direct platform traction with developers and startups through integrations and partnerships. “Several customers have already deployed Claude’s computer use ability,” CEO Dario Amodei noted in November 2024, “and Replit moved fast.” Replit* is one of several next-generation IDE companies, alongside Cursor, Vercel, and Bolt.new, that have adopted Claude to support development workflows.
In November 2024, Amodei pointed out that Anthropic’s team grew from ~300 to ~950 people in less than a year, before intentionally slowing hiring to maintain quality. “Talent density beats talent mass,” Amodei said. “We’ve focused on senior hires from top companies and theoretical physicists who learn fast.” In May 2024, Anthropic made several notable leadership hires, such as Krishna Rao as its first CFO, Instagram co-founder Mike Krieger as its Head of Product, and Jan Leike as the leader of a new safety team.
Valuation
As of May 2025, Anthropic has raised a total of $18.2 billion in funding. Its latest round, a $3.5 billion Series E at a $61.5 billion valuation, was completed in March 2025. Amazon remains Anthropic’s largest backer, contributing a total of $8 billion across two investments, including a $4 billion commitment in November 2024. As part of the deal, Anthropic named AWS its primary cloud and training partner and continued working with Amazon to optimize custom Trainium chips and power Alexa+.
Prior to this, Google committed to investing up to $2 billion in Anthropic in October 2023, starting with an initial $500 million upfront investment. In May 2023, Anthropic had also previously raised a $450 million Series C led by Google and Spark Capital at a $4.1 billion valuation. SK Telecom, one of the participants in Anthropic’s Series C, invested another $100 million after announcing its partnership with Anthropic in August 2023. As of May 2025, Anthropic’s total funding makes it the second most well-funded AI company, behind only OpenAI.
At the time of Anthropic’s Series E round in March 2025, it was valued at $61.5 billion, with an estimated $1.4 billion ARR, implying a 44x revenue multiple. While all of Anthropic’s direct comparables are private companies (OpenAI, xAI, Cohere, Mistral), Anthropic could be compared to Amazon, Microsoft, Meta, and Google in the public markets, due to their significant roles as cloud infrastructure providers, strategic investors, and distribution channels for Anthropic’s models.

Source: Koyfin
As a private AI foundation model company with proprietary infrastructure and enterprise partnerships, Anthropic’s implied 44x ARR multiple at the time of its Series E is significantly higher than those of large public tech companies like Microsoft at 9.9x, Meta at 6.9x, Google at 4.8x, and Amazon at 2.8x, which act as both competitors and cloud distribution partners.
Anthropic’s premium reflects investor expectations around the scalability of Anthropic’s Claude platform, its growing share of enterprise AI workloads, and its revenue expansion. However, this valuation is forward-looking and relies on continued product differentiation, effective monetization of Claude APIs, and long-term defensibility in a market increasingly saturated by Big Tech and open-source models.
Key Opportunities

Source: Claude
Practical AI
Anthropic has built multiple features into Claude with the intention of creating a competitive edge in serving enterprise customers. Compared to ChatGPT, Bard, and Bing Chat, three of the largest publicly available chatbots backed by established tech giants, Anthropic claims that Claude’s ethical focus makes it the easiest to align with enterprise values. A survey in June 2023 reported that only 29% of business leaders had faith in the ethical use of commercial AI at the time, though 52% were “very confident” in ethical use within five years.
Anthropic also believes its extensive focus on minimizing harm while maximizing helpfulness gives Claude a head start in commercial task automation. One of Claude’s advantages is its large context window: Claude 3.5 Sonnet’s 200K-token context window significantly exceeds GPT-4o’s 128K tokens. While models tend to struggle to identify relevant details in larger context windows, Claude 3.5 can reportedly still recall information accurately. On the “Needle in a Haystack” benchmark, which measures retrieval accuracy, Claude 3.5 achieved over 99% recall, even at longer context lengths. While OpenAI has not disclosed GPT-4o’s performance on this benchmark, an independent evaluation of GPT-4, which also has a 128K-token context window, found that the model’s retrieval accuracy tends to degrade for context lengths over 100K tokens.

Source: Anthropic
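For context, the sketch below shows how a “Needle in a Haystack” style evaluation is typically constructed: a known fact is buried at a chosen depth in long filler text, and the model is scored on whether it retrieves it. The filler text, needle, and stubbed model call are placeholders, not the published benchmark harness.

```python
# Sketch of how a "Needle in a Haystack" style evaluation is typically built:
# bury a known fact at a chosen depth inside long filler text, ask the model
# to retrieve it, and score exact recall. The filler, needle, and stubbed
# model call are placeholders, not the published benchmark harness.
NEEDLE = "The best thing to do in San Francisco is to eat a sandwich in Dolores Park."
FILLER = "The quick brown fox jumps over the lazy dog. "

def build_haystack(total_sentences: int, needle_depth: float) -> str:
    sentences = [FILLER] * total_sentences
    sentences.insert(int(total_sentences * needle_depth), NEEDLE + " ")
    return "".join(sentences)

def ask_model(prompt: str) -> str:
    """Placeholder for a call to the model under evaluation."""
    return "Eat a sandwich in Dolores Park."  # stubbed answer for illustration

def recall_at(total_sentences: int, needle_depth: float) -> bool:
    prompt = (
        build_haystack(total_sentences, needle_depth)
        + "\nWhat is the best thing to do in San Francisco?"
    )
    return "Dolores Park" in ask_model(prompt)

# Sweeping context length and needle depth yields the recall heatmaps that
# benchmarks like this one report.
print(recall_at(total_sentences=5_000, needle_depth=0.5))
```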
Claude’s context window was only surpassed in May 2024 by Google’s Gemini 1.5 Pro, which offers a 1 million token context window and a retrieval rate of over 99%. However, Anthropic has shared research suggesting that longer context windows are vulnerable to many-shot jailbreaking. Context windows of even just 70K tokens allow more shots, or examples, in a prompt, which malicious actors can exploit to produce harmful responses from the model.

Source: Anthropic
Ultimately, Claude’s large context window and accurate retrieval allow it to effectively parse longer inputs that may be commonplace in certain industries. Continuing to anticipate and invest in safety concerns could help Anthropic cement itself as a trusted, practical choice for enterprises.
In April 2023, Anthropic shared plans to build “Claude-Next”, a “frontier model” intended to be many times more capable than existing AI. The development of this model, requiring up to $5 billion in funding over the following two years, was described as the “next-gen algorithm for AI self-teaching”, a description reminiscent of human-level AGI.
Support from Policymakers
Given the increased political scrutiny of AI, from US and international lawmakers alike, it is imperative that Anthropic secures political support for its business and safety practices. Anthropic has been recognized by politicians as a key player in AI safety. In June 2023, Dario Amodei, along with two AI professors, was called to testify at a congressional hearing about the long-term risks of AI.
Additionally, in May 2023, US Vice President Kamala Harris invited Dario Amodei, along with the CEOs of Google, Microsoft, and OpenAI, to discuss safeguards to mitigate the potential risks of AI. “It is imperative to mitigate both the current and potential risks AI poses to individuals, society, and national security. These include risks to safety, security, human and civil rights, privacy, jobs, and democratic values”, the White House noted in a press release. Subsequently, the US issued a public request for comment on policies supporting safe and trustworthy AI through the National Telecommunications and Information Administration (NTIA).
As a leader in the movement towards transparent AI, Anthropic has exhibited its support for greater accountability. Following the NTIA’s request for AI policy comments, Anthropic released its own “AI Accountability Policy Comment”, in which it recommended more stringent safety evaluations and clearer standards for AI interpretability. Many of the administration’s recommendations for developing responsible AI aligned with Anthropic’s suggestions, underscoring its focus on safety relative to its competitors. What could be institutional roadblocks for competitors failing to meet such standards may ultimately benefit Anthropic in its efforts to develop stronger foundation models.
In an attempt to win further political support for its policies, Anthropic has also participated in the growing trend of federal lobbying around AI, which topped $560 million and involved over 350 companies, nonprofits, and universities in 2023. That year, Anthropic reportedly spent over $280K lobbying for increased funding for the National Institute of Standards and Technology and for a bill that would create “a national AI research infrastructure.” In March 2024, the company also hired its first in-house lobbyist, former Department of Justice attorney Rachel Appleton, to lobby on AI policy. Ultimately, continuing to garner support from policymakers is critical to the future success of Anthropic and other AI companies.
Key Risks
Potential Copyright Infringement
Foundation model companies like OpenAI, Meta, and Microsoft have faced multiple lawsuits over their practices of scraping copyrighted materials for training data. Notably, in December 2023, the New York Times (NYT) sued OpenAI and Microsoft, alleging that the companies had used millions of NYT articles to train their chatbots. While OpenAI does not disclose its training data, the NYT concluded that its paywalled articles were included in OpenAI’s training data, since its chatbot could reproduce “near-verbatim excerpts from [its] articles.” The NYT claimed that it should be compensated for “billions of dollars in statutory and actual damages” caused by “the unlawful copying and use of The Times’s uniquely valuable works.”
Subsequently, a group of eight other newspapers, including The Chicago Tribune, and The Center for Investigative Reporting (CIR) have filed similar lawsuits against OpenAI and Microsoft. In a statement, Monika Bauerlin, CEO of The CIR, wrote:
“OpenAI and Microsoft started vacuuming up our stories to make their product more powerful, but they never asked for permission or offered compensation, unlike other organizations that license our material. This free rider behavior is not only unfair, it is a violation of copyright. The work of journalists, at CIR and everywhere, is valuable, and OpenAI and Microsoft know it.”
News isn’t the only industry with a stake in how AI companies source their training data; artists, authors, and musicians have taken similar stands against AI companies’ use of their copyrighted material. Novelists have filed lawsuits against OpenAI and NVIDIA for using their copyrighted books to train AI models. A group of artists filed a complaint against Stability AI, DeviantArt, and Midjourney, alleging that Stability AI’s product, Stable Diffusion, used “copyrighted works of millions of artists… as training data.” Photography company Getty Images also sued Stability AI on similar grounds. In October 2023, Anthropic faced its own copyright lawsuit when it was sued by music publishers, including Universal Music Group (UMG), for allegedly “distributing almost identical lyrics” to many of its artists’ copyrighted songs.
However, not all media organizations have fought against AI companies’ use of their content. Multiple major news organizations, including TIME, The Associated Press, Vox Media, The Atlantic, the Financial Times, and NewsCorp, have signed licensing deals and entered partnerships with OpenAI, granting OpenAI their copyrighted content in exchange for compensation or access to OpenAI technology. Photography companies like Getty, Shutterstock, and Adobe have also created their own AI generators, often with the support of major AI companies.
Additionally, a few of the copyright claims have already been dismissed in the AI companies’ favor. For instance, in July 2023, a group of authors including comedian Sarah Silverman alleged that OpenAI and Meta had used the authors’ copyrighted works in their training data without permission. However, in February 2024, a California court dismissed most of their claims, keeping only the core claim of direct copyright infringement. Similarly, in June 2024, a judge dismissed most of the claims in a lawsuit against OpenAI, Microsoft, and GitHub, in which a group of developers claimed that GitHub Copilot violated the DMCA by suggesting code snippets from public GitHub repositories without attribution.
Even if AI companies prevail in copyright infringement lawsuits, the broader issue of training AI models on scraped web data, some of which may be harmful or misleading, remains. In July 2024, it was reported that Anthropic, along with Nvidia, Apple, and Salesforce, had used subtitles from over 170K YouTube videos as training data. Some of the videos promoted misinformation and conspiracy theories, such as the “flat-earth theory”, which could be shaping the development of Claude and other major AI models. AI companies are also facing pushback from lawmakers, who have made multiple attempts to require companies to disclose the contents of their training data in order to prevent copyright infringement.
As AI companies continue to draw criticism for their data collection practices and readily available online data grows scarcer, Anthropic must avoid the conflicts that come with lax data usage practices and find new methods of sourcing training data.
Regulatory Risks
The rise of generative AI has also coincided with heightened U.S. regulatory scrutiny of the tech industry, which could threaten the strategic partnerships between Anthropic and Big Tech companies. In 2021, Lina Khan, an antitrust scholar who rose to prominence after writing a 2017 Yale Law Journal article on Amazon’s anti-competitive practices, was appointed as Chair of the Federal Trade Commission (FTC). Under Khan’s tenure, the FTC has sued multiple Big Tech companies, including Amazon in a “landmark monopoly case” in 2023. In June 2024, Khan also led the FTC to strike a deal with the Justice Department to investigate Microsoft, OpenAI, and Nvidia for possibly violating antitrust laws in the AI industry.
Khan’s FTC has also investigated how tech giants may unfairly influence the AI industry with their multi-billion dollar investments and stakes in AI startups. In January 2024, the FTC opened an inquiry into the partnerships between tech giants, namely Microsoft, Amazon, and Google, and major AI startups OpenAI and Anthropic. Khan claimed the inquiry would “shed light on whether investments and partnerships pursued by dominant companies risk distorting innovation and undermining fair competition.” In response to the increased scrutiny, in July 2024, Microsoft voluntarily dropped its non-voting observer seat on OpenAI’s board, and Apple also abandoned plans to join.
Anthropic’s structure as a public benefit corporation prevents its Big Tech investors, Amazon and Google, from directly influencing its board decisions, and neither company owns voting shares in Anthropic. However, increased FTC scrutiny could limit Anthropic’s ability to receive funding from and partner with other tech companies, especially Big Tech partners that may already be violating antitrust laws.
Furthermore, the FTC is not the only regulatory power curtailing AI business. In October 2023, President Biden signed an executive order to prevent and mitigate the potential risks of AI. The order requires AI companies like Anthropic to perform safety tests on new AI technologies and share the results with the government before the technologies’ release. It also gives federal agencies the power to take measures to protect against the risks of AI. On the same day, the G7, a group of leading democracies including the US and the UK, with the EU also participating, announced a new Code of Conduct for AI companies that emphasizes safety.
Considering its long-held commitment to safety, Anthropic is arguably more aligned with government regulators than its rival companies are. After the announcements of the executive order and the new G-7 Code of Conduct, Anthropic released a statement celebrating “the beginning of a new phase of AI safety and policy work” and stated that it was “committed to playing [its] part to contribute to the realization of these objectives and encourage a safety race to the top.”
However, as AI models become more advanced and more capable of causing harm, model developers may have to contend with unforeseen risks, such as bioterrorism or warfare. Increasingly sophisticated security breaches also pose a risk; Anthropic itself experienced a minor data leak in 2023. It is therefore critical that Anthropic continues to invest in safety testing, as well as initiatives to mitigate future threats, especially in the wake of increased regulatory oversight.
Secrecy in the AI Landscape
Despite Anthropic’s focus on AI safety, it has struggled to meet certain accountability metrics. Using the Draft EU AI Act as a rubric, a Stanford report on AI transparency ranked Anthropic’s Claude 3 in the bottom half of major foundation models. While Anthropic significantly improved its transparency score from October 2023 to May 2024, it still lags behind the models of Microsoft, Meta, Stability AI, and Mistral AI, and only barely edges out OpenAI and Google. Notably, Anthropic has yet to fully disclose the data sources used to train Claude.
Source: Stanford CRFM
Since OpenAI’s shift toward monetizing its services, competition for the next AI breakthrough has intensified. In March 2023, OpenAI announced the release of GPT-4 with the disclaimer that the model would not be open. “Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar”, reads its technical report.
Despite its many recommendations in favor of increased model restrictions, Anthropic has remained hesitant to support some accountability mechanisms. “While oversight and accountability are crucial for building trustworthy AI, insufficiently thoughtful or nuanced policies, liability regimes, and regulatory approaches could frustrate progress,” it wrote in its June 2023 response to the NTIA’s AI Accountability Policy Request for Comment, a departure from its broader policy recommendations. As Anthropic continues to develop foundation models, it must balance safe disclosure practices against the risk of losing its competitive edge.
Summary
As one of the leading organizations developing large-scale generative models, Anthropic has made significant progress in competing against ChatGPT with its safety-focused alternative, Claude. Since its split from OpenAI, the Anthropic team has conducted extensive research on training and scaling large models, with key breakthroughs in the interpretability and steerability of its AI systems. Its focus on safety and research has helped Anthropic become the second most-funded AI startup after OpenAI and establish key partnerships with Google and Amazon.
Claude has proven to be a viable alternative to GPT models, with its large context window and strong reasoning capabilities. As companies like OpenAI have faced criticism for lax safety practices, Anthropic has distinguished itself as a proponent of AI safety. However, amid new legal, political, and regulatory challenges, the lack of data and model transparency surrounding Claude may undermine its ultimate goal of providing steerable AI. Only time will tell whether Anthropic can practice what it preaches.