Anthropic

Anthropic is an AI safety and research company working to build reliable, interpretable, and steerable AI systems. Anthropic's first product is Claude, an AI assistant for tasks at scale. Its research interests span multiple areas including natural language, human feedback, scaling laws, reinforcement learning, code generation, and interpretability.

Founding Date

Jan 1, 2021

Headquarters

San Francisco, California

Total Funding

$8B

Stage

secondary market

Employees

251-500

Memo

Updated

October 12, 2023

Reading Time

25 min

Thesis

Historically, there have been two major tech-driven transformations (i.e. advancements “powerful enough to bring us into a new, qualitatively different future”): the agricultural and industrial revolutions. These “general-purpose” technologies fundamentally altered the way economies worked. In the short period that computers have existed, machines have already exceeded human intelligence in some respects, leading some to believe artificial intelligence (AI) will be the next general-purpose transformation.

Like the shift from hunter-gatherers to farming or the rise of machine manufacturing, AI is projected to have a significant impact across all parts of the economy. In particular, the rise of generative AI has led to massive breakthroughs in task automation across the economy at large. Well-funded research organizations such as OpenAI and Cohere have built proprietary natural language processing (NLP) models and made them usable by enterprises. These “foundational models” provide a basis for hundreds, if not thousands, of individual developers and institutions to build new AI applications. In April 2023, it was estimated that generative AI could boost productivity enough to increase global GDP by 7% over the ensuing decade.

However, with rising competition between foundational model developers, along with the general growth in AI adoption, ethical concerns have been raised regarding the rapid development and harmful use cases of AI systems. In March 2023, more than 1K signatories, including Elon Musk, signed an open letter calling for a six-month “AI moratorium”, claiming that many AI organizations were “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

Amongst foundational model developers, Anthropic has positioned itself as a company with a particular focus on AI safety and describes itself as building “AI research and products that put safety at the frontier.” Founded by engineers who quit OpenAI due to tension over ethical and safety concerns, Anthropic has developed its own method to train and deploy “Constitutional AI”, or large language models (LLMs) with embedded values that can be controlled by humans. Since its founding, its goal has been to deploy “large-scale AI systems that are steerable, interpretable, and robust”, and it has continued to push towards a future powered by responsible AI.

Founding Story

Anthropic was founded in 2021 by ex-OpenAI VPs and siblings Dario Amodei (CEO) and Daniela Amodei (President). Prior to launching Anthropic, Dario Amodei was VP of Research at OpenAI, while Daniela Amodei was VP of Safety & Policy.

In 2016, Dario Amodei, along with coworkers at Google and collaborators at other institutions, co-authored “Concrete Problems in AI Safety”, a paper discussing the inherent unpredictability of neural networks. The paper outlined concrete research problems such as avoiding negative side effects and unsafe exploration. Many of the issues it discussed stem from a lack of “mechanistic interpretability”, i.e. an understanding of the inner workings of complex models. The co-authors, several of whom would later join OpenAI alongside Dario, sought to communicate the safety risks of rapidly scaling models and would eventually form part of the foundation of Anthropic.

In 2019, OpenAI announced that it would be restructuring from a nonprofit to a “capped-profit” organization, a move intended to increase its ability to raise capital and deliver returns for investors. It received a $1 billion investment from Microsoft later that year to continue its development of beneficial artificial intelligence, as well as to power AI supercomputing services on Microsoft Azure’s cloud platform. Oren Etzioni, then-CEO of the Allen Institute for AI, commented on this shift by saying:

“[OpenAI] started out as a non-profit, meant to democratize AI. Obviously when you get [$1 billion] you have to generate a return. I think their trajectory has become more corporate.”

OpenAI’s restructuring ultimately generated internal tension regarding its direction as an organization aimed to “build safe AGI and share the benefits with the world”. It was reported that “the schism followed differences over the group’s direction after it took a landmark $1 billion investment from Microsoft in 2019.” In particular, the fear of “industrial capture” or OpenAI’s monopolization of the AI space loomed over many within OpenAI, including Jack Clark, former policy director, and Chris Olah, mechanistic interpretability engineer.

Anthropic CEO Dario Amodei attributed Anthropic’s eventual split from OpenAI to concerns over AI safety, and described the decision to leave OpenAI to found Anthropic as follows:

“So there was a group of us within OpenAI, that in the wake of making GPT-2 and GPT-3, had a kind of very strong focus belief in two things. I think even more so than most people there. One was the idea that if you pour more compute into these models, they’ll get better and better and that there’s almost no end to this. I think this is much more widely accepted now. But, you know, I think we were among the first believers in it. And the second was the idea that you needed something in addition to just scaling the models up, which is alignment or safety. You don’t tell the models what their values are just by pouring more compute into them. And so there were a set of people who believed in those two ideas. We really trusted each other and wanted to work together. And so we went off and started our own company with that idea in mind.”

Dario Amodei left OpenAI in December 2020, and 14 other researchers eventually left to join Anthropic as well, including his sister Daniela Amodei.

Product

Foundational Model Research

Since its founding, Anthropic has dedicated its resources to developing “large-scale AI systems that are steerable, interpretable, and robust”, with an emphasis on aligning them with human values so that they are “helpful, honest, and harmless”. It has conducted extensive research to develop general AI assistants abiding by these values.

Soon after its inception, Anthropic released a wave of papers investigating the unpredictability of large-scale generative models. In February 2022, it published a paper, “Predictability and Surprise in Large Generative Models”, analyzing how capabilities can emerge unpredictably as models scale. It identified that although overall model performance improves smoothly with the number of parameters, accuracy on some tasks (e.g. three-digit addition) can jump abruptly once certain parameter-count thresholds are reached. “Developers can’t tell you precisely what new behaviors will emerge as they scale up models”, the authors wrote. “For example, the ability to complete a specific task can sometimes emerge abruptly as developers increase the size of a model.”

Source: Anthropic

This unpredictability may lead to unintended consequences, the paper notes, particularly if models are deployed solely for economic gain or in the absence of policy interventions.

In April 2022, Anthropic introduced a methodology that uses preference modeling and reinforcement learning from human feedback (RLHF) to train “helpful and harmless” AI assistants. The models would engage in open-ended conversations with human annotators, generating multiple responses for each prompt. The human would then choose the response they found most helpful or least harmful, rewarding the model for those traits over time.
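To make the preference-modeling step concrete, the sketch below shows a standard pairwise ranking loss of the kind commonly used to train RLHF reward models. It is an illustrative example rather than Anthropic’s actual training code; the reward_model callable, which scores a prompt-response pair, is hypothetical.

```python
import torch.nn.functional as F

def preference_loss(reward_model, prompt, chosen, rejected):
    """Pairwise preference loss for a reward model.

    The loss is minimized when the model assigns a higher score to the
    response the human preferred (`chosen`) than to the one they rejected.
    """
    score_chosen = reward_model(prompt, chosen)      # scalar score
    score_rejected = reward_model(prompt, rejected)  # scalar score
    # -log sigmoid(margin): a large positive margin yields near-zero loss
    return -F.logsigmoid(score_chosen - score_rejected).mean()
```

The trained preference model then supplies the reward signal that the reinforcement learning stage optimizes against.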

Source: Anthropic

Ultimately, such alignment efforts made the RLHF models equally or even more capable than plain language models on zero-shot and few-shot tasks (i.e. tasks with no prior examples or only a handful of examples).
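For reference, the zero-shot versus few-shot distinction comes down to how many worked examples are included in the prompt; the illustrative strings below are hypothetical prompts, not drawn from Anthropic’s evaluations.

```python
# Zero-shot: the task is described but no examples are provided
zero_shot_prompt = "Translate to French: 'The weather is nice today.'"

# Few-shot: a handful of worked examples precede the new input
few_shot_prompt = (
    "Translate to French.\n"
    "English: Good morning -> French: Bonjour\n"
    "English: Thank you -> French: Merci\n"
    "English: The weather is nice today -> French:"
)
```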

Among other findings, the authors of the paper emphasized the “tension between helpfulness and harmlessness”. For example, a model trained to always give accurate responses could become harmful when fed hazardous prompts. Conversely, a model that answers “I cannot answer that” to any potentially dangerous or opinionated prompt, though not harmful, would also not be helpful.

Source: Anthropic

Anthropic has since been able to use its findings to push language models toward desired outcomes such as avoiding social biases, adhering to ethical principles, and self-correction. In August 2023, Anthropic released a paper entitled "Studying Large Language Model Generalization with Influence Functions” in which it functionally reverse-engineered the influence of training data on a model’s output.

“Our aforementioned ability to localize influence to specific layers and tokens also suggests a way forward for connecting influence functions to mechanistic interpretability”, the authors claimed. The paper also claimed that the findings extended beyond constitutional LLMs and could even impact AI research in other fields such as the life sciences.

Constitutional AI

In December 2022, Anthropic released a novel approach to training helpful and harmless AI assistants. Labeled “Constitutional AI”, the process involves (1) training a model via supervised learning to abide by a set of ethical principles drawn from a variety of sources, including the UN’s Universal Declaration of Human Rights, (2) creating a similarly-aligned preference model, and (3) using the preference model as an adjudicator over the initial model, which gradually improves its outputs through reinforcement learning.

Source: TechCrunch

CEO Dario Amodei noted that this Constitutional AI model could be trained along any set of chosen principles, saying:

“I’ll write a document that we call a constitution. […] what happens is we tell the model, OK, you’re going to act in line with the constitution. We have one copy of the model act in line with the constitution, and then another copy of the model looks at the constitution, looks at the task, and the response. So if the model says, be politically neutral and the model answered, I love Donald Trump, then the second model, the critic, should say, you’re expressing a preference for a political candidate, you should be politically neutral. […] The AI grade the AI. The AI takes the place of what the human contractors used to do. At the end, if it works well, we get something that is in line with all these constitutional principles.”

This process, known as “reinforcement learning from AI feedback” (RLAIF), functionally automates the role of the human in RLHF, making it a scalable safety measure. Simultaneously, Constitutional AI increases the transparency of said models, as the goals and objectives of such AI systems are significantly easier to decode.
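The loop described above can be sketched in outline form. The code below is illustrative only, assuming a hypothetical model object with a generate method rather than Anthropic’s actual pipeline; the constitution shown borrows the “politically neutral” example from the quote above.

```python
CONSTITUTION = [
    "Choose the response that is most helpful, honest, and harmless.",
    "Do not express a preference for any political candidate.",
]

def critique_and_revise(model, prompt, n_rounds=2):
    """Supervised phase of Constitutional AI: the model repeatedly
    critiques and revises its own draft against each principle."""
    draft = model.generate(prompt)
    for _ in range(n_rounds):
        for principle in CONSTITUTION:
            critique = model.generate(
                f"Critique the response below against this principle: {principle}\n"
                f"Response: {draft}")
            draft = model.generate(
                f"Rewrite the response to address the critique.\n"
                f"Response: {draft}\nCritique: {critique}")
    return draft

def rlaif_preference(model, prompt, response_a, response_b):
    """RLAIF phase: the model, not a human, judges which response better
    follows the constitution; these judgments train the preference model."""
    verdict = model.generate(
        f"According to the constitution {CONSTITUTION}, which response to\n"
        f"'{prompt}' is better?\nA: {response_a}\nB: {response_b}\n"
        f"Answer with A or B:")
    return response_a if verdict.strip().startswith("A") else response_b
```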

However, although Anthropic’s RLAIF-trained models exhibit greater harmlessness in their responses, the gains in helpfulness are less pronounced. Anthropic describes its Constitutional AI as “harmless but non-evasive”: it not only objects to harmful queries but can also explain its objections to the user, thereby marginally improving its helpfulness and significantly improving its transparency. “When it comes to trading off one between the other, I would certainly rather Claude be boring than that Claude be dangerous […] eventually, we’ll get the best of both worlds”, said Dario Amodei in an interview.

Source: Anthropic

Anthropic’s development of Constitutional AI has been a promising breakthrough enabling its commercial products, such as Claude, to follow concrete and transparent ethical guidelines. As Anthropic puts it:

“AI models will have value systems, whether intentional or unintentional. One of our goals with Constitutional AI is to make those goals explicit and easy to alter as needed.”

Claude AI

Launched in a closed alpha in April 2022, Claude is Anthropic’s flagship AI assistant. Claude 1’s parameter count was estimated at 430 billion, compared to GPT-3’s 175 billion, and its context window of 9K tokens was greater than even GPT-4’s (8K tokens, or roughly 6K words). Claude’s capabilities span text creation and summarization, search, coding, and more. During its closed alpha phase, Anthropic limited access to key partners such as Notion, Quora, and DuckDuckGo.

In March 2023, Claude was released for public use in the UK and US via a limited-access API. Anthropic claims that Claude’s answers are more helpful and less harmful than those of other chatbots. It also has the capability to parse PDF documents. Autumn Besselman, head of People and Comms at Quora, reported that:

“Users describe Claude’s answers as detailed and easily understood, and they like that exchanges feel like natural conversation”.

In July 2023, a new version of Claude, Claude 2, was released in the form of a new beta website. This version was designed to offer better conversational abilities, deeper context understanding, and improved moral behavior compared to its predecessor. Claude 2’s estimated parameter count doubled from the previous iteration to 860 billion, while its context window increased significantly to 100K tokens (approximately 75K words) with a theoretical limit of 200K.
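As an illustration of how the larger context window is typically used, the following is a minimal sketch of a long-document summarization request. It assumes the 2023-era text-completions interface of Anthropic’s anthropic Python package, an ANTHROPIC_API_KEY set in the environment, and a hypothetical report.txt file.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# A long document: up to roughly 75K words fits inside the 100K-token window
with open("report.txt") as f:
    document = f.read()

response = client.completions.create(
    model="claude-2",
    max_tokens_to_sample=500,
    prompt=f"{anthropic.HUMAN_PROMPT} Summarize the key points of the "
           f"following document:\n\n{document}{anthropic.AI_PROMPT}",
)
print(response.completion)
```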

As Anthropic’s “most capable system yet” as of October 2023, Claude 2 has shown promise in the area of harmlessness. Nonetheless, within its evaluation tests, it provided unfavorable outputs for 4 of 328 prompts attempting to jailbreak it or access harmful information.

Source: Anthropic

Claude Instant

Claude Instant was released alongside Claude itself in March 2023 and is described by Anthropic as a “lighter, less expensive, and much faster option”. Initially released with a context window of 9K tokens, the same as that of Claude, Claude Instant is described by some users as less conversational but similarly capable compared to Claude.

In August 2023, an API for Claude Instant 1.2 was released. Building on the strengths of Claude 2, its context window expanded to 100K tokens — enough to analyze the entirety of “The Great Gatsby” within seconds. Claude Instant 1.2 also demonstrated higher proficiency across a variety of subjects, including math, coding, and reading, with a lower risk of hallucinations and jailbreaks.

Source: Anthropic

Market

Customer

Anthropic’s Claude has been used across a variety of different industries, including:

  • Customer Service & Sales: Claude offers rapid and friendly responses to customer service requests, reportedly faster than alternatives, and early users seemed to choose Claude for its more natural dialogue. In August 2023, SK Telecom, Korea’s leading telecommunications operator, announced a partnership to use Claude across a wide range of telco applications, most notably for customer service.

  • Legal: Claude’s expanded context window allows it to both read and write longer documents, which can be used to parse legal documents. With a score of 76.5% on the multiple-choice section of the Bar exam, Claude can also provide factual support for lawyers.

  • Coaching: Claude’s helpful and ethical system can be used to provide useful advice as a personal growth companion for users.

  • Search: Though Claude doesn’t have live access to the internet, its API can be integrated into private and public knowledge bases. DuckDuckGo, an early adopter of Claude, developed DuckAssist using Claude’s summarization capabilities on Wikipedia to provide instant answers to user queries.

  • Back-Office: Claude can also be integrated into various office workflows, such as extracting information from work documents or emails. For example, the Claude App for Slack is built on Slack’s platform; with permission from the user, it can access messages for use in its future responses.

As a general AI assistant, Claude’s services aren’t limited to the industries and use cases mentioned above. For example, Claude has powered the backbone of Notion AI due to its strong writing and summarization abilities. Claude has also been used to power code completion and generation products such as Sourcegraph’s Cody.

In light of this growth, Anthropic has sought to balance scaling the popularity of its AI models with their safety. It has consistently expressed concern about the risk of jailbreaks as models become more integrated with everyday workflows. In an interview in August 2023, Dario Amodei noted that:

“A mature way to think about these things is not to deny that there are any costs, but to think about what the costs are and what the benefits are. I think we’ve been relatively responsible in the sense that we didn’t cause the big acceleration that happened late last year and at the beginning of this year.”

Business partners and customers can use Claude’s assistance across a variety of sectors including customer service, legal, coaching, search, back-office, and sales. Claude has been used to power AI products such as Notion AI, Quora’s experimental chat app Poe, and DuckDuckGo’s AI search tool, DuckAssist, among others. Other notable customers include Slack, Zoom, and AssemblyAI.

Source: Anthropic

In July 2023, Anthropic claimed to be serving “thousands” of customers and business partners. As of August 2023, Claude for enterprises was still in a closed beta alongside the public-facing chatbot, with applications open to interested customers.

Market Size

Generative AI is projected to have a significant impact across industries. In June 2023, generative AI was reported to have the capacity to automate activities accounting for 60%-70% of employees’ time and is projected to enable productivity growth of 0.1%-0.6% annually through 2040. A report in April 2023 claimed that nearly two-thirds of professions could be partially automated by AI.

As of Q1 2023, over 1 in 10 adults in the UK used generative AI in the workplace daily. Consumer-oriented chatbots have also exploded in popularity. ChatGPT, OpenAI’s public chatbot, reached 100 million users just two months after launch.

The market for generative AI products was valued at $40 billion in 2022 and, as of August 2023, was projected to grow to $1.3 trillion by 2032. The industry has skyrocketed since the release of ChatGPT in late 2022, receiving 5x more private funding in the first half of 2023 than in the entirety of 2022.

Source: CBInsights

Growth is likely to be driven by task automation, with customer operations, marketing and sales, software engineering, and R&D accounting for roughly 75% of the estimated value. Across these functions, McKinsey estimates generative AI has the potential to add $4.4 trillion in annual value. It is estimated that half of all work activities will be automated by 2045.

A key metric underscoring the rapid growth of the generative AI economy is the number of parameters in commercial NLP models. One of the first transformer models, Google’s BERT-Large, was created with 340 million parameters. Five years later, GPT-4, released in March 2023, is estimated to have nearly 1.8 trillion parameters.

The number of parameters in a model is closely correlated with its accuracy. While most models excel at classifying, editing, and summarizing text and images, their capacity to execute tasks successfully varies greatly. As CEO Dario Amodei put it, “they feel like interns in some areas and then they have areas where they spike and are really savants.”

As a model’s capacity increases, so does its magnitude of impact, which can be either positive or harmful. CEO Dario Amodei expressed his fear about the long-term impact of such models, stating:

“No one cares if you can get the model to hotwire [a car] — you can Google for that. But if I look at where the scaling curves are going, I’m actually deeply concerned that in two or three years, we’ll get to the point where the models can… do very dangerous things with science, engineering, biology, and then a jailbreak could be life or death.”

Competition

OpenAI

With the majority of Anthropic’s founding team originating from OpenAI, it’s no surprise that OpenAI is its largest competitor. Founded in 2015, OpenAI is best known for its development of the Generative Pre-trained Transformer (GPT) models. With its first model released in 2018, its public chatbot ChatGPT, powered by GPT-3.5, amassed an estimated 100 million monthly active users within two months of its launch. As of October 2023, OpenAI has raised $11.3 billion in funding, including a $10 billion investment from Microsoft in early 2023. In April 2023, OpenAI’s valuation was reported at $27 billion.

Like Anthropic, OpenAI offers an API for its foundational models, including GPT, its natural language model; DALL-E, its image generator; and Whisper, its speech-to-text and translation model. Although OpenAI’s models charge higher prices per token on average, Claude’s capabilities often fall short of ChatGPT’s in prompt completion, text creation, code generation, and other tasks. Despite the difference in performance, Claude has been touted as exhibiting better writing abilities, notably with regard to creative writing. In 2019, OpenAI shifted from a nonprofit to a capped-profit structure, a move that ultimately incited the divide leading to the founding of Anthropic. Given this divergence of values, OpenAI will continue to race Anthropic to produce the next state-of-the-art models.

Cohere

Founded by Aidan Gomez, a former Google Brain AI researcher and co-author of the breakthrough paper that introduced the transformer model, Cohere is another leading NLP model provider. Through its API, it provides in-house LLMs that business partners can use for summarization, text generation, and classification. These models can be used to streamline internal processes, with a degree of customization and fine-tuning not often provided by competitors.

Although its cost per token generated or interpreted is lower than competitors’, Cohere likely charges high upfront costs for enterprises seeking custom-built LLMs. Like Anthropic and OpenAI, Cohere also offers a knowledge assistant, Coral, to streamline businesses’ internal operations. In comparison to OpenAI and Anthropic, Cohere’s mission appears to be centered more on the accessibility of LLMs than on the strength of its foundational models.

In an interview with Scale AI founder Alexandr Wang, Cohere CEO Aidan Gomez emphasized this exact need as the key problem Cohere seeks to address:

“[We need to get] to a place where any developer like a high school student can pick up an API and start deploying large language models for the app they’re building […] If it’s not in all 30 million developers toolkit, we’re going to be bottlenecked by the number of AI experts, and there always will be a shortage of that talent.”

In June 2023, Cohere raised a $270 million Series C. The round was raised at a valuation of $2.1 billion, well under the $6 billion valuation expected earlier that year. Nonetheless, Cohere is backed by leading tech companies such as Salesforce and Nvidia, positioning it toward the front of the generative AI market alongside Anthropic.

Hugging Face

Known as the “GitHub for machine learning”, Hugging Face is a leading open-source platform for all things AI. As of August 2023, it hosts nearly 300K free and publicly available models spanning NLP, computer vision, image generation, and audio processing. Any user on the platform can post and download models and datasets, with its most popular models including Stable Diffusion, GPT-2, BERT, and LLaMA 2. Although most transformer models are too large to be trained without supercomputers, users can access and deploy pretrained models with ease.

Hugging Face operates on an open-core business model, meaning all users have access to its public models. Paying users get additional features, such as model hosting, its Inference API integration, additional security, and more. Compared to Anthropic, its core product is more community-centric. Clem Delangue, CEO of Hugging Face, noted in an interview:

“I think open source also gives you superpowers and things that you couldn't do without it. I know that for us, like I said, we are the kind of random French founders and if it wasn't for the community, for the contributors, for the people helping us on the open source, people sharing their models, we wouldn't be where we are today.”

In short, Hugging Face’s relationship with its commercial partners is less direct than that of its competitors, including Anthropic. Nonetheless, it has raised $395.2 million, with its $235 million Series D in August 2023 valuing the company at $4.5 billion, which was double its valuation at its previous round in May 2022 and more than 100x its reported ARR. As a platform spreading large foundational models to the public (albeit models 1-2 generations behind state-of-the-art models such as those being developed by Anthropic and OpenAI), Hugging Face represents a significant player in the AI landscape.

Other Research Organizations

Anthropic also competes with various established AI research labs backed by tech giants; most notably, Google’s DeepMind, Meta AI, and Microsoft Azure AI.

DeepMind: Founded in 2010, DeepMind was acquired by Google in 2014. Now known as Google DeepMind, it was merged with Google Brain in 2023 as part of Google’s effort to “accelerate our progress in AI”. DeepMind’s Gemini model, in development as of October 2023, is expected to offer capabilities similar to GPT-4 and is the crux of Google DeepMind’s research efforts. As Google’s AI research branch, DeepMind serves as an indirect competitor to Anthropic through its development of scalable LLMs, which are likely used to power Google’s AI search features and Google Cloud NLP services.

Meta: LLaMA 2 is Meta AI’s large language model, accessible for both research and commercial purposes. In July 2023, Meta announced that it would be making LLaMA 2 open source. “Open source drives innovation because it enables many more developers to build with new technology”, posted Mark Zuckerberg. “It also improves safety and security because when software is open, more people can scrutinize it to identify and fix potential issues.” Though more likely to hallucinate than both GPT-4 and Claude 2, it is one of the largest open-source models available, and as such will likely be used widely by independent developers.

Source: Arthur AI

Microsoft Azure: Among its hundreds of services, Microsoft Azure’s AI platform offers tools and frameworks for users to build AI solutions. Though its primary service is AI infrastructure as opposed to foundational models themselves, the products available on its platform include both GPT-4 and LLaMA 2, which can be used for services ranging from internal cognitive search to video and image analysis. Unlike Claude, which is low-code and accessible for teams of all sizes, Azure primarily targets developers and data scientists capable of building on top of existing models. As such, Microsoft Azure AI serves as an indirect competitor to Anthropic, increasing the accessibility and customizability of foundational AI models.

Business Model

Anthropic charges users on a usage-based pricing model. Every million tokens (approximately 750K words) processed by Claude Instant costs $1.63 for prompts and $5.51 for completions. For Claude 2, the price increases to $11.02 per million prompt tokens and $32.68 per million completion tokens. For reference, OpenAI charges $60 per million prompt tokens and $120 per million completion tokens for GPT-4 with a 32K context window. As of October 2023, Claude is only accessible to select business partners. Though Anthropic doesn’t currently provide sector- or company-specific features for its customers, it will occasionally fine-tune its models for certain use cases, as it has for Korea’s primary mobile operator SK Telecom.
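As a rough worked example of what this usage-based pricing implies (a back-of-the-envelope estimate under the prices above, not an official quote), summarizing a hypothetical 60K-word report with Claude 2 would cost well under a dollar per run:

```python
# Per-million-token prices quoted above (as of October 2023)
CLAUDE2_PROMPT_PER_M = 11.02
CLAUDE2_COMPLETION_PER_M = 32.68

# Hypothetical workload: a 60,000-word report (~80,000 tokens at roughly
# 0.75 words per token) summarized into a 1,000-token answer
prompt_tokens = 80_000
completion_tokens = 1_000

cost = (prompt_tokens / 1_000_000) * CLAUDE2_PROMPT_PER_M + \
       (completion_tokens / 1_000_000) * CLAUDE2_COMPLETION_PER_M
print(f"Estimated cost per run: ${cost:.2f}")  # ~$0.91
```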

Traction

Though Claude’s exact number of users is unknown, Anthropic claims to have “thousands” of customers. Among its select business partners are Quora, Notion, Zoom, DuckDuckGo, and SK Telecom, with services ranging from enhanced search to interactive consumer solutions to code assistance. In February 2023, Anthropic declared Google its “preferred cloud provider” in an effort to expand its cloud infrastructure. The partnership came alongside Google’s $300 million investment in Anthropic, which reportedly secured Google a roughly 10% stake.

In April 2023, Anthropic announced a partnership with Scale AI, allowing the two to provide full-stack generative AI solutions to their customers. Scale’s AI experimentation tools, such as its model validation, data connectors, and prompt templates, enable customers to fully utilize Claude’s capabilities. In September 2023, Amazon announced that it would invest up to $4 billion in the company, starting with an initial investment of $1.25 billion, granting Anthropic access to Amazon’s proprietary Trainium and Inferentia chips. Amazon and Anthropic also announced a strategic partnership in September 2023, making AWS Anthropic’s “primary cloud provider”. Moving forward, Anthropic has expressed continued interest in both Google and Amazon as strategic partners. Furthermore, Anthropic is expected to hit $200 million in revenue by the end of 2023.

Valuation

In September 2023, Amazon announced plans to invest up to $4 billion in Anthropic. It said that it would initially invest $1.25 billion for a minority stake and had negotiated both the option to increase its investment to a total of $4 billion and a commitment on Anthropic’s part to use AWS as its primary cloud provider for critical workloads like safety research and foundation model development.

Shortly after, Anthropic was reported to be in talks to raise an additional $2 billion from investors, including Google, at a $20-$30 billion valuation. In May 2023, Anthropic had raised a $450 million Series C led by Spark Capital with participation from Google, at a $4.1 billion valuation. SK Telecom, one of the participants in Anthropic’s Series C, invested another $100 million after announcing its partnership with Anthropic in August 2023. Anthropic’s total funding as of October 2023 stands at $2.8 billion, with notable investors including Amazon and Google.

Key Opportunities

Practical AI

Claude has ingrained competitive advantages that make it well-suited for enterprises. Compared to ChatGPT, Bard, and Bing Chat, three of the largest publicly available chatbots backed by established tech giants, Claude’s ethical focus makes it the easiest to align with enterprise values. A survey in June 2023 reported that only 29% of business leaders had faith in the ethical use of commercial AI at the time, though 52% were “very confident” in ethical use within five years. Anthropic’s extensive development of Claude’s ability to minimize harmfulness while maximizing helpfulness gives it a head start in the realm of commercial task automation.

Claude’s context window is also over an order of magnitude greater than that of GPT-4. At 100K tokens (roughly 75K words), Claude is capable of finding a single altered sentence in the entirety of The Great Gatsby in under 30 seconds. This extremely large context window allows it to parse the longer prompts that are commonplace in certain industries.

While Claude is capable of summarizing large documents, as demonstrated by a user who asked Claude to summarize a PDF documenting TikTok’s congressional hearing, its competitors either cannot complete the task or do so incompletely or inaccurately. As such, it serves as an invaluable tool for industries where context window limits are a regular pain point.

Source: LMSys Leaderboard; Contrary Research

Anthropic has also expressed plans to build “Claude-Next”, a “frontier model” 10x more capable than existing AI. The model, whose development is expected to require up to $5 billion over the next two years, is described as a “next-gen algorithm for AI self-teaching”, a description reminiscent of human-level AGI. Should Anthropic be the first to achieve this goal, it will likely lead the market.

Support from Policymakers

In May 2023, Vice President Kamala Harris invited Dario Amodei, along with the CEOs of Google, Microsoft, and OpenAI, to discuss safeguards to mitigate the potential risks of AI. “It is imperative to mitigate both the current and potential risks AI poses to individuals, society, and national security. These include risks to safety, security, human and civil rights, privacy, jobs, and democratic values”, the White House noted in a press release. Subsequently, the US government issued a public request for comment on AI accountability through the National Telecommunications and Information Administration (NTIA).

As a leader in the movement towards transparent AI, Anthropic has exhibited its support for greater accountability. Following the NTIA’s nationwide announcement, it released its own “AI Accountability Policy Comment”, in which it recommended more stringent safety evaluations and clearer standards for AI interpretability. Many of the administration’s recommendations to develop responsible AI aligned with Anthropic’s suggestions, underscoring its focus on safety compared to its competitors. What could be institutional roadblocks for competitors failing to meet such standards may ultimately benefit Anthropic in its efforts to develop stronger foundational models.

Key Risks

Unethical Data Practices

In July 2023, both OpenAI and Meta were sued by comedian and author Sarah Silverman, along with authors Christopher Golden and Richard Kadrey, for using copyrighted material as training data without permission. The claimants offered exhibits showing ChatGPT’s ability to summarize Silverman’s memoir The Bedwetter without reproducing the copyright management information included with the work. In the case of Meta, the exhibits allege that the books appear directly in LLaMA’s training dataset, despite the authors never consenting to the use of their materials.

As of August 2023, the lawsuits had not been resolved. Anthropic’s safety research concerns the behavior of its foundational models rather than the provenance of their training data, meaning alignment techniques alone cannot address how data is sourced. As such, avoiding lax data practices is a prerequisite to maintaining its commitment to helpful and harmless AI.

Secrecy in the AI Landscape

Despite Anthropic’s focus on AI safety, it has struggled to meet certain accountability metrics. A June 2023 report that used the draft EU AI Act as a rubric for evaluating AI companies’ disclosure practices ranked Claude second to last. Notably, Anthropic has yet to fully disclose the data sources used to train Claude, and it has also remained vague regarding the prompts used to evaluate the harmfulness of its Constitutional AI.

Since OpenAI’s shift toward monetizing its services, competition for the next AI breakthroughs has intensified. In March 2023, OpenAI announced the release of GPT-4 while declining to disclose details about the model. “Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar”, reads its technical report.

Despite its many recommendations in favor of increased model restrictions, Anthropic has remained hesitant to support some accountability mechanisms. “While oversight and accountability are crucial for building trustworthy AI, insufficiently thoughtful or nuanced policies, liability regimes, and regulatory approaches could frustrate progress,” it wrote in its response to the NTIA’s AI Accountability Policy Request for Comment, a deviation from its policy recommendations. As Anthropic continues to develop foundational models, it must balance safe AI disclosure practices with the overarching risk of losing its competitive edge.

Summary

As one of the leading organizations developing large-scale generative models, Anthropic has made significant progress in competing against ChatGPT with its safety-focused alternative, Claude. Since its split from OpenAI, the Anthropic team has conducted extensive research on the production and scaling of large models, with key breakthroughs in the interpretability and steerability of its AI systems. Nonetheless, as the frontiers of AI research are pushed forward by well-funded research organizations, companies such as Anthropic may fall victim to the same cultural shift OpenAI underwent in 2019. The lack of data and model transparency surrounding Claude may undermine its ultimate goal of providing steerable AI; only time will tell whether Anthropic will be able to practice what it preaches.

Disclosure: Nothing presented within this article is intended to constitute legal, business, investment or tax advice, and under no circumstances should any information provided herein be used or considered as an offer to sell or a solicitation of an offer to buy an interest in any investment fund managed by Contrary LLC (“Contrary”) nor does such information constitute an offer to provide investment advisory services. Information provided reflects Contrary’s views as of a time, whereby such views are subject to change at any point and Contrary shall not be obligated to provide notice of any change. Companies mentioned in this article may be a representative sample of portfolio companies in which Contrary has invested in which the author believes such companies fit the objective criteria stated in commentary, which do not reflect all investments made by Contrary. No assumptions should be made that investments listed above were or will be profitable. Due to various risks and uncertainties, actual events, results or the actual experience may differ materially from those reflected or contemplated in these statements. Nothing contained in this article may be relied upon as a guarantee or assurance as to the future success of any particular company. Past performance is not indicative of future results. A list of investments made by Contrary (excluding investments for which the issuer has not provided permission for Contrary to disclose publicly, Fund of Fund investments and investments in which total invested capital is no more than $50,000) is available at www.contrary.com/investments.

Certain information contained in here has been obtained from third-party sources, including from portfolio companies of funds managed by Contrary. While taken from sources believed to be reliable, Contrary has not independently verified such information and makes no representations about the enduring accuracy of the information or its appropriateness for a given situation. Charts and graphs provided within are for informational purposes solely and should not be relied upon when making any investment decision. Please see www.contrary.com/legal for additional important information.

Authors

William Guo

Fellow
