How was the founder of Anthropic, a company valued at $900 billion, forged?
Anthropic's latest financing round is under negotiation, with rumored valuations approaching $900 billion, surpassing OpenAI.
In the secondary market, the implied valuation of Anthropic's equity has neared $1 trillion, with some tokenization platforms quoting even higher.
Nine months ago, this figure was still $61 billion.
Around the same time, the company's founder and CEO, Dario Amodei, said at the Code with Claude developer conference on May 6 that the company's revenue in the first quarter of 2026 had grown 80-fold year-over-year.
"We originally planned for a 10 times increase," he said, "but the result was 80 times."
Note: Anthropic does not publish quarterly financial reports; almost all of its disclosures are based on ARR (Annual Recurring Revenue), which multiplies the most recent month's revenue by 12 to estimate an annual scale. It is not actual quarterly revenue; it is more like a dynamic reading of "how far we could run in a year at the current speed."
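To make that arithmetic concrete, here is a minimal sketch of the annualization; the monthly figure below is made up, chosen only so that it lands on a round number:

```python
# Minimal illustration of how an ARR figure is derived: annualize the most
# recent month of recurring revenue. The monthly number below is made up.
def arr_from_monthly(monthly_revenue_usd: float) -> float:
    """ARR = most recent month's recurring revenue x 12."""
    return monthly_revenue_usd * 12

monthly = 375_000_000  # hypothetical month: $375M
print(f"ARR estimate: ${arr_from_monthly(monthly) / 1e9:.1f}B")  # -> $4.5B
```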
This company does not have a super application entry point like ChatGPT. Most of its revenue comes from APIs—other companies buy its models and embed them into their own products. Its trajectory serves as a direct litmus test for whether this AI wave can truly monetize.
Amodei himself has thus become an unavoidable subject of study.
The feature article "How Dario Amodei Was Forged" was written by Alex Kantrowitz, founder of Big Technology, and published at the end of July 2025. Drawing on more than twenty interviews and a lengthy face-to-face conversation, it is arguably the most complete portrait to date of the highly controversial Amodei.
The article begins with his childhood in San Francisco, tracing his journey through Princeton's retinal lab, Baidu's computing power experiments, the "Panda Team" within OpenAI, to the birth of Anthropic, the explosion of Claude, and the multi-front war he is waging with other players in Silicon Valley.
But the most significant passage concerns Amodei's early twenties.
His father died from a rare disease. Four years later, the mortality rate for this disease dropped from 50% to below 5%.
"Someone cracked the therapy for this disease and saved some lives," Amodei said, "but more could have been saved."
This is the starting point for everything he does today.
The Insightful Investor (ID: Capital-nature) has carefully translated and edited the piece, and recommends it to everyone. The main text follows.
When I asked Dario Amodei what he had been up to lately, he barely hesitated.
The CEO of Anthropic has been in a combative state throughout 2025: he has clashed with industry peers, debated with government officials, and continually challenged public perceptions of artificial intelligence.
In recent months, he has predicted that AI could soon eliminate 50% of entry-level white-collar jobs; he wrote an article in The New York Times vehemently opposing a ten-year moratorium on AI regulation; he also called for semiconductor export controls on China, which led to a public rebuttal from NVIDIA CEO Jensen Huang.
Amid all this, Amodei met with me on the first floor of Anthropic's headquarters in downtown San Francisco.
He appeared relaxed, energetic, and eager to start, as if he had been waiting for this opportunity to explain why he is doing this.
He wore a blue crewneck sweater over a casual white T-shirt and thick-framed square glasses, and he sat looking straight ahead.
Amodei said that what supports his actions is a firm belief: the pace of AI development is faster than most people realize, which means the opportunities and consequences it brings are also closer than they appear.
"I am indeed one of the most optimistic people about the rapid enhancement of AI capabilities," he told me, "as we get closer to more powerful AI systems, I increasingly want to express these thoughts more forcefully and publicly, to clarify this viewpoint."
Amodei's frankness and sharpness have earned him respect in Silicon Valley, but also ridicule.
In the eyes of some, he is a technological visionary who once led OpenAI's GPT-3 project, the seed of ChatGPT; he is also a safety-conscious leader who later left OpenAI to found Anthropic.
But in the eyes of others, he is a controlling "doomsayer": wanting to slow down AI's progress, shape it according to his will, and keep competitors at bay.
Whether liked or disliked, the AI field must contend with him.
Amodei has turned Anthropic into a real economic force.
The company is currently valued at $61 billion. Founded in 2021, it has yet to turn a profit, but its annual recurring revenue has climbed from zero at founding to $1.4 billion in March 2025, $3 billion in May, and nearly $4.5 billion by July.
Amodei thus calls it "the fastest-growing software company in history at its current scale."
Perhaps more noteworthy than Anthropic's revenue scale is the source of that revenue.
Unlike OpenAI, which relies primarily on applications like ChatGPT, Amodei's biggest bet is on the underlying technology itself. He told me that most of Anthropic's revenue comes from its API, that is, from other companies buying Anthropic's AI models and embedding them into their own products.
Thus, in a sense, Anthropic will become a barometer for AI progress: its ups and downs will directly depend on the strength of the technology itself.
As Anthropic continues to grow, Amodei hopes the company's weight will help him influence the direction of the entire industry. Given his willingness to speak out, to throw punches, and to absorb counterpunches, he may well succeed.
So, if this person is going to help shape what could be the most influential new technology in the world, it is well worth understanding what drives him, how his company operates, and why his timelines are shorter than most of his peers'.
After conducting over twenty interviews with him, his friends, colleagues, and competitors, I believe I have found the answers.
A Disease That Could Have Been Cured
Dario Amodei has been a science enthusiast since childhood.
He was born in San Francisco in 1983 to a Jewish mother and an Italian father. His interests were almost entirely focused on mathematics and physics. The dot-com boom erupted around him during high school, but it barely touched him.
"Writing a website held no appeal for me," he told me, "I was interested in discovering fundamental scientific truths."
At home, Amodei had a very close relationship with his parents. He said they were a loving couple who both wanted to make the world a better place.
His mother, Elena Engel, was responsible for renovation and construction projects for the Berkeley and San Francisco libraries. His father, Riccardo Amodei, was a trained shoemaker.
"They made me understand what is right and what is wrong, and they made me aware of what truly matters in this world," he said, instilling in him a strong sense of responsibility.
This sense of responsibility was already evident during Amodei's undergraduate years at Caltech, where he sharply criticized his classmates for their apathy toward the impending Iraq War.
"The problem is not that everyone is satisfied with the bombing of Iraq; the problem is that most people oppose it in principle but are unwilling to spend even a millisecond of time."
In an article published on March 3, 2003, in the student newspaper Caltech Daily, Amodei wrote, "This situation must change, and it must change immediately; it cannot be delayed."
Later, in his early twenties, Amodei's life was completely transformed.
His father, Riccardo, had struggled with a rare disease for years and died in 2006. His death had a profound impact on Amodei, who shifted his graduate research at Princeton from theoretical physics to biology, hoping to work on human disease.
In a sense, Amodei's life thereafter was tied to his father's passing.
What troubled him most was that less than four years after his father's death, a new breakthrough emerged that transformed this disease from one with a 50% mortality rate to one with a 95% cure rate.
"Someone developed a therapy for this disease, successfully curing it and saving many lives," Amodei said, "but more could have been saved."
Jade Wang, who dated Amodei in the early 2010s, said that the death of Amodei's father has continually shaped his life path.
"This is the difference between his father possibly dying and possibly surviving, you see?" she said. What she meant was that if scientific progress had been slightly faster, Amodei's father might still be alive today.
It simply took Amodei some time to find, in AI, the tool to carry that wish.
When his father's death came up, Amodei's emotions visibly flared.
He believes that his calls for export controls and AI safety measures are often misinterpreted as irrational actions aimed at hindering AI progress.
"When someone says, 'This person is a doomsayer who wants to slow things down,' I really get very angry," Amodei told me, "You just heard what I said. My father died because if those therapies had appeared a few years earlier, he might not have died. I certainly understand the benefits of this technology."
When AI Becomes the Solution
At Princeton, Amodei was still deeply marked by his father's death. He began studying the retina as a way into understanding human biology.
Our eyes capture the world and send signals to the visual cortex, a large part of the brain, roughly 30% of the cerebral cortex, which processes this data and ultimately lets us see images.
If someone wants to delve into the complexity of the human physiological system, the retina is a great starting point.
"He used the retina to observe a complete neural population and truly understand what each cell is doing, or at least gain that possibility," Stephanie Palmer, a fellow researcher at Princeton, told me, "His focus was more on this rather than the eye itself. He wasn't trying to become an ophthalmologist."
While working in Professor Michael Berry's retinal lab, Amodei was very dissatisfied with the methods used at the time to measure retinal signals, so he co-invented a new, better sensor to gather more data.
That is not common in laboratories. It was both impressive and a mark of a certain unwillingness to accept things as they were.
His doctoral thesis won the Hertz Dissertation Award, a prestigious award given to those who discover real-world applications in academic research.
But Amodei always liked to challenge existing norms and had a strong judgment about "what things should look like," which made him stand out in an academic environment.
Berry told me that Amodei was the most talented graduate student he had ever seen. However, Amodei's emphasis on technological advancement and teamwork did not fit well in a system where individual achievement was the core evaluation standard.
"I think deep down, he is a somewhat proud person," Berry told me, "I think that before this, throughout his entire academic career, whenever he accomplished something, people would stand up and applaud him. But here, that kind of thing didn't really happen."
After leaving Princeton, the door to AI opened for Amodei.
He began postdoctoral research under Stanford researcher Parag Mallick, studying proteins inside and around tumors to detect metastatic cancer cells.
This work was very complex and made Amodei aware of the limits of individual capabilities. He began to seek technological solutions.
"The complexity of the underlying biological problems makes it feel like it has exceeded human scale," Amodei told me, "To truly understand all this, you need hundreds or thousands of human researchers."
Amodei saw this potential in emerging AI technology.
At that time, the explosive growth of data and computing power was driving breakthroughs in machine learning, a branch of AI that had long been theoretically promising but whose practical results, until then, had been unremarkable.
After starting to experiment with this technology, Amodei realized that it might one day replace those hundreds or thousands of researchers.
"At that time, I began to see discoveries in the AI field, which, in my view, were the only technology capable of bridging that gap," he said. It is something "that can take us beyond the human scale."
Amodei left academia for the corporate world to push AI forward, because that was where the funding for this research was.
He considered starting his own startup but later leaned towards joining Google. Google had a well-funded AI research department, Google Brain, and had just acquired DeepMind.
However, the Chinese search engine company Baidu offered renowned researcher Andrew Ng a budget of $100 million for AI research and deployment.
Ng began to assemble a super team and reached out to Amodei. Amodei was very interested and submitted an application.
When Amodei's application reached Baidu, the team initially didn't know what to make of it.
"His background is impressive, but from our perspective, his background is in biology, not in machine learning," early team member Greg Diamos told me.
Subsequently, Diamos reviewed the code Amodei wrote at Stanford and encouraged the team to hire him.
"I thought, anyone who can write such code must be a remarkable programmer," he said.
In November 2014, Amodei joined Baidu.
The Emergence of AI Scaling Laws
With ample resources, the Baidu team could throw computing power and data at problems to try to improve results. The effects they saw were astonishing.
In experiments, Amodei and his colleagues found that when they increased these factors, AI performance improved significantly. The team published a paper on speech recognition, showing a direct correlation between model size and performance.
"This had a huge impact on me because I saw these very smooth trends," Amodei said.
Amodei's early work at Baidu contributed to what would later be known as AI "scaling laws." Strictly speaking, these laws are more like observations.
Scaling laws suggest that increasing computing power, data, and model size in AI training will lead to predictable performance improvements. In other words, as long as everything is scaled up, AI will get better, and it doesn't necessarily require entirely new methods.
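Written out, a law of this kind is a simple power law. The form below is the one reported in the 2020 paper "Scaling Laws for Neural Language Models" by Kaplan et al. (Jared Kaplan later co-founded Anthropic); the fitted constants are specific to that paper's setup and are shown only as an illustration:

```latex
% Test loss L as a power law in (non-embedding) parameter count N,
% in the form reported by Kaplan et al. (2020):
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N},
\qquad \alpha_N \approx 0.076, \quad N_c \approx 8.8 \times 10^{13}.
% Analogous power laws hold for dataset size D and training compute C,
% which is why "scale everything up" yields predictable improvements.
```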
"In my view, this is the most important discovery I've seen in my life," Diamos told me.
To this day, Amodei may still be the purest proponent of scaling laws among AI research leaders.
Peers like Google DeepMind CEO Demis Hassabis and Meta's chief AI scientist Yann LeCun believe the AI industry still needs more breakthroughs to reach human-level artificial intelligence.
But Amodei speaks with a clear sense of certainty; while not 100% certain, he believes the path forward is quite clear.
As the entire industry builds large data centers on the scale of small cities, he sees extremely powerful AI rapidly approaching.
"What I see is an exponential curve," he said, "When you are on an exponential curve, you can really be easily deceived by it. Two years before the exponential curve goes completely crazy, it looks like it has just begun."
At Baidu, the AI team's progress also sowed the seeds of its disintegration.
As this technology, knowledge, and resources became increasingly valuable, internal power struggles erupted over control. Ultimately, talent left, and the lab fell apart. Ng declined to comment on this.
Just as the Baidu AI team was disintegrating, Elon Musk invited Amodei and several top AI researchers to a now-famous dinner at the Rosewood Hotel in Menlo Park.
Sam Altman, Greg Brockman, and Ilya Sutskever also attended that dinner.
Seeing the potential of AI emerging and fearing that Google might consolidate control over the technology, Musk decided to fund a new competitor, which would become OpenAI.
Altman, Brockman, and Sutskever co-founded this new research organization with Musk.
Amodei also considered joining but had reservations about this nascent organization, so he chose to go to Google Brain.
After spending ten months at Google, Amodei felt trapped in the quagmire of a large company and reconsidered his options.
In 2016, he joined OpenAI and began working on AI safety.
He had already started paying attention to safety issues while at Google. At that time, he was concerned that this rapidly advancing technology could cause harm and co-authored a paper discussing the potential for AI to behave poorly.
After he settled in at OpenAI, his former colleagues at Google published the transformer, the core technology behind today's generative AI wave, in a paper titled "Attention Is All You Need."
The transformer made training faster and allowed model sizes to be much larger than before. Despite this discovery's enormous potential, Google essentially shelved it.
Meanwhile, OpenAI began to take action.
In 2018, OpenAI released its first large language model, named GPT, where the "T" stands for Transformer.
The text generated by this model was often incomplete and incoherent, but it still showed significant improvement over previous language generation methods.
Amodei later became research lead at OpenAI and was directly involved in the next-generation model, GPT-2, which was essentially the same model as GPT, just larger.
The OpenAI team fine-tuned GPT-2 using a technique called "reinforcement learning from human feedback" (RLHF). Amodei was a pioneer of this technique, which helps steer a model's values.
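At the heart of RLHF's first stage is a reward model trained on pairs of responses that human raters compared. Below is a generic toy sketch of that preference-learning step in PyTorch, using the standard Bradley-Terry style loss; it is an illustration on synthetic data, not OpenAI's or Anthropic's actual code:

```python
# Toy sketch of RLHF's reward-modeling step (generic illustration, not any
# lab's actual code). Given pairs of responses where human raters preferred
# one over the other, train a reward model so the preferred response
# scores higher.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    def __init__(self, embed_dim: int = 64):
        super().__init__()
        # Stand-in for a language-model backbone: maps a response
        # embedding to a single scalar reward.
        self.score = nn.Sequential(
            nn.Linear(embed_dim, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)

model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Fake "embeddings" of chosen vs. rejected responses (synthetic data).
chosen = torch.randn(256, 64) + 0.5    # responses humans preferred
rejected = torch.randn(256, 64) - 0.5  # responses humans rejected

for step in range(200):
    # Bradley-Terry loss: -log sigmoid(r_chosen - r_rejected); minimizing
    # it pushes preferred responses' rewards above rejected ones'.
    loss = -torch.nn.functional.logsigmoid(
        model(chosen) - model(rejected)
    ).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The trained reward model is then used to score the language model's outputs during a reinforcement-learning stage, nudging the model toward responses humans prefer.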
As expected, GPT-2 performed much better than GPT. It could already rewrite text, compose, and answer questions with some degree of coherence.
Language models quickly became the focus of OpenAI.
As Amodei's influence within OpenAI grew, so did the controversies surrounding him.
He was a strong writer and often drafted lengthy documents on values and technology. Some colleagues found them inspiring; others felt they were overly assertive, like planting flags.
One memo explored the difference between "M-type companies" and "P-type companies": M-type companies provide market-oriented products, while P-type companies provide products for the public good.
In the eyes of some, Amodei also placed too much emphasis on maintaining secrecy around technological potential and hoped to collaborate with the government to address these issues.
He sometimes appeared somewhat sharp, occasionally belittling projects he did not agree with.
Despite this, OpenAI entrusted Amodei with leading the GPT-3 project and allocated 50% to 60% of the company's computing power to him to build a significantly scaled-up version of the language model.
The leap from GPT to GPT-2 was already substantial, with a tenfold increase in scale. But the leap from GPT-2 to GPT-3 was even more significant. This was a project on a hundredfold scale, costing tens of millions of dollars.
The results were astonishing.
The New York Times quoted some independent researchers who were surprised by GPT-3's ability to write code, summarize, and translate.
Amodei, who was relatively restrained when GPT-2 was released, praised the new model enthusiastically this time.
"It has an emergent quality," he told The New York Times, "It can recognize the patterns you provide and continue the story to some extent."
But the cracks beneath OpenAI's surface were also beginning to split wide open.
Division
With the birth of GPT-3, the first truly capable language model, the stakes felt even higher to Amodei.
After seeing the scaling laws at work in multiple domains, Amodei began to contemplate where this technology would lead and developed a stronger interest in safety issues.
"He looked at this technology and assumed it would succeed," Jack Clark, a close colleague of Amodei at OpenAI, told me.
"If you assume it will succeed, meaning it will become as intelligent as a human, then in some sense, you cannot help but worry about safety issues."
Although Amodei was responsible for model development at OpenAI and controlled a significant portion of computing power, some parts of the company were not under his control.
These included when to release models, personnel arrangements, how the company deployed technology, and how the company presented itself externally.
"A lot of things," Amodei said, "are not something you can control just by training models."
By then, a close-knit circle of colleagues had formed around Amodei. Because he liked pandas, some referred to this circle as the "Panda Team."
He had very different ideas from OpenAI's leadership about how these functions should be handled. Internal strife followed, and a strong mutual disdain gradually hardened between the factions.
In our conversation, Amodei did not hide his feelings.
"The leaders of the company must be trustworthy people," he said, "Their motivations must be sincere. No matter how hard you work to push the company forward technologically, if you are working for someone with insincere motives, for someone dishonest, for someone who does not genuinely want to make the world a better place, then it will not end well. You are just adding bricks to something terrible."
Within OpenAI, some believed that Amodei's focus on safety was actually a path to attempt to gain complete control of the company.
NVIDIA CEO Jensen Huang recently echoed similar criticism after Amodei called for GPU export controls on China.
"He thinks AI is so scary that only they should do it," Huang said.
When discussing Huang's statement, Amodei told me, "That's the most ridiculous lie I've ever heard." He added that he has always hoped to promote "upward competition" by encouraging other companies to emulate Anthropic's safety practices.
"I have never said anything close to 'only this company should develop this technology,'" he said, "I don't know how anyone could derive that meaning from anything I've said. It is a completely incredible and malicious distortion."
NVIDIA recently pushed to revoke some export control measures supported by Amodei and further responded to the controversy.
"We support safe, responsible, and transparent AI," a spokesperson for NVIDIA told me, "Thousands of startups, developers, and the open-source community in our ecosystem are enhancing AI safety. Lobbying for regulatory capture and suppressing open-source will only stifle innovation, making AI less safe, less reliable, and less democratic. This is not what so-called 'upward competition' looks like, nor is it the way for America to win."
OpenAI also responded through a company spokesperson.
"We have always believed that AI should benefit and empower everyone, not just those who claim 'it's too dangerous for anyone other than themselves to safely develop AI,'" the spokesperson said.
"As technology evolves, our decisions on partnerships, model releases, and funding have become the standard for the entire industry, including Anthropic. What hasn't changed is our focus on making AI safer, more useful, and accessible to as many people as possible."
As time went on, the differences between Amodei's team and OpenAI's leadership became increasingly irreconcilable, and a certain rupture became inevitable.
"We spend 50% of our time trying to convince others to accept our views, and the other 50% actually working," Clark said.
Thus, in December 2020, Amodei, Clark, Amodei's sister Daniela, researcher Chris Olah, and a few other colleagues left OpenAI to prepare to start a new venture.
The Birth of Anthropic
In a conference room at Anthropic's office, Clark turned his laptop around to show me one of the earliest documents from Anthropic.
It was a list of candidate names, including names like Aligned AI, Generative, Sponge, Swan, Sloth, and Sparrow Systems.
Anthropic was also among them.
The word means human-centered, oriented toward humankind, and in early 2021 its domain name happened to be available for registration.
"We liked this name; it's good," the team wrote in the spreadsheet.
And so, Anthropic was born.
Anthropic was founded during the height of the COVID-19 pandemic, right in the midst of the second wave, with the team initially meeting entirely over Zoom.
Later, these 15 to 20 employees began to have lunch together weekly in Presidio Park in San Francisco, each bringing their own chairs to sit together and discuss business.
The early mission of the company was simple: to build leading large language models, implement safety practices, and pressure other companies to follow suit; while also publicly sharing their findings without disclosing core technical details of the models.
It may sound strange that fewer than twenty people, meeting in a park on chairs they brought themselves, felt a sense of destiny, especially since they would need billions of dollars to truly accomplish their mission.
But this was the atmosphere in the early days of Anthropic.
"The strangest part of all this is that, from the perspective of insiders, many things seemed so inevitable," Clark said, "We had already done research on scaling laws. We could see the path for models to become stronger."
Former Google CEO Eric Schmidt was one of Anthropic's earliest investors.
He met Amodei through Amodei's then girlfriend, now his wife, whom Schmidt had originally come to know in social settings.
While Amodei was still at OpenAI, the two discussed technology; after Amodei founded Anthropic, they discussed business.
Schmidt told me that rather than investing in the concept, he was investing in the person.
"At this level, when you make such an investment, you basically have no data, right?" he said, "You don't know what the revenue is, you don't know where the market is, and you don't know what the product is. So essentially, you can only judge based on the person. Dario is an outstanding scientist who promised to attract excellent scientists, and he did.
He also promised to lead a very small company to do this, but he did not achieve that. Now it has become a very large company, and it has become a normal company in the usual sense. I thought it would be a very interesting research lab."
The disgraced FTX CEO Sam Bankman-Fried was also an early investor in Anthropic. Reports indicate he took $500 million from FTX's funds to invest in Anthropic, acquiring a 13.56% stake in the company.
Bankman-Fried was one of several "effective altruists" who invested in Anthropic's early stages. At that time, the effective altruism movement was closely related to Anthropic.
Amodei said SBF was optimistic about AI and also concerned about safety, which from that perspective made him a suitable backer. However, he raised enough red flags that the company kept him off the board and gave him only non-voting shares.
Amodei said SBF's later actions were "far more extreme and worse than I ever imagined."
The story Amodei told potential investors was simple: he told them that Anthropic had the talent to build cutting-edge models at one-tenth the cost.
This claim worked.
To date, Amodei has raised nearly $20 billion for the company, including $8 billion from Amazon and $3 billion from Google.
"Investors are not fools," he told me, "They basically understand the concept of capital efficiency."
In the second year after Anthropic was founded, OpenAI brought generative AI to the world with ChatGPT. But Anthropic took a different path.
Amodei did not focus on consumer applications but decided to sell technology to enterprises.
This strategy has two benefits. As long as the model is useful, it can bring considerable revenue; at the same time, the challenges posed by enterprise clients will also drive the company to build better technology.
Amodei said that elevating an AI model's capabilities in biochemistry from an undergraduate level to a graduate level may not excite ordinary chatbot users, but it is very valuable for pharmaceutical companies like Pfizer.
"This will give us better incentives to develop the model to its fullest extent," he said.
Interestingly, what truly got enterprises to start paying attention to Anthropic's technology was actually one of its consumer products.
In July 2023, nearly a year after the launch of ChatGPT, Anthropic released the Claude chatbot.
Claude received a lot of praise for its highly "emotionally intelligent" personality traits, which were a byproduct of Anthropic's safety work.
Before that, Anthropic had hoped to keep its employee count below 150. But soon, the number of hires in a single day exceeded the total number of employees in the company's entire first year.
"It was at that moment with the Claude chatbot that the company began to grow significantly," Clark said.
Claude Becomes a Business
Amodei's bet on creating AI for enterprise applications attracted a large number of eager customers.
Today, Anthropic has sold its large language models to multiple industries, including travel, healthcare, financial services, and insurance, with clients including industry leaders like Pfizer, United Airlines, and American International Group (AIG).
Novo Nordisk, which produces Ozempic, is using Anthropic to compress a regulatory reporting process that originally took 15 days down to 10 minutes.
"The technology we built ultimately addressed many of the complaints people had in their work," Anthropic's revenue head Kate Jensen told me.
Meanwhile, programmers have also fallen in love with Anthropic.
The company focuses on AI code generation partly because it helps accelerate its own model development, and partly because, if the results are good enough, programmers adopt it quickly.
And so it has proved. Coding use cases exploded, coinciding with the rise of AI programming tools like Cursor, or perhaps driving that rise.
Anthropic itself has also begun to enter the programming application business. In February 2025, it released the AI programming tool Claude Code.
As AI usage surged, the company's revenue also grew rapidly.
"Anthropic's revenue is growing tenfold every year," Amodei said, "In 2023, we grew from zero to $100 million. In 2024, we grew from $100 million to $1 billion. By the first half of this year, we have grown from $1 billion to… I think at this point in time, it is already far above $4 billion, possibly $4.5 billion."
The last figure is on an annualized basis, meaning the monthly revenue multiplied by 12.
Anthropic stated that in 2025, the number of eight-figure and nine-figure deals for the company doubled compared to 2024; the average spending of enterprise clients also increased fivefold.
However, Anthropic is also spending a lot of money training and running models, raising the question: is its business model sustainable?
The company remains deeply unprofitable and expects to lose about $3 billion this year. Moreover, reports indicate that its gross margins lag those of typical cloud software companies.
Some of Anthropic's clients have begun to wonder whether the problems the company is working through in its business model are already showing up in the product.
A startup founder told me that although Anthropic's model is the best fit for his use case, he cannot rely on it because it goes down too often.
Amjad Masad, CEO of the "vibe coding" company Replit, told me that after a period of price cuts, the cost of using Anthropic's models has stopped falling.
Claude Code also recently added usage rate limits because some developers were using it so heavily that the business was losing money on them.
Entrepreneur and developer Kieran Klaassen told me that in a single month on a $200 Max subscription, he consumed $6,000 worth of Claude usage at API prices.
Klaassen said he ran multiple Claude agents simultaneously.
"The real limitation is whether your brain can switch between one task and another," he said.
Amodei stated that as Anthropic's models continue to improve, if costs remain unchanged, clients will actually get a better deal, meaning they can get more intelligence for every dollar spent.
He also mentioned that AI labs have only just begun to optimize inference costs, which is the cost when the model is actually used, and this should lead to efficiency improvements.
This is a point worth paying attention to. Several industry insiders told me that inference costs must decrease for this business to make sense.
Anthropic executives hinted in interviews that too much demand for the product is hardly the worst problem to have.
The real unresolved question is whether generative AI and the scaling laws driving it will follow a clear cost decline curve like other technologies; or whether it is a completely new technology with a completely new cost structure.
The only certainty is that finding the answer will require more investment.
A $1 Billion Wire Transfer
At the beginning of 2025, Anthropic needed money.
The AI industry's thirst for scale has already driven the construction of massive data centers and enormous compute deals. To support these investments, AI labs have repeatedly broken startup funding records.
Meanwhile, mature companies like Meta, Google, and Amazon have leveraged their substantial profits and data centers to build their own models, further increasing competitive pressure.
For Anthropic, there is a special urgency to scale up its models.
It does not have a strong application entry point like ChatGPT, whose users return again and again out of habit. Without a comparable super app, Anthropic's models must stay ahead in specific use cases or risk being swapped out for a competitor's.
"In the enterprise space, especially in programming, if you can stay six months or a year ahead of the cutting edge, the advantage is very clear," Anthropic client and Box CEO Aaron Levie told me.
Thus, the company approached Ravi Mhatre, a veteran venture capitalist and partner at Lightspeed Venture Partners, to lead a $3.5 billion funding round.
Mhatre's previous checks typically ranged from $5 million to $10 million. But this time, the check he was prepared to sign would be one of the largest in his firm's history.
"When Amazon went public, its market value was only $400 million," he told me, "400 million! Think about that today."
Just as the financing was progressing as planned, a cheap competitive model seemed to appear out of nowhere.
DeepSeek, the Chinese company led by Liang Wenfeng, founder of the hedge fund High-Flyer (Huanfang Quant), released DeepSeek R1: an open-source, capable, and efficient reasoning model priced at roughly one-fortieth of comparable products.
DeepSeek shook the entire business world, even prompting several CEOs managing trillions of dollars in market value to share Wikipedia articles on social media to reassure shareholders.
By the time DeepSeek emerged, Mhatre had completed a full set of calculations explaining why the AI model itself would create the most value, rather than various chatbots in the world.
His conclusion was that if an AI capable of handling knowledge work could be created, the revenue scale brought by these companies could reach ten times that of large cloud platforms, with a potential total market size of $15 trillion to $20 trillion.
"So you can backtrack and think, at a $60 billion or $100 billion valuation, can you still achieve venture capital-style returns? Of course you can!" he said, "Sometimes, the key is how you estimate the market size from the top down."
The emergence of DeepSeek seemed to indicate that open-source, efficient, and nearly equally usable models could challenge existing giants. But Amodei does not see it that way.
He said that his primary concern is whether any new model is better than Anthropic's models. Even if you can download a model's design, you still need to deploy it on cloud services and run it, which requires technology and funding.
As the DeepSeek incident unfolded, Amodei articulated this viewpoint to Mhatre and his Lightspeed colleagues. He convinced them that some of DeepSeek's model innovations could be further enhanced by scaling.
That Monday, NVIDIA's stock price fell 17%, and panicked investors fled from AI infrastructure trades.
Amid the uncertainty, the venture capitalist made a decision.
"I do not deny that there was immense pressure at that time," Mhatre said, "That Monday, we wired out $1 billion."
Six months after the "DeepSeek moment," Anthropic was again seeking to scale further.
The company was negotiating a new funding round, potentially reaching $5 billion, which would more than double its valuation, to $150 billion.
Potential investors included some Gulf states, which had previously seemed like sources Anthropic wanted to avoid.
However, after raising nearly $20 billion from Google, Amazon, and venture capital firms like Lightspeed, the options for obtaining larger checks were becoming increasingly limited.
Internally at Anthropic, Amodei had argued that Gulf states had $100 billion or more in investable capital, and their funds would help Anthropic continue to stay at the forefront of technology.
According to an internal Slack message obtained by Wired, he seemed to reluctantly accept the idea of taking money from Middle Eastern countries.
"Unfortunately," he wrote, "I think the principle of 'no bad actors should benefit from our success' is hard to apply in running a business."
Talking to Amodei made me ponder how this race to improve AI will end, or whether it will end at all.
I imagined a scenario where models eventually become so large and powerful that they move toward commoditization.
Or, as Amodei's former colleague Ilya Sutskever once suggested, the endless impulse to scale could eventually cover the entire Earth with solar panels and data centers.
Of course, there is another possibility, one that AI believers are reluctant to discuss: that AI progress stagnates, leading to unprecedented wealth evaporation for investors.
Acceleration
In May, at Anthropic's first developer conference, I sat several rows back from the stage, waiting for Amodei to appear.
The company held the conference at The Midway, an open art and event space in San Francisco's Dogpatch neighborhood.
The venue was packed with programmers, media, and over 1,000 employees of Anthropic.
The audience eagerly anticipated the release of Claude 4, its latest and largest model.
Amodei took the stage to introduce Claude 4.
He did not choose a flashy demonstration but instead picked up a handheld microphone, announced the news, spoke according to notes on his laptop, and then handed the spotlight to Anthropic's product head Mike Krieger.
The audience seemed to respond positively.
In my view, more noteworthy than the model update itself was Amodei's commitment to what would come next.
Throughout the day, he repeatedly mentioned that AI development is accelerating and that Anthropic's pace of releasing models would also speed up.
"I don't know how much more frequent it will be," he said, "but the pace is quickening."
As Amodei had previously told me, Anthropic has been developing AI programming tools to accelerate its own model development.
When I brought this up with the company's co-founder and chief scientist Jared Kaplan, he told me that it is indeed working.
"Most of Anthropic's engineers use AI to help themselves improve efficiency," he said, "So it has indeed made us much faster."
There is a concept in AI theory called an "intelligence explosion": models become able to improve themselves and then, with a bang, recursively self-improve into something close to omnipotent.
Kaplan did not deny that such an intelligence explosion could arrive in this way or in a human-assisted manner.
"It could happen in two or three years. It could also take longer, even much longer," Kaplan said, "But when I say AI has a 50% chance of doing everything knowledge workers do, one of the things we are doing is training AI models."
"Maybe people like me really won't have much to do," Kaplan continued, "Of course, things are more complex than that. But we are likely heading toward a future like that."
At this point, Amodei's obsession with safety became very clear.
Although no one within Anthropic said that an intelligence explosion is imminent, it is evident that they do not shy away from moving in that direction.
If AI is going to get better faster, perhaps much faster, then caution about its negative consequences becomes necessary.
Such theoretical discussions clearly also help Anthropic market its services to pharmaceutical companies and developers. But now that AI models can already write quite good code, this no longer sounds completely crazy.
Jan Leike, who previously led OpenAI's "superalignment" team, left OpenAI for Anthropic in 2024 to co-lead the company's alignment science team.
"Alignment" refers to tuning AI systems to ensure they align with human values and goals.
Leike believes that if the anticipated capability explosion does arrive, keeping machines in sync with our intentions will be crucial.
"There may be a period of rapid capability advancement," Leike told me, "When facing a recursively self-improving system, you wouldn't want to lose control, nor would you want to lose scalability."
In fact, Anthropic and its peers have found that in simulated testing environments, AI sometimes exhibits concerning self-preservation tendencies.
For example, in Claude 4's documentation, Anthropic stated that to avoid being shut down, the model had attempted to blackmail an engineer multiple times.
Anthropic also indicated that when AI believes evaluators might rewrite its values, it will try to deceive the evaluators.
In one simulation, the model even attempted to replicate itself outside of Anthropic's infrastructure.
Leike said that Anthropic is working to suppress these behaviors through a reward system. The entire field is still in an experimental stage.
Publicly discussing these issues is part of Amodei's "upward competition" strategy.
Anthropic also funds and advocates for "interpretability" research, the science of understanding what is happening inside AI models.
Additionally, it has released a "responsible scaling policy." This is a framework that sets boundaries for releasing and training models based on their risks and has inspired peers to undertake similar work.
"My understanding of upward competition is that it doesn't matter who wins," Amodei said, "Everyone wins, right?"
Amodei's investment in AI, born of the tragedy of his father's death, may now be coming faintly within sight of its target.
Today's AI is already accelerating paperwork in drug development; in a troubled healthcare system, it has also become a somewhat imperfect medical advisor; if all goes well, one day it may replace those hundreds or thousands of researchers, helping humanity understand its biology.
I asked Amodei whether pursuing this vision might blind him to the risks of losing control over the technology.
"I don't see it that way," he said, "With every model we release, our ability to control the model is increasing. All these things will go wrong, but you really have to rigorously stress-test the model."
Amodei believes that such questions are still rooted in a "doomsdayism that slows things down," which is precisely what he is often accused of.
Contrary to critics' views, his plan is to accelerate.
"I warn of risks so that we don't have to slow down," he said, "I have an extremely deep understanding of the stakes involved. Whether from the benefits it brings or from what it can do and how many lives it can save. I have witnessed it firsthand."