
The Battle Over A $236B Market for A.I. Agents - March 20, 2026

It’s A Multi-Combatant Cage Match - Amodei v Altman v Musk v Google v China



Most people will have used A.I. for searches, in lieu of “Googling” something; they have used A.I. for research, or as a work tool. What we are using today are “A.I. applications”, an early form of A.I. Common examples include Claude by Anthropic, ChatGPT by OpenAI and Gemini by Google. These applications are an early step toward the ultimate goal of A.I. developers, which is to create autonomous “A.I. Agents”: tools that can execute multi-step tasks, operate with some autonomy, and replace or augment human work. According to Precedence Research, the global market for A.I. Agents alone is expected to reach $236B by 2034, having already reached $7.8B in 2025. The A.I. Agents market is one of the fastest-growing technology sectors, with businesses leading adoption (driven by automation, cost savings, and workflow efficiency) while consumer adoption accelerates through virtual assistants and smart devices.
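Those two market figures imply a remarkably steep growth curve. A quick back-of-envelope check, using only the article's 2025 and 2034 numbers and the standard compound-annual-growth-rate formula:

```python
# Implied compound annual growth rate (CAGR) of the A.I. Agents market,
# using the article's figures: $7.8B in 2025 growing to $236B by 2034.
start_value = 7.8    # $B, 2025 (Precedence Research, per the article)
end_value = 236.0    # $B, 2034 forecast
years = 2034 - 2025  # nine-year horizon

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 46% per year
```

A sustained ~46% annual growth rate is what it takes to turn $7.8B into $236B in nine years, which is why the sector attracts so much capital.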


This market size does not include the A.I. opportunities for compute infrastructure (semiconductor chips), data centres, the energy supply needed by A.I. applications and infrastructure, and the networks that move the A.I. data. Each one of these is a multi-trillion-dollar opportunity unto itself.


My recent reading in this area got me thinking about who the key players are. What is their philosophy and approach to this life-altering technology? The A.I. applications we are currently using will eventually become A.I. Agents, and it’s these agents, when they reach their full potential, that will have a profound impact on people's daily lives.


So, who are the key players?

1. Dario Amodei

Company: CEO & co-Founder of Anthropic (key product - “Claude”), age 43 

Background: Former VP of Research at OpenAI

Core philosophy: Safety-first, controlled A.I. development

Key ideas: A.I. will become extremely powerful, quickly; human safety must scale before capabilities; advocates for “constitutional A.I.” and alignment research.

Notable positions: Warns that advanced A.I. could arrive within a few years; supports government regulation of powerful models; wants slower, safer scaling with strong oversight.

Reputation: Seen as the cautious scientist of the A.I. race.


2. Sam Altman

Company: CEO and co-founder of OpenAI (key product - “ChatGPT”), age 40

Background: Entrepreneur, Tech Company Founder, Venture Capitalist.

Core philosophy: Accelerate development of A.I., but manage risks

Key ideas: A.I. will transform the economy and society; it should be deployed iteratively so society adapts; a deep believer in the “move fast and break things” approach, a classic Silicon Valley start-up and investor mentality; believes in large-scale deployment to learn safety in practice, i.e., make up the rules and regulations as we go.

Notable positions: Supports some regulation but warns against slowing innovation too much, a position adopted by David Sacks (the White House’s “A.I. & Crypto Czar”); focuses heavily on A.I. products and adoption; promotes A.I. as a tool for economic abundance.

Reputation: Seen as the pragmatic builder driving A.I. into the real world. 


3. Elon Musk

Company: xAI (key product - “Grok”); (also previously involved with OpenAI), age 54

Core philosophy: A.I. must remain open and aligned with humanity

Key ideas: A.I. could become dangerous if controlled by a few companies; strongly criticizes closed A.I. systems; pushes for open models and competition.

Notable positions: Sued OpenAI over its partnership with Microsoft; founded xAI to compete in the A.I. race; advocates truth-seeking A.I. and less censorship.

Reputation: Seen as the disruptor and critic of the current A.I. establishment.


Elon Musk was a founding investor in OpenAI, and had a falling out with Sam Altman when OpenAI pivoted from a not-for-profit entity, whose stated mission was ensuring A.I. was developed safely and for the benefit of humanity, into a products company focussed on revenue and profits. This pivot was catalyzed by Microsoft’s second investment of $10B in OpenAI, made in January 2023. Musk's dislike for Altman stems from his belief that Altman betrayed OpenAI's founding mission by prioritizing profits over humanity, allegedly using Musk's initial funding and connections for personal gain rather than open, non-profit A.I. development. This led to lawsuits and accusations of greed versus altruism. “Musk feels Altman ‘deceived’ him into supporting a non-profit only to transition it into a for-profit entity, a move Musk sees as a betrayal of trust and the original agreement”, reports Business Insider.


Despite this drama from these early players in the A.I. story, many think the future of A.I. is being shaped by the difference in philosophy between Sam Altman and Dario Amodei, and not so much by Elon Musk. The reason mostly comes down to who is actually driving the leading-edge research and how they think A.I. should be deployed.


Here’s why: 

1. Altman vs Amodei: two competing visions for how A.I. should reach society

These leaders run the two A.I. labs currently closest to the frontier. Their companies are competing to build the most advanced models, while shaping how the technology is rolled out to the world.


  • Altman’s philosophy: “Iterate through deployment” Altman’s approach is sometimes described as: “Ship the technology and learn from real-world use.”


Examples of this approach include launching ChatGPT to hundreds of millions of users and rapid capability scaling (GPT-4, GPT-4o, etc.). Altman believes that society adapts faster if people can interact with A.I. early and often, and that controlled deployment helps discover risks faster than lab-only safety research.


  • Amodei’s philosophy: “Safety must scale before capability” Amodei’s approach is more cautious.


At Anthropic they focus heavily on: alignment research; interpretability; constitutional A.I. (training models to follow an explicit, written set of principles); preventing catastrophic misuse. Their models were designed around these safety principles. Amodei often warns that extremely powerful A.I. could arrive soon, and that the risks from misuse could be catastrophic, even civilization-scale.

2. Why Musk sits outside the core research debate

Elon Musk is influential but not deeply embedded in the leading edge of the research ecosystem anymore. The reasons for this include:

  • He left OpenAI early on - Musk helped found OpenAI in 2015 but left the board in 2018, and he was not involved in the development of modern large language models like GPT-4 or the latest offerings of OpenAI.

  • His current company, xAI, is newer - it was launched in 2023.

  • While its model “Grok” is competitive, the xAI research lab is still less experienced than OpenAI and Anthropic. 


3. Musk’s arguments are more political than technical

Elon Musk often emphasizes concerns about bias and censorship in A.I. systems, and he expresses the view that A.I. reflects political ideologies. He emphasizes the need for “truth-seeking” A.I. (his framing around xAI); these are seen as ideological or political concerns, as opposed to technical ones.


4. xAI’s business performance is not at the same level as Anthropic’s or OpenAI’s

xAI’s primary commercial product, the chatbot Grok, is integrated into the X social media platform (also owned by Musk), where it is available only to premium subscribers. X's subscription business reached $1 billion in annual recurring revenue by early 2026. xAI is targeting B2B revenue with API access, though early reports indicate enterprise trials (for example, with Palantir and Morgan Stanley) have generated limited revenue (hundreds of thousands to a few million dollars).


Meanwhile, in February 2026, Anthropic reported annual revenues of $14B from its A.I. applications. The company also claimed to have over 300,000 business users of its corporate applications, and over 16 million daily active users of its consumer assistant products. Both of these user metrics are based on 2025 data.


For OpenAI, according to CNBC, the company is reporting annual revenue from its A.I. applications of $25B. OpenAI also reports 900 million weekly active users, which works out to roughly 128 million users per day, about eight times Anthropic’s daily active user count, and far more than xAI’s application.
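The user comparison above is a back-of-envelope conversion; note that dividing weekly actives by seven is only a rough proxy for daily actives, since the two metrics count different things:

```python
# Rough check of the user-count comparison in the text.
openai_wau = 900_000_000    # OpenAI weekly active users (CNBC, per the article)
anthropic_dau = 16_000_000  # Anthropic daily active users (2025 claim)

openai_daily_proxy = openai_wau / 7          # crude WAU -> per-day conversion
ratio = openai_daily_proxy / anthropic_dau   # OpenAI vs Anthropic

print(f"~{openai_daily_proxy / 1e6:.1f}M per day, about {ratio:.0f}x Anthropic")
```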


To be fair, xAI was started by Musk in 2023, while OpenAI was founded in 2015 and Anthropic was started in 2021. 


Both OpenAI and Anthropic are now valued in the hundreds of billions; Anthropic at $380 billion following its $30 billion Series G. OpenAI’s most recent private round in February 2026 valued it at approximately $730 billion, with an IPO potentially targeting a $1 trillion valuation. 


What’s driving this revenue? 


It’s not IT budgets anymore. The applications, Claude from Anthropic and ChatGPT from OpenAI, have crossed a threshold. They’re now competing with corporate labour budgets. Companies are not buying A.I. to replace servers. They’re buying A.I. to augment, and ultimately displace, their human workforce.


So, what is the breakthrough use case? Software coding. Claude Code (Anthropic’s agentic coding tool) now generates revenue above $2.5 billion, having more than doubled since the beginning of 2026. Business subscriptions have quadrupled since the start of the year, and enterprise use has grown to represent over half of all Claude Code revenue. Software engineers have often been the limiting factor for startups, which can never hire enough of them, and Fortune 500 companies barely get any such engineers, since most go to Silicon Valley technology companies. Now, you can buy intelligence on a metered basis. Pay per token. No recruiting, no vetting, no retention, no equity. It’s just intelligence as a utility. Consumers pay $20/month. Enterprise users pay $200/month. And companies are spending millions per year because the ROI is there.
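To make “pay per token” concrete, here is a minimal sketch of metered billing. The $3 and $15 per-million-token rates and the usage figures are hypothetical illustrations for this sketch, not figures from the article or from any vendor’s price list:

```python
# Hypothetical metered-billing sketch: cost = tokens consumed x per-token rate.
# Rates and usage below are illustrative assumptions, not real vendor pricing.
INPUT_RATE = 3.00 / 1_000_000    # $ per input token  (assumed: $3 / M tokens)
OUTPUT_RATE = 15.00 / 1_000_000  # $ per output token (assumed: $15 / M tokens)

def monthly_cost(input_tokens: int, output_tokens: int) -> float:
    """Metered cost in dollars for one month of usage."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# A heavy coding-assistant user: 50M tokens in, 10M tokens out per month.
cost = monthly_cost(50_000_000, 10_000_000)
print(f"${cost:,.2f}/month")  # $300.00/month at these assumed rates
```

At these assumed rates, heavy usage lands in the same range as the $200/month enterprise seat the article mentions, and scales linearly with consumption rather than headcount, which is the economic logic behind “intelligence as a utility”.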


Google

Most of the press around A.I. focusses on Sam Altman and “ChatGPT” and Dario Amodei and “Claude”, with Elon Musk and "Grok" only appearing in the news when Grok is used for something egregious, or when Musk weighs in with a political opinion. However, many “A.I. insiders” think the real future fight won’t even be Altman vs Amodei, but OpenAI vs Anthropic vs Google DeepMind. Many A.I. researchers believe Google’s DeepMind could ultimately become the most important A.I. lab, even though companies like OpenAI and Anthropic dominate headlines today. The reasons mostly come down to scientific depth, infrastructure, and long-term strategy.


So, why are so many A.I. insiders picking Google’s DeepMind as a dark horse in the A.I. race (although it’s hard to imagine any Google project as a “dark horse” in any technology industry)?


1. DeepMind’s culture is closer to a scientific institute

Unlike most A.I. companies, DeepMind was originally founded with the explicit goal of solving artificial general intelligence (AGI). Their culture resembles a hybrid of a university research lab and a tech company. They focus heavily on: fundamental algorithms; neuroscience-inspired learning; long-horizon research. Many landmark A.I. breakthroughs came from this environment. Examples include: AlphaGo, AlphaZero, and AlphaFold.


Here is a brief description of each: 


1. AlphaGo (The Pioneer) - Introduced in 2015-2016, AlphaGo was developed to master Go, an ancient board game of immense complexity. It utilized deep neural networks and Monte Carlo Tree Search. AlphaGo defeated Lee Sedol (4-1) in 2016 and Ke Jie (3-0) in 2017, two of the world’s top Go players. This demonstrated that A.I. could transcend human intuition.


2. AlphaZero (The Generalist)

AlphaZero is a generalized version of the self-play reinforcement learning approach pioneered by AlphaGo. Unlike AlphaGo, AlphaZero was given only the rules of the games and no human data. It learned through self-play, mastering chess in nine hours and Go in 13 days, and beating world-champion software programs like Stockfish.


3. AlphaFold (The Scientist)

AlphaFold applies the principles of deep learning to biology, specifically predicting the 3D structure of a protein from its amino acid sequence. Protein folding had been an unsolved problem in biology for 50+ years. AlphaFold predicted structures for 200+ million proteins, accelerating drug discovery and biology research worldwide. This made DeepMind look less like a technology company and more like a scientific breakthrough engine. Many researchers think this shows A.I. can become a general scientific discovery tool.


Observers believe Google DeepMind could win long-term because of: 1) its focus on scientific breakthroughs rather than revenue-generating applications; 2) its easy access to massive compute infrastructure through Google Cloud; 3) its ability to attract top talent; and 4) its access to an almost infinite amount of capital.


So even though Sam Altman and Dario Amodei lead the public conversation, many insiders believe Google DeepMind might quietly be building the most important long-term A.I. capabilities.


One more interesting view - China is an A.I. player:

Some A.I. researchers say the real global race isn’t just between companies; it’s actually between U.S. labs and Chinese labs like DeepSeek. Many Silicon Valley researchers were shocked by DeepSeek in 2024–2025 because it challenged core assumptions about how cutting-edge A.I. is built, namely:


1. DeepSeek showed initial versions of A.I. can be developed far cheaper than expected. Before DeepSeek, most of Silicon Valley believed that only companies spending billions could build top A.I. models. Examples include OpenAI reportedly spending $100M+ training GPT-4, with Google DeepMind and Anthropic investing similar amounts. But DeepSeek claimed it trained one of its advanced models for about $6 million. That was a revelation because it implied that A.I. might not require massive spending; it suggested that smaller teams could compete; and it implied that the current A.I. business model might be inefficient. Some analysts said this revelation forced investors to rethink the economics of A.I.


2. They built competitive models despite U.S. chip restrictions

The United States had restricted exports of advanced A.I. semiconductor components to China. The assumption in Silicon Valley was that without these chips from companies like NVIDIA, China would fall behind. But DeepSeek built competitive models using weaker components and fewer of them. This suggested that innovative engineering can serve as a substitute for raw compute power, and that export controls might slow China, but not stop it. That realization worried many United States policymakers.

3. Their models rivaled Western systems

DeepSeek’s DeepSeek-R1 and DeepSeek-V3 performed competitively with top Western models. Benchmarks showed performance comparable to systems like GPT-4o and o1, both from OpenAI. And in some math and reasoning tests, DeepSeek-R1 even matched or exceeded them. For researchers, that meant the technical gap between the U.S. and China might be smaller than expected.

4. DeepSeek open-sourced powerful models

Another major shock was their decision to openly release the model’s weights. This meant developers around the world could: download it; run it locally; modify it. Open-sourcing leading-edge A.I. technology is unusual; companies like OpenAI and Google mostly keep their best models closed. DeepSeek’s move democratized powerful reasoning models, accelerating global A.I. experimentation. Needless to say, Elon Musk was happy to see this form of A.I. technology in the open domain.

5. It changed the narrative of the A.I. race

Before DeepSeek, the common narrative was that Silicon Valley is years ahead of everyone else. After DeepSeek, many researchers started saying that the A.I. race might actually be U.S. vs China, not company vs company. This reframed A.I. as a geopolitical competition. But the biggest takeaway wasn’t just about China. It was that A.I. progress may depend more on efficient algorithms than on massive spending. If that’s true, then: smaller labs could produce breakthroughs; open-source A.I. could accelerate rapidly; and the A.I. race would become much less predictable. One fascinating detail many people miss: some researchers believe DeepSeek accidentally demonstrated a new path to AGI through reasoning models, which may be even more important than the cost breakthrough.


Anthropic’s big gamble is paying off

President Trump recently ordered the U.S. government to stop using Anthropic's products and had the Pentagon designate the company a national security risk, in an escalation of a fight over the U.S. military's use of A.I. The Trump administration designated Anthropic a “supply-chain risk”, the first such designation for an American company; it is typically applied to risky non-American vendors. This action stems from Anthropic’s refusal to back down over safety principles, including red lines over A.I.’s use in autonomous weapons and mass surveillance. Anthropic has challenged this designation in the courts, a risky business move on its part. The company defends its lawsuit by saying it’s at risk of losing hundreds of millions of dollars in government contracts because of the designation.


But Anthropic’s gamble to take on the Trump administration could give it an advantage in the overall A.I. race. Fighting the decision in court appears to be earning the company other benefits: strengthened recruitment, public brand recognition and employee morale. Anthropic could join a handful of companies that have gained positive exposure after directly opposing the Administration. Anthropic has long positioned itself as an A.I. company that prioritizes safety and ethics, in contrast to its competitors.


“Anthropic’s reputation within the technology community went up, not down,” said a former xAI engineer. “The Pentagon issue made Anthropic look like heroes.” Anthropic is seeing more interest from customers as well. In the week after the Pentagon cancelled its contract with Anthropic and removed its products from all federal agencies, Anthropic’s Claude application shot to the top of both the Apple and Android app stores. Claude’s daily active users have also increased by more than 140% since January, according to data from SimilarWeb. Anthropic is also getting a financial boost. “Anthropic now wins about 70% of head-to-head matchups against OpenAI among businesses purchasing A.I. services for the first time,” according to corporate financial technology company Ramp.


Given the actions and leadership shown thus far by Anthropic, my bet (and my money, if I can get an allotment in their eventual IPO shares) is on Anthropic and Dario Amodei.



 
 
 



Email: spaliwal44@gmail.com

Text or call: 613-851-8666

©2023 by Paliwal Professional Writing. Proudly created with Wix.com
