Amazon and Microsoft have so far kept a slight distance from the artificial intelligence arms race. While Google and Meta have made developing their own AI models a top priority, Microsoft and Amazon have instead invested in smaller technology companies and, in return, received access to those companies' AI models, which they then incorporate into their own products and services.
Microsoft has invested at least $13 billion in OpenAI, the company behind ChatGPT. As part of the agreement, OpenAI gives Microsoft exclusive access to the AI systems it develops, and Microsoft provides OpenAI with the computing power it needs. Anthropic has agreements with both Amazon and Google that will net it $4 billion and up to $2 billion, respectively, in exchange for making its models available through the two companies' cloud services platforms. (Anthropic's investors also include Salesforce, whose CEO, Marc Benioff, is TIME's co-chairman and owner.)
Now, there are signs that the two tech giants are moving deeper into the fray. In March, The Verge reported that Amazon had tasked its AGI team with building a model that surpasses Anthropic's most capable AI model, Claude 3, by the middle of this year. Earlier this month, The Information reported that Microsoft is training a foundation model large enough to compete with those of frontier developers such as OpenAI.
There are many types of AI systems used in many different ways, but the big trend of recent years has been language models: AI systems that generate coherent prose and usable code, and that power chatbots like ChatGPT. The young companies OpenAI and Anthropic, along with the more established Google DeepMind, lead the field for now, but their big-tech rivals have advantages that are difficult to offset. And if big tech comes to dominate the AI market, it could have implications for the concentration of corporate power and for whether the most powerful AI systems are developed safely.
Change of strategy
Over the course of the 2010s, AI researchers came to realize that training AI systems with more computational power reliably improved their capabilities. According to researchers at Epoch, an AI-focused research institute, the computing power used to train AI models grew rapidly over that period, doubling roughly every six months.
The specialized semiconductor chips needed to do that much computational work are expensive, and so are the engineers who know how to use them. OpenAI CEO Sam Altman has said that training GPT-4 cost more than $100 million. The need for that much capital helps explain why OpenAI, founded as a nonprofit in 2015, changed its structure and signed a multibillion-dollar deal with Microsoft, and why Anthropic struck similar deals with Amazon and Google. Google DeepMind, the team that develops Google's most powerful AI systems, was created last year when Google merged its elite AI groups, Google Brain and DeepMind. Like OpenAI and Anthropic, DeepMind started as a startup before being acquired by Google in 2014.
Read more: Amazon and Anthropic partnership shows scale matters in the AI industry
These partnerships have yielded results for all involved. OpenAI and Anthropic gained access to the computational power needed to train cutting-edge AI models; most commentators agree that the three most capable models currently available are OpenAI's GPT-4, Anthropic's Claude 3 Opus, and Google DeepMind's Gemini Ultra. Companies behind the frontier have tried alternative business strategies. Meta, for example, provides more open access to its AI models, allowing outside developers to modify them, an approach that helps it attract talented researchers who value the ability to publish their work openly.
In their April quarterly earnings reports, Microsoft and Amazon both posted strong results, which each company partly attributed to AI. Both also benefit from their investment deals in that a significant portion of the money flows back to them, as their partners use it to purchase computing power from their cloud computing divisions.
But as the technical feasibility and commercial utility of training large models became clearer, building their own became more attractive to Microsoft and Amazon, says Neil Thompson, principal investigator of FutureTech, a project at the Massachusetts Institute of Technology that studies the economics of AI. If they succeed, building their own models should be cheaper than licensing them from smaller partners and should give the big tech companies more control over how the models are used, he says.
The encroachment isn't only in one direction, either. OpenAI's Altman has pitched the company's products to a variety of large firms, including some of Microsoft's own customers.
Who will survive?
The good news for OpenAI and Anthropic is that they have a head start. According to a popular chatbot ranking site, GPT-4 and Claude 3 Opus, along with Google's Gemini Ultra, remain in a class of their own compared with other language models such as Meta's Llama 3. Notably, OpenAI finished training GPT-4 back in August 2022.
But maintaining that lead will be a "constant battle," Nathan Benaich, founder and general partner of the venture capital firm Air Street Capital, wrote in an email to TIME. "Research labs face the challenge of being perpetually in fundraising mode to pay for staff and hardware, while lacking a plan to turn this model-release arms race into a sustainable long-term business," he wrote. "As the sums involved grow so large, U.S. investors will also have to grapple with difficult questions about foreign sovereign wealth." In February, the Wall Street Journal reported that Altman was in talks with investors, including the government of the UAE, to raise up to $7 trillion for AI chip manufacturing projects.
Read more: The UAE is on a mission to become an AI powerhouse
Big technology companies, on the other hand, have ready access to computational resources. According to data from market intelligence firm Synergy Research Group, Amazon, Microsoft, and Google account for 31%, 24%, and 11% of the global cloud infrastructure market, respectively. That makes training large models cheaper for them. It also means that even if the further development of language models never proves commercially viable for any single company, the tech giants that sell access to computing power through the cloud can still profit.
"Cloud providers are the shovel salesmen during the gold rush. Whether frontier model builders make money or lose money, cloud providers win," Benaich wrote. "Companies like Microsoft and Amazon sit in an enviable position in the value chain, with the scale both to marshal the resources needed to create uniquely powerful models and to serve as essential distribution partners for new entrants."
But while the large technology companies may have certain advantages, the smaller companies have strengths of their own, such as extensive experience training the largest models and the ability to attract the most talented researchers, says Thompson.
Anthropic is betting that its talent density and proprietary algorithms will allow it to stay at the frontier while using fewer computing resources than many of its competitors, says Jack Clark, one of the company's co-founders and its head of policy. "We will be able to reach the frontier surprisingly efficiently compared to others," he says. "We won't have to worry about this for the next few years."
If Big Tech wins
It remains an open question whether the large technology companies will out-compete the smaller firms they have invested in. But if they do, it could affect both market competition and efforts to ensure that the development of powerful AI systems benefits society.
While it could be argued that more companies entering the foundation model market would increase competition, this kind of vertical integration is more likely to entrench the power of already dominant technology companies, argues Amba Kak, co-executive director of the AI Now Institute, a research institute that studies the social impact of artificial intelligence.
"Seeing this as 'increasing competition' would be the most inventive corporate spin, one that masks the reality that every version of this world serves to strengthen the concentration of power in tech," she wrote to TIME. "We need to be wary of this kind of spin, especially in the context of increased antitrust scrutiny from the UK CMA, the FTC, and the European Commission."
Read more: UK competition watchdog signals cautious approach to AI regulation
The dominance of the large companies could also prove troubling, says Anton Korinek, an economics professor at the University of Virginia, because the smaller firms currently in the lead were created expressly to ensure that building powerful AI systems works for humanity. OpenAI's founding goal is to "advance digital intelligence in ways that are most likely to benefit all of humanity," while Anthropic was founded to advance basic research that lets it "develop more capable, versatile, and reliable AI systems" and to deploy those systems "in a way that benefits people."
"In some ways, the AGI labs, OpenAI, Anthropic, and DeepMind, were all founded on idealism," he says. "Companies that are owned and controlled by major shareholders can't follow that strategy. Ultimately, they have to create value for their shareholders."
Still, companies like OpenAI and Anthropic can't act entirely in the public interest either, because their need to raise capital subjects them to commercial incentives, Korinek says. "This is part of a broader trend in which capital [computational power] is becoming the most important input," he says. "When training runs cost millions of dollars, it is much easier to raise philanthropic funds for them. But when training runs cost billions of dollars, the way our economy currently works, they have to deliver an economic return."
With reporting by Billy Perrigo/San Francisco