DCM Editorial Summary: This story has been independently rewritten and summarised for DCM readers to highlight key developments relevant to the region. Original reporting by the Irish Times.
OpenAI is prioritising the advancement of ChatGPT over more long-term research, prompting the departure of senior staff as the $500 billion (€423 billion) company adapts to stiff competition from rivals such as Google and Anthropic.
The San Francisco-based start-up has shifted resources away from experimental work and towards advances to the large language models (LLMs) that power its flagship chatbot, according to 10 current and former employees.
Among those to leave OpenAI in recent months over the strategic shift are vice-president of research Jerry Tworek, model policy researcher Andrea Vallone and economist Tom Cunningham.
The changes at OpenAI mark an important shift for a group whose ChatGPT launched as a research preview in 2022 before igniting the generative AI boom.
Led by chief executive Sam Altman, it is evolving from a research lab into one of Silicon Valley’s biggest companies. That means the company must prove to investors it will earn the revenues needed to justify a $500 billion valuation.
“OpenAI is trying to treat language models now as an engineering problem where they’re scaling up compute and scaling up algorithms and data, and they’re eking out really big gains from doing that,” one person familiar with its research ambitions said.
“But if you want to do original blue-sky research, it is quite tough. And if you don’t find yourself in one of the teams in the centre, it becomes increasingly political.”
OpenAI’s chief research officer, Mark Chen, rejected the characterisation.
“Long-term, foundational research remains central to OpenAI and continues to account for the majority of our compute and investment, with hundreds of bottom-up projects exploring long-horizon questions beyond any single product,” he said.
“Pairing that research with real-world deployment strengthens our science by accelerating feedback, learning loops and rigour – and we’ve never been more confident in our long-term research roadmap towards an automated researcher.”
As at other large tech companies, researchers at OpenAI need to apply to top executives for computing “credits” and access to technology to get their projects off the ground.
Multiple people close to the company said that over recent months researchers who did not work on large language models often had their requests denied or were granted amounts insufficient to validate research.
Teams working on video and image generation models Sora and DALL-E felt neglected and under-resourced as their projects were deemed less relevant to ChatGPT, people familiar with the matter said. Over the past year, other projects unrelated to language models have been wound down, one person said.
Others said there had been a reorganisation of teams at the company as OpenAI streamlines its structure around improving its popular chatbot used by 800 million people.
In December, Altman declared a “code red” over the need to improve ChatGPT. It followed the release of Google’s Gemini 3 model, which outperformed OpenAI’s on independent benchmarks, and came as Anthropic’s Claude model made strides in generating computer code.
“Realistically, there are tonnes of competitive pressures, especially for scaling companies who want to have the best model every quarter; it is a crazy, cut-throat race,” a former employee said. “Companies are spending an unbelievable amount of money on that race, and that often requires focus, it requires trying to do what you know best and expect that to be working.”
Another former senior employee said: “Theoretically, there was some willingness to do other kinds of research, but directing resources to those things was made really difficult, so you always felt like a second-class citizen to the main bets.”
In January, Tworek, who led its efforts on the “reasoning” of AI models, left OpenAI after seven years, saying he wanted to explore “types of research that are hard to do at OpenAI”.
He wanted to work on continuous learning – the ability of a model to learn from new data over time while retaining previously learned information. People close to Tworek said his appeals for more resources such as computing power and staff were rejected by leadership, culminating in a stand-off with chief scientist Jakub Pachocki.
People familiar with the dispute said Pachocki disagreed with Tworek’s specific scientific approach and believed that OpenAI’s existing artificial intelligence (AI) “architecture” around LLMs was more promising.
Last month, Vallone, who led model policy research at OpenAI, joined rival Anthropic. Two people familiar with her exit said she was given an “impossible” mission of protecting the mental health of users becoming attached to ChatGPT. Vallone did not respond to a request for comment.
Cunningham left the economic research team last year, suggesting OpenAI was straying from impartial research to focus on work that promoted the company. His departure was first reported by Wired.
“The company is still making progress, but it is locked in a tight competition with Google and Anthropic, who have consensus stronger models, so they have less of a luxury to slow down because they could let competitors push ahead,” a former employee said.
Many investors are unconcerned about the risk that OpenAI falls behind rivals in the race to build advanced “frontier” models and products. Jenny Xiao, a partner at Leonis Capital and former researcher at OpenAI, believes its advantage is the hundreds of millions of people who use ChatGPT.
“Everyone’s obsessing over whether OpenAI has the best model,” she said. “That’s the wrong question. They’re converting technical leadership into platform lock-in. The moat has shifted from research to user behaviour, and that’s a much stickier advantage.” – Copyright The Financial Times Limited 2026