There are good reasons to go slow with new versions of AI, as Cristina Criddle explains in the Financial Times, April 12, 2025.
OpenAI has slashed the time and resources it spends on testing the safety of its powerful artificial intelligence models, raising concerns that its technology is being rushed out without sufficient safeguards.
Staff and third-party groups have recently been given just days to conduct “evaluations”, the term given to tests for assessing models' risks and performance, on OpenAI’s latest large language models, compared with several months previously.
According to eight people familiar with OpenAI’s testing processes, the start-up’s tests have become less thorough, with insufficient time and resources spent on identifying and mitigating risks, as the $300bn start-up comes under pressure to release models quickly and retain its competitive edge.
“We had more thorough safety testing when [the technology] was less important,” said one person testing OpenAI’s upcoming o3 model, designed for complex tasks such as problem-solving and reasoning.
They added that as LLMs become more capable, the “potential weaponisation” of the technology increases. “But because there is more demand for it, they want it out faster. I hope it is not a catastrophic mis-step, but it is reckless. This is a recipe for disaster.”
The time crunch has been driven by “competitive pressures”, according to people familiar with the matter, as OpenAI races against Big Tech groups such as Meta and Google and start-ups including Elon Musk’s xAI to cash in on the cutting-edge technology.
There is no global standard for AI safety testing, but from later this year, the EU’s AI Act will force companies to conduct safety tests on their most powerful models. Previously, AI groups have signed voluntary commitments with US and UK governments to allow researchers at AI safety institutes to test models.
OpenAI has been pushing to release its new model o3 as early as next week, giving less than a week to some testers for their safety checks, according to people familiar with the matter. This release date could be subject to change.
Previously, OpenAI allowed several months for safety tests. For GPT-4, which was launched in 2023, testers had six months to conduct evaluations before it was released, according to people familiar with the matter. One person who had tested GPT-4 said some dangerous capabilities were discovered only two months into testing. “They are just not prioritising public safety at all,” they said.
“There’s no regulation saying [companies] have to keep the public informed about all the scary capabilities . . . and they’re under lots of pressure to race each other so they’re not going to stop making them more capable,” said Daniel Kokotajlo, a former OpenAI researcher who now leads the non-profit group AI Futures Project.

OpenAI has previously committed to building customised versions of its models to assess for potential misuse, such as whether its technology could help make a biological virus more transmissible. The approach involves considerable resources, such as assembling data sets of specialised information, like virology, and feeding them to the model to train it, a technique called fine-tuning.
But OpenAI has done this only in a limited way, opting to fine-tune an older, less capable model instead of its more powerful and advanced ones.
OpenAI has never reported how its newer models, like o1 and o3-mini, would score if fine-tuned. “It is great OpenAI set such a high bar by committing to testing customised versions of their models. But if it is not following through on this commitment, the public deserves to know,” said Steven Adler, a former OpenAI safety researcher.
“Not doing such tests could mean OpenAI and the other AI companies are underestimating the worst risks of their models,” he added.
People familiar with such tests said they bore hefty costs, such as hiring external experts, creating specific data sets, and using internal engineers and computing power.
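For readers unfamiliar with the technique the article refers to, the sketch below illustrates what supervised fine-tuning of a language model on a specialised text corpus looks like in practice. It is a minimal illustration only, using the open-source Hugging Face transformers library and PyTorch; the base model ("gpt2"), the corpus file, and the hyperparameters are placeholders and do not reflect OpenAI's internal models, data, or tooling.

```python
# Minimal sketch of supervised fine-tuning on a specialised text corpus.
# The base model ("gpt2") and "domain_corpus.txt" are hypothetical placeholders.
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default

# Hypothetical domain corpus: one document per line.
with open("domain_corpus.txt") as f:
    texts = [line.strip() for line in f if line.strip()]

enc = tokenizer(texts, truncation=True, max_length=256,
                padding="max_length", return_tensors="pt")
loader = DataLoader(TensorDataset(enc["input_ids"], enc["attention_mask"]),
                    batch_size=4, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(3):                          # a few passes over the corpus
    for input_ids, attention_mask in loader:
        outputs = model(input_ids=input_ids,
                        attention_mask=attention_mask,
                        labels=input_ids)       # causal LM loss on the same tokens
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()

model.save_pretrained("fine_tuned_model")       # evaluators would then probe this model
```

The point of the exercise described in the article is not the training loop itself, which is routine, but the cost of assembling the specialised data set and of expert evaluation of the resulting model's dangerous capabilities.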
OpenAI said it had made efficiencies in its evaluation processes, including automated tests, which have led to a reduction in timeframes. It said there was no agreed recipe for approaches such as fine-tuning, but it was confident that its methods were the best it could do and were made transparent in its reports. It added that models, especially for catastrophic risks, were thoroughly tested and mitigated for safety. “We have a good balance of how fast we move and how thorough we are,” said Johannes Heidecke, head of safety systems.
Another concern raised was that safety tests are not conducted on the final models released to the public. Instead, they are performed on earlier so-called checkpoints that are later updated to improve performance and capabilities, with “near-final” versions referenced in OpenAI’s system safety reports.
“It is bad practice to release a model which is different from the one you evaluated,” said a former OpenAI technical staff member.
OpenAI said the checkpoints were “basically identical” to what was launched in the end.