Behind the Numbers: The Real Story of OpenAI’s o3 Benchmark Performance


OpenAI’s own Wenda Zhou, a member of the technical staff, said during a livestream last week that the o3 in production is “more optimized for real-world use cases” and speed compared with the version of o3 demoed in December. As a result, it may exhibit benchmark “disparities,” he added.

“[W]e’ve done [optimizations] to make the [model] more cost efficient and more useful in general,” Zhou said. “We still hope that — we still think that — this is a much better model. You won’t have to wait as long when you’re asking for an answer, which is a real thing with these [types of] models.”

Granted, the fact that the public release of o3 falls short of OpenAI’s testing promises is a bit of a moot point, since the company’s o3-mini-high and o4-mini models outperform o3 on FrontierMath, and OpenAI plans to debut a more powerful o3 variant, o3-pro, in the coming weeks.

It is, however, another reminder that AI benchmarks are best not taken at face value — particularly when the source is a company with services to sell.

Benchmarking “controversies” are becoming a common occurrence in the AI industry as vendors race to capture headlines and mindshare with new models.

In January, Epoch was criticized for waiting until after OpenAI announced o3 to disclose funding it had received from the company. Many academics who contributed to FrontierMath weren’t informed of OpenAI’s involvement until it was made public.

More recently, Elon Musk’s xAI was accused of publishing misleading benchmark charts for its latest AI model, Grok 3. Just this month, Meta admitted to touting benchmark scores for a version of a model that differed from the one the company made available to developers.

That doesn’t mean OpenAI lied, per se. The benchmark results the company published in December show a lower-bound score that matches the score Epoch observed. Epoch also noted its testing setup likely differs from OpenAI’s, and that it used an updated release of FrontierMath for its evaluations.

“The difference between our results and OpenAI’s might be due to OpenAI evaluating with a more powerful internal scaffold, using more test-time [computing], or because those results were run on a different subset of FrontierMath (the 180 problems in frontiermath-2024-11-26 vs the 290 problems in frontiermath-2025-02-28-private),” wrote Epoch.

According to a post on X from the ARC Prize Foundation, an organization that tested a pre-release version of o3, the public o3 model “is a different model tuned for chat/product use,” corroborating Epoch’s report.

“All released o3 compute tiers are smaller than the version we [benchmarked],” wrote ARC Prize. Generally speaking, bigger compute tiers can be expected to achieve better benchmark scores.

