SU-01: Achieving Gold-Medal-Level Olympiad Reasoning via Simple and Unified Scaling
A compact 30B-A3B reasoning model for rigorous mathematical and scientific olympiad problem solving.
Introduction • Key Highlights
Getting Started • Training Code • Test-Time Scaling • Evaluation
Introduction
SU-01 is a 30B-A3B olympiad reasoning model trained with a simple and unified post-training recipe for mathematical and scientific problem solving. The goal is to turn a broadly capable post-trained reasoning backbone into a rigorous long-horizon proof solver without relying on external tools, code execution, or dedicated symbolic solvers.
The recipe first applies reverse-perplexity curriculum SFT on roughly 338K sub-8K-token trajectories to install explicit, proof-oriented reasoning behavior. It then uses 200 steps of two-stage reinforcement learning to improve both answer-seeking ability and complete-proof quality. Finally, SU-01 uses a multi-round generate-verify-revise loop at inference time, enabling coherent natural-language reasoning trajectories beyond 100K tokens on difficult olympiad problems.
In competition-style evaluations, test-time scaling brings SU-01 to 35 points on IMO 2025 and 35 points on USAMO 2026, reaching gold-medal-level performance. SU-01 also exceeds the gold cutoff on IPhO 2024/2025 and substantially improves over similarly sized models on proof-level benchmarks such as IMO-ProofBench.
Key Highlights
- Reverse-perplexity curriculum SFT: sorts long-CoT training examples by descending PPL within each epoch, exposing the model first to teacher trajectories most mismatched with the current policy.
- Two-stage RL: starts with verifiable-reward training for answer-seeking behavior, then shifts to proof-quality optimization with self-refinement and experience replay.
- Long-horizon proof repair: uses iterative generation, verification, issue localization, and refinement to produce complete olympiad-style solutions.
- Gold-medal-level results: reaches 35 points on both IMO 2025 and USAMO 2026 with test-time scaling, and passes IPhO 2024/2025 gold lines.
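As a concrete illustration of the reverse-perplexity curriculum, the per-epoch ordering can be sketched as follows. This is a minimal sketch under stated assumptions: the `nll` scorer, the `trajectory` field name, and the toy data are illustrative stand-ins, not the released implementation (real scoring would run the current policy over each teacher trajectory).

```python
import math
from typing import Callable, Dict, List


def reverse_ppl_order(examples: List[Dict], nll: Callable[[str], float]) -> List[Dict]:
    """Order one epoch of SFT examples by descending perplexity.

    `nll` is assumed to return the mean per-token negative log-likelihood of a
    trajectory under the *current* policy, so PPL = exp(mean NLL). Sorting in
    descending order schedules the trajectories most mismatched with the
    policy first, matching the curriculum described above.
    """
    scored = [(math.exp(nll(ex["trajectory"])), ex) for ex in examples]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [ex for _, ex in scored]


# Toy usage with a stand-in scorer (hypothetical values, for illustration only).
epoch = [{"trajectory": "short proof"}, {"trajectory": "long unfamiliar proof"}]
fake_nll = {"short proof": 0.8, "long unfamiliar proof": 2.1}
ordered = reverse_ppl_order(epoch, lambda t: fake_nll[t])
# The high-PPL (most unfamiliar) trajectory is scheduled first.
```

Re-scoring at each epoch boundary lets the ordering track the evolving policy rather than a fixed teacher-mismatch ranking.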
Gold-Medal Competition Results
IMO 2025
| Model | P1 | P2 | P3 | P4 | P5 | P6 | Total |
|---|---|---|---|---|---|---|---|
| SU-01 | 1 | 7 | 1 | 6 | 6 | 0 | 21 |
| SU-01 w/ TTS | 7* | 7* | 7* | 7* | 7* | 0* | 35* |
USAMO 2026
| Model | P1 | P2 | P3 | P4 | P5 | P6 | Total |
|---|---|---|---|---|---|---|---|
| SU-01 | 7 | 0 | 0 | 7 | 0 | 1 | 15 |
| SU-01 w/ TTS | 7* | 0* | 7* | 7* | 7* | 7* | 35* |
* denotes TTS results graded by human experts. Medal lines for IMO 2025 are 35/28/19 points for gold/silver/bronze, and medal lines for USAMO 2026 are 25/18/11 points.
Getting Started
This Hugging Face repository hosts the model weights. The training and evaluation code is maintained in the GitHub repository:
- GitHub repo: Simplified-Reasoning/SU-01
- Training code: su01-train-slime
- Evaluation code: su01-eval
Installation
Clone the code repository:

```shell
git clone https://github.com/Simplified-Reasoning/SU-01.git
cd SU-01
```

The project uses the slimerl/slime:nightly-dev-20260202c Docker image:

```shell
docker pull slimerl/slime:nightly-dev-20260202c
docker run --gpus all --ipc=host --network=host -it \
    -v "$PWD":/workspace/SU-01 \
    -w /workspace/SU-01/su01-train-slime \
    slimerl/slime:nightly-dev-20260202c \
    /bin/bash
```

Inside the container, install the local training package:

```shell
pip install -e . --no-deps --no-index --disable-pip-version-check --no-build-isolation
```
Adjust cluster mounts, model paths, data paths, Ray environment variables, and reward-server URLs according to your infrastructure.
Training Code
The released training code contains the three major training stages used by SU-01:
```
su01-train-slime/scripts
├── sft.sh         # Stage 1: reverse-perplexity curriculum SFT
├── coarse_rl.sh   # Stage 2: coarse RL with verifiable rewards
└── refined_rl.sh  # Stage 3: refined RL with proof rewards, self-refinement, and experience replay
```
Test-Time Scaling
SU-01 uses a model-internal verification-and-refinement loop:
- Generate an initial complete solution.
- Verify the full proof and produce a structured critique or bug report.
- Refine the solution conditioned on the critique.
- Repeat until the solution is accepted or the refinement budget is exhausted.
This expands the model's own natural-language proof-search computation rather than calling an external theorem prover, symbolic solver, or code executor. In the reported USAMO 2026 TTS traces, initial solution generations have a median length of approximately 106K tokens, while refinement stages have a median length of approximately 83K tokens.
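The loop above can be sketched in a few lines. This is a schematic, not the released code: all three callables are assumed to be prompts into the same model (generation, verification, and refinement roles), the `Verdict` type and `max_rounds` default are hypothetical, and no external solver or code executor is involved.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Verdict:
    accepted: bool
    critique: str  # structured critique / bug report when not accepted


def tts_loop(problem: str,
             generate: Callable[[str], str],
             verify: Callable[[str, str], Verdict],
             refine: Callable[[str, str, str], str],
             max_rounds: int = 4) -> str:
    """Model-internal generate-verify-refine loop (sketch).

    Produces an initial full solution, then repeatedly verifies it and,
    on rejection, refines it conditioned on the critique, until the
    solution is accepted or the refinement budget is exhausted.
    """
    solution = generate(problem)
    for _ in range(max_rounds):
        verdict = verify(problem, solution)
        if verdict.accepted:
            break
        solution = refine(problem, solution, verdict.critique)
    return solution


# Toy drive with stand-in callables (real calls would prompt the model).
def fake_generate(p):
    return "draft"

def fake_verify(p, s):
    return Verdict(s == "fixed", "" if s == "fixed" else "gap in step 2")

def fake_refine(p, s, critique):
    return "fixed"

final = tts_loop("toy problem", fake_generate, fake_verify, fake_refine)
```

Because verification and refinement are just further decoding passes, the total trace length grows with each round, which is how the reported 100K+ token trajectories arise.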
The released TTS implementation is in su01-eval/decode, including direct decoding, TTS decoding, batch decoding, and SGLang server helpers. See su01-eval/decode/README.md for launch commands, input layout, decoding options, and smoke tests.
Evaluation
Evaluation code is released under su01-eval. Use su01-eval/decode to generate direct or TTS predictions, and use su01-eval/verifiable_bench to score answer-verifiable benchmarks and FrontierScience Olympiad predictions.
See su01-eval/decode/README.md and su01-eval/verifiable_bench/README.md for commands, input formats, output formats, and configuration options.
Table 1: Performance on Answer-Verifiable Reasoning Tasks
AnswerBench, AMO-Bench, AIME 25/26, and FrontierScience-Olympiad are averaged over 4, 8, 8, and 4 runs, respectively. Avg. is the mean of AnswerBench, AMO-Bench, AIME 2025, AIME 2026, and FrontierScience-Olympiad.
| Model | AnswerBench | AMO-Bench | AIME 25/26 | FS-O Physics | FS-O Chemistry | FS-O Biology | FS-O Overall | Avg. |
|---|---|---|---|---|---|---|---|---|
| P1-30B-A3B | 69.3% | 41.3% | 90.4% / 89.6% | 57.5% | 57.5% | 27.5% | 54.5% | 69.0% |
| GLM-4.7-Flash | 73.8% | 53.8% | 91.3% / 88.3% | 54.5% | 60.0% | 17.5% | 53.0% | 72.0% |
| Nemotron-Cascade-2 | 80.5% | 40.8% | 94.2% / 90.0% | 56.0% | 56.3% | 30.0% | 53.5% | 71.8% |
| Qwen3.6-35B-A3B | 78.0% | 58.8% | 92.5% / 92.9% | 65.5% | 74.4% | 25.0% | 65.0% | 77.4% |
| Gemma-4-31B | 74.0% | 39.3% | 88.8% / 91.3% | 69.0% | 61.9% | 27.5% | 61.0% | 70.9% |
| SU-01 | 77.5% | 59.8% | 94.6% / 93.3% | 62.5% | 69.4% | 25.0% | 61.5% | 77.3% |
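As a concrete check of the Avg. column, the five averaged components for the SU-01 row reproduce the reported 77.3% (AIME 2025 and 2026 enter as separate scores, while only FS-O Overall enters from FrontierScience-Olympiad):

```python
# Avg. = mean of AnswerBench, AMO-Bench, AIME 2025, AIME 2026, FS-O Overall
su01_scores = [77.5, 59.8, 94.6, 93.3, 61.5]
avg = sum(su01_scores) / len(su01_scores)
print(round(avg, 1))  # 77.3
```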
Table 2: Performance on Non-Verifiable Benchmarks
FrontierScience-Research refers to the research subset of FrontierScience. For SU-01, x/y reports scores without and with TTS on IMO-ProofBench.
| Model | ProofBench Basic | ProofBench Advanced | ProofBench Overall | FS-R Physics | FS-R Chemistry | FS-R Biology | FS-R Overall |
|---|---|---|---|---|---|---|---|
| Gemini 3.1 Pro Thinking | 95.2% | 50.0% | 72.6% | 0.0% | 30.0% | 10.0% | 13.3% |
| GPT-5.5-High | 96.7% | 64.8% | 80.7% | 25.0% | 40.0% | 45.0% | 36.7% |
| DeepSeek-V3.2-Speciale | 77.6% | 34.3% | 56.0% | 10.0% | 20.0% | 15.0% | 15.0% |
| P1-30B-A3B | 33.8% | 6.2% | 20.0% | 0.0% | 10.0% | 0.0% | 3.3% |
| GLM-4.7-Flash | 51.0% | 16.7% | 33.8% | 0.0% | 0.0% | 0.0% | 0.0% |
| Nemotron-Cascade-2 | 77.1% | 28.6% | 52.9% | 5.0% | 5.0% | 20.0% | 10.0% |
| Qwen3.6-35B-A3B | 39.1% | 7.1% | 23.1% | 0.0% | 5.0% | 10.0% | 5.0% |
| Gemma-4-31B | 46.7% | 16.2% | 31.4% | 0.0% | 10.0% | 5.0% | 5.0% |
| SU-01 | 77.1% / 91.0% | 38.1% / 49.5% | 57.6% / 70.2% | 10.0% | 10.0% | 15.0% | 11.7% |
Table 3: Performance on Olympiad Competition Problems
For IPhO, x/y reports scores without and with TTS. Gold lines for IPhO 2024/2025 are 20.8/19.7 points. Medal lines for IMO 2025 are 35/28/19 points, and medal lines for USAMO 2026 are 25/18/11 points.
IPhO 2024/2025
| Model | IPhO 2024 | IPhO 2025 |
|---|---|---|
| P1-30B-A3B | 23.1 | 17.7 |
| GLM-4.7-Flash | 22.2 | 19.5 |
| Nemotron-Cascade-2 | 21.2 | 16.7 |
| Qwen3.6-35B-A3B | 24.3 | 19.9 |
| Gemma-4-31B | 24.4 | 20.3 |
| SU-01 | 23.5 / 25.3 | 20.3 / 21.7 |
IMO 2025
| Model | P1 | P2 | P3 | P4 | P5 | P6 | Total |
|---|---|---|---|---|---|---|---|
| SU-01 | 1 | 7 | 1 | 6 | 6 | 0 | 21 |
| SU-01 w/ TTS | 7* | 7* | 7* | 7* | 7* | 0* | 35* |
USAMO 2026
| Model | P1 | P2 | P3 | P4 | P5 | P6 | Total |
|---|---|---|---|---|---|---|---|
| SU-01 | 7 | 0 | 0 | 7 | 0 | 1 | 15 |
| SU-01 w/ TTS | 7* | 0* | 7* | 7* | 7* | 7* | 35* |
* denotes TTS results graded by human experts.
Acknowledgement
This work was supported by the Shanghai Artificial Intelligence Laboratory.
We thank the authors and maintainers of prior open research and infrastructure that made this work possible. In particular, we are grateful to DeepSeek for open-sourcing strong reasoning policies and generative reward models, which provided an important reference point for our work. IMO-Bench, AMO-Bench, and FrontierScience helped guide the overall system optimization by offering challenging mathematical and scientific reasoning benchmarks and evaluation protocols.
We also thank prior data efforts that supported our SFT and RL data curation, including DeepMath, NaturalReasoning, Eurus, OpenCodeReasoning, P1, and OPC, as well as the many public problem sources and communities that cannot all be listed here. We further acknowledge the broader open-source infrastructure ecosystem, including slime for training and SGLang for efficient inference and serving.
Citation
If you find SU-01 useful, please cite the project:
```
@misc{su012026,
  title={Achieving Gold-Medal-Level Olympiad Reasoning via Simple and Unified Scaling},
  author={Yafu Li and Runzhe Zhan and Haoran Zhang and Shunkai Zhang and Yizhuo Li and Zhilin Wang and Jiacheng Chen and Futing Wang and Xuyang Hu and Yuchen Fan and Bangjie Xu and Yucheng Su and Xinmiao Han and Chenxi Li and Haodi Lei and Yufeng Zhao and Zejin Lin and Qianjia Cheng and Tong Zhu and Xiaoye Qu and Ganqu Cui and Peng Ye and Yun Luo and Zhouchen Lin and Yu Qiao and Bowen Zhou and Ning Ding and Yu Cheng},
  year={2026},
  url={http://arxiv.org/abs/2605.13301}
}
```