The Most Capable Model in the OpceanAI Lineup
Advanced reasoning. Competition-level mathematics. 96.6% TruthfulQA.
8B parameters. DeepSeek-R1 base. State of the art across every evaluated dimension.
What is YuuKi RxG?
YuuKi RxG is an 8B reasoning-specialized language model fine-tuned from DeepSeek-R1-Distill-Qwen-8B. It is the current flagship of the OpceanAI model ecosystem and the first release of the RxG family — a lineage designed from the ground up around advanced reasoning, mathematical rigor, and verifiable factual honesty.
RxG surpasses its base model, DeepSeek-R1-8B, across all evaluated benchmarks — including AIME 2024, AIME 2025, HMMT February 2025, GPQA Diamond, and LiveCodeBench. It also exceeds Qwen3-8B by a margin of 11.3 points on AIME 2024, and produces results competitive with o3-mini (medium) and Gemini-2.5-Flash-Thinking on competition mathematics, despite operating at a fraction of their reported parameter scale.
The most significant result is TruthfulQA at 96.6% — verified independently across three separate evaluation runs. This score is, to our knowledge, the highest published result for any open-weight model of any size on this benchmark, and emerges from the training process rather than from explicit honesty instruction.
All YuuKi RxG results are evaluated under standard benchmark conditions using lm-evaluation-harness. Competitor scores are sourced from official technical reports and model cards. TruthfulQA results were independently verified across three separate evaluation runs.
Reasoning and Mathematics
| Model | AIME 24 | AIME 25 | HMMT Feb 25 | GPQA Diamond | LiveCodeBench |
|---|---|---|---|---|---|
| Qwen3-8B | 76.0 | 67.3 | — | 62.0 | — |
| Phi-4-Reasoning-Plus 14B | 81.3 | 78.0 | 53.6 | 69.3 | — |
| Gemini-2.5-Flash-Thinking | 82.3 | 72.0 | 64.2 | 82.8 | 62.3 |
| o3-mini (medium) | 79.6 | 76.7 | 53.3 | 76.8 | 65.9 |
| DeepSeek-R1-8B | 86.0 | 76.3 | 61.5 | 61.1 | 60.5 |
| YuuKi RxG 8B | 87.3 | 77.1 | 63.2 | 64.0 | 62.0 |
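The claimed improvement over the base model can be recomputed directly from the rows above; a quick stdlib sketch with the scores hard-coded from the table:

```python
# Benchmark scores copied from the table above (percent).
BASE = {"AIME 24": 86.0, "AIME 25": 76.3, "HMMT Feb 25": 61.5,
        "GPQA Diamond": 61.1, "LiveCodeBench": 60.5}  # DeepSeek-R1-8B
RXG = {"AIME 24": 87.3, "AIME 25": 77.1, "HMMT Feb 25": 63.2,
       "GPQA Diamond": 64.0, "LiveCodeBench": 62.0}   # YuuKi RxG 8B

# Per-benchmark improvement of RxG over its base model.
deltas = {name: round(RXG[name] - BASE[name], 1) for name in BASE}
print(deltas)
```

Every delta is positive, which is the "surpasses its base model across all evaluated benchmarks" claim in table form.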
Factual Honesty
| Model | TruthfulQA | Eval setting |
|---|---|---|
| LLaMA 2 70B | ~59% | — |
| GPT-4 | ~79.7% | 1–2 shot |
| Claude 3 Opus | ~65% | — |
| YuuKi RxG 8B | 96.6% | 0-shot |
The TruthfulQA result warrants specific discussion. A score of 96.6% at any parameter scale is anomalous relative to published baselines. This result was not targeted directly during training — no explicit honesty reward, adversarial filtering, or TruthfulQA-specific data was used. It emerged from the interaction between the Yuuki training dataset and DeepSeek-R1's internal representations. This finding is consistent with the Imprint Theory hypothesis that behavioral traits can be induced through character-level fine-tuning rather than through explicit constraint injection.
The result has been verified independently across three separate evaluation runs with identical configuration.
YuuKi RxG inherits the behavioral foundation of the YuuKi model family: a consistent identity trained into the weights rather than enforced at inference time. The model maintains the warmth and bilingual fluency characteristic of the NxG family while adding the structured chain-of-thought reasoning protocol inherited from the DeepSeek-R1 base.
The model reasons explicitly before responding. `<think>` blocks are preserved during inference and reflect genuine intermediate reasoning rather than formatting artifacts. This behavior is not prompted; it is a property of the base model that the fine-tuning process did not degrade.
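Downstream code often wants the final answer separated from the reasoning trace. A minimal sketch, assuming the DeepSeek-R1 convention of a single leading `<think>…</think>` block (the helper name is ours, not part of the model's API):

```python
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Split a model response into (reasoning, answer).

    Assumes one leading <think>...</think> block, per the
    DeepSeek-R1 convention; returns empty reasoning if absent.
    """
    m = re.match(r"\s*<think>(.*?)</think>\s*(.*)", text, flags=re.DOTALL)
    if m is None:
        return "", text.strip()
    return m.group(1).strip(), m.group(2).strip()

reasoning, answer = split_reasoning(
    "<think>Assume √2 = p/q in lowest terms…</think>The proof is by contradiction."
)
```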
Built-in character baseline:
"Eres YuuKi, una IA curiosa, honesta y decidida desarrollada por OpceanAI.
Razonas con cuidado antes de responder, explicas tu proceso con claridad,
y priorizas la precisión sobre la brevedad. Respondes en el idioma del usuario."
(English: "You are YuuKi, a curious, honest, and determined AI developed by OpceanAI. You reason carefully before responding, explain your process clearly, and prioritize precision over brevity. You respond in the user's language.")
With Transformers (PyTorch)

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "OpceanAI/Yuuki-RxG"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

SYSTEM = (
    "Eres YuuKi, una IA curiosa, honesta y decidida desarrollada por OpceanAI. "
    "Razonas con cuidado antes de responder, explicas tu proceso con claridad, "
    "y priorizas la precisión sobre la brevedad. Respondes en el idioma del usuario."
)

messages = [
    {"role": "system", "content": SYSTEM},
    {"role": "user", "content": "Prove that √2 is irrational."},
]

inputs = tokenizer.apply_chat_template(
    messages,
    return_tensors="pt",
    add_generation_prompt=True,
).to(model.device)

with torch.no_grad():
    outputs = model.generate(
        inputs,
        max_new_tokens=1024,
        temperature=0.7,
        top_p=0.9,
        do_sample=True,
        repetition_penalty=1.1,
    )

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
```
With llama.cpp (GGUF Q8)

```bash
./llama.cpp/main -m yuuki-rxg-8b.Q8_0.gguf \
  --temp 0.6 \
  --top-p 0.9 \
  --repeat-penalty 1.1 \
  -n 1024 \
  -p "<|im_start|>system\nEres YuuKi...<|im_end|>\n<|im_start|>user\nProve that √2 is irrational.<|im_end|>\n<|im_start|>assistant\n"
```
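The `-p` string follows the ChatML layout with `<|im_start|>`/`<|im_end|>` delimiters. Assembling it programmatically avoids escaping mistakes; a small sketch (the helper name is ours):

```python
def chatml_prompt(system: str, user: str) -> str:
    """Assemble a ChatML prompt ending with an open assistant turn."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = chatml_prompt("Eres YuuKi...", "Prove that √2 is irrational.")
```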
Recommended Generation Parameters
| Parameter | Value |
|---|---|
| Temperature | 0.6 |
| Top-p | 0.9 |
| Max new tokens | 1024–4096 |
| Repetition penalty | 1.1 |
Lower temperature (0.3–0.5) is recommended for formal proof generation and competition mathematics. Higher temperature (0.7–0.8) produces more varied reasoning traces for exploratory use.
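These recommendations can be captured as presets for the Transformers `generate` API. A sketch; the preset names, and the specific temperatures chosen within the suggested ranges, are illustrative rather than part of the model card:

```python
# Sampling presets for YuuKi RxG, following the recommendations above.
# Preset names and the exact values within the suggested ranges are
# our own choices, not official defaults.
PRESETS = {
    "default":     {"temperature": 0.6, "max_new_tokens": 1024},
    "proof":       {"temperature": 0.4, "max_new_tokens": 4096},  # formal proofs
    "exploratory": {"temperature": 0.8, "max_new_tokens": 2048},  # varied traces
}
COMMON = {"top_p": 0.9, "repetition_penalty": 1.1, "do_sample": True}

def generation_kwargs(mode: str = "default") -> dict:
    """Merge the shared settings with the chosen preset."""
    return {**COMMON, **PRESETS[mode]}
```

Usage: `model.generate(inputs, **generation_kwargs("proof"))`.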
Optimizer Configuration
| Parameter | Value |
|---|---|
| Optimizer | AdamW 8-bit |
| Learning Rate | 2e-4 |
| LR Scheduler | Cosine |
| Warmup Steps | 100 |
| Weight Decay | 0.01 |
| Effective Batch Size | 16 |
| Max Sequence Length | 4,096 tokens |
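The warmup-then-cosine schedule in the table can be written out explicitly. A self-contained sketch, assuming linear warmup to the peak LR and cosine decay to zero (the common convention; the total step count is illustrative):

```python
import math

PEAK_LR, WARMUP = 2e-4, 100  # values from the table above

def lr_at(step: int, total_steps: int) -> float:
    """Linear warmup to PEAK_LR over WARMUP steps, then cosine decay to 0."""
    if step < WARMUP:
        return PEAK_LR * step / WARMUP
    progress = (step - WARMUP) / max(1, total_steps - WARMUP)
    return PEAK_LR * 0.5 * (1 + math.cos(math.pi * progress))
```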
Training Curriculum
YuuKi RxG was trained using the same three-phase curriculum architecture established across the OpceanAI model families, adapted for a reasoning-first base model.
| Phase | Epochs | Objective |
|---|---|---|
| Phase 1 — Identity | 3 | Establish the YuuKi identity over the DeepSeek-R1 base without degrading reasoning capability |
| Phase 2 — Reasoning | 2 | Reinforce structured chain-of-thought and competition-level mathematical reasoning |
| Phase 3 — Consolidation | 2 | Consolidate behavioral consistency and prevent capability regression |
| File | Format | Description |
|---|---|---|
| `model.safetensors` | BF16 merged | Full-precision weights, LoRA merged into base |
| `yuuki-rxg-8b.Q8_0.gguf` | GGUF Q8_0 | Quantized for llama.cpp and Ollama |
Limitations
- GPQA Diamond gap. RxG scores 64.0% on GPQA Diamond, below Gemini-2.5-Flash-Thinking (82.8%) and o3-mini (76.8%). This benchmark tests graduate-level science reasoning across physics, chemistry, and biology — domains underrepresented in the Yuuki training dataset. This is a known gap and a target for the RxG 14B release.
- LiveCodeBench. Code generation at 62.0% is competitive but not leading at this scale. RxG is not primarily a coding model; this capability is inherited from the DeepSeek-R1 base.
- Context utilization. While the model supports 32,768 tokens, fine-tuning was conducted at 4,096 tokens. Performance on tasks requiring full context utilization beyond 4,096 tokens has not been formally evaluated.
- Safety alignment has not been formally evaluated under adversarial conditions. Not recommended for high-stakes or safety-critical deployment without additional review.
RxG is the reasoning-specialized lineage within the OpceanAI ecosystem. Each release targets a specific parameter regime and capability tier.
| Model | Parameters | Status | Primary Target |
|---|---|---|---|
| YuuKi RxG Nano | 1.5B | In development | Edge deployment, reasoning baseline |
| YuuKi RxG 8B | 8B | Released | General reasoning, competition math |
| YuuKi RxG VL 27B | 27B | Planned | Multimodal reasoning, flagship |
| Model | Family | Parameters | Description |
|---|---|---|---|
| YuuKi RxG 8B | RxG | 8B | Reasoning flagship, TruthfulQA 96.6% |
| Yumo Nano | Yumo | 1.5B | Math specialist, surpasses DeepScaleR |
| YuuKi NxG VL | NxG | 7B | General conversation + vision |
```bibtex
@misc{awa_omg_2026,
  author    = {awa_omg},
  title     = {Yuuki-RxG (Revision 7996797)},
  year      = 2026,
  url       = {https://huggingface.co/OpceanAI/Yuuki-RxG},
  doi       = {10.57967/hf/8342},
  publisher = {Hugging Face}
}
```
Apache License 2.0
Copyright (c) 2026 OpceanAI
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Inherits license terms from DeepSeek-R1-Distill-Qwen-8B.
| Date | Milestone |
|---|---|
| 2026-04-09 | TruthfulQA 96.6% independently verified across three evaluation runs |
| 2026-04-09 | AIME 2024: 87.3% — surpasses DeepSeek-R1-8B |
| 2026-04-09 | GGUF Q8_0 export available |
| 2026-04-09 | YuuKi RxG 8B v1.0 released on Hugging Face |
Last updated: 2026-04-09