FINAL_Bench



Recent Activity

SeaWolf-AI updated a Space about 9 hours ago: FINAL-Bench/LiteRT-LM
SeaWolf-AI published a Space about 9 hours ago: FINAL-Bench/LiteRT-LM
SeaWolf-AI updated a Space 2 days ago: FINAL-Bench/MoneyPrinterV2

Articles

SeaWolf-AI posted an update 4 days ago
🔥 128 Blackwell GPUs — Thank You, Hugging Face

I've been awarded 128 NVIDIA Blackwell GPUs through NIPA (Korea's National IT Industry Promotion Agency). Sharing this here first — because Hugging Face is where it all started.

I design LLM architectures from scratch. HF was my lab — dissecting Transformers internals, analyzing thousands of checkpoints, iterating on Spaces with global feedback.

Our FINAL Bench reached #5 globally in HF dataset popularity, and this research is exactly what earned the GPU grant.
👉 FINAL-Bench/Leaderboard

These 128 Blackwells will scale AETHER-Net — our Proto-AGI architecture (Emergence Engine · Meta-Cognition · SLAI · Multi-Intelligence · Synergy & Critique) — validated at 0.8B with MoE expansion to 2.1B params. Next stop: 166B.

People I must thank:

@John6666 — Guardian of this ecosystem. Never misses a forum question, takes an interest in every project, active 24/7. I've genuinely wondered if you're a machine. Remarkable.

@bartowski — Master of quantization. The hidden infrastructure of open-source LLMs. Countless experiments were possible thanks to you.

@SaylorTwift — You see what others miss. Insight that cuts to the essence. Deep respect.

My promise: AETHER-Net design docs, training recipes, checkpoints, and failure logs — all shared here openly.

🤗 Thank you, Hugging Face. Let's turn the next page together. 🚀

#OpenScience #HuggingFace #ProtoAGI #AETHER #LLMArchitecture #Blackwell #NIPA
SeaWolf-AI posted an update 5 days ago
💎 Gemma 4 Playground — Dual Model Demo on ZeroGPU

We just launched a Gemma 4 Playground that lets you chat with Google DeepMind's latest open models — directly on Hugging Face Spaces with ZeroGPU.


👉 Try it now: FINAL-Bench/Gemma-4-Multi

Two Models, One Space

Switch between both Gemma 4 variants in a single interface:

⚡ Gemma 4 26B-A4B — MoE with 128 experts, only 3.8B active params. 95% of the 31B's quality at ~8x faster inference. AIME 88.3%, GPQA 82.3%.
🏆 Gemma 4 31B — Dense 30.7B. Best quality among Gemma 4 family. AIME 89.2%, GPQA 84.3%, Codeforces 2150. Arena open-model top 3.
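
The "~8x faster" claim lines up with the active-parameter ratio: per-token decode compute scales roughly with the number of active parameters, so comparing the dense 30.7B against the MoE's 3.8B active gives the speedup. A quick back-of-envelope check (the linear-scaling assumption is a simplification — memory bandwidth and routing overhead shift the real number):

```python
# Back-of-envelope: per-token decode compute scales roughly with the number
# of ACTIVE parameters, so the MoE's speedup over the dense model is about
# the ratio of dense params to active params.
DENSE_PARAMS_B = 30.7  # Gemma 4 31B, dense
MOE_ACTIVE_B = 3.8     # Gemma 4 26B-A4B, active params per token

speedup = DENSE_PARAMS_B / MOE_ACTIVE_B
print(f"~{speedup:.1f}x faster per decoded token")  # ~8.1x
```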

Features

Vision — Upload images for analysis, OCR, chart reading, document parsing
Thinking Mode — Toggle chain-of-thought reasoning with Gemma 4's native <|channel> thinking tokens
System Prompts — 6 presets (General, Code, Math, Creative, Translate, Research) or write your own
Streaming — Real-time token-by-token response via ZeroGPU
Apache 2.0 — Fully open, no restrictions
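
The streaming feature follows the standard Gradio pattern: yield a growing partial string and the UI re-renders it each time. Here is a minimal stand-alone sketch of that pattern — `fake_token_stream` is a hypothetical stand-in for transformers' `TextIteratorStreamer`, not the Space's actual code:

```python
from typing import Iterator

def fake_token_stream(text: str) -> Iterator[str]:
    """Stand-in for a real token streamer: yields one "token" at a time."""
    for word in text.split(" "):
        yield word + " "

def stream_reply(prompt: str) -> Iterator[str]:
    # In the real Space, model.generate() would run in a background thread
    # and this loop would iterate a TextIteratorStreamer; here we just
    # echo a canned reply token by token to show the accumulation pattern.
    reply = f"Echoing: {prompt}"
    partial = ""
    for token in fake_token_stream(reply):
        partial += token
        yield partial  # Gradio re-renders the growing string on each yield

chunks = list(stream_reply("hello world"))
print(chunks[-1])  # final accumulated string
```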

Technical Details
Built with the dev build of transformers (5.5.0.dev0) for full Gemma 4 support, including multimodal apply_chat_template, variable-resolution image processing, and native thinking mode. Runs on HF ZeroGPU with @spaces.GPU — no dedicated GPU needed.
Both models support 256K context window and 140+ languages out of the box.
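
For the multimodal apply_chat_template piece, the inputs follow the message convention transformers uses for vision-capable chat models: each message carries a list of typed content parts. A minimal sketch of that structure (exact content keys can vary per model's chat template, and the image URL is a placeholder; the real processor call is shown only in a comment since it needs model weights):

```python
# Message structure for a vision-capable chat model: each "content" is a
# list of typed parts (text and/or image), not a bare string.
messages = [
    {"role": "system", "content": [{"type": "text", "text": "You are a helpful assistant."}]},
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/chart.png"},  # placeholder
            {"type": "text", "text": "Summarize this chart."},
        ],
    },
]

# With a real processor this would become model inputs, roughly:
#   inputs = processor.apply_chat_template(
#       messages, add_generation_prompt=True, tokenize=True, return_tensors="pt")
# Here we just sanity-check the structure we would hand it.
assert all(m["role"] in {"system", "user", "assistant"} for m in messages)
assert any(part["type"] == "image" for part in messages[1]["content"])
print("message structure OK")
```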

Links

- 🤗 Space: [FINAL-Bench/Gemma-4-Multi](FINAL-Bench/Gemma-4-Multi)
- 📄 Gemma 4 26B-A4B: [google/gemma-4-26B-A4B-it](google/gemma-4-26B-A4B-it)
- 📄 Gemma 4 31B: [google/gemma-4-31B-it](google/gemma-4-31B-it)
- 🔬 DeepMind Blog: [Gemma 4 Launch](https://deepmind.google/blog/gemma-4-byte-for-byte-the-most-capable-open-models/)