# Nebulos-Distill-Qwen3-0.6B

This is a lightweight reasoning model fine-tuned to perform efficient step-by-step logic. It was distilled from the Qwen 3 architecture on consumer-grade hardware, demonstrating that high-quality fine-tuning is feasible on a budget.
## Model Details

### Model Description
Nebulos-Distill is a compact 0.6B parameter model designed for high-speed local inference. It focuses on maintaining logical consistency and reasoning capabilities while requiring minimal VRAM.
- Developed by: Erik22TY
- Model type: Causal Language Model (Fine-tuned via LoRA)
- Language(s) (NLP): English
- License: Apache 2.0
- Finetuned from model: unsloth/Qwen3-0.6B-bnb-4bit
### Model Sources
- Repository: Erik22TY/Nebulos-Distill-Qwen3-0.6B
## Uses

### Direct Use
This model is intended for local deployment in reasoning-heavy tasks such as math word problems, logic puzzles, and concise text generation. It is well suited to mobile deployment and low-spec desktop environments.
### Out-of-Scope Use
At 0.6B parameters, the model should not be used for long-form creative writing or for professional legal or medical advice; its small size makes hallucination more likely.
## Training Details

### Training Data
Fine-tuned using the AM-Qwen3-Distilled dataset, a high-quality collection of reasoning-oriented instructional data.
### Training Procedure

#### Training Hyperparameters
- Training regime: fp16 (mixed precision)
- Optimizer: paged_adamw_8bit (to save VRAM)
- Gradient Accumulation Steps: 16
- Max Steps: 50
- Learning Rate: 2e-4
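The hyperparameters above can be collected into a single config; a minimal sketch (the per-device batch size is an assumption, as the card does not state it):

```python
# Hypothetical training configuration mirroring the hyperparameters above.
config = {
    "fp16": True,                       # mixed-precision training
    "optim": "paged_adamw_8bit",        # paged 8-bit AdamW to save VRAM
    "per_device_train_batch_size": 1,   # assumption: not stated in the card
    "gradient_accumulation_steps": 16,
    "max_steps": 50,
    "learning_rate": 2e-4,
}

# Gradient accumulation multiplies the per-device batch into the
# effective batch size seen by each optimizer step:
effective_batch = (
    config["per_device_train_batch_size"]
    * config["gradient_accumulation_steps"]
)
print(effective_batch)  # 16
```

On 3 GB of VRAM, a tiny per-device batch with heavy gradient accumulation is the usual trade-off: memory stays low while each optimizer step still averages over 16 examples.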
#### Speeds, Sizes, Times
- Hardware: NVIDIA GeForce GTX 1050 (3GB VRAM)
- Training Time: ~1 hour and 15 minutes
- Final Loss: 0.9315 (after 50 steps)
- Adapter Size: 4.60 MB
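The 4.60 MB adapter size is consistent with rank-8 LoRA applied to the attention projections alone. A back-of-the-envelope check (the Qwen3-0.6B dimensions below are assumptions taken from the base model's published config, and attention-only targeting is inferred, not stated in this card):

```python
# Assumed Qwen3-0.6B dimensions (from the base model's config, not this card).
hidden = 1024          # hidden size
head_dim = 128
q_heads, kv_heads = 16, 8
layers = 28
r = 8                  # LoRA rank reported in this card

q_out = q_heads * head_dim    # 2048
kv_out = kv_heads * head_dim  # 1024

# LoRA adds r * (d_in + d_out) parameters per adapted projection.
per_layer = (
    r * (hidden + q_out)      # q_proj
    + r * (hidden + kv_out)   # k_proj
    + r * (hidden + kv_out)   # v_proj
    + r * (q_out + hidden)    # o_proj
)
total_params = per_layer * layers
size_mb = total_params * 2 / 1e6  # fp16 = 2 bytes per parameter
print(round(size_mb, 2))  # 4.59
```

That lands within rounding of the reported 4.60 MB, which suggests the MLP projections were not adapted.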
## Environmental Impact
- Hardware Type: GTX 1050 Desktop
- Hours used: 1.25 hours
- Cloud Provider: N/A (Local Training on Linux Mint)
## Technical Specifications

### Compute Infrastructure

#### Hardware
- GPU: NVIDIA GTX 1050 (3.0 GB VRAM)
- OS: Linux Mint (Ubuntu-based)
#### Software
- Runtime: Ollama & PyTorch 2.5
- PEFT Library: LoRA (Rank 8)
## How to Get Started with the Model
To run this model locally with Ollama, use:
```shell
ollama run hf.co/Erik22TY/Nebulos-Distill-Qwen3-0.6B:Q4_K_M
```
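Beyond the CLI, the model can be queried programmatically through Ollama's local REST API once `ollama serve` is running. A minimal sketch using only the standard library (the endpoint and port are Ollama's defaults; the prompt is illustrative):

```python
import json
import urllib.request

def build_request(prompt: str) -> urllib.request.Request:
    """Build a POST request against Ollama's /api/generate endpoint."""
    payload = {
        "model": "hf.co/Erik22TY/Nebulos-Distill-Qwen3-0.6B:Q4_K_M",
        "prompt": prompt,
        "stream": False,  # return one complete JSON response
    }
    return urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's default local endpoint
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_request("A train leaves at 3pm at 60 km/h. How far has it gone by 5pm?")
# response = urllib.request.urlopen(req)  # requires a running `ollama serve`
# print(json.load(response)["response"])
```

With `stream` set to false the server returns a single JSON object whose `response` field holds the full completion, which is simpler to handle than the default line-by-line streaming.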