Generator3B-V0.1

This model is a fine-tuned version of HuggingFaceTB/SmolLM3-3B. It was trained with Unsloth and TRL's SFTTrainer on a specialized 'Golden Dataset' for high-quality instruction following.

Model Details

  • Developed by: GODELEV
  • Model type: Causal Language Model
  • Language(s): English
  • License: Apache 2.0
  • Fine-tuned from model: SmolLM3-3B-Instruct
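
Usage

A minimal inference sketch with the Hugging Face transformers library is shown below. It assumes the tokenizer ships a chat template inherited from the SmolLM3 base model; the prompt text and generation settings are purely illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "GODELEV/Generator3B-V0.1"

# Load tokenizer and model; BF16 matches the uploaded checkpoint weights.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Build a chat-formatted prompt (example prompt only).
messages = [{"role": "user", "content": "Summarize what LoRA fine-tuning does in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate and decode only the newly produced tokens.
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```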

Training Procedure

The model was trained with maximum-speed optimizations, including the following (a configuration sketch is given after the list):

  • Sequence Packing: Enabled
  • 4-bit Quantization: bitsandbytes
  • LoRA Rank: 64
  • Optimizer: AdamW 8-bit
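
The sketch below shows how this configuration maps onto an Unsloth + TRL SFTTrainer run. The base model, 4-bit loading, LoRA rank 64, sequence packing, and the 8-bit AdamW optimizer come from this card; the sequence length, LoRA alpha and target modules, batch size, learning rate, epoch count, and the dataset path are illustrative assumptions, and exact argument names vary slightly across trl/Unsloth versions.

```python
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer, SFTConfig

max_seq_length = 2048  # assumed context length for this sketch

# Load the base model with Unsloth's 4-bit (bitsandbytes) quantization.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="HuggingFaceTB/SmolLM3-3B",
    max_seq_length=max_seq_length,
    load_in_4bit=True,
)

# Attach LoRA adapters at rank 64, as listed above.
model = FastLanguageModel.get_peft_model(
    model,
    r=64,
    lora_alpha=64,  # alpha is not stated in the card; 1:1 with the rank is assumed
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_dropout=0,
    bias="none",
    use_gradient_checkpointing="unsloth",
)

# Placeholder for the 'Golden Dataset'; the path and format are hypothetical.
dataset = load_dataset("json", data_files="golden_dataset.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="outputs",
        dataset_text_field="text",  # assumes a plain-text field in the dataset
        packing=True,               # sequence packing, as enabled for this run
        optim="adamw_8bit",         # 8-bit AdamW via bitsandbytes
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        num_train_epochs=1,
        logging_steps=10,
    ),
)
trainer.train()
```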