qwen35_caption_galore

This model is a fine-tuned version of /workspace/models/Qwen3.5-9B on the my_caption dataset.
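
The sections below are still unfilled; as a starting point, here is a minimal, hedged captioning sketch. It assumes the fine-tuned checkpoint keeps the base model's vision-language processor and chat template and loads through transformers' image-text-to-text auto classes; the output path and image file are placeholders, not documented values.

    # Hedged usage sketch: the model path, image file, and auto classes used
    # here are assumptions; the card does not yet document inference setup.
    from PIL import Image
    from transformers import AutoModelForImageTextToText, AutoProcessor

    model_path = "/workspace/models/qwen35_caption_galore"  # hypothetical output dir
    processor = AutoProcessor.from_pretrained(model_path)
    model = AutoModelForImageTextToText.from_pretrained(
        model_path, torch_dtype="bfloat16", device_map="auto"
    )

    image = Image.open("example.jpg")  # placeholder image
    messages = [{
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Describe this image."},
        ],
    }]
    prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
    inputs = processor(text=prompt, images=[image], return_tensors="pt").to(model.device)

    output_ids = model.generate(**inputs, max_new_tokens=128)
    caption = processor.batch_decode(
        output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )[0]
    print(caption)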

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • per-family learning rates (set in the training script; a hedged reconstruction of how these feed the optimizers appears after this list):

        family_to_muon_lr = {
            "language": _fallback(getattr(training_args, "language_muon_lr", 2e-5), language_lr),
            "vision": _fallback(getattr(training_args, "vision_muon_lr", 3e-5), vision_lr),
            "merger": _fallback(getattr(training_args, "merger_muon_lr", 6e-5), merger_lr),
        }

        family_to_adamw_lr = {
            "language": _fallback(getattr(training_args, "language_adamw_lr", 1e-5), language_lr),
            "vision": _fallback(getattr(training_args, "vision_adamw_lr", 1e-6), vision_lr),
            "merger": _fallback(getattr(training_args, "merger_adamw_lr", 1e-5), merger_lr),
        }

  • train_batch_size: 2

  • eval_batch_size: 8

  • seed: 42

  • distributed_type: multi-GPU

  • gradient_accumulation_steps: 16

  • total_train_batch_size: 32

  • optimizer: AdamW (adamw_torch_fused) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments

  • lr_scheduler_type: cosine_with_min_lr

  • lr_scheduler_warmup_ratio: 0.05 (see the scheduler sketch after this list)

  • num_epochs: 3
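
The two learning-rate dictionaries above come from a custom training script that is not included in this card. As a hedged reconstruction of the idea they imply (per-family learning rates, with matrix-shaped weights routed to a Muon-style optimizer and everything else to AdamW), the sketch below builds the two sets of parameter groups. The name matching in family_of and the shape-based routing rule are assumptions, not the author's confirmed method, and Muon itself is left as a stand-in.

    import torch

    # Hedged reconstruction of the optimizer split implied by the dictionaries
    # above; parameter-name matching and the ndim-based routing are assumptions.
    family_to_muon_lr = {"language": 2e-5, "vision": 3e-5, "merger": 6e-5}
    family_to_adamw_lr = {"language": 1e-5, "vision": 1e-6, "merger": 1e-5}

    def family_of(name: str) -> str:
        # Assumed naming convention: projector params mention "merger",
        # vision-tower params mention "visual"/"vision", the rest is language.
        if "merger" in name:
            return "merger"
        if "visual" in name or "vision" in name:
            return "vision"
        return "language"

    def build_param_groups(model: torch.nn.Module):
        buckets: dict[tuple[str, bool], list[torch.nn.Parameter]] = {}
        for name, param in model.named_parameters():
            if not param.requires_grad:
                continue
            # Muon is defined for matrix-shaped weights; embeddings, heads,
            # norms, and biases go to AdamW instead.
            use_muon = (param.ndim == 2
                        and "embed" not in name and "lm_head" not in name)
            buckets.setdefault((family_of(name), use_muon), []).append(param)
        muon_groups = [{"params": ps, "lr": family_to_muon_lr[fam]}
                       for (fam, is_muon), ps in buckets.items() if is_muon]
        adamw_groups = [{"params": ps, "lr": family_to_adamw_lr[fam]}
                        for (fam, is_muon), ps in buckets.items() if not is_muon]
        return muon_groups, adamw_groups

    # muon_groups would feed whatever Muon implementation the training code
    # uses; the rest matches the fused AdamW reported above:
    #   torch.optim.AdamW(adamw_groups, betas=(0.9, 0.999), eps=1e-8, fused=True)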
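
For reference, cosine_with_min_lr is one of transformers' built-in schedules, and the warmup value above reads as a ratio of total steps. A minimal sketch of wiring it up through get_scheduler follows; the model, step count, and min_lr_rate value are placeholders, since the card does not report them. (The reported total_train_batch_size of 32 is consistent with the usual per-device batch × gradient-accumulation formula, 2 × 16.)

    import torch
    from transformers import get_scheduler

    # Minimal scheduler sketch; the model, step count, and min_lr_rate are
    # placeholders, not values taken from the card.
    model = torch.nn.Linear(8, 8)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5,
                                  betas=(0.9, 0.999), eps=1e-8)

    num_training_steps = 1000  # placeholder: optimizer steps over 3 epochs
    scheduler = get_scheduler(
        "cosine_with_min_lr",
        optimizer=optimizer,
        num_warmup_steps=int(0.05 * num_training_steps),  # 0.05 read as a ratio
        num_training_steps=num_training_steps,
        scheduler_specific_kwargs={"min_lr_rate": 0.1},   # assumed; not in the card
    )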

Training results

Framework versions

  • Transformers 5.5.0
  • Pytorch 2.11.0+cu130
  • Datasets 4.0.0
  • Tokenizers 0.22.2