Qwen-Image-2512-8bit-MLX

An MLX-optimized, 8-bit quantized version of Qwen-Image-2512 for Apple Silicon, and the first MLX port of the model.

Quick Start

pip install mflux

mflux-generate-qwen \
  --model machiabeli/Qwen-Image-2512-8bit-MLX \
  --prompt "A photorealistic cat wearing a tiny top hat" \
  --steps 20

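For scripted or repeatable runs it helps to pin the seed, resolution, and output path. The sketch below assumes mflux-generate-qwen accepts the same common options as mflux's other generate commands; --seed, --width, --height, and --output are assumptions here, so check mflux-generate-qwen --help before relying on them.

# Assumed flags: --seed, --width, --height, --output (mirroring mflux-generate)
mflux-generate-qwen \
  --model machiabeli/Qwen-Image-2512-8bit-MLX \
  --prompt "A photorealistic cat wearing a tiny top hat" \
  --steps 20 \
  --seed 42 \
  --width 1024 \
  --height 1024 \
  --output cat.png
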
Performance

Metric     | Value
-----------|-------------------------------
Size       | 34 GB (8-bit quantized)
Speed      | ~8.5 s/step on an M-series Mac
20 steps   | ~2:50 total

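The total is consistent with the per-step figure: 20 steps × ~8.5 s ≈ 170 s, i.e. about 2:50. To measure throughput on your own machine, wrap the Quick Start command in the shell's time builtin:

# Reports wall-clock time for a full 20-step generation
time mflux-generate-qwen \
  --model machiabeli/Qwen-Image-2512-8bit-MLX \
  --prompt "A photorealistic cat wearing a tiny top hat" \
  --steps 20
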
Model Details

  • Base Model: Qwen/Qwen-Image-2512 (Dec 31, 2025)
  • Quantization: 8-bit
  • Framework: MLX (Apple Silicon optimized)
  • Converted with: mflux 0.14.0

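Because the quantized weights are roughly 34 GB, it can be worth pre-fetching them with huggingface-cli (part of the huggingface_hub package) instead of letting the first generation trigger the download. Assuming mflux resolves models from the standard Hugging Face cache, the Quick Start command should then run without re-downloading.

# Pre-download the repository into the local Hugging Face cache
huggingface-cli download machiabeli/Qwen-Image-2512-8bit-MLX
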
Hardware Requirements

  • Apple Silicon Mac (M1/M2/M3/M4/M5)
  • ~40 GB unified memory recommended (see the memory check below)

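A quick way to confirm a machine meets the ~40 GB recommendation is to read the total unified memory that macOS reports via sysctl:

# Print total unified memory in GB (hw.memsize is reported in bytes)
echo "$(($(sysctl -n hw.memsize) / 1024 / 1024 / 1024)) GB"
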
License

Apache 2.0 (same as base model)

Credits

Base model: Qwen/Qwen-Image-2512 by the Qwen team. MLX conversion and 8-bit quantization done with mflux.