Tags: Image-Text-to-Text, MLX, Safetensors, gemma4, apple-silicon, mixture-of-experts, on-device, conversational, 4-bit precision
Instructions for using LetheanNetwork/lemmy-mlx with libraries, notebooks, and local apps.
- Libraries
- MLX
How to use LetheanNetwork/lemmy-mlx with MLX:
```python
# Make sure mlx-vlm is installed:
# pip install --upgrade mlx-vlm
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

# Load the model
model, processor = load("LetheanNetwork/lemmy-mlx")
config = load_config("LetheanNetwork/lemmy-mlx")

# Prepare input
image = ["http://images.cocodataset.org/val2017/000000039769.jpg"]
prompt = "Describe this image."

# Apply chat template
formatted_prompt = apply_chat_template(
    processor, config, prompt, num_images=1
)

# Generate output
output = generate(model, processor, formatted_prompt, image)
print(output)
```
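The local-app sections below serve this repo as a plain language model through mlx-lm, so text-only chat should work without mlx-vlm as well. A minimal sketch, assuming the repo loads with mlx_lm.load; the message and max_tokens value are illustrative:

```python
# Text-only sketch using mlx-lm (pip install mlx-lm); assumes the repo
# loads as a plain language model, as the server sections below do.
from mlx_lm import load, generate

model, tokenizer = load("LetheanNetwork/lemmy-mlx")

# Build a chat-formatted prompt from the model's own chat template
messages = [{"role": "user", "content": "Give a one-line summary of MoE models."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

print(generate(model, tokenizer, prompt=prompt, max_tokens=128))
```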
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- LM Studio
- Pi
How to use LetheanNetwork/lemmy-mlx with Pi:
Start the MLX server
```shell
# Install MLX LM:
uv tool install mlx-lm

# Start a local OpenAI-compatible server:
mlx_lm.server --model "LetheanNetwork/lemmy-mlx"
```
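Optionally, verify the server responds before configuring Pi. A minimal sketch, assuming mlx_lm.server's default address (localhost:8080, matching the config below) and the openai Python package; the prompt is illustrative:

```python
# Smoke test against the local OpenAI-compatible server started above;
# assumes the default port 8080 and the openai package (pip install openai).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

resp = client.chat.completions.create(
    model="LetheanNetwork/lemmy-mlx",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(resp.choices[0].message.content)
```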
Configure the model in Pi
```shell
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```

Add to ~/.pi/agent/models.json:

```json
{
  "providers": {
    "mlx-lm": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "LetheanNetwork/lemmy-mlx" }
      ]
    }
  }
}
```
Run Pi

```shell
# Start Pi in your project directory:
pi
```
- Hermes Agent
How to use LetheanNetwork/lemmy-mlx with Hermes Agent:
Start the MLX server
```shell
# Install MLX LM:
uv tool install mlx-lm

# Start a local OpenAI-compatible server:
mlx_lm.server --model "LetheanNetwork/lemmy-mlx"
```
Configure Hermes
```shell
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default LetheanNetwork/lemmy-mlx
```
Run Hermes
```shell
hermes
```
LetheanNetwork/lemmy-mlx
Gemma 4 26B A4B MoE in MLX format, 4-bit quantized, converted from LetheanNetwork/lemmy's bf16 safetensors via mlx_lm.convert. These are unmodified Google weights, hosted in the Lethean namespace so downstream tools don't have to depend on external mlx-community mirrors.
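For reference, a conversion like the one described above can be run from mlx-lm's Python API. A minimal sketch, assuming mlx-lm's default 4-bit quantization settings; the output path is illustrative, and the exact flags used for this repo are not recorded here:

```python
# Sketch of the mlx_lm.convert step described above; the quantization
# parameters shown are mlx-lm defaults, not confirmed repo settings.
from mlx_lm import convert

convert(
    "LetheanNetwork/lemmy",   # bf16 safetensors source repo
    mlx_path="lemmy-mlx",     # local output directory (illustrative)
    quantize=True,            # enable affine quantization
    q_bits=4,                 # 4-bit weights
    q_group_size=64,          # default group size
)
```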
For the LEK-merged sibling see lthn/lemmy.
License
Apache 2.0, subject to the Gemma Terms of Use.
Model tree for LetheanNetwork/lemmy-mlx
- Base model: google/gemma-4-26B-A4B
- Finetuned: google/gemma-4-26B-A4B-it
- Quantized: LetheanNetwork/lemmy