RedHatAI/Trinity-Large-Thinking-NVFP4

This is a preliminary (and subject to change) NVFP4-quantized version of the arcee-ai/Trinity-Large-Thinking model. Both weights and activations are quantized to the NVFP4 format with vllm-project/llm-compressor.
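NVFP4 represents values in 4-bit FP4 (E2M1) with a scale factor shared per small block of elements. As a rough illustration of the idea (this is a simplified sketch in plain Python, not llm-compressor's actual implementation, and block size and scale encoding are assumptions here):

```python
# Simplified sketch of block-wise FP4 (E2M1) quantization, the core idea
# behind NVFP4. Real NVFP4 additionally encodes block scales in FP8 (E4M3);
# this toy version keeps scales in full precision for clarity.

# The 8 non-negative magnitudes representable in E2M1 (2 exponent bits, 1 mantissa bit).
E2M1_GRID = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantize_block(block, grid=E2M1_GRID):
    """Quantize one block of floats: pick a scale so the block's absolute
    maximum maps to 6.0 (the largest E2M1 magnitude), then round each
    value to the nearest representable magnitude. Returns the dequantized
    values and the block scale."""
    amax = max(abs(x) for x in block)
    scale = amax / 6.0 if amax else 1.0
    dequant = []
    for x in block:
        mag = min(grid, key=lambda g: abs(abs(x) / scale - g))
        dequant.append(mag * scale * (1.0 if x >= 0 else -1.0))
    return dequant, scale

deq, s = quantize_block([1.0, -6.0, 3.1, 0.0])
```

Because each block carries its own scale, the 16 representable FP4 values are re-centred on every block's local dynamic range, which is what keeps accuracy loss small at 4 bits.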

It is compatible with and tested against vLLM main. Run it with:

vllm serve RedHatAI/Trinity-Large-Thinking-NVFP4 --trust-remote-code
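Once the server is up, it exposes vLLM's OpenAI-compatible API. A minimal sketch of a chat request, assuming the default local endpoint (http://localhost:8000) and the serve command above:

```python
# Sketch of querying the served model via vLLM's OpenAI-compatible endpoint.
# The host/port assume vllm serve's defaults; adjust if you changed them.
import json
import urllib.request

payload = {
    "model": "RedHatAI/Trinity-Large-Thinking-NVFP4",
    "messages": [{"role": "user", "content": "Summarize NVFP4 in one sentence."}],
    "max_tokens": 128,
}
req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# With the server running, send the request and read the reply:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Any OpenAI-compatible client (such as the openai Python package pointed at the same base URL) works equally well.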

Model size: 226B params (Safetensors)
Tensor types: F32, BF16, F8_E4M3, U8
