NVFP4 Quantized RedHatAI/Trinity-Large-Thinking-NVFP4
This is a preliminary version (subject to change) of the arcee-ai/Trinity-Large-Thinking model quantized to NVFP4. Both weights and activations are quantized to the NVFP4 format with vllm-project/llm-compressor.

The model is compatible with and tested against vLLM main. Serve it with: vllm serve RedHatAI/Trinity-Large-Thinking-NVFP4 --trust-remote-code
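As a rough illustration of how such a checkpoint is produced, a recipe along these lines could be passed to llm-compressor's oneshot entry point. This is a sketch based on llm-compressor's general recipe conventions; the stage name, modifier fields, and ignore list are assumptions, not this model's actual recipe:

```yaml
quant_stage:
  quant_modifiers:
    QuantizationModifier:
      targets: ["Linear"]   # quantize linear layers (weights and activations)
      scheme: "NVFP4"       # NVFP4 weight + activation quantization scheme
      ignore: ["lm_head"]   # output head is commonly kept in higher precision
```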
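Once served, the model is reachable through vLLM's OpenAI-compatible HTTP API. A minimal sketch of building such a request, assuming the default base URL http://localhost:8000/v1 (adjust if you pass a different --port); the "model" field must match the served repo id:

```python
import json
import urllib.request

# Chat-completion request body for the OpenAI-compatible endpoint that
# `vllm serve` exposes. The prompt text here is only a placeholder.
payload = {
    "model": "RedHatAI/Trinity-Large-Thinking-NVFP4",
    "messages": [
        {"role": "user", "content": "Summarize NVFP4 in one sentence."}
    ],
    "max_tokens": 128,
}

req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# With the server running, urllib.request.urlopen(req) returns the completion;
# the request is not sent here so the sketch stays self-contained.
```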
Model tree for RedHatAI/Trinity-Large-Thinking-NVFP4
- Base model: arcee-ai/Trinity-Large-TrueBase
- Finetuned from it: arcee-ai/Trinity-Large-Base
- Finetuned from it: arcee-ai/Trinity-Large-Thinking