| Model | Task | Size | Downloads | Likes |
|---|---|---|---|---|
| RedHatAI/SmolLM-135M-Instruct-quantized.w8a16 | Text Generation | 83.4M | 20.7k | — |
| RedHatAI/gemma-2-27b-it-quantized.w8a16 | Text Generation | 9B | 8 | — |
| RedHatAI/Meta-Llama-3.1-405B-Instruct-quantized.w8a16 | Text Generation | 105B | 2 | 1 |
| RedHatAI/gemma-2-2b-it-quantized.w8a8 | Text Generation | 3B | 8 | — |
| RedHatAI/Mistral-Nemo-Instruct-2407-quantized.w4a16 | Text Generation | 3B | 490 | 4 |
| RedHatAI/SmolLM-1.7B-Instruct-quantized.w8a16 | Text Generation | 0.6B | — | — |
| RedHatAI/gemma-2-2b-it-quantized.w4a16 | Text Generation | 1B | 19 | 1 |
| RedHatAI/gemma-2-9b-it-quantized.w4a16 | Text Generation | 3B | 36 | 2 |
| RedHatAI/Phi-3-small-128k-instruct-quantized.w8a16 | Text Generation | 3B | 1 | — |
| RedHatAI/gemma-2-2b-quantized.w8a16 | Text Generation | 2B | 8 | — |
| RedHatAI/gemma-2-2b-it-quantized.w8a16 | Text Generation | 2B | 1 | — |
| RedHatAI/gemma-2-9b-it-quantized.w8a16 | Text Generation | 4B | 50 | 1 |
| RedHatAI/gemma-2-2b-it-FP8 | Text Generation | 3B | 172 | 1 |
| RedHatAI/starcoder2-15b-quantized.w8a8 | Text Generation | 16B | 1 | — |
| RedHatAI/starcoder2-7b-quantized.w8a8 | Text Generation | 7B | 33 | — |
| RedHatAI/starcoder2-3b-quantized.w8a8 | Text Generation | 3B | 2 | — |
| RedHatAI/starcoder2-7b-quantized.w8a16 | Text Generation | 2B | 4 | — |
| RedHatAI/starcoder2-3b-quantized.w8a16 | Text Generation | 1B | 6 | — |
| RedHatAI/Meta-Llama-3.1-70B-quantized.w8a8 | Text Generation | 71B | 3 | — |
| RedHatAI/Meta-Llama-3.1-405B-FP8 | Text Generation | 410B | 4 | — |
| RedHatAI/Meta-Llama-3.1-70B-quantized.w8a16 | Text Generation | 19B | — | — |
| RedHatAI/starcoder2-3b-FP8 | Text Generation | 3B | 1 | — |
| RedHatAI/starcoder2-7b-FP8 | Text Generation | 7B | 3 | — |
| RedHatAI/starcoder2-15b-FP8 | Text Generation | 16B | 6.24k | — |
| RedHatAI/Mistral-Nemo-Instruct-2407-quantized.w8a16 | Text Generation | 4B | 384 | — |
| RedHatAI/Meta-Llama-3.1-8B-quantized.w8a16 | Text Generation | 3B | 1 | 1 |
| RedHatAI/Meta-Llama-3.1-70B-FP8 | Text Generation | 71B | 471 | 2 |
| RedHatAI/Mistral-Large-Instruct-2407-FP8 | Text Generation | 123B | 4.5k | — |
| RedHatAI/Meta-Llama-3.1-70B-Instruct-quantized.w8a16 | Text Generation | 19B | 4.13k | 5 |
| RedHatAI/Meta-Llama-3.1-8B-Instruct-FP8 | Text Generation | 8B | 608k | 44 |