Active filters: vllm
nm-testing/Meta-Llama-3-8B-Instruct-FP8-K-V • Text Generation • 8B • Updated • 17
RedHatAI/DeepSeek-Coder-V2-Lite-Instruct-FP8 • Text Generation • 16B • Updated • 22.3k • 9
RedHatAI/DeepSeek-Coder-V2-Lite-Base-FP8 • Text Generation • 16B • Updated • 13
mistralai/Mistral-Nemo-Base-2407 • 12B • Updated • 10.1k • 335
mgoin/Mistral-Nemo-Instruct-2407-FP8-Dynamic • Text Generation • 12B • Updated • 67
mgoin/Mistral-Nemo-Instruct-2407-FP8-KV • Text Generation • 12B • Updated • 6
RedHatAI/Mistral-Nemo-Instruct-2407-FP8 • Text Generation • 12B • Updated • 696 • 18
FlorianJc/Mistral-Nemo-Instruct-2407-vllm-fp8 • Text Generation • 12B • Updated • 19 • 8
RedHatAI/DeepSeek-Coder-V2-Base-FP8 • Text Generation • 236B • Updated • 15
RedHatAI/DeepSeek-Coder-V2-Instruct-FP8 • Text Generation • 236B • Updated • 189 • 7
mgoin/Minitron-4B-Base-FP8 • Text Generation • 4B • Updated • 8 • 3
mgoin/Minitron-8B-Base-FP8 • Text Generation • 8B • Updated • 5 • 3
mgoin/nemotron-3-8b-chat-4k-sft-hf • Text Generation • 9B • Updated • 7
RedHatAI/Meta-Llama-3.1-8B-Instruct-FP8 • Text Generation • 8B • Updated • 322k • 43
RedHatAI/Meta-Llama-3.1-8B-Instruct-FP8-dynamic • Text Generation • 8B • Updated • 38.2k • 9
RedHatAI/Meta-Llama-3.1-70B-Instruct-FP8-dynamic • Text Generation • 71B • Updated • 2.29k • 7
RedHatAI/Meta-Llama-3.1-70B-Instruct-FP8 • Text Generation • 71B • Updated • 7.55k • 50
RedHatAI/Meta-Llama-3.1-405B-Instruct-FP8 • Text Generation • 406B • Updated • 325 • 31
RedHatAI/Meta-Llama-3.1-405B-Instruct-FP8-dynamic • Text Generation • 406B • Updated • 3.51k • 15
RedHatAI/Meta-Llama-3.1-8B-Instruct-quantized.w8a16 • Text Generation • 3B • Updated • 3k • 12
RedHatAI/Meta-Llama-3.1-8B-Instruct-quantized.w8a8 • Text Generation • 8B • Updated • 5.72k • 19
mistralai/Mistral-Large-Instruct-2407 • 123B • Updated • 5.24k • 853
mgoin/Nemotron-4-340B-Base-hf • Text Generation • 341B • Updated • 9 • 1
mgoin/Nemotron-4-340B-Base-hf-FP8 • Text Generation • 341B • Updated • 58 • 2
RedHatAI/Meta-Llama-3.1-70B-Instruct-quantized.w8a16 • Text Generation • 19B • Updated • 220 • 5
mgoin/Nemotron-4-340B-Instruct-hf • Text Generation • 341B • Updated • 24 • 4
mgoin/Nemotron-4-340B-Instruct-hf-FP8 • Text Generation • 341B • Updated • 8 • 3
FlorianJc/ghost-8b-beta-vllm-fp8 • Text Generation • 8B • Updated • 8
FlorianJc/Meta-Llama-3.1-8B-Instruct-vllm-fp8 • Text Generation • 8B • Updated • 6
RedHatAI/Meta-Llama-3.1-8B-Instruct-quantized.w4a16 • Text Generation • 8B • Updated • 21.6k • 30
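The checkpoints listed above are pre-quantized (FP8, w8a8, w8a16, w4a16) and tagged for use with vLLM. As a minimal usage sketch, assuming vLLM is installed and the GPU supports the checkpoint's quantization scheme (the specific model ID below is just one example taken from the listing), loading and generating from one of them looks like this:

```python
# Minimal sketch: running one of the pre-quantized checkpoints from the
# listing with vLLM. Hardware requirements (e.g. an FP8-capable GPU) and
# the choice of model ID are assumptions, not prescribed by the listing.
from vllm import LLM, SamplingParams

# vLLM reads the quantization config from the checkpoint, so no extra
# quantization flags are needed for these pre-quantized models.
llm = LLM(model="RedHatAI/Meta-Llama-3.1-8B-Instruct-FP8")

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Explain FP8 quantization in one sentence."], params)
for out in outputs:
    print(out.outputs[0].text)
```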