Text Generation • 0.3B • Updated • 139k downloads • 514 likes
Text Generation • 0.3B • Updated • 47.5k downloads • 946 likes
Image-Text-to-Text • 4B • Updated • 768k downloads • 1.09k likes
Image-Text-to-Text • 4B • Updated • 269k downloads • 135 likes
Text Generation • 1.0B • Updated • 39.6k downloads • 174 likes
Text Generation • 1.0B • Updated • 2.34M downloads • 789 likes
Image-Text-to-Text • 12B • Updated • 21k downloads • 82 likes
Image-Text-to-Text • 12B • Updated • 1.2M downloads • 607 likes
Image-Text-to-Text • 27B • Updated • 14.1k downloads • 115 likes
Image-Text-to-Text • 27B • Updated • 1.49M downloads • 1.79k likes
Note: ^ transformers-based pre-trained and instruct models
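For reference, a minimal sketch of loading one of these checkpoints with the transformers text-generation pipeline. The repo id used here (google/gemma-3-1b-it) is an assumption, since the listing above does not name the individual checkpoints; substitute whichever pre-trained (pt) or instruct (it) variant you need.

```python
# Minimal sketch: run one of the instruct checkpoints via the transformers pipeline.
# "google/gemma-3-1b-it" is an assumed repo id, not taken from the listing above.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="google/gemma-3-1b-it",  # assumed checkpoint id
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain quantization-aware training in one sentence."}]
out = pipe(messages, max_new_tokens=64)
# The pipeline returns the full chat history; the last turn is the model's reply.
print(out[0]["generated_text"][-1]["content"])
```

The larger 4B/12B/27B variants are multimodal (Image-Text-to-Text) and can be used the same way through the corresponding image-text pipelines or model classes.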
google/shieldgemma-2-4b-it • Image-Text-to-Text • 4B • Updated • 2.2k downloads • 139 likes
Note: ^ ShieldGemma 2
google/gemma-3-4b-it-qat-q4_0-gguf • Image-Text-to-Text • 4B • Updated • 11k downloads • 228 likes
google/gemma-3-4b-pt-qat-q4_0-gguf • Image-Text-to-Text • 4B • Updated • 126 downloads • 23 likes
google/gemma-3-1b-it-qat-q4_0-gguf • Text Generation • 1.0B • Updated • 1.27k downloads • 114 likes
google/gemma-3-1b-pt-qat-q4_0-gguf • Text Generation • 1.0B • Updated • 81 downloads • 12 likes
google/gemma-3-12b-it-qat-q4_0-gguf • Image-Text-to-Text • 12B • Updated • 66.5k downloads • 227 likes
google/gemma-3-12b-pt-qat-q4_0-gguf • Image-Text-to-Text • 12B • Updated • 101 downloads • 17 likes
google/gemma-3-27b-it-qat-q4_0-gguf • Image-Text-to-Text • 27B • Updated • 3.3k downloads • 372 likes
google/gemma-3-27b-pt-qat-q4_0-gguf • Image-Text-to-Text • 27B • Updated • 44 downloads • 28 likes
Note: ^ GGUFs to be used in llama.cpp and Ollama. We strongly recommend using the IT models.
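As a rough sketch, these GGUF repos can also be pulled straight from the Hub with the llama-cpp-python bindings (which wrap llama.cpp). The repo id matches the listing above, but the glob filename and context size are assumptions; check the repo's file list for the concrete .gguf name.

```python
# Sketch: download and run one of the Q4_0 GGUF checkpoints via llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="google/gemma-3-1b-it-qat-q4_0-gguf",  # IT variant, as recommended above
    filename="*.gguf",   # assumed glob; pick the concrete file if the repo holds several
    n_ctx=4096,          # assumed context window for this example
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me one fun fact about hummingbirds."}]
)
print(out["choices"][0]["message"]["content"])
```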
google/gemma-3-270m-qat-q4_0-unquantized • Text Generation • 0.3B • Updated • 287 downloads • 8 likes
google/gemma-3-270m-it-qat-q4_0-unquantized • Text Generation • 0.3B • Updated • 511 downloads • 12 likes
google/gemma-3-4b-it-qat-q4_0-unquantized • Image-Text-to-Text • 4B • Updated • 594 downloads • 10 likes
google/gemma-3-27b-it-qat-q4_0-unquantized • Image-Text-to-Text • 27B • Updated • 461 downloads • 38 likes
google/gemma-3-12b-it-qat-q4_0-unquantized • Image-Text-to-Text • 12B • Updated • 2.7k downloads • 26 likes
google/gemma-3-1b-it-qat-q4_0-unquantized • Text Generation • 1.0B • Updated • 325 downloads • 9 likes
google/gemma-3-4b-it-qat-int4-unquantized • Image-Text-to-Text • 4B • Updated • 412 downloads • 9 likes
google/gemma-3-12b-it-qat-int4-unquantized • Image-Text-to-Text • 12B • Updated • 327 downloads • 11 likes
google/gemma-3-1b-it-qat-int4-unquantized • Text Generation • 1.0B • Updated • 1.23k downloads • 12 likes
Note: ^ unquantized QAT-based checkpoints that can be quantized while retaining quality close to half precision
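As an illustration, a hedged sketch of loading one of these unquantized QAT checkpoints and quantizing it on the fly with bitsandbytes in transformers. The 4-bit settings shown (nf4, bfloat16 compute) are assumptions for the example and may differ from the Q4_0/int4 recipe these checkpoints were trained against.

```python
# Sketch: quantize an unquantized QAT checkpoint at load time with bitsandbytes.
# The quantization config below is illustrative, not the canonical QAT recipe.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "google/gemma-3-1b-it-qat-q4_0-unquantized"

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # assumed choice for this sketch
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumed compute dtype
)

tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb, device_map="auto"
)

inputs = tok.apply_chat_template(
    [{"role": "user", "content": "Summarize quantization-aware training in one sentence."}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
print(tok.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```

Because the weights were trained with quantization-aware training, quantizing them to 4-bit at load time is expected to lose less quality than quantizing an ordinary half-precision checkpoint.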