ParScale
community

AI & ML interests: None defined yet.

Base models trained on 1T high-quality tokens, demonstrating strong competitiveness among existing SOTA small models (<2B).

Models (67 total; 10 listed below)
Model                                 Task             Size  Downloads  Likes
ParScale/ParScale-1.8B-P1-Inst        Text Generation  2B    82         1
ParScale/ParScale-1.8B-P2-Inst        Text Generation  2B    32         -
ParScale/ParScale-1.8B-P4-Inst        Text Generation  2B    26         1
ParScale/ParScale-1.8B-P8-Inst        Text Generation  2B    30         2
ParScale/ParScale-1.8B-P1             Text Generation  2B    16         1
ParScale/ParScale-1.8B-P2             Text Generation  2B    16         -
ParScale/ParScale-1.8B-P4             Text Generation  2B    10         1
ParScale/ParScale-Qwen-3B-P2-Python   Text Generation  3B    71         -
ParScale/ParScale-Qwen-3B-P4-Python   Text Generation  3B    29         -
ParScale/ParScale-Qwen-3B-P8-Python   Text Generation  3B    29         -
Datasets (0)
None public yet.