Coding Models
- Qwen/Qwen3-Coder-480B-A35B-Instruct • Text Generation • 480B • Updated Aug 21, 2025 • 75.2k • 1.32k
Models
- OpenGVLab/InternVL3-78B-AWQ • Image-Text-to-Text • Updated Sep 11, 2025 • 913 • 10
- OpenGVLab/InternVL3-78B • Image-Text-to-Text • Updated Sep 11, 2025 • 39.9k • 234
- google-t5/t5-base • Translation • Updated Feb 14, 2024 • 1.42M • 773
- HuggingFaceH4/zephyr-7b-alpha • Text Generation • 7B • Updated Oct 16, 2024 • 5.43k • 1.12k
Vision Models
- genmo/mochi-1-preview • Text-to-Video • Updated Sep 4, 2025 • 9.67k • 1.32k
- stabilityai/stable-diffusion-3.5-large • Text-to-Image • Updated Oct 22, 2024 • 74.8k • 3.41k
- stabilityai/stable-diffusion-3.5-medium • Text-to-Image • Updated Oct 31, 2024 • 106k • 921
Must Reads
- ZeroSearch: Incentivize the Search Capability of LLMs without Searching • Paper • 2505.04588 • Published May 7, 2025 • 65