We collaborated with Hugging Face to enable you to train MoE models 12× faster with 35% less VRAM via our new Triton kernels (no accuracy loss). 🤗

Train gpt-oss locally on 12.8GB VRAM with our free notebooks: https://unsloth.ai/docs/new/faster-moe
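As a rough idea of what a low-VRAM local fine-tune looks like, here is a minimal sketch using Unsloth's standard `FastLanguageModel` API with 4-bit loading and LoRA adapters. The model identifier, dataset, and hyperparameters below are illustrative assumptions, not taken from this post; see the linked notebooks for the exact, tested setup.

```python
# Minimal sketch of a local fine-tune with Unsloth (assumptions noted below).
from unsloth import FastLanguageModel
from trl import SFTConfig, SFTTrainer
from datasets import Dataset

# Load the model in 4-bit so it fits in a small VRAM budget.
# NOTE: the model name is a hypothetical identifier used for illustration only.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gpt-oss-20b",
    max_seq_length=1024,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Tiny toy dataset with a "text" column; replace with your own data.
dataset = Dataset.from_dict(
    {"text": ["### Instruction:\nSay hi.\n\n### Response:\nHi!"]}
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=4,
        max_steps=60,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```

The free notebooks linked above wrap this same flow with the memory-efficient MoE kernels already configured, so running them as-is is the simplest path to the reported 12.8GB VRAM footprint.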