arxiv:2602.06079

Canzona: A Unified, Asynchronous, and Load-Balanced Framework for Distributed Matrix-based Optimizers

Published on Feb 4 · Submitted by Liangyu Wang on Feb 9

Abstract

Canzona is a unified, asynchronous framework that resolves the conflict between matrix-based optimizers and distributed tensor fragmentation in LLM training, improving efficiency and reducing latency.

AI-generated summary

The scaling of Large Language Models (LLMs) drives interest in matrix-based optimizers (e.g., Shampoo, Muon, SOAP) for their convergence efficiency, yet their requirement for holistic updates conflicts with the tensor fragmentation in distributed frameworks like Megatron. Existing solutions are suboptimal: synchronous approaches suffer from computational redundancy, while layer-wise partitioning fails to reconcile this conflict without violating the geometric constraints of efficient communication primitives. To bridge this gap, we propose Canzona, a Unified, Asynchronous, and Load-Balanced framework that decouples logical optimizer assignment from physical parameter distribution. For Data Parallelism, we introduce an alpha-Balanced Static Partitioning strategy that respects atomicity while neutralizing load imbalance. For Tensor Parallelism, we design an Asynchronous Compute pipeline that uses Micro-Group Scheduling to batch fragmented updates and hide reconstruction overhead. Extensive evaluations on the Qwen3 model family (up to 32B parameters) on 256 GPUs demonstrate that our approach preserves the efficiency of established parallel architectures, achieving a 1.57x speedup in end-to-end iteration time and reducing optimizer-step latency by 5.8x compared to the baseline.
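
The abstract only names alpha-Balanced Static Partitioning at a high level. As a rough illustration of the general idea of assigning whole (atomic) parameter matrices to data-parallel ranks under a cost model, here is a minimal sketch; the `size ** alpha` cost estimate and the greedy largest-first assignment are illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical sketch: assign whole parameter matrices (atomic units) to
# data-parallel ranks so that estimated optimizer cost is balanced.
# The cost model (size ** alpha) and greedy largest-first assignment are
# assumptions for illustration, not Canzona's exact partitioning scheme.
import heapq

def balanced_static_partition(param_shapes, num_ranks, alpha=1.0):
    """Assign each parameter (kept whole) to the currently least-loaded rank."""
    # Estimated optimizer cost per parameter; alpha weights larger matrices
    # more heavily to reflect super-linear matrix-operation cost.
    costs = {name: (rows * cols) ** alpha
             for name, (rows, cols) in param_shapes.items()}

    # Min-heap of (accumulated_cost, rank_id); assign largest params first.
    heap = [(0.0, r) for r in range(num_ranks)]
    heapq.heapify(heap)
    assignment = {r: [] for r in range(num_ranks)}

    for name in sorted(costs, key=costs.get, reverse=True):
        load, rank = heapq.heappop(heap)
        assignment[rank].append(name)
        heapq.heappush(heap, (load + costs[name], rank))
    return assignment

if __name__ == "__main__":
    shapes = {"wq": (4096, 4096), "wk": (4096, 1024), "mlp_up": (4096, 11008)}
    print(balanced_static_partition(shapes, num_ranks=2, alpha=1.0))
```

Because the assignment is static, each rank can precompute which optimizer states it owns before training starts, avoiding per-step coordination.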

Community

Paper submitter

We propose Canzona, a unified, asynchronous, and load-balanced framework that makes matrix-based optimizers (e.g., Muon/Shampoo/SOAP) work efficiently under Megatron-style tensor fragmentation, by decoupling logical optimizer assignment from physical parameter distribution. It introduces α-Balanced Static Partitioning for DP (atomicity + load balance) and an asynchronous compute pipeline with Micro-Group Scheduling for TP to batch fragmented updates and hide reconstruction overhead, yielding 1.57× end-to-end iteration speedup and 5.8× lower optimizer-step latency on Qwen3-32B with 256 GPUs.
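
The Micro-Group Scheduling idea of batching TP-fragmented parameters and overlapping shard reconstruction with the matrix-based update can be pictured as a double-buffered pipeline. The sketch below is a simplified simulation; the function names (`gather_shards`, `matrix_update`) and the group size are hypothetical placeholders, not the paper's implementation.

```python
# Hypothetical sketch: while the optimizer updates the current micro-group of
# reconstructed matrices, the shards of the next group are gathered in the
# background, hiding reconstruction latency behind compute.
from concurrent.futures import ThreadPoolExecutor

def gather_shards(group):
    """Placeholder for reconstructing full matrices from TP shards (e.g. an all-gather)."""
    return [f"full({p})" for p in group]

def matrix_update(full_params):
    """Placeholder for a matrix-based optimizer step (e.g. a Muon/Shampoo-style update)."""
    return [f"updated({p})" for p in full_params]

def micro_group_step(params, group_size=2):
    groups = [params[i:i + group_size] for i in range(0, len(params), group_size)]
    if not groups:
        return []
    updated = []
    with ThreadPoolExecutor(max_workers=1) as comm:
        pending = comm.submit(gather_shards, groups[0])       # prefetch first group
        for nxt in groups[1:] + [None]:
            full = pending.result()                           # wait for current group's shards
            if nxt is not None:
                pending = comm.submit(gather_shards, nxt)     # overlap: gather next group
            updated += matrix_update(full)                    # compute on current group
    return updated

if __name__ == "__main__":
    print(micro_group_step(["wq", "wk", "wv", "wo", "mlp_up"], group_size=2))
```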
