Abstract
Stemphonic is a diffusion- and flow-based framework that generates a variable set of synchronized musical stems in a single inference pass, improving both quality and efficiency over existing methods.
Music stem generation, the task of producing musically synchronized, isolated instrument audio clips, offers the potential for greater user control and better alignment with musician workflows than conventional text-to-music models. Existing stem generation approaches, however, either rely on fixed architectures that output a predefined set of stems in parallel, or generate only one stem at a time, resulting in slow inference despite flexibility in stem combination. We propose Stemphonic, a diffusion-/flow-based framework that overcomes this trade-off and generates a variable set of synchronized stems in one inference pass. During training, we treat each stem as a batch element, group synchronized stems within a batch, and apply a shared noise latent to each group. At inference time, we use a shared initial noise latent and stem-specific text inputs to generate synchronized multi-stem outputs in one pass. We further extend our approach to enable one-pass conditional multi-stem generation and stem-wise activity controls, empowering users to iteratively generate and orchestrate the temporal layering of a mix. We benchmark our results on multiple open-source stem evaluation sets and show that Stemphonic produces higher-quality outputs while accelerating full-mix generation by 25 to 50%. Demos at: https://stemphonic-demo.vercel.app.
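The shared-noise idea in the abstract can be sketched in a few lines. The following is a minimal illustration only, not the authors' implementation: the `flow_model` interface, the latent shape, the rectified-flow interpolation path, the Euler sampler, and the function names `add_shared_noise` and `generate_stems` are all assumptions made for this example.

```python
import torch

def add_shared_noise(latents: torch.Tensor, group_ids: torch.Tensor, t: torch.Tensor):
    """Noise a batch of stem latents for flow-matching training.

    Stems from the same song share one group id and therefore one noise
    sample, so the model sees synchronized stems as time-aligned views
    of the same mix.
    latents:   (B, C, T) clean stem latents, one stem per batch element
    group_ids: (B,) song index per stem; equal ids share one noise latent
    t:         (B,) flow time in [0, 1], shared within a group
    """
    noise = torch.randn_like(latents)
    # Overwrite each group member's noise with the group leader's noise.
    for g in group_ids.unique():
        idx = (group_ids == g).nonzero(as_tuple=True)[0]
        noise[idx] = noise[idx[0]]
    t_ = t.view(-1, 1, 1)
    # Linear noise-to-data path, as in rectified flow (an assumption here).
    noisy = (1 - t_) * noise + t_ * latents
    target = latents - noise  # flow-matching velocity target
    return noisy, target

@torch.no_grad()
def generate_stems(flow_model, text_embs: torch.Tensor, steps: int = 50):
    """One-pass multi-stem sampling: a single shared initial noise latent
    is copied across stems; only the text conditioning differs per stem.

    text_embs: (S, D) one text embedding per requested stem.
    flow_model(x, t, cond) -> velocity is a hypothetical interface.
    """
    S = text_embs.shape[0]
    shared = torch.randn(1, 64, 1024)       # assumed latent shape
    x = shared.expand(S, -1, -1).clone()    # same initial noise for every stem
    for i in range(steps):                  # simple Euler integration of the flow
        t = torch.full((S,), i / steps)
        v = flow_model(x, t, text_embs)
        x = x + v / steps
    return x  # (S, C, T) synchronized stem latents, one per text prompt
```

Because every stem in a group starts from the same latent and only the text prompt varies, the sampled stems stay temporally aligned while remaining independently controllable.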
Community
Check out the intro & demo video here!
https://youtu.be/IrGD3CHaPYU?si=nutPyU5sz5iHfES5
More sound examples:
https://stemphonic-demo.vercel.app