Abstract
VOID is a video object removal framework that combines a vision-language model with a video diffusion model, using causal and counterfactual reasoning to generate physically plausible scenes after object removal.
Existing video object removal methods excel at inpainting content "behind" the object and correcting appearance-level artifacts such as shadows and reflections. However, when the removed object has more significant interactions, such as collisions with other objects, current models fail to account for these interactions and produce implausible results. We present VOID, a video object removal framework designed to perform physically plausible inpainting in these complex scenarios. To train the model, we generate a new paired dataset of counterfactual object removals using Kubric and HUMOTO, where removing an object requires altering downstream physical interactions. During inference, a vision-language model identifies regions of the scene affected by the removed object. These regions are then used to guide a video diffusion model that generates physically consistent counterfactual outcomes. Experiments on both synthetic and real data show that our approach better preserves consistent scene dynamics after object removal compared to prior video object removal methods. We hope this framework sheds light on how to make video editing models better simulators of the world through high-level causal reasoning.
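The two-stage inference described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: all function names, the mask representation (per-frame sets of pixel coordinates), and the toy stubs are hypothetical, since the abstract does not specify an API.

```python
# Hypothetical sketch of VOID-style two-stage inference.
# Assumptions (not from the paper): frames are dicts mapping pixel
# coordinates to values; masks are per-frame sets of pixel coordinates.

def identify_affected_regions(frames, object_mask, vlm):
    """Stage 1: a vision-language model flags regions whose dynamics
    depend on the removed object (e.g. collision partners)."""
    return vlm(frames, object_mask)

def remove_object(frames, object_mask, vlm, diffusion_model):
    affected = identify_affected_regions(frames, object_mask, vlm)
    # Union of the object's own pixels and the causally affected
    # regions, per frame, forms the region the diffusion model edits.
    edit_mask = [obj | aff for obj, aff in zip(object_mask, affected)]
    # Stage 2: the video diffusion model regenerates the masked
    # regions, producing the physically consistent counterfactual.
    return diffusion_model(frames, edit_mask)

# --- toy stubs so the sketch runs end to end ---
def toy_vlm(frames, object_mask):
    # Pretend pixel (5, 5) is a collision partner in every frame.
    return [{(5, 5)} for _ in frames]

def toy_diffusion(frames, edit_mask):
    # "Inpaint" by zeroing every masked pixel.
    return [{p: (0 if p in m else v) for p, v in f.items()}
            for f, m in zip(frames, edit_mask)]
```

The key design point the sketch captures is that the edit mask is larger than the object's own silhouette: it also covers regions the VLM judges to be causally downstream of the removed object.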
Community
Check out the demo here: https://huggingface.co/spaces/sam-motamed/VOID.
The following similar papers were recommended by the Semantic Scholar API:
- EffectErase: Joint Video Object Removal and Insertion for High-Quality Effect Erasing (2026)
- From Understanding to Erasing: Towards Complete and Stable Video Object Removal (2026)
- Toward Physically Consistent Driving Video World Models under Challenging Trajectories (2026)
- MVHOI: Bridge Multi-view Condition to Complex Human-Object Interaction Video Reenactment via 3D Foundation Model (2026)
- TRACE: Object Motion Editing in Videos with First-Frame Trajectory Guidance (2026)
- Point2Insert: Video Object Insertion via Sparse Point Guidance (2026)
- PISCO: Precise Video Instance Insertion with Sparse Control (2026)