DreamCAD: Scaling Multi-modal CAD Generation using Differentiable Parametric Surfaces
Abstract
DreamCAD is a multi-modal generative framework that creates editable BReps from point-level supervision using parametric patches and differentiable tessellation, achieving superior geometric fidelity and user preference scores.
Computer-Aided Design (CAD) relies on structured and editable geometric representations, yet existing generative methods are constrained by small annotated datasets with explicit design histories or boundary representation (BRep) labels. Meanwhile, millions of unannotated 3D meshes remain untapped, limiting progress in scalable CAD generation. To address this, we propose DreamCAD, a multi-modal generative framework that directly produces editable BReps from point-level supervision, without CAD-specific annotations. DreamCAD represents each BRep as a set of parametric patches (e.g., Bézier surfaces) and uses a differentiable tessellation method to generate meshes. This enables large-scale training on 3D datasets while reconstructing connected and editable surfaces. Furthermore, we introduce CADCap-1M, the largest CAD captioning dataset to date, with over one million descriptions generated using GPT-5, to advance text-to-CAD research. DreamCAD achieves state-of-the-art performance on the ABC and Objaverse benchmarks across text, image, and point modalities, improving geometric fidelity and achieving over 75% user preference. Code and dataset will be publicly available.
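The abstract does not specify DreamCAD's exact patch parameterization or tessellation scheme, but the core idea — evaluating parametric surface patches on a sample grid so the resulting mesh vertices are differentiable functions of the control points — can be sketched for a bicubic Bézier patch. The function names and the 4×4 control-point layout below are illustrative assumptions, not the paper's implementation; because the surface is polynomial in its control points, the same evaluation is differentiable in any autodiff framework.

```python
import numpy as np
from math import comb

def bernstein3(t):
    # Cubic Bernstein basis B_i^3(t) for i = 0..3; returns shape (..., 4).
    t = np.asarray(t, dtype=float)[..., None]
    i = np.arange(4)
    coeff = np.array([comb(3, k) for k in range(4)], dtype=float)
    return coeff * t**i * (1.0 - t)**(3 - i)

def tessellate_patch(ctrl, n=8):
    """Tessellate a bicubic Bezier patch into a quad mesh (illustrative sketch).

    ctrl: (4, 4, 3) array of control points P_ij.
    The surface S(u, v) = sum_{i,j} B_i(u) B_j(v) P_ij is a polynomial in
    ctrl, so mesh vertices admit gradients w.r.t. the control points.
    Returns (vertices, faces): (n*n, 3) points and ((n-1)^2, 4) quad indices.
    """
    u = np.linspace(0.0, 1.0, n)
    Bu = bernstein3(u)                                  # (n, 4) basis along u
    Bv = bernstein3(u)                                  # (n, 4) basis along v
    verts = np.einsum('ui,vj,ijk->uvk', Bu, Bv, ctrl).reshape(-1, 3)
    idx = np.arange(n * n).reshape(n, n)                # vertex index grid
    faces = np.stack([idx[:-1, :-1], idx[1:, :-1],      # one quad per grid cell
                      idx[1:, 1:], idx[:-1, 1:]], axis=-1).reshape(-1, 4)
    return verts, faces
```

A quick sanity check: a patch whose control points all lie in the z = 0 plane must tessellate to planar vertices, and the surface interpolates its corner control points.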
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- ShapeR: Robust Conditional 3D Shape Generation from Casual Captures (2026)
- Brep2Shape: Boundary and Shape Representation Alignment via Self-Supervised Transformers (2026)
- Stroke3D: Lifting 2D strokes into rigged 3D model via latent diffusion models (2026)
- PixARMesh: Autoregressive Mesh-Native Single-View Scene Reconstruction (2026)
- Pointer-CAD: Unifying B-Rep and Command Sequences via Pointer-based Edges&Faces Selection (2026)
- HY3D-Bench: Generation of 3D Assets (2026)
- STEP-LLM: Generating CAD STEP Models from Natural Language with Large Language Models (2026)
Datasets citing this paper: 1