

AutoModel

The AutoModel class automatically detects and loads the correct model class (UNet, transformer, VAE, and so on) from a config.json file. You don't need to know the specific model class name ahead of time. It supports loading options such as torch_dtype and device_map, and works across model types and libraries.

The example below loads a transformer from Diffusers and a text encoder from Transformers. Use the subfolder parameter to specify where to load the config.json file from.

import torch
from diffusers import AutoModel, DiffusionPipeline

transformer = AutoModel.from_pretrained(
    "Qwen/Qwen-Image", subfolder="transformer", torch_dtype=torch.bfloat16, device_map="cuda"
)

text_encoder = AutoModel.from_pretrained(
    "Qwen/Qwen-Image", subfolder="text_encoder", torch_dtype=torch.bfloat16, device_map="cuda"
)
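
The loaded components can be passed straight into a pipeline. The snippet below is a minimal sketch that assumes the remaining components come from the same Qwen/Qwen-Image repository.

# Override the pipeline's default components with the ones loaded above.
pipeline = DiffusionPipeline.from_pretrained(
    "Qwen/Qwen-Image",
    transformer=transformer,
    text_encoder=text_encoder,
    torch_dtype=torch.bfloat16,
)
pipeline.to("cuda")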

Custom models

AutoModel also loads models from the Hub that aren’t included in Diffusers. Set trust_remote_code=True in AutoModel.from_pretrained() to load custom models.

A custom model repository needs a Python module with the model class, and a config.json with an auto_map entry that maps "AutoModel" to "module_file.ClassName".

custom/custom-transformer-model/
├── config.json
├── my_model.py
└── diffusion_pytorch_model.safetensors

The config.json includes the auto_map field pointing to the custom class.

{
  "auto_map": {
    "AutoModel": "my_model.MyCustomModel"
  }
}
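
For illustration, a minimal my_model.py could look like the sketch below. MyCustomModel and its single linear layer are hypothetical; the key part is inheriting from ModelMixin and ConfigMixin and registering the __init__ arguments with @register_to_config so they are saved to config.json.

import torch
from diffusers import ConfigMixin, ModelMixin
from diffusers.configuration_utils import register_to_config


class MyCustomModel(ModelMixin, ConfigMixin):
    @register_to_config
    def __init__(self, hidden_size: int = 64):
        super().__init__()
        # Hypothetical layer; a real model would define its full architecture here.
        self.proj = torch.nn.Linear(hidden_size, hidden_size)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return self.proj(hidden_states)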

Then load it with trust_remote_code=True.

import torch
from diffusers import AutoModel

transformer = AutoModel.from_pretrained(
    "custom/custom-transformer-model", trust_remote_code=True, torch_dtype=torch.bfloat16, device_map="cuda"
)

For a real-world example, Overworld/Waypoint-1-Small hosts a custom WorldModel class across several modules in its transformer subfolder.

transformer/
├── config.json          # auto_map: "model.WorldModel"
├── model.py
├── attn.py
├── nn.py
├── cache.py
├── quantize.py
├── __init__.py
└── diffusion_pytorch_model.safetensors

Load it the same way, pointing subfolder to the transformer folder.

import torch
from diffusers import AutoModel

transformer = AutoModel.from_pretrained(
    "Overworld/Waypoint-1-Small", subfolder="transformer", trust_remote_code=True, torch_dtype=torch.bfloat16, device_map="cuda"
)

If the custom model inherits from the ModelMixin class, it gets access to the same features as Diffusers model classes, like regional compilation and group offloading.
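
For example, group offloading keeps most of the model on the CPU and only moves groups of layers to the GPU as they're needed. The following is a sketch, assuming a recent Diffusers version where ModelMixin exposes enable_group_offload() and compile_repeated_blocks(); exact availability depends on your installed version.

import torch

# Assumes `transformer` was loaded with AutoModel and inherits from ModelMixin.
transformer.enable_group_offload(
    onload_device=torch.device("cuda"),
    offload_device=torch.device("cpu"),
    offload_type="block_level",
    num_blocks_per_group=2,
)

# Regional compilation compiles only the repeated transformer blocks.
transformer.compile_repeated_blocks(fullgraph=True)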

As a precaution when using trust_remote_code=True, pass a commit hash to the revision argument in AutoModel.from_pretrained() to pin the code to a specific commit and make sure it hasn't been updated with malicious code (unless you fully trust the model owners).

transformer = AutoModel.from_pretrained(
    "Overworld/Waypoint-1-Small", subfolder="transformer", trust_remote_code=True, revision="a3d8cb2"
)

Learn more about implementing custom models in the Community components guide.
