How to use from the Diffusers library
pip install -U diffusers transformers accelerate
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline

# RealVisXL_V3.0 is an SDXL-based checkpoint, so the SDXL ControlNet pipeline is used
controlnet = ControlNetModel.from_pretrained("kwanY/EAS4", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
	"SG161222/RealVisXL_V3.0", controlnet=controlnet, torch_dtype=torch.float16
)
pipe.to("cuda")

controlnet-kwanY/EAS4

These are AlignNet weights trained on SG161222/RealVisXL_V3.0 with Pose, Expression, and Sparse landmark conditions. You can find some example images below.

prompt: photo of a human
prompt: A render of a head of Disney character
prompt: a FHD head of an Orc

Intended uses & limitations

How to use

# TODO: add an example code snippet for running this diffusion pipeline
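A minimal sketch of preparing a sparse-landmark condition image for the pipeline loaded above. The landmark coordinates, the white-dots-on-black rendering, and the final pipeline call are illustrative assumptions, since the card does not document the expected conditioning format; real landmarks would come from a face landmark detector.

```python
from PIL import Image, ImageDraw

# Hypothetical 5-point sparse landmarks (eyes, nose tip, mouth corners)
# in pixel coordinates on a 1024x1024 canvas.
landmarks = [(420, 400), (604, 400), (512, 520), (440, 640), (584, 640)]

# Render the sparse landmarks as white dots on a black canvas.
cond = Image.new("RGB", (1024, 1024), "black")
draw = ImageDraw.Draw(cond)
for x, y in landmarks:
    draw.ellipse((x - 8, y - 8, x + 8, y + 8), fill="white")

# The condition image is then passed to the pipeline alongside a prompt:
# image = pipe("photo of a human", image=cond, num_inference_steps=30).images[0]
```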

Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

Training details

[TODO: describe the data used to train the model]
