
    huggingface/diffusers

    🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch.

    Topics: deep-learning, machine-learning, diffusion, flux, image-generation, image2image, image2video, latent-diffusion-models, pytorch, qwen-image, score-based-generative-modeling, stable-diffusion, stable-diffusion-diffusers, text2image, text2video, video2video

    Language: Python · License: Apache-2.0
    31.3K stars · 6.4K forks · 31.3K watching · Updated 2/27/2026


    Health Score: 5.6
    Weekly Growth: +0 (+0.0% this week)
    Contributors: 1 total
    Open Issues: 872

    About diffusers




    🤗 Diffusers is the go-to library for state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules. Whether you're looking for a simple inference solution or training your own diffusion models, 🤗 Diffusers is a modular toolbox that supports both. Our library is designed with a focus on usability over performance, simple over easy, and customizability over abstractions.

    🤗 Diffusers offers three core components:

    • State-of-the-art diffusion pipelines that can be run in inference with just a few lines of code.
    • Interchangeable noise schedulers for balancing generation speed against output quality (see the swap sketch after this list).
    • Pretrained models that can be used as building blocks, and combined with schedulers, for creating your own end-to-end diffusion systems.
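
    Because schedulers share a common configuration interface, one can be swapped for another on an already-loaded pipeline. A minimal sketch, assuming a Stable Diffusion checkpoint and using EulerDiscreteScheduler as the drop-in replacement:

    from diffusers import DiffusionPipeline, EulerDiscreteScheduler

    pipeline = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5")

    # Build the new scheduler from the old one's config so shared settings carry over
    pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config)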

    Installation

    We recommend installing 🤗 Diffusers in a virtual environment from PyPI or Conda. For more details about installing PyTorch, please refer to their official documentation.

    PyTorch

    With pip (official package):

    pip install --upgrade diffusers[torch]
    

    With conda (maintained by the community):

    conda install -c conda-forge diffusers
    

    Apple Silicon (M1/M2) support

    Please refer to the How to use Stable Diffusion in Apple Silicon guide.
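
    As a minimal sketch of the approach that guide describes, a pipeline can be moved to PyTorch's Metal ("mps") backend like any other device (float16 support on mps varies by PyTorch version, so the default dtype is kept here):

    from diffusers import DiffusionPipeline

    pipeline = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5")
    pipeline.to("mps")  # run on the Apple GPU via PyTorch's Metal backend

    image = pipeline("An image of a squirrel in Picasso style").images[0]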

    Quickstart

    Generating outputs is super easy with 🤗 Diffusers. To generate an image from text, use the from_pretrained method to load any pretrained diffusion model (browse the Hub for 30,000+ checkpoints):

    from diffusers import DiffusionPipeline
    import torch

    # Load a pretrained text-to-image pipeline in half precision and move it to the GPU
    pipeline = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16)
    pipeline.to("cuda")

    # Run the pipeline; the result holds a list of PIL images
    pipeline("An image of a squirrel in Picasso style").images[0]
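
    Generation is random by default. A minimal sketch of making a run reproducible, reusing the pipeline above and seeding a torch.Generator (pipelines accept a generator argument for this):

    # Seed a CUDA generator so the same prompt reproduces the same image
    generator = torch.Generator("cuda").manual_seed(0)
    image = pipeline("An image of a squirrel in Picasso style", generator=generator).images[0]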
    

    You can also dig into the models and schedulers toolbox to build your own diffusion system:

    from diffusers import DDPMScheduler, UNet2DModel
    from PIL import Image
    import torch

    # Load a pretrained noise scheduler and a denoising UNet trained on cat images
    scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256")
    model = UNet2DModel.from_pretrained("google/ddpm-cat-256").to("cuda")
    scheduler.set_timesteps(50)  # run the reverse diffusion process in 50 steps

    # Start from pure Gaussian noise at the model's native resolution
    sample_size = model.config.sample_size
    noise = torch.randn((1, 3, sample_size, sample_size), device="cuda")
    input = noise

    for t in scheduler.timesteps:
        with torch.no_grad():
            # Predict the noise residual, then let the scheduler compute
            # the slightly less noisy sample for the previous timestep
            noisy_residual = model(input, t).sample
            prev_noisy_sample = scheduler.step(noisy_residual, t, input).prev_sample
            input = prev_noisy_sample

    # Map from [-1, 1] to [0, 1], then convert the tensor to a PIL image
    image = (input / 2 + 0.5).clamp(0, 1)
    image = image.cpu().permute(0, 2, 3, 1).numpy()[0]
    image = Image.fromarray((image * 255).round().astype("uint8"))
    image
    

    Check out the Quickstart to launch your diffusion journey today!

    How to navigate the documentation

    Documentation | What can I learn?
    Tutorial | A basic crash course for learning how to use the library's most important features like using models and schedulers to build your own diffusion system, and training your own diffusion model.
    Loading | Guides for how to load and configure all the components (pipelines, models, and schedulers) of the library, as well as how to use different schedulers.
    Pipelines for inference | Guides for how to use pipelines for different inference tasks, batched generation, controlling generated outputs and randomness, and how to contribute a pipeline to the library.
    Optimization | Guides for how to optimize your diffusion model to run faster and consume less memory.
    Training | Guides for how to train a diffusion model for different tasks with different training techniques.
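
    As a taste of what the optimization guides cover, a minimal sketch of two common memory levers on a loaded pipeline (enable_model_cpu_offload additionally requires the accelerate package):

    # Keep submodules on the CPU and move each to the GPU only while it runs,
    # trading some speed for a much smaller VRAM footprint
    pipeline.enable_model_cpu_offload()

    # Compute attention in slices rather than all at once to reduce peak memory
    pipeline.enable_attention_slicing()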

    Contribution

    We ❤️ contributions from the open-source community! If you want to contribute to this library, please check out our Contribution guide, and look through the open issues for something you'd like to tackle.

    Also, say 👋 in our public Discord channel. We discuss the hottest trends in diffusion models, help each other with contributions and personal projects, or just hang out over ☕.

    Popular Tasks & Pipelines

    Task | Pipeline | 🤗 Hub
    Unconditional Image Generation | DDPM | google/ddpm-ema-church-256
    Text-to-Image | Stable Diffusion Text-to-Image | stable-diffusion-v1-5/stable-diffusion-v1-5
    Text-to-Image | unCLIP | kakaobrain/karlo-v1-alpha
    Text-to-Image | DeepFloyd IF | DeepFloyd/IF-I-XL-v1.0
    Text-to-Image | Kandinsky | kandinsky-community/kandinsky-2-2-decoder
    Text-guided Image-to-Image | ControlNet | lllyasviel/sd-controlnet-canny
    Text-guided Image-to-Image | InstructPix2Pix | timbrooks/instruct-pix2pix
    Text-guided Image-to-Image | Stable Diffusion Image-to-Image | stable-diffusion-v1-5/stable-diffusion-v1-5
    Text-guided Image Inpainting | Stable Diffusion Inpainting | runwayml/stable-diffusion-inpainting
    Image Variation | Stable Diffusion Image Variation | lambdalabs/sd-image-variations-diffusers
    Super Resolution | Stable Diffusion Upscale | stabilityai/stable-diffusion-x4-upscaler
    Super Resolution | Stable Diffusion Latent Upscale | stabilityai/sd-x2-latent-upscaler
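
    As a sketch of one row above, text-guided image-to-image can be run through the AutoPipelineForImage2Image convenience class; the prompt, strength value, and source image URL below are illustrative placeholders:

    import torch
    from diffusers import AutoPipelineForImage2Image
    from diffusers.utils import load_image

    pipeline = AutoPipelineForImage2Image.from_pretrained(
        "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
    )
    pipeline.to("cuda")

    init_image = load_image("https://example.com/sketch.png")  # hypothetical URL
    image = pipeline(
        "A fantasy landscape, detailed oil painting",
        image=init_image,
        strength=0.75,  # how far to push the init image toward the prompt (0-1)
    ).images[0]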

    Thank you for using us ❤️.

    Credits

    This library builds on previous work by many different authors and would not have been possible without their great research and implementations. We'd like to thank, in particular, the following implementations, which helped us during development and without which the API could not be as polished as it is today:

    • @CompVis' latent diffusion models library, available here
    • @hojonathanho's original DDPM implementation, available here, as well as the extremely useful translation into PyTorch by @pesser, available here
    • @ermongroup's DDIM implementation, available here
    • @yang-song's Score-VE and Score-VP implementations, available here

    We also want to thank @heejkoo for the very helpful overview of papers, code, and resources on diffusion models, available here, as well as @crowsonkb and @rromb for useful discussions and insights.

    Citation

    @misc{von-platen-etal-2022-diffusers,
      author = {Patrick von Platen and Suraj Patil and Anton Lozhkov and Pedro Cuenca and Nathan Lambert and Kashif Rasul and Mishig Davaadorj and Dhruv Nair and Sayak Paul and William Berman and Yiyi Xu and Steven Liu and Thomas Wolf},
      title = {Diffusers: State-of-the-art diffusion models},
      year = {2022},
      publisher = {GitHub},
      journal = {GitHub repository},
      howpublished = {\url{https://github.com/huggingface/diffusers}}
    }
    
