SANA-Sprint
SANA-Sprint: One-Step Diffusion with Continuous-Time Consistency Distillation from NVIDIA, MIT HAN Lab, and Hugging Face by Junsong Chen, Shuchen Xue, Yuyang Zhao, Jincheng Yu, Sayak Paul, Junyu Chen, Han Cai, Enze Xie, and Song Han.
The abstract from the paper is:
This paper presents SANA-Sprint, an efficient diffusion model for ultra-fast text-to-image (T2I) generation. SANA-Sprint is built on a pre-trained foundation model and augmented with hybrid distillation, dramatically reducing inference steps from 20 to 1-4. We introduce three key innovations: (1) We propose a training-free approach that transforms a pre-trained flow-matching model for continuous-time consistency distillation (sCM), eliminating costly training from scratch and achieving high training efficiency. Our hybrid distillation strategy combines sCM with latent adversarial distillation (LADD): sCM ensures alignment with the teacher model, while LADD enhances single-step generation fidelity. (2) SANA-Sprint is a unified step-adaptive model that achieves high-quality generation in 1-4 steps, eliminating step-specific training and improving efficiency. (3) We integrate ControlNet with SANA-Sprint for real-time interactive image generation, enabling instant visual feedback for user interaction. SANA-Sprint establishes a new Pareto frontier in speed-quality tradeoffs, achieving state-of-the-art performance with 7.59 FID and 0.74 GenEval in only 1 step — outperforming FLUX-schnell (7.94 FID / 0.71 GenEval) while being 10× faster (0.1s vs 1.1s on H100). It also achieves 0.1s (T2I) and 0.25s (ControlNet) latency for 1024×1024 images on H100, and 0.31s (T2I) on an RTX 4090, showcasing its exceptional efficiency and potential for AI-powered consumer applications (AIPC). Code and pre-trained models will be open-sourced.
This pipeline was contributed by lawrence-cj, Shuchen Xue, and Enze Xie. The original codebase can be found here. The original weights can be found under hf.co/Efficient-Large-Model.
Available models:
| Model | Recommended dtype |
|---|---|
| Efficient-Large-Model/Sana_Sprint_1.6B_1024px_diffusers | torch.bfloat16 |
| Efficient-Large-Model/Sana_Sprint_0.6B_1024px_diffusers | torch.bfloat16 |
Refer to this collection for more information.
Note: The recommended dtype above is for the transformer weights. The text encoder must stay in torch.bfloat16, and the VAE weights must stay in torch.bfloat16 or torch.float32 for the model to work correctly. Please refer to the inference example below to see how to load the model with the recommended dtype.
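As a minimal sketch of such a load, the snippet below uses the 1.6B checkpoint from the table above and loads every component in torch.bfloat16, which satisfies the constraints mentioned; the prompt and output filename are only illustrative.

```python
import torch
from diffusers import SanaSprintPipeline

# Load the whole pipeline in bfloat16: the transformer and text encoder use the
# recommended dtype, and the VAE is allowed to stay in bfloat16 as noted above.
pipeline = SanaSprintPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_Sprint_1.6B_1024px_diffusers",
    torch_dtype=torch.bfloat16,
)
pipeline.to("cuda")

prompt = "a tiny astronaut hatching from an egg on the moon"
image = pipeline(prompt, num_inference_steps=2).images[0]
image.save("sana_sprint.png")
```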
Quantization
Quantization helps reduce the memory requirements of very large models by storing model weights in a lower precision data type. However, quantization may have a varying impact on output quality depending on the model.
Refer to the Quantization overview to learn more about supported quantization backends and selecting a quantization backend that supports your use case. The example below demonstrates how to load a quantized [SanaSprintPipeline] for inference with bitsandbytes.
```python
import torch
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig, SanaTransformer2DModel, SanaSprintPipeline
from transformers import BitsAndBytesConfig as BitsAndBytesConfig, AutoModel

# Quantize the text encoder to 8-bit with the transformers bitsandbytes config.
quant_config = BitsAndBytesConfig(load_in_8bit=True)
text_encoder_8bit = AutoModel.from_pretrained(
    "Efficient-Large-Model/Sana_Sprint_1.6B_1024px_diffusers",
    subfolder="text_encoder",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)

# Quantize the transformer to 8-bit with the diffusers bitsandbytes config.
quant_config = DiffusersBitsAndBytesConfig(load_in_8bit=True)
transformer_8bit = SanaTransformer2DModel.from_pretrained(
    "Efficient-Large-Model/Sana_Sprint_1.6B_1024px_diffusers",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)

# Assemble the pipeline from the quantized components.
pipeline = SanaSprintPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_Sprint_1.6B_1024px_diffusers",
    text_encoder=text_encoder_8bit,
    transformer=transformer_8bit,
    torch_dtype=torch.bfloat16,
    device_map="balanced",
)

prompt = "a tiny astronaut hatching from an egg on the moon"
image = pipeline(prompt).images[0]
image.save("sana.png")
```
Setting max_timesteps
Users can tweak the max_timesteps value to experiment with the visual quality of the generated outputs. The default max_timesteps value was obtained through an inference-time search process. For more details, check out the paper.
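As a hedged sketch, assuming max_timesteps is exposed as a call-time argument of [SanaSprintPipeline], overriding it could look like the following; the specific override value is only illustrative, not a tuned recommendation.

```python
import torch
from diffusers import SanaSprintPipeline

pipeline = SanaSprintPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_Sprint_1.6B_1024px_diffusers",
    torch_dtype=torch.bfloat16,
).to("cuda")

prompt = "a tiny astronaut hatching from an egg on the moon"

# Override the default max_timesteps at call time; 1.5 is an illustrative
# assumption to compare against the searched default, not a recommended setting.
image = pipeline(prompt, num_inference_steps=2, max_timesteps=1.5).images[0]
image.save("sana_sprint_max_timesteps.png")
```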
Image to Image
The [SanaSprintImg2ImgPipeline] is a pipeline for image-to-image generation. It takes an input image and a prompt and generates a new image conditioned on both.
```python
import torch
from diffusers import SanaSprintImg2ImgPipeline
from diffusers.utils.loading_utils import load_image

# Load the input image to condition the generation on.
image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/penguin.png"
)

pipe = SanaSprintImg2ImgPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_Sprint_1.6B_1024px_diffusers",
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

# strength controls how strongly the input image is altered:
# lower values stay closer to the input, higher values follow the prompt more.
image = pipe(
    prompt="a cute pink bear",
    image=image,
    strength=0.5,
    height=832,
    width=480,
).images[0]
image.save("output.png")
```
SanaSprintPipeline
[[autodoc]] SanaSprintPipeline
  - all
  - __call__
SanaSprintImg2ImgPipeline
[[autodoc]] SanaSprintImg2ImgPipeline
  - all
  - __call__
SanaPipelineOutput
[[autodoc]] pipelines.sana.pipeline_output.SanaPipelineOutput