* checking to improve pipelines.
* more fixes.
* add: tip to encourage the usage of revision
* Apply suggestions from code review
* retrigger ci
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Fix ControlNetModel.from_unet not loading add_embedding
* delete white space in blank line
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* debugging
* let's see the numbers
* let's see the numbers
* let's see the numbers
* restrict tolerance.
* increase inference steps.
* shallow copy of cross_attention_kwargs
* remove print
* pop scale from the top-level unet instead of getting it.
* improve readability.
* Apply suggestions from code review
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* fix a little bit.
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Add properties and `IPAdapterTesterMixin` tests for `StableDiffusionPanoramaPipeline`
* Fix variable name typo and update comments
* Update deprecated `output_type="numpy"` to "np" in test files
* Discard changes to src/diffusers/pipelines/stable_diffusion_panorama/pipeline_stable_diffusion_panorama.py
* Update test_stable_diffusion_panorama.py
* Update numbers in README.md
* Update get_guidance_scale_embedding method to use timesteps instead of w
* Update number of checkpoints in README.md
* Add type hints and fix var name
* Fix PyTorch's convention for inplace functions
* Fix a typo
* Revert "Fix PyTorch's convention for inplace functions"
This reverts commit 74350cf65b.
* Fix typos
* Indent
* Refactor get_guidance_scale_embedding method in LEditsPPPipelineStableDiffusionXL class
* log loss per image
* add commandline param for per image loss logging
* style
* debug-loss -> debug_loss
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Change step_offset scheduler docstrings
* Mention it may be needed by some models
* More docstrings
These failed a literal search-and-replace because I performed it case-sensitively,
which is fun.
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* add: support for notifying maintainers about the nightly test status
* add: a temporary workflow for validation.
* cancel in progress.
* runs-on
* clean up
* add: peft dep
* change device.
* multiple edits.
* remove temp workflow.
* add: a workflow to check if docker containers can be built if the files are modified.
* typo
* unify docker image build test and push
* make it run on prs too.
* check
* check
* check
* check again.
* remove docker test build file.
* remove extra dependencies.
* check
* Initial commit
* Removed copy hints, as in original SDXLControlNetPipeline
Removed copy hints, as in the original SDXLControlNetPipeline, because `make fix-copies` seems to have issues with the @property decorator.
* Reverted changes to ControlNetXS
* Addendum to: Removed changes to ControlNetXS
* Added test+docs for mixture of denoiser
* Update docs/source/en/using-diffusers/controlnet.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/using-diffusers/controlnet.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* I added a new docstring to the class. It makes it easier for other developers to understand what the class does and where it is used.
* Update src/diffusers/models/unet_2d_blocks.py
This change was suggested by a maintainer.
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Update src/diffusers/models/unet_2d_blocks.py
Add suggested text
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Update unet_2d_blocks.py
Changed the "Parameter" text to "Args".
* Update unet_2d_blocks.py
Set proper indentation in this file.
* Update unet_2d_blocks.py
A small change to the act_fun argument line.
* Ran the black command to reformat the code style
* Update unet_2d_blocks.py
Added a docstring similar to the one in the original diffusion repository.
* Fix bug mentioned in issue #6901
* Update src/diffusers/schedulers/scheduling_ddim_flax.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Fix linter
* Restore empty line
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* copied from for t2i pipelines without ip adapter support.
* two more pipelines with proper copied from comments.
* revert to the original implementation
* throw error when patch inputs and layernorm are provided for transformers2d.
* add comment on supported norm_types in transformers2d
* more check
* fix: norm_type handling
* [bug] Fix float/int guidance scale not working in `StableVideoDiffusionPipeline`
* Add test to disable CFG on SVD
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* support and example launch for sdxl turbo
* White space fixes
* Trailing whitespace character
* ruff format
* fix guidance_scale and steps for turbo mode
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Radames Ajna <radamajna@gmail.com>
* update svd docs
* fix example doc string
* update return type hints/docs
* update type hints
* Fix typos in pipeline_stable_video_diffusion.py
* make style && make fix-copies
* Update src/diffusers/pipelines/stable_video_diffusion/pipeline_stable_video_diffusion.py
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update src/diffusers/pipelines/stable_video_diffusion/pipeline_stable_video_diffusion.py
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* update based on suggestion
---------
Co-authored-by: M. Tolga Cangöz <mtcangoz@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Enable FakeTensorMode for the EulerDiscreteScheduler
PyTorch's FakeTensorMode does not support `.numpy()` or `numpy.array()` calls.
This PR replaces the `sigmas` numpy array with an equivalent PyTorch tensor.
Repro:
```python
import torch
from diffusers import DiffusionPipeline
# ONNXTorchPatcher is an internal helper of the PyTorch ONNX exporter; its import path varies by version.
with torch._subclasses.FakeTensorMode() as fake_mode, ONNXTorchPatcher():
    fake_model = DiffusionPipeline.from_pretrained(model_name, low_cpu_mem_usage=False)
```
that otherwise would fail with
`RuntimeError: .numpy() is not supported for tensor subclasses.`
* Address comments
* add tags for diffusers training
* add tags for diffusers training
* add tags for diffusers training
* add tags for diffusers training
* add tags for diffusers training
* add tags for diffusers training
* add dora tags for dreambooth lora scripts
* style
* add is_dora arg
* style
* add dora training feature to sd 1.5 script
* added notes about DoRA training
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* initial
* check_inputs fix to the rest of pipelines
* add fix for no cfg too
* use of variable
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Add copyright notice to relevant files and fix typos
* Set `timestep_spacing` parameter of `StableDiffusionXLPipeline`'s scheduler to `'trailing'`.
* Update `StableDiffusionXLPipeline.from_single_file` by including EulerAncestralDiscreteScheduler with `timestep_spacing="trailing"` param.
* Update model loading method in SDXL Turbo documentation
* move model helper function in pipeline to EfficiencyMixin
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* DPMMultistep rescale_betas_zero_snr
* DPM upcast samples in step()
* DPM rescale_betas_zero_snr UT
* DPMSolverMulti move sample upcast after model convert
Avoids having to re-use the dtype.
* Add a newline for Ruff
* log_validation unification for controlnet.
* additional fixes.
* remove print.
* better reuse and loading
* make final inference run conditional.
* Update examples/controlnet/README_sdxl.md
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* resize the control image in the snippet.
---------
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Make LoRACompatibleConv padding_mode work.
* Format code style.
* add fast test
* Update src/diffusers/models/lora.py
Simplify the code by patrickvonplaten.
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* code refactor
* apply patrickvonplaten suggestion to simplify the code.
* rm test_lora_layers_old_backend.py and add test case in test_lora_layers_peft.py
* update test case.
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* modularize log validation
* run make style and refactor wandb support
* remove redundant initialization
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* make checkpoint_merger pipeline pass the "variant" argument to from_pretrained()
* make style
---------
Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* add stable_diffusion_xl_ipex community pipeline
* make style for code quality check
* update docs as suggested
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* standardize model card
* fix tags
* correct import styling and update tags
* run make style and make quality
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* feat: allow low_cpu_mem_usage in ip adapter loading
* reduce the number of device placements.
* documentation.
* throw low_cpu_mem_usage warning only once from the main entry point.
* use load_model_into_meta in single file utils
* propagate to autoencoder and controlnet.
* correct class name access behaviour.
* remove torch_dtype from load_model_into_meta; seems unnecessary
* remove incorrect kwarg
* style to avoid extra unnecessary line breaks
* fix: bias loading bug
* fixes for SDXL
* apply changes to the conversion script to match single_file_utils.py
* do transpose to match the single file loading logic.
Remove <cat-toy> validation prompt from textual_inversion_sdxl.py
The `<cat-toy>` validation prompt is a default choice for the example task in the README. But no other part of `textual_inversion_sdxl.py` references the cat toy and `textual_inversion.py` has a default validation prompt of `None` as well.
So bring `textual_inversion_sdxl.py` in line with `textual_inversion.py` and change default validation prompt to `None`
* attention_head_dim
* debug
* print more info
* correct num_attention_heads behaviour
* down_block_num_attention_heads -> num_attention_heads.
* correct the image link in doc.
* add: deprecation for num_attention_head
* fix: test argument to use attention_head_dim
* more fixes.
* quality
* address comments.
* remove deprecation.
* add: support for passing ip adapter image embeddings
* debugging
* make feature_extractor unloading conditioned on safety_checker
* better condition
* type annotation
* index to look into value slices
* more debugging
* debugging
* serialize embeddings dict
* better conditioning
* remove unnecessary prints.
* Update src/diffusers/loaders/ip_adapter.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* make fix-copies and styling.
* styling and further copy fixing.
* fix: check_inputs call in controlnet sdxl img2img pipeline
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* feat: standardize model card creation for dreambooth training.
* correct 'inference
* remove comments.
* take component out of kwargs
* style
* add: card template to have a leaner description.
* widget support.
* propagate changes to train_dreambooth_lora
* propagate changes to custom diffusion
* make widget properly type-annotated
* fix: callback function name is incorrect
In this tutorial a function is defined and then passed to the `callback_on_step_end` argument, but the names did not match; see the sketch below for how such a callback is typically wired up.
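For reference, a minimal `callback_on_step_end` sketch, assuming a standard Stable Diffusion 1.5 checkpoint (the callback here only logs progress; the key point is that the name passed to the pipeline must match the function definition):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def log_step(pipeline, step_index, timestep, callback_kwargs):
    # Called once per denoising step; must return the (possibly modified) kwargs dict.
    print(f"step {step_index}, timestep {int(timestep)}")
    return callback_kwargs

# The function name used here must match the definition above.
image = pipe("a photo of an astronaut", callback_on_step_end=log_step).images[0]
```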
* fix: typo in num_timestep (correct is num_timesteps)
fixed property name
* remove _to_tensor
* remove _to_tensor definition
* remove _collapse_frames_into_batch
* remove lora for not bloating the code.
* remove sample_size.
* simplify code a bit more
* ensure timesteps are always in tensor.
* Fix `AutoencoderTiny` with `use_slicing`
When using slicing with AutoencoderTiny, the encoder mistakenly encodes the entire batch for every image in the batch.
* Fixed formatting issue
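As a rough illustration of the `use_slicing` path touched by the `AutoencoderTiny` fix above, this is how slicing is usually enabled (the checkpoint id is an assumption, not taken from the PR):

```python
import torch
from diffusers import AutoencoderTiny

# "madebyollin/taesd" is a commonly used tiny autoencoder checkpoint (assumption).
vae = AutoencoderTiny.from_pretrained("madebyollin/taesd", torch_dtype=torch.float16).to("cuda")
vae.enable_slicing()  # encode/decode one image of the batch at a time

images = torch.randn(4, 3, 512, 512, dtype=torch.float16, device="cuda")
with torch.no_grad():
    # With slicing enabled, each image is encoded separately rather than the
    # whole batch being re-encoded once per image (the bug this PR fixes).
    latents = vae.encode(images).latents
```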
@@ -77,7 +77,7 @@ Please refer to the [How to use Stable Diffusion in Apple Silicon](https://huggi
 ## Quickstart
-Generating outputs is super easy with 🤗 Diffusers. To generate an image from text, use the `from_pretrained` method to load any pretrained diffusion model (browse the [Hub](https://huggingface.co/models?library=diffusers&sort=downloads) for 19000+ checkpoints):
+Generating outputs is super easy with 🤗 Diffusers. To generate an image from text, use the `from_pretrained` method to load any pretrained diffusion model (browse the [Hub](https://huggingface.co/models?library=diffusers&sort=downloads) for 22000+ checkpoints):
 ```python
 from diffusers import DiffusionPipeline
 ```
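For context, a minimal quickstart along the lines of the README snippet might look like this (the checkpoint id is just an example, not part of the diff):

```python
import torch
from diffusers import DiffusionPipeline

# Any text-to-image checkpoint from the Hub works here; this one is an example.
pipeline = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipeline("An image of a squirrel in Picasso style").images[0]
image.save("squirrel.png")
```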
@@ -219,7 +219,7 @@ Also, say 👋 in our public Discord channel <a href="https://discord.gg/G7tWnz9
 - https://github.com/deep-floyd/IF
 - https://github.com/bentoml/BentoML
 - https://github.com/bmaltais/kohya_ss
-- +8000 other amazing GitHub repositories 💪
+- +9000 other amazing GitHub repositories 💪
 Thank you for using us ❤️.
@@ -238,7 +238,7 @@ We also want to thank @heejkoo for the very helpful overview of papers, code and
 ```bibtex
 @misc{von-platen-etal-2022-diffusers,
-  author={Patrick von Platen and Suraj Patil and Anton Lozhkov and Pedro Cuenca and Nathan Lambert and Kashif Rasul and Mishig Davaadorj and Thomas Wolf},
+  author={Patrick von Platen and Suraj Patil and Anton Lozhkov and Pedro Cuenca and Nathan Lambert and Kashif Rasul and Mishig Davaadorj and Dhruv Nair and Sayak Paul and William Berman and Yiyi Xu and Steven Liu and Thomas Wolf},
-<!--Copyright 2023 The HuggingFace Team. All rights reserved.
+<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
@@ -12,14 +12,18 @@ specific language governing permissions and limitations under the License.
# IP-Adapter
-[IP-Adapter](https://hf.co/papers/2308.06721) is a lightweight adapter that enables prompting a diffusion model with an image. This method decouples the cross-attention layers of the image and text features. The image features are generated from an image encoder. Files generated from IP-Adapter are only ~100MBs.
+[IP-Adapter](https://hf.co/papers/2308.06721) is a lightweight adapter that enables prompting a diffusion model with an image. This method decouples the cross-attention layers of the image and text features. The image features are generated from an image encoder.
<Tip>
-Learn how to load an IP-Adapter checkpoint and image in the [IP-Adapter](../../using-diffusers/loading_adapters#ip-adapter) loading guide.
+Learn how to load an IP-Adapter checkpoint and image in the IP-Adapter [loading](../../using-diffusers/loading_adapters#ip-adapter) guide, and you can see how to use it in the [usage](../../using-diffusers/ip_adapter) guide.
</Tip>
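A minimal sketch of loading and applying an IP-Adapter, assuming the `h94/IP-Adapter` weights and a Stable Diffusion 1.5 base (the reference-image URL is a placeholder):

```python
import torch
from diffusers import AutoPipelineForText2Image
from diffusers.utils import load_image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
pipeline.set_ip_adapter_scale(0.6)  # how strongly the image prompt steers generation

ip_image = load_image("https://example.com/reference.png")  # placeholder URL
image = pipeline(
    prompt="a polar bear, best quality",
    ip_adapter_image=ip_image,
    num_inference_steps=50,
).images[0]
```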
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Consistency Decoder
Consistency decoder can be used to decode the latents from the denoising UNet in the [`StableDiffusionPipeline`]. This decoder was introduced in the [DALL-E 3 technical report](https://openai.com/dall-e-3).
The original codebase can be found at [openai/consistencydecoder](https://github.com/openai/consistencydecoder).
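A short sketch of swapping the consistency decoder in for the default VAE (the checkpoint ids are assumptions based on the linked codebase and common Hub repos):

```python
import torch
from diffusers import ConsistencyDecoderVAE, StableDiffusionPipeline

# Replace the default VAE with the consistency decoder when building the pipeline.
vae = ConsistencyDecoderVAE.from_pretrained("openai/consistency-decoder", torch_dtype=torch.float16)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", vae=vae, torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of a horse in a field").images[0]
```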
[AnimateLCM](https://animatelcm.github.io/) is a motion module checkpoint and an [LCM LoRA](https://huggingface.co/docs/diffusers/using-diffusers/inference_with_lcm_lora) that have been created using a consistency learning strategy that decouples the distillation of the image generation priors and the motion generation priors.
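A rough usage sketch, assuming the `wangfuyun/AnimateLCM` motion module and LCM LoRA referenced on the project page (the exact LoRA file name is an assumption; check the repository):

```python
import torch
from diffusers import AnimateDiffPipeline, LCMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

adapter = MotionAdapter.from_pretrained("wangfuyun/AnimateLCM", torch_dtype=torch.float16)
pipe = AnimateDiffPipeline.from_pretrained(
    "emilianJR/epiCRealism", motion_adapter=adapter, torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config, beta_schedule="linear")

# LoRA weight name below is an assumption; see the AnimateLCM repository for the exact file.
pipe.load_lora_weights("wangfuyun/AnimateLCM", weight_name="AnimateLCM_sd15_t2v_lora.safetensors", adapter_name="lcm-lora")
pipe.set_adapters(["lcm-lora"], [0.8])

frames = pipe(prompt="a rocket launching, cinematic", num_inference_steps=6, guidance_scale=2.0).frames[0]
export_to_gif(frames, "animatelcm.gif")
```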
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# LEDITS++
LEDITS++ was proposed in [LEDITS++: Limitless Image Editing using Text-to-Image Models](https://huggingface.co/papers/2311.16711) by Manuel Brack, Felix Friedrich, Katharina Kornmeier, Linoy Tsaban, Patrick Schramowski, Kristian Kersting, Apolinário Passos.
The abstract from the paper is:
*Text-to-image diffusion models have recently received increasing interest for their astonishing ability to produce high-fidelity images from solely text inputs. Subsequent research efforts aim to exploit and apply their capabilities to real image editing. However, existing image-to-image methods are often inefficient, imprecise, and of limited versatility. They either require time-consuming fine-tuning, deviate unnecessarily strongly from the input image, and/or lack support for multiple, simultaneous edits. To address these issues, we introduce LEDITS++, an efficient yet versatile and precise textual image manipulation technique. LEDITS++'s novel inversion approach requires no tuning nor optimization and produces high-fidelity results with a few diffusion steps. Second, our methodology supports multiple simultaneous edits and is architecture-agnostic. Third, we use a novel implicit masking technique that limits changes to relevant image regions. We propose the novel TEdBench++ benchmark as part of our exhaustive evaluation. Our results demonstrate the capabilities of LEDITS++ and its improvements over previous methods. The project page is available at https://leditsplusplus-project.static.hf.space .*
<Tip>
You can find additional information about LEDITS++ on the [project page](https://leditsplusplus-project.static.hf.space/index.html) and try it out in a [demo](https://huggingface.co/spaces/editing-images/leditsplusplus).
</Tip>
<Tip warning={true}>
Due to some backward compatibility issues with the current diffusers implementation of [`~schedulers.DPMSolverMultistepScheduler`], this implementation of LEDITS++ can no longer guarantee perfect inversion.
This issue is unlikely to have any noticeable effects on applied use cases. However, we provide an alternative implementation that guarantees perfect inversion in a dedicated [GitHub repo](https://github.com/ml-research/ledits_pp).
</Tip>
We provide two distinct pipelines based on different pre-trained models.
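As a rough sketch of the intended invert-then-edit workflow (method and argument names follow the diffusers LEDITS++ pipelines; the checkpoint, image URL, and parameter values are illustrative assumptions):

```python
import torch
from diffusers import LEditsPPPipelineStableDiffusion
from diffusers.utils import load_image

pipe = LEditsPPPipelineStableDiffusion.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = load_image("https://example.com/photo.png")  # placeholder URL

# 1) Invert the real image into the model's latent space (tuning-free).
pipe.invert(image=image, num_inversion_steps=50, skip=0.1)

# 2) Apply one or more simultaneous edits on top of the inversion.
edited = pipe(
    editing_prompt=["sunglasses"],
    edit_guidance_scale=4.0,
    edit_threshold=0.9,
).images[0]
```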
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Stable Cascade
This model is built upon the [Würstchen](https://openreview.net/forum?id=gU58d5QeGv) architecture and its main
difference from other models like Stable Diffusion is that it works in a much smaller latent space. Why is this
important? The smaller the latent space, the **faster** you can run inference and the **cheaper** the training becomes.
How small is the latent space? Stable Diffusion uses a compression factor of 8, resulting in a 1024x1024 image being
encoded to 128x128. Stable Cascade achieves a compression factor of 42, meaning that it is possible to encode a
1024x1024 image to 24x24 while maintaining crisp reconstructions. The text-conditional model is then trained in the
highly compressed latent space. Previous versions of this architecture achieved a 16x cost reduction over Stable
Diffusion 1.5.
Therefore, this kind of model is well suited for use cases where efficiency is important. Furthermore, all known extensions
like finetuning, LoRA, ControlNet, IP-Adapter, LCM, etc. are possible with this method as well.
The original codebase can be found at [Stability-AI/StableCascade](https://github.com/Stability-AI/StableCascade).
## Model Overview
Stable Cascade consists of three models: Stage A, Stage B and Stage C, representing a cascade to generate images,
hence the name "Stable Cascade".
Stage A & B are used to compress images, similar to what the job of the VAE is in Stable Diffusion.
However, with this setup, a much higher compression of images can be achieved. While the Stable Diffusion models use a
spatial compression factor of 8, encoding an image with resolution of 1024 x 1024 to 128 x 128, Stable Cascade achieves
a compression factor of 42. This encodes a 1024 x 1024 image to 24 x 24, while being able to accurately decode the
image. This comes with the great benefit of cheaper training and inference. Furthermore, Stage C is responsible
for generating the small 24 x 24 latents given a text prompt.
The Stage C model operates on the small 24 x 24 latents and denoises the latents conditioned on text prompts. The model is also the largest component in the Cascade pipeline and is meant to be used with the `StableCascadePriorPipeline`.
The Stage B and Stage A models are used with the `StableCascadeDecoderPipeline` and are responsible for generating the final image given the small 24 x 24 latents.
<Tip warning={true}>
There are some restrictions on data types that can be used with the Stable Cascade models. The official checkpoints for the `StableCascadePriorPipeline` do not support the `torch.float16` data type. Please use `torch.bfloat16` instead.
In order to use the `torch.bfloat16` data type with the `StableCascadeDecoderPipeline` you need to have PyTorch 2.2.0 or higher installed. This also means that using the `StableCascadeCombinedPipeline` with `torch.bfloat16` requires PyTorch 2.2.0 or higher, since it calls the `StableCascadeDecoderPipeline` internally.
If it is not possible to install PyTorch 2.2.0 or higher in your environment, the `StableCascadeDecoderPipeline` can be used on its own with the `torch.float16` data type. You can download the full precision or `bf16` variant weights for the pipeline and cast the weights to `torch.float16`.
</Tip>
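A minimal two-stage sketch following the dtype guidance above (the checkpoint ids are assumed to be the official Stability AI Hub repos; step counts and guidance values are illustrative):

```python
import torch
from diffusers import StableCascadePriorPipeline, StableCascadeDecoderPipeline

prompt = "an astronaut riding a horse, photorealistic"

# Stage C: generate the small 24x24 latents from the text prompt (use bfloat16, not float16).
prior = StableCascadePriorPipeline.from_pretrained(
    "stabilityai/stable-cascade-prior", torch_dtype=torch.bfloat16
).to("cuda")
prior_output = prior(prompt=prompt, num_inference_steps=20, guidance_scale=4.0)

# Stages A & B: decode the latents into the final image.
decoder = StableCascadeDecoderPipeline.from_pretrained(
    "stabilityai/stable-cascade", torch_dtype=torch.bfloat16
).to("cuda")
image = decoder(
    image_embeddings=prior_output.image_embeddings,
    prompt=prompt,
    num_inference_steps=10,
    guidance_scale=0.0,
).images[0]
image.save("stable_cascade.png")
```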