* dreambooth training
* train_dreambooth validation scheduler
* set a particular scheduler via a string
* modify readme after setting a particular scheduler via a string
* modify readme after setting a particular scheduler
* use importlib to set a particular scheduler
* import with correct sort
* Fix AutoencoderTiny encoder scaling convention
* Add [-1, 1] -> [0, 1] rescaling to EncoderTiny
* Move [0, 1] -> [-1, 1] rescaling from AutoencoderTiny.decode to DecoderTiny
(i.e. immediately after the final conv, as early as possible)
* Fix missing [0, 255] -> [0, 1] rescaling in AutoencoderTiny.forward
* Update AutoencoderTinyIntegrationTests to protect against scaling issues.
The new test constructs a simple image, round-trips it through AutoencoderTiny,
and confirms the decoded result is approximately equal to the source image.
This test checks behavior with and without tiling enabled.
This test will fail if new AutoencoderTiny scaling issues are introduced.
* Context: Raw TAESD weights expect images in [0, 1], but diffusers'
convention represents images with zero-centered values in [-1, 1],
so AutoencoderTiny needs to scale / unscale images at the start of
encoding and at the end of decoding in order to work with diffusers (a minimal sketch of this convention follows this list).
* Re-add existing AutoencoderTiny test, update golden values
* Add comments to AutoencoderTiny.forward
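A minimal sketch of the scaling convention described above, using plain tensor math (the helper names are illustrative, not the actual EncoderTiny/DecoderTiny code):

```python
import torch

def diffusers_to_taesd(images: torch.Tensor) -> torch.Tensor:
    # [-1, 1] (diffusers convention) -> [0, 1] (raw TAESD weights), applied at the start of encoding
    return images.add(1).div(2)

def taesd_to_diffusers(images: torch.Tensor) -> torch.Tensor:
    # [0, 1] -> [-1, 1], applied at the end of decoding, immediately after the final conv
    return images.mul(2).sub(1)
```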
This is a better method than comparing against a hard-coded list of supported backends, as it supports any backend as long as it is installed on the user's system.
This should have no effect on the behaviour of tests on Hugging Face's CI workers.
See transformers#25506 where this approach has already been added.
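A minimal sketch of the importlib-based check; the helper name is hypothetical and the real test utility from transformers#25506 may differ:

```python
import importlib.util

def is_backend_available(backend: str) -> bool:
    # True if the package is importable on the user's system; no hard-coded
    # list of supported backends is needed.
    return importlib.util.find_spec(backend) is not None

print(is_backend_available("torch"), is_backend_available("flax"))
```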
* Update loaders.py
add a config_file argument to from_single_file and forward it when
download_from_original_stable_diffusion_ckpt is used
* Update loaders.py
add a config_file argument to from_single_file and forward it when
download_from_original_stable_diffusion_ckpt is used (usage sketch at the end of this entry)
* change config_file to original_config_file
* make style && make quality
---------
Co-authored-by: jianghua.zuo <jianghua.zuo@weimob.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
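A hedged usage sketch of the new argument after the rename to original_config_file (the paths are placeholders):

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "path/to/model.ckpt",
    # forwarded to download_from_original_stable_diffusion_ckpt instead of
    # letting it guess or fetch the original YAML config
    original_config_file="path/to/v1-inference.yaml",
)
```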
* Add SDXL long weighted prompt pipeline
* Add SDXL long weighted prompt pipeline usage sample in the readme document
* Add SDXL long weighted prompt pipeline usage sample in the readme document, add result image
* make safetensors default
* set default save method as safetensors (usage sketch at the end of this entry)
* update tests
* update to support saving safetensors
* update test to account for safetensors default
* update example tests to use safetensors
* update example to support safetensors
* update unet tests for safetensors
* fix failing loader tests
* fix qc issues
* fix pipeline tests
* fix example test
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
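A hedged sketch of the new default, assuming the standard ModelMixin.save_pretrained API (the checkpoint id is a placeholder):

```python
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained("some/sd-checkpoint", subfolder="unet")
unet.save_pretrained("my-unet")                                # now writes diffusion_pytorch_model.safetensors
unet.save_pretrained("my-unet-bin", safe_serialization=False)  # opt back into the legacy .bin format
```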
* add: train to text image with sdxl script.
Co-authored-by: CaptnSeraph <s3raph1m@gmail.com>
* fix: partial func.
* fix: default value of output_dir.
* make style
* set num inference steps to 25.
* remove mentions of LoRA.
* up min version
* add: ema cli arg
* run device placement while running step.
* precompute vae encodings too.
* fix
* debug
* should work now.
* debug
* debug
* goes alright?
* style
* debugging
* debugging
* debugging
* debugging
* fix
* reinit scheduler if prediction_type was passed.
* always cast vae to float32
* better handling of snr.
Co-authored-by: bghira <bghira@users.github.com>
* the vae should be also passed
* add: docs.
* add: sdxl t2i tests
* save the pipeline
* autocast.
* fix: save_model_card
* fix: save_model_card.
---------
Co-authored-by: CaptnSeraph <s3raph1m@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: bghira <bghira@users.github.com>
* Fixing repo_id regex validation error on windows platforms
* Validating correct URL with prefix is provided
If we are loading from a URL, we don't need os.path.join and array slicing to split a repo_id and file path out of an absolute filepath.
We now check that the URL prefix is valid before doing any URL splitting; otherwise we raise a ValueError, since neither a valid filepath nor a valid URL was provided (see the sketch at the end of this entry).
* Style fixes
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
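An illustrative sketch of the validation order described above; the prefix list and helper name are hypothetical, not the actual loaders.py code:

```python
VALID_URL_PREFIXES = ("https://huggingface.co/", "huggingface.co/", "hf.co/", "https://hf.co/")

def split_hub_url(link: str) -> tuple[str, str]:
    # Validate the prefix first; if it is neither a valid file path nor a recognized
    # URL, fail early instead of os.path.join / slicing a Windows-style path.
    if not any(link.startswith(prefix) for prefix in VALID_URL_PREFIXES):
        raise ValueError(f"Invalid URL or file path: {link}")
    for prefix in VALID_URL_PREFIXES:
        link = link.removeprefix(prefix)
    repo_id = "/".join(link.split("/")[:2])
    file_path = "/".join(link.split("/")[2:])
    return repo_id, file_path
```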
* move slow pix2pixzero tests to nightly
* move slow panorama tests to nightly
* move txt2video full test to nightly
* clean up
* remove nightly test from text to video pipeline
* add load_lora_weights and save_lora_weights to StableDiffusionXLImg2ImgPipeline
* add load_lora_weights and save_lora_weights to StableDiffusionXLInpaintPipeline (usage sketch at the end of this entry)
* apply black format
* apply black format
* add copy statement
* fix statements
* fix statements
* fix statements
* run `make fix-copies`
* add pipeline class
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* style
---------
Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
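A hedged usage sketch of the newly supported methods (checkpoint and LoRA repo ids are placeholders):

```python
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline

pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16
)
pipe.load_lora_weights("some-user/some-sdxl-lora")  # load LoRA layers into the UNet / text encoders
# pipe.save_lora_weights("lora-out", unet_lora_layers=...)  # counterpart for saving trained layers
```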
* move audioldm tests to nightly
* move kandinsky im2img ddpm test to nightly
* move flax dpm test to nightly
* move diffedit dpm test to nightly
* move fp16 slow tests to nightly
* add train_text_to_image_lora_sdxl.py
* add train_text_to_image_lora_sdxl.py
* add test and minor fix
* Update examples/text_to_image/README_sdxl.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* fix unwrap_model rule
* add invisible-watermark in requirements
* del invisible-watermark
* Update examples/text_to_image/README_sdxl.md
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update examples/text_to_image/README_sdxl.md
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update examples/text_to_image/train_text_to_image_lora_sdxl.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* del comment & update readme
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* added placeholder token concatenation during training
* Update examples/textual_inversion/textual_inversion.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Faster ControlNet model instantiation, and allow ControlNets to be loaded (from ckpt) in a parallel thread alongside an SD model (ckpt) without tensor errors from a race condition
* type conversion
Default value of `control_guidance_start` and `control_guidance_end` in `StableDiffusionControlNetPipeline.check_inputs` causes `TypeError: object of type 'float' has no len()`
Proposed fix:
Convert `control_guidance_start` and `control_guidance_end` to lists when they are passed as floats (see the sketch at the end of this entry)
* Update src/diffusers/pipelines/controlnet/pipeline_controlnet.py
* Update src/diffusers/pipelines/controlnet/pipeline_controlnet.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/controlnet/pipeline_controlnet.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
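A minimal sketch of the proposed fix above, simplified from check_inputs:

```python
control_guidance_start, control_guidance_end = 0.0, 1.0  # the defaults arrive as floats

if not isinstance(control_guidance_start, (list, tuple)):
    control_guidance_start = [control_guidance_start]
if not isinstance(control_guidance_end, (list, tuple)):
    control_guidance_end = [control_guidance_end]

# len() is now well-defined for the per-ControlNet checks that follow
assert len(control_guidance_start) == len(control_guidance_end)
```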
Prevent online access when desired
- Add a config-files option to download_from_original_stable_diffusion_ckpt so network requests can be bypassed
- Add local_files_only flags to all from_pretrained requests
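A hedged sketch of staying offline with the new flag (the path is a placeholder):

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "path/to/local/checkpoint-dir",
    local_files_only=True,  # never hit the Hub; only use files already on disk
)
```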
* add zero123 pipeline to community
* add community doc
* reformat
* update zero123 pipeline, including cc_projection within diffusers; add convert ckpt scripts; support diffusers weights
* first draft
* tidy api
* apply feedback
* mdx to md
* apply feedback
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* update expected slice so img2img compile tests pass
* use default attn processor
* use default attn processor and update expected slice value to pass test
* use default attn processor
* set default attn processor and update expected slice
* set default attn processor and change precision for check
* set unet to use default attn processor
* fixed typo
* updated doc to be consistent in naming
* make style/quality
* preprocessing for 4 channels and not 6
* make style
* test for 4c
* make style/quality
* fixed test on cpu
* fixed doc typo
* changed default ckpt to 4c
* Update pipeline_stable_diffusion_ldm3d.py
---------
Co-authored-by: Aflalo <estellea@isl-iam1.rr.intel.com>
Co-authored-by: Aflalo <estellea@isl-gpu33.rr.intel.com>
Co-authored-by: Aflalo <estellea@isl-gpu38.rr.intel.com>
Update unet_1d.py
highlight the order in which the modules are actually fed in the main code; the order matters because the no-skip block attaches time embeds while the others do not
* [SDXL-IP2P] Add gif for demonstrating training processes
* [SDXL-IP2P] Add gif for demonstrating training processes
* [SDXL-IP2P] Change gif to URLs
* [SDXL-IP2P] Add URLs in case the gif does not show
---------
Co-authored-by: Harutatsu Akiyama <kf.zy.qin@gmail.com>
* fix_batch_xl
* Fix other pipelines as well
* up
* up
* Update tests/pipelines/stable_diffusion_xl/test_stable_diffusion_xl_inpaint.py
* sort
* up
* Finish it all up
Co-authored-by: Bagheera <bghira@users.github.com>
* add test for pipeline import.
* Update tests/others/test_dependencies.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* address suggestions
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* initial
* style
* from ...pipelines -> from ..pipeline_util
* make style
* fix-copies
* fix value_guided_sampling oops
* style
* add test
* Show failing test
* update from_pipe
* fix
* add controlnet, additional test and register unused original config
* update for controlnet
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* store unused config as a private attribute and pass it along when possible
* add doc
* kandinsky inpaint pipeline does not work with decoder checkpoint
* update doc
* Apply suggestions from code review
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* style
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* fix
* Apply suggestions from code review
---------
Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* fix: #4206
* add: sdxl controlnet training smoketest.
* remove unnecessary token inits.
* add: licensing to model card.
* include SDXL licensing in the model card and make public visibility default
* debugging
* debugging
* disable local file download.
* fix: training test.
* fix: ckpt prefix.
* Fix the XL ensemble not working for any Karras scheduler sigmas and an off-by-one bug
* Update src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl.py
* make style
---------
Co-authored-by: Jimmy <39@🇺🇸.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* fix bug when no cfg
* style
* fix no cfg for shap-e and cycle
* style
* fix no cfg for sdxl
* fix copies
---------
Co-authored-by: yiyixuxu <yixu310@gmail,com>
* 📄 Renamed File for Better Understanding
Renamed the 'rl' file to 'run_locomotion'. This change was made to improve the clarity and readability of the codebase. The 'rl' name was ambiguous, and 'run_locomotion' provides a clearer description of the file's purpose.
Thanks 🙌
* 📁 [Docs] Renamed Directory for Better Clarity
Renamed the 'rl' directory to 'reinforcement_learning'. This change provides a clearer understanding of the directory's purpose and its contents.
* Update examples/reinforcement_learning/README.md
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* 📝 Update README
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Fix bug in ControlNetPipelines with MultiControlNetModel of length 1
* Add tests for varying number of ControlNet models
* Fix missing indexing for control_guidance_start and control_guidance_end
* Fix code quality
* Separate test for MultiControlNet with one model
* Revert formatting of earlier test
* Add controlnet from single file
* Updates
* make style
* finish
* Apply suggestions from code review
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* feat: add act_fn param to OutValueFunctionBlock
* feat: update unet1d tests to not use mish
* feat: add `mish` as the default activation function
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* feat: drop mish tests from unet1d
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* add: controlnet sdxl.
* modifications to controlnet.
* run styling.
* add: __init__.pys
* incorporate https://github.com/huggingface/diffusers/pull/4019 changes.
* run make fix-copies.
* resize the conditioning images.
* remove autocast.
* run styling.
* disable autocast.
* debugging
* device placement.
* back to autocast.
* remove comment.
* save some memory by reusing the vae and unet in the pipeline.
* apply styling.
* Allow low precision sd xl
* finish
* finish
* changes to accommodate the improved VAE.
* modifications to how we handle vae encoding in the training.
* make style
* make existing controlnet fast tests pass.
* change vae checkpoint cli arg.
* fix: vae pretrained paths.
* fix: steps in get_scheduler().
* debugging.
* debugging.
* fix: weight conversion.
* add: docs.
* add: limited tests.
* add: datasets to the requirements.
* update docstrings and incorporate the usage of watermarking.
* incorporate fix from #4083
* fix watermarking dependency handling.
* run make fix-copies.
* Empty-Commit
* Update requirements_sdxl.txt
* remove vae upcasting part.
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* run make style
* run make fix-copies.
* disable support for multicontrolnet.
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* run make fix-copies.
* style.
* fix-copies.
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Add Recent Timestep Scheduling Improvements to DDIM Inverse Scheduler
Roll timesteps by one to reflect origin-destination semantic discrepancy
Restore `set_alpha_to_one` option to handle negative initial timesteps
Remove the `set_alpha_to_zero` option, which was unused due to the previous truncation (see the sketch at the end of this entry)
* Bugfix
* Remove unnecessary calls to `detach()`
Use `self.image_processor.preprocess` in DiffEdit pipeline functions
* Preprocess list input for inverted image latents in diffedit pipeline
* Add `timestep_spacing` and `steps_offset` to `DPMSolverMultistepInverseScheduler`
* Update expected test results to account for inverting last forward diffusion step
* Fix inversion progress bar bug
* Add first draft for proper fast tests for DDIMInverseScheduler
* Add deprecated DDIMInverseScheduler kwarg to ConfigMixin registry
* Fix test failure in DPMMultistepInverseScheduler
Inverted step specification leads to negative noise variance in SDE-based algorithms
Add first draft for proper fast tests for DPMMultistepInverseScheduler
* Update expected test results to account for inverting last forward diffusion step
Clean up diffedit fast test
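A hedged sketch of pairing the inverse scheduler with its forward counterpart via from_config (the checkpoint id is a placeholder; exact defaults may differ):

```python
from diffusers import DDIMInverseScheduler, DDIMScheduler

scheduler = DDIMScheduler.from_pretrained("some/sd-checkpoint", subfolder="scheduler")
# set_alpha_to_one handles the negative initial timestep produced by rolling the
# schedule by one, as described above.
inverse_scheduler = DDIMInverseScheduler.from_config(scheduler.config, set_alpha_to_one=True)
```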
* Quick implementation of t2i-adapter
Load adapter module with from_pretrained
Prototyping generalized adapter framework
Write up doc string for sideload framework (WIP) + some minor updates on implementation
Update adapter models
Remove old adapter optional args in UNet
Add StableDiffusionAdapterPipeline unit test
Handle cpu offload in StableDiffusionAdapterPipeline
Auto correct coding style
Update model repo name to "RzZ/sd-v1-4-adapter-pipeline"
Refactor MultiAdapter to better compatible with config system
Export MultiAdapter
Create pipeline document template from controlnet
Create dummy objects
Supporting new AdapterLight model
Fix StableDiffusionAdapterPipeline common pipeline test
[WIP] Update adapter pipeline document
Handle num_inference_steps in StableDiffusionAdapterPipeline
Update definition of Adapter "channels_in"
Update documents
Apply code style
Fix doc typo and merge error
Update doc string and example
Quality of life improvement
Remove redundant code and file from prototyping
Remove unused package
Remove comments
Fix title
Fix typo
Add conditioning scale arg
Bring back old implementation
Offload sideload
Add supplementary info to the document
Update src/diffusers/models/adapter.py
Co-authored-by: Will Berman <wlbberman@gmail.com>
Update MultiAdapter constructor
Swap out custom checkpoint and update pipeline constructor
Update document
Apply suggestions from code review
Co-authored-by: Will Berman <wlbberman@gmail.com>
Correcting style
Following single-file policy
Update auto size in image preprocess func
Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_adapter.py
Co-authored-by: Will Berman <wlbberman@gmail.com>
fix copies
Update adapter pipeline behavior
Add adapter_conditioning_scale doc string
Add the missing doc string
Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Fix few bugs from suggestion
Handle L-mode PIL image as control image
Rename to differentiate adapter resblock
Update src/diffusers/models/adapter.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Fix typo
Update adapter parameter name
Update test case and code style
Fix copies
Fix typo
Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_adapter.py
Co-authored-by: Will Berman <wlbberman@gmail.com>
Update Adapter class name
Add checkpoint converting script
Fix style
Fix-copies
Remove dev script
Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Updates for parameter rename
Fix convert_adapter
remove main
fix diff
more
refactoring
more
more
small fixes
refactor
tests
more slow tests
more tests
Update docs/source/en/api/pipelines/overview.mdx
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
add community contributor to docs
Update docs/source/en/api/pipelines/stable_diffusion/adapter.mdx
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Update docs/source/en/api/pipelines/stable_diffusion/adapter.mdx
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Update docs/source/en/api/pipelines/stable_diffusion/adapter.mdx
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Update docs/source/en/api/pipelines/stable_diffusion/adapter.mdx
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Update docs/source/en/api/pipelines/stable_diffusion/adapter.mdx
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
fix
remove from_adapters
license
paper link
docs
more url fixes
more docs
fix
fixes
fix
fix
* fix sample inplace add
* additional_kwargs -> additional_residuals
* move t2i adapter pipeline to own module
* preprocess -> _preprocess_adapter_image
* add TencentArc to license
* fix example code links
* add image converter and fix example doc string
* fix links
* clearer additional residual application
---------
Co-authored-by: HimariO <dsfhe49854@gmail.com>
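A hedged usage sketch of the adapter pipeline introduced above (repo ids and the blank control image are placeholders; a real control image would be e.g. a canny edge map):

```python
from PIL import Image
from diffusers import StableDiffusionAdapterPipeline, T2IAdapter

adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_canny_sd14v1")
pipe = StableDiffusionAdapterPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", adapter=adapter
)

control_image = Image.new("L", (512, 512))  # placeholder; L-mode images are handled too
image = pipe(
    "a house in the forest",
    image=control_image,
    adapter_conditioning_scale=1.0,  # the new conditioning scale argument
).images[0]
```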
* 📝 Update doc with more descriptive title and filename for "IF" section
Updated the documentation to provide a more descriptive title and filename for the "IF" section. Previously, having only "IF" as the title was not conveying a clear meaning. By renaming the section to "DeepFloyd IF," we provide users with a more informative and context-specific heading.
Thanks! 🙌
* 📝 Update name for "IF" section in README
Updated the link and name for the "IF" section in the README file to reflect the new heading "DeepFloyd IF."
* 📝 Fix broken link for "Instruct Pix2Pix" section in README
Fixed the broken link for the "Instruct Pix2Pix" section in the README file. Previously, the link was pointing to an incorrect location due to the presence of "stable_diffusion" in the URL. By removing "stable_diffusion" from the URL, I have corrected the error and ensured that users are directed to the correct section.
* 🔧💼 Updated parameters in _toctree.yml file
- ✏️ Updated 'local' parameter to 'api/pipelines/deepfloyd_if'.
- ✏️ Updated 'title' parameter to 'DeepFloyd IF'.
🎯 These changes aim to improve visibility and accessibility in the documentation of the DeepFloyd IF pipeline. 🚀📚
* add noise_sampler to StableDiffusionKDiffusionPipeline
* fix/docs: Fix the broken doc links (#3897)
* fix/docs: Fix the broken doc links
Signed-off-by: GitHub <noreply@github.com>
* Update docs/source/en/using-diffusers/write_own_pipeline.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
---------
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Add video img2img (#3900)
* Add image to image video
* Improve
* better naming
* make fix copies
* add docs
* finish tests
* trigger tests
* make style
* correct
* finish
* Fix more
* make style
* finish
* fix/doc-code: Updating to the latest version parameters (#3924)
fix/doc-code: update to use the new parameter
Signed-off-by: GitHub <noreply@github.com>
* fix/doc: no import torch issue (#3923)
fix/doc: no import torch issue
Signed-off-by: GitHub <noreply@github.com>
* Correct controlnet out of list error (#3928)
* Correct controlnet out of list error
* Apply suggestions from code review
* correct tests
* correct tests
* fix
* test all
* Apply suggestions from code review
* test all
* test all
* Apply suggestions from code review
* Apply suggestions from code review
* fix more tests
* Fix more
* Apply suggestions from code review
* finish
* Apply suggestions from code review
* Update src/diffusers/schedulers/scheduling_k_dpm_2_ancestral_discrete.py
* finish
* Adding better way to define multiple concepts and also validation capabilities. (#3807)
* - Added validation parameters
- Changed some parameter descriptions to better explain their use.
- Fixed a few typos.
- Added concept_list parameter for better management of multiple subjects
- changed logic for image validation
* - Fixed bad logic for class data root directories
* Defaulting validation_steps to None for an easier logic
* Fixed multiple validation prompts
* Fixed bug on validation negative prompt
* Changed validation logic for tracker.
* Added uuid for validation image labeling
* Fix error when comparing validation prompts and validation negative prompts
* Improved error message when negative prompts for validation are more than the number of prompts
* - Changed image tracking number from epoch to global_step
- Added Typing for functions
* Added more validations when using the concept_list parameter together with the regular ones.
* Fixed error message
* Added more validations for validation parameters
* Improved messaging for errors
* Fixed validation error for parameters with default values
* - Added train step to image name for validation
- reformatted code
* - Added train step to image's name for validation
- reformatted code
* Updated README.md file.
* reverted back original script of train_dreambooth.py
* reverted back original script of train_dreambooth.py
* left one blank line at the eof
* reverted back setup.py
* reverted back setup.py
* added same logic for when parameters for prior preservation are used without enabling the flag while using concept_list parameter.
* Ran black formatter.
* fixed a few strings
* fixed import sort with isort and removed fstrings without placeholder
* fixed import order with ruff (since isort wasn't ok)
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* [ldm3d] Update code to be functional with the new checkpoints (#3875)
* fixed typo
* updated doc to be consistent in naming
* make style/quality
* preprocessing for 4 channels and not 6
* make style
* test for 4c
* make style/quality
* fixed test on cpu
---------
Co-authored-by: Aflalo <estellea@isl-iam1.rr.intel.com>
Co-authored-by: Aflalo <estellea@isl-gpu33.rr.intel.com>
Co-authored-by: Aflalo <estellea@isl-gpu38.rr.intel.com>
* Improve memory text to video (#3930)
* Improve memory text to video
* Apply suggestions from code review
* add test
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* finish test setup
---------
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* revert automatic chunking (#3934)
* revert automatic chunking
* Apply suggestions from code review
* revert automatic chunking
* avoid upcasting by assigning dtype to noise tensor (#3713)
* avoid upcasting by assigning dtype to noise tensor
* make style
* Update train_unconditional.py
* Update train_unconditional.py
* make style
* add unit test for pickle
* revert change
---------
Co-authored-by: root <root@orttrainingdev8.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Prathik Rao <prathikrao@microsoft.com@orttrainingdev8.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
* Fix failing np tests (#3942)
* Fix failing np tests
* Apply suggestions from code review
* Update tests/pipelines/test_pipelines_common.py
* Add `timestep_spacing` and `steps_offset` to schedulers (#3947)
* Add timestep_spacing to DDPM, LMSDiscrete, PNDM.
* Remove spurious line.
* More easy schedulers.
* Add `linspace` to DDIM
* Noise sigma for `trailing`.
* Add timestep_spacing to DEISMultistepScheduler.
Not sure the range is the way it was intended.
* Fix: remove line used to debug.
* Support timestep_spacing in DPMSolverMultistep, DPMSolverSDE, UniPC
* Fix: convert to numpy.
* Use sched. defaults when instantiating from_config
For params not present in the original configuration.
This makes it possible to switch pipeline schedulers even if they use
different timestep_spacing (or any other param).
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Missing args in DPMSolverMultistep
* Test: default args not in config
* Style
* Fix scheduler name in test
* Remove duplicated entries
* Add test for solver_type
This test currently fails in main. When switching from DEIS to UniPC,
solver_type is "logrho" (the default value from DEIS), which gets
translated to "bh1" by UniPC. This is different to the default value for
UniPC: "bh2". This is where the translation happens: 36d22d0709/src/diffusers/schedulers/scheduling_unipc_multistep.py (L171)
* UniPC: use same default for solver_type
Fixes a bug when switching to UniPC from another scheduler (e.g.,
DEIS) that uses a different solver type. The solver is now the same as
if we had instantiated the scheduler directly.
* do not save use default values
* fix more
* fix all
* fix schedulers
* fix more
* finish for real
* finish for real
* flaky tests
* Update tests/pipelines/stable_diffusion/test_stable_diffusion_pix2pix_zero.py
* Default steps_offset to 0.
* Add missing docstrings
* Apply suggestions from code review
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
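A hedged sketch of the scheduler-switching scenario these commits address (the checkpoint id is a placeholder):

```python
from diffusers import DDIMScheduler, DPMSolverMultistepScheduler

ddim = DDIMScheduler.from_pretrained("some/sd-checkpoint", subfolder="scheduler")
# Params missing from the original config (e.g. timestep_spacing) now fall back to
# the new scheduler's own defaults instead of breaking the switch.
dpm = DPMSolverMultistepScheduler.from_config(ddim.config)
```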
* Add Consistency Models Pipeline (#3492)
* initial commit
* Improve consistency models sampling implementation.
* Add CMStochasticIterativeScheduler, which implements the multi-step sampler (stochastic_iterative_sampler) in the original code, and make further improvements to sampling.
* Add Unet blocks for consistency models
* Add conversion script for Unet
* Fix bug in new unet blocks
* Fix attention weight loading
* Make design improvements to ConsistencyModelPipeline and CMStochasticIterativeScheduler and add initial version of tests.
* make style
* Make small random test UNet class conditional and set resnet_time_scale_shift to 'scale_shift' to better match consistency model checkpoints.
* Add support for converting a test UNet and non-class-conditional UNets to the consistency models conversion script.
* make style
* Change num_class_embeds to 1000 to better match the original consistency models implementation.
* Add support for distillation in pipeline_consistency_models.py.
* Improve consistency model tests:
- Get small testing checkpoints from hub
- Modify tests to take into account "distillation" parameter of ConsistencyModelPipeline
- Add onestep, multistep tests for distillation and distillation + class conditional
- Add expected image slices for onestep tests
* make style
* Improve ConsistencyModelPipeline:
- Add initial support for class-conditional generation
- Fix initial sigma for onestep generation
- Fix some sigma shape issues
* make style
* Improve ConsistencyModelPipeline:
- add latents __call__ argument and prepare_latents method
- add check_inputs method
- add initial docstrings for ConsistencyModelPipeline.__call__
* make style
* Fix bug when randomly generating class labels for class-conditional generation.
* Switch CMStochasticIterativeScheduler to configuring a sigma schedule and make related changes to the pipeline and tests.
* Remove some unused code and make style.
* Fix small bug in CMStochasticIterativeScheduler.
* Add expected slices for multistep sampling tests and make them pass.
* Work on consistency model fast tests:
- in pipeline, call self.scheduler.scale_model_input before denoising
- get expected slices for Euler and Heun scheduler tests
- make Euler test pass
- mark Heun test as expected fail because it doesn't support prediction_type "sample" yet
- remove DPM and Euler Ancestral tests because they don't support use_karras_sigmas
* make style
* Refactor conversion script to make it easier to add more model architectures to convert in the future.
* Work on ConsistencyModelPipeline tests:
- Fix device bug when handling class labels in ConsistencyModelPipeline.__call__
- Add slow tests for onestep and multistep sampling and make them pass
- Refactor fast tests
- Refactor ConsistencyModelPipeline.__init__
* make style
* Remove the add_noise and add_noise_to_input methods from CMStochasticIterativeScheduler for now.
* Run python utils/check_copies.py --fix_and_overwrite
python utils/check_dummies.py --fix_and_overwrite to make dummy objects for new pipeline and scheduler.
* Make fast tests from PipelineTesterMixin pass.
* make style
* Refactor consistency models pipeline and scheduler:
- Remove support for Karras schedulers (only support CMStochasticIterativeScheduler)
- Move sigma manipulation, input scaling, denoising from pipeline to scheduler
- Make corresponding changes to tests and ensure they pass
* make style
* Add docstrings and further refactor pipeline and scheduler.
* make style
* Add initial version of the consistency models documentation.
* Refactor custom timesteps logic following DDPMScheduler/IFPipeline and temporarily add torch 2.0 SDPA kernel selection logic for debugging.
* make style
* Convert current slow tests to use fp16 and flash attention.
* make style
* Add slow tests for normal attention on cuda device.
* make style
* Fix attention weights loading
* Update consistency model fast tests for new test checkpoints with attention fix.
* make style
* apply suggestions
* Add add_noise method to CMStochasticIterativeScheduler (copied from EulerDiscreteScheduler).
* Conversion script now outputs pipeline instead of UNet and add support for LSUN-256 models and different schedulers.
* When both timesteps and num_inference_steps are supplied, raise warning instead of error (timesteps take precedence).
* make style
* Add remaining diffusers model checkpoints for models in the original consistency model release and update usage example.
* apply suggestions from review
* make style
* fix attention naming
* Add tests for CMStochasticIterativeScheduler.
* make style
* Make CMStochasticIterativeScheduler tests pass.
* make style
* Override test_step_shape in CMStochasticIterativeSchedulerTest instead of modifying it in SchedulerCommonTest.
* make style
* rename some models
* Improve API
* rename some models
* Remove duplicated block
* Add docstring and make torch compile work
* More fixes
* Fixes
* Apply suggestions from code review
* Apply suggestions from code review
* add more docstring
* update consistency conversion script
---------
Co-authored-by: ayushmangal <ayushmangal@microsoft.com>
Co-authored-by: Ayush Mangal <43698245+ayushtues@users.noreply.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
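A hedged usage sketch of the new pipeline (the checkpoint id is one of the converted consistency-model repos mentioned above; treat it as a placeholder):

```python
import torch
from diffusers import ConsistencyModelPipeline

pipe = ConsistencyModelPipeline.from_pretrained(
    "openai/diffusers-cd_imagenet64_l2", torch_dtype=torch.float16
).to("cuda")

# Onestep, class-conditional sampling.
image = pipe(num_inference_steps=1, class_labels=145).images[0]
# Multistep sampling; explicit timesteps take precedence over num_inference_steps.
image = pipe(num_inference_steps=None, timesteps=[22, 0], class_labels=145).images[0]
```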
* add test case for StableDiffusionKDiffusionPipeline noise_sampler
---------
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: Aisuko <urakiny@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Andrés Mauricio Repetto Ferrero <amd.repetto@gmail.com>
Co-authored-by: estelleafl <estelle.aflalo@intel.com>
Co-authored-by: Aflalo <estellea@isl-iam1.rr.intel.com>
Co-authored-by: Aflalo <estellea@isl-gpu33.rr.intel.com>
Co-authored-by: Aflalo <estellea@isl-gpu38.rr.intel.com>
Co-authored-by: Prathik Rao <prathikr@usc.edu>
Co-authored-by: root <root@orttrainingdev8.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
Co-authored-by: Prathik Rao <prathikrao@microsoft.com@orttrainingdev8.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
Co-authored-by: ayushmangal <ayushmangal@microsoft.com>
Co-authored-by: Ayush Mangal <43698245+ayushtues@users.noreply.github.com>
* Add circular padding option
* Fix style with black
* Fix corner case with small image size
* Add circular padding test cases
* Fix docstring
* Improve docstring for circular padding, remove slow test case
* Update docs for circular padding argument
* Add images comparison for circular padding
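A hedged usage sketch of the new option (the checkpoint id follows the panorama docs; treat it as a placeholder):

```python
from diffusers import DDIMScheduler, StableDiffusionPanoramaPipeline

model_id = "stabilityai/stable-diffusion-2-base"
scheduler = DDIMScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPanoramaPipeline.from_pretrained(model_id, scheduler=scheduler)

# circular_padding wraps the latent views horizontally so the left and right
# edges of the panorama line up seamlessly.
image = pipe("a photo of the dolomites", circular_padding=True).images[0]
```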
* diffusers#4003 - initial implementation of max_inference_steps
* diffusers#4003 - initial implementation of max_inference_steps and first_inference_step for img2img
* diffusers#4003 - use first_inference_step as an input arg for get_timestamps in img2img
* diffusers#4003 Do not add noise during img2img when we have a defined first timestep
* diffusers#4003 Mild updates after revert
* diffusers#4003 Missing change
* Show implementation with denoising_start and denoising_end (usage sketch at the end of this entry)
* Apply suggestions from code review
* Update src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* move to 0.19.0dev
* Apply suggestions from code review
* add exhaustive tests
* add docs
* finish
* Apply suggestions from code review
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* make style
---------
Co-authored-by: bghira <bghira@users.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
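A hedged sketch of the denoising_start / denoising_end split described above (checkpoint ids are the public SDXL base and refiner repos; treat them as placeholders):

```python
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline, StableDiffusionXLPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16
).to("cuda")

prompt = "a red castle on a hill"
# The base model handles the first 80% of the noise schedule and hands over latents...
latents = base(prompt=prompt, denoising_end=0.8, output_type="latent").images
# ...and the refiner picks up at the same point instead of adding fresh noise.
image = refiner(prompt=prompt, image=latents, denoising_start=0.8).images[0]
```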
* 📝 Fix broken link to models documentation
Corrected the link to the models documentation in the README. Previously, the link was pointing to an incorrect URL. Now, the link directs users to the correct documentation page for more details on the models.
Thanks! 🙌
* Update src/diffusers/models/README.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* refactor to support patching LoRA into T5
instantiate the lora linear layer on the same device as the regular linear layer
get lora rank from state dict
tests
fmt
can create lora layer in float32 even when rest of model is float16
fix loading model hook
remove load_lora_weights_ and T5 dispatching
remove Unet#attn_processors_state_dict
docstrings
* text encoder monkeypatch class method
* fix test
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* refactor prior_transformer
adding conversion script
add pipeline
add step_index from pipeline, + remove permute
add zero pad token
remove copy from statement for betas_for_alpha_bar function
* add
* add
* update conversion script for renderer model
* refactor camera a little bit
* clean up
* style
* fix copies
* Update src/diffusers/schedulers/scheduling_heun_discrete.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/shap_e/pipeline_shap_e.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/shap_e/pipeline_shap_e.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* alpha_transform_type
* remove step_index argument
* remove get_sigmas_karras
* remove _yiyi_sigma_to_t
* move the rescale prompt_embeds from prior_transformer to pipeline
* replace baddbmm with einsum to match original repo
* Revert "replace baddbmm with einsum to match original repo"
This reverts commit 3f6b435d65.
* add step_index to scale_model_input
* Revert "move the rescale prompt_embeds from prior_transformer to pipeline"
This reverts commit 5b5a8e6be9.
* move rescale from prior_transformer to pipeline
* correct step_index in scale_model_input
* remove print lines
* refactor prior - reduce arguments
* make style
* add prior_image
* arg embedding_proj_norm -> norm_embedding_proj
* add pre-norm for proj_embedding
* move rescale prompt from pipeline to _encode_prompt
* add img2img pipeline
* style
* copies
* Update src/diffusers/models/prior_transformer.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/models/prior_transformer.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/models/prior_transformer.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/models/prior_transformer.py
add arg: encoder_hid_proj
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/models/prior_transformer.py
add new config: norm_in_type
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/models/prior_transformer.py
add new config: added_emb_type
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/models/prior_transformer.py
rename out_dim -> clip_embed_dim
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/models/prior_transformer.py
rename config: out_dim -> clip_embed_dim
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/models/prior_transformer.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/models/prior_transformer.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* finish refactor prior_tranformer
* make style
* refactor renderer
* fix
* make style
* refactor img2img
* remove params_proj
* add test
* add upcast_softmax to prior_transformer
* enable num_images_per_prompt, add save_gif utility
* add
* add fast test
* make style
* add slow test
* style
* add test for img2img
* refactor
* enable batching
* style
* refactor scheduler
* update test
* style
* attempt to solve batch related tests timeout
* add doc
* Update src/diffusers/pipelines/shap_e/pipeline_shap_e.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/shap_e/pipeline_shap_e_img2img.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* hardcode rendering related config
* update betas_for_alpha_bar on ddpm_scheduler
* fix copies
* fix
* export_to_gif
* style
* second attempt to speed up batching tests
* add doc page to index
* Remove intermediate clipping
* 3rd attempt to speed up batching tests
* Remove time index
* simplify scheduler
* Fix more
* Fix more
* fix more
* make style
* fix schedulers
* fix some more tests
* finish
* add one more test
* Apply suggestions from code review
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* style
* apply feedback
* style
* fix copies
* add one example
* style
* add example for img2img
* fix doc
* fix more doc strings
* size -> frame_size
* style
* update doc
* style
* fix on doc
* update repo name
* improve the usage example in shap-e img2img
* add usage examples in the shap-e docs.
* consolidate examples.
* minor fix.
* update doc
* Apply suggestions from code review
* Apply suggestions from code review
* remove upcast
* Make sure background is white
* Update src/diffusers/pipelines/shap_e/pipeline_shap_e.py
* Apply suggestions from code review
* Finish
* Apply suggestions from code review
* Update src/diffusers/pipelines/shap_e/pipeline_shap_e.py
* Make style
---------
Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
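A hedged usage sketch of the new pipelines and the export_to_gif utility (the repo id is the converted Shap-E checkpoint; treat it as a placeholder):

```python
import torch
from diffusers import ShapEPipeline
from diffusers.utils import export_to_gif

pipe = ShapEPipeline.from_pretrained("openai/shap-e", torch_dtype=torch.float16).to("cuda")
images = pipe(
    "a shark",
    guidance_scale=15.0,
    num_inference_steps=64,
    frame_size=64,  # renamed from `size` per the commits above
).images
export_to_gif(images[0], "shark_3d.gif")  # each result is a list of rendered frames
```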
* Kandinsky2_2
* fix init kandinsky2_2
* kandinsky2_2 fix inpainting
* rename pipelines: remove decoder + 2_2 -> V22
* Update scheduling_unclip.py
* remove text_encoder and tokenizer arguments from doc string
* add test for text2img
* add tests for text2img & img2img
* fix
* add test for inpaint
* add prior tests
* style
* copies
* add controlnet test
* style
* add a test for controlnet_img2img
* update prior_emb2emb api to accept image_embedding or image
* add a test for prior_emb2emb
* style
* remove try except
* example
* fix
* add doc string examples to all kandinsky pipelines
* style
* update doc
* style
* add a tip about 2.2
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* vae -> movq
* vae -> movq
* style
* fix the #copied from
* remove decoder from file name
* update doc: add a section for kandinsky 2.2
* fix
* fix-copies
* add copied from
* add copies from for prior
* add copies from for prior emb2emb
* copy from for img2img
* copied from for inpaint
* more copied from
* more copies from
* more copies
* remove the yiyi comments
* Apply suggestions from code review
* Self-contained example, pipeline order
* Import prior output instead of redefining.
* Style
* Make VQModel compatible with model offload.
* Fix copies
---------
Co-authored-by: Shahmatov Arseniy <62886550+cene555@users.noreply.github.com>
Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
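A hedged usage sketch of the prior + decoder split (repo ids follow the kandinsky-community naming; treat them as placeholders):

```python
import torch
from diffusers import KandinskyV22Pipeline, KandinskyV22PriorPipeline

prior = KandinskyV22PriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
).to("cuda")
decoder = KandinskyV22Pipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
).to("cuda")

# The prior turns the prompt into CLIP image embeddings; the decoder (with its movq VAE)
# turns the embeddings into pixels.
image_embeds, negative_image_embeds = prior("a red cat, 4k photo").to_tuple()
image = decoder(
    image_embeds=image_embeds,
    negative_image_embeds=negative_image_embeds,
    height=768,
    width=768,
).images[0]
```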
* Add new text encoder
* add transformers depth
* More
* Correct conversion script
* Fix more
* Fix more
* Correct more
* correct text encoder
* Finish all
* proof that it works in run local xl
* clean up
* Get refiner to work
* Add red castle
* Fix batch size
* Improve pipelines more
* Finish text2image tests
* Add img2img test
* Fix more
* fix import
* Fix embeddings for classic models (#3888)
Fix embeddings for classic SD models.
* Allow multiple prompts to be passed to the refiner (#3895)
* finish more
* Apply suggestions from code review
* add watermarker
* Model offload (#3889)
* Model offload.
* Model offload for refiner / img2img
* Hardcode encoder offload on img2img vae encode
Saves some GPU RAM in img2img / refiner tasks so it remains below 8 GB.
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* correct
* fix
* clean print
* Update install warning for `invisible-watermark`
* add: missing docstrings.
* fix and simplify the usage example in img2img.
* fix setup for watermarking.
* Revert "fix setup for watermarking."
This reverts commit 491bc9f5a6.
* fix: watermarking setup.
* fix: op.
* run make fix-copies.
* make sure tests pass
* improve convert
* make tests pass
* make tests pass
* better error message
* finish
* finish
* Fix final test
---------
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* use sample directly instead of the dataclass.
* more usage of directly samples instead of dataclasses
* more usage of directly samples instead of dataclasses
* use direct sample in the pipeline.
* direct usage of sample in the img2img case.
* add default to unet output to prevent it from being a required arg
* add unit test
* make style
* adjust unit test
* mark as fast test
* adjust assert statement in test
---------
Co-authored-by: Prathik Rao <prathikrao@microsoft.com@orttrainingdev8.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
Co-authored-by: root <root@orttrainingdev8.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
* initial commit
* Improve consistency models sampling implementation.
* Add CMStochasticIterativeScheduler, which implements the multi-step sampler (stochastic_iterative_sampler) in the original code, and make further improvements to sampling.
* Add Unet blocks for consistency models
* Add conversion script for Unet
* Fix bug in new unet blocks
* Fix attention weight loading
* Make design improvements to ConsistencyModelPipeline and CMStochasticIterativeScheduler and add initial version of tests.
* make style
* Make small random test UNet class conditional and set resnet_time_scale_shift to 'scale_shift' to better match consistency model checkpoints.
* Add support for converting a test UNet and non-class-conditional UNets to the consistency models conversion script.
* make style
* Change num_class_embeds to 1000 to better match the original consistency models implementation.
* Add support for distillation in pipeline_consistency_models.py.
* Improve consistency model tests:
- Get small testing checkpoints from hub
- Modify tests to take into account "distillation" parameter of ConsistencyModelPipeline
- Add onestep, multistep tests for distillation and distillation + class conditional
- Add expected image slices for onestep tests
* make style
* Improve ConsistencyModelPipeline:
- Add initial support for class-conditional generation
- Fix initial sigma for onestep generation
- Fix some sigma shape issues
* make style
* Improve ConsistencyModelPipeline:
- add latents __call__ argument and prepare_latents method
- add check_inputs method
- add initial docstrings for ConsistencyModelPipeline.__call__
* make style
* Fix bug when randomly generating class labels for class-conditional generation.
* Switch CMStochasticIterativeScheduler to configuring a sigma schedule and make related changes to the pipeline and tests.
* Remove some unused code and make style.
* Fix small bug in CMStochasticIterativeScheduler.
* Add expected slices for multistep sampling tests and make them pass.
* Work on consistency model fast tests:
- in pipeline, call self.scheduler.scale_model_input before denoising
- get expected slices for Euler and Heun scheduler tests
- make Euler test pass
- mark Heun test as expected fail because it doesn't support prediction_type "sample" yet
- remove DPM and Euler Ancestral tests because they don't support use_karras_sigmas
* make style
* Refactor conversion script to make it easier to add more model architectures to convert in the future.
* Work on ConsistencyModelPipeline tests:
- Fix device bug when handling class labels in ConsistencyModelPipeline.__call__
- Add slow tests for onestep and multistep sampling and make them pass
- Refactor fast tests
- Refactor ConsistencyModelPipeline.__init__
* make style
* Remove the add_noise and add_noise_to_input methods from CMStochasticIterativeScheduler for now.
* Run python utils/check_copies.py --fix_and_overwrite
python utils/check_dummies.py --fix_and_overwrite to make dummy objects for new pipeline and scheduler.
* Make fast tests from PipelineTesterMixin pass.
* make style
* Refactor consistency models pipeline and scheduler:
- Remove support for Karras schedulers (only support CMStochasticIterativeScheduler)
- Move sigma manipulation, input scaling, denoising from pipeline to scheduler
- Make corresponding changes to tests and ensure they pass
* make style
* Add docstrings and further refactor pipeline and scheduler.
* make style
* Add initial version of the consistency models documentation.
* Refactor custom timesteps logic following DDPMScheduler/IFPipeline and temporarily add torch 2.0 SDPA kernel selection logic for debugging.
* make style
* Convert current slow tests to use fp16 and flash attention.
* make style
* Add slow tests for normal attention on cuda device.
* make style
* Fix attention weights loading
* Update consistency model fast tests for new test checkpoints with attention fix.
* make style
* apply suggestions
* Add add_noise method to CMStochasticIterativeScheduler (copied from EulerDiscreteScheduler).
* Conversion script now outputs pipeline instead of UNet and add support for LSUN-256 models and different schedulers.
* When both timesteps and num_inference_steps are supplied, raise warning instead of error (timesteps take precedence).
* make style
* Add remaining diffusers model checkpoints for models in the original consistency model release and update usage example.
* apply suggestions from review
* make style
* fix attention naming
* Add tests for CMStochasticIterativeScheduler.
* make style
* Make CMStochasticIterativeScheduler tests pass.
* make style
* Override test_step_shape in CMStochasticIterativeSchedulerTest instead of modifying it in SchedulerCommonTest.
* make style
* rename some models
* Improve API
* rename some models
* Remove duplicated block
* Add docstring and make torch compile work
* More fixes
* Fixes
* Apply suggestions from code review
* Apply suggestions from code review
* add more docstring
* update consistency conversion script
---------
Co-authored-by: ayushmangal <ayushmangal@microsoft.com>
Co-authored-by: Ayush Mangal <43698245+ayushtues@users.noreply.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Add timestep_spacing to DDPM, LMSDiscrete, PNDM.
* Remove spurious line.
* More easy schedulers.
* Add `linspace` to DDIM
* Noise sigma for `trailing`.
* Add timestep_spacing to DEISMultistepScheduler.
Not sure the range is the way it was intended.
* Fix: remove line used to debug.
* Support timestep_spacing in DPMSolverMultistep, DPMSolverSDE, UniPC
* Fix: convert to numpy.
* Use sched. defaults when instantiating from_config
For params not present in the original configuration.
This makes it possible to switch pipeline schedulers even if they use
different timestep_spacing (or any other param).
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Missing args in DPMSolverMultistep
* Test: default args not in config
* Style
* Fix scheduler name in test
* Remove duplicated entries
* Add test for solver_type
This test currently fails in main. When switching from DEIS to UniPC,
solver_type is "logrho" (the default value from DEIS), which gets
translated to "bh1" by UniPC. This is different to the default value for
UniPC: "bh2". This is where the translation happens: 36d22d0709/src/diffusers/schedulers/scheduling_unipc_multistep.py (L171)
* UniPC: use same default for solver_type
Fixes a bug when switching to UniPC from another scheduler (e.g.,
DEIS) that uses a different solver type. The solver is now the same as
if we had instantiated the scheduler directly.
* do not save use default values
* fix more
* fix all
* fix schedulers
* fix more
* finish for real
* finish for real
* flaky tests
* Update tests/pipelines/stable_diffusion/test_stable_diffusion_pix2pix_zero.py
* Default steps_offset to 0.
* Add missing docstrings
* Apply suggestions from code review
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Improve memory text to video
* Apply suggestions from code review
* add test
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* finish test setup
---------
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* - Added validation parameters
- Changed some parameter descriptions to better explain their use.
- Fixed a few typos.
- Added concept_list parameter for better management of multiple subjects
- changed logic for image validation
* - Fixed bad logic for class data root directories
* Defaulting validation_steps to None for an easier logic
* Fixed multiple validation prompts
* Fixed bug on validation negative prompt
* Changed validation logic for tracker.
* Added uuid for validation image labeling
* Fix error when comparing validation prompts and validation negative prompts
* Improved error message when negative prompts for validation are more than the number of prompts
* - Changed image tracking number from epoch to global_step
- Added Typing for functions
* Added more validations when using the concept_list parameter together with the regular ones.
* Fixed error message
* Added more validations for validation parameters
* Improved messaging for errors
* Fixed validation error for parameters with default values
* - Added train step to image name for validation
- reformatted code
* - Added train step to image's name for validation
- reformatted code
* Updated README.md file.
* reverted back original script of train_dreambooth.py
* reverted back original script of train_dreambooth.py
* left one blank line at the eof
* reverted back setup.py
* reverted back setup.py
* added same logic for when parameters for prior preservation are used without enabling the flag while using concept_list parameter.
* Ran black formatter.
* fixed a few strings
* fixed import sort with isort and removed fstrings without placeholder
* fixed import order with ruff (since isort wasn't ok)
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Correct controlnet out of list error
* Apply suggestions from code review
* correct tests
* correct tests
* fix
* test all
* Apply suggestions from code review
* test all
* test all
* Apply suggestions from code review
* Apply suggestions from code review
* fix more tests
* Fix more
* Apply suggestions from code review
* finish
* Apply suggestions from code review
* Update src/diffusers/schedulers/scheduling_k_dpm_2_ancestral_discrete.py
* finish
* Support for manual CLIP loading in StableDiffusionPipeline - txt2img.
* Update src/diffusers/pipelines/stable_diffusion/convert_from_ckpt.py
* Update variables & corresponding docs to match previous style.
* Updated to match style & quality of 'diffusers'
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Add guidance start/stop
* Add guidance start/stop to inpaint class
* Black formatting
* Add support for guidance for multicontrolnet
* Add inclusive end
* Improve design
* correct imports
* Finish
* Finish all
* Correct more
* make style
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
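A hedged usage sketch of the new guidance window arguments (model ids and the blank control image are placeholders):

```python
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny")
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "some/sd-checkpoint", controlnet=controlnet
)

canny_image = Image.new("RGB", (512, 512))  # placeholder; normally a canny edge map
# Apply ControlNet conditioning only over the middle of the schedule; the end is inclusive.
image = pipe(
    "a futuristic city",
    image=canny_image,
    control_guidance_start=0.2,
    control_guidance_end=0.8,
).images[0]
```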
* add paradigms parallel sampling pipeline
* linting
* ran make fix-copies
* add paradigms parallel sampling pipeline
* linting
* ran make fix-copies
* Apply suggestions from code review
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* changes based on review
* add docs for paradigms
* update docs with paradigms abstract
* improve documentation, and add tests for ddim/ddpm batch_step_no_noise
* fix docs and run make fix-copies
* minor changes to docs.
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* move parallel scheduler to new classes for DDPMParallelScheduler and DDIMParallelScheduler
* remove changes for scheduling_ddim, adjust licenses, credits, and commented code
* fix tensor type that is breaking tests
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* add entry for safe stable diffusion to the sd overview page.
* add missing pipelines o the broader overview section in the pipelines.
* address PR feedback./
* refactor: readme serialized from the example when push_to_hub is True.
* fix: batch size arg.
* a bit better formatting
* minor fixes.
* add note on env.
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* condition wandb info better
* make mixed_precision assignment in cli args explicit.
* separate inference block for sample images.
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* address more comments.
* autocast mode.
* correct none image type problem.
* fix: list assignment.
* minor fix.
---------
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* added ldm3d pipeline and updated image processor to support depth
* added description
* added paper reference
* added docs
* fixed bug
* added test
* Update tests/pipelines/stable_diffusion/test_stable_diffusion_ldm3d.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update tests/pipelines/stable_diffusion/test_stable_diffusion_ldm3d.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* added reference in index.mdx
* reverted changes to image processor
* added LDM3DOutput
* Fixes with make style
* fix failing tests for make fix-copies
* aligned with our version
* Update pipeline_stable_diffusion_ldm3d.py
updated the guidance scale
* Fix for failing check_code_quality test
* Code review feedback
* Fix typo in ldm3d_diffusion.mdx
* updated the doc accordingly
* copyrights
* fixed test failure
* make style
* added image processor of LDM3D in the documentation
* added ldm3d doc to toctree
* run make style && make quality
* run make fix-copies
* Update docs/source/en/api/image_processor.mdx
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Update docs/source/en/api/pipelines/stable_diffusion/ldm3d_diffusion.mdx
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Update docs/source/en/api/pipelines/stable_diffusion/ldm3d_diffusion.mdx
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* updated the safety checker to accept tuple
* make style and make quality
* Update src/diffusers/pipelines/stable_diffusion/__init__.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* LDM3D output
* up
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Aflalo <estellea@isl-gpu27.rr.intel.com>
Co-authored-by: Anahita Bhiwandiwalla <anahita.bhiwandiwalla@intel.com>
Co-authored-by: Aflalo <estellea@isl-gpu26.rr.intel.com>
Co-authored-by: Aflalo <estellea@isl-iam1.rr.intel.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Aflalo <estellea@isl-gpu42.rr.intel.com>
Co-authored-by: Aflalo <estellea@isl-gpu43.rr.intel.com>
* modify the issue template to include core maintainers.
* add: entry for audio.
* Update .github/ISSUE_TEMPLATE/bug-report.yml
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Update pipeline_flax_controlnet.py
Change type of images array from jax.numpy.array to numpy.ndarray to permit in-place modification of the array when the safety checker detects a NSFW image.
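A minimal sketch of the conversion this change describes, assuming the surrounding Flax pipeline code; the array shapes and `has_nsfw` flags below are illustrative:
```python
# Sketch only: JAX arrays are immutable, so decoded images are converted to a
# mutable numpy.ndarray before flagged images are blacked out in place.
import jax.numpy as jnp
import numpy as np

images = jnp.zeros((2, 512, 512, 3))   # decoded images as a JAX array (illustrative)
images = np.asarray(images)            # now a numpy.ndarray that supports item assignment
has_nsfw = [False, True]               # illustrative safety-checker output
for i, flagged in enumerate(has_nsfw):
    if flagged:
        images[i] = np.zeros_like(images[i])  # in-place replacement now works
```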
* fix docs typos. add frame_ids argument to text2video-zero pipeline call
* make style && make quality
* add support of pytorch 2.0 scaled_dot_product_attention for CrossFrameAttnProcessor
* add chunk-by-chunk processing to text2video-zero docs
* make style && make quality
* Update docs/source/en/api/pipelines/text_to_video_zero.mdx
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* added StableDiffusionCanvasPipeline pipeline
* Added utils code to the pipe_utils file.
* make style
* delete mixture.py and Text2ImageRegion class
* make style
* Added the code to the readme.md file.
* Moved functions from pipeline_utils to mix_canvas
* Implement option for rescaling betas to zero terminal SNR
* Implement rescale classifier free guidance in pipeline_stable_diffusion.py
* focus on DDIM
* make style
* make style
* make style
* make style
* Apply suggestions from Peter Lin
* Apply suggestions from Peter Lin
* make style
* Apply suggestions from code review
* Apply suggestions from code review
* make style
* make style
---------
Co-authored-by: MaxWe00 <gitlab.9v1lq@slmail.me>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Add draft for lora text encoder scale
* Improve naming
* fix: training dreambooth lora script.
* Apply suggestions from code review
* Update examples/dreambooth/train_dreambooth_lora.py
* Apply suggestions from code review
* Apply suggestions from code review
* add lora mixin when fit
* add lora mixin when fit
* add lora mixin when fit
* fix more
* fix more
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* support views batch for panorama
* add entry for the new argument
* format entry for the new argument
* add view_batch_size test
* fix batch test and a boundary condition
* add more docstrings
* fix a typo
* fix typos
* add: entry to the doc about view_batch_size.
* Revert "add: entry to the doc about view_batch_size."
This reverts commit a36aeaa9ed.
* add a tip on .
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
VaeImageProcessor.preprocess refactor
* refactored VaeImageProcessor
- allow passing optional height and width argument to resize()
- add convert_to_rgb
* refactored prepare_latents method for img2img pipelines so that if we pass latents directly as image input, it will not encode it again
* added a test in test_pipelines_common.py to test latents as image inputs
* refactored img2img pipelines that accept latents as image:
- controlnet img2img, stable diffusion img2img, instruct_pix2pix
---------
Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* dreambooth if docs - stage II, more info
* Update docs/source/en/training/dreambooth.mdx
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update docs/source/en/training/dreambooth.mdx
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update docs/source/en/training/dreambooth.mdx
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* download instructions for downsized images
* update source README to match docs
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* iterate over unique tokens to avoid duplicate replacements
* added test for multiple references to multi embedding
* adhere to black formatting
* reorder test post-rebase
* add _convert_kohya_lora_to_diffusers
* make style
* add scaffold
* match result: unet attention only
* fix monkey-patch for text_encoder
* with CLIPAttention
While the terrible images are no longer produced,
the results do not match those from the hook version.
This may be due to not setting the network_alpha value.
* add to support network_alpha
* generate diff image
* fix monkey-patch for text_encoder
* add test_text_encoder_lora_monkey_patch()
* verify that it's okay to release the attn_procs
* fix closure version
* add comment
* Revert "fix monkey-patch for text_encoder"
This reverts commit bb9c61e6fa.
* Fix to reuse utility functions
* make LoRAAttnProcessor target self_attn
* fix LoRAAttnProcessor target
* make style
* fix split key
* Update src/diffusers/loaders.py
* remove TEXT_ENCODER_TARGET_MODULES loop
* add print memory usage
* remove test_kohya_loras_scaffold.py
* add: doc on LoRA civitai
* remove print statement and refactor in the doc.
* fix state_dict test for kohya-ss style lora
* Apply suggestions from code review
Co-authored-by: Takuma Mori <takuma104@gmail.com>
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* update code to reflect latest changes as of May 30th
* update text to image example
* reflect changes to textual inversion
* make style
* fix typo
* Revert unnecessary readme changes
---------
Co-authored-by: root <root@orttrainingdev8.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
Co-authored-by: Prathik Rao <prathikrao@microsoft.com@orttrainingdev8.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
* Throw error if strength adjusted num_inference_steps < 1
* Added new fast test to check ValueError raised when num_inference_steps < 1
when strength adjusts the num_inference_steps then the inpainting pipeline should fail
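A hedged sketch of the check described above; the variable names and error message are illustrative, not the pipeline's exact wording:
```python
# Illustrative validation: strength scales down the effective number of steps,
# and the pipeline should error out rather than run zero denoising steps.
num_inference_steps = 50
strength = 0.01
effective_steps = int(num_inference_steps * strength)
if effective_steps < 1:
    raise ValueError(
        f"num_inference_steps={num_inference_steps} with strength={strength} "
        f"leaves {effective_steps} steps; increase strength or num_inference_steps."
    )
```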
* fix #3487: initial latents are now only scaled by init_noise_sigma when starting from pure noise
updated this commit w.r.t the latest merge here: https://github.com/huggingface/diffusers/pull/3533
* fix
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Fix a bug of pano when not doing CFG (#3030)
* Fix a bug of pano when not doing CFG
* enhance code quality
* apply formatting.
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Text2video zero refinements (#3070)
* fix progress bar issue in pipeline_text_to_video_zero.py. Copy scheduler after first backward
* fix tensor loading in test_text_to_video_zero.py
* make style && make quality
* Release: v0.15.0
* [Tests] Speed up panorama tests (#3067)
* fix: norm group test for UNet3D.
* chore: speed up the panorama tests (fast).
* set default value of _test_inference_batch_single_identical.
* fix: batch_sizes default value.
* [Post release] v0.16.0dev (#3072)
* Adds profiling flags, computes train metrics average. (#3053)
* WIP controlnet training
- bugfix --streaming
- bugfix running report_to!='wandb'
- adds memory profile before validation
* Adds final logging statement.
* Sets train epochs to 11.
Looking at a longer ~16ep run, we see only good validation images
after ~11ep:
https://wandb.ai/andsteing/controlnet_fill50k/runs/3j2hx6n8
* Removes --logging_dir (it's not used).
* Adds --profile flags.
* Updates --output_dir=runs/fill-circle-{timestamp}.
* Compute mean of `train_metrics`.
Previously `train_metrics[-1]` was logged, resulting in very bumpy train
metrics.
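A small sketch of the averaging idea (plain Python/NumPy here; the actual script works on JAX metric pytrees, and the metric names are illustrative):
```python
import numpy as np

# Illustrative per-step metrics collected between logging events.
train_metrics = [{"loss": 0.92, "lr": 1e-4}, {"loss": 0.81, "lr": 1e-4}, {"loss": 0.85, "lr": 1e-4}]

# Log the mean over the window instead of train_metrics[-1].
mean_metrics = {k: float(np.mean([m[k] for m in train_metrics])) for k in train_metrics[0]}
print(mean_metrics)
```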
* Improves logging a bit.
- adds l2_grads gradient norm logging
- adds steps_per_sec
- sets walltime as x coordinate of train/step
- logs controlnet_params config
* Adds --ccache (doesn't really help though).
* minor fix in controlnet flax example (#2986)
* fix the error when push_to_hub is set but validation is not logged
* controlnet_from_pt & controlnet_revision
* add intermediate checkpointing to the guide
* Bugfix --profile_steps
* Sets `TRACKER_PROJECT_NAME='controlnet_fill50k'`.
* Logs fractional epoch.
* Adds relative `walltime` metric.
* Adds `StepTraceAnnotation` and uses `global_step` instead of `step`.
* Applied `black`.
* Streamlines commands in README a bit.
* Removes `--ccache`.
This makes only a very small difference (~1 min) with this model size, so removing
the option introduced in cdb3cc.
* Re-ran `black`.
* Update examples/controlnet/README.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Converts spaces to tab.
* Removes repeated args.
* Skips first step (compilation) in profiling
* Updates README with profiling instructions.
* Unifies tabs/spaces in README.
* Re-ran style & quality.
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* [Pipelines] Make sure that None functions are correctly not saved (#3080)
* doc string example remove from_pt (#3083)
* [Tests] parallelize (#3078)
* [Tests] parallelize
* finish folder structuring
* Parallelize tests more
* Correct saving of pipelines
* make sure logging level is correct
* try again
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
---------
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Throw deprecation warning for return_cached_folder (#3092)
Throw deprecation warning
* Allow SD attend and excite pipeline to work with any size output images (#2835)
Allow stable diffusion attend and excite pipeline to work with any size output image. Re: #2476, #2603
* [docs] Update community pipeline docs (#2989)
* update community pipeline docs
* fix formatting
* explain sharing workflows
* Add to support Guess Mode for StableDiffusionControlnetPipleline (#2998)
* add guess mode (WIP)
* fix uncond/cond order
* support guidance_scale=1.0 and batch != 1
* remove magic coeff
* add docstring
* add integration test
* add document to controlnet.mdx
* made the comments a bit more explanatory
* fix table
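A hedged usage sketch of the guess-mode support described above; the checkpoint ids and conditioning-image URL are assumptions (they follow the ones commonly used in the ControlNet docs), not part of this PR:
```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

canny_image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/bird_canny.png"
)

# With guess mode the ControlNet tries to recognize the content of the conditioning
# image even with an empty prompt; a lower guidance_scale tends to work better.
image = pipe("", canny_image, guess_mode=True, guidance_scale=3.0).images[0]
```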
* fix default value for attend-and-excite (#3099)
* fix default
* remove one line as requested by gc team (#3077)
remove one line
* ddpm custom timesteps (#3007)
add custom timesteps test
add custom timesteps descending order check
docs
timesteps -> custom_timesteps
can only pass one of num_inference_steps and timesteps
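A hedged sketch of the custom-timesteps usage described above, assuming the `timesteps` keyword this entry introduces; the scheduler construction and timestep values are illustrative:
```python
from diffusers import DDPMScheduler

scheduler = DDPMScheduler(num_train_timesteps=1000)

# Custom timesteps are passed in descending order; only one of
# num_inference_steps / timesteps may be given.
scheduler.set_timesteps(timesteps=[999, 750, 500, 250, 100, 0])
print(scheduler.timesteps)
```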
* Fix breaking change in `pipeline_stable_diffusion_controlnet.py` (#3118)
fix breaking change
* Add global pooling to controlnet (#3121)
* [Bug fix] Fix img2img processor with safety checker (#3127)
Fix img2img processor with safety checker
* [Bug fix] Make sure correct timesteps are chosen for img2img (#3128)
Make sure correct timesteps are chosen for img2img
* Improve deprecation warnings (#3131)
* Fix config deprecation (#3129)
* Better deprecation message
* Better deprecation message
* Better doc string
* Fixes
* fix more
* fix more
* Improve __getattr__
* correct more
* fix more
* fix
* Improve more
* more improvements
* fix more
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* make style
* Fix all rest & add tests & remove old deprecation fns
---------
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* feat: verification of multi-gpu support for select examples. (#3126)
* feat: verification of multi-gpu support for select examples.
* add: multi-gpu training sections to the relevant doc pages.
* speed up attend-and-excite fast tests (#3079)
* Optimize log_validation in train_controlnet_flax (#3110)
extract pipeline from log_validation
* make style
* Correct textual inversion readme (#3145)
* Update README.md
* Apply suggestions from code review
* Add unet act fn to other model components (#3136)
Adding act fn config to the unet timestep class embedding and conv
activation.
The custom activation defaults to silu which is the default
activation function for both the conv act and the timestep class
embeddings so default behavior is not changed.
The only unet which use the custom activation is the stable diffusion
latent upscaler https://huggingface.co/stabilityai/sd-x2-latent-upscaler/blob/main/unet/config.json
(I ran a script against the hub to confirm).
The latent upscaler does not use the conv activation nor the timestep
class embeddings so we don't change its behavior.
* class labels timestep embeddings projection dtype cast (#3137)
This mimics the dtype cast for the standard time embeddings
* [ckpt loader] Allow loading the Inpaint and Img2Img pipelines, while loading a ckpt model (#2705)
* [ckpt loader] Allow loading the Inpaint and Img2Img pipelines, while loading a ckpt model
* Address review comment from PR
* PyLint formatting
* Some more pylint fixes, unrelated to our change
* Another pylint fix
* Styling fix
* add from_ckpt method as Mixin (#2318)
* add mixin class for pipeline from original sd ckpt
* Improve
* make style
* merge main into
* Improve more
* fix more
* up
* Apply suggestions from code review
* finish docs
* rename
* make style
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Add TensorRT SD/txt2img Community Pipeline to diffusers along with TensorRT utils (#2974)
* Add SD/txt2img Community Pipeline to diffusers along with TensorRT utils
Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
* update installation command
Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
* update tensorrt installation
Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
* changes
1. Update setting of cache directory
2. Address comments: merge utils and pipeline code.
3. Address comments: Add section in README
Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
* apply make style
Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
---------
Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Correct `Transformer2DModel.forward` docstring (#3074)
⚙️chore(transformer_2d) update function signature for encoder_hidden_states
* Update pipeline_stable_diffusion_inpaint_legacy.py (#2903)
* Update pipeline_stable_diffusion_inpaint_legacy.py
* fix preprocessing of Pil images with adequate batch size
* revert map
* add tests
* reformat
* Update test_stable_diffusion_inpaint_legacy.py
* Update test_stable_diffusion_inpaint_legacy.py
* Update test_stable_diffusion_inpaint_legacy.py
* Update test_stable_diffusion_inpaint_legacy.py
* next try to fix the style
* wth is this
* Update testing_utils.py
* Update testing_utils.py
* Update test_stable_diffusion_inpaint_legacy.py
* Update test_stable_diffusion_inpaint_legacy.py
* Update test_stable_diffusion_inpaint_legacy.py
* Update test_stable_diffusion_inpaint_legacy.py
* Update test_stable_diffusion_inpaint_legacy.py
* Update test_stable_diffusion_inpaint_legacy.py
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Modified altdiffusion pipeline to support altdiffusion-m18 (#2993)
* Modified altdiffusion pipeline to support altdiffusion-m18
* Modified altdiffusion pipeline to support altdiffusion-m18
* Modified altdiffusion pipeline to support altdiffusion-m18
* Modified altdiffusion pipeline to support altdiffusion-m18
* Modified altdiffusion pipeline to support altdiffusion-m18
* Modified altdiffusion pipeline to support altdiffusion-m18
* Modified altdiffusion pipeline to support altdiffusion-m18
---------
Co-authored-by: root <fulong_ye@163.com>
* controlnet training resize inputs to multiple of 8 (#3135)
controlnet training center crop input images to multiple of 8
The pipeline code resizes inputs to multiples of 8.
Not doing this resizing in the training script is causing
the encoded image to have different height/width dimensions
than the encoded conditioning image (which uses a separate
encoder that's part of the controlnet model).
We resize and center crop the inputs to make sure they're the
same size (as well as all other images in the batch). We also
check that the initial resolution is a multiple of 8.
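A hedged sketch of the training-side preprocessing described above; the exact transform stack in the script may differ, and the resolution value is illustrative:
```python
from torchvision import transforms

resolution = 512
assert resolution % 8 == 0, "training resolution must be a multiple of 8"

# Resize + center crop both the target image and the conditioning image to the
# same resolution so their encodings line up in the batch.
image_transforms = transforms.Compose(
    [
        transforms.Resize(resolution, interpolation=transforms.InterpolationMode.BILINEAR),
        transforms.CenterCrop(resolution),
        transforms.ToTensor(),
        transforms.Normalize([0.5], [0.5]),
    ]
)
```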
* adding custom diffusion training to diffusers examples (#3031)
* diffusers==0.14.0 update
* custom diffusion update
* custom diffusion update
* custom diffusion update
* custom diffusion update
* custom diffusion update
* custom diffusion update
* custom diffusion
* custom diffusion
* custom diffusion
* custom diffusion
* custom diffusion
* apply formatting and get rid of bare except.
* refactor readme and other minor changes.
* misc refactor.
* fix: repo_id issue and loaders logging bug.
* fix: save_model_card.
* fix: save_model_card.
* fix: save_model_card.
* add: doc entry.
* refactor doc.
* custom diffusion
* custom diffusion
* custom diffusion
* apply style.
* remove trailing whitespace.
* fix: toctree entry.
* remove unnecessary print.
* custom diffusion
* custom diffusion
* custom diffusion test
* custom diffusion xformer update
* custom diffusion xformer update
* custom diffusion xformer update
---------
Co-authored-by: Nupur Kumari <nupurkumari@Nupurs-MacBook-Pro.local>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Nupur Kumari <nupurkumari@nupurs-mbp.wifi.local.cmu.edu>
* make style
* Update custom_diffusion.mdx (#3165)
Add missing newlines for rendering the links correctly
* Added distillation for quantization example on textual inversion. (#2760)
* Added distillation for quantization example on textual inversion.
Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>
* refined readme and code style.
Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>
* Update text2images.py
* refined code of model load and added compatibility check.
Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>
* fixed code style.
Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>
* fix C403 [*] Unnecessary `list` comprehension (rewrite as a `set` comprehension)
Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>
---------
Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>
* Update Noise Autocorrelation Loss Function for Pix2PixZero Pipeline (#2942)
* Update Pix2PixZero Auto-correlation Loss
* Add fast inversion tests
* Clarify purpose and mark as deprecated
Fix inversion prompt broadcasting
* Register modules set to `None` in config for `test_save_load_optional_components`
* Update new tests to coordinate with #2953
* [DreamBooth] add text encoder LoRA support in the DreamBooth training script (#3130)
* add: LoRA text encoder support for DreamBooth example.
* fix initialization.
* fix: modification call.
* add: entry in the readme.
* use dog dataset from hub.
* fix: params to clip.
* add entry to the LoRA doc.
* add: tests for lora.
* remove unnecessary list comprehension.
* Update Habana Gaudi documentation (#3169)
* Update Habana Gaudi doc
* Fix tables
* Add model offload to x4 upscaler (#3187)
* Add model offload to x4 upscaler
* fix
* [docs] Deterministic algorithms (#3172)
deterministic algos
* Update custom_diffusion.mdx to credit the author (#3163)
* Update custom_diffusion.mdx
* fix: unnecessary list comprehension.
* Fix TensorRT community pipeline device set function (#3157)
pass silence_dtype_warnings as kwarg
Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* make `from_flax` work for controlnet (#3161)
fix from_flax
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* [docs] Clarify training args (#3146)
* clarify training arg
* apply feedback
* Multi Vector Textual Inversion (#3144)
* Multi Vector
* Improve
* fix multi token
* improve test
* make style
* Update examples/test_examples.py
* Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* update
* Finish
* Apply suggestions from code review
---------
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Add `Karras sigmas` to HeunDiscreteScheduler (#3160)
* Add karras pattern to discrete heun scheduler
* Add integration test
* Fix failing CI on pytorch test on M1 (mps)
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* [AudioLDM] Fix dtype of returned waveform (#3189)
* Fix bug in train_dreambooth_lora (#3183)
* Update train_dreambooth_lora.py
fix bug
* Update train_dreambooth_lora.py
* [Community Pipelines] Update lpw_stable_diffusion pipeline (#3197)
* Update lpw_stable_diffusion.py
* fix cpu offload
* Make sure VAE attention works with Torch 2_0 (#3200)
* Make sure attention works with Torch 2_0
* make style
* Fix more
* Revert "[Community Pipelines] Update lpw_stable_diffusion pipeline" (#3201)
Revert "[Community Pipelines] Update lpw_stable_diffusion pipeline (#3197)"
This reverts commit 9965cb50ea.
* [Bug fix] Fix batch size attention head size mismatch (#3214)
* fix mixed precision training on train_dreambooth_inpaint_lora (#3138)
cast to weight dtype
* adding enable_vae_tiling and disable_vae_tiling functions (#3225)
adding enable_vae_tiling and disable_vae_tiling functions
* Add ControlNet v1.1 docs (#3226)
Add v1.1 docs
* Fix issue in maybe_convert_prompt (#3188)
When the token used for textual inversion does not have any special symbols (e.g. it is not surrounded by <>), the tokenizer does not properly split the replacement tokens. Adding a space for the padding tokens fixes this.
* Sync cache version check from transformers (#3179)
sync cache version check from transformers
* Fix docs text inversion (#3166)
* Fix docs text inversion
* Apply suggestions from code review
* add model (#3230)
* add
* clean
* up
* clean up more
* fix more tests
* Improve docs further
* improve
* more fixes docs
* Improve docs more
* Update src/diffusers/models/unet_2d_condition.py
* fix
* up
* update doc links
* make fix-copies
* add safety checker and watermarker to stage 3 doc page code snippets
* speed optimizations docs
* memory optimization docs
* make style
* add watermarking snippets to doc string examples
* make style
* use pt_to_pil helper functions in doc strings
* skip mps tests
* Improve safety
* make style
* new logic
* fix
* fix bad onnx design
* make new stable diffusion upscale pipeline model arguments optional
* define has_nsfw_concept when non-pil output type
* lowercase linked to notebook name
---------
Co-authored-by: William Berman <WLBberman@gmail.com>
* Allow return pt x4 (#3236)
* Add all files
* update
* Allow fp16 attn for x4 upscaler (#3239)
* Add all files
* update
* Make sure vae is memory efficient for PT 1
* make style
* fix fast test (#3241)
* Adds a document on token merging (#3208)
* add document on token merging.
* fix headline.
* fix: headline.
* add some samples for comparison.
* [AudioLDM] Update docs to use updated ckpt (#3240)
* [AudioLDM] Update docs to use updated ckpt
* make style
* Release: v0.16.0
* Post release for 0.16.0 (#3244)
* Post release
* fix more
* [docs] only mention one stage (#3246)
* [docs] only mention one stage
* add blurb on auto accepting
---------
Co-authored-by: William Berman <WLBberman@gmail.com>
* Write model card in controlnet training script (#3229)
Write model card in controlnet training script.
* [2064]: Add stochastic sampler (sample_dpmpp_sde) (#3020)
* [2064]: Add stochastic sampler
* [2064]: Add stochastic sampler
* [2064]: Add stochastic sampler
* [2064]: Add stochastic sampler
* [2064]: Add stochastic sampler
* [2064]: Add stochastic sampler
* [2064]: Add stochastic sampler
* Review comments
* [Review comment]: Add is_torchsde_available()
* [Review comment]: Test and docs
* [Review comment]
* [Review comment]
* [Review comment]
* [Review comment]
* [Review comment]
---------
Co-authored-by: njindal <njindal@adobe.com>
* [Stochastic Sampler][Slow Test]: Cuda test fixes (#3257)
[Slow Test]: Cuda test fixes
Co-authored-by: njindal <njindal@adobe.com>
* Remove required from tracker_project_name (#3260)
Remove required from tracker_project_name.
As observed by https://github.com/off99555 in https://github.com/huggingface/diffusers/issues/2695#issuecomment-1470755050, it already has a default value.
* adding required parameters while calling the get_up_block and get_down_block (#3210)
* removed unnecessary parameters from get_up_block and get_down_block functions
* adding resnet_skip_time_act, resnet_out_scale_factor and cross_attention_norm to get_up_block and get_down_block functions
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* [docs] Update interface in repaint.mdx (#3119)
Update repaint.mdx
accommodate #1701
* Update IF name to XL (#3262)
Co-authored-by: multimodalart <joaopaulo.passos+multimodal@gmail.com>
* fix typo in score sde pipeline (#3132)
* Fix typo in textual inversion JAX training script (#3123)
The pipeline is built as `pipe` but then used as `pipeline`.
* AudioDiffusionPipeline - fix encode method after config changes (#3114)
* config fixes
* deprecate get_input_dims
* Revert "Revert "[Community Pipelines] Update lpw_stable_diffusion pipeline"" (#3265)
Revert "Revert "[Community Pipelines] Update lpw_stable_diffusion pipeline" (#3201)"
This reverts commit 91a2a80eb2.
* Fix community pipelines (#3266)
* update notebook (#3259)
Co-authored-by: yiyixuxu <yixu@yis-macbook-pro.lan>
* [docs] add notes for stateful model changes (#3252)
* [docs] add notes for stateful model changes
* Update docs/source/en/optimization/fp16.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* link to accelerate docs for discarding hooks
---------
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* [LoRA] quality of life improvements in the loading semantics and docs (#3180)
* 👽 qol improvements for LoRA.
* better function name?
* fix: LoRA weight loading with the new format.
* address Patrick's comments.
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* change wording around encouraging the use of load_lora_weights().
* fix: function name.
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* [Community Pipelines] EDICT pipeline implementation (#3153)
* EDICT pipeline initial commit
- Starting point taking from https://github.com/Joqsan/edict-diffusion
* refactor __init__() method
* minor refactoring
* refactor scheduler code
- remove scheduler and move its methods to the EDICTPipeline class
* make CFG optional
- refactor encode_prompt().
- include optional generator for sampling with vae.
- minor variable renaming
* add EDICT pipeline description to README.md
* replace preprocess() with VaeImageProcessor
* run make style and make quality commands
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* [Docs]zh translated docs update (#3245)
* zh translated docs update
* update _toctree
* Update logging.mdx (#2863)
Fix typos
* Add multiple conditions to StableDiffusionControlNetInpaintPipeline (#3125)
* try multi controlnet inpaint
* multi controlnet inpaint
* multi controlnet inpaint
* Let's make sure that dreambooth always uploads to the Hub (#3272)
* Update Dreambooth README
* Adapt all docs as well
* automatically write model card
* fix
* make style
* Diffedit Zero-Shot Inpainting Pipeline (#2837)
* Update Pix2PixZero Auto-correlation Loss
* Add Stable Diffusion DiffEdit pipeline
* Add draft documentation and import code
* Bugfixes and refactoring
* Add option to not decode latents in the inversion process
* Harmonize preprocessing
* Revert "Update Pix2PixZero Auto-correlation Loss"
This reverts commit b218062fed.
* Update annotations
* rename `compute_mask` to `generate_mask`
* Update documentation
* Update docs
* Update Docs
* Fix copy
* Change shape of output latents to batch first
* Update docs
* Add first draft for tests
* Bugfix and update tests
* Add `cross_attention_kwargs` support for all pipeline methods
* Fix Copies
* Add support for PIL image latents
Add support for mask broadcasting
Update docs and tests
Align `mask` argument to `mask_image`
Remove height and width arguments
* Enable MPS Tests
* Move example docstrings
* Fix test
* Fix test
* fix pipeline inheritance
* Harmonize `prepare_image_latents` with StableDiffusionPix2PixZeroPipeline
* Register modules set to `None` in config for `test_save_load_optional_components`
* Move fixed logic to specific test class
* Clean changes to other pipelines
* Update new tests to coordinate with #2953
* Update slow tests for better results
* Safety to avoid potential problems with torch.inference_mode
* Add reference in SD Pipeline Overview
* Fix tests again
* Enforce determinism in noise for generate_mask
* Fix copies
* Widen test tolerance for fp16 based on `test_stable_diffusion_upscale_pipeline_fp16`
* Add LoraLoaderMixin and update `prepare_image_latents`
* clean up repeat and reg
* bugfix
* Remove invalid args from docs
Suppress spurious warning by repeating image before latent to mask gen
* add constant learning rate with custom rule (#3133)
* add constant lr with rules
* add constant with rules in TYPE_TO_SCHEDULER_FUNCTION
* add constant lr rate with rule
* hotfix code quality
* fix doc style
* change name constant_with_rules to piecewise constant
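A hedged sketch of a piecewise-constant rule, written with a plain PyTorch `LambdaLR` rather than the library helper added in this PR; the boundaries and factors are illustrative:
```python
import torch

model = torch.nn.Linear(8, 8)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

def piecewise_constant(step: int) -> float:
    # Multiply the base lr by 1.0 until step 1000, then by 0.1, then by 0.01.
    if step < 1000:
        return 1.0
    if step < 2000:
        return 0.1
    return 0.01

lr_scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=piecewise_constant)
```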
* Allow disabling torch 2_0 attention (#3273)
* Allow disabling torch 2_0 attention
* make style
* Update src/diffusers/models/attention.py
* [doc] add link to training script (#3271)
add link to training script
Co-authored-by: yiyixuxu <yixu@yis-macbook-pro.lan>
* temp disable spectogram diffusion tests (#3278)
The note-seq package throws an error on import because the default installed version of IPython
is not compatible with Python 3.8, which we run in the CI.
https://github.com/huggingface/diffusers/actions/runs/4830121056/jobs/8605954838#step:7:9
* Changed sample[0] to images[0] (#3304)
A pipeline object stores the results in `images` not in `sample`.
Current code blocks don't work.
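For reference, a minimal snippet of the corrected access pattern (the model id and prompt are illustrative):
```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# Pipeline outputs expose the generated images via `.images`, not `.sample`.
image = pipe("a photo of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```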
* Typo in tutorial (#3295)
* Torch compile graph fix (#3286)
* fix more
* Fix more
* fix more
* Apply suggestions from code review
* fix
* make style
* make fix-copies
* fix
* make sure torch compile
* Clean
* fix test
* Postprocessing refactor img2img (#3268)
* refactor img2img VaeImageProcessor.postprocess
* remove copy from for init, run_safety_checker, decode_latents
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
---------
Co-authored-by: yiyixuxu <yixu@yis-macbook-pro.lan>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* [Torch 2.0 compile] Fix more torch compile breaks (#3313)
* Fix more torch compile breaks
* add tests
* Fix all
* fix controlnet
* fix more
* Add Horace He as co-author.
Co-authored-by: Horace He <horacehe2007@yahoo.com>
* Add Horace He as co-author.
Co-authored-by: Horace He <horacehe2007@yahoo.com>
---------
Co-authored-by: Horace He <horacehe2007@yahoo.com>
* fix: scale_lr and sync example readme and docs. (#3299)
* fix: scale_lr and sync example readme and docs.
* fix doc link.
* Update stable_diffusion.mdx (#3310)
fixed import statement
* Fix missing variable assign in DeepFloyd-IF-II (#3315)
Fix missing variable assign
lol
* Correct doc build for patch releases (#3316)
Update build_documentation.yml
* Add Stable Diffusion RePaint to community pipelines (#3320)
* Add Stable Diffusion RePaint to community pipelines
- Adds Stable Diffusion RePaint to community pipelines
- Add Readme entry for pipeline
* Fix: Remove wrong import
- Remove wrong import
- Minor change in comments
* Fix: Code formatting of stable_diffusion_repaint
* Fix: ruff errors in stable_diffusion_repaint
* Fix multistep dpmsolver for cosine schedule (suitable for deepfloyd-if) (#3314)
* fix multistep dpmsolver for cosine schedule (deepfloyd-if)
* fix a typo
* Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* update all dpmsolver (singlestep, multistep, dpm, dpm++) for cosine noise schedule
* add test, fix style
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* [docs] Improve LoRA docs (#3311)
* update docs
* add to toctree
* apply feedback
* Added input perturbation (#3292)
* Added input perturbation
* Fixed spelling
* Update write_own_pipeline.mdx (#3323)
* update controlling generation doc with latest goodies. (#3321)
* [Quality] Make style (#3341)
* Fix config dpm (#3343)
* Add the SDE variant of DPM-Solver and DPM-Solver++ (#3344)
* add SDE variant of DPM-Solver and DPM-Solver++
* add test
* fix typo
* fix typo
* Add upsample_size to AttnUpBlock2D, AttnDownBlock2D (#3275)
The argument `upsample_size` needs to be added to these modules to allow compatibility with other blocks that require this argument.
* Add UniDiffuser classes to __init__ files, modify transformer block to support pre- and post-LN, add fast default tests, fix some bugs.
* Update fast tests to use test checkpoints stored on the hub and to better match the reference UniDiffuser implementation.
* Fix code with make style.
* Revert "Fix code style with make style."
This reverts commit 10a174a12c.
* Add self.image_encoder, self.text_decoder to list of models to offload to CPU in the enable_sequential_cpu_offload(...)/enable_model_cpu_offload(...) methods to make test_cpu_offload_forward_pass pass.
* Fix code quality with make style.
* Support using a data type embedding for UniDiffuser-v1.
* Add fast test for checking UniDiffuser-v1 sampling.
* Make changes so that the repository consistency tests pass.
* Add UniDiffuser dummy objects via make fix-copies.
* Fix bugs and make improvements to the UniDiffuser pipeline:
- Improve batch size inference and fix bugs when num_images_per_prompt or num_prompts_per_image > 1
- Add tests for num_images_per_prompt, num_prompts_per_image > 1
- Improve check_inputs, especially regarding checking supplied latents
- Add reset_mode method so that mode inference can be re-enabled after mode is set manually
- Fix some warnings related to accessing class members directly instead of through their config
- Small amount of refactoring in pipeline_unidiffuser.py
* Fix code style with make style.
* Add/edit docstrings for added classes and public pipeline methods. Also do some light refactoring.
* Add documentation for UniDiffuser and fix some typos/formatting in docstrings.
* Fix code with make style.
* Refactor and improve the UniDiffuser convert_from_ckpt.py script.
* Move the UniDiffusers convert_from_ckpy.py script to diffusers/scripts/convert_unidiffuser_to_diffusers.py
* Fix code quality via make style.
* Improve UniDiffuser slow tests.
* make style
* Fix some typos in the UniDiffuser docs.
* Remove outdated logic based on transformers version in UniDiffuser pipeline __init__.py
* Remove dependency on einops by refactoring einops operations to pure torch operations.
* make style
* Add slow test on full checkpoint for joint mode and correct expected image slices/text prefixes.
* make style
* Fix mixed precision issue by wrapping the offending code with the torch.autocast context manager.
* Revert "Fix mixed precision issue by wrapping the offending code with the torch.autocast context manager."
This reverts commit 1a58958ab4.
* Add fast test for CUDA/fp16 model behavior (currently failing).
* Fix the mixed precision issue and add additional tests of the pipeline cuda/fp16 functionality.
* make style
* Use a CLIPVisionModelWithProjection instead of CLIPVisionModel for image_encoder to better match the original UniDiffuser implementation.
* Make style and remove some testing code.
* Fix shape errors for the 'joint' and 'img2text' modes.
* Fix tests and remove some testing code.
* Add option to use fixed latents for UniDiffuserPipelineSlowTests and fix issue in modeling_text_decoder.py.
* Improve UniDiffuser docs, particularly the usage examples, and improve slow tests with new expected outputs.
* make style
* Fix examples to load model in float16.
* In image-to-text mode, sample from the autoencoder moment distribution instead of always getting its mode.
* make style
* When encoding the image using the VAE, scale the image latents by the VAE's scaling factor.
* make style
* Clean up code and make slow tests pass.
* make fix-copies
* [docs] Fix docstring (#3334)
fix docstring
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* if dreambooth lora (#3360)
* update IF stage I pipelines
add fixed variance schedulers and lora loading
* added kv lora attn processor
* allow loading into alternative lora attn processor
* make vae optional
* throw away predicted variance
* allow loading into added kv lora layer
* allow load T5
* allow pre compute text embeddings
* set new variance type in schedulers
* fix copies
* refactor all prompt embedding code
class prompts are now included in pre-encoding code
max tokenizer length is now configurable
embedding attention mask is now configurable
* fix for when variance type is not defined on scheduler
* do not pre compute validation prompt if not present
* add example test for if lora dreambooth
* add check for train text encoder and pre compute text embeddings
* Postprocessing refactor all others (#3337)
* add text2img
* fix-copies
* add
* add all other pipelines
* add
* add
* add
* add
* add
* make style
* style + fix copies
---------
Co-authored-by: yiyixuxu <yixu310@gmail,com>
* [docs] Improve safetensors docstring (#3368)
* clarify safetensor docstring
* fix typo
* apply feedback
* add: a warning message when using xformers in a PT 2.0 env. (#3365)
* add: a warning message when using xformers in a PT 2.0 env.
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* StableDiffusionInpaintingPipeline - resize image w.r.t height and width (#3322)
* StableDiffusionInpaintingPipeline now resizes input images and masks w.r.t. the passed input height and width. Default is already set to 512. This addresses the common tensor mismatch error. Also moved the type check into the relevant function to keep the main pipeline body tidy.
* Fixed StableDiffusionInpaintingPrepareMaskAndMaskedImageTests
Due to previous commit these tests were failing as height and width need to be passed into the prepare_mask_and_masked_image function, I have updated the code and added a height/width variable per unit test as it seemed more appropriate than the current hard coded solution
* Added a resolution test to StableDiffusionInpaintPipelineSlowTests
this unit test simply takes the input and resizes it into something that would otherwise fail (e.g. throw a tensor mismatch error / not be a multiple of 8), then passes it through the pipeline and verifies it produces output with the correct dims w.r.t. the passed height and width
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* make style
* [docs] Adapt a model (#3326)
* first draft
* apply feedback
* conv_in.weight thrown away
* [docs] Load safetensors (#3333)
* safetensors
* apply feedback
* apply feedback
* Apply suggestions from code review
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* make style
* [Docs] Fix stable_diffusion.mdx typo (#3398)
Fix typo in last code block. Correct "prommpts" to "prompt"
* Support ControlNet v1.1 shuffle properly (#3340)
* add inferring_controlnet_cond_batch
* Revert "add inferring_controlnet_cond_batch"
This reverts commit abe8d6311d.
* set guess_mode to True
whenever global_pool_conditions is True
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* nit
* add integration test
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* [Tests] better determinism (#3374)
* enable deterministic pytorch and cuda operations.
* disable manual seeding.
* make style && make quality for unet_2d tests.
* enable determinism for the unet2dconditional model.
* add CUBLAS_WORKSPACE_CONFIG for better reproducibility.
* relax tolerance (very weird issue, though).
* revert to torch manual_seed() where needed.
* relax more tolerance.
* better placement of the cuda variable and relax more tolerance.
* enable determinism for 3d condition model.
* relax tolerance.
* add: determinism to alt_diffusion.
* relax tolerance for alt diffusion.
* dance diffusion.
* dance diffusion is flaky.
* test_dict_tuple_outputs_equivalent edit.
* fix two more tests.
* fix more ddim tests.
* fix: argument.
* change to diff in place of difference.
* fix: test_save_load call.
* test_save_load_float16 call.
* fix: expected_max_diff
* fix: paint by example.
* relax tolerance.
* add determinism to 1d unet model.
* torch 2.0 regressions seem to be brutal
* determinism to vae.
* add reason to skipping.
* up tolerance.
* determinism to vq.
* determinism to cuda.
* determinism to the generic test pipeline file.
* refactor general pipelines testing a bit.
* determinism to alt diffusion i2i
* up tolerance for alt diff i2i and audio diff
* up tolerance.
* determinism to audioldm
* increase tolerance for audioldm lms.
* increase tolerance for paint by example.
* increase tolerance for repaint.
* determinism to cycle diffusion and sd 1.
* relax tol for cycle diffusion 🚲
* relax tol for sd 1.0
* relax tol for controlnet.
* determinism to img var.
* relax tol for img variation.
* tolerance to i2i sd
* make style
* determinism to inpaint.
* relax tolerance for inpaiting.
* determinism for inpainting legacy
* relax tolerance.
* determinism to instruct pix2pix
* determinism to model editing.
* model editing tolerance.
* panorama determinism
* determinism to pix2pix zero.
* determinism to sag.
* sd 2. determinism
* sd. tolerance
* disallow tf32 matmul.
* relax tolerance is all you need.
* make style and determinism to sd 2 depth
* relax tolerance for depth.
* tolerance to diffedit.
* tolerance to sd 2 inpaint.
* up tolerance.
* determinism in upscaling.
* tolerance in upscaler.
* more tolerance relaxation.
* determinism to v pred.
* up tol for v_pred
* unclip determinism
* determinism to unclip img2img
* determinism to text to video.
* determinism to last set of tests
* up tol.
* vq cumsum doesn't have a deterministic kernel
* relax tol
* relax tol
* [docs] Add transformers to install (#3388)
add transformers to install
* [deepspeed] partial ZeRO-3 support (#3076)
* [deepspeed] partial ZeRO-3 support
* cleanup
* improve deepspeed fixes
* Improve
* make style
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Add omegaconf for tests (#3400)
Add omegaconfg
* Fix various bugs with LoRA Dreambooth and Dreambooth script (#3353)
* Improve checkpointing lora
* fix more
* Improve doc string
* Update src/diffusers/loaders.py
* make style
* Apply suggestions from code review
* Update src/diffusers/loaders.py
* Apply suggestions from code review
* Apply suggestions from code review
* better
* Fix all
* Fix multi-GPU dreambooth
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Fix all
* make style
* make style
---------
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Fix docker file (#3402)
* up
* up
* fix: deepspeed_plugin retrieval from accelerate state (#3410)
* [Docs] Add `sigmoid` beta_scheduler to docstrings of relevant Schedulers (#3399)
* Add `sigmoid` beta scheduler to `DDPMScheduler` docstring
* Add `sigmoid` beta scheduler to `RePaintScheduler` docstring
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Don't install accelerate and transformers from source (#3415)
* Don't install transformers and accelerate from source (#3414)
* Improve fast tests (#3416)
Update pr_tests.yml
* attention refactor: the trilogy (#3387)
* Replace `AttentionBlock` with `Attention`
* use _from_deprecated_attn_block check re: @patrickvonplaten
* [Docs] update the PT 2.0 optimization doc with latest findings (#3370)
* add: benchmarking stats for A100 and V100.
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* address patrick's comments.
* add: rtx 4090 stats
* ⚔ benchmark reports done
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* 3313 pr link.
* add: plots.
Co-authored-by: Pedro <pedro@huggingface.co>
* fix formatting
* update number percent.
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Fix style rendering (#3433)
* Fix style rendering.
* Fix typo
* unCLIP scheduler do not use note (#3417)
* Replace deprecated command with environment file (#3409)
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* fix warning message pipeline loading (#3446)
* add stable diffusion tensorrt img2img pipeline (#3419)
* add stable diffusion tensorrt img2img pipeline
Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
* update docstrings
Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
---------
Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
* Refactor controlnet and add img2img and inpaint (#3386)
* refactor controlnet and add img2img and inpaint
* First draft to get pipelines to work
* make style
* Fix more
* Fix more
* More tests
* Fix more
* Make inpainting work
* make style and more tests
* Apply suggestions from code review
* up
* make style
* Fix imports
* Fix more
* Fix more
* Improve examples
* add test
* Make sure import is correctly deprecated
* Make sure everything works in compile mode
* make sure authorship is correctly attributed
* [Scheduler] DPM-Solver (++) Inverse Scheduler (#3335)
* Add DPM-Solver Multistep Inverse Scheduler
* Add draft tests for DiffEdit
* Add inverse sde-dpmsolver steps to tune image diversity from inverted latents
* Fix tests
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* [Docs] Fix incomplete docstring for resnet.py (#3438)
Fix incomplete docstrings for resnet.py
* fix tiled vae blend extent range (#3384)
fix tiled vae blend extent range
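A hedged sketch of the clamped blend extent this fix refers to, modeled loosely on the tiled VAE's horizontal blend helper (not the exact library code):
```python
import torch

def blend_h(a: torch.Tensor, b: torch.Tensor, blend_extent: int) -> torch.Tensor:
    # Clamp the blend range to what both tiles can actually provide.
    blend_extent = min(a.shape[3], b.shape[3], blend_extent)
    for x in range(blend_extent):
        weight = x / blend_extent
        b[:, :, :, x] = a[:, :, :, -blend_extent + x] * (1 - weight) + b[:, :, :, x] * weight
    return b
```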
* Small update to "Next steps" section (#3443)
Small update to "Next steps" section:
- PyTorch 2 is recommended.
- Updated improvement figures.
* Allow arbitrary aspect ratio in IFSuperResolutionPipeline (#3298)
* Update pipeline_if_superresolution.py
Allow arbitrary aspect ratio in IFSuperResolutionPipeline by using the input image shape
* IFSuperResolutionPipeline: allow the user to override the height and width through the arguments
* update IFSuperResolutionPipeline width/height doc string to match StableDiffusionInpaintPipeline conventions
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Adding 'strength' parameter to StableDiffusionInpaintingPipeline (#3424)
* Added explanation of 'strength' parameter
* Added get_timesteps function which relies on new strength parameter
* Added `strength` parameter which defaults to 1.
* Swapped ordering so `noise_timestep` can be calculated before masking the image
this is required when you aren't applying 100% noise to the masked region, e.g. strength < 1.
* Added strength to check_inputs, throws error if out of range
* Changed `prepare_latents` to initialise latents w.r.t strength
inspired by the stable diffusion img2img pipeline, init latents are initialised by converting the init image into a VAE latent and adding noise (based upon the strength parameter passed in), e.g. random when strength = 1, or the init image at strength = 0.
* WIP: Added a unit test for the new strength parameter in the StableDiffusionInpaintingPipeline
still need to add correct regression values
* Created an is_strength_max flag to initialise from pure random noise
* Updated unit tests w.r.t new strength parameter + fixed new strength unit test
* renamed parameter to avoid confusion with variable of same name
* Updated regression values for new strength test - now passes
* removed 'copied from' comment as this method is now different and divergent from the copy
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Ensure backwards compatibility for prepare_mask_and_masked_image
created a return_image boolean and initialised to false
* Ensure backwards compatibility for prepare_latents
* Fixed copy check typo
* Fixes w.r.t. backward compatibility changes
* make style
* keep function argument ordering same for backwards compatibility in callees with copied from statements
* make fix-copies
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: William Berman <WLBberman@gmail.com>
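A hedged sketch of the strength-dependent latent initialisation described in the entry above; the helper name and the scheduler used here are illustrative, not the pipeline's exact code:
```python
import torch
from diffusers import DDPMScheduler

scheduler = DDPMScheduler(num_train_timesteps=1000)

def prepare_latents(image_latents, noise, timestep, strength):
    if strength == 1.0:
        # Pure noise: the init image contributes nothing, so scale by init_noise_sigma only.
        return noise * scheduler.init_noise_sigma
    # Partial strength: noise the encoded init image up to the chosen timestep.
    return scheduler.add_noise(image_latents, noise, timestep)
```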
* [WIP] Bugfix - Pipeline.from_pretrained is broken when the pipeline is partially downloaded (#3448)
Added bugfix using f-strings.
* Fix gradient checkpointing bugs in freezing part of models (requires_grad=False) (#3404)
* gradient checkpointing bug fix
* bug fix; changes for reviews
* reformat
* reformat
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Make dreambooth lora more robust to orig unet (#3462)
* Make dreambooth lora more robust to orig unet
* up
* Reduce peak VRAM by releasing large attention tensors (as soon as they're unnecessary) (#3463)
Release large tensors in attention (as soon as they're no longer required). Reduces peak VRAM by nearly 2 GB for 1024x1024 (even after slicing), and the savings scale up with image size.
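A hedged sketch of the memory pattern described above: intermediate attention tensors are dropped as soon as they are consumed so the allocator can reuse the memory (the plain-`bmm` attention and shapes are illustrative):
```python
import torch

def attention(query, key, value):
    # (batch, q_len, k_len) score matrix: the largest intermediate tensor.
    attention_scores = torch.bmm(query, key.transpose(-1, -2))
    attention_probs = attention_scores.softmax(dim=-1)
    del attention_scores              # free the scores as soon as softmax is done

    hidden_states = torch.bmm(attention_probs, value)
    del attention_probs               # free the probabilities before returning
    return hidden_states
```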
* Add min snr to text2img lora training script (#3459)
add min snr to text2img lora training script
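A hedged sketch of min-SNR loss weighting as referenced above (for epsilon prediction; `gamma` plays the role of the script's SNR-gamma argument, and the SNR computation itself is omitted):
```python
import torch

def min_snr_loss_weights(snr: torch.Tensor, gamma: float = 5.0) -> torch.Tensor:
    # Clamp the signal-to-noise ratio at gamma, then normalise by snr so that
    # easy (high-SNR) timesteps are down-weighted.
    return torch.clamp(snr, max=gamma) / snr
```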
* Add inpaint lora scale support (#3460)
* add inpaint lora scale support
* add inpaint lora scale test
---------
Co-authored-by: yueyang.hyy <yueyang.hyy@alibaba-inc.com>
* [From ckpt] Fix from_ckpt (#3466)
* Correct from_ckpt
* make style
* Update full dreambooth script to work with IF (#3425)
* Add IF dreambooth docs (#3470)
* parameterize pass single args through tuple (#3477)
* attend and excite tests disable determinism on the class level (#3478)
* dreambooth docs torch.compile note (#3471)
* dreambooth docs torch.compile note
* Update examples/dreambooth/README.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Update examples/dreambooth/README.md
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* add: if entry in the dreambooth training docs. (#3472)
* [docs] Textual inversion inference (#3473)
* add textual inversion inference to docs
* add to toctree
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* [docs] Distributed inference (#3376)
* distributed inference
* move to inference section
* apply feedback
* update with split_between_processes
* apply feedback
* [{Up,Down}sample1d] explicit view kernel size as number of elements in flattened indices (#3479)
explicit view kernel size as number of elements in flattened indices
* mps & onnx tests rework (#3449)
* Remove ONNX tests from PR.
They are already a part of push_tests.yml.
* Remove mps tests from PRs.
They are already performed on push.
* Fix workflow name for fast push tests.
* Extract mps tests to a workflow.
For better control/filtering.
* Remove --extra-index-url from mps tests
* Increase tolerance of mps test
This test passes in my Mac (Ventura 13.3) but fails in the CI hardware
(Ventura 13.2). I ran the local tests following the same steps that
exist in the CI workflow.
* Temporarily run mps tests on pr
So we can test.
* Revert "Temporarily run mps tests on pr"
Tests passed, go back to running on push.
* [Attention processor] Better warning message when shifting to `AttnProcessor2_0` (#3457)
* add: debugging to enabling memory efficient processing
* add: better warning message.
* [Docs] add note on local directory path. (#3397)
add note on local directory path.
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Refactor full determinism (#3485)
* up
* fix more
* Apply suggestions from code review
* fix more
* fix more
* Check it
* Remove 16:8
* fix more
* fix more
* fix more
* up
* up
* Test only stable diffusion
* Test only two files
* up
* Try out spinning up processes that can be killed
* up
* Apply suggestions from code review
* up
* up
* Fix DPM single (#3413)
* Fix DPM single
* add test
* fix one more bug
* Apply suggestions from code review
Co-authored-by: StAlKeR7779 <stalkek7779@yandex.ru>
---------
Co-authored-by: StAlKeR7779 <stalkek7779@yandex.ru>
* Add `use_Karras_sigmas` to DPMSolverSinglestepScheduler (#3476)
* add use_karras_sigmas
* add karras test
* add doc
* Adds local_files_only bool to prevent forced online connection (#3486)
* make style
* [Docs] Korean translation (optimization, training) (#3488)
* feat) optimization kr translation
* fix) typo, italic setting
* feat) dreambooth, text2image kr
* feat) lora kr
* fix) LoRA
* fix) fp16 fix
* fix) doc-builder style
* fix) fp16: corrected some wording
* fix) fp16 style fix
* fix) opt, training docs update
* feat) toctree update
* feat) toctree update
---------
Co-authored-by: Chanran Kim <seriousran@gmail.com>
* DataLoader respecting EXIF data in Training Images (#3465)
* DataLoader will now bake in any transforms or image manipulations contained in the EXIF data
Images may have rotations stored in EXIF. Training on such images causes those transforms to be ignored, producing unexpected results
* Fixed the Dataloading EXIF issue in main DreamBooth training as well
* Run make style (black & isort)
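A hedged sketch of the EXIF handling described in the entry above, assuming a PIL-based image loader (the helper name is illustrative):
```python
from PIL import Image, ImageOps

def load_training_image(path: str) -> Image.Image:
    image = Image.open(path)
    # Bake the EXIF orientation into the pixel data so rotations aren't silently dropped.
    image = ImageOps.exif_transpose(image)
    return image.convert("RGB")
```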
* make style
* feat: allow disk offload for diffuser models (#3285)
* allow disk offload for diffuser models
* sort import
* add max_memory argument
* Changed sample[0] to images[0] (#3304)
A pipeline object stores the results in `images` not in `sample`.
Current code blocks don't work.
* Typo in tutorial (#3295)
* Torch compile graph fix (#3286)
* fix more
* Fix more
* fix more
* Apply suggestions from code review
* fix
* make style
* make fix-copies
* fix
* make sure torch compile
* Clean
* fix test
* Postprocessing refactor img2img (#3268)
* refactor img2img VaeImageProcessor.postprocess
* remove copy from for init, run_safety_checker, decode_latents
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
---------
Co-authored-by: yiyixuxu <yixu@yis-macbook-pro.lan>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* [Torch 2.0 compile] Fix more torch compile breaks (#3313)
* Fix more torch compile breaks
* add tests
* Fix all
* fix controlnet
* fix more
* Add Horace He as co-author.
Co-authored-by: Horace He <horacehe2007@yahoo.com>
* Add Horace He as co-author.
Co-authored-by: Horace He <horacehe2007@yahoo.com>
---------
Co-authored-by: Horace He <horacehe2007@yahoo.com>
* fix: scale_lr and sync example readme and docs. (#3299)
* fix: scale_lr and sync example readme and docs.
* fix doc link.
* Update stable_diffusion.mdx (#3310)
fixed import statement
* Fix missing variable assign in DeepFloyd-IF-II (#3315)
Fix missing variable assignment
* Correct doc build for patch releases (#3316)
Update build_documentation.yml
* Add Stable Diffusion RePaint to community pipelines (#3320)
* Add Stable Diffusion RePaint to community pipelines
- Adds Stable Diffusion RePaint to community pipelines
- Add README entry for the pipeline
* Fix: Remove wrong import
- Remove wrong import
- Minor change in comments
* Fix: Code formatting of stable_diffusion_repaint
* Fix: ruff errors in stable_diffusion_repaint
* Fix multistep dpmsolver for cosine schedule (suitable for deepfloyd-if) (#3314)
* fix multistep dpmsolver for cosine schedule (deepfloyd-if)
* fix a typo
* Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* update all dpmsolver (singlestep, multistep, dpm, dpm++) for cosine noise schedule
* add test, fix style
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* [docs] Improve LoRA docs (#3311)
* update docs
* add to toctree
* apply feedback
* Added input perturbation (#3292)
* Added input perturbation
* Fixed spelling
* Update write_own_pipeline.mdx (#3323)
* update controlling generation doc with latest goodies. (#3321)
* [Quality] Make style (#3341)
* Fix config dpm (#3343)
* Add the SDE variant of DPM-Solver and DPM-Solver++ (#3344)
* add SDE variant of DPM-Solver and DPM-Solver++
* add test
* fix typo
* fix typo
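A hedged usage sketch for the SDE variant added above; the `algorithm_type` string follows the scheduler naming in this change, while the model id and prompt are placeholders:

```python
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Select the stochastic (SDE) flavour of DPM-Solver++ via algorithm_type.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, algorithm_type="sde-dpmsolver++"
)

image = pipe("a watercolor painting of a fox", num_inference_steps=25).images[0]
```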
* Add upsample_size to AttnUpBlock2D, AttnDownBlock2D (#3275)
The argument `upsample_size` needs to be added to these modules to allow compatibility with other blocks that require this argument.
* Rename --only_save_embeds to --save_as_full_pipeline (#3206)
* Set --only_save_embeds to False by default
Due to how the option is named, it makes more sense to behave like this.
* Refactor only_save_embeds to save_as_full_pipeline
* [AudioLDM] Generalise conversion script (#3328)
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Fix TypeError when using prompt_embeds and negative_prompt (#2982)
* test: Added test case
* fix: fixed type checking issue on _encode_prompt
* fix: fixed copies consistency
* fix: one copy was not sufficient
* Fix pipeline class on README (#3345)
Update README.md
* Inpainting: typo in docs (#3331)
Typo in docs
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Add `use_Karras_sigmas` to LMSDiscreteScheduler (#3351)
* add karras sigma to lms discrete scheduler
* add test for lms_scheduler karras
* reformat test lms
* Batched load of textual inversions (#3277)
* Batched load of textual inversions
- Only call resize_token_embeddings once per batch as it is the most expensive operation
- Allow pretrained_model_name_or_path and token to be an optional list
- Remove Dict from type annotation pretrained_model_name_or_path as it was not supported in this function
- Add comment that single files (e.g. .pt/.safetensors) are supported
- Add comment for token parameter
- Convert token override log message from warning to info
* Update src/diffusers/loaders.py
Check for duplicate tokens
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update condition for None tokens
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
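A hedged sketch of the batched loading described above (file names and tokens are placeholders):

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Passing lists means resize_token_embeddings only runs once for the whole batch.
pipe.load_textual_inversion(
    ["./embeds/concept-a.safetensors", "./embeds/concept-b.pt"],
    token=["<concept-a>", "<concept-b>"],
)
```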
* make fix-copies
* [docs] Fix docstring (#3334)
fix docstring
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* if dreambooth lora (#3360)
* update IF stage I pipelines
add fixed variance schedulers and lora loading
* added kv lora attn processor
* allow loading into alternative lora attn processor
* make vae optional
* throw away predicted variance
* allow loading into added kv lora layer
* allow load T5
* allow pre compute text embeddings
* set new variance type in schedulers
* fix copies
* refactor all prompt embedding code
class prompts are now included in pre-encoding code
max tokenizer length is now configurable
embedding attention mask is now configurable
* fix for when variance type is not defined on scheduler
* do not pre compute validation prompt if not present
* add example test for if lora dreambooth
* add check for train text encoder and pre compute text embeddings
* Postprocessing refactor all others (#3337)
* add text2img
* fix-copies
* add
* add all other pipelines
* add
* add
* add
* add
* add
* make style
* style + fix copies
---------
Co-authored-by: yiyixuxu <yixu310@gmail.com>
* [docs] Improve safetensors docstring (#3368)
* clarify safetensor docstring
* fix typo
* apply feedback
* add: a warning message when using xformers in a PT 2.0 env. (#3365)
* add: a warning message when using xformers in a PT 2.0 env.
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* StableDiffusionInpaintingPipeline - resize image w.r.t height and width (#3322)
* StableDiffusionInpaintingPipeline now resizes input images and masks w.r.t. the passed input height and width. The default is already set to 512. This addresses the common tensor mismatch error. Also moved the type check into the relevant function to keep the main pipeline body tidy.
* Fixed StableDiffusionInpaintingPrepareMaskAndMaskedImageTests
Due to the previous commit these tests were failing, as height and width now need to be passed into the prepare_mask_and_masked_image function. I have updated the code and added a height/width variable per unit test, as that seemed more appropriate than the current hard-coded solution.
* Added a resolution test to StableDiffusionInpaintPipelineSlowTests
this unit test simply takes the input and resizes it into something that would previously fail (e.g. throw a tensor mismatch error / not a multiple of 8), then passes it through the pipeline and verifies it produces output with the correct dims w.r.t. the passed height and width
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
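A short sketch of the behaviour described above (checkpoint id and inputs are placeholders): image and mask with awkward sizes are now resized to the requested `height`/`width`:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

# Deliberately odd-sized inputs: the pipeline resizes both to height/width.
init_image = Image.new("RGB", (513, 515))
mask_image = Image.new("L", (513, 515), color=255)

image = pipe(
    "a red sofa in a bright living room",
    image=init_image,
    mask_image=mask_image,
    height=512,
    width=512,
).images[0]
```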
* make style
* [docs] Adapt a model (#3326)
* first draft
* apply feedback
* conv_in.weight thrown away
* [docs] Load safetensors (#3333)
* safetensors
* apply feedback
* apply feedback
* Apply suggestions from code review
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* make style
* [Docs] Fix stable_diffusion.mdx typo (#3398)
Fix typo in last code block. Correct "prommpts" to "prompt"
* Support ControlNet v1.1 shuffle properly (#3340)
* add inferring_controlnet_cond_batch
* Revert "add inferring_controlnet_cond_batch"
This reverts commit abe8d6311d.
* set guess_mode to True
whenever global_pool_conditions is True
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* nit
* add integration test
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
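A minimal sketch of the rule above as it would look at a call site; the helper is illustrative, and only the `global_pool_conditions` config attribute comes from the change itself:

```python
def resolve_guess_mode(controlnet, guess_mode: bool) -> bool:
    # Shuffle-style ControlNets pool the conditioning globally, so guess mode
    # is forced on whenever the checkpoint's config requests it.
    if getattr(controlnet.config, "global_pool_conditions", False):
        return True
    return guess_mode
```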
* [Tests] better determinism (#3374)
* enable deterministic pytorch and cuda operations.
* disable manual seeding.
* make style && make quality for unet_2d tests.
* enable determinism for the unet2dconditional model.
* add CUBLAS_WORKSPACE_CONFIG for better reproducibility.
* relax tolerance (very weird issue, though).
* revert to torch manual_seed() where needed.
* relax more tolerance.
* better placement of the cuda variable and relax more tolerance.
* enable determinism for 3d condition model.
* relax tolerance.
* add: determinism to alt_diffusion.
* relax tolerance for alt diffusion.
* dance diffusion.
* dance diffusion is flaky.
* test_dict_tuple_outputs_equivalent edit.
* fix two more tests.
* fix more ddim tests.
* fix: argument.
* change to diff in place of difference.
* fix: test_save_load call.
* test_save_load_float16 call.
* fix: expected_max_diff
* fix: paint by example.
* relax tolerance.
* add determinism to 1d unet model.
* torch 2.0 regressions seem to be brutal
* determinism to vae.
* add reason to skipping.
* up tolerance.
* determinism to vq.
* determinism to cuda.
* determinism to the generic test pipeline file.
* refactor general pipelines testing a bit.
* determinism to alt diffusion i2i
* up tolerance for alt diff i2i and audio diff
* up tolerance.
* determinism to audioldm
* increase tolerance for audioldm lms.
* increase tolerance for paint by example.
* increase tolerance for repaint.
* determinism to cycle diffusion and sd 1.
* relax tol for cycle diffusion 🚲
* relax tol for sd 1.0
* relax tol for controlnet.
* determinism to img var.
* relax tol for img variation.
* tolerance to i2i sd
* make style
* determinism to inpaint.
* relax tolerance for inpainting.
* determinism for inpainting legacy
* relax tolerance.
* determinism to instruct pix2pix
* determinism to model editing.
* model editing tolerance.
* panorama determinism
* determinism to pix2pix zero.
* determinism to sag.
* sd 2. determinism
* sd. tolerance
* disallow tf32 matmul.
* relax tolerance is all you need.
* make style and determinism to sd 2 depth
* relax tolerance for depth.
* tolerance to diffedit.
* tolerance to sd 2 inpaint.
* up tolerance.
* determinism in upscaling.
* tolerance in upscaler.
* more tolerance relaxation.
* determinism to v pred.
* up tol for v_pred
* unclip determinism
* determinism to unclip img2img
* determinism to text to video.
* determinism to last set of tests
* up tol.
* vq cumsum doesn't have a deterministic kernel
* relax tol
* relax tol
* [docs] Add transformers to install (#3388)
add transformers to install
* [deepspeed] partial ZeRO-3 support (#3076)
* [deepspeed] partial ZeRO-3 support
* cleanup
* improve deepspeed fixes
* Improve
* make style
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Add omegaconf for tests (#3400)
Add omegaconf
* Fix various bugs with LoRA Dreambooth and Dreambooth script (#3353)
* Improve checkpointing lora
* fix more
* Improve doc string
* Update src/diffusers/loaders.py
* make style
* Apply suggestions from code review
* Update src/diffusers/loaders.py
* Apply suggestions from code review
* Apply suggestions from code review
* better
* Fix all
* Fix multi-GPU dreambooth
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Fix all
* make style
* make style
---------
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Fix docker file (#3402)
* up
* up
* fix: deepspeed_plugin retrieval from accelerate state (#3410)
* [Docs] Add `sigmoid` beta_scheduler to docstrings of relevant Schedulers (#3399)
* Add `sigmoid` beta scheduler to `DDPMScheduler` docstring
* Add `sigmoid` beta scheduler to `RePaintScheduler` docstring
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Don't install accelerate and transformers from source (#3415)
* Don't install transformers and accelerate from source (#3414)
* Improve fast tests (#3416)
Update pr_tests.yml
* attention refactor: the trilogy (#3387)
* Replace `AttentionBlock` with `Attention`
* use _from_deprecated_attn_block check re: @patrickvonplaten
* [Docs] update the PT 2.0 optimization doc with latest findings (#3370)
* add: benchmarking stats for A100 and V100.
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* address patrick's comments.
* add: rtx 4090 stats
* ⚔ benchmark reports done
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* 3313 pr link.
* add: plots.
Co-authored-by: Pedro <pedro@huggingface.co>
* fix formatting
* update number percent.
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Fix style rendering (#3433)
* Fix style rendering.
* Fix typo
* unCLIP scheduler do not use note (#3417)
* Replace deprecated command with environment file (#3409)
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* fix warning message pipeline loading (#3446)
* add stable diffusion tensorrt img2img pipeline (#3419)
* add stable diffusion tensorrt img2img pipeline
Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
* update docstrings
Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
---------
Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
* Refactor controlnet and add img2img and inpaint (#3386)
* refactor controlnet and add img2img and inpaint
* First draft to get pipelines to work
* make style
* Fix more
* Fix more
* More tests
* Fix more
* Make inpainting work
* make style and more tests
* Apply suggestions from code review
* up
* make style
* Fix imports
* Fix more
* Fix more
* Improve examples
* add test
* Make sure import is correctly deprecated
* Make sure everything works in compile mode
* make sure authorship is correctly attributed
* [Scheduler] DPM-Solver (++) Inverse Scheduler (#3335)
* Add DPM-Solver Multistep Inverse Scheduler
* Add draft tests for DiffEdit
* Add inverse sde-dpmsolver steps to tune image diversity from inverted latents
* Fix tests
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* [Docs] Fix incomplete docstring for resnet.py (#3438)
Fix incomplete docstrings for resnet.py
* fix tiled vae blend extent range (#3384)
fix tiled vae blend extent range
* Small update to "Next steps" section (#3443)
Small update to "Next steps" section:
- PyTorch 2 is recommended.
- Updated improvement figures.
* Allow arbitrary aspect ratio in IFSuperResolutionPipeline (#3298)
* Update pipeline_if_superresolution.py
Allow arbitrary aspect ratio in IFSuperResolutionPipeline by using the input image shape
* IFSuperResolutionPipeline: allow the user to override the height and width through the arguments
* update IFSuperResolutionPipeline width/height doc string to match StableDiffusionInpaintPipeline conventions
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
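A hedged usage sketch of the override added above (checkpoint id, input image, and sizes are placeholders):

```python
import torch
from PIL import Image
from diffusers import IFSuperResolutionPipeline

pipe = IFSuperResolutionPipeline.from_pretrained(
    "DeepFloyd/IF-II-L-v1.0", variant="fp16", torch_dtype=torch.float16
)

low_res_image = Image.new("RGB", (96, 64))  # stand-in for a stage-I output

# height/width may now be passed explicitly; if omitted they follow the
# aspect ratio of the low-resolution input instead of assuming a square.
image = pipe(
    prompt="a photo of a lighthouse at dusk",
    image=low_res_image,
    height=256,
    width=384,
).images[0]
```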
* Adding 'strength' parameter to StableDiffusionInpaintingPipeline (#3424)
* Added explanation of 'strength' parameter
* Added get_timesteps function which relies on new strength parameter
* Added `strength` parameter which defaults to 1.
* Swapped ordering so `noise_timestep` can be calculated before masking the image
this is required when you aren't applying 100% noise to the masked region, e.g. strength < 1.
* Added strength to check_inputs, throws error if out of range
* Changed `prepare_latents` to initialise latents w.r.t strength
Inspired by the stable diffusion img2img pipeline: init latents are initialised by converting the init image into a VAE latent and adding noise (based upon the strength parameter passed in), e.g. pure random noise when strength = 1, or the init image itself at strength = 0.
* WIP: Added a unit test for the new strength parameter in the StableDiffusionInpaintingPipeline
still need to add correct regression values
* Created a is_strength_max to initialise from pure random noise
* Updated unit tests w.r.t new strength parameter + fixed new strength unit test
* renamed parameter to avoid confusion with variable of same name
* Updated regression values for new strength test - now passes
* removed 'copied from' comment as this method is now different and divergent from the copy
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Ensure backwards compatibility for prepare_mask_and_masked_image
created a return_image boolean and initialised to false
* Ensure backwards compatibility for prepare_latents
* Fixed copy check typo
* Fixes w.r.t. backward compatibility changes
* make style
* keep function argument ordering same for backwards compatibility in callees with copied from statements
* make fix-copies
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: William Berman <WLBberman@gmail.com>
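A hedged usage sketch of the new `strength` argument (inputs are placeholders): strength=1.0 starts the masked area from pure noise, while lower values keep more of the original image, mirroring the img2img convention:

```python
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting"
)

init_image = Image.new("RGB", (512, 512))
mask_image = Image.new("L", (512, 512), color=255)

# strength < 1.0 only partially noises the masked region instead of replacing it.
image = pipe(
    "a wooden bench in a park",
    image=init_image,
    mask_image=mask_image,
    strength=0.75,
).images[0]
```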
* [WIP] Bugfix - Pipeline.from_pretrained is broken when the pipeline is partially downloaded (#3448)
Added bugfix using f strings.
* Fix gradient checkpointing bugs in freezing part of models (requires_grad=False) (#3404)
* gradient checkpointing bug fix
* bug fix; changes for reviews
* reformat
* reformat
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Make dreambooth lora more robust to orig unet (#3462)
* Make dreambooth lora more robust to orig unet
* up
* Reduce peak VRAM by releasing large attention tensors (as soon as they're unnecessary) (#3463)
Release large tensors in attention (as soon as they're no longer required). Reduces peak VRAM by nearly 2 GB for 1024x1024 (even after slicing), and the savings scale up with image size.
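A hedged sketch of the pattern (not the exact attention-processor code): drop references to the large score/probability tensors as soon as the next step has consumed them, so the allocator can reuse that memory:

```python
import torch

def attention(query, key, value, scale):
    # scores has shape (batch*heads, q_tokens, k_tokens): the largest intermediate.
    scores = torch.bmm(query, key.transpose(-1, -2)) * scale
    probs = scores.softmax(dim=-1)
    del scores  # no longer needed once softmax has run
    out = torch.bmm(probs, value)
    del probs   # release before returning
    return out
```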
* Add min snr to text2img lora training script (#3459)
add min snr to text2img lora training script
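A hedged sketch of Min-SNR-γ loss weighting for the epsilon-prediction case (function and argument names are illustrative; `gamma` corresponds to the script's SNR-gamma option):

```python
import torch

def min_snr_loss_weights(timesteps, alphas_cumprod, gamma=5.0):
    # SNR(t) = alpha_t^2 / sigma_t^2 for the DDPM forward process.
    alpha = alphas_cumprod[timesteps] ** 0.5
    sigma = (1.0 - alphas_cumprod[timesteps]) ** 0.5
    snr = (alpha / sigma) ** 2
    # Min-SNR-gamma: cap the per-timestep weight at gamma (epsilon prediction).
    return torch.clamp(snr, max=gamma) / snr
```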
* Add inpaint lora scale support (#3460)
* add inpaint lora scale support
* add inpaint lora scale test
---------
Co-authored-by: yueyang.hyy <yueyang.hyy@alibaba-inc.com>
* [From ckpt] Fix from_ckpt (#3466)
* Correct from_ckpt
* make style
* Update full dreambooth script to work with IF (#3425)
* Add IF dreambooth docs (#3470)
* parameterize pass single args through tuple (#3477)
* attend and excite tests disable determinism on the class level (#3478)
* dreambooth docs torch.compile note (#3471)
* dreambooth docs torch.compile note
* Update examples/dreambooth/README.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Update examples/dreambooth/README.md
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
---------
Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
Co-authored-by: Ilia Larchenko <41329713+IliaLarchenko@users.noreply.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: yiyixuxu <yixu@yis-macbook-pro.lan>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Horace He <horacehe2007@yahoo.com>
Co-authored-by: Umar <55330742+mu94-csl@users.noreply.github.com>
Co-authored-by: Mylo <36931363+gitmylo@users.noreply.github.com>
Co-authored-by: Markus Pobitzer <markuspobitzer@gmail.com>
Co-authored-by: Cheng Lu <lucheng.lc15@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: Isamu Isozaki <isamu.website@gmail.com>
Co-authored-by: Cesar Aybar <csaybar@gmail.com>
Co-authored-by: Will Rice <will@spokestack.io>
Co-authored-by: Adrià Arrufat <1671644+arrufat@users.noreply.github.com>
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
Co-authored-by: At-sushi <dkahw210@kyoto.zaq.ne.jp>
Co-authored-by: Lucca Zenóbio <luccazen@gmail.com>
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
Co-authored-by: Isotr0py <41363108+Isotr0py@users.noreply.github.com>
Co-authored-by: pdoane <pdoane2@gmail.com>
Co-authored-by: Will Berman <wlbberman@gmail.com>
Co-authored-by: yiyixuxu <yixu310@gmail.com>
Co-authored-by: Rupert Menneer <71332436+rupertmenneer@users.noreply.github.com>
Co-authored-by: sudowind <wfpkueecs@163.com>
Co-authored-by: Takuma Mori <takuma104@gmail.com>
Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Laureηt <laurentfainsin@protonmail.com>
Co-authored-by: Jongwoo Han <jongwooo.han@gmail.com>
Co-authored-by: asfiyab-nvidia <117682710+asfiyab-nvidia@users.noreply.github.com>
Co-authored-by: clarencechen <clarencechenct@gmail.com>
Co-authored-by: Laureηt <laurent@fainsin.bzh>
Co-authored-by: superlabs-dev <133080491+superlabs-dev@users.noreply.github.com>
Co-authored-by: Dev Aggarwal <devxpy@gmail.com>
Co-authored-by: Vimarsh Chaturvedi <vimarsh.c@gmail.com>
Co-authored-by: 7eu7d7 <31194890+7eu7d7@users.noreply.github.com>
Co-authored-by: cmdr2 <shashank.shekhar.global@gmail.com>
Co-authored-by: wfng92 <43742196+wfng92@users.noreply.github.com>
Co-authored-by: Glaceon-Hyy <ffheyy0017@gmail.com>
Co-authored-by: yueyang.hyy <yueyang.hyy@alibaba-inc.com>
* [Community] reference only control (#3435)
* add reference only control
* add reference only control
* add reference only control
* fix lint
* fix lint
* reference adain
* bugfix EulerAncestralDiscreteScheduler
* fix style fidelity rule
* fix default output size
* del unused line
* fix deterministic
* Support for cross-attention bias / mask (#2634)
* Cross-attention masks
prefer qualified symbol, fix accidental Optional
prefer qualified symbol in AttentionProcessor
prefer qualified symbol in embeddings.py
qualified symbol in transformed_2d
qualify FloatTensor in unet_2d_blocks
move new transformer_2d params attention_mask, encoder_attention_mask to the end of the section which is assumed (e.g. by functions such as checkpoint()) to have a stable positional param interface. regard return_dict as a special-case which is assumed to be injected separately from positional params (e.g. by create_custom_forward()).
move new encoder_attention_mask param to end of CrossAttn block interfaces and Unet2DCondition interface, to maintain positional param interface.
regenerate modeling_text_unet.py
remove unused import
unet_2d_condition encoder_attention_mask docs
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
versatile_diffusion/modeling_text_unet.py encoder_attention_mask docs
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
transformer_2d encoder_attention_mask docs
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
unet_2d_blocks.py: add parameter name comments
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
revert description. bool-to-bias treatment happens in unet_2d_condition only.
comment parameter names
fix copies, style
* encoder_attention_mask for SimpleCrossAttnDownBlock2D, SimpleCrossAttnUpBlock2D
* encoder_attention_mask for UNetMidBlock2DSimpleCrossAttn
* support attention_mask, encoder_attention_mask in KCrossAttnDownBlock2D, KCrossAttnUpBlock2D, KAttentionBlock. fix binding of attention_mask, cross_attention_kwargs params in KCrossAttnDownBlock2D, KCrossAttnUpBlock2D checkpoint invocations.
* fix mistake made during merge conflict resolution
* regenerate versatile_diffusion
* pass time embedding into checkpointed attention invocation
* always assume encoder_attention_mask is a mask (i.e. not a bias).
* style, fix-copies
* add tests for cross-attention masks
* add test for padding of attention mask
* explain mask's query_tokens dim. fix explanation about broadcasting over channels; we actually broadcast over query tokens
* support both masks and biases in Transformer2DModel#forward. document behaviour
* fix-copies
* delete attention_mask docs on the basis I never tested self-attention masking myself. not comfortable explaining it, since I don't actually understand how a self-attn mask can work in its current form: the key length will be different in every ResBlock (we don't downsample the mask when we downsample the image).
* review feedback: the standard Unet blocks shouldn't pass temb to attn (only to resnet). remove from KCrossAttnDownBlock2D,KCrossAttnUpBlock2D#forward.
* remove encoder_attention_mask param from SimpleCrossAttn{Up,Down}Block2D,UNetMidBlock2DSimpleCrossAttn, and mask-choice in those blocks' #forward, on the basis that they only do one type of attention, so the consumer can pass whichever type of attention_mask is appropriate.
* put attention mask padding back to how it was (since the SD use-case it enabled wasn't important, and it breaks the original unclip use-case). disable the test which was added.
* fix-copies
* style
* fix-copies
* put encoder_attention_mask param back into Simple block forward interfaces, to ensure consistency of forward interface.
* restore passing of emb to KAttentionBlock#forward, on the basis that removal caused test failures. restore also the passing of emb to checkpointed calls to KAttentionBlock#forward.
* make simple unet2d blocks use encoder_attention_mask, but only when attention_mask is None. this should fix UnCLIP compatibility.
* fix copies
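A hedged sketch of the mask-to-bias convention described above (the helper is illustrative, not the library's exact code): a 0/1 `encoder_attention_mask` over text tokens becomes an additive bias broadcast over the query tokens:

```python
import torch

def encoder_attention_mask_to_bias(mask: torch.Tensor, dtype: torch.dtype) -> torch.Tensor:
    # mask: (batch, key_tokens) with 1 = attend, 0 = ignore.
    bias = (1 - mask.to(dtype)) * -10000.0
    # Add a singleton query-token dim so the bias broadcasts over all queries.
    return bias.unsqueeze(1)  # (batch, 1, key_tokens)
```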
* do not scale the initial global step by gradient accumulation steps when loading from checkpoint (#3506)
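A hedged sketch of the resume logic after this fix (names are illustrative): the step parsed from the checkpoint directory is already an optimizer-update count, so it must not be multiplied by `gradient_accumulation_steps`:

```python
import os

def resume_state(checkpoint_dir: str, num_update_steps_per_epoch: int):
    # e.g. "checkpoint-500" -> 500; this is already a global (update) step.
    global_step = int(os.path.basename(checkpoint_dir).split("-")[1])
    first_epoch = global_step // num_update_steps_per_epoch
    resume_step = global_step % num_update_steps_per_epoch
    return global_step, first_epoch, resume_step
```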
* Remove CPU latents logic for UniDiffuserPipelineFastTests.
* make style
* Revert "Clean up code and make slow tests pass."
This reverts commit ec7fb8735b.
* Revert bad commit and clean up code.
* add: contributor note.
* Revert "add: contributor note."
This reverts commit 302fde9409.
* Re-add the contributor note and refactor the fast tests' fixed-latents code to remove CPU-specific logic.
* make style
* Refactored the code:
- Updated the checkpoint ids to the new ids where appropriate
- Refactored the UniDiffuserTextDecoder methods to return only tensors (and made other changes to support this)
- Cleaned up the code following suggestions by patrickvonplaten
* make style
* Remove padding logic from UniDiffuserTextDecoder.generate_beam since the inputs are already padded to a consistent length.
* Update checkpoint id for small test v1 checkpoint to hf-internal-testing/unidiffuser-test-v1.
* make style
* Make improvements to the documentation.
* Move ImageTextPipelineOutput documentation from /api/pipelines/unidiffuser.mdx to /api/diffusion_pipeline.mdx.
* Change order of arguments for UniDiffuserTextDecoder.generate_beam.
* make style
* Update docs/source/en/api/pipelines/unidiffuser.mdx
---------
Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>
Co-authored-by: Ernie Chu <51432514+ernestchu@users.noreply.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Andranik Movsisyan <48154088+19and99@users.noreply.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Andreas Steiner <andstein@google.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Joseph Coffland <github@joe.coffland.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: Takuma Mori <takuma104@gmail.com>
Co-authored-by: Will Berman <wlbberman@gmail.com>
Co-authored-by: Tommaso De Rossi <beats.by.morse@gmail.com>
Co-authored-by: Cristian Garcia <cgarcia.e88@gmail.com>
Co-authored-by: cmdr2 <secondary.cmdr2@gmail.com>
Co-authored-by: 1lint <105617163+1lint@users.noreply.github.com>
Co-authored-by: asfiyab-nvidia <117682710+asfiyab-nvidia@users.noreply.github.com>
Co-authored-by: Chanchana Sornsoontorn <off9955555@gmail.com>
Co-authored-by: hwuebben <wbben123@yahoo.de>
Co-authored-by: superhero-7 <57797766+superhero-7@users.noreply.github.com>
Co-authored-by: root <fulong_ye@163.com>
Co-authored-by: nupurkmr9 <nupurkmr9@gmail.com>
Co-authored-by: Nupur Kumari <nupurkumari@Nupurs-MacBook-Pro.local>
Co-authored-by: Nupur Kumari <nupurkumari@nupurs-mbp.wifi.local.cmu.edu>
Co-authored-by: Mishig <mishig.davaadorj@coloradocollege.edu>
Co-authored-by: XinyuYe-Intel <xinyu.ye@intel.com>
Co-authored-by: clarencechen <clarencechenct@gmail.com>
Co-authored-by: regisss <15324346+regisss@users.noreply.github.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Youssef Adarrab <104783077+youssefadr@users.noreply.github.com>
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
Co-authored-by: Chengrui Wang <80876977+crywang@users.noreply.github.com>
Co-authored-by: SkyTNT <SKYTNT@outlook.com>
Co-authored-by: Lucca Zenóbio <luccazen@gmail.com>
Co-authored-by: Isaac <34376531+init-22@users.noreply.github.com>
Co-authored-by: pdoane <pdoane2@gmail.com>
Co-authored-by: Yuchen Fan <fyc0624@gmail.com>
Co-authored-by: Nipun Jindal <jindal.nipun@gmail.com>
Co-authored-by: njindal <njindal@adobe.com>
Co-authored-by: apolinário <joaopaulo.passos@gmail.com>
Co-authored-by: multimodalart <joaopaulo.passos+multimodal@gmail.com>
Co-authored-by: Xie Zejian <xiezej@gmail.com>
Co-authored-by: Jair Trejo <jairtrejo@gmail.com>
Co-authored-by: Robert Dargavel Smith <teticio@gmail.com>
Co-authored-by: yiyixuxu <yixu@yis-macbook-pro.lan>
Co-authored-by: Joqsan <6027118+Joqsan@users.noreply.github.com>
Co-authored-by: NimenDavid <312648004@qq.com>
Co-authored-by: M. Tolga Cangöz <46008593+standardAI@users.noreply.github.com>
Co-authored-by: timegate <timegate@kaist.ac.kr>
Co-authored-by: Jason Kuan <jason9075@users.noreply.github.com>
Co-authored-by: Ilia Larchenko <41329713+IliaLarchenko@users.noreply.github.com>
Co-authored-by: Horace He <horacehe2007@yahoo.com>
Co-authored-by: Umar <55330742+mu94-csl@users.noreply.github.com>
Co-authored-by: Mylo <36931363+gitmylo@users.noreply.github.com>
Co-authored-by: Markus Pobitzer <markuspobitzer@gmail.com>
Co-authored-by: Cheng Lu <lucheng.lc15@gmail.com>
Co-authored-by: Isamu Isozaki <isamu.website@gmail.com>
Co-authored-by: Cesar Aybar <csaybar@gmail.com>
Co-authored-by: Will Rice <will@spokestack.io>
Co-authored-by: yiyixuxu <yixu310@gmail.com>
Co-authored-by: Rupert Menneer <71332436+rupertmenneer@users.noreply.github.com>
Co-authored-by: sudowind <wfpkueecs@163.com>
Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
Co-authored-by: Laureηt <laurentfainsin@protonmail.com>
Co-authored-by: Jongwoo Han <jongwooo.han@gmail.com>
Co-authored-by: Laureηt <laurent@fainsin.bzh>
Co-authored-by: superlabs-dev <133080491+superlabs-dev@users.noreply.github.com>
Co-authored-by: Dev Aggarwal <devxpy@gmail.com>
Co-authored-by: Vimarsh Chaturvedi <vimarsh.c@gmail.com>
Co-authored-by: 7eu7d7 <31194890+7eu7d7@users.noreply.github.com>
Co-authored-by: cmdr2 <shashank.shekhar.global@gmail.com>
Co-authored-by: wfng92 <43742196+wfng92@users.noreply.github.com>
Co-authored-by: Glaceon-Hyy <ffheyy0017@gmail.com>
Co-authored-by: yueyang.hyy <yueyang.hyy@alibaba-inc.com>
Co-authored-by: StAlKeR7779 <stalkek7779@yandex.ru>
Co-authored-by: Isotr0py <41363108+Isotr0py@users.noreply.github.com>
Co-authored-by: w4ffl35 <w4ffl35@ml1.net>
Co-authored-by: Seongsu Park <tjdtnsu@gmail.com>
Co-authored-by: Chanran Kim <seriousran@gmail.com>
Co-authored-by: Ambrosiussen <paul@ambrosiussen.com>
Co-authored-by: Hari Krishna <37787894+hari10599@users.noreply.github.com>
Co-authored-by: Adrià Arrufat <1671644+arrufat@users.noreply.github.com>
Co-authored-by: At-sushi <dkahw210@kyoto.zaq.ne.jp>
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
Co-authored-by: takuoko <to78314910@gmail.com>
Co-authored-by: Birch-san <Birch-san@users.noreply.github.com>
* add attnprocessor to docs
* fix path to class
* create separate page for attnprocessors
* fix path
* fix path for real
* fill in docstrings
* apply feedback
* apply feedback
* Add default to inpaint
* Make sure controlnet also works with normal sd for inpaint
* Add tests
* improve
* Correct encode images function
* Correct inpaint controlnet
* Improve text2img inpaint
* make style
* up
* up
* up
* up
* fix more
* Run ControlNet compile test in a separate subprocess
`torch.compile()` spawns several subprocesses and the GPU memory used
was not reclaimed after the test ran. This approach was taken from
`transformers`.
* Style
* Prepare a couple more compile tests to run in subprocess.
* Use require_torch_2 decorator.
* Test inpaint_compile in subprocess.
* Run img2img compile test in subprocess.
* Run stable diffusion compile test in subprocess.
* style
* Temporarily trigger on pr to test.
* Revert "Temporarily trigger on pr to test."
This reverts commit 82d76868dd.
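A hedged sketch of the subprocess pattern described above (test body and names are illustrative): run the compile-heavy part in a spawned child process so its GPU memory is returned to the system when the process exits:

```python
import multiprocessing

import torch

def _compile_test_body(queue):
    try:
        model = torch.compile(torch.nn.Linear(8, 8).to("cuda"))
        out = model(torch.randn(1, 8, device="cuda"))
        queue.put(tuple(out.shape) == (1, 8))
    except Exception:
        queue.put(False)

def test_compile_in_subprocess():
    ctx = multiprocessing.get_context("spawn")
    queue = ctx.Queue()
    proc = ctx.Process(target=_compile_test_body, args=(queue,))
    proc.start()
    proc.join()
    assert queue.get(timeout=5)
```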
* Cross-attention masks
prefer qualified symbol, fix accidental Optional
prefer qualified symbol in AttentionProcessor
prefer qualified symbol in embeddings.py
qualified symbol in transformed_2d
qualify FloatTensor in unet_2d_blocks
move new transformer_2d params attention_mask, encoder_attention_mask to the end of the section which is assumed (e.g. by functions such as checkpoint()) to have a stable positional param interface. regard return_dict as a special-case which is assumed to be injected separately from positional params (e.g. by create_custom_forward()).
move new encoder_attention_mask param to end of CrossAttn block interfaces and Unet2DCondition interface, to maintain positional param interface.
regenerate modeling_text_unet.py
remove unused import
unet_2d_condition encoder_attention_mask docs
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
versatile_diffusion/modeling_text_unet.py encoder_attention_mask docs
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
transformer_2d encoder_attention_mask docs
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
unet_2d_blocks.py: add parameter name comments
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
revert description. bool-to-bias treatment happens in unet_2d_condition only.
comment parameter names
fix copies, style
* encoder_attention_mask for SimpleCrossAttnDownBlock2D, SimpleCrossAttnUpBlock2D
* encoder_attention_mask for UNetMidBlock2DSimpleCrossAttn
* support attention_mask, encoder_attention_mask in KCrossAttnDownBlock2D, KCrossAttnUpBlock2D, KAttentionBlock. fix binding of attention_mask, cross_attention_kwargs params in KCrossAttnDownBlock2D, KCrossAttnUpBlock2D checkpoint invocations.
* fix mistake made during merge conflict resolution
* regenerate versatile_diffusion
* pass time embedding into checkpointed attention invocation
* always assume encoder_attention_mask is a mask (i.e. not a bias).
* style, fix-copies
* add tests for cross-attention masks
* add test for padding of attention mask
* explain mask's query_tokens dim. fix explanation about broadcasting over channels; we actually broadcast over query tokens
* support both masks and biases in Transformer2DModel#forward. document behaviour
* fix-copies
* delete attention_mask docs on the basis I never tested self-attention masking myself. not comfortable explaining it, since I don't actually understand how a self-attn mask can work in its current form: the key length will be different in every ResBlock (we don't downsample the mask when we downsample the image).
* review feedback: the standard Unet blocks shouldn't pass temb to attn (only to resnet). remove from KCrossAttnDownBlock2D,KCrossAttnUpBlock2D#forward.
* remove encoder_attention_mask param from SimpleCrossAttn{Up,Down}Block2D,UNetMidBlock2DSimpleCrossAttn, and mask-choice in those blocks' #forward, on the basis that they only do one type of attention, so the consumer can pass whichever type of attention_mask is appropriate.
* put attention mask padding back to how it was (since the SD use-case it enabled wasn't important, and it breaks the original unclip use-case). disable the test which was added.
* fix-copies
* style
* fix-copies
* put encoder_attention_mask param back into Simple block forward interfaces, to ensure consistency of forward interface.
* restore passing of emb to KAttentionBlock#forward, on the basis that removal caused test failures. restore also the passing of emb to checkpointed calls to KAttentionBlock#forward.
* make simple unet2d blocks use encoder_attention_mask, but only when attention_mask is None. this should fix UnCLIP compatibility.
* fix copies
* allow disk offload for diffuser models
* sort import
* add max_memory argument
* Changed sample[0] to images[0] (#3304)
A pipeline object stores the results in `images` not in `sample`.
Current code blocks don't work.
* Typo in tutorial (#3295)
* Torch compile graph fix (#3286)
* fix more
* Fix more
* fix more
* Apply suggestions from code review
* fix
* make style
* make fix-copies
* fix
* make sure torch compile
* Clean
* fix test
* Postprocessing refactor img2img (#3268)
* refactor img2img VaeImageProcessor.postprocess
* remove copy from for init, run_safety_checker, decode_latents
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
---------
Co-authored-by: yiyixuxu <yixu@yis-macbook-pro.lan>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* [Torch 2.0 compile] Fix more torch compile breaks (#3313)
* Fix more torch compile breaks
* add tests
* Fix all
* fix controlnet
* fix more
* Add Horace He as co-author.
>
>
Co-authored-by: Horace He <horacehe2007@yahoo.com>
* Add Horace He as co-author.
Co-authored-by: Horace He <horacehe2007@yahoo.com>
---------
Co-authored-by: Horace He <horacehe2007@yahoo.com>
* fix: scale_lr and sync example readme and docs. (#3299)
* fix: scale_lr and sync example readme and docs.
* fix doc link.
* Update stable_diffusion.mdx (#3310)
fixed import statement
* Fix missing variable assign in DeepFloyd-IF-II (#3315)
Fix missing variable assign
lol
* Correct doc build for patch releases (#3316)
Update build_documentation.yml
* Add Stable Diffusion RePaint to community pipelines (#3320)
* Add Stable Diffsuion RePaint to community pipelines
- Adds Stable Diffsuion RePaint to community pipelines
- Add Readme enty for pipeline
* Fix: Remove wrong import
- Remove wrong import
- Minor change in comments
* Fix: Code formatting of stable_diffusion_repaint
* Fix: ruff errors in stable_diffusion_repaint
* Fix multistep dpmsolver for cosine schedule (suitable for deepfloyd-if) (#3314)
* fix multistep dpmsolver for cosine schedule (deepfloy-if)
* fix a typo
* Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* update all dpmsolver (singlestep, multistep, dpm, dpm++) for cosine noise schedule
* add test, fix style
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* [docs] Improve LoRA docs (#3311)
* update docs
* add to toctree
* apply feedback
* Added input pretubation (#3292)
* Added input pretubation
* Fixed spelling
* Update write_own_pipeline.mdx (#3323)
* update controlling generation doc with latest goodies. (#3321)
* [Quality] Make style (#3341)
* Fix config dpm (#3343)
* Add the SDE variant of DPM-Solver and DPM-Solver++ (#3344)
* add SDE variant of DPM-Solver and DPM-Solver++
* add test
* fix typo
* fix typo
* Add upsample_size to AttnUpBlock2D, AttnDownBlock2D (#3275)
The argument `upsample_size` needs to be added to these modules to allow compatibility with other blocks that require this argument.
* Rename --only_save_embeds to --save_as_full_pipeline (#3206)
* Set --only_save_embeds to False by default
Due to how the option is named, it makes more sense to behave like this.
* Refactor only_save_embeds to save_as_full_pipeline
* [AudioLDM] Generalise conversion script (#3328)
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Fix TypeError when using prompt_embeds and negative_prompt (#2982)
* test: Added test case
* fix: fixed type checking issue on _encode_prompt
* fix: fixed copies consistency
* fix: one copy was not sufficient
* Fix pipeline class on README (#3345)
Update README.md
* Inpainting: typo in docs (#3331)
Typo in docs
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Add `use_Karras_sigmas` to LMSDiscreteScheduler (#3351)
* add karras sigma to lms discrete scheduler
* add test for lms_scheduler karras
* reformat test lms
* Batched load of textual inversions (#3277)
* Batched load of textual inversions
- Only call resize_token_embeddings once per batch as it is the most expensive operation
- Allow pretrained_model_name_or_path and token to be an optional list
- Remove Dict from type annotation pretrained_model_name_or_path as it was not supported in this function
- Add comment that single files (e.g. .pt/.safetensors) are supported
- Add comment for token parameter
- Convert token override log message from warning to info
* Update src/diffusers/loaders.py
Check for duplicate tokens
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update condition for None tokens
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* make fix-copies
* [docs] Fix docstring (#3334)
fix docstring
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* if dreambooth lora (#3360)
* update IF stage I pipelines
add fixed variance schedulers and lora loading
* added kv lora attn processor
* allow loading into alternative lora attn processor
* make vae optional
* throw away predicted variance
* allow loading into added kv lora layer
* allow load T5
* allow pre compute text embeddings
* set new variance type in schedulers
* fix copies
* refactor all prompt embedding code
class prompts are now included in pre-encoding code
max tokenizer length is now configurable
embedding attention mask is now configurable
* fix for when variance type is not defined on scheduler
* do not pre compute validation prompt if not present
* add example test for if lora dreambooth
* add check for train text encoder and pre compute text embeddings
* Postprocessing refactor all others (#3337)
* add text2img
* fix-copies
* add
* add all other pipelines
* add
* add
* add
* add
* add
* make style
* style + fix copies
---------
Co-authored-by: yiyixuxu <yixu310@gmail,com>
* [docs] Improve safetensors docstring (#3368)
* clarify safetensor docstring
* fix typo
* apply feedback
* add: a warning message when using xformers in a PT 2.0 env. (#3365)
* add: a warning message when using xformers in a PT 2.0 env.
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* StableDiffusionInpaintingPipeline - resize image w.r.t height and width (#3322)
* StableDiffusionInpaintingPipeline now resizes input images and masks w.r.t to passed input height and width. Default is already set to 512. This addresses the common tensor mismatch error. Also moved type check into relevant funciton to keep main pipeline body tidy.
* Fixed StableDiffusionInpaintingPrepareMaskAndMaskedImageTests
Due to previous commit these tests were failing as height and width need to be passed into the prepare_mask_and_masked_image function, I have updated the code and added a height/width variable per unit test as it seemed more appropriate than the current hard coded solution
* Added a resolution test to StableDiffusionInpaintPipelineSlowTests
this unit test simply gets the input and resizes it into some that would fail (e.g. would throw a tensor mismatch error/not a mult of 8). Then passes it through the pipeline and verifies it produces output with correct dims w.r.t the passed height and width
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* make style
* [docs] Adapt a model (#3326)
* first draft
* apply feedback
* conv_in.weight thrown away
* [docs] Load safetensors (#3333)
* safetensors
* apply feedback
* apply feedback
* Apply suggestions from code review
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* make style
* [Docs] Fix stable_diffusion.mdx typo (#3398)
Fix typo in last code block. Correct "prommpts" to "prompt"
* Support ControlNet v1.1 shuffle properly (#3340)
* add inferring_controlnet_cond_batch
* Revert "add inferring_controlnet_cond_batch"
This reverts commit abe8d6311d.
* set guess_mode to True
whenever global_pool_conditions is True
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* nit
* add integration test
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* [Tests] better determinism (#3374)
* enable deterministic pytorch and cuda operations.
* disable manual seeding.
* make style && make quality for unet_2d tests.
* enable determinism for the unet2dconditional model.
* add CUBLAS_WORKSPACE_CONFIG for better reproducibility.
* relax tolerance (very weird issue, though).
* revert to torch manual_seed() where needed.
* relax more tolerance.
* better placement of the cuda variable and relax more tolerance.
* enable determinism for 3d condition model.
* relax tolerance.
* add: determinism to alt_diffusion.
* relax tolerance for alt diffusion.
* dance diffusion.
* dance diffusion is flaky.
* test_dict_tuple_outputs_equivalent edit.
* fix two more tests.
* fix more ddim tests.
* fix: argument.
* change to diff in place of difference.
* fix: test_save_load call.
* test_save_load_float16 call.
* fix: expected_max_diff
* fix: paint by example.
* relax tolerance.
* add determinism to 1d unet model.
* torch 2.0 regressions seem to be brutal
* determinism to vae.
* add reason to skipping.
* up tolerance.
* determinism to vq.
* determinism to cuda.
* determinism to the generic test pipeline file.
* refactor general pipelines testing a bit.
* determinism to alt diffusion i2i
* up tolerance for alt diff i2i and audio diff
* up tolerance.
* determinism to audioldm
* increase tolerance for audioldm lms.
* increase tolerance for paint by paint.
* increase tolerance for repaint.
* determinism to cycle diffusion and sd 1.
* relax tol for cycle diffusion 🚲
* relax tol for sd 1.0
* relax tol for controlnet.
* determinism to img var.
* relax tol for img variation.
* tolerance to i2i sd
* make style
* determinism to inpaint.
* relax tolerance for inpaiting.
* determinism for inpainting legacy
* relax tolerance.
* determinism to instruct pix2pix
* determinism to model editing.
* model editing tolerance.
* panorama determinism
* determinism to pix2pix zero.
* determinism to sag.
* sd 2. determinism
* sd. tolerance
* disallow tf32 matmul.
* relax tolerance is all you need.
* make style and determinism to sd 2 depth
* relax tolerance for depth.
* tolerance to diffedit.
* tolerance to sd 2 inpaint.
* up tolerance.
* determinism in upscaling.
* tolerance in upscaler.
* more tolerance relaxation.
* determinism to v pred.
* up tol for v_pred
* unclip determinism
* determinism to unclip img2img
* determinism to text to video.
* determinism to last set of tests
* up tol.
* vq cumsum doesn't have a deterministic kernel
* relax tol
* relax tol
* [docs] Add transformers to install (#3388)
add transformers to install
* [deepspeed] partial ZeRO-3 support (#3076)
* [deepspeed] partial ZeRO-3 support
* cleanup
* improve deepspeed fixes
* Improve
* make style
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Add omegaconf for tests (#3400)
Add omegaconfg
* Fix various bugs with LoRA Dreambooth and Dreambooth script (#3353)
* Improve checkpointing lora
* fix more
* Improve doc string
* Update src/diffusers/loaders.py
* make stytle
* Apply suggestions from code review
* Update src/diffusers/loaders.py
* Apply suggestions from code review
* Apply suggestions from code review
* better
* Fix all
* Fix multi-GPU dreambooth
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Fix all
* make style
* make style
---------
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Fix docker file (#3402)
* up
* up
* fix: deepseepd_plugin retrieval from accelerate state (#3410)
* [Docs] Add `sigmoid` beta_scheduler to docstrings of relevant Schedulers (#3399)
* Add `sigmoid` beta scheduler to `DDPMScheduler` docstring
* Add `sigmoid` beta scheduler to `RePaintScheduler` docstring
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Don't install accelerate and transformers from source (#3415)
* Don't install transformers and accelerate from source (#3414)
* Improve fast tests (#3416)
Update pr_tests.yml
* attention refactor: the trilogy (#3387)
* Replace `AttentionBlock` with `Attention`
* use _from_deprecated_attn_block check re: @patrickvonplaten
* [Docs] update the PT 2.0 optimization doc with latest findings (#3370)
* add: benchmarking stats for A100 and V100.
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* address patrick's comments.
* add: rtx 4090 stats
* ⚔ benchmark reports done
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* 3313 pr link.
* add: plots.
Co-authored-by: Pedro <pedro@huggingface.co>
* fix formattimg
* update number percent.
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Fix style rendering (#3433)
* Fix style rendering.
* Fix typo
* unCLIP scheduler do not use note (#3417)
* Replace deprecated command with environment file (#3409)
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* fix warning message pipeline loading (#3446)
* add stable diffusion tensorrt img2img pipeline (#3419)
* add stable diffusion tensorrt img2img pipeline
Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
* update docstrings
Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
---------
Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
* Refactor controlnet and add img2img and inpaint (#3386)
* refactor controlnet and add img2img and inpaint
* First draft to get pipelines to work
* make style
* Fix more
* Fix more
* More tests
* Fix more
* Make inpainting work
* make style and more tests
* Apply suggestions from code review
* up
* make style
* Fix imports
* Fix more
* Fix more
* Improve examples
* add test
* Make sure import is correctly deprecated
* Make sure everything works in compile mode
* make sure authorship is correctly attributed
* [Scheduler] DPM-Solver (++) Inverse Scheduler (#3335)
* Add DPM-Solver Multistep Inverse Scheduler
* Add draft tests for DiffEdit
* Add inverse sde-dpmsolver steps to tune image diversity from inverted latents
* Fix tests
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* [Docs] Fix incomplete docstring for resnet.py (#3438)
Fix incomplete docstrings for resnet.py
* fix tiled vae blend extent range (#3384)
fix tiled vae bleand extent range
* Small update to "Next steps" section (#3443)
Small update to "Next steps" section:
- PyTorch 2 is recommended.
- Updated improvement figures.
* Allow arbitrary aspect ratio in IFSuperResolutionPipeline (#3298)
* Update pipeline_if_superresolution.py
Allow arbitrary aspect ratio in IFSuperResolutionPipeline by using the input image shape
* IFSuperResolutionPipeline: allow the user to override the height and width through the arguments
* update IFSuperResolutionPipeline width/height doc string to match StableDiffusionInpaintPipeline conventions
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Adding 'strength' parameter to StableDiffusionInpaintingPipeline (#3424)
* Added explanation of 'strength' parameter
* Added get_timesteps function which relies on new strength parameter
* Added `strength` parameter which defaults to 1.
* Swapped ordering so `noise_timestep` can be calculated before masking the image
this is required when you aren't applying 100% noise to the masked region, e.g. strength < 1.
* Added strength to check_inputs, throws error if out of range
* Changed `prepare_latents` to initialise latents w.r.t strength
inspired from the stable diffusion img2img pipeline, init latents are initialised by converting the init image into a VAE latent and adding noise (based upon the strength parameter passed in), e.g. random when strength = 1, or the init image at strength = 0.
* WIP: Added a unit test for the new strength parameter in the StableDiffusionInpaintingPipeline
still need to add correct regression values
* Created a is_strength_max to initialise from pure random noise
* Updated unit tests w.r.t new strength parameter + fixed new strength unit test
* renamed parameter to avoid confusion with variable of same name
* Updated regression values for new strength test - now passes
* removed 'copied from' comment as this method is now different and divergent from the cpy
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Ensure backwards compatibility for prepare_mask_and_masked_image
created a return_image boolean and initialised to false
* Ensure backwards compatibility for prepare_latents
* Fixed copy check typo
* Fixes w.r.t backward compibility changes
* make style
* keep function argument ordering same for backwards compatibility in callees with copied from statements
* make fix-copies
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: William Berman <WLBberman@gmail.com>
* [WIP] Bugfix - Pipeline.from_pretrained is broken when the pipeline is partially downloaded (#3448)
Added bugfix using f strings.
* Fix gradient checkpointing bugs in freezing part of models (requires_grad=False) (#3404)
* gradient checkpointing bug fix
* bug fix; changes for reviews
* reformat
* reformat
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Make dreambooth lora more robust to orig unet (#3462)
* Make dreambooth lora more robust to orig unet
* up
* Reduce peak VRAM by releasing large attention tensors (as soon as they're unnecessary) (#3463)
Release large tensors in attention (as soon as they're no longer required). Reduces peak VRAM by nearly 2 GB for 1024x1024 (even after slicing), and the savings scale up with image size.
* Add min snr to text2img lora training script (#3459)
add min snr to text2img lora training script
* Add inpaint lora scale support (#3460)
* add inpaint lora scale support
* add inpaint lora scale test
---------
Co-authored-by: yueyang.hyy <yueyang.hyy@alibaba-inc.com>
* [From ckpt] Fix from_ckpt (#3466)
* Correct from_ckpt
* make style
* Update full dreambooth script to work with IF (#3425)
* Add IF dreambooth docs (#3470)
* parameterize pass single args through tuple (#3477)
* attend and excite tests disable determinism on the class level (#3478)
* dreambooth docs torch.compile note (#3471)
* dreambooth docs torch.compile note
* Update examples/dreambooth/README.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Update examples/dreambooth/README.md
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* add: if entry in the dreambooth training docs. (#3472)
* [docs] Textual inversion inference (#3473)
* add textual inversion inference to docs
* add to toctree
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* [docs] Distributed inference (#3376)
* distributed inference
* move to inference section
* apply feedback
* update with split_between_processes
* apply feedback
* [{Up,Down}sample1d] explicit view kernel size as number of elements in flattened indices (#3479)
explicit view kernel size as number of elements in flattened indices
* mps & onnx tests rework (#3449)
* Remove ONNX tests from PR.
They are already a part of push_tests.yml.
* Remove mps tests from PRs.
They are already performed on push.
* Fix workflow name for fast push tests.
* Extract mps tests to a workflow.
For better control/filtering.
* Remove --extra-index-url from mps tests
* Increase tolerance of mps test
This test passes in my Mac (Ventura 13.3) but fails in the CI hardware
(Ventura 13.2). I ran the local tests following the same steps that
exist in the CI workflow.
* Temporarily run mps tests on pr
So we can test.
* Revert "Temporarily run mps tests on pr"
Tests passed, go back to running on push.
---------
Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
Co-authored-by: Ilia Larchenko <41329713+IliaLarchenko@users.noreply.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: yiyixuxu <yixu@yis-macbook-pro.lan>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Horace He <horacehe2007@yahoo.com>
Co-authored-by: Umar <55330742+mu94-csl@users.noreply.github.com>
Co-authored-by: Mylo <36931363+gitmylo@users.noreply.github.com>
Co-authored-by: Markus Pobitzer <markuspobitzer@gmail.com>
Co-authored-by: Cheng Lu <lucheng.lc15@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: Isamu Isozaki <isamu.website@gmail.com>
Co-authored-by: Cesar Aybar <csaybar@gmail.com>
Co-authored-by: Will Rice <will@spokestack.io>
Co-authored-by: Adrià Arrufat <1671644+arrufat@users.noreply.github.com>
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
Co-authored-by: At-sushi <dkahw210@kyoto.zaq.ne.jp>
Co-authored-by: Lucca Zenóbio <luccazen@gmail.com>
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
Co-authored-by: Isotr0py <41363108+Isotr0py@users.noreply.github.com>
Co-authored-by: pdoane <pdoane2@gmail.com>
Co-authored-by: Will Berman <wlbberman@gmail.com>
Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: Rupert Menneer <71332436+rupertmenneer@users.noreply.github.com>
Co-authored-by: sudowind <wfpkueecs@163.com>
Co-authored-by: Takuma Mori <takuma104@gmail.com>
Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Laureηt <laurentfainsin@protonmail.com>
Co-authored-by: Jongwoo Han <jongwooo.han@gmail.com>
Co-authored-by: asfiyab-nvidia <117682710+asfiyab-nvidia@users.noreply.github.com>
Co-authored-by: clarencechen <clarencechenct@gmail.com>
Co-authored-by: Laureηt <laurent@fainsin.bzh>
Co-authored-by: superlabs-dev <133080491+superlabs-dev@users.noreply.github.com>
Co-authored-by: Dev Aggarwal <devxpy@gmail.com>
Co-authored-by: Vimarsh Chaturvedi <vimarsh.c@gmail.com>
Co-authored-by: 7eu7d7 <31194890+7eu7d7@users.noreply.github.com>
Co-authored-by: cmdr2 <shashank.shekhar.global@gmail.com>
Co-authored-by: wfng92 <43742196+wfng92@users.noreply.github.com>
Co-authored-by: Glaceon-Hyy <ffheyy0017@gmail.com>
Co-authored-by: yueyang.hyy <yueyang.hyy@alibaba-inc.com>
* DataLoader will now bake in any transforms or image manipulations contained in the EXIF metadata
Images may have rotations stored in EXIF. Training on such images without applying those transforms silently ignores the rotation and thus produces unexpected results (see the sketch after this commit block).
* Fixed the Dataloading EXIF issue in main DreamBooth training as well
* Run make style (black & isort)
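A minimal sketch of the EXIF fix, assuming a PIL-based image loader (the helper name is illustrative):

```python
from PIL import Image, ImageOps


def load_training_image(path: str) -> Image.Image:
    image = Image.open(path)
    # bake the EXIF orientation into the pixel data so later transforms see the rotated image
    image = ImageOps.exif_transpose(image)
    return image.convert("RGB")
```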
* Fix DPM single
* add test
* fix one more bug
* Apply suggestions from code review
Co-authored-by: StAlKeR7779 <stalkek7779@yandex.ru>
---------
Co-authored-by: StAlKeR7779 <stalkek7779@yandex.ru>
* up
* fix more
* Apply suggestions from code review
* fix more
* fix more
* Check it
* Remove 16:8
* fix more
* fix more
* fix more
* up
* up
* Test only stable diffusion
* Test only two files
* up
* Try out spinning up processes that can be killed
* up
* Apply suggestions from code review
* up
* up
* refactor controlnet and add img2img and inpaint
* First draft to get pipelines to work
* make style
* Fix more
* Fix more
* More tests
* Fix more
* Make inpainting work
* make style and more tests
* Apply suggestions from code review
* up
* make style
* Fix imports
* Fix more
* Fix more
* Improve examples
* add test
* Make sure import is correctly deprecated
* Make sure everything works in compile mode
* make sure authorship is correctly attributed
* enable deterministic pytorch and cuda operations.
* disable manual seeding.
* make style && make quality for unet_2d tests.
* enable determinism for the unet2dconditional model.
* add CUBLAS_WORKSPACE_CONFIG for better reproducibility.
* relax tolerance (very weird issue, though).
* revert to torch manual_seed() where needed.
* relax more tolerance.
* better placement of the cuda variable and relax more tolerance.
* enable determinism for 3d condition model.
* relax tolerance.
* add: determinism to alt_diffusion.
* relax tolerance for alt diffusion.
* dance diffusion.
* dance diffusion is flaky.
* test_dict_tuple_outputs_equivalent edit.
* fix two more tests.
* fix more ddim tests.
* fix: argument.
* change to diff in place of difference.
* fix: test_save_load call.
* test_save_load_float16 call.
* fix: expected_max_diff
* fix: paint by example.
* relax tolerance.
* add determinism to 1d unet model.
* torch 2.0 regressions seem to be brutal
* determinism to vae.
* add reason to skipping.
* up tolerance.
* determinism to vq.
* determinism to cuda.
* determinism to the generic test pipeline file.
* refactor general pipelines testing a bit.
* determinism to alt diffusion i2i
* up tolerance for alt diff i2i and audio diff
* up tolerance.
* determinism to audioldm
* increase tolerance for audioldm lms.
* increase tolerance for paint by example.
* increase tolerance for repaint.
* determinism to cycle diffusion and sd 1.
* relax tol for cycle diffusion 🚲
* relax tol for sd 1.0
* relax tol for controlnet.
* determinism to img var.
* relax tol for img variation.
* tolerance to i2i sd
* make style
* determinism to inpaint.
* relax tolerance for inpainting.
* determinism for inpainting legacy
* relax tolerance.
* determinism to instruct pix2pix
* determinism to model editing.
* model editing tolerance.
* panorama determinism
* determinism to pix2pix zero.
* determinism to sag.
* sd 2. determinism
* sd. tolerance
* disallow tf32 matmul.
* relax tolerance is all you need.
* make style and determinism to sd 2 depth
* relax tolerance for depth.
* tolerance to diffedit.
* tolerance to sd 2 inpaint.
* up tolerance.
* determinism in upscaling.
* tolerance in upscaler.
* more tolerance relaxation.
* determinism to v pred.
* up tol for v_pred
* unclip determinism
* determinism to unclip img2img
* determinism to text to video.
* determinism to last set of tests
* up tol.
* vq cumsum doesn't have a deterministic kernel
* relax tol
* relax tol
* add inferring_controlnet_cond_batch
* Revert "add inferring_controlnet_cond_batch"
This reverts commit abe8d6311d.
* set guess_mode to True
whenever global_pool_conditions is True
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* nit
* add integration test
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* StableDiffusionInpaintingPipeline now resizes input images and masks w.r.t. the passed input height and width. The default is already set to 512. This addresses the common tensor mismatch error. Also moved the type check into the relevant function to keep the main pipeline body tidy.
* Fixed StableDiffusionInpaintingPrepareMaskAndMaskedImageTests
Due to the previous commit these tests were failing, as height and width now need to be passed into the prepare_mask_and_masked_image function. I have updated the code and added height/width variables per unit test, which seemed more appropriate than the existing hard-coded solution
* Added a resolution test to StableDiffusionInpaintPipelineSlowTests
this unit test simply takes the input and resizes it to a size that would otherwise fail (e.g. one that would throw a tensor mismatch error because it is not a multiple of 8), then passes it through the pipeline and verifies that the output has the correct dimensions w.r.t. the passed height and width
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* add: a warning message when using xformers in a PT 2.0 env.
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* update IF stage I pipelines
add fixed variance schedulers and lora loading
* added kv lora attn processor
* allow loading into alternative lora attn processor
* make vae optional
* throw away predicted variance
* allow loading into added kv lora layer
* allow load T5
* allow pre compute text embeddings
* set new variance type in schedulers
* fix copies
* refactor all prompt embedding code
class prompts are now included in pre-encoding code
max tokenizer length is now configurable
embedding attention mask is now configurable
* fix for when variance type is not defined on scheduler
* do not pre compute validation prompt if not present
* add example test for if lora dreambooth
* add check for train text encoder and pre compute text embeddings
* Batched load of textual inversions
- Only call resize_token_embeddings once per batch as it is the most expensive operation
- Allow pretrained_model_name_or_path and token to be an optional list
- Remove Dict from type annotation pretrained_model_name_or_path as it was not supported in this function
- Add comment that single files (e.g. .pt/.safetensors) are supported
- Add comment for token parameter
- Convert token override log message from warning to info (see the usage sketch after this commit block)
* Update src/diffusers/loaders.py
Check for duplicate tokens
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update condition for None tokens
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
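A hedged usage sketch of the batched API described above (the embedding sources and tokens are placeholders):

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# pass lists so resize_token_embeddings only runs once for the whole batch
pipe.load_textual_inversion(
    ["sd-concepts-library/cat-toy", "./my-embedding.safetensors"],  # placeholder sources
    token=["<cat-toy>", "<my-embedding>"],
)
```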
* Set --only_save_embeds to False by default
Due to how the option is named, it makes more sense to behave like this.
* Refactor only_save_embeds to save_as_full_pipeline
* fix multistep dpmsolver for cosine schedule (deepfloyd-if)
* fix a typo
* Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* update all dpmsolver (singlestep, multistep, dpm, dpm++) for cosine noise schedule
* add test, fix style
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Fix more torch compile breaks
* add tests
* Fix all
* fix controlnet
* fix more
* Add Horace He as co-author.
Co-authored-by: Horace He <horacehe2007@yahoo.com>
* Add Horace He as co-author.
Co-authored-by: Horace He <horacehe2007@yahoo.com>
---------
Co-authored-by: Horace He <horacehe2007@yahoo.com>
* fix more
* Fix more
* fix more
* Apply suggestions from code review
* fix
* make style
* make fix-copies
* fix
* make sure torch compile
* Clean
* fix test
* add constant lr with rules
* add constant with rules in TYPE_TO_SCHEDULER_FUNCTION
* add constant lr rate with rule
* hotfix code quality
* fix doc style
* change name constant_with_rules to piecewise constant
* Update Pix2PixZero Auto-correlation Loss
* Add Stable Diffusion DiffEdit pipeline
* Add draft documentation and import code
* Bugfixes and refactoring
* Add option to not decode latents in the inversion process
* Harmonize preprocessing
* Revert "Update Pix2PixZero Auto-correlation Loss"
This reverts commit b218062fed.
* Update annotations
* rename `compute_mask` to `generate_mask`
* Update documentation
* Update docs
* Update Docs
* Fix copy
* Change shape of output latents to batch first
* Update docs
* Add first draft for tests
* Bugfix and update tests
* Add `cross_attention_kwargs` support for all pipeline methods
* Fix Copies
* Add support for PIL image latents
Add support for mask broadcasting
Update docs and tests
Align `mask` argument to `mask_image`
Remove height and width arguments
* Enable MPS Tests
* Move example docstrings
* Fix test
* Fix test
* fix pipeline inheritance
* Harmonize `prepare_image_latents` with StableDiffusionPix2PixZeroPipeline
* Register modules set to `None` in config for `test_save_load_optional_components`
* Move fixed logic to specific test class
* Clean changes to other pipelines
* Update new tests to coordinate with #2953
* Update slow tests for better results
* Safety to avoid potential problems with torch.inference_mode
* Add reference in SD Pipeline Overview
* Fix tests again
* Enforce determinism in noise for generate_mask
* Fix copies
* Widen test tolerance for fp16 based on `test_stable_diffusion_upscale_pipeline_fp16`
* Add LoraLoaderMixin and update `prepare_image_latents`
* clean up repeat and reg
* bugfix
* Remove invalid args from docs
Suppress spurious warning by repeating image before latent to mask gen
* EDICT pipeline initial commit
- Starting point taken from https://github.com/Joqsan/edict-diffusion
* refactor __init__() method
* minor refactoring
* refactor scheduler code
- remove scheduler and move its methods to the EDICTPipeline class
* make CFG optional
- refactor encode_prompt().
- include optional generator for sampling with vae.
- minor variable renaming
* add EDICT pipeline description to README.md
* replace preprocess() with VaeImageProcessor
* run make style and make quality commands
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* 👽 qol improvements for LoRA.
* better function name?
* fix: LoRA weight loading with the new format.
* address Patrick's comments.
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* change wording around encouraging the use of load_lora_weights().
* fix: function name.
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* [docs] add notes for stateful model changes
* Update docs/source/en/optimization/fp16.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* link to accelerate docs for discarding hooks
---------
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* removed unnecessary parameters from get_up_block and get_down_block functions
* adding resnet_skip_time_act, resnet_out_scale_factor and cross_attention_norm to get_up_block and get_down_block functions
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* add
* clean
* up
* clean up more
* fix more tests
* Improve docs further
* improve
* more fixes docs
* Improve docs more
* Update src/diffusers/models/unet_2d_condition.py
* fix
* up
* update doc links
* make fix-copies
* add safety checker and watermarker to stage 3 doc page code snippets
* speed optimizations docs
* memory optimization docs
* make style
* add watermarking snippets to doc string examples
* make style
* use pt_to_pil helper functions in doc strings
* skip mps tests
* Improve safety
* make style
* new logic
* fix
* fix bad onnx design
* make new stable diffusion upscale pipeline model arguments optional
* define has_nsfw_concept when non-pil output type
* lowercase linked to notebook name
---------
Co-authored-by: William Berman <WLBberman@gmail.com>
When the token used for textual inversion does not have any special symbols (e.g. it is not surrounded by <>), the tokenizer does not properly split the replacement tokens. Adding a space for the padding tokens fixes this.
* Add karras pattern to discrete heun scheduler
* Add integration test
* Fix failing CI on pytorch test on M1 (mps)
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
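Enabling the new Karras sigma pattern on the Heun scheduler might look like this (a usage sketch; the checkpoint is only an example and assumes the `use_karras_sigmas` flag added here):

```python
from diffusers import HeunDiscreteScheduler, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# rebuild the scheduler from the pipeline config with the Karras sigma schedule enabled
pipe.scheduler = HeunDiscreteScheduler.from_config(pipe.scheduler.config, use_karras_sigmas=True)
```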
* add: LoRA text encoder support for DreamBooth example.
* fix initialization.
* fix: modification call.
* add: entry in the readme.
* use dog dataset from hub.
* fix: params to clip.
* add entry to the LoRA doc.
* add: tests for lora.
* remove unnecessary list comprehension.
* Update Pix2PixZero Auto-correlation Loss
* Add fast inversion tests
* Clarify purpose and mark as deprecated
Fix inversion prompt broadcasting
* Register modules set to `None` in config for `test_save_load_optional_components`
* Update new tests to coordinate with #2953
* Added distillation for quantization example on textual inversion.
Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>
* refined readme and code style.
Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>
* Update text2images.py
* refined code of model load and added compatibility check.
Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>
* fixed code style.
Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>
* fix C403 [*] Unnecessary `list` comprehension (rewrite as a `set` comprehension)
Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>
---------
Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>
controlnet training center crop input images to multiple of 8
The pipeline code resizes inputs to multiples of 8.
Not doing this resizing in the training script is causing
the encoded image to have different height/width dimensions
than the encoded conditioning image (which uses a separate
encoder that's part of the controlnet model).
We resize and center crop the inputs to make sure they're the
same size (as well as all other images in the batch). We also
check that the initial resolution is a multiple of 8.
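A sketch of the corresponding training-side preprocessing (illustrative; the exact transforms in the script may differ slightly):

```python
from torchvision import transforms

resolution = 512
assert resolution % 8 == 0, "initial resolution must be a multiple of 8"

# resize then center crop so the image and the conditioning image end up the same size
image_transforms = transforms.Compose(
    [
        transforms.Resize(resolution, interpolation=transforms.InterpolationMode.BILINEAR),
        transforms.CenterCrop(resolution),
        transforms.ToTensor(),
        transforms.Normalize([0.5], [0.5]),
    ]
)
```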
* Modified the AltDiffusion pipeline to support altdiffusion-m18
---------
Co-authored-by: root <fulong_ye@163.com>
* Add SD/txt2img Community Pipeline to diffusers along with TensorRT utils
Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
* update installation command
Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
* update tensorrt installation
Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
* changes
1. Update setting of cache directory
2. Address comments: merge utils and pipeline code.
3. Address comments: Add section in README
Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
* apply make style
Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
---------
Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* add mixin class for pipeline from original sd ckpt
* Improve
* make style
* merge main into
* Improve more
* fix more
* up
* Apply suggestions from code review
* finish docs
* rename
* make style
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* [ckpt loader] Allow loading the Inpaint and Img2Img pipelines, while loading a ckpt model
* Address review comment from PR
* PyLint formatting
* Some more pylint fixes, unrelated to our change
* Another pylint fix
* Styling fix
Adding act fn config to the unet timestep class embedding and conv
activation.
The custom activation defaults to silu which is the default
activation function for both the conv act and the timestep class
embeddings so default behavior is not changed.
The only unet which use the custom activation is the stable diffusion
latent upscaler https://huggingface.co/stabilityai/sd-x2-latent-upscaler/blob/main/unet/config.json
(I ran a script against the hub to confirm).
The latent upscaler does not use the conv activation nor the timestep
class embeddings so we don't change its behavior.
add custom timesteps test
add custom timesteps descending order check
docs
timesteps -> custom_timesteps
can only pass one of num_inference_steps and timesteps
* add guess mode (WIP)
* fix uncond/cond order
* support guidance_scale=1.0 and batch != 1
* remove magic coeff
* add docstring
* add integration test
* add document to controlnet.mdx
* made the comments a bit more explanatory
* fix table
* WIP controlnet training
- bugfix --streaming
- bugfix running report_to!='wandb'
- adds memory profile before validation
* Adds final logging statement.
* Sets train epochs to 11.
Looking at a longer ~16ep run, we see only good validation images
after ~11ep:
https://wandb.ai/andsteing/controlnet_fill50k/runs/3j2hx6n8
* Removes --logging_dir (it's not used).
* Adds --profile flags.
* Updates --output_dir=runs/fill-circle-{timestamp}.
* Compute mean of `train_metrics`.
Previously `train_metrics[-1]` was logged, resulting in very bumpy train
metrics.
* Improves logging a bit.
- adds l2_grads gradient norm logging
- adds steps_per_sec
- sets walltime as x coordinate of train/step
- logs controlnet_params config
* Adds --ccache (doesn't really help though).
* minor fix in controlnet flax example (#2986)
* fix the error when push_to_hub but not log validation
* controlnet_from_pt & controlnet_revision
* add intermediate checkpointing to the guide
* Bugfix --profile_steps
* Sets `RACKER_PROJECT_NAME='controlnet_fill50k'`.
* Logs fractional epoch.
* Adds relative `walltime` metric.
* Adds `StepTraceAnnotation` and uses `global_step` instead of `step`.
* Applied `black`.
* Streamlines commands in README a bit.
* Removes `--ccache`.
This makes only a very small difference (~1 min) with this model size, so removing
the option introduced in cdb3cc.
* Re-ran `black`.
* Update examples/controlnet/README.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Converts spaces to tab.
* Removes repeated args.
* Skips first step (compilation) in profiling
* Updates README with profiling instructions.
* Unifies tabs/spaces in README.
* Re-ran style & quality.
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* fix: norm group test for UNet3D.
* chore: speed up the panorama tests (fast).
* set default value of _test_inference_batch_single_identical.
* fix: batch_sizes default value.
* fix progress bar issue in pipeline_text_to_video_zero.py. Copy scheduler after first backward
* fix tensor loading in test_text_to_video_zero.py
* make style && make quality
* add support for prompt embeds to SD ONNX pipeline
* fix up the pipeline copies
* add prompt embeds param to other ONNX pipelines
* fix up prompt embeds param for SD upscaling ONNX pipeline
* add missing type annotations to ONNX pipes
* initial commit for lora test cases
* help a bit with lora for 3d
* fixed lora tests
* replaced redundant code
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* fix: norm group test for UNet3D.
* fix: unet rejig.
* fix: unwrapping when running validation inputs.
* unwrapping the unet too.
* fix: device.
* better unwrapping.
* unwrapping before ema.
* unwrapping.
* add: first draft for a better LoRA enabler.
* make fix-copies.
* feat: backward compatibility.
* add: entry to the docs.
* add: tests.
* fix: docs.
* fix: norm group test for UNet3D.
* feat: add support for flat dicts.
* add deprecation message instead of warning.
add group norm type to attention processor cross attention norm
This lets the cross attention norm use both a group norm block and a
layer norm block.
The group norm operates along the channels dimension
and requires input shape (batch size, channels, *) where as the layer norm with a single
`normalized_shape` dimension only operates over the least significant
dimension i.e. (*, channels).
The channels we want to normalize are the hidden dimension of the encoder hidden states.
By convention, the encoder hidden states are always passed as (batch size, sequence
length, hidden states).
This means the layer norm can operate on the tensor without modification, but the group
norm requires flipping the last two dimensions to operate on (batch size, hidden states, sequence length).
All existing attention processors will have the same logic and we can
consolidate it in a helper function `prepare_encoder_hidden_states`
prepare_encoder_hidden_states -> norm_encoder_hidden_states re: @patrickvonplaten
move norm_cross defined check to outside norm_encoder_hidden_states
add missing attn.norm_cross check
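A minimal sketch of the consolidated norm helper described above (assuming `attn.norm_cross` holds either an `nn.LayerNorm` or an `nn.GroupNorm`; the `norm_cross is None` check is left to the caller, as noted above):

```python
import torch
from torch import nn


def norm_encoder_hidden_states(attn, encoder_hidden_states: torch.Tensor) -> torch.Tensor:
    if isinstance(attn.norm_cross, nn.LayerNorm):
        # layer norm already operates over the trailing hidden dim: (batch, seq_len, hidden)
        return attn.norm_cross(encoder_hidden_states)
    if isinstance(attn.norm_cross, nn.GroupNorm):
        # group norm expects (batch, channels, *), so flip the last two dims around the call
        encoder_hidden_states = encoder_hidden_states.transpose(1, 2)
        encoder_hidden_states = attn.norm_cross(encoder_hidden_states)
        return encoder_hidden_states.transpose(1, 2)
    raise ValueError("unknown cross attention norm")
```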
* ⚙️chore(train_controlnet) fix typo in logger message
* ⚙️chore(models) refactor module order; make it match the calling order
When printing the BasicTransformerBlock to stdout, I think it's crucial that the attributes are shown in their proper (calling) order. Also, the "3. Feed Forward" comment previously made no sense: it should sit next to self.ff, but it was instead next to self.norm3
* correct many tests
* remove bogus file
* make style
* correct more tests
* finish tests
* fix one more
* make style
* make unclip deterministic
* ⚙️chore(models/attention) reorganize comments in BasicTransformerBlock class
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* add only cross attention to simple attention blocks
* add test for only_cross_attention re: @patrickvonplaten
* mid_block_only_cross_attention better default
allow mid_block_only_cross_attention to default to
`only_cross_attention` when `only_cross_attention` is given
as a single boolean
* Fix invocation of some slow tests.
We use __call__ rather than pmapping the generation function ourselves
because the number of static arguments is different now.
* style
* `AttentionProcessor.group_norm` num_channels should be `query_dim`
The group_norm on the attention processor should really norm the number
of channels in the query, _not_ the inner dim. This wasn't caught before
because the group_norm is only used by the added-KV attention processors,
and those processors are only used by the Karlo models, which are
configured such that the inner dim is the same as the query dim.
* add_{k,v}_proj should be projecting to inner_dim
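In code, the dimension conventions described above look roughly like this (the concrete dimensions are made up for illustration):

```python
from torch import nn

query_dim, inner_dim, cross_attention_dim = 320, 640, 1024

# the group norm normalizes the query channels, not the inner dim
group_norm = nn.GroupNorm(num_channels=query_dim, num_groups=32, eps=1e-5, affine=True)

# the added key/value projections map the cross-attention features to the inner dim
add_k_proj = nn.Linear(cross_attention_dim, inner_dim)
add_v_proj = nn.Linear(cross_attention_dim, inner_dim)
```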
* [Config] Fix config prints and save, load
* Only use potential nn.Modules for dtype and device
* Correct vae image processor
* make sure in_channels is not accessed directly
* make sure in channels is only accessed via config
* Make sure schedulers only access config attributes
* Make sure to access config in SAG
* Fix vae processor and make style
* add tests
* uP
* make style
* Fix more naming issues
* Final fix with vae config
* change more
* add TextToVideoZeroPipeline and CrossFrameAttnProcessor
* add docs for text-to-video zero
* add teaser image for text-to-video zero docs
* Fix review changes. Add Documentation. Add test
* clean up the codes in pipeline_text_to_video.py. Add descriptive comments and docstrings
* make style && make quality
* make fix-copies
* make requested changes to docs. use huggingface server links for resources, delete res folder
* make style && make quality && make fix-copies
* make style && make quality
* Apply suggestions from code review
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* ensure validation image RGB not RGBA
* ensure validation image RGB not RGBA
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* add load textual inversion embeddings draft
* fix quality
* fix typo
* make fix copies
* move to textual inversion mixin
* make it accept from sd-concept library
* accept list of paths to embeddings
* fix styling of stable diffusion pipeline
* add dummy TextualInversionMixin
* add docstring to textualinversionmixin
* add case for parsing embedding from auto1111 UI format
Co-authored-by: Evan Jones <evan.a.jones3@gmail.com>
Co-authored-by: Ana Tamais <aninhamoraestamais@gmail.com>
* fix style after rebase
* move textual inversion mixin to loaders
* move mixin inheritance to DiffusionPipeline from StableDiffusionPipeline
* update dummy class name
* addressed all comments
* fix old dangling import
* fix style
* proposal
* remove bogus
* Apply suggestions from code review
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Will Berman <wlbberman@gmail.com>
* finish
* make style
* up
* fix code quality
* fix code quality - again
* fix code quality - 3
* fix alt diffusion code quality
* fix model editing pipeline
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Finish
---------
Co-authored-by: Evan Jones <evan.a.jones3@gmail.com>
Co-authored-by: Ana Tamais <aninhamoraestamais@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Will Berman <wlbberman@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Change the docs to use the parent DiffusionPipeline class when loading a checkpoint using from_pretrained() instead of a child class (e.g. StableDiffusionPipeline) where possible.
* Run make style to fix style issues.
* Change more docs to use DiffusionPipeline rather than a subclass.
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Allow user to disable SafetyChecker and enable dtypes if loading models from .ckpt or .safetensors
* Fix Import sorting (Ruff error)
* Get rid of the dtype conversion method, as it was already implemented all along
* Fix the docstring
* Fix ruff formatting
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Workaround for saving dynamo-wrapped models.
* Accept suggestion from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Apply workaround when overriding pipeline components.
* Ensure the correct config.json is saved to disk.
Instead of the dynamo class.
* Save correct module (not compiled one)
* Add test
* style
* fix docstrings
* Go back to using string comparisons.
PyTorch CPU does not have _dynamo.
* Simple test for save_pretrained of compiled models.
* Helper function to test whether module is compiled.
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* update docs to reflect the updated ckpts.
* update: point about prompt.
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Remove image resizing.
* Apply suggestions from code review
* Apply suggestions from code review
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
[2737]: Add DPMSolverMultistepScheduler to CLIP guided community pipelines
Co-authored-by: njindal <njindal@adobe.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
The 'CLIPFeatureExtractor' class name has been renamed to 'CLIPImageProcessor' in order to comply with future deprecation. This commit includes the necessary changes to the affected files.
* [MS Text To Video} Add first text to video
* upload
* make first model example
* match unet3d params
* make sure weights are correctly converted
* improve
* forward pass works, but diff result
* make forward work
* fix more
* finish
* refactor video output class.
* feat: add support for a video export utility.
* fix: opencv availability check.
* run make fix-copies.
* add: docs for the model components.
* add: standalone pipeline doc.
* edit docstring of the pipeline.
* add: right path to TransformerTempModel
* add: first set of tests.
* complete fast tests for text to video.
* fix bug
* up
* three fast tests failing.
* add: note on slow tests
* make work with all schedulers
* apply styling.
* add slow tests
* change file name
* update
* more correction
* more fixes
* finish
* up
* Apply suggestions from code review
* up
* finish
* make copies
* fix pipeline tests
* fix more tests
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* apply suggestions
* up
* revert
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* first refactor
* more text
* improve
* finish
* up
* up
* up
* up
* finish
* Apply suggestions from code review
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* up
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* finished
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* finished
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* updated black format
* update black format
* make style format
* updated line endings
* update code formatting
* Update examples/research_projects/onnxruntime/text_to_image/train_text_to_image.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/models/vae.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/models/vae.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* added vae gradient checkpointing test
* make style
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Will Berman <wlbberman@gmail.com>
* Adding `use_safetensors` argument to give more control to users
about which weights they use.
* Doc style.
* Rebased (not functional).
* Rebased and functional with tests.
* Style.
* Apply suggestions from code review
* Style.
* Addressing comments.
* Update tests/test_pipelines.py
Co-authored-by: Will Berman <wlbberman@gmail.com>
* Black ???
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Will Berman <wlbberman@gmail.com>
* fix AttnProcessor2_0
Fix use of AttnProcessor2_0 for cross attention with mask
* added scale_qk and out_bias flags
* fixed for xformers
* check if it has scale argument
* Update cross_attention.py
* check torch version
* fix sliced attn
* style
* set scale
* fix test
* fixed addedKV processor
* revert back AttnProcessor2_0
* if missing if
* fix inner_dim
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Add support for different model prediction types in DDIMInverseScheduler
Resolve alpha_prod_t_prev index issue for final step of inversion
* Fix old bug introduced when prediction type is "sample"
* Add support for sample clipping for numerical stability and deprecate old kwarg
* Detach sample, alphas, betas
Derive predicted noise from model output before dist. regularization
Style cleanup
* Log loss for debugging
* Revert "Log loss for debugging"
This reverts commit 76ea9c856f.
* Add comments
* Add inversion equivalence test
* Add expected data for Pix2PixZero pipeline tests with SD 2
* Update tests/pipelines/stable_diffusion/test_stable_diffusion_pix2pix_zero.py
* Remove cruft and add more explanatory comments
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update README.md
fix 2 bugs: (1) the "previous_noisy_sample" update should be inside the for loop on line 87; (2) the image should be converted to an integer dtype before "Image.fromarray" on line 91 (see the corrected loop sketch after this commit block)
* Apply suggestions from code review
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
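A sketch of the corrected loop (the checkpoint, step count, and default scheduler settings are illustrative, not the README's exact code):

```python
import numpy as np
import torch
from PIL import Image
from diffusers import DDPMScheduler, UNet2DModel

model = UNet2DModel.from_pretrained("google/ddpm-cat-256")
scheduler = DDPMScheduler()
scheduler.set_timesteps(50)

sample = torch.randn(1, model.config.in_channels, model.config.sample_size, model.config.sample_size)
for t in scheduler.timesteps:
    with torch.no_grad():
        noisy_residual = model(sample, t).sample
    # (1) the scheduler step and the sample update stay inside the loop
    sample = scheduler.step(noisy_residual, t, sample).prev_sample

# (2) convert to an integer dtype before handing the array to Image.fromarray
image = (sample / 2 + 0.5).clamp(0, 1)
image = (image.cpu().permute(0, 2, 3, 1).numpy()[0] * 255).round().astype(np.uint8)
Image.fromarray(image)
```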
* fix the in-place modification of the unet conditioning when using controlnet, which would otherwise cause backprop errors when training
* add clone to mid block
* fix-copies
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: William Berman <WLBberman@gmail.com>
* ema test cases.
* debugging messages.
* add: tests for ema.
* fix: optimization_step arg,
* handle device placement.
* Apply suggestions from code review
Co-authored-by: Will Berman <wlbberman@gmail.com>
* remove del and gc.
* address PR feedback.
* add: tests for serialization.
* fix: typos.
* skip_mps to serialization.
---------
Co-authored-by: Will Berman <wlbberman@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* support for List[ControlNetModel] on init()
* Add to support for multiple ControlNetCondition
* rename conditioning_scale to scale
* scaling bugfix
* Manually merge `MultiControlNet` #2621
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_controlnet.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_controlnet.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_controlnet.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_controlnet.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_controlnet.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_controlnet.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* cleanups
- don't expose ControlNetCondition
- move scaling to ControlNetModel
* correct make style error
* remove ControlNetCondition to reduce code diff
* refactoring image/cond_scale
* add explanation for `images`
* Add docstrings
* all fast-test passed
* Add a slow test
* nit
* Apply suggestions from code review
* small precision fix
* nits
MultiControlNet -> MultiControlNetModel - Matches existing naming a bit
closer
MultiControlNetModel inherit from model utils class - Don't have to
re-write fp16 test
Skip tests that save multi controlnet pipeline - Clearer than changing
test body
Don't auto-batch the number of input images to the number of controlnets.
We generally like to require the user to pass the expected number of
inputs. This simplifies the processing code a bit more
Use existing image pre-processing code a bit more. We can rely on the
existing image pre-processing code and keep the inference loop a bit
simpler.
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: William Berman <WLBberman@gmail.com>
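A rough sketch of the MultiControlNetModel idea described in these commits (not the actual implementation; it assumes diffusers' ControlNetModel forward signature with `return_dict=False`):

```python
import torch
from torch import nn


class MultiControlNetSketch(nn.Module):
    def __init__(self, controlnets):
        super().__init__()
        self.nets = nn.ModuleList(controlnets)

    def forward(self, sample, timestep, encoder_hidden_states, images, scales):
        down_res, mid_res = None, None
        # run each controlnet on its own conditioning image and sum the residuals
        for net, image, scale in zip(self.nets, images, scales):
            down, mid = net(
                sample,
                timestep,
                encoder_hidden_states=encoder_hidden_states,
                controlnet_cond=image,
                conditioning_scale=scale,
                return_dict=False,
            )
            if down_res is None:
                down_res, mid_res = down, mid
            else:
                down_res = [d_prev + d_new for d_prev, d_new in zip(down_res, down)]
                mid_res = mid_res + mid
        return down_res, mid_res
```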
* Improve dynamic threshold
* Update code
* Add dynamic threshold to ddim and ddpm
* Encapsulate and leverage code copy mechanism
Update style
* Clean up DDPM/DDIM constructor arguments
* add test
* also add to unipc
---------
Co-authored-by: Peter Lin <peterlin9863@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
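The dynamic thresholding wired into DDIM/DDPM/UniPC here follows the Imagen-style recipe; a standalone sketch (function and parameter names are illustrative):

```python
import torch


def dynamic_threshold(sample: torch.Tensor, ratio: float = 0.995, max_val: float = 1.0) -> torch.Tensor:
    # `sample` is the predicted x0 with shape (batch, channels, height, width)
    batch_size = sample.shape[0]
    abs_sample = sample.reshape(batch_size, -1).abs()

    s = torch.quantile(abs_sample, ratio, dim=1)      # per-sample dynamic threshold
    s = torch.clamp(s, min=max_val)                   # never shrink below the static [-1, 1] range
    s = s.view(batch_size, *([1] * (sample.ndim - 1)))

    # clamp to [-s, s] and rescale back into [-1, 1]
    return sample.clamp(-s, s) / s
```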
* [Onnx] add Stable Diffusion Upscale pipeline
* add a test for the OnnxStableDiffusionUpscalePipeline
* check for VAE config before adjusting scaling factor
* update test assertions, lint fixes
* run fix-copies target
* switch test checkpoint to one hosted on huggingface
* partially restore attention mask
* reshape embeddings after running text encoder
* add longer nightly test for ONNX upscale pipeline
* use package import to fix tests
* fix scheduler compatibility and class labels dtype
* use more precise type
* remove LMS from fast tests
* lookup latent and timestep types
* add docs for ONNX upscaling, rename lookup table
* replace deprecated pipeline names in ONNX docs
* [Model offload] Add nice warning
* Treat sequential and model offload differently.
Sequential raises an error because the operation would fail with a
cryptic warning later.
* Forcibly move to cpu when offloading.
* make style
* one more fix
* make fix-copies
* up
---------
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
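The difference between the two offload modes can be seen in a short usage sketch (the checkpoint name is only an example):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)

# model offload: whole sub-models are moved to the GPU one at a time (faster, moderate savings)
pipe.enable_model_cpu_offload()

# sequential offload (alternative): individual weights are streamed to the GPU (slowest, lowest VRAM);
# calling pipe.to("cuda") afterwards is what the new warning/error guards against
# pipe.enable_sequential_cpu_offload()
```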
* Initial commit
* removed images
* Made logging the same as save
* Removed logging function
* Quality fixes
* Quality fixes
* Tested
* Added support back for validation_epochs
* Fixing styles
* Did changes
* Change to log_validation
* Add extra space after wandb import
* Add extra space after wandb
Co-authored-by: Will Berman <wlbberman@gmail.com>
* Fixed spacing
---------
Co-authored-by: Will Berman <wlbberman@gmail.com>
* Tiled VAE for high-res text2img and img2img
* vae tiling, fix formatting
* enable_vae_tiling API and tests
* tiled vae docs, disable tiling for images that would have only one tile
* tiled vae tests, use channels_last memory format
* tiled vae tests, use smaller test image
* tiled vae tests, remove tiling test from fast tests
* up
* up
* make style
* Apply suggestions from code review
* Apply suggestions from code review
* Apply suggestions from code review
* make style
* improve naming
* finish
* apply suggestions
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* up
---------
Co-authored-by: Ilmari Heikkinen <ilmari@fhtr.org>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
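Typical usage of the new tiled VAE API, as a sketch (prompt and target size are arbitrary):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# decode/encode the latents tile by tile so very large images fit in VRAM
pipe.enable_vae_tiling()

image = pipe("a wide mountain panorama", height=1024, width=3072).images[0]
```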
* add scaffold
- copied convert_controlnet_to_diffusers.py from
convert_original_stable_diffusion_to_diffusers.py
* Add support to load ControlNet (WIP)
- this causes a Missing Key error on ControlNetModel
* Update to convert ControlNet without error msg
- init impl for StableDiffusionControlNetPipeline
- init impl for ControlNetModel
* cleanup of commented out
* split create_controlnet_diffusers_config()
from create_unet_diffusers_config()
- add config: hint_channels
* Add input_hint_block, input_zero_conv and
middle_block_out
- this causes a missing key error when loading the model
* add unet_2d_blocks_controlnet.py
- copied from unet_2d_blocks.py as impl CrossAttnDownBlock2D,DownBlock2D
- this causes a missing key error when loading the model
* Add loading for input_hint_block, zero_convs
and middle_block_out
- model loading now produces no error message
* Copy from UNet2DConditionalModel except __init__
* Add ultra primitive test for ControlNetModel
inference
* Support ControlNetModel inference
- without exceptions
* copy forward() from UNet2DConditionModel
* Impl ControlledUNet2DConditionModel inference
- test_controlled_unet_inference passed
* Frozen weight & biases for training
* Minimized version of ControlNet/ControlledUnet
- test_modules_controllnet.py passed
* make style
* Add support model loading for minimized ver
* Remove all previous version files
* from_pretrained and inference test passed
* copied from pipeline_stable_diffusion.py
except `__init__()`
* Impl pipeline, pixel match test (almost) passed.
* make style
* make fix-copies
* Fix to add import ControlNet blocks
for `make fix-copies`
* Remove einops dependency
* Support np.ndarray, PIL.Image for controlnet_hint
* set default config file as lllyasviel's
* Add support grayscale (hw) numpy array
* Add and update docstrings
* add control_net.mdx
* add control_net.mdx to toctree
* Update copyright year
* Fix to add PIL.Image RGB->BGR conversion
- thanks @Mystfit
* make fix-copies
* add basic fast test for controlnet
* add slow test for controlnet/unet
* Ignore down/up_block len check on ControlNet
* add a copy from test_stable_diffusion.py
* Accept controlnet_hint is None
* merge pipeline_stable_diffusion.py diff
* Update class name to SDControlNetPipeline
* make style
* Baseline fast test almost passed (w long desc)
* still needs investigation.
The following didn't pass, as described in the TODO comment:
- test_stable_diffusion_long_prompt
- test_stable_diffusion_no_safety_checker
The following didn't pass, same as stable_diffusion_pipeline:
- test_attention_slicing_forward_pass
- test_inference_batch_single_identical
- test_xformers_attention_forwardGenerator_pass
These failures seem to come from calculation accuracy.
* Add note comment related to vae_scale_factor
* add test_stable_diffusion_controlnet_ddim
* add assertion for vae_scale_factor != 8
* slow test of pipeline almost passed
Failed: test_stable_diffusion_pipeline_with_model_offloading
- ImportError: `enable_model_offload` requires `accelerate v0.17.0` or higher
but currently latest version == 0.16.0
* test_stable_diffusion_long_prompt passed
* test_stable_diffusion_no_safety_checker passed
- due to its model size, move to slow test
* remove PoC test files
* fix num_of_image, prompt length issue and add test
* add support List[PIL.Image] for controlnet_hint
* wip
* all slow test passed
* make style
* update for slow test
* RGB(PIL)->BGR(ctrlnet) conversion
* fixes
* remove manual num_images_per_prompt test
* add document
* add `image` argument docstring
* make style
* Add line to correct conversion
* add controlnet_conditioning_scale (aka control_scales
strength)
* rgb channel ordering by default
* image batching logic
* Add control image descriptions for each checkpoint
* Only save controlnet model in conversion script
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_controlnet.py
typo
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* add generated image example
* a depth mask -> a depth map
* rename control_net.mdx to controlnet.mdx
* fix toc title
* add ControlNet abstract and link
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_controlnet.py
Co-authored-by: dqueue <dbyqin@gmail.com>
* remove controlnet constructor arguments re: @patrickvonplaten
* [integration tests] test canny
* test_canny fixes
* [integration tests] test_depth
* [integration tests] test_hed
* [integration tests] test_mlsd
* add channel order config to controlnet
* [integration tests] test normal
* [integration tests] test_openpose test_scribble
* change height and width to default to conditioning image
* [integration tests] test seg
* style
* test_depth fix
* [integration tests] size fixes
* [integration tests] cpu offloading
* style
* generalize controlnet embedding
* fix conversion script
* Update docs/source/en/api/pipelines/stable_diffusion/controlnet.mdx
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Update docs/source/en/api/pipelines/stable_diffusion/controlnet.mdx
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Update docs/source/en/api/pipelines/stable_diffusion/controlnet.mdx
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Update docs/source/en/api/pipelines/stable_diffusion/controlnet.mdx
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Style adapted to the documentation of pix2pix
* merge main by hand
* style
* [docs] controlling generation doc nits
* correct some things
* add: controlnetmodel to autodoc.
* finish docs
* finish
* finish 2
* correct images
* finish controlnet
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* uP
* upload model
* up
* up
---------
Co-authored-by: William Berman <WLBberman@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: dqueue <dbyqin@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Use "hub" directory for cache instead of "diffusers"
* Import cache locations from huggingface_hub
I verified that the constants are available in huggingface_hub version
0.10.0, which is the minimum we require.
Co-authored-by: Lucain Pouget <lucainp@gmail.com>
* make style
* Move cached directories to new location.
* make style
* Apply suggestions by @Wauplin
Co-authored-by: Lucain <lucainp@gmail.com>
* Fix is_file
* Ignore symlinks.
Especially important if the user may want to invoke the process again later, e.g. when they keep multiple envs with different versions.
* Style
---------
Co-authored-by: Lucain Pouget <lucainp@gmail.com>
* Skip variant tests (UNet1d, UNetRL) on mps.
mish op not yet supported.
* Exclude a couple of panorama tests on mps
They are too slow for fast CI.
* Exclude mps panorama from more tests.
* mps: exclude all fast panorama tests as they keep failing.
* add sdpa processor
* don't use it by default
* add some checks and style
* typo
* support torch sdpa in dreambooth example
* use torch attn proc by default when available
* typo
* add attn mask
* fix naming
* begin doc
* doc
* Apply suggestions from code review
* polish
* toctree
* Apply suggestions from code review
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* better name
* style
* add benchmark table
* Update docs/source/en/optimization/torch2.0.mdx
* up
* fix example
* check if processor is None
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* add fp32 benchmark
* Apply suggestions from code review
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
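At its core the new processor delegates to PyTorch 2.0's fused kernel; a minimal sketch, assuming tensors shaped `(batch, heads, seq_len, head_dim)`:

```python
import torch
import torch.nn.functional as F


def sdpa_attention(query, key, value, attn_mask=None):
    # uses the fused/flash kernels when available, falling back to the math implementation otherwise
    return F.scaled_dot_product_attention(
        query, key, value, attn_mask=attn_mask, dropout_p=0.0, is_causal=False
    )


# tiny smoke test with made-up shapes
q = torch.randn(2, 8, 77, 64)
out = sdpa_attention(q, q, q)
```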
* attend and excite pipeline
* update
update docstring example
remove visualization
remove the base class attention control
remove dependency on stable diffusion pipeline
always apply gaussian filter with default setting
remove run_standard_sd argument
hardcode attention_res and scale_range (related to step size)
Update docs/source/en/api/pipelines/stable_diffusion/attend_and_excite.mdx
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Update tests/pipelines/stable_diffusion_2/test_stable_diffusion_attend_and_excite.py
Co-authored-by: Will Berman <wlbberman@gmail.com>
Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py
Co-authored-by: Will Berman <wlbberman@gmail.com>
Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py
Co-authored-by: Will Berman <wlbberman@gmail.com>
revert test_float16_inference
revert change to the batch related tests
fix test_float16_inference
handle batch
remove the deprecation message
remove None check, step_size
remove debugging logging
add slow test
indices_to_alter -> indices
add check_input
* skip mps
* style
* Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* indices -> token_indices
---------
Co-authored-by: evin <evinpinarornek@gmail.com>
Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* [Dummy imports] Add missing if/else statements for SD
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
---------
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* add UniPC scheduler
* add the return type to the functions
* code quality check
* add tests
* finish docs
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* add store and restore() methods to EMAModel.
* Update src/diffusers/training_utils.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* make style with doc builder
* remove explicit listing.
* Apply suggestions from code review
Co-authored-by: Will Berman <wlbberman@gmail.com>
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* chore: better variable naming.
* better treatment of temp_stored_params
Co-authored-by: patil-suraj <surajp815@gmail.com>
* make style
* remove temporary params from earth 🌎
* make fix-copies.
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Will Berman <wlbberman@gmail.com>
Co-authored-by: patil-suraj <surajp815@gmail.com>
* add: support for BLIP generation.
* add: support for editing synthetic images.
* remove unnecessary comments.
* add inits and run make fix-copies.
* version change of diffusers.
* fix: condition for loading the captioner.
* default conditions_input_image to False.
* guidance_amount -> cross_attention_guidance_amount
* fix inputs to check_inputs()
* fix: attribute.
* fix: prepare_attention_mask() call.
* debugging.
* better placement of references.
* remove torch.no_grad() decorations.
* put torch.no_grad() context before the first denoising loop.
* detach() latents before decoding them.
* put decoding in a torch.no_grad() context.
* add reconstructed image for debugging.
* no_grad()
* apply formatting.
* address one-off suggestions from the draft PR.
* back to torch.no_grad() and add more elaborate comments.
* refactor prepare_unet() per Patrick's suggestions.
* more elaborate description for .
* formatting.
* add docstrings to the methods specific to pix2pix zero.
* suspecting a redundant noise prediction.
* needed for gradient computation chain.
* less hacks.
* fix: attention mask handling within the processor.
* remove attention reference map computation.
* fix: cross attn args.
* fix: processor.
* store attention maps.
* fix: attention processor.
* update docs and better treatment to xa args.
* update the final noise computation call.
* change xa args call.
* remove xa args option from the pipeline.
* add: docs.
* first test.
* fix: url call.
* fix: argument call.
* remove image conditioning for now.
* 🚨 add: fast tests.
* explicit placement of the xa attn weights.
* add: slow tests 🐢
* fix: tests.
* edited direction embedding should be on the same device as prompt_embeds.
* debugging message.
* debugging.
* add pix2pix zero pipeline for a non-deterministic test.
* debugging/
* remove debugging message.
* make caption generation _
* address comments (part I).
* address PR comments (part II)
* fix: DDPM test assertion.
* refactor doc.
* address PR comments (part III).
* fix: type annotation for the scheduler.
* apply styling.
* skip_mps and add note on embeddings in the docs.
* add total number checkpoints to training scripts
* Update examples/dreambooth/train_dreambooth.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
There isn't a space between the "Scope" paragraph and "Ethical Guidelines" here: https://huggingface.co/docs/diffusers/main/en/conceptual/ethical_guidelines , yet I can't see the problem in the preview. This PR simply adds some spaces in the hope that it resolves the issue.
* initial docs about KarrasDiffusionSchedulers
* typo
* grammar
* Update docs/source/en/api/schedulers/overview.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* do not list the schedulers explicitly
---------
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Log Unconditional Image Generation Samples to WandB
* Check for wandb installation and parity between onnxruntime script
* Log epoch to wandb
* Check for tensorboard logger early on
* style fixes
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* pipeline_variant
* Add docs for when clip_stats_path is specified
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_unclip.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_unclip.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_unclip_img2img.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_unclip_img2img.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* prepare_latents # Copied from re: @patrickvonplaten
* NoiseAugmentor->ImageNormalizer
* stable_unclip_prior default to None re: @patrickvonplaten
* prepare_prior_extra_step_kwargs
* prior denoising scale model input
* {DDIM,DDPM}Scheduler -> KarrasDiffusionSchedulers re: @patrickvonplaten
* docs
* Update docs/source/en/api/pipelines/stable_unclip.mdx
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* unet check length input
* prep test file for changes
* correct all tests
* clean up
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* [LoRA] Freezing the model weights
Freeze the model weights since we don't need to calculate grads for them.
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Apply suggestions from code review
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Resolves ValueError: `num_inference_steps`: 1000 cannot be larger than `self.config.train_timesteps`: 50 as the unet model trained with this scheduler can only handle maximal 50 timesteps.
* EMA: fix `state_dict()` & add `cur_decay_value`
* EMA: fix a bug in `load_state_dict()`
'float' object (`state_dict["power"]`) has no attribute 'get'.
* del train_unconditional_ort.py
* Quality check and adding tokenizer
* Adapted stable diffusion to mixed precision + finished up style fixes
* Fixed based on patrick's review
* Fixed oom from number of validation images
* Removed unnecessary np.array conversion
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* better accelerated saving
* up
* finish
* finish
* uP
* up
* up
* fix
* Apply suggestions from code review
* correct ema
* Remove @
* up
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update docs/source/en/training/dreambooth.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
---------
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Fix torchvision.transforms and transforms function naming clash
* Update unconditional script for onnx
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Add center crop and horizontal flip to args
* Update command to use center crop and random flip
* Add center crop and horizontal flip to args
* Update command to use center crop and random flip
* Modify UNet2DConditionModel
- allow skipping mid_block
- adding a norm_group_size argument so that we can set the `num_groups` for group norm using `num_channels//norm_group_size`
- allow user to set dimension for the timestep embedding (`time_embed_dim`)
- the kernel_size for `conv_in` and `conv_out` is now configurable
- add random fourier feature layer (`GaussianFourierProjection`) for `time_proj`
- allow the user to add the time and class embeddings together before passing them through the projection layer - `time_embedding(t_emb + class_label)`
- added 2 arguments `attn1_types` and `attn2_types`
* currently we have the argument `only_cross_attention`: when it's set to `True`, the `BasicTransformerBlock` uses 2 cross-attention layers; otherwise we
get a self-attention followed by a cross-attention. In k-upscaler we need blocks that include just one cross-attention, or self-attention -> cross-attention,
so I added `attn1_types` and `attn2_types` to the unet's argument list to let the user specify the attention types for the 2 positions in each block; note that I still kept
the `only_cross_attention` argument for the unet for easy configuration, but it will be converted to `attn1_type` and `attn2_type` when passed down to the down blocks
- the position of downsample layer and upsample layer is now configurable
- in the k-upscaler unet, there is only one skip connection per up/down block (instead of one per layer as in the stable diffusion unet); added `skip_freq = "block"` to support
this use case
- if the user passes attention_mask to the unet, it will prepare the mask and pass a flag to the cross attention processor to skip the `prepare_attention_mask` step
inside the cross attention block
add up/down blocks for k-upscaler
modify CrossAttention class
- make the `dropout` layer in `to_out` optional
- `use_conv_proj` - use conv instead of linear for all projection layers (i.e. `to_q`, `to_k`, `to_v`, `to_out`) whenever possible. Note that when it's used for cross
attention, `to_k` and `to_v` have to be linear because the `encoder_hidden_states` is not 2d
- `cross_attention_norm` - add an optional layernorm on encoder_hidden_states
- `attention_dropout`: add an optional dropout on attention score
adapt BasicTransformerBlock
- add an ada group norm layer to condition the attention input on the timestep embedding
- allow skipping the FeedForward layer in between the attentions
- replaced the only_cross_attention argument with attn1_type and attn2_type for more flexible configuration
update timestep embedding: add new act_fn gelu and an optional act_2
modified ResnetBlock2D
- refactored with AdaGroupNorm class (the timestep scale shift normalization)
- add `mid_channel` argument - allow the first conv to have a different output dimension from the second conv
- add option to use AdaGroupNorm on the input instead of group norm
- add options to add a dropout layer after each conv
- allow user to set the bias in conv_shortcut (needed for k-upscaler)
- add gelu
adding conversion script for k-upscaler unet
add pipeline
* fix attention mask
* fix a typo
* fix a bug
* make sure model can be used with GPU
* make pipeline work with fp16
* fix an error in BasicTransformerBlock
* make style
* fix typo
* some more fixes
* uP
* up
* correct more
* some clean-up
* clean time proj
* up
* uP
* more changes
* remove the upcast_attention=True from unet config
* remove attn1_types, attn2_types etc
* fix
* revert incorrect changes up/down samplers
* make style
* remove outdated files
* Apply suggestions from code review
* attention refactor
* refactor cross attention
* Apply suggestions from code review
* update
* up
* update
* Apply suggestions from code review
* finish
* Update src/diffusers/models/cross_attention.py
* more fixes
* up
* up
* up
* finish
* more corrections of conversion state
* act_2 -> act_2_fn
* remove dropout_after_conv from ResnetBlock2D
* make style
* simplify KAttentionBlock
* add fast test for latent upscaler pipeline
* add slow test
* slow test fp16
* make style
* add doc string for pipeline_stable_diffusion_latent_upscale
* add api doc page for latent upscaler pipeline
* deprecate attention mask
* clean up embeddings
* simplify resnet
* up
* clean up resnet
* up
* correct more
* up
* up
* improve a bit more
* correct more
* more clean-ups
* Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* add docstrings for new unet config
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* # Copied from
* encode the image if not latent
* remove force casting vae to fp32
* fix
* add comments about preconditioning parameters from k-diffusion paper
* attn1_type, attn2_type -> add_self_attention
* clean up get_down_block and get_up_block
* fix
* fixed a typo(?) in ada group norm
* update slice attention processor for cross attention
* update slice
* fix fast test
* update the checkpoint
* finish tests
* fix-copies
* fix-copy for modeling_text_unet.py
* make style
* make style
* fix f-string
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* fix import
* correct changes
* fix resnet
* make fix-copies
* correct euler scheduler
* add missing #copied from for preprocess
* revert
* fix
* fix copies
* Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update src/diffusers/models/cross_attention.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* clean up conversion script
* KDownsample2d,KUpsample2d -> KDownsample2D,KUpsample2D
* more
* Update src/diffusers/models/unet_2d_condition.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* remove prepare_extra_step_kwargs
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* fix a typo in timestep embedding
* remove num_image_per_prompt
* fix fasttest
* make style + fix-copies
* fix
* fix xformer test
* fix style
* doc string
* make style
* fix-copies
* docstring for time_embedding_norm
* make style
* final finishes
* make fix-copies
* fix tests
---------
Co-authored-by: yiyixuxu <yixu@yis-macbook-pro.lan>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Related to #2124
The current implementation throws a shape mismatch error, which makes sense, as this line is obviously missing compared to XFormersCrossAttnProcessor and LoRACrossAttnProcessor.
I don't have formal tests, but I compared `LoRACrossAttnProcessor` and `LoRAXFormersCrossAttnProcessor` ad-hoc, and they produce the same results with this fix.
Flagged images would be set to the blank image instead of the original image that contained the NSFW concept for optional viewing.
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
scheduling_ddpm: fix variance in the case of learned_range type.
In the case of the learned_range variance type, the logs and the exponent are missing compared to the theory (see "Improved Denoising Diffusion
Probabilistic Models" section 3.1 equation 15:
https://arxiv.org/pdf/2102.09672.pdf).
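For reference, equation 15 of that paper interpolates the variance in log space, which is what the fix restores:
```latex
\Sigma_\theta(x_t, t) = \exp\big( v \log \beta_t + (1 - v) \log \tilde{\beta}_t \big)
```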
* Short doc on changing the scheduler in Flax.
* Apply fix from @patil-suraj
Co-authored-by: Suraj Patil <surajp815@gmail.com>
---------
Co-authored-by: Suraj Patil <surajp815@gmail.com>
-- This commit adopts `requests` in place of `wget` to fetch config `.yaml`
files as part of `load_pipeline_from_original_stable_diffusion_ckpt` API.
-- This was done because in Windows PowerShell one needs to explicitly ensure that the `wget` binary is part of the PATH variable. If it isn't, the code is unable
to download the `.yaml` config file.
Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
Co-authored-by: Abhishek Varma <abhishek@nod-labs.com>
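A minimal sketch of the approach, assuming the usual v1 inference config URL (illustrative, not the exact code from the PR):
```python
import requests

# Fetch the original Stable Diffusion config over HTTPS instead of shelling out to `wget`,
# which may not be on PATH in Windows PowerShell.
config_url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/configs/stable-diffusion/v1-inference.yaml"
response = requests.get(config_url)
response.raise_for_status()
with open("v1-inference.yaml", "w") as f:
    f.write(response.text)
```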
* Section on using LoRA alpha / scale.
* Accept suggestion
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Clarify on merge.
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
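A short usage sketch of the scale knob documented here, assuming LoRA attention weights have already been trained and saved locally (the path is hypothetical):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.unet.load_attn_procs("path/to/lora")  # hypothetical local LoRA weights

# `scale` blends the LoRA weights with the base model: 0.0 uses only the base model,
# 1.0 uses the full LoRA contribution.
image = pipe(
    "A pokemon with blue eyes", cross_attention_kwargs={"scale": 0.5}
).images[0]
```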
* make scaling factor a config arg of vae
* fix
* make flake happy
* fix ldm
* fix upscaler
* quality
* Apply suggestions from code review
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* solve conflicts, address some comments
* examples
* examples min version
* doc
* fix type
* typo
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_upscale.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* remove duplicate line
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
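A sketch of how the new config attribute is meant to be used instead of hard-coding 0.18215 in every pipeline (illustrative):
```python
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae")

image = torch.randn(1, 3, 512, 512)  # placeholder batch in [-1, 1]
with torch.no_grad():
    # Encode: scale latents by the value stored in the VAE config.
    latents = vae.encode(image).latent_dist.sample() * vae.config.scaling_factor
    # Decode: undo the scaling before running the decoder.
    decoded = vae.decode(latents / vae.config.scaling_factor).sample
```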
* Allow `UNet2DModel` to use arbitrary class embeddings.
We can currently use class conditioning in `UNet2DConditionModel`, but
not in `UNet2DModel`. However, `UNet2DConditionModel` requires text
conditioning too, which is unrelated to other types of conditioning.
This commit makes it possible for `UNet2DModel` to be conditioned on
entities other than timesteps. This is useful for training /
research purposes. We can currently train models to perform
unconditional image generation or text-to-image generation, but it's not
straightforward to train a model to perform class-conditioned image
generation, if text conditioning is not required.
We could potentially use `UNet2DConditionModel` for class-conditioning
without text embeddings by using down/up blocks without
cross-conditioning. However:
- The mid block currently requires cross attention.
- We are required to provide `encoder_hidden_states` to `forward`.
* Style
* Align class conditioning, add docstring for `num_class_embeds`.
* Copy docstring to versatile_diffusion UNetFlatConditionModel
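A minimal sketch of the new class-conditioning path (default block types assumed; purely illustrative):
```python
import torch
from diffusers import UNet2DModel

# `num_class_embeds` enables class conditioning without any text encoder.
model = UNet2DModel(
    sample_size=32,
    in_channels=3,
    out_channels=3,
    num_class_embeds=10,  # e.g. ten classes
)

sample = torch.randn(1, 3, 32, 32)
timestep = torch.tensor([10])
class_labels = torch.tensor([3])
noise_pred = model(sample, timestep, class_labels=class_labels).sample
```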
* make tests deterministic
* run slow tests
* prepare for testing
* finish
* refactor
* add print statements
* finish more
* correct some test failures
* more fixes
* set up to correct tests
* more corrections
* up
* fix more
* more prints
* add
* up
* up
* up
* uP
* uP
* more fixes
* uP
* up
* up
* up
* up
* fix more
* up
* up
* clean tests
* up
* up
* up
* more fixes
* Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* make
* correct
* finish
* finish
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* add text embeds to sd
* add text embeds to sd
* finish tests
* finish
* finish
* make style
* fix tests
* make style
* make style
* up
* better docs
* fix
* fix
* new try
* up
* up
* finish
* add: a doc on LoRA support in diffusers.
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* apply PR suggestions.
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* remove visually incoherent elements.
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* allow passing op to xFormers attention
original code by @patil-suraj
huggingface/diffusers@ae0cc0b71f
* correct style by `make style`
* add attention_op arg documents
* add usage example to docstring
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* add usage example to docstring
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* code style correction by `make style`
* Update docstring code to a valid python example
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Update docstring code to a valid python example
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* style correction by `make style`
* Update code example to be fully functional
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
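A usage sketch of the new `attention_op` argument, assuming xFormers is installed (illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline
from xformers.ops import MemoryEfficientAttentionFlashAttentionOp

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Pin a specific xFormers operator instead of letting xFormers dispatch automatically.
pipe.enable_xformers_memory_efficient_attention(
    attention_op=MemoryEfficientAttentionFlashAttentionOp
)
image = pipe("a photo of an astronaut riding a horse").images[0]
```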
* allow a local model directory to be used for merging
* moved checkpoint merge bugfix into main for testing
* possibly fix local variable "config_dict" referenced before assignment
* fix deprecation warning
* debugging...
* debugging
* allow safetensors
* safetensors try again
* fix syntax error
* further debugging
* fix logic error when checkpoint 2 is none
* more debugging...
* more debugging...
* more debugging...
* more debugging...
* debugging
* clean up status reporting
* skip the requires_safety_checker boolean
* moved checkpoint merge bugfix into main for testing
* possibly fix local variable "config_dict" referenced before assignment
* fix deprecation warning
* allow safetensors
* fix logic error when checkpoint 2 is none
* clean up status reporting
* undo hack to use private repo for community pipelines
* allow a local model directory to be used for merging
* allow safetensors
* clean up status reporting
* reformatted with black
* sort imported modules correctly
* Update examples/community/checkpoint_merger.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update examples/community/checkpoint_merger.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update examples/community/checkpoint_merger.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* fix import style error
Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Fix resuming state when using gradient checkpointing.
Also, allow --resume_from_checkpoint to be used when the checkpoint does
not yet exist (a normal training run will be started).
* style
* Dreambooth: use `optimizer.zero_grad(set_to_none=True)` to reduce VRAM usage
* Allow the user to control `optimizer.zero_grad(set_to_none=True)` with --set_grads_to_none
* Update Dreambooth readme
* Fix link in readme
* Fix header size in readme
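A minimal sketch of how the new flag is wired through to the optimizer (the flag name is from the commits above; the surrounding loop is illustrative):
```python
import argparse
import torch

parser = argparse.ArgumentParser()
parser.add_argument(
    "--set_grads_to_none",
    action="store_true",
    help="Save memory by setting grads to None instead of zeroing them.",
)
args = parser.parse_args(["--set_grads_to_none"])

model = torch.nn.Linear(4, 4)  # stand-in for the UNet
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

loss = model(torch.randn(1, 4)).sum()
loss.backward()
optimizer.step()
# The VRAM saving: gradient tensors are freed instead of being filled with zeros.
optimizer.zero_grad(set_to_none=args.set_grads_to_none)
```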
* Safetensors loading in "convert_diffusers_to_original_stable_diffusion"
Adds diffusers-format safetensors loading support
* Fix import sort order: convert_diffusers_to_original_stable_diffusion.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* example on fine-tuning with LoRA.
* apply make quality.
* fix: pipeline loading.
* Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* apply suggestions for PR review.
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* apply make style and make quality.
* chore: remove mention of dreambooth from text2image.
* add: weight path and wandb run link.
* Apply suggestions from code review
* apply make style.
* make style
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Correct Pix2Pix example
- no advertisement of revision -> it'll be deprecated soon
- by default safety checker should be used
* Update docs/source/en/api/pipelines/stable_diffusion/pix2pix.mdx
* up
* convert __main__ to a function call and call it
* add missing type hint
* make style check pass
* move loading to src/diffusers
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* improve EMA
* style
* one EMA model
* quality
* fix tests
* fix test
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* re organise the unconditional script
* backwards compatibility
* default to init values for some args
* fix ort script
* issubclass => isinstance
* update state_dict
* docstr
* doc
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* use .to if device is passed
* deprecate device
* make flake happy
* fix typo
Co-authored-by: patil-suraj <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* [Lora] first upload
* add first lora version
* upload
* more
* first training
* up
* correct
* improve
* finish loaders and inference
* up
* up
* fix more
* up
* finish more
* finish more
* up
* up
* change year
* revert year change
* Change lines
* Add cloneofsimo as co-author.
Co-authored-by: Simo Ryu <cloneofsimo@gmail.com>
* finish
* fix docs
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* upload
* finish
Co-authored-by: Simo Ryu <cloneofsimo@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* added dit model
* import
* initial pipeline
* initial convert script
* initial pipeline
* make style
* raise valueerror
* single function
* rename classes
* use DDIMScheduler
* timesteps embedder
* samples to cpu
* fix var names
* fix numpy type
* use timesteps class for proj
* fix typo
* fix arg name
* flip_sin_to_cos and better var names
* fix C shape cal
* make style
* remove unused imports
* cleanup
* add back patch_size
* initial dit doc
* typo
* Update docs/source/api/pipelines/dit.mdx
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* added copyright license headers
* added example usage and toc
* fix variable names asserts
* remove comment
* added docs
* fix typo
* upstream changes
* set proper device for drop_ids
* added initial dit pipeline test
* update docs
* fix imports
* make fix-copies
* isort
* fix imports
* get rid of more magic numbers
* fix code when guidance is off
* remove block_kwargs
* cleanup script
* removed to_2tuple
* use FeedForward class instead of another MLP
* style
* work on merging DiTBlock with BasicTransformerBlock
* added missing final_dropout and args to BasicTransformerBlock
* use norm from block
* fix arg
* remove unused arg
* fix call to class_embedder
* use timesteps
* make style
* attn_output gets multiplied
* removed commented code
* use Transformer2D
* use self.is_input_patches
* fix flags
* fixed conversion to use Transformer2DModel
* fixes for pipeline
* remove dit.py
* fix timesteps device
* use randn_tensor and fix fp16 inf.
* timesteps_emb already the right dtype
* fix dit test class
* fix test and style
* fix norm2 usage in vq-diffusion
* added author names to pipeline and imagenet labels link
* fix tests
* use norm_type as string
* rename dit to transformer
* fix name
* fix test
* set norm_type = "layer" by default
* fix tests
* do not skip common tests
* Update src/diffusers/models/attention.py
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* revert AdaLayerNorm API
* fix norm_type name
* make sure all components are in eval mode
* revert norm2 API
* compact
* finish deprecation
* add slow tests
* remove @
* refactor some stuff
* upload
* Update src/diffusers/pipelines/dit/pipeline_dit.py
* finish more
* finish docs
* improve docs
* finish docs
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: William Berman <WLBberman@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
re: https://github.com/huggingface/diffusers/issues/1857
We relax some of the checks to deal with unclip reproducibility issues, mainly by checking the average pixel difference (measured within 0-255) instead of the max pixel difference (measured within 0-1); see the sketch after this list.
- [x] add mixin to UnCLIPPipelineFastTests
- [x] add mixin to UnCLIPImageVariationPipelineFastTests
- [x] Move UnCLIPPipeline flags in mixin to base class
- [x] Small MPS fixes for F.pad and F.interpolate
- [x] Made test unCLIP model's dimensions smaller to run tests faster
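A sketch of the relaxed check, assuming decoded images in [0, 1] (the helper name and threshold are illustrative):
```python
import numpy as np

def assert_mean_pixel_difference(image, expected_image, threshold=10):
    # Compare in the 0-255 range and look at the *average* absolute difference,
    # which is far less sensitive to a few outlier pixels than a max-difference check.
    image = np.asarray(image, dtype=np.float32) * 255.0
    expected_image = np.asarray(expected_image, dtype=np.float32) * 255.0
    avg_diff = np.abs(image - expected_image).mean()
    assert avg_diff < threshold, f"Images too different: average pixel difference {avg_diff}"
```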
* [Stable Diffusion Img2Img] resize source images to integer multiple of 8 instead of 32
* [Alt Diffusion Img2Img] resize source images to multiple of 8 instead of 32
* [Img2Img] fix AltDiffusion Img2Img resolution test
* [Img2Img] add Stable Diffusion Img2Img resolution test
* [Cycle Diffusion] round resolution to multiplies of 8 instead of 32
* [ONNX SD Img2Img] round resolution to multiplies of 64 instead of 32
* [SD Depth2Img] round resolution to multiplies of 8 instead of 32
* [Repaint] round resolution to multiplies of 8 instead of 32
* fix make style
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
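A sketch of the rounding change described in the entries above; the VAE downsamples by 8, so latent dimensions only need to be divisible by 8 rather than 32 (illustrative preprocessing, not the exact pipeline code):
```python
from PIL import Image

def resize_to_multiple_of_8(image: Image.Image) -> Image.Image:
    w, h = image.size
    w, h = (x - x % 8 for x in (w, h))  # previously rounded down to multiples of 32
    return image.resize((w, h), resample=Image.LANCZOS)
```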
* from_flax
* oops
* oops
* make style with pip install -e ".[dev]"
* oops
* now code quality happy 😋
* allow_patterns += FLAX_WEIGHTS_NAME
* Update src/diffusers/pipelines/pipeline_utils.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/pipeline_utils.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/pipeline_utils.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/pipeline_utils.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/models/modeling_utils.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/pipeline_utils.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* for test
* bye bye is_flax_available()
* oops
* Update src/diffusers/models/modeling_pytorch_flax_utils.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update src/diffusers/models/modeling_pytorch_flax_utils.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update src/diffusers/models/modeling_pytorch_flax_utils.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update src/diffusers/models/modeling_utils.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update src/diffusers/models/modeling_utils.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* make style
* add test
* finish
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* implemented multi subject dreambooth in research_projects
* minor edits to readme
* added style and quality fixes
Co-authored-by: Krista Opsahl-Ong <kristaopsahlong@gmail.com>
* init for korean docs
* edit build yml file for multi language docs
* edit one more build yml file for multi language docs
* add title for get_frontmatter error
* add translating.md
* default language for docs is en
* Update docs/TRANSLATING.md
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* [Repro] Correct reproducibility
* up
* up
* uP
* up
* need better image
* allow conversion from no state dict checkpoints
* up
* up
* up
* up
* check tensors
* check tensors
* check tensors
* check tensors
* next try
* up
* up
* better name
* up
* up
* Apply suggestions from code review
* correct more
* up
* replace all torch randn
* fix
* correct
* correct
* finish
* fix more
* up
* init for korean docs
* edit build yml file for multi language docs
* edit one more build yml file for multi language docs
* add title for get_frontmatter error
* Various Fixes for Flax Dreambooth
- Correctly update the progress bar every epoch
- Allow specifying a pretrained VAE
- Allow specifying a revision to pretrained models
- Cache compiled models between invocations (speeds up TPU execution a lot!)
- Save intermediate checkpoints by specifying `save_steps`
* Don't die when save_steps is not set.
* Address CR
* Address comments
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Support training SD V2 with Flax
Mostly involves supporting a v_prediction scheduler.
The implementation in #1777 doesn't take into account a recent refactor of `scheduling_utils_flax`, so this should be used instead.
* Add to other top-level files.
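The core idea behind v_prediction support, shown here with the PyTorch `DDPMScheduler` for brevity (the Flax script performs the analogous computation with the stateless Flax scheduler):
```python
import torch
from diffusers import DDPMScheduler

scheduler = DDPMScheduler(prediction_type="v_prediction")

latents = torch.randn(2, 4, 64, 64)
noise = torch.randn_like(latents)
timesteps = torch.randint(0, scheduler.config.num_train_timesteps, (2,))
noisy_latents = scheduler.add_noise(latents, noise, timesteps)

# The training target depends on the scheduler's prediction type.
if scheduler.config.prediction_type == "epsilon":
    target = noise
elif scheduler.config.prediction_type == "v_prediction":
    target = scheduler.get_velocity(latents, noise, timesteps)
else:
    raise ValueError(f"Unknown prediction type {scheduler.config.prediction_type}")
```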
* [Deterministic torch randn] Allow tensors to be generated on CPU
* fix more
* up
* fix more
* up
* Update src/diffusers/utils/torch_utils.py
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
* Apply suggestions from code review
* up
* up
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* misc fixes
* more comments
* Update examples/textual_inversion/textual_inversion.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* set transformers verbosity to warning
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* allow using non-ema weights for training
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* address more review comment
* reorganise a few lines
* always pad text to max_length to match original training
* fix collate_fn
* remove unused code
* don't prepare ema_unet, don't register lr scheduler
* style
* assert => ValueError
* add allow_tf32
* set log level
* fix comment
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* [Unclip] Make sure latents can be reused
* allow one to directly pass embeddings
* up
* make unclip for text work
* finish allowing to pass embeddings
* correct more
* make style
* move files a bit
* more refactors
* fix more
* more fixes
* fix more onnx
* make style
* upload
* fix
* up
* fix more
* up again
* up
* small fix
* Update src/diffusers/__init__.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* correct
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* initial
* type hints
* update scheduler type hint
* add to README
* add example generation to README
* v -> mix_factor
* load scheduler from pretrained
* Make xformers optional even if it is available
* Raise exception if xformers is used but not available
* Rename use_xformers to enable_xformers_memory_efficient_attention
* Add a note about xformers in README
* Reformat code style
* Make safety_checker optional in more pipelines.
* Remove inappropriate comment in inpaint pipeline.
* InPaint Test: set feature_extractor to None.
* Remove import
* img2img test: set feature_extractor to None.
* inpaint sd2 test: set feature_extractor to None.
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* first proposal
* rename
* up
* Apply suggestions from code review
* better
* up
* finish
* up
* rename
* correct versatile
* up
* up
* up
* up
* fix
* Apply suggestions from code review
* make style
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* add error message
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* use repeat_interleave
* fix repeat
* Trigger Build
* don't install accelerate from main
* install released accelerate for mps test
* Remove additional accelerate installation from main.
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Section header for in-painting, inference from checkpoint.
* Inference: link to section to perform inference from checkpoint.
* Move Dreambooth in-painting instructions to the proper place.
* [Flax] Stateless schedulers, fixes and refactors
* Remove scheduling_common_flax and some renames
* Update src/diffusers/schedulers/scheduling_pndm_flax.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* expose polynomial:power and cosine_with_restarts:num_cycles using get_scheduler func, add it to train_dreambooth.py
* fix formatting
* fix style
* Update src/diffusers/optimization.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
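A usage sketch of the newly exposed knobs (optimizer and step counts are placeholders):
```python
import torch
from diffusers.optimization import get_scheduler

model = torch.nn.Linear(4, 4)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

lr_scheduler = get_scheduler(
    "cosine_with_restarts",
    optimizer=optimizer,
    num_warmup_steps=500,
    num_training_steps=10_000,
    num_cycles=3,  # now exposed on the CLI
)
# For "polynomial", the analogous knob is `power`, e.g. get_scheduler("polynomial", ..., power=2.0).
```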
* Fail if there are less images than the effective batch size.
* Remove lr-scheduler arg as it's currently ignored.
* Make guidance_scale work for batch_size > 1.
* [Batched Generators] all batched generators
* up
* up
* up
* up
* up
* up
* up
* up
* up
* up
* up
* up
* up
* up
* up
* up
* hey
* up again
* fix tests
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* correct tests
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Fix links to flash attention.
* Add xformers installation instructions.
* Make link to xformers install more prominent.
* Link to xformers install from training docs.
* Add examples with Intel optimizations (BF16 fine-tuning and inference)
* Remove unused package
* Add README for intel_opts and refine the description for research projects
* Add notes of intel opts for diffusers
* update inpaint_legacy to allow the use of predicted noise to construct intermediate diffused images
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint_legacy.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Add state checkpointing to other training scripts
* Fix first_epoch
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update Dreambooth checkpoint help message.
* Dreambooth docs: checkpoints, inference from a checkpoint.
* make style
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Remove bogus file
* [Docs] Remove mention of gated access since it no longer exists
* add docs to index
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Dreambooth: save / restore training state.
* make style
* Rename vars for clarity.
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Remove unused import
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* [SD] Make sure batched input works correctly
* uP
* uP
* up
* up
* uP
* up
* fix mask stuff
* up
* uP
* more up
* up
* uP
* up
* finish
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Added Community pipeline for comparing Stable Diffusion v1.1-4
Signed-off-by: Suvaditya Mukherjee <suvadityamuk@gmail.com>
* Made changes to provide support for current iteration of from_pretrained and added example
Signed-off-by: Suvaditya Mukherjee <suvadityamuk@gmail.com>
* updated a small spelling error
Signed-off-by: Suvaditya Mukherjee <suvadityamuk@gmail.com>
* added pipeline entry to table
Signed-off-by: Suvaditya Mukherjee <suvadityamuk@gmail.com>
Signed-off-by: Suvaditya Mukherjee <suvadityamuk@gmail.com>
* Initial code for attempt at improving SD <--> diffusers conversions for v2.0
* Updates to support round-trip between orig. SD 2.0 and diffusers models
* Corrected formatting to Black standard
* Correcting import formatting
* Fixed imports (properly this time)
* add some corrections
* remove inference files
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* dreambooth: fix #1566: maintain the fp32 wrapper when saving a checkpoint to avoid a crash when running fp16
* dreambooth: guard against passing keep_fp32_wrapper arg to older versions of accelerate. part of fix for #1566
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update examples/dreambooth/train_dreambooth.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Easy fix for an undefined name in train_dreambooth.py
import_model_class_from_model_name_or_path loads a pretrained model
and refers to args.revision in a context where args is undefined. I modified
the function to take revision as an argument and modified the invocation
of the function to pass in the revision from args. Seems like this was caused
by a cut and paste.
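A sketch of the corrected helper; the signature follows the description above, while the body is illustrative rather than the exact script code:
```python
from transformers import PretrainedConfig

def import_model_class_from_model_name_or_path(pretrained_model_name_or_path: str, revision: str):
    text_encoder_config = PretrainedConfig.from_pretrained(
        pretrained_model_name_or_path, subfolder="text_encoder", revision=revision
    )
    model_class = text_encoder_config.architectures[0]
    if model_class == "CLIPTextModel":
        from transformers import CLIPTextModel

        return CLIPTextModel
    raise ValueError(f"{model_class} is not supported.")

# The call site now passes the value from the parsed args instead of relying on a global:
# text_encoder_cls = import_model_class_from_model_name_or_path(
#     args.pretrained_model_name_or_path, args.revision
# )
```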
* Do not recompile when guidance_scale changes.
* Remove debug for simplicity.
* make style
* Make guidance_scale an array.
* Make DEBUG a constant to avoid passing it down.
* Add comments for clarification.
* add paint by example
* make loading possible
* up
* Update src/diffusers/models/attention.py
* up
* finalize weight structure
* make example work
* make it work
* up
* up
* fix
* del
* add
* update
* Apply suggestions from code review
* correct transformer 2d
* finish
* up
* up
* up
* up
* fix
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Apply suggestions from code review
* up
* finish
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* add check_min_version for examples
* move __version__ to the top
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* fix comment
* fix error_message
* adapt the install message
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* add AudioDiffusionPipeline and LatentAudioDiffusionPipeline
* add docs to toc
* fix tests
* fix tests
* fix tests
* fix tests
* fix tests
* Update pr_tests.yml
Fix tests
* add colab notebook
[Flax] Fix loading scheduler from subfolder (#1319)
[FLAX] Fix loading scheduler from subfolder
Fix/Enable all schedulers for in-painting (#1331)
* inpaint fix k lms
* onnx as well
* up
Correct path to scheduler (#1322)
* [Examples] Correct path
* uP
Avoid nested fix-copies (#1332)
* Avoid nested `# Copied from` statements during `make fix-copies`
* style
Fix img2img speed with LMS-Discrete Scheduler (#896)
Casting `self.sigmas` into a different dtype (the one of original_samples) is not advisable. In my img2img pipeline this leads to a long running time in the `integrate.quad` call later on; by long I mean more than 10x slower.
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Fix the order of casts for onnx inpainting (#1338)
Legacy Inpainting Pipeline for Onnx Models (#1237)
* Add legacy inpainting pipeline compatibility for onnx
* remove commented out line
* Add onnx legacy inpainting test
* Fix slow decorators
* pep8 styling
* isort styling
* dummy object
* ordering consistency
* style
* docstring styles
* Refactor common prompt encoding pattern
* Update tests to permanent repository home
* support all available schedulers until ONNX IO binding is available
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
* updated styling from PR suggested feedback
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
Jax infer support negative prompt (#1337)
* support negative prompts in sd jax pipeline
* pass batched neg_prompt
* only encode when negative prompt is None
Co-authored-by: Juan Acevedo <jfacevedo@google.com>
Update README.md: Minor change to Imagic code snippet, missing dir error (#1347)
Minor change to Imagic Readme
Missing dir causes an error when running the example code.
make style
change the sample model (#1352)
* Update alt_diffusion.mdx
* Update alt_diffusion.mdx
Add bit diffusion [WIP] (#971)
* Create bit_diffusion.py
Bit diffusion based on the paper, arXiv:2208.04202, Chen2022AnalogBG
* adding bit diffusion to new branch
ran tests
* tests
* tests
* tests
* tests
* removed test folders + added to README
* Update README.md
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* move Mel to module in pipeline construction, make librosa optional
* fix imports
* fix copy & paste error in comment
* fix style
* add missing register_to_config
* fix class docstrings
* fix class docstrings
* tweak docstrings
* tweak docstrings
* update slow test
* put trailing commas back
* respect alphabetical order
* remove LatentAudioDiffusion, make vqvae optional
* move Mel from models back to pipelines :-)
* allow loading of pretrained audiodiffusion models
* fix tests
* fix dummies
* remove reference to latent_audio_diffusion in docs
* unused import
* inherit from SchedulerMixin to make loadable
* Apply suggestions from code review
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* make attn slice recursive
* remove set_attention_slice from blocks
* fix copies
* make enable_attention_slicing base class method of DiffusionPipeline
* fix set_attention_slice
* fix set_attention_slice
* fix copies
* add tests
* up
* up
* up
* update
* up
* uP
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
The mask and instance image were being cropped in different ways without --center_crop, causing the model to learn to ignore the mask in some cases. This PR fixes that and generates more consistent results.
[textual_inversion] Add an option to only save embeddings
Add a command line option --only_save_embeds to the example script for
not saving the full model. Then only the learned embeddings are saved,
which can be added to the original model at runtime in a similar way as
they are created in the training script.
Saving the full model is forced when --push_to_hub is used. (Implements #759)
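A sketch of the runtime re-loading the description refers to, assuming the embeddings file produced by the script (loading of the tokenizer/text encoder is illustrative):
```python
import torch
from transformers import CLIPTextModel, CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="text_encoder")

learned_embeds = torch.load("learned_embeds.bin")  # {placeholder_token: embedding}
placeholder_token, embedding = next(iter(learned_embeds.items()))

# Register the placeholder token and copy its learned embedding into the text encoder.
tokenizer.add_tokens(placeholder_token)
text_encoder.resize_token_embeddings(len(tokenizer))
token_id = tokenizer.convert_tokens_to_ids(placeholder_token)
with torch.no_grad():
    text_encoder.get_input_embeddings().weight[token_id] = embedding
```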
* Add parameter safe_serialization to DiffusionPipeline.save_pretrained
* Add option safe_serialization on ModelMixin.save_pretrained
* Add test test_save_safe_serialization
* Black
* Re-trigger the CI
* Fix doc-builder
* Validate files are saved as safetensor in test_save_safe_serialization
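A usage sketch of the new flag (illustrative):
```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# Weights are written as .safetensors files instead of pickled .bin files.
pipe.save_pretrained("./sd-safetensors", safe_serialization=True)
```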
- Add the missing `scale_model_input` method to `FlaxLMSDiscreteScheduler`
- Use `jnp.append` for appending to `state.derivatives`
- Use `jnp.delete` to pop from `state.derivatives`
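The immutable-array idiom the last two items rely on; JAX arrays cannot be mutated in place, so pushing to and popping from `state.derivatives` becomes (illustrative):
```python
import jax.numpy as jnp

derivatives = jnp.array([0.1, 0.2, 0.3])
derivatives = jnp.append(derivatives, 0.4)  # push a new derivative
derivatives = jnp.delete(derivatives, 0)    # pop the oldest one
```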
* Create train_dreambooth_inpaint.py
train_dreambooth.py adapted to work with the inpaint model, generating random masks during the training
* Update train_dreambooth_inpaint.py
refactored train_dreambooth_inpaint with black
* Update train_dreambooth_inpaint.py
* Update train_dreambooth_inpaint.py
* Update train_dreambooth_inpaint.py
Fix prior preservation
* add instructions to readme, fix SD2 compatibility
* Do not use torch.long in mps
Addresses #1056.
* Use torch.int instead of float.
* Propagate changes.
* Do not silently change float -> int.
* Propagate changes.
* Apply suggestions from code review
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
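A sketch of the dtype selection the commits above describe; this is an illustrative helper, not the library's actual code path (mps lacked int64 support for these ops at the time):
```python
import torch

def timestep_dtype_for(device: torch.device) -> torch.dtype:
    # Use int32 on mps, int64 elsewhere, instead of silently falling back to float.
    return torch.int32 if device.type == "mps" else torch.int64

device = torch.device("cpu")
timesteps = torch.arange(0, 10, dtype=timestep_dtype_for(device), device=device)
```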
* Moving the mem efficient attention activation to the top + recursive
* black, too bad there's no pre-commit ?
Co-authored-by: Benjamin Lefaudeux <benjamin@photoroom.com>
* feat: switch core pipelines to use image arg
* test: update tests for core pipelines
* feat: switch examples to use image arg
* docs: update docs to use image arg
* style: format code using black and doc-builder
* fix: deprecate use of init_image in all pipelines
* Flax: start adapting to Stable Diffusion 2
* More changes.
* attention_head_dim can be a tuple.
* Fix typos
* Add simple SD 2 integration test.
Slice values taken from my Ampere GPU.
* Add simple UNet integration tests for Flax.
Note that the expected values are taken from the PyTorch results. This
ensures the Flax and PyTorch versions are not too far off.
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Typos and style
* Tests: verify jax is available.
* Style
* Make flake happy
* Remove typo.
* Simple Flax SD 2 pipeline tests.
* Import order
* Remove unused import.
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: @camenduru
* Add heun
* Finish first version of heun
* remove bogus
* finish
* finish
* improve
* up
* up
* fix more
* change progress bar
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py
* finish
* up
* up
* up
* [Proposal] Support loading from safetensors if file is present.
* Style.
* Fix.
* Adding some test to check loading logic.
+ modify download logic to not download pytorch file if not necessary.
* Fixing the logic.
* Addressing comments.
* factor out into a function.
* Remove dead function.
* Typo.
* Extra fetch only if safetensors is there.
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* StableDiffusionUpscalePipeline
* fix a few things
* make it better
* fix image batching
* run vae in fp32
* fix docstr
* resize to mul of 64
* doc
* remove safety_checker
* add max_noise_level
* fix Copied
* begin tests
* slow tests
* default max_noise_level
* remove kwargs
* doc
* fix
* fix fast tests
* fix fast tests
* no sf
* don't offload vae
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Adapt ddpm, ddpmsolver to prediction_type.
* Deprecate predict_epsilon in __init__.
* Bring FlaxDDIMScheduler up to date with DDIMScheduler.
* Set prediction_type as an ivar for consistency.
* Convert pipeline_ddpm
* Adapt tests.
* Adapt unconditional training script.
* Adapt BitDiffusion example.
* Add missing kwargs in dpmsolver_multistep
* Ugly workaround to accept deprecated predict_epsilon when loading
schedulers using from_pretrained.
* make style
* Remove import no longer in use.
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Use config.prediction_type everywhere
* Add a couple of Flax prediction type tests.
* make style
* fix register deprecated arg
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* up
* convert dual unet
* revert dual attn
* adapt for vd-official
* test the full pipeline
* mixed inference
* mixed inference for text2img
* add image prompting
* fix clip norm
* split text2img and img2img
* fix format
* refactor text2img
* mega pipeline
* add optimus
* refactor image var
* wip text_unet
* text unet end to end
* update tests
* reshape
* fix image to text
* add some first docs
* dual guided pipeline
* fix token ratio
* propose change
* dual transformer as a native module
* DualTransformer(nn.Module)
* DualTransformer(nn.Module)
* correct unconditional image
* save-load with mega pipeline
* remove image to text
* up
* uP
* fix
* up
* final fix
* remove_unused_weights
* test updates
* save progress
* uP
* fix dual prompts
* some fixes
* finish
* style
* finish renaming
* up
* fix
* fix
* fix
* finish
Co-authored-by: anton-l <anton@huggingface.co>
* make sure fp16 runs well
* add fp16 test for superes
* Update src/diffusers/models/unet_2d.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* gen on cuda
* always run fast inference test on cpu
* run on cpu
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* fix non square images with UNet2DModel and DDIM/DDPM pipelines
* fix unet_2d `sample_size` docstring
* update pipeline tests for unet uncond
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Create bit_diffusion.py
Bit diffusion based on the paper, arXiv:2208.04202, Chen2022AnalogBG
* adding bit diffusion to new branch
ran tests
* tests
* tests
* tests
* tests
* removed test folders + added to README
* Update README.md
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Handle batches and Tensors in `prepare_mask_and_masked_image`
* `blackfy`
upgrade `black`
* handle mask as `np.array`
* add docstring
* revert `black` changes with smaller line length
* missing ValueError in docstring
* raise `TypeError` for image as tensor but not mask
* typo in mask shape selection
* check for batch dim
* fix: wrong indentation
* add tests
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* support negative prompts in sd jax pipeline
* pass batched neg_prompt
* only encode when negative prompt is None
Co-authored-by: Juan Acevedo <jfacevedo@google.com>
* Add legacy inpainting pipeline compatibility for onnx
* remove commented out line
* Add onnx legacy inpainting test
* Fix slow decorators
* pep8 styling
* isort styling
* dummy object
* ordering consistency
* style
* docstring styles
* Refactor common prompt encoding pattern
* Update tests to permanent repository home
* support all available schedulers until ONNX IO binding is available
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
* updated styling from PR suggested feedback
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
Casting `self.sigmas` into a different dtype (the one of original_samples) is not advisable. In my img2img pipeline this leads to a long running time in the `integrate.quad` call later on; by long I mean more than 10x slower.
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
* begin tests
* fix model ids
* don't use safety checker in tests
* add im2img2 tests
* fix integration tests
* integration tests
* style
* add sentencepiece in test dep
* quality
* 4 decimal points
* fix im2img test
* increase the tok slightly
* add conversion script for vae
* up
* up
* some fixes
* add text model
* use the correct config
* add docs
* move model in it's own file
* move model in its own file
* pass attention mask to text encoder
* pass attn mask to uncond inputs
* quality
* fix image2image
* add imag2image in init
* fix import
* fix one more import
* fix import, dummy objects
* fix copied from
* up
* finish
Co-authored-by: patil-suraj <surajp815@gmail.com>
* add conversion script for vae
* uP
* uP
* more changes
* push
* up
* finish again
* up
* up
* up
* up
* finish
* up
* uP
* up
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* up
* up
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* re-add RL model code
* match model forward api
* add register_to_config, pass training tests
* fix tests, update forward outputs
* remove unused code, some comments
* add to docs
* remove extra embedding code
* unify time embedding
* remove conv1d output sequential
* remove sequential from conv1dblock
* style and deleting duplicated code
* clean files
* remove unused variables
* clean variables
* add 1d resnet block structure for downsample
* rename as unet1d
* fix renaming
* rename files
* add get_block(...) api
* unify args for model1d like model2d
* minor cleaning
* fix docs
* improve 1d resnet blocks
* fix tests, remove permuts
* fix style
* add output activation
* rename flax blocks file
* Add Value Function and corresponding example script to Diffuser implementation (#884)
* valuefunction code
* start example scripts
* missing imports
* bug fixes and placeholder example script
* add value function scheduler
* load value function from hub and get best actions in example
* very close to working example
* larger batch size for planning
* more tests
* merge unet1d changes
* wandb for debugging, use newer models
* success!
* turns out we just need more diffusion steps
* run on modal
* merge and code cleanup
* use same api for rl model
* fix variance type
* wrong normalization function
* add tests
* style
* style and quality
* edits based on comments
* style and quality
* remove unused var
* hack unet1d into a value function
* add pipeline
* fix arg order
* add pipeline to core library
* community pipeline
* fix couple shape bugs
* style
* Apply suggestions from code review
Co-authored-by: Nathan Lambert <nathan@huggingface.co>
* update post merge of scripts
* add midblock / outblock architecture
* Pipeline cleanup (#947)
* valuefunction code
* start example scripts
* missing imports
* bug fixes and placeholder example script
* add value function scheduler
* load value function from hub and get best actions in example
* very close to working example
* larger batch size for planning
* more tests
* merge unet1d changes
* wandb for debugging, use newer models
* success!
* turns out we just need more diffusion steps
* run on modal
* merge and code cleanup
* use same api for rl model
* fix variance type
* wrong normalization function
* add tests
* style
* style and quality
* edits based on comments
* style and quality
* remove unused var
* hack unet1d into a value function
* add pipeline
* fix arg order
* add pipeline to core library
* community pipeline
* fix couple shape bugs
* style
* Apply suggestions from code review
* clean up comments
* convert older script to using pipeline and add readme
* rename scripts
* style, update tests
* delete unet rl model file
* remove imports in src
Co-authored-by: Nathan Lambert <nathan@huggingface.co>
* Update src/diffusers/models/unet_1d_blocks.py
* Update tests/test_models_unet.py
* RL Cleanup v2 (#965)
* valuefunction code
* start example scripts
* missing imports
* bug fixes and placeholder example script
* add value function scheduler
* load value function from hub and get best actions in example
* very close to working example
* larger batch size for planning
* more tests
* merge unet1d changes
* wandb for debugging, use newer models
* success!
* turns out we just need more diffusion steps
* run on modal
* merge and code cleanup
* use same api for rl model
* fix variance type
* wrong normalization function
* add tests
* style
* style and quality
* edits based on comments
* style and quality
* remove unused var
* hack unet1d into a value function
* add pipeline
* fix arg order
* add pipeline to core library
* community pipeline
* fix couple shape bugs
* style
* Apply suggestions from code review
* clean up comments
* convert older script to using pipeline and add readme
* rename scripts
* style, update tests
* delete unet rl model file
* remove imports in src
* add specific vf block and update tests
* style
* Update tests/test_models_unet.py
Co-authored-by: Nathan Lambert <nathan@huggingface.co>
* fix quality in tests
* fix quality style, split test file
* fix checks / tests
* make timesteps closer to main
* unify block API
* unify forward api
* delete lines in examples
* style
* examples style
* all tests pass
* make style
* make dance_diff test pass
* Refactoring RL PR (#1200)
* init file changes
* add import utils
* finish cleaning files, imports
* remove import flags
* clean examples
* fix imports, tests for merge
* update readmes
* hotfix for tests
* quality
* fix some tests
* change defaults
* more mps test fixes
* unet1d defaults
* do not default import experimental
* defaults for tests
* fix tests
* fix-copies
* fix
* changes per Patrik's comments (#1285)
* changes per Patrik's comments
* update conversion script
* fix renaming
* skip more mps tests
* last test fix
* Update examples/rl/README.md
Co-authored-by: Ben Glickenhaus <benglickenhaus@gmail.com>
* Add a reference to the name 'Sampler'
- Help people who are familiar with the name 'samplers' understand that we call them schedulers
- Better SEO, so people googling for samplers can find our library as well
* Update README.md with a reference to 'Sampler'
* Match the generator device to the pipeline for DDPM and DDIM
* style
* fix
* update values
* fix fast tests
* trigger slow tests
* deprecate
* last value fixes
* mps fixes
Flax tests: don't hardcode number of devices.
This makes it possible to test on CPU/GPU. However, expected slices are
only checked when there are 8 devices.
* [Scheduler] Move predict epsilon to init
* up
* uP
* uP
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* up
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Make errors for invalid options without "--with_prior_preservation"
* Make --instance_prompt required
* Removed needless check because --instance_data_dir is marked with required
* Updated messages
* Use logger.warning instead of raising errors
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Schedulers: don't use float64 on mps
* Test set_timesteps() on device (float schedulers).
* SD pipeline: use device in set_timesteps.
* SD in-painting pipeline: use device in set_timesteps.
* Tests: fix mps crashes.
* Skip test_load_pipeline_from_git on mps.
Not compatible with float16.
* Use device.type instead of str in Euler schedulers.
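A small illustration of the workaround pattern these commits describe (not the schedulers' exact code): choose the dtype based on `device.type`, falling back to float32 on mps where float64 is unsupported.

```python
import torch


def schedule_dtype(device: torch.device) -> torch.dtype:
    # mps has no float64 support, so fall back to float32 there
    return torch.float32 if device.type == "mps" else torch.float64
```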
* adds image to image inpainting with `PIL.Image.Image` inputs
the base implementation claims to support `torch.Tensor`, but it seems it
would also fail in this case.
* `make style` and `make quality`
* updates community examples readme
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* add enable sequential cpu offloading to other stable diffusion pipelines
* trigger ci
* fix styling
* interpolate before converting to device to avoid breaking when cpu_offload is enabled with fp16
Co-authored-by: Pedro Gengo <pedro.gabriel.lourenco@hotmail.com>
* style again, I need to stop forgetting this thing
* fix inpainting bug that could cause device misalignment
Co-authored-by: Pedro Gengo <pedro.gabriel.lourenco@hotmail.com>
* Apply suggestions from code review
Co-authored-by: Pedro Gengo <pedro.gabriel.lourenco@hotmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* increase the precision of slice-based tests and make the default test case easier to single out
* increase precision of unit tests which already rely on float comparisons
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* make accelerate hard dep
* default fast init
* move params to cpu when device map is None
* handle device_map=None
* handle torch < 1.9
* remove device_map="auto"
* style
* add accelerate in torch extra
* remove accelerate from extras["test"]
* raise an error if torch is available but not accelerate
* update installation docs
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* improve default loading speed even further, allow disabling fast loading
* address review comments
* adapt the tests
* fix test_stable_diffusion_fast_load
* fix test_read_init
* temp fix for dummy checks
* Trigger Build
* Apply suggestions from code review
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
* Changes for VQ-diffusion VQVAE
Add specify dimension of embeddings to VQModel:
`VQModel` will by default set the dimension of embeddings to the number
of latent channels. The VQ-diffusion VQVAE has a smaller
embedding dimension, 128, than number of latent channels, 256.
Add AttnDownEncoderBlock2D and AttnUpDecoderBlock2D to the up and down
unet block helpers. VQ-diffusion's VQVAE uses those two block types.
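A minimal instantiation sketch of the configuration described above; the `vq_embed_dim` keyword and the block/channel choices are assumptions for illustration, not the conversion script's exact values.

```python
from diffusers import VQModel

vqvae = VQModel(
    in_channels=3,
    out_channels=3,
    down_block_types=("DownEncoderBlock2D", "AttnDownEncoderBlock2D"),
    up_block_types=("AttnUpDecoderBlock2D", "UpDecoderBlock2D"),
    block_out_channels=(128, 256),
    latent_channels=256,  # number of latent channels
    vq_embed_dim=128,     # smaller codebook embedding dimension
)
```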
* Changes for VQ-diffusion transformer
Modify attention.py so SpatialTransformer can be used for
VQ-diffusion's transformer.
SpatialTransformer:
- Can now operate over discrete inputs (classes of vector embeddings) as well as continuous.
- `in_channels` was made optional in the constructor so two locations where it was passed as a positional arg were moved to kwargs
- modified forward pass to take optional timestep embeddings
ImagePositionalEmbeddings:
- added to provide positional embeddings to discrete inputs for latent pixels
BasicTransformerBlock:
- norm layers were made configurable so that the VQ-diffusion could use AdaLayerNorm with timestep embeddings
- modified forward pass to take optional timestep embeddings
CrossAttention:
- now may optionally take a bias parameter for its query, key, and value linear layers
FeedForward:
- Internal layers are now configurable
ApproximateGELU:
- Activation function in VQ-diffusion's feedforward layer
AdaLayerNorm:
- Norm layer modified to incorporate timestep embeddings
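For context, a self-contained sketch of the AdaLayerNorm idea (a LayerNorm whose scale and shift are predicted from a timestep embedding); it mirrors the concept rather than the exact class added here.

```python
import torch
import torch.nn as nn


class AdaLayerNormSketch(nn.Module):
    def __init__(self, embedding_dim: int, num_embeddings: int):
        super().__init__()
        self.emb = nn.Embedding(num_embeddings, embedding_dim)
        self.silu = nn.SiLU()
        self.linear = nn.Linear(embedding_dim, 2 * embedding_dim)
        self.norm = nn.LayerNorm(embedding_dim, elementwise_affine=False)

    def forward(self, hidden_states: torch.Tensor, timestep: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, embedding_dim); timestep: (batch,) long tensor
        scale, shift = self.linear(self.silu(self.emb(timestep))).chunk(2, dim=-1)
        return self.norm(hidden_states) * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)
```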
* Add VQ-diffusion scheduler
* Add VQ-diffusion pipeline
* Add VQ-diffusion convert script to diffusers
* Add VQ-diffusion dummy objects
* Add VQ-diffusion markdown docs
* Add VQ-diffusion tests
* some renaming
* some fixes
* more renaming
* correct
* fix typo
* correct weights
* finalize
* fix tests
* Apply suggestions from code review
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* finish
* finish
* up
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* feat: add repaint
* fix: fix quality check with `make fix-copies`
* fix: remove old unnecessary arg
* chore: change default to DDPM (looks better in experiments)
* ".to(device)" changed to "device="
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
* make generator device-specific
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
* make generator device-specific and change shape
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
* fix: add preprocessing for image and mask
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
* fix: update test
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
* Update src/diffusers/pipelines/repaint/pipeline_repaint.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Add docs and examples
* Fix toctree
Co-authored-by: fja <fja@zurich.ibm.com>
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
* changed training example to add option to train model that predicts x0 (instead of eps), changed DDPM pipeline accordingly
* Revert "changed training example to add option to train model that predicts x0 (instead of eps), changed DDPM pipeline accordingly"
This reverts commit c5efb52564.
* changed training example to add option to train model that predicts x0 (instead of eps), changed DDPM pipeline accordingly
* fixed code style
Co-authored-by: lukovnikov <lukovnikov@users.noreply.github.com>
* Fix equality test for ddim and ddpm
* add docs for use_clipped_model_output in DDIM
* fix inline comment
* reorder imports in test_pipelines.py
* Ignore use_clipped_model_output if scheduler doesn't take it
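One way to implement that check (an illustrative sketch, not necessarily the pipeline's exact code) is to inspect the scheduler's `step` signature before forwarding the argument:

```python
import inspect


def optional_step_kwargs(scheduler, use_clipped_model_output: bool) -> dict:
    # Only pass use_clipped_model_output to schedulers whose step() accepts it.
    accepts = "use_clipped_model_output" in inspect.signature(scheduler.step).parameters
    return {"use_clipped_model_output": use_clipped_model_output} if accepts else {}
```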
* improve test precision
get tests passing with greater precision using lewington images
* make old numpy load function a wrapper around a more flexible numpy loading function
* adhere to black formatting
* add more black formatting
* adhere to isort
* loosen precision and replace path
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* 2x speedup using memory efficient attention
* remove einops dependency
* Swap K, M in op instantiation
* Simplify code, remove unnecessary maybe_init call and function, remove unused self.scale parameter
* make xformers a soft dependency
* remove one-liner functions
* change one letter variable to appropriate names
* Remove Env variable dependency, remove MemoryEfficientCrossAttention class and use enable_xformers_memory_efficient_attention method
* Add memory efficient attention toggle to img2img and inpaint pipelines
* Clearer management of xformers' availability
* update optimizations markdown to add info about memory efficient attention
* add benchmarks for TITAN RTX
* More detailed explanation of how the mem eff benchmarks were run
* Removing autocast from optimization markdown
* import_utils: import torch only if is available
Co-authored-by: Nouamane Tazi <nouamane98@gmail.com>
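A usage sketch for the memory-efficient attention toggle introduced above; the checkpoint id is an example and xformers must be installed.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.enable_xformers_memory_efficient_attention()
image = pipe("an astronaut riding a horse on the moon").images[0]
```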
* initial commit to add imagic to stable diffusion community pipelines
* remove some testing changes
* comments from PR review for imagic stable diffusion
* remove changes from pipeline_stable_diffusion as part of imagic pipeline
* clean up example code and add line back in to pipeline_stable_diffusion for imagic pipeline
* remove unused functions
* small code quality changes for imagic pipeline
* clean up readme
* remove hardcoded logging values for imagic community example
* undo change for DDIMScheduler
Remove an unused parameter
The `downsample_padding` parameter does not seem to be used in `CrossAttnUpBlock2D` (or by any up block, for that matter), so it has been removed.
* [Better scheduler docs] Improve usage examples of schedulers
* finish
* fix warnings and add test
* finish
* more replacements
* adapt fast tests hf token
* correct more
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Integrate compatibility with euler
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Docs: refer to pre-RC version of PyTorch 1.13.0.
* Remove temporary workaround for unavailable op.
* Update comment to make it less ambiguous.
* Remove use of contiguous in mps.
It appears to no longer be necessary.
* Special case: use einsum for much better performance in mps
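A sketch of the einsum-based attention score computation referenced above; shapes and names are illustrative.

```python
import torch


def attention_scores(query: torch.Tensor, key: torch.Tensor, scale: float) -> torch.Tensor:
    # query: (batch, seq_q, dim), key: (batch, seq_k, dim) -> scores: (batch, seq_q, seq_k)
    return torch.einsum("b i d, b j d -> b i j", query, key) * scale
```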
* Update mps docs.
* MPS: make pipeline work in half precision.
Tests: upgrade PyTorch cuda to 11.7.
Otherwise the cuda versions of torch and torchvision mismatch, and
examples tests fail. We were requesting cuda 11.6 for PyTorch, and the
default torchvision (via setup.py).
Another option would be to include torchvision in the same pip install
line as torch.
* Add failing test for #940.
* Do not use torch.float64 in mps.
* style
* Temporarily skip add_noise for IPNDMScheduler.
Until #990 is addressed.
* Fix additional float64 error in mps.
* Improve add_noise test
* Slight edit – I think it's clearer this way.
* add textual inversion flax
* make style
* make style
* replicate vae and unet params
* make style
* minor
* save after end of training
* style
* Temporary fix
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Add Flax instruction
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Make training code usable by external scripts
Add parameter inputs to training and argument parsing function to allow this script to be used by an external call.
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* add method to enable cuda with minimal gpu usage to stable diffusion
* add test to minimal cuda memory usage
* ensure all models but unet are in torch.float32
* move to cpu_offload along with minor internal changes to make it work
* make it test against accelerate master branch
* coming back, it's official: I don't know how to make it test against the master branch from accelerate
* make it install accelerate from master on tests
* go back to accelerate>=0.11
* undo prettier formatting on yml files
* undo prettier formatting on yml files again
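A usage sketch for the offloading method these commits describe (using the `enable_sequential_cpu_offload` name from the related commit above); the checkpoint id is an example and accelerate must be installed.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
)
pipe.enable_sequential_cpu_offload()  # sub-models are moved to the GPU only while they run
image = pipe("a foggy mountain pass at dawn").images[0]
```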
* start
* add more logic
* Update src/diffusers/models/unet_2d_condition_flax.py
* match weights
* up
* make model work
* making class more general, fixing missed file rename
* small fix
* make new conversion work
* up
* finalize conversion
* up
* first batch of variable renamings
* remove c and c_prev var names
* add mid and out block structure
* add pipeline
* up
* finish conversion
* finish
* upload
* more fixes
* Apply suggestions from code review
* add attr
* up
* uP
* up
* finish tests
* finish
* uP
* finish
* fix test
* up
* naming consistency in tests
* Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Nathan Lambert <nathan@huggingface.co>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
* remove hardcoded 16
* Remove bogus
* fix some stuff
* finish
* improve logging
* docs
* upload
Co-authored-by: Nathan Lambert <nol@berkeley.edu>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Nathan Lambert <nathan@huggingface.co>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
* Docs: refer to pre-RC version of PyTorch 1.13.0.
* Remove temporary workaround for unavailable op.
* Update comment to make it less ambiguous.
* Remove use of contiguous in mps.
It appears to no longer be necessary.
* Special case: use einsum for much better performance in mps
* Update mps docs.
* Minor doc update.
* Accept suggestion
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
* Update README.md
Additionally add FLAX so the model card can be slimmer and point to this page
* Find and replace all
* v-1-5 -> v1-5
* revert test changes
* Update README.md
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update docs/source/quicktour.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update README.md
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update docs/source/quicktour.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update README.md
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Revert certain references to v1-5
* Docs changes
* Apply suggestions from code review
Co-authored-by: apolinario <joaopaulo.passos+multimodal@gmail.com>
Co-authored-by: anton-l <anton@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Initial Wildcard Stable Diffusion Pipeline
* Added some additional example usage
* style
* Added links in README and additional documentation
* Initial Wildcard Stable Diffusion Pipeline
* Added some additional example usage
* style
* Added links in README and additional documentation
* cleanup readme again
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Support LMSDiscreteScheduler in LDMPipeline
This is a small change to support all schedulers such as LMSDiscreteScheduler in LDMPipeline.
What's changed
-------
* Add the `scale_model_input` function before `step` to ensure correct denoising (L77)
* Add "scale the initial noise by the standard deviation required by the scheduler"
* run `make style`
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
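A scheduler-agnostic denoising loop sketch showing the two calls described above; `unet` stands for any diffusion UNet and the argument names are illustrative.

```python
import torch


def denoise(unet, scheduler, latents: torch.Tensor, num_inference_steps: int = 50) -> torch.Tensor:
    # Scale the initial noise by the standard deviation required by the scheduler.
    latents = latents * scheduler.init_noise_sigma
    scheduler.set_timesteps(num_inference_steps)
    for t in scheduler.timesteps:
        # scale_model_input is a no-op for many schedulers but required for LMSDiscreteScheduler.
        model_input = scheduler.scale_model_input(latents, t)
        noise_pred = unet(model_input, t).sample
        latents = scheduler.step(noise_pred, t, latents).prev_sample
    return latents
```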
* First draft
* created the SpeechToImagePipeline class
* Corrected speech_to_image_diffusion.py style
* Added safety checker
* Corrected style
* Adding examples to README
* begin pipe
* add new pipeline
* add tests
* correct fast test
* up
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py
* Update tests/test_pipelines.py
* up
* up
* make style
* add fp16 test
* doc, comments
* up
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
* [CI] Add Apple M1 tests
* setup-python
* python build
* conda install
* remove branch
* only 3.8 is built for osx-arm
* try fetching prebuilt tokenizers
* use user cache
* update shells
* Reports and cleanup
* -> MPS
* Disable parallel tests
* Better naming
* investigate worker crash
* return xdist
* restart
* num_workers=2
* still crashing?
* faulthandler for segfaults
* faulthandler for segfaults
* remove restarts, stop on segfault
* torch version
* change installation order
* Use pre-RC version of PyTorch.
To be updated when it is released.
* Skip crashing test on MPS, add new one that works.
* Skip cuda tests in mps device.
* Actually use generator in test.
I think this was a typo.
* make style
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Bump to 0.6.0.dev0
* Deprecate tensor_format and .samples
* style
* upd
* upd
* style
* sample -> images
* Update src/diffusers/schedulers/scheduling_ddpm.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/schedulers/scheduling_ddim.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/schedulers/scheduling_karras_ve.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/schedulers/scheduling_lms_discrete.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/schedulers/scheduling_pndm.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/schedulers/scheduling_sde_ve.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/schedulers/scheduling_sde_vp.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Remove set_format in Flax pipeline.
* Remove DummyChecker.
* Run safety_checker in pipeline.
* Don't pmap on every call.
We could have decorated `generate` with `pmap`, but I wanted to keep it
in case someone wants to invoke it in non-parallel mode.
* Remove commented line
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Replicate outside __call__, prepare for optional jitting.
* Remove unnecessary clipping.
As suggested by @kashif.
* Do not jit unless requested.
* Send all args to generate.
* make style
* Remove unused imports.
* Fix docstring.
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Give more customizable options for safety checker
* Apply suggestions from code review
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py
* Finish
* make style
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* up
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Add diffusers version and pipeline class to the Hub UA
* Fallback to class name for pipelines
* Update src/diffusers/modeling_utils.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/modeling_flax_utils.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Remove autoclass
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* [Dummy imports] Better error message
* Test: load pipeline with LMS scheduler.
Fails with a cryptic message if scipy is not installed.
* Correct
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* mps: alt. implementation for repeat_interleave
* style
* Bump mps version of PyTorch in the documentation.
* Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Simplify: do not check for device.
* style
* Fix repeat dimensions:
- The unconditional embeddings are always created from a single prompt.
- I was shadowing the batch_size var.
* Split long lines as suggested by Suraj.
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
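A sketch of the repeat_interleave-free duplication pattern referenced in these commits: tile the prompt embeddings per generated image with `repeat` + `view`, which also works on mps.

```python
import torch


def duplicate_embeddings(text_embeddings: torch.Tensor, num_images_per_prompt: int) -> torch.Tensor:
    bs_embed, seq_len, dim = text_embeddings.shape
    text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1)
    return text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, dim)
```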
* pass norm_num_groups param and add tests
* set resnet_groups for FlaxUNetMidBlock2D
* fixed docstrings
* fixed typo
* using is_flax_available util and created require_flax decorator
* begin text2image script
* loading the datasets, preprocessing & transforms
* handle input features correctly
* add gradient checkpointing support
* fix output names
* run unet in train mode not text encoder
* use no_grad instead of freezing params
* default max steps None
* pad to longest
* don't pad when tokenizing
* fix encode on multi gpu
* fix stupid bug
* add random flip
* add ema
* fix ema
* put ema on cpu
* improve EMA model
* contiguous_format
* don't wrap vae and text encoder in accelerate
* remove no_grad
* use randn_like
* fix resize
* improve few things
* log epoch loss
* set log level
* don't log each step
* remove max_length from collate
* style
* add report_to option
* make scale_lr false by default
* add grad clipping
* add an option to use 8bit adam
* fix logging in multi-gpu, log every step
* more comments
* remove eval for now
* address review comments
* add requirements file
* begin readme
* begin readme
* fix typo
* fix push to hub
* populate readme
* update readme
* remove use_auth_token from the script
* address some review comments
* better mixed precision support
* remove redundant to
* create ema model early
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* better description for train_data_dir
* add diffusers in requirements
* update dataset_name_mapping
* update readme
* add inference example
Co-authored-by: anton-l <anton@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Support deepspeed
* Dreambooth DeepSpeed documentation
* Remove unnecessary casts, documentation
Due to recent commits some casts to half precision are not necessary
anymore.
Mention that DeepSpeed's version of Adam is about 2x faster.
* Review comments
* add accelerate to load models with smaller memory footprint
* remove low_cpu_mem_usage as it is redundant
* move accelerate init weights context to modelling utils
* add test to ensure results are the same when loading with accelerate
* add tests to ensure ram usage gets lower when using accelerate
* move accelerate logic to single snippet under modelling utils and remove it from configuration utils
* format code to pass quality check
* fix imports with isort
* add accelerate to test extra deps
* only import accelerate if device_map is set to auto
* move accelerate availability check to diffusers import utils
* format code
* add device map to pipeline abstraction
* lint it to pass PR quality check
* fix class check to use accelerate when using diffusers ModelMixin subclasses
* use low_cpu_mem_usage in transformers if device_map is not available
* NoModuleLayer
* comment out tests
* up
* uP
* finish
* Update src/diffusers/pipelines/stable_diffusion/safety_checker.py
* finish
* uP
* make style
Co-authored-by: Pi Esposito <piero.skywalker@gmail.com>
* debug an exception
if dst_path is not a file, src_path.samefile raises an exception:
FileNotFoundError: [Errno 2] No such file or directory: '/home/lilongwei/notebook/onnx_diffusion/vae_decoder/model.onnx'
* Update src/diffusers/onnx_utils.py
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
* handle dtype in vae and image2image pipeline
* fix inpaint in fp16
* dtype should be handled in add_noise
* style
* address review comments
* add simple fast tests to check fp16
* fix test name
* put mask in fp16
This is to ensure that the final latent slices stay somewhat consistent as more changes are introduced into the library.
Signed-off-by: James R T <jamestiotio@gmail.com>
Signed-off-by: James R T <jamestiotio@gmail.com>
* Swap fp16 error to warning
Also remove the associated test
* Formatting
* warn -> warning
* Update src/diffusers/pipeline_utils.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* make style
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Raise an error when moving an fp16 pipeline to CPU
* Raise an error when moving an fp16 pipeline to CPU
* style
* Update src/diffusers/pipeline_utils.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update src/diffusers/pipeline_utils.py
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Improve the message
* cuda
* Update tests/test_pipelines.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* init
* improve add_noise
* [debug start] run slow test
* [debug end]
* quick revert
* Add docstrings and warnings + API tests
* Make the warning less spammy
* renamed single letter variables
* renamed x to meaningful variable in resnet.py
Hello @patil-suraj can you verify it
Thanks
* Reformatted using black
* renamed x to meaningful variable in resnet.py
Hello @patil-suraj can you verify it
Thanks
* reformatted the files
* fixed UnboundLocalError on line 374
* removed 'referenced before assignment' error
* renamed single variable x -> hidden_state, p -> pad_value
Co-authored-by: Nikhil A V <nikhilav@Nikhils-MacBook-Pro.local>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Add an argument "negative_prompt"
* Fix argument order
* Fix to use TypeError instead of ValueError
* Removed needless batch_size multiplying
* Fix to multiply by batch_size
* Add truncation=True for long negative prompt
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_onnx.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_onnx.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Fix styles
* Renamed ucond_tokens to uncond_tokens
* Added description about "negative_prompt"
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* add accelerate to load models with smaller memory footprint
* remove low_cpu_mem_usage as it is redundant
* move accelerate init weights context to modelling utils
* add test to ensure results are the same when loading with accelerate
* add tests to ensure ram usage gets lower when using accelerate
* move accelerate logic to single snippet under modelling utils and remove it from configuration utils
* format code to pass quality check
* fix imports with isort
* add accelerate to test extra deps
* only import accelerate if device_map is set to auto
* move accelerate availability check to diffusers import utils
* format code
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Conversion script
* ran black
* ran isort
* remove unused import
* map location so everything gets loaded onto CPU before conversion
* ran black again
* Update setup.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Don't use `load_state_dict` if torch is not installed.
* Define `SchedulerOutput` to use torch or flax arrays.
* Don't import LMSDiscreteScheduler without torch.
* Create distinct FlaxSchedulerOutput.
* Additional changes required for FlaxSchedulerMixin
* Do not import torch pipelines in Flax.
* Revert "Define `SchedulerOutput` to use torch or flax arrays."
This reverts commit f653140134.
* Prefix Flax scheduler outputs for consistency.
* make style
* FlaxSchedulerOutput is now a dataclass.
* Don't use f-string without placeholders.
* Add blank line.
* Style (docstrings)
* Add callback parameters for Stable Diffusion pipelines
Signed-off-by: James R T <jamestiotio@gmail.com>
* Lint code with `black --preview`
Signed-off-by: James R T <jamestiotio@gmail.com>
* Refactor callback implementation for Stable Diffusion pipelines
* Fix missing imports
Signed-off-by: James R T <jamestiotio@gmail.com>
* Fix documentation format
Signed-off-by: James R T <jamestiotio@gmail.com>
* Add kwargs parameter to standardize with other pipelines
Signed-off-by: James R T <jamestiotio@gmail.com>
* Modify Stable Diffusion pipeline callback parameters
Signed-off-by: James R T <jamestiotio@gmail.com>
* Remove useless imports
Signed-off-by: James R T <jamestiotio@gmail.com>
* Change types for timestep and onnx latents
* Fix docstring style
* Return decode_latents and run_safety_checker back into __call__
* Remove unused imports
* Add intermediate state tests for Stable Diffusion pipelines
Signed-off-by: James R T <jamestiotio@gmail.com>
* Fix intermediate state tests for Stable Diffusion pipelines
Signed-off-by: James R T <jamestiotio@gmail.com>
Signed-off-by: James R T <jamestiotio@gmail.com>
* Allow resolutions that are not multiples of 64
* ran black
* fix bug
* add test
* more explanation
* more comments
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
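An illustrative version of the relaxed validation: Stable Diffusion latents are downsampled 8x, so height and width only need to be divisible by 8 rather than by 64.

```python
def check_resolution(height: int, width: int) -> None:
    if height % 8 != 0 or width % 8 != 0:
        raise ValueError(f"height and width must be divisible by 8, got {height}x{width}")
```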
* initial commit
* make UNet stream capturable
* try to fix noise_pred value
* remove cuda graph and keep NB
* non blocking unet with PNDMScheduler
* make timesteps np arrays for pndm scheduler
because lists don't get formatted to tensors in `self.set_format`
* make max async in pndm
* use channel last format in unet
* avoid moving timesteps device in each unet call
* avoid memcpy op in `get_timestep_embedding`
* add `channels_last` kwarg to `DiffusionPipeline.from_pretrained`
* update TODO
* replace `channels_last` kwarg with `memory_format` for more generality
* revert the channels_last changes to leave it for another PR
* remove non_blocking when moving input ids to device
* remove blocking from all .to() operations at beginning of pipeline
* fix merging
* fix merging
* model can run in other precisions without autocast
* attn refactoring
* Revert "attn refactoring"
This reverts commit 0c70c0e189.
* remove restriction to run conv_norm in fp32
* use `baddbmm` instead of `matmul` in attention for better perf
* removing all reshapes to test perf
* Revert "removing all reshapes to test perf"
This reverts commit 006ccb8a8c.
* add shapes comments
* hardcode what's needed for jitting
* Revert "hardcode what's needed for jitting"
This reverts commit 2fa9c698ea.
* Revert "remove restriction to run conv_norm in fp32"
This reverts commit cec592890c.
* revert using baddbmm in attention's forward
* cleanup comment
* remove restriction to run conv_norm in fp32. no quality loss was noticed
This reverts commit cc9bc1339c.
* add more optimizations techniques to docs
* Revert "add shapes comments"
This reverts commit 31c58eadb8.
* apply suggestions
* make quality
* apply suggestions
* styling
* `scheduler.timesteps` are now arrays so we don't need .to()
* remove useless .type()
* use mean instead of max in `test_stable_diffusion_inpaint_pipeline_k_lms`
* move scheduler timesteps to correct device if tensors
* add device to `set_timesteps` in LMSD scheduler
* `self.scheduler.set_timesteps` now uses device arg for schedulers that accept it
* quick fix
* styling
* remove kwargs from schedulers `set_timesteps`
* revert to using max in K-LMS inpaint pipeline test
* Revert "`self.scheduler.set_timesteps` now uses device arg for schedulers that accept it"
This reverts commit 00d5a51e5c.
* move timesteps to correct device before loop in SD pipeline
* apply previous fix to other SD pipelines
* UNet now accepts tensor timesteps even on wrong device, to avoid errors
- it shouldn't affect performance if timesteps are already on the correct device
- it does slow down performance if they're on the wrong device
* fix pipeline when timesteps are arrays with strides
* correcting the beta value assignment
* updating DDIM and LMSDiscreteFlax schedulers
* bringing back the changes that were lost as part of main branch merge
* Replace deprecation warning f-string with class name.
When `__repr__` is invoked on the instance, serialization of
`config_dict` fails, because it contains `kwargs` of type `<class
inspect._empty>`.
* Revert "Replace deprecation warning f-string with class name."
This reverts commit 1c4eb8cb10.
* Do not attempt to register `"kwargs"` as an attribute.
Otherwise serialization could fail.
This may happen for other attributes, so we should create a better
solution.
* pytorch only schedulers
* fix style
* remove match_shape
* pytorch only ddpm
* remove SchedulerMixin
* remove numpy from karras_ve
* fix types
* remove numpy from lms_discrete
* remove numpy from pndm
* fix typo
* remove mixin and numpy from sde_vp and ve
* remove remaining tensor_format
* fix style
* sigmas has to be torch tensor
* removed set_format in readme
* remove set format from docs
* remove set_format from pipelines
* update tests
* fix typo
* continue to use mixin
* fix imports
* removed unused imports
* match shape instead of assuming image shapes
* remove import typo
* update call to add_noise
* use math instead of numpy
* fix t_index
* removed commented out numpy tests
* timesteps need to be discrete
* cast timesteps to int in flax scheduler too
* fix device mismatch issue
* small fix
* Update src/diffusers/schedulers/scheduling_pndm.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* WIP: flax FlaxDiffusionPipeline & FlaxStableDiffusionPipeline
* todo comment
* Fix imports
* Fix imports
* add dummies
* Fix empty init
* make pipeline work
* up
* Allow dtype to be overridden on model load.
This may be a temporary solution until #567 is addressed.
* Convert params to bfloat16 or fp16 after loading.
This deals with the weights, not the model.
* Use Flax schedulers (typing, docstring)
* PNDM: replace control flow with jax functions.
Otherwise jitting/parallelization don't work properly as they don't know
how to deal with traced objects.
I temporarily removed `step_prk`.
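An illustrative JAX pattern for the change above: replace Python control flow on traced values with `jnp.where` so the code stays compatible with `jit`/`pmap`.

```python
import jax
import jax.numpy as jnp


@jax.jit
def blend(counter: jnp.ndarray, a: jnp.ndarray, b: jnp.ndarray) -> jnp.ndarray:
    # A Python `if counter < 10:` would fail under tracing; jnp.where evaluates
    # both branches but works inside jit and pmap.
    return jnp.where(counter < 10, a, b)
```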
* Pass latents shape to scheduler set_timesteps()
PNDMScheduler uses it to reserve space, other schedulers will just
ignore it.
* Wrap model imports inside availability checks.
* Optionally return state in from_config.
Useful for Flax schedulers.
* Do not convert model weights to dtype.
* Re-enable PRK steps with functional implementation.
Values returned still not verified for correctness.
* Remove left over has_state var.
* make style
* Apply suggestion list -> tuple
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Apply suggestion list -> tuple
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Remove unused comments.
* Use zeros instead of empty.
Co-authored-by: Mishig Davaadorj <dmishig@gmail.com>
Co-authored-by: Mishig Davaadorj <mishig.davaadorj@coloradocollege.edu>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Return encoded texts by DiffusionPipelines
* Updated README to show how to use encoded_text_input
* Reverted examples in README.md
* Reverted all
* Warning for long prompts
* Fix bugs
* Formatted
* refactor: pipelines readability improvements
Signed-off-by: Ryan Russell <git@ryanrussell.org>
* docs: remove todo comment from flax pipeline
Signed-off-by: Ryan Russell <git@ryanrussell.org>
Signed-off-by: Ryan Russell <git@ryanrussell.org>
* Adding pred_original_sample to SchedulerOutput of DDPMScheduler, DDIMScheduler, LMSDiscreteScheduler, KarrasVeScheduler step methods so we can access the predicted denoised outputs
* Gave DDPMScheduler, DDIMScheduler and LMSDiscreteScheduler their own output dataclasses so the default SchedulerOutput in scheduling_utils does not need pred_original_sample as an optional extra
* Reordered library imports to follow standard
* didn't get import order quite right apparently
* Forgot to change name of LMSDiscreteSchedulerOutput
* Aha, needed some extra libs for make style to fully work
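A usage sketch for the new output field; the scheduler, model output and latents are assumed to exist.

```python
import torch


def step_with_preview(scheduler, noise_pred: torch.Tensor, t: int, latents: torch.Tensor):
    output = scheduler.step(noise_pred, t, latents)
    # pred_original_sample is the scheduler's estimate of the fully denoised sample (x0).
    return output.prev_sample, output.pred_original_sample
```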
* add grad ckpt to downsample blocks
* make it work
* don't pass gradient_checkpointing to upsample block
* add tests for UNet2DConditionModel
* add test_gradient_checkpointing
* add gradient_checkpointing for up and down blocks
* add functions to enable and disable grad ckpt
* remove the forward argument
* better naming
* make supports_gradient_checkpointing private
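A usage sketch for the enable/disable functions mentioned above; the checkpoint id is an example.

```python
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="unet")
unet.enable_gradient_checkpointing()   # trade extra compute for lower memory during training
# ... training loop ...
unet.disable_gradient_checkpointing()
```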
* Optionally return state in from_config.
Useful for Flax schedulers.
* has_state is now a property, make check more strict.
I don't check the class is `SchedulerMixin` to prevent circular
dependencies. It should be enough that the class name starts with "Flax",
the object declares "has_state", and "create_state" exists too.
* Use state in pipeline from_pretrained.
* Make style
* Fix typo in docstring.
* Allow dtype to be overridden on model load.
This may be a temporary solution until #567 is addressed.
* Create latents in float32
The denoising loop always computes the next step in float32, so this
would fail when using `bfloat16`.
* WIP: flax FlaxDiffusionPipeline & FlaxStableDiffusionPipeline
* todo comment
* Fix imports
* Fix imports
* add dummies
* Fix empty init
* make pipeline work
* up
* Use Flax schedulers (typing, docstring)
* Wrap model imports inside availability checks.
* more updates
* make sure flax is not broken
* make style
* more fixes
* up
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@latenitesoft.com>
* first commit:
- add `from_pt` argument in `from_pretrained` function
- add `modeling_flax_pytorch_utils.py` file
* small nit
- fix a small nit - avoid entering the second if condition
* major changes
- modify FlaxUnet modules
- first conversion script
- more keys to be matched
* keys match
- now all keys match
- change module names for correct matching
- upsample module name changed
* working v1
- tests pass with atol and rtol = `4e-02`
* replace unused arg
* make quality
* add small docstring
* add more comments
- add TODO for embedding layers
* small change
- use `jnp.expand_dims` for converting `timesteps` in case it is a 0-dimensional array
* add more conditions on conversion
- add better test to check for keys conversion
* make shapes consistent
- output `img_w x img_h x n_channels` from the VAE
* Revert "make shapes consistent"
This reverts commit 4cad1aeb4a.
* fix unet shape
- channels first!
* Unify offset configuration in DDIM and PNDM schedulers
* Format
Add missing variables
* Fix pipeline test
* Update src/diffusers/schedulers/scheduling_ddim.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Default set_alpha_to_one to false
* Format
* Add tests
* Format
* add deprecation warning
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Fix is_onnx_available
Fix: if a user installs onnxruntime-gpu, is_onnx_available() will return False.
* add more onnxruntime candidates
* Run `make style`
Co-authored-by: anton-l <anton@huggingface.co>
* begin text2img conversion script
* add fn to convert config
* create config if not provided
* update imports and use UNet2DConditionModel
* fix imports, layer names
* fix unet conversion
* add function to convert VAE
* fix vae conversion
* update main
* create text model
* update config creating logic for unet
* fix config creation
* update script to create and save pipeline
* remove unused imports
* fix checkpoint loading
* better name
* save progress
* finish
* up
* up
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* First UNet Flax modeling blocks.
Mimic the structure of the PyTorch files.
The model classes themselves need work, depending on what we do about
configuration and initialization.
* Remove FlaxUNet2DConfig class.
* ignore_for_config non-config args.
* Implement `FlaxModelMixin`
* Use new mixins for Flax UNet.
For some reason the configuration is not correctly applied; the
signature of the `__init__` method does not contain all the parameters
by the time it's inspected in `extract_init_dict`.
* Import `FlaxUNet2DConditionModel` if flax is available.
* Rm unused method `framework`
* Update src/diffusers/modeling_flax_utils.py
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Indicate types in flax.struct.dataclass as pointed out by @mishig25
Co-authored-by: Mishig Davaadorj <mishig.davaadorj@coloradocollege.edu>
* Fix typo in transformer block.
* make style
* some more changes
* make style
* Add comment
* Update src/diffusers/modeling_flax_utils.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Rm unneeded comment
* Update docstrings
* correct ignore kwargs
* make style
* Update docstring examples
* Make style
* Style: remove empty line.
* Apply style (after upgrading black from pinned version)
* Remove some commented code and unused imports.
* Add init_weights (not yet in use until #513).
* Trickle down deterministic to blocks.
* Rename q, k, v according to the latest PyTorch version.
Note that weights were exported with the old names, so we need to be
careful.
* Flax UNet docstrings, default props as in PyTorch.
* Fix minor typos in PyTorch docstrings.
* Use FlaxUNet2DConditionOutput as output from UNet.
* make style
Co-authored-by: Mishig Davaadorj <dmishig@gmail.com>
Co-authored-by: Mishig Davaadorj <mishig.davaadorj@coloradocollege.edu>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* beta never changes, removed from state
* fix typos in docs
* removed unused var
* initial ddim flax scheduler
* import
* added dummy objects
* fix style
* fix typo
* docs
* fix typo in comment
* set return type
* added flax ddpm
* fix style
* remake
* pass PRNG key as argument and split before use
* fix doc string
* use config
* added flax Karras VE scheduler
* make style
* fix dummy
* fix ndarray type annotation
* replace returns a new state
* added lms_discrete scheduler
* use self.config
* add_noise needs state
* use config
* use config
* docstring
* added flax score sde ve
* fix imports
* fix typos
* add different method for sliced attention
* Update src/diffusers/models/attention.py
* Apply suggestions from code review
* Update src/diffusers/models/attention.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* initial attempt at solving
* fix pndm power of 3 inference_step
* add power of 3 test
* fix index in pndm test, remove ddim test
* add comments, change to round()
* update expected results of slow tests
* relax sum and mean tests
* Print shapes when reporting exception
* formatting
* fix sentence
* relax test_stable_diffusion_fast_ddim for gpu fp16
* relax flaky tests on GPU
* added comment on large tolerances
* black
* format
* set scheduler seed
* added generator
* use np.isclose
* set num_inference_steps to 50
* fix dep. warning
* update expected_slice
* preprocess if image
* updated expected results
* updated expected from CI
* pass generator to VAE
* undo change back to orig
* use original
* revert back the expected on cpu
* revert back values for CPU
* more undo
* update result after using gen
* update mean
* set generator for mps
* update expected on CI server
* undo
* use new seed every time
* cpu manual seed
* reduce num_inference_steps
* style
* use generator for randn
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* renamed variable names
q -> query
k -> key
v -> value
b -> batch
c -> channel
h -> height
w -> width
* rename variable names
missed some in the initial commit
* renamed more variable names
As per code review suggestions, renamed x -> hidden_states and x_in -> residual
* fixed minor typo
* docs for attention
* types for embeddings
* unet2d docstrings
* UNet2DConditionModel docstrings
* fix typos
* style and vq-vae docstrings
* docstrings for VAE
* Update src/diffusers/models/unet_2d.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* make style
* added inherits from sentence
* docstring to forward
* make style
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* finish model docs
* up
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Initial support for mps in Stable Diffusion pipeline.
* Initial "warmup" implementation when using mps.
* Make some deterministic tests pass with mps.
* Disable training tests when using mps.
* SD: generate latents in CPU then move to device.
This is especially important when using the mps device, because
generators are not supported there. See for example
https://github.com/pytorch/pytorch/issues/84288.
In addition, the other pipelines seem to use the same approach: generate
the random samples then move to the appropriate device.
After this change, generating an image in MPS produces the same result
as when using the CPU, if the same seed is used.
* Remove prints.
* Pass AutoencoderKL test_output_pretrained with mps.
Sampling from `posterior` must be done in CPU.
* Style
* Do not use torch.long for log op in mps device.
* Perform incompatible padding ops in CPU.
UNet tests now pass.
See https://github.com/pytorch/pytorch/issues/84535
* Style: fix import order.
* Remove unused symbols.
* Remove MPSWarmupMixin, do not apply automatically.
We do apply warmup in the tests, but not during normal use.
This adopts some PR suggestions by @patrickvonplaten.
* Add comment for mps fallback to CPU step.
* Add README_mps.md for mps installation and use.
* Apply `black` to modified files.
* Restrict README_mps to SD, show measures in table.
* Make PNDM indexing compatible with mps.
Addresses #239.
* Do not use float64 when using LDMScheduler.
Fixes #358.
* Fix typo identified by @patil-suraj
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Adapt example to new output style.
* Restore 1:1 results reproducibility with CompVis.
However, mps latents need to be generated in CPU because generators
don't work in the mps device.
* Move PyTorch nightly to requirements.
* Adapt `test_scheduler_outputs_equivalence` to MPS.
* mps: skip training tests instead of ignoring silently.
* Make VQModel tests pass on mps.
* mps ddim tests: warmup, increase tolerance.
* ScoreSdeVeScheduler indexing made mps compatible.
* Make ldm pipeline tests pass using warmup.
* Style
* Simplify casting as suggested in PR.
* Add Known Issues to readme.
* `isort` import order.
* Remove _mps_warmup helpers from ModelMixin.
And just make changes to the tests.
* Skip tests using unittest decorator for consistency.
* Remove temporary var.
* Remove spurious blank space.
* Remove unused symbol.
* Remove README_mps.
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Initial version of `fp16` page.
* Fix typo in README.
* Change titles of fp16 section in toctree.
* PR suggestion
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* PR suggestion
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Clarify attention slicing is useful even for batches of 1
Explained by @patrickvonplaten after a suggestion by @keturn.
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Do not talk about `batches` in `enable_attention_slicing`.
* Use Tip (just for fun), add link to method.
* Comment about fp16 results looking the same as float32 in practice.
* Style: docstring line wrapping.
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* init schedulers docs
* add some docstrings, fix sidebar formatting
* add docstrings
* [Type hint] PNDM schedulers (#335)
* [Type hint] PNDM Schedulers
* ran make style
* updated timesteps type hint
* apply suggestions from code review
* ran make style
* removed unused import
* [Type hint] scheduling ddim (#343)
* [Type hint] scheduling ddim
* apply suggestions from code review
apply suggestions to also return the return type
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* make style
* update class docstrings
* add docstrings
* missed merge edit
* add general docs page
* modify headings for right sidebar
Co-authored-by: Partho <parthodas6176@gmail.com>
Co-authored-by: Santiago Víquez <santi.viquez@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Initial description of Stable Diffusion pipeline.
* Placeholder docstrings to test preview.
* Add docstrings to Stable Diffusion pipeline.
* Style
* Docs for all the SD pipelines + attention slicing.
* Style: wrap long lines.
* Update text_inversion.mdx
Getting in a bit of background info
* fixed typo mode -> model
* Link SD and re-write a few bits for clarity
* Copied in info from the example script
As suggested by surajpatil :)
* removed an unnecessary heading
Use `expand` instead of ones to broadcast tensor.
As suggested by @bes-dev. According to the documentation this shouldn't
take any memory - it just plays with the strides.
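A quick illustration of the memory-free broadcast mentioned above: `expand` returns a view with zero strides instead of materializing a new tensor.

```python
import torch

mask = torch.tensor([0.0, 1.0, 1.0])              # shape (3,)
broadcast = mask[None, None, :].expand(2, 4, 3)   # shape (2, 4, 3), no extra memory allocated
```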
* [Type hint] scheduling ddim
* apply suggestions from code review
apply suggestions to also return the return type
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* [Type hint] PNDM Schedulers
* ran make style
* updated timesteps type hint
* apply suggestions from code review
* ran make style
* removed unused import
* Use ONNX / Core ML compatible method to broadcast.
Unfortunately `tile` could not be used either, it's still not compatible
with ONNX.
See #284.
* Add comment about why broadcast_to is not used.
Also, apply style to changed files.
* Make sure broadcast remains in same device.
* Fix tqdm and OOM
* tqdm auto
* tqdm is still spamming try to disable it altogether
* rather just set the pipe config, to keep the global tqdm clean
* style
* add textual inversion script
* make the loop work
* make coarse_loss optional
* save pipeline after training
* add arg pretrained_model_name_or_path
* fix saving
* fix gradient_accumulation_steps
* style
* fix progress bar steps
* scale lr
* add argument to accept style
* remove unused args
* scale lr using num gpus
* load tokenizer using args
* add checks when converting init token to id
* improve comments and style
* document args
* more cleanup
* fix default adamw args
* TextualInversionWrapper -> CLIPTextualInversionWrapper
* fix tokenizer loading
* Use the CLIPTextModel instead of wrapper
* clean dataset
* remove commented code
* fix accessing grads for multi-gpu
* more cleanup
* fix saving on multi-GPU
* init_placeholder_token_embeds
* add seed
* fix flip
* fix multi-gpu
* add utility methods in wrapper
* remove ipynb
* don't use wrapper
* don't pass vae and unet to accelerate prepare
* bring back accelerator.accumulate
* scale latents
* use only one progress bar for steps
* push_to_hub at the end of training
* remove unused args
* log some important stats
* store args in tensorboard
* pretty comments
* save the trained embeddings
* move the script up
* add requirements file
* more cleanup
* fix typo
* begin readme
* style -> learnable_property
* keep vae and unet in eval mode
* address review comments
* address more comments
* removed unused args
* add train command in readme
* update readme
* Changed variable name from "h" to "hidden_states"
Per issue #198, changed variable name from "h" to "hidden_states" in the forward function only. I am happy to change any other variable names; please advise recommended new names.
* Update src/diffusers/models/resnet.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* [Refactor] Remove set_seed and class attributes
* apply anton's suggestions
* fix
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* up
* update
* make style
* Apply suggestions from code review
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
* make fix-copies
* make style
* make style and new copies
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
* format timesteps attrs to np arrays in pndm scheduler
because lists don't get formatted to tensors in `self.set_format`
* convert to long type to use timesteps as indices for tensors
* add scheduler set_format test
* fix `_timesteps` type
* make style with black 22.3.0 and isort 5.10.1
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* [Type hint] Karras VE pipeline
* Apply suggestions from code review
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
* Helpful exception if inference steps not set in schedulers (#263)
* Apply suggestions from code review by patrickvonplaten
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Refactor progress bar of pipeline __call__
* Make any tqdm configs available
* remove init
* add some tests
* remove file
* finish
* make style
* improve progress bar test
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Init CI
* clarify cpu
* style
* Check scripts quality too
* Drop smi for cpu tests
* Run PR tests on cpu docker envs
* Update .github/workflows/push_tests.yml
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Try minimal python container
* Print env, install stable GPU torch
* Manual torch install
* remove deprecated platform.dist()
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Accept latents as input for StableDiffusionPipeline.
* Notebook to demonstrate reusable seeds (latents).
* More accurate type annotation
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Review comments: move to device, raise instead of assert.
* Actually commit the test notebook.
I had mistakenly pushed an empty file instead.
* Adapt notebook to Colab.
* Update examples readme.
* Move notebook to personal repo.
Co-authored-by: Suraj Patil <surajp815@gmail.com>
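A sketch of reusing a fixed latent tensor for reproducible generations, per the change above; the checkpoint id and latent shape are examples.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4").to("cuda")
generator = torch.Generator(device="cpu").manual_seed(0)
latents = torch.randn((1, 4, 64, 64), generator=generator)  # 4 latent channels, 64x64 -> 512x512 image
image = pipe("a red bicycle leaning against a wall", latents=latents.to("cuda")).images[0]
```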
Restore `is_modelcards_available` in `.utils`.
Otherwise attempting to import `hub_utils` (in training scripts, for
example) fails.
This was removed during the refactor in df90f0c.
* Implement `pipeline.to(device)`
* DiffusionPipeline.to() decides best device on None.
* Breaking change: torch_device removed from __call__
`pipeline.to()` now has PyTorch semantics.
* Use kwargs and deprecation notice
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Apply torch_device compatibility to all pipelines.
* style
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: anton-l <anton@huggingface.co>
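A usage sketch of the PyTorch-style device API described above; the checkpoint id is an example.

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
pipe = pipe.to("cuda")  # moves all sub-models, like torch.nn.Module.to()
image = pipe("a watercolor of a lighthouse").images[0]  # no torch_device argument needed
```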
* add SafetyChecker
* better name, fix checker
* add checker in main init
* remove from main init
* update logic to detect pipeline module
* style
* handle all safety logic in safety checker
* draw text
* can't draw
* small fixes
* treat special care as nsfw
* remove commented lines
* update safety checker
Thanks for taking the time to fill out this bug report!
Thanks a lot for taking the time to file this issue 🤗.
Issues do not only help to improve the library, but also publicly document common problems, questions, workflows for the whole community!
Thus, issues are of the same importance as pull requests when contributing to this library ❤️.
In order to make your issue as **useful for the community as possible**, let's try to stick to some simple guidelines:
- 1. Please try to be as precise and concise as possible.
*Give your issue a fitting title. Assume that someone which very limited knowledge of diffusers can understand your issue. Add links to the source code, documentation other issues, pull requests etc...*
- 2. If your issue is about something not working, **always** provide a reproducible code snippet. The reader should be able to reproduce your issue by **only copy-pasting your code snippet into a Python shell**.
*The community cannot solve your issue if it cannot reproduce it. If your bug is related to training, add your training script and make everything needed to train public. Otherwise, just add a simple Python code snippet.*
- 3. Add the **minimum amount of code / context that is needed to understand, reproduce your issue**.
*Make the life of maintainers easy. `diffusers` is getting many issues every day. Make sure your issue is about one bug and one bug only. Make sure you add only the context and code needed to understand your issue - nothing more. Generally, every issue is a way of documenting this library, so try to make it a good documentation entry.*
- type: markdown
  attributes:
    value: |
      For more in-detail information on how to write good issues you can have a look [here](https://huggingface.co/course/chapter8/5?fw=pt)
- type: textarea
  id: bug-description
  attributes:
    label: Reproduction
    description: Please provide a minimal reproducible code which we can copy/paste and reproduce the issue.
    placeholder: Reproduction
  validations:
    required: true
- type: textarea
  id: logs
  attributes:
  id: system-info
  attributes:
    label: System Info
    description: Please share your system info with us,
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/diffusers/blob/main/CONTRIBUTING.md)?
- [ ] Did you read our [philosophy doc](https://github.com/huggingface/diffusers/blob/main/PHILOSOPHY.md) (important for complex PRs)?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/diffusers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Core library:
- Schedulers: @williamberman and @patrickvonplaten
- Pipelines: @patrickvonplaten and @sayakpaul
- Training examples: @sayakpaul and @patrickvonplaten
- Docs: @stevhliu and @yiyixuxu
- JAX and MPS: @pcuenca
- Audio: @sanchit-gandhi
- General functionalities: @patrickvonplaten and @sayakpaul
description: Sets up miniconda in your ${RUNNER_TEMP} environment and gives you the ${CONDA_RUN} environment variable so you don't have to worry about polluting non-ephemeral runners anymore
inputs:
  python-version:
    description: Python version to install
    required: false
    type: string
    default: "3.9"
  miniconda-version:
    description: Miniconda version to install
    required: false
    type: string
    default: "4.12.0"
  environment-file:
    description: Environment file to install dependencies from
    required: false
    type: string
    default: ""
runs:
  using: composite
  steps:
# Use the same trick from https://github.com/marketplace/actions/setup-miniconda
# to refresh the cache daily. This is kind of optional though
if [ "$AVAIL" -lt "$MINIMUM_AVAILABLE_SPACE_IN_KB" ]; then
echo "There is only ${AVAIL}KB free space left in $MOUNT, which is less than the minimum requirement of ${MINIMUM_AVAILABLE_SPACE_IN_KB}KB. Please help create an issue to PyTorch Release Engineering via https://github.com/pytorch/test-infra/issues and provide the link to the workflow run."
exit 1;
else
echo "There is ${AVAIL}KB free space left in $MOUNT, continue"
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# How to contribute to Diffusers 🧨
We ❤️ contributions from the open-source community! Everyone is welcome, and all types of participation, not just code, are valued and appreciated. Answering questions, helping others, reaching out, and improving the documentation are all immensely valuable to the community, so don't be afraid and get involved if you're up for it!
Everyone is encouraged to start by saying 👋 in our public Discord channel. We discuss the latest trends in diffusion models, ask questions, show off personal projects, help each other with contributions, or just hang out ☕. <a href="https://discord.gg/G7tWnz98XR"><img alt="Join us on Discord" src="https://img.shields.io/discord/823813159592001537?color=5865F2&logo=discord&logoColor=white"></a>
Whichever way you choose to contribute, we strive to be part of an open, welcoming, and kind community. Please, read our [code of conduct](https://github.com/huggingface/diffusers/blob/main/CODE_OF_CONDUCT.md) and be mindful to respect it during your interactions. We also recommend you become familiar with the [ethical guidelines](https://huggingface.co/docs/diffusers/conceptual/ethical_guidelines) that guide our project and ask you to adhere to the same principles of transparency and responsibility.
We enormously value feedback from the community, so please do not be afraid to speak up if you believe you have valuable feedback that can help improve the library - every message, comment, issue, and pull request (PR) is read and considered.
## Overview
You can contribute in many ways ranging from answering questions on issues to adding new diffusion models to
the core library.
In the following, we give an overview of different ways to contribute, ranked by difficulty in ascending order. All of them are valuable to the community.
* 1. Asking and answering questions on [the Diffusers discussion forum](https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers) or on [Discord](https://discord.gg/G7tWnz98XR).
* 2. Opening new issues on [the GitHub Issues tab](https://github.com/huggingface/diffusers/issues/new/choose).
* 3. Answering issues on [the GitHub Issues tab](https://github.com/huggingface/diffusers/issues).
* 4. Fixing a simple issue, marked by the "Good first issue" label, see [here](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22).
* 5. Contributing to the [documentation](https://github.com/huggingface/diffusers/tree/main/docs/source).
* 6. Contributing a [Community Pipeline](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3Acommunity-examples).
* 7. Contributing to the [examples](https://github.com/huggingface/diffusers/tree/main/examples).
* 8. Fixing a more difficult issue, marked by the "Good second issue" label, see [here](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22Good+second+issue%22).
* 9. Adding a new pipeline, model, or scheduler, see ["New Pipeline/Model"](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+pipeline%2Fmodel%22) and ["New scheduler"](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+scheduler%22) issues. For this contribution, please have a look at [Design Philosophy](https://github.com/huggingface/diffusers/blob/main/PHILOSOPHY.md).
As said before, **all contributions are valuable to the community**.
In the following, we will explain each contribution a bit more in detail.
For all contributions 4.-9. you will need to open a PR. It is explained in detail how to do so in [Opening a pull request](#how-to-open-a-pr).
### 1. Asking and answering questions on the Diffusers discussion forum or on the Diffusers Discord
Any question or comment related to the Diffusers library can be asked on the [discussion forum](https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers/) or on [Discord](https://discord.gg/G7tWnz98XR). Such questions and comments include (but are not limited to):
- Reports of training or inference experiments in an attempt to share knowledge
- Presentation of personal projects
- Questions about non-official training examples
- Project proposals
- General feedback
- Paper summaries
- Asking for help on personal projects that build on top of the Diffusers library
- General questions
- Ethical questions regarding diffusion models
- ...
Every question that is asked on the forum or on Discord actively encourages the community to publicly
share knowledge and might very well help a beginner in the future who has the same question you're
having. Please do pose any questions you might have.
In the same spirit, you are of immense help to the community by answering such questions because this way you are publicly documenting knowledge for everybody to learn from.
**Please** keep in mind that the more effort you put into asking or answering a question, the higher
the quality of the publicly documented knowledge. In the same way, well-posed and well-answered questions create a high-quality knowledge database accessible to everybody, while badly posed questions or answers reduce the overall quality of the public knowledge database.
In short, a high-quality question or answer is *precise*, *concise*, *relevant*, *easy-to-understand*, *accessible*, and *well-formatted/well-posed*. For more information, please have a look through the [How to write a good issue](#how-to-write-a-good-issue) section.
**NOTE about channels**:
[*The forum*](https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers/63) is much better indexed by search engines, such as Google. Posts are ranked by popularity rather than chronologically. Hence, it's easier to look up questions and answers that were posted some time ago.
In addition, questions and answers posted in the forum can easily be linked to.
In contrast, *Discord* has a chat-like format that invites fast back-and-forth communication.
While it will most likely take less time for you to get an answer to your question on Discord, your
question won't be visible anymore over time. Also, it's much harder to find information that was posted a while back on Discord. We therefore strongly recommend using the forum for high-quality questions and answers in an attempt to create long-lasting knowledge for the community. If discussions on Discord lead to very interesting answers and conclusions, we recommend posting the results on the forum to make the information more available for future readers.
### 2. Opening new issues on the GitHub issues tab
The 🧨 Diffusers library is robust and reliable thanks to the users who notify us of
the problems they encounter. So thank you for reporting an issue.
Remember, GitHub issues are reserved for technical questions directly related to the Diffusers library, bug reports, feature requests, or feedback on the library design.
In a nutshell, this means that everything that is **not** related to the **code of the Diffusers library** (including the documentation) should **not** be asked on GitHub, but rather on either the [forum](https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers/63) or [Discord](https://discord.gg/G7tWnz98XR).
**Please consider the following guidelines when opening a new issue**:
- Make sure you have searched whether your issue has already been asked before (use the search bar on GitHub under Issues).
- Please never report a new issue on another (related) issue. If another issue is highly related, please
open a new issue nevertheless and link to the related issue.
- Make sure your issue is written in English. Please use one of the great, free online translation services, such as [DeepL](https://www.deepl.com/translator) to translate from your native language to English if you are not comfortable in English.
- Check whether your issue might be solved by updating to the newest Diffusers version. Before posting your issue, please make sure that the version reported by `python -c "import diffusers; print(diffusers.__version__)"` matches or is higher than the latest Diffusers version.
- Remember that the more effort you put into opening a new issue, the higher the quality of your answer will be and the better the overall quality of the Diffusers issues.
New issues usually include the following.
#### 2.1. Reproducible, minimal bug reports.
A bug report should always have a reproducible code snippet and be as minimal and concise as possible.
This means in more detail:
- Narrow the bug down as much as you can: **do not just dump your whole code file**.
- Format your code
- Do not include any external libraries beyond those that Diffusers itself depends on.
- **Always** provide all necessary information about your environment; for this, you can run: `diffusers-cli env` in your shell and copy-paste the displayed information to the issue.
- Explain the issue. If the reader doesn't know what the issue is and why it is an issue, she cannot solve it.
- **Always** make sure the reader can reproduce your issue with as little effort as possible. If your code snippet cannot be run because of missing libraries or undefined variables, the reader cannot help you. Make sure your reproducible code snippet is as minimal as possible and can be copy-pasted into a simple Python shell.
- If in order to reproduce your issue a model and/or dataset is required, make sure the reader has access to that model or dataset. You can always upload your model or dataset to the [Hub](https://huggingface.co) to make it easily downloadable. Try to keep your model and dataset as small as possible, to make the reproduction of your issue as effortless as possible.
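For instance, a minimal reproduction that follows the guidelines above often boils down to a few self-contained lines; the checkpoint name and prompt below are purely illustrative:
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
image = pipe("a prompt that triggers the bug", num_inference_steps=5).images[0]
# Paste the full error message / traceback right below the snippet.
```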
For more information, please have a look through the [How to write a good issue](#how-to-write-a-good-issue) section.
You can open a bug report [here](https://github.com/huggingface/diffusers/issues/new/choose).
#### 2.2. Feature requests.
A world-class feature request addresses the following points:
1. Motivation first:
* Is it related to a problem/frustration with the library? If so, please explain
why. Providing a code snippet that demonstrates the problem is best.
* Is it related to something you would need for a project? We'd love to hear
about it!
* Is it something you worked on and think could benefit the community?
Awesome! Tell us what problem it solved for you.
2. Write a *full paragraph* describing the feature;
3. Provide a **code snippet** that demonstrates its future use;
4. In case this is related to a paper, please attach a link;
5. Attach any additional information (drawings, screenshots, etc.) you think may help.
You can open a feature request [here](https://github.com/huggingface/diffusers/issues/new?assignees=&labels=&template=feature_request.md&title=).
#### 2.3 Feedback.
Feedback about the library design and why it is good or not good helps the core maintainers immensely to build a user-friendly library. To understand the reasoning behind the current design philosophy, please have a look [here](https://huggingface.co/docs/diffusers/conceptual/philosophy). If you feel like a certain design choice does not fit with the current design philosophy, please explain why and how it should be changed. If a certain design choice follows the design philosophy too strictly, thereby restricting use cases, explain why and how it should be changed.
If a certain design choice is very useful for you, please also leave a note as this is great feedback for future design decisions.
You can open an issue about feedback [here](https://github.com/huggingface/diffusers/issues/new?assignees=&labels=&template=feedback.md&title=).
#### 2.4 Technical questions.
Technical questions are mainly about why certain code of the library was written in a certain way, or what a certain part of the code does. Please make sure to link to the code in question and please provide detail on
why this part of the code is difficult to understand.
You can open an issue about a technical question [here](https://github.com/huggingface/diffusers/issues/new?assignees=&labels=bug&template=bug-report.yml).
#### 2.5 Proposal to add a new model, scheduler, or pipeline.
If the diffusion model community released a new model, pipeline, or scheduler that you would like to see in the Diffusers library, please provide the following information:
* Short description of the diffusion pipeline, model, or scheduler and link to the paper or public release.
* Link to any of its open-source implementations.
* Link to the model weights if they are available.
If you are willing to contribute the model yourself, let us know so we can best guide you. Also, don't forget
to tag the original author of the component (model, scheduler, pipeline, etc.) by GitHub handle if you can find it.
You can open a request for a model/pipeline/scheduler [here](https://github.com/huggingface/diffusers/issues/new?assignees=&labels=New+model%2Fpipeline%2Fscheduler&template=new-model-addition.yml).
### 3. Answering issues on the GitHub issues tab
Answering issues on GitHub might require some technical knowledge of Diffusers, but we encourage everybody to give it a try even if you are not 100% certain that your answer is correct.
Some tips to give a high-quality answer to an issue:
- Be as concise and minimal as possible
- Stay on topic. An answer to the issue should concern the issue and only the issue.
- Provide links to code, papers, or other sources that prove or encourage your point.
- Answer in code. If a simple code snippet is the answer to the issue or shows how the issue can be solved, please provide a fully reproducible code snippet.
Also, many issues tend to be simply off-topic, duplicates of other issues, or irrelevant. It is of great
help to the maintainers if you can answer such issues, encouraging the author of the issue to be
more precise, providing the link to a duplicated issue, or redirecting them to [the forum](https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers/63) or [Discord](https://discord.gg/G7tWnz98XR).
If you have verified that the reported bug is correct and requires a fix in the source code,
please have a look at the next sections.
For all of the following contributions, you will need to open a PR. It is explained in detail how to do so in the [Opening a pull request](#how-to-open-a-pr) section.
### 4. Fixing a "Good first issue"
*Good first issues* are marked by the [Good first issue](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) label. Usually, the issue already
explains how a potential solution should look so that it is easier to fix.
If the issue hasn't been closed and you would like to try to fix it, you can just leave a message saying "I would like to try this issue." There are usually three scenarios:
- a.) The issue description already proposes a fix. In this case and if the solution makes sense to you, you can open a PR or draft PR to fix it.
- b.) The issue description does not propose a fix. In this case, you can ask what a proposed fix could look like and someone from the Diffusers team should answer shortly. If you have a good idea of how to fix it, feel free to directly open a PR.
- c.) There is already an open PR to fix the issue, but the issue hasn't been closed yet. If the PR has gone stale, you can simply open a new PR and link to the stale PR. PRs often go stale if the original contributor who wanted to fix the issue suddenly cannot find the time anymore to proceed. This often happens in open-source and is very normal. In this case, the community will be very happy if you give it a new try and leverage the knowledge of the existing PR. If there is already a PR and it is active, you can help the author by giving suggestions, reviewing the PR or even asking whether you can contribute to the PR.
### 5. Contribute to the documentation
A good library **always** has good documentation! The official documentation is often one of the first points of contact for new users of the library, and therefore contributing to the documentation is a **highly
valuable contribution**.
Contributing to the documentation can take many forms:
- Correcting spelling or grammatical errors.
- Correcting incorrect formatting of docstrings. If you see that the official documentation is weirdly displayed or a link is broken, we are very happy if you take some time to correct it.
- Correcting the shape or dimensions of a docstring input or output tensor.
- Clarifying documentation that is hard to understand or incorrect.
- Updating outdated code examples.
- Translating the documentation to another language.
Anything displayed on [the official Diffusers doc page](https://huggingface.co/docs/diffusers/index) is part of the official documentation and can be corrected or adjusted in the respective [documentation source](https://github.com/huggingface/diffusers/tree/main/docs/source).
Please have a look at [this page](https://github.com/huggingface/diffusers/tree/main/docs) on how to verify changes made to the documentation locally.
### 6. Contribute a community pipeline
[Pipelines](https://huggingface.co/docs/diffusers/api/pipelines/overview) are usually the first point of contact between the Diffusers library and the user.
Pipelines are examples of how to use Diffusers [models](https://huggingface.co/docs/diffusers/api/models) and [schedulers](https://huggingface.co/docs/diffusers/api/schedulers/overview).
We support two types of pipelines:
- Official Pipelines
- Community Pipelines
Both official and community pipelines follow the same design and consist of the same type of components.
Official pipelines are tested and maintained by the core maintainers of Diffusers. Their code
resides in [src/diffusers/pipelines](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines).
In contrast, community pipelines are contributed and maintained purely by the **community** and are **not** tested.
They reside in [examples/community](https://github.com/huggingface/diffusers/tree/main/examples/community) and while they can be accessed via the [PyPI diffusers package](https://pypi.org/project/diffusers/), their code is not part of the PyPI distribution.
The reason for the distinction is that the core maintainers of the Diffusers library cannot maintain and test all
possible ways diffusion models can be used for inference, but some of them may be of interest to the community.
Officially released diffusion pipelines,
such as Stable Diffusion, are added to the core src/diffusers/pipelines package, which ensures
high quality of maintenance, no backward-breaking code changes, and testing.
More bleeding edge pipelines should be added as community pipelines. If usage for a community pipeline is high, the pipeline can be moved to the official pipelines upon request from the community. This is one of the ways we strive to be a community-driven library.
To add a community pipeline, one should add a `<name-of-the-community>.py` file to [examples/community](https://github.com/huggingface/diffusers/tree/main/examples/community) and adapt the [examples/community/README.md](https://github.com/huggingface/diffusers/tree/main/examples/community/README.md) to include an example of the new pipeline.
An example can be seen [here](https://github.com/huggingface/diffusers/pull/2400).
Community pipeline PRs are only checked at a superficial level and ideally they should be maintained by their original authors.
Contributing a community pipeline is a great way to understand how Diffusers models and schedulers work. Having contributed a community pipeline is usually the first stepping stone to contributing an official pipeline to the
core package.
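For context, once merged, a community pipeline is typically loaded by the name of its file through the `custom_pipeline` argument; the pipeline and checkpoint names below are only examples:
```python
import torch
from diffusers import DiffusionPipeline

# Loads examples/community/lpw_stable_diffusion.py on top of a regular checkpoint.
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    custom_pipeline="lpw_stable_diffusion",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a photo of an astronaut riding a horse").images[0]
```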
### 7. Contribute to training examples
Diffusers examples are a collection of training scripts that reside in [examples](https://github.com/huggingface/diffusers/tree/main/examples).
We support two types of training examples:
- Official training examples
- Research training examples
Research training examples are located in [examples/research_projects](https://github.com/huggingface/diffusers/tree/main/examples/research_projects) whereas official training examples include all folders under [examples](https://github.com/huggingface/diffusers/tree/main/examples) except the `research_projects` and `community` folders.
The official training examples are maintained by the Diffusers' core maintainers whereas the research training examples are maintained by the community.
This is because of the same reasons put forward in [6. Contribute a community pipeline](#contribute-a-community-pipeline) for official pipelines vs. community pipelines: It is not feasible for the core maintainers to maintain all possible training methods for diffusion models.
If the Diffusers core maintainers and the community consider a certain training paradigm to be too experimental or not popular enough, the corresponding training code should be put in the `research_projects` folder and maintained by the author.
Both official training and research examples consist of a directory that contains one or more training scripts, a requirements.txt file, and a README.md file. In order for the user to make use of the
training examples, it is required to clone the repository and install the example-specific requirements.
Therefore when adding an example, the `requirements.txt` file shall define all pip dependencies required for your training example so that once all those are installed, the user can run the example's training script. See, for example, the [DreamBooth `requirements.txt` file](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/requirements.txt).
Training examples of the Diffusers library should adhere to the following philosophy:
- All the code necessary to run the examples should be found in a single Python file
- One should be able to run the example from the command line with `python <your-example>.py --args`
- Examples should be kept simple and serve as **an example** on how to use Diffusers for training. The purpose of example scripts is **not** to create state-of-the-art diffusion models, but rather to reproduce known training schemes without adding too much custom logic. As a byproduct of this point, our examples also strive to serve as good educational materials.
To contribute an example, it is highly recommended to look at already existing examples such as [dreambooth](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth.py) to get an idea of what they should look like.
We strongly advise contributors to make use of the [Accelerate library](https://github.com/huggingface/accelerate) as it's tightly integrated
with Diffusers.
Once an example script works, please make sure to add a comprehensive `README.md` that states how to use the example exactly. This README should include:
- An example command on how to run the example script as shown, for example, [here](https://github.com/huggingface/diffusers/tree/main/examples/dreambooth#running-locally-with-pytorch).
- A link to some training results (logs, models, ...) that show what the user can expect as shown, for example, [here](https://api.wandb.ai/report/patrickvonplaten/xm6cd5q5).
- If you are adding a non-official/research training example, **please don't forget** to add a sentence that you are maintaining this training example which includes your git handle as shown [here](https://github.com/huggingface/diffusers/tree/main/examples/research_projects/intel_opts#diffusers-examples-with-intel-optimizations).
If you are contributing to the official training examples, please also make sure to add a test to [examples/test_examples.py](https://github.com/huggingface/diffusers/blob/main/examples/test_examples.py). This is not necessary for non-official training examples.
### 8. Fixing a "Good second issue"
*Good second issues* are marked by the [Good second issue](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22Good+second+issue%22) label. Good second issues are
usually more complicated to solve than [Good first issues](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22).
The issue description usually gives less guidance on how to fix the issue and requires
a decent understanding of the library by the interested contributor.
If you are interested in tackling a Good second issue, feel free to open a PR to fix it and link the PR to the issue. If you see that a PR has already been opened for this issue but did not get merged, have a look to understand why it wasn't merged and try to open an improved PR.
Good second issues are usually more difficult to get merged compared to good first issues, so don't hesitate to ask for help from the core maintainers. If your PR is almost finished, the core maintainers can also jump into your PR and commit to it in order to get it merged.
### 9. Adding pipelines, models, schedulers
Pipelines, models, and schedulers are the most important pieces of the Diffusers library.
They provide easy access to state-of-the-art diffusion technologies and thus allow the community to
build powerful generative AI applications.
By adding a new model, pipeline, or scheduler you might enable a new powerful use case for any of the user interfaces relying on Diffusers which can be of immense value for the whole generative AI ecosystem.
Diffusers has a couple of open feature requests for all three components - feel free to look them over
if you don't know yet which specific component you would like to add:
- [Model or pipeline](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+pipeline%2Fmodel%22)
- [Scheduler](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+scheduler%22)
Before adding any of the three components, it is strongly recommended that you give the [Philosophy guide](https://github.com/huggingface/diffusers/blob/main/PHILOSOPHY.md) a read to better understand the design of any of the three components. Please be aware that
we cannot merge model, scheduler, or pipeline additions that strongly diverge from our design philosophy
as it will lead to API inconsistencies. If you fundamentally disagree with a design choice, please
open a [Feedback issue](https://github.com/huggingface/diffusers/issues/new?assignees=&labels=&template=feedback.md&title=) instead so that it can be discussed whether a certain design
pattern/design choice shall be changed everywhere in the library and whether we shall update our design philosophy. Consistency across the library is very important for us.
Please make sure to add links to the original codebase/paper to the PR and ideally also ping the
original author directly on the PR so that they can follow the progress and potentially help with questions.
If you are unsure or stuck in the PR, don't hesitate to leave a message to ask for a first review or help.
## How to write a good issue
**The better your issue is written, the higher the chances that it will be quickly resolved.**
1. Make sure that you've used the correct template for your issue. You can pick between *Bug Report*, *Feature Request*, *Feedback about API Design*, *New model/pipeline/scheduler addition*, *Forum*, or a blank issue. Make sure to pick the correct one when opening [a new issue](https://github.com/huggingface/diffusers/issues/new/choose).
2. **Be precise**: Give your issue a fitting title. Try to formulate your issue description as simply as possible. The more precise you are when submitting an issue, the less time it takes to understand the issue and potentially solve it. Make sure to open an issue for one issue only and not for multiple issues. If you found multiple issues, simply open multiple issues. If your issue is a bug, try to be as precise as possible about what bug it is - you should not just write "Error in diffusers".
3. **Reproducibility**: No reproducible code snippet == no solution. If you encounter a bug, maintainers **have to be able to reproduce** it. Make sure that you include a code snippet that can be copy-pasted into a Python interpreter to reproduce the issue. Make sure that your code snippet works, *i.e.* that there are no missing imports or missing links to images, ... Your issue should contain an error message **and** a code snippet that can be copy-pasted without any changes to reproduce the exact same error message. If your issue is using local model weights or local data that cannot be accessed by the reader, the issue cannot be solved. If you cannot share your data or model, try to make a dummy model or dummy data.
4. **Minimalistic**: Try to help the reader as much as you can to understand the issue as quickly as possible by staying as concise as possible. Remove all code / all information that is irrelevant to the issue. If you have found a bug, try to create the easiest code example you can to demonstrate your issue, do not just dump your whole workflow into the issue as soon as you have found a bug. E.g., if you train a model and get an error at some point during the training, you should first try to understand what part of the training code is responsible for the error and try to reproduce it with a couple of lines. Try to use dummy data instead of full datasets.
5. Add links. If you are referring to a certain naming, method, or model make sure to provide a link so that the reader can better understand what you mean. If you are referring to a specific PR or issue, make sure to link it to your issue. Do not assume that the reader knows what you are talking about. The more links you add to your issue the better.
6. Formatting. Make sure to nicely format your issue by formatting code into Python code syntax, and error messages into normal code syntax. See the [official GitHub formatting docs](https://docs.github.com/en/get-started/writing-on-github/getting-started-with-writing-and-formatting-on-github/basic-writing-and-formatting-syntax) for more information.
7. Think of your issue not as a ticket to be solved, but rather as a beautiful entry to a well-written encyclopedia. Every added issue is a contribution to publicly available knowledge. By adding a nicely written issue you not only make it easier for maintainers to solve your issue, but you are helping the whole community to better understand a certain aspect of the library.
## How to write a good PR
1. Be a chameleon. Understand existing design patterns and syntax and make sure your code additions flow seamlessly into the existing code base. Pull requests that significantly diverge from existing design patterns or user interfaces will not be merged.
2. Be laser focused. A pull request should solve one problem and one problem only. Make sure to not fall into the trap of "also fixing another problem while we're adding it". It is much more difficult to review pull requests that solve multiple, unrelated problems at once.
3. If helpful, try to add a code snippet that displays an example of how your addition can be used.
4. The title of your pull request should be a summary of its contribution.
5. If your pull request addresses an issue, please mention the issue number in
the pull request description to make sure they are linked (and people
consulting the issue know you are working on it);
6. To indicate a work in progress please prefix the title with `[WIP]`. These
are useful to avoid duplicated work, and to differentiate it from PRs ready
to be merged;
7. Try to formulate and format your text as explained in [How to write a good issue](#how-to-write-a-good-issue).
8. Make sure existing tests pass;
9. Add high-coverage tests. No quality testing = no merge.
- If you are adding new `@slow` tests, make sure they pass locally with the `RUN_SLOW=1` environment variable set.
CircleCI does not run the slow tests, but GitHub Actions does every night!
10. All public methods must have informative docstrings that work nicely with markdown. See [`pipeline_latent_diffusion.py`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion.py) for an example.
11. Due to the rapidly growing repository, it is important to make sure that no files that would significantly weigh down the repository are added. This includes images, videos, and other non-text files. We prefer to leverage a hf.co hosted `dataset` like
[`hf-internal-testing`](https://huggingface.co/hf-internal-testing) or [huggingface/documentation-images](https://huggingface.co/datasets/huggingface/documentation-images) to place these files.
If your PR is an external contribution, feel free to add the images to your PR and ask a Hugging Face member to migrate them
to this dataset.
## How to open a PR
Before writing code, we strongly advise you to search through the existing PRs or
issues to make sure that nobody is already working on the same thing. If you are
unsure, it is always a good idea to open an issue to get some feedback.
You will need basic `git` proficiency to be able to contribute to
🧨 Diffusers. `git` is not the easiest tool to use but it has the greatest
manual. Type `git --help` in a shell and enjoy. If you prefer books, [Pro
Git](https://git-scm.com/book/en/v2) is a very good reference.
Follow these steps to start contributing ([supported Python versions](https://github.com/huggingface/diffusers/blob/main/setup.py#L244)):
1. Fork the [repository](https://github.com/huggingface/diffusers) by
clicking on the 'Fork' button on the repository's page. This creates a copy of the code
under your GitHub user account.
2. Clone your fork to your local disk, and add the base repository as a remote.
### Syncing forked main with upstream (HuggingFace) main
To avoid pinging the upstream repository which adds reference notes to each upstream PR and sends unnecessary notifications to the developers involved in these PRs,
when syncing the main branch of a forked repository, please, follow these steps:
1. When possible, avoid syncing with the upstream using a branch and PR on the forked repository. Instead, merge directly into the forked main.
2. If a PR is absolutely necessary, use the following steps after checking out your branch:
```
$ git checkout -b your-branch-for-syncing
$ git pull --squash --no-commit upstream main
$ git commit -m '<your message without GitHub references>'
$ git push --set-upstream origin your-branch-for-syncing
```
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Philosophy
🧨 Diffusers provides **state-of-the-art** pretrained diffusion models across multiple modalities.
Its purpose is to serve as a **modular toolbox** for both inference and training.
We aim at building a library that stands the test of time and therefore take API design very seriously.
In a nutshell, Diffusers is built to be a natural extension of PyTorch. Therefore, most of our design choices are based on [PyTorch's Design Principles](https://pytorch.org/docs/stable/community/design.html#pytorch-design-philosophy). Let's go over the most important ones:
## Usability over Performance
- While Diffusers has many built-in performance-enhancing features (see [Memory and Speed](https://huggingface.co/docs/diffusers/optimization/fp16)), models are always loaded with the highest precision and lowest optimization. Therefore, by default diffusion pipelines are always instantiated on CPU with float32 precision if not otherwise defined by the user. This ensures usability across different platforms and accelerators and means that no complex installations are required to run the library.
- Diffusers aims at being a **lightweight** package and therefore has very few required dependencies, but many soft dependencies that can improve performance (such as `accelerate`, `safetensors`, `onnx`, etc...). We strive to keep the library as lightweight as possible so that it can be added without much concern as a dependency on other packages.
- Diffusers prefers simple, self-explainable code over condensed, magic code. This means that shorthand code syntax such as lambda functions and advanced PyTorch operators is often not desired.
## Simple over easy
As PyTorch states, **explicit is better than implicit** and **simple is better than complex**. This design philosophy is reflected in multiple parts of the library:
- We follow PyTorch's API with methods like [`DiffusionPipeline.to`](https://huggingface.co/docs/diffusers/main/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.to) to let the user handle device management.
- Raising concise error messages is preferred to silently correct erroneous input. Diffusers aims at teaching the user, rather than making the library as easy to use as possible.
- Complex model vs. scheduler logic is exposed instead of magically handled inside. Schedulers/Samplers are separated from diffusion models with minimal dependencies on each other. This forces the user to write the unrolled denoising loop (see the sketch after this list). However, the separation allows for easier debugging and gives the user more control over adapting the denoising process or switching out diffusion models or schedulers.
- Separately trained components of the diffusion pipeline, *e.g.* the text encoder, the unet, and the variational autoencoder, each have their own model class. This forces the user to handle the interaction between the different model components, and the serialization format separates the model components into different files. However, this allows for easier debugging and customization. Dreambooth or textual inversion training
is very simple thanks to diffusers' ability to separate single components of the diffusion pipeline.
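To make the model vs. scheduler split concrete, here is a minimal sketch of the unrolled denoising loop mentioned above; the `google/ddpm-cat-256` checkpoint is only an illustrative choice:
```python
import torch
from diffusers import DDPMScheduler, UNet2DModel

# Load the model and the scheduler separately - nothing is hidden inside a pipeline.
model = UNet2DModel.from_pretrained("google/ddpm-cat-256")
scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256")

scheduler.set_timesteps(50)  # number of denoising steps
sample = torch.randn(1, model.config.in_channels, model.config.sample_size, model.config.sample_size)

for t in scheduler.timesteps:
    with torch.no_grad():
        noise_pred = model(sample, t).sample  # the model predicts the noise residual
    sample = scheduler.step(noise_pred, t, sample).prev_sample  # the scheduler computes x_{t-1}
```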
## Tweakable, contributor-friendly over abstraction
For large parts of the library, Diffusers adopts an important design principle of the [Transformers library](https://github.com/huggingface/transformers), which is to prefer copy-pasted code over hasty abstractions. This design principle is very opinionated and stands in stark contrast to popular design principles such as [Don't repeat yourself (DRY)](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself).
In short, just like Transformers does for modeling files, diffusers prefers to keep an extremely low level of abstraction and very self-contained code for pipelines and schedulers.
Functions, long code blocks, and even classes can be copied across multiple files which at first can look like a bad, sloppy design choice that makes the library unmaintainable.
**However**, this design has proven to be extremely successful for Transformers and makes a lot of sense for community-driven, open-source machine learning libraries because:
- Machine Learning is an extremely fast-moving field in which paradigms, model architectures, and algorithms are changing rapidly, which therefore makes it very difficult to define long-lasting code abstractions.
- Machine Learning practitioners like to be able to quickly tweak existing code for ideation and research and therefore prefer self-contained code over one that contains many abstractions.
- Open-source libraries rely on community contributions and therefore must build a library that is easy to contribute to. The more abstract the code, the more dependencies, the harder to read, and the harder to contribute to. Contributors simply stop contributing to very abstract libraries out of fear of breaking vital functionality. If contributing to a library cannot break other fundamental code, not only is it more inviting for potential new contributors, but it is also easier to review and contribute to multiple parts in parallel.
At Hugging Face, we call this design the **single-file policy** which means that almost all of the code of a certain class should be written in a single, self-contained file. To read more about the philosophy, you can have a look
at [this blog post](https://huggingface.co/blog/transformers-design-philosophy).
In diffusers, we follow this philosophy for both pipelines and schedulers, but only partly for diffusion models. The reason we don't follow this design fully for diffusion models is that almost all diffusion pipelines, such
as [DDPM](https://huggingface.co/docs/diffusers/v0.12.0/en/api/pipelines/ddpm), [Stable Diffusion](https://huggingface.co/docs/diffusers/v0.12.0/en/api/pipelines/stable_diffusion/overview#stable-diffusion-pipelines), [UnCLIP (Dalle-2)](https://huggingface.co/docs/diffusers/v0.12.0/en/api/pipelines/unclip#overview) and [Imagen](https://imagen.research.google/) all rely on the same diffusion model, the [UNet](https://huggingface.co/docs/diffusers/api/models#diffusers.UNet2DConditionModel).
Great, now you should have generally understood why 🧨 Diffusers is designed the way it is 🤗.
We try to apply these design principles consistently across the library. Nevertheless, there are some minor exceptions to the philosophy or some unlucky design choices. If you have feedback regarding the design, we would ❤️ to hear it [directly on GitHub](https://github.com/huggingface/diffusers/issues/new?assignees=&labels=&template=feedback.md&title=).
## Design Philosophy in Details
Now, let's look a bit into the nitty-gritty details of the design philosophy. Diffusers essentially consists of three major classes, [pipelines](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines), [models](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models), and [schedulers](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers).
Let's walk through more in-detail design decisions for each class.
### Pipelines
Pipelines are designed to be easy to use (and therefore do not follow [*Simple over easy*](#simple-over-easy) 100%), are not feature complete, and should loosely be seen as examples of how to use [models](#models) and [schedulers](#schedulers) for inference.
The following design principles are followed:
- Pipelines follow the single-file policy. All pipelines can be found in individual directories under src/diffusers/pipelines. One pipeline folder corresponds to one diffusion paper/project/release. Multiple pipeline files can be gathered in one pipeline folder, as it’s done for [`src/diffusers/pipelines/stable_diffusion`](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/stable_diffusion). If pipelines share similar functionality, one can make use of the [#Copied from mechanism](https://github.com/huggingface/diffusers/blob/125d783076e5bd9785beb05367a2d2566843a271/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py#L251).
- Pipelines all inherit from [`DiffusionPipeline`].
- Every pipeline consists of different model and scheduler components that are documented in the [`model_index.json` file](https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/model_index.json), are accessible under the same name as attributes of the pipeline, and can be shared between pipelines with the [`DiffusionPipeline.components`](https://huggingface.co/docs/diffusers/main/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.components) function (see the sketch after this list).
- Every pipeline should be loadable via the [`DiffusionPipeline.from_pretrained`](https://huggingface.co/docs/diffusers/main/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.from_pretrained) function.
- Pipelines should be used **only** for inference.
- Pipelines should be very readable, self-explanatory, and easy to tweak.
- Pipelines should be designed to build on top of each other and be easy to integrate into higher-level APIs.
- Pipelines are **not** intended to be feature-complete user interfaces. For feature-complete user interfaces one should rather have a look at [InvokeAI](https://github.com/invoke-ai/InvokeAI), [Diffuzers](https://github.com/abhishekkrthakur/diffuzers), and [lama-cleaner](https://github.com/Sanster/lama-cleaner).
- Every pipeline should have one and only one way to run it via a `__call__` method. The naming of the `__call__` arguments should be shared across all pipelines.
- Pipelines should be named after the task they are intended to solve.
- In almost all cases, novel diffusion pipelines shall be implemented in a new pipeline folder/file.
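A short sketch of how these principles surface in practice; the checkpoint name is used only as an example, and the same components can be reassembled into a second pipeline without re-loading weights:
```python
import torch
from diffusers import DiffusionPipeline, StableDiffusionImg2ImgPipeline

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe = pipe.to("cuda")  # PyTorch-style device management

# Components listed in model_index.json are attributes of the pipeline ...
print(list(pipe.components.keys()))  # e.g. vae, text_encoder, tokenizer, unet, scheduler, ...

# ... and can be shared with another pipeline.
img2img = StableDiffusionImg2ImgPipeline(**pipe.components)

image = pipe("an astronaut riding a horse").images[0]
```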
### Models
Models are designed as configurable toolboxes that are natural extensions of [PyTorch's Module class](https://pytorch.org/docs/stable/generated/torch.nn.Module.html). They only partly follow the **single-file policy**.
The following design principles are followed:
- Models correspond to **a type of model architecture**. *E.g.* the [`UNet2DConditionModel`] class is used for all UNet variations that expect 2D image inputs and are conditioned on some context.
- All models can be found in [`src/diffusers/models`](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models) and every model architecture shall be defined in its own file, e.g. [`unet_2d_condition.py`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unet_2d_condition.py), [`transformer_2d.py`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformer_2d.py), etc...
- Models **do not** follow the single-file policy and should make use of smaller model building blocks, such as [`attention.py`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention.py), [`resnet.py`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/resnet.py), [`embeddings.py`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/embeddings.py), etc... **Note**: This is in stark contrast to Transformers' modeling files and shows that models do not really follow the single-file policy.
- Models intend to expose complexity, just like PyTorch's module does, and give clear error messages.
- Models all inherit from `ModelMixin` and `ConfigMixin` (see the sketch after this list).
- Models can be optimized for performance when doing so doesn’t demand major code changes, keeps backward compatibility, and gives a significant memory or compute gain.
- Models should by default have the highest precision and lowest performance setting.
- To integrate new model checkpoints whose general architecture can be classified as an architecture that already exists in Diffusers, the existing model architecture shall be adapted to make it work with the new checkpoint. One should only create a new file if the model architecture is fundamentally different.
- Models should be designed to be easily extendable to future changes. This can be achieved by limiting public function arguments, configuration arguments, and "foreseeing" future changes, *e.g.* it is usually better to add `string` "...type" arguments that can easily be extended to new future types instead of boolean `is_..._type` arguments. Only the minimum amount of changes shall be made to existing architectures to make a new model checkpoint work.
- The model design is a difficult trade-off between keeping code readable and concise and supporting many model checkpoints. For most parts of the modeling code, classes shall be adapted for new model checkpoints, while there are some exceptions where it is preferred to add new classes to make sure the code is kept concise and
readable longterm, such as [UNet blocks](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unet_2d_blocks.py) and [Attention processors](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
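As a small illustration of the model-level API sketched in the list above; the checkpoint and `subfolder` layout follow the standard Stable Diffusion repository structure and are used here only as an example:
```python
import torch
from diffusers import UNet2DConditionModel

# Load a single model component directly, independently of any pipeline.
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet", torch_dtype=torch.float16
)

# ConfigMixin keeps the architecture configuration accessible and serializable.
print(unet.config.sample_size, unet.config.cross_attention_dim)
```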
### Schedulers
Schedulers are responsible for guiding the denoising process for inference as well as for defining a noise schedule for training. They are designed as individual classes with loadable configuration files and strongly follow the **single-file policy**.
The following design principles are followed:
- All schedulers are found in [`src/diffusers/schedulers`](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers).
- Schedulers are **not** allowed to import from large utils files and shall be kept very self-contained.
- One scheduler Python file corresponds to one scheduler algorithm (as might be defined in a paper).
- If schedulers share similar functionalities, we can make use of the `#Copied from` mechanism.
- Schedulers all inherit from `SchedulerMixin` and `ConfigMixin`.
- Schedulers can be easily swapped out with the [`ConfigMixin.from_config`](https://huggingface.co/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin.from_config) method as explained in detail [here](./using-diffusers/schedulers.md) (see the sketch after this list).
- Every scheduler has to have a `set_timesteps` function and a `step` function. `set_timesteps(...)` has to be called before every denoising process, *i.e.* before `step(...)` is called.
- Every scheduler exposes the timesteps to be "looped over" via a `timesteps` attribute, which is an array of timesteps the model will be called upon.
- The `step(...)` function takes a predicted model output and the "current" sample (x_t) and returns the "previous", slightly more denoised sample (x_t-1).
- Given the complexity of diffusion schedulers, the `step` function does not expose all the complexity and can be a bit of a "black box".
- In almost all cases, novel schedulers shall be implemented in a new scheduling file.
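A minimal sketch of swapping schedulers on an existing pipeline via `from_config`, as referenced in the list above; the checkpoint name is illustrative:
```python
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Swap the default scheduler for a compatible one using the existing scheduler's config.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
```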
🤗 Diffusers provides pretrained diffusion models across multiple modalities, such as vision and audio, and serves
as a modular toolbox for inference and training of diffusion models.
🤗 Diffusers is the go-to library for state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules. Whether you're looking for a simple inference solution or training your own diffusion models, 🤗 Diffusers is a modular toolbox that supports both. Our library is designed with a focus on [usability over performance](https://huggingface.co/docs/diffusers/conceptual/philosophy#usability-over-performance), [simple over easy](https://huggingface.co/docs/diffusers/conceptual/philosophy#simple-over-easy), and [customizability over abstractions](https://huggingface.co/docs/diffusers/conceptual/philosophy#tweakable-contributorfriendly-over-abstraction).
More precisely, 🤗 Diffusers offers three core components:
- State-of-the-art diffusion pipelines that can be run in inference with just a couple of lines of code (see [src/diffusers/pipelines](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines)).
- Various noise schedulers that can be used interchangeably for the preferred speed vs. quality trade-off in inference (see [src/diffusers/schedulers](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers)).
- Multiple types of models, such as UNet, that can be used as building blocks in an end-to-end diffusion system (see [src/diffusers/models](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models)).
- Training examples to show how to train the most popular diffusion models (see [examples](https://github.com/huggingface/diffusers/tree/main/examples)).
## Quickstart
In order to get started, we recommend taking a look at two notebooks:
- The [Getting started with Diffusers](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb) notebook, which showcases an end-to-end example of usage for diffusion models, schedulers and pipelines.
Take a look at this notebook to learn how to use the pipeline abstraction, which takes care of everything (model, scheduler, noise handling) for you, and also to understand each independent building block in the library.
- The [Training a diffusers model](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb) notebook summarizes diffusion model training methods. This notebook takes a step-by-step approach to training your
diffusion model on an image dataset, with explanatory graphics.
## Examples
If you want to run the code yourself 💻, you can try out:
- [Text-to-Image Latent Diffusion](https://huggingface.co/spaces/CompVis/text2img-latent-diffusion)
- [Faces generator](https://huggingface.co/spaces/CompVis/celeba-latent-diffusion)
- [DDPM with different schedulers](https://huggingface.co/spaces/fusing/celeba-diffusion)
## Definitions
**Models**: Neural network that models $p_\theta(\mathbf{x}_{t-1}|\mathbf{x}_t)$ (see image below) and is trained end-to-end to *denoise* a noisy input to an image.
*Examples*: UNet, Conditioned UNet, 3D UNet, Transformer UNet
<em>Figure from ImageGen (https://imagen.research.google/).</em>
## Philosophy
- Readability and clarity are preferred over highly optimized code. Strong importance is placed on providing readable, intuitive, and elementary code design. *E.g.*, the provided [schedulers](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers) are separated from the provided [models](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models) and provide well-commented code that can be read alongside the original paper.
- Diffusers is **modality independent** and focuses on providing pretrained models and tools to build systems that generate **continuous outputs**, *e.g.* vision and audio.
- Diffusion models and schedulers are provided as concise, elementary building blocks. In contrast, diffusion pipelines are a collection of end-to-end diffusion systems that can be used out-of-the-box, should stay as close as possible to their original implementation and can include components of another library, such as text-encoders. Examples for diffusion pipelines are [Glide](https://github.com/openai/glide-text2im) and [Latent Diffusion](https://github.com/CompVis/latent-diffusion).
- State-of-the-art [diffusion pipelines](https://huggingface.co/docs/diffusers/api/pipelines/overview) that can be run in inference with just a few lines of code.
- Interchangeable noise [schedulers](https://huggingface.co/docs/diffusers/api/schedulers/overview) for different diffusion speeds and output quality.
- Pretrained [models](https://huggingface.co/docs/diffusers/api/models) that can be used as building blocks, and combined with schedulers, for creating your own end-to-end diffusion systems.
## Installation
**With `pip`**
We recommend installing 🤗 Diffusers in a virtual environment from PyPI or Conda. For more details about installing [PyTorch](https://pytorch.org/get-started/locally/) and [Flax](https://flax.readthedocs.io/en/latest/#installation), please refer to their official documentation.
### PyTorch
With `pip` (official package):
```bash
pip install --upgrade diffusers  # should install diffusers 0.2.0
pip install --upgrade diffusers[torch]
```
**With `conda`** (maintained by the community):
```sh
conda install -c conda-forge diffusers
```
### Flax
With `pip` (official package):
```bash
pip install --upgrade diffusers[flax]
```
### Apple Silicon (M1/M2) support
Please refer to the [How to use Stable Diffusion in Apple Silicon](https://huggingface.co/docs/diffusers/optimization/mps) guide.
## In the works
For the first release, 🤗 Diffusers focuses on text-to-image diffusion techniques. However, diffusers can be used for much more than that! Over the upcoming releases, we'll be focusing on:
- Diffusers for audio
- Diffusers for reinforcement learning (initial work happening in https://github.com/huggingface/diffusers/pull/105).
- Diffusers for video generation
- Diffusers for molecule generation (initial work happening in https://github.com/huggingface/diffusers/pull/54)
A few pipeline components are already being worked on, namely:
- BDDMPipeline for spectrogram-to-sound vocoding
- GLIDEPipeline to support OpenAI's GLIDE model
- Grad-TTS for text to audio generation / conditional audio generation
We want diffusers to be a toolbox useful for diffusion models in general; if you find yourself limited in any way by the current API, or would like to see additional models, schedulers, or techniques, please open a [GitHub issue](https://github.com/huggingface/diffusers/issues) mentioning what you would like to see.
## Quickstart
Generating outputs is super easy with 🤗 Diffusers. To generate an image from text, use the `from_pretrained` method to load any pretrained diffusion model (browse the [Hub](https://huggingface.co/models?library=diffusers&sort=downloads) for 4000+ checkpoints):
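A minimal example along these lines (the checkpoint is only one possibility; any text-to-image checkpoint on the Hub works the same way):
```py
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipeline.to("cuda")
image = pipeline("An image of a squirrel in Picasso style").images[0]
```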
| **Documentation** | **What can I learn?** |
|---|---|
| [Tutorial](https://huggingface.co/docs/diffusers/tutorials/tutorial_overview) | A basic crash course for learning how to use the library's most important features like using models and schedulers to build your own diffusion system, and training your own diffusion model. |
| [Loading](https://huggingface.co/docs/diffusers/using-diffusers/loading_overview) | Guides for how to load and configure all the components (pipelines, models, and schedulers) of the library, as well as how to use different schedulers. |
| [Pipelines for inference](https://huggingface.co/docs/diffusers/using-diffusers/pipeline_overview) | Guides for how to use pipelines for different inference tasks, batched generation, controlling generated outputs and randomness, and how to contribute a pipeline to the library. |
| [Optimization](https://huggingface.co/docs/diffusers/optimization/opt_overview) | Guides for how to optimize your diffusion model to run faster and consume less memory. |
| [Training](https://huggingface.co/docs/diffusers/training/overview) | Guides for how to train a diffusion model for different tasks with different training techniques. |
## Contribution
We ❤️ contributions from the open-source community!
If you want to contribute to this library, please check out our [Contribution guide](https://github.com/huggingface/diffusers/blob/main/CONTRIBUTING.md).
You can look out for [issues](https://github.com/huggingface/diffusers/issues) you'd like to tackle to contribute to the library.
- See [Good first issues](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) for general opportunities to contribute
- See [New model/pipeline](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+pipeline%2Fmodel%22) to contribute exciting new diffusion models / diffusion pipelines
- See [New scheduler](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+scheduler%22)
Also, say 👋 in our public Discord channel <a href="https://discord.gg/G7tWnz98XR"><img alt="Join us on Discord" src="https://img.shields.io/discord/823813159592001537?color=5865F2&logo=discord&logoColor=white"></a>. We discuss the hottest trends about diffusion models, help each other with contributions, personal projects or just hang out ☕.
This library concretizes previous work by many different authors and would not have been possible without their great research and implementations. We'd like to thank, in particular, the following implementations which have helped us in our development and without which the API could not have been so polished today:
- @CompVis' latent diffusion models library, available [here](https://github.com/CompVis/latent-diffusion)
- @hojonathanho original DDPM implementation, available [here](https://github.com/hojonathanho/diffusion) as well as the extremely useful translation into PyTorch by @pesser, available [here](https://github.com/pesser/pytorch_diffusion)
- @ermongroup's DDIM implementation, available [here](https://github.com/ermongroup/ddim)
- @yang-song's Score-VE and Score-VP implementations, available [here](https://github.com/yang-song/score_sde_pytorch)
We also want to thank @heejkoo for the very helpful overview of papers, code and resources on diffusion models, available [here](https://github.com/heejkoo/Awesome-Diffusion-Models) as well as @crowsonkb and @rromb for useful discussions and insights.
## Citation
```bibtex
@misc{von-platen-etal-2022-diffusers,
  author={Patrick von Platen and Suraj Patil and Anton Lozhkov and Pedro Cuenca and Nathan Lambert and Kashif Rasul and Mishig Davaadorj and Thomas Wolf},
  title={Diffusers: State-of-the-art diffusion models},
  year={2022},
  publisher={GitHub},
  journal={GitHub repository},
  howpublished={\url{https://github.com/huggingface/diffusers}}
}
```
You only need to generate the documentation to inspect it locally (if you're planning changes and want to
check how they look before committing for instance). You don't have to commit the built documentation.
---
## Previewing the documentation
To preview the docs, first install the `watchdog` module with:
```bash
pip install watchdog
```
Then run the following command:
```bash
doc-builder preview {package_name} {path_to_docs}
```
For example:
```bash
doc-builder preview diffusers docs/source/en
```
The docs will be viewable at [http://localhost:3000](http://localhost:3000). You can also preview the docs once you have opened a PR. You will see a bot add a comment to a link where the documentation with your changes lives.
---
**NOTE**
The `preview` command only works with existing doc files. When you add a completely new file, you need to update `_toctree.yml` & restart `preview` command (`ctrl-c` to stop it & call `doc-builder preview ...` again).
---
## Adding a new element to the navigation bar
Accepted files are Markdown (.md).
Create a file with its extension and put it in the source directory. You can then link it to the toc-tree by putting
the filename without the extension in the [`_toctree.yml`](https://github.com/huggingface/diffusers/blob/main/docs/source/_toctree.yml) file.
## Renaming section headers and moving sections
It helps to keep the old links working when renaming the section header and/or moving sections from one document to another. This is because the old links are likely to be used in Issues, Forums, and Social media, and it makes for a much better user experience if users reading those months later can still easily navigate to the originally intended information.
Therefore, we simply keep a little map of moved sections at the end of the document where the original section was. The key is to preserve the original anchor.
So if you renamed a section from: "Section A" to "Section B", then you can add at the end of the file:
Use the relative style to link to the new file so that the versioned docs continue to work.
For an example of a rich moved section set please see the very end of [the transformers Trainer doc](https://github.com/huggingface/transformers/blob/main/docs/source/en/main_classes/trainer.md).
## Writing Documentation - Specification
The `huggingface/diffusers` documentation follows the
[Google documentation](https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html) style for docstrings,
although we can write them directly in Markdown.
### Adding a new tutorial
Adding a new tutorial or section is done in two steps:
- Add a new file under `docs/source`. This file can either be ReStructuredText (.rst) or Markdown (.md).
- Link that file in `docs/source/_toctree.yml` on the correct toc-tree.
Make sure to put your new file under the proper section. It's unlikely to go in the first section (*Get Started*), so
depending on the intended targets (beginners, more advanced users, or researchers) it should go in sections two, three, or four.
### Adding a new pipeline/scheduler
When adding a new pipeline:
- create a file `xxx.md` under `docs/source/api/pipelines` (don't hesitate to copy an existing file as template).
- Link that file in (*Diffusers Summary*) section in `docs/source/api/pipelines/overview.md`, along with the link to the paper, and a colab notebook (if available).
- Write a short overview of the diffusion model:
- Overview with paper & authors
- Paper abstract
- Tips and tricks and how to use it best
- Possible an end-to-end example of how to use it
- Add all the pipeline classes that should be linked in the diffusion model. These classes should be added using our Markdown syntax. By default as follows:
```
## XXXPipeline
[[autodoc]] XXXPipeline
- all
- __call__
```
This will include every public method of the pipeline that is documented, as well as the `__call__` method that is not documented by default. If you just want to add additional methods that are not documented, you can put the list of all methods to add in a list that contains `all`.
```
[[autodoc]] XXXPipeline
- all
- __call__
- enable_attention_slicing
- disable_attention_slicing
- enable_xformers_memory_efficient_attention
- disable_xformers_memory_efficient_attention
```
You can follow the same process to create a new scheduler under the `docs/source/api/schedulers` folder
### Writing source documentation
Values that should be put in `code` should either be surrounded by backticks: \`like so\`. Note that argument names
and objects like True, None, or any strings should usually be put in `code`.
When mentioning a class, function, or method, it is recommended to use our syntax for internal links so that our tool
adds a link to its documentation with this syntax: \[\`XXXClass\`\] or \[\`function\`\]. This requires the class or
function to be in the main package.
If you want to create a link to some internal class or function, you need to
provide its path. For instance: \[\`pipelines.ImagePipelineOutput\`\]. This will be converted into a link with
`pipelines.ImagePipelineOutput` in the description. To get rid of the path and only keep the name of the object you are
linking to in the description, add a ~: \[\`~pipelines.ImagePipelineOutput\`\] will generate a link with `ImagePipelineOutput` in the description.
The same works for methods, so you can use either \[\`XXXClass.method\`\] or \[\`~XXXClass.method\`\].
#### Defining arguments in a method
Arguments should be defined with the `Args:` (or `Arguments:` or `Parameters:`) prefix, followed by a line return and
an indentation. The argument should be followed by its type, with its shape if it is a tensor, a colon, and its
description:
```
Args:
n_layers (`int`): The number of layers of the model.
```
If the description is too long to fit in one line, another indentation is necessary before writing the description
after the argument.
Here's an example showcasing everything so far:
```
Args:
input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using [`AlbertTokenizer`]. See [`~PreTrainedTokenizer.encode`] and
[`~PreTrainedTokenizer.__call__`] for details.
[What are input IDs?](../glossary#input-ids)
```
For optional arguments or arguments with defaults, we use the following syntax: imagine we have a function with the
following signature:
```
def my_function(x: str = None, a: float = 1):
```
then its documentation should look like this:
```
Args:
x (`str`, *optional*):
This argument controls ...
a (`float`, *optional*, defaults to 1):
This argument is used to ...
```
Note that we always omit the "defaults to \`None\`" when None is the default for any argument. Also note that even
if the first line describing your argument type and its default gets long, you can't break it on several lines. You can
however write as many lines as you want in the indented description (see the example above with `input_ids`).
#### Writing a multi-line code block
Multi-line code blocks can be useful for displaying examples. They are done between two lines of three backticks as usual in Markdown:
````
```
# first line of code
# second line
# etc
```
````
#### Writing a return block
The return block should be introduced with the `Returns:` prefix, followed by a line return and an indentation.
The first line should be the type of the return, followed by a line return. No need to indent further for the elements
building the return.
Here's an example of a single value return:
```
Returns:
`List[int]`: A list of integers in the range [0, 1] --- 1 for a special token, 0 for a sequence token.
```
Here's an example of a tuple return, comprising several objects:
```
Returns:
`tuple(torch.FloatTensor)` comprising various elements depending on the configuration ([`BertConfig`]) and inputs:
- **loss** (*optional*, returned when `masked_lm_labels` is provided) `torch.FloatTensor` of shape `(1,)` --
Total loss is the sum of the masked language modeling loss and the next sequence prediction (classification) loss.
- **prediction_scores** (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`) --
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
```
#### Adding an image
Due to the rapidly growing repository, it is important to make sure that no files that would significantly weigh down the repository are added. This includes images, videos, and other non-text files. We prefer to leverage a hf.co hosted `dataset` like
the ones hosted on [`hf-internal-testing`](https://huggingface.co/hf-internal-testing) in which to place these files and reference
them by URL. We recommend putting them in the following dataset: [huggingface/documentation-images](https://huggingface.co/datasets/huggingface/documentation-images).
If you are making an external contribution, feel free to add the images to your PR and ask a Hugging Face member to migrate them
to this dataset.
## Styling the docstring
We have an automatic script running with the `make style` command that will make sure that:
- the docstrings fully take advantage of the line width
- all code examples are formatted using black, like the code of the Transformers library
This script may have some weird failures if you made a syntax mistake or if you uncover a bug. Therefore, it's
recommended to commit your changes before running `make style`, so you can revert the changes done by that script.
### Translating the Diffusers documentation into your language
As part of our mission to democratize machine learning, we'd love to make the Diffusers library available in many more languages! Follow the steps below if you want to help translate the documentation into your language 🙏.
**🗞️ Open an issue**
To get started, navigate to the [Issues](https://github.com/huggingface/diffusers/issues) page of this repo and check if anyone else has opened an issue for your language. If not, open a new issue by selecting the "Translation template" from the "New issue" button.
Once an issue exists, post a comment to indicate which chapters you'd like to work on, and we'll add your name to the list.
**🍴 Fork the repository**
First, you'll need to [fork the Diffusers repo](https://docs.github.com/en/get-started/quickstart/fork-a-repo). You can do this by clicking on the **Fork** button on the top-right corner of this repo's page.
Once you've forked the repo, you'll want to get the files on your local machine for editing. You can do that by cloning the fork with Git as follows:
**📋 Copy-paste the English version with a new language code**
The documentation files are in one leading directory:
- [`docs/source`](https://github.com/huggingface/diffusers/tree/main/docs/source): All the documentation materials are organized here by language.
You'll only need to copy the files in the [`docs/source/en`](https://github.com/huggingface/diffusers/tree/main/docs/source/en) directory, so first navigate to your fork of the repo and run the following:
```bash
cd ~/path/to/diffusers/docs
cp -r source/en source/LANG-ID
```
Here, `LANG-ID` should be one of the ISO 639-1 or ISO 639-2 language codes -- see [here](https://www.loc.gov/standards/iso639-2/php/code_list.php) for a handy table.
**✍️ Start translating**
The fun part comes - translating the text!
The first thing we recommend is translating the part of the `_toctree.yml` file that corresponds to your doc chapter. This file is used to render the table of contents on the website.
> 🙋 If the `_toctree.yml` file doesn't yet exist for your language, you can create one by copy-pasting from the English version and deleting the sections unrelated to your chapter. Just make sure it exists in the `docs/source/LANG-ID/` directory!
The fields you should add are `local` (with the name of the file containing the translation; e.g. `autoclass_tutorial`), and `title` (with the title of the doc in your language; e.g. `Load pretrained instances with an AutoClass`) -- as a reference, here is the `_toctree.yml` for [English](https://github.com/huggingface/diffusers/blob/main/docs/source/en/_toctree.yml):
```yaml
- sections:
  - local: pipeline_tutorial # Do not change this! Use the same name for your .md file
    title: Pipelines for inference # Translate this!
    ...
  title: Tutorials # Translate this!
```
Once you have translated the `_toctree.yml` file, you can start translating the [MDX](https://mdxjs.com/) files associated with your docs chapter.
> 🙋 If you'd like others to help you with the translation, you should [open an issue](https://github.com/huggingface/diffusers/issues) and tag @patrickvonplaten.
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Configuration
Schedulers from [`~schedulers.scheduling_utils.SchedulerMixin`] and models from [`ModelMixin`] inherit from [`ConfigMixin`] which stores all the parameters that are passed to their respective `__init__` methods in a JSON-configuration file.
<Tip>
To use private or [gated](https://huggingface.co/docs/hub/models-gated#gated-models) models, log-in with `huggingface-cli login`.
</Tip>
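A quick illustration of what `ConfigMixin` buys you (the Stable Diffusion checkpoint is used only as an example):
```py
from diffusers import DiffusionPipeline, EulerDiscreteScheduler

pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Every ConfigMixin subclass exposes the parameters passed to its __init__ via `.config`
print(pipeline.scheduler.config)

# The stored config can be reused to instantiate a compatible scheduler
pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config)
```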
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Pipelines
The [`DiffusionPipeline`] is the quickest way to load any pretrained diffusion pipeline from the [Hub](https://huggingface.co/models?library=diffusers) for inference.
<Tip>
You shouldn't use the [`DiffusionPipeline`] class for training or finetuning a diffusion model. Individual
components (for example, [`UNet2DModel`] and [`UNet2DConditionModel`]) of diffusion pipelines are usually trained individually, so we suggest directly working with them instead.
</Tip>
The pipeline type (for example [`StableDiffusionPipeline`]) of any diffusion pipeline loaded with [`~DiffusionPipeline.from_pretrained`] is automatically
detected and pipeline components are loaded and passed to the `__init__` function of the pipeline.
Any pipeline object can be saved locally with [`~DiffusionPipeline.save_pretrained`].
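For instance (a sketch; the checkpoint name is just an example):
```py
from diffusers import DiffusionPipeline

# The concrete pipeline class (here, StableDiffusionPipeline) is detected automatically
pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Save all components locally so the pipeline can be reloaded with from_pretrained("./sd-pipeline")
pipeline.save_pretrained("./sd-pipeline")
```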
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# VAE Image Processor
The [`VaeImageProcessor`] provides a unified API for [`StableDiffusionPipeline`]s to prepare image inputs for VAE encoding and to post-process outputs once they're decoded. This includes transformations such as resizing, normalization, and conversion between PIL Image, PyTorch, and NumPy arrays.
All pipelines with [`VaeImageProcessor`] accept PIL Image, PyTorch tensor, or NumPy arrays as image inputs and return outputs based on the `output_type` argument specified by the user. You can pass encoded image latents directly to the pipeline and return latents from the pipeline as a specific output with the `output_type` argument (for example `output_type="pt"`). This allows you to take the generated latents from one pipeline and pass them to another pipeline as input without leaving the latent space. It also makes it much easier to use multiple pipelines together by passing PyTorch tensors directly between different pipelines.
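A small sketch of the pre-/post-processing round trip (the file name `input.png` is a placeholder):
```py
from PIL import Image
from diffusers.image_processor import VaeImageProcessor

processor = VaeImageProcessor()

# PIL image -> normalized torch tensor, ready to be encoded by the VAE
image = Image.open("input.png").convert("RGB")  # placeholder file name
tensor = processor.preprocess(image)  # shape (1, 3, H, W), values roughly in [-1, 1]

# torch tensor -> PIL image, e.g. after VAE decoding
pil_out = processor.postprocess(tensor, output_type="pil")[0]
```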
## VaeImageProcessor
[[autodoc]] image_processor.VaeImageProcessor
## VaeImageProcessorLDM3D
The [`VaeImageProcessorLDM3D`] accepts RGB and depth inputs and returns RGB and depth outputs.
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Loaders
Adapters (textual inversion, LoRA, hypernetworks) allow you to modify a diffusion model to generate images in a specific style without training or finetuning the entire model. The adapter weights are typically only a tiny fraction of the pretrained model's weights, which makes them very portable. 🤗 Diffusers provides an easy-to-use `LoaderMixin` API to load adapter weights.
<Tip warning={true}>
🧪 The `LoaderMixins` are highly experimental and prone to future changes. To use private or [gated](https://huggingface.co/docs/hub/models-gated#gated-models) models, log-in with `huggingface-cli login`.
</Tip>
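For example, loading LoRA or textual inversion weights into a pipeline looks roughly like this (the adapter repository ids below are placeholders for any adapter checkpoint on the Hub):
```py
import torch
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# LoRA weights (placeholder repository id)
pipeline.load_lora_weights("your-username/your-lora")

# Textual inversion embeddings are loaded the same way (placeholder repository id)
pipeline.load_textual_inversion("your-username/your-embedding")
```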
| Verbosity level | Int value | Description |
|---|---|---|
| `diffusers.logging.CRITICAL` or `diffusers.logging.FATAL` | 50 | only report the most critical errors |
| `diffusers.logging.ERROR` | 40 | only report errors |
| `diffusers.logging.WARNING` or `diffusers.logging.WARN` | 30 | only report errors and warnings (default) |
| `diffusers.logging.INFO` | 20 | only report errors, warnings, and basic information |
| `diffusers.logging.DEBUG` | 10 | report all information |
By default, `tqdm` progress bars are displayed during model download. [`logging.disable_progress_bar`] and [`logging.enable_progress_bar`] are used to enable or disable this behavior.
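For example, to quieten the library and hide the progress bars (and later restore the defaults):
```py
import diffusers

# Only report errors and hide the tqdm download progress bars
diffusers.logging.set_verbosity_error()
diffusers.logging.disable_progress_bar()

# Restore the defaults
diffusers.logging.set_verbosity_warning()
diffusers.logging.enable_progress_bar()
```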
Improved larger variational autoencoder (VAE) model with KL loss for inpainting task: [Designing a Better Asymmetric VQGAN for StableDiffusion](https://arxiv.org/abs/2306.04632) by Zixin Zhu, Xuelu Feng, Dongdong Chen, Jianmin Bao, Le Wang, Yinpeng Chen, Lu Yuan, Gang Hua.
The abstract from the paper is:
*StableDiffusion is a revolutionary text-to-image generator that is causing a stir in the world of image generation and editing. Unlike traditional methods that learn a diffusion model in pixel space, StableDiffusion learns a diffusion model in the latent space via a VQGAN, ensuring both efficiency and quality. It not only supports image generation tasks, but also enables image editing for real images, such as image inpainting and local editing. However, we have observed that the vanilla VQGAN used in StableDiffusion leads to significant information loss, causing distortion artifacts even in non-edited image regions. To this end, we propose a new asymmetric VQGAN with two simple designs. Firstly, in addition to the input from the encoder, the decoder contains a conditional branch that incorporates information from task-specific priors, such as the unmasked image region in inpainting. Secondly, the decoder is much heavier than the encoder, allowing for more detailed recovery while only slightly increasing the total inference cost. The training cost of our asymmetric VQGAN is cheap, and we only need to retrain a new asymmetric decoder while keeping the vanilla VQGAN encoder and StableDiffusion unchanged. Our asymmetric VQGAN can be widely used in StableDiffusion-based inpainting and local editing methods. Extensive experiments demonstrate that it can significantly improve the inpainting and editing performance, while maintaining the original text-to-image capability. The code is available at https://github.com/buxiangzhiren/Asymmetric_VQGAN*
Evaluation results can be found in section 4.1 of the original paper.
Tiny AutoEncoder for Stable Diffusion (TAESD) was introduced in [madebyollin/taesd](https://github.com/madebyollin/taesd) by Ollin Boer Bohan. It is a tiny distilled version of Stable Diffusion's VAE that can quickly decode the latents in a [`StableDiffusionPipeline`] or [`StableDiffusionXLPipeline`] almost instantly.
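A sketch of swapping TAESD into a pipeline (the Stable Diffusion checkpoint is only an example; `madebyollin/taesd` hosts the diffusers-format TAESD weights):
```py
import torch
from diffusers import StableDiffusionPipeline, AutoencoderTiny

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
# Replace the full-size VAE with the tiny distilled one for near-instant latent decoding
pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesd", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("a cheerful corgi in a field of flowers").images[0]
```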
The variational autoencoder (VAE) model with KL loss was introduced in [Auto-Encoding Variational Bayes](https://arxiv.org/abs/1312.6114v11) by Diederik P. Kingma and Max Welling. The model is used in 🤗 Diffusers to encode images into latents and to decode latent representations into images.
The abstract from the paper is:
*How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions are two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results.*
## Loading from the original format
By default the [`AutoencoderKL`] should be loaded with [`~ModelMixin.from_pretrained`], but it can also be loaded
from the original format using [`FromOriginalVAEMixin.from_single_file`] as follows:
```py
from diffusers import AutoencoderKL

url = "https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.safetensors"  # can also be a local file
model = AutoencoderKL.from_single_file(url)
```
The ControlNet model was introduced in [Adding Conditional Control to Text-to-Image Diffusion Models](https://huggingface.co/papers/2302.05543) by Lvmin Zhang and Maneesh Agrawala. It provides a greater degree of control over text-to-image generation by conditioning the model on additional inputs such as edge maps, depth maps, segmentation maps, and keypoints for pose detection.
The abstract from the paper is:
*We present a neural network structure, ControlNet, to control pretrained large diffusion models to support additional input conditions. The ControlNet learns task-specific conditions in an end-to-end way, and the learning is robust even when the training dataset is small (< 50k). Moreover, training a ControlNet is as fast as fine-tuning a diffusion model, and the model can be trained on a personal devices. Alternatively, if powerful computation clusters are available, the model can scale to large amounts (millions to billions) of data. We report that large diffusion models like Stable Diffusion can be augmented with ControlNets to enable conditional inputs like edge maps, segmentation maps, keypoints, etc. This may enrich the methods to control large diffusion models and further facilitate related applications.*
## Loading from the original format
By default the [`ControlNetModel`] should be loaded with [`~ModelMixin.from_pretrained`], but it can also be loaded
from the original format using [`FromOriginalControlnetMixin.from_single_file`] as follows:
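The snippet presumably mirrors the VAE example above; a minimal sketch (the checkpoint URL is only an example):
```py
from diffusers import ControlNetModel

url = "https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11p_sd15_canny.pth"  # can also be a local file
controlnet = ControlNetModel.from_single_file(url)
```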
🤗 Diffusers provides pretrained models for popular algorithms and modules to create custom diffusion systems. The primary function of models is to denoise an input sample as modeled by the distribution \\(p_{\theta}(x_{t-1}|x_{t})\\).
All models are built from the base [`ModelMixin`] class which is a [`torch.nn.Module`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html) providing basic functionality for saving and loading models, locally and from the Hugging Face Hub.
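A minimal sketch of that save/load functionality (the checkpoint and local path are only examples):
```py
from diffusers import UNet2DConditionModel

# Load a single model component from a pipeline repository on the Hub
unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")

# Save it locally; it can later be reloaded with UNet2DConditionModel.from_pretrained("./my-unet")
unet.save_pretrained("./my-unet")
```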
The Prior Transformer was originally introduced in [Hierarchical Text-Conditional Image Generation with CLIP Latents
](https://huggingface.co/papers/2204.06125) by Ramesh et al. It is used to predict CLIP image embeddings from CLIP text embeddings; image embeddings are predicted through a denoising diffusion process.
The abstract from the paper is:
*Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Moreover, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples.*
A Transformer model for image-like data from [CompVis](https://huggingface.co/CompVis) that is based on the [Vision Transformer](https://huggingface.co/papers/2010.11929) introduced by Dosovitskiy et al. The [`Transformer2DModel`] accepts discrete (classes of vector embeddings) or continuous (actual embeddings) inputs.
When the input is **continuous**:
1. Project the input and reshape it to `(batch_size, sequence_length, feature_dimension)`.
2. Apply the Transformer blocks in the standard way.
3. Reshape to image.
When the input is **discrete**:
<Tip>
It is assumed one of the input classes is the masked latent pixel. The predicted classes of the unnoised image don't contain a prediction for the masked pixel because the unnoised image cannot be masked.
</Tip>
1. Convert input (classes of latent pixels) to embeddings and apply positional embeddings.
2. Apply the Transformer blocks in the standard way.
The [UNet](https://huggingface.co/papers/1505.04597) model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on its number of dimensions and whether it is a conditional model or not. This is a 1D UNet model.
The abstract from the paper is:
*There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net.*
The [UNet](https://huggingface.co/papers/1505.04597) model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on its number of dimensions and whether it is a conditional model or not. This is a 2D UNet conditional model.
The abstract from the paper is:
*There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net.*
The [UNet](https://huggingface.co/papers/1505.04597) model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on its number of dimensions and whether it is a conditional model or not. This is a 2D UNet model.
The abstract from the paper is:
*There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net.*
The [UNet](https://huggingface.co/papers/1505.04597) model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on its number of dimensions and whether it is a conditional model or not. This is a 3D UNet conditional model.
The abstract from the paper is:
*There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net.*
The VQ-VAE model was introduced in [Neural Discrete Representation Learning](https://huggingface.co/papers/1711.00937) by Aaron van den Oord, Oriol Vinyals and Koray Kavukcuoglu. The model is used in 🤗 Diffusers to decode latent representations into images. Unlike [`AutoencoderKL`], the [`VQModel`] works in a quantized latent space.
The abstract from the paper is:
*Learning useful representations without supervision remains a key challenge in machine learning. In this paper, we propose a simple yet powerful generative model that learns such discrete representations. Our model, the Vector Quantised-Variational AutoEncoder (VQ-VAE), differs from VAEs in two key ways: the encoder network outputs discrete, rather than continuous, codes; and the prior is learnt rather than static. In order to learn a discrete latent representation, we incorporate ideas from vector quantisation (VQ). Using the VQ method allows the model to circumvent issues of "posterior collapse" -- where the latents are ignored when they are paired with a powerful autoregressive decoder -- typically observed in the VAE framework. Pairing these representations with an autoregressive prior, the model can generate high quality images, videos, and speech as well as doing high quality speaker conversion and unsupervised learning of phonemes, providing further evidence of the utility of the learnt representations.*
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Outputs
All model outputs are subclasses of [`~utils.BaseOutput`], data structures containing all the information returned by the model. The outputs can also be used as tuples or dictionaries.
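For example (the checkpoint is only an illustration), the output of an unconditional image pipeline can be accessed in any of the three ways:
```py
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("google/ddpm-cat-256")
outputs = pipeline(num_inference_steps=25)

image = outputs.images[0]     # attribute access
image = outputs["images"][0]  # dictionary-style access
image = outputs[0][0]         # tuple-style access
```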
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# AltDiffusion
AltDiffusion was proposed in [AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities](https://huggingface.co/papers/2211.06679) by Zhongzhi Chen, Guang Liu, Bo-Wen Zhang, Fulong Ye, Qinghong Yang, Ledell Wu.
The abstract from the paper is:
*In this work, we present a conceptually simple and effective method to train a strong bilingual multimodal representation model. Starting from the pretrained multimodal representation model CLIP released by OpenAI, we switched its text encoder with a pretrained multilingual text encoder XLM-R, and aligned both languages and image representations by a two-stage training schema consisting of teacher learning and contrastive learning. We validate our method through evaluations of a wide range of tasks. We set new state-of-the-art performances on a bunch of tasks including ImageNet-CN, Flicker30k- CN, and COCO-CN. Further, we obtain very close performances with CLIP on almost all tasks, suggesting that one can simply alter the text encoder in CLIP for extended capabilities such as multilingual understanding.*
## Tips
`AltDiffusion` is conceptually the same as [Stable Diffusion](./stable_diffusion/overview).
<Tip>
Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.
</Tip>
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Attend-and-Excite
Attend-and-Excite for Stable Diffusion was proposed in [Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models](https://attendandexcite.github.io/Attend-and-Excite/) and provides textual attention control over image generation.
The abstract from the paper is:
*Text-to-image diffusion models have recently received a lot of interest for their astonishing ability to produce high-fidelity images from text only. However, achieving one-shot generation that aligns with the user's intent is nearly impossible, yet small changes to the input prompt often result in very different images. This leaves the user with little semantic control. To put the user in control, we show how to interact with the diffusion process to flexibly steer it along semantic directions. This semantic guidance (SEGA) allows for subtle and extensive edits, changes in composition and style, as well as optimizing the overall artistic conception. We demonstrate SEGA's effectiveness on a variety of tasks and provide evidence for its versatility and flexibility.*
You can find additional information about Attend-and-Excite on the [project page](https://attendandexcite.github.io/Attend-and-Excite/), the [original codebase](https://github.com/AttendAndExcite/Attend-and-Excite), or try it out in a [demo](https://huggingface.co/spaces/AttendAndExcite/Attend-and-Excite).
<Tip>
Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.
</Tip>
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Audio Diffusion
[Audio Diffusion](https://github.com/teticio/audio-diffusion) is by Robert Dargavel Smith, and it leverages the recent advances in image generation from diffusion models by converting audio samples to and from Mel spectrogram images.
The original codebase, training scripts and example notebooks can be found at [teticio/audio-diffusion](https://github.com/teticio/audio-diffusion).
<Tip>
Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.
</Tip>
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# AudioLDM
AudioLDM was proposed in [AudioLDM: Text-to-Audio Generation with Latent Diffusion Models](https://huggingface.co/papers/2301.12503) by Haohe Liu et al. Inspired by [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/overview), AudioLDM
is a text-to-audio _latent diffusion model (LDM)_ that learns continuous audio representations from [CLAP](https://huggingface.co/docs/transformers/main/model_doc/clap)
latents. AudioLDM takes a text prompt as input and predicts the corresponding audio. It can generate text-conditional
sound effects, human speech and music.
The abstract from the paper is:
*Text-to-audio (TTA) system has recently gained attention for its ability to synthesize general audio based on text descriptions. However, previous studies in TTA have limited generation quality with high computational costs. In this study, we propose AudioLDM, a TTA system that is built on a latent space to learn the continuous audio representations from contrastive language-audio pretraining (CLAP) latents. The pretrained CLAP models enable us to train LDMs with audio embedding while providing text embedding as a condition during sampling. By learning the latent representations of audio signals and their compositions without modeling the cross-modal relationship, AudioLDM is advantageous in both generation quality and computational efficiency. Trained on AudioCaps with a single GPU, AudioLDM achieves state-of-the-art TTA performance measured by both objective and subjective metrics (e.g., frechet distance). Moreover, AudioLDM is the first TTA system that enables various text-guided audio manipulations (e.g., style transfer) in a zero-shot fashion. Our implementation and demos are available at https://audioldm.github.io.*
The original codebase can be found at [haoheliu/AudioLDM](https://github.com/haoheliu/AudioLDM).
## Tips
When constructing a prompt, keep in mind:
* Descriptive prompt inputs work best; you can use adjectives to describe the sound (for example, "high quality" or "clear") and make the prompt context specific (for example, "water stream in a forest" instead of "stream").
* It's best to use general terms like "cat" or "dog" instead of specific names or abstract objects the model may not be familiar with.
During inference:
* The _quality_ of the predicted audio sample can be controlled by the `num_inference_steps` argument; higher steps give higher quality audio at the expense of slower inference.
* The _length_ of the predicted audio sample can be controlled by varying the `audio_length_in_s` argument.
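A short sketch putting these arguments together (the checkpoint id is taken from the Hub and used only as an example):
```py
import torch
from diffusers import AudioLDMPipeline

pipe = AudioLDMPipeline.from_pretrained("cvssp/audioldm-s-full-v2", torch_dtype=torch.float16).to("cuda")

audio = pipe(
    "Techno music with a strong, upbeat tempo and high melodic riffs",
    num_inference_steps=10,  # higher -> better quality, slower inference
    audio_length_in_s=5.0,   # length of the generated clip in seconds
).audios[0]
```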
<Tip>
Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.
</Tip>
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# AudioLDM 2
AudioLDM 2 was proposed in [AudioLDM 2: Learning Holistic Audio Generation with Self-supervised Pretraining](https://arxiv.org/abs/2308.05734)
by Haohe Liu et al. AudioLDM 2 takes a text prompt as input and predicts the corresponding audio. It can generate
text-conditional sound effects, human speech and music.
Inspired by [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/overview), AudioLDM 2
is a text-to-audio _latent diffusion model (LDM)_ that learns continuous audio representations from text embeddings. Two
text encoder models are used to compute the text embeddings from a prompt input: the text-branch of [CLAP](https://huggingface.co/docs/transformers/main/en/model_doc/clap)
and the encoder of [Flan-T5](https://huggingface.co/docs/transformers/main/en/model_doc/flan-t5). These text embeddings
are then projected to a shared embedding space by an [AudioLDM2ProjectionModel](https://huggingface.co/docs/diffusers/main/api/pipelines/audioldm2#diffusers.AudioLDM2ProjectionModel).
A [GPT2](https://huggingface.co/docs/transformers/main/en/model_doc/gpt2) _language model (LM)_ is used to auto-regressively
predict eight new embedding vectors, conditional on the projected CLAP and Flan-T5 embeddings. The generated embedding
vectors and Flan-T5 text embeddings are used as cross-attention conditioning in the LDM. The [UNet](https://huggingface.co/docs/diffusers/main/en/api/pipelines/audioldm2#diffusers.AudioLDM2UNet2DConditionModel)
of AudioLDM 2 is unique in the sense that it takes **two** cross-attention embeddings, as opposed to one cross-attention
conditioning, as in most other LDMs.
The abstract of the paper is the following:
*Although audio generation shares commonalities across different types of audio, such as speech, music, and sound effects, designing models for each type requires careful consideration of specific objectives and biases that can significantly differ from those of other types. To bring us closer to a unified perspective of audio generation, this paper proposes a framework that utilizes the same learning method for speech, music, and sound effect generation. Our framework introduces a general representation of audio, called language of audio (LOA). Any audio can be translated into LOA based on AudioMAE, a self-supervised pre-trained representation learning model. In the generation process, we translate any modalities into LOA by using a GPT-2 model, and we perform self-supervised audio generation learning with a latent diffusion model conditioned on LOA. The proposed framework naturally brings advantages such as in-context learning abilities and reusable self-supervised pretrained AudioMAE and latent diffusion models. Experiments on the major benchmarks of text-to-audio, text-to-music, and text-to-speech demonstrate new state-of-the-art or competitive performance to previous approaches.*
This pipeline was contributed by [sanchit-gandhi](https://huggingface.co/sanchit-gandhi). The original codebase can be
found at [haoheliu/audioldm2](https://github.com/haoheliu/audioldm2).
## Tips
### Choosing a checkpoint
AudioLDM2 comes in three variants. Two of these checkpoints are applicable to the general task of text-to-audio
generation. The third checkpoint is trained exclusively on text-to-music generation.
All checkpoints share the same model size for the text encoders and VAE. They differ in the size and depth of the UNet.
See table below for details on the three checkpoints:
| Checkpoint | Task | UNet Model Size | Total Model Size | Training Data / h |
|------------|------|-----------------|------------------|-------------------|
### Constructing a prompt
* Descriptive prompt inputs work best: use adjectives to describe the sound (e.g. "high quality" or "clear") and make the prompt context specific (e.g. "water stream in a forest" instead of "stream").
* It's best to use general terms like "cat" or "dog" instead of specific names or abstract objects the model may not be familiar with.
* Using a **negative prompt** can significantly improve the quality of the generated waveform, by guiding the generation away from terms that correspond to poor quality audio. Try using a negative prompt of "Low quality."
### Controlling inference
* The _quality_ of the predicted audio sample can be controlled by the `num_inference_steps` argument; higher steps give higher quality audio at the expense of slower inference.
* The _length_ of the predicted audio sample can be controlled by varying the `audio_length_in_s` argument.
### Evaluating generated waveforms
* The quality of the generated waveforms can vary significantly based on the seed. Try generating with different seeds until you find a satisfactory generation.
* Multiple waveforms can be generated in one go: set `num_waveforms_per_prompt` to a value greater than 1. Automatic scoring will be performed between the generated waveforms and prompt text, and the audios ranked from best to worst accordingly.
The following example demonstrates how to apply these tips to construct a prompt for music generation: [example](https://huggingface.co/docs/diffusers/main/en/api/pipelines/audioldm2#diffusers.AudioLDM2Pipeline.__call__.example).
<Tip>
Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between
scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines)
section to learn how to efficiently load the same components into multiple pipelines.
</Tip>
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# AutoPipeline
`AutoPipeline` is designed to:
1. make it easy for you to load a checkpoint for a task without knowing the specific pipeline class to use
2. use multiple pipelines in your workflow
Based on the task, the `AutoPipeline` class automatically retrieves the relevant pipeline given the name or path to the pretrained weights with the `from_pretrained()` method.
To seamlessly switch between tasks with the same checkpoint without reallocating additional memory, use the `from_pipe()` method to transfer the components from the original pipeline to the new one.
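For instance, a minimal sketch (the checkpoint name and prompts here are only illustrative):
```py
import torch
from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image

# AutoPipeline resolves the concrete pipeline class for the task from the checkpoint.
pipe_t2i = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
image = pipe_t2i("a cozy cabin in a snowy forest").images[0]

# Switch to image-to-image with the same checkpoint without re-allocating memory.
pipe_i2i = AutoPipelineForImage2Image.from_pipe(pipe_t2i)
image = pipe_i2i("make it autumn", image=image).images[0]
```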
# Consistency Models
Consistency Models were proposed in [Consistency Models](https://huggingface.co/papers/2303.01469) by Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever.
The abstract from the paper is:
*Diffusion models have significantly advanced the fields of image, audio, and video generation, but they depend on an iterative sampling process that causes slow generation. To overcome this limitation, we propose consistency models, a new family of models that generate high quality samples by directly mapping noise to data. They support fast one-step generation by design, while still allowing multistep sampling to trade compute for sample quality. They also support zero-shot data editing, such as image inpainting, colorization, and super-resolution, without requiring explicit training on these tasks. Consistency models can be trained either by distilling pre-trained diffusion models, or as standalone generative models altogether. Through extensive experiments, we demonstrate that they outperform existing distillation techniques for diffusion models in one- and few-step sampling, achieving the new state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64x64 for one-step generation. When trained in isolation, consistency models become a new family of generative models that can outperform existing one-step, non-adversarial generative models on standard benchmarks such as CIFAR-10, ImageNet 64x64 and LSUN 256x256. *
The original codebase can be found at [openai/consistency_models](https://github.com/openai/consistency_models), and additional checkpoints are available at [openai](https://huggingface.co/openai).
The pipeline was contributed by [dg845](https://github.com/dg845) and [ayushtues](https://huggingface.co/ayushtues). ❤️
## Tips
For an additional speed-up, use `torch.compile` to generate multiple images in <1 second:
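A minimal sketch (assuming a CUDA device and the distilled ImageNet-64 checkpoint `openai/diffusers-cd_imagenet64_l2`; swap in whichever checkpoint you actually use):
```py
import torch
from diffusers import ConsistencyModelPipeline

pipe = ConsistencyModelPipeline.from_pretrained(
    "openai/diffusers-cd_imagenet64_l2", torch_dtype=torch.float16
).to("cuda")

# Compile the UNet once; subsequent calls reuse the compiled graph.
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)

# Single-step, class-conditional sampling (class 145 = king penguin in ImageNet).
image = pipe(num_inference_steps=1, class_labels=145).images[0]
image.save("consistency_model.png")
```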
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# ControlNet
[Adding Conditional Control to Text-to-Image Diffusion Models](https://huggingface.co/papers/2302.05543) by Lvmin Zhang and Maneesh Agrawala.
Using a pretrained model, we can provide control images (for example, a depth map) to control Stable Diffusion text-to-image generation so that it follows the structure of the depth image and fills in the details.
The abstract from the paper is:
*We present a neural network structure, ControlNet, to control pretrained large diffusion models to support additional input conditions. The ControlNet learns task-specific conditions in an end-to-end way, and the learning is robust even when the training dataset is small (< 50k). Moreover, training a ControlNet is as fast as fine-tuning a diffusion model, and the model can be trained on a personal devices. Alternatively, if powerful computation clusters are available, the model can scale to large amounts (millions to billions) of data. We report that large diffusion models like Stable Diffusion can be augmented with ControlNets to enable conditional inputs like edge maps, segmentation maps, keypoints, etc. This may enrich the methods to control large diffusion models and further facilitate related applications.*
This model was contributed by [takuma104](https://huggingface.co/takuma104). ❤️
The original codebase can be found at [lllyasviel/ControlNet](https://github.com/lllyasviel/ControlNet).
## Usage example
In the following, we give a simple example of how to use a *ControlNet* checkpoint with Diffusers for inference.
The inference pipeline is the same for all pipelines:
1. Take an image and run it through a pre-conditioning processor.
2. Run the pre-processed image through the [`StableDiffusionControlNetPipeline`].
Let's have a look at a simple example using the [Canny Edge ControlNet](https://huggingface.co/lllyasviel/sd-controlnet-canny).
We will process the input image into a canny image - this is step *1.*, running the pre-conditioning processor. The pre-conditioning processor is different for every ControlNet. Please see the model cards of the [official checkpoints](#controlnet-with-stable-diffusion-1.5) for more information about other models.
First, we need to install opencv:
```
pip install opencv-contrib-python
```
Next, let's also install all required Hugging Face libraries:
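The exact package set depends on your setup; a typical install for this example (assuming you also want `accelerate` for device placement) would be:
```
pip install diffusers transformers accelerate
```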
**Note**: To see how to run all other ControlNet checkpoints, please have a look at [ControlNet with Stable Diffusion 1.5](#controlnet-with-stable-diffusion-1.5).
## Combining multiple conditionings
Multiple ControlNet conditionings can be combined for a single image generation. Pass a list of ControlNets to the pipeline's constructor and a corresponding list of conditionings to `__call__`.
When combining conditionings, it is helpful to mask conditionings such that they do not overlap. In the example, we mask the middle of the canny map where the pose conditioning is located.
It can also be helpful to vary the `controlnet_conditioning_scale` values to emphasize one conditioning over the other, as sketched below.
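A minimal sketch of this pattern, assuming the openpose and canny checkpoints (the prompt, conditioning URLs, and scales below are illustrative; in practice both conditioning images should depict the same scene and be masked so they don't overlap):
```py
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

# Conditioning images (URLs taken from the checkpoint tables below; use images of your own scene).
openpose_image = load_image(
    "https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_human_openpose.png"
)
canny_image = load_image(
    "https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_bird_canny.png"
)

# Pass a list of ControlNets to the pipeline's constructor...
controlnets = [
    ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16),
    ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16),
]
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnets, torch_dtype=torch.float16
).to("cuda")

# ...and a corresponding list of conditioning images (plus per-ControlNet scales) to __call__.
image = pipe(
    "a person and a bird, best quality",
    image=[openpose_image, canny_image],
    controlnet_conditioning_scale=[1.0, 0.8],
    num_inference_steps=30,
).images[0]
```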
Guess Mode is [a ControlNet feature that was implemented](https://github.com/lllyasviel/ControlNet#guess-mode--non-prompt-mode) after the publication of [the paper](https://arxiv.org/abs/2302.05543). The description states:
>In this mode, the ControlNet encoder will try best to recognize the content of the input control map, like depth map, edge map, scribbles, etc, even if you remove all prompts.
#### The core implementation:
It adjusts the scale of the output residuals from ControlNet by a fixed ratio depending on the block depth. The shallowest DownBlock corresponds to `0.1`. As the blocks get deeper, the scale increases exponentially, and the scale for the output of the MidBlock becomes `1.0`.
Since the core implementation is just this, **it does not have any impact on prompt conditioning**. While it is common to use it without specifying any prompts, it is also possible to provide prompts if desired.
#### Usage:
Just specify `guess_mode=True` in the pipe() function. A `guidance_scale` between 3.0 and 5.0 is [recommended](https://github.com/lllyasviel/ControlNet#guess-mode--non-prompt-mode).
```py
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
```
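A minimal guess-mode sketch, assuming the canny checkpoint and a control image taken from the table below:
```py
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

canny_image = load_image(
    "https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_bird_canny.png"
)

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Guess mode: the prompt can be empty; a guidance_scale between 3.0 and 5.0 is recommended.
image = pipe("", image=canny_image, guess_mode=True, guidance_scale=3.0).images[0]
```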
ControlNet requires a *control image* in addition to the text-to-image *prompt*.
Each pretrained model is trained using a different conditioning method that requires different images for conditioning the generated outputs. For example, Canny edge conditioning requires the control image to be the output of a Canny filter, while depth conditioning requires the control image to be a depth map. See the overview and image examples below to know more.
All checkpoints can be found under the authors' namespace [lllyasviel](https://huggingface.co/lllyasviel).
**13.04.2023 Update**: The author has released improved ControlNet checkpoints v1.1 - see [here](#controlnet-v1.1).
### ControlNet v1.0
| Model Name | Control Image Overview| Control Image Example | Generated Image Example |
|---|---|---|---|
|[lllyasviel/sd-controlnet-canny](https://huggingface.co/lllyasviel/sd-controlnet-canny)<br/> *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_bird_canny.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_bird_canny.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_bird_canny_1.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_bird_canny_1.png"/></a>|
|[lllyasviel/sd-controlnet-depth](https://huggingface.co/lllyasviel/sd-controlnet-depth)<br/> *Trained with Midas depth estimation* |A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_vermeer_depth.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_vermeer_depth.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_vermeer_depth_2.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_vermeer_depth_2.png"/></a>|
|[lllyasviel/sd-controlnet-hed](https://huggingface.co/lllyasviel/sd-controlnet-hed)<br/> *Trained with HED edge detection (soft edge)* |A monochrome image with white soft edges on a black background.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_bird_hed.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_bird_hed.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_bird_hed_1.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_bird_hed_1.png"/></a> |
|[lllyasviel/sd-controlnet-mlsd](https://huggingface.co/lllyasviel/sd-controlnet-mlsd)<br/> *Trained with M-LSD line detection* |A monochrome image composed only of white straight lines on a black background.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_room_mlsd.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_room_mlsd.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_room_mlsd_0.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_room_mlsd_0.png"/></a>|
|[lllyasviel/sd-controlnet-normal](https://huggingface.co/lllyasviel/sd-controlnet-normal)<br/> *Trained with normal map* |A [normal mapped](https://en.wikipedia.org/wiki/Normal_mapping) image.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_human_normal.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_human_normal.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_human_normal_1.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_human_normal_1.png"/></a>|
|[lllyasviel/sd-controlnet-openpose](https://huggingface.co/lllyasviel/sd-controlnet_openpose)<br/> *Trained with OpenPose bone image* |A [OpenPose bone](https://github.com/CMU-Perceptual-Computing-Lab/openpose) image.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_human_openpose.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_human_openpose.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_human_openpose_0.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_human_openpose_0.png"/></a>|
|[lllyasviel/sd-controlnet-scribble](https://huggingface.co/lllyasviel/sd-controlnet_scribble)<br/> *Trained with human scribbles* |A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_vermeer_scribble.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_vermeer_scribble.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_vermeer_scribble_0.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_vermeer_scribble_0.png"/></a> |
### ControlNet v1.1
| Model Name | Control Image Overview| Condition Image | Control Image Example | Generated Image Example |
|---|---|---|---|---|
|[lllyasviel/control_v11p_sd15_canny](https://huggingface.co/lllyasviel/control_v11p_sd15_canny)<br/> | *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11e_sd15_ip2p](https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p)<br/> | *Trained with pixel to pixel instruction* | No condition .|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_inpaint](https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint)<br/> | Trained with image inpainting | No condition.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/output.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/output.png"/></a>|
|[lllyasviel/control_v11p_sd15_mlsd](https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd)<br/> | Trained with multi-level line segment detection | An image with annotated line segments.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11f1p_sd15_depth](https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth)<br/> | Trained with depth estimation | An image with depth information, usually represented as a grayscale image.|<a href="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_normalbae](https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae)<br/> | Trained with surface normal estimation | An image with surface normal information, usually represented as a color-coded image.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_seg](https://huggingface.co/lllyasviel/control_v11p_sd15_seg)<br/> | Trained with image segmentation | An image with segmented regions, usually represented as a color-coded image.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_lineart](https://huggingface.co/lllyasviel/control_v11p_sd15_lineart)<br/> | Trained with line art generation | An image with line art, usually black lines on a white background.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15s2_lineart_anime](https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime)<br/> | Trained with anime line art generation | An image with anime-style line art.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_openpose](https://huggingface.co/lllyasviel/control_v11p_sd15_openpose)<br/> | Trained with human pose estimation | An image with human poses, usually represented as a set of keypoints or skeletons.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_scribble](https://huggingface.co/lllyasviel/control_v11p_sd15_scribble)<br/> | Trained with scribble-based image generation | An image with scribbles, usually random or user-drawn strokes.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_softedge](https://huggingface.co/lllyasviel/control_v11p_sd15_softedge)<br/> | Trained with soft edge image generation | An image with soft edges, usually to create a more painterly or artistic effect.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11e_sd15_shuffle](https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle)<br/> | Trained with image shuffling | An image with shuffled patches or regions.|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11f1e_sd15_tile](https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile)<br/> | Trained with image tiling | A blurry image or part of an image .|<a href="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/original.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/original.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/output.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/output.png"/></a>|
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# ControlNet with Stable Diffusion XL
[Adding Conditional Control to Text-to-Image Diffusion Models](https://huggingface.co/papers/2302.05543) by Lvmin Zhang and Maneesh Agrawala.
Using a pretrained model, we can provide control images (for example, a depth map) to control Stable Diffusion text-to-image generation so that it follows the structure of the depth image and fills in the details.
The abstract from the paper is:
*We present a neural network structure, ControlNet, to control pretrained large diffusion models to support additional input conditions. The ControlNet learns task-specific conditions in an end-to-end way, and the learning is robust even when the training dataset is small (< 50k). Moreover, training a ControlNet is as fast as fine-tuning a diffusion model, and the model can be trained on a personal devices. Alternatively, if powerful computation clusters are available, the model can scale to large amounts (millions to billions) of data. We report that large diffusion models like Stable Diffusion can be augmented with ControlNets to enable conditional inputs like edge maps, segmentation maps, keypoints, etc. This may enrich the methods to control large diffusion models and further facilitate related applications.*
We provide support for using ControlNets with [Stable Diffusion XL](./stable_diffusion/stable_diffusion_xl.md) (SDXL).
You can find numerous SDXL ControlNet checkpoints from [this link](https://huggingface.co/models?other=stable-diffusion-xl&other=controlnet); some smaller ControlNet checkpoints are available there as well.
We also encourage you to train custom ControlNets; we provide a [training script](https://github.com/huggingface/diffusers/blob/main/examples/controlnet/README_sdxl.md) for this.
🚨 At the time of this writing, many of these SDXL ControlNet checkpoints are experimental and there is a lot of room for improvement. We encourage our users to provide feedback. 🚨
## MultiControlNet
You can compose multiple ControlNet conditionings from different image inputs to create a *MultiControlNet*. To get better results, it is often helpful to:
1. mask conditionings such that they don't overlap (for example, mask the area of a canny image where the pose conditioning is located)
2. experiment with the [`controlnet_conditioning_scale`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/controlnet#diffusers.StableDiffusionControlNetPipeline.__call__.controlnet_conditioning_scale) parameter to determine how much weight to assign to each conditioning input
In this example, you'll combine a canny image and a human pose estimation image to generate a new image.
Load a list of ControlNet models that correspond to each conditioning, and pass them to the [`StableDiffusionXLControlNetPipeline`]. Use the faster [`UniPCMultistepScheduler`] and enable model offloading to reduce memory usage.
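A minimal loading sketch (the two SDXL ControlNet checkpoint names below are assumptions - substitute the conditionings you actually want to combine):
```py
import torch
from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, UniPCMultistepScheduler

# One ControlNet per conditioning input.
controlnets = [
    ControlNetModel.from_pretrained("diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16),
    ControlNetModel.from_pretrained("thibaud/controlnet-openpose-sdxl-1.0", torch_dtype=torch.float16),
]

pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnets, torch_dtype=torch.float16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()

# At call time, pass one conditioning image and one scale per ControlNet, e.g.:
# image = pipe(prompt, image=[canny_image, openpose_image], controlnet_conditioning_scale=[0.5, 0.5]).images[0]
```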
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Cycle Diffusion
Cycle Diffusion is a text-guided image-to-image generation model proposed in [Unifying Diffusion Models' Latent Space, with Applications to CycleDiffusion and Guidance](https://huggingface.co/papers/2210.05559) by Chen Henry Wu and Fernando De la Torre.
The abstract from the paper is:
*Diffusion models have achieved unprecedented performance in generative modeling. The commonly-adopted formulation of the latent code of diffusion models is a sequence of gradually denoised samples, as opposed to the simpler (e.g., Gaussian) latent space of GANs, VAEs, and normalizing flows. This paper provides an alternative, Gaussian formulation of the latent space of various diffusion models, as well as an invertible DPM-Encoder that maps images into the latent space. While our formulation is purely based on the definition of diffusion models, we demonstrate several intriguing consequences. (1) Empirically, we observe that a common latent space emerges from two diffusion models trained independently on related domains. In light of this finding, we propose CycleDiffusion, which uses DPM-Encoder for unpaired image-to-image translation. Furthermore, applying CycleDiffusion to text-to-image diffusion models, we show that large-scale text-to-image diffusion models can be used as zero-shot image-to-image editors. (2) One can guide pre-trained diffusion models and GANs by controlling the latent codes in a unified, plug-and-play formulation based on energy-based models. Using the CLIP model and a face recognition model as guidance, we demonstrate that diffusion models have better coverage of low-density sub-populations and individuals than GANs.*
<Tip>
Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.
</Tip>
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Dance Diffusion
[Dance Diffusion](https://github.com/Harmonai-org/sample-generator) is by Zach Evans.
Dance Diffusion is the first in a suite of generative audio tools for producers and musicians released by [Harmonai](https://github.com/Harmonai-org).
The original codebase of this implementation can be found at [Harmonai-org](https://github.com/Harmonai-org/sample-generator).
<Tip>
Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.
</Tip>
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# DDIM
[Denoising Diffusion Implicit Models](https://huggingface.co/papers/2010.02502) (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon.
The abstract from the paper is:
*Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space.*
The original codebase can be found at [ermongroup/ddim](https://github.com/ermongroup/ddim).
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# DDPM
[Denoising Diffusion Probabilistic Models](https://huggingface.co/papers/2006.11239) (DDPM) by Jonathan Ho, Ajay Jain and Pieter Abbeel proposes a diffusion-based model of the same name. In the 🤗 Diffusers library, DDPM refers to the *discrete denoising scheduler* from the paper as well as the pipeline.
The abstract from the paper is:
*We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN.*
The original codebase can be found at [hojonathanho/diffusion](https://github.com/hojonathanho/diffusion).
<Tip>
Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.
</Tip>
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# DeepFloyd IF
## Overview
DeepFloyd IF is a novel state-of-the-art open-source text-to-image model with a high degree of photorealism and language understanding.
The model is modular, composed of a frozen text encoder and three cascaded pixel diffusion modules:
- Stage 1: a base model that generates a 64x64 px image based on a text prompt,
- Stage 2: a 64x64 px => 256x256 px super-resolution model, and
- Stage 3: a 256x256 px => 1024x1024 px super-resolution model
Stage 1 and Stage 2 utilize a frozen text encoder based on the T5 transformer to extract text embeddings,
which are then fed into a UNet architecture enhanced with cross-attention and attention pooling.
Stage 3 is [Stability's x4 Upscaling model](https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler).
The result is a highly efficient model that outperforms current state-of-the-art models, achieving a zero-shot FID score of 6.66 on the COCO dataset.
Our work underscores the potential of larger UNet architectures in the first stage of cascaded diffusion models and depicts a promising future for text-to-image synthesis.
## Usage
Before you can use IF, you need to accept its usage conditions. To do so:
1. Make sure to have a [Hugging Face account](https://huggingface.co/join) and be logged in
2. Accept the license on the model card of [DeepFloyd/IF-I-XL-v1.0](https://huggingface.co/DeepFloyd/IF-I-XL-v1.0). Accepting the license on the stage I model card will auto accept for the other IF models.
3. Make sure to log in locally. Install `huggingface_hub`:
```sh
pip install huggingface_hub --upgrade
```
run the login function in a Python shell
```py
from huggingface_hub import login
login()
```
and enter your [Hugging Face Hub access token](https://huggingface.co/docs/hub/security-tokens#what-are-user-access-tokens).
**Demo**: [DeepFloyd IF Space](https://huggingface.co/spaces/DeepFloyd/IF)
**Google Colab**: [Run IF on a free-tier Colab](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/deepfloyd_if_free_tier_google_colab.ipynb)
### Text-to-Image Generation
By default diffusers makes use of [model cpu offloading](https://huggingface.co/docs/diffusers/optimization/fp16#model-offloading-for-fast-inference-and-memory-savings)
to run the whole IF pipeline with as little as 14 GB of VRAM.
As an example prompt, the documentation uses `a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"`.
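A condensed sketch of running the cascade with that prompt (stage 3, the x4 upscaler, is omitted for brevity; the checkpoint names follow the DeepFloyd model cards and are assumptions about which model sizes you want):
```py
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import pt_to_pil

prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"'

# Stage 1: 64x64 base model.
stage_1 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16)
stage_1.enable_model_cpu_offload()

# Stage 2: 64x64 -> 256x256 super-resolution model (it reuses the prompt embeddings from stage 1).
stage_2 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16)
stage_2.enable_model_cpu_offload()

prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt)

image = stage_1(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt").images
image = stage_2(image=image, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt").images
pt_to_pil(image)[0].save("if_stage_II.png")
```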
### Text Guided Image-to-Image Generation
The same IF model weights can be used for text-guided image-to-image translation or image variation.
In this case, just make sure to load the weights using the [`IFImg2ImgPipeline`] and [`IFImg2ImgSuperResolutionPipeline`] pipelines.
**Note**: You can also directly move the weights of the text-to-image pipelines to the image-to-image pipelines
without loading them twice by making use of the [`~DiffusionPipeline.components()`] function as explained [here](#converting-between-different-pipelines).
### Text Guided Inpainting Generation
The same IF model weights can be used for text-guided inpainting.
In this case, just make sure to load the weights using the [`IFInpaintingPipeline`] and [`IFInpaintingSuperResolutionPipeline`] pipelines.
**Note**: You can also directly move the weights of the text-to-image pipelines to the inpainting pipelines
without loading them twice by making use of the [`~DiffusionPipeline.components()`] function as explained [here](#converting-between-different-pipelines).
IF can also run with reduced memory: the T5 text encoder can be loaded in 8-bit and the pipeline instantiated without its UNet (`text_encoder=...`, `unet=None`, `device_map="auto"`), so that prompt embeddings are pre-computed before the UNet is loaded.
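A minimal sketch of this memory-optimized setup (assuming `bitsandbytes` is installed for 8-bit loading; the `variant="8bit"` text encoder weights are taken from the DeepFloyd checkpoint):
```py
import torch
from transformers import T5EncoderModel
from diffusers import DiffusionPipeline

# Load the T5 text encoder in 8-bit to save memory.
text_encoder = T5EncoderModel.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0", subfolder="text_encoder", load_in_8bit=True, variant="8bit", device_map="auto"
)

# Instantiate the pipeline with only the text encoder (no UNet) to pre-compute prompt embeddings.
pipe = DiffusionPipeline.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0",
    text_encoder=text_encoder,  # pass the previously instantiated 8-bit text encoder
    unet=None,
    device_map="auto",
)

prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"'
prompt_embeds, negative_embeds = pipe.encode_prompt(prompt)
```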
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# DiffEdit
[DiffEdit: Diffusion-based semantic image editing with mask guidance](https://huggingface.co/papers/2210.11427) is by Guillaume Couairon, Jakob Verbeek, Holger Schwenk, and Matthieu Cord.
The abstract from the paper is:
*Image generation has recently seen tremendous advances, with diffusion models allowing to synthesize convincing images for a large variety of text prompts. In this article, we propose DiffEdit, a method to take advantage of text-conditioned diffusion models for the task of semantic image editing, where the goal is to edit an image based on a text query. Semantic image editing is an extension of image generation, with the additional constraint that the generated image should be as similar as possible to a given input image. Current editing methods based on diffusion models usually require to provide a mask, making the task much easier by treating it as a conditional inpainting task. In contrast, our main contribution is able to automatically generate a mask highlighting regions of the input image that need to be edited, by contrasting predictions of a diffusion model conditioned on different text prompts. Moreover, we rely on latent inference to preserve content in those regions of interest and show excellent synergies with mask-based diffusion. DiffEdit achieves state-of-the-art editing performance on ImageNet. In addition, we evaluate semantic image editing in more challenging settings, using images from the COCO dataset as well as text-based generated images.*
The original codebase can be found at [Xiang-cd/DiffEdit-stable-diffusion](https://github.com/Xiang-cd/DiffEdit-stable-diffusion), and you can try it out in this [demo](https://blog.problemsolversguild.com/technical/research/2022/11/02/DiffEdit-Implementation.html).
This pipeline was contributed by [clarencechen](https://github.com/clarencechen). ❤️
## Tips
* The pipeline can generate masks that can be fed into other inpainting pipelines. Check out the code examples below to know more.
* In order to generate an image using this pipeline, both an image mask (manually specified or generated using `generate_mask`)
and a set of partially inverted latents (generated using `invert`) _must_ be provided as arguments when calling the pipeline to generate the final edited image.
Refer to the code examples below for more details.
* The function `generate_mask` exposes two prompt arguments, `source_prompt` and `target_prompt`,
that let you control the locations of the semantic edits in the final image to be generated. Let's say,
you wanted to translate from "cat" to "dog". In this case, the edit direction will be "cat -> dog". To reflect
this in the generated mask, you simply have to set the embeddings related to the phrases including "cat" to
`source_prompt_embeds` and "dog" to `target_prompt_embeds`. Refer to the code example below for more details.
* When generating partially inverted latents using `invert`, assign a caption or text embedding describing the
overall image to the `prompt` argument to help guide the inverse latent sampling process. In most cases, the
source concept is sufficiently descriptive to yield good results, but feel free to explore alternatives.
Please refer to [this code example](#generating-image-captions-for-inversion) for more details.
* When calling the pipeline to generate the final edited image, assign the source concept to `negative_prompt`
and the target concept to `prompt`. Taking the above example, you simply have to set the embeddings related to
the phrases including "cat" to `negative_prompt_embeds` and "dog" to `prompt_embeds`. Refer to the code example
below for more details.
* If you wanted to reverse the direction in the example above, i.e., "dog -> cat", then it's recommended to:
* Swap the `source_prompt` and `target_prompt` in the arguments to `generate_mask`.
* Change the input prompt for `invert` to include "dog".
* Swap the `prompt` and `negative_prompt` in the arguments to call the pipeline to generate the final edited image.
* Note that the source and target prompts, or their corresponding embeddings, can also be automatically generated. Please, refer to [this discussion](#generating-source-and-target-embeddings) for more details.
## Usage example
### Based on an input image with a caption
When the pipeline is conditioned on an input image, we first obtain partially inverted latents from the input image using a
`DDIMInverseScheduler` with the help of a caption. Then we generate an editing mask to identify relevant regions in the image using the source and target prompts. Finally,
the inverted noise and generated mask are used to start the generation process.
Now, generate the image with the inverted latents and semantically generated mask:
```py
image = pipeline(
prompt=target_prompt,
mask_image=mask_image,
image_latents=inv_latents,
generator=generator,
negative_prompt=source_prompt,
).images[0]
image.save("edited_image.png")
```
## Generating image captions for inversion
The authors originally used the source concept prompt as the caption for generating the partially inverted latents. However, we can also leverage open source and public image captioning models for the same purpose.
Below, we provide an end-to-end example with the [BLIP](https://huggingface.co/docs/transformers/model_doc/blip) model.
We encourage you to play around with the different parameters supported by the
`generate()` method ([documentation](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.generation_tf_utils.TFGenerationMixin.generate)) for the generation quality you are looking for.
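A short sketch of caption generation with BLIP (the input image URL is only an illustration - substitute the image you want to edit):
```py
from transformers import BlipForConditionalGeneration, BlipProcessor
from diffusers.utils import load_image

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base").to("cuda")

# Illustrative input image; use the image you intend to edit.
raw_image = load_image("http://images.cocodataset.org/val2017/000000039769.jpg")

inputs = processor(images=raw_image, return_tensors="pt").to("cuda")
out = model.generate(**inputs, max_new_tokens=50)
caption = processor.decode(out[0], skip_special_tokens=True)
print(caption)  # use this caption as the `prompt` argument for `invert`
```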
**4. Load the embedding model**:
Here, we need to use the same text encoder model used by the subsequent Stable Diffusion model.
```py
from diffusers import StableDiffusionDiffEditPipeline
```
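The rest of this example was truncated; a minimal sketch of the idea - reusing the tokenizer and text encoder from the DiffEdit pipeline itself, with `compute_embeddings` as a hypothetical helper and illustrative source/target phrases - could look like:
```py
import torch
from diffusers import DDIMInverseScheduler, DDIMScheduler, StableDiffusionDiffEditPipeline

# Load the DiffEdit pipeline; its tokenizer and text encoder double as the embedding model.
pipeline = StableDiffusionDiffEditPipeline.from_pretrained("stabilityai/stable-diffusion-2-1")
pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config)

@torch.no_grad()
def compute_embeddings(prompts):  # hypothetical helper for illustration
    tokens = pipeline.tokenizer(prompts, padding="max_length", truncation=True, return_tensors="pt")
    return pipeline.text_encoder(tokens.input_ids)[0]

# Embeddings usable as `source_prompt_embeds` / `target_prompt_embeds` in `generate_mask`.
source_embeds = compute_embeddings(["a bowl of fruits"])
target_embeds = compute_embeddings(["a basket of fruits"])
```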
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# DiT
[Scalable Diffusion Models with Transformers](https://huggingface.co/papers/2212.09748) (DiT) is by William Peebles and Saining Xie.
The abstract from the paper is:
*We explore a new class of diffusion models based on the transformer architecture. We train latent diffusion models of images, replacing the commonly-used U-Net backbone with a transformer that operates on latent patches. We analyze the scalability of our Diffusion Transformers (DiTs) through the lens of forward pass complexity as measured by Gflops. We find that DiTs with higher Gflops -- through increased transformer depth/width or increased number of input tokens -- consistently have lower FID. In addition to possessing good scalability properties, our largest DiT-XL/2 models outperform all prior diffusion models on the class-conditional ImageNet 512x512 and 256x256 benchmarks, achieving a state-of-the-art FID of 2.27 on the latter.*
The original codebase can be found at [facebookresearch/dit](https://github.com/facebookresearch/dit).
<Tip>
Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.
</Tip>
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Kandinsky
## Overview
Kandinsky inherits best practices from [DALL-E 2](https://huggingface.co/papers/2204.06125) and [Latent Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/latent_diffusion), while introducing some new ideas.
It uses [CLIP](https://huggingface.co/docs/transformers/model_doc/clip) for encoding images and text, and a diffusion image prior (mapping) between latent spaces of CLIP modalities. This approach enhances the visual performance of the model and unveils new horizons in blending images and text-guided image manipulation.
The Kandinsky model is created by [Arseniy Shakhmatov](https://github.com/cene555), [Anton Razzhigaev](https://github.com/razzant), [Aleksandr Nikolich](https://github.com/AlexWortega), [Igor Pavlov](https://github.com/boomb0om), [Andrey Kuznetsov](https://github.com/kuznetsoffandrey) and [Denis Dimitrov](https://github.com/denndimitrov). The original codebase can be found [here](https://github.com/ai-forever/Kandinsky-2).
## Usage example
In the following, we will walk you through some examples of how to use the Kandinsky pipelines to create some visually aesthetic artwork.
### Text-to-Image Generation
For text-to-image generation, we need to use both [`KandinskyPriorPipeline`] and [`KandinskyPipeline`].
The first step is to encode text prompts with CLIP and then diffuse the CLIP text embeddings to CLIP image embeddings,
as first proposed in [DALL-E 2](https://cdn.openai.com/papers/dall-e-2.pdf).
Let's throw a fun prompt at Kandinsky to see what it comes up with.
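A minimal two-step sketch (the prompt and sampling settings are illustrative; the checkpoints are the `kandinsky-community` 2.1 prior and decoder):
```py
import torch
from diffusers import KandinskyPriorPipeline, KandinskyPipeline

prompt = "A alien cheeseburger creature eating itself, claymation, cinematic, moody lighting"

# Step 1: diffuse the CLIP text embedding into a CLIP image embedding with the prior.
pipe_prior = KandinskyPriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16
).to("cuda")
image_embeds, negative_image_embeds = pipe_prior(prompt, guidance_scale=1.0).to_tuple()

# Step 2: decode the image embedding into an image with the text-to-image pipeline.
pipe = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16).to("cuda")
image = pipe(
    prompt,
    image_embeds=image_embeds,
    negative_image_embeds=negative_image_embeds,
    height=768,
    width=768,
    num_inference_steps=100,
).images[0]
image.save("cheeseburger_monster.png")
```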
We also provide an end-to-end Kandinsky pipeline [`KandinskyCombinedPipeline`], which combines both the prior pipeline and text-to-image pipeline, and lets you perform inference in a single step. You can create the combined pipeline with the [`~AutoPipelineForText2Image.from_pretrained`] method
Under the hood, it will automatically load both [`KandinskyPriorPipeline`] and [`KandinskyPipeline`]. To generate images, you no longer need to call both pipelines and pass the outputs from one to another. You only need to call the combined pipeline once. You can set different `guidance_scale` and `num_inference_steps` for the prior pipeline with the `prior_guidance_scale` and `prior_num_inference_steps` arguments.
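For example (a sketch; the prompt and guidance settings are illustrative):
```py
import torch
from diffusers import AutoPipelineForText2Image

# The combined pipeline is resolved automatically from the Kandinsky checkpoint.
pipe = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16)
pipe.enable_model_cpu_offload()

image = pipe(
    prompt="A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k",
    prior_guidance_scale=1.0,        # settings for the prior step
    prior_num_inference_steps=25,
    guidance_scale=4.0,              # settings for the decoding step
    num_inference_steps=100,
).images[0]
```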
The Kandinsky model works extremely well with creative prompts. Here is some of the amazing art that can be created using the exact same process but with different prompts.
```python
prompt = "bird eye view shot of a full body woman with cyan light orange magenta makeup, digital art, long braided hair her face separated by makeup in the style of yin Yang surrealism, symmetrical face, real image, contrasting tone, pastel gradient background"
```
The same Kandinsky model weights can be used for text-guided image-to-image translation. In this case, just make sure to load the weights using the [`KandinskyImg2ImgPipeline`] pipeline.
**Note**: You can also directly move the weights of the text-to-image pipelines to the image-to-image pipelines
without loading them twice by making use of the [`~DiffusionPipeline.components`] function as explained [here](#converting-between-different-pipelines).
🚨🚨🚨 __Breaking change for Kandinsky Mask Inpainting__ 🚨🚨🚨
We introduced a breaking change for the Kandinsky inpainting pipeline in the following pull request: https://github.com/huggingface/diffusers/pull/4207. Previously we accepted a mask format where black pixels represented the masked-out area. This was inconsistent with all other pipelines in diffusers. We have changed the mask format in Kandinsky and now use white pixels instead.
Please upgrade your inpainting code to follow the above. If you are using Kandinsky Inpaint in production, you now need to change the mask as follows:
```python
# For PIL input
import PIL.ImageOps

mask = PIL.ImageOps.invert(mask)

# For PyTorch and NumPy input
mask = 1 - mask
```
### Interpolate
The [`KandinskyPriorPipeline`] also comes with a cool utility function that will allow you to interpolate the latent space of different images and texts super easily. Here is an example of how you can create an Impressionist-style portrait for your pet based on "The Starry Night".
Note that you can interpolate between texts and images - in the below example, we pass a text prompt "a cat" and two images to the `interpolate` function, along with a `weights` variable containing the corresponding weights for each condition we interpolate.
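A sketch of the interpolation (the image URLs are assumptions pointing at the testing dataset used in the diffusers docs; substitute your own images):
```py
import torch
from diffusers import KandinskyPriorPipeline, KandinskyPipeline
from diffusers.utils import load_image

pipe_prior = KandinskyPriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16
).to("cuda")

# Illustrative conditions: one text prompt and two images, with a weight per condition.
img1 = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png")
img2 = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/starry_night.jpeg")
images_texts = ["a cat", img1, img2]
weights = [0.3, 0.3, 0.4]

out = pipe_prior.interpolate(images_texts, weights)

pipe = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16).to("cuda")
image = pipe(
    "",  # the prompt can stay empty; the interpolated embedding carries the conditioning
    image_embeds=out.image_embeds,
    negative_image_embeds=out.negative_image_embeds,
    height=768,
    width=768,
).images[0]
image.save("starry_cat.png")
```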
When compiling the model with `torch.compile` (PyTorch 2.0), you should see a very fast inference time after the first (compilation) run. For more information,
feel free to have a look at [our PyTorch 2.0 benchmark](https://huggingface.co/docs/diffusers/main/en/optimization/torch2.0).
<Tip>
To generate images directly from a single pipeline, you can use [`KandinskyCombinedPipeline`], [`KandinskyImg2ImgCombinedPipeline`], [`KandinskyInpaintCombinedPipeline`].
These combined pipelines wrap the [`KandinskyPriorPipeline`] together with the [`KandinskyPipeline`], [`KandinskyImg2ImgPipeline`], and [`KandinskyInpaintPipeline`], respectively, into a single end-to-end pipeline, so you don't have to run the prior step explicitly.
</Tip>
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Kandinsky 2.2
The Kandinsky 2.2 release includes robust new text-to-image models that support text-to-image generation, image-to-image generation, image interpolation, and text-guided image inpainting. The general workflow to perform these tasks using Kandinsky 2.2 is the same as in Kandinsky 2.1. First, you will need to use a prior pipeline to generate image embeddings based on your text prompt, and then use one of the image decoding pipelines to generate the output image. The only difference is that in Kandinsky 2.2, all of the decoding pipelines no longer accept the `prompt` input, and the image generation process is conditioned with only `image_embeds` and `negative_image_embeds`.
Same as with Kandinsky 2.1, the easiest way to perform text-to-image generation is to use the combined Kandinsky pipeline. This process is exactly the same as Kandinsky 2.1. All you need to do is to replace the Kandinsky 2.1 checkpoint with 2.2.
Now, let's look at an example where we take separate steps to run the prior pipeline and text-to-image pipeline. This way, we can understand what's happening under the hood and how Kandinsky 2.2 differs from Kandinsky 2.1.
First, let's create the prior pipeline and text-to-image pipeline with Kandinsky 2.2 checkpoints.
Now you can pass these embeddings to the text-to-image pipeline. When using Kandinsky 2.2 you don't need to pass the `prompt` (but you do with the previous version, Kandinsky 2.1).
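Putting those two steps together, a sketch could look like the following; the checkpoint names follow the kandinsky-community releases, and the prompts and parameter values are illustrative.
```python
import torch
from diffusers import KandinskyV22PriorPipeline, KandinskyV22Pipeline

pipe_prior = KandinskyV22PriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
).to("cuda")
pipe = KandinskyV22Pipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
).to("cuda")

prompt = "portrait of a young woman, blue eyes, cinematic"
negative_prompt = "low quality, bad quality"

# The prior turns the text prompt into image embeddings.
prior_out = pipe_prior(prompt, negative_prompt=negative_prompt, guidance_scale=1.0)

# The decoder is conditioned only on the embeddings -- no `prompt` argument.
image = pipe(
    image_embeds=prior_out.image_embeds,
    negative_image_embeds=prior_out.negative_image_embeds,
    height=768,
    width=768,
).images[0]
image.save("portrait.png")
```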
We used the text-to-image pipeline as an example, but the same process applies to all decoding pipelines in Kandinsky 2.2. For more information, please refer to our API section for each pipeline.
### Text-to-Image Generation with ControlNet Conditioning
In the following, we give a simple example of how to use [`KandinskyV22ControlnetPipeline`] to add control to the text-to-image generation with a depth image.
First, let's take an image and extract its depth map.
Now we can pass the image embeddings and the depth image we extracted to the controlnet pipeline. With Kandinsky 2.2, only prior pipelines accept `prompt` input. You do not need to pass the prompt to the controlnet pipeline.
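The sketch below covers both steps: it estimates a depth map with the `depth-estimation` pipeline from 🤗 Transformers, builds a `hint` tensor from it, and then conditions the ControlNet pipeline on the prior embeddings plus the hint. The input image URL is a placeholder, and the checkpoint names and hint preprocessing are assumptions to double-check against the API reference.
```python
import numpy as np
import torch
from transformers import pipeline
from diffusers import KandinskyV22PriorPipeline, KandinskyV22ControlnetPipeline
from diffusers.utils import load_image

# Placeholder URL -- replace with your own image.
img = load_image("https://path/to/cat.png").resize((768, 768))

# Estimate a depth map and turn it into a (1, 3, H, W) float tensor in [0, 1].
depth_estimator = pipeline("depth-estimation")
depth = np.array(depth_estimator(img)["depth"]).astype(np.float32) / 255.0
hint = torch.from_numpy(np.stack([depth] * 3, axis=0)).unsqueeze(0).half().to("cuda")

pipe_prior = KandinskyV22PriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
).to("cuda")
pipe = KandinskyV22ControlnetPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16
).to("cuda")

# Only the prior sees the prompt; the ControlNet pipeline takes embeddings and the hint.
prior_out = pipe_prior("A robot cat, 4k photo", negative_prompt="low quality, blurry")

image = pipe(
    image_embeds=prior_out.image_embeds,
    negative_image_embeds=prior_out.negative_image_embeds,
    hint=hint,
    height=768,
    width=768,
).images[0]
image.save("robot_cat.png")
```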
### Image-to-Image Generation with ControlNet Conditioning
Kandinsky 2.2 also includes a [`KandinskyV22ControlnetImg2ImgPipeline`] that will allow you to add control to the image generation process with both the image and its depth map. This pipeline works really well with [`KandinskyV22PriorEmb2EmbPipeline`], which generates image embeddings based on both a text prompt and an image.
For our robot cat example, we will pass the prompt and cat image together to the prior pipeline to generate an image embedding. We will then use that image embedding and the depth map of the cat to further control the image generation process.
We can use the same cat image and its depth map from the last example.
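A rough sketch of this workflow is shown below, reusing `img` and `hint` from the previous example and the same checkpoints; the `strength` values and the exact call signatures are assumptions that should be verified against the API reference for these pipelines.
```python
import torch
from diffusers import (
    KandinskyV22PriorEmb2EmbPipeline,
    KandinskyV22ControlnetImg2ImgPipeline,
)

pipe_prior = KandinskyV22PriorEmb2EmbPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
).to("cuda")
pipe = KandinskyV22ControlnetImg2ImgPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16
).to("cuda")

prompt = "A robot cat, 4k photo"

# The prior conditions on BOTH the text prompt and the cat image
# (`img` and `hint` come from the previous sketch).
prior_out = pipe_prior(prompt=prompt, image=img, strength=0.85)

image = pipe(
    image=img,          # starting image for img2img
    strength=0.5,       # how far to move away from the original image
    image_embeds=prior_out.image_embeds,
    negative_image_embeds=prior_out.negative_image_embeds,
    hint=hint,
    height=768,
    width=768,
).images[0]
```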
Here is the output. Compared with the output from our text-to-image ControlNet example, it preserves a lot more of the cat's facial details from the original image while working in the robot style we asked for.
After compilation, you should see a very fast inference time. For more information,
feel free to have a look at [our PyTorch 2.0 benchmark](https://huggingface.co/docs/diffusers/main/en/optimization/torch2.0).
<Tip>
To generate images directly from a single pipeline, you can use the [`KandinskyV22CombinedPipeline`], [`KandinskyV22Img2ImgCombinedPipeline`], and [`KandinskyV22InpaintCombinedPipeline`].
These combined pipelines wrap the [`KandinskyV22PriorPipeline`] together with the [`KandinskyV22Pipeline`], [`KandinskyV22Img2ImgPipeline`], and [`KandinskyV22InpaintPipeline`], respectively, into a single pipeline.
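As a quick sketch, text-to-image with the combined pipeline might look like this; we assume loading through [`AutoPipelineForText2Image`], which should resolve the decoder checkpoint to [`KandinskyV22CombinedPipeline`], and the prompt is illustrative.
```python
import torch
from diffusers import AutoPipelineForText2Image

# AutoPipelineForText2Image should map this checkpoint to KandinskyV22CombinedPipeline,
# which runs the prior and the decoder in one call.
pipe = AutoPipelineForText2Image.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="A lion in a suit of armor, digital art",
    negative_prompt="low quality",
    height=768,
    width=768,
).images[0]
```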
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Latent Diffusion
Latent Diffusion was proposed in [High-Resolution Image Synthesis with Latent Diffusion Models](https://huggingface.co/papers/2112.10752) by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer.
The abstract from the paper is:
*By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs.*
The original codebase can be found at [Compvis/latent-diffusion](https://github.com/CompVis/latent-diffusion).
<Tip>
Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Unconditional Latent Diffusion
Unconditional Latent Diffusion was proposed in [High-Resolution Image Synthesis with Latent Diffusion Models](https://huggingface.co/papers/2112.10752) by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer.
The abstract from the paper is:
*By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs.*
The original codebase can be found at [CompVis/latent-diffusion](https://github.com/CompVis/latent-diffusion).
<Tip>
Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Text-to-image model editing
[Editing Implicit Assumptions in Text-to-Image Diffusion Models](https://huggingface.co/papers/2303.08084) is by Hadas Orgad, Bahjat Kawar, and Yonatan Belinkov. This pipeline enables editing diffusion model weights, such that its assumptions of a given concept are changed. The resulting change is expected to take effect in all prompt generations related to the edited concept.
The abstract from the paper is:
*Text-to-image diffusion models often make implicit assumptions about the world when generating images. While some assumptions are useful (e.g., the sky is blue), they can also be outdated, incorrect, or reflective of social biases present in the training data. Thus, there is a need to control these assumptions without requiring explicit user input or costly re-training. In this work, we aim to edit a given implicit assumption in a pre-trained diffusion model. Our Text-to-Image Model Editing method, TIME for short, receives a pair of inputs: a "source" under-specified prompt for which the model makes an implicit assumption (e.g., "a pack of roses"), and a "destination" prompt that describes the same setting, but with a specified desired attribute (e.g., "a pack of blue roses"). TIME then updates the model's cross-attention layers, as these layers assign visual meaning to textual tokens. We edit the projection matrices in these layers such that the source prompt is projected close to the destination prompt. Our method is highly efficient, as it modifies a mere 2.2% of the model's parameters in under one second. To evaluate model editing approaches, we introduce TIMED (TIME Dataset), containing 147 source and destination prompt pairs from various domains. Our experiments (using Stable Diffusion) show that TIME is successful in model editing, generalizes well for related prompts unseen during editing, and imposes minimal effect on unrelated generations.*
You can find additional information about model editing on the [project page](https://time-diffusion.github.io/), [original codebase](https://github.com/bahjat-kawar/time-diffusion), and try it out in a [demo](https://huggingface.co/spaces/bahjat-kawar/time-diffusion).
<Tip>
Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Pipelines
Pipelines provide a simple way to run state-of-the-art diffusion models for inference by bundling all of the necessary components (multiple independently-trained models, schedulers, and processors) into a single end-to-end class. Pipelines are flexible and can be adapted to use a different scheduler or even different model components.
All pipelines are built from the base [`DiffusionPipeline`] class which provides basic functionality for loading, downloading, and saving all the components.
<Tip warning={true}>
Pipelines do not offer any training functionality. You'll notice PyTorch's autograd is disabled by decorating the [`~DiffusionPipeline.__call__`] method with a [`torch.no_grad`](https://pytorch.org/docs/stable/generated/torch.no_grad.html) decorator because pipelines should not be used for training. If you're interested in training, please take a look at the [Training](../training/overview) guides instead!
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# PaintByExample
[Paint by Example: Exemplar-based Image Editing with Diffusion Models](https://huggingface.co/papers/2211.13227) is by Binxin Yang, Shuyang Gu, Bo Zhang, Ting Zhang, Xuejin Chen, Xiaoyan Sun, Dong Chen, Fang Wen.
The abstract from the paper is:
*Language-guided image editing has achieved great success recently. In this paper, for the first time, we investigate exemplar-guided image editing for more precise control. We achieve this goal by leveraging self-supervised training to disentangle and re-organize the source image and the exemplar. However, the naive approach will cause obvious fusing artifacts. We carefully analyze it and propose an information bottleneck and strong augmentations to avoid the trivial solution of directly copying and pasting the exemplar image. Meanwhile, to ensure the controllability of the editing process, we design an arbitrary shape mask for the exemplar image and leverage the classifier-free guidance to increase the similarity to the exemplar image. The whole framework involves a single forward of the diffusion model without any iterative optimization. We demonstrate that our method achieves an impressive performance and enables controllable editing on in-the-wild images with high fidelity.*
The original codebase can be found at [Fantasy-Studio/Paint-by-Example](https://github.com/Fantasy-Studio/Paint-by-Example), and you can try it out in a [demo](https://huggingface.co/spaces/Fantasy-Studio/Paint-by-Example).
## Tips
PaintByExample is supported by the official [Fantasy-Studio/Paint-by-Example](https://huggingface.co/Fantasy-Studio/Paint-by-Example) checkpoint. The checkpoint is warm-started from [CompVis/stable-diffusion-v1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4) to inpaint partly masked images conditioned on example and reference images.
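A minimal sketch of how the checkpoint might be used is shown below; the image URLs are placeholders, and the pipeline expects an init image, a mask (white marking the region to repaint), and an example/reference image.
```python
import torch
from diffusers import PaintByExamplePipeline
from diffusers.utils import load_image

pipe = PaintByExamplePipeline.from_pretrained(
    "Fantasy-Studio/Paint-by-Example", torch_dtype=torch.float16
).to("cuda")

# Placeholder URLs -- replace with your own images.
init_image = load_image("https://path/to/scene.png").resize((512, 512))
mask_image = load_image("https://path/to/mask.png").resize((512, 512))        # white = area to repaint
example_image = load_image("https://path/to/example.png").resize((512, 512))  # reference object

image = pipe(
    image=init_image,
    mask_image=mask_image,
    example_image=example_image,
    num_inference_steps=50,
).images[0]
```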
<Tip>
Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# MultiDiffusion
[MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation](https://huggingface.co/papers/2302.08113) is by Omer Bar-Tal, Lior Yariv, Yaron Lipman, and Tali Dekel.
The abstract from the paper is:
*Recent advances in text-to-image generation with diffusion models present transformative capabilities in image quality. However, user controllability of the generated image, and fast adaptation to new tasks still remains an open challenge, currently mostly addressed by costly and long re-training and fine-tuning or ad-hoc adaptations to specific image generation tasks. In this work, we present MultiDiffusion, a unified framework that enables versatile and controllable image generation, using a pre-trained text-to-image diffusion model, without any further training or finetuning. At the center of our approach is a new generation process, based on an optimization task that binds together multiple diffusion generation processes with a shared set of parameters or constraints. We show that MultiDiffusion can be readily applied to generate high quality and diverse images that adhere to user-provided controls, such as desired aspect ratio (e.g., panorama), and spatial guiding signals, ranging from tight segmentation masks to bounding boxes.*
You can find additional information about MultiDiffusion on the [project page](https://multidiffusion.github.io/), [original codebase](https://github.com/omerbt/MultiDiffusion), and try it out in a [demo](https://huggingface.co/spaces/weizmannscience/MultiDiffusion).
## Tips
While calling [`StableDiffusionPanoramaPipeline`], it's possible to set the `view_batch_size` parameter to a value > 1.
On GPUs with high performance, this can speed up the generation process and increase VRAM usage.
To generate panorama-like images, make sure you pass the `width` parameter accordingly. We recommend a width of 2048, which is the default.
Circular padding is applied when working with panoramas to avoid stitching artifacts and to ensure
a seamless transition from the rightmost part of the image to the leftmost part. By enabling circular
padding (set `circular_padding=True`), the operation applies additional crops after the rightmost
point of the image, allowing the model to "see" the transition from the rightmost part to the
leftmost part. This helps maintain visual consistency in a 360-degree sense and creates a proper
"panorama" that can be viewed using 360-degree panorama viewers. When decoding latents in Stable
Diffusion, circular padding is applied to ensure that the decoded latents match in RGB space.
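A minimal sketch of panorama generation with circular padding enabled might look like this; the base checkpoint, scheduler choice, and prompt are assumptions.
```python
import torch
from diffusers import StableDiffusionPanoramaPipeline, DDIMScheduler

model_id = "stabilityai/stable-diffusion-2-base"
scheduler = DDIMScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPanoramaPipeline.from_pretrained(
    model_id, scheduler=scheduler, torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "a photo of the dolomites",
    width=2048,              # wide output for a panorama
    height=512,
    view_batch_size=4,       # process several views per forward pass
    circular_padding=True,   # wrap around so the right edge blends into the left edge
).images[0]
image.save("dolomites_panorama.png")
```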
For example, without circular padding, there is a stitching artifact (default):
Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.
<!--Copyright 2023 ParaDiGMS authors and The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Parallel Sampling of Diffusion Models
[Parallel Sampling of Diffusion Models](https://huggingface.co/papers/2305.16317) is by Andy Shih, Suneel Belkhale, Stefano Ermon, Dorsa Sadigh, Nima Anari.
The abstract from the paper is:
*Diffusion models are powerful generative models but suffer from slow sampling, often taking 1000 sequential denoising steps for one sample. As a result, considerable efforts have been directed toward reducing the number of denoising steps, but these methods hurt sample quality. Instead of reducing the number of denoising steps (trading quality for speed), in this paper we explore an orthogonal approach: can we run the denoising steps in parallel (trading compute for speed)? In spite of the sequential nature of the denoising steps, we show that surprisingly it is possible to parallelize sampling via Picard iterations, by guessing the solution of future denoising steps and iteratively refining until convergence. With this insight, we present ParaDiGMS, a novel method to accelerate the sampling of pretrained diffusion models by denoising multiple steps in parallel. ParaDiGMS is the first diffusion sampling method that enables trading compute for speed and is even compatible with existing fast sampling techniques such as DDIM and DPMSolver. Using ParaDiGMS, we improve sampling speed by 2-4x across a range of robotics and image generation models, giving state-of-the-art sampling speeds of 0.2s on 100-step DiffusionPolicy and 16s on 1000-step StableDiffusion-v2 with no measurable degradation of task reward, FID score, or CLIP score.*
The original codebase can be found at [AndyShih12/paradigms](https://github.com/AndyShih12/paradigms), and the pipeline was contributed by [AndyShih12](https://github.com/AndyShih12). ❤️
## Tips
This pipeline improves sampling speed by running denoising steps in parallel, at the cost of increased total FLOPs.
Therefore, it is best to call this pipeline when running on multiple GPUs. Otherwise, without enough GPU bandwidth,
sampling may be even slower than sequential sampling.
The two parameters to play with are `parallel` (batch size) and `tolerance`.
- If it fits in memory, for a 1000-step DDPM you can aim for a batch size of around 100
(for example, 8 GPUs and `batch_per_device=12` to get `parallel=96`). A higher batch size
may not fit in memory, and lower batch size gives less parallelism.
- For tolerance, using a higher tolerance may get better speedups but can risk sample quality degradation.
If there is quality degradation with the default tolerance, then use a lower tolerance like `0.001`.
For a 1000-step DDPM on 8 A100 GPUs, you can expect around a 3x speedup from [`StableDiffusionParadigmsPipeline`] compared to the [`StableDiffusionPipeline`]
by setting `parallel=80` and `tolerance=0.1`.
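As a rough sketch of how those arguments are passed (the parallel scheduler class and the values below are assumptions to verify against the API reference):
```python
import torch
from diffusers import StableDiffusionParadigmsPipeline, DDPMParallelScheduler

model_id = "runwayml/stable-diffusion-v1-5"

# ParaDiGMS needs a scheduler that supports batched (parallel) timesteps.
scheduler = DDPMParallelScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionParadigmsPipeline.from_pretrained(
    model_id, scheduler=scheduler, torch_dtype=torch.float16
).to("cuda")

# On a multi-GPU node you would additionally spread the UNet across devices
# (e.g. with torch.nn.DataParallel) so the parallel batch is split between GPUs.
image = pipe(
    "a photo of an astronaut riding a horse on mars",
    parallel=8,              # batch of denoising steps solved per iteration
    tolerance=0.1,           # lower this (e.g. 0.001) if you see quality degradation
    num_inference_steps=1000,
).images[0]
```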
🤗 Diffusers offers [distributed inference support](../training/distributed_inference) for generating multiple prompts
in parallel on multiple GPUs. In contrast, [`StableDiffusionParadigmsPipeline`] is designed to speed up the sampling of a single prompt by using multiple GPUs.
<Tip>
Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# InstructPix2Pix
[InstructPix2Pix: Learning to Follow Image Editing Instructions](https://huggingface.co/papers/2211.09800) is by Tim Brooks, Aleksander Holynski and Alexei A. Efros.
The abstract from the paper is:
*We propose a method for editing images from human instructions: given an input image and a written instruction that tells the model what to do, our model follows these instructions to edit the image. To obtain training data for this problem, we combine the knowledge of two large pretrained models -- a language model (GPT-3) and a text-to-image model (Stable Diffusion) -- to generate a large dataset of image editing examples. Our conditional diffusion model, InstructPix2Pix, is trained on our generated data, and generalizes to real images and user-written instructions at inference time. Since it performs edits in the forward pass and does not require per example fine-tuning or inversion, our model edits images quickly, in a matter of seconds. We show compelling editing results for a diverse collection of input images and written instructions.*
You can find additional information about InstructPix2Pix on the [project page](https://www.timothybrooks.com/instruct-pix2pix), [original codebase](https://github.com/timothybrooks/instruct-pix2pix), and try it out in a [demo](https://huggingface.co/spaces/timbrooks/instruct-pix2pix).
<Tip>
Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Pix2Pix Zero
[Zero-shot Image-to-Image Translation](https://huggingface.co/papers/2302.03027) is by Gaurav Parmar, Krishna Kumar Singh, Richard Zhang, Yijun Li, Jingwan Lu, and Jun-Yan Zhu.
The abstract from the paper is:
*Large-scale text-to-image generative models have shown their remarkable ability to synthesize diverse and high-quality images. However, it is still challenging to directly apply these models for editing real images for two reasons. First, it is hard for users to come up with a perfect text prompt that accurately describes every visual detail in the input image. Second, while existing models can introduce desirable changes in certain regions, they often dramatically alter the input content and introduce unexpected changes in unwanted regions. In this work, we propose pix2pix-zero, an image-to-image translation method that can preserve the content of the original image without manual prompting. We first automatically discover editing directions that reflect desired edits in the text embedding space. To preserve the general content structure after editing, we further propose cross-attention guidance, which aims to retain the cross-attention maps of the input image throughout the diffusion process. In addition, our method does not need additional training for these edits and can directly use the existing pre-trained text-to-image diffusion model. We conduct extensive experiments and show that our method outperforms existing and concurrent works for both real and synthetic image editing.*
You can find additional information about Pix2Pix Zero on the [project page](https://pix2pixzero.github.io/), [original codebase](https://github.com/pix2pixzero/pix2pix-zero), and try it out in a [demo](https://huggingface.co/spaces/pix2pix-zero-library/pix2pix-zero-demo).
## Tips
* The pipeline can be conditioned on real input images. Check out the code examples below to learn more.
* The pipeline exposes two arguments, namely `source_embeds` and `target_embeds`,
that let you control the direction of the semantic edits in the final image to be generated. Let's say
you want to translate from "cat" to "dog". In this case, the edit direction will be "cat -> dog". To reflect
this in the pipeline, you simply have to set the embeddings related to the phrases including "cat" to
`source_embeds` and "dog" to `target_embeds`. Refer to the code example below for more details.
* When you're using this pipeline from a prompt, specify the _source_ concept in the prompt. Taking
the above example, a valid input prompt would be: "a high resolution painting of a **cat** in the style of van Gogh".
* If you wanted to reverse the direction in the example above, i.e., "dog -> cat", then it's recommended to:
* Swap the `source_embeds` and `target_embeds`.
* Change the input prompt to include "dog".
* To learn more about how the source and target embeddings are generated, refer to the [original
paper](https://arxiv.org/abs/2302.03027). Below, we also provide some directions on how to generate the embeddings.
* Note that the quality of the outputs generated with this pipeline is dependent on how good the `source_embeds` and `target_embeds` are. Please, refer to [this discussion](#generating-source-and-target-embeddings) for some suggestions on the topic.
We encourage you to play around with the different parameters supported by the
`generate()` method ([documentation](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.generation_tf_utils.TFGenerationMixin.generate)) to get the generation quality you are looking for.
**4. Load the embedding model**:
Here, we need to use the same text encoder model used by the subsequent Stable Diffusion model.
```py
from diffusers import StableDiffusionPix2PixZeroPipeline
```
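Loading the pipeline gives us access to the same `tokenizer` and `text_encoder` that the Stable Diffusion model uses. The sketch below shows one way to turn a handful of source/target captions into sentence embeddings by averaging the CLIP text-encoder outputs; the caption lists and the averaging scheme are illustrative assumptions.
```python
import torch
from diffusers import StableDiffusionPix2PixZeroPipeline

pipeline = StableDiffusionPix2PixZeroPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
tokenizer, text_encoder = pipeline.tokenizer, pipeline.text_encoder

@torch.no_grad()
def embed_captions(captions):
    """Encode a list of captions with the pipeline's CLIP text encoder and average them."""
    embeddings = []
    for caption in captions:
        inputs = tokenizer(
            caption,
            padding="max_length",
            max_length=tokenizer.model_max_length,
            truncation=True,
            return_tensors="pt",
        ).to("cuda")
        embeddings.append(text_encoder(inputs.input_ids)[0])
    return torch.cat(embeddings).mean(dim=0, keepdim=True)

# Illustrative captions -- in practice these would come from the `generate()` step above.
source_embeds = embed_captions(["a photo of a cat", "a cat sitting on a sofa"])
target_embeds = embed_captions(["a photo of a dog", "a dog sitting on a sofa"])
```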
And you're done! [Here](https://colab.research.google.com/drive/1tz2C1EdfZYAPlzXXbTnf-5PRBiR8_R1F?usp=sharing) is a Colab Notebook that you can use to interact with the entire process.
Now, you can use these embeddings directly while calling the pipeline:
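A hedged sketch of that call, reusing `pipeline`, `source_embeds`, and `target_embeds` from above (the prompt and step count are illustrative):
```python
prompt = "a high resolution painting of a cat in the style of van Gogh"

image = pipeline(
    prompt,
    source_embeds=source_embeds,   # embeddings for the "cat" (source) concept
    target_embeds=target_embeds,   # embeddings for the "dog" (target) concept
    num_inference_steps=50,
).images[0]
image.save("edited_image.png")
```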
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# PNDM
[Pseudo Numerical methods for Diffusion Models on manifolds](https://huggingface.co/papers/2202.09778) (PNDM) is by Luping Liu, Yi Ren, Zhijie Lin and Zhou Zhao.
The abstract from the paper is:
*Denoising Diffusion Probabilistic Models (DDPMs) can generate high-quality samples such as image and audio samples. However, DDPMs require hundreds to thousands of iterations to produce final samples. Several prior works have successfully accelerated DDPMs through adjusting the variance schedule (e.g., Improved Denoising Diffusion Probabilistic Models) or the denoising equation (e.g., Denoising Diffusion Implicit Models (DDIMs)). However, these acceleration methods cannot maintain the quality of samples and even introduce new noise at a high speedup rate, which limit their practicability. To accelerate the inference process while keeping the sample quality, we provide a fresh perspective that DDPMs should be treated as solving differential equations on manifolds. Under such a perspective, we propose pseudo numerical methods for diffusion models (PNDMs). Specifically, we figure out how to solve differential equations on manifolds and show that DDIMs are simple cases of pseudo numerical methods. We change several classical numerical methods to corresponding pseudo numerical methods and find that the pseudo linear multi-step method is the best in most situations. According to our experiments, by directly using pre-trained models on Cifar10, CelebA and LSUN, PNDMs can generate higher quality synthetic images with only 50 steps compared with 1000-step DDIMs (20x speedup), significantly outperform DDIMs with 250 steps (by around 0.4 in FID) and have good generalization on different variance schedules.*
The original codebase can be found at [luping-liu/PNDM](https://github.com/luping-liu/PNDM).
<Tip>
Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# RePaint
[RePaint: Inpainting using Denoising Diffusion Probabilistic Models](https://huggingface.co/papers/2201.09865) is by Andreas Lugmayr, Martin Danelljan, Andres Romero, Fisher Yu, Radu Timofte, Luc Van Gool.
The abstract from the paper is:
*Free-form inpainting is the task of adding new content to an image in the regions specified by an arbitrary binary mask. Most existing approaches train for a certain distribution of masks, which limits their generalization capabilities to unseen mask types. Furthermore, training with pixel-wise and perceptual losses often leads to simple textural extensions towards the missing areas instead of semantically meaningful generation. In this work, we propose RePaint: A Denoising Diffusion Probabilistic Model (DDPM) based inpainting approach that is applicable to even extreme masks. We employ a pretrained unconditional DDPM as the generative prior. To condition the generation process, we only alter the reverse diffusion iterations by sampling the unmasked regions using the given image information. Since this technique does not modify or condition the original DDPM network itself, the model produces high-quality and diverse output images for any inpainting form. We validate our method for both faces and general-purpose image inpainting using standard and extreme masks.
RePaint outperforms state-of-the-art Autoregressive, and GAN approaches for at least five out of six mask distributions.*
The original codebase can be found at [andreas128/RePaint](https://github.com/andreas128/RePaint).
<Tip>
Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Score SDE VE
[Score-Based Generative Modeling through Stochastic Differential Equations](https://huggingface.co/papers/2011.13456) (Score SDE) is by Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon and Ben Poole. This pipeline implements the variance expanding (VE) variant of the stochastic differential equation method.
The abstract from the paper is:
*Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (\aka, score) of the perturbed data distribution. By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples. We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE. We also derive an equivalent neural ODE that samples from the same distribution as the SDE, but additionally enables exact likelihood computation, and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model.*
The original codebase can be found at [yang-song/score_sde_pytorch](https://github.com/yang-song/score_sde_pytorch).
<Tip>
Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Self-Attention Guidance
[Improving Sample Quality of Diffusion Models Using Self-Attention Guidance](https://huggingface.co/papers/2210.00939) is by Susung Hong et al.
The abstract from the paper is:
*Denoising diffusion models (DDMs) have attracted attention for their exceptional generation quality and diversity. This success is largely attributed to the use of class- or text-conditional diffusion guidance methods, such as classifier and classifier-free guidance. In this paper, we present a more comprehensive perspective that goes beyond the traditional guidance methods. From this generalized perspective, we introduce novel condition- and training-free strategies to enhance the quality of generated images. As a simple solution, blur guidance improves the suitability of intermediate samples for their fine-scale information and structures, enabling diffusion models to generate higher quality samples with a moderate guidance scale. Improving upon this, Self-Attention Guidance (SAG) uses the intermediate self-attention maps of diffusion models to enhance their stability and efficacy. Specifically, SAG adversarially blurs only the regions that diffusion models attend to at each iteration and guides them accordingly. Our experimental results show that our SAG improves the performance of various diffusion models, including ADM, IDDPM, Stable Diffusion, and DiT. Moreover, combining SAG with conventional guidance methods leads to further improvement.*
You can find additional information about Self-Attention Guidance on the [project page](https://ku-cvlab.github.io/Self-Attention-Guidance), [original codebase](https://github.com/KU-CVLAB/Self-Attention-Guidance), and try it out in a [demo](https://huggingface.co/spaces/susunghong/Self-Attention-Guidance) or [notebook](https://colab.research.google.com/github/SusungHong/Self-Attention-Guidance/blob/main/SAG_Stable.ipynb).
<Tip>
Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Semantic Guidance
Semantic Guidance for Diffusion Models was proposed in [SEGA: Instructing Diffusion using Semantic Dimensions](https://huggingface.co/papers/2301.12247) and provides strong semantic control over image generation.
Small changes to the text prompt usually result in entirely different output images. However, with SEGA a variety of changes to the image are enabled that can be controlled easily and intuitively, while staying true to the original image composition.
The abstract from the paper is:
*Text-to-image diffusion models have recently received a lot of interest for their astonishing ability to produce high-fidelity images from text only. However, achieving one-shot generation that aligns with the user's intent is nearly impossible, yet small changes to the input prompt often result in very different images. This leaves the user with little semantic control. To put the user in control, we show how to interact with the diffusion process to flexibly steer it along semantic directions. This semantic guidance (SEGA) allows for subtle and extensive edits, changes in composition and style, as well as optimizing the overall artistic conception. We demonstrate SEGA's effectiveness on a variety of tasks and provide evidence for its versatility and flexibility.*
<Tip>
Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Shap-E
The Shap-E model was proposed in [Shap-E: Generating Conditional 3D Implicit Functions](https://huggingface.co/papers/2305.02463) by Alex Nichol and Heewon Jun from [OpenAI](https://github.com/openai).
The abstract from the paper is:
*We present Shap-E, a conditional generative model for 3D assets. Unlike recent work on 3D generative models which produce a single output representation, Shap-E directly generates the parameters of implicit functions that can be rendered as both textured meshes and neural radiance fields. We train Shap-E in two stages: first, we train an encoder that deterministically maps 3D assets into the parameters of an implicit function; second, we train a conditional diffusion model on outputs of the encoder. When trained on a large dataset of paired 3D and text data, our resulting models are capable of generating complex and diverse 3D assets in a matter of seconds. When compared to Point-E, an explicit generative model over point clouds, Shap-E converges faster and reaches comparable or better sample quality despite modeling a higher-dimensional, multi-representation output space.*
The original codebase can be found at [openai/shap-e](https://github.com/openai/shap-e).
<Tip>
Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.
</Tip>
## Usage Examples
In the following, we will walk you through some examples of how to use Shap-E pipelines to create 3D objects in gif format.
### Text-to-3D image generation
We can use the [`ShapEPipeline`] to create a 3D object based on a text prompt. In this example, we will make a birthday cupcake for the 🧨 diffusers library's 1-year birthday. The workflow for the Shap-E text-to-image pipeline is the same as for other text-to-image pipelines in diffusers.
The output of [`ShapEPipeline`] is a list of lists of image frames. Each list of frames can be used to create a 3D object. Let's use the `export_to_gif` utility function in diffusers to make a 3D cupcake!
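A minimal sketch (the guidance scale, step count, and frame size are illustrative values):
```python
import torch
from diffusers import ShapEPipeline
from diffusers.utils import export_to_gif

pipe = ShapEPipeline.from_pretrained("openai/shap-e", torch_dtype=torch.float16).to("cuda")

images = pipe(
    "a birthday cupcake",
    guidance_scale=15.0,
    num_inference_steps=64,
    frame_size=256,            # resolution of each rendered frame
).images

# `images` is a list (one entry per prompt) of lists of PIL frames.
export_to_gif(images[0], "cake_3d.gif")
```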
For both [`ShapEPipeline`] and [`ShapEImg2ImgPipeline`], you can generate mesh output by passing `output_type` as `mesh` to the pipeline, and then use the [`ShapEPipeline.export_to_ply`] utility function to save the output as a `ply` file. We also provide a [`ShapEPipeline.export_to_obj`] function that you can use to save mesh outputs as `obj` files.
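For example, a sketch of the mesh path is shown below; it imports `export_to_ply` from `diffusers.utils`, which is where the utility lives in recent releases, so adjust the import if your version differs.
```python
import torch
from diffusers import ShapEPipeline
from diffusers.utils import export_to_ply

pipe = ShapEPipeline.from_pretrained("openai/shap-e", torch_dtype=torch.float16).to("cuda")

mesh = pipe(
    "a birthday cupcake",
    guidance_scale=15.0,
    num_inference_steps=64,
    frame_size=256,
    output_type="mesh",        # return a mesh instead of rendered frames
).images[0]

export_to_ply(mesh, "3d_cake.ply")
```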
Hugging Face Datasets supports mesh visualization for mesh files in the `glb` format. Below, we will show you how to convert your mesh file into `glb` format so that you can use the Dataset viewer to render 3D objects.
First, we need to install the `trimesh` library:
```
pip install trimesh
```
To convert the mesh file into `glb` format:
```python
import trimesh

mesh = trimesh.load("3d_cake.ply")
mesh.export("3d_cake.glb", file_type="glb")
```
By default, the mesh output of Shap-E is rendered from the bottom viewpoint; you can change the default viewpoint by applying a rotation transform to the mesh before exporting it.
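For instance, using `trimesh`'s transformation utilities (the rotation angle and axis below are just an example to flip the view upright):
```python
import numpy as np
import trimesh

mesh = trimesh.load("3d_cake.ply")

# Rotate the mesh 90 degrees around the x-axis so it is no longer viewed from the bottom.
rotation = trimesh.transformations.rotation_matrix(-np.pi / 2, [1, 0, 0])
mesh.apply_transform(rotation)

mesh.export("3d_cake_rotated.glb", file_type="glb")
```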
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Spectrogram Diffusion
[Spectrogram Diffusion](https://huggingface.co/papers/2206.05408) is by Curtis Hawthorne, Ian Simon, Adam Roberts, Neil Zeghidour, Josh Gardner, Ethan Manilow, and Jesse Engel.
*An ideal music synthesizer should be both interactive and expressive, generating high-fidelity audio in realtime for arbitrary combinations of instruments and notes. Recent neural synthesizers have exhibited a tradeoff between domain-specific models that offer detailed control of only specific instruments, or raw waveform models that can train on any music but with minimal control and slow generation. In this work, we focus on a middle ground of neural synthesizers that can generate audio from MIDI sequences with arbitrary combinations of instruments in realtime. This enables training on a wide range of transcription datasets with a single model, which in turn offers note-level control of composition and instrumentation across a wide range of instruments. We use a simple two-stage process: MIDI to spectrograms with an encoder-decoder Transformer, then spectrograms to audio with a generative adversarial network (GAN) spectrogram inverter. We compare training the decoder as an autoregressive model and as a Denoising Diffusion Probabilistic Model (DDPM) and find that the DDPM approach is superior both qualitatively and as measured by audio reconstruction and Fréchet distance metrics. Given the interactivity and generality of this approach, we find this to be a promising first step towards interactive and expressive neural synthesis for arbitrary combinations of instruments and notes.*
The original codebase can be found at [magenta/music-spectrogram-diffusion](https://github.com/magenta/music-spectrogram-diffusion).
As depicted above, the model takes a MIDI file as input and tokenizes it into a sequence of 5-second intervals. Each tokenized interval, together with positional encodings, is passed through the Note Encoder, and its representation is concatenated with the previous window's generated spectrogram representation obtained via the Context Encoder. For the initial 5-second window, this is set to zero. The resulting context is then used as conditioning to sample the denoised spectrogram for the current MIDI window; this spectrogram is concatenated to the final output and also serves as the context for the next MIDI window. The process repeats until all MIDI inputs have been processed. Finally, a MelGAN decoder converts the potentially long spectrogram into audio, which is the final output of this pipeline.
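A rough usage sketch is shown below; it assumes the `MidiProcessor` helper and the `google/music-spectrogram-diffusion` checkpoint from the original integration, the MIDI file path is a placeholder, and the `note_seq` dependency must be installed for MIDI parsing.
```python
from diffusers import SpectrogramDiffusionPipeline, MidiProcessor

# Requires `pip install note_seq` for MIDI parsing.
pipe = SpectrogramDiffusionPipeline.from_pretrained("google/music-spectrogram-diffusion").to("cuda")
processor = MidiProcessor()

# Placeholder path to a MIDI file -- replace with your own.
output = pipe(processor("path/to/your_song.mid"))
audio = output.audios[0]
```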
<Tip>
Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Text-to-Image Generation with Adapter Conditioning
## Overview
[T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models](https://arxiv.org/abs/2302.08453) by Chong Mou, Xintao Wang, Liangbin Xie, Jian Zhang, Zhongang Qi, Ying Shan, Xiaohu Qie.
Using the pretrained models we can provide control images (for example, a depth map) to control Stable Diffusion text-to-image generation so that it follows the structure of the depth image and fills in the details.
The abstract of the paper is the following:
*The incredible generative ability of large-scale text-to-image (T2I) models has demonstrated strong power of learning complex structures and meaningful semantics. However, relying solely on text prompts cannot fully take advantage of the knowledge learned by the model, especially when flexible and accurate structure control is needed. In this paper, we aim to ``dig out" the capabilities that T2I models have implicitly learned, and then explicitly use them to control the generation more granularly. Specifically, we propose to learn simple and small T2I-Adapters to align internal knowledge in T2I models with external control signals, while freezing the original large T2I models. In this way, we can train various adapters according to different conditions, and achieve rich control and editing effects. Further, the proposed T2I-Adapters have attractive properties of practical value, such as composability and generalization ability. Extensive experiments demonstrate that our T2I-Adapter has promising generation quality and a wide range of applications.*
This model was contributed by the community contributor [HimariO](https://github.com/HimariO) ❤️ .
## Available Pipelines:
| Pipeline | Tasks | Demo
|---|---|:---:|
| [StableDiffusionAdapterPipeline](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_adapter.py) | *Text-to-Image Generation with T2I-Adapter Conditioning* | -
## Usage example
In the following, we give a simple example of how to use a *T2IAdapter* checkpoint with Diffusers for inference.
All adapters use the same pipeline.
1. Images are first converted into the appropriate *control image* format.
2. The *control image* and *prompt* are passed to the [`StableDiffusionAdapterPipeline`].
Let's have a look at a simple example using the [Color Adapter](https://huggingface.co/TencentARC/t2iadapter_color_sd14v1).
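Here is a sketch of that example; it reuses the color control image from the table below together with the `t2iadapter_color_sd14v1` checkpoint, and the prompt and palette preprocessing are illustrative.
```python
import torch
from PIL import Image
from diffusers import StableDiffusionAdapterPipeline, T2IAdapter
from diffusers.utils import load_image

# Use the color control image from the table below, downscaled to an 8x8 palette
# and upscaled back with nearest-neighbour so each palette cell is a flat block.
image = load_image(
    "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_input.png"
)
color_palette = image.resize((8, 8)).resize((512, 512), resample=Image.NEAREST)

adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_color_sd14v1", torch_dtype=torch.float16)
pipe = StableDiffusionAdapterPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", adapter=adapter, torch_dtype=torch.float16
).to("cuda")

out_image = pipe(
    "At night, glowing cubes in front of the beach",
    image=color_palette,
).images[0]
out_image.save("color_adapter_result.png")
```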
Non-diffusers checkpoints can be found under [TencentARC/T2I-Adapter](https://huggingface.co/TencentARC/T2I-Adapter/tree/main/models).
### T2I-Adapter with Stable Diffusion 1.4
| Model Name | Control Image Overview| Control Image Example | Generated Image Example |
|---|---|---|---|
|[TencentARC/t2iadapter_color_sd14v1](https://huggingface.co/TencentARC/t2iadapter_color_sd14v1)<br/> *Trained with spatial color palette* | An image with an 8x8 color palette.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_output.png"/></a>|
|[TencentARC/t2iadapter_canny_sd14v1](https://huggingface.co/TencentARC/t2iadapter_canny_sd14v1)<br/> *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_output.png"/></a>|
|[TencentARC/t2iadapter_sketch_sd14v1](https://huggingface.co/TencentARC/t2iadapter_sketch_sd14v1)<br/> *Trained with [PidiNet](https://github.com/zhuoinoulu/pidinet) edge detection* | A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_output.png"/></a>|
|[TencentARC/t2iadapter_depth_sd14v1](https://huggingface.co/TencentARC/t2iadapter_depth_sd14v1)<br/> *Trained with Midas depth estimation* | A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_output.png"/></a>|
|[TencentARC/t2iadapter_openpose_sd14v1](https://huggingface.co/TencentARC/t2iadapter_openpose_sd14v1)<br/> *Trained with OpenPose bone image* | An [OpenPose bone](https://github.com/CMU-Perceptual-Computing-Lab/openpose) image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_output.png"/></a>|
|[TencentARC/t2iadapter_keypose_sd14v1](https://huggingface.co/TencentARC/t2iadapter_keypose_sd14v1)<br/> *Trained with mmpose skeleton image* | A [mmpose skeleton](https://github.com/open-mmlab/mmpose) image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_output.png"/></a>|
|[TencentARC/t2iadapter_seg_sd14v1](https://huggingface.co/TencentARC/t2iadapter_seg_sd14v1)<br/>*Trained with semantic segmentation* | A [custom](https://github.com/TencentARC/T2I-Adapter/discussions/25) segmentation protocol image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_output.png"/></a> |
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Depth-to-image
The Stable Diffusion model can also infer depth based on an image using [MiDaS](https://github.com/isl-org/MiDaS). This allows you to pass a text prompt and an initial image to condition the generation of new images, as well as a `depth_map` to preserve the image structure.
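A minimal sketch of the workflow is shown below. The depth-conditioned checkpoint, input image, and call arguments are only illustrative; if no `depth_map` is passed, the pipeline estimates one internally.

```python
import torch
from diffusers import StableDiffusionDepth2ImgPipeline
from diffusers.utils import load_image

# `stabilityai/stable-diffusion-2-depth` is one publicly available depth-conditioned checkpoint.
pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth",
    torch_dtype=torch.float16,
).to("cuda")

# Example input image; replace with your own.
init_image = load_image("http://images.cocodataset.org/val2017/000000039769.jpg")

# Without an explicit `depth_map`, a depth map is predicted from `init_image`.
image = pipe(
    prompt="two tigers",
    image=init_image,
    negative_prompt="bad, deformed, ugly, bad anatomy",
    strength=0.7,
).images[0]
image.save("depth2img.png")
```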
<Tip>
Make sure to check out the Stable Diffusion [Tips](overview#tips) section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently!
If you're interested in using one of the official checkpoints for a task, explore the [CompVis](https://huggingface.co/CompVis), [Runway](https://huggingface.co/runwayml), and [Stability AI](https://huggingface.co/stabilityai) Hub organizations!
</Tip>
<!--Copyright 2023 The GLIGEN Authors and The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# GLIGEN (Grounded Language-to-Image Generation)
The GLIGEN model was created by researchers and engineers from [University of Wisconsin-Madison, Columbia University, and Microsoft](https://github.com/gligen/GLIGEN). The [`StableDiffusionGLIGENPipeline`] can generate photorealistic images conditioned on grounding inputs. If input images are given along with text and bounding boxes, the pipeline inserts the objects described by the text into the regions defined by the bounding boxes; otherwise, it generates an image described by the caption/prompt and inserts the described objects into those regions. The model was trained on the COCO2014D and COCO2014CD datasets and uses a frozen CLIP ViT-L/14 text encoder to condition itself on grounding inputs.
The abstract from the [paper](https://huggingface.co/papers/2301.07093) is:
*Large-scale text-to-image diffusion models have made amazing advances. However, the status quo is to use text input alone, which can impede controllability. In this work, we propose GLIGEN, Grounded-Language-to-Image Generation, a novel approach that builds upon and extends the functionality of existing pre-trained text-to-image diffusion models by enabling them to also be conditioned on grounding inputs. To preserve the vast concept knowledge of the pre-trained model, we freeze all of its weights and inject the grounding information into new trainable layers via a gated mechanism. Our model achieves open-world grounded text2img generation with caption and bounding box condition inputs, and the grounding ability generalizes well to novel spatial configurations and concepts. GLIGEN’s zeroshot performance on COCO and LVIS outperforms existing supervised layout-to-image baselines by a large margin.*
<Tip>
Make sure to check out the Stable Diffusion [Tips](https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/overview#tips) section to learn how to explore the tradeoff between scheduler speed and quality and how to reuse pipeline components efficiently!
If you want to use one of the official checkpoints for a task, explore the [gligen](https://huggingface.co/gligen) Hub organization!
</Tip>
This pipeline was contributed by [Nikhil Gajendrakumar](https://github.com/nikhil-masterful).
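A minimal, hedged sketch of grounded generation with bounding boxes follows. The checkpoint name, prompt, phrases, and box coordinates are only illustrative; boxes are given in normalized `[xmin, ymin, xmax, ymax]` coordinates, with one grounding phrase per box.

```python
import torch
from diffusers import StableDiffusionGLIGENPipeline

# Example text-box grounded generation checkpoint on the Hub.
pipe = StableDiffusionGLIGENPipeline.from_pretrained(
    "masterful/gligen-1-4-generation-text-box",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="a waterfall and a modern high speed train in a beautiful forest with fall foliage",
    gligen_phrases=["a waterfall", "a modern high speed train"],
    gligen_boxes=[[0.14, 0.20, 0.43, 0.71], [0.50, 0.44, 0.85, 0.73]],
    gligen_scheduled_sampling_beta=1.0,  # fraction of steps that use grounding
    num_inference_steps=50,
).images[0]
image.save("gligen_generation.png")
```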
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Image variation
The Stable Diffusion model can also generate variations from an input image. It uses a fine-tuned version of a Stable Diffusion model by [Justin Pinkney](https://www.justinpinkney.com/) from [Lambda](https://lambdalabs.com/).
The original codebase can be found at [LambdaLabsML/lambda-diffusers](https://github.com/LambdaLabsML/lambda-diffusers#stable-diffusion-image-variations) and additional official checkpoints for image variation can be found at [lambdalabs/sd-image-variations-diffusers](https://huggingface.co/lambdalabs/sd-image-variations-diffusers).
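A minimal sketch using the checkpoint above is shown below. The input image path, revision tag, and guidance scale are illustrative; adjust them to your setup.

```python
from diffusers import StableDiffusionImageVariationPipeline
from diffusers.utils import load_image

# The "v2.0" revision refers to the weights published on the Hub; adjust if needed.
pipe = StableDiffusionImageVariationPipeline.from_pretrained(
    "lambdalabs/sd-image-variations-diffusers",
    revision="v2.0",
).to("cuda")

# No text prompt is used: the pipeline produces variations of the input image.
init_image = load_image("path/to/your/image.png")  # placeholder path
image = pipe(image=init_image, guidance_scale=3.0).images[0]
image.save("variation.png")
```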
<Tip>
Make sure to check out the Stable Diffusion [Tips](./overview#tips) section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently!
</Tip>
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Image-to-image
The Stable Diffusion model can also be applied to image-to-image generation by passing a text prompt and an initial image to condition the generation of new images.
The [`StableDiffusionImg2ImgPipeline`] uses the diffusion-denoising mechanism proposed in [SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations](https://huggingface.co/papers/2108.01073) by Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, Stefano Ermon.
The abstract from the paper is:
*Guided image synthesis enables everyday users to create and edit photo-realistic images with minimum effort. The key challenge is balancing faithfulness to the user input (e.g., hand-drawn colored strokes) and realism of the synthesized image. Existing GAN-based methods attempt to achieve such balance using either conditional GANs or GAN inversions, which are challenging and often require additional training data or loss functions for individual applications. To address these issues, we introduce a new image synthesis and editing method, Stochastic Differential Editing (SDEdit), based on a diffusion model generative prior, which synthesizes realistic images by iteratively denoising through a stochastic differential equation (SDE). Given an input image with user guide of any type, SDEdit first adds noise to the input, then subsequently denoises the resulting image through the SDE prior to increase its realism. SDEdit does not require task-specific training or inversions and can naturally achieve the balance between realism and faithfulness. SDEdit significantly outperforms state-of-the-art GAN-based methods by up to 98.09% on realism and 91.72% on overall satisfaction scores, according to a human perception study, on multiple tasks, including stroke-based image synthesis and editing as well as image compositing.*
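A minimal sketch of this SDEdit-style workflow with [`StableDiffusionImg2ImgPipeline`] follows. The checkpoint, prompt, input image path, and `strength` value are only illustrative.

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Start from an existing image; `strength` controls how much noise is added
# before denoising (higher values deviate more from the input).
init_image = load_image("path/to/initial_image.png").resize((768, 512))  # placeholder path
image = pipe(
    prompt="A fantasy landscape, trending on artstation",
    image=init_image,
    strength=0.75,
    guidance_scale=7.5,
).images[0]
image.save("img2img.png")
```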
<Tip>
Make sure to check out the Stable Diffusion [Tips](overview#tips) section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently!
</Tip>