Compare commits


2463 Commits

Author SHA1 Message Date
Dhruv Nair
e8d40b3d5d update 2024-02-17 09:59:51 +00:00
Dhruv Nair
d699d686c0 update 2024-02-13 05:59:30 +00:00
Alex Umnov
e7696e20f9 Updated lora inference instructions (#6913)
* Updated lora inference instructions

* Update examples/dreambooth/README.md

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update README.md

* Update README.md

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-02-13 09:35:20 +05:30
Piyush Thakur
4b89aeffe1 [Type annotations] fixed in save_model_card (#6948)
fixed type annotations

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-02-13 08:56:45 +05:30
Steven Liu
0a1daadef8 [docs] Community pipelines (#6929)
fix
2024-02-12 10:38:13 -08:00
Sayak Paul
371f765908 [Diffusers -> Original SD conversion] fix things (#6933)
* fix: bias loading bug

* fixes for SDXL

* apply changes to the conversion script to match single_file_utils.py

* do transpose to match the single file loading logic.
2024-02-12 17:30:22 +05:30
Piyush Thakur
75aee39eac [Model Card] standardize T2I Adapter Sdxl model card (#6947)
standardize model card template t2i-adapter-sdxl
2024-02-12 16:43:20 +05:30
Dhruv Nair
215e6804d3 Unpin torch versions in CI (#6945)
* update

* update

* update

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-02-12 16:01:05 +05:30
Disty0
9254d1f39a Pass device to enable_model_cpu_offload in maybe_free_model_hooks (#6937) 2024-02-12 13:42:32 +05:30
Piyush Thakur
e1bdcc7af3 [Model Card] standardize T2I Sdxl Lora model card (#6944)
* standardize model card template t2i-lora-sdxl

* type annotations
2024-02-12 11:45:40 +05:30
Dhruv Nair
84905ca728 Update PixArt Alpha test module to match src module (#6943)
update
2024-02-12 11:01:33 +05:30
Piyush Thakur
6f336650c3 [Model Card] standardize T2I Sdxl model card (#6942)
standardize model card template t2i-sdxl
2024-02-12 10:01:20 +05:30
Piyush Thakur
06a042cd0e [Model Card] standardize T2I Lora model card (#6940)
standardize model card t2i-lora
2024-02-12 10:01:13 +05:30
Piyush Thakur
8772496586 [Model Card] standardize T2I model card (#6939)
* standardize model card

* fix base_model
2024-02-12 10:00:41 +05:30
dg845
35fd84be27 Replace hardcoded values in SchedulerCommonTest with properties (#5479)
---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2024-02-10 21:34:20 -10:00
YiYi Xu
f2756253e6 Fix a bug in AutoPipeline.from_pipe when switching pipeline with optional components (#6820)
* fix

* add tests

---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-02-09 21:56:52 -10:00
YiYi Xu
0071478d9e allow attention processors to have different signatures (#6915)
add

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2024-02-09 21:56:19 -10:00
Sayak Paul
7c8cab313e post release 0.26.2 (#6885)
* post release

* style

* Empty-Commit

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2024-02-09 07:36:38 -10:00
Sayak Paul
ca9ed5e8d1 [LoRA] deprecate certain lora methods from the old backend. (#6889)
* deprecate certain lora methods from the old backend.

* uncomment necessary things.

* safe remove old lora backend 👋
2024-02-09 17:14:32 +01:00
Bingxin Ke
98b6bee1a1 [Community Pipeline][Bug Fix] marigold_depth_estimation: input image value range (#6787)
[FIX] IMPORTANT: rgb normalization
2024-02-09 16:55:37 +01:00
Dhruv Nair
ab7113487c More IPAdapter test fixes (#6888)
update
2024-02-09 13:54:58 +05:30
camaro
59c307f1d5 Standardize model card for Controlnet (#6910)
* controlnet

* style

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-02-09 12:29:08 +05:30
Sayak Paul
159885adc6 correct hub_token exposition behaviour (thanks to @bghira). (#6918) 2024-02-08 18:38:27 -10:00
Aryan
7337eea59b [refactor] unnecessary lines in decode_latents in video pipelines (#6682)
* refactor decode latents in video pipelines

* make fix-copies
2024-02-08 17:10:52 -10:00
camaro
f07899a57c Standardize model card for Controlnet SDXL (#6908)
controlnet-sdxl
2024-02-09 07:53:39 +05:30
camaro
a83cc0c0bc Standardize model card for Controlnet flax (#6909)
* controlnet-flax

* style

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-02-09 07:52:56 +05:30
C Q
db5194a45d Fix Compatibility Issues in stable_diffusion_xl_reference.py (#6251)
* Fix Compatibility Issues in stable_diffusion_xl_reference.py

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
2024-02-08 10:52:59 -10:00
shaoxiaowang
e6c9c2513f fix examples/community/pipeline_stable_diffusion_xl_instantid.py (#6759)
Co-authored-by: wangshaoxiao <wangshaoxiao@xiaomi.com>
2024-02-08 10:09:40 -10:00
Kyunghwan Kim
d643b6691f Add vae tiling and slicing in img2img and inpaint (#6871)
* Add vae tiling in img2img and inpaint

* Add vae tiling not slicing
2024-02-08 09:50:00 -10:00
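A minimal usage sketch of the VAE tiling/slicing switches this commit brings to the img2img (and inpaint) pipelines; the checkpoint name and input URL are placeholders, not taken from the PR:

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

# Placeholder checkpoint; any SD 1.5-style checkpoint works the same way.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Run the VAE in tiles and one image at a time to cap peak memory.
pipe.enable_vae_tiling()
pipe.enable_vae_slicing()

init_image = load_image("https://example.com/input.png")  # placeholder URL
image = pipe("a fantasy landscape", image=init_image, strength=0.6).images[0]
```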
Michael
f5c9be3a0a Remove <cat-toy> validation prompt from examples/textual_inversion/textual_inversion_sdxl.py (#6877)
Remove <cat-toy> validation prompt from textual_inversion_sdxl.py

The `<cat-toy>` validation prompt is a default choice for the example task in the README. But no other part of `textual_inversion_sdxl.py` references the cat toy and `textual_inversion.py` has a default validation prompt of `None` as well.

So bring `textual_inversion_sdxl.py` in line with `textual_inversion.py` and change default validation prompt to `None`
2024-02-08 09:46:06 -10:00
Laisky.Cai
1824d0050e fix: load_image should support PIL Image (#6904)
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-02-08 08:25:37 -10:00
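Sketch of what the fix above enables: `load_image` keeps accepting a URL or local path and now also passes through an already-opened PIL image (file names here are illustrative):

```python
from PIL import Image
from diffusers.utils import load_image

image_from_path = load_image("input.png")  # URL or local path, as before

pil_image = Image.open("input.png")
image_from_pil = load_image(pil_image)     # now also accepts a PIL.Image directly
```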
Sayak Paul
30e5e81d58 change to 2024 in the license (#6902)
change to 2024
2024-02-08 08:19:31 -10:00
Masamune Ishihara
8de78001df Add fps argument to export_to_gif function. (#6786) 2024-02-08 21:59:51 +05:30
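Hedged example of the new keyword; the dummy frames stand in for the output of a video pipeline:

```python
from PIL import Image
from diffusers.utils import export_to_gif

# Dummy frames purely for illustration; in practice these come from a video pipeline.
frames = [Image.new("RGB", (64, 64), (i * 15, 0, 0)) for i in range(16)]
export_to_gif(frames, "animation.gif", fps=8)  # fps keyword introduced by this PR
```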
Patryk Bartkowiak
3ac2357794 changed positional parameters to named parameters like in docs (#6905)
Co-authored-by: Patryk Bartkowiak <patryk.bartkowiak@tcl.com>
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
2024-02-08 21:39:03 +05:30
Ehsan Akhgari
17808a091e Fix bug when converting checkpoint to diffusers format (#6900)
This fixes #6899.
2024-02-08 18:52:11 +05:30
Sayak Paul
491a933a1b [I2VGenXL] attention_head_dim in the UNet (#6872)
* attention_head_dim

* debug

* print more info

* correct num_attention_heads behaviour

* down_block_num_attention_heads -> num_attention_heads.

* correct the image link in doc.

* add: deprecation for num_attention_head

* fix: test argument to use attention_head_dim

* more fixes.

* quality

* address comments.

* remove deprecation.
2024-02-08 12:30:14 +05:30
Sayak Paul
aa82df52e7 [IP Adapters] introduce ip_adapter_image_embeds in the SD pipeline call (#6868)
* add: support for passing ip adapter image embeddings

* debugging

* make feature_extractor unloading conditioned on safety_checker

* better condition

* type annotation

* index to look into value slices

* more debugging

* debugging

* serialize embeddings dict

* better conditioning

* remove unnecessary prints.

* Update src/diffusers/loaders/ip_adapter.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* make fix-copies and styling.

* styling and further copy fixing.

* fix: check_inputs call in controlnet sdxl img2img pipeline

---------

Co-authored-by: YiYi Xu <yixu310@gmail.com>
2024-02-08 11:10:10 +05:30
Srimanth Agastyaraju
a11b0f83b7 Fix: training resume from fp16 for SDXL Consistency Distillation (#6840)
* Fix: training resume from fp16 for lcm distill lora sdxl

* Fix coding quality - run linter

* Fix 1 - shift mixed precision cast before optimizer

* Fix 2 - State dict errors by removing load_lora_into_unet

* Update train_lcm_distill_lora_sdxl.py - Revert default cache dir to None

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-02-08 11:09:29 +05:30
Sayak Paul
1835510524 Remove torch_dtype in to() to end deprecation (#6886)
* remove torch_dtype from to()

* remove torch_dtype from usage scripts.

* remove old lora backend

* Revert "remove old lora backend"

This reverts commit adcddf6ba4.
2024-02-08 09:38:57 +05:30
camaro
4a3d52850b fix: keyword argument mismatch (#6895) 2024-02-08 09:37:56 +05:30
YiYi Xu
97d004b9b4 [ip-adapter] make sure length of scale is same as number of ip-adapters when using set_ip_adapter_scale (#6884)
add

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2024-02-07 10:13:12 -10:00
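The check added above means the scale argument must have one entry per loaded adapter; a sketch, where the weight file names follow the public h94/IP-Adapter repo and are assumptions rather than values from the PR:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Two IP-Adapters loaded at once ...
pipe.load_ip_adapter(
    "h94/IP-Adapter",
    subfolder="models",
    weight_name=["ip-adapter_sd15.bin", "ip-adapter-plus_sd15.bin"],
)

# ... so the scale list must also have exactly two entries.
pipe.set_ip_adapter_scale([0.7, 0.5])
```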
Sayak Paul
76696dca55 [Model Card] standardize dreambooth model card (#6729)
* feat: standardize model card creation for dreambooth training.

* correct 'inference

* remove comments.

* take component out of kwargs

* style

* add: card template to have a leaner description.

* widget support.

* propagate changes to train_dreambooth_lora

* propagate changes to custom diffusion

* make widget properly type-annotated
2024-02-07 15:07:11 +05:30
Félix Sanz
17612de451 fix: typo in callback function name and property (#6834)
* fix: callback function name is incorrect

On this tutorial there is a function defined and then used inside `callback_on_step_end` argument, but the name was not correct (mismatch)

* fix: typo in num_timestep (correct is num_timesteps)

fixed property name
2024-02-06 12:05:40 -08:00
Dhruv Nair
994360f7a5 Fix last IP Adapter test (#6875)
update
2024-02-06 08:53:40 -10:00
Dhruv Nair
e6a48db633 Refactor Deepfloyd IF tests. (#6855)
* update

* update

* update
2024-02-06 16:43:17 +05:30
sayakpaul
4f1df69d1a Revert "add attention_head_dim"
This reverts commit 15f6b22466.
2024-02-06 14:48:49 +05:30
sayakpaul
15f6b22466 add attention_head_dim 2024-02-06 14:48:07 +05:30
Sayak Paul
e6fd9ada3a [I2vGenXL] clean up things (#6845)
* remove _to_tensor

* remove _to_tensor definition

* remove _collapse_frames_into_batch

* remove lora for not bloating the code.

* remove sample_size.

* simplify code a bit more

* ensure timesteps are always in tensor.
2024-02-06 09:22:07 +05:30
Edward Li
493228a708 Fix AutoencoderTiny with use_slicing (#6850)
* Fix `AutoencoderTiny` with `use_slicing`

When using slicing with AutoencoderTiny, the encoder mistakenly encodes the entire batch for every image in the batch.

* Fixed formatting issue
2024-02-05 09:18:22 -10:00
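A sketch of the code path the fix targets, assuming the distilled madebyollin/taesd checkpoint; with slicing enabled, each image in the batch should now be encoded once rather than the whole batch being re-encoded per image:

```python
import torch
from diffusers import AutoencoderTiny

vae = AutoencoderTiny.from_pretrained("madebyollin/taesd", torch_dtype=torch.float16).to("cuda")
vae.enable_slicing()  # encode/decode one image of the batch at a time

images = torch.randn(4, 3, 512, 512, dtype=torch.float16, device="cuda")
latents = vae.encode(images).latents
```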
Dhruv Nair
8bf046b7fb Add single file and IP Adapter support to PIA Pipeline (#6851)
update
2024-02-05 16:23:18 +05:30
Dhruv Nair
bb99623d09 Update IP Adapter tests to use cosine similarity distance (#6806)
* update

* update
2024-02-05 16:22:59 +05:30
Dhruv Nair
fdf55b1f1c Fix posix path issue in testing utils (#6849)
update
2024-02-05 08:57:18 +05:30
小咩Goat
c6f8c310c3 Fix forward pass in UNetMotionModel when gradient checkpoint is enabled (#6744)
fix #6742

Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2024-02-05 08:04:01 +05:30
YiYi Xu
64909f17b7 update IP-adapter code in UNetMotionModel (#6828)
fix

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2024-02-05 07:26:46 +05:30
Dhruv Nair
f09ca909c8 Multiple small fixes to Video Pipeline docs (#6805)
* update

* update

* update

* Update src/diffusers/pipelines/i2vgen_xl/pipeline_i2vgen_xl.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* update

* update

---------

Co-authored-by: YiYi Xu <yixu310@gmail.com>
2024-02-05 07:24:38 +05:30
YiYi Xu
a5fc62f819 add self.use_ada_layer_norm_* params back to BasicTransformerBlock (#6841)
fix sd reference community pipeline

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2024-02-04 11:16:44 -10:00
Linoy Tsaban
fbdf26bac5 [dreambooth lora sdxl] add sdxl micro conditioning (#6795)
* add micro conditioning

* remove redundant lines

* style

* fix missing 's'

* fix missing shape bug due to missing RGB if statement

* remove redundant if, change arg order

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-02-04 16:00:09 +02:00
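For context, SDXL "micro conditioning" packs the original size, crop coordinates, and target size of each training image into a small time-ids tensor; a sketch of the computation the script now performs inside the training loop (the function name is illustrative, not the script's):

```python
import torch

def compute_time_ids(original_size, crops_coords_top_left, target_size, dtype=torch.float32):
    # (orig_h, orig_w) + (crop_top, crop_left) + (target_h, target_w) -> shape (1, 6)
    add_time_ids = list(original_size + crops_coords_top_left + target_size)
    return torch.tensor([add_time_ids], dtype=dtype)

time_ids = compute_time_ids((1024, 1024), (0, 0), (1024, 1024))
print(time_ids.shape)  # torch.Size([1, 6])
```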
Fabio Rigano
13001ee315 Bugfix in IPAdapterFaceID (#6835) 2024-02-03 08:56:55 -10:00
Linoy Tsaban
65329aed98 [advanced dreambooth lora sdxl script] new features + bug fixes (#6691)
* add noise_offset param

* micro conditioning - wip

* image processing adjusted and moved to support micro conditioning

* change time ids to be computed inside train loop

* change time ids to be computed inside train loop

* change time ids to be computed inside train loop

* time ids shape fix

* move token replacement of validation prompt to the same section of instance prompt and class prompt

* add offset noise to sd15 advanced script

* fix token loading during validation

* fix token loading during validation in sdxl script

* a little clean

* style

* a little clean

* style

* sdxl script - a little clean + minor path fix

sd 1.5 script - change default resolution value

* sd 1.5 script - minor path fix

* fix missing comma in code example in model card

* clean up commented lines

* style

* remove time ids computed outside training loop - no longer used now that we utilize micro-conditioning, as all time ids are now computed inside the training loop

* style

* [WIP] - added draft readme, building off of examples/dreambooth/README.md

* readme

* readme

* readme

* readme

* readme

* readme

* readme

* readme

* removed --crops_coords_top_left from CLI args

* style

* fix missing shape bug due to missing RGB if statement

* add blog mention at the start of the readme as well

* Update examples/advanced_diffusion_training/README.md

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* change note to render nicely as well

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-02-03 17:33:43 +02:00
Stephen
02338c9317 Change path to posix (testing_utils.py) (#6803)
change path to pathlib as_posix

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-02-03 12:44:13 +05:30
Younes Belkada
15ed53d272 Fixes LoRA SDXL training script with DDP + PEFT (#6816)
Update train_dreambooth_lora_sdxl.py
2024-02-03 09:46:32 +05:30
UmerHA
9cc59ba089 [Contributor Experience] Fix test collection on MPS (#6808)
* Update testing_utils.py

* Update testing_utils.py
2024-02-02 20:59:00 +05:30
YiYi Xu
adcbe674a4 [refactor]Scheduler.set_begin_index (#6728) 2024-02-01 09:51:02 -10:00
Sayak Paul
ec9840a5db [Refactor] harmonize the module structure for models in tests (#6738)
* harmonize the module structure for models in tests

* make the folders modules.

---------

Co-authored-by: YiYi Xu <yixu310@gmail.com>
2024-02-01 14:23:39 +05:30
YiYi Xu
093a03a1a1 add is_torchvision_available (#6800)
* add

* remove transformer

---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2024-01-31 20:01:44 -10:00
Patrick von Platen
c3369f5673 fix torchvision import (#6796) 2024-01-31 12:13:10 -10:00
Sayak Paul
04cd6adf8c [Feat] add I2VGenXL for image-to-video generation (#6665)
---------

Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
2024-01-31 10:38:51 -10:00
YiYi Xu
66722dbea7 [sdxl k-diffusion pipeline]move sigma to device (#6757)
move sigma to device

Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-01-31 09:29:15 -10:00
YiYi Xu
2e8d18e699 [IP-Adapter] Support multiple IP-Adapters (#6573)
---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: Alvaro Somoza <somoza.alvaro@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2024-01-31 07:11:15 -10:00
Steven Liu
03373de0db [docs] Add missing parameter (#6775)
add missing param
2024-01-31 08:53:40 -08:00
Dhruv Nair
56bea6b4a1 Add PIA Model/Pipeline (#6698)
* update

* update

* update

* add tests and docs

* clean up

* add to toctree

* fix copies

* pr review feedback

* fix copies

* fix tests

* update docs

* update

* update

* update docs

* update

* update

* update

* update
2024-01-31 18:00:17 +02:00
Dhruv Nair
d7dc0ffd79 Fix setting scaling factor in VAE config (#6779)
fix
2024-01-31 19:47:22 +05:30
Kashif Rasul
97ee616971 add ipo, hinge and cpo loss to dpo trainer (#6788)
add ipo and hinge loss to dpo trainer
2024-01-31 16:41:31 +05:30
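For reference, a sketch of the selectable preference losses in the usual DPO-trainer convention, where `logits` is the model-minus-reference preference margin; variable and option names are illustrative rather than the script's, and only the sigmoid baseline, hinge, and IPO forms are shown here:

```python
import torch
import torch.nn.functional as F

def dpo_loss(logits: torch.Tensor, beta: float = 0.1, loss_type: str = "sigmoid") -> torch.Tensor:
    if loss_type == "sigmoid":   # original DPO objective
        return -F.logsigmoid(beta * logits).mean()
    if loss_type == "hinge":     # hinge variant
        return torch.relu(1.0 - beta * logits).mean()
    if loss_type == "ipo":       # IPO regresses the margin toward 1 / (2 * beta)
        return ((logits - 1.0 / (2.0 * beta)) ** 2).mean()
    raise ValueError(f"Unknown loss_type: {loss_type}")
```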
Sayak Paul
0fc62d1702 [Kandinsky tests] add is_flaky to test_model_cpu_offload_forward_pass (#6762)
* add is_flaky to test_model_cpu_offload_forward_pass

* style

* update

---------

Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2024-01-31 14:51:12 +05:30
Dhruv Nair
f4d3f913f4 Pin torch < 2.2.0 in test runners (#6780)
* update

* update

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-01-31 13:41:18 +05:30
Viet Nguyen
1cab64b3be Update train_diffusion_dpo.py (#6754)
* Update train_diffusion_dpo.py

Address #6702

* Update train_diffusion_dpo_sdxl.py

* Empty-Commit

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-01-31 12:46:23 +05:30
Sayak Paul
8d7dc85312 add note about serialization (#6764) 2024-01-31 12:45:40 +05:30
dg845
87a92f779c Fix bug in ResnetBlock2D.forward where LoRA Scale gets Overwritten (#6736)
Fix bug in ResnetBlock2D.forward when not USE_PEFT_BACKEND and using scale_shift for time emb where the lora scale  gets overwritten.

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-01-30 14:43:48 -10:00
Yunxuan Xiao
0db766ba77 [DDPMScheduler] Load alpha_cumprod to device to avoid redundant data movement. (#6704)
* load cumprod tensor to device

Signed-off-by: woshiyyya <xiaoyunxuan1998@gmail.com>

* fixing ci

Signed-off-by: woshiyyya <xiaoyunxuan1998@gmail.com>

* make fix-copies

Signed-off-by: woshiyyya <xiaoyunxuan1998@gmail.com>

---------

Signed-off-by: woshiyyya <xiaoyunxuan1998@gmail.com>
2024-01-30 13:19:37 -10:00
Dhruv Nair
8e94663503 Update export to video to support new tensor_to_vid function in video pipelines (#6715)
update
2024-01-30 19:43:33 +05:30
YiYi Xu
b09b90e24c update ip-adapter slow tests (#6760)
* update slices

* up

* hopefully last one

---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2024-01-29 17:55:41 -10:00
Sajad Norouzi
058b47553e Fix mixed precision fine-tuning for text-to-image-lora-sdxl example. (#6751)
* Fix mixed precision fine-tuning for text-to-image-lora-sdxl example.

* fix text_encoder_two bug.

---------

Co-authored-by: Sajad Norouzi <sajadn@dev-dsk-sajadn-2a-87239470.us-west-2.amazon.com>
2024-01-30 06:55:02 +05:30
xhedit
7f58a76f48 Update lora.md with a more accurate description of rank (#6724)
* Update lora.md with a more accurate description of rank

* Update docs/source/en/training/lora.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2024-01-29 09:41:51 -08:00
Sayak Paul
09b7bfce91 [Core] move transformer scripts to transformers modules (#6747)
* move transformer scripts to transformers modules

* move transformer model test

* move prior transformer test to  directory

* fix doc path

* correct doc path

* add: __init__.py
2024-01-29 22:28:28 +05:30
Fabio Rigano
5d8b1987ec Add unload_textual_inversion method (#6656)
* Add unload_textual_inversion

* Fix dicts in tokenizer

* Fix quality

* Apply suggestions from code review

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* Fix variable name after last update

---------

Co-authored-by: YiYi Xu <yixu310@gmail.com>
2024-01-29 06:26:22 -10:00
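A usage sketch of the new method; the concept repo is a public example, and the optional per-token form is an assumption about the API rather than something stated in the PR:

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.load_textual_inversion("sd-concepts-library/cat-toy")
image = pipe("a <cat-toy> on a bench").images[0]

# New in this PR: remove the learned tokens and embeddings again.
pipe.unload_textual_inversion()             # drop all loaded textual-inversion embeddings
# pipe.unload_textual_inversion("<cat-toy>")  # or, presumably, a specific token
```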
Sayak Paul
acd1962769 correct hflip arg (#6743) 2024-01-28 17:42:34 +05:30
Stephen
5b1b80a5b6 Change os.path to pathlib Path (#6737)
Change os.path to pathlib
2024-01-28 10:24:13 +05:30
gzguevara
8581d9bce4 changed to posix unet (#6719)
changed to posix

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-01-27 16:31:52 +05:30
Yingtian Liu
c101066227 Correct SNR weighted loss in v-prediction case by only adding 1 to SNR on the denominator (#6307)
* fix minsnr implementation for v-prediction case

* format code

* always compute snr when snr_gamma is specified

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-01-27 09:18:09 +05:30
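For clarity, the corrected weighting can be sketched as min(SNR, gamma) / SNR for epsilon-prediction and min(SNR, gamma) / (SNR + 1) for v-prediction, i.e. the "+1" only enters the denominator; this is a paraphrase of the fix, not the literal script code:

```python
import torch

def snr_loss_weight(snr: torch.Tensor, snr_gamma: float, prediction_type: str = "epsilon"):
    clipped = torch.clamp(snr, max=snr_gamma)  # min(SNR, gamma)
    if prediction_type == "v_prediction":
        return clipped / (snr + 1.0)           # +1 only on the denominator
    return clipped / snr                       # epsilon-prediction unchanged
```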
Sayak Paul
d4c7ab7bf1 [Hub] feat: explicitly tag to diffusers when using push_to_hub (#6678)
* feat: explicitly tag to diffusers when using push_to_hub

* remove tags.

* reset repo.

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* fix: tests

* fix: push_to_hub behaviour for tagging from save_pretrained

* Apply suggestions from code review

Co-authored-by: Lucain <lucainp@gmail.com>

* Apply suggestions from code review

Co-authored-by: Lucain <lucainp@gmail.com>

* import fixes.

* add library name to existing model card.

* add: standalone test for generate_model_card

* fix tests for standalone method

* moved library_name to a better place.

* merge create_model_card and generate_model_card.

* fix test

* address lucain's comments

* fix return identation

* Apply suggestions from code review

Co-authored-by: Lucain <lucainp@gmail.com>

* address further comments.

* Update src/diffusers/pipelines/pipeline_utils.py

Co-authored-by: Lucain <lucainp@gmail.com>

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Lucain <lucainp@gmail.com>
2024-01-26 23:01:48 +05:30
dg845
ea9dc3fa90 Add UFOGenScheduler to Community Examples (#6650)
* Add UFOGenScheduler with diffusers imports changed from relative to absolute.

* make style

* Add community README entry for UFOGenScheduler.
2024-01-26 15:11:14 +02:00
dg845
b4220e97b1 Add Community Example Consistency Training Script (#6717)
* initial commit for unconditional/class-conditional consistency training script

* make style

* Add entry for consistency training script in community README.

* Move consistency training script from community to research_projects/consistency_training

* Add requirements.txt and README to research_projects/consistency_training directory.

* Manually revert community README changes for consistency training.

* Fix path to script after moving script to research projects.

* Add option to load U-Net weights from pretrained model.

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2024-01-26 15:10:57 +02:00
Dhruv Nair
dc85b578c2 Move tests for SD inference variant pipelines into their own modules (#6707)
* update

* update

* update
2024-01-26 14:09:41 +02:00
Dhruv Nair
0d927c7542 Add IP Adapters to slow tests (#6714)
update
2024-01-25 19:52:50 -10:00
Andrew Ishutin
5b93338235 fix custom diffusion training with concept list (#6710)
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-01-26 11:17:51 +08:00
Aryan V S
7c1c705f60 fix community README (#6645) 2024-01-25 17:52:52 -08:00
Aryan V S
9e72016468 [docs] AnimateDiff Video-to-Video (#6712)
* add animatediff vid2vid to docs

* Update docs/source/en/api/pipelines/animatediff.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* apply suggestions from review

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2024-01-25 17:51:43 -08:00
Patrick von Platen
3e9716f22b Correct sigmas cpu settings (#6708) 2024-01-26 09:32:24 +08:00
Steven Liu
87bfbc320d [docs] UViT2D (#6643)
* uvit2d

* fix

* fix?

* add correct paper

* fix paths

* update abstract
2024-01-25 09:37:28 -08:00
Aryan V S
a517f665a4 AnimateDiff Video to Video (#6328)
* begin animatediff img2video and video2video

* revert animatediff to original implementation

* add img2video as pipeline

* update

* add vid2vid pipeline

* update imports

* update

* remove copied from line for check_inputs

* update

* update examples

* add multi-batch support

* fix __init__.py files

* move img2vid to community

* update community readme and examples

* fix

* make fix-copies

* add vid2vid batch params

* apply suggestions from review

Co-Authored-By: Dhruv Nair <dhruv.nair@gmail.com>

* add test for animatediff vid2vid

* torch.stack -> torch.cat

Co-Authored-By: Dhruv Nair <dhruv.nair@gmail.com>

* make style

* docs for vid2vid

* update

* fix prepare_latents

* fix docs

* remove img2vid

* update README to :main

* remove slow test

* refactor pipeline output

* update docs

* update docs

* merge community readme from :main

* final fix i promise

* add support for url in animatediff example

* update example

* update callbacks to latest implementation

* Update src/diffusers/pipelines/animatediff/pipeline_animatediff_video2video.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/animatediff/pipeline_animatediff_video2video.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* fix merge

* Apply suggestions from code review

* remove callback and callback_steps as suggested in review

* Update tests/pipelines/animatediff/test_animatediff_video2video.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* fix import error caused due to unet refactor in #6630

* fix numpy import error after tensor2vid refactor in #6626

* make fix-copies

* fix numpy error

* fix progress bar test

---------

Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2024-01-24 18:22:26 +05:30
Brandon Strong
16748d1eba SD 1.5 Support For Advanced Lora Training (train_dreambooth_lora_sdxl_advanced.py) (#6449)
* sd1.5 support in separate script

A quick adaptation to support people interested in using this method on 1.5 models.

* sd15 prompt text encoding and unet conversions

as per @linoytsaban's recommendations. Testing would be appreciated.

* Readability and quality improvements

Removed some mentions of SDXL, and some arguments that don't apply to sd 1.5, and cleaned up some comments.

* make style/quality commands

* tracker rename and run-it doc

* Update examples/advanced_diffusion_training/train_dreambooth_lora_sd15_advanced.py

* Update examples/advanced_diffusion_training/train_dreambooth_lora_sd15_advanced.py

---------

Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
2024-01-24 11:20:08 +02:00
Haofan Wang
c9081a8abd [Fix bugs] pipeline_controlnet_sd_xl.py (#6653)
* Update pipeline_controlnet_sd_xl.py

* Update pipeline_controlnet_xs_sd_xl.py
2024-01-24 09:18:12 +05:30
Yasuna
0eb68d9ddb [Docs] update: tutorials ja | AUTOPIPELINE.md (#6629)
* add en file

* translate 1-118 lines

* add text

* add toctree

* fix

* fix typo

* fix link
2024-01-23 09:19:55 -08:00
Haofan Wang
9941b3f124 Add InstantID Pipeline (#6673)
* add instantid pipeline

* format

* Update README.md

* Update README.md

* format

---------

Co-authored-by: ResearcherXman <xhs.research@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-01-23 17:25:23 +05:30
Ayush Mangal
16b9f98b48 [WIP][Community Pipeline] InstaFlow! One-Step Stable Diffusion with Rectified Flow (#6057)
* Add instaflow community pipeline

* Make styling fixes

* Add lora

* Fix formatting

* Add docs

* Update README.md

* Update README.md

* Remove do LORA

* Update readme

* Update README.md

* Update README.md

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2024-01-23 13:52:50 +02:00
Dhruv Nair
fee93c81eb [Refactor] Update from single file (#6428)
* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* up

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* up

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* clean

* update

* update

* clean up

* clean up

* update

* clean

* clean

* update

* update

* clean up

* fix docs

* update

* update

* Revert "update"

This reverts commit dbfb8f1ea9.

* update

* update

* update

* update

* fix controlnet

* fix scheduler

* fix controlnet tests
2024-01-23 14:42:03 +05:30
Sayak Paul
5308cce994 [Tests] Test for passing local config file to from_single_file() (#6638)
make config file local too.
2024-01-23 14:21:23 +05:30
YiYi Xu
318556b20e fix dpm related slow test failure (#6680)
fix

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2024-01-22 18:52:05 -10:00
Dhruv Nair
6620eda357 Standardise outputs for video pipelines (#6626)
* update

* update

* update

* update

* update

* update

* update

* clean up

* clean up
2024-01-23 10:07:07 +05:30
Sayak Paul
1f0705adcf [Big refactor] move unets to unets module 🦋 (#6630)
* move unets to  module 🦋

* parameterize unet-level import.

* fix flax unet2dcondition model import

* models __init__

* mildly deprecating models.unet_2d_blocks in favor of models.unets.unet_2d_blocks.

* noqa

* correct deprecation behaviour

* inherit from the actual classes.

* Empty-Commit

* backwards compatibility for unet_2d.py

* backward compatibility for unet_2d_condition

* bc for unet_1d

* bc for unet_1d_blocks
2024-01-23 08:57:58 +05:30
M. Tolga Cangöz
5e96333cb2 Update README (#6669)
Update number of checkpoints and repositories in README
2024-01-22 08:08:07 -08:00
Sayak Paul
da95a28ff6 [Diffusion DPO] apply fixes from #6547 (#6668)
apply fixes from #6547
2024-01-22 20:14:54 +05:30
Dhruv Nair
d66d554dc2 Add tearDown method to LoRA tests. (#6660)
* update

* update
2024-01-22 14:00:37 +05:30
Junsong Chen
c7df846dec add Sa-Solver (#5975)
* add Sa-Solver



---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: scxue <xueshuchen17@mails.ucas.edu.cn>
Co-authored-by: jschen <chenjunsong4@h-partners.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: yiyixuxu <yixu310@gmail,com>
2024-01-21 21:37:44 -10:00
Vinh H. Pham
8e7bbfbe5a add padding_mask_crop to all inpaint pipelines (#6360)
* add padding_mask_crop
---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
2024-01-21 19:42:22 -10:00
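A usage sketch, assuming a standard inpainting checkpoint and placeholder image URLs; padding_mask_crop restricts generation to a crop around the mask (plus the given padding) and pastes the result back into the original image:

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from diffusers.utils import load_image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

init_image = load_image("https://example.com/photo.png")  # placeholder URLs
mask_image = load_image("https://example.com/mask.png")

image = pipe(
    prompt="a red sofa",
    image=init_image,
    mask_image=mask_image,
    padding_mask_crop=32,  # crop image + mask around the masked area with 32 px padding
).images[0]
```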
YiYi Xu
e2773c6255 fix SDXL-kdiffusion tests (#6647)
🤞🤞🤞

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2024-01-19 17:37:29 -10:00
YiYi Xu
ac61eefc9f fix DPM Scheduler with use_karras_sigmas option (#6477)
* fix

---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2024-01-19 07:08:22 -10:00
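For context, the option being fixed is enabled like this (the checkpoint is an example):

```python
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True  # Karras sigma schedule
)
image = pipe("an astronaut riding a horse", num_inference_steps=25).images[0]
```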
HelloWorldBeginner
f95615b823 Fixed the bug related to saving DeepSpeed models. (#6628)
* Fixed the bug related to saving DeepSpeed models.

* Add information about training SD models using DeepSpeed to the README.

* Apply suggestions from code review

---------

Co-authored-by: mhh001 <mahonghao1@huawei.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-01-19 19:21:57 +05:30
SangKim
a9288b49c9 Modularize InstructPix2Pix SDXL inferencing during and after training in examples (#6569) 2024-01-19 15:47:34 +05:30
elucida
c54419658b refactor: extract init/forward function in UNet2DConditionModel (#6478)
* - extract function for stage in UNet2DConditionModel init & forward
- Add new function get_mid_block() to unet_2d_blocks.py

* add type hint to get_mid_block aligned with get_up_block and get_down_block; rename _set_xxx function

* add type hint and  use keyword arguments

* remove `copy from` in versatile diffusion
2024-01-19 12:13:34 +02:00
Aryan V S
6382663dc8 [Community] Experimental AnimateDiff Image to Video (open to improvements) (#6509)
* add animatediff img2vid

* fix

* Update examples/community/README.md

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* fix code snippet between ip adapter face id and animatediff img2vid

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2024-01-19 12:05:41 +02:00
spezialspezial
58b8dce129 Update convert_from_ckpt.py / read checkpoint config yaml contents (#6633)
Update convert_from_ckpt.py / read config yaml contents

Added missing reading of config yaml file contents
2024-01-19 08:25:03 +05:30
Dhruv Nair
a65ca8a059 Fix failing tests due to Posix Path (#6627)
update
2024-01-18 19:19:57 +05:30
Steven Liu
5ca062e011 [docs] Fix missing API function (#6604)
fix?
2024-01-17 13:59:09 -08:00
Linoy Tsaban
619e3ab6f6 [bug fix] advanced dreambooth lora sdxl - fixes bugs described in #6486 (#6599)
* fixes bugs:
1. redundant retraction
2. param clone
3. stopping optimization of text encoder params

* param upscaling

* style
2024-01-17 20:11:45 +05:30
Patrick von Platen
9e2804f720 Update pr_test_peft_backend.yml to use 1 process for testing (#6613) 2024-01-17 19:25:30 +05:30
Aryan V S
9112028ed8 FreeInit (#6315)
* freeinit

* update freeinit implementation based on review

Co-Authored-By: Dhruv Nair <dhruv.nair@gmail.com>

* fix

* another fix

* refactor

* fix timesteps missing bug

* apply suggestions from review

Co-Authored-By: Dhruv Nair <dhruv.nair@gmail.com>

* add test for freeinit

* apply suggestions from review

Co-Authored-By: Dhruv Nair <dhruv.nair@gmail.com>

* refactor

* fix test

* fix tensor not on same device

* update

* remove return_intermediate_results

* fix broken freeinit test

* update animatediff docs

---------

Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2024-01-17 17:17:07 +05:30
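Usage sketch on AnimateDiff; the model IDs are examples, and FreeInit iteratively re-initializes the low-frequency noise components to improve temporal consistency at the cost of extra sampling passes:

```python
import torch
from diffusers import AnimateDiffPipeline, MotionAdapter

adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2")
pipe = AnimateDiffPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", motion_adapter=adapter, torch_dtype=torch.float16
).to("cuda")

pipe.enable_free_init()  # added by this PR; accepts tuning kwargs such as the iteration count
frames = pipe("a panda surfing, ocean waves", num_frames=16).frames[0]
pipe.disable_free_init()
```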
Steve Rhoades
dce06680d2 Fixes torch.compile() compatible training (#6589)
resolve conflicts
2024-01-17 07:47:03 +05:30
Bhavay Malhotra
dd63168319 Update installation.md (#6438)
* Update installation.md

* Update installation.md

* Update installation.md
2024-01-16 13:41:38 -08:00
Celestial Phineas
1040dfd9cc [Fix] Multiple image conditionings in a single batch for StableDiffusionControlNetPipeline (#6334)
* [Fix] Multiple image conditionings in a single batch for `StableDiffusionControlNetPipeline`.

* Refactor `check_inputs` in `StableDiffusionControlNetPipeline` to avoid redundant codes.

* Make the behavior of MultiControlNetModel to be the same to the original ControlNetModel

* Keep the code change minimum for nested list support

* Add fast test `test_inference_nested_image_input`

* Remove redundant check for nested image condition in `check_inputs`

Remove `len(image) == len(prompt)` check out of `check_image()`

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* Better `ValueError` message for incompatible nested image list size

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* Fix syntax error in `check_inputs`

* Remove warning message for multi-ControlNets with multiple prompts

* Fix a typo in test_controlnet.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* Add test case for multiple prompts, single image conditioning in `StableDiffusionMultiControlNetPipelineFastTests`

* Improved `ValueError` message for nested `controlnet_conditioning_scale`

* Documenting the behavior of image list as `StableDiffusionControlNetPipeline` input

---------

Co-authored-by: YiYi Xu <yixu310@gmail.com>
2024-01-16 10:40:55 -10:00
Sayak Paul
49a4b377c1 remove omegaconf from the residues 👋 (#6600)
remove omegaconf 👋
2024-01-16 21:55:41 +05:30
JuanCarlosPi
dff35a86e4 Change in ip-adapter docs. CLIPVisionModelWithProjection should be im… (#6597)
Change in ip-adapter docs. CLIPVisionModelWithProjection should be imported from transformers, not diffusers
2024-01-16 08:18:13 -08:00
Yondon Fu
8842bcadb9 [SVD] Return np.ndarray when output_type="np" (#6507)
[SVD] Fix output_type="np"
2024-01-16 14:51:36 +05:30
Steve Rhoades
181280baba Fixes training resuming: Advanced Dreambooth LoRa Training (#6566)
* Fixes #6418 Advanced Dreambooth LoRa Training

* change order of import to fix nit

* fix nit, use cast_training_params

* remove torch.compile fix, will move to a new PR

* remove unnecessary import
2024-01-16 14:30:49 +05:30
Charchit Sharma
53f498d2a4 Use of Posix to better support Windows compatibility in testing_utils (#6587)
* changes in utils

* removed loc
2024-01-16 10:27:29 +02:00
Charchit Sharma
990860911f change to posix for better Windows support for lora loaders (#6590)
* posix lora

* changes and style fix
2024-01-16 13:46:29 +05:30
Fabio Rigano
23eed39702 Fix path generation in IP Adapter (#6564)
* Fix path generation on Windows

* Update set_default_attn_processors

* Use pathlib

* Fix quality

* Fix copy

* Revert changes in set_default_attn_processors

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-01-16 11:25:22 +05:30
YiYi Xu
fefed44543 update slow test for SDXL k-diffusion pipeline (#6588)
update expected slice
2024-01-15 18:54:33 -10:00
Dong
814f56d2fe 🐛 fix ip-adapter controlnet img2img missing code (#6528)
* 🐛 fix ip-adapter controlnet img2img missing code

* 📝 edit test

* 📝 edit test

* 📝 run make style and quality

* 🎨 remove slow tests
2024-01-15 16:41:09 -10:00
SangKim
96d6e16550 Enable image resizing to adjust its height and width in StableDiffusionXLInstructPix2PixPipeline (#6581)
* Enable image resizing to adjust its height and width in StableDiffusionXLInstructPix2PixPipeline

* Ensure that validation is performed at every 'validation_step', not at every step
2024-01-16 07:50:34 +05:30
Aryan V S
c11de13588 [training] fix training resuming problem for fp16 (SD LoRA DreamBooth) (#6554)
* fix training resume

* update

* update
2024-01-16 07:27:06 +05:30
Patrick von Platen
357855f8fc [Docs] Fix controlnet indent (#6578) 2024-01-15 18:12:55 +02:00
Fabio Rigano
f825221b5d [Community Pipeline] IPAdapter FaceID (#6276)
* Add support for IPAdapter FaceID

* Add docs

* Move subfolder to kwargs

* Fix quality

* Fix image encoder loading

* Fix loading + add test

* Move to community folder

* Fix style

* Revert constant update

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-01-15 16:43:54 +02:00
Aryan V S
119d734f6e [AnimateDiff+Controlnet] Fix multicontrolnet support (#6551)
* fix multicontrolnet support

* update README with multicontrolnet example
2024-01-15 16:36:54 +02:00
Sayak Paul
cb4b3f0b78 [OmegaConf] replace it with yaml (#6488)
* remove omegaconf from convert_from_ckpt.

* remove from single_file.

* change to string-based subscription.

* style

* okay

* fix: vae_param

* no . indexing.

* style

* style

* turn getattrs into explicit if/else

* style

* propagate changes to ldm_uncond.

* propagate to gligen

* propagate to if.

* fix: quotes.

* propagate to audioldm.

* propagate to audioldm2

* propagate to musicldm.

* propagate to vq_diffusion

* propagate to zero123.

* remove omegaconf from diffusers codebase.
2024-01-15 20:02:10 +05:30
Haofan Wang
3d574b3bbe Fix a bug of flip in SDXL training script (#6547)
* Update train_text_to_image_sdxl.py

* Update train_text_to_image_lora_sdxl.py
2024-01-15 16:28:04 +02:00
Charchit Sharma
09903774d9 Make T2I Adapter SDXL Training Script torch.compile compatible (#6577)
update for t2i_adapter
2024-01-15 19:42:56 +05:30
dependabot[bot]
d6a70d8ba8 Bump jinja2 from 3.1.2 to 3.1.3 in /examples/research_projects/realfill (#6539)
Bumps [jinja2](https://github.com/pallets/jinja) from 3.1.2 to 3.1.3.
- [Release notes](https://github.com/pallets/jinja/releases)
- [Changelog](https://github.com/pallets/jinja/blob/main/CHANGES.rst)
- [Commits](https://github.com/pallets/jinja/compare/3.1.2...3.1.3)

---
updated-dependencies:
- dependency-name: jinja2
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-15 16:10:10 +02:00
Charchit Sharma
e3103e171f Make InstructPix2Pix SDXL Training Script torch.compile compatible (#6576)
* changes for pix2pix_sdxl

* style fix
2024-01-15 17:54:03 +05:30
Charchit Sharma
b053053ac9 Make InstructPix2Pix Training Script torch.compile compatible (#6558)
* added torch.compile for pix2pix

* required changes
2024-01-15 17:03:22 +05:30
Vinh H. Pham
08702fc1cb Make text-to-image SDXL LoRA Training Script torch.compile compatible (#6556)
make compile compatible
2024-01-15 16:58:16 +05:30
Vinh H. Pham
7ce89e979c Make text-to-image SD LoRA Training Script torch.compile compatible (#6555)
make compile compatible
2024-01-15 16:55:08 +05:30
gzguevara
05faf3263b SDXL text-to-image torch compatible (#6550)
* torch compatible

* code quality fix

* ruff style

* ruff format
2024-01-15 16:49:11 +05:30
Sayak Paul
a080f0d3a2 [Training Utils] create a utility for casting the lora params during training. (#6553)
create a utility for casting the lora params during training.
2024-01-15 13:51:13 +05:30
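A toy sketch of what the new utility does: it upcasts only the parameters that require gradients (the trainable LoRA weights in practice), leaving frozen weights in half precision:

```python
import torch
from diffusers.training_utils import cast_training_params

# Toy stand-in: a frozen "base" layer kept in fp16 and a trainable "adapter" layer.
model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.Linear(8, 8)).to(torch.float16)
for p in model[0].parameters():
    p.requires_grad_(False)

cast_training_params(model, dtype=torch.float32)  # only requires_grad params are upcast
print({name: p.dtype for name, p in model.named_parameters()})
```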
Sayak Paul
79df50388d [Training] fix training resuming problem when using FP16 (SDXL LoRA DreamBooth) (#6514)
* fix: training resume from fp16.

* add: comment

* remove residue from another branch.

* remove more residues.

* thanks to Younes; no hacks.

* style.

* clean things a bit and modularize _set_state_dict_into_text_encoder

* add comment about the fix detailed.
2024-01-12 17:11:06 +05:30
Vinh H. Pham
7d631825b0 Make Dreambooth SD Training Script torch.compile compatible (#6532)
* support compile

* make style

* move unwrap_model inside function

* change unwrap call

* run make style

* Update examples/dreambooth/train_dreambooth.py

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Revert "Update examples/dreambooth/train_dreambooth.py"

This reverts commit 70ab09732e.

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-01-12 12:50:15 +05:30
gzguevara
33d2b5b087 SD text-to-image torch compile compatible (#6519)
* added unwrapper

* fix typo
2024-01-12 09:28:35 +05:30
Suvaditya Mukherjee
f486d34b04 Make ControlNet SD Training Script torch.compile compatible (#6525)
* update: make controlnet script torch compile compatible

Signed-off-by: Suvaditya Mukherjee <suvadityamuk@gmail.com>

* update: correct earlier mistakes for compilation

Signed-off-by: Suvaditya Mukherjee <suvadityamuk@gmail.com>

* update: fix code style issues

Signed-off-by: Suvaditya Mukherjee <suvadityamuk@gmail.com>

---------

Signed-off-by: Suvaditya Mukherjee <suvadityamuk@gmail.com>
2024-01-12 09:27:26 +05:30
Charchit Sharma
e44b205e0b Make ControlNet SDXL Training Script torch.compile compatible (#6526)
* make torch.compile compatible

* fix quality
2024-01-12 09:25:09 +05:30
Vinh H. Pham
60cb44323d Make Dreambooth SD LoRA Training Script torch.compile compatible (#6534)
support compile
2024-01-12 09:24:03 +05:30
Radamés Ajna
1dd0ac9401 [DPO Training] pass tracker name as argument (#6542)
pass tracker name as argumentw
2024-01-12 09:15:39 +05:30
Yassine El Boudouri
c6b04589b6 Remove conversion to RGB (#6479)
* Remove conversion to RGB

* Add a Conversion Function

* Add type hint for convert_method

* Update src/diffusers/utils/loading_utils.py

Update docstring

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update docstring

* Optimize imports

* Optimize imports (2)

* Reformat code

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-01-12 07:20:24 +05:30
Sayak Paul
5dc3471380 [SVD] support generators that are created on GPU (#6484)
* debug generator

* fix?

* fix?

* fix

* remove print.

* revert none check
2024-01-11 20:08:18 +05:30
Aryan V S
9df566e6da [Community] StyleAligned Pipeline (#6489)
* add stylealigned sdxl pipeline

* bugfix

* update docs

* remove einops dependency

* update README

* update example docstring
2024-01-11 14:35:55 +01:00
Sayak Paul
be0b425762 [Training] make checkpointing compatible when using torch.compile (part II) (#6511)
make checkpointing compatible when using torch.compile.
2024-01-11 18:37:30 +05:30
jquintanilla4
da843b3d53 .load_ip_adapter in StableDiffusionXLAdapterPipeline (#6246)
* Added testing notebook and .load_ip_adapter to XLAdapterPipeline

* Added annotations

* deleted testing notebook

* Update src/diffusers/pipelines/t2i_adapter/pipeline_stable_diffusion_xl_adapter.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* code clean up

* Add feature_extractor and image_encoder to components

---------

Co-authored-by: YiYi Xu <yixu310@gmail.com>
2024-01-11 13:04:08 +05:30
dg845
17cece072a Fix bug in LCM Distillation Scripts when args.unet_time_cond_proj_dim is used (#6523)
* Fix bug where unet's time_cond_proj_dim is not set correctly if using args.unet_time_cond_proj_dim.

* make style
2024-01-11 08:21:07 +05:30
Steven Liu
a551ddf928 [docs] mask_blur and padding_mask_crop (#6498)
new inpaint features
2024-01-10 08:14:34 -08:00
Steven Liu
1d57892980 [docs] Callbacks (#6471)
edits
2024-01-10 08:14:07 -08:00
antoine-scenario
3e8b63216e Add IP-Adapter to StableDiffusionXLControlNetImg2ImgPipeline (#6293)
* add IP-Adapter to StableDiffusionXLControlNetImg2ImgPipeline

Update src/diffusers/pipelines/controlnet/pipeline_controlnet_sd_xl_img2img.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

fix tests

* fix failing test

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-01-09 22:02:11 -10:00
YiYi Xu
dd4459ad79 [Refactor] splitingResnetBlock2D into multiple blocks (#6166)
---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-01-09 21:38:05 -10:00
YiYi Xu
6313645b6b add StableDiffusionXLKDiffusionPipeline (#6447)
---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2024-01-09 16:29:01 -10:00
Rahul Raman
2d1f2182cc example: Train Instruct pix2 pix with lora implementation (#6469)
* base template file - train_instruct_pix2pix.py

* additional import and parser argument requried for lora

* finetune only instructpix2pix model -- no need to include these layers

* inject lora layers

* freeze unet model -- only lora layers are trained

* training modifications to train only lora parameters

* store only lora parameters

* move train script to research project

* run quality and style code checks

* move train script to a new folder

* add README

* update README

* update references in README

---------

Co-authored-by: Rahul Raman <rahulraman@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-01-10 06:38:19 +05:30
Steven Liu
3be7c96e28 [docs] Stable video diffusion (#6472)
svd
2024-01-09 09:21:58 -08:00
Steven Liu
3c79dd9dbe [docs] PEFT adapter API (#6499)
follow up
2024-01-09 08:09:15 -08:00
Steven Liu
9d767916da [docs] Fast diffusion (#6470)
* edits

* fix

* feedback
2024-01-09 08:08:31 -08:00
Patrick von Platen
ae7cd5ad4c Link issue template to discussions 2024-01-09 17:00:15 +01:00
Sayak Paul
4497b3ec98 [Training] make DreamBooth SDXL LoRA training script compatible with torch.compile (#6483)
* make it torch.compile compatible

* make the text encoder compatible too.

* style
2024-01-09 20:11:26 +05:30
Yifan Zhou
fc63ebdd3a [Community Pipeline] Rerender-A-Video: Zero-Shot Video-to-Video Translation (#6332)
* upload codes and doc

* lint

* lint

* lint

* update code

* remove blank lines

* Fix load url
2024-01-09 14:55:34 +01:00
Sayak Paul
8b6fae4ce5 [SVD] fix: vae type (#6475)
fix: vae type
2024-01-09 14:50:02 +01:00
jiqing-feng
aa1797e109 enable stable-xl textual inversion (#6421)
* enable stable-xl textual inversion

* check if optimizer_2 exists

* check text_encoder_2 before using

* add textual inversion for sdxl in a single file

* fix style

* fix example style

* reset for error changes

* add readme for sdxl

* fix style

* disable autocast as it will cause cast error when weight_dtype=bf16

* fix spelling error

* fix style and readme and 8bit optimizer

* add README_sdxl.md link

* add tracker key on log_validation

* run style

* rm the second center crop

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-01-09 15:12:33 +05:30
Patrick von Platen
5bacc2f5af [SAG] Support more schedulers, add better error message and make tests faster (#6465)
* finish

* finish

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-01-09 09:24:38 +05:30
Yasuna
6ae7e8112a [Docs] update: tutorials ja | INDEX.md, TUTORIAL_OVERVIEW.md, TOCTREE.yml (#6338)
* add tutorials to toctree.yml

* fix title

* fix words

* add overview ja

* fix diffusion to 拡散

* fix line 21

* add space

* delete supported pipeline

* fix tutorial_overview.md

* fix space

* fix typo

* Delete docs/source/ja/tutorials/using_peft_for_inference.md

this file is not translated

* Delete docs/source/ja/tutorials/basic_training.md

this file is not translated

* Delete docs/source/ja/tutorials/autopipeline.md

this file is not translated

* fix toctree
2024-01-08 09:06:46 -08:00
Sayak Paul
e0f349c2b0 correct reviewers in PR template (#6485) 2024-01-08 19:12:04 +05:30
Sayak Paul
774f5c4581 minor changes to the SVD doc (#6466)
minor changes
2024-01-06 08:40:46 +05:30
Sayak Paul
a483a8eddf Rename REAMDE.md to README.md 2024-01-05 20:49:44 +05:30
Lucain
9a9daee724 Fix offline mode import (#6467) 2024-01-05 15:34:40 +01:00
Sayak Paul
585f941366 [Core] introduce PeftAdapterMixin module. (#6416)
* introduce integrations module.

* remove duplicate methods.

* better imports.

* move to loaders.py

* remove peftadaptermixin from modelmixin.

* add: peftadaptermixin selectively.

* add: entry to _toctree

* Empty-Commit
2024-01-05 18:18:28 +05:30
Dhruv Nair
86a26761ac Correctly handle creating model index json files when setting compiled modules in pipelines. (#6436)
update
2024-01-05 18:02:09 +05:30
Liang Hou
6ef2b8a92f Fix amused paper link (#6462) 2024-01-05 13:12:09 +01:00
Vinh H. Pham
3848606c7e [Community Pipeline] Add gluegen (#6433)
* init works

* add gluegen pipeline

* add gluegen code

* add another way to load language adapter

* make style

* Update README.md

* change doc
2024-01-05 13:05:40 +01:00
Sayak Paul
2a97067b84 [Experimental] Diffusion LoRA DPO training (#6422)
* add: experimental script for diffusion dpo training.

* random_crop cli.

* fix: caption tokenization.

* fix: pixel_values index.

* fix: grad?

* debug

* fix: reduction.

* fixes in the loss calculation.

* style

* fix: unwrap call.

* fix: validation inference.

* add: initial sdxl script

* debug

* make sure images in the tuple are of same res

* fix model_max_length

* report print

* boom

* fix: numerical issues.

* fix: resolution

* comment about resize.

* change the order of the training transformation.

* save call.

* debug

* remove print

* manually detaching necessary?

* use the same vae for validation.

* add: readme.
2024-01-05 16:40:06 +05:30
Sayak Paul
ae060fc4f1 [feat] introduce unload_lora(). (#6451)
* introduce unload_lora.

* fix-copies
2024-01-05 16:22:11 +05:30
Sayak Paul
9d945b2b90 0.25.0 post release (#6358)
* post release

* style

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2024-01-05 16:13:27 +05:30
Junsheng121
d184291c7d null-text-inversion-pipeline-implementation (#6329)
* null-text-inversion-implementation

* edited

* edited

* edited

* edited

* edited

* edit

* makestyle

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-01-05 11:35:21 +01:00
Sayak Paul
0a0bb526aa [LoRA deprecation] LoRA deprecation trilogy (#6450)
* edebug

* debug

* more debug

* more more debug

* remove tests for LoRAAttnProcessors.

* rename
2024-01-05 15:48:20 +05:30
Linoy Tsaban
2fada8dc1b [bug fix] fixes #6444 - checkpointing save issue in advanced dreambooth lora sdxl script (#6464)
* unwrap text encoder when saving hook only for full text encoder tuning

* unwrap text encoder when saving hook only for full text encoder tuning

* save embeddings in each checkpoint as well

* save embeddings in each checkpoint as well

* save embeddings in each checkpoint as well

* Update examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-01-05 15:35:24 +05:30
jiqing-feng
f2d51a28f7 Intel Gen 4 Xeon and later support bf16 (#6367)
* Intel Gen 4 Xeon and later support bf16

* fix bf16 notes
2024-01-05 11:47:28 +05:30
Horseee
811fd06292 [Doc] Add DeepCache in section optimization/General optimizations (#6390)
* add documentation for DeepCache

* fix typo

* add wandb url for DeepCache

* fix some typos

* add item in _toctree.yml

* update formats for arguments

* Update deepcache.md

* Update docs/source/en/optimization/deepcache.md

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* add StableDiffusionXLPipeline in doc

* Separate SDPipeline and SDXLPipeline

* Add the paper link of ablation experiments for hyper-parameters

* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2024-01-05 09:57:08 +05:30
dg845
f3d1333e02 Improve LCM(-LoRA) Distillation Scripts (#6420)
* Make WDS pipeline interpolation type configurable.

* Make the VAE encoding batch size configurable.

* Make lora_alpha and lora_dropout configurable for LCM LoRA scripts.

* Generalize scalings_for_boundary_conditions function and make the timestep scaling configurable.

* Make LoRA target modules configurable for LCM-LoRA scripts.

* Move resolve_interpolation_mode to src/diffusers/training_utils.py and make interpolation type configurable in non-WDS script.

* apply suggestions from review
2024-01-05 06:55:13 +05:30
Steven Liu
acd926f4f2 [docs] Fix local links (#6440)
fix local links
2024-01-04 09:59:11 -08:00
Lucain
691d8d3e15 Respect offline mode when loading pipeline (#6456)
* Respect offline mode when loading model

* default to local entry if connectionerror
2024-01-04 17:05:55 +01:00
Sayak Paul
e7c0af5e71 add: amused paper link. (#6453) 2024-01-04 13:44:54 +05:30
Sayak Paul
107e02160a [LoRA tests] fix stuff related to assertions arising from the recent changes. (#6448)
* debug

* debug test_with_different_scales_fusion_equivalence

* use the right method.

* place it right.

* let's see.

* let's see again

* alright then.

* add a comment.
2024-01-04 12:55:15 +05:30
sayakpaul
6dbef45e6e Revert "debug"
This reverts commit 7715e6c31c.
2024-01-04 10:39:38 +05:30
sayakpaul
7715e6c31c debug 2024-01-04 10:39:00 +05:30
sayakpaul
05b3d36a25 Revert "debug"
This reverts commit fb4aec0ce3.
2024-01-04 10:38:04 +05:30
sayakpaul
fb4aec0ce3 debug 2024-01-04 10:37:28 +05:30
Sayak Paul
63de23e3db disable running peft non-peft lora test in the peft env. (#6437)
* disable running peft non-peft lora test in the peft env.

* Empty-Commit
2024-01-04 10:18:46 +05:30
Chi
2993257f2a Better way to write binarize() function. (#6394)
* I added a new docstring to the class. This makes it easier for other developers to understand what it does and where it is used.

* Update src/diffusers/models/unet_2d_blocks.py

This change was suggested by the maintainer.

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update src/diffusers/models/unet_2d_blocks.py

Add suggested text

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update unet_2d_blocks.py

I changed the Parameter to Args text.

* Update unet_2d_blocks.py

proper indentation set in this file.

* Update unet_2d_blocks.py

a little bit of change in the act_fun argument line.

* I ran the black command to reformat the code style

* Update unet_2d_blocks.py

add a docstring similar to the one in the original diffusion repository.

* Better way to write binarize function

* Solve check_code_quality error

* My mistake: pushed the pull request without reformatting the file

* Update image_processor.py

* remove extra variable and space

* Update image_processor.py

* Run ruff library to reformat my file

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
2024-01-04 09:32:08 +05:30
Sayak Paul
aad18faa3e Update README_sdxl.md to update the LR (#6432)
Update README_sdxl.md
2024-01-03 20:55:51 +05:30
Sayak Paul
d700140076 [LoRA deprecation] handle rest of the stuff related to deprecated lora stuff. (#6426)
* handle rest of the stuff related to deprecated lora stuff.

* fix: copies

* don't modify the uNet in-place.

* fix: temporal autoencoder.

* manually remove lora layers.

* don't copy unet.

* alright

* remove lora attn processors from unet3d

* fix: unet3d.

* style

* Empty-Commit
2024-01-03 20:54:09 +05:30
Sayak Paul
2e4dc3e25d [LoRA] add: test to check if peft loras are loadable in non-peft envs. (#6400)
* add: test to check if peft loras are loadable in non-peft envs.

* add torch_device approrpiately.

* fix: get_dummy_inputs().

* test logits.

* rename

* debug

* debug

* fix: generator

* new assertion values after fixing the seed.

* shape

* remove print statements and settle this.

* to update values.

* change values when lora config is initialized under a fixed seed.

* update colab link

* update notebook link

* sanity restored by getting the exact same values without peft.
2024-01-03 09:57:49 +05:30
YiYi Xu
3e2961f0b4 [doc] update inpaint doc to use apply_overlay (#6364)
add doc

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2024-01-02 11:16:36 -10:00
Vinh H. Pham
79c380bc80 Correct how apply_overlay read crop_coords (#6417)
correct reading variables
2024-01-02 19:31:12 +01:00
Aryan V S
e30b661437 Update lpw_xl pipeline to latest diffusers (#6411)
* add clip_skip, freeu, qkv

* fix

* add ip-adapter support

* callback on step end

* update

* fix NoneType bug

* fix

* add guidance scale embedding

* add textual inversion
2024-01-02 16:28:45 +01:00
Linoy Tsaban
b4077af212 [bug fix] using snr gamma and prior preservation loss in the dreambooth lora sdxl training scripts (#6356)
* change timesteps used to calculate snr when --with_prior_preservation is enabled

* change timesteps used to calculate snr when --with_prior_preservation is enabled (canonical script)

* style

* revert canonical script to before snr gamma change

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-01-02 09:21:39 -06:00
Daniel Socek
9f2bff502e [svd] fix noise_aug_strength type in svd pipe (#6389) 2024-01-02 14:45:07 +01:00
CyrusVorwald
0cb92717f9 add StableDiffusionXLControlNetInpaintPipeline to auto pipeline (#6302)
* add StableDiffusionXLControlNetInpaintPipeline to auto pipeline

* fixed style
2024-01-02 14:44:48 +01:00
Fabio Rigano
86714b72d0 Add unload_ip_adapter method (#6192)
* Add unload_ip_adapter method

* Update attn_processors with original layers

* Add test

* Use set_default_attn_processor

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-01-02 14:40:46 +01:00
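A minimal sketch of how the new unload_ip_adapter() method (#6192) fits into an IP-Adapter workflow; the reference image path is a placeholder.

```python
# Sketch: attach an IP-Adapter, run inference, then detach it again.
import torch
from diffusers import StableDiffusionPipeline
from diffusers.utils import load_image

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
pipe.set_ip_adapter_scale(0.6)

reference_image = load_image("reference.png")  # placeholder path for any RGB image
image = pipe("a cat, best quality", ip_adapter_image=reference_image).images[0]

# Detach the adapter: drops the image encoder and restores the default attention processors.
pipe.unload_ip_adapter()
```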
Sayak Paul
61f6c5472a [LoRA] Remove the use of depcrecated loRA functionalities such as LoRAAttnProcessor (#6369)
* start deprecating loraattn.

* fix

* wrap into unet_lora_state_dict

* utilize text_encoder_lora_params

* utilize text_encoder_attn_modules

* debug

* debug

* remove print

* don't use text encoder for test_stable_diffusion_lora

* load the procs.

* set_default_attn_processor

* fix: set_default_attn_processor call.

* fix: lora_components[unet_lora_params]

* checking for 3d.

* 3d.

* more fixes.

* debug

* debug

* debug

* debug

* more debug

* more debug

* more debug

* more debug

* more debug

* more debug

* hack.

* remove comments and prep for a PR.

* appropriate set_lora_weights()

* fix

* fix: test_unload_lora_sd

* fix: test_unload_lora_sd

* use default attention processors.

* debug

* debug nan

* debug nan

* debug nan

* use NaN instead of inf

* remove comments.

* fix: test_text_encoder_lora_state_dict_unchanged

* attention processor default

* default attention processors.

* default

* style
2024-01-02 18:14:04 +05:30
lookas
17546020fc Fix #6409 (#6410)
* Update value_guided_sampling.py

Fix #6409

* Comply code style

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-01-02 11:16:21 +05:30
2510
8a366b835c Fix gradient-checkpointing option is ignored in SDXL+LoRA training. (#6388) (#6402)
* Fix gradient-checkpointing option is ignored in SDXL+LoRA training. (#6388)

* Fix gradient-checkpointing option is ignored in SD+LoRA training.

* Fix gradient checkpoint is not applied to text encoders. (SDXL+LoRA)

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-01-01 08:51:04 +05:30
Sayak Paul
61d223c884 add: CUDA graph details. (#6408) 2023-12-31 13:43:26 +05:30
apolinário
bf725e044e Add new WebUI conversion state_dict_utils to __init__ utils (#6404)
* Add new state_dict_utils to __init__ utils

* style

---------

Co-authored-by: multimodalart <joaopaulo.passos+multimodal@gmail.com>
2023-12-30 09:31:39 -06:00
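A rough sketch of what the newly exported state-dict utilities (#6404) enable; the helper names (convert_all_state_dict_to_peft, convert_state_dict_to_kohya) and the file names are assumptions based on the conversion script added alongside this change.

```python
# Rough sketch (helper names assumed from the accompanying conversion script):
# turn a Diffusers-format SDXL LoRA into the Kohya layout used by A1111/WebUI.
from safetensors.torch import load_file, save_file
from diffusers.utils import convert_all_state_dict_to_peft, convert_state_dict_to_kohya

diffusers_lora = load_file("pytorch_lora_weights.safetensors")  # placeholder file name
peft_lora = convert_all_state_dict_to_peft(diffusers_lora)      # normalize key naming first
kohya_lora = convert_state_dict_to_kohya(peft_lora)
save_file(kohya_lora, "lora_webui.safetensors")
```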
apolinário
1622265e13 Add WebUI format support to Advanced Training Script (#6403)
* Add WebUI format support to Advanced Training Script

* style

---------

Co-authored-by: multimodalart <joaopaulo.passos+multimodal@gmail.com>
2023-12-30 08:45:49 -06:00
apolinário
0b63ad5ad5 Create convert_diffusers_sdxl_lora_to_webui.py (#6395)
* Create convert_diffusers_sdxl_lora_to_webui.py

* Move some conversion logic to utils

* fix logging import

* Add usage example

---------

Co-authored-by: multimodalart <joaopaulo.passos+multimodal@gmail.com>
2023-12-30 08:15:11 -06:00
Sayak Paul
6a376ceea2 [LoRA] remove unnecessary components from lora peft test suite (#6401)
remove unnecessary components from the LoRA PEFT test suite
2023-12-30 18:25:40 +05:30
gzguevara
9f283b01d2 changed w&b report link (#6387) 2023-12-29 19:49:11 +05:30
Sayak Paul
203724e9d9 [Docs] add note on fp16 in fast diffusion (#6380)
add note on fp16
2023-12-29 09:38:50 +05:30
gzguevara
e7044a4221 multi-subject-dreambooth-inpainting with 🤗 datasets (#6378)
* files added

* fixing code quality

* fixing code quality

* fixing code quality

* fixing code quality

* sorted import block

* separated the wandb import

* ruff on script

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-12-29 09:33:49 +05:30
Sayak Paul
034b39b8cb [docs] add details concerning diffusers-specific bits. (#6375)
add details concerning diffusers-specific bits.
2023-12-28 23:12:49 +05:30
Sayak Paul
2db73f4a50 remove delete documentation trigger workflows. (#6373) 2023-12-28 18:26:14 +05:30
Adrian Punga
84d7faebe4 Fix support for MPS in KDPM2AncestralDiscreteScheduler (#6365)
Fix support for MPS

MPS doesn't support float64
2023-12-28 10:22:02 +01:00
YiYi Xu
4c483deb90 [refactor embeddings] gligen + ip-adapter (#6244)
* refactor ip-adapter-imageproj, gligen

---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2023-12-27 18:48:42 -10:00
Sayak Paul
1ac07d8a8d [Training examples] Follow up of #6306 (#6346)
* add to dreambooth lora.

* add: t2i lora.

* add: sdxl t2i lora.

* style

* lcm lora sdxl.

* unwrap

* fix: enable_adapters().
2023-12-28 07:37:50 +05:30
apolinário
1fff527702 Fix keys for lora format on advanced training scripts (#6361)
fix keys for lora format on advanced training scripts
2023-12-27 11:38:03 -06:00
apolinário
645a62bf3b Add PEFT to advanced training script (#6294)
* Fix ProdigyOPT in SDXL Dreambooth script

* style

* style

* Add PEFT to Advanced Training Script

* style

* style

*  style 

* change order for logic operation

* add lora alpha

* style

* Align PEFT to new format

* Update train_dreambooth_lora_sdxl_advanced.py

Apply #6355 fix

---------

Co-authored-by: multimodalart <joaopaulo.passos+multimodal@gmail.com>
2023-12-27 10:00:32 -03:00
Dhruv Nair
6414d4e4f9 Fix chunking in SVD (#6350)
fix
2023-12-27 13:07:41 +01:00
Andy W
43672b4a22 Fix "push_to_hub only create repo in consistency model lora SDXL training script" (#6102)
* fix

* style fix

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-12-27 15:25:19 +05:30
dg845
9df3d84382 Fix LCM distillation bug when creating the guidance scale embeddings using multiple GPUs. (#6279)
Fix bug when creating the guidance embeddings using multiple GPUs.

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-12-27 14:25:21 +05:30
Jianqi Pan
c751449011 fix: use retrieve_latents (#6337) 2023-12-27 10:44:26 +05:30
Dhruv Nair
c1e8bdf1d4 Move ControlNetXS into Community Folder (#6316)
* update

* update

* update

* update

* update

* make style

* remove docs

* update

* move to research folder.

* fix-copies

* remove _toctree entry.

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-12-27 08:15:23 +05:30
Sayak Paul
78b87dc25a [LoRA] make LoRAs trained with peft loadable when peft isn't installed (#6306)
* spit diffusers-native format from the get go.

* rejig the peft_to_diffusers mapping.
2023-12-27 08:01:10 +05:30
Will Berman
0af12f1f8a amused update links to new repo (#6344)
* amused update links to new repo

* lint
2023-12-26 22:46:28 +01:00
Justin Ruan
6e123688dc Remove unused parameters and fixed FutureWarning (#6317)
* Remove unused parameters and fixed `FutureWarning`

* Fixed wrong config instance

* update unittest for `DDIMInverseScheduler`
2023-12-26 22:09:10 +01:00
YiYi Xu
f0a588b8e2 adding auto1111 features to inpainting pipeline (#6072)
* add inpaint_full_res

* fix

* update

* move get_crop_region to image processor

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* move apply_overlay to image processor

---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-12-26 10:20:29 -10:00
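A hedged sketch of the Auto1111-style "inpaint only the masked region" behavior added in #6072; it assumes the pipeline-level knob built on get_crop_region/apply_overlay is the padding_mask_crop call argument, and the image paths are placeholders.

```python
# Hedged sketch: inpaint only a crop around the mask, then paste the result back
# (the padding_mask_crop argument is assumed to be the knob added here).
import torch
from diffusers import StableDiffusionInpaintPipeline
from diffusers.utils import load_image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

init_image = load_image("room.png")        # placeholder paths
mask_image = load_image("room_mask.png")

result = pipe(
    prompt="a modern armchair",
    image=init_image,
    mask_image=mask_image,
    padding_mask_crop=32,  # margin (in pixels) kept around the masked region
).images[0]
```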
priprapre
fa31704420 [SDXL-IP2P] Update README_sdxl, Replace the link for wandb log with the correct run (#6270)
Replace the link for wandb log with the correct run
2023-12-26 21:13:11 +01:00
Sayak Paul
9d79991da0 [Docs] fix: video rendering on svd. (#6330)
fix: video rendering on svd.
2023-12-26 21:05:22 +01:00
Will Berman
7d865ac9c6 amused other pipelines docs (#6343)
other pipelines
2023-12-26 20:20:32 +01:00
Dhruv Nair
fb02316db8 Add AnimateDiff conversion scripts (#6340)
* add scripts

* update
2023-12-26 22:40:00 +05:30
Dhruv Nair
98a2b3d2d8 Update Animatediff docs (#6341)
* update

* update

* update
2023-12-26 22:39:46 +05:30
Dhruv Nair
2026ec0a02 Interruptable Pipelines (#5867)
* add interruptable pipelines

* add tests

* updatemsmq

* add interrupt property

* make fix copies

* Revert "make fix copies"

This reverts commit 914b35332b.

* add docs

* add tutorial

* Update docs/source/en/tutorials/interrupting_diffusion_process.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/en/tutorials/interrupting_diffusion_process.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* update

* fix quality issues

* fix

* update

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-12-26 22:39:26 +05:30
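A sketch of how the interruptible-pipelines feature (#5867) might be used, assuming the flag checked inside the denoising loop is `_interrupt` and that it can be set from a callback_on_step_end callback.

```python
# Sketch: stop the denoising loop early; assumes the flag added here is `_interrupt`
# and that it is checked on every step of the loop.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def stop_after_ten_steps(pipeline, step, timestep, callback_kwargs):
    if step == 10:
        pipeline._interrupt = True  # remaining steps are skipped
    return callback_kwargs

image = pipe(
    "a photo of an astronaut",
    num_inference_steps=50,
    callback_on_step_end=stop_after_ten_steps,
).images[0]
```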
dg845
3706aa3305 Add rescale_betas_zero_snr Argument to DDPMScheduler (#6305)
* Add rescale_betas_zero_snr argument to DDPMScheduler.

* Propagate rescale_betas_zero_snr changes to DDPMParallelScheduler.

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-12-26 17:54:30 +01:00
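A minimal sketch of the new rescale_betas_zero_snr flag on DDPMScheduler (#6305); it is typically paired with v-prediction checkpoints, per the zero-terminal-SNR paper.

```python
# Sketch: opt into zero-terminal-SNR betas on DDPMScheduler via its new config flag.
from diffusers import DDPMScheduler, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.scheduler = DDPMScheduler.from_config(
    pipe.scheduler.config, rescale_betas_zero_snr=True
)
```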
Sayak Paul
d4f10ea362 [Diffusion fast] add doc for diffusion fast (#6311)
* add doc for diffusion fast

* add entry to _toctree

* Apply suggestions from code review

* fix title

* fix: title entry

* add note about fuse_qkv_projections
2023-12-26 22:19:55 +05:30
Younes Belkada
3aba99af8f [Peft / Lora] Add adapter_names in fuse_lora (#5823)
* add adapter_name in fuse

* add tesrt

* up

* fix CI

* adapt from suggestion

* Update src/diffusers/utils/testing_utils.py

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* change to `require_peft_version_greater`

* change variable names in test

* Update src/diffusers/loaders/lora.py

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* break into 2 lines

* final comments

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2023-12-26 16:54:47 +01:00
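A sketch of the new adapter_names argument to fuse_lora() (#5823); the LoRA repository names are placeholders.

```python
# Sketch: fuse only a chosen subset of loaded adapters with the new adapter_names argument.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

pipe.load_lora_weights("user/pixel-lora", adapter_name="pixel")  # placeholder repos
pipe.load_lora_weights("user/toy-lora", adapter_name="toy")

# Fuse only "pixel" into the base weights at scale 0.8; "toy" stays unfused.
pipe.fuse_lora(adapter_names=["pixel"], lora_scale=0.8)
image = pipe("a pixel-art castle").images[0]
pipe.unfuse_lora()
```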
Sayak Paul
6683f97959 [Training] Add datasets version of LCM LoRA SDXL (#5778)
* add: script to train lcm lora for sdxl with 🤗 datasets

* suit up the args.

* remove comments.

* fix num_update_steps

* fix batch unmarshalling

* fix num_update_steps_per_epoch

* fix; dataloading.

* fix microconditions.

* unconditional predictions debug

* fix batch size.

* no need to use use_auth_token

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* make vae encoding batch size an arg

* final serialization in kohya

* style

* state dict rejigging

* feat: no separate teacher unet.

* debug

* fix state dict serialization

* debug

* debug

* debug

* remove prints.

* remove kohya utility and make style

* fix serialization

* fix

* add test

* add peft dependency.

* add: peft

* remove peft

* autocast device determination from accelerator

* autocast

* reduce lora rank.

* remove unneeded space

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* style

* remove prompt dropout.

* also save in native diffusers ckpt format.

* debug

* debug

* debug

* better formation of the null embeddings.

* remove space.

* autocast fixes.

* autocast fix.

* hacky

* remove lora_sayak

* Apply suggestions from code review

Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>

* style

* make log validation leaner.

* move back enabled in.

* fix: log_validation call.

* add: checkpointing tests

* taking my chances to see if disabling autocasting has any effect?

* start debugging

* name

* name

* name

* more debug

* more debug

* index

* remove index.

* print length

* print length

* print length

* move unet.train() after add_adapter()

* disable some prints.

* enable_adapters() manually.

* remove prints.

* some changes.

* fix params_to_optimize

* more fixes

* debug

* debug

* remove print

* disable grad for certain contexts.

* Add support for IPAdapterFull (#5911)

* Add support for IPAdapterFull


Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

---------

Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Fix a bug in `add_noise` function  (#6085)

* fix

* copies

---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>

* [Advanced Diffusion Script] Add Widget default text (#6100)

add widget

* [Advanced Training Script] Fix pipe example (#6106)

* IP-Adapter for StableDiffusionControlNetImg2ImgPipeline (#5901)

* adapter for StableDiffusionControlNetImg2ImgPipeline

* fix-copies

* fix-copies

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* IP adapter support for most pipelines (#5900)

* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_upscale.py

* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py

* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_instruct_pix2pix.py

* update tests

* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_panorama.py

* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py

* support ip-adapter in src/diffusers/pipelines/stable_diffusion_safe/pipeline_stable_diffusion_safe.py

* support ip-adapter in src/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_text2img.py

* support ip-adapter in src/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_img2img.py

* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py

* revert changes to sd_attend_and_excite and sd_upscale

* make style

* fix broken tests

* update ip-adapter implementation to latest

* apply suggestions from review

---------

Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* fix: lora_alpha

* make vae casting conditional/

* param upcasting

* propagate comments from https://github.com/huggingface/diffusers/pull/6145

Co-authored-by: dg845 <dgu8957@gmail.com>

* [Peft] fix saving / loading when unet is not "unet" (#6046)

* [Peft] fix saving / loading when unet is not "unet"

* Update src/diffusers/loaders/lora.py

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* undo stablediffusion-xl changes

* use unet_name to get unet for lora helpers

* use unet_name

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* [Wuerstchen] fix fp16 training and correct lora args (#6245)

fix fp16 training

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* [docs] fix: animatediff docs (#6339)

fix: animatediff docs

* add: note about the new script in readme_sdxl.

* Revert "[Peft] fix saving / loading when unet is not "unet" (#6046)"

This reverts commit 4c7e983bb5.

* Revert "[Wuerstchen] fix fp16 training and correct lora args (#6245)"

This reverts commit 0bb9cf0216.

* Revert "[docs] fix: animatediff docs (#6339)"

This reverts commit 11659a6f74.

* remove tokenize_prompt().

* assistive comments around enable_adapters() and disable_adapters().

---------

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
Co-authored-by: Fabio Rigano <57982783+fabiorigano@users.noreply.github.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: apolinário <joaopaulo.passos@gmail.com>
Co-authored-by: Charchit Sharma <charchitsharma11@gmail.com>
Co-authored-by: Aryan V S <contact.aryanvs@gmail.com>
Co-authored-by: dg845 <dgu8957@gmail.com>
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
2023-12-26 21:22:05 +05:30
Sayak Paul
4e7b0cb396 [docs] fix: animatediff docs (#6339)
fix: animatediff docs
2023-12-26 19:13:49 +05:30
Kashif Rasul
35b81fffae [Wuerstchen] fix fp16 training and correct lora args (#6245)
fix fp16 training

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-12-26 11:40:04 +01:00
Kashif Rasul
e0d8c910e9 [Peft] fix saving / loading when unet is not "unet" (#6046)
* [Peft] fix saving / loading when unet is not "unet"

* Update src/diffusers/loaders/lora.py

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* undo stablediffusion-xl changes

* use unet_name to get unet for lora helpers

* use unet_name

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-12-26 11:39:28 +01:00
dg845
a3d31e3a3e Change LCM-LoRA README Script Example Learning Rates to 1e-4 (#6304)
Change README LCM-LoRA example learning rates to 1e-4.
2023-12-25 21:29:20 +05:30
Jianqi Pan
84c403aedb fix: cannot set guidance_scale (#6326)
fix: set guidance_scale
2023-12-25 21:16:57 +05:30
Sayak Paul
f4b0b26f7e [Tests] Speed up example tests (#6319)
* remove validation args from textual inversion tests

* reduce number of train steps in textual inversion tests

* fix: directories.

* debug

* fix: directories.

* remove validation tests from textual inversion

* try reducing the time of test_text_to_image_checkpointing_use_ema

* fix: directories

* speed up test_text_to_image_checkpointing

* speed up test_text_to_image_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints

* fix

* speed up test_instruct_pix2pix_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints

* set checkpoints_total_limit to 2.

* test_text_to_image_lora_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints speed up

* speed up test_unconditional_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints

* debug

* fix: directories.

* speed up test_instruct_pix2pix_checkpointing_checkpoints_total_limit

* speed up: test_controlnet_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints

* speed up test_controlnet_sdxl

* speed up dreambooth tests

* speed up test_dreambooth_lora_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints

* speed up test_custom_diffusion_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints

* speed up test_text_to_image_lora_sdxl_text_encoder_checkpointing_checkpoints_total_limit

* speed up # checkpoint-2 should have been deleted

* speed up examples/text_to_image/test_text_to_image.py::TextToImage::test_text_to_image_checkpointing_checkpoints_total_limit

* additional speed ups

* style
2023-12-25 19:50:48 +05:30
Sayak Paul
89459a5d56 fix: lora peft dummy components (#6308)
* fix: lora peft dummy components

* fix: dummy components
2023-12-25 11:26:45 +05:30
Sayak Paul
008d9818a2 fix: t2i adapter paper link (#6314) 2023-12-25 10:45:14 +05:30
mwkldeveloper
2d43094ffc fix RuntimeError: Input type (float) and bias type (c10::Half) should be the same in train_text_to_image_lora.py (#6259)
* fix RuntimeError: Input type (float) and bias type (c10::Half) should be the same

* format source code

* format code

* remove the autocast blocks within the pipeline

* add autocast blocks to pipeline caller in train_text_to_image_lora.py
2023-12-24 14:34:35 +05:30
Celestial Phineas
7c05b975b7 Fix typos in the ValueError for a nested image list as StableDiffusionControlNetPipeline input. (#6286)
Fixed typos in the `ValueError` for a nested image list as input.
2023-12-24 14:32:24 +05:30
Dhruv Nair
fe574c8b29 LoRA Unfusion test fix (#6291)
update

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-12-24 14:31:48 +05:30
Sayak Paul
90b9479903 [LoRA PEFT] fix LoRA loading so that correct alphas are parsed (#6225)
* initialize alpha too.

* add: test

* remove config parsing

* store rank

* debug

* remove faulty test
2023-12-24 09:59:41 +05:30
apolinário
df76a39e1b Fix Prodigy optimizer in SDXL Dreambooth script (#6290)
* Fix ProdigyOPT in SDXL Dreambooth script

* style

* style
2023-12-22 06:42:04 -06:00
Bingxin Ke
3369bc810a [Community Pipeline] Add Marigold Monocular Depth Estimation (#6249)
* [Community Pipeline] Add Marigold Monocular Depth Estimation

- add single-file pipeline
- update README

* fix format - add one blank line

* format script with ruff

* use direct image link in example code

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-12-22 15:41:46 +05:30
Pedro Cuenca
7fe47596af Allow diffusers to load with Flax, w/o PyTorch (#6272) 2023-12-22 09:37:30 +01:00
Dhruv Nair
59d1caa238 Remove peft tests from old lora backend tests (#6273)
update
2023-12-22 13:35:52 +05:30
Dhruv Nair
c022e52923 Remove ONNX inpaint legacy (#6269)
update

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-12-22 13:35:21 +05:30
Will Berman
4039815276 open muse (#5437)
amused

rename

Update docs/source/en/api/pipelines/amused.md

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

AdaLayerNormContinuous default values

custom micro conditioning

micro conditioning docs

put lookup from codebook in constructor

fix conversion script

remove manual fused flash attn kernel

add training script

temp remove training script

add dummy gradient checkpointing func

clarify temperatures is an instance variable by setting it

remove additional SkipFF block args

hardcode norm args

rename tests folder

fix paths and samples

fix tests

add training script

training readme

lora saving and loading

non-lora saving/loading

some readme fixes

guards

Update docs/source/en/api/pipelines/amused.md

Co-authored-by: Suraj Patil <surajp815@gmail.com>

Update examples/amused/README.md

Co-authored-by: Suraj Patil <surajp815@gmail.com>

Update examples/amused/train_amused.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

vae upcasting

add fp16 integration tests

use tuple for micro cond

copyrights

remove casts

delegate to torch.nn.LayerNorm

move temperature to pipeline call

upsampling/downsampling changes
2023-12-21 11:40:55 -08:00
Sayak Paul
5b186b7128 [Refactor] move ldm3d out of stable_diffusion. (#6263)
ldm3d.
2023-12-21 18:59:55 +05:30
Sayak Paul
ab0459f2b7 [Deprecated pipelines] remove pix2pix zero from init (#6268)
remove pix2pix zero from init
2023-12-21 18:17:28 +05:30
Sayak Paul
9c7cc36011 [Refactor] move panorama out of stable_diffusion (#6262)
* move panorama out.

* fix: diffedit

* fix: import.

* fix: import
2023-12-21 18:17:05 +05:30
Sayak Paul
325f6c53ed [Refactor] move attend and excite out of stable_diffusion. (#6261)
* move attend and excite out.

* fix: import

* fix diffedit
2023-12-21 16:49:32 +05:30
Benjamin Bossan
43979c2890 TST Fix LoRA test that fails with PEFT >= 0.7.0 (#6216)
See #6185 for context.

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-12-21 11:50:05 +01:00
Sayak Paul
9ea6ac1b07 [Refactor] move sag out of stable_diffusion (#6264)
move sag out of .
2023-12-21 16:09:49 +05:30
Sayak Paul
2c34c7d6dd [Refactor] move gligen out of stable diffusion. (#6265)
* move gligen out of stable diffusion.

* fix: import

* fix import module
2023-12-21 15:26:52 +05:30
Sayak Paul
bffadde126 [Refactor] move k diffusion out of stable_diffusion (#6267)
move k diffusion out of stable_diffusion
2023-12-21 15:24:24 +05:30
YShow
35a969d297 [Training] remove deprecated method from lora scripts again (#6266)
* remove deprecated method from lora scripts

* check code quality
2023-12-21 14:17:52 +05:30
sayakpaul
c5ff469d0e Revert "move attend and excite out of stable_diffusion"
This reverts commit bcecfbc873.
2023-12-21 12:35:58 +05:30
sayakpaul
bcecfbc873 move attend and excite out of stable_diffusion 2023-12-21 12:35:09 +05:30
Sayak Paul
6269045c5b [Refactor] move diffedit out of stable_diffusion (#6260)
* move diffedit out of stable_diffuson

* fix: import

* style

* fix: import
2023-12-21 12:26:36 +05:30
lvzi
6ca9c4af05 fix: unscale fp16 gradient problem & potential error (#6086) (#6231)
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-12-21 09:09:26 +05:30
dependabot[bot]
0532cece97 Bump transformers from 4.34.0 to 4.36.0 in /examples/research_projects/realfill (#6255)
Bump transformers in /examples/research_projects/realfill

Bumps [transformers](https://github.com/huggingface/transformers) from 4.34.0 to 4.36.0.
- [Release notes](https://github.com/huggingface/transformers/releases)
- [Commits](https://github.com/huggingface/transformers/compare/v4.34.0...v4.36.0)

---
updated-dependencies:
- dependency-name: transformers
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-21 09:03:17 +05:30
Sayak Paul
22b45304bf [Refactor upsamplers and downsamplers] separate out upsamplers and downsamplers. (#6128)
* separate out upsamplers and downsamplers.

* import all the necessary blocks in resnet for backward comp.

* move upsample2d and downsample2d to utils.

* move downsample_2d to downsamplers.py

* apply feedback

* fix import

* samplers -> sampling
2023-12-20 21:01:33 +05:30
Beinsezii
457abdf2cf EulerAncestral add rescale_betas_zero_snr (#6187)
* EulerAncestral add `rescale_betas_zero_snr`

Uses same infinite sigma fix from EulerDiscrete. Interestingly the
ancestral version had the opposite problem: too much contrast instead of
too little.

* UT for EulerAncestral `rescale_betas_zero_snr`

* EulerAncestral upcast samples during step()

It helps this scheduler too, particularly when the model is using bf16.

While the noise dtype still matches the model's, it is automatically upcast
for the addition, so the only thing this affects is determinism.

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-12-20 13:09:25 +05:30
hako-mikan
ff43dba7ea [Fix] Fix Regional Prompting Pipeline (#6188)
* Update regional_prompting_stable_diffusion.py

* reformat

* reformat

* reformat

* reformat

* reformat

* reformat

* reformat

* reformat

* reformat

* reformat

* reformat

* reformat

* Update regional_prompting_stable_diffusion.py

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-12-20 10:37:19 +05:30
Steven Liu
5433962992 [docs] Batched seeds (#6237)
batched seed
2023-12-19 16:50:18 -08:00
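The batched-seeds doc above boils down to passing one torch.Generator per image; a minimal sketch.

```python
# Sketch: reproducible per-image seeds for a batch, one torch.Generator per image.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

seeds = [0, 1, 2, 3]
generators = [torch.Generator("cuda").manual_seed(s) for s in seeds]

# One generator per image; any single image can later be reproduced from its own seed.
images = pipe("a photo of a corgi", generator=generators, num_images_per_prompt=len(seeds)).images
```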
raven
df476d9f63 [Docs] Fix a code example in the ControlNet Inpainting documentation (#6236)
fix document on masked image in inpainting controlnet
2023-12-19 13:14:37 -08:00
YiYi Xu
3e71a20650 [refactor embeddings]pixart-alpha (#6212)
pixart-alpha

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2023-12-19 07:07:24 -10:00
Sayak Paul
bf40d7d82a add peft dependency to fast push tests (#6229)
* add peft dependency

* add peft dependency at the correct place.
2023-12-19 13:26:25 +05:30
Dhruv Nair
32ff4773d4 ControlNetXS fixes. (#6228)
update
2023-12-19 11:58:34 +05:30
Sayak Paul
288ceebea5 [T2I LoRA training] fix: unscale fp16 gradient problem (#6119)
* fix: unscale fp16 gradient problem

* fix for dreambooth lora sdxl

* make the type-casting conditional.

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-12-19 09:54:17 +05:30
Sayak Paul
9221da4063 fix: init for vae during pixart tests (#6215)
* fix: init for vae during pixart tests

* print the values

* add flatten

* correct assertion value for test_inference

* correct assertion values for test_inference_non_square_images

* run styling

* debug test_inference_with_multiple_images_per_prompt

* fix assertion values for test_inference_with_multiple_images_per_prompt
2023-12-18 18:16:57 -10:00
YiYi Xu
57fde871e1 offload the optional module image_encoder (#6151)
* offload image_encoder

* add test

---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-12-18 15:10:01 -10:00
Fabio Rigano
68e962395c Add converter method for ip adapters (#6150)
* Add converter method for ip adapters

* Move converter method

* Update to image proj converter

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-12-18 13:46:43 -10:00
Dhruv Nair
781775ea56 Slow Test for Pipelines minor fixes (#6221)
update
2023-12-19 00:45:51 +05:30
Patrick von Platen
fa3c86beaf [SVD] Fix guidance scale (#6002)
* [SVD] Fix guidance scale

* make style
2023-12-18 19:33:24 +01:00
Haofan Wang
7d0a47f387 Update train_text_to_image_lora.py (#6144)
* Update train_text_to_image_lora.py

* Fix typo?

---------

Co-authored-by: M. Tolga Cangöz <46008593+standardAI@users.noreply.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-12-18 19:33:05 +01:00
Aryan V S
67b3d3267e Support img2img and inpaint in lpw-xl (#6114)
* add img2img and inpaint support to lpw-xl

* update community README

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-12-18 19:19:11 +01:00
TilmannR
4e77056885 Update README.md (#6191)
Typo: The script for LoRA training is `train_text_to_image_lora_prior.py` not `train_text_to_image_prior_lora.py`.

Alternatively you could rename the file and keep the README.md unchanged.
2023-12-18 19:08:29 +01:00
Dhruv Nair
a0c54828a1 Deprecate Pipelines (#6169)
* deprecate pipe

* make style

* update

* add deprecation message

* format

* remove tests for deprecated pipelines

* remove deprecation message

* make style

* fix copies

* clean up

* clean

* clean

* clean

* clean up

* clean up

* clean up toctree

* clean up

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-12-18 23:08:29 +05:30
Patrick von Platen
8d891e6e1b [Torch Compile] Fix torch compile for svd vae (#6217) 2023-12-18 18:21:17 +01:00
Patrick von Platen
cce1fe2d41 [Text-to-Video] Clean up pipeline (#6213)
* make style

* make style

* make style

* make style
2023-12-18 18:21:09 +01:00
Abin Thomas
d816bcb5e8 Fix t2i. blog url (#6205) 2023-12-18 09:12:28 -08:00
d8ahazard
6976cab7ca Fix possible re-conversion issues after extracting from safetensors (#6097)
* Fix possible re-conversion issues after extracting from diffusers

Properly rename specific vae keys.

* Whoops
2023-12-18 11:51:20 +01:00
Dhruv Nair
fcbed3fa79 Fix SDXL Inpainting from single file with Refiner Model (#6147)
* update

* update

* update
2023-12-18 11:45:37 +01:00
Sayak Paul
b98b314b7a [Training] remove deprecated method from lora scripts. (#6207)
remove deprecated method from lora scripts.
2023-12-18 15:52:43 +05:30
Omar Sanseviero
74558ff65b Nit fix to training params (#6200) 2023-12-18 11:06:16 +01:00
Yudong Jin
49644babd3 Fix the test script in examples/text_to_image/README.md (#6209)
* Update examples/text_to_image/README.md

* Update examples/text_to_image/README.md

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-12-18 15:36:00 +05:30
Sayak Paul
56b3b21693 [Refactor autoencoders] feat: introduce autoencoders module (#6129)
* feat: introduce autoencoders module

* more changes for styling and copy fixing

* path changes in the docs.

* fix: import structure in init.

* fix controlnetxs import
2023-12-18 12:42:15 +05:30
Sayak Paul
9cef07da5a [Benchmarks] fix: lcm benchmarking reporting (#6198)
* fix: lcm benchmarking reporting

* fix generate_csv_dict call.
2023-12-17 15:32:11 +05:30
Sayak Paul
2d94c7838e [Core] feat: enable fused attention projections for other SD and SDXL pipelines (#6179)
* feat: enable fused attention projections for other SD and SDXL pipelines

* add: test for SD fused projections.
2023-12-16 08:45:54 +05:30
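A sketch of the fused attention projections enabled here (#6179), assuming the public methods are fuse_qkv_projections()/unfuse_qkv_projections(), as introduced for SDXL.

```python
# Sketch: fuse the attention QKV projections before inference
# (method names assumed from the earlier SDXL feature this extends).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

pipe.fuse_qkv_projections()    # one larger matmul instead of three smaller ones
image = pipe("a photo of a lighthouse").images[0]
pipe.unfuse_qkv_projections()  # restore the original projection layers
```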
Sayak Paul
a81334e3f0 [LoRA] add an error message when dealing with _best_guess_weight_name offline (#6184)
* add an error message when dealing with _best_guess_weight_name offline

* simplify condition
2023-12-16 08:36:08 +05:30
Dhruv Nair
d704a730cd Compile test fix (#6104)
* update

* update
2023-12-15 18:34:46 +05:30
dg845
49db233b35 Clean Up Comments in LCM(-LoRA) Distillation Scripts. (#6145)
* Clean up comments in LCM(-LoRA) distillation scripts.

* Calculate predicted source noise noise_pred correctly for all prediction_types.

* make style

* apply suggestions from review

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-12-15 18:18:16 +05:30
Dhruv Nair
93ea26f272 Add PEFT to training deps (#6148)
add peft to training deps

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-12-15 08:39:59 +05:30
Dhruv Nair
f5dfe2a8b0 LoRA test fixes (#6163)
* update

* update

* update

* update

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-12-15 08:39:41 +05:30
Patrick von Platen
4836cfad98 [Sigmas] Keep sigmas on CPU (#6173)
* correct

* Apply suggestions from code review

* make style
2023-12-15 07:43:18 +05:30
Kuba
1ccbfbb663 [docs] Add missing \ in lora.md (#6174) 2023-12-14 16:55:43 -08:00
Linoy Tsaban
29dfe22a8e [advanced dreambooth lora sdxl training script] load pipeline for inference only if validation prompt is used (#6171)
* load pipeline for inference only if validation prompt is used

* move things outside

* load pipeline for inference only if validation prompt is used

* fix readme when validation prompt is used

---------

Co-authored-by: linoytsaban <linoy@huggingface.co>
Co-authored-by: apolinário <joaopaulo.passos@gmail.com>
2023-12-14 11:45:33 -06:00
Aryan V S
56806cdbfd Add missing subclass docs, Fix broken example in SD_safe (#6116)
* fix broken example in pipeline_stable_diffusion_safe

* fix typo in pipeline_stable_diffusion_pix2pix_zero

* add missing docs

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-12-14 09:20:30 -08:00
Steven Liu
8ccc76ab37 [docs] IP-Adapter API doc (#6140)
add ip-adapter

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-12-14 09:19:37 -08:00
Monohydroxides
c46711e895 [Community] Add SDE Drag pipeline (#6105)
* Add community pipeline: sde_drag.py

* Update README.md

* Update README.md

Update example code and visual example

* Update sde_drag.py

Update code example.
2023-12-14 20:47:20 +05:30
Sayak Paul
1d686bac81 [feat: Benchmarking Workflow] add stuff for a benchmarking workflow (#5839)
* add poc for benchmarking workflow.

* import

* fix argument

* fix: argument

* fix: path

* fix

* fix

* path

* output csv files.

* workflow cleanup

* append token

* add utility to push to hf dataset

* fix: kw arg

* better reporting

* fix: headers

* better formatting of the numbers.

* better type annotation

* fix: formatting

* momentarily disable check

* push results.

* remove disable check

* introduce base classes.

* img2img class

* add inpainting pipeline

* intoduce base benchmark class.

* add img2img and inpainting

* feat: utility to compare changes

* fix

* fix import

* add args

* basepath

* better exception handling

* better path handling

* fix

* fix

* remove

* fix

* fix

* add: support for controlnet.

* image_url -> url

* move images to huggingface hub

* correct urls.

* root_ckpt

* flush before benchmarking

* don't install accelerate from source

* add runner

* simplify Diffusers Benchmarking step

* change runner

* fix: subprocess call.

* filter percentage values

* fix controlnet benchmark

* add t2i adapters.

* fix filter columns

* fix t2i adapter benchmark

* fix init.

* fix

* remove safetensors flag

* fix args print

* fix

* feat: run_command

* add adapter resolution mapping

* benchmark t2i adapter fix.

* fix adapter input

* fix

* convert to L.

* add flush() add appropriate places

* better filtering

* okay

* get env for torch

* convert to float

* fix

* filter out nans.

* better comment

* sdxl

* sdxl for other benchmarks.

* fix: condition

* fix: condition for inpainting

* fix: mapping for resolution

* fix

* include kandinsky and wuerstchen

* fix: Wuerstchen

* Empty-Commit

* [Community] AnimateDiff + Controlnet Pipeline (#5928)

* begin work on animatediff + controlnet pipeline

* complete todos, uncomment multicontrolnet, input checks

Co-Authored-By: EdoardoBotta <botta.edoardo@gmail.com>

* update

Co-Authored-By: EdoardoBotta <botta.edoardo@gmail.com>

* add example

* update community README

* Update examples/community/README.md

---------

Co-authored-by: EdoardoBotta <botta.edoardo@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* EulerDiscreteScheduler add `rescale_betas_zero_snr` (#6024)

* EulerDiscreteScheduler add `rescale_betas_zero_snr`

* Revert "[Community] AnimateDiff + Controlnet Pipeline (#5928)"

This reverts commit 821726d7c0.

* Revert "EulerDiscreteScheduler add `rescale_betas_zero_snr` (#6024)"

This reverts commit 3dc2362b5a.

* add SDXL turbo

* add lcm lora to the mix as well.

* fix

* increase steps to 2 when running turbo i2i

* debug

* debug

* debug

* fix for good

* fix and isolate better

* fuse lora so that torch compile works with peft

* fix: LCMLoRA

* better identification for LCM

* change to cron job

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: Aryan V S <contact.aryanvs@gmail.com>
Co-authored-by: EdoardoBotta <botta.edoardo@gmail.com>
Co-authored-by: Beinsezii <39478211+Beinsezii@users.noreply.github.com>
2023-12-12 11:03:34 +05:30
M. Tolga Cangöz
0a401b95b7 [Docs] Fix typos (#6122)
Fix typos and trim trailing whitespaces
2023-12-11 10:55:28 -08:00
Edward Li
664e931bcb Correct type annotation for VaeImageProcessor.numpy_to_pil (#6111)
From `(np.ndarray) -> PIL.Image.Image` to `(np.ndarray) -> List[PIL.Image.Image]`.
2023-12-11 15:22:04 +05:30
Aryan V S
88bdd97ccd IP adapter support for most pipelines (#5900)
* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_upscale.py

* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py

* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_instruct_pix2pix.py

* update tests

* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_panorama.py

* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py

* support ip-adapter in src/diffusers/pipelines/stable_diffusion_safe/pipeline_stable_diffusion_safe.py

* support ip-adapter in src/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_text2img.py

* support ip-adapter in src/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_img2img.py

* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py

* revert changes to sd_attend_and_excite and sd_upscale

* make style

* fix broken tests

* update ip-adapter implementation to latest

* apply suggestions from review

---------

Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-12-10 21:19:14 +05:30
Charchit Sharma
08b453e382 IP-Adapter for StableDiffusionControlNetImg2ImgPipeline (#5901)
* adapter for StableDiffusionControlNetImg2ImgPipeline

* fix-copies

* fix-copies

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-12-09 11:02:55 +05:30
apolinário
2a111bc9fe [Advanced Training Script] Fix pipe example (#6106) 2023-12-08 15:56:35 +01:00
apolinário
16e6997f0d [Advanced Diffusion Script] Add Widget default text (#6100)
add widget
2023-12-08 12:45:27 +01:00
YiYi Xu
3b9b98656e Fix a bug in add_noise function (#6085)
* fix

* copies

---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2023-12-07 11:35:28 -10:00
Fabio Rigano
b65928b556 Add support for IPAdapterFull (#5911)
* Add support for IPAdapterFull


Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

---------

Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-12-07 06:40:39 -10:00
Beinsezii
6bf1ca2c79 EulerDiscreteScheduler add rescale_betas_zero_snr (#6024)
* EulerDiscreteScheduler add `rescale_betas_zero_snr`
2023-12-06 21:51:04 -10:00
Aryan V S
978dec9014 [Community] AnimateDiff + Controlnet Pipeline (#5928)
* begin work on animatediff + controlnet pipeline

* complete todos, uncomment multicontrolnet, input checks

Co-Authored-By: EdoardoBotta <botta.edoardo@gmail.com>

* update

Co-Authored-By: EdoardoBotta <botta.edoardo@gmail.com>

* add example

* update community README

* Update examples/community/README.md

---------

Co-authored-by: EdoardoBotta <botta.edoardo@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-12-06 21:01:41 -10:00
Dhruv Nair
79a7ab92d1 Fix clearing backend cache from device agnostic testing (#6075)
update
2023-12-07 11:18:31 +05:30
Younes Belkada
c2717317f0 [PEFT] Adapt example scripts to use PEFT (#5388)
* adapt example scripts to use PEFT

* Update examples/text_to_image/train_text_to_image_lora.py

* fix

* add for SDXL

* oops

* make sure to install peft

* fix

* fix

* fix dreambooth and lora

* more fixes

* add peft to requirements.txt

* fix

* final fix

* add peft version in requirements

* remove comment

* change variable names

* add few lines in readme

* add to reqs

* style

* fix issues

* fix lora dreambooth xl tests

* init_lora_weights to gaussian and add out proj where missing

* amend requirements.

* amend requirements.txt

* add correct peft versions

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-12-07 09:39:29 +05:30
Ian
bf7f9b49a2 Fix typing inconsistency in Euler discrete scheduler (#6052) 2023-12-06 23:45:16 +01:00
UmerHA
e192ae08d3 Add ControlNet-XS support (#5827)
* Check in 23-10-05

* check-in 23-10-06

* check-in 23-10-07 2pm

* check-in 23-10-08

* check-in 231009T1200

* check-in 230109

* checkin 231010

* init + forward run

* checkin

* checkin

* ControlNetXSModel is now saveable+loadable

* Forward works

* checkin

* Pipeline works with `no_control=True`

* checkin

* debug: save intermediate outputs of resnet

* checkin

* Understood time error + fixed connection error

* checkin

* checkin 231106T1600

* turned off detailled debug prints

* time debug logs

* small fix

* Separated control_scale for connections/time

* simplified debug logging

* Full denoising works with control scale = 0

* aligned logs

* Added control_attention_head_dim param

* Passing n_heads instead of dim_head into ctrl unet

* Fixed ctrl midblock bug

* Cleanup

* Fixed time dtype bug

* checkin

* 1. from_unet, 2. base passed, 3. all unet params

* checkin

* Finished docstrings

* cleanup

* make style

* checkin

* more tests pass

* Fixed tests

* removed debug logs

* make style + quality

* make fix-copies

* fixed documentation

* added cnxs to doc toc

* added control start/end param

* Update controlnetxs_sdxl.md

* tried to fix copies..

* Fixed norm_num_groups in from_unet

* added sdxl-depth test

* created SD2.1 controlnet-xs pipeline

* re-added debug logs

* Adjusting group norm ; readded logs

* Added debug log statements

* removed debug logs ; started tests for sd2.1

* updated sd21 tests

* fixed tests

* fixed tests

* slightly increased error tolerance for 1 test

* make style & quality

* Added docs for CNXS-SD

* make fix-copies

* Fixed sd compile test ; fixed gradient ckpointing

* vae downs = cnxs conditioning downs; removed guess

* make style & quality

* Fixed tests

* fixed test

* Incorporated review feedback

* simplified control model surgery

* fixed tests & make style / quality

* Updated docs; deleted pip & cursor files

* Rolled back minimal change to resnet

* Update resnet.py

* Update resnet.py

* Update src/diffusers/models/controlnetxs.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/models/controlnetxs.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Incorporated review feedback

* Update docs/source/en/api/pipelines/controlnetxs_sdxl.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/en/api/pipelines/controlnetxs.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/en/api/pipelines/controlnetxs.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/en/api/pipelines/controlnetxs.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update src/diffusers/models/controlnetxs.py

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update src/diffusers/models/controlnetxs.py

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update src/diffusers/pipelines/controlnet_xs/pipeline_controlnet_xs.py

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/en/api/pipelines/controlnetxs.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update src/diffusers/pipelines/controlnet_xs/pipeline_controlnet_xs_sd_xl.py

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Incorporated doc feedback

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2023-12-06 23:33:47 +01:00
Steven Liu
87a09d66f3 [docs] SDXL Turbo (#6065)
api docs
2023-12-06 14:33:14 -08:00
Lucain
75ada25048 Harmonize HF environment variables + deprecate use_auth_token (#6066)
* Harmonize HF environment variables + deprecate use_auth_token

* fix import

* fix
2023-12-06 22:22:31 +01:00
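The use_auth_token deprecation (#6066) amounts to switching to the token argument (alongside the harmonized HF_TOKEN/HF_HOME environment variables); a sketch with placeholder repo and token.

```python
# Sketch: switch from the deprecated use_auth_token argument to token
# (repo name and token value are placeholders).
from diffusers import DiffusionPipeline

# Before (deprecated):
# pipe = DiffusionPipeline.from_pretrained("org/private-model", use_auth_token="hf_...")

# After:
pipe = DiffusionPipeline.from_pretrained("org/private-model", token="hf_...")
```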
Patrick von Platen
2243a59483 [Euler Discrete] Fix sigma (#6078)
* [Euler Discrete] Fix sigma

* make style
2023-12-06 19:59:38 +01:00
apolinário
466d32c442 [Advanced Diffusion Training] Cache latents to avoid VAE passes for every training step (#6076)
* add cache latents

* style
2023-12-06 14:46:53 +01:00
Dhruv Nair
20ba1fdbbd Disable Tests Fetcher (#6060)
update
2023-12-06 18:10:11 +05:30
Pedro Cuenca
ab6672fecd Use CC12M for LCM WDS training example (#5908)
* Fix SD scripts - there are only 2 items per batch

* Adjustments to make the SDXL scripts work with other datasets

* Use public webdataset dataset for examples

* make style

* Minor tweaks to the readmes.

* Stress that the database is illustrative.
2023-12-06 10:35:36 +01:00
Dhruv Nair
f90a5139a2 fix 2023-12-06 06:03:58 +00:00
Sayak Paul
a2bc2e14b9 [feat] allow SDXL pipeline to run with fused QKV projections (#6030)
* debug

* from step

* print

* turn sigma a list

* make str

* init_noise_sigma

* comment

* remove prints

* feat: introduce fused projections

* change to a better name

* no grad

* device.

* device

* dtype

* okay

* print

* more print

* fix: unbind -> split

* fix: qkv >-> k

* enable disable

* apply attention processor within the method

* attn processors

* _enable_fused_qkv_projections

* remove print

* add fused projection to vae

* add todos.

* add: documentation and cleanups.

* add: test for qkv projection fusion.

* relax assertions.

* relax further

* fix: docs

* fix-copies

* correct error message.

* Empty-Commit

* better conditioning on disable_fused_qkv_projections

* check

* check processor

* bfloat16 computation.

* check latent dtype

* style

* remove copy temporarily

* cast latent to bfloat16

* fix: vae -> self.vae

* remove print.

* add _change_to_group_norm_32

* comment out stuff that didn't work

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* reflect patrick's suggestions.

* fix imports

* fix: disable call.

* fix more

* fix device and dtype

* fix conditions.

* fix more

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-12-06 07:33:26 +05:30
Arsalan
f427345ab1 Device agnostic testing (#5612)
* utils and test modifications to enable device agnostic testing

* device for manual seed in unet1d

* fix generator condition in vae test

* consistency changes to testing

* make style

* add device agnostic testing changes to source and one model test

* make dtype check fns private, log cuda fp16 case

* remove dtype checks from import utils, move to testing_utils

* adding tests for most model classes and one pipeline

* fix vae import
2023-12-05 19:04:13 +05:30
apolinário
6e221334cd [advanced_dreambooth_lora_sdxl_tranining_script] save embeddings locally fix (#6058)
* Update train_dreambooth_lora_sdxl_advanced.py

* remove global function args from dreamboothdataset class

* style

* style

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-12-05 13:52:34 +01:00
Patrick von Platen
53bc30dd45 [From single file] remove depr warning (#6043) 2023-12-05 18:12:25 +05:30
Radamés Ajna
eacf5e34eb Fix demofusion (#6049)
* Update pipeline_demofusion_sdxl.py

* Update README.md
2023-12-05 18:10:46 +05:30
Dhruv Nair
4c05f7856a Ldm unet convert fix (#6038)
* fix

* fix ldm conversion

* fix linting
2023-12-05 18:01:02 +05:30
Dhruv Nair
bbd3572044 Pin Ruff Version (#6059)
pin ruff
2023-12-05 17:51:37 +05:30
Dhruv Nair
f948778322 Move kandinsky convert script (#6047)
move kandinsky convert script
2023-12-05 15:12:37 +05:30
Steven Liu
4684ea2fe8 [docs] #Copied from mechanism (#6007)
* copied from section

* feedback
2023-12-04 10:12:52 -08:00
Steven Liu
b64f835ea7 [docs] Add Kandinsky 3 (#5988)
* add

* fix api docs

* edits
2023-12-04 10:11:15 -08:00
Linoy Tsaban
880c0fdd36 [advanced dreambooth lora training script][bug_fix] change token_abstraction type to str (#6040)
* improve help tags

* style fix

* changes token_abstraction type to string.
support multiple concepts for pivotal using a comma separated string.

* style fixup

* changed logger to warning (not yet available)

* moved the token_abstraction parsing to be in the same block as where we create the mapping of identifier to token

---------

Co-authored-by: Linoy <linoy@huggingface.co>
2023-12-04 18:38:44 +01:00
RuoyiDu
c36f1c3160 [Community Pipeline] DemoFusion: Democratising High-Resolution Image Generation With No $$$ (#6022)
* Add files via upload

* Update README.md

* Update pipeline_demofusion_sdxl.py

* Update pipeline_demofusion_sdxl.py

* Update examples/community/README.md

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-12-04 19:44:57 +05:30
takuoko
0a08d41961 [Feature] Support IP-Adapter Plus (#5915)
* Support IP-Adapter Plus

* fix format

* restore before black format

* restore before black format

* generic

* Refactor PerceiverAttention

* format

* fix test and refactor PerceiverAttention

* generic encode_image

* keep attention implementation

* merge tests

* encode_image backward compatible

* code quality

* fix controlnet inpaint pipeline

* refactor FFN

* refactor FFN

---------

Co-authored-by: YiYi Xu <yixu310@gmail.com>
2023-12-04 12:43:34 +01:00
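A sketch of loading the IP-Adapter Plus variant added in #5915; the weight file name is assumed to be the one published in the h94/IP-Adapter repository, and the image path is a placeholder.

```python
# Sketch: load the IP-Adapter Plus variant (weight file name assumed from h94/IP-Adapter).
import torch
from diffusers import StableDiffusionPipeline
from diffusers.utils import load_image

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

pipe.load_ip_adapter(
    "h94/IP-Adapter", subfolder="models", weight_name="ip-adapter-plus_sd15.safetensors"
)
image = pipe(
    "best quality, a cat on a sofa",
    ip_adapter_image=load_image("style_ref.png"),  # placeholder path
).images[0]
```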
Levi McCallum
e185084a5d Add variant argument to dreambooth lora sdxl advanced (#6021)
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-12-04 12:04:15 +01:00
Dhruv Nair
b21729225a Update Tests Fetcher (#5950)
* update setup and deps table

* update

* update

* update

* up

* up

* update

* up

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* quality fix

* fix failure reporting

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-12-04 12:59:41 +05:30
Parth38
8a812e4e14 Update value_guided_sampling.py (#6027)
* Update value_guided_sampling.py

Changed the scheduler step function call, since the predict_epsilon parameter is no longer present in the latest DDPMScheduler

* Update value_guided_sampling.md

Updated a link to a working notebook

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-12-04 10:36:25 +05:30
gujing
bf92e746c0 fix StableDiffusionTensorRT super args error (#6009) 2023-12-04 10:06:23 +05:30
Linoy Tsaban
b785a155d6 [advanced dreambooth lora sdxl training script] improve help tags (#6035)
* improve help tags

* style fix

---------

Co-authored-by: Linoy <linoy@huggingface.co>
2023-12-04 09:41:25 +05:30
Sayak Paul
d486f0e846 [LoRA serialization] fix: duplicate unet prefix problem. (#5991)
* fix: duplicate unet prefix problem.

* Update src/diffusers/loaders/lora.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-12-02 21:35:16 +05:30
Sayak Paul
3351270627 [PixArt Tests] remove fast tests from slow suite (#5945)
remove fast tests from slow suite
2023-12-02 20:58:27 +05:30
Junsong Chen
4520e1221a adapt PixArtAlphaPipeline for pixart-lcm model (#5974)
* adapt PixArtAlphaPipeline for pixart-lcm model

* remove original_inference_steps from __call__

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-12-02 13:30:40 +05:30
Long(Tony) Lian
618260409f LLMGroundedDiffusionPipeline: inherit from DiffusionPipeline and fix peft (#6023)
* LLMGroundedDiffusionPipeline: inherit from DiffusionPipeline and fix peft

* Use main in the revision in the examples

* Add "Copied from" statements in comments

* Fix formatting with ruff
2023-12-01 09:58:25 -10:00
Patrick von Platen
dadd55fb36 Post Release: v0.24.0 (#5985)
* Post Release: v0.24.0

* post pone deprecation

* post pone deprecation

* Add model_index.json
2023-12-01 18:43:44 +01:00
YiYi Xu
1b6c7ea74e [schedulers] create self.sigmas during __init__ (#6006)
* fix dpm
* all scheulers
2023-12-01 07:15:37 -10:00
YiYi Xu
b41f809a4e [Kandinsky 3.0] Follow-up TODOs (#5944)
clean up Kandinsky 3.0
2023-12-01 07:14:22 -10:00
Patrick von Platen
0f55c17e17 fix style 2023-12-01 15:59:34 +00:00
Charchit Sharma
5058d27f12 added attention_head_dim, attention_type, resolution_idx (#6011) 2023-12-01 16:26:58 +01:00
M. Tolga Cangöz
748c1b3ec7 [Docs] Update a link (#6014)
* Update the location of Python's version

* Trim trailing whitespace
2023-12-01 16:26:25 +01:00
M. Tolga Cangöz
523507034f [logging] Fix assertion bug (#6012)
Fix assertion bug
2023-12-01 16:26:04 +01:00
hako-mikan
46c751e970 [Community Pipeline] Regional Prompting Pipeline (#6015)
* Update README.md

* Update README.md

* Add files via upload

* Update README.md

* Update examples/community/README.md

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-12-01 16:22:59 +01:00
Patrick von Platen
bc1d28c888 [From Single File] Allow Text Encoder to be passed (#6020)
Allow text encoder to be passed
2023-12-01 16:19:04 +01:00
Sayak Paul
af378c1dd1 [Easy] minor edits to setup.py (#5996)
minor edits to setup
2023-12-01 20:38:46 +05:30
Steven Liu
6ba4c5395f [docs] Fix SVD video (#6004)
Update svd.md
2023-12-01 16:07:47 +01:00
Linoy Tsaban
c1e4529541 [advanced_dreambooth_lora_sdxl_tranining_script] readme fix (#6019)
readme
2023-12-01 15:14:57 +01:00
Linoy Tsaban
d29d97b616 [examples/advanced_diffusion_training] bug fixes and improvements for LoRA Dreambooth SDXL advanced training script (#5935)
* imports and readme bug fixes

* bug fix - ensures text_encoder params are dtype==float32 (when using pivotal tuning) even if the rest of the model is loaded in fp16

* added pivotal tuning to readme

* mapping token identifier to new inserted token in validation prompt (if used)

* correct default value of --train_text_encoder_frac

* change default value of  --adam_weight_decay_text_encoder

* validation prompt generations when using pivotal tuning bug fix

* style fix

* textual inversion embeddings name change

* style fix

* bug fix - stopping text encoder optimization halfway

* readme - will include token abstraction and new inserted tokens when using pivotal tuning
- added type to --num_new_tokens_per_abstraction

* style fix

---------

Co-authored-by: Linoy Tsaban <linoy@huggingface.co>
2023-12-01 14:18:43 +01:00
Jongho Choi
7d4a257c7f Remove a duplicated line? (#6010)
Update __init__.py
2023-12-01 15:49:36 +05:30
Kristian Mischke
141cd52d56 Fix LLMGroundedDiffusionPipeline super class arguments (#5993)
* make `requires_safety_checker` a kwarg instead of a positional argument as it's more future-proof

* apply `make style` formatting edits

* add image_encoder to arguments and pass to super constructor
2023-11-30 10:15:14 -10:00
Steven Liu
f72b28c75b [docs] Fix video link (#5986)
Update svd.md
2023-11-29 20:52:25 +01:00
Suraj Patil
ada8109d5b Fix SVD doc (#5983)
fix url
2023-11-29 19:55:05 +01:00
Patrick von Platen
b34acbdcbc [SDXL Turbo] Add some docs (#5982)
* add diffusers example

* add diffusers example

* Comment about making it faster

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

---------

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-11-29 19:52:07 +01:00
Suraj Patil
63f767ef15 Add SVD (#5895)
* begin model

* finish blocks

* add_embedding

* addition_time_embed_dim

* use TimestepEmbedding

* fix temporal res block

* fix time_pos_embed

* fix add_embedding

* add conversion script

* fix model

* up

* add new resnet blocks

* make forward work

* return sample in original shape

* fix temb shape in TemporalResnetBlock

* add spatio temporal transformers

* add vae blocks

* fix blocks

* update

* update

* fix shapes in Alphablender and add time activation in res block

* use new blocks

* style

* fix temb shape

* fix SpatioTemporalResBlock

* reuse TemporalBasicTransformerBlock

* fix TemporalBasicTransformerBlock

* use TransformerSpatioTemporalModel

* fix TransformerSpatioTemporalModel

* fix time_context dim

* clean up

* make temb optional

* add blocks

* rename model

* update conversion script

* remove UNetMidBlockSpatioTemporal

* add in init

* remove unused arg

* remove unused arg

* remove more unsed args

* up

* up

* check for None

* update vae

* update up/mid blocks for decoder

* begin pipeline

* adapt scheduler

* add guidance scalings

* fix norm eps in temporal transformers

* add temporal autoencoder

* make pipeline run

* fix frame decoding

* decode in float32

* decode n frames at a time

* pass decoding_t to decode_latents

* fix decode_latents

* vae encode/decode in fp32

* fix dtype in TransformerSpatioTemporalModel

* type image_latents same as image_embeddings

* allow using different eps in temporal block for video decoder

* fix default values in vae

* pass num frames in decode

* switch spatial to temporal for mixing in VAE

* fix num frames during split decoding

* cast alpha to sample dtype

* fix attention in MidBlockTemporalDecoder

* fix typo

* fix guidance_scales dtype

* fix missing activation in TemporalDecoder

* skip_post_quant_conv

* add vae conversion

* style

* take guidance scale as input

* up

* allow passing PIL to export_video

* accept fps as arg

* add pipeline and vae in init

* remove hack

* use AutoencoderKLTemporalDecoder

* don't scale image latents

* add unet tests

* clean up unet

* clean TransformerSpatioTemporalModel

* add slow svd test

* clean up

* make temb optional in Decoder mid block

* fix norm eps in TransformerSpatioTemporalModel

* clean up temp decoder

* clean up

* clean up

* use c_noise values for timesteps

* use math for log

* update

* fix copies

* doc

* upcast vae

* update forward pass for gradient checkpointing

* make sure added_time_ids is a tensor

* up

* fix upcasting

* remove post quant conv

* add _resize_with_antialiasing

* fix _compute_padding

* cleanup model

* more cleanup

* more cleanup

* more cleanup

* remove freeu

* remove attn slice

* small clean

* up

* up

* remove extra step kwargs

* remove eta

* remove dropout

* remove callback

* remove merge factor args

* clean

* clean up

* move to dedicated folder

* remove attention_head_dim

* docstr and small fix

* update unet doc strings

* rename decoding_t

* correct linting

* store c_skip and c_out

* cleanup

* clean TemporalResnetBlock

* more cleanup

* clean up vae

* clean up

* begin doc

* more cleanup

* up

* up

* doc

* Improve

* better naming

* better naming

* better naming

* better naming

* better naming

* better naming

* better naming

* better naming

* Apply suggestions from code review

* Default chunk size to None

* add example

* Better

* Apply suggestions from code review

* update doc

* Update src/diffusers/pipelines/stable_diffusion_video/pipeline_stable_diffusion_video.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* style

* Get torch compile working

* up

* rename

* fix doc

* add chunking

* torch compile

* torch compile

* add modelling outputs

* torch compile

* Improve chunking

* Apply suggestions from code review

* Update docs/source/en/using-diffusers/svd.md

* Close diff tag

* remove slicing

* resnet docstr

* add docstr in resnet

* rename

* Apply suggestions from code review

* update tests

* Fix output type latents

* fix more

* fix more

* Update docs/source/en/using-diffusers/svd.md

* fix more

* add pipeline tests

* remove unused arg

* clean  up

* make sure get_scaling receives tensors

* fix euler scheduler

* fix get_scalings

* simplify euler for now

* remove old test file

* use randn_tensor to create noise

* fix device for rand tensor

* increase expected_max_difference

* fix test_inference_batch_single_identical

* actually fix test_inference_batch_single_identical

* disable test_save_load_float16

* skip test_float16_inference

* skip test_inference_batch_single_identical

* fix test_xformers_attention_forwardGenerator_pass

* Apply suggestions from code review

* update StableVideoDiffusionPipelineSlowTests

* update image

* add diffusers example

* fix more

---------

Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: apolinário <joaopaulo.passos@gmail.com>
2023-11-29 19:13:36 +01:00
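A minimal usage sketch for the StableVideoDiffusionPipeline introduced in #5895 (the checkpoint id, input image URL, and chunking value below are assumptions taken from the public docs, not part of the commit itself):

    import torch
    from diffusers import StableVideoDiffusionPipeline
    from diffusers.utils import load_image, export_to_video

    # Load the image-to-video pipeline in fp16 and offload to save memory.
    pipe = StableVideoDiffusionPipeline.from_pretrained(
        "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16, variant="fp16"
    )
    pipe.enable_model_cpu_offload()

    # Conditioning image (placeholder URL); SVD expects roughly 1024x576 inputs.
    image = load_image("https://example.com/conditioning_frame.png").resize((1024, 576))

    # decode_chunk_size controls how many frames the VAE decodes at once.
    frames = pipe(image, decode_chunk_size=8).frames[0]
    export_to_video(frames, "generated.mp4", fps=7)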
PENGUINLIONG
d1b2a1a957 Fixed custom module importing on Windows (#5891)
* Fixed custom module importing on Windows

Windows use back slash and `os.path.join()` follows that convention.

* Apply suggestions from code review

Co-authored-by: Lucain <lucainp@gmail.com>

* Update pipeline_utils.py

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Lucain <lucainp@gmail.com>
2023-11-29 16:33:04 +01:00
Kashif Rasul
01782c220e [Wuerstchen] Adapt lora training example scripts to use PEFT (#5959)
* Adapt lora example scripts to use PEFT

* add to_out.0
2023-11-29 16:18:20 +01:00
vahramtadevosyan
d63a498c3b [Pipeline] Add TextToVideoZeroSDXLPipeline (#4695)
* integrated sdxl for the text2video-zero pipeline

* make fix-copies

* fixed CI issues

* make fix-copies

* added docs and `copied from` statements

* added fast tests

* made a small change in docs

* quality+style check fix

* updated docs. added controlnet inference with sdxl

* added device compatibility for fast tests

* fixed docstrings

* changing vae upcasting

* remove torch.empty_cache to speed up inference

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* made fast tests to run on dummy models only, fixed copied from statements

* fixed testing utils imports

* Added bullet points for SDXL support

* fixed formatting & quality

* Update tests/pipelines/text_to_video/test_text_to_video_zero_sdxl.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update tests/pipelines/text_to_video/test_text_to_video_zero_sdxl.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* fixed minor error for merging

* fixed updates of sdxl

* made fast tests inherit from `PipelineTesterMixin` and run in 3-4secs on CPU

* make style && make quality

* reimplemented fast tests w/o default attn processor

* make style & make quality

* make fix-copies

* make fix-copies

* fixed docs

* make style & make quality & make fix-copies

* bug fix in cross attention

* make style && make quality

* make fix-copies

* fix gpu issues

* make fix-copies

* updated pipeline signature

---------

Co-authored-by: Vahram <vahram.tadevosyan@lambda-loginnode02.cm.cluster>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2023-11-29 16:10:43 +01:00
Marko Kostiv
6a4aad43dc Controlnet ssd 1b support (#5779)
* Add SSD-1B support for controlnet model

* Add conditioning_channels into ControlNet init from unet

* Fix black formatting

* Isort fixes

* Adds SSD-1B controlnet pipeline test with UNetMidBlock2D as mid block

* Overrides failing ssd-1b tests

* Fixes tests after main branch update

* Fixes code quality checks

---------

Co-authored-by: Marko Kostiv <marko@linearity.io>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-11-29 16:10:01 +01:00
Steven Liu
ddd8bd53ed [docs] LCM training (#5796)
* first draft

* feedback
2023-11-29 16:08:05 +01:00
JuanCarlosPi
9f7b2cf2dc Support of ip-adapter to the StableDiffusionControlNetInpaintPipeline (#5887)
* Change pipeline_controlnet_inpaint.py to add ip-adapter support. Changes are similar to those in pipeline_controlnet

* Change tests for the StableDiffusionControlNetInpaintPipeline by adding image_encoder: None

* Update src/diffusers/pipelines/controlnet/pipeline_controlnet_inpaint.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

---------

Co-authored-by: YiYi Xu <yixu310@gmail.com>
2023-11-29 16:00:24 +01:00
Sayak Paul
895c4b704b [LoRA refactor] move several state dict conversion utils out of lora.py (#5955)
* move several state dict conversion utils out of lora.py

* check

* check

* check

* check

* check

* check

* check

* revert back

* check

* check

* again check

* maybe fix?

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-11-29 20:24:21 +05:30
Linh Nguyen
636feba552 Rename output_dir argument (#5916)
Fix typo in output_dir argument: "text-inversion-model" → "dreambooth-model"
2023-11-29 15:47:16 +01:00
Andrés Romero
79dc7df03e [bug fix] Inpainting for MultiAdapter (#5922)
* bug in MultiAdapter for Inpainting

* adapter_input is a list for MultiAdapter

---------

Co-authored-by: andres <andres@hax.ai>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-11-29 15:46:26 +01:00
Charchit Sharma
6031ecbd23 added doc for Kandinsky3.0 (#5937)
* added en doc for Kandinsky3.0

* required changes

* Update docs/source/en/api/pipelines/kandinsky3.md

* Update docs/source/en/api/pipelines/kandinsky3.md

* Update docs/source/en/api/pipelines/kandinsky3.md

---------

Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-11-29 15:32:00 +01:00
Sayak Paul
fdd003d8e2 [Tests] Refactor test_examples.py for better readability (#5946)
* control and custom diffusion

* dreambooth

* instructpix2pix and dreambooth ckpting

* t2i adapters.

* text to image ft

* textual inversion

* unconditional

* workflows

* import fix

* fix import
2023-11-29 18:43:59 +05:30
Steven Liu
172acc98b9 [docs] Update pipeline list (#5952)
add to list
2023-11-29 14:08:39 +01:00
estelleafl
5ae3c3a56b [ldm3d] Ldm3d upscaler to community pipeline (#5870)
---------
Co-authored-by: Aflalo <estellea@isl-gpu27.rr.intel.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
2023-11-28 09:00:39 -10:00
Soumik Rakshit
21bc59ab24 fix: minor typo in docstring (#5961) 2023-11-28 18:18:34 +05:30
Steven Liu
50a749e909 [docs] Fix space (#5898)
* fix

* minor edits
2023-11-27 11:50:59 -08:00
YiYi Xu
d9075be494 [load_textual_inversion]: allow multiple tokens (#5837)
Co-authored-by: yiyixuxu <yixu310@gmail,com>
2023-11-27 06:52:36 -10:00
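A short sketch of what #5837 enables (repository ids and token strings are placeholders; the exact call signature may differ slightly):

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Several textual-inversion embeddings can now be loaded in one call,
    # with a matching list of tokens.
    pipe.load_textual_inversion(
        ["sd-concepts-library/cat-toy", "sd-concepts-library/gta5-artwork"],
        token=["<cat-toy>", "<gta5-artwork>"],
    )
    image = pipe("a <cat-toy> in the style of <gta5-artwork>").images[0]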
Patrick von Platen
b135b6e905 [From_pretrained] Fix warning (#5948) 2023-11-27 14:35:19 +01:00
T. Xu
14a0d21d2e [Community Pipeline] Diffusion Posterior Sampling for General Noisy Inverse Problems (#5939)
* [community pipeline] dps impl

* add type checking

* pass ruff check

* ruff formatter
2023-11-27 14:29:42 +01:00
Patrick von Platen
ebf581e85f [Tests] Make sure that we don't run tests multiple times (#5949)
* [Tests] Make sure that we don't run tests multiple times

* [Tests] Make sure that we don't run tests multiple times

* [Tests] Make sure that we don't run tests multiple times
2023-11-27 14:18:56 +01:00
Patrick von Platen
e550163b9f [Vae] Make sure all vae's work with latent diffusion models (#5880)
* add comments to explain the code better

* add comments to explain the code better

* add comments to explain the code better

* add comments to explain the code better

* add comments to explain the code better

* fix more

* fix more

* fix more

* fix more

* fix more

* fix more
2023-11-27 14:17:47 +01:00
Viktor Grygorchuk
20f0cbc88f fix: error on device for lpw_stable_diffusion_xl pipeline if pipe.enable_sequential_cpu_offload() enabled (#5885)
fix: set device for pipe.enable_sequential_cpu_offload()
2023-11-27 13:47:47 +01:00
Chi
d72a24b790 Replace multiple variables with one variable. (#5715)
* I added a new doc string to the class. This makes it easier for other developers to understand what it does and where it is used.

* Update src/diffusers/models/unet_2d_blocks.py

This change was suggested by the maintainer.

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update src/diffusers/models/unet_2d_blocks.py

Add suggested text

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update unet_2d_blocks.py

I changed the Parameter to Args text.

* Update unet_2d_blocks.py

proper indentation set in this file.

* Update unet_2d_blocks.py

a little bit of change in the act_fun argument line.

* I run the black command to reformat style in the code

* Update unet_2d_blocks.py

added a similar doc-string to match the one in the original diffusion repository.

* I enhanced the code by replacing multiple redundant variables with a single variable, as they all served the same purpose. Additionally, I utilized the get_activation function for improved flexibility in choosing activation functions.

* Used the black package to reformat my file

* revert some changes

* Removed the conv_out_padding variable and reused conv_in_padding

* Re-created conv_out_padding and added it back into the code.

* ran the black command to solve styling problems

* added a little space between the comment and the import statement

* I am utilizing the ruff library to address the style issues in my Makefile.

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-11-27 13:34:52 +01:00
ginjia
d3cda804e7 add LoRA weights load and fuse support for IPEX pipeline (#5920)
add IPEX pipeline LoRA weights loading support
2023-11-27 13:32:43 +01:00
dg845
07eac4d65a Fix LCM Stable Diffusion distillation bug related to parsing unet_time_cond_proj_dim (#5893)
* Fix bug related to parsing unet_time_cond_proj_dim.

* Fix analogous bug in the SD-XL LCM distillation script.
2023-11-27 13:00:40 +01:00
Iván de Prado
c079cae3d4 Avoid computing min() that is expensive when do_normalize is False in the image processor (#5896)
Avoid computing min() that is expensive when do_normalize is False

Avoid extra computation when do_normalize is False
2023-11-27 12:46:26 +01:00
Wang, Yi
c7bfb8b22a set the model to train state before accelerator prepare (#5099)
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
2023-11-27 12:43:49 +01:00
dg845
67d070749a Add Custom Timesteps Support to LCMScheduler and Supported Pipelines (#5874)
* Add custom timesteps support to LCMScheduler.

* Add custom timesteps support to StableDiffusionPipeline.

* Add custom timesteps support to StableDiffusionXLPipeline.

* Add custom timesteps support to remaining Stable Diffusion pipelines which support LCMScheduler (img2img, inpaint).

* Add custom timesteps support to remaining Stable Diffusion XL pipelines which support LCMScheduler (img2img, inpaint).

* Add custom timesteps support to StableDiffusionControlNetPipeline.

* Add custom timesteps support to T2I Stable Diffusion (XL) Adapters.

* Clean up Stable Diffusion inpaint tests.

* Manually add support for custom timesteps to AltDiffusion pipelines since make fix-copies doesn't appear to work correctly (it deletes the whole pipeline).

* make style

* Refactor pipeline timestep handling into the retrieve_timesteps function.
2023-11-27 12:39:14 +01:00
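A hedged sketch of the custom-timesteps API added in #5874 (the checkpoint id and the example schedule are placeholders; pipelines route the argument through the new retrieve_timesteps helper mentioned above):

    import torch
    from diffusers import DiffusionPipeline, LCMScheduler

    pipe = DiffusionPipeline.from_pretrained(
        "SimianLuo/LCM_Dreamshaper_v7", torch_dtype=torch.float16
    ).to("cuda")
    pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

    # Pass an explicit, descending timestep schedule instead of num_inference_steps.
    image = pipe(
        "a photo of an astronaut riding a horse",
        timesteps=[249, 187, 125, 63],
        guidance_scale=8.0,
    ).images[0]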
Aryan V S
9c357bda3f Deprecate KarrasVeScheduler and ScoreSdeVpScheduler (#5269)
* deprecated: KarrasVeScheduler, ScoreSdeVpScheduler

* delete tests relevant to deprecated schedulers

* chore: run make style

* fix: import error caused due to incorrect _import_structure after deprecation

* fix: ScoreSdeVpScheduler was not importable from diffusers

* remove import added by assumption

* Update src/diffusers/schedulers/__init__.py as suggested by @patrickvonplaten

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* make it a part of deprecated

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Fix

* fix

* fix doc

* fix doc....again.......

* remove karras_ve test folder

Co-Authored-By: YiYi Xu <yixu310@gmail.com>

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: yiyixuxu <yixu310@gmail,com>
2023-11-27 12:33:02 +01:00
Sayak Paul
3f7c3511dc [Core] add support for gradient checkpointing in transformer_2d (#5943)
add support for gradient checkpointing in transformer_2d
2023-11-27 16:21:12 +05:30
Junsong Chen
7d6f30e89b [Fix: pixart-alpha] random 512px resolution bug (#5842)
* [Fix: pixart-alpha]
add ASPECT_RATIO_512_BIN in use_resolution_binning for random 512px image generation.

* add slow test file for 512px generation without resolution binning

* fix: slow tests for resolution binning.

---------

Co-authored-by: jschen <chenjunsong4@h-partners.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-11-27 13:35:35 +05:30
Patrick von Platen
6d2e19f746 [Examples] Allow downloading variant model files (#5531)
* add variant

* add variant

* Apply suggestions from code review

* reformat

* fix: textual_inversion.py

* fix: variant in model_info

---------

Co-authored-by: sayakpaul <spsayakpaul@gmail.com>
2023-11-27 10:43:20 +05:30
Patrick von Platen
2a7f43a73b correct num inference steps 2023-11-24 17:09:26 +00:00
Patrick von Platen
b978334d71 [@cene555][Kandinsky 3.0] Add Kandinsky 3.0 (#5913)
* finalize

* finalize

* finalize

* add slow test

* add slow test

* add slow test

* Fix more

* add slow test

* fix more

* fix more

* fix more

* fix more

* fix more

* fix more

* fix more

* fix more

* fix more

* Better

* Fix more

* Fix more

* add slow test

* Add auto pipelines

* add slow test

* Add all

* add slow test

* add slow test

* add slow test

* add slow test

* add slow test

* Apply suggestions from code review

* add slow test

* add slow test
2023-11-24 17:46:00 +01:00
Sayak Paul
e5f232f76b [Docs] add: 8bit inference with pixart alpha (#5814)
* add: 8bit inference with pixart alpha

* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* add: note on 4bit.

* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* address comment

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-11-24 20:36:33 +05:30
Linoy Tsaban
3003ff4947 [bug fix] fix small bug in readme template of sdxl lora training script (#5914)
readme improvement and metadata fix
2023-11-23 19:08:49 +01:00
Linoy Tsaban
5ffa603244 [bug fix] fix small bug in readme template of sdxl lora training script (#5906)
* readme bug fix

* style fix

---------

Co-authored-by: Linoy Tsaban <linoy@huggingface.co>
2023-11-23 12:11:50 +01:00
Linoy Tsaban
0eeee618cf Adds an advanced version of the SD-XL DreamBooth LoRA training script supporting pivotal tuning (#5883)
* sdxl dreambooth lora training script with pivotal tuning

* bug fix - args missing from parse_args

* code quality fixes

* comment unnecessary code from TokenEmbedding handler class

* fixup

---------

Co-authored-by: Linoy Tsaban <linoy@huggingface.co>
2023-11-22 16:27:56 +01:00
Andrés Romero
93f1a14cab ControlNet+Adapter pipeline, and ControlNet+Adapter+Inpaint pipeline (#5869)
* ControlNet+Adapter pipeline, and +Inpaint pipeline


---------

Co-authored-by: andres <andres@hax.ai>
2023-11-21 08:59:29 -10:00
Patrick von Platen
13d73d9303 [Lora] Separate logic (#5809)
* [Lora] Separate logic

* [Lora] Separate logic

* [Lora] Separate logic

* add comments to explain the code better

* add comments to explain the code better
2023-11-21 18:58:37 +01:00
YiYi Xu
ba352aea29 [feat] IP Adapters (author @okotaku ) (#5713)
* add ip-adapter


---------

Co-authored-by: okotaku <to78314910@gmail.com>
Co-authored-by: sayakpaul <spsayakpaul@gmail.com>
Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2023-11-21 07:34:30 -10:00
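A rough sketch of how the new IP-Adapter support (#5713) is used (adapter repo, weight file name, and image URL are assumptions based on the public docs):

    import torch
    from diffusers import StableDiffusionPipeline
    from diffusers.utils import load_image

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")

    # The reference image steers generation via the image-prompt adapter.
    reference = load_image("https://example.com/reference.png")
    image = pipe(
        "best quality, high quality",
        ip_adapter_image=reference,
        num_inference_steps=50,
    ).images[0]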
Linoy Tsaban
6fac1369d0 Add features to the Dreambooth LoRA SDXL training script (#5508)
* Additions:
- support for different lr for text encoder
- support for Prodigy optimizer
- support for min snr gamma
- support for custom captions and dataset loading from the hub

* adjusted --caption_column behaviour (to -not- use the second column of the dataset by default if --caption_column is not provided)

* fixed --output_dir / --model_dir_name confusion

* added --repeats, --adam_weight_decay_text_encoder
+ some fixes

* Update examples/dreambooth/train_dreambooth_lora_sdxl.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update examples/dreambooth/train_dreambooth_lora_sdxl.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update examples/dreambooth/train_dreambooth_lora_sdxl.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* - import compute_snr from diffusers/training_utils.py
- cluster adamw together
- when using 'prodigy', if --train_text_encoder == True and --text_encoder_lr != --learning_rate, changes the lr of the text encoder's optimization params to be --learning_rate (otherwise errors)

* shape fixes when custom captions are used

* formatting and a little cleanup

* code styling

* --repeats default value fixed, changed to 1

* bug fix - removed redundant lines of embedding concatenation when using prior_preservation (that duplicated class_prompt embeddings)

* changed dataset loading logic according to the following use cases (to avoid an unnecessary dependency on datasets):
1. user provides --dataset_name
2. user provides local dir --instance_data_dir that contains a metadata .jsonl file
3. user provides local dir --instance_data_dir that contains only images
in cases [1,2] we import datasets and use load_dataset method, in case [3] we process the data same as in the original script setting

* styling fix

* arg name fix

* adjusted the --repeats logic

* -removed redundant arg and 'if' when loading local folder with prompts
-updated readme template
-some default val fixes
-custom caption tests

* image path fix for readme

* code style

* bug fix

* --caption_column arg

* readme fix

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Linoy Tsaban <linoy@huggingface.co>
2023-11-21 17:38:43 +01:00
Steven Liu
1093f9d615 [docs] MusicLDM (#5854)
* fix

* feedback
2023-11-21 15:27:41 +01:00
Aryan V S
81780882b8 Addition of new callbacks to controlnets (#5812)
* add new callbacks to src/diffusers/pipelines/controlnet/pipeline_controlnet.py

* update callbacks

* fix repeated kwarg

* update

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-11-21 15:22:20 +01:00
Dhruv Nair
ebc7bedeb7 Add tests fetcher (#5848)
* add tests fetcher to utils

* add test fetcher

* update

* update

* remove unused dependency version check script

* update

* fix mistake

* update

* update

* update

* update

* update

* update

* update

* remove concurrency params

* update

* update

* update

* update

* update

* update

* move test fetcher to dedicated workflow
2023-11-21 18:01:44 +05:30
co63oc
ee519cfef5 Update README.md (#5855) 2023-11-21 11:56:13 +01:00
Steven Liu
7457aa67cb [docs] Loader APIs (#5813)
* first draft

* remove old loader doc

* start adding lora code examples

* finish

* add link to loralinearlayer

* feedback

* fix
2023-11-20 10:53:13 -08:00
M. Tolga Cangöz
c72a173906 Revert "[Docs] Update and make improvements" (#5858)
* Revert "[`Docs`] Update and make improvements (#5819)"

This reverts commit c697f52476.

* Update README.md

* Update memory.md

* Update basic_training.md

* Update write_own_pipeline.md

* Update fp16.md

* Update basic_training.md

* Update write_own_pipeline.md

* Update write_own_pipeline.md
2023-11-20 10:22:21 -08:00
dg845
dc21498b43 Update LCMScheduler Inference Timesteps to be More Evenly Spaced (#5836)
* Change LCMScheduler.set_timesteps to pick more evenly spaced inference timesteps.

* Change inference_indices implementation to better match previous behavior.

* Add num_inference_steps=26 test case to test_inference_steps.

* run CI

---------

Co-authored-by: patil-suraj <surajp815@gmail.com>
2023-11-20 15:46:10 +01:00
Patrick von Platen
3303aec5f8 make style 2023-11-20 12:54:52 +01:00
ginjia
4abbbff618 fix an issue that ipex occupy too much memory, it will not impact per… (#5625)
* fix an issue where ipex occupies too much memory; it will not impact performance

* make style

---------

Co-authored-by: root <jun.chen@intel.com>
Co-authored-by: Meng Guoqing <guoqing.meng@intel.com>
2023-11-20 12:43:29 +01:00
Younes Belkada
fda297703f [test / peft] Fix silent behaviour on PR tests (#5852)
Update pr_test_peft_backend.yml
2023-11-20 12:42:58 +01:00
Roy Hvaara
2695ba8e9d [JAX] Replace uses of jax.devices("cpu") with jax.local_devices(backend="cpu") (#5864)
An upcoming change to JAX will include non-local (addressable) CPU devices in jax.devices() when JAX is used multicontroller-style, where there are multiple Python processes.

This change preserves the current behavior by replacing uses of jax.devices("cpu"), which previously only returned local devices, with jax.local_devices(backend="cpu"), which will return local devices both now and in the future.

This change is always safe (i.e., it should always preserve the previous behavior), but it may sometimes be unnecessary if code is never used in a multicontroller setting.

Co-authored-by: Peter Hawkins <phawkins@google.com>
2023-11-20 12:32:50 +01:00
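In code, the replacement described above looks roughly like this (a sketch, not the actual diff):

    import jax

    # Before: under a future multi-controller JAX this may also include
    # non-local (addressable) CPU devices from other processes.
    cpu_devices = jax.devices("cpu")

    # After: restricted to devices local to this process, matching the old behavior.
    cpu_devices = jax.local_devices(backend="cpu")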
Aryan V S
3ab921166d [Community] [WIP] LCM Interpolation Pipeline (#5767)
* wip: add interpolate pipeline for lcm

* update documentation

* update documentation
2023-11-20 12:18:17 +01:00
Kashif Rasul
6b04d61cf6 [Styling] stylify using ruff (#5841)
* ruff format

* no need to use doc-builder's black styling as the docs are styled with ruff

* make fix-copies

* comment

* use run_ruff
2023-11-20 11:48:34 +01:00
Sayak Paul
9c7f7fc475 [ControlNet] fix import in single file loading (#5834)
fix import

Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2023-11-20 15:50:36 +05:30
Aryan V S
4adad57e57 UnboundLocalError in SDXLInpaint.prepare_latents() (#5648)
* fix: UnboundLocalError with image_latents

* chore: run make style, quality, fix-copies

* revert changes from make fix-copies

* revert changes from make fix-copies

---------

Co-authored-by: YiYi Xu <yixu310@gmail.com>
2023-11-19 22:39:51 -10:00
Younes Belkada
4e54dfe985 [Tests/LoRA/PEFT] Test also on PEFT / transformers / accelerate latest (#5820)
* add also peft latest on peft CI

* up

* up

* up

* Update .github/workflows/pr_test_peft_backend.yml

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-11-17 19:17:31 +01:00
Sourab Mangrulkar
6f1435332b Speed up the peft lora unload (#5741)
* Update peft_utils.py

* fix bug

* make the util backwards compatible.

Co-Authored-By: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>

* fix import issue

* refactor the backward compatibility condition

* rename the conditional variable

* address comments

Co-Authored-By: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* address comment

---------

Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2023-11-17 19:15:44 +01:00
Patrick von Platen
c6f90daea6 [PEFT] Unpin peft (#5850) 2023-11-17 19:15:02 +01:00
Will Berman
2a84e8bb5a fix memory consistency decoder test (#5828)
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2023-11-17 09:31:01 -08:00
Lucain
c896b841e4 Set usedforsecurity=False in hashlib methods (FIPS compliance) (#5790)
* Set usedforsecurity=False in hashlib methods (FIPS compliance)

* update version dependency

* bump hfh version

* bump hfh version
2023-11-17 14:56:58 +01:00
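The pattern the change applies is roughly the following (Python 3.9+; the hashed payload is a placeholder):

    import hashlib

    # Marking the hash as non-security-relevant keeps it usable on FIPS-enabled builds.
    digest = hashlib.sha256(b"some cache key material", usedforsecurity=False).hexdigest()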
Sayak Paul
69412d0a15 [Docs] add: japanese sdxl as a reference (#5844)
add: japanese sdxl as a reference
2023-11-17 18:14:02 +05:30
Patrick von Platen
913986afa5 Improve setup.py and add dependency check (#5826)
* put peft in requirements

* correct peft

* correct installs

* make style

* make style
2023-11-17 12:05:26 +01:00
Steven Liu
ff573ae245 [docs] Fix title (#5831)
fix section title
2023-11-17 08:53:11 +05:30
M. Tolga Cangöz
c697f52476 [Docs] Update and make improvements (#5819)
Update and make improvements
2023-11-16 13:47:25 -08:00
Suraj Patil
a042909c83 LCM-LoRA docs (#5782)
* begin doc

* fix examples

* add in toctree

* fix toctree

* improve copy

* improve introductions

* add lcm doc

* fix filename

* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* address Sayak's comments

* remove controlnet aux

* open in colab

* move to Specific pipeline examples

* update controlent and adapter examples

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2023-11-16 16:19:27 +01:00
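The recipe the new LCM-LoRA docs describe boils down to something like this sketch (checkpoint ids are the publicly documented ones; the prompt is a placeholder):

    import torch
    from diffusers import DiffusionPipeline, LCMScheduler

    pipe = DiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16"
    ).to("cuda")

    # Swap in the LCM scheduler and load the LCM-LoRA weights.
    pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
    pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")

    # Very few steps and low guidance are the point of LCM-LoRA.
    image = pipe("close-up photo of a red fox", num_inference_steps=4, guidance_scale=1.0).images[0]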
Suraj Patil
64cbd8e27a Support LCM in ControlNet and Adapter pipelines. (#5822)
* support lcm

* fix tests

* fix tests
2023-11-16 14:59:50 +01:00
Aryan V S
038b42db94 Improve docs and type hints (#5759)
* improvement: docs and type hints

* improvement: docs and type hints

minor refactor

* improvement: docs and type hints

* update with suggestions from review

Co-Authored-By: Dhruv Nair <dhruv.nair@gmail.com>

---------

Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2023-11-16 14:00:32 +05:30
M. Tolga Cangöz
ecbe27a07f [Docs] Fix typos and update files at API's Pipelines page 2 (#5748)
* Fix typos, update, add Copyright info, and trim trailing whitespace

* Update docs/source/en/api/pipelines/text_to_video_zero.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* 1 second is not a long video, but 6 seconds is

* Update text_to_video_zero.md

* Update text_to_video_zero.md

* Update text_to_video_zero.md

* Update wuerstchen.md

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2023-11-15 10:54:55 -08:00
M. Tolga Cangöz
3ad4207d1f [Docs] Fix typos, update, and add visualizations at Using Diffusers' Pipelines for Inference Page (#5649)
* Fix typos, update, add visualizations

* Update sdxl.md

* Update controlnet.md

* Update docs/source/en/using-diffusers/shap-e.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/en/using-diffusers/shap-e.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update diffedit.md

* Update kandinsky.md

* Update sdxl.md

* Update controlnet.md

* Update docs/source/en/using-diffusers/controlnet.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/en/using-diffusers/controlnet.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update controlnet.md

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2023-11-15 10:30:01 -08:00
MilkClouds
3517fb9430 fix: enabled num_images_per_prompt>1 for lpw_stable_diffusion_xl (community pipeline) (#5807)
* fix: enabled num_images_per_prompt>1 for lpw_stable_diffusion_xl

* style: fixed isort
2023-11-15 07:40:29 -10:00
Dhruv Nair
cdadb023a2 Make Video Tests faster (#5787)
* update test

* update
2023-11-15 10:56:01 +05:30
M. Tolga Cangöz
51fd3dd206 [Docs] Remove .to('cuda') before .enable_model_cpu_offload() (#5795)
Remove .to('cuda') before cpu_offload, trim trailing whitespaces
2023-11-14 17:20:54 -08:00
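The corrected usage the doc fix (#5795) promotes, as a sketch:

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    )
    # No pipe.to("cuda") here: enable_model_cpu_offload manages device placement itself,
    # moving each sub-model to the GPU only while it is needed.
    pipe.enable_model_cpu_offload()
    image = pipe("a watercolor painting of a lighthouse").images[0]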
Pedro Gabriel Gengo Lourenço
98457580c0 Fixed documentation of consistency decoder (#5768)
* Fixed doc for consistency decoder

* Style fix
2023-11-14 15:27:26 -08:00
M. Tolga Cangöz
3b37488fa3 Fix typos, improve, update at main-page files and .github files (#5588)
* Update keywords; remove version cuz it changes constantly?

* Update if necessary

* Fix typos and links

* version_range_max from PyTorch's setup.py; fix typos; 1 checklist is enough?

* Fix a typo

* Fix typos

* There is already a Blank issue link at the bottom of the page; direct to Diffusers' forum

* Add 🌐 Translating a New Language? page

* Update .github/ISSUE_TEMPLATE/translate.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update PHILOSOPHY.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update PHILOSOPHY.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update PHILOSOPHY.md

* Update PHILOSOPHY.md

* Update CONTRIBUTING.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update CONTRIBUTING.md

* Update CONTRIBUTING.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update PHILOSOPHY.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update PHILOSOPHY.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update README.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Add X account link

* Update setup.py

* Update CITATION.cff

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2023-11-14 11:46:52 -08:00
Kadir Nar
c7260ce253 🔧 Fix import codes in diffusers library (#5792)
* 🔧 Fix import codes in diffusers library

* Refactor imports in community examples
2023-11-14 11:37:59 -08:00
M. Tolga Cangöz
8092017d3f [Docs] Fix typos and update files at API's Pipelines page 1 (#5744)
* Fix typos, update, add Copyright info, and trim trailing whitespace

* Update alt_diffusion.md

* Remove nonoperational demo

* Update docs/source/en/api/pipelines/consistency_models.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/en/api/pipelines/latent_consistency_models.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2023-11-14 10:36:20 -08:00
Steven Liu
bae14c8bcb [docs] Update training docs (#5512)
* first draft

* try hfoption syntax

* fix hfoption id

* add text2image

* fix tag

* feedback

* feedbacks

* add textual inversion

* DreamBooth

* lora

* controlnet

* instructpix2pix

* custom diffusion

* t2i

* separate training methods and models

* sdxl

* kandinsky

* wuerstchen

* light edits
2023-11-14 10:29:56 -08:00
Sayak Paul
ded93f798c [Refactor] refactor loaders.py to make it cleaner and leaner. (#5771)
* refactor loaders.py to make it cleaner and leaner.

* refactor loaders init

* inits.

* textual inversion to the init.

* inits.

* remove certain modules from the main init.

* AttnProcsLayers

* fix imports

* avoid circular import.

* fix circular import pt 2.

* address PR comments

* imports

* fix: imports.

* remove from main init for avoiding circular deps.

* remove spurious deps.

* fix-copies.

* fix imports.

* more debug

* more debug

* Apply suggestions from code review

* Apply suggestions from code review

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-11-14 12:54:28 +01:00
Sayak Paul
a5720e9e31 [PixArt-Alpha] Fix PixArt-Alpha pipeline when number of images to generate is more than 1 (#5752)
* does this fix things?

* attention mask use

* attention mask order

* better masking.

* add: test

* remove mask_feature

* test

* debug

* fix: tests

* deprecate mask_feature

* add deprecation test

* add slow test

* add print statements to retrieve the assertion values.

* fix for the 1024 fast test

* fix test

* fix the remaining

* Apply suggestions from code review

* more debug

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-11-14 12:45:35 +01:00
Richard Löwenström
16d500455b Use greater than or equal to in version comparisons for peft (#5785) 2023-11-14 12:17:51 +01:00
Yusuke Suzuki
210a07b13c fix exception around NSFW filter on flax stable diffusion (#5675) 2023-11-14 12:16:52 +01:00
Patrick von Platen
0a0ebc7cb4 [LCM] Better error message (#5788) 2023-11-14 12:08:15 +01:00
Patrick von Platen
81df9c85de Unwrap models everywhere (#5789)
more debug
2023-11-14 12:08:03 +01:00
takuoko
bfe94a3993 [Enhance] Support maybe_raise_or_warn for peft (#5653)
* Support maybe_raise_or_warn for peft

* fix by comment

* unwrap function
2023-11-14 11:40:35 +01:00
Lukas Kuhn
c9c5436c94 download_from_original_stable_diffusion_ckpt initializes correct default pipeline for SDXL (#5784)
* feat: sdxl will be automatically detected as pipeline_class

* fix: formatting

* fix: formatting with black

* fix: import pipeline wrongly sorted
2023-11-14 11:35:26 +01:00
Sourab Mangrulkar
9c8eca702c add lora delete feature (#5738)
* add lora delete feature

* added tests and changed condition

* deal with corner cases

* more corner cases

* rename to `delete_adapter_layers` for consistency

---------

Co-authored-by: younesbelkada <younesbelkada@gmail.com>
2023-11-14 10:51:13 +01:00
co63oc
069123f66e Update checkpoint_merger.py (#5780) 2023-11-14 10:46:11 +01:00
Sayak Paul
ed759f0aee [PixArt-Alpha] Introduce resolution binning (#5739)
* feat: add resolution binning

Co-authored-by: lawrence-cj <jschen@mail.dlut.edu.cn>

* rename

* debug

* add :test

* remove unused variable

* set resolution_binning to False.

---------

Co-authored-by: lawrence-cj <jschen@mail.dlut.edu.cn>
2023-11-14 08:34:59 +05:30
Long(Tony) Lian
5b231aa38b Fix the pipeline name in the examples for LMD+ pipeline. Add a colab link to pipeline README. (#5775)
* Fix the pipeline name in the examples for LMD+ pipeline

* Add LMD+ colab link

* Apply code formatting

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-11-14 07:43:37 +05:30
M. Tolga Cangöz
a359ff7644 [Docs] Fix typos and update files at API's Main Classes, Models, and Schedulers pages (#5720)
* Fix typos, update, add Copyright info, and trim trailing whitespaces

* Update docs/source/en/api/loaders.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/en/api/models/autoencoder_tiny.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/en/api/models/autoencoder_tiny.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2023-11-13 14:32:59 -08:00
Steven Liu
4b45a1e147 [docs] Use other checkpoints with inpaint (#5590)
* tip about inpaint checkpoints

* expand section

* feedback
2023-11-13 12:39:30 -08:00
Steven Liu
f782ca112a [docs] Callbacks (#5735)
* updates

* feedback
2023-11-13 12:11:07 -08:00
Steven Liu
80e78d2cac [docs] Custom community components (#5732)
* fixes

* feedback
2023-11-13 11:01:52 -08:00
JacobYuan7
4d3b4e00ed Update the reference for text_to_video.md (#5706)
* Update the reference for text_to_video.md

The original reference (VideoFusion) might be misleading. VideoFusion is not open-sourced. I am the co-first author of ModelScopeT2V. I change the referred paper to the right one.

* Update docs/source/en/api/pipelines/text_to_video.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

---------

Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2023-11-13 19:06:18 +01:00
Thuan H. Nguyen
8fcd52febb Correct code for distributed training of RealFill (#5740)
Correct code for distributed training
2023-11-13 19:01:15 +01:00
Nicolas Hug
0488810f61 Fix realfill example compatibility with latest torchvision version (#5736) 2023-11-13 18:55:17 +01:00
Patrick von Platen
ef7787ea59 make style 2023-11-13 18:54:53 +01:00
Jianqi Pan
1ce4b5f3e3 fix: fix forward function signature of controlnet reference_only pipeline example (#5717)
fix: ignore other args

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-11-13 18:54:21 +01:00
Kashif Rasul
c9f847a70f [Wuerstchen] fix for when USE_PEFT_BACKEND is True (#5704)
* fix for when USE_PEFT_BACKEND is True

* Update modeling_wuerstchen_prior.py

* revert change

* add lora tests
2023-11-13 18:37:55 +01:00
Younes Belkada
8789d0b6c7 fix styling issues on main (#5754)
fix styling issues
2023-11-13 20:05:15 +05:30
Long(Tony) Lian
b1fbef544c Add LLM-grounded Diffusion (LMD+) pipeline (#5634)
* Add LLM-grounded Diffusion (LMD+) pipeline

* Update the formatting

* Applied formatting
2023-11-11 12:51:05 +05:30
YiYi Xu
a3476d5bd6 Controlnet Img2img: pass height and width to image_processor.preprocess (#5665)
pass height and width to image_processor.preprocess

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2023-11-10 06:53:20 -10:00
Sayak Paul
1477865e48 post release v0.23.0 (#5730)
* post release

* fix: variant test

* up

* fix: test
2023-11-10 16:35:44 +05:30
aihao
1f87f83e68 add load_datasete data_dir parameter (#5747) 2023-11-09 23:03:14 -10:00
Sayak Paul
77ba494b29 [ConsistencyDecoder] fix: doc type (#5745)
fix: doc type
2023-11-10 14:05:22 +05:30
Garry Dolley
1328aeb274 [Docs] Clarify that these are two separate examples (#5734)
* [Docs] Running the pipeline twice does not appear to be the intention of these examples

One is with `cross_attention_kwargs` and the other (next line) removes it

* [Docs] Clarify that these are two separate examples

One using `scale` and the other without it
2023-11-09 14:26:14 -08:00
M. Tolga Cangöz
53a8439fd1 [Docs] Fix typos and update files at Optimization Page (#5674)
* Fix typos, update, trim trailing whitespace

* Trim trailing whitespaces

* Update docs/source/en/optimization/memory.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/en/optimization/memory.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update _toctree.yml

* Update adapt_a_model.md

* Reverse

* Reverse

* Reverse

* Update dreambooth.md

* Update instructpix2pix.md

* Update lora.md

* Update overview.md

* Update t2i_adapters.md

* Update text2image.md

* Update text_inversion.md

* Update create_dataset.md

* Update create_dataset.md

* Update create_dataset.md

* Update create_dataset.md

* Update coreml.md

* Delete docs/source/en/training/create_dataset.md

* Original create_dataset.md

* Update create_dataset.md

* Delete docs/source/en/training/create_dataset.md

* Add original file

* Delete docs/source/en/training/create_dataset.md

* Add original one

* Delete docs/source/en/training/text2image.md

* Delete docs/source/en/training/instructpix2pix.md

* Delete docs/source/en/training/dreambooth.md

* Add original files

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2023-11-09 13:48:57 -08:00
Suraj Patil
db2d8e76f8 Add LCM Scripts (#5727)
* add lcm scripts

* Co-authored-by: dgu8957@gmail.com
2023-11-09 17:29:12 +01:00
Sayak Paul
bc2ba004c6 [LCM] add: locm docs. (#5723)
* add: locm docs.

* correct path

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* up

* add

---------

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-11-09 17:21:26 +01:00
Patrick von Platen
3d7eaf83d7 LCM Add Tests (#5707)
* lcm add tests

* uP

* Fix all

* uP

* Add

* all

* uP

* uP

* uP

* uP

* uP

* uP

* uP
2023-11-09 15:45:11 +01:00
Patrick von Platen
bf406ea886 Correct consist dec (#5722)
* uP

* Update src/diffusers/models/consistency_decoder_vae.py

* uP

* uP
2023-11-09 13:10:24 +01:00
Will Berman
2fd46405cd consistency decoder (#5694)
* consistency decoder

* rename

* Apply suggestions from code review

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update src/diffusers/pipelines/consistency_models/pipeline_consistency_models.py

* uP

* Apply suggestions from code review

* uP

* uP

* uP

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-11-09 12:21:41 +01:00
Dhruv Nair
43346adc1f Install accelerate from PyPI in PR test runner (#5721)
install acclerate from pypi
2023-11-09 15:38:00 +05:30
takuoko
6110d7c95f [Bugfix] fix error of peft lora when xformers enabled (#5697)
* bugfix peft lora

* Apply suggestions from code review

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-11-08 22:55:36 +01:00
Dhruv Nair
65ef7a0c5c Fix prompt bug in AnimateDiff (#5702)
* fix prompt bug

* add test
2023-11-08 21:49:09 +01:00
apolinário
6e68c71503 Add adapter fusing + PEFT to the docs (#5662)
* Add adapter fusing + PEFT to the docs

* Update docs/source/en/tutorials/using_peft_for_inference.md

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update docs/source/en/tutorials/using_peft_for_inference.md

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update docs/source/en/tutorials/using_peft_for_inference.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/en/tutorials/using_peft_for_inference.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/en/tutorials/using_peft_for_inference.md

* Update docs/source/en/tutorials/using_peft_for_inference.md

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2023-11-08 18:26:53 +01:00
Patrick von Platen
17528afcba Fix styling issues (#5699)
* up

* up

* up

* Empty-Commit

* fix keyword argument call.

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-11-08 17:49:06 +05:30
Sayak Paul
78be400761 [PixArt-Alpha] fix mask feature condition. (#5695)
* fix mask feature condition.

* debug

* remove identical test

* set correct

* Empty-Commit
2023-11-08 17:42:46 +05:30
Patrick von Platen
c803a8f8c0 [LCM] Fix img2img (#5698)
* [LCM] Fix img2img

* make fix-copies

* make fix-copies

* make fix-copies

* up
2023-11-08 11:51:46 +01:00
Philipp Hasper
d384265df7 Fixed is_safetensors_compatible() handling of windows path separators (#5650)
Closes #4665
2023-11-08 11:51:15 +01:00
Kirill
11c125667b Fix the misaligned pipeline usage in dreamshaper docstrings (#5700)
Fix the misaligned pipeline usage
2023-11-08 11:49:03 +01:00
YiYi Xu
69996938cf speed up Shap-E fast test (#5686)
skip rendering

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2023-11-08 11:43:20 +01:00
Chi
9ae90593c0 Replacing the nn.Mish activation function with a get_activation function. (#5651)
* I added a new doc string to the class. This makes it easier for other developers to understand what it does and where it is used.

* Update src/diffusers/models/unet_2d_blocks.py

This change was suggested by the maintainer.

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update src/diffusers/models/unet_2d_blocks.py

Add suggested text

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update unet_2d_blocks.py

I changed the Parameter to Args text.

* Update unet_2d_blocks.py

proper indentation set in this file.

* Update unet_2d_blocks.py

a little bit of change in the act_fun argument line.

* I run the black command to reformat style in the code

* Update unet_2d_blocks.py

added a similar doc-string to match the one in the original diffusion repository.

* I removed the dummy variable defined in both the encoder and decoder.

* Now, I run black package to reformat my file

* Remove the redundant line from the adapter.py file.

* Used the black package to reformat my file

* Replacing the nn.Mish activation function with a get_activation function allows developers to more easily choose the right activation function for their task. Additionally, removing redundant variables can improve code readability and maintainability.

* I try to fix this: Fast tests for PRs / Fast PyTorch Models & Schedulers CPU tests (pull_request)

* Update src/diffusers/models/resnet.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
2023-11-08 07:58:21 +05:30
M. Tolga Cangöz
7942bb8dc2 [Docs] Fix typos, improve, update at Using Diffusers' Task page (#5611)
* Fix typos, improve, update; kandinsky doesn't want fp16 due to deprecation; ogkalu and kohbanye don't have safetensor; add make_image_grid for better visualization

* Update inpaint.md

* Remove erronous Space

* Update docs/source/en/using-diffusers/conditional_image_generation.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update img2img.md

* load_image() already converts to RGB

* Update depth2img.md

* Update img2img.md

* Update inpaint.md

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2023-11-07 15:42:20 -08:00
dg845
aab6de22c3 Improve LCMScheduler (#5681)
* Refactor LCMScheduler.step such that prev_sample == denoised at the last timestep in the schedule.

* Make timestep scaling when calculating boundary conditions configurable.

* Reparameterize timestep_scaling to be a multiplicative rather than division scaling.

* make style

* fix dtype conversion

* make style

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-11-07 18:48:18 +01:00
Sayak Paul
1dc231d14a [PixArt-Alpha] Support non-square images (#5672)
* debug

* support non-square images

* add: test

* fix: test

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-11-07 18:21:33 +01:00
Sayak Paul
84cd9e8d01 Make sure DDPM and diffusers can be used without Transformers (#5668)
* fix: import bug

* fix

* fix

* fix import utils for lcm

* fix: pixart alpha init

* Fix

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-11-07 17:38:12 +01:00
Sayak Paul
a8523bffa8 [PixArt-Alpha] fix mask_feature so that precomputed embeddings work with a batch size > 1 (#5677)
* fix embeds

* remove todo

* add: test

* better name
2023-11-07 17:12:47 +01:00
Dhruv Nair
97c8199dbb Explicit torch/flax dependency check (#5673)
* explicit torch dependency check

* update

* update

* update
2023-11-07 16:38:20 +01:00
Dhruv Nair
414d7c4991 Fix Basic Transformer Block (#5683)
* fix

* Update src/diffusers/models/attention.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-11-07 16:36:49 +01:00
Dhruv Nair
8ca179a0a9 Update free model hooks (#5680)
update free model hooks
2023-11-07 20:50:57 +05:30
Dhruv Nair
71f56c771a Model tests xformers fixes (#5679)
* fix model xformers test

* update
2023-11-07 20:50:41 +05:30
Dhruv Nair
6a89a6c93a Update custom diffusion attn processor (#5663)
update custom diffusion attn processor
2023-11-07 12:46:38 +05:30
Beinsezii
9bafef34bd Add Pixart to AUTO_TEXT2IMAGE_PIPELINES_MAPPING (#5664) 2023-11-07 07:45:56 +05:30
Sayak Paul
64603389da post release (v0.22.0) (#5658)
post release
2023-11-06 16:23:38 +01:00
Patrick von Platen
f05d75c076 [Custom Pipelines] Make sure that community pipelines can use repo revision (#5659)
fix custom pipelines
2023-11-06 15:11:48 +01:00
Sayak Paul
aec3de8bdb correct pipeline class name (#5652) 2023-11-06 14:08:27 +05:30
Sayak Paul
d61889fc17 [Feat] PixArt-Alpha (#5642)
* init pixart alpha pipeline

* fix: import

* script

* script

* script

* add: vae to the pipeline

* add: vae_scale_factor

* add: checkpoint_path

* clean conversion script a bit.

* size embeddings.

* fix: size embedding

* update scrip

* support for interpolation of position embedding.

* support for conditioning.

* ..

* ..

* ..

* final layer

* final layer

* align if encode_prompt

* support for caption embedding

* refactor

* refactor

* refactor

* start cross attention

* start cross attention

* cross_attention_dim

* cross

* cross

* support for resolution and aspect_ratio

* support for caption projection

* refactor patch embeddings

* batch_size

* up

* commit

* commit

* commit.

* squeeze

* squeeze

* squeeze

* squeeze

* squeeze

* squeeze

* squeeze

* squeeze

* squeeze

* squeeze

* squeeze

* squeeze.

* squeeze.

* fix final block./

* fix final block./

* fix final block./

* clean

* fix: interpolation scale.

* debugging'

* debugging'

* debugging'

* debugging'

* debugging'

* debugging'

* debugging'

* debugging'

* debugging'

* debugging'

* debugging'

* debugging'

* debugging'

* debugging'

* debugging'

* debugging'

* debugging'

* debugging'

* debugging'

* debugging'

* debugging'

* debugging'

* debugging'

* debugging'

* debugging'

* debugging'

* debugging'

* debugging'

* debugging'

* debugging'

* debugging'

* debugging'

* debugging'

* debugging'

* debugging'

* debugging'

* debugging'

* debugging'

* debugging'

* debugging'

* debugging'

* debugging'

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* make --checkpoint_path non-required.

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* remove num_tokens

* timesteps -> timestep

* timesteps -> timestep

* timesteps -> timestep

* timesteps -> timestep

* timesteps -> timestep

* timesteps -> timestep

* debug

* debug

* update conversion script.

* update conversion script.

* update conversion script.

* debug

* debug

* debug

* clean

* debug

* debug

* debug

* debug

* debug

* debug

* debug

* debug

* debug

* debug

* debug

* debug

* fix

* fix

* fix

* fix

* fix

* fix

* fix

* fix

* fix

* fix

* fix

* fix

* fix

* clean

* fix

* fix

* boom

* boom

* some changes

* boom

* save

* up

* remove i

* fix more tests

* DPMSolverMultistepScheduler

* fix

* offloading

* fix conversion script

* fix conversion script

* remove print

* remove support for negative prompt embeds.

* typo.

* remove extra kwargs

* bring conversion script to where it was

* fix

* trying my luck again

* trying my luck again

* again

* again

* again

* clean up

* up

* up

* update example

* support for 512

* remove spacing

* finalize docs.

* test debug

* fix: assertion values.

* debug

* debug

* debug

* fix: repeat

* remove prints.

* Apply suggestions from code review

* Apply suggestions from code review

* Correct more

* Apply suggestions from code review

* Change all

* Clean more

* fix more

* Fix more

* Fix more

* Correct more

* address patrick's comments.

* remove unneeded args

* clean up pipeline.

* style

* make the use of additional conditions better conditioned.

* None better

* dtype

* height and width validation

* add a note about size brackets.

* fix

* spit out slow test outputs.

* fix?

* fix optional test

* fix more

* remove unneeded comment

* debug

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-11-06 08:40:04 +01:00
YiYi Xu
2b23ec82e8 add callbacks to denoising step (#5427)
* draft1

* update

* style

* move to the end of loop

* update

* update callbak_on_step_end_inputs

* Revert "update"

This reverts commit 5f9b153183.

* Revert "update callbak_on_step_end_inputs"

This reverts commit 44889f4dab.

* update

* update test required_optional_params

* remove self.lora_scale

* img2img

* inpaint

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* fix

* apply feedbacks on img2img + inpaint: keep only important pipeline attributes

* depth

* pix2pix

* make _callback_tensor_inputs an class variable so that we can use it for testing

* add a basic test for callback

* add a read-only tensor input timesteps + fix tests

* add second test for callback cfg

* sdxl

* sdxl img2img

* sdxl inpaint

* kandinsky prior

* kandinsky decoder

* kandinsky img2img + combined

* kandinsky inpaint

* fix copies

* fix

* consistent default inputs

* fix copies

* wuerstchen_prior prior

* test_wuerstchen_decoder + fix test for prior

* wuerstchen_combined pipeline + skip tests

* skip test for kandinsky combined

* lcm

* remove timesteps etc

* add doc string

* copies

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* make style and improve tests

* up

* up

* fix more

* fix cfg test

* tests for callbacks

* fix for real

* update

* lcm img2img

* add doc

* add doc page to index

---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2023-11-05 20:00:41 +01:00
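As a quick illustration of the per-step callback API added in #5427, the sketch below logs the latents at the end of every denoising step; the checkpoint and prompt are placeholders, and the callback may also modify the tensors it receives before returning them.

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def log_latents(pipeline, step, timestep, callback_kwargs):
    # Runs at the end of each denoising step; returned tensors overwrite pipeline state.
    latents = callback_kwargs["latents"]
    print(step, int(timestep), float(latents.std()))
    return callback_kwargs

image = pipe(
    "a photo of an astronaut riding a horse",
    callback_on_step_end=log_latents,
    callback_on_step_end_tensor_inputs=["latents"],
).images[0]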
Chi
080081bded Remove the redundant line from the adapter.py file. (#5618)
* I added a new docstring to the class. This makes it easier for other developers to understand what it does and where it is used.

* Update src/diffusers/models/unet_2d_blocks.py

This change was suggested by the maintainer.

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update src/diffusers/models/unet_2d_blocks.py

Add suggested text

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update unet_2d_blocks.py

I changed the "Parameter" text to "Args".

* Update unet_2d_blocks.py

Set proper indentation in this file.

* Update unet_2d_blocks.py

A small change to the act_fun argument line.

* I ran the black command to reformat the code style

* Update unet_2d_blocks.py

Added a docstring similar to the one in the original diffusion repository.

* I removed the dummy variable defined in both the encoder and decoder.

* Now I ran the black package to reformat my file

* Remove the redundant line from the adapter.py file.

* Used the black package to reformat my file

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2023-11-03 22:02:36 -10:00
Sayak Paul
dd9a5caf61 [Core] support for tiny autoencoder in img2img (#5636)
* support for tiny autoencoder in img2img

Co-authored-by: slep0v <37597789+slep0v@users.noreply.github.com>

* copy fix

* line space

* line space

* clean up

* spit out expected value

* spit out expected value

* assertion values.

* assertion values.

---------

Co-authored-by: slep0v <37597789+slep0v@users.noreply.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-11-03 15:31:53 +01:00
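A minimal sketch of the feature referenced above (#5636): swapping the tiny autoencoder into an img2img pipeline for faster decoding. The checkpoint names and image URL are placeholders.

import torch
from diffusers import AutoencoderTiny, StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
# Replace the full VAE with the tiny autoencoder for much faster encode/decode.
pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesd", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

init_image = load_image("https://example.com/sketch.png").resize((512, 512))
image = pipe("a fantasy landscape, detailed", image=init_image, strength=0.75).images[0]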
M. Tolga Cangöz
a35e72b032 [Docs] Fix typos, improve, update at Using Diffusers' Tecniques page (#5627)
Fix typos, improve, update; better visualization
2023-11-03 13:51:41 +01:00
dg845
beb8f216ed Clean up LCM Pipeline and Test Code. (#5641)
* Clean up LCM pipeline and pipeline test code.

* Add comment for LCM img2img sampling loop.
2023-11-03 13:50:48 +01:00
Ryan Dick
7ad70cee74 Model loading speed optimization (#5635)
Move unchanging operation out of loop for speed benefit.
2023-11-03 13:48:13 +01:00
Sayak Paul
60c5eb5877 [Easy] clean up the LCM docstrings. (#5637)
* clean up the LCM docstrings.

* clean up

* fix: examples

* Apply suggestions from code review
2023-11-03 12:14:48 +01:00
YiYi Xu
d122206466 fix a bug in AutoPipeline.from_pipe() when creating a controlnet pipeline from an existing controlnet (#5638)
fix

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2023-11-03 12:14:19 +01:00
Sayak Paul
c84982a804 [Easy] Minor AnimateDiff Doc nits (#5640)
minor
2023-11-03 16:27:54 +05:30
Dhruv Nair
84e7bb875d Update animatediff docs to include section on Motion LoRAs (#5639)
update animatediff docs
2023-11-03 15:53:59 +05:30
Patrick von Platen
072e00897a [LCM] Make sure img2img works (#5632)
* [LCM] Clean up implementations

* Add all

* correct more

* correct more

* finish

* up
2023-11-02 19:50:47 +01:00
M. Tolga Cangöz
b91d5ddd1a [Docs] Fix typos, improve, update at Using Diffusers' Loading & Hub page (#5584)
* Fix typos, improve, update

* Change to trending and apply some Grammarly fixes

* Grammarly fixes

* Update loading_adapters.md

* Update loading_adapters.md

* Update other-formats.md

* Update push_to_hub.md

* Update loading_adapters.md

* Update loading.md

* Update docs/source/en/using-diffusers/push_to_hub.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update schedulers.md

* Update docs/source/en/using-diffusers/loading.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/en/using-diffusers/loading_adapters.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update A1111 LoRA files part

* Update other-formats.md

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2023-11-02 11:05:43 -07:00
Dhruv Nair
2a8cf8e39f Animatediff Proposal (#5413)
* draft design

* clean up

* clean up

* clean up

* clean up

* clean up

* clean  up

* clean up

* clean up

* clean up

* update pipeline

* clean up

* clean up

* clean up

* add tests

* change motion block

* clean up

* clean up

* clean up

* update

* update

* update

* update

* update

* update

* update

* update

* clean up

* update

* update

* update model test

* update

* update

* update

* update

* make style

* update

* fix embeddings

* update

* merge upstream

* make fix-copies

* fix bug

* fix mistake

* add docs

* update

* clean up

* update

* clean up

* clean up

* fix docstrings

* fix docstrings

* update

* update

* clean  up

* update
2023-11-02 15:04:03 +01:00
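A rough usage sketch of the AnimateDiff design merged in #5413, assuming the public MotionAdapter/AnimateDiffPipeline API; the adapter and base-model checkpoints named here are placeholders.

import torch
from diffusers import AnimateDiffPipeline, MotionAdapter
from diffusers.utils import export_to_gif

# A motion module is loaded separately and plugged into a Stable Diffusion base model.
adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2")
pipe = AnimateDiffPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", motion_adapter=adapter, torch_dtype=torch.float16
).to("cuda")

output = pipe(
    "a corgi running on the beach, best quality",
    num_frames=16,
    num_inference_steps=25,
    guidance_scale=7.5,
)
export_to_gif(output.frames[0], "animation.gif")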
M. Tolga Cangöz
9ced7844da [Docs] Fix typos, improve, update at Conceptual Guides page (#5585)
* Fix typos, improve, update

* Update docs/source/en/conceptual/contribution.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/en/conceptual/contribution.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/en/conceptual/philosophy.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update philosophy.md

* Update philosophy.md

* Update docs/source/en/conceptual/philosophy.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/en/using-diffusers/controlling_generation.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/en/using-diffusers/controlling_generation.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Remove e.g.; some Grammarly fixes

* Update docs/source/en/conceptual/philosophy.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update contribution.md

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2023-11-02 12:55:23 +01:00
Patrick von Platen
9723f8a557 [Tests] Fix cpu offload test (#5626)
* fix more

* fix more
2023-11-02 12:49:58 +01:00
Sayak Paul
b81f709fb6 [remote code] document trust remote code. (#5620)
document trust remote code.
2023-11-02 12:02:31 +01:00
Steven Liu
75ea54a151 [docs] Kandinsky guide (#4555)
* kandinsky 2.1 first draft

* add kandinsky 2.2

* fix identical section headers

* try hfoptions syntax

* add img2img

* add inpaint

* add interpolate

* fix tag

* more cleanups

* typo

* update hfoptions id

* align hfoptions tags
2023-11-01 15:36:22 -07:00
Patrick von Platen
c0f0582651 [SDXL Adapter] Revert load lora (#5615)
* fix

* fix
2023-11-01 22:18:58 +01:00
M. Tolga Cangöz
b81c69e489 [Docs] Fix typos, improve, update at Get Started page (#5587)
* Fix typos, improve, update

* Update _toctree.yml

* Update docs/README.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/README.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Apply Grammarly fixes

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2023-11-01 22:11:57 +01:00
Younes Belkada
02ba50c610 [PEFT / LoRA] Fix civitai bug when network alpha is an empty dict (#5608)
* fix civitai bug

* add test

* up

* fix test

* added slow test.

* style

* Update src/diffusers/utils/peft_utils.py

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* Update src/diffusers/utils/peft_utils.py

---------

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2023-11-01 22:08:22 +01:00
Patrick von Platen
4f2bf67355 Revert "Fix the order of width and height of original size in SDXL training script" (#5614)
Revert "Fix the order of width and height of original size in SDXL training script (#5382)"

This reverts commit 45db049973.
2023-11-01 22:04:47 +01:00
Chi
29cf163b95 Remove Redundant Variables from Encoder and Decoder (#5569)
* I added a new docstring to the class. This makes it easier for other developers to understand what it does and where it is used.

* Update src/diffusers/models/unet_2d_blocks.py

This change was suggested by the maintainer.

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update src/diffusers/models/unet_2d_blocks.py

Add suggested text

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update unet_2d_blocks.py

I changed the "Parameter" text to "Args".

* Update unet_2d_blocks.py

Set proper indentation in this file.

* Update unet_2d_blocks.py

A small change to the act_fun argument line.

* I ran the black command to reformat the code style

* Update unet_2d_blocks.py

Added a docstring similar to the one in the original diffusion repository.

* I removed the dummy variable defined in both the encoder and decoder.

* Now I ran the black package to reformat my file

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2023-11-01 21:50:33 +01:00
Patrick von Platen
839c2a5ece fix 2023-11-01 21:39:30 +01:00
ilisparrow
5712c3d2ef [Core] enable lora for sdxl adapters too and add slow tests. (#5555)
* Enable lora for sdxl adapters too.

Issue #5516

* fix: assertion values.

* Use numpy_cosine_similarity_distance on the arrays

Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>

* Use numpy_cosine_similarity_distance on the arrays

Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>

* Changed imports orders to pass tests

Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>

---------

Co-authored-by: Ilias A <iliasamri00@gmail.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-11-01 21:25:38 +01:00
clarencechen
151998e1c2 Update final CPU offloading code for more diffusion pipelines (#5589)
* Update final model offload for more pipelines

Add test to ensure all pipeline components are returned to CPU after
execution with model offloading

* Add comment to explain early UNet offload in Text-to-Video pipeline

* Style
2023-11-01 21:22:56 +01:00
Steven Liu
d1eb14bc35 [docs] Lu lambdas (#5602)
lu lambdas
2023-11-01 11:47:11 -07:00
M. Tolga Cangöz
5c75a5fbc4 [Docs] Fix typos, improve, update at Tutorials page (#5586)
* Fix typos, improve, update

* Update autopipeline.md

* Update docs/source/en/tutorials/using_peft_for_inference.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/en/tutorials/using_peft_for_inference.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/en/tutorials/using_peft_for_inference.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2023-11-01 10:40:47 -07:00
M. Tolga Cangöz
442017ccc8 [Docs] Fix typos (#5583)
* Add Copyright info

* Fix typos, improve, update

* Update deepfloyd_if.md

* Update ldm3d_diffusion.md

* Update opt_overview.md
2023-10-31 10:04:08 -07:00
Dhruv Nair
f1d052c5b8 Update docker image for xformers (#5597)
update docker image for xformers
2023-10-31 15:02:10 +05:30
YiYi Xu
ce9484b139 fix a mistake in text2image training script for kandinsky2.2 (#5244)
fix

Co-authored-by: yiyixuxu <yixu@Yis-MacBook-Pro.local>
2023-10-30 23:06:16 -10:00
Jincheng Miao
ed00ead345 [Community Pipelines] add textual inversion support for stable_diffusion_ipex (#5571) 2023-10-31 11:54:16 +05:30
TimothyAlexisVass
f0b2f6ce05 Fix divide by zero RuntimeWarning (#5543) 2023-10-31 11:39:08 +05:30
Younes Belkada
32fea1cc9b [core / PEFT ]Bump transformers min version for PEFT integration (#5579)
Update constants.py
2023-10-30 19:35:46 +01:00
Aryan V S
bb46be2f18 Fix incorrect loading of custom pipeline (#5568)
* update

* update

* update

* update
2023-10-30 19:32:11 +01:00
Cheng Lu
ac7b1716b7 Stabilize DPM++, especially for SDXL and SDE-DPM++ (#5541)
* stabilize dpmpp for sdxl by using euler at the final step

* add lu's uniform logsnr time steps

* add test

* fix check_copies

* fix tests

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-10-30 06:36:53 -10:00
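If I read this change (#5541) correctly, it exposes two scheduler options, euler_at_final and use_lu_lambdas, on DPMSolverMultistepScheduler; a hedged sketch of switching them on for SDXL (checkpoint and prompt are placeholders):

import torch
from diffusers import DPMSolverMultistepScheduler, StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Use an Euler step at the final timestep and Lu's uniform-logSNR timesteps
# to reduce artifacts with DPM++ on SDXL (per the commit message above).
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, euler_at_final=True, use_lu_lambdas=True
)
image = pipe("an astronaut riding a horse on the moon", num_inference_steps=25).images[0]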
Peter @sHTiF Stefcek
3fc10ded00 add fix to be able use StableDiffusionXLAdapterPipeline.from_single_file (#5547) 2023-10-30 16:46:44 +01:00
Thuan H. Nguyen
5b087e82d1 Add realfill (#5456)
* Add realfill

* Move realfill folder

* Fix some format issues
2023-10-30 15:21:40 +01:00
Younes Belkada
8f3100db9f [PEFT / Tests] Add peft slow tests on push (#5419)
* add peft slow tests workflow

* Update .github/workflows/push_tests.yml

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-10-30 14:27:00 +01:00
Patrick von Platen
3ec828d6dd Fix moved _expand_mask function (#5581)
* finish

* finish
2023-10-30 14:25:31 +01:00
Gabriel de Souza
9135e54e76 docs: initial pt translation (#5549)
* docs: initial pt translation

* docs: add pt build to github workflow and fix some missing translations
2023-10-27 10:51:35 -07:00
jiaqiw09
e140c0562e fix 'find_unused_parameters' error when running on multiple GPUs (#5355)
* fix 'find_unused_parameters' error when running on multiple GPUs or NPUs

* fix code check of importing module by its alphabetic order

---------

Co-authored-by: jiaqiw <wangjiaqi50@huawei.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2023-10-27 22:49:14 +05:30
Steven Liu
595ba6f786 [docs] Internal classes API (#5513)
* internal classes api

* add internal class overview

* fix toctree
2023-10-27 09:48:41 -07:00
Sayak Paul
798591346d [Core] fix FreeU disable method (#5552)
* disable freeu debug

* debug

* potentially fix.

* finish

* manually remove the spaces

* remove tab
2023-10-27 21:29:11 +05:30
YiYi Xu
f912f39b50 correct checkpoint in kandinsky2.2 doc page (#5550)
update checkpoint

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2023-10-27 08:49:15 +05:30
nickkolok
0d4b459be6 Update train_dreambooth.py - fix typos (#5539) 2023-10-26 13:35:05 -07:00
Patrick von Platen
cee1cd6e9c [Remote code] Add functionality to run remote models, schedulers, pipelines (#5472)
* upload custom remote poc

* up

* make style

* finish

* better name

* Apply suggestions from code review

* Update tests/pipelines/test_pipelines.py

* more fixes

* remove ipdb

* more fixes

* fix more

* finish tests

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-10-26 17:11:49 +02:00
p1kit
5b448a5e5d [Tests] Optimize test configurations for faster execution (#5535)
Optimize test configurations for faster execution
2023-10-26 16:02:34 +05:30
Patrick von Platen
a69ebe5527 [Tests] Speed up expert of mixture tests (#5533)
* [Tests] Speed up expert of mixture tests

* make style
2023-10-26 09:42:27 +02:00
Chi
ce7f334472 Remove multiple if-else statement in the get_activation function. (#5446)
* I added a new docstring to the class. This makes it easier for other developers to understand what it does and where it is used.

* Update src/diffusers/models/unet_2d_blocks.py

This change was suggested by the maintainer.

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update src/diffusers/models/unet_2d_blocks.py

Add suggested text

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update unet_2d_blocks.py

I changed the "Parameter" text to "Args".

* Update unet_2d_blocks.py

Set proper indentation in this file.

* Update unet_2d_blocks.py

A small change to the act_fun argument line.

* I ran the black command to reformat the code style

* Update unet_2d_blocks.py

Added a docstring similar to the one in the original diffusion repository.

* I use the lower() method in the activation function.

* Replace multiple if-else statements with a dictionary of activation functions, and call one if statement to retrieve the appropriate function.

* I used the black package to reformat my file

* I defined the ACTIVATION_FUNCTIONS variable outside of the function

* Convert the activation function variable to lower case

* First, I resolved the conflict issue. Then, I ran the Black package to reformat my file.

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-10-26 09:36:30 +05:30
Ran Ran
8959c5b9de Add from_pt flag to enable model from PT (#5501)
* Add from_pt flag to enable model from PT

* Format the file

* Reformat the file
2023-10-25 23:07:34 +02:00
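The entry above (#5501) adds a from_pt flag for loading PyTorch weights into the Flax/JAX classes; assuming it follows the usual Hugging Face convention, usage would look roughly like this (the checkpoint is a placeholder):

import jax.numpy as jnp
from diffusers import FlaxStableDiffusionPipeline

# from_pt=True converts PyTorch weights on the fly for a repo without Flax weights.
pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", from_pt=True, dtype=jnp.float32
)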
Steven Liu
bc8a08f67c [docs] Loader docs (#5473)
* first draft

* make fix-copies

* add peft section

* manual fix

* make fix-copies again

* manually revert changes to other files
2023-10-25 09:45:05 -07:00
Yi-Xuan XU
dbce14da56 fix a bug on torch_dtype argument in from_single_file of ControlNetModel (#5528)
fix wrong parameter
2023-10-25 17:29:56 +02:00
RampagingSloth
71ad02607d Fix missing punctuation in PHILOSOPHY.md (#5530)
Fix missing punctuation.
2023-10-25 17:29:34 +02:00
Patrick von Platen
dd981256ad make fix-copies 2023-10-25 17:19:38 +02:00
Aryan V S
0c9f174d59 Improve typehints and docs in diffusers/models (#5391)
* improvement: add typehints and docs to src/diffusers/models/attention_processor.py

* improvement: add typehints and docs to src/diffusers/models/vae.py

* improvement: add missing docs in src/diffusers/models/vq_model.py

* improvement: add typehints and docs to src/diffusers/models/transformer_temporal.py

* improvement: add typehints and docs to src/diffusers/models/t5_film_transformer.py

* improvement: add type hints to src/diffusers/models/unet_1d_blocks.py

* improvement: add missing type hints to src/diffusers/models/unet_2d_blocks.py

* fix: CI error (make fix-copies required)

* fix: CI error (make fix-copies required again)

---------

Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2023-10-25 17:19:15 +02:00
Patrick von Platen
d420d71398 make style 2023-10-25 16:12:14 +02:00
Logan
a1fad8286f Add a new community pipeline (#5477)
* Add a new community pipeline

examples/community/latent_consistency_img2img.py

which can be called like this

import torch
from PIL import Image
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "SimianLuo/LCM_Dreamshaper_v7", custom_pipeline="latent_consistency_txt2img", custom_revision="main"
)

# To save GPU memory, torch.float16 can be used, but it may compromise image quality.
pipe.to(torch_device="cuda", torch_dtype=torch.float32)

# LatentConsistencyModelPipeline_img2img is defined in examples/community/latent_consistency_img2img.py
img2img = LatentConsistencyModelPipeline_img2img(
    vae=pipe.vae,
    text_encoder=pipe.text_encoder,
    tokenizer=pipe.tokenizer,
    unet=pipe.unet,
    # scheduler=pipe.scheduler,
    scheduler=None,
    safety_checker=None,
    feature_extractor=pipe.feature_extractor,
    requires_safety_checker=False,
)

prompt = "a fantasy landscape"  # placeholder prompt
strength = 0.5  # placeholder strength
img = Image.open("thisismyimage.png")

result = img2img(prompt, img, strength, num_inference_steps=4)

* Apply suggestions from code review

Fix name formatting for scheduler

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* update readme (and run formatter on latent_consistency_img2img.py)

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-10-25 16:11:56 +02:00
Patrick von Platen
dc943eb99d [Schedulers] Fix 2nd order other than heun (#5526)
* [Schedulers] Fix 2nd order other than heun

* Apply suggestions from code review
2023-10-25 14:39:56 +02:00
YiYi Xu
0fc25715a1 fix a bug in 2nd order schedulers when using in ensemble of experts config (#5511)
* fix

* fix copies

* remove heun from tests

* add back heun and fix the tests to include 2nd order

* fix the other test too

* Apply suggestions from code review

* Apply suggestions from code review

* Apply suggestions from code review

* make style

* add more comments

---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-10-25 12:34:05 +02:00
AnyISalIn
de71fa59f5 fix error of peft lora when xformers enabled (#5506)
Signed-off-by: AnyISalIn <anyisalin@gmail.com>
2023-10-25 10:58:15 +05:30
Chengxi Guo
dcbfe662ef fix typo (#5505)
Signed-off-by: mymusise <mymusise1@gmail.com>
2023-10-24 17:14:05 -07:00
dg845
958e17dada Add Latent Consistency Models Pipeline (#5448)
* initial commit for LatentConsistencyModelPipeline and LCMScheduler based on the community pipeline

* Add callback and freeu support.

* apply suggestions from review

* Clean up LCMScheduler

* Remove timeindex argument to LCMScheduler.step.

* Add support for clipping or thresholding the predicted original sample.

* Remove unused methods and arguments in LCMScheduler.

* Improve comment about (lack of) negative prompt support.

* Change input guidance_scale to match the StableDiffusionPipeline (Imagen) CFG formulation.

* Move lcm_origin_steps from pipeline __call__ to LCMScheduler.__init__/config (as origin_steps).

* Fix typo when clipping/thresholding in LCMScheduler.

* Add some initial LCMScheduler tests.

* add type annotations from review

* Fix type annotation bug.

* Override test_add_noise_device in LCMSchedulerTest since hardcoded timesteps doesn't work under default settings.

* Add generator argument pipeline prepare_latents call.

* Cast LCMScheduler.timesteps to long in set_timesteps.

* Add onestep and multistep full loop scheduler tests.

* Set default height/width to None and don't hardcode guidance scale embedding dim.

* Add initial LatentConsistencyPipeline fast and slow tests.

* Add initial documentation for LatentConsistencyModelPipeline and LCMScheduler.

* Make remaining failing fast tests pass.

* make style

* Make original_inference_steps configurable from pipeline __call__ again.

* make style

* Remove guidance_rescale arg from pipeline __call__ since LCM currently doesn't support CFG.

* Make LCMScheduler defaults match config of LCM_Dreamshaper_v7 checkpoint.

* Fix LatentConsistencyPipeline slow tests and add dummy expected slices.

* Add checks for original_steps in LCMScheduler.set_timesteps.

* make fix-copies

* Improve LatentConsistencyModelPipeline docs.

* Apply suggestions from code review

Co-authored-by: Aryan V S <avs050602@gmail.com>

* Apply suggestions from code review

Co-authored-by: Aryan V S <avs050602@gmail.com>

* Apply suggestions from code review

Co-authored-by: Aryan V S <avs050602@gmail.com>

* Update src/diffusers/schedulers/scheduling_lcm.py

* Apply suggestions from code review

Co-authored-by: Aryan V S <avs050602@gmail.com>

* finish

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Aryan V S <avs050602@gmail.com>
2023-10-24 21:06:02 +02:00
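A minimal sketch of running the LatentConsistencyModelPipeline added in #5448; only a handful of inference steps are needed, and guidance is applied through an embedding rather than classifier-free guidance (the prompt is a placeholder).

import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "SimianLuo/LCM_Dreamshaper_v7", torch_dtype=torch.float32
).to("cuda")

# LCM checkpoints are distilled for very few steps (e.g. 2-8).
image = pipe(
    "a photo of a red panda, highly detailed, 8k",
    num_inference_steps=4,
    guidance_scale=8.0,
).images[0]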
Steven Liu
7c3a75a1ce [docs] General updates (#5378)
* first draft

* feedback

* feedback
2023-10-24 11:51:55 -07:00
Isamu Isozaki
b8896a154a Japanese docs (#5478)
* Finished _toctree.yml and index.md

* Finished installation.md

* Properly finished installation.md and almost finished quicktour

* Finished quicktour

* Finished stable diffusion doc

* Fixed _toctree.yml

* Fixed requests

* Fix country code

* Properly push
2023-10-24 11:30:04 -07:00
Bowen Bao
c7617e482a Register BaseOutput subclasses as supported torch.utils._pytree nodes (#5459)
* Register BaseOutput subclasses as supported torch.utils._pytree nodes

* lint

---------

Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2023-10-24 15:01:47 +05:30
Sayak Paul
77241c48af [Core] Refactor activation and normalization layers (#5493)
* move out the activations.

* move normalization layers.

* add doc.

* add doc.

* fix: paths

* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* style

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2023-10-24 08:49:43 +05:30
Abhishar Sinha
096f84b05f Fixed autoencoder typo (#5500) 2023-10-23 13:59:00 -07:00
YiYi Xu
9e1edfc1ad fix a few issues in controlnet inpaint pipelines (#5470)
* add

* Update docs/source/en/api/pipelines/controlnet_sdxl.md

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-10-23 09:24:51 -10:00
Steven Liu
6b06c30a65 [docs] Fix links (#5499)
fix links
2023-10-23 20:39:29 +02:00
zideliu
188d864fa3 [BUG] in transformer_temporal Fix Bugs (#5496)
Fix Bugs
2023-10-23 20:38:41 +02:00
Kyunghwan Kim
6e608d8a35 Fix typo in controlnet docs (#5486) 2023-10-23 20:36:35 +02:00
Dhruv Nair
33293ed504 Fix Slow Tests (#5469)
fix tests
2023-10-23 20:24:31 +02:00
Sayak Paul
48ce118d1c [torch.compile] fix graph break problems partially (#5453)
* fix: controlnet graph?

* fix: sample

* fix:

* remove print

* styling

* fix-copies

* prevent more graph breaks?

* prevent more graph breaks?

* see?

* revert.

* compilation.

* propagate changes to controlnet sdxl pipeline too.

* add: clean version checking.
2023-10-23 23:41:52 +05:30
Patrick von Platen
1ade42f729 make style 2023-10-23 19:43:54 +02:00
Shyam Marjit
677df5ac12 fixed SDXL text encoder training bug #5016 (#5078)
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-10-23 19:43:43 +02:00
Andrei Filatov
16851efa0f Update README.md (#5497)
Right now, only the "main" branch has this community pipeline code, so it is added manually into the pipeline
2023-10-23 18:57:43 +02:00
Ryan Dick
0eac9cd04e Make T2I-Adapter downscale padding match the UNet (#5435)
* Update get_dummy_inputs(...) in T2I-Adapter tests to take image height and width as params.

* Update the T2I-Adapter unit tests to run with the standard number of UNet down blocks so that all T2I-Adapter down blocks get exercised.

* Update the T2I-Adapter down blocks to better match the padding behavior of the UNet.

* Revert "Update the T2I-Adapter unit tests to run with the standard number of UNet down blocks so that all T2I-Adapter down blocks get exercised."

This reverts commit 6d4a060a34.

* Create utility functions for testing the T2I-Adapter downscaling behavior.

* (minor) Improve readability with an intermediate named variable.

* Statically parameterize  T2I-Adapter test dimensions rather than generating them dynamically.

* Fix static checks.

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-10-23 18:52:31 +02:00
Younes Belkada
bc7a4d4917 [PEFT] Fix scale unscale with LoRA adapters (#5417)
* fix scale unscale v1

* final fixes + CI

* fix slow test

* oops

* fix copies

* oops

* oops

* fix

* style

* fix copies

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-10-21 22:17:18 +05:30
Vishnu V Jaddipal
8dba180885 Added support to create asymmetrical U-Net structures (#5400)
* Added args, kwargs to ```U

* Add UNetMidBlock2D as a supported mid block type

* Fix extra init input for UNetMidBlock2D, change allowed types for Mid-block init

* Update unet_2d_condition.py

* Update unet_2d_condition.py

* Update unet_2d_condition.py

* Update unet_2d_condition.py

* Update unet_2d_condition.py

* Update unet_2d_condition.py

* Update unet_2d_condition.py

* Update unet_2d_condition.py

* Update unet_2d_blocks.py

* Update unet_2d_blocks.py

* Update unet_2d_blocks.py

* Update unet_2d_condition.py

* Update unet_2d_blocks.py

* Updated docstring, increased check strictness

Updated the docstring for ```UNet2DConditionModel``` to include ```reverse_transformer_layers_per_block``` and updated checking for nested list type ```transformer_layers_per_block```

* Add basic shape-check test for asymmetrical unets

* Update src/diffusers/models/unet_2d_blocks.py

Removed blank line

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update unet_2d_condition.py

Remove blank space

* Update unet_2d_condition.py

Changed docstring for `mid_block_type`

* Fixed docstring and wrong default value

* Reformat with black

* Reformat with necessary commands

* Add UNetMidBlockFlat to versatile_diffusion/modeling_text_unet.py to ensure consistency

* Removed args, kwargs, use on mid-block type

* Make fix-copies

* Update src/diffusers/models/unet_2d_condition.py

Wrap into single line

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* make fix-copies

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-10-20 11:42:28 +02:00
kesimeg
5366db5df1 fix unet2d ignoring class_labels (#5401)
* fix unet2d ignoring class_labels

* unet2.py error message updated

* style and quality changes

---------

Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2023-10-20 14:21:46 +05:30
Patrick von Platen
e516858886 make style 2023-10-18 22:48:49 +02:00
Chi
36a0bacc29 Beautiful Doc string added into the UNetMidBlock2D class. (#5389)
* I added a new docstring to the class. This makes it easier for other developers to understand what it does and where it is used.

* Update src/diffusers/models/unet_2d_blocks.py

This change was suggested by the maintainer.

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update src/diffusers/models/unet_2d_blocks.py

Add suggested text

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update unet_2d_blocks.py

I changed the "Parameter" text to "Args".

* Update unet_2d_blocks.py

Set proper indentation in this file.

* Update unet_2d_blocks.py

A small change to the act_fun argument line.

* I ran the black command to reformat the code style

* Update unet_2d_blocks.py

Added a docstring similar to the one in the original diffusion repository.

* Update unet_2d_blocks.py

Added a detailed docstring to the UNetMidBlock2D class.

* Update unet_2d_blocks.py

I replaced the definitions of the resnet_time_scale_shift and resnet_groups parameters.

* Update unet_2d_blocks.py

I removed extra sentences from the resnet_groups argument description.

* Update unet_2d_blocks.py

I replaced my definition of the attention_head_dim parameter with the maintainer's definition.

* I used the black package to reformat my file

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
2023-10-18 22:48:36 +02:00
Patrick von Platen
9ad0530fea make style 2023-10-18 22:39:29 +02:00
linjiapro
45db049973 Fix the order of width and height of original size in SDXL training script (#5382)
* wip

* wip

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-10-18 22:38:52 +02:00
Liang Hou
a68f5062fb style(sdxl): remove identity assignments (#5418) 2023-10-18 22:38:20 +02:00
__mo_san__
b864d674a5 Update-DeepFloyd-IF-Pipelines-Docstrings (#5304)
* added TODOs

* Enhanced and reformatted the docstrings of IFPipeline methods.

* Enhanced and fixed the docstrings of IFImg2ImgSuperResolutionPipeline methods.

* Enhanced and fixed the docstrings of IFImg2ImgPipeline methods.

* Enhanced and fixed the docstrings of IFInpaintingSuperResolutionPipeline methods.

* Enhanced and fixed the docstrings of IFInpaintingPipeline  methods.

* Enhanced and fixed the docstrings of IFSuperResolutionPipeline methods.

* Update src/diffusers/pipelines/deepfloyd_if/pipeline_if.py

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update src/diffusers/pipelines/deepfloyd_if/pipeline_if_img2img.py

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update src/diffusers/pipelines/deepfloyd_if/pipeline_if_img2img_superresolution.py

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update src/diffusers/pipelines/deepfloyd_if/pipeline_if_inpainting.py

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update src/diffusers/pipelines/deepfloyd_if/pipeline_if_superresolution.py

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update src/diffusers/pipelines/deepfloyd_if/pipeline_if_inpainting_superresolution.py

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* remove redundant code

* fix code style

* revert the ordering to not break backwards compatibility

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2023-10-18 21:40:41 +02:00
Patrick von Platen
85dccab7fd Add latent consistency (#5438)
* Add latent consistency

* Update examples/community/README.md

* Add latent consistency

* make fix copies

* Apply suggestions from code review
2023-10-18 14:18:31 +02:00
Sayak Paul
87fd3ce32b [from_single_file()]fix: local single file loading. (#5440)
fix: local single file loading.
2023-10-18 17:33:12 +05:30
Patrick von Platen
109d5bbe0d Merge branch 'main' of https://github.com/huggingface/diffusers 2023-10-18 09:27:46 +00:00
Patrick von Platen
f277d5e540 make fix copies 2023-10-18 09:27:35 +00:00
Dhruv Nair
28e8d1f6ec Fix pipe fetcher for slow tests (#5424)
* fix pipe fetcher

* filter out community pipelines
2023-10-18 00:42:56 +05:30
Dhruv Nair
98a0712d69 Update base image for slow CUDA tests (#5426)
update base image for tests
2023-10-17 23:18:53 +05:30
Susheel Thapa
324d18fba2 Chore: Typo fixed in multiple files (#5422) 2023-10-17 08:17:03 -07:00
Arka
ad8068e414 changed channel parameters for UNET and VAE. Changed configs parameters of CLIPText (#5370)
* changed channel parameters for UNET and VAE. Decreased hidden layers size with increased attention heads and intermediate size

* changed the assertion check range

* clean up

---------

Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2023-10-17 14:46:42 +05:30
Sayak Paul
b4cbbd5ed2 [Examples] Follow up of #5393 (#5420)
* fix: create_repo()

* Empty-Commit
2023-10-17 12:07:39 +05:30
Sayak Paul
8b3d2aeaf8 [Core] Fix/pipeline without text encoders for SDXL (#5301)
* fix: sdxl pipeline when unet is not available.

* fix moe

* account for text

* fix more

* don't make unet optional.

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* split conditionals.

* add optional components to sdxl pipeline

* propagate changes to the rest of the pipelines.

* add: test

* add to all

* fix: rest of the pipelines.

* use pipeline_class variable

* separate pipeline mixin

* use safe_serialization

* fix: test

* access actual output.

* add: optional test to adapter and ip2p sdxl pipeline tests/

* add optional test to controlnet sdxl.

* fix tests

* fix ip2p tests

* fix more

* fix more.

* use np output type.

* fix for StableDiffusionXLMultiControlNetPipelineFastTests.

* fix: SDXLOptionalComponentsTesterMixin

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* fix tests

* Empty-Commit

* revert previous

* quality

* fix: test

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-10-17 11:17:06 +05:30
Patrick von Platen
57239dacd0 make style 2023-10-16 16:29:50 +02:00
Gregg Helt
de12776b3a Add ability to mix usage of T2I-Adapter(s) and ControlNet(s). (#5362)
* Add ability to mix usage of T2I-Adapter(s) and ControlNet(s).
Previously, the UNet2DConditional implementation only allowed use of one or the other.
Adds a new forward() arg down_intrablock_additional_residuals specifically for T2I-Adapters. If down_intrablock_additional_residuals is not used, this maintains backward compatibility with prior usage of only T2I-Adapter or ControlNet, but not both

* Improving forward() arg docs in src/diffusers/models/unet_2d_condition.py

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>

* Add deprecation warning if down_block_additional_residues is used for T2I-Adapter (intrablock residuals)

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Oops my bad, fixing last commit.

* Added import of diffusers utils.deprecate

* Conform to max line length

* Modifying T2I-Adapter pipelines to reflect change to UNet forward() arg for T2I-Adapter residuals.

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-10-16 16:29:05 +02:00
Sayak Paul
cc12f3ec92 [Examples] Update with HFApi (#5393)
* update training examples to use HFAPI.

* update training example.

* reflect the changes in the korean version too.

* Empty-Commit
2023-10-16 19:34:46 +05:30
Heinz-Alexander Fuetterer
0ea78f9707 chore: fix typos (#5386)
* chore: fix typos

* Update src/diffusers/pipelines/shap_e/renderer.py

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-10-16 15:23:37 +02:00
Sayak Paul
5495073faf [Docs] add docs on peft diffusers integration (#5359)
* add docs on peft diffusers integration/

Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
Co-authored-by: pacman100 <13534540+pacman100@users.noreply.github.com>

* update URLs.

* Apply suggestions from code review

Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>

* Apply suggestions from code review

* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* minor changes

* Update docs/source/en/tutorials/using_peft_for_inference.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* reflect the latest changes.

* note about update.

---------

Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
Co-authored-by: pacman100 <13534540+pacman100@users.noreply.github.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2023-10-16 18:41:37 +05:30
Kashif Rasul
d03c9099bc [Wuerstchen] text to image training script (#5052)
* initial script

* formatting

* prior trainer wip

* add efficient_net_encoder

* add CLIPTextModel

* add prior ema support

* optimizer

* fix typo

* add dataloader

* prompt_embeds and image_embeds

* intial training loop

* fix output_dir

* fix add_noise

* accelerator check

* make effnet_transforms dynamic

* fix training loop

* add validation logging

* use loaded text_encoder

* use PreTrainedTokenizerFast

* load weight from pickle

* save_model_card

* remove unused file

* fix typos

* save prior pipeline in its own folder

* fix imports

* fix pipe_t2i

* scale image_embeds

* remove snr_gamma

* format

* initial lora prior training

* log_validation and save

* initial gradient working

* remove save/load hooks

* set set_attn_processor on prior_prior

* add lora script

* typos

* use LoraLoaderMixin for prior pipeline

* fix usage

* make fix-copies

* use repo_id

* write_lora_layers is a staticmethod

* use defaults

* fix defaults

* undo

* Update src/diffusers/pipelines/wuerstchen/pipeline_wuerstchen_prior.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/loaders.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/loaders.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/wuerstchen/modeling_wuerstchen_prior.py

* Update src/diffusers/loaders.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/loaders.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* add graident checkpoint support to prior

* gradient_checkpointing

* formatting

* Update examples/wuerstchen/text_to_image/README.md

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update examples/wuerstchen/text_to_image/README.md

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update examples/wuerstchen/text_to_image/README.md

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update examples/wuerstchen/text_to_image/README.md

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update examples/wuerstchen/text_to_image/README.md

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update examples/wuerstchen/text_to_image/train_text_to_image_lora_prior.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update src/diffusers/loaders.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update examples/wuerstchen/text_to_image/train_text_to_image_prior.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* use default unet and text_encoder

* fix test

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-10-16 15:00:33 +02:00
Sayak Paul
93df5bb670 [Examples] fix unconditional generation training example for mixed-precision training (#5407)
* fix: unconditional generation example

* fix: float in loss.

* apply styling.
2023-10-16 14:11:35 +05:30
Sayak Paul
07b297e7de [Bot] FIX stale.py uses timezone-aware datetime (#5396)
reflect stalebot change from https://github.com/huggingface/peft/pull/1016/
2023-10-16 00:54:50 +05:30
Younes Belkada
2bfa55f4ed [core / PEFT / LoRA] Integrate PEFT into Unet (#5151)
* v1

* add tests and fix previous failing tests

* fix CI

* add tests + v1 `PeftLayerScaler`

* style

* add scale retrieving mechanism system

* fix CI

* up

* up

* simple approach --> not same results for some reason

* fix issues

* fix copies

* remove unneeded method

* active adapters!

* fix merge conflicts

* up

* up

* kohya - test-1

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* fix scale

* fix copies

* add comment

* multi adapters

* fix tests

* oops

* v1 faster loading - in progress

* Revert "v1 faster loading - in progress"

This reverts commit ac925f8132.

* kohya same generation

* fix some slow tests

* peft integration features for unet lora

1. Support for Multiple ranks/alphas
2. Support for Multiple active adapters
3. Support for enabling/disabling LoRAs

* fix `get_peft_kwargs`

* Update loaders.py

* add some tests

* add unfuse tests

* fix tests

* up

* add set adapter from sourab and tests

* fix multi adapter tests

* style & quality

* style

* remove comment

* fix `adapter_name` issues

* fix unet adapter name for sdxl

* fix enabling/disabling adapters

* fix fuse / unfuse unet

* nit

* fix

* up

* fix cpu offloading

* fix another slow test

* fix another offload test

* add more tests

* all slow tests pass

* style

* fix alpha pattern for unet and text encoder

* Update src/diffusers/loaders.py

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* Update src/diffusers/models/attention.py

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* up

* up

* clarify comment

* comments

* change comment order

* change comment order

* style & quality

* Update tests/lora/test_lora_layers_peft.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* fix bugs and add tests

* Update src/diffusers/models/modeling_utils.py

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* Update src/diffusers/models/modeling_utils.py

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* refactor

* suggestion

* add break statement

* add compile tests

* move slow tests to peft tests as I modified them

* quality

* refactor a bit

* style

* change import

* style

* fix CI

* refactor slow tests one last time

* style

* oops

* oops

* oops

* final tweak tests

* Apply suggestions from code review

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update src/diffusers/loaders.py

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* comments

* Apply suggestions from code review

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* remove comments

* more comments

* try

* revert

* add `safe_merge` tests

* add comment

* style, comments and run tests in fp16

* add warnings

* fix doc test

* replace with `adapter_weights`

* add `get_active_adapters()`

* expose `get_list_adapters` method

* better error message

* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* style

* trigger slow lora tests

* fix tests

* maybe fix last test

* revert

* Update src/diffusers/loaders.py

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* Update src/diffusers/loaders.py

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* Update src/diffusers/loaders.py

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* Update src/diffusers/loaders.py

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* move `MIN_PEFT_VERSION`

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* let's not use class variable

* fix few nits

* change a bit offloading logic

* check earlier

* rm unneeded block

* break long line

* return empty list

* change logic a bit and address comments

* add typehint

* remove parenthesis

* fix

* revert to fp16 in tests

* add to gpu

* revert to old test

* style

* Update src/diffusers/loaders.py

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* change indent

* Apply suggestions from code review

* Apply suggestions from code review

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2023-10-13 16:47:03 +02:00
Sayak Paul
9bc55e8b7f [Core] Add FreeU to all the core pipelines and their (mostly-used) derivatives (#5376)
* add: freeu to the core sdxl pipeline.

* add: freeu to video2video

* add: freeu to the core SD pipelines.

* add: freeu to image variation for sdxl.

* add: freeu to SD ControlNet pipelines.

* add: freeu to SDXL controlnet pipelines.

* add: freeu to t2i adapter pipelines.

* make fix-copies.
2023-10-13 17:16:16 +05:30
Dhruv Nair
4d2c981d55 New xformers test runner (#5349)
* move xformers to dedicated runner

* fix

* remove ptl from test runner images
2023-10-13 00:32:39 +05:30
Steven Liu
cf03f5b718 [docs] Minor fixes (#5369)
minor fixes
2023-10-11 17:18:29 -07:00
Patrick von Platen
5313aa6231 make style 2023-10-11 11:19:35 +00:00
Chi
ea8364e581 I Added Doc-String Into The class. (#5293)
* I added a new docstring to the class. This makes it easier for other developers to understand what it does and where it is used.

* Update src/diffusers/models/unet_2d_blocks.py

This change was suggested by the maintainer.

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update src/diffusers/models/unet_2d_blocks.py

Add suggested text

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update unet_2d_blocks.py

I changed the "Parameter" text to "Args".

* Update unet_2d_blocks.py

Set proper indentation in this file.

* Update unet_2d_blocks.py

A small change to the act_fun argument line.

* I ran the black command to reformat the code style

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-10-11 13:19:06 +02:00
Soumik Rakshit
e00df25aee Fix StableDiffusionXLImg2ImgPipeline creation in sdxl tutorial (#5367)
fix: StableDiffusionXLImg2ImgPipeline creation in sdxl tutorial
2023-10-11 13:07:53 +02:00
Aryan V S
91fd181245 Improve typehints and docs in diffusers/models (#5312)
* improvement: add missing typehints and docs to diffusers/models/attention.py

* chore: convert doc strings to raw python strings

add missing typehints

* improvement: add missing typehints and docs to diffusers/models/adapter.py

* improvement: add missing typehints and docs to diffusers/models/lora.py

* docs: include suggestion by @sayakpaul in src/diffusers/models/adapter.py

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* docs: include suggestion by @sayakpaul in src/diffusers/models/adapter.py

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* docs: include suggestion by @sayakpaul in src/diffusers/models/adapter.py

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* docs: include suggestion by @sayakpaul in src/diffusers/models/adapter.py

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update src/diffusers/models/lora.py

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-10-11 13:04:59 +02:00
Sayak Paul
0fa32bd674 [Examples] use loralinear instead of deprecated lora attn procs. (#5331)
* use loralinear instead of deprecated lora attn procs.

* fix parameters()

* fix saving

* add back support for add kv proj.

* fix: param accumulation.

* propagate the changes.
2023-10-11 13:02:42 +02:00
ssusie
aea73834f6 Adding PyTorch XLA support for sdxl inference (#5273)
* Added  mark_step for sdxl to run with pytorch xla. Also updated README with instructions for xla

* adding soft dependency on torch_xla

* fix some styling

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2023-10-11 12:35:37 +02:00
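A hedged sketch of running SDXL inference on an XLA device after #5273; per the commit message, the pipeline itself calls mark_step during denoising when torch_xla is available, so user code mainly needs to place the pipeline on the XLA device (checkpoint and prompt are placeholders).

import torch_xla.core.xla_model as xm
from diffusers import StableDiffusionXLPipeline

device = xm.xla_device()
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0"
).to(device)

image = pipe("a photo of an astronaut riding a horse on mars").images[0]
# Flush any remaining lazy XLA operations before using the result.
xm.mark_step()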
Steven Liu
a139213578 [docs] Create a mask for inpainting (#5322)
add mask making section
2023-10-10 08:29:27 -07:00
Humphrey009
9c82b68f07 fix 'accelerator.is_main_process' problem when running on multiple GPUs (#5340)
fix 'accelerator.is_main_process' problem when running on multiple GPUs or NPUs

Co-authored-by: jiaqiw <wangjiaqi50@huawei.com>
2023-10-10 15:39:22 +05:30
Julien Simon
d3e0750d5d Add missing dependency in requirements file (#5345)
Update requirements_sdxl.txt

Add missing 'datasets'

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-10-10 15:37:58 +05:30
Roy Hvaara
4ac205e32f [JAX] Replace uses of jnp.array in types with jnp.ndarray. (#4719)
`jnp.array` is a function, not a type:
https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.array.html
so it never makes sense to use `jnp.array` in a type annotation.

Presumably the intent was to write `jnp.ndarray` aka `jax.Array`. Change uses of `jnp.array` to `jnp.ndarray`.

Co-authored-by: Peter Hawkins <phawkins@google.com>
2023-10-09 22:04:40 +02:00
Patrick von Platen
ed2f956072 Fix loading broken LoRAs that could give NaN (#5316)
* Fix fuse Lora

* improve a bit

* make style

* Update src/diffusers/models/lora.py

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* ciao C file

* ciao C file

* test & make style

---------

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2023-10-09 18:01:55 +02:00
Jake Vanderplas
a844065384 replace references to deprecated KeyArray & PRNGKeyArray (#5324) 2023-10-09 17:31:50 +02:00
Jonathan Whitaker
35952e61c1 Fix links in docs to adapter code (#5323)
Update adapter.md to fix links to adapter pipelines
2023-10-09 17:20:12 +02:00
Patrick von Platen
d199bc62ec make style 2023-10-09 17:12:12 +02:00
Aryan V S
8d314c96ee [HacktoberFest] Add missing docstrings to diffusers/models (#5248)
* add missing docstrings

* chore: run make quality

* improvement: include docs suggestion by @yiyixuxu

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-10-09 17:09:16 +02:00
Brian Yarbrough
e2c0208c86 Add py.typed for PEP 561 compliance (#5326)
See #5325
2023-10-09 17:08:55 +02:00
Aryan V S
bd72927c63 Improve typehints and docs in diffusers/models (#5299)
* improvement: add typehints and docs to diffusers/models/activations.py

* improvement: add typehints and docs to diffusers/models/resnet.py
2023-10-09 16:29:23 +02:00
__mo_san__
c4d66200b7 make-fast-test-for-StableDiffusionControlNetPipeline-faster (#5292)
* decrease UNet2DConditionModel & ControlNetModel blocks

* decrease UNet2DConditionModel & ControlNetModel blocks

* decrease even more blocks & number of norm groups

* decrease vae block out channels and n of norm goups

* fix code style

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-10-09 14:56:27 +05:30
Sebastian
2ed7e05fc2 Improve performance of fast test by reducing down blocks (#5290)
* Reduce number of down block channels

* Remove debug code

* Set new excepted image slice values for sdxl euler test
2023-10-09 14:49:56 +05:30
Pu Cao
cc2c4ae759 fix inference in custom diffusion (#5329)
* Update train_custom_diffusion.py

* make style

* Empty-Commit

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-10-09 10:08:01 +02:00
chuzh
6bd55b54bc Fix [core/GLIGEN]: TypeError when iterating over 0-d tensor with In-painting mode when EulerAncestralDiscreteScheduler is used (#5305)
* fix(gligen_inpaint_pipeline): 🐛 Wrap the timestep() 0-d tensor in a list to convert to 1-d tensor. This avoids the TypeError caused by trying to directly iterate over a 0-dimensional tensor in the denoising stage

* test(gligen/gligen_text_image): unit test using the EulerAncestralDiscreteScheduler

---------

Co-authored-by: zhen-hao.chu <zhen-hao.chu@vitrox.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-10-09 09:54:01 +02:00
Zeng Xian
0513a8cfd8 fix typo in train dreambooth lora description (#5332) 2023-10-08 14:54:33 +02:00
Shubham S Jagtap
306dc6e751 Update README.md (#5267)
Co-authored-by: YiYi Xu <yixu310@gmail.com>
2023-10-06 14:50:18 -07:00
vedant2003
dd25ef5679 [Hacktoberfest]Fixing issues #5241 (#5255)
* Update pipeline_wuerstchen_prior.py

* prior_num_inference_steps updated

* height, width, num_inference_steps, and guidance_scale synced

* parameters synced

* latent_mean, latent_std, and resolution_multiple synced

* prior_num_inference_steps changed

* Formatted pipeline_wuerstchen_prior.py

* Update src/diffusers/pipelines/wuerstchen/pipeline_wuerstchen_prior.py

---------

Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
2023-10-06 14:19:38 -07:00
TimothyAlexisVass
016866792d Minor fixes (#5309)
tiny fixes
2023-10-06 11:20:06 -07:00
Steven Liu
f0a2c63753 [docs] Improved inpaint docs (#5210)
* start

* finish draft

* add section

* edits

* feedback

* make fix-copies

* rebase
2023-10-06 09:44:24 -07:00
Sayak Paul
7eaae83f16 [LoRA] fix: torch.compile() for lora conv (#5298)
fix: torch.compile() for lora conv
2023-10-06 17:14:47 +02:00
Dhruv Nair
872ae1dd12 Add from single file to StableDiffusionUpscalePipeline and StableDiffusionLatentUpscalePipeline (#5194)
* add from single file

* clean up

* make style

* add single file loading for upscaling
2023-10-06 13:18:13 +02:00
Dhruv Nair
6ce01bd647 Bump tolerance on shape test (#5289)
bump tolerance on shape test
2023-10-06 10:25:18 +02:00
Patrick von Platen
0922210c5c Update bug-report.yml 2023-10-06 09:42:20 +02:00
Bagheera
02a8d664a2 Min-SNR Gamma: correct the fix for SNR weighted loss in v-prediction … (#5238)
Min-SNR Gamma: correct the fix for SNR weighted loss in v-prediction by adding 1 to SNR rather than the resulting loss weights

Co-authored-by: bghira <bghira@users.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-10-05 20:52:27 +02:00
Sayak Paul
e6faf607f7 add: entry for DDPO support. (#5250)
* add: entry for DDPO support.

* move to training

* address steven's comments./
2023-10-05 14:29:00 +02:00
Dhruv Nair
d8d8b2ae77 pin torch version (#5297) 2023-10-05 13:00:30 +02:00
Kadir Nar
84b82a6cb7 [Core] Add FreeU mechanism (#5164)
*  Added Fourier filter function to upsample blocks

* 🔧 Update Fourier_filter for float16 support

*  Added UNetFreeUConfig to UNet model for FreeU adaptation 🛠️

* move unet to its original form and add fourier_filter to torch_utils.

* implement freeU enable mechanism

* implement disable mechanism

* resolution index.

* correct resolution idx condition.

* fix copies.

* no need to use resolution_idx in vae.

* spell out the kwargs

* proper config property

* fix attribution setting

* place unet hasattr properly.

* fix: attribute access.

* proper disable

* remove validation method.

* debug

* debug

* debug

* debug

* debug

* debug

* potential fix.

* add: doc.

* fix copies

* add: tests.

* add: support freeU in SDXL.

* set default value of resolution idx.

* set default values for resolution_idx.

* fix copies

* fix rest.

* fix copies

* address PR comments.

* run fix-copies

* move apply_free_u to utils and other minors.

* introduce support for video (unet3D)

* minor ups

* consistent fix-copies.

* consistent stuff

* fix-copies

* add: rest

* add: docs.

* fix: tests

* fix: doc path

* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* style up

* move to techniques.

* add: slow test for sd freeu.

* add: slow test for sd freeu.

* add: slow test for sd freeu.

* add: slow test for sd freeu.

* add: slow test for sd freeu.

* add: slow test for sd freeu.

* add: slow test for video with freeu

* add: slow test for video with freeu

* add: slow test for video with freeu

* style

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2023-10-05 10:37:04 +02:00
1toTree
e46ec5f88f Zh doc (#4807)
* Update _toctree.yml

* Add files via upload

* Update docs/source/zh/stable_diffusion.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2023-10-04 08:54:13 -07:00
Patrick von Platen
25c177aace make style 2023-10-04 11:46:34 +02:00
Anatoly Belikov
c7e08958b8 handle case when controlnet is list or tuple (#5179)
* handle case when controlnet is list

* Update src/diffusers/loaders.py

* Apply suggestions from code review

* Update src/diffusers/loaders.py

* typecheck comment

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-10-04 11:46:10 +02:00
Dhruv Nair
dd5a36291f New Pipeline Slow Test runners (#5131)
* pipline fetcher

* update script

* clean up

* clean up

* clean up

* new pipeline runner

* rename tests to match modules

* test actions in pr

* change runner to gpu

* clean up

* clean up

* clean up

* fix report

* fix reporting

* clean up

* show test stats in failure reports

* give names to jobs

* add lora tests

* split torch cuda tests and add compile tests

* clean up

* fix tests

* change push to run only on main

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-10-04 11:42:17 +02:00
Patrick von Platen
7271f8b717 Fix UniPC scheduler for 1D (#5276) 2023-10-03 11:49:27 +02:00
Patrick von Platen
dfcce3ca6e [Research folder] Add SDXL example (#5275)
* [SDXL Flax] Add research folder

* Add co-author

Co-authored-by: Juan Acevedo <jfacevedo@google.com>

---------

Co-authored-by: Juan Acevedo <jfacevedo@google.com>
2023-10-03 07:51:46 +02:00
Patrick von Platen
2457599114 make fix copies 2023-10-02 17:53:17 +00:00
Patrick von Platen
bdd16116f3 [Schedulers] Fix callback steps (#5261)
* fix all

* make fix copies

* make fix copies
2023-10-02 19:52:53 +02:00
Leng Yue
c8b0f0eb21 Update UniPC to support 1D diffusion. (#5199)
* Update Unipc einsum to support 1D and 3D diffusion.

* Add unittest

* Update unittest & edge case

* Fix unittest

* Fix testing_utils.py

* Fix unittest file

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-10-02 19:17:46 +02:00
stano
7a4324cce3 Add a docstring for the AutoencoderKL's encode (#5239)
* Add docstring for the AutoencoderKL's encode

#5229

* Support Python 3.8 syntax in AutoencoderKL.decode type hints

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Follow the style guidelines in AutoencoderKL's encode

#5230

---------

Co-authored-by: stano <>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-10-02 19:17:34 +02:00
stano
37a787a106 Add docstring for the AutoencoderKL's decode (#5242)
* Add docstring for the AutoencoderKL's decode

#5230

* Follow the style guidelines in AutoencoderKL's decode

#5230

---------

Co-authored-by: stano <>
2023-10-02 18:42:32 +02:00
Sayak Paul
d56825e4b4 fix: how training resume logs are printed. (#5117)
* fix: how training resume logs are printed.

* propagate changes to text-to-image scripts.

* propagate changes to instructpix2pix.

* propagate changes to dreambooth

* propagate changes to custom diffusion and instructpix2pix

* propagate changes to kandinsky

* propagate changes to textual inv.

* debug

* fix: checkpointing.

* debug

* debug

* debug

* back to the square

* debug

* debug

* change condition order.

* debug

* debug

* debug

* debug

* revert to original

* clean

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-10-02 18:29:52 +02:00
dg845
cd1b8d7ca8 [WIP] Refactor UniDiffuser Pipeline and Tests (#4948)
* Add VAE slicing and tiling methods.

* Switch to using VaeImageProcessing for preprocessing and postprocessing of images.

* Rename the VaeImageProcessor to vae_image_processor to avoid a name clash with the CLIPImageProcessor (image_processor).

* Remove the postprocess() function because we're using a VaeImageProcessor instead.

* Remove UniDiffuserPipeline.decode_image_latents because we're using VaeImageProcessor instead.

* Refactor generating text from text latents into a decode_text_latents method.

* Add enable_full_determinism() to UniDiffuser tests.

* make style

* Add PipelineLatentTesterMixin to UniDiffuserPipelineFastTests.

* Remove enable_model_cpu_offload since it is now part of DiffusionPipeline.

* Rename the VaeImageProcessor instance to self.image_processor for consistency with other pipelines and rename the CLIPImageProcessor instance to clip_image_processor to avoid a name clash.

* Update UniDiffuser conversion script.

* Make safe_serialization configurable in UniDiffuser conversion script.

* Rename image_processor to clip_image_processor in UniDiffuser tests.

* Add PipelineKarrasSchedulerTesterMixin to UniDiffuserPipelineFastTests.

* Add initial test for compiling the UniDiffuser model (not tested yet).

* Update encode_prompt and _encode_prompt to match that of StableDiffusionPipeline.

* Turn off standard classifier-free guidance for now.

* make style

* make fix-copies

* apply suggestions from review

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-10-02 18:24:55 +02:00
Patrick von Platen
db91e710da make style 2023-10-02 16:14:53 +00:00
Nandika-A
2a62aadcff Add docstrings in forward methods of adapter model (#5253)
* added docstrings in forward methods of T2IAdapter model and FullAdapter model

* added docstrings in forward methods of FullAdapterXL and AdapterBlock models

* Added docstrings in forward methods of adapter models
2023-10-02 18:14:41 +02:00
Patrick von Platen
4f74a5e1f7 [PEFT warnings] Only show deprecation warnings in the future (#5240)
* [PEFT warnings] Only show deprecation warnings in the future

* make style
2023-10-02 17:33:32 +02:00
Dhruv Nair
bbe8d3ae13 Compile test fixes (#5235)
compile test fixes
2023-10-02 17:06:10 +02:00
Zanz2
907fd91ce9 Fixed constants.py not using hugging face hub environment variable (#5222) 2023-10-02 15:51:24 +02:00
Pedro Cuenca
0c7cb9a613 Flax: Ignore PyTorch, ONNX files when they coexist with Flax weights (#5237)
Ignore PyTorch, ONNX files when they coexist with Flax weights
2023-10-02 12:17:23 +02:00
Mishig
84e5cc596c Fix doc KO unconditional_image_generation.md (#5236)
Fix indent issue
2023-09-29 18:49:40 +02:00
Younes Belkada
cc92332096 [PEFT / LoRA ] Fix text encoder scaling (#5204)
* move text encoder changes

* fix

* add comment.

* fix tests

* Update src/diffusers/utils/peft_utils.py

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-09-29 18:30:53 +02:00
Charles Bensimon
9cfd4ef074 Make BaseOutput dataclasses picklable (#5234)
* Make BaseOutput dataclasses picklable

* make style

* Test

* Empty commit

* Simpler and safer
2023-09-29 16:35:16 +02:00
Patrick von Platen
78a78515d6 make style 2023-09-29 08:55:26 +02:00
Seunghyeon Kim
9c03a7da43 Fix DDIMInverseScheduler (#5145)
* fix ddim inverse scheduler

* update test of ddim inverse scheduler

* update test of pix2pix_zero

* update test of diffedit

* fix typo

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-09-29 08:55:00 +02:00
Steven Liu
1d3120fbaa [docs] Quicktour fixes (#5211)
* fix

* feedback
2023-09-29 08:43:58 +02:00
Dhruv Nair
c78ee143e9 Move more slow tests to nightly (#5220)
* move to nightly

* fix mistake
2023-09-28 19:00:41 +05:30
Ömer Veysel Çağatan
622f35b1d0 fixed vae scaling (#5213) 2023-09-28 15:13:27 +02:00
Younes Belkada
39baf0b41b [PEFT / LoRA ] Kohya checkpoints support (#5202)
kohya support
2023-09-28 15:07:23 +02:00
Nicholas Bardy
1c4c4c48d9 Correct file name in t2i adapter training readme (#5207)
Update README_sdxl.md
2023-09-28 14:51:02 +02:00
Younes Belkada
d840253f6a [PEFT] Fix typo for import (#5217)
Update loaders.py
2023-09-28 17:33:25 +05:30
Pedro Cuenca
536c297a14 Trickle down split_head_dim (#5208) 2023-09-28 01:03:36 +02:00
Benjamin Paine
693a0d08e4 Remove Offensive Language from Community Pipelines (#5206)
* Update run_onnx_controlnet.py

* Update run_tensorrt_controlnet.py
2023-09-27 22:02:25 +02:00
Patrick von Platen
cac7adab11 [Flax SDXL] fix zero out sdxl (#5203) 2023-09-27 18:29:07 +02:00
Patrick von Platen
a584d42ce5 [LoRA, Xformers] Fix xformers lora (#5201)
* fix xformers lora

* improve

* fix
2023-09-27 21:46:32 +05:30
Sayak Paul
cdcc01be0e [Examples] add compute_snr() to training utils. (#5188)
add compute_snr() to training utils.
2023-09-27 21:42:20 +05:30
Dhruv Nair
ba59e92fb0 Fix memory issues in tests (#5183)
* fix memory issues

* set _offload_gpu_id

* set gpu offload id
2023-09-27 14:04:57 +02:00
Sourab Mangrulkar
02247d9ce1 PEFT Integration for Text Encoder to handle multiple alphas/ranks, disable/enable adapters and support for multiple adapters (#5147)
* more fixes

* up

* up

* style

* add in setup

* oops

* more changes

* v1 rzfactor CI

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* few todos

* protect torch import

* style

* fix fuse text encoder

* Update src/diffusers/loaders.py

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* replace with `recurse_replace_peft_layers`

* keep old modules for BC

* adjustments on `adjust_lora_scale_text_encoder`

* nit

* move tests

* add conversion utils

* remove unneeded methods

* use class method instead

* oops

* use `base_version`

* fix examples

* fix CI

* fix weird error with python 3.8

* fix

* better fix

* style

* Apply suggestions from code review

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* add comment

* Apply suggestions from code review

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* conv2d support for recurse remove

* added docstrings

* more docstring

* add deprecate

* revert

* try to fix merge conflicts

* peft integration features for text encoder

1. support multiple rank/alpha values
2. support multiple active adapters
3. support disabling and enabling adapters

* fix bug

* fix code quality

* Apply suggestions from code review

Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>

* fix bugs

* Apply suggestions from code review

Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>

* address comments

Co-Authored-By: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
Co-Authored-By: Patrick von Platen <patrick.v.platen@gmail.com>

* fix code quality

* address comments

* address comments

* Apply suggestions from code review

* find and replace

---------

Co-authored-by: younesbelkada <younesbelkada@gmail.com>
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2023-09-27 13:21:54 +02:00
YiYi Xu
940f9410cb Add test_full_loop_with_noise tests to all scheduler with add_nosie function (#5184)
* add fast tests for dpm-multi

* add more tests

* style

---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2023-09-27 13:08:37 +02:00
Patrick von Platen
ad06e5106e [Docs] Improve xformers page (#5196)
[Docs] Improve
2023-09-27 16:02:15 +05:30
Pedro Cuenca
ae2fc01a91 Wrap lines in docstring (#5190) 2023-09-26 20:10:40 +02:00
Juan Acevedo
16d56c4b4f F/flax split head dim (#5181)
* split_head_dim flax attn

* Make split_head_dim non default

* make style and make quality

* add description for split_head_dim flag

* Update src/diffusers/models/attention_flax.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

---------

Co-authored-by: Juan Acevedo <jfacevedo@google.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-09-26 20:08:30 +02:00
Patrick von Platen
c82f7bafba [SDXL Flax] fix SDXL flax init (#5187)
* fix SDXL flax init

* finish

* Fix
2023-09-26 19:55:05 +02:00
Pedro Cuenca
d9e7857af3 timestep_spacing for FlaxDPMSolverMultistepScheduler (#5189)
* timestep_spacing for FlaxDPMSolverMultistepScheduler

* Style
2023-09-26 19:54:53 +02:00
Steven Liu
fd1c54abf2 [docs] Improved text-to-image guide (#4938)
* first draft

* edits

* feedback
2023-09-26 09:20:19 -07:00
Dhruv Nair
9946dcf8db Test Fixes for CUDA Tests and Fast Tests (#5172)
* fix other tests

* fix tests

* fix tests

* Update tests/pipelines/shap_e/test_shap_e_img2img.py

* Update tests/pipelines/shap_e/test_shap_e_img2img.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* fix upstream merge mistake

* fix tests:

* test fix

* Update tests/lora/test_lora_layers_old_backend.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update tests/lora/test_lora_layers_old_backend.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-09-26 19:08:02 +05:30
Ernie Chu
21e402faa0 fix-VaeImageProcessor-docstring (#5182)
```
do_binarize (`bool`, *optional*, defaults to `True`)
|
v
do_binarize (`bool`, *optional*, defaults to `False`)
```
2023-09-26 15:06:45 +02:00
Bagheera
4a06c74547 Min-SNR Gamma: follow-up fix for zero-terminal SNR models on v-prediction or epsilon (#5177)
* merge with main

* fix flax example

* fix onnx example

---------

Co-authored-by: bghira <bghira@users.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-09-26 18:14:52 +05:30
Bagheera
89d8f84893 Timestep bias for fine-tuning SDXL (#5094)
* Timestep bias for fine-tuning SDXL

* Adjust parameter choices to include "range" and reword the help statements

* Condition our use of weighted timesteps on the value of timestep_bias_strategy

* style

---------

Co-authored-by: bghira <bghira@users.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-09-26 13:45:37 +05:30
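A hedged sketch of what a biased timestep sampler can look like, including the "range" strategy mentioned above. The function and argument names are illustrative and do not mirror the SDXL training-script flags exactly.

```python
import torch

def sample_biased_timesteps(batch_size, num_train_timesteps, strategy="none",
                            bias_begin=0, bias_end=1000, multiplier=2.0):
    """Sample timesteps with optional bias; names and defaults are illustrative only."""
    weights = torch.ones(num_train_timesteps)
    if strategy == "later":
        weights[num_train_timesteps // 2:] *= multiplier   # favour noisier timesteps
    elif strategy == "earlier":
        weights[: num_train_timesteps // 2] *= multiplier  # favour less-noisy timesteps
    elif strategy == "range":
        weights[bias_begin:bias_end] *= multiplier          # favour an explicit range
    weights /= weights.sum()
    return torch.multinomial(weights, batch_size, replacement=True)

timesteps = sample_biased_timesteps(4, 1000, strategy="range", bias_begin=300, bias_end=700)
print(timesteps)
```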
Dhruv Nair
bdd2544673 Tests compile fixes (#5148)
* test fix

* fix tests

* fix report name

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-09-26 11:36:46 +05:30
Patrick von Platen
a91a273d0b [Docs] Try to fix doc builder (#5180)
* try to fix docs

* try to fix docs
2023-09-25 20:24:50 +02:00
Patrick von Platen
bed8aceca1 make style 2023-09-25 20:24:03 +02:00
Ryan Dick
415093335b Fix the total_downscale_factor returned by FullAdapterXL T2IAdapters (#5134)
* Fix FullAdapterXL.total_downscale_factor.

* Fix incorrect error message in T2IAdapter.__init__(...).

* Move IP-Adapter test_total_downscale_factor(...) to pipeline test file (requested in code review).

* Add more info to error message about an unsupported T2I-Adapter adapter_type.

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-09-25 20:23:14 +02:00
Hengwen Tong
dfdf85d32c [pipeline utils] sanitize pretrained_model_name_or_path (#5173)
Make sure the repo_id is valid before sending it to huggingface_hub to get a more understandable error message.

Re #5110

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-09-25 20:22:41 +02:00
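The idea above, sanity-checking `pretrained_model_name_or_path` before it reaches `huggingface_hub`, can be sketched as follows; the regex is an illustration, not the exact validation used in `pipeline_utils`.

```python
import os
import re

def check_repo_id(pretrained_model_name_or_path: str) -> None:
    """Illustrative pre-check: reject obviously malformed repo ids early so the user
    sees a clear error instead of an opaque hub exception."""
    if os.path.isdir(pretrained_model_name_or_path):
        return  # local paths are always fine
    if not re.fullmatch(r"[\w.\-]+(/[\w.\-]+)?", pretrained_model_name_or_path):
        raise ValueError(
            f"'{pretrained_model_name_or_path}' is not a local directory and is not a "
            "valid Hugging Face Hub repo id of the form 'namespace/repo_name'."
        )

check_repo_id("runwayml/stable-diffusion-v1-5")  # ok, no exception raised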
Bagheera
539846a7d5 SDXL microconditioning documentation should indicate the correct default order of parameters, so that developers know the expected ordering (#5155)
* SDXL microconditioning documentation should indicate the correct default order of parameters, so that developers know the expected ordering

* SDXL microconditioning documentation should indicate the correct default order of parameters, so that developers know the expected ordering

* empty

---------

Co-authored-by: bghira <bghira@users.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-09-25 20:22:09 +02:00
Patrick von Platen
d70944bf7f fix docs 2023-09-25 19:55:49 +02:00
Patrick von Platen
589cd8100b make style 2023-09-25 19:27:20 +02:00
Carson Katri
6281d2066b Add callbacks to WuerstchenDecoderPipeline and WuerstchenCombinedPipeline (#5154) 2023-09-25 19:26:53 +02:00
Anh71me
28254c79b6 Fix type annotation (#5146)
* Fix type annotation on Scheduler.from_pretrained

* Fix type annotation on PIL.Image
2023-09-25 19:26:39 +02:00
MLRichter
0bc6be6960 Update wuerstchen.md (#5156) 2023-09-25 18:43:08 +02:00
Patrick von Platen
144c3a8b7c [Imports] Fix many import bugs and make sure that doc builder CI test works correctly (#5176)
* [Doc builder] Ensure slow import for doc builder

* Apply suggestions from code review

* env for doc builder

* fix more

* [Diffusers] Set import to slow as env variable

* fix docs

* fix docs

* Apply suggestions from code review

* Apply suggestions from code review

* fix docs

* fix docs
2023-09-25 18:06:51 +02:00
Patrick von Platen
30a512ea69 [Core] Improve .to(...) method, fix offloads multi-gpu, add docstring, add dtype (#5132)
* fix cpu offload

* fix

* fix

* Update src/diffusers/pipelines/pipeline_utils.py

* make style

* Apply suggestions from code review

Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* fix more

* fix more

---------

Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-09-25 14:10:18 +02:00
Dhruv Nair
92f15f5bd4 Model CPU offload fix for BLIPDiffusion (#5174)
cpu offload fix for blip diffusion
2023-09-25 17:07:32 +05:30
Patrick von Platen
22b19d578e [Tests] Add is flaky decorator (#5139)
* add is flaky decorator

* fix more
2023-09-25 13:24:44 +02:00
Sayak Paul
787195fe20 Fix/controlnet lora (#5157)
* print

* print

* print

* print

* print

* debugging

* debugging

* debugging

* debugging

* safer condition.

* remove prints and try excepts.

* Empty-Commit

* Apply suggestions from code review

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-09-25 12:08:05 +02:00
Mishig
48664d62b8 Delete duplicatd doc file (#5169) 2023-09-24 19:58:13 +02:00
YiYi Xu
5b11c5dc77 fix the add_noise function for dpm-multi et al (#5158)
* remove to _device() for sigmas

* update add_noise to use simgas

---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2023-09-23 09:07:50 -10:00
Sayak Paul
310cf32801 add: note on whom to tag for issues related to community pipelines. (#5083) 2023-09-23 17:01:37 +01:00
Steven Liu
06b316ef5c [docs] Improved image-to-image guide (#5020)
* finish first draft

* feedback

* feedback
2023-09-22 13:20:30 -07:00
Pedro Cuenca
3651b14cf4 SDXL flax (#4254)
* support transformer_layers_per block in flax UNet

* add support for text_time additional embeddings to Flax UNet

* rename attention layers for VAE

* add shape asserts when renaming attention layers

* transpose VAE attention layers

* add pipeline flax SDXL code [WIP]

* continue add pipeline flax SDXL code [WIP]

* cleanup

* Working on JIT support

Fixed prompt embedding shapes so they work in parallel mode. Assuming we
always have both text encoders for now, for simplicity.

* Fixing embeddings (untested)

* Remove spurious line

* Shard guidance_scale when jitting.

* Decode images

* Fix sharding

* style

* Refiner UNet can be loaded.

* Refiner / img2img pipeline

* Allow latent outputs from base and latent inputs in refiner

This makes it possible to chain base + refiner without having to use the
vae decoder in the base model, the vae encoder in the refiner, skipping
conversions to/from PIL, and avoiding TPU <-> CPU memory copies.

* Adapt to FlaxCLIPTextModelOutput

* Update Flax XL pipeline to FlaxCLIPTextModelOutput

* make fix-copies

* make style

* add euler scheduler

* Fix import

* Fix copies, comment unused code.

* Fix SDXL Flax imports

* Fix euler discrete begin

* improve init import

* finish

* put discrete euler in init

* fix flax euler

* Fix more

* make style

* correct init

* correct init

* Temporarily remove FlaxStableDiffusionXLImg2ImgPipeline

* correct pipelines

* finish

---------

Co-authored-by: Martin Müller <martin.muller.me@gmail.com>
Co-authored-by: patil-suraj <surajp815@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-09-22 18:34:04 +02:00
Pedro Cuenca
2e860e89eb SDXL: update links to refine docs (#5101)
* SDXL: update links to refine docs

* make style
2023-09-22 13:17:17 +02:00
Younes Belkada
493f9529d7 [PEFT / LoRA] PEFT integration - text encoder (#5058)
* more fixes

* up

* up

* style

* add in setup

* oops

* more changes

* v1 rzfactor CI

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* few todos

* protect torch import

* style

* fix fuse text encoder

* Update src/diffusers/loaders.py

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* replace with `recurse_replace_peft_layers`

* keep old modules for BC

* adjustments on `adjust_lora_scale_text_encoder`

* nit

* move tests

* add conversion utils

* remove unneeded methods

* use class method instead

* oops

* use `base_version`

* fix examples

* fix CI

* fix weird error with python 3.8

* fix

* better fix

* style

* Apply suggestions from code review

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* add comment

* Apply suggestions from code review

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* conv2d support for recurse remove

* added docstrings

* more docstring

* add deprecate

* revert

* try to fix merge conflicts

* v1 tests

* add new decorator

* add saving utilities test

* adapt tests a bit

* add save / from_pretrained tests

* add saving tests

* add scale tests

* fix deps tests

* fix lora CI

* fix tests

* add comment

* fix

* style

* add slow tests

* slow tests pass

* style

* Update src/diffusers/utils/import_utils.py

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* circumvents pattern finding issue

* left a todo

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* update hub path

* add lora workflow

* fix

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2023-09-22 13:03:39 +02:00
hysts
b32555a2da [docs] Add missing parenthesis in the sample code of BLIP Diffusion (#5144)
Add missing parenthesis in the sample code of BLIP Diffusion
2023-09-22 09:38:17 +01:00
YiYi Xu
80c00e5451 add use_karras_sigmas to KDPM2DiscreteScheduler and KDPM2AncestralDiscreteScheduler (#5111)
---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2023-09-21 13:50:41 -10:00
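A short usage sketch of the new flag, assuming the usual pattern of rebuilding a scheduler from the pipeline's existing config; the checkpoint id is just an example.

```python
import torch
from diffusers import DiffusionPipeline, KDPM2DiscreteScheduler

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Rebuild the scheduler from the existing config with Karras sigma spacing enabled.
pipe.scheduler = KDPM2DiscreteScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)
image = pipe("a watercolor painting of a lighthouse", num_inference_steps=30).images[0]
```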
YiYi Xu
2badddfdb6 add multi adapter support to StableDiffusionXLAdapterPipeline (#5127)
fix and add tests

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2023-09-21 12:54:59 -10:00
Bagheera
d558811b26 Min-SNR gamma support for Dreambooth training (#5107)
* min-SNR gamma for Dreambooth training

* Align the mse_loss_weights style with SDXL training example

---------

Co-authored-by: bghira <bghira@users.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-09-21 22:53:06 +01:00
Ayush Mangal
157c9011d8 Add BLIP Diffusion (#4388)
* Add BLIP Diffusion skeleton

* Add other model components

* Add BLIP2, need to change it for now

* Fix pipeline imports

* Load pretrained ViT

* Make qformer fwd pass same

* Replicate fwd passes

* Fix device bug

* Add accelerate functions

* Remove extra functions from Blip2

* Minor bug

* Integrate initial review changes

* Refactoring

* Refactoring

* Refactor

* Add controlnet

* Refactor

* Update conversion script

* Add image processor

* Shift postprocessing to ImageProcessor

* Refactor

* Fix device

* Add fast tests

* Update conversion script

* Fix checkpoint conversion script

* Integrate review changes

* Integrate review changes

* Remove unused functions from test

* Reuse HF image processor in Cond image

* Create new BlipImageProcessor based on transformers

* Fix image preprocessor

* Minor

* Minor

* Add canny preprocessing

* Fix controlnet preprocessing

* Fix blip diffusion test

* Add controlnet test

* Add initial doc strings

* Integrate review changes

* Refactor

* Update examples

* Remove DDIM comments

* Add copied from for prepare_latents

* Add type anotations

* Add docstrings

* Do black formatting

* Add batch support

* Make tests pass

* Make controlnet tests pass

* Black formatting

* Fix progress bar

* Fix some licensing comments

* Fix imports

* Refactor controlnet

* Make tests faster

* Edit examples

* Black formatting/Ruff

* Add doc

* Minor

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Move controlnet pipeline

* Make tests faster

* Fix imports

* Fix formatting

* Fix make errors

* Fix make errors

* Minor

* Add suggested doc changes

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Edit docs

* Fix 16 bit loading

* Update examples

* Edit toctree

* Update docs/source/en/api/pipelines/blip_diffusion.md

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Minor

* Add tips

* Edit examples

* Update model paths

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-09-21 17:05:35 +01:00
Bagheera
24563ca654 SNR gamma fixes for v_prediction training (#5106)
Co-authored-by: bghira <bghira@users.github.com>
2023-09-20 21:18:56 +01:00
Younes Belkada
914586f5b6 [core] Use python 3.8 in workflow and setup file (#5122)
* use python 3.7 instead

* Update setup.py
2023-09-20 20:57:06 +02:00
김태민
5b78141fd3 [FIX BUG] add config_files parser #5114 (#5115)
* add config_files parser #5114

* add config_files parser_fix #5114
2023-09-20 16:17:47 +02:00
Sayak Paul
e312b2302b [LoRA] support LyCORIS (#5102)
* better condition.

* debugging

* how about now?

* how about now?

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* support for lycoris.

* style

* add: lycoris test

* fix from_pretrained call.

* fix assertion values.
2023-09-20 10:30:18 +01:00
YiYi Xu
8263cf00f8 refactor DPMSolverMultistepScheduler using sigmas (#4986)
---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-09-19 11:21:49 -10:00
Bagheera
74e43a4fbd Resolve v_prediction issue for min-SNR gamma weighted loss function (#5096)
* Resolve v_prediction issue for min-SNR gamma weighted loss function

* Combine MSE loss calculation of epsilon and velocity, with a note about the application of the epsilon code to sample prediction

* style

---------

Co-authored-by: bghira <bghira@users.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-09-19 17:31:27 +01:00
Bagheera
81331f3b7d Add x-prediction / prediction_type=sample support for SDXL fine-tuning (#5095)
Co-authored-by: bghira <bghira@users.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-09-19 16:57:44 +01:00
Dhruv Nair
29970757de Fast Tests on PR improvements: Batch Tests fixes (#5080)
* fix test

* initial commit

* change test

* updates:

* fix tests

* test fix

* test fix

* fix tests

* make test faster

* clean up

* fix precision in test

* fix precision

* Fix tests

* Fix logging test

* fix test

* fix test

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-09-19 18:31:21 +05:30
Dhruv Nair
c2787c11c2 Fixes for Float16 inference Fast CUDA Tests (#5097)
* wip

* fix tests
2023-09-19 17:25:48 +05:30
Dhruv Nair
79a3f39eb5 Move to slow tests to nightly (#5093)
* move slow tests to nightly

* move slow tests to nightly
2023-09-19 16:04:26 +05:30
Dhruv Nair
431dd2f4d6 Fix precision related issues in Kandinsky Pipelines (#5098)
* fix failing tests

* make style
2023-09-19 16:02:21 +05:30
Sayak Paul
edcbb6f42e [WIP] core: add support for clip skip to SDXL (#5057)
* core: add support for clip skip to SDXL

* add clip_skip support to the rest of the pipeline.

* Empty-Commit
2023-09-19 10:51:36 +01:00
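With this change, `clip_skip` can be passed at call time on the SDXL pipelines, mirroring the earlier core support; a minimal usage sketch (checkpoint id and prompt are examples) is shown below.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# clip_skip controls how many final CLIP text-encoder layers are skipped
# when computing the prompt embeddings.
image = pipe("an isometric render of a tiny island", clip_skip=1).images[0]
```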
Patrick von Platen
5a287d3f23 [SDXL] Make sure multi batch prompt embeds works (#5073)
* [SDXL] Make sure multi batch prompt embeds works

* [SDXL] Make sure multi batch prompt embeds works

* improve more

* improve more

* Apply suggestions from code review
2023-09-19 11:49:49 +02:00
maksymbekuzarovSC
65c162a5b3 Fixed get_word_inds mistake/typo in P2P community pipeline (breaking code examples) (#5089)
Fixed `get_word_inds` mistake/typo in P2P community pipeline

The function `get_word_inds` takes a string of text and either a word (`str`) or a word index (`int`) and returns the indices of the token(s) the word is encoded to.

However, there was a typo: the second `if` branch checked whether the word was a `str` **again** instead of an `int`, which caused the [example code from the docs](https://github.com/huggingface/diffusers/tree/main/examples/community#prompt2prompt-pipeline) to fail with an error (a simplified sketch of the corrected helper follows this entry).
2023-09-19 11:34:49 +02:00
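A simplified, self-contained re-sketch of the helper described in the entry above. The real community-pipeline version maps words through the CLIP tokenizer, whereas this toy version assumes one token per word plus a BOS offset.

```python
from typing import List, Union

def get_word_inds(text: str, word_place: Union[str, int]) -> List[int]:
    """Toy version: map a word (or its index in the prompt) to token indices."""
    words = text.split(" ")
    if isinstance(word_place, str):
        places = [i for i, w in enumerate(words) if w == word_place]
    elif isinstance(word_place, int):  # the branch the typo broke: it re-checked `str`
        places = [word_place]
    else:
        places = []
    return [p + 1 for p in places]     # +1 accounts for the BOS token

print(get_word_inds("a photo of a cat", "cat"))  # [5]
print(get_word_inds("a photo of a cat", 4))      # [5]
```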
Sayak Paul
04d696d650 [Core] Add support for CLIP-skip (#4901)
* add support for clip skip

* fix condition

* fix

* add clip_output_layer_to_default

* expose

* remove the previous functions.

* correct condition.

* apply final layer norm

* address feedback

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* refactor clip_skip.

* port to the other pipelines.

* fix copies one more time

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-09-19 10:30:53 +01:00
Sayak Paul
ed507680e3 [LoRA] don't break offloading for incompatible lora ckpts. (#5085)
* don't break offloading for incompatible lora ckpts.

* debugging

* better condition.

* fix

* fix

* fix

* fix

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-09-18 23:46:28 +02:00
Will Berman
7974fad13b remove unused adapter weights in constructor (#5088)
remove adapter weights in MultiAdapter constructor
2023-09-18 22:36:55 +02:00
Will Berman
6d7279adad t2i Adapter community member fix (#5090)
* convert tensorrt controlnet

* Fix code quality

* Fix code quality

* Fix code quality

* Fix code quality

* Fix code quality

* Fix code quality

* Fix number controlnet condition

* Add convert SD XL to onnx

* Add convert SD XL to tensorrt

* Add convert SD XL to tensorrt

* Add examples in comments

* Add examples in comments

* Add test onnx controlnet

* Add tensorrt test

* Remove copied

* Move file test to examples/community

* Remove script

* Remove script

* Remove text

* Fix import

* Fix T2I MultiAdapter

* fix tests

---------

Co-authored-by: dotieuthien <thien.do@mservice.com.vn>
Co-authored-by: dotieuthien <dotieuthien9997@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: dotieuthien <hades@cinnamon.is>
2023-09-18 22:35:49 +02:00
Patrick von Platen
119ad2c3dc [LoRA] Centralize LoRA tests (#5086)
* [LoRA] Centralize LoRA tests

* [LoRA] Centralize LoRA tests

* [LoRA] Centralize LoRA tests

* [LoRA] Centralize LoRA tests

* [LoRA] Centralize LoRA tests
2023-09-18 17:54:33 +02:00
Ruoxi
16b9a57d29 Implement CustomDiffusionAttnProcessor2_0. (#4604)
* Implement `CustomDiffusionAttnProcessor2_0`

* Doc-strings and type annotations for `CustomDiffusionAttnProcessor2_0`. (#1)

* Update attnprocessor.md

* Update attention_processor.py

* Interops for `CustomDiffusionAttnProcessor2_0`.

* Formatted `attention_processor.py`.

* Formatted doc-string in `attention_processor.py`

* Conditional CustomDiffusion2_0 for training example.

* Remove unnecessary reference impl in comments.

* Fix `save_attn_procs`.
2023-09-18 14:49:00 +02:00
Patrick von Platen
7b39f43c06 [Textual inversion] Refactor textual inversion to make it cleaner (#5076)
* [Textual inversion] Clean loading

* [Textual inversion] Clean loading

* [Textual inversion] Clean up

* [Textual inversion] Clean up

* [Textual inversion] Clean up

* [Textual inversion] Clean up
2023-09-18 14:30:34 +02:00
Sayak Paul
bfc606301f add doc around fusing multiple loras. (#5056)
* add doc around fusing multiple loras.

* Apply suggestions from code review

Co-authored-by: apolinário <joaopaulo.passos@gmail.com>

* address poli's comments.

---------

Co-authored-by: apolinário <joaopaulo.passos@gmail.com>
2023-09-18 12:42:58 +01:00
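The documented workflow boils down to loading a LoRA, fusing it at a chosen strength, and unfusing before switching to another; a sketch follows. The LoRA repo id, weight file, and scale are examples, not recommendations.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Load a LoRA and fold it into the base weights at a chosen strength.
pipe.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors")
pipe.fuse_lora(lora_scale=0.7)

image = pipe("pixel art, a knight standing on a cliff").images[0]

# Undo the fusion before loading a different LoRA.
pipe.unfuse_lora()
```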
YiYi Xu
6886e28fd8 fix a bug in inpaint pipeline when use regular text2image unet (#5033)
* fix

* fix num_images_per_prompt >1

* other pipelines

* add fast tests for inpaint pipelines

---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2023-09-18 13:40:11 +02:00
Lee Dong Joo
b089102a8e fix guidance_rescale docstring (#5063) 2023-09-18 13:39:12 +02:00
Kashif Rasul
73bb97adfc [LoRA] fix typo in attention_processor.py (#5066)
* [LoRA] fix typo in attention_processor.py

fixes #5062

* make style

* make fix-copies, logger comented for torch compile
2023-09-16 14:43:18 +02:00
Sayak Paul
38a664a3d6 fix: validation_image arg (#5053) 2023-09-15 12:20:50 +01:00
Kashif Rasul
427feb5359 [Wuerstchen] fix typos in docs (#5051)
* fix typos in docs

* fix for issue  #5023
2023-09-15 12:53:25 +02:00
Gang Wu
9f40d7970e [FIX BUG] type of args in train_instruct_pix2pix_sdxl.py (#4955) 2023-09-15 12:53:07 +02:00
Bagheera
a0198676d7 Remove logger.info statement from Unet2DCondition code to ensure torch compile reliably succeeds (#4982)
* Remove logger.info statement from Unet2DCondition code to ensure torch compile reliably succeeds

* Convert logging statement to a comment for future archaeologists

* Update src/diffusers/models/unet_2d_condition.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

---------

Co-authored-by: bghira <bghira@users.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-09-15 12:52:46 +02:00
Patrick von Platen
abc47dece6 [SDXL, Docs] Textual inversion (#5039)
* [SDXL, Docs] Textual inversion

* Update docs/source/en/using-diffusers/sdxl.md

* finish

* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2023-09-15 12:51:36 +02:00
dotieuthien
941473a12f Fix import in examples (#5048)
* convert tensorrt controlnet

* Fix code quality

* Fix code quality

* Fix code quality

* Fix code quality

* Fix code quality

* Fix code quality

* Fix number controlnet condition

* Add convert SD XL to onnx

* Add convert SD XL to tensorrt

* Add convert SD XL to tensorrt

* Add examples in comments

* Add examples in comments

* Add test onnx controlnet

* Add tensorrt test

* Remove copied

* Move file test to examples/community

* Remove script

* Remove script

* Remove text

* Fix import

---------

Co-authored-by: dotieuthien <thien.do@mservice.com.vn>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-09-15 12:48:06 +02:00
dg845
4c8a05f115 Fix Consistency Models UNet2DMidBlock2D Attention GroupNorm Bug (#4863)
* Add attn_groups argument to UNet2DMidBlock2D to control the internal Attention block's GroupNorm.

* Add docstring for attn_norm_num_groups in UNet2DModel.

* Since the test UNet config uses resnet_time_scale_shift == 'scale_shift', also set attn_norm_num_groups to 32.

* Add test for attn_norm_num_groups to UNet2DModelTests.

* Fix expected slices for slow tests.

* Also fix tolerances for slow tests.

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-09-15 11:27:51 +01:00
Dhruv Nair
5fd42e5d61 Add SDXL refiner only tests (#5041)
* add refiner only tests

* make style
2023-09-15 12:58:03 +05:30
YiYi Xu
e70cb1243f [WIP] adding Kandinsky training scripts (#4890)
* Add files via upload

Co-authored-by: Shahmatov Arseniy <62886550+cene555@users.noreply.github.com>
Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2023-09-14 06:58:20 -10:00
YiYi Xu
fe4837a96e add step_index and clear noise_sampler at begining of each loop (#5024)
Co-authored-by: yiyixuxu <yixu310@gmail,com>
2023-09-14 06:48:35 -10:00
Patrick von Platen
342c5c02c0 [Release 0.21] Bump version (#5018)
* [Release 0.21] Bump version

* fix & remove

* fix more

* fix all, upload
2023-09-14 18:28:57 +02:00
UmerHA
169fc4add5 Add Prompt2Prompt pipeline (#4563)
* Initial commit P2P

* Replaced CrossAttention, added test skeleton

* bug fixes

* Updated docstring

* Removed unused function

* Created tests

* improved tests

- made fast inference tests faster
- corrected image shape assertions

* Corrected expected output shape in tests

* small fix: test inputs

* Update tests

- used conditional unet2d
- set expected image slices
- edit_kwargs are now not popped, so pipe can be run multiple times

* Fixed bug in int tests

* Fixed tests

* Linting

* Create prompt2prompt.md

* Added to docs toc

* Ran make fix-copies

* Fixed code blocks in docs

* Using same interface as StableDiffusionPipeline

* Fixed small test bug

* Added all options SDPipeline.__call__ has

* Fixed docstring; made __call__ like in SD

* Linting

* Added test for multiple prompts

* Improved docs

* Incorporated feedback

* Reverted formatting on unrelated files

* Moved prompt2prompt to community

- Moved prompt2prompt pipeline from main to community
- Deleted tests
- Moved documentation to community and shorted it

* Update src/diffusers/utils/dummy_torch_and_transformers_objects.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-09-14 16:39:59 +02:00
bonlime
566bdf4c44 Allow disabling from_pretrained tqdm (#5007) 2023-09-14 16:36:19 +02:00
Younes Belkada
0eb715d715 [LoRA] Make optional arguments explicit (#5038)
make optional arguments explicit
2023-09-14 15:58:54 +02:00
Patrick von Platen
3aa641289c [Import] Add missing settings / Correct some dummy imports (#5036)
* [Import] Add missing settings

* up

* up

* up
2023-09-14 12:42:54 +02:00
Vladimir Mandic
ef29b24fda allow loading of sd models from safetensors without online lookups using local config files (#5019)
finish config_files implementation
2023-09-14 12:30:15 +02:00
Patrick von Platen
8dc93ad3e4 [Import] Don't force transformers to be installed (#5035)
* [Import] Don't force transformers to be installed

* make style
2023-09-14 11:42:10 +02:00
Dhruv Nair
e2033d2dff Fix model offload bug when key isn't present (#5030)
* fix model offload bug when key isn't present

* make style
2023-09-14 11:02:06 +02:00
Steven Liu
19edca82f1 [docs] Create clearer optimization sections (#4870)
* refactor

* update general optim sections

* update more sections

* few more updates

* benchmark code
2023-09-13 15:21:15 -07:00
Lucain
b954c22a44 Fix broken link in docs (#5015)
fix broken link
2023-09-13 15:40:25 +02:00
Kashif Rasul
77373c5eb1 [Wuerstchen] fix compel usage (#4999)
* fix compel usage

* minor changes in documentation

* fix tests

* fix more

* fix more

* typos

* fix tests

* formatting

---------

Co-authored-by: Dominic Rampas <d6582533@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-09-13 14:54:59 +02:00
Sayak Paul
0ea51627f1 [Core] Fix dtype in InstructPix2Pix SDXL while computing image_latents (#5013)
* check out dtypes.

* check out dtypes.

* check out dtypes.

* check out dtypes.

* check out dtypes.

* check out dtypes.

* check out dtypes.

* potential fix

* check out dtypes.

* check out dtypes.

* working?
2023-09-13 10:50:24 +01:00
Patrick von Platen
6d6a08f1f1 [Flax->PT] Fix flaky testing (#5011)
fix flaky flax class name
2023-09-13 11:29:13 +02:00
Dhruv Nair
34bfe98eaf Gligen Text to Image fix (#5010)
* fix gligen clip import issue

* fix dtype issue with gligen text to image pipeline

* make fix copies
2023-09-13 10:23:59 +01:00
Patrick von Platen
b47f5115da [Lora] fix lora fuse unfuse (#5003)
* fix lora fuse unfuse

* add same changes to loaders.py

* add test

---------

Co-authored-by: multimodalart <joaopaulo.passos+multimodal@gmail.com>
2023-09-13 11:21:04 +02:00
Patrick von Platen
324aef6d14 [SDXL] Add LoRA to all pipelines (#4896)
* [SDXL] Add LoRA to all pipelines

* fix all

* fix all

* fix all

* fix more docs

* make style
2023-09-13 11:05:20 +02:00
Sayak Paul
8009272f48 [Tests and Docs] Add a test on serializing pipelines with components containing fused LoRA modules (#4962)
* add: test to ensure pipelines can be saved with fused lora modules.

* add docs about serialization with fused lora.

* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Empty-Commit

* Update docs/source/en/training/lora.md

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-09-13 10:01:37 +01:00
Patrick von Platen
1037287e2b examples fix t2i training (#5001)
* examples fix t2i training

* make style
2023-09-12 23:52:41 +02:00
Steven Liu
6ea95b7a90 Fix PR template (#4984)
fix template
2023-09-12 19:36:38 +02:00
Patrick von Platen
0e0db625d0 Fix safety checker seq offload (#4998)
* fix safety checker

* fix safety checker

* fix safety checker
2023-09-12 18:56:35 +02:00
dg845
1f948109b8 [docs] Fix DiffusionPipeline.enable_sequential_cpu_offload docstring (#4952)
* Fix an unmatched backtick and make description more general for DiffusionPipeline.enable_sequential_cpu_offload.

* make style

* _exclude_from_cpu_offload -> self._exclude_from_cpu_offload

* make style

* apply suggestions from review

* make style
2023-09-12 08:58:47 -07:00
Patrick von Platen
37cb819df5 [Lora] Speed up lora loading (#4994)
* speed up lora loading

* Apply suggestions from code review

* up

* up

* Fix more

* Correct more

* Apply suggestions from code review

* up

* Fix more

* Fix more -

* up

* up
2023-09-12 17:51:15 +02:00
Dhruv Nair
f64d52dbca fix custom diffusion tests (#4996) 2023-09-12 17:50:47 +02:00
Dhruv Nair
4d897aaff5 fix image variation slow test (#4995)
fix image variation tests

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-09-12 17:45:47 +02:00
Patrick von Platen
b1105269b7 make style 2023-09-12 14:55:27 +00:00
Kashif Rasul
5d28d2217f [Wuerstchen] fix combined pipeline's num_images_per_prompt (#4989)
* fix encode_prompt

* added prompt_embeds and negative_prompt_embeds

* prompt_embeds for the prior only
2023-09-12 16:55:13 +02:00
Kashif Rasul
73bf620dec fix E721 Do not compare types, use isinstance() (#4992) 2023-09-12 16:52:25 +02:00
Dhruv Nair
c806f2fad6 remove extra gligen in import (#4987) 2023-09-12 18:35:29 +05:30
Patrick von Platen
18b7264bd0 [Utils] Correct custom init sort (#4967)
* [Utils] Correct custom init sort

* [Utils] Correct custom init sort

* [Utils] Correct custom init sort

* add type checking

* fix custom init sort

* fix test

* fix tests

---------

Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2023-09-12 11:05:53 +02:00
zhiqiang
d82157b3ce [Bug Fix] Should pass the dtype instead of torch_dtype (#4917)
.
2023-09-11 19:45:58 +02:00
Patrick von Platen
93579650f8 Refactor model offload (#4514)
* [Draft] Refactor model offload

* [Draft] Refactor model offload

* Apply suggestions from code review

* cpu offload updates

* remove model cpu offload from individual pipelines

* add hook to offload models to cpu

* clean up

* model offload

* add model cpu offload string

* make style

* clean up

* fixes for offload issues

* fix tests issues

* resolve merge conflicts

* update src/diffusers/pipelines/pipeline_utils.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* make style

* Update src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion.py

---------

Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2023-09-11 19:39:26 +02:00
Kashif Rasul
16a056a7b5 Wuerstchen fixes (#4942)
* fix arguments and make example code work

* change arguments in combined test

* Add default timesteps

* style

* fixed test

* fix broken test

* formatting

* fix docstrings

* fix  num_images_per_prompt

* fix doc styles

* please dont change this

* fix tests

* rename to DEFAULT_STAGE_C_TIMESTEPS

---------

Co-authored-by: Dominic Rampas <d6582533@gmail.com>
2023-09-11 15:47:53 +02:00
Patrick von Platen
6c6a246461 Update README.md (#4973)
Add monthly pip installs
2023-09-11 15:45:19 +02:00
Patrick von Platen
6bbee1048b Make sure Flax pipelines can be loaded into PyTorch (#4971)
* Make sure Flax pipelines can be loaded into PyTorch

* add test

* Update src/diffusers/pipelines/pipeline_utils.py
2023-09-11 12:03:49 +02:00
Patrick von Platen
2c60f7d14e [Core] Remove TF import checks (#4968)
[TF] Remove tf
2023-09-11 11:22:40 +02:00
Dhruv Nair
b6e0b016ce Lazy Import for Diffusers (#4829)
* initial commit

* move modules to import struct

* add dummy objects and _LazyModule

* add lazy import to schedulers

* clean up unused imports

* lazy import on models module

* lazy import for schedulers module

* add lazy import to pipelines module

* lazy import altdiffusion

* lazy import audio diffusion

* lazy import audioldm

* lazy import consistency model

* lazy import controlnet

* lazy import dance diffusion ddim ddpm

* lazy import deepfloyd

* lazy import kandinsky

* lazy imports

* lazy import semantic diffusion

* lazy imports

* lazy import stable diffusion

* move sd output to its own module

* clean up

* lazy import t2iadapter

* lazy import unclip

* lazy import versatile and vq diffusion

* lazy import vq diffusion

* helper to fetch objects from modules

* lazy import sdxl

* lazy import txt2vid

* lazy import stochastic karras

* fix model imports

* fix bug

* lazy import

* clean up

* clean up

* fixes for tests

* fixes for tests

* clean up

* remove import of torch_utils from utils module

* clean up

* clean up

* fix mistake import statement

* dedicated modules for exporting and loading

* remove testing utils from utils module

* fixes from  merge conflicts

* Update src/diffusers/pipelines/kandinsky2_2/__init__.py

* fix docs

* fix alt diffusion copied from

* fix check dummies

* fix more docs

* remove accelerate import from utils module

* add type checking

* make style

* fix check dummies

* remove torch import from xformers check

* clean up error message

* fixes after upstream merges

* dummy objects fix

* fix tests

* remove unused module import

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-09-11 09:56:22 +02:00
Sayak Paul
88735249da [Docs] fix: minor formatting in the Würstchen docs (#4965)
fix: minor formatting in the docs
2023-09-11 09:12:53 +02:00
Will Berman
4191ddee11 Revert revert and install accelerate main (#4963)
* Revert "Temp Revert "[Core] better support offloading when side loading is enabled… (#4927)"

This reverts commit 2ab170499e.

* tests: install accelerate from main
2023-09-11 08:49:46 +02:00
Will Berman
2ab170499e Temp Revert "[Core] better support offloading when side loading is enabled… (#4927)
Revert "[Core] better support offloading when side loading is enabled. (#4855)"

This reverts commit e4b8e7928b.
2023-09-08 19:54:59 -07:00
Sayak Paul
914c513ee0 [Docs] add t2i adapter entry to overview of training scripts. (#4946)
add t2i adapter entry to overview of training scripts.
2023-09-09 06:52:11 +05:30
Will Berman
d73e6ad050 guard save model hooks to only execute on main process (#4929) 2023-09-08 10:30:06 -07:00
Sayak Paul
d0cf681a1f [Tests] add: tests for t2i adapter training. (#4947)
add: tests for t2i adapter training.
2023-09-08 19:45:39 +05:30
Suraj Patil
dfec61f4b3 [examples] T2IAdapter training script (#4934)
* add t2i_example script

* remove in channels logic

* remove comments

* remove use_euler arg

* add requirements

* only use canny example

* use datasets

* comments

* make log_validation consistent with other scripts

* add readme

* fix title in readme

* update check_min_version

* change a few minor things.

* add doc entry

* add: test for t2i adapter training

* remove use_auth_token

* fix: logged info.

* remove tests for now.

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-09-08 10:03:02 +05:30
Suraj Patil
0ec7a02b6a [StableDiffusionXLAdapterPipeline] allow negative micro conds (#4941)
* allow negative micro conds in t2i pipeline

* Empty-Commit

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-09-08 07:59:42 +05:30
Suraj Patil
626284f8d1 [StableDiffusionXLAdapterPipeline] add adapter_conditioning_factor (#4937)
add adapter_conditioning_factor
2023-09-07 19:05:28 +02:00
Sayak Paul
9800cc5ece [InstructPix2Pix] Fix pipeline implementation and add docs (#4844)
* initial evident fixes.

* instructpix2pix fixes.

* add: entry to doc.

* address PR feedback.

* make fix-copies
2023-09-07 15:34:19 +05:30
Kashif Rasul
541bb6ee63 Würstchen model (#3849)
* initial

* initial

* added initial convert script for paella vqmodel

* initial wuerstchen pipeline

* add LayerNorm2d

* added modules

* fix typo

* use model_v2

* embed clip caption and negative_caption

* fixed name of var

* initial modules in one place

* WuerstchenPriorPipeline

* initial shape

* initial denoising prior loop

* fix output

* add WuerstchenPriorPipeline to __init__.py

* use the noise ratio in the Prior

* try to save pipeline

* save_pretrained working

* Few additions

* add _execution_device

* shape is int

* fix batch size

* fix shape of ratio

* fix shape of ratio

* fix output dataclass

* tests folder

* fix formatting

* fix float16 + started with generator

* Update pipeline_wuerstchen.py

* removed vqgan code

* add WuerstchenGeneratorPipeline

* fix WuerstchenGeneratorPipeline

* fix docstrings

* fix imports

* convert generator pipeline

* fix convert

* Work on Generator Pipeline. WIP

* Pipeline works with our diffuzz code

* apply scale factor

* removed vqgan.py

* use cosine schedule

* redo the denoising loop

* Update src/diffusers/models/resnet.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* use torch.lerp

* use warp-diffusion org

* clip_sample=False,

* some refactoring

* use model_v3_stage_c

* c_cond size

* use clip-bigG

* allow stage b clip to be None

* add dummy

* würstchen scheduler

* minor changes

* set clip=None in the pipeline

* fix attention mask

* add attention_masks to text_encoder

* make fix-copies

* add back clip

* add text_encoder

* gen_text_encoder and tokenizer

* fix import

* updated pipeline test

* undo changes to pipeline test

* nip

* fix typo

* fix output name

* set guidance_scale=0 and remove diffuzz

* fix doc strings

* make style

* nip

* removed unused

* initial docs

* rename

* toc

* cleanup

* remove test script

* fix-copies

* fix multi images

* remove dup

* remove unused modules

* undo changes for debugging

* no  new line

* remove dup conversion script

* fix doc string

* cleanup

* pass default args

* dup permute

* fix some tests

* fix prepare_latents

* move Prior class to modules

* offload only the text encoder and vqgan

* fix resolution calculation for prior

* nip

* removed testing script

* fix shape

* fix argument to set_timesteps

* do not change .gitignore

* fix resolution calculations + readme

* resolution calculation fix + readme

* small fixes

* Add combined pipeline

* rename generator -> decoder

* Update .gitignore

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* removed efficient_net

* create combined WuerstchenPipeline

* make arguments consistent with VQ model

* fix var names

* no need to return text_encoder_hidden_states

* add latent_dim_scale to config

* split model into its own file

* add WuerschenPipeline to docs

* remove unused latent_size

* register latent_dim_scale

* update script

* update docstring

* use Attention preprocessor

* concat with normed input

* fix-copies

* add docs

* fix test

* fix style

* add to cpu_offloaded_model

* updated type

* remove 1-line func

* updated type

* initial decoder test

* formatting

* formatting

* fix autodoc link

* num_inference_steps is int

* remove comments

* fix example in docs

* Update src/diffusers/pipelines/wuerstchen/diffnext.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* rename layernorm to WuerstchenLayerNorm

* rename DiffNext to WuerstchenDiffNeXt

* added comment about MixingResidualBlock

* move paella vq-vae to pipelines' folder

* initial decoder test

* increased test_float16_inference expected diff

* self_attn is always true

* more passing decoder tests

* batch image_embeds

* fix failing tests

* set the correct dtype

* relax inference test

* update prior

* added combined pipeline test

* faster test

* faster test

* Update src/diffusers/pipelines/wuerstchen/pipeline_wuerstchen_combined.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* fix issues from review

* update wuerstchen.md + change generator name

* resolve issues

* fix copied from usage and add back batch_size

* fix API

* fix arguments

* fix combined test

* Added timesteps argument + fixes

* Update tests/pipelines/test_pipelines_common.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update tests/pipelines/wuerstchen/test_wuerstchen_prior.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/wuerstchen/pipeline_wuerstchen_combined.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/wuerstchen/pipeline_wuerstchen_combined.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/wuerstchen/pipeline_wuerstchen_combined.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/wuerstchen/pipeline_wuerstchen_combined.py

* up

* Fix more

* failing tests

* up

* up

* correct naming

* correct docs

* correct docs

* fix test params

* correct docs

* fix classifier free guidance

* fix classifier free guidance

* fix more

* fix all

* make tests faster

---------

Co-authored-by: Dominic Rampas <d6582533@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Dominic Rampas <61938694+dome272@users.noreply.github.com>
2023-09-06 16:15:51 +02:00
dg845
b76274cb53 [docs] Fix typo in Inpainting force unmasked area unchanged example (#4910)
Fix typo by replacing init_image_arr and repainted_image_arr with init_image and repainted_image, respectively.
2023-09-06 10:49:01 +02:00
Patrick von Platen
dc3e0ca59b [Textual inversion] Relax loading textual inversion (#4903)
* [Textual inversion] Relax loading textual inversion

* up
2023-09-06 10:39:44 +02:00
Sayak Paul
6c314ad0ce [Docs] add doc entry to explain lora fusion and use of different scales. (#4893)
* add doc entry to explain lora fusion and use of different scales.

* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2023-09-06 07:38:13 +05:30
Steven Liu
946bb53c56 [docs] Add stronger warning for SDXL height/width (#4867)
* add size warning

* feedback
2023-09-05 10:50:42 -07:00
YiYi Xu
ea311e6989 remove latent input for kandinsky prior_emb2emb pipeline (#4887)
* remove latent input

* fix test

---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2023-09-04 22:19:49 -10:00
YiYi Xu
4c5718a09c fix a bug in StableDiffusionUpscalePipeline.run_safety_checker (#4886)
fix

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2023-09-04 22:18:59 -10:00
Patrick von Platen
2340ed629e [Test] Reduce CPU memory (#4897)
* [Test] Reduce CPU memory

* [Test] Reduce CPU memory
2023-09-05 13:18:35 +05:30
Bagheera
cfdfcf2018 Add --vae_precision option to the SDXL pix2pix script so that we have… (#4881)
* Add --vae_precision option to the SDXL pix2pix script so that we have the option of avoiding float32 overhead

* style

---------

Co-authored-by: bghira <bghira@users.github.com>
2023-09-05 09:04:06 +02:00
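A rough sketch of what a `--vae_precision`-style flag typically looks like in these example scripts; the exact handling in the SDXL pix2pix script may differ, and the argparse wiring below is illustrative only:

```python
import argparse
import torch

parser = argparse.ArgumentParser()
parser.add_argument(
    "--vae_precision",
    type=str,
    default="fp32",
    choices=["fp16", "fp32", "bf16"],
    help="Precision to run the VAE in (fp32 is the heaviest option).",
)
args = parser.parse_args([])  # [] just to show the defaults here; drop it in a real script

DTYPE_MAP = {"fp16": torch.float16, "fp32": torch.float32, "bf16": torch.bfloat16}
vae_dtype = DTYPE_MAP[args.vae_precision]
# later in the script: vae.to(dtype=vae_dtype)
```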
Sayak Paul
e4b8e7928b [Core] better support offloading when side loading is enabled. (#4855)
* better support offloading when side loading is enabled.

* load_textual_inversion

* better messaging for textual inversion.

* fixes

* address PR feedback.

* sdxl support.

* improve messaging

* recursive removal when cpu sequential offloading is enabled.

* add: lora tests

* recurse.

* add: offload tests for textual inversion.
2023-09-05 06:55:13 +05:30
dg845
55e17907f9 Add dropout parameter to UNet2DModel/UNet2DConditionModel (#4882)
* Add dropout param to get_down_block/get_up_block and UNet2DModel/UNet2DConditionModel.

* Add dropout param to Versatile Diffusion modeling, which has a copy of UNet2DConditionModel and its own get_down_block/get_up_block functions.
2023-09-05 00:02:21 +02:00
Sayak Paul
c81a88b239 [Core] LoRA improvements pt. 3 (#4842)
* throw warning when more than one lora is attempted to be fused.

* introduce support of lora scale during fusion.

* change test name

* changes

* change to _lora_scale

* lora_scale to call whenever applicable.

* debugging

* lora_scale additional.

* cross_attention_kwargs

* lora_scale -> scale.

* lora_scale fix

* lora_scale in patched projection.

* debugging

* styling.

* debugging

* remove unneeded prints.

* remove unneeded prints.

* assign cross_attention_kwargs.

* debugging

* clean up.

* refactor scale retrieval logic a bit.

* fix NoneType

* fix: tests

* add more tests

* more fixes.

* figure out a way to pass lora_scale.

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* unify the retrieval logic of lora_scale.

* move adjust_lora_scale_text_encoder to lora.py.

* introduce dynamic adjustment lora scale support to sd

* fix up copies

* Empty-Commit

* add: test to check fusion equivalence on different scales.

* handle lora fusion warning.

* make lora smaller

* make lora smaller

* make lora smaller

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-09-04 23:52:31 +02:00
YiYi Xu
2c1677eefe allow passing components to connected pipelines when use the combined pipeline (#4883)
* fix

* add test

---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2023-09-04 06:21:36 -10:00
dg845
c73e609aae Fix get_dummy_inputs for Stable Diffusion Inpaint Tests (#4845)
* Change StableDiffusionInpaintPipelineFastTests.get_dummy_inputs to produce a random image and a white mask_image.

* Add dummy expected slices for the test_stable_diffusion_inpaint tests.

* Remove print statement

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-09-04 12:04:59 +02:00
Erwann Millon
2fa4b3ffb0 check for unet_lora_layers in sdxl pipeline's save_lora_weights method (#4821)
run make fix-copies and make style
2023-09-04 09:59:59 +02:00
Isamu Isozaki
3201903d94 Retrieval Augmented Diffusion Models (#3297)
* Resetting rdm pr

* Fixed styles

* Fixed style

* Moved to rdm folder+fixed slight errors

* Removed config diff

* Started adding tests

* Adding retrieved images

* Fixed faiss import

* Fixed import errors

* Fixing tests

* Added require_faiss

* Updated dependency table

* Attempt solving consistency test

* Fixed truncation and vocab size issue

* Passed common tests

* Finished up cpu testing on pipeline

* Passed all tests locally

* Removed some slow tests

* Removed diffs from test_pipeline_common

* Remove logs

* Removed diffs from test_pipelines_common

* Fixed style

* Fully fixed styles on diffs

* Fixed name

* Proper rename

* Fixed dummies

* Fixed issue with dummyonnx

* Fixed black style

* Fixed dummies

* Changed ordering

* Fixed logging

* Fixing

* Fixing

* quality

* Debugging regex

* Fix dummies with guess

* Fixed typo

* Attempt fix dummies

* black

* ruff

* fixed ordering

* Logging

* Attempt fix

* Attempt fix dummy

* Attempt fixing styles

* Fixed faiss dependency

* Removed unnecessary deprecations

* Finished up main changes

* Added doc

* Passed tests

* Fixed tests

* Remove invisible watermark

* Fixed ruff errors

* Added prompt embed to tests

* Added tests and made retriever an optional component

* Fixed styles

* Made faiss a dependency of pipeline

* Logging

* Fixed dummies

* Make pipeline test work

* Fixed style

* Moved to research projects

* Remove diff

* Fixed style error

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-09-04 09:42:04 +02:00
Patrick von Platen
705c592ea9 [Tests] Add combined pipeline tests (#4869)
* [Tests] Add combined pipeline tests

* Update tests/pipelines/kandinsky_v22/test_kandinsky.py
2023-09-02 21:36:20 +02:00
Harutatsu Akiyama
c52acaaf17 [ControlNet SDXL Inpainting] Support inpainting of ControlNet SDXL (#4694)
* [ControlNet SDXL Inpainting] Support inpainting of ControlNet SDXL

Co-authored-by: Jiabin Bai <1355864570@qq.com>


---------

Co-authored-by: Harutatsu Akiyama <kf.zy.qin@gmail.com>
2023-09-02 08:04:22 -10:00
Steven Liu
2c45a53aef [docs] Shap-E guide (#4700)
* first draft

* fixes

* more fixes

* fix toctree
2023-09-01 19:52:41 -07:00
Steven Liu
22ea35cf23 [docs] DiffEdit guide (#4722)
* first draft

* minor edits
2023-09-01 14:18:41 -07:00
YiYi Xu
5c404f20f4 [WIP] masked_latent_inputs for inpainting pipeline (#4819)
* add

---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2023-09-01 06:55:31 -10:00
YiYi Xu
d8b6f5d09e support AutoPipeline.from_pipe between a pipeline and its ControlNet pipeline counterpart (#4861)
add
2023-09-01 06:53:03 -10:00
YiYi Xu
30a5acc39f fix a bug in sdxl-controlnet-img2img when using MultiControlNetModel (#4862)
fix

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2023-09-01 06:51:59 -10:00
Seongsu Park
0c775544dd [Docs] Korean translation update (#4684)
* Docs kr update 3

controlnet, reproducibility uploaded

generator kept as-is
seamless multi-GPU kept as-is

create_dataset first-pass translation

stable_diffusion_jax

new translation

Add coreml, tome

kr docs minor fix

translate training/instructpix2pix

fix training/instructpix2pix.mdx

using-diffusers/weighting_prompts first-pass translation

add SDXL docs

Translate using-diffuers/loading_overview.md

translate using-diffusers/textual_inversion_inference.md

Conditional image generation (#37)

* stable_diffusion_jax

* index_update

* index_update

* condition_image_generation

---------

Co-authored-by: Seongsu Park <tjdtnsu@gmail.com>

jihwan/stable_diffusion.mdx

custom_diffusion work completed

quicktour work completed

distributed inference & control brightness (#40)

* distributed_inference.mdx

* control_brightness

---------

Co-authored-by: idra79haza <idra79haza@github.com>
Co-authored-by: Seongsu Park <tjdtnsu@gmail.com>

using_safetensors (#41)

* distributed_inference.mdx

* control_brightness

* using_safetensors.mdx

---------

Co-authored-by: idra79haza <idra79haza@github.com>
Co-authored-by: Seongsu Park <tjdtnsu@gmail.com>

delete safetensor short

* Replace mdx with md

* toctree update

* Add controlling_generation

* toctree fix

* colab link, minor fix

* docs name typo fix

* frontmatter fix

* translation fix
2023-09-01 09:23:45 -07:00
Pedro Cuenca
60d259add1 Fix link from API to using-diffusers (#4856)
* Fix link from API to using-diffusers

* Fix link
2023-09-01 15:05:01 +02:00
Dhruv Nair
189e9f01b3 Test Cleanup Precision issues (#4812)
* proposal for flaky tests

* more precision fixes

* move more tests to use cosine distance

* more test fixes

* clean up

* use default attn

* clean up

* update expected value

* make style

* make style

* Apply suggestions from code review

* Update src/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion_img2img.py

* make style

* fix failing tests

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-09-01 17:58:37 +05:30
Nguyễn Công Tú Anh
38466c369f Add GLIGEN Text Image implementation (#4777)
* Add GLIGEN Text Image implementation

* add style transfer from image

* fix check_repository_consistency

* add convert script GLIGEN model to Diffusers

* rename attention type

* fix style code

* remove PositionNetTextImage

* Revert "fix check_repository_consistency"

This reverts commit 15f098c96e.

* change attention type name

* update docs for GLIGEN

* change examples with hf-document-image

* fix style

* add CLIPImageProjection for GLIGEN

* Add new encode_prompt, load project matrix in pipe init

* move CLIPImageProjection to stable_diffusion

* add comment
2023-09-01 15:48:01 +05:30
dg845
5f740d0f55 [docs] Add inpainting example for forcing the unmasked area to remain unchanged to the docs (#4536)
* Initial code to add force_unmasked_unchanged argument to StableDiffusionInpaintPipeline.__call__.

* Try to improve StableDiffusionInpaintPipelineFastTests.get_dummy_inputs.

* Use original mask to preserve unmasked pixels in pixel space rather than latent space.

* make style

* start working on note in docs to force unmasked area to be unchanged

* Add example of forcing the unmasked area to remain unchanged.

* Revert "make style"

This reverts commit fa7759293a.

* Revert "Use original mask to preserve unmasked pixels in pixel space rather than latent space."

This reverts commit 092bd0e9e9.

* Revert "Try to improve StableDiffusionInpaintPipelineFastTests.get_dummy_inputs."

This reverts commit ff41cf43c5.

* Revert "Initial code to add force_unmasked_unchanged argument to StableDiffusionInpaintPipeline.__call__."

This reverts commit 989979752a.

---------

Co-authored-by: Will Berman <wlbberman@gmail.com>
2023-08-31 21:29:16 -07:00
YiYi Xu
75f81c25d1 fix sdxl-inpaint fast test (#4859)
fix inpaint test

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2023-08-31 15:42:58 -10:00
Patrick von Platen
bbf733ab70 [SDXL Inpaint] Correct strength default (#4858) 2023-08-31 20:34:33 +02:00
Steven Liu
aedd78767c [docs] ControlNet guide (#4640)
* first draft

* finish first draft

* feedback and remove sections from API pages

* clean docstrings

* add full code example
2023-08-31 10:02:02 -04:00
Patrick von Platen
7caa3682e4 Remove warn with deprecate (#4850)
* Remove warn with deprecate

* Fix typo with 1.0,0
2023-08-31 15:08:41 +02:00
Ella Charlaix
0edb4cac78 Fix image processor inputs width (#4853)
fix width for np array inputs
2023-08-31 14:50:55 +02:00
Yukun Huang
85b3f08c26 Fix potential type mismatch errors in SDXL pipelines (#4796)
* Fix potential type conversion errors in SDXL pipelines

* make sure vae stays in fp16

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-08-31 09:22:18 +02:00
Sayak Paul
19f3161d94 [Docs] improve the LoRA doc. (#4838)
* improve the LoRA doc.

* include fuse_lora and unfuse_lora

* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2023-08-31 00:13:15 +05:30
Steven Liu
a1fdfca36f [docs] SDXL (#4428)
* first draft

* reorg toctree

* note about minsdxl

* feedback

* fix

* micro-conditionings

* add tip

* fix section levels

* d'oh fix pipeline names

* feedback

* remove old section
2023-08-30 11:34:55 -04:00
Patrick von Platen
d1e20be664 make style 2023-08-30 14:13:14 +02:00
Anatoly Belikov
af3854d6ad sketch inpaint from a1111 for non-inpaint models (#4824)
* Create masked_stable_diffusion_img2img.py

* add MaskedIm2ImPipeline to readme

* Update README.md
2023-08-30 09:51:28 +02:00
Patrick von Platen
9f1936d2fc Fix Unfuse Lora (#4833)
* Fix Unfuse Lora

* add tests

* Fix more

* Fix more

* Fix all

* make style

* make style
2023-08-30 09:32:25 +05:30
Eugene Antropov
fbca2e0a7a Add loading ckpt from file for SDXL controlNet (#4683)
* Add load ckpt from file for ControlNet SDXL

* Reformat code

* Resort imports

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-08-30 09:00:53 +05:30
Sayak Paul
3768d4d77c [Core] refactor encode_prompt (#4617)
* refactoring of encode_prompt()

* better handling of device.

* fix: device determination

* fix: device determination 2

* handle num_images_per_prompt

* revert changes in loaders.py and give birth to encode_prompt().

* minor refactoring for encode_prompt()/

* make backward compatible.

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* fix: concatenation of the neg and pos embeddings.

* incorporate encode_prompt() in test_stable_diffusion.py

* turn it into big PR.

* make it bigger

* gligen fixes.

* more fixes to gligen

* _encode_prompt -> encode_prompt in tests

* first batch

* second batch

* fix blasphemous mistake

* fix

* fix: hopefully for the final time.

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-08-30 08:57:26 +05:30
Nikhil Gajendrakumar
8ccb619416 VaeImageProcessor: Allow image resizing also for torch and numpy inputs (#4832)
Co-authored-by: Nikhil Gajendrakumar <nikhilkatte@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-08-29 22:45:05 +02:00
zideliu
0699ac62f0 fix typo (#4822) 2023-08-29 20:54:36 +02:00
Patrick von Platen
a76f2ad538 make style 2023-08-29 09:25:09 +02:00
VitjanZ
7200daa412 Support saving multiple t2i adapter models under one checkpoint (#4798)
* adding save and load for MultiAdapter, adding test

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Adding changes from review test_stable_diffusion_adapter

* import sorting fix

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-08-29 09:24:40 +02:00
Alexsey Shestacov
3eeaf4e041 Fix convert_original_stable_diffusion_to_diffusers script (#4817)
Fix stable diffusion conversion script
2023-08-29 09:14:45 +02:00
Patrick von Platen
c583f3b452 Fuse loras (#4473)
* Fuse loras

* initial implementation.

* add slow test one.

* styling

* add: test for checking efficiency

* print

* position

* place model offload correctly

* style

* style.

* unfuse test.

* final checks

* remove warning test

* remove warnings altogether

* debugging

* tighten up tests.

* debugging

* suit up the generator initialization a bit.

* remove print

* update assertion.

* debugging

* remove print.

* fix: assertions.

* style

* can generator be a problem?

* generator

* correct tests.

* support text encoder lora fusion.

* tighten up tests.

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-08-29 09:14:24 +02:00
Chong Mou
12358b986f add models for T2I-Adapter-XL (#4696)
* T2I-Adapter-XL

* update

* update

* add pipeline

* modify pipeline

* modify pipeline

* modify pipeline

* modify pipeline

* modify pipeline

* modify modeling_text_unet

* fix styling.

* fix: copies.

* adapter settings

* new test case

* new test case

* debugging

* revert prints.

* new test case

* remove print

* org test case

* add test_pipeline

* styling.

* fix copies.

* modify test parameter

* style.

* add adapter-xl doc

* double quotes in docs

* Fix potential type mismatch

* style.

---------

Co-authored-by: sayakpaul <spsayakpaul@gmail.com>
2023-08-29 10:34:07 +05:30
YiYi Xu
5eeedd9e33 add StableDiffusionXLControlNetImg2ImgPipeline (#4592)
---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-08-28 08:16:27 -10:00
YiYi Xu
a971c598b5 fix auto_pipeline: pass kwargs to load_config (#4793)
* fix

---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-08-28 07:42:16 -10:00
YiYi Xu
934d439a42 fix bug in StableDiffusionXLControlNetPipeline when use guess_mode (#4799)
* fix



---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-08-28 06:51:17 -10:00
Dhruv Nair
e3f3672f46 Fix Disentangle ONNX and non-ONNX pipeline (#4656)
* initial commit to fix inheritance issue

* clean up sd onnx upscale

* clean up
2023-08-28 21:14:49 +05:30
Mario Namtao Shianti Larcher
87ae330056 [Examples] Save SDXL LoRA weights with chosen precision (#4791)
* Increase min accelerate ver to avoid OOM when mixed precision

* Rm re-instantiation of VAE

* Rm casting to float32

* Del unused models and free GPU

* Fix style
2023-08-28 13:57:40 +05:30
Patrick von Platen
1b46c66132 make style 2023-08-28 07:17:21 +00:00
Yead
031358988b Fix save_path bug in textual inversion training script (#4710)
* Update textual_inversion.py

fixed safe_path bug in textual inversion training

* Update test_examples.py

update test_textual_inversion for updating saved file's name

* Update textual_inversion.py

fixed some formatting issues

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-08-28 09:17:08 +02:00
Shauray Singh
fd35689f25 [WIP] Add Fabric (#4201)
* empty PR

* init

* changes

* starting with the pipeline

* stable diff

* prev

* more things, getting started

* more functions

* making it more readable

* almost done testing

* var changes

* testing

* device

* device support

* maybe

* device malfunctions

* new new

* register

* testing

* exec does not work

* float

* change info

* change of architecture

* might work

* testing with colab

* more attn stuff

* stupid additions

* documenting and testing

* writing tests

* more docs

* tests and docs

* remove test

* change cross attention

* revert back

* tests

* reverting back to orig

* changes

* test passing

* pipeline changes

* before quality

* quality checks pass

* remove print statements

* doc fixes

* __init__ error something

* update docs, working on dim

* working on encoding

* doc fix

* more fixes

* no more dependent on 512*512

* update docs

* fixes

* test passing

* remove comment

* fixes and migration

* simpler tests

* doc changes

* green CI

* changes

* more docs

* changes

* new images

* to community examples

* delete

* more fixes

* changes

* fix

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-08-28 09:10:55 +02:00
chillpixel
e8c9069d6f Update loaders.py (#4805)
* Update loaders.py

Solves an error sometimes thrown while iterating over state_dict.keys() caused by using the .pop() method within the loop.

* Update loaders.py
2023-08-28 11:23:25 +05:30
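The entry above refers to a classic Python pitfall: calling `.pop()` on a dictionary while iterating over its live `.keys()` view mutates the dict mid-iteration and raises `RuntimeError: dictionary changed size during iteration`. A minimal sketch of the pattern and the usual fix (variable names are made up, this is not the actual loaders.py code):

```python
state_dict = {"unet.weight": 0, "text_encoder.weight": 1}

# Buggy: mutating the dict while iterating over its live keys() view
# for key in state_dict.keys():
#     if key.startswith("unet."):
#         state_dict.pop(key)  # RuntimeError: dictionary changed size during iteration

# Fix: snapshot the keys first, then popping inside the loop is safe
for key in list(state_dict.keys()):
    if key.startswith("unet."):
        state_dict.pop(key)
```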
Patrick von Platen
766aa50f70 [LoRA Attn Processors] Refactor LoRA Attn Processors (#4765)
* [LoRA Attn] Refactor LoRA attn

* correct for network alphas

* fix more

* fix more tests

* fix more tests

* Move below

* Finish

* better version

* correct serialization format

* fix

* fix more

* fix more

* fix more

* Apply suggestions from code review

* Update src/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion_img2img.py

* deprecation

* relax atol for slow test slighly

* Finish tests

* make style

* make style
2023-08-28 10:38:09 +05:30
Patrick von Platen
c4d2823601 [SDXL Lora] Fix last ben sdxl lora (#4797)
* Fix last ben sdxl lora

* Correct typo

* make style
2023-08-26 23:31:56 +02:00
Patrick von Platen
4f8853e481 [Torch compile] Fix torch compile for controlnet (#4795)
Fix torch compile for controlnet
2023-08-26 22:30:02 +02:00
Steven Liu
fed88195e3 [docs] Fix syntax for compel (#4794)
* fix syntax

* update image
2023-08-26 11:33:10 -07:00
Sayak Paul
0de35e4a52 [Tests] Tighten up LoRA loading relaxation (#4787)
* debugging

* better logic for filtering.

* Update src/diffusers/loaders.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-08-26 15:01:16 +05:30
Canberk Kandemir
0d81e543a2 Unet fix (#4769)
* Optional images variable train_custom_diffusion.py

* Fixed train_custom_diffusion.py

* Revert accidental changes to unet_2d_condition.py

* "Format code with black"
2023-08-26 11:01:24 +02:00
Sayak Paul
3be0ff9056 [Core] Support negative conditions in SDXL (#4774)
* add: support negative conditions.

* fix: key

* add: tests

* address PR feedback.

* add documentation

* add img2img support.

* add inpainting support.

* ad controlnet support

* Apply suggestions from code review

* modify wording in the doc.
2023-08-26 09:13:44 +05:30
Patrick von Platen
2764db3194 [SDXL] Add docs about forcing passed embeddings to be 0 (#4783)
* make style

* make style

* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* make style

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2023-08-25 20:52:45 +02:00
Patrick von Platen
048d901993 make style 2023-08-25 18:51:03 +00:00
cmdr2
cb432c4ebc Allow passing a checkpoint state_dict to convert_from_ckpt (instead of just a string path) (#4653)
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-08-25 20:50:39 +02:00
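A sketch of the pattern this change describes — accept either a checkpoint path or an already-loaded state_dict. The function and argument names here are illustrative, not the exact convert_from_ckpt signature:

```python
import torch

def load_checkpoint(checkpoint_path_or_dict, device="cpu"):
    if isinstance(checkpoint_path_or_dict, str):
        # A path was given: load from disk as before.
        return torch.load(checkpoint_path_or_dict, map_location=device)
    # A dict was given: the caller already loaded (or constructed) the state_dict.
    return checkpoint_path_or_dict
```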
YiYi Xu
b7b1a30bc4 refactor prepare_mask_and_masked_image with VaeImageProcessor (#4444)
* refactor image processor for mask
---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2023-08-25 08:18:48 -10:00
Will Berman
7e5587a5ac instance_prompt->class_prompt (#4784) 2023-08-25 20:06:55 +02:00
Mayank Khanduja
dc8da1d449 Fixed broken link of CLIP doc in evaluation doc (#4760) 2023-08-25 20:04:50 +02:00
Zijian He
3dd540171d fix bug of progress bar in clip guided images mixing (#4729) 2023-08-25 18:54:03 +02:00
YiYi Xu
b3b2d30cd8 fix a bug in from_pretrained when load optional components (#4745)
* fix
---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-08-25 06:25:48 -10:00
Dhruv Nair
3bba44d74e [WIP ] Proposal to address precision issues in CI (#4775)
* proposal for flaky tests

* clean up
2023-08-25 19:12:09 +05:30
Sanchit Gandhi
b1290d3fb8 Convert MusicLDM (#4579)
* from audioldm

* fix vae

* move to new pipeline

* copied from audioldm

* remove redundant control flow

* iterate

* fix docstring

* finish pipeline

* tests: from audioldm2

* iterate

* finish fast tests

* finish slow integration tests

* add docs

* remove dtype test

* update toctree

* "copied from" in conversion (where possible)

* Update docs/source/en/api/pipelines/musicldm.md

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* fix docstring

* make nightly

* style

* fix dtype test

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-08-25 13:31:00 +01:00
Sanchit Gandhi
29a11c2a94 [AudioLDM 2] Pipeline fixes (#4738)
* fix docs

* fix unet docs

* use image output for latents

* fix hub checkpoints

* fix pipeline example

* update example

* return_dict = False

* revert image pipeline output

* revert doc changes

* remove dtype test

* make style

* remove docstring updates

* remove unet docstring update

* Empty commit to re-trigger CI

* fix cpu offload

* fix dtype test

* add offload test
2023-08-25 11:38:10 +01:00
Patrick von Platen
cdacd8f1dd Torch device (#4755) 2023-08-25 11:13:32 +02:00
Sayak Paul
470d51c8ed improve setup.py (#4748) 2023-08-25 13:44:20 +05:30
Andrew Zhu
d6141205cd fix sdxl_lwp empty neg_prompt error issue (#4743)
* fix sdxl_lwp empty neg_prompt error issue

* fix sdxl_lwp empty neg_prompt error issue, update code format

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-08-25 09:55:56 +05:30
Sayak Paul
4447547eda [Examples] fix sdxl dreambooth lora checkpointing. (#4749)
* fix sdxl dreambooth lora checkpointing.

* style
2023-08-25 09:50:02 +05:30
Sayak Paul
5222294748 [LoRA] relax lora loading logic (#4610)
* relax lora loading logic.

* cater to the other cases too.

* fix: variable name

* bring the chaos down.

* check

* deal with checkpointed files.

* Apply suggestions from code review

Co-authored-by: apolinário <joaopaulo.passos@gmail.com>

* style

---------

Co-authored-by: apolinário <joaopaulo.passos@gmail.com>
2023-08-25 09:35:51 +05:30
Mario Namtao Shianti Larcher
c25c46137d [Examples] Add madebyollin VAE to SDXL LoRA example, along with an explanation (#4762)
Add madebyollin VAE to LoRA example, along with an explenation
2023-08-25 09:34:32 +05:30
Will Berman
3105c710ba [fix] multi t2i adapter set total_downscale_factor (#4621)
* [fix] multi t2i adapter set total_downscale_factor

* move image checks into check inputs

* remove copied from
2023-08-24 12:01:23 -07:00
Patrick von Platen
58f5f748f4 [Tests] Fix paint by example (#4761)
* [Tests] Fix paint by example

* Update src/diffusers/pipelines/paint_by_example/image_encoder.py
2023-08-24 16:03:10 +02:00
Dhruv Nair
4f05058bb7 Clean up flaky behaviour on Slow CUDA Pytorch Push Tests (#4759)
use max diff to compare model outputs
2023-08-24 18:58:02 +05:30
Patrick von Platen
5d4413001b make style 2023-08-24 10:19:47 +00:00
Symbiomatrix
863e741614 Bugfix for SDXL model loading in low ram system. (#4628)
Update convert_from_ckpt.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-08-24 12:19:16 +02:00
Sanchit Gandhi
24c5e7708b [AudioLDM2] Doc fixes (#4739)
* [AudioLDM2] Doc fixes

* update docstrings

* fix unet docstring

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-08-24 07:20:27 +05:30
YiYi Xu
cd21b965d1 add a step_index counter (#4347)
add self.step_index

---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-08-23 10:49:54 -10:00
Yinzhen Wang
d185b5ed5f change validation scheduler for train_dreambooth.py when training IF (#4333)
* dreambooth training

* train_dreambooth validation scheduler

* set a particular scheduler via a string

* modify readme after setting a particular scheduler via a string

* modify readme after setting a particular scheduler

* use importlib to set a particular scheduler

* import with correct sort
2023-08-23 22:18:17 +02:00
Suraj Patil
709a642827 fix dummy import for AudioLDM2 (#4741)
* fix import

* style
2023-08-23 22:07:47 +02:00
Sanchit Gandhi
0a0fe69aa6 [AudioLDM Docs] Update docstring (#4744) 2023-08-23 11:04:54 -07:00
realliujiaxu
124e76ddc6 [docs] add variant="fp16" flag (#4678) 2023-08-23 10:00:34 -07:00
Sanchit Gandhi
05b0ec63bc [AudioLDM Docs] Fix docs for output (#4737) 2023-08-23 18:02:11 +02:00
Sayak Paul
4909b1e3ac [Examples] fix checkpointing and casting bugs in train_text_to_image_lora_sdxl.py (#4632)
* fix: casting issues.

* fix checkpointing.

* tests

* fix: bugs
2023-08-23 10:58:54 +05:30
Ollin Boer Bohan
052bf3280b Fix AutoencoderTiny encoder scaling convention (#4682)
* Fix AutoencoderTiny encoder scaling convention

  * Add [-1, 1] -> [0, 1] rescaling to EncoderTiny

  * Move [0, 1] -> [-1, 1] rescaling from AutoencoderTiny.decode to DecoderTiny
    (i.e. immediately after the final conv, as early as possible)

  * Fix missing [0, 255] -> [0, 1] rescaling in AutoencoderTiny.forward

  * Update AutoencoderTinyIntegrationTests to protect against scaling issues.
    The new test constructs a simple image, round-trips it through AutoencoderTiny,
    and confirms the decoded result is approximately equal to the source image.
    This test checks behavior with and without tiling enabled.
    This test will fail if new AutoencoderTiny scaling issues are introduced.

  * Context: Raw TAESD weights expect images in [0, 1], but diffusers'
    convention represents images with zero-centered values in [-1, 1],
    so AutoencoderTiny needs to scale / unscale images at the start of
    encoding and at the end of decoding in order to work with diffusers.

* Re-add existing AutoencoderTiny test, update golden values

* Add comments to AutoencoderTiny.forward
2023-08-23 08:38:37 +05:30
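The scaling convention described above boils down to two tiny conversions: diffusers represents images zero-centered in [-1, 1], while the raw TAESD weights expect [0, 1], so the encoder scales on the way in and the decoder unscales on the way out. A minimal sketch of just the rescaling math, not the actual EncoderTiny/DecoderTiny modules:

```python
import torch

def diffusers_to_taesd(x: torch.Tensor) -> torch.Tensor:
    # [-1, 1] (diffusers convention) -> [0, 1] (what the raw TAESD weights expect)
    return x.mul(0.5).add(0.5).clamp(0.0, 1.0)

def taesd_to_diffusers(x: torch.Tensor) -> torch.Tensor:
    # [0, 1] -> [-1, 1], applied at the end of decoding
    return x.mul(2.0).sub(1.0)

img = torch.rand(1, 3, 64, 64) * 2.0 - 1.0  # fake image in [-1, 1]
assert torch.allclose(taesd_to_diffusers(diffusers_to_taesd(img)), img, atol=1e-6)
```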
Patrick von Platen
80871ac597 fix bad error message when transformers is missing (#4714) 2023-08-22 21:25:01 +02:00
Patrick von Platen
6abc66ef28 Fix all docs (#4721)
* [Docs] Fix all

* fix
2023-08-22 21:00:21 +02:00
Patrick von Platen
38efac9f61 Revert "Move controlnet load local tests to nightly (#4543)" (#4713)
This reverts commit 7b07f9812a.
2023-08-22 19:55:15 +02:00
Patrick von Platen
4f6399bedd rename test file to run, so that examples tests do not fail (#4715)
* rename test file to run, so that examples tests do not fail

* [Tests] Rename community tests
2023-08-22 19:54:46 +02:00
Patrick von Platen
6e1af3a777 [Docs] Fix docs controlnet missing /Tip (#4717) 2023-08-22 18:40:26 +02:00
zideliu
f22aad6e3a Add reference_attn & reference_adain support for sdxl (#4502)
* ADD SDXL reference & reference adain

* Update README.md

* Update README.md

* format stable_diffusion_xl_reference.py

* format file

* Format file

* format file

* fix format

* fix format with ruff

* fix format

* Update examples/community/README.md

Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>

* Update examples/community/README.md

Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>

* Update README.md

* Update README.md & fix typo

* Update README.md

* fix format

---------

Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2023-08-22 20:22:01 +05:30
realliujiaxu
ecded50ad5 add convert diffuser pipeline of XL to original stable diffusion (#4596)
convert diffuser pipeline of XL to original stable diffusion

Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2023-08-22 19:11:06 +05:30
Alex McKinney
e34d9aa681 Replaces DIFFUSERS_TEST_DEVICE backend list with trying device (#4673)
This is a better method than comparing against a list of supported backends as it allows for supporting any number of backends provided they are installed on the user's system.
This should have no effect on the behaviour of tests in Huggingface's CI workers.
See transformers#25506 where this approach has already been added.
2023-08-22 11:48:12 +05:30
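The idea described above — probe the requested device instead of checking it against a hard-coded backend list — can be sketched roughly as follows. The environment-variable handling is simplified and does not mirror the actual test helper:

```python
import os
import torch

requested = os.getenv("DIFFUSERS_TEST_DEVICE", "cuda" if torch.cuda.is_available() else "cpu")

try:
    # If allocating a tensor on the requested device works, the backend is usable.
    torch.empty(0, device=requested)
    torch_device = requested
except Exception as e:
    raise RuntimeError(f"Unsupported DIFFUSERS_TEST_DEVICE: {requested!r}") from e
```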
Sayak Paul
8d30d25794 [LoRA] default to None when fc alphas are not available. (#4706)
default to None when fc alphas are not available.
2023-08-22 08:47:08 +05:30
Sayak Paul
1e0395e791 [LoRA] ensure different LoRA ranks for text encoders can be properly handled (#4669)
* debugging starts

* debugging

* debugging ends, but does it?

* more robustness.
2023-08-22 08:21:13 +05:30
Sayak Paul
9141c1f9d5 [Core] enable lora for sdxl controlnets too and add slow tests. (#4666)
* enable lora for sdxl controlnets too.

* add: tests

* fix: assertion values.
2023-08-22 07:13:23 +05:30
dg845
f75b8aa9dd [docs] Add note in UniDiffusers Doc about PyTorch 1.X numerical stability issue (#4703)
* Add note regarding UniDiffuser pipeline numerical stability issues on PyTorch 1.X

* Use the doc-builder warning tag.
2023-08-22 07:12:06 +05:30
Sanchit Gandhi
7a24977ce3 Add AudioLDM 2 (#4549)
* from audioldm

* unet down + mid

* vae, clap, flan-t5

* start sequence audio mae

* iterate on audioldm encoder

* finish encoder

* finish weight conversion

* text pre-processing

* gpt2 pre-processing

* fix projection model

* working

* unet equivalence

* finish in base

* add unet cond

* finish unet

* finish custom unet

* start clean-up

* revert base unet changes

* refactor pre-processing

* tests: from audioldm

* fix some tests

* more fixes

* iterate on tests

* make fix copies

* harden fast tests

* slow integration tests

* finish tests

* update checkpoint

* update copyright

* docs

* remove outdated method

* add docstring

* make style

* remove decode latents

* enable cpu offload

* (text_encoder_1, tokenizer_1) -> (text_encoder, tokenizer)

* more clean up

* more refactor

* build pr docs

* Update docs/source/en/api/pipelines/audioldm2.md

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* small clean

* tidy conversion

* update for large checkpoint

* generate -> generate_language_model

* full clap model

* shrink clap-audio in tests

* fix large integration test

* fix fast tests

* use generation config

* make style

* update docs

* finish docs

* finish doc

* update tests

* fix last test

* syntax

* finalise tests

* refactor projection model in prep for TTS

* fix fast tests

* style

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-08-21 12:34:21 +01:00
zuojianghua
74d902eb59 add config_file to from_single_file (#4614)
* Update loaders.py

add config_file to from_single_file, for use when download_from_original_stable_diffusion_ckpt is called

* Update loaders.py

add config_file to from_single_file, for use when download_from_original_stable_diffusion_ckpt is called

* change config_file to original_config_file

* make style && make quality

---------

Co-authored-by: jianghua.zuo <jianghua.zuo@weimob.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2023-08-18 19:33:12 +05:30
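Usage sketch for the option added here, with the kwarg name taken from the final commit ("change config_file to original_config_file"); the checkpoint and YAML paths below are placeholders:

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "./models/my_finetune.safetensors",                   # placeholder path
    original_config_file="./configs/v1-inference.yaml",   # placeholder path
)
```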
Andrew Zhu
d7c4ae619d Add SDXL long weighted prompt pipeline (replace pr:4629) (#4661)
* Add SDXL long weighted prompt pipeline

* Add SDXL long weighted prompt pipeline usage sample in the readme document

* Add SDXL long weighted prompt pipeline usage sample in the readme document, add result image
2023-08-18 11:30:10 +05:30
Isotr0py
67ea2b7afa Support tiled encode/decode for AutoencoderTiny (#4627)
* Impl tae slicing and tiling

* add tae tiling test

* add parameterized test

* formatted code

* fix failed test

* style docs
2023-08-18 09:12:55 +05:30
Sayak Paul
a10107f92b fix: lora sdxl tests (#4652) 2023-08-17 15:59:50 +05:30
Sayak Paul
d0c30cfd37 make post-release (#4650) 2023-08-17 14:16:25 +05:30
Jacqui Wei
7c3e7fedcd Fix use_onnx parameter usage in from_pretrained func and update test_download_no_onnx_by_default test (#4508)
* add missing use_onnx in from_pretrained func

* fix test_download_no_onnx_by_default test func

* address comments

* split test cases
2023-08-17 11:49:32 +05:30
Patrick von Platen
029fb41695 [Safetensors] Make safetensors the default way of saving weights (#4235)
* make safetensors default

* set default save method as safetensors

* update tests

* update to support saving safetensors

* update test to account for safetensors default

* update example tests to use safetensors

* update example to support safetensors

* update unet tests for safetensors

* fix failing loader tests

* fix qc issues

* fix pipeline tests

* fix example test

---------

Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2023-08-17 10:54:28 +05:30
Batuhan Taskaya
852dc76d6d Support higher dimension LoRAs (#4625)
* Support higher dimension LoRAs

* add: tests

* fix: assertion values.

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-08-17 10:07:07 +05:30
Scott Lessans
064f150813 Fix UnboundLocalError during LoRA loading (#4523)
* fixed

* add: tests

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-08-17 09:33:35 +05:30
Sayak Paul
5333f4c0ec make things clear in the controlnet sdxl doc. (#4644) 2023-08-17 09:04:28 +05:30
Dhruv Nair
3d08d8dc4e fix loading custom text encoder when using from_single_file (#4571)
fix loading custom text encoder when using from_single_file
2023-08-17 08:41:09 +05:30
Steven Liu
bdc4c3265f [docs] MultiControlNet (#4635)
multicontrolnet docs
2023-08-17 08:14:20 +05:30
Steven Liu
4ff7264d9b [docs] PushToHubMixin (#4622)
* push to hub docs

* fix typo

* feedback

* make style
2023-08-16 13:20:59 -06:00
Sayak Paul
5049599143 [Core] feat: MultiControlNet support for SDXL ControlNet pipeline (#4597)
* core: add multicontrolnet support to sdxl controlnet

* modify checks.

* fix: original_size determination

* add: tests for multi controlnet sdxl.

* remove unnecessary prints.
2023-08-16 20:30:39 +05:30
Suraj Patil
7b93c2a882 [research_projects] SDXL controlnet script (#4633)
add controlnet script
2023-08-16 18:27:08 +05:30
Dirk Morris
a7de96505b Fix unipc use_karras_sigmas exception - fixes huggingface/diffusers#4580 (#4581)
* Fix unipc karras sigmas exception - fixes huggingface/diffusers#4580

* Add unipc scheduler tests for karras sigmas
2023-08-16 10:01:53 +05:30
Sayak Paul
351aab60e9 Update text2image.md to fix the links (#4626) 2023-08-16 09:53:10 +05:30
nikhil-masterful
da5ab51d54 Add GLIGEN implementation (#4441)
* Add GLIGEN implementation

* GLIGEN: Fix code quality check failures

* GLIGEN: Fix Import block un-sorted or un-formatted failures

* GLIGEN: Fix check_repository_consistency failures

* GLIGEN: Add 'PositionNet' to versatile_diffusion/modeling_text_unet.py

* GLIGEN: check_repository_consistency: fix 'copy does not match' error

* GLIGEN: Fix review comments (1)

* GLIGEN: Fix E721 Do not compare types, use `isinstance()` failures

* GLIGEN : Ensure _encode_prompt() copy matches to StableDiffusionPipeline

* GLIGEN: Fix ruff E721 failure in unidiffuser/test_unidiffuser.py

* GLIGEN: doc_builder: restyle pipeline_stable_diffusion_gligen.py

* GIGLEN: reset files unrelated to gligen

* GLIGEN: Fix documentation comments (1)

* GLIGEN: Fix review comments (2)

* GLIGEN: Added FastTest

* GLIGEN: Fix review comments (3)
2023-08-16 09:34:17 +05:30
Sayak Paul
5175d3d7a5 add: train to text image with sdxl script. (#4505)
* add: train to text image with sdxl script.

Co-authored-by: CaptnSeraph <s3raph1m@gmail.com>

* fix: partial func.

* fix: default value of output_dir.

* make style

* set num inference steps to 25.

* remove mentions of LoRA.

* up min version

* add: ema cli arg

* run device placement while running step.

* precompute vae encodings too.

* fix

* debug

* should work now.

* debug

* debug

* goes alright?

* style

* debugging

* debugging

* debugging

* debugging

* fix

* reinit scheduler if prediction_type was passed.

* always cast vae in float32

* better handling of snr.

Co-authored-by: bghira <bghira@users.github.com>

* the vae should be also passed

* add: docs.

* add: sdlx t2i tests

* save the pipeline

* autocast.

* fix: save_model_card

* fix: save_model_card.

---------

Co-authored-by: CaptnSeraph <s3raph1m@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: bghira <bghira@users.github.com>
2023-08-16 09:02:49 +05:30
Sayak Paul
a7508a76f0 add: pushtohubmixin to pipelines and schedulers docs overview. (#4607)
* add: pushtohubmixin to pipelines and schedulers docs overview.

* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2023-08-15 22:23:17 +05:30
Sayak Paul
aaef41b5fe [Docs] fix links in the controlling generation doc. (#4612)
* fix links in the controlling generation doc.

* more fixes.
2023-08-15 20:27:13 +05:30
Wang Qiang
078df46bc9 An invalid clerical error in sdxl finetune (#4608) 2023-08-15 10:41:51 +05:30
Sayak Paul
15782fd506 [Pipeline utils] feat: implement push_to_hub for standalone models, schedulers as well as pipelines (#4128)
* feat: implement push_to_hub for standalone models.

* address PR feedback.

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* remove max_shard_size.

* add: support for scheduler push_to_hub

* enable push_to_hub support for flax schedulers.

* enable push_to_hub for pipelines.

* Apply suggestions from code review

Co-authored-by: Lucain <lucainp@gmail.com>

* reflect pr feedback.

* address another round of feedback.

* better handling of kwargs.

* add: tests

* Apply suggestions from code review

Co-authored-by: Lucain <lucainp@gmail.com>

* setting hub staging to False for now.

* incorporate staging test as a separate job.

Co-authored-by: ydshieh <2521628+ydshieh@users.noreply.github.com>

* fix: tokenizer loading.

* fix: json dumping.

* move is_staging_test to a better location.

* better treatment to tokens.

* define repo_id to better handle concurrency

* style

* explicitly set token

* Empty-Commit

* move USER, TOKEN to test

* collate org_repo_id

* delete repo

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Lucain <lucainp@gmail.com>
Co-authored-by: ydshieh <2521628+ydshieh@users.noreply.github.com>
2023-08-15 07:39:22 +05:30
Sayak Paul
d93ca26893 [Examples] Update InstructPix2Pix README_sdxl.md to fix mentions (#4574)
* Update README_sdxl.md to fix mentions

* add --push_to_hub

* add --push_to_hub

* fix: mention
2023-08-14 17:48:13 +05:30
Claire Froelich
32963c24c5 Fix git-lfs command typo in docs (#4586)
fix typo in git-lfs command

added missing hyphen. "git lfs" is not a command
2023-08-14 17:21:45 +05:30
AisingioroHao
1b739e7344 Fixed invalid pipeline_class_name parameter. (#4590)
* Fixed invalid pipeline_class_name parameter.

* Fix the format
2023-08-14 17:21:17 +05:30
Sayak Paul
d67eba0f31 [Utility] adds an image grid utility (#4576)
* add: utility for image grid.

* add: return type.

* change necessary places.

* add to utility page.
2023-08-12 10:34:51 +05:30
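A minimal PIL sketch of the kind of image-grid helper this entry adds; the actual utility in diffusers may differ in name and signature, and the usage comment is hypothetical:

```python
from PIL import Image

def image_grid(images, rows: int, cols: int) -> Image.Image:
    # Assumes all images share the same size; pastes them row-major into one canvas.
    w, h = images[0].size
    grid = Image.new("RGB", (cols * w, rows * h))
    for i, img in enumerate(images):
        grid.paste(img, ((i % cols) * w, (i // cols) * h))
    return grid

# e.g. grid = image_grid(pipe(prompt, num_images_per_prompt=4).images, rows=2, cols=2)
```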
Steven Liu
714bfed859 [docs] Fix ControlNet SDXL docstring (#4582)
fix
2023-08-11 10:43:40 -07:00
Sayak Paul
d5983a6779 [Examples] fix: network_alpha -> network_alphas (#4572)
network_alpha
2023-08-11 14:18:49 +05:30
Mystfit
796c01534d Fixing repo_id regex validation error on windows platforms (#4358)
* Fixing repo_id regex validation error on windows platforms

* Validating correct URL with prefix is provided

If we are loading a URL then we don't need to use os.path.join and array slicing to split out a repo_id and file path from an absolute filepath. 

Checking if the URL prefix is valid first before doing any URL splitting; otherwise we raise a ValueError since neither a valid filepath nor a valid URL was provided.

* Style fixes

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-08-11 11:11:43 +05:30
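A simplified sketch of the validation flow the description above outlines — check for a known Hub URL prefix before doing any path splitting, and raise a ValueError when the input is neither a URL nor an existing file. The prefixes and helper name are illustrative, not the actual loader code:

```python
import os

HUB_PREFIXES = ("https://huggingface.co/", "https://hf.co/")

def classify_model_link_or_path(link_or_path: str) -> str:
    if link_or_path.startswith(HUB_PREFIXES):
        return "hub_url"     # safe to split repo_id / filename out of the URL
    if os.path.isfile(link_or_path):
        return "local_file"  # use directly, no repo_id splitting needed
    raise ValueError(f"{link_or_path!r} is neither a valid file path nor a supported URL")
```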
Abhipsha Das
c8d86e9f0a Remove code snippets containing is_safetensors_available() (#4521)
* [WIP] Remove code snippets containing `is_safetensors_available()`

* Modifying `import_utils.py`

* update pipeline tests for safetensor default

* fix test related to cached requests

* address import nits

---------

Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2023-08-11 11:05:22 +05:30
dotieuthien
b28cd3fba0 Convert Stable Diffusion ControlNet to TensorRT (#4465)
* convert tensorrt controlnet

* Fix code quality

* Fix code quality

* Fix code quality

* Fix code quality

* Fix code quality

* Fix code quality

* Fix number controlnet condition

* Add convert SD XL to onnx

* Add convert SD XL to tensorrt

* Add convert SD XL to tensorrt

* Add examples in comments

* Add examples in comments

* Add test onnx controlnet

* Add tensorrt test

* Remove copied

* Move file test to examples/community

* Remove script

* Remove script

* Remove text

---------

Co-authored-by: dotieuthien <thien.do@mservice.com.vn>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-08-11 08:12:26 +05:30
Steven Liu
cd7071e750 [docs] Add safetensors flag (#4245)
* add safetensors flag

* apply review
2023-08-10 12:37:23 -07:00
Steven Liu
e31f38b5d6 [docs] Remove attention slicing (#4518)
* remove attention slicing

* apply feedback
2023-08-10 11:00:03 -07:00
Steven Liu
3bd5e073cb [docs] Expand prompt weighting (#4516)
* add more weighting/blend/conjunction

* finish blend/conjunction

* add textual inversion example

* add dreambooth
2023-08-10 10:56:53 -07:00
YiYi Xu
3df52ba8dc [Doc] update sdxl-controlnet repo name (#4564)
* rename

* style

---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2023-08-10 22:02:32 +05:30
Sayak Paul
c697c5ab57 improve controlnet sdxl docs now that we have a good checkpoint. (#4556) 2023-08-10 08:21:36 +05:30
VV-A-VV
3fd45eb10f fix some typo error (#4546)
* fix some typo error

* Undo changes to capitalization
2023-08-10 06:49:25 +05:30
Patrick von Platen
5cbcbe3c63 Revert "introduce minimalistic reimplementation of SDXL on the SDXL doc" (#4548)
Revert "introduce minimalistic reimplementation of SDXL on the SDXL doc (#4532)"

This reverts commit e7e3749498.
2023-08-10 06:49:06 +05:30
Dhruv Nair
7b07f9812a Move controlnet load local tests to nightly (#4543)
move controlnet load local tests to nightly
2023-08-09 23:00:42 +05:30
Steven Liu
16ad13b61d [docs] Clean scheduler api (#4204)
* clean scheduler mixin

* up to dpmsolvermultistep

* finish cleaning

* first draft

* fix overview table

* apply feedback

* update reference code
2023-08-09 09:00:35 -07:00
Dhruv Nair
da0e2fce38 pin ruff version for quality checks (#4539)
* pin ruff version for quality checks

* update dependency versions table

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-08-09 16:46:45 +05:30
Dhruv Nair
a67ff32301 Move slow tests to nightly (#4526)
* move slow pix2pixzero tests to nightly

* move slow panorama tests to nightly

* move txt2video full test to nightly

* clean up

* remove nightly test from text to video pipeline
2023-08-09 12:38:15 +02:00
jere357
3c1b4933bd Changed code that converts tensors to PIL images in the write_your_own_pipeline notebook (#4489)
changed code that converts tensors to PIL images
2023-08-09 15:00:51 +05:30
Sayak Paul
e731ae0ec8 Update README_sdxl.md to include the free-tier Colab Notebook (#4540)
Update README_sdxl.md
2023-08-09 14:32:14 +05:30
Rastislav Švarba
6c5b5b260e Fix push_to_hub in train_text_to_image_lora_sdxl.py example (#4535)
fix: push_to_hub in train text2image lora sdxl
2023-08-09 11:48:24 +05:30
Simo Ryu
e7e3749498 introduce minimalistic reimplementation of SDXL on the SDXL doc (#4532)
minsdxl
2023-08-09 07:33:07 +05:30
Wooyeol Baek
c7c0b57541 Copy lora functions to XLPipelines (#4512)
* add load_lora_weights and save_lora_weights to StableDiffusionXLImg2ImgPipeline

* add load_lora_weights and save_lora_weights to StableDiffusionXLInpaintPipeline

* apply black format

* apply black format

* add copy statement

* fix statements

* fix statements

* fix statements

* run `make fix-copies`
2023-08-08 18:53:22 +05:30
Dhruv Nair
c91272d631 fix indexing issue in sd reference pipeline (#4531) 2023-08-08 15:14:19 +02:00
George He
f0725c5845 Fix misc typos (#4479)
Fix typos
2023-08-07 17:21:19 -07:00
YiYi Xu
aef11cbf66 add pipeline_class_name argument to Stable Diffusion conversion script (#4461)
* add pipeline class

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* style

---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-08-07 06:44:31 -10:00
Dhruv Nair
71c8224159 Moving certain pipelines slow tests to nightly (#4469)
* move audioldm tests to nightly

* move kandinsky im2img ddpm test to nightly

* move flax dpm test to nightly

* move diffedit dpm test to nightly

* move fp16 slow tests to nightly
2023-08-07 17:28:56 +02:00
Patrick von Platen
4367b8a300 move pipeline only when running validation (#4515) 2023-08-07 17:20:18 +02:00
ethansmith2000
f4f854138d grad checkpointing (#4474)
* grad checkpointing

* fix make fix-copies

* fix

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-08-07 15:26:54 +02:00
Patrick von Platen
e1b5b8ba13 Make sure fp16-fix is used as default (#4510)
* Make sure fp16-fix is used as default

* fix vae

* finish

* fix
2023-08-07 15:16:37 +02:00
Patrick von Platen
dff5ff35a9 [SDXL LoRA] fix batch size lora (#4509)
fix batch size lora
2023-08-07 13:27:13 +02:00
Sayak Paul
b2456717e6 Update lora.md to clarify SDXL support (#4503)
* Update lora.md

* Update lora.md
2023-08-07 11:06:30 +05:30
Vladislav Artemyev
2e69cf16fe Log global_step instead of epoch to tensorboard (#4493)
Co-authored-by: mrlzla <noname@noname.com>
2023-08-07 07:49:39 +05:30
takuoko
9c29bc2df8 [Examples] Support train_text_to_image_lora_sdxl.py (#4365)
* add train_text_to_image_lora_sdxl.py

* add train_text_to_image_lora_sdxl.py

* add test and minor fix

* Update examples/text_to_image/README_sdxl.md

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* fix unwrap_model rule

* add invisible-watermark in requirements

* del invisible-watermark

* Update examples/text_to_image/README_sdxl.md

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update examples/text_to_image/README_sdxl.md

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update examples/text_to_image/train_text_to_image_lora_sdxl.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* del comment & update readme

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-08-06 13:47:20 +05:30
AisingioroHao
70d098540d Add a data_dir parameter to the load_dataset method. (#4482)
Co-authored-by: AisingioroHao0 <1286098622@qq.com>
2023-08-06 08:45:48 +05:30
Patrick von Platen
ea1fcc28a4 [SDXL] Allow SDXL LoRA to be run with less than 16GB of VRAM (#4470)
* correct

* correct blocks

* finish

* finish

* finish

* Apply suggestions from code review

* fix

* up

* up

* up

* Update examples/dreambooth/README_sdxl.md

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Apply suggestions from code review

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-08-04 20:06:38 +02:00
Patrick von Platen
66de221409 Update README_sdxl.md (#4472) 2023-08-04 16:23:35 +02:00
Sayak Paul
06f73bd6d1 [Tests] Adds integration tests for SDXL LoRAs (#4462)
* add: integration tests for SDXL LoRAs.

* change pipeline class.

* fix assertion values.

* print values again.

* let's see.

* let's see.

* let's see.

* finish
2023-08-04 16:25:53 +05:30
asfiyab-nvidia
c14c141b86 TensorRT Inpaint pipeline: minor fixes (#4457)
Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
2023-08-04 12:28:28 +02:00
manosplitsis
79ef9e528c Fixed multi-token textual inversion training (#4452)
* added placeholder token concatenation during training

* Update examples/textual_inversion/textual_inversion.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-08-04 12:21:31 +02:00
Dhruv Nair
801a5e2199 Cleanup Pass on flaky slow tests for Stable Diffusion (#4455)
* lower num inference steps and precision checkk

* fix flaky inpaint tests

* remove unsued imports

* set unet default attn processor
2023-08-04 10:24:56 +02:00
YiYi Xu
1edd0debaa fix-format (#4458)
make style

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2023-08-03 20:34:37 -07:00
YiYi Xu
29ece0db79 a few fix for kandinsky combined pipeline (#4352)
* add xformer

* enable_sequential_cpu_offload

* style

* Update src/diffusers/pipelines/kandinsky/pipeline_kandinsky_combined.py

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2023-08-03 15:10:41 -10:00
Patrick von Platen
1a8843f93e add sdxl to prompt weighting (#4439)
* add sdxl to prompt weighting

* Update docs/source/en/using-diffusers/weighted_prompts.md

* Update docs/source/en/using-diffusers/weighted_prompts.md

* add sdxl to prompt weighting

* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Apply suggestions from code review

* Update docs/source/en/using-diffusers/weighted_prompts.md

* Apply suggestions from code review

* correct

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2023-08-03 21:41:48 +02:00
JinK
e391b789ac Support different strength for Stable Diffusion TensorRT Inpainting pipeline (#4216)
* Support different strength

* run make style
2023-08-03 21:32:44 +02:00
VV-A-VV
d0b8de1262 Delete the duplicate code for the contolnet img 2 img (#4411)
delete the duplicated code from the controlnet-img-2-img pipelines
2023-08-03 20:32:03 +02:00
Yuyang Zhao
b9058754c5 Fix bug caused by typo (#4357)
Fix the typo that causes `omegaconf.errors.ConfigAttributeError: Missing key parms`
2023-08-03 20:27:42 +02:00
Alan Ji
777becda6b fix typo to ensure make test-examples work correctly (#4329)
fix typo to ensure `make test-examples` work correctly
2023-08-03 20:17:48 +02:00
cmdr2
380bfd82c1 Allow controlnets to be loaded (from ckpt) in a parallel thread with a SD model (ckpt), and speed it up slightly (#4298)
Faster controlnet model instantiation, and allow controlnets to be loaded (from ckpt) in a parallel thread with an SD model (ckpt) without tensor errors (race condition)
2023-08-03 20:11:13 +02:00
Steven Liu
5989a85edb [docs] Distilled SD (#4442)
* first draft

* add blog link
2023-08-03 11:03:42 -07:00
Levi McCallum
4188f3063a Add rank argument to train_dreambooth_lora_sdxl.py (#4343)
* Add rank argument to train_dreambooth_lora_sdxl.py

* Update train_dreambooth_lora_sdxl.py
2023-08-03 23:27:30 +05:30
George He
0b4430e840 Fix typerror in pipeline handling for MultiControlNets which only contain a single ControlNet (#4454)
* Handle single controlnet in multicontrolnet

* Fix formatting
2023-08-03 17:37:07 +02:00
Patrick von Platen
a74c995e7d make style 2023-08-03 14:45:06 +00:00
Neil Wang
85aa673bec auto type conversion (#4270)
* type conversion

Default value of `control_guidance_start` and `control_guidance_end` in `StableDiffusionControlNetPipeline.check_inputs` causes `TypeError: object of type 'float' has no len()`

Proposed fix: 
Convert `control_guidance_start` and `control_guidance_end` to list if float

* Update src/diffusers/pipelines/controlnet/pipeline_controlnet.py

* Update src/diffusers/pipelines/controlnet/pipeline_controlnet.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/controlnet/pipeline_controlnet.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-08-03 16:44:48 +02:00
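The fix described above amounts to normalizing scalar `control_guidance_start` / `control_guidance_end` values into lists before `len()` is called. An illustrative sketch of that normalization (not the exact library code):

```python
def normalize_guidance_range(control_guidance_start, control_guidance_end, num_controlnets=1):
    """Turn scalar start/end values into per-ControlNet lists so len() checks don't fail."""
    if not isinstance(control_guidance_start, (list, tuple)):
        control_guidance_start = [control_guidance_start] * num_controlnets
    if not isinstance(control_guidance_end, (list, tuple)):
        control_guidance_end = [control_guidance_end] * num_controlnets
    return list(control_guidance_start), list(control_guidance_end)


# the default scalars (0.0, 1.0) become [0.0], [1.0] instead of raising TypeError
starts, ends = normalize_guidance_range(0.0, 1.0)
```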
Dhruv Nair
1d2587bb34 move tests to nightly (#4451)
* move tests to nightly

* clean up code quality issues

* more clean up
2023-08-03 15:25:28 +02:00
Patrick von Platen
372b58108e fix make style 2023-08-03 10:17:00 +00:00
w4ffl35
45171174b8 Prevent online access when desired when using download_from_original_stable_diffusion_ckpt (#4271)
Prevent online access when desired

- Bypass requests with config files option added to download_from_original_stable_diffusion_ckpt
- Adds local_files_only flags to all from_pretrained requests
2023-08-03 12:16:41 +02:00
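A minimal sketch of fully offline loading with `local_files_only`, assuming the model files are already in the local Hugging Face cache (the repo id is only an example):

```python
from diffusers import StableDiffusionPipeline

# no network requests are made; loading fails if the files are not cached locally
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", local_files_only=True
)
```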
cmdr2
4c4fe042a7 Accept pooled_prompt_embeds in the SDXL Controlnet pipeline. Fixes an error if prompt_embeds are passed. (#4309)
* Accept pooled_prompt_embeds in the SDXL Controlnet pipeline. Fixes an error if prompt_embeds are passed.

* Add a test for pooled prompt embeds
2023-08-03 13:05:19 +05:30
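A sketch of the usage this fix enables: pre-computing SDXL prompt embeddings and passing both the regular and the pooled ones to the ControlNet pipeline. The checkpoint ids, the placeholder conditioning image, and the exact `encode_prompt` keyword names are assumptions here:

```python
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# pre-compute the embeddings once, then pass the regular and the pooled ones together
(
    prompt_embeds,
    negative_prompt_embeds,
    pooled_prompt_embeds,
    negative_pooled_prompt_embeds,
) = pipe.encode_prompt(prompt="a robot bartender, studio lighting", device="cuda")

canny_image = Image.new("RGB", (1024, 1024))  # placeholder; use a real edge map in practice
image = pipe(
    image=canny_image,
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    pooled_prompt_embeds=pooled_prompt_embeds,
    negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
).images[0]
```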
Will Berman
47bf8e566c can call encode_prompt with out setting a text encoder instance variable (#4396)
* can call encode_prompt with out setting a text encoder instance variable

* fix
2023-08-02 21:25:30 -07:00
Sayak Paul
18fc40c169 [Feat] add tiny Autoencoder for (almost) instant decoding (#4384)
* add: model implementation of tiny autoencoder.

* add: inits.

* push the latest devs.

* add: conversion script and finish.

* add: scaling factor args.

* debugging

* fix denormalization.

* fix: positional argument.

* handle use_torch_2_0_or_xformers.

* handle post_quant_conv

* handle dtype

* fix: sdxl image processor for tiny ae.

* fix: sdxl image processor for tiny ae.

* unify upcasting logic.

* copied from madness.

* remove trailing whitespace.

* set is_tiny_vae = False

* address PR comments.

* change to AutoencoderTiny

* make act_fn an str throughout

* fix: apply_forward_hook decorator call

* get rid of the special is_tiny_vae flag.

* directly scale the output.

* fix dummies?

* fix: act_fn.

* get rid of the Clamp() layer.

* bring back copied from.

* movement of the blocks to appropriate modules.

* add: docstrings to AutoencoderTiny

* add: documentation.

* changes to the conversion script.

* add doc entry.

* settle tests.

* style

* add one slow test.

* fix

* fix 2

* fix 2

* fix: 4

* fix: 5

* finish integration tests

* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* style

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2023-08-02 23:58:05 +05:30
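A minimal sketch of swapping the full VAE for the tiny autoencoder; `madebyollin/taesd` is the commonly used distilled checkpoint and is an assumption here:

```python
import torch
from diffusers import AutoencoderTiny, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
# replace the full VAE with the tiny autoencoder for near-instant decoding
pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesd", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("slice of delicious New York-style berry cheesecake").images[0]
```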
Xin Kong
615c04db15 [Pipelines] Add community pipeline for Zero123 (#4295)
* add zero123 pipeline to community

* add community doc

* reformat

* update zero123 pipeline, including cc_projection within diffusers; add convert ckpt scripts; support diffusers weights
2023-08-02 19:36:49 +02:00
Steven Liu
ae82a3eb34 [docs] AutoPipeline tutorial (#4273)
* first draft

* tidy api

* apply feedback

* mdx to md

* apply feedback

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-08-02 10:32:02 -07:00
Sayak Paul
816ca0048f [LoRA] Fix SDXL text encoder LoRAs (#4371)
* temporarily disable text encoder loras.

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging.

* modify doc.

* rename tests.

* print slices.

* fix: assertions

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-08-02 17:00:56 +05:30
Sayak Paul
fef8d2f726 remove mentions of textual inversion from sdxl. (#4404) 2023-08-02 15:29:46 +05:30
Ella Charlaix
579b4b2020 Update documentation (#4422)
* update documentation

* minor
2023-08-02 11:49:22 +02:00
Steven Liu
6c5bd2a38d [docs] Fix SDXL docstring (#4397)
fix guidance scale value
2023-08-01 16:40:15 -07:00
Will Berman
160474ac61 train dreambooth fix pre encode class prompt (#4395) 2023-08-01 12:00:05 -07:00
YiYi Xu
c10861ee1b fix test_float16_inference (#4412)
fix

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2023-08-01 07:49:50 -10:00
YiYi Xu
94b332c476 support from_single_file for SDXL inpainting (#4408)
fix

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2023-08-01 07:47:22 -10:00
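With that fix in place, an original-format SDXL checkpoint can be loaded straight into the inpainting pipeline. A minimal sketch (the local checkpoint path is a hypothetical placeholder):

```python
import torch
from diffusers import StableDiffusionXLInpaintPipeline

# hypothetical local path to an original (non-diffusers) SDXL .safetensors checkpoint
pipe = StableDiffusionXLInpaintPipeline.from_single_file(
    "./sd_xl_base_1.0.safetensors", torch_dtype=torch.float16
)
```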
Dhruv Nair
6f4355f89f Cleanup pass for flaky Slow Tests for Stable diffusion (#4415)
* update expected slice so img2img compile tests pass

* use default attn processor

* use default attn processor and update expected slice value to pass test

* use default attn processor

* set default attn processor and update expected slice

* set default attn processor and change precision for check

* set unet to use default attn processor
2023-08-01 18:21:14 +02:00
estelleafl
05a1cb902c [ldm3d] documentation fixing typos (#4284)
* fixed typo

* updated doc to be consistent in naming

* make style/quality

* preprocessing for 4 channels and not 6

* make style

* test for 4c

* make style/quality

* fixed test on cpu

* fixed doc typo

* changed default ckpt to 4c

* Update pipeline_stable_diffusion_ldm3d.py

---------

Co-authored-by: Aflalo <estellea@isl-iam1.rr.intel.com>
Co-authored-by: Aflalo <estellea@isl-gpu33.rr.intel.com>
Co-authored-by: Aflalo <estellea@isl-gpu38.rr.intel.com>
2023-08-01 09:03:29 -07:00
Patrick von Platen
c69526a3d5 [AutoPipeline] Correct naming (#4420) 2023-08-01 14:56:27 +02:00
Nishant Rajadhyaksha
6c49d542a3 Update docs of unet_1d.py (#4394)
Update unet_1d.py

highlighting the way the modules are actually fed in the main code, as the order matters because some skip blocks attach time embeds while others do not
2023-07-31 11:04:47 -07:00
Sayak Paul
ba43ce3476 minor doc fixes. (#4380) 2023-07-31 12:15:56 +05:30
Andrey Voroshilov
ea5b0575f8 Clean up duplicate lines in encode_prompt (#4369)
* Clean up duplicate line

* Clean up duplicate lines

* Clean up duplicate line
2023-07-30 15:49:46 +05:30
Patrick von Platen
4f986fb28a [SDXL] Fix dummy imports incorrect naming (#4370)
[SDXL] Fix dummy imports
2023-07-30 12:17:38 +02:00
Harutatsu Akiyama
aae27262f4 [SDXL-IP2P] Add gif for demonstrating training processes (#4342)
* [SDXL-IP2P] Add gif for demonstrating training processes

* [SDXL-IP2P] Add gif for demonstrating training processes

* [SDXL-IP2P] Change gif to URLs

* [SDXL-IP2P] Add URLs in case gif now show

---------

Co-authored-by: Harutatsu Akiyama <kf.zy.qin@gmail.com>
2023-07-30 10:07:10 +05:30
Sayak Paul
34b5b63bb8 Update README.md to have PyPI-friendly path (#4351) 2023-07-29 08:59:18 +05:30
Will Berman
2b1786735e fix fp type in t2i adapter docs (#4350) 2023-07-28 13:01:52 -07:00
Sayak Paul
4a4cdd6b07 [Feat] Support SDXL Kohya-style LoRA (#4287)
* sdxl lora changes.

* better name replacement.

* better replacement.

* debugging

* debugging

* debugging

* debugging

* debugging

* remove print.

* print state dict keys.

* print

* distinguish better

* debuggable.

* fix: tests

* fix: arg from training script.

* access from class.

* run style

* debug

* save intermediate

* some simplifications for SDXL LoRA

* styling

* unet config is not needed in diffusers format.

* fix: dynamic SGM block mapping for SDXL kohya loras (#4322)

* Use lora compatible layers for linear proj_in/proj_out (#4323)

* improve condition for using the sgm_diffusers mapping

* informative comment.

* load compatible keys and embedding layer mapping.

* Get SDXL 1.0 example lora to load

* simplify

* specify ranks and hidden sizes.

* better handling of k rank and hidden

* debug

* debug

* debug

* debug

* debug

* fix: alpha keys

* add check for handling LoRAAttnAddedKVProcessor

* sanity comment

* modifications for text encoder SDXL

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* up

* up

* up

* up

* up

* up

* unneeded comments.

* unneeded comments.

* kwargs for the other attention processors.

* kwargs for the other attention processors.

* debugging

* debugging

* debugging

* debugging

* improve

* debugging

* debugging

* more print

* Fix alphas

* debugging

* debugging

* debugging

* debugging

* debugging

* debugging

* clean up

* clean up.

* debugging

* fix: text

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Batuhan Taskaya <batuhan@python.org>
2023-07-28 19:49:49 +02:00
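A minimal sketch of loading a Kohya-style SDXL LoRA with the generic loader; the base checkpoint id and the local LoRA filename are assumptions:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# hypothetical local file; any Kohya-format SDXL LoRA in .safetensors should load the same way
pipe.load_lora_weights("./my_sdxl_style_lora.safetensors")
image = pipe("a cyberpunk street at dusk, detailed", num_inference_steps=30).images[0]
```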
Patrick von Platen
b7b6d6138d [SDXL] Make watermarker optional under certain circumstances to improve usability of SDXL 1.0 (#4346)
* improve sdxl

* more fixes

* improve sdxl

* improve sdxl

* improve sdxl

* finish
2023-07-28 19:29:22 +02:00
kathath
faa6cbc959 Fix repeat of negative prompt (#4335)
fix repeat of negative prompt
2023-07-28 18:14:22 +02:00
Patrick von Platen
306a7bd047 [ONNX] Don't download ONNX model by default (#4338)
* [Download] Don't download ONNX weights by default

* [Download] Don't download ONNX weights by default

* [Download] Don't download ONNX weights by default

* fix more

* finish

* finish

* finish
2023-07-28 14:02:48 +02:00
Tanupriya Singh
c7250f2b8a correct doc string for default value of guidance_scale (#4339) 2023-07-28 13:54:28 +02:00
Patrick von Platen
18b018c864 [SDXL Refiner] Fix refiner forward pass for batched input (#4327)
* fix_batch_xl

* Fix other pipelines as well

* up

* up

* Update tests/pipelines/stable_diffusion_xl/test_stable_diffusion_xl_inpaint.py

* sort

* up

* Finish it all up Co-authored-by: Bagheera <bghira@users.github.com>

* Co-authored-by: Bagheera bghira@users.github.com

* Co-authored-by: Bagheera <bghira@users.github.com>

* Finish it all up Co-authored-by: Bagheera <bghira@users.github.com>
2023-07-28 12:34:18 +02:00
Sayak Paul
54fab2cd5f Update README_sdxl.md to correct the header (#4330)
Update README_sdxl.md
2023-07-28 09:22:14 +05:30
Sayak Paul
961173064d Honor the SDXL 1.0 licensing from the training scripts. (#4319)
* honor the original license.

* train_instruct_pix2pix_xl -> train_instruct_pix2pix_sdxl
2023-07-28 01:28:36 +05:30
Sayak Paul
7d0d073261 [Tests] add test for pipeline import. (#4276)
* add test for pipeline import.

* Update tests/others/test_dependencies.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* address suggestions

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-07-28 00:08:15 +05:30
Xinyang Li
01b6ec21fa fix validation option for dreambooth training example (#4317) 2023-07-27 09:58:52 -07:00
Ella Charlaix
92e5ddd295 Fix typo documentation (#4320)
fix typo documentation
2023-07-27 21:31:58 +05:30
Patrick von Platen
1926331eaf [Local loading] Correct bug with local files only (#4318)
* [Local loading] Correct bug with local files only

* file not found error

* fix

* finish
2023-07-27 16:16:46 +02:00
YiYi Xu
5fd3dca5f3 fix a bug in StableDiffusionUpscalePipeline when prompt is None (#4278)
* fix batch_size

* add test

---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2023-07-27 15:07:50 +02:00
Duong A. Nguyen
a2091b7071 Fix SDXL conversion from original to diffusers (#4280)
* fix sdxl conversion

* convention
2023-07-27 15:05:43 +02:00
Patrick von Platen
d8bc1a4e51 [Torch.compile] Fixes torch compile graph break (#4315)
* fix torch compile

* Fix all

* make style
2023-07-27 13:53:36 +02:00
YiYi Xu
80c10d8245 update Kandinsky doc (#4301)
* update doc

* fix an error in autopipe doc

---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2023-07-27 13:10:41 +02:00
Patrick von Platen
20e92586c1 0.20.0dev0 (#4299)
* 0.20.0dev0

* make style
2023-07-26 23:06:18 +02:00
Patrick von Platen
5623ea065a quick fix 2023-07-26 21:01:17 +02:00
Patrick von Platen
16049caf79 quick fix 2023-07-26 18:47:21 +00:00
Patrick von Platen
6a6dfe1cbd Rename (#4294)
* up

* Apply suggestions from code review

* Apply suggestions from code review

* up
2023-07-26 20:41:21 +02:00
Ella Charlaix
b83bdce42a add openvino and onnx runtime SD XL documentation (#4285)
* add openvino SD XL documentation

* add onnx SD XL integration

* rephrase

* update doc

* add images

* update model
2023-07-26 20:25:07 +02:00
camenduru
c6ae9b7df6 Where did this 'x' come from, Elon? (#4277)
* why mdx?

* why mdx?

* why mdx?

* no x for kandinksy either

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-07-26 18:18:14 +02:00
Patrick von Platen
b3e5cd6b4d [Kandinsky] Add combined pipelines / Fix cpu model offload / Fix inpainting (#4207)
* Add combined pipeline

* Download readme

* Upload

* up

* up

* fix final

* Add enable model cpu offload kandinsky

* finish

* finish

* Fix

* fix more

* make style

* fix kandinsky mask

* fix inpainting test

* add callbacks

* add tests

* fix tests

* Apply suggestions from code review

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* docs

* docs

* correct docs

* fix tests

* add warning

* correct docs

---------

Co-authored-by: YiYi Xu <yixu310@gmail.com>
2023-07-26 17:13:55 +02:00
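A minimal sketch of the combined Kandinsky pipeline together with model CPU offload; the `kandinsky-community/kandinsky-2-2-decoder` repo id is an assumption about the published checkpoints:

```python
import torch
from diffusers import AutoPipelineForText2Image

# the decoder repo resolves to the combined (prior + decoder) Kandinsky pipeline
pipe = AutoPipelineForText2Image.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()  # keeps only the active sub-model on the GPU

image = pipe(
    "a portrait of a cat wearing a spacesuit, highly detailed", num_inference_steps=25
).images[0]
```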
Patrick von Platen
b37dc3b3cd Fix all missing optional import statements from pipeline folders (#4272)
* fix circular import

* fix imports when watermark not specified

* fix all pipelines
2023-07-26 01:46:05 +02:00
Batuhan Taskaya
ff8f58086b Load Kohya-ss style LoRAs with auxiliary states (#4147)
* Support to load Kohya-ss style LoRA file format (without restrictions)

Co-Authored-By: Takuma Mori <takuma104@gmail.com>
Co-Authored-By: Sayak Paul <spsayakpaul@gmail.com>

* tmp: add sdxl to mlp_modules

---------

Co-authored-by: Takuma Mori <takuma104@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-07-26 00:24:19 +02:00
Sayak Paul
161449d51a [SDXL DreamBooth LoRA] multiple fixes (#4262)
* add automatic licensing.

* debugging

* debugging

* more debugging

* more debugging.

* run make fix-copies.

* change to default tracker.
2023-07-25 21:10:01 +02:00
Steven Liu
34abee0907 [docs] Fix image in SDXL docs (#4267)
fix image link
2023-07-25 09:41:11 -07:00
Harutatsu Akiyama
428dbfecd9 [SDXL and IP2P]: instruction pix2pix XL training and pipeline (#4079)
* Support instruction pix2pix sdxl

* Support instruction pix2pix sdxl

* Support instruction pix2pix sdxl

* Support instruction pix2pix sdxl

* Support instruction pix2pix sdxl

* Support instruction pix2pix sdxl

* Support instruction pix2pix sdxl

* Support instruction pix2pix sdxl

* Support instruction pix2pix sdxl

* Support instruction pix2pix sdxl

* Support instruction pix2pix sdxl

* Support instruction pix2pix sdxl

* Support instruction pix2pix sdxl

* Support instruction pix2pix sdxl

* Support instruction pix2pix sdxl

* Support instruction pix2pix sdxl

* Support instruction pix2pix sdxl

* [Community] Implementation of the IADB community pipeline (#3996)

* community pipeline: implementation of iadb

* iadb.py: reformat using black

* iadb.py: linting update

* add kandinsky to readme table (#4081)

Co-authored-by: yiyixuxu <yixu310@gmail,com>

* [From Single File] Force accelerate to be installed (#4078)

force accelerate to be installed

* Support instruction pix2pix sdxl

* Support instruction pix2pix sdxl

* Support instruction pix2pix sdxl

* Support instruction pix2pix sdxl

* Support instruction pix2pix sdxl

* Support instruction pix2pix sdxl

* Support instruction pix2pix sdxl

* Support instruction pix2pix sdxl

* Support instruction pix2pix sdxl

* Support instruction pix2pix sdxl

* Support instruction pix2pix sdxl

* Support instruction pix2pix sdxl

* Support instruction pix2pix sdxl

* Support instruction pix2pix sdxl

* Support instruction pix2pix sdxl

* Clean up IP2P SDXL code

* Clean up IP2P SDXL code

* [IP2P and SDXL] clean up code

* [IP2P and SDXL] clean up code

* [IP2P and SDXL] clean up code

* [IP2P SDXL] Address code reviews

* [IP2P SDXL] Address code reviews, add docs, tests

* [IP2P SDXL] Address code reviews, add docs, tests

* [IP2P SDXL] Address code reviews, add docs, tests

* [IP2P SDXL] Address code reviews, add docs, tests

* [IP2P SDXL] Address code reviews, add docs, tests

* [IP2P SDXL] Address code reviews, add docs, tests

* [IP2P SDXL] Address code reviews, add docs, tests

* [IP2P SDXL] Address code reviews, add docs, tests

* [IP2P SDXL] Address code reviews, add docs, tests

* [IP2P SDXL] Address code reviews, add docs, tests

* [IP2P SDXL] Address code reviews, add docs, tests

* [IP2P SDXL] Address code reviews, add docs, tests

* [IP2P SDXL] Address code reviews

* [IP2P SDXL] Address code reviews

* [IP2P SDXL] Add README_SDXL

* [IP2P SDXL] Address code reviews

* [IP2P SDXL] Address code reviews

* [IP2P SDXL] Fix the copy problems

* [IP2P SDXL] Add license

* [IP2P SDXL] Add license

* [IP2P SDXL] Add license

* [IP2P SDXL] Address code review for selecting VAE and others

* [IP2P SDXL] Update README_sdxl

* [IP2P SDXL] Update __init__

* [IP2P SDXL] Update dummy_torch_and_transformers_and_invisible_watermark_objects

* address patrick's comments and some additions to readmes.

---------

Co-authored-by: Harutatsu Akiyama <kf.zy.qin@gmail.com>
Co-authored-by: Thomas Chambon <36728882+tchambon@users.noreply.github.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-07-25 18:19:35 +05:30
Ragnar Rova
4e2a021829 Model path for sdxl wrong in dreambooth README (#4261) 2023-07-25 18:06:50 +05:30
Patrick von Platen
ebfe343149 [from_single_file] Fix circular import (#4259)
* up

* finish

* fix final
2023-07-25 14:30:39 +02:00
Sayak Paul
5ef6b8fa53 Update README_sdxl.md to change the note on default hyperparameters (#4258) 2023-07-25 16:57:48 +05:30
YiYi Xu
c11d11d63d [draft v2] AutoPipeline (#4138)
* initial

* style

* from ...pipelines -> from ..pipeline_util

* make style

* fix-copies

* fix value_guided_sampling oops

* style

* add test

* Show failing test

* update from_pipe

* fix

* add controlnet, additional test and register unused original config

* update for controlnet

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* store unused config as private attribute and pass if can

* add doc

* kandinsky inpaint pipeline does not work with decoder checkpoint

* update doc

* Apply suggestions from code review

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* style

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* fix

* Apply suggestions from code review

---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-07-25 13:20:35 +02:00
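A minimal sketch of the AutoPipeline API this PR introduces, including `from_pipe` for reusing already-loaded components across tasks (the checkpoint id is only an example):

```python
import torch
from diffusers import AutoPipelineForImage2Image, AutoPipelineForText2Image

pipe_t2i = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
image = pipe_t2i("a cat sitting on a park bench").images[0]

# reuse the same components for img2img without re-downloading or re-allocating weights
pipe_i2i = AutoPipelineForImage2Image.from_pipe(pipe_t2i)
image = pipe_i2i(
    "a cat sitting on a park bench, oil painting", image=image, strength=0.6
).images[0]
```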
Patrick von Platen
d74561da2c [SDXL] Improve docs (#4196)
* Improve docs

* Correct docs

* Add better example inpaint

* make style

* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* fix

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2023-07-25 12:48:25 +02:00
Patrick von Platen
a0422ed0c9 [From Single File] Allow vae to be loaded (#4242)
* Allow vae to be loaded

* up
2023-07-25 12:16:43 +02:00
Will Berman
3dd339379d do not pass list to accelerator.init_trackers (#4248) 2023-07-24 21:10:37 -07:00
nupurkmr9
5652c43f83 Resolve bf16 error as mentioned in this [issue](https://github.com/huggingface/diffusers/issues/4139#issuecomment-1639977304) (#4214)
* resolve bf16 error

* resolve bf16 error

* resolve bf16 error

* resolve bf16 error

* resolve bf16 error

* resolve bf16 error

* resolve bf16 error
2023-07-25 05:41:19 +05:30
Sayak Paul
365e8461ac [SDXL DreamBooth LoRA] add support for text encoder fine-tuning (#4097)
* Allow low precision sd xl

* finish

* finish

* feat: initial draft for supporting text encoder lora finetuning for SDXL DreamBooth

* fix: variable assignments.

* add: autocast block.

* add debugging

* vae dtype hell

* fix: vae dtype hell.

* fix: vae dtype hell 3.

* clean up

* lora text encoder loader.

* fix: unwrapping models.

* add: tests.

* docs.

* handle unexpected keys.

* fix vae dtype in the final inference.

* fix scope problem.

* fix: save_model_card args.

* initialize: prefix to None.

* fix: dtype issues.

* apply fixes.

* debugging.

* debugging

* debugging

* debugging

* debugging

* debugging

* add: fast tests.

* pre-tokenize.

* address: will's comments.

* fix: loader and tests.

* fix: dataloader.

* simplify dataloader.

* length.

* simplification.

* make style && make quality

* simplify state_dict munging

* fix: tests.

* fix: state_dict packing.

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-07-25 05:35:48 +05:30
Sayak Paul
fed12376c5 [ControlNet SDXL training] fixes in the training script (#4223)
* fix: #4206

* add: sdxl controlnet training smoketest.

* remove unnecessary token inits.

* add: licensing to model card.

* include SDXL licensing in the model card and make public visibility default

* debugging

* debugging

* disable local file download.

* fix: training test.

* fix: ckpt prefix.
2023-07-25 05:31:48 +05:30
Patrick von Platen
95b7de88fd [Docs] Fix from pretrained docs (#4240)
* [Docs] Fix from pretrained docs

* [Docs] Fix from pretrained docs
2023-07-24 20:24:29 +02:00
Apoorva Kulkarni
cbb1ead60b docs: Add missing import statement in textual_inversion inference example (#4227)
docs: Add missing import statement in textual_inversion inference instructions
2023-07-24 11:07:53 -07:00
Steven Liu
5470a4fce3 [docs] Other modalities (#4205)
remove coming soon, rl pipeline
2023-07-24 10:51:24 -07:00
39th president of the United States, probably
e98fabc550 Allow specifying denoising_start and denoising_end as integers representing the discrete timesteps, fixing the XL ensemble not working for many schedulers (#4115)
* Fix the XL ensemble not working for Karras scheduler sigmas and having an off-by-one bug

* Update src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl.py

* make style

---------

Co-authored-by: Jimmy <39@🇺🇸.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-07-24 19:44:35 +02:00
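A minimal sketch of the ensemble-of-expert-denoisers split that `denoising_end` / `denoising_start` control, assuming the commonly used SDXL base and refiner checkpoints:

```python
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline, StableDiffusionXLPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2,
    vae=base.vae,
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a majestic lion jumping from a big stone at night"
# the base model denoises the first 80% of the schedule and hands over latents...
latents = base(
    prompt=prompt, num_inference_steps=40, denoising_end=0.8, output_type="latent"
).images
# ...and the refiner picks up at the matching point of the schedule
image = refiner(
    prompt=prompt, num_inference_steps=40, denoising_start=0.8, image=latents
).images[0]
```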
Cris
fa356bd4da [docs] Changed path for ControlNet in docs (#4215)
docs: changed path for control net
2023-07-24 10:13:10 -07:00
Patrick von Platen
3ba36f97b8 [SD-XL] Fix sdxl controlnet inference (#4238)
* Fix controlnet xl inference

* correct some sd xl control inference
2023-07-24 18:43:35 +02:00
Patrick von Platen
b288684d25 [SDXL] Fix sd xl encode prompt (#4237)
* [SDXL] Fix sd xl encode prompt

* add tests
2023-07-24 18:37:07 +02:00
Lucain
06eda5b232 Raise initial HTTPError if pipeline is not cached locally (#4230)
* Raise initial HTTPError if pipeline is not cached locally

* make style
2023-07-24 15:35:16 +02:00
Hu Ye
8e5921cac1 fix a bug of prompt embeds in sdxl (#4099)
* fix bug in sdxl

* Update pipeline_stable_diffusion_xl_img2img.py

* Update pipeline_stable_diffusion_xl.py

* Update pipeline_stable_diffusion_xl_img2img.py

* Update pipeline_stable_diffusion_xl_inpaint.py

* Update pipeline_stable_diffusion_xl.py

* Update pipeline_stable_diffusion_xl_img2img.py

* Update pipeline_stable_diffusion_xl_inpaint.py

* Update pipeline_stable_diffusion_xl_img2img.py

* Update pipeline_controlnet_sd_xl.py

* Update pipeline_controlnet_sd_xl.py

* Update pipeline_stable_diffusion_xl.py

* Update pipeline_stable_diffusion_xl_img2img.py

* Update pipeline_stable_diffusion_xl_inpaint.py

* Update test_stable_diffusion_xl.py

* Update test_stable_diffusion_xl.py

* Update test_stable_diffusion_xl.py

add test on prompt_embeds

* add test on prompt_embeds

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-07-24 10:14:20 +02:00
YiYi Xu
8e8954bd15 fix no CFG for kandinsky pipelines (#4193)
* fix bug when no cfg

* style

* fix no cfg for shap-e and cycle

* style

* fix no cfg for sdxl

* fix copies

---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2023-07-24 10:10:14 +02:00
Jackmin801
09fab5610e [fix] network_alpha when loading unet lora from old format (#4221)
fix: missed network_alpha when loading lora from old format
2023-07-24 06:13:32 +05:30
Apoorva Kulkarni
2e53936c97 docs: Typo in dreambooth example README.md (#4203)
fix: Typo in dreambooth example README.md
2023-07-21 15:16:38 -07:00
Steven Liu
a69754bb87 [docs] Clean up pipeline apis (#3905)
* start with stable diffusion

* fix

* finish stable diffusion pipelines

* fix path to pipeline output

* fix flax paths

* fix copies

* add up to score sde ve

* finish first pass of pipelines

* fix copies

* second review

* align doc titles

* more review fixes

* final review
2023-07-21 11:01:34 -07:00
Kadir Nar
bcc570b910 📄 Renamed File for Better Understanding (#4056)
* 📄 Renamed File for Better Understanding

Renamed the 'rl' file to 'run_locomotion'. This change was made to improve the clarity and readability of the codebase. The 'rl' name was ambiguous, and 'run_locomotion' provides a clearer description of the file's purpose.

Thanks 🙌

* 📁 [Docs] Renamed Directory for Better Clarity

Renamed the 'rl' directory to 'reinforcement_learning'. This change provides a clearer understanding of the directory's purpose and its contents.

* Update examples/reinforcement_learning/README.md

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* 📝 Update README

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-07-21 09:08:27 -07:00
Sayak Paul
4dcab9227a [SDXL ControlNet Training] Follow-up fixes (#4188)
* hash computation. thanks to @lhoestq

* disable dtype casting.

* remove comments.
2023-07-21 20:55:33 +05:30
apolinário
aed30dff6b Allow passing different prompts to each text_encoder on stable_diffusion_xl pipelines (#4156)
* sdxl prompt2

* Improve checks

* doc linting

* whoops

* remove cat

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Add other pipelines and tests

* Add multi-prompting to docs

* doc and copies check

* Fix copied froms

* Apply suggestions from code review

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Bring back the original code for unrelated files

* Fix tests

* Fix img2img

* Fix all

* fix

---------

Co-authored-by: multimodalart <joaopaulo.passos+multimodal@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-07-21 14:50:22 +02:00
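A minimal sketch of passing a separate prompt to each SDXL text encoder via `prompt_2` (the checkpoint id is only an example):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# `prompt` is fed to the first text encoder, `prompt_2` to the second one
image = pipe(
    prompt="a photograph of an astronaut riding a horse",
    prompt_2="Van Gogh painting, swirling brushstrokes",
).images[0]
```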
statelesshz
e2bbaa4f54 make enable_sequential_cpu_offload more generic for third-party devices (#4191)
* make enable_sequential_cpu_offload more generic for third-party devices

* make style
2023-07-21 17:15:09 +05:30
Patrick von Platen
1e853e240e [Safetensors] make safetensors a required dep (#4177) 2023-07-21 12:57:30 +02:00
Batuhan Taskaya
ad787082e2 Fix unloading of LoRAs when xformers attention procs are in use (#4179) 2023-07-21 14:29:20 +05:30
Will Berman
7a47df22a5 remove bentoml doc in favor of blogpost (#4182) 2023-07-21 08:23:36 +05:30
Patrick von Platen
d620070bb3 [ControlNet Training] Remove safety from controlnet (#4180)
Remove safety from controlnet
2023-07-21 08:03:59 +05:30
Patrick von Platen
b7a6e34cc6 [From single file] Make sure that controlnet stays False for from_single_file (#4181)
* fix from single file

* Make sure conversion always works with safetensors
2023-07-21 02:09:11 +02:00
YiYi Xu
47b3346422 Shap-E: add support for mesh output (#4062)
* add output_type=mesh

* update img2img

* make style

* add doc

* make style

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* add docstring for output_type

* add a section in doc about hub mesh visualization/ rotation

* update conversion script so default background is white

* Apply suggestions from code review

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update src/diffusers/pipelines/shap_e/pipeline_shap_e_img2img.py

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* renderer -> shap_e_renderer

* img2img renderer -> shap_e_renderer

* fix tests

---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-07-20 18:05:13 +02:00
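A minimal sketch of requesting mesh output from Shap-E and exporting it; the `openai/shap-e` repo id and the `export_to_ply` helper are assumptions based on the feature described above:

```python
import torch
from diffusers import ShapEPipeline
from diffusers.utils import export_to_ply

pipe = ShapEPipeline.from_pretrained("openai/shap-e", torch_dtype=torch.float16).to("cuda")

# output_type="mesh" returns mesh objects instead of rendered frames
meshes = pipe(
    "a shark", guidance_scale=15.0, num_inference_steps=64, output_type="mesh"
).images
export_to_ply(meshes[0], "shark.ply")
```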
Ruslan Vorovchenko
07f1fbb18e Asymmetric vqgan (#3956)
* added AsymmetricAutoencoderKL

* fixed copies+dummy

* added script to convert original asymmetric vqgan

* added docs

* updated docs

* fixed style

* fixes, added tests

* update doc

* fixed doc

* fixed tests

* naming

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* naming

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* updated code example

* updated doc

* comments fixes

* added docstring

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* comments fixes

* added inpaint pipeline tests

* comment suggestion: delete method

* yet another fixes

---------

Co-authored-by: Ruslan Vorovchenko <r.vorovchenko@prequelapp.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-07-20 17:51:06 +02:00
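A minimal sketch of swapping the asymmetric autoencoder into an inpainting pipeline; the repo ids are assumptions about the published checkpoints:

```python
import torch
from diffusers import AsymmetricAutoencoderKL, StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
)
# swap in the asymmetric VQGAN decoder, which better preserves unmasked regions
pipe.vae = AsymmetricAutoencoderKL.from_pretrained(
    "cross-attention/asymmetric-autoencoder-kl-x-1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")
```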
Lim Swee Kiat
2551b73670 Fix bug in ControlNetPipelines with MultiControlNetModel of length 1 (#4032)
* Fix bug in ControlNetPipelines with MultiControlNetModel of length 1

* Add tests for varying number of ControlNet models

* Fix missing indexing for control_guidance_start and control_guidance_end

* Fix code quality

* Separate test for MultiControlNet with one model

* Revert formatting of earlier test
2023-07-20 17:45:08 +02:00
Allen Zhang
930c8fdcb7 fix incorrect attention head dimension in AttnProcessor2_0 (#4154)
fix inner_dim
2023-07-20 17:40:50 +02:00
Patrick von Platen
6b1abba18d Add controlnet and vae from single file (#4084)
* Add controlnet from single file

* Updates

* make style

* finish

* Apply suggestions from code review

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-07-19 14:50:27 +02:00
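A minimal sketch of single-file loading for both a ControlNet and a Stable Diffusion pipeline; the local checkpoint paths are hypothetical placeholders:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# hypothetical local paths to original-format (.safetensors) checkpoints
controlnet = ControlNetModel.from_single_file(
    "./control_v11p_sd15_canny.safetensors", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_single_file(
    "./v1-5-pruned-emaonly.safetensors", controlnet=controlnet, torch_dtype=torch.float16
)
```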
Saurav Maheshkar
470f51cd26 feat: add act_fn param to OutValueFunctionBlock (#3994)
* feat: add act_fn param to OutValueFunctionBlock

* feat: update unet1d tests to not use mish

* feat: add `mish` as the default activation function

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* feat: drop mish tests from unet1d

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-07-19 14:44:44 +02:00
Patrick von Platen
b7e35dc782 make style 2023-07-19 14:28:35 +02:00
Byron Mallett
c77ac246c1 Fixed SDXL single file loading to use the correct requested pipeline class (#4142)
Using pipeline_class argument to instantiate the correct pipeline when loading SDXL models from single files
2023-07-19 14:40:13 +02:00
Zhao Shenyang
ed2a3584ab Docs/bentoml integration (#4090)
* docs: first draft of BentoML integration

* Update the diffusers doc

* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* add BentoML integration guide under Optimization section

* restyle codes

---------

Co-authored-by: Sherlock113 <sherlockxu07@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2023-07-18 11:56:13 -07:00
Sayak Paul
3eb498e7b4 [Core] add: controlnet support for SDXL (#4038)
* add: controlnet sdxl.

* modifications to controlnet.

* run styling.

* add: __init__.pys

* incorporate https://github.com/huggingface/diffusers/pull/4019 changes.

* run make fix-copies.

* resize the conditioning images.

* remove autocast.

* run styling.

* disable autocast.

* debugging

* device placement.

* back to autocast.

* remove comment.

* save some memory by reusing the vae and unet in the pipeline.

* apply styling.

* Allow low precision sd xl

* finish

* finish

* changes to accommodate the improved VAE.

* modifications to how we handle vae encoding in the training.

* make style

* make existing controlnet fast tests pass.

* change vae checkpoint cli arg.

* fix: vae pretrained paths.

* fix: steps in get_scheduler().

* debugging.

* debugging.

* fix: weight conversion.

* add: docs.

* add: limited tests.

* add: datasets to the requirements.

* update docstrings and incorporate the usage of watermarking.

* incorporate fix from #4083

* fix watermarking dependency handling.

* run make-fix-copies.

* Empty-Commit

* Update requirements_sdxl.txt

* remove vae upcasting part.

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* run make style

* run make fix-copies.

* disable suppot for multicontrolnet.

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* run make fix-copies.

* style.

* fix-copies.

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-07-18 18:25:34 +05:30
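A minimal sketch of the SDXL ControlNet pipeline with a Canny edge map; the checkpoint ids, the local input image, and the use of OpenCV for edge detection are assumptions:

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# hypothetical local image; any RGB photo works as the source of the edge map
source = load_image("./input.png").resize((1024, 1024))
edges = cv2.Canny(np.array(source), 100, 200)
canny_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

image = pipe(
    "aerial view of a futuristic research complex, sharp focus",
    image=canny_image,
    controlnet_conditioning_scale=0.5,
).images[0]
```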
clarencechen
c6e56e92ed Add Recent Timestep Scheduling Improvements to DDIM Inverse Scheduler (#3865)
* Add Recent Timestep Scheduling Improvements to DDIM Inverse Scheduler

Roll timesteps by one to reflect origin-destination semantic discrepancy

Restore `set_alpha_to_one` option to handle negative initial timesteps

Remove `set_alpha_to_zero` option not used due to previous truncation

* Bugfix

* Remove unnecessary calls to `detach()`

Use `self.image_processor.preprocess` in DiffEdit pipeline functions

* Preprocess list input for inverted image latents in diffedit pipeline

* Add `timestep_spacing` and `steps_offset` to `DPMSolverMultistepInverseScheduler`

* Update expected test results to account for inverting last forward diffusion step

* Fix inversion progress bar bug

* Add first draft for proper fast tests for DDIMInverseScheduler

* Add deprecated DDIMInverseScheduler kwarg to ConfigMixer registry

* Fix test failure in DPMMultistepInverseScheduler

Invert step specification leads to negative noise variance in SDE-based algs

Add first draft for proper fast tests for DPMMultistepInverseScheduler

* Update expected test results to account for inverting last forward diffusion step

Clean up diffedit fast test
2023-07-18 11:35:16 +02:00
Patrick von Platen
27062c3631 Refactor execution device & cpu offload (#4114)
* create general cpu offload & execution device

* Remove boiler plate

* finish

* kp

* Correct offload more pipelines

* up

* Update src/diffusers/pipelines/pipeline_utils.py

* make style

* up
2023-07-18 11:04:40 +02:00
takuoko
6427aa995e [Enhance] Add rank in dreambooth (#4112)
add rank in dreambooth
2023-07-18 11:30:06 +05:30
Seongsu Park
8b18cd8e7f [Docs] Korean translation update (#4022)
* feat) optimization kr translation

* fix) typo, italic setting

* feat) dreambooth, text2image kr

* feat) lora kr

* fix) LoRA

* fix) fp16 fix

* fix) doc-builder style

* fix) revised some fp16 wording

* fix) fp16 style fix

* fix) opt, training docs update

* merge conflict

* Fix community pipelines (#3266)

* Allow disabling torch 2_0 attention (#3273)

* Allow disabling torch 2_0 attention

* make style

* Update src/diffusers/models/attention.py

* Release: v0.16.1

* feat) toctree update

* feat) toctree update

* Fix custom releases (#3708)

* Fix custom releases

* make style

* Fix loading if unexpected keys are present (#3720)

* Fix loading

* make style

* Release: v0.17.0

* opt_overview

* commit

* Create pipeline_overview.mdx

* unconditional_image_generatoin_1stDraft

*  Add translation for write_own_pipeline.mdx

* conditional: literal translation; "unconditional" transliterated

* unconditional_image_generation first draft

* revise

* Update pipeline_overview.mdx

* revise-2

* ♻️ translation fixed for write_own_pipeline.mdx

* complete translate basic_training.mdx

* other-formats.mdx translation complete

* fix tutorials/basic_training.mdx

* other-formats revisions

* inpaint Korean translation

* depth2img translation

* translate training/adapt-a-model.mdx

* revised_all

* feedback taken

* using_safetensors.mdx_first_draft

* custom_pipeline_examples.mdx_first_draft

* img2img Korean translation complete

* tutorial_overview edit

* reusing_seeds

* torch2.0

* translate complete

* fix) applied the terminology consistency conventions

* [fix] refined the translation based on feedback

* fixed typos + corrected parts that violated conventions

* typo, style fix

* toctree update

* copyright fix

* toctree fix

* Update _toctree.yml

---------

Co-authored-by: Chanran Kim <seriousran@gmail.com>
Co-authored-by: apolinário <joaopaulo.passos@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Lee, Hongkyu <75282888+howsmyanimeprofilepicture@users.noreply.github.com>
Co-authored-by: hyeminan <adios9709@gmail.com>
Co-authored-by: movie5 <oyh5800@naver.com>
Co-authored-by: idra79haza <idra79haza@github.com>
Co-authored-by: Jihwan Kim <cuchoco@naver.com>
Co-authored-by: jungwoo <boonkoonheart@gmail.com>
Co-authored-by: jjuun0 <jh061993@gmail.com>
Co-authored-by: szjung-test <93111772+szjung-test@users.noreply.github.com>
Co-authored-by: idra79haza <37795618+idra79haza@users.noreply.github.com>
Co-authored-by: howsmyanimeprofilepicture <howsmyanimeprofilepicture@gmail.com>
Co-authored-by: hoswmyanimeprofilepicture <hoswmyanimeprofilepicture@gmail.com>
2023-07-17 18:28:08 -07:00
Will Berman
a0597f33ac t2i pipeline (#3932)
* Quick implementation of t2i-adapter

Load adapter module with from_pretrained

Prototyping generalized adapter framework

Write up doc string for sideload framework (WIP) + some minor updates on implementation

Update adapter models

Remove old adapter optional args in UNet

Add StableDiffusionAdapterPipeline unit test

Handle cpu offload in StableDiffusionAdapterPipeline

Auto correct coding style

Update model repo name to "RzZ/sd-v1-4-adapter-pipeline"

Refactor MultiAdapter to better compatible with config system

Export MultiAdapter

Create pipeline document template from controlnet

Create dummy objects

Supporting new AdapterLight model

Fix StableDiffusionAdapterPipeline common pipeline test

[WIP] Update adapter pipeline document

Handle num_inference_steps in StableDiffusionAdapterPipeline

Update definition of Adapter "channels_in"

Update documents

Apply code style

Fix doc typo and merge error

Update doc string and example

Quality of life improvement

Remove redundant code and file from prototyping

Remove unused package

Remove comments

Fix title

Fix typo

Add conditioning scale arg

Bring back old implmentation

Offload sideload

Add supply info on document

Update src/diffusers/models/adapter.py

Co-authored-by: Will Berman <wlbberman@gmail.com>

Update MultiAdapter constructor

Swap out custom checkpoint and update pipeline constructor

Update document

Apply suggestions from code review

Co-authored-by: Will Berman <wlbberman@gmail.com>

Correcting style

Following single-file policy

Update auto size in image preprocess func

Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_adapter.py

Co-authored-by: Will Berman <wlbberman@gmail.com>

fix copies

Update adapter pipeline behavior

Add adapter_conditioning_scale doc string

Add the missing doc string

Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

Fix few bugs from suggestion

Handle L-mode PIL image as control image

Rename to differentiate adapter resblock

Update src/diffusers/models/adapter.py

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

Fix typo

Update adapter parameter name

Update test case and code style

Fix copies

Fix typo

Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_adapter.py

Co-authored-by: Will Berman <wlbberman@gmail.com>

Update Adapter class name

Add checkpoint converting script

Fix style

Fix-copies

Remove dev script

Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

Updates for parameter rename

Fix convert_adapter

remove main

fix diff

more

refactoring

more

more

small fixes

refactor

tests

more slow tests

more tests

Update docs/source/en/api/pipelines/overview.mdx

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

add community contributor to docs

Update docs/source/en/api/pipelines/stable_diffusion/adapter.mdx

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

Update docs/source/en/api/pipelines/stable_diffusion/adapter.mdx

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

Update docs/source/en/api/pipelines/stable_diffusion/adapter.mdx

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

Update docs/source/en/api/pipelines/stable_diffusion/adapter.mdx

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

Update docs/source/en/api/pipelines/stable_diffusion/adapter.mdx

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

fix

remove from_adapters

license

paper link

docs

more url fixes

more docs

fix

fixes

fix

fix

* fix sample inplace add

* additional_kwargs -> additional_residuals

* move t2i adapter pipeline to own module

* preprocess -> _preprocess_adapter_image

* add TencentArc to license

* fix example code links

* add image converter and fix example doc string

* fix links

* clearer additional residual application

---------

Co-authored-by: HimariO <dsfhe49854@gmail.com>
2023-07-17 12:55:44 -07:00
Kadir Nar
3929954613 📝 Update doc with more descriptive title and filename for "IF" section (#4049)
* 📝 Update doc with more descriptive title and filename for "IF" section

Updated the documentation to provide a more descriptive title and filename for the "IF" section. Previously, having only "IF" as the title did not convey a clear meaning. By renaming the section to "DeepFloyd IF," we provide users with a more informative and context-specific heading.

Thanks! 🙌

* 📝 Update name for "IF" section in 📝 Update name for "IF" section in README

Updated the link and name for the "IF" section in the README file to reflect the new heading "DeepFloyd IF."

* 📝 Fix broken link for "Instruct Pix2Pix" section in README

Fixed the broken link for the "Instruct Pix2Pix" section in the README file. Previously, the link was pointing to an incorrect location due to the presence of "stable_diffusion" in the URL. By removing "stable_diffusion" from the URL, I have corrected the error and ensured that users are directed to the correct section.

* 🔧💼 Updated parameters in _toctree.yml file

- ✏️ Updated 'local' parameter to 'api/pipelines/deepfloyd_if'.
- ✏️ Updated 'title' parameter to 'DeepFloyd IF'.

🎯 These changes aim to improve visibility and accessibility in the documentation of the DeepFloyd IF pipeline. 🚀📚
2023-07-17 09:25:37 -07:00
Apoorva Pandey
f6ce323633 Make setup.py compatible with pipenv (#4121) 2023-07-17 17:31:04 +02:00
edward zhu
6b33c11c5b add noise_sampler_seed to StableDiffusionKDiffusionPipeline.__call__ (#3911)
* add noise_sampler to StableDiffusionKDiffusionPipeline

* fix/docs: Fix the broken doc links (#3897)

* fix/docs: Fix the broken doc links

Signed-off-by: GitHub <noreply@github.com>

* Update docs/source/en/using-diffusers/write_own_pipeline.mdx

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

---------

Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Add video img2img (#3900)

* Add image to image video

* Improve

* better naming

* make fix copies

* add docs

* finish tests

* trigger tests

* make style

* correct

* finish

* Fix more

* make style

* finish

* fix/doc-code: Updating to the latest version parameters (#3924)

fix/doc-code: update to use the new parameter

Signed-off-by: GitHub <noreply@github.com>

* fix/doc: no import torch issue (#3923)

Fix/doc: no import torch issue

Signed-off-by: GitHub <noreply@github.com>

* Correct controlnet out of list error (#3928)

* Correct controlnet out of list error

* Apply suggestions from code review

* correct tests

* correct tests

* fix

* test all

* Apply suggestions from code review

* test all

* test all

* Apply suggestions from code review

* Apply suggestions from code review

* fix more tests

* Fix more

* Apply suggestions from code review

* finish

* Apply suggestions from code review

* Update src/diffusers/schedulers/scheduling_k_dpm_2_ancestral_discrete.py

* finish

* Adding better way to define multiple concepts and also validation capabilities. (#3807)

* - Added validation parameters
- Changed some parameter descriptions to better explain their use.
- Fixed a few typos.
- Added concept_list parameter for better management of multiple subjects
- changed logic for image validation

* - Fixed bad logic for class data root directories

* Defaulting validation_steps to None for an easier logic

* Fixed multiple validation prompts

* Fixed bug on validation negative prompt

* Changed validation logic for tracker.

* Added uuid for validation image labeling

* Fix error when comparing validation prompts and validation negative prompts

* Improved error message when negative prompts for validation are more than the number of prompts

* - Changed image tracking number from epoch to global_step
- Added Typing for functions

* Added some validations more when using concept_list parameter and the regular ones.

* Fixed error message

* Added more validations for validation parameters

* Improved messaging for errors

* Fixed validation error for parameters with default values

* - Added train step to image name for validation
- reformatted code

* - Added train step to image's name for validation
- reformatted code

* Updated README.md file.

* reverted back original script of train_dreambooth.py

* reverted back original script of train_dreambooth.py

* left one blank line at the eof

* reverted back setup.py

* reverted back setup.py

* added same logic for when parameters for prior preservation are used without enabling the flag while using concept_list parameter.

* Ran black formatter.

* fixed a few strings

* fixed import sort with isort and removed fstrings without placeholder

* fixed import order with ruff (since with isort wasn't ok)

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* [ldm3d] Update code to be functional with the new checkpoints (#3875)

* fixed typo

* updated doc to be consistent in naming

* make style/quality

* preprocessing for 4 channels and not 6

* make style

* test for 4c

* make style/quality

* fixed test on cpu

---------

Co-authored-by: Aflalo <estellea@isl-iam1.rr.intel.com>
Co-authored-by: Aflalo <estellea@isl-gpu33.rr.intel.com>
Co-authored-by: Aflalo <estellea@isl-gpu38.rr.intel.com>

* Improve memory text to video (#3930)

* Improve memory text to video

* Apply suggestions from code review

* add test

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* finish test setup

---------

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* revert automatic chunking (#3934)

* revert automatic chunking

* Apply suggestions from code review

* revert automatic chunking

* avoid upcasting by assigning dtype to noise tensor (#3713)

* avoid upcasting by assigning dtype to noise tensor

* make style

* Update train_unconditional.py

* Update train_unconditional.py

* make style

* add unit test for pickle

* revert change

---------

Co-authored-by: root <root@orttrainingdev8.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Prathik Rao <prathikrao@microsoft.com@orttrainingdev8.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>

* Fix failing np tests (#3942)

* Fix failing np tests

* Apply suggestions from code review

* Update tests/pipelines/test_pipelines_common.py

* Add `timestep_spacing` and `steps_offset` to schedulers (#3947)

* Add timestep_spacing to DDPM, LMSDiscrete, PNDM.

* Remove spurious line.

* More easy schedulers.

* Add `linspace` to DDIM

* Noise sigma for `trailing`.

* Add timestep_spacing to DEISMultistepScheduler.

Not sure the range is the way it was intended.

* Fix: remove line used to debug.

* Support timestep_spacing in DPMSolverMultistep, DPMSolverSDE, UniPC

* Fix: convert to numpy.

* Use sched. defaults when instantiating from_config

For params not present in the original configuration.

This makes it possible to switch pipeline schedulers even if they use
different timestep_spacing (or any other param).

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Missing args in DPMSolverMultistep

* Test: default args not in config

* Style

* Fix scheduler name in test

* Remove duplicated entries

* Add test for solver_type

This test currently fails in main. When switching from DEIS to UniPC,
solver_type is "logrho" (the default value from DEIS), which gets
translated to "bh1" by UniPC. This is different to the default value for
UniPC: "bh2". This is where the translation happens: 36d22d0709/src/diffusers/schedulers/scheduling_unipc_multistep.py (L171)

* UniPC: use same default for solver_type

Fixes a bug when switching from UniPC from another scheduler (i.e.,
DEIS) that uses a different solver type. The solver is now the same as
if we had instantiated the scheduler directly.

* do not save use default values

* fix more

* fix all

* fix schedulers

* fix more

* finish for real

* finish for real

* flaky tests

* Update tests/pipelines/stable_diffusion/test_stable_diffusion_pix2pix_zero.py

* Default steps_offset to 0.

* Add missing docstrings

* Apply suggestions from code review

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Add Consistency Models Pipeline (#3492)

* initial commit

* Improve consistency models sampling implementation.

* Add CMStochasticIterativeScheduler, which implements the multi-step sampler (stochastic_iterative_sampler) in the original code, and make further improvements to sampling.

* Add Unet blocks for consistency models

* Add conversion script for Unet

* Fix bug in new unet blocks

* Fix attention weight loading

* Make design improvements to ConsistencyModelPipeline and CMStochasticIterativeScheduler and add initial version of tests.

* make style

* Make small random test UNet class conditional and set resnet_time_scale_shift to 'scale_shift' to better match consistency model checkpoints.

* Add support for converting a test UNet and non-class-conditional UNets to the consistency models conversion script.

* make style

* Change num_class_embeds to 1000 to better match the original consistency models implementation.

* Add support for distillation in pipeline_consistency_models.py.

* Improve consistency model tests:
	- Get small testing checkpoints from hub
	- Modify tests to take into account "distillation" parameter of ConsistencyModelPipeline
	- Add onestep, multistep tests for distillation and distillation + class conditional
	- Add expected image slices for onestep tests

* make style

* Improve ConsistencyModelPipeline:
	- Add initial support for class-conditional generation
	- Fix initial sigma for onestep generation
	- Fix some sigma shape issues

* make style

* Improve ConsistencyModelPipeline:
	- add latents __call__ argument and prepare_latents method
	- add check_inputs method
	- add initial docstrings for ConsistencyModelPipeline.__call__

* make style

* Fix bug when randomly generating class labels for class-conditional generation.

* Switch CMStochasticIterativeScheduler to configuring a sigma schedule and make related changes to the pipeline and tests.

* Remove some unused code and make style.

* Fix small bug in CMStochasticIterativeScheduler.

* Add expected slices for multistep sampling tests and make them pass.

* Work on consistency model fast tests:
	- in pipeline, call self.scheduler.scale_model_input before denoising
	- get expected slices for Euler and Heun scheduler tests
	- make Euler test pass
	- mark Heun test as expected fail because it doesn't support prediction_type "sample" yet
	- remove DPM and Euler Ancestral tests because they don't support use_karras_sigmas

* make style

* Refactor conversion script to make it easier to add more model architectures to convert in the future.

* Work on ConsistencyModelPipeline tests:
	- Fix device bug when handling class labels in ConsistencyModelPipeline.__call__
	- Add slow tests for onestep and multistep sampling and make them pass
	- Refactor fast tests
	- Refactor ConsistencyModelPipeline.__init__

* make style

* Remove the add_noise and add_noise_to_input methods from CMStochasticIterativeScheduler for now.

* Run python utils/check_copies.py --fix_and_overwrite
python utils/check_dummies.py --fix_and_overwrite to make dummy objects for new pipeline and scheduler.

* Make fast tests from PipelineTesterMixin pass.

* make style

* Refactor consistency models pipeline and scheduler:
	- Remove support for Karras schedulers (only support CMStochasticIterativeScheduler)
	- Move sigma manipulation, input scaling, denoising from pipeline to scheduler
	- Make corresponding changes to tests and ensure they pass

* make style

* Add docstrings and further refactor pipeline and scheduler.

* make style

* Add initial version of the consistency models documentation.

* Refactor custom timesteps logic following DDPMScheduler/IFPipeline and temporarily add torch 2.0 SDPA kernel selection logic for debugging.

* make style

* Convert current slow tests to use fp16 and flash attention.

* make style

* Add slow tests for normal attention on cuda device.

* make style

* Fix attention weights loading

* Update consistency model fast tests for new test checkpoints with attention fix.

* make style

* apply suggestions

* Add add_noise method to CMStochasticIterativeScheduler (copied from EulerDiscreteScheduler).

* Conversion script now outputs pipeline instead of UNet and add support for LSUN-256 models and different schedulers.

* When both timesteps and num_inference_steps are supplied, raise warning instead of error (timesteps take precedence).

* make style

* Add remaining diffusers model checkpoints for models in the original consistency model release and update usage example.

* apply suggestions from review

* make style

* fix attention naming

* Add tests for CMStochasticIterativeScheduler.

* make style

* Make CMStochasticIterativeScheduler tests pass.

* make style

* Override test_step_shape in CMStochasticIterativeSchedulerTest instead of modifying it in SchedulerCommonTest.

* make style

* rename some models

* Improve API

* rename some models

* Remove duplicated block

* Add docstring and make torch compile work

* More fixes

* Fixes

* Apply suggestions from code review

* Apply suggestions from code review

* add more docstring

* update consistency conversion script

---------

Co-authored-by: ayushmangal <ayushmangal@microsoft.com>
Co-authored-by: Ayush Mangal <43698245+ayushtues@users.noreply.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* add test case for StableDiffusionKDiffusionPipeline noise_sampler

---------

Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: Aisuko <urakiny@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Andrés Mauricio Repetto Ferrero <amd.repetto@gmail.com>
Co-authored-by: estelleafl <estelle.aflalo@intel.com>
Co-authored-by: Aflalo <estellea@isl-iam1.rr.intel.com>
Co-authored-by: Aflalo <estellea@isl-gpu33.rr.intel.com>
Co-authored-by: Aflalo <estellea@isl-gpu38.rr.intel.com>
Co-authored-by: Prathik Rao <prathikr@usc.edu>
Co-authored-by: root <root@orttrainingdev8.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
Co-authored-by: Prathik Rao <prathikrao@microsoft.com@orttrainingdev8.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
Co-authored-by: ayushmangal <ayushmangal@microsoft.com>
Co-authored-by: Ayush Mangal <43698245+ayushtues@users.noreply.github.com>
2023-07-17 17:10:17 +02:00
Patrick von Platen
5729829cd8 [From single file] Make accelerate optional (#4132)
* Make accelerate optional

* make accelerate optional
2023-07-17 15:56:21 +02:00
Patrick von Platen
e27500b72c [From ckpt] replace with os path join (#3746)
replace with os path join
2023-07-17 11:39:06 +02:00
Patrick von Platen
fe5911bf3d [Stable Diffusion Inpaint ]Fix dtype inpaint (#4113)
Fix dtype inpaint
2023-07-15 16:10:40 +02:00
Patrick von Platen
b024ebb965 [SD-XL] Add inpainting (#4098)
* Add more

* more

* up

* Get ensemble of expert denoisers working

* Fix code

* add tests

* up
2023-07-14 17:05:44 +02:00
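A minimal usage sketch of the SD-XL inpainting pipeline added in #4098. The checkpoint id and image URLs below are placeholders for illustration, not values taken from the PR.

```python
import torch
from diffusers import StableDiffusionXLInpaintPipeline
from diffusers.utils import load_image

# Placeholder checkpoint id; any SD-XL base checkpoint should work here.
pipe = StableDiffusionXLInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

init_image = load_image("https://example.com/input.png")  # hypothetical URL
mask_image = load_image("https://example.com/mask.png")   # white pixels = area to repaint

image = pipe(
    prompt="a tiger sitting on a park bench",
    image=init_image,
    mask_image=mask_image,
    num_inference_steps=50,
    strength=0.8,
).images[0]
```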
Patrick von Platen
ad8f985e81 Allow low precision vae sd xl (#4083)
* Allow low precision sd xl

* finish

* finish

* make style
2023-07-14 14:51:16 +02:00
Patrick von Platen
ee2f2775b2 [tests] use parent class for monkey patching to not break other tests (#4088)
* [tests] use parent class for monkey patching to not break other tests

* fix
2023-07-14 13:44:44 +02:00
Sayak Paul
692b7a907d [Feat] add: utility for unloading lora. (#4034)
* add: test for testing unloading lora.

* add :reason to skipif.

* initial implementation of lora unload().

* apply styling.

* add: doc.

* change checkpoints.

* reinit generator

* finalize slow test.

* add fast test for unloading lora.
2023-07-14 16:30:18 +05:30
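A short sketch of the LoRA unloading utility from #4034: load_lora_weights() patches the pipeline, and unload_lora_weights() restores the base weights so the same pipeline can be reused without LoRA. The LoRA repository id is a placeholder.

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

pipe.load_lora_weights("some-user/some-lora")   # hypothetical LoRA repo
lora_image = pipe("a pokemon with blue eyes").images[0]

pipe.unload_lora_weights()                      # back to the plain base model
base_image = pipe("a pokemon with blue eyes").images[0]
```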
Patrick von Platen
71c918b848 [Invisible watermark] Correct version (#4087) 2023-07-14 09:30:43 +05:30
Sayak Paul
83ca21f539 fix: minor things in the SDXL docs. (#4070) 2023-07-14 09:01:20 +05:30
Gabriel Birnbaum
f3802eb805 fix requirement in SDXL (#4082) 2023-07-14 02:58:20 +02:00
Patrick von Platen
bfe8b41315 [From Single File] Force accelerate to be installed (#4078)
force accelerate to be installed
2023-07-13 23:16:43 +02:00
YiYi Xu
4535088cec add kandinsky to readme table (#4081)
Co-authored-by: yiyixuxu <yixu310@gmail,com>
2023-07-13 22:26:51 +02:00
Thomas Chambon
2eceaaef0f [Community] Implementation of the IADB community pipeline (#3996)
* community pipeline: implementation of iadb

* iadb.py: reformat using black

* iadb.py: linting update
2023-07-13 16:49:41 +02:00
Ruoxi
ece55227ff Multiply lr scheduler steps by num_processes. (#3983)
* Multiply lr scheduler steps by `num_processes`.

* Stop multiplying steps by gradient accumulation.
2023-07-13 17:50:25 +05:30
Patrick von Platen
92a57a8e84 Fix kandinsky remove safety (#4065)
* Finish

* make style
2023-07-12 21:42:29 +02:00
Lucain
d7280b7436 Trigger CI on ci-* branches (#3635) 2023-07-12 19:31:36 +02:00
Patrick von Platen
e9eb0938f4 make style 2023-07-12 19:24:47 +02:00
junming huang
a29ea36d62 Update train_unconditional.py (#3899)
increase the timeout when using a big dataset or high resolution
2023-07-12 19:24:28 +02:00
Evgenii Kashin
af48bf2008 Add circular padding for artifact-free StableDiffusionPanoramaPipeline (#4025)
* Add circular padding option

* Fix style with black

* Fix corner case with small image size

* Add circular padding test cases

* Fix docstring

* Improve docstring for circular padding, remove slow test case

* Update docs for circular padding argument

* Add images comparison for circular padding
2023-07-12 20:49:46 +05:30
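A sketch of the circular padding option from #4025, which removes the visible seam when a 360-degree panorama is wrapped around. The argument name follows the PR description and should be treated as an assumption; the checkpoint id is illustrative.

```python
import torch
from diffusers import StableDiffusionPanoramaPipeline, DDIMScheduler

model_id = "stabilityai/stable-diffusion-2-base"
scheduler = DDIMScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPanoramaPipeline.from_pretrained(
    model_id, scheduler=scheduler, torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "a photo of the dolomites",
    height=512,
    width=2048,
    circular_padding=True,  # pad the sliding views circularly so left/right edges match
).images[0]
```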
Patrick von Platen
4b50ecceb0 Correct sdxl docs (#4058) 2023-07-12 16:02:31 +02:00
Bagheera
99b540b072 [SDXL] Partial diffusion support for Text2Img and Img2Img Pipelines (#4015)
* diffusers#4003 - initial implementation of max_inference_steps

* diffusers#4003 - initial implementation of max_inference_steps and first_inference_step for img2img

* diffusers#4003 - use first_inference_step as an input arg for get_timestamps in img2img

* diffusers#4003 Do not add noise during img2img when we have a defined first timestep

* diffusers#4003 Mild updates after revert

* diffusers#4003 Missing change

* Show implementation with denoising_start and end

* Apply suggestions from code review

* Update src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* move to 0.19.0dev

* Apply suggestions from code review

* add exhaustive tests

* add docs

* finish

* Apply suggestions from code review

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* make style

---------

Co-authored-by: bghira <bghira@users.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-07-12 13:54:27 +02:00
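A minimal sketch of the denoising_end / denoising_start split described above: the base model runs the first 80% of the schedule and hands its latents to the refiner, which only runs the last 20%. Checkpoint ids and the 0.8 cutoff are illustrative.

```python
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16
).to("cuda")

prompt = "a majestic lion jumping from a big stone at night"

# Stop the base model early and keep the result in latent space.
latents = base(prompt, denoising_end=0.8, output_type="latent").images
# The refiner picks up at the same point in the noise schedule.
image = refiner(prompt, image=latents, denoising_start=0.8).images[0]
```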
Patrick von Platen
b9feed8795 move to 0.19.0dev (#4048) 2023-07-11 22:49:12 +02:00
Kadir Nar
f9cedfb75c 📝 Fix broken link to models documentation (#4026)
* 📝 Fix broken link to models documentation

Corrected the link to the models documentation in the README. Previously, the link was pointing to an incorrect URL. Now, the link directs users to the correct documentation page for more details on the models.

Thanks! 🙌

* Update src/diffusers/models/README.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2023-07-11 12:10:37 -07:00
Michael Monashev
fc7aa64ea8 Fixed pipeline parameter descriptions. (#3955)
Fixed parameter descriptions
2023-07-11 09:52:17 -07:00
Patrick von Platen
d0979f5274 quick fix to make sure sdxl conversion works 2023-07-11 16:51:56 +00:00
Vines
fcb0da7f00 Fix diffedit doc typo (#3977)
fix diffedit code mistake
2023-07-11 09:47:36 -07:00
manosplitsis
8e8b046d24 Typo fix in example docstring in audioldm pipeline (#4044)
typo fix in example docstring in audioldm pipeline
2023-07-11 09:35:11 -07:00
oOraph
5e704a2c71 keep _use_default_values as a list type (#4040)
Signed-off-by: Raphael <oOraph@users.noreply.github.com>
Co-authored-by: Raphael <oOraph@users.noreply.github.com>
2023-07-11 18:21:08 +02:00
Patrick von Platen
8bff782354 Improve single loading file (#4041)
* start improving single file load

* Fix more

* start improving single file load

* Fix sd 2.1

* further improve from_single_file
2023-07-11 17:01:25 +02:00
Lucain
6632823690 FIX force_download in download utility (#4036)
FIX force_download in download utils
2023-07-11 12:20:00 +02:00
Sayak Paul
f74d5e1c2f [diffusers-cli] Add a CLI command to run FP16 and safetensors conversions and opening PRs (#3982)
* feat: add a cli util for fp16 and safetensors format.

* fix: commit_description.

* add: usage example.
2023-07-11 08:30:09 +05:30
Sayak Paul
3d74dc2abd [Examples] Add a training script for SDXL DreamBooth LoRA (#4016)
* add dreambooth lora script for SDXL incorporating latest changes.

* remove use_auth_token=True.

* add: documentation

* remove unneeded cli.

* increase the number of training steps in the readme.

* add LoraLoaderMixin to the subclassing mix.

* add sdxl lora dreambooth test.

* add: inference code sample.

* add: refiner output.

* add LoraLoaderMixin to the mix of classes of StableDiffusionXLImg2ImgPipeline.

* change default resolution of DreamBoothDataset.

* better sdxl report path.

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

---------

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-07-11 07:38:41 +05:30
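A rough inference sketch for a LoRA trained with the SDXL DreamBooth script from #4016. The output directory and instance prompt are placeholders.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("path/to/lora_output_dir")  # hypothetical training output directory

image = pipe("a photo of sks dog in a bucket", num_inference_steps=25).images[0]
```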
Susnato Dhar
dfd7eafbce Fixed wrong link in CONTRIBUTING.md (#4028)
Update CONTRIBUTING.md
2023-07-10 21:00:55 +02:00
Steven Liu
8dd0ddc3c4 [docs] Fix index page (#3997)
* fix

* correct link

* Fix typo

---------

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-07-10 10:24:37 -07:00
Patrick von Platen
080ecf01b3 Improve loading pipe (#4009)
* improve loading subcomponents

* Add test for logging

* improve loading subcomponents

* make style

* make style

* fix

* finish
2023-07-10 18:05:40 +02:00
Pedro Cuenca
7a91ea6c2b Remove remaining not in upscale pipeline (#4020)
Remove remaining `not` in upscale pipeline.
2023-07-10 12:37:50 +02:00
Sayak Paul
e4559f48c1 minor improvements to the SDXL doc. (#3985)
* minor improvements to the SDXL doc.

* use_refiner variable.

* fix: typo.
2023-07-10 16:04:23 +05:30
Pedro Cuenca
d6b861401e Correctly keep vae in float16 when using PyTorch 2 or xFormers (#4019)
* Update pipeline_stable_diffusion_xl.py

fix a bug

* Update pipeline_stable_diffusion_xl_img2img.py

* Update pipeline_stable_diffusion_xl_img2img.py

* Update pipeline_stable_diffusion_upscale.py

* style

---------

Co-authored-by: Hu Ye <xiaohuzc@gmail.com>
2023-07-10 12:19:56 +02:00
Patrick von Platen
e4f6c3799d [DiffusionPipeline] Deprecate not throwing error when loading non-existant variant (#4011)
* Deprecate variant nicely

* make style

* Apply suggestions from code review

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-07-10 12:12:29 +02:00
Patrick von Platen
98c9aac1d5 [SDXL] Fix all sequential offload (#4010)
* Fix all sequential offload

* make style

* make style
2023-07-10 10:27:23 +02:00
Omar Sanseviero
e3d71ad89a Minor nits to Dance DIffusion (#4012)
Update pipeline_dance_diffusion.py
2023-07-10 02:53:06 +05:30
Patrick von Platen
68f61a07d6 Make sure torch compile doesn't access unet config (#4008) 2023-07-10 00:23:55 +05:30
Patrick von Platen
4a3e574807 make style 2023-07-09 16:02:59 +00:00
Will Berman
c2a28c346c Refactor LoRA (#3778)
* refactor to support patching LoRA into T5

instantiate the lora linear layer on the same device as the regular linear layer

get lora rank from state dict

tests

fmt

can create lora layer in float32 even when rest of model is float16

fix loading model hook

remove load_lora_weights_ and T5 dispatching

remove Unet#attn_processors_state_dict

docstrings

* text encoder monkeypatch class method

* fix test

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-07-09 18:02:46 +02:00
Patrick von Platen
78922ed7c7 Add sdxl prompt embeddings (#3995)
* Add sdxl prompt embeddings

* Fix more

* fix some slow tests
2023-07-07 16:50:53 +02:00
Patrick von Platen
6fde5a6dd6 [Tests] Fix some slow tests (#3989)
fix some slow tests
2023-07-07 15:17:57 +02:00
Amiha
d1d0b8afce Don't use bare prints in a library (#3991) 2023-07-07 15:17:50 +02:00
Batuhan Taskaya
04ddad484e Add 'rank' parameter to Dreambooth LoRA training script (#3945) 2023-07-07 17:26:10 +05:30
Saurav Maheshkar
03d829d59e feat: add Dropout to Flax UNet (#3894)
* feat: add Dropout to Flax UNet

* feat: add @compact decorator

* fix: drop nn.compact
2023-07-07 11:38:16 +02:00
Omar Sanseviero
8d8b4311b9 Fix code snippet for Audio Diffusion (#3987) 2023-07-07 10:39:38 +02:00
Yorai Levi
1fbcc78d6e typo in safetensors (safetenstors) (#3976)
* Update pipeline_utils.py

typo in safetensors (safetenstors)

* Update loaders.py

typo in safetensors (safetenstors)

* Update modeling_utils.py

typo in safetensors (safetenstors)
2023-07-07 13:03:51 +05:30
Patrick von Platen
51593da25a fix main docs 2023-07-06 19:28:33 +02:00
Patrick von Platen
38e563d0c7 Fix SD XL Docs (#3971)
* finish sd xl docs

* make style

* Apply suggestions from code review

* uP

* uP

* Correct
2023-07-06 19:21:03 +02:00
Aisuko
b8f089c5a3 fix/doc-code: import torch and fix the broken document address (#3941)
Signed-off-by: GitHub <noreply@github.com>
2023-07-06 09:29:04 -07:00
Patrick von Platen
187ea539ae Improve SD XL (#3968)
* improve sd xl

* correct more

* finish

* make style

* fix more
2023-07-06 18:11:20 +02:00
Patrick von Platen
8bf80fc8d8 disable num attention heads (#3969)
* disable num attention heads

* finish
2023-07-06 17:51:40 +02:00
YiYi Xu
45f6d52b10 Add Shap-E (#3742)
* refactor prior_transformer

adding conversion script

add pipeline

add step_index from pipeline, + remove permute

add zero pad token

remove copy from statement for betas_for_alpha_bar function

* add

* add

* update conversion script for renderer model

* refactor camera a little bit

* clean up

* style

* fix copies

* Update src/diffusers/schedulers/scheduling_heun_discrete.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/shap_e/pipeline_shap_e.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/shap_e/pipeline_shap_e.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* alpha_transform_type

* remove step_index argument

* remove get_sigmas_karras

* remove _yiyi_sigma_to_t

* move the rescale prompt_embeds from prior_transformer to pipeline

* replace baddbmm with einsum to match original repo

* Revert "replace baddbmm with einsum to match original repo"

This reverts commit 3f6b435d65.

* add step_index to scale_model_input

* Revert "move the rescale prompt_embeds from prior_transformer to pipeline"

This reverts commit 5b5a8e6be9.

* move rescale from prior_transformer to pipeline

* correct step_index in scale_model_input

* remove print lines

* refactor prior - reduce arguments

* make style

* add prior_image

* arg embedding_proj_norm -> norm_embedding_proj

* add pre-norm for proj_embedding

* move rescale prompt from pipeline to _encode_prompt

* add img2img pipeline

* style

* copies

* Update src/diffusers/models/prior_transformer.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/models/prior_transformer.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/models/prior_transformer.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/models/prior_transformer.py

add arg: encoder_hid_proj

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/models/prior_transformer.py

add new config: norm_in_type

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/models/prior_transformer.py

add new config: added_emb_type

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/models/prior_transformer.py

rename out_dim -> clip_embed_dim

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/models/prior_transformer.py

rename config: out_dim -> clip_embed_dim

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/models/prior_transformer.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/models/prior_transformer.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* finish refactor prior_tranformer

* make style

* refactor renderer

* fix

* make style

* refactor img2img

* remove params_proj

* add test

* add upcast_softmax to prior_transformer

* enable num_images_per_prompt, add save_gif utility

* add

* add fast test

* make style

* add slow test

* style

* add test for img2img

* refactor

* enable batching

* style

* refactor scheduler

* update test

* style

* attempt to solve batch related tests timeout

* add doc

* Update src/diffusers/pipelines/shap_e/pipeline_shap_e.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/shap_e/pipeline_shap_e_img2img.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* hardcode rendering related config

* update betas_for_alpha_bar on ddpm_scheduler

* fix copies

* fix

* export_to_gif

* style

* second attempt to speed up batching tests

* add doc page to index

* Remove intermediate clipping

* 3rd attempt to speed up batching tests

* Remove time index

* simplify scheduler

* Fix more

* Fix more

* fix more

* make style

* fix schedulers

* fix some more tests

* finish

* add one more test

* Apply suggestions from code review

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* style

* apply feedback

* style

* fix copies

* add one example

* style

* add example for img2img

* fix doc

* fix more doc strings

* size -> frame_size

* style

* update doc

* style

* fix on doc

* update repo name

* improve the usage example in shap-e img2img

* add usage examples in the shap-e docs.

* consolidate examples.

* minor fix.

* update doc

* Apply suggestions from code review

* Apply suggestions from code review

* remove upcast

* Make sure background is white

* Update src/diffusers/pipelines/shap_e/pipeline_shap_e.py

* Apply suggestions from code review

* Finish

* Apply suggestions from code review

* Update src/diffusers/pipelines/shap_e/pipeline_shap_e.py

* Make style

---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-07-06 15:20:42 +02:00
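A usage sketch for the Shap-E pipeline added in #3742, using the frame_size argument mentioned in the review above. The repo id and prompt are given as examples, not the only options.

```python
import torch
from diffusers import ShapEPipeline
from diffusers.utils import export_to_gif

pipe = ShapEPipeline.from_pretrained("openai/shap-e", torch_dtype=torch.float16).to("cuda")

images = pipe(
    "a shark",
    guidance_scale=15.0,
    num_inference_steps=64,
    frame_size=256,  # renamed from `size` during review
).images

# Each result is a sequence of rendered frames; save them as an animated GIF.
export_to_gif(images[0], "shark_3d.gif")
```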
YiYi Xu
746215670a Kandinsky_v22_yiyi (#3936)
* Kandinsky2_2

* fix init kandinsky2_2

* kandinsky2_2 fix inpainting

* rename pipelines: remove decoder + 2_2 -> V22

* Update scheduling_unclip.py

* remove text_encoder and tokenizer arguments from doc string

* add test for text2img

* add tests for text2img & img2img

* fix

* add test for inpaint

* add prior tests

* style

* copies

* add controlnet test

* style

* add a test for controlnet_img2img

* update prior_emb2emb api to accept image_embedding or image

* add a test for prior_emb2emb

* style

* remove try except

* example

* fix

* add doc string examples to all kandinsky pipelines

* style

* update doc

* style

* add a tip about 2.2

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* vae -> movq

* vae -> movq

* style

* fix the #copied from

* remove decoder from file name

* update doc: add a section for kandinsky 2.2

* fix

* fix-copies

* add copied from

* add copies from for prior

* add copies from for prior emb2emb

* copy from for img2img

* copied from for inpaint

* more copied from

* more copies from

* more copies

* remove the yiyi comments

* Apply suggestions from code review

* Self-contained example, pipeline order

* Import prior output instead of redefining.

* Style

* Make VQModel compatible with model offload.

* Fix copies

---------

Co-authored-by: Shahmatov Arseniy <62886550+cene555@users.noreply.github.com>
Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-07-06 15:05:42 +02:00
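A two-stage sketch of the Kandinsky 2.2 pipelines added in #3936: the prior turns the prompt into image embeddings, and the decoder turns those embeddings into an image. The community repo ids are assumptions.

```python
import torch
from diffusers import KandinskyV22PriorPipeline, KandinskyV22Pipeline

prior = KandinskyV22PriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
).to("cuda")
decoder = KandinskyV22Pipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
).to("cuda")

prompt = "red cat, 4k photo"
image_embeds, negative_image_embeds = prior(prompt, guidance_scale=1.0).to_tuple()

image = decoder(
    image_embeds=image_embeds,
    negative_image_embeds=negative_image_embeds,
    height=768,
    width=768,
).images[0]
```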
Patrick von Platen
bc9a8cef6f [SD-XL] Add new pipelines (#3859)
* Add new text encoder

* add transformers depth

* More

* Correct conversion script

* Fix more

* Fix more

* Correct more

* correct text encoder

* Finish all

* proof that in works in run local xl

* clean up

* Get refiner to work

* Add red castle

* Fix batch size

* Improve pipelines more

* Finish text2image tests

* Add img2img test

* Fix more

* fix import

* Fix embeddings for classic models (#3888)

Fix embeddings for classic SD models.

* Allow multiple prompts to be passed to the refiner (#3895)

* finish more

* Apply suggestions from code review

* add watermarker

* Model offload (#3889)

* Model offload.

* Model offload for refiner / img2img

* Hardcode encoder offload on img2img vae encode

Saves some GPU RAM in img2img / refiner tasks so it remains below 8 GB.

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* correct

* fix

* clean print

* Update install warning for `invisible-watermark`

* add: missing docstrings.

* fix and simplify the usage example in img2img.

* fix setup for watermarking.

* Revert "fix setup for watermarking."

This reverts commit 491bc9f5a6.

* fix: watermarking setup.

* fix: op.

* run make fix-copies.

* make sure tests pass

* improve convert

* make tests pass

* make tests pass

* better error message

* finish

* finish

* Fix final test

---------

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-07-06 13:37:27 +02:00
Sayak Paul
b62d9a1fdc [Text-to-video] Add torch.compile() compatibility (#3949)
* use sample directly instead of the dataclass.

* more usage of directly samples instead of dataclasses

* more usage of directly samples instead of dataclasses

* use direct sample in the pipeline.

* direct usage of sample in the img2img case.
2023-07-06 14:30:50 +05:30
Sayak Paul
46af98267d [Consistency Models] correct checkpoint url in the doc (#3962)
correct checkpoint url.
2023-07-06 14:22:43 +05:30
Prathik Rao
de1426119d Make UNet2DConditionOutput pickle-able (#3857)
* add default to unet output to prevent it from being a required arg

* add unit test

* make style

* adjust unit test

* mark as fast test

* adjust assert statement in test

---------

Co-authored-by: Prathik Rao <prathikrao@microsoft.com@orttrainingdev8.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
Co-authored-by: root <root@orttrainingdev8.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
2023-07-06 13:12:41 +05:30
Sayak Paul
41ea88f38c Update consistency_models.mdx (#3961) 2023-07-06 10:55:24 +05:30
dg845
aed7499a8d Add Consistency Models Pipeline (#3492)
* initial commit

* Improve consistency models sampling implementation.

* Add CMStochasticIterativeScheduler, which implements the multi-step sampler (stochastic_iterative_sampler) in the original code, and make further improvements to sampling.

* Add Unet blocks for consistency models

* Add conversion script for Unet

* Fix bug in new unet blocks

* Fix attention weight loading

* Make design improvements to ConsistencyModelPipeline and CMStochasticIterativeScheduler and add initial version of tests.

* make style

* Make small random test UNet class conditional and set resnet_time_scale_shift to 'scale_shift' to better match consistency model checkpoints.

* Add support for converting a test UNet and non-class-conditional UNets to the consistency models conversion script.

* make style

* Change num_class_embeds to 1000 to better match the original consistency models implementation.

* Add support for distillation in pipeline_consistency_models.py.

* Improve consistency model tests:
	- Get small testing checkpoints from hub
	- Modify tests to take into account "distillation" parameter of ConsistencyModelPipeline
	- Add onestep, multistep tests for distillation and distillation + class conditional
	- Add expected image slices for onestep tests

* make style

* Improve ConsistencyModelPipeline:
	- Add initial support for class-conditional generation
	- Fix initial sigma for onestep generation
	- Fix some sigma shape issues

* make style

* Improve ConsistencyModelPipeline:
	- add latents __call__ argument and prepare_latents method
	- add check_inputs method
	- add initial docstrings for ConsistencyModelPipeline.__call__

* make style

* Fix bug when randomly generating class labels for class-conditional generation.

* Switch CMStochasticIterativeScheduler to configuring a sigma schedule and make related changes to the pipeline and tests.

* Remove some unused code and make style.

* Fix small bug in CMStochasticIterativeScheduler.

* Add expected slices for multistep sampling tests and make them pass.

* Work on consistency model fast tests:
	- in pipeline, call self.scheduler.scale_model_input before denoising
	- get expected slices for Euler and Heun scheduler tests
	- make Euler test pass
	- mark Heun test as expected fail because it doesn't support prediction_type "sample" yet
	- remove DPM and Euler Ancestral tests because they don't support use_karras_sigmas

* make style

* Refactor conversion script to make it easier to add more model architectures to convert in the future.

* Work on ConsistencyModelPipeline tests:
	- Fix device bug when handling class labels in ConsistencyModelPipeline.__call__
	- Add slow tests for onestep and multistep sampling and make them pass
	- Refactor fast tests
	- Refactor ConsistencyModelPipeline.__init__

* make style

* Remove the add_noise and add_noise_to_input methods from CMStochasticIterativeScheduler for now.

* Run `python utils/check_copies.py --fix_and_overwrite` and `python utils/check_dummies.py --fix_and_overwrite` to make dummy objects for the new pipeline and scheduler.

* Make fast tests from PipelineTesterMixin pass.

* make style

* Refactor consistency models pipeline and scheduler:
	- Remove support for Karras schedulers (only support CMStochasticIterativeScheduler)
	- Move sigma manipulation, input scaling, denoising from pipeline to scheduler
	- Make corresponding changes to tests and ensure they pass

* make style

* Add docstrings and further refactor pipeline and scheduler.

* make style

* Add initial version of the consistency models documentation.

* Refactor custom timesteps logic following DDPMScheduler/IFPipeline and temporarily add torch 2.0 SDPA kernel selection logic for debugging.

* make style

* Convert current slow tests to use fp16 and flash attention.

* make style

* Add slow tests for normal attention on cuda device.

* make style

* Fix attention weights loading

* Update consistency model fast tests for new test checkpoints with attention fix.

* make style

* apply suggestions

* Add add_noise method to CMStochasticIterativeScheduler (copied from EulerDiscreteScheduler).

* Conversion script now outputs pipeline instead of UNet and add support for LSUN-256 models and different schedulers.

* When both timesteps and num_inference_steps are supplied, raise warning instead of error (timesteps take precedence).

* make style

* Add remaining diffusers model checkpoints for models in the original consistency model release and update usage example.

* apply suggestions from review

* make style

* fix attention naming

* Add tests for CMStochasticIterativeScheduler.

* make style

* Make CMStochasticIterativeScheduler tests pass.

* make style

* Override test_step_shape in CMStochasticIterativeSchedulerTest instead of modifying it in SchedulerCommonTest.

* make style

* rename some models

* Improve API

* rename some models

* Remove duplicated block

* Add docstring and make torch compile work

* More fixes

* Fixes

* Apply suggestions from code review

* Apply suggestions from code review

* add more docstring

* update consistency conversion script

---------

Co-authored-by: ayushmangal <ayushmangal@microsoft.com>
Co-authored-by: Ayush Mangal <43698245+ayushtues@users.noreply.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-07-05 19:33:58 +02:00
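A sketch of one-step and multi-step sampling with the consistency models pipeline from #3492. The checkpoint id is one of the converted ImageNet-64 models and is given as an example only.

```python
import torch
from diffusers import ConsistencyModelPipeline

pipe = ConsistencyModelPipeline.from_pretrained(
    "openai/diffusers-cd_imagenet64_l2", torch_dtype=torch.float16
).to("cuda")

# One-step (distilled) sampling.
image = pipe(num_inference_steps=1).images[0]

# Multi-step sampling with explicit timesteps; as noted in the commit message above,
# timesteps take precedence over num_inference_steps.
image = pipe(num_inference_steps=None, timesteps=[22, 0]).images[0]
```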
Pedro Cuenca
07c9a08e67 Add timestep_spacing and steps_offset to schedulers (#3947)
* Add timestep_spacing to DDPM, LMSDiscrete, PNDM.

* Remove spurious line.

* More easy schedulers.

* Add `linspace` to DDIM

* Noise sigma for `trailing`.

* Add timestep_spacing to DEISMultistepScheduler.

Not sure the range is the way it was intended.

* Fix: remove line used to debug.

* Support timestep_spacing in DPMSolverMultistep, DPMSolverSDE, UniPC

* Fix: convert to numpy.

* Use sched. defaults when instantiating from_config

For params not present in the original configuration.

This makes it possible to switch pipeline schedulers even if they use
different timestep_spacing (or any other param).

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Missing args in DPMSolverMultistep

* Test: default args not in config

* Style

* Fix scheduler name in test

* Remove duplicated entries

* Add test for solver_type

This test currently fails in main. When switching from DEIS to UniPC,
solver_type is "logrho" (the default value from DEIS), which gets
translated to "bh1" by UniPC. This is different to the default value for
UniPC: "bh2". This is where the translation happens: 36d22d0709/src/diffusers/schedulers/scheduling_unipc_multistep.py (L171)

* UniPC: use same default for solver_type

Fixes a bug when switching from UniPC from another scheduler (i.e.,
DEIS) that uses a different solver type. The solver is now the same as
if we had instantiated the scheduler directly.

* do not save use default values

* fix more

* fix all

* fix schedulers

* fix more

* finish for real

* finish for real

* flaky tests

* Update tests/pipelines/stable_diffusion/test_stable_diffusion_pix2pix_zero.py

* Default steps_offset to 0.

* Add missing docstrings

* Apply suggestions from code review

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-07-05 15:49:30 +02:00
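A sketch of reconfiguring a scheduler with the timestep_spacing and steps_offset options from #3947. Because missing params fall back to the new scheduler's defaults when instantiating from_config, this kind of switch works even across schedulers with different configs. Values shown are examples, not recommendations.

```python
import torch
from diffusers import StableDiffusionPipeline, DDIMScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Reuse the checkpoint's scheduler config but override the new spacing options.
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config, timestep_spacing="trailing", steps_offset=1
)

image = pipe("an astronaut riding a horse", num_inference_steps=30).images[0]
```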
Patrick von Platen
2837d49079 Fix failing np tests (#3942)
* Fix failing np tests

* Apply suggestions from code review

* Update tests/pipelines/test_pipelines_common.py
2023-07-04 14:00:43 +02:00
Prathik Rao
1997614aa9 avoid upcasting by assigning dtype to noise tensor (#3713)
* avoid upcasting by assigning dtype to noise tensor

* make style

* Update train_unconditional.py

* Update train_unconditional.py

* make style

* add unit test for pickle

* revert change

---------

Co-authored-by: root <root@orttrainingdev8.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Prathik Rao <prathikrao@microsoft.com@orttrainingdev8.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
2023-07-04 07:19:49 +05:30
Patrick von Platen
4e898560ce revert automatic chunking (#3934)
* revert automatic chunking

* Apply suggestions from code review

* revert automatic chunking
2023-07-03 23:12:41 +02:00
Patrick von Platen
332d2bbea3 Improve memory text to video (#3930)
* Improve memory text to video

* Apply suggestions from code review

* add test

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* finish test setup

---------

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-07-03 18:17:34 +02:00
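A memory-saving sketch for the text-to-video pipeline touched by #3930; the exact combination of optimizations and the checkpoint id are illustrative.

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()  # keep only the active sub-model on the GPU
pipe.enable_vae_slicing()        # decode frames in slices instead of all at once

frames = pipe("Spiderman is surfing", num_frames=24).frames
export_to_video(frames, "surfing.mp4")
```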
estelleafl
b8a5dda56e [ldm3d] Update code to be functional with the new checkpoints (#3875)
* fixed typo

* updated doc to be consistent in naming

* make style/quality

* preprocessing for 4 channels and not 6

* make style

* test for 4c

* make style/quality

* fixed test on cpu

---------

Co-authored-by: Aflalo <estellea@isl-iam1.rr.intel.com>
Co-authored-by: Aflalo <estellea@isl-gpu33.rr.intel.com>
Co-authored-by: Aflalo <estellea@isl-gpu38.rr.intel.com>
2023-07-03 18:15:46 +02:00
Andrés Mauricio Repetto Ferrero
572d8e2002 Adding better way to define multiple concepts and also validation capabilities. (#3807)
* - Added validation parameters
- Changed some parameter descriptions to better explain their use.
- Fixed a few typos.
- Added concept_list parameter for better management of multiple subjects
- changed logic for image validation

* - Fixed bad logic for class data root directories

* Defaulting validation_steps to None for an easier logic

* Fixed multiple validation prompts

* Fixed bug on validation negative prompt

* Changed validation logic for tracker.

* Added uuid for validation image labeling

* Fix error when comparing validation prompts and validation negative prompts

* Improved error message when there are more validation negative prompts than validation prompts

* - Changed image tracking number from epoch to global_step
- Added Typing for functions

* Added some validations more when using concept_list parameter and the regular ones.

* Fixed error message

* Added more validations for validation parameters

* Improved messaging for errors

* Fixed validation error for parameters with default values

* - Added train step to image name for validation
- reformatted code

* - Added train step to image's name for validation
- reformatted code

* Updated README.md file.

* reverted back original script of train_dreambooth.py

* reverted back original script of train_dreambooth.py

* left one blank line at the eof

* reverted back setup.py

* reverted back setup.py

* added the same check for when prior-preservation parameters are used without enabling the flag while using the concept_list parameter.

* Ran black formatter.

* fixed a few strings

* fixed import sort with isort and removed fstrings without placeholder

* fixed import order with ruff (since with isort wasn't ok)

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-07-03 17:55:45 +02:00
Patrick von Platen
2e8668f0af Correct controlnet out of list error (#3928)
* Correct controlnet out of list error

* Apply suggestions from code review

* correct tests

* correct tests

* fix

* test all

* Apply suggestions from code review

* test all

* test all

* Apply suggestions from code review

* Apply suggestions from code review

* fix more tests

* Fix more

* Apply suggestions from code review

* finish

* Apply suggestions from code review

* Update src/diffusers/schedulers/scheduling_k_dpm_2_ancestral_discrete.py

* finish
2023-07-03 15:10:07 +02:00
Aisuko
b298484fd0 fix/doc: no import torch issue (#3923)
fix/doc: no import torch issue

Signed-off-by: GitHub <noreply@github.com>
2023-07-03 12:28:42 +02:00
Aisuko
f911287cc9 fix/doc-code: Updating to the latest version parameters (#3924)
fix/doc-code: update to use the new parameter

Signed-off-by: GitHub <noreply@github.com>
2023-07-03 12:28:05 +02:00
Patrick von Platen
62825064bf Add video img2img (#3900)
* Add image to image video

* Improve

* better naming

* make fix copies

* add docs

* finish tests

* trigger tests

* make style

* correct

* finish

* Fix more

* make style

* finish
2023-07-02 13:19:27 +02:00
Aisuko
5439e917ca fix/docs: Fix the broken doc links (#3897)
* fix/docs: Fix the broken doc links

Signed-off-by: GitHub <noreply@github.com>

* Update docs/source/en/using-diffusers/write_own_pipeline.mdx

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

---------

Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-07-01 08:07:59 +02:00
Steven Liu
174dcd697f [docs] Model API (#3562)
* add modelmixin and unets

* remove old model page

* minor fixes

* fix unet2dcondition

* add vqmodel and autoencoderkl

* add rest of models

* fix autoencoderkl path

* fix toctree

* fix toctree again

* apply feedback

* apply feedback

* fix copies

* fix controlnet copy

* fix copies
2023-06-29 17:24:39 -07:00
takuoko
cdf2ae8a84 [Enhance] Add LoRA rank args in train_text_to_image_lora (#3866)
* add rank args in lora finetune

* del network_alpha
2023-06-29 17:09:59 +05:30
Sayak Paul
49949f321d [Tests] add test for checking soft dependencies. (#3847)
* add test for checking soft dependencies.

* address patrick's comments.

* dependency tests should not run twice.

* debugging.

* up.
2023-06-28 22:05:25 +05:30
Uranus
c7469ebe74 fix sde add noise typo (#3839)
* fix sde typo

* fix code style
2023-06-28 15:44:29 +02:00
Wadim Korablin
150013060e Support for manual CLIP loading in StableDiffusionPipeline - txt2img. (#3832)
* Support for manual CLIP loading in StableDiffusionPipeline - txt2img.

* Update src/diffusers/pipelines/stable_diffusion/convert_from_ckpt.py

* Update variables & according docs to match previous style.

* Updated to match style & quality of 'diffusers'

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-06-28 15:29:48 +02:00
Patrick von Platen
219636f7e4 improve tolerance 2023-06-28 13:29:36 +00:00
Vincent Neemie
35bac5edec Fixing the global_step key not found (#3844)
* Fixing the global_step key not found

* Apply suggestions from code review

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-06-28 14:36:33 +02:00
Saurav Maheshkar
0bf6aeb885 feat: rename single-letter vars in resnet.py (#3868)
feat: rename single-letter vars
2023-06-28 13:31:32 +02:00
Joachim Blaafjell Holwech
9a45d7fb76 Add guidance start/stop (#3770)
* Add guidance start/stop

* Add guidance start/stop to inpaint class

* Black formatting

* Add support for guidance for multicontrolnet

* Add inclusive end

* Improve design

* correct imports

* Finish

* Finish all

* Correct more

* make style

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-06-27 01:04:11 +02:00
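A sketch of the guidance start/stop window added in #3770: ControlNet conditioning is applied only between 20% and 80% of the denoising steps in this example. Checkpoint ids and the image URL are placeholders.

```python
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

canny_image = load_image("https://example.com/canny_edges.png")  # hypothetical URL

image = pipe(
    "a futuristic city at dusk",
    image=canny_image,
    control_guidance_start=0.2,  # start applying ControlNet after 20% of the steps
    control_guidance_end=0.8,    # stop applying it after 80% of the steps
).images[0]
```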
regisss
61916fefc4 Update Habana Gaudi doc (#3863)
* Update Habana Gaudi doc

* Fix typo
2023-06-24 21:17:11 +02:00
Sayak Paul
fc6acb6b97 [Docs] add: contributor note in the paradigms docs. (#3852)
add: contributor note in the paradigms docs.
2023-06-22 17:54:35 +05:30
Patrick von Platen
5e3f8fff40 Fix some audio tests (#3841)
* Fix some audio tests

* make style

* fix

* make style
2023-06-22 13:53:27 +02:00
Patrick von Platen
5df2acf7d2 [Conversion] Small fixes (#3848)
* [Conversion] Small fixes

* Update src/diffusers/pipelines/stable_diffusion/convert_from_ckpt.py
2023-06-22 13:52:59 +02:00
Patrick von Platen
88d269461c Correct bad attn naming (#3797)
* relax tolerance slightly

* correct incorrect naming

* correct naming

* correct more

* Apply suggestions from code review

* Fix more

* Correct more

* correct incorrect naming

* Update src/diffusers/models/controlnet.py

* Correct flax

* Correct renaming

* Correct blocks

* Fix more

* Correct more

* make style

* make style

* make style

* make style

* make style

* Fix flax

* make style

* rename

* rename

* rename attn head dim to attention_head_dim

* correct flax

* make style

* improve

* Correct more

* make style

* fix more

* make style

* Update src/diffusers/models/controlnet_flax.py

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

---------

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-06-22 13:52:48 +02:00
Robert Dargavel Smith
0c6d1bc985 fix audio_diffusion tests (#3850) 2023-06-22 12:27:39 +02:00
Sayak Paul
13e781f9a5 fix: random module seeding (#3846) 2023-06-22 12:26:55 +02:00
Will Berman
0bab447670 relax tol attention conversion test (#3842) 2023-06-21 12:35:38 -07:00
Steven Liu
1f02087607 [docs] More API stuff (#3835)
* clean up loaders

* clean up rest of main class apis

* apply feedback
2023-06-21 11:07:23 -07:00
YiYi Xu
95ea538c79 Add ddpm kandinsky (#3783)
* update doc

---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2023-06-21 07:23:18 -10:00
Hans Brouwer
ef3844d3a8 Support ControlNet models with different number of channels in control images (#3815)
support ControlNet models with a different hint_channels value (e.g. TemporalNet2)
2023-06-21 13:11:45 +02:00
dqueue
3ebbaf7c96 Update control_brightness.mdx (#3825) 2023-06-20 14:09:51 +02:00
Andy Shih
73b125df68 [Pipeline] Add new pipeline for ParaDiGMS -- parallel sampling of diffusion models (#3716)
* add paradigms parallel sampling pipeline

* linting

* ran make fix-copies

* add paradigms parallel sampling pipeline

* linting

* ran make fix-copies

* Apply suggestions from code review

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* changes based on review

* add docs for paradigms

* update docs with paradigms abstract

* improve documentation, and add tests for ddim/ddpm batch_step_no_noise

* fix docs and run make fix-copies

* minor changes to docs.

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* move parallel scheduler to new classes for DDPMParallelScheduler and DDIMParallelScheduler

* remove changes for scheduling_ddim, adjust licenses, credits, and commented code

* fix tensor type that is breaking tests

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-06-20 15:04:26 +05:30
Sayak Paul
88eb04489d [Docs] add missing pipelines from the overview pages and minor fixes (#3795)
* add entry for safe stable diffusion to the sd overview page.

* add missing pipelines o the broader overview section in the pipelines.

* address PR feedback./
2023-06-20 11:15:21 +05:30
Sayak Paul
4870626728 [Examples] Improve the model card pushed from the train_text_to_image.py script (#3810)
* refactor: readme serialized from the example when push_to_hub is True.

* fix: batch size arg.

* a bit better formatting

* minor fixes.

* add note on env.

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* condition wandb info better

* make mixed_precision assignment in cli args explicit.

* separate inference block for sample images.

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* address more comments.

* autocast mode.

* correct none image type problem.

* ifx: list assignment.

* minor fix.

---------

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-06-20 08:59:41 +05:30
estelleafl
666743302f [ldm3d] Fixed small typo (#3820)
* fixed typo

* updated doc to be consistent in naming

* make style/quality

---------

Co-authored-by: Aflalo <estellea@isl-iam1.rr.intel.com>
2023-06-19 17:38:02 +02:00
Steven Liu
f7cc9adc05 [docs] Zero SNR (#3776)
* add zero snr doc

* fix image link

* apply feedback

* separate page
2023-06-16 13:19:37 -07:00
Will Berman
59aefe9ea6 device map legacy attention block weight conversion (#3804) 2023-06-16 10:39:20 -07:00
Will Berman
3ddc2b7395 [train text to image] add note to loading from checkpoint (#3806)
add note to loading from checkpoint
2023-06-16 11:54:49 +05:30
Will Berman
d49e2dd54c manual check for checkpoints_total_limit instead of using accelerate (#3681)
* manual check for checkpoints_total_limit instead of using accelerate

* remove controlnet_conditioning_embedding_out_channels
2023-06-15 15:38:54 -07:00
Isotr0py
7bfd2375c7 fix typo (#3800) 2023-06-15 22:00:47 +05:30
Patrick von Platen
ea8ae8c639 Complete set_attn_processor for prior and vae (#3796)
* relax tolerance slightly

* Add more tests

* upload readme

* upload readme

* Apply suggestions from code review

* Improve API Autoencoder KL

* finalize

* finalize tests

* finalize tests

* Apply suggestions from code review

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* up

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-06-15 17:42:49 +02:00
estelleafl
958d9ec723 Ldm3d first PR (#3668)
* added ldm3d pipeline and updated image processor to support depth

* added description

* added paper reference

* added docs

* fixed bug

* added test

* Update tests/pipelines/stable_diffusion/test_stable_diffusion_ldm3d.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update tests/pipelines/stable_diffusion/test_stable_diffusion_ldm3d.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* added reference in indexmdx

* reverted changes to image processor

* added LDM3DOutput

* Fixes with make style

* fix failing tests for make fix-copies

* aligned with our version

* Update pipeline_stable_diffusion_ldm3d.py

updated the guidance scale

* Fix for failing check_code_quality test

* Code review feedback

* Fix typo in ldm3d_diffusion.mdx

* updated the doc accordingly

* copyrights

* fixed test failure

* make style

* added image processor of LDM3D in the documentation:

* added ldm3d doc to toctree

* run make style && make quality

* run make fix-copies

* Update docs/source/en/api/image_processor.mdx

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update docs/source/en/api/pipelines/stable_diffusion/ldm3d_diffusion.mdx

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update docs/source/en/api/pipelines/stable_diffusion/ldm3d_diffusion.mdx

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* updated the safety checker to accept tuple

* make style and make quality

* Update src/diffusers/pipelines/stable_diffusion/__init__.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* LDM3D output

* up

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Aflalo <estellea@isl-gpu27.rr.intel.com>
Co-authored-by: Anahita Bhiwandiwalla <anahita.bhiwandiwalla@intel.com>
Co-authored-by: Aflalo <estellea@isl-gpu26.rr.intel.com>
Co-authored-by: Aflalo <estellea@isl-iam1.rr.intel.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Aflalo <estellea@isl-gpu42.rr.intel.com>
Co-authored-by: Aflalo <estellea@isl-gpu43.rr.intel.com>
2023-06-15 17:36:52 +02:00
Sayak Paul
77f9137f10 feat: add PR template. (#3786)
* feat: add PR template.

* address pr comments.

* Update .github/PULL_REQUEST_TEMPLATE.md

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-06-15 19:41:54 +05:30
Naga Sai Abhinay
231bdf2e56 UnCLIP Image Interpolation -> Keep same initial noise across interpolation steps (#3782)
* Maintain same decoder start noise for all interp steps

* Correct comment

* use batch_size for consistency
2023-06-15 15:15:40 +02:00
Arpan Tripathi
75124fc91e Added LoRA loading to StableDiffusionKDiffusionPipeline (#3751)
Added `LoraLoaderMixin` to `StableDiffusionKDiffusionPipeline`
2023-06-15 15:09:44 +02:00
Patrick von Platen
908e5e9cc6 Fix some bad comment in training scripts (#3798)
* relax tolerance slightly

* correct incorrect naming
2023-06-15 15:07:51 +02:00
cmdr2
2715079344 Fix broken cpu-offloading in legacy inpainting SD pipeline (#3773) 2023-06-15 14:56:40 +02:00
takuoko
1ae15fa64c [Enhance] Update reference (#3723)
* update reference pipeline

* update reference pipeline

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-06-15 14:34:12 +02:00
Sayak Paul
027a365a62 [Bug Report template] modify the issue template to include core maintainers. (#3785)
* modify the issue template to include core maintainers.

* add: entry for audio.

* Update .github/ISSUE_TEMPLATE/bug-report.yml

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-06-15 07:43:07 +05:30
Steven Liu
f96b760658 [docs] Fix Colab notebook cells (#3777)
fix colab notebook cells
2023-06-14 10:21:39 -07:00
YiYi Xu
7761b89d7b update conversion script for Kandinsky unet (#3766)
* update kandinsky conversion script

* style

---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2023-06-14 06:57:53 -10:00
jfozard
ce5504934a Update pipeline_flax_stable_diffusion_controlnet.py (#3306)
Update pipeline_flax_controlnet.py

Change type of images array from jax.numpy.array to numpy.ndarray to permit in-place modification of the array when the safety checker detects an NSFW image.
2023-06-12 14:25:46 -10:00
Patrick von Platen
34d14d7848 [MultiControlNet] Allow save and load (#3747)
* [MultiControlNet] Allow save and load

* Correct more

* [MultiControlNet] Allow save and load

* make style

* Apply suggestions from code review
2023-06-12 18:29:58 +02:00
Patrick von Platen
ef9590712a [Tests] Relax tolerance of flaky failing test (#3755)
relax tolerance slightly
2023-06-12 18:28:30 +02:00
Andranik Movsisyan
a812fb6f5c Text2video zero refinements (#3733)
* fix docs typos. add frame_ids argument to text2video-zero pipeline call

* make style && make quality

* add support of pytorch 2.0 scaled_dot_product_attention for CrossFrameAttnProcessor

* add chunk-by-chunk processing to text2video-zero docs

* make style && make quality

* Update docs/source/en/api/pipelines/text_to_video_zero.mdx

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-06-12 18:03:18 +02:00
Liam Swayne
f46b22ba13 [documentation] grammatical fixes in installation.mdx (#3735)
Update installation.mdx
2023-06-12 17:42:01 +02:00
JeLuF
b2b13cd315 [Documentation] Replace dead link to Flax install guide (#3739)
Replace dead link to Flax documentation

Replace the dead link to the Flax installation guide by a working one: https://flax.readthedocs.io/en/latest/#installation
2023-06-12 17:40:48 +02:00
Patrick von Platen
38adcd21bd [Stable Diffusion Inpaint & ControlNet inpaint] Correct timestep inpaint (#3749)
* Correct timestep inpaint

* make style

* Fix

* Apply suggestions from code review

* make style
2023-06-12 13:59:38 +02:00
Patrick von Platen
790212f4d9 Correct another push token (#3745)
clean up more
2023-06-12 10:29:23 +02:00
Patrick von Platen
11aa105077 Correct Token to upload docs (#3744)
clean up more
2023-06-12 10:04:45 +02:00
Patrick von Platen
abbfe4b5b7 fix zh 2023-06-10 17:54:55 +02:00
Patrick von Platen
1d50f47a58 Merge branch 'main' of https://github.com/huggingface/diffusers 2023-06-10 17:04:59 +02:00
Patrick von Platen
e891b00dfc build docs 2023-06-10 16:58:59 +02:00
Patrick von Platen
27af55d1b4 build docs 2023-06-10 16:56:41 +02:00
YiYi Xu
05361960f2 remove seed (#3734)
* remove seed

* style

---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2023-06-09 08:27:02 -10:00
Patrick von Platen
c42f6ee43e Post 0.17.0 release (#3721)
* Post release

* Post release
2023-06-08 18:08:49 +02:00
Patrick von Platen
f523b11a10 Fix loading if unexpected keys are present (#3720)
* Fix loading

* make style
2023-06-08 16:48:06 +02:00
Zachary Mueller
79fa94ea8b Apply deprecations from Accelerate (#3714)
Apply deprecations
2023-06-08 16:44:22 +02:00
Patrick von Platen
a06317abea [Actions] Fix actions (#3712) 2023-06-07 18:57:28 +01:00
YiYi Xu
500a3ff9ef [docs] add image processor documentation (#3710)
add image processor

Co-authored-by: yiyixuxu <yixu@yis-macbook-pro.lan>
2023-06-07 18:35:07 +01:00
Mishig
8caa530069 [doc build] Use secrets (#3707)
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-06-07 18:21:16 +01:00
Kadir Nar
cd6186907c [Community] Support StableDiffusionCanvasPipeline (#3590)
* added StableDiffusionCanvasPipeline pipeline

* Added utils codes to pipe_utils file.

* make style

* delete mixture.py and Text2ImageRegion class

* make style

* Added the codes to the readme.md file.

* Moved functions from pipeline_utils to mix_canvas
2023-06-07 17:43:33 +01:00
Patrick von Platen
803d653748 Fix custom releases (#3708)
* Fix custom releases

* make style
2023-06-07 17:33:54 +01:00
Alex McKinney
cd9d0913d9 Fixes eval generator init in train_text_to_image_lora.py (#3678) 2023-06-07 15:37:13 +05:30
Pedro Cuenca
fdec23188a [Tests] Run slow matrix sequentially (#3500)
[tests] Run slow matrix sequentially.
2023-06-07 11:01:35 +01:00
Max-We
12a232efa9 Fix schedulers zero SNR and rescale classifier free guidance (#3664)
* Implement option for rescaling betas to zero terminal SNR

* Implement rescale classifier free guidance in pipeline_stable_diffusion.py

* focus on DDIM

* make style

* make style

* make style

* make style

* Apply suggestions from Peter Lin

* Apply suggestions from Peter Lin

* make style

* Apply suggestions from code review

* Apply suggestions from code review

* make style

* make style

---------

Co-authored-by: MaxWe00 <gitlab.9v1lq@slmail.me>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-06-07 10:57:10 +01:00
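A sketch combining the two options from #3664: rescaling betas to zero terminal SNR on the scheduler and rescaling classifier-free guidance at inference. The checkpoint id is a placeholder, and 0.7 is simply the value suggested in the referenced paper.

```python
import torch
from diffusers import StableDiffusionPipeline, DDIMScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config,
    rescale_betas_zero_snr=True,  # rescale betas so the terminal SNR is zero
    timestep_spacing="trailing",  # sample starting from the last timestep
)

image = pipe(
    "a photo of an astronaut riding a horse",
    guidance_rescale=0.7,  # rescale classifier-free guidance to reduce overexposure
).images[0]
```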
Patrick von Platen
74fd735eb0 Add draft for lora text encoder scale (#3626)
* Add draft for lora text encoder scale

* Improve naming

* fix: training dreambooth lora script.

* Apply suggestions from code review

* Update examples/dreambooth/train_dreambooth_lora.py

* Apply suggestions from code review

* Apply suggestions from code review

* add lora mixin when fit

* add lora mixin when fit

* add lora mixin when fit

* fix more

* fix more

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-06-06 22:47:46 +01:00
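A sketch of the LoRA scale knob from #3626: the scale passed through cross_attention_kwargs also applies to the text-encoder LoRA. The LoRA repository id is a placeholder.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("some-user/some-lora")  # hypothetical LoRA repo

image = pipe(
    "a pokemon with blue eyes",
    cross_attention_kwargs={"scale": 0.5},  # 0.0 disables the LoRA, 1.0 is full strength
).images[0]
```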
Jason C.H
2de9e2df36 Fix from_ckpt for Stable Diffusion 2.x (#3662) 2023-06-06 22:39:11 +01:00
Isotr0py
11b3002b48 Support views batch for panorama (#3632)
* support views batch for panorama

* add entry for the new argument

* format entry for the new argument

* add view_batch_size test

* fix batch test and a boundary condition

* add more docstrings

* fix a typos

* fix typos

* add: entry to the doc about view_batch_size.

* Revert "add: entry to the doc about view_batch_size."

This reverts commit a36aeaa9ed.

* add a tip on .

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-06-07 02:50:02 +05:30
stano
10f4ecd177 Fix the Kandinsky docstring examples (#3695)
- use the correct Prior hub model id
- use the new names in KandinskyPriorPipelineOutput
2023-06-06 22:18:14 +01:00
Sayak Paul
de16f64667 feat: when using PT 2.0 use LoRAAttnProcessor2_0 for text enc LoRA. (#3691) 2023-06-06 21:20:53 +01:00
YiYi Xu
017ee1609b refactor Image processor for x4 upscaler (#3692)
* refactor x4 upscaler

* style

* copies

---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2023-06-06 21:08:36 +01:00
Sayak Paul
8669e8313d [LoRA] feat: add lora attention processor for pt 2.0. (#3594)
* feat: add lora attention processor for pt 2.0.

* explicit context manager for SDPA.

* switch to flash attention

* make shapes compatible to work optimally with SDPA.

* fix: circular import problem.

* explicitly specify the flash attention kernel in sdpa

* fall back to efficient attention context manager.

* remove explicit dispatch.

* fix: removed processor.

* fix: remove optional from type annotation.

* feat: make changes regarding LoRAAttnProcessor2_0.

* remove confusing warning.

* formatting.

* relax tolerance for PT 2.0

* fix: loading message.

* remove unnecessary logging.

* add: entry to the docs.

* add: network_alpha argument.

* relax tolerance.
2023-06-06 14:56:05 +05:30
Takuma Mori
b45204ea5a Add function to remove monkey-patch for text encoder LoRA (#3649)
* merge undoable-monkeypatch

* remove TEXT_ENCODER_TARGET_MODULES, refactoring

* move create_lora_weight_file
2023-06-06 14:06:13 +05:30
Steven Liu
a8b0f42c38 [docs] Fix link to loader method (#3680)
fix link to load_lora_weights
2023-06-06 13:37:47 +05:30
Will Berman
41ae670828 move activation dispatches into helper function (#3656)
* move activation dispatches into helper function

* tests
2023-06-05 12:30:48 -07:00
Will Berman
462956be7b small tweaks for parsing thibaudz controlnet checkpoints (#3657) 2023-06-05 10:24:31 -07:00
YiYi Xu
5990014700 [WIP]Vae preprocessor refactor (PR1) (#3557)
VaeImageProcessor.preprocess refactor

* refactored VaeImageProcessor 
   -  allow passing optional height and width argument to resize()
   - add convert_to_rgb
* refactored prepare_latents method for img2img pipelines so that if we pass latents directly as image input, it will not encode it again
* added a test in test_pipelines_common.py to test latents as image inputs
* refactored img2img pipelines that accept latents as image: 
   - controlnet img2img, stable diffusion img2img , instruct_pix2pix

---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-06-05 07:11:00 -10:00
Steven Liu
1a6a647e06 [docs] More API fixes (#3640)
* part 2 of api fixes

* move randn_tensor

* add to toctree

* apply feedback

* more feedback
2023-06-05 09:47:26 -07:00
Sayak Paul
995bbcb9aa [UniDiffuser test] fix one test so that it runs correctly on V100 (#3675)
* fix: assertion.

* assertion fix.
2023-06-05 17:42:31 +05:30
pdoane
d0416ab090 Update Compel documentation for textual inversions (#3663)
* Update Compel documentation for textual inversions

* Fix typo
2023-06-05 16:46:27 +05:30
Vladislav Lyubimov
1994dbcb5e Fix from_ckpt not working properly on windows (#3666) 2023-06-05 11:55:37 +01:00
Patrick von Platen
262d539a8a Correct multi gpu dreambooth (#3673)
Correct multi gpu
2023-06-05 11:03:11 +01:00
Will Berman
0fc2fb71c1 dreambooth upscaling fix added latents (#3659) 2023-06-05 10:32:16 +01:00
Steven Liu
523a50a8eb [docs] Load A1111 LoRA (#3629)
* load a1111 lora

* fix

* apply feedback

* fix
2023-06-05 11:05:42 +05:30
0x1355
de45af4a46 Allow setting num_cycles for cosine_with_restarts lr scheduler (#3606)
Expose the num_cycles kwarg of get_scheduler() through args.lr_num_cycles.
2023-06-05 10:18:29 +05:30
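A small sketch of what this exposes from a training script, using `get_scheduler` from `diffusers.optimization`; the step counts are placeholders:

```python
import torch
from diffusers.optimization import get_scheduler

optimizer = torch.optim.AdamW([torch.nn.Parameter(torch.zeros(1))], lr=1e-4)

# num_cycles is what --lr_num_cycles forwards; here the cosine schedule restarts 3 times.
lr_scheduler = get_scheduler(
    "cosine_with_restarts",
    optimizer=optimizer,
    num_warmup_steps=500,
    num_training_steps=10_000,
    num_cycles=3,
)
```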
0x1355
b95cbdf6fc Set step_rules correctly for piecewise_constant scheduler (#3605)
So that schedule_func() calls get_piecewise_constant_schedule() with the correctly named kwarg.
2023-06-05 10:16:26 +05:30
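And the piecewise-constant counterpart; the rule-string format below is an assumption based on `get_piecewise_constant_schedule` (multiply the base lr by 1.0 until step 100, by 0.1 until step 1000, then by 0.01 afterwards):

```python
import torch
from diffusers.optimization import get_scheduler

optimizer = torch.optim.AdamW([torch.nn.Parameter(torch.zeros(1))], lr=1e-4)

# step_rules is only consumed by the piecewise_constant scheduler.
lr_scheduler = get_scheduler(
    "piecewise_constant",
    optimizer=optimizer,
    step_rules="1:100,0.1:1000,0.01",
)
```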
Will Berman
7a39691362 linting fix (#3653) 2023-06-02 13:33:19 -07:00
Will Berman
5911a3aa47 dreambooth if docs - stage II, more info (#3628)
* dreambooth if docs - stage II, more info

* Update docs/source/en/training/dreambooth.mdx

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update docs/source/en/training/dreambooth.mdx

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update docs/source/en/training/dreambooth.mdx

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* download instructions for downsized images

* update source README to match docs

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-06-02 10:37:13 -07:00
Will Berman
b7af946138 set config from original module but set compiled module on class (#3650)
* set config from original module but set compiled module on class

* add test
2023-06-02 10:26:41 -07:00
asfiyab-nvidia
d3717e6368 add Stable Diffusion TensorRT Inpainting pipeline (#3642)
* add tensorrt inpaint pipeline

Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>

* run make style

Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>

---------

Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-06-02 18:14:31 +01:00
Kadir Nar
0dbdc0cbae [Community Doc] Updated the filename and readme file. (#3634)
* Updated the filename and readme file.

* reformatter

* reformatter
2023-06-02 17:53:09 +01:00
YiYi Xu
0e8688113a fix inpainting pipeline when providing initial latents (#3641)
* fix latents

* fix copies

---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2023-06-02 17:03:15 +01:00
Kashif Rasul
f1d4743394 fixed typo in example train_text_to_image.py (#3608)
fixed typo
2023-06-02 20:54:54 +05:30
Lachlan Nicholson
a6c7b5b6b7 Iterate over unique tokens to avoid duplicate replacements for multivector embeddings (#3588)
* iterate over unique tokens to avoid duplicate replacements

* added test for multiple references to multi embedding

* adhere to black formatting

* reorder test post-rebase
2023-06-02 16:10:22 +01:00
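A schematic of the fix, with hypothetical placeholder names; iterating over the unique tokens ensures each multi-vector placeholder is expanded exactly once, even when the prompt references it several times (expanding twice would corrupt the already-expanded `_1`, `_2` suffixes):

```python
from typing import List

def expand_multivector_tokens(prompt: str, tokens: List[str], num_vectors: int) -> str:
    """Expand each placeholder into 'tok tok_1 ... tok_{n-1}', once per unique token."""
    for token in set(tokens):
        expansion = " ".join([token] + [f"{token}_{i}" for i in range(1, num_vectors)])
        prompt = prompt.replace(token, expansion)
    return prompt

print(expand_multivector_tokens("a <cat-toy> next to a <cat-toy>", ["<cat-toy>", "<cat-toy>"], 3))
```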
Takuma Mori
8e552bb4fe Support Kohya-ss style LoRA file format (in a limited capacity) (#3437)
* add _convert_kohya_lora_to_diffusers

* make style

* add scaffold

* match result: unet attention only

* fix monkey-patch for text_encoder

* with CLIPAttention

While the terrible images are no longer produced,
the results do not match those from the hook version.
This may be due to not setting the network_alpha value.

* add to support network_alpha

* generate diff image

* fix monkey-patch for text_encoder

* add test_text_encoder_lora_monkey_patch()

* verify that it's okay to release the attn_procs

* fix closure version

* add comment

* Revert "fix monkey-patch for text_encoder"

This reverts commit bb9c61e6fa.

* Fix to reuse utility functions

* make LoRAAttnProcessor targets to self_attn

* fix LoRAAttnProcessor target

* make style

* fix split key

* Update src/diffusers/loaders.py

* remove TEXT_ENCODER_TARGET_MODULES loop

* add print memory usage

* remove test_kohya_loras_scaffold.py

* add: doc on LoRA civitai

* remove print statement and refactor in the doc.

* fix state_dict test for kohya-ss style lora

* Apply suggestions from code review

Co-authored-by: Takuma Mori <takuma104@gmail.com>

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-06-02 17:40:24 +05:30
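A usage sketch for the Kohya-ss support added here; `lora.safetensors` is a placeholder filename for a LoRA file downloaded from e.g. CivitAI:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# load_lora_weights also accepts Kohya-ss style checkpoints after this change.
pipe.load_lora_weights(".", weight_name="lora.safetensors")
image = pipe("masterpiece, best quality, 1girl", num_inference_steps=30).images[0]
```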
Patrick von Platen
32ea2142c0 [Kandinsky] Improve kandinsky API a bit (#3636)
* Improve docs

* up

* Update docs/source/en/api/pipelines/kandinsky.mdx

* up

* up

* correct more

* further improve

* Update docs/source/en/api/pipelines/kandinsky.mdx

Co-authored-by: YiYi Xu <yixu310@gmail.com>

---------

Co-authored-by: YiYi Xu <yixu310@gmail.com>
2023-06-02 08:57:20 +01:00
Sayak Paul
55dbfa0229 [Docs] include the instruction-tuning blog link in the InstructPix2Pix docs (#3644)
include the instruction-tuning blog link.
2023-06-02 08:04:35 +05:30
Will Berman
4f14b36329 Full Dreambooth IF stage II upscaling (#3561)
* update dreambooth lora to work with IF stage II

* Update dreambooth script for IF stage II upscaler
2023-05-31 09:39:31 -07:00
Will Berman
f751b8844e update dreambooth lora to work with IF stage II (#3560) 2023-05-31 09:39:03 -07:00
Prathik Rao
abb89da4de update code to reflect latest changes as of May 30th (#3616)
* update code to reflect latest changes as of May 30th

* update text to image example

* reflect changes to textual inversion

* make style

* fix typo

* Revert unnecessary readme changes

---------

Co-authored-by: root <root@orttrainingdev8.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
Co-authored-by: Prathik Rao <prathikrao@microsoft.com@orttrainingdev8.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
2023-05-31 11:29:04 +02:00
Will Berman
7d0ac4eeab goodbye frog (#3617) 2023-05-30 23:18:01 +01:00
Patrick von Platen
0cc3a7a123 Make sure we also change the config when setting encoder_hid_dim_type=="text_proj" and allow xformers (#3615)
* fix if

* make style

* make style

* add tests for xformers

* make style

* update
2023-05-30 20:47:14 +01:00
Patrick von Platen
9d3ff0794d fix tests (#3614) 2023-05-30 18:59:07 +01:00
Patrick von Platen
a359ab4e29 Update README.md 2023-05-30 18:26:32 +01:00
Patrick von Platen
160c377ddc Make style 2023-05-30 13:14:09 +01:00
Denis
bb22d546c0 [Community] CLIP Guided Images Mixing with Stable DIffusion Pipeline (#3587)
* added clip_guided_images_mixing_stable_diffusion file and readme description

* apply pre-commit

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-05-30 13:13:45 +01:00
Greg Hunkins
799f5b4e12 [Feat] Enable State Dict For Textual Inversion Loader (#3439)
* enable state dict for textual inversion loader

* Empty-Commit | restart CI

* Empty-Commit | restart CI

* Empty-Commit | restart CI

* Empty-Commit | restart CI

* add tests

* fix tests

* fix tests

* fix tests

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-05-30 13:13:34 +01:00
takuoko
07ef4855cd [Community, Enhancement] Add reference tricks in README (#3589)
add reference tricks
2023-05-30 12:38:16 +01:00
Kadir Nar
6cbddf558a [Community] Support StableDiffusionTilingPipeline (#3586)
* added mixture pipeline

* added docstring

* update docstring
2023-05-30 12:24:15 +01:00
Rupert Menneer
35a740427e #3487 Fix inpainting strength for various samplers (#3532)
* Throw error if strength adjusted num_inference_steps < 1

* Added new fast test to check ValueError raised when num_inference_steps < 1

when strength adjusts the num_inference_steps then the inpainting pipeline should fail

* fix #3487 initial latents are now only scaled by init_noise_sigma when pure noise

updated this commit w.r.t the latest merge here: https://github.com/huggingface/diffusers/pull/3533

* fix

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-05-30 12:17:42 +01:00
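Roughly the boundary condition being guarded here, as a sketch with illustrative names: the strength parameter truncates the timestep schedule, and a strength low enough to leave zero steps should raise rather than silently produce a broken result.

```python
def get_timesteps(num_inference_steps: int, strength: float, timesteps):
    # Keep only the last `strength` fraction of the schedule, as img2img/inpaint do.
    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    t_start = max(num_inference_steps - init_timestep, 0)
    remaining = timesteps[t_start:]
    if len(remaining) < 1:
        raise ValueError(
            f"strength={strength} leaves no denoising steps; "
            "increase strength or num_inference_steps."
        )
    return remaining, num_inference_steps - t_start
```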
Sayak Paul
0612f48cd0 [UniDiffuser Tests] Fix some tests (#3609)
* fix: unidiffuser test failures.

* living room.
2023-05-30 12:07:18 +01:00
Kadir Nar
c059cc0992 [docs] update the broken links (#3577) 2023-05-30 11:44:53 +01:00
Patrick von Platen
c0f867afd1 Fix temb attention (#3607)
* Fix temb attention

* Apply suggestions from code review

* make style

* Add tests and fix docker

* Apply suggestions from code review
2023-05-30 11:26:23 +01:00
Sayak Paul
c6ae883751 remove print statements from attention processor. (#3592) 2023-05-29 09:20:31 +05:30
Steven Liu
5559d04237 [docs] Working with different formats (#3534)
* add ckpt

* fix format

* apply feedback

* fix

* include pb

* rename file
2023-05-26 14:37:51 -07:00
Brandon
9917c32916 [docs] update the broken links (#3568)
update the broken links

update the broken links for training folder doc
2023-05-26 12:10:32 -07:00
Steven Liu
ab986769f1 [docs] Maintenance (#3552)
* doc fixes

* fix latex

* parenthesis on inside
2023-05-26 12:04:15 -07:00
Will Berman
bdc75e753d [IF super res] correctly normalize PIL input (#3536)
* [IF super res] correctly normalize PIL input

* 175 -> 127.5
2023-05-26 10:59:44 -07:00
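The 175 -> 127.5 fix is the standard [0, 255] -> [-1, 1] mapping; a minimal sketch:

```python
import numpy as np
from PIL import Image

def normalize_pil(image: Image.Image) -> np.ndarray:
    # 127.5 is half of 255, so uint8 pixels map onto [-1.0, 1.0].
    return np.array(image).astype(np.float32) / 127.5 - 1.0
```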
Leon Lin
1d1f648c6b fix dreambooth attention mask (#3541) 2023-05-26 10:58:50 -07:00
Takuma Mori
67cf0445ef Fix to apply LoRAXFormersAttnProcessor instead of LoRAAttnProcessor when xFormers is enabled (#3556)
* fix to use LoRAXFormersAttnProcessor

* add test

* using new LoraLoaderMixin.save_lora_weights

* add test_lora_save_load_with_xformers
2023-05-26 17:33:25 +05:30
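The selection logic this fixes, sketched (the boolean flag is an assumption for illustration): when the UNet runs with xFormers memory-efficient attention, the LoRA layers need the matching xFormers processor rather than the default one.

```python
from diffusers.models.attention_processor import LoRAAttnProcessor, LoRAXFormersAttnProcessor

def lora_processor_class(uses_xformers: bool):
    # Mismatched processors are what produced the bug this PR addresses.
    return LoRAXFormersAttnProcessor if uses_xformers else LoRAAttnProcessor
```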
dg845
352ca3198c [WIP] Add UniDiffuser model and pipeline (#2963)
* Fix a bug of pano when not doing CFG (#3030)

* Fix a bug of pano when not doing CFG

* enhance code quality

* apply formatting.

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Text2video zero refinements (#3070)

* fix progress bar issue in pipeline_text_to_video_zero.py. Copy scheduler after first backward

* fix tensor loading in test_text_to_video_zero.py

* make style && make quality

* Release: v0.15.0

* [Tests] Speed up panorama tests (#3067)

* fix: norm group test for UNet3D.

* chore: speed up the panorama tests (fast).

* set default value of _test_inference_batch_single_identical.

* fix: batch_sizes default value.

* [Post release] v0.16.0dev (#3072)

* Adds profiling flags, computes train metrics average. (#3053)

* WIP controlnet training

- bugfix --streaming
- bugfix running report_to!='wandb'
- adds memory profile before validation

* Adds final logging statement.

* Sets train epochs to 11.

Looking at a longer ~16ep run, we see only good validation images
after ~11ep:

https://wandb.ai/andsteing/controlnet_fill50k/runs/3j2hx6n8

* Removes --logging_dir (it's not used).

* Adds --profile flags.

* Updates --output_dir=runs/fill-circle-{timestamp}.

* Compute mean of `train_metrics`.

Previously `train_metrics[-1]` was logged, resulting in very bumpy train
metrics.

* Improves logging a bit.

- adds l2_grads gradient norm logging
- adds steps_per_sec
- sets walltime as x coordinate of train/step
- logs controlnet_params config

* Adds --ccache (doesn't really help though).

* minor fix in controlnet flax example (#2986)

* fix the error when push_to_hub but not log validation

* contronet_from_pt & controlnet_revision

* add intermediate checkpointing to the guide

* Bugfix --profile_steps

* Sets `RACKER_PROJECT_NAME='controlnet_fill50k'`.

* Logs fractional epoch.

* Adds relative `walltime` metric.

* Adds `StepTraceAnnotation` and uses `global_step` instead of `step`.

* Applied `black`.

* Streamlines commands in README a bit.

* Removes `--ccache`.

This makes only a very small difference (~1 min) with this model size, so removing
the option introduced in cdb3cc.

* Re-ran `black`.

* Update examples/controlnet/README.md

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Converts spaces to tab.

* Removes repeated args.

* Skips first step (compilation) in profiling

* Updates README with profiling instructions.

* Unifies tabs/spaces in README.

* Re-ran style & quality.

---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* [Pipelines] Make sure that None functions are correctly not saved (#3080)

* doc string example remove from_pt (#3083)

* [Tests] parallelize (#3078)

* [Tests] parallelize

* finish folder structuring

* Parallelize tests more

* Correct saving of pipelines

* make sure logging level is correct

* try again

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

---------

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Throw deprecation warning for return_cached_folder (#3092)

Throw deprecation warning

* Allow SD attend and excite pipeline to work with any size output images (#2835)

Allow stable diffusion attend and excite pipeline to work with any size output image. Re: #2476, #2603

* [docs] Update community pipeline docs (#2989)

* update community pipeline docs

* fix formatting

* explain sharing workflows

* Add support for Guess Mode in StableDiffusionControlNetPipeline (#2998)

* add guess mode (WIP)

* fix uncond/cond order

* support guidance_scale=1.0 and batch != 1

* remove magic coeff

* add docstring

* add integration test

* add document to controlnet.mdx

* made the comments a bit more explanatory

* fix table

* fix default value for attend-and-excite (#3099)

* fix default

* remove one line as requested by gc team (#3077)

remove one line

* ddpm custom timesteps (#3007)

add custom timesteps test

add custom timesteps descending order check

docs

timesteps -> custom_timesteps

can only pass one of num_inference_steps and timesteps

* Fix breaking change in `pipeline_stable_diffusion_controlnet.py` (#3118)

fix breaking change

* Add global pooling to controlnet (#3121)

* [Bug fix] Fix img2img processor with safety checker (#3127)

Fix img2img processor with safety checker

* [Bug fix] Make sure correct timesteps are chosen for img2img (#3128)

Make sure correct timesteps are chosen for img2img

* Improve deprecation warnings (#3131)

* Fix config deprecation (#3129)

* Better deprecation message

* Better deprecation message

* Better doc string

* Fixes

* fix more

* fix more

* Improve __getattr__

* correct more

* fix more

* fix

* Improve more

* more improvements

* fix more

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* make style

* Fix all rest & add tests & remove old deprecation fns

---------

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* feat: verification of multi-gpu support for select examples. (#3126)

* feat: verification of multi-gpu support for select examples.

* add: multi-gpu training sections to the relevant doc pages.

* speed up attend-and-excite fast tests (#3079)

* Optimize log_validation in train_controlnet_flax (#3110)

extract pipeline from log_validation

* make style

* Correct textual inversion readme (#3145)

* Update README.md

* Apply suggestions from code review

* Add unet act fn to other model components (#3136)

Adding act fn config to the unet timestep class embedding and conv
activation.

The custom activation defaults to silu which is the default
activation function for both the conv act and the timestep class
embeddings so default behavior is not changed.

The only unet which use the custom activation is the stable diffusion
latent upscaler https://huggingface.co/stabilityai/sd-x2-latent-upscaler/blob/main/unet/config.json
(I ran a script against the hub to confirm).
The latent upscaler does not use the conv activation nor the timestep
class embeddings so we don't change its behavior.

* class labels timestep embeddings projection dtype cast (#3137)

This mimics the dtype cast for the standard time embeddings

* [ckpt loader] Allow loading the Inpaint and Img2Img pipelines, while loading a ckpt model (#2705)

* [ckpt loader] Allow loading the Inpaint and Img2Img pipelines, while loading a ckpt model

* Address review comment from PR

* PyLint formatting

* Some more pylint fixes, unrelated to our change

* Another pylint fix

* Styling fix

* add from_ckpt method as Mixin (#2318)

* add mixin class for pipeline from original sd ckpt

* Improve

* make style

* merge main into

* Improve more

* fix more

* up

* Apply suggestions from code review

* finish docs

* rename

* make style

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Add TensorRT SD/txt2img Community Pipeline to diffusers along with TensorRT utils (#2974)

* Add SD/txt2img Community Pipeline to diffusers along with TensorRT utils

Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>

* update installation command

Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>

* update tensorrt installation

Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>

* changes
1. Update setting of cache directory
2. Address comments: merge utils and pipeline code.
3. Address comments: Add section in README

Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>

* apply make style

Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>

---------

Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Correct `Transformer2DModel.forward` docstring (#3074)

⚙️chore(transformer_2d) update function signature for encoder_hidden_states

* Update pipeline_stable_diffusion_inpaint_legacy.py (#2903)

* Update pipeline_stable_diffusion_inpaint_legacy.py

* fix preprocessing of Pil images with adequate batch size

* revert map

* add tests

* reformat

* Update test_stable_diffusion_inpaint_legacy.py

* Update test_stable_diffusion_inpaint_legacy.py

* Update test_stable_diffusion_inpaint_legacy.py

* Update test_stable_diffusion_inpaint_legacy.py

* next try to fix the style

* wth is this

* Update testing_utils.py

* Update testing_utils.py

* Update test_stable_diffusion_inpaint_legacy.py

* Update test_stable_diffusion_inpaint_legacy.py

* Update test_stable_diffusion_inpaint_legacy.py

* Update test_stable_diffusion_inpaint_legacy.py

* Update test_stable_diffusion_inpaint_legacy.py

* Update test_stable_diffusion_inpaint_legacy.py

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Modified altdiffusion pipeline to support altdiffusion-m18 (#2993)

* Modified altdiffusion pipeline to support altdiffusion-m18

* Modified altdiffusion pipeline to support altdiffusion-m18

* Modified altdiffusion pipeline to support altdiffusion-m18

* Modified altdiffusion pipeline to support altdiffusion-m18

* Modified altdiffusion pipeline to support altdiffusion-m18

* Modified altdiffusion pipeline to support altdiffusion-m18

* Modified altdiffusion pipeline to support altdiffusion-m18

---------

Co-authored-by: root <fulong_ye@163.com>

* controlnet training resize inputs to multiple of 8 (#3135)

controlnet training center crop input images to multiple of 8

The pipeline code resizes inputs to multiples of 8.
Not doing this resizing in the training script is causing
the encoded image to have different height/width dimensions
than the encoded conditioning image (which uses a separate
encoder that's part of the controlnet model).

We resize and center crop the inputs to make sure they're the
same size (as well as all other images in the batch). We also
check that the initial resolution is a multiple of 8.

* adding custom diffusion training to diffusers examples (#3031)

* diffusers==0.14.0 update

* custom diffusion update

* custom diffusion update

* custom diffusion update

* custom diffusion update

* custom diffusion update

* custom diffusion update

* custom diffusion

* custom diffusion

* custom diffusion

* custom diffusion

* custom diffusion

* apply formatting and get rid of bare except.

* refactor readme and other minor changes.

* misc refactor.

* fix: repo_id issue and loaders logging bug.

* fix: save_model_card.

* fix: save_model_card.

* fix: save_model_card.

* add: doc entry.

* refactor doc,.

* custom diffusion

* custom diffusion

* custom diffusion

* apply style.

* remove trailing whitespace.

* fix: toctree entry.

* remove unnecessary print.

* custom diffusion

* custom diffusion

* custom diffusion test

* custom diffusion xformer update

* custom diffusion xformer update

* custom diffusion xformer update

---------

Co-authored-by: Nupur Kumari <nupurkumari@Nupurs-MacBook-Pro.local>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Nupur Kumari <nupurkumari@nupurs-mbp.wifi.local.cmu.edu>

* make style

* Update custom_diffusion.mdx (#3165)

Add missing newlines for rendering the links correctly

* Added distillation for quantization example on textual inversion. (#2760)

* Added distillation for quantization example on textual inversion.

Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>

* refined readme and code style.

Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>

* Update text2images.py

* refined code of model load and added compatibility check.

Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>

* fixed code style.

Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>

* fix C403 [*] Unnecessary `list` comprehension (rewrite as a `set` comprehension)

Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>

---------

Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>

* Update Noise Autocorrelation Loss Function for Pix2PixZero Pipeline (#2942)

* Update Pix2PixZero Auto-correlation Loss

* Add fast inversion tests

* Clarify purpose and mark as deprecated

Fix inversion prompt broadcasting

* Register modules set to `None` in config for `test_save_load_optional_components`

* Update new tests to coordinate with #2953

* [DreamBooth] add text encoder LoRA support in the DreamBooth training script (#3130)

* add: LoRA text encoder support for DreamBooth example.

* fix initialization.

* fix: modification call.

* add: entry in the readme.

* use dog dataset from hub.

* fix: params to clip.

* add entry to the LoRA doc.

* add: tests for lora.

* remove unnecessary list comprehension./

* Update Habana Gaudi documentation (#3169)

* Update Habana Gaudi doc

* Fix tables

* Add model offload to x4 upscaler (#3187)

* Add model offload to x4 upscaler

* fix

* [docs] Deterministic algorithms (#3172)

deterministic algos

* Update custom_diffusion.mdx to credit the author (#3163)

* Update custom_diffusion.mdx

* fix: unnecessary list comprehension.

* Fix TensorRT community pipeline device set function (#3157)

pass silence_dtype_warnings as kwarg

Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* make `from_flax` work for controlnet (#3161)

fix from_flax

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* [docs] Clarify training args (#3146)

* clarify training arg

* apply feedback

* Multi Vector Textual Inversion (#3144)

* Multi Vector

* Improve

* fix multi token

* improve test

* make style

* Update examples/test_examples.py

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* update

* Finish

* Apply suggestions from code review

---------

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Add `Karras sigmas` to HeunDiscreteScheduler (#3160)

* Add karras pattern to discrete heun scheduler

* Add integration test

* Fix failing CI on pytorch test on M1 (mps)

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* [AudioLDM] Fix dtype of returned waveform (#3189)

* Fix bug in train_dreambooth_lora (#3183)

* Update train_dreambooth_lora.py

fix bug

* Update train_dreambooth_lora.py

* [Community Pipelines] Update lpw_stable_diffusion pipeline (#3197)

* Update lpw_stable_diffusion.py

* fix cpu offload

* Make sure VAE attention works with Torch 2_0 (#3200)

* Make sure attention works with Torch 2_0

* make style

* Fix more

* Revert "[Community Pipelines] Update lpw_stable_diffusion pipeline" (#3201)

Revert "[Community Pipelines] Update lpw_stable_diffusion pipeline (#3197)"

This reverts commit 9965cb50ea.

* [Bug fix] Fix batch size attention head size mismatch (#3214)

* fix mixed precision training on train_dreambooth_inpaint_lora (#3138)

cast to weight dtype

* adding enable_vae_tiling and disable_vae_tiling functions (#3225)

adding enable_vae_tiling and disable_vae_tiling functions

* Add ControlNet v1.1 docs (#3226)

Add v1.1 docs

* Fix issue in maybe_convert_prompt (#3188)

When the token used for textual inversion does not have any special symbols (e.g. it is not surrounded by <>), the tokenizer does not properly split the replacement tokens.  Adding a space for the padding tokens fixes this.

* Sync cache version check from transformers (#3179)

sync cache version check from transformers

* Fix docs text inversion (#3166)

* Fix docs text inversion

* Apply suggestions from code review

* add model (#3230)

* add

* clean

* up

* clean up more

* fix more tests

* Improve docs further

* improve

* more fixes docs

* Improve docs more

* Update src/diffusers/models/unet_2d_condition.py

* fix

* up

* update doc links

* make fix-copies

* add safety checker and watermarker to stage 3 doc page code snippets

* speed optimizations docs

* memory optimization docs

* make style

* add watermarking snippets to doc string examples

* make style

* use pt_to_pil helper functions in doc strings

* skip mps tests

* Improve safety

* make style

* new logic

* fix

* fix bad onnx design

* make new stable diffusion upscale pipeline model arguments optional

* define has_nsfw_concept when non-pil output type

* lowercase linked to notebook name

---------

Co-authored-by: William Berman <WLBberman@gmail.com>

* Allow return pt x4 (#3236)

* Add all files

* update

* Allow fp16 attn for x4 upscaler (#3239)

* Add all files

* update

* Make sure vae is memory efficient for PT 1

* make style

* fix fast test (#3241)

* Adds a document on token merging (#3208)

* add document on token merging.

* fix headline.

* fix: headline.

* add some samples for comparison.

* [AudioLDM] Update docs to use updated ckpt (#3240)

* [AudioLDM] Update docs to use updated ckpt

* make style

* Release: v0.16.0

* Post release for 0.16.0 (#3244)

* Post release

* fix more

* [docs] only mention one stage (#3246)

* [docs] only mention one stage

* add blurb on auto accepting

---------

Co-authored-by: William Berman <WLBberman@gmail.com>

* Write model card in controlnet training script (#3229)

Write model card in controlnet training script.

* [2064]: Add stochastic sampler (sample_dpmpp_sde) (#3020)

* [2064]: Add stochastic sampler

* [2064]: Add stochastic sampler

* [2064]: Add stochastic sampler

* [2064]: Add stochastic sampler

* [2064]: Add stochastic sampler

* [2064]: Add stochastic sampler

* [2064]: Add stochastic sampler

* Review comments

* [Review comment]: Add is_torchsde_available()

* [Review comment]: Test and docs

* [Review comment]

* [Review comment]

* [Review comment]

* [Review comment]

* [Review comment]

---------

Co-authored-by: njindal <njindal@adobe.com>

* [Stochastic Sampler][Slow Test]: Cuda test fixes (#3257)

[Slow Test]: Cuda test fixes

Co-authored-by: njindal <njindal@adobe.com>

* Remove required from tracker_project_name (#3260)

Remove required from tracker_project_name.

As observed by https://github.com/off99555 in https://github.com/huggingface/diffusers/issues/2695#issuecomment-1470755050, it already has a default value.

* adding required parameters while calling the get_up_block and get_down_block  (#3210)

* removed unnecessary parameters from get_up_block and get_down_block functions

* adding resnet_skip_time_act, resnet_out_scale_factor and cross_attention_norm to get_up_block and get_down_block functions

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* [docs] Update interface in repaint.mdx (#3119)

Update repaint.mdx

accommodate to #1701

* Update IF name to XL (#3262)

Co-authored-by: multimodalart <joaopaulo.passos+multimodal@gmail.com>

* fix typo in score sde pipeline (#3132)

* Fix typo in textual inversion JAX training script (#3123)

The pipeline is built as `pipe` but then used as `pipeline`.

* AudioDiffusionPipeline - fix encode method after config changes (#3114)

* config fixes

* deprecate get_input_dims

* Revert "Revert "[Community Pipelines] Update lpw_stable_diffusion pipeline"" (#3265)

Revert "Revert "[Community Pipelines] Update lpw_stable_diffusion pipeline" (#3201)"

This reverts commit 91a2a80eb2.

* Fix community pipelines (#3266)

* update notebook (#3259)

Co-authored-by: yiyixuxu <yixu@yis-macbook-pro.lan>

* [docs] add notes for stateful model changes (#3252)

* [docs] add notes for stateful model changes

* Update docs/source/en/optimization/fp16.mdx

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* link to accelerate docs for discarding hooks

---------

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* [LoRA] quality of life improvements in the loading semantics and docs (#3180)

* 👽 qol improvements for LoRA.

* better function name?

* fix: LoRA weight loading with the new format.

* address Patrick's comments.

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* change wording around encouraging the use of load_lora_weights().

* fix: function name.

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* [Community Pipelines] EDICT pipeline implementation (#3153)

* EDICT pipeline initial commit

- Starting point taken from https://github.com/Joqsan/edict-diffusion

* refactor __init__() method

* minor refactoring

* refactor scheduler code

- remove scheduler and move its methods to the EDICTPipeline class

* make CFG optional
- refactor encode_prompt().
- include optional generator for sampling with vae.
- minor variable renaming

* add EDICT pipeline description to README.md

* replace preprocess() with VaeImageProcessor

* run make style and make quality commands

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* [Docs]zh translated docs update (#3245)

* zh translated docs update

* update _toctree

* Update logging.mdx (#2863)

Fix typos

* Add multiple conditions to StableDiffusionControlNetInpaintPipeline (#3125)

* try multi controlnet inpaint

* multi controlnet inpaint

* multi controlnet inpaint

* Let's make sure that dreambooth always uploads to the Hub (#3272)

* Update Dreambooth README

* Adapt all docs as well

* automatically write model card

* fix

* make style

* Diffedit Zero-Shot Inpainting Pipeline (#2837)

* Update Pix2PixZero Auto-correlation Loss

* Add Stable Diffusion DiffEdit pipeline

* Add draft documentation and import code

* Bugfixes and refactoring

* Add option to not decode latents in the inversion process

* Harmonize preprocessing

* Revert "Update Pix2PixZero Auto-correlation Loss"

This reverts commit b218062fed.

* Update annotations

* rename `compute_mask` to `generate_mask`

* Update documentation

* Update docs

* Update Docs

* Fix copy

* Change shape of output latents to batch first

* Update docs

* Add first draft for tests

* Bugfix and update tests

* Add `cross_attention_kwargs` support for all pipeline methods

* Fix Copies

* Add support for PIL image latents

Add support for mask broadcasting

Update docs and tests

Align `mask` argument to `mask_image`

Remove height and width arguments

* Enable MPS Tests

* Move example docstrings

* Fix test

* Fix test

* fix pipeline inheritance

* Harmonize `prepare_image_latents` with StableDiffusionPix2PixZeroPipeline

* Register modules set to `None` in config for `test_save_load_optional_components`

* Move fixed logic to specific test class

* Clean changes to other pipelines

* Update new tests to coordinate with #2953

* Update slow tests for better results

* Safety to avoid potential problems with torch.inference_mode

* Add reference in SD Pipeline Overview

* Fix tests again

* Enforce determinism in noise for generate_mask

* Fix copies

* Widen test tolerance for fp16 based on `test_stable_diffusion_upscale_pipeline_fp16`

* Add LoraLoaderMixin and update `prepare_image_latents`

* clean up repeat and reg

* bugfix

* Remove invalid args from docs

Suppress spurious warning by repeating image before latent to mask gen

* add constant learning rate with custom rule (#3133)

* add constant lr with rules

* add constant with rules in TYPE_TO_SCHEDULER_FUNCTION

* add constant lr rate with rule

* hotfix code quality

* fix doc style

* change name constant_with_rules to piecewise constant

* Allow disabling torch 2_0 attention (#3273)

* Allow disabling torch 2_0 attention

* make style

* Update src/diffusers/models/attention.py

* [doc] add link to training script (#3271)

add link to training script

Co-authored-by: yiyixuxu <yixu@yis-macbook-pro.lan>

* temp disable spectogram diffusion tests (#3278)

The note-seq package throws an error on import because the default installed version of IPython
is not compatible with Python 3.8, which we run in the CI.
https://github.com/huggingface/diffusers/actions/runs/4830121056/jobs/8605954838#step:7:9

* Changed sample[0] to images[0] (#3304)

A pipeline object stores the results in `images` not in `sample`.
Current code blocks don't work.

* Typo in tutorial (#3295)

* Torch compile graph fix (#3286)

* fix more

* Fix more

* fix more

* Apply suggestions from code review

* fix

* make style

* make fix-copies

* fix

* make sure torch compile

* Clean

* fix test

* Postprocessing refactor img2img (#3268)

* refactor img2img VaeImageProcessor.postprocess

* remove copy from for init, run_safety_checker, decode_latents

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

---------

Co-authored-by: yiyixuxu <yixu@yis-macbook-pro.lan>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* [Torch 2.0 compile] Fix more torch compile breaks (#3313)

* Fix more torch compile breaks

* add tests

* Fix all

* fix controlnet

* fix more

* Add Horace He as co-author.
>
>
Co-authored-by: Horace He <horacehe2007@yahoo.com>

* Add Horace He as co-author.

Co-authored-by: Horace He <horacehe2007@yahoo.com>

---------

Co-authored-by: Horace He <horacehe2007@yahoo.com>

* fix: scale_lr and sync example readme and docs. (#3299)

* fix: scale_lr and sync example readme and docs.

* fix doc link.

* Update stable_diffusion.mdx (#3310)

fixed import statement

* Fix missing variable assign in DeepFloyd-IF-II (#3315)

Fix missing variable assign

lol

* Correct doc build for patch releases (#3316)

Update build_documentation.yml

* Add Stable Diffusion RePaint to community pipelines (#3320)

* Add Stable Diffusion RePaint to community pipelines

- Adds Stable Diffusion RePaint to community pipelines
- Add Readme entry for pipeline

* Fix: Remove wrong import

- Remove wrong import
- Minor change in comments

* Fix: Code formatting of stable_diffusion_repaint

* Fix: ruff errors in stable_diffusion_repaint

* Fix multistep dpmsolver for cosine schedule (suitable for deepfloyd-if) (#3314)

* fix multistep dpmsolver for cosine schedule (deepfloyd-if)

* fix a typo

* Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* update all dpmsolver (singlestep, multistep, dpm, dpm++) for cosine noise schedule

* add test, fix style

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* [docs] Improve LoRA docs (#3311)

* update docs

* add to toctree

* apply feedback

* Added input perturbation (#3292)

* Added input perturbation

* Fixed spelling

* Update write_own_pipeline.mdx (#3323)

* update controlling generation doc with latest goodies. (#3321)

* [Quality] Make style (#3341)

* Fix config dpm (#3343)

* Add the SDE variant of DPM-Solver and DPM-Solver++ (#3344)

* add SDE variant of DPM-Solver and DPM-Solver++

* add test

* fix typo

* fix typo

* Add upsample_size to AttnUpBlock2D, AttnDownBlock2D (#3275)

The argument `upsample_size` needs to be added to these modules to allow compatibility with other blocks that require this argument.

* Add UniDiffuser classes to __init__ files, modify transformer block to support pre- and post-LN, add fast default tests, fix some bugs.

* Update fast tests to use test checkpoints stored on the hub and to better match the reference UniDiffuser implementation.

* Fix code with make style.

* Revert "Fix code style with make style."

This reverts commit 10a174a12c.

* Add self.image_encoder, self.text_decoder to list of models to offload to CPU in the enable_sequential_cpu_offload(...)/enable_model_cpu_offload(...) methods to make test_cpu_offload_forward_pass pass.

* Fix code quality with make style.

* Support using a data type embedding for UniDiffuser-v1.

* Add fast test for checking UniDiffuser-v1 sampling.

* Make changes so that the repository consistency tests pass.

* Add UniDiffuser dummy objects via make fix-copies.

* Fix bugs and make improvements to the UniDiffuser pipeline:
	- Improve batch size inference and fix bugs when num_images_per_prompt or num_prompts_per_image > 1
	- Add tests for num_images_per_prompt, num_prompts_per_image > 1
	- Improve check_inputs, especially regarding checking supplied latents
	- Add reset_mode method so that mode inference can be re-enabled after mode is set manually
	- Fix some warnings related to accessing class members directly instead of through their config
	- Small amount of refactoring in pipeline_unidiffuser.py

* Fix code style with make style.

* Add/edit docstrings for added classes and public pipeline methods. Also do some light refactoring.

* Add documentation for UniDiffuser and fix some typos/formatting in docstrings.

* Fix code with make style.

* Refactor and improve the UniDiffuser convert_from_ckpt.py script.

* Move the UniDiffusers convert_from_ckpy.py script to diffusers/scripts/convert_unidiffuser_to_diffusers.py

* Fix code quality via make style.

* Improve UniDiffuser slow tests.

* make style

* Fix some typos in the UniDiffuser docs.

* Remove outdated logic based on transformers version in UniDiffuser pipeline __init__.py

* Remove dependency on einops by refactoring einops operations to pure torch operations.

* make style

* Add slow test on full checkpoint for joint mode and correct expected image slices/text prefixes.

* make style

* Fix mixed precision issue by wrapping the offending code with the torch.autocast context manager.

* Revert "Fix mixed precision issue by wrapping the offending code with the torch.autocast context manager."

This reverts commit 1a58958ab4.

* Add fast test for CUDA/fp16 model behavior (currently failing).

* Fix the mixed precision issue and add additional tests of the pipeline cuda/fp16 functionality.

* make style

* Use a CLIPVisionModelWithProjection instead of CLIPVisionModel for image_encoder to better match the original UniDiffuser implementation.

* Make style and remove some testing code.

* Fix shape errors for the 'joint' and 'img2text' modes.

* Fix tests and remove some testing code.

* Add option to use fixed latents for UniDiffuserPipelineSlowTests and fix issue in modeling_text_decoder.py.

* Improve UniDiffuser docs, particularly the usage examples, and improve slow tests with new expected outputs.

* make style

* Fix examples to load model in float16.

* In image-to-text mode, sample from the autoencoder moment distribution instead of always getting its mode.

* make style

* When encoding the image using the VAE, scale the image latents by the VAE's scaling factor.

* make style

* Clean up code and make slow tests pass.

* make fix-copies

* [docs] Fix docstring (#3334)

fix docstring

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* if dreambooth lora (#3360)

* update IF stage I pipelines

add fixed variance schedulers and lora loading

* added kv lora attn processor

* allow loading into alternative lora attn processor

* make vae optional

* throw away predicted variance

* allow loading into added kv lora layer

* allow load T5

* allow pre compute text embeddings

* set new variance type in schedulers

* fix copies

* refactor all prompt embedding code

class prompts are now included in pre-encoding code
max tokenizer length is now configurable
embedding attention mask is now configurable

* fix for when variance type is not defined on scheduler

* do not pre compute validation prompt if not present

* add example test for if lora dreambooth

* add check for train text encoder and pre compute text embeddings

* Postprocessing refactor all others (#3337)

* add text2img

* fix-copies

* add

* add all other pipelines

* add

* add

* add

* add

* add

* make style

* style + fix copies

---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>

* [docs] Improve safetensors docstring (#3368)

* clarify safetensor docstring

* fix typo

* apply feedback

* add: a warning message when using xformers in a PT 2.0 env. (#3365)

* add: a warning message when using xformers in a PT 2.0 env.

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* StableDiffusionInpaintingPipeline - resize image w.r.t height and width (#3322)

* StableDiffusionInpaintingPipeline now resizes input images and masks w.r.t. the passed input height and width. Default is already set to 512. This addresses the common tensor mismatch error. Also moved the type check into the relevant function to keep the main pipeline body tidy.

* Fixed StableDiffusionInpaintingPrepareMaskAndMaskedImageTests

Due to previous commit these tests were failing as height and width need to be passed into the prepare_mask_and_masked_image function, I have updated the code and added a height/width variable per unit test as it seemed more appropriate than the current hard coded solution

* Added a resolution test to StableDiffusionInpaintPipelineSlowTests

this unit test simply gets the input and resizes it into something that would fail (e.g. would throw a tensor mismatch error / not be a multiple of 8), then passes it through the pipeline and verifies it produces output with the correct dims w.r.t. the passed height and width

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* make style

* [docs] Adapt a model (#3326)

* first draft

* apply feedback

* conv_in.weight thrown away

* [docs] Load safetensors (#3333)

* safetensors

* apply feedback

* apply feedback

* Apply suggestions from code review

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* make style

* [Docs] Fix stable_diffusion.mdx typo (#3398)

Fix typo in last code block. Correct "prommpts" to "prompt"

* Support ControlNet v1.1 shuffle properly (#3340)

* add inferring_controlnet_cond_batch

* Revert "add inferring_controlnet_cond_batch"

This reverts commit abe8d6311d.

* set guess_mode to True
whenever global_pool_conditions is True

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* nit

* add integration test

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* [Tests] better determinism (#3374)

* enable deterministic pytorch and cuda operations.

* disable manual seeding.

* make style && make quality for unet_2d tests.

* enable determinism for the unet2dconditional model.

* add CUBLAS_WORKSPACE_CONFIG for better reproducibility.

* relax tolerance (very weird issue, though).

* revert to torch manual_seed() where needed.

* relax more tolerance.

* better placement of the cuda variable and relax more tolerance.

* enable determinism for 3d condition model.

* relax tolerance.

* add: determinism to alt_diffusion.

* relax tolerance for alt diffusion.

* dance diffusion.

* dance diffusion is flaky.

* test_dict_tuple_outputs_equivalent edit.

* fix two more tests.

* fix more ddim tests.

* fix: argument.

* change to diff in place of difference.

* fix: test_save_load call.

* test_save_load_float16 call.

* fix: expected_max_diff

* fix: paint by example.

* relax tolerance.

* add determinism to 1d unet model.

* torch 2.0 regressions seem to be brutal

* determinism to vae.

* add reason to skipping.

* up tolerance.

* determinism to vq.

* determinism to cuda.

* determinism to the generic test pipeline file.

* refactor general pipelines testing a bit.

* determinism to alt diffusion i2i

* up tolerance for alt diff i2i and audio diff

* up tolerance.

* determinism to audioldm

* increase tolerance for audioldm lms.

* increase tolerance for paint by example.

* increase tolerance for repaint.

* determinism to cycle diffusion and sd 1.

* relax tol for cycle diffusion 🚲

* relax tol for sd 1.0

* relax tol for controlnet.

* determinism to img var.

* relax tol for img variation.

* tolerance to i2i sd

* make style

* determinism to inpaint.

* relax tolerance for inpainting.

* determinism for inpainting legacy

* relax tolerance.

* determinism to instruct pix2pix

* determinism to model editing.

* model editing tolerance.

* panorama determinism

* determinism to pix2pix zero.

* determinism to sag.

* sd 2. determinism

* sd. tolerance

* disallow tf32 matmul.

* relax tolerance is all you need.

* make style and determinism to sd 2 depth

* relax tolerance for depth.

* tolerance to diffedit.

* tolerance to sd 2 inpaint.

* up tolerance.

* determinism in upscaling.

* tolerance in upscaler.

* more tolerance relaxation.

* determinism to v pred.

* up tol for v_pred

* unclip determinism

* determinism to unclip img2img

* determinism to text to video.

* determinism to last set of tests

* up tol.

* vq cumsum doesn't have a deterministic kernel

* relax tol

* relax tol

* [docs] Add transformers to install (#3388)

add transformers to install

* [deepspeed] partial ZeRO-3 support (#3076)

* [deepspeed] partial ZeRO-3 support

* cleanup

* improve deepspeed fixes

* Improve

* make style

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Add omegaconf for tests (#3400)

Add omegaconf

* Fix various bugs with LoRA Dreambooth and Dreambooth script (#3353)

* Improve checkpointing lora

* fix more

* Improve doc string

* Update src/diffusers/loaders.py

* make style

* Apply suggestions from code review

* Update src/diffusers/loaders.py

* Apply suggestions from code review

* Apply suggestions from code review

* better

* Fix all

* Fix multi-GPU dreambooth

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Fix all

* make style

* make style

---------

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Fix docker file (#3402)

* up

* up

* fix: deepspeed_plugin retrieval from accelerate state (#3410)

* [Docs] Add `sigmoid` beta_scheduler to docstrings of relevant Schedulers (#3399)

* Add `sigmoid` beta scheduler to `DDPMScheduler` docstring

* Add `sigmoid` beta scheduler to `RePaintScheduler` docstring

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Don't install accelerate and transformers from source (#3415)

* Don't install transformers and accelerate from source (#3414)

* Improve fast tests (#3416)

Update pr_tests.yml

* attention refactor: the trilogy  (#3387)

* Replace `AttentionBlock` with `Attention`

* use _from_deprecated_attn_block check re: @patrickvonplaten

* [Docs] update the PT 2.0 optimization doc with latest findings (#3370)

* add: benchmarking stats for A100 and V100.

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* address patrick's comments.

* add: rtx 4090 stats

* ⚔ benchmark reports done

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* 3313 pr link.

* add: plots.

Co-authored-by: Pedro <pedro@huggingface.co>

* fix formatting

* update number percent.

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Fix style rendering (#3433)

* Fix style rendering.

* Fix typo

* unCLIP scheduler do not use note (#3417)

* Replace deprecated command with environment file (#3409)

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* fix warning message pipeline loading (#3446)

* add stable diffusion tensorrt img2img pipeline (#3419)

* add stable diffusion tensorrt img2img pipeline

Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>

* update docstrings

Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>

---------

Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>

* Refactor controlnet and add img2img and inpaint (#3386)

* refactor controlnet and add img2img and inpaint

* First draft to get pipelines to work

* make style

* Fix more

* Fix more

* More tests

* Fix more

* Make inpainting work

* make style and more tests

* Apply suggestions from code review

* up

* make style

* Fix imports

* Fix more

* Fix more

* Improve examples

* add test

* Make sure import is correctly deprecated

* Make sure everything works in compile mode

* make sure authorship is correctly attributed

* [Scheduler] DPM-Solver (++) Inverse Scheduler (#3335)

* Add DPM-Solver Multistep Inverse Scheduler

* Add draft tests for DiffEdit

* Add inverse sde-dpmsolver steps to tune image diversity from inverted latents

* Fix tests

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* [Docs] Fix incomplete docstring for resnet.py (#3438)

Fix incomplete docstrings for resnet.py

* fix tiled vae blend extent range (#3384)

fix tiled vae blend extent range

* Small update to "Next steps" section (#3443)

Small update to "Next steps" section:

- PyTorch 2 is recommended.
- Updated improvement figures.

* Allow arbitrary aspect ratio in IFSuperResolutionPipeline (#3298)

* Update pipeline_if_superresolution.py

Allow arbitrary aspect ratio in IFSuperResolutionPipeline by using the input image shape

* IFSuperResolutionPipeline: allow the user to override the height and width through the arguments

* update IFSuperResolutionPipeline width/height doc string to match StableDiffusionInpaintPipeline conventions

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Adding 'strength' parameter to StableDiffusionInpaintingPipeline  (#3424)

* Added explanation of 'strength' parameter

* Added get_timesteps function which relies on new strength parameter

* Added `strength` parameter which defaults to 1.

* Swapped ordering so `noise_timestep` can be calculated before masking the image

this is required when you aren't applying 100% noise to the masked region, e.g. strength < 1.

* Added strength to check_inputs, throws error if out of range

* Changed `prepare_latents` to initialise latents w.r.t strength

inspired from the stable diffusion img2img pipeline, init latents are initialised by converting the init image into a VAE latent and adding noise (based upon the strength parameter passed in), e.g. random when strength = 1, or the init image at strength = 0.

* WIP: Added a unit test for the new strength parameter in the StableDiffusionInpaintingPipeline

still need to add correct regression values

* Created a is_strength_max to initialise from pure random noise

* Updated unit tests w.r.t new strength parameter + fixed new strength unit test

* renamed parameter to avoid confusion with variable of same name

* Updated regression values for new strength test - now passes

* removed 'copied from' comment as this method is now different and divergent from the copy

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Ensure backwards compatibility for prepare_mask_and_masked_image

created a return_image boolean and initialised to false

* Ensure backwards compatibility for prepare_latents

* Fixed copy check typo

* Fixes w.r.t backward compatibility changes

* make style

* keep function argument ordering same for backwards compatibility in callees with copied from statements

* make fix-copies

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: William Berman <WLBberman@gmail.com>

* [WIP] Bugfix - Pipeline.from_pretrained is broken when the pipeline is partially downloaded (#3448)

Added bugfix using f strings.

* Fix gradient checkpointing bugs in freezing part of models (requires_grad=False) (#3404)

* gradient checkpointing bug fix

* bug fix; changes for reviews

* reformat

* reformat

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Make dreambooth lora more robust to orig unet (#3462)

* Make dreambooth lora more robust to orig unet

* up

* Reduce peak VRAM by releasing large attention tensors (as soon as they're unnecessary) (#3463)

Release large tensors in attention (as soon as they're no longer required). Reduces peak VRAM by nearly 2 GB for 1024x1024 (even after slicing), and the savings scale up with image size.

* Add min snr to text2img lora training script (#3459)

add min snr to text2img lora training script

* Add inpaint lora scale support (#3460)

* add inpaint lora scale support

* add inpaint lora scale test

---------

Co-authored-by: yueyang.hyy <yueyang.hyy@alibaba-inc.com>

* [From ckpt] Fix from_ckpt (#3466)

* Correct from_ckpt

* make style

* Update full dreambooth script to work with IF (#3425)

* Add IF dreambooth docs (#3470)

* parameterize pass single args through tuple (#3477)

* attend and excite tests disable determinism on the class level (#3478)

* dreambooth docs torch.compile note (#3471)

* dreambooth docs torch.compile note

* Update examples/dreambooth/README.md

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update examples/dreambooth/README.md

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* add: if entry in the dreambooth training docs. (#3472)

* [docs] Textual inversion inference (#3473)

* add textual inversion inference to docs

* add to toctree

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* [docs] Distributed inference (#3376)

* distributed inference

* move to inference section

* apply feedback

* update with split_between_processes

* apply feedback

* [{Up,Down}sample1d] explicit view kernel size as the number of elements in flattened indices (#3479)

explicit view kernel size as the number of elements in flattened indices

* mps & onnx tests rework (#3449)

* Remove ONNX tests from PR.

They are already a part of push_tests.yml.

* Remove mps tests from PRs.

They are already performed on push.

* Fix workflow name for fast push tests.

* Extract mps tests to a workflow.

For better control/filtering.

* Remove --extra-index-url from mps tests

* Increase tolerance of mps test

This test passes in my Mac (Ventura 13.3) but fails in the CI hardware
(Ventura 13.2). I ran the local tests following the same steps that
exist in the CI workflow.

* Temporarily run mps tests on pr

So we can test.

* Revert "Temporarily run mps tests on pr"

Tests passed, go back to running on push.

* [Attention processor] Better warning message when shifting to `AttnProcessor2_0` (#3457)

* add: debugging to enable memory-efficient processing

* add: better warning message.

* [Docs] add note on local directory path. (#3397)

add note on local directory path.

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Refactor full determinism (#3485)

* up

* fix more

* Apply suggestions from code review

* fix more

* fix more

* Check it

* Remove 16:8

* fix more

* fix more

* fix more

* up

* up

* Test only stable diffusion

* Test only two files

* up

* Try out spinning up processes that can be killed

* up

* Apply suggestions from code review

* up

* up

* Fix DPM single (#3413)

* Fix DPM single

* add test

* fix one more bug

* Apply suggestions from code review

Co-authored-by: StAlKeR7779 <stalkek7779@yandex.ru>

---------

Co-authored-by: StAlKeR7779 <stalkek7779@yandex.ru>

* Add `use_Karras_sigmas` to DPMSolverSinglestepScheduler (#3476)

* add use_karras_sigmas

* add karras test

* add doc

* Adds local_files_only bool to prevent forced online connection (#3486)
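
The commit above does not say which loader it touches, so the call below is only an illustrative sketch of how a `local_files_only` flag is typically used when loading; the checkpoint must already be in the local cache.

```python
from diffusers import StableDiffusionPipeline

# never reach out to the Hub; fail fast if the files are not cached locally
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", local_files_only=True
)
```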

* make style

* [Docs] Korean translation (optimization, training) (#3488)

* feat) optimization kr translation

* fix) typo, italic setting

* feat) dreambooth, text2image kr

* feat) lora kr

* fix) LoRA

* fix) fp16 fix

* fix) doc-builder style

* fix) revised some fp16 wording

* fix) fp16 style fix

* fix) opt, training docs update

* feat) toctree update

* feat) toctree update

---------

Co-authored-by: Chanran Kim <seriousran@gmail.com>

* DataLoader respecting EXIF data in Training Images (#3465)

* DataLoader will now bake in any transforms or image manipulations contained in the EXIF data

Images may have rotations stored in EXIF. Training on such images would cause those transforms to be ignored during training and thus produce unexpected results.

* Fixed the Dataloading EXIF issue in main DreamBooth training as well

* Run make style (black & isort)

* make style

* feat: allow disk offload for diffuser models (#3285)

* allow disk offload for diffuser models

* sort import

* add max_memory argument

* Changed sample[0] to images[0] (#3304)

A pipeline object stores the results in `images` not in `sample`.
Current code blocks don't work.
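
A small usage sketch of the corrected access pattern; the checkpoint id is illustrative.

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
image = pipe("an astronaut riding a horse").images[0]  # .images, not .sample
```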

* Typo in tutorial (#3295)

* Torch compile graph fix (#3286)

* fix more

* Fix more

* fix more

* Apply suggestions from code review

* fix

* make style

* make fix-copies

* fix

* make sure torch compile

* Clean

* fix test

* Postprocessing refactor img2img (#3268)

* refactor img2img VaeImageProcessor.postprocess

* remove copy from for init, run_safety_checker, decode_latents

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

---------

Co-authored-by: yiyixuxu <yixu@yis-macbook-pro.lan>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* [Torch 2.0 compile] Fix more torch compile breaks (#3313)

* Fix more torch compile breaks

* add tests

* Fix all

* fix controlnet

* fix more

* Add Horace He as co-author.

Co-authored-by: Horace He <horacehe2007@yahoo.com>

* Add Horace He as co-author.

Co-authored-by: Horace He <horacehe2007@yahoo.com>

---------

Co-authored-by: Horace He <horacehe2007@yahoo.com>

* fix: scale_lr and sync example readme and docs. (#3299)

* fix: scale_lr and sync example readme and docs.

* fix doc link.

* Update stable_diffusion.mdx (#3310)

fixed import statement

* Fix missing variable assign in DeepFloyd-IF-II (#3315)

Fix missing variable assign

lol

* Correct doc build for patch releases (#3316)

Update build_documentation.yml

* Add Stable Diffusion RePaint to community pipelines (#3320)

* Add Stable Diffusion RePaint to community pipelines

- Adds Stable Diffusion RePaint to community pipelines
- Add README entry for pipeline

* Fix: Remove wrong import

- Remove wrong import
- Minor change in comments

* Fix: Code formatting of stable_diffusion_repaint

* Fix: ruff errors in stable_diffusion_repaint

* Fix multistep dpmsolver for cosine schedule (suitable for deepfloyd-if) (#3314)

* fix multistep dpmsolver for cosine schedule (deepfloyd-if)

* fix a typo

* Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* update all dpmsolver (singlestep, multistep, dpm, dpm++) for cosine noise schedule

* add test, fix style

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* [docs] Improve LoRA docs (#3311)

* update docs

* add to toctree

* apply feedback

* Added input perturbation (#3292)

* Added input perturbation

* Fixed spelling

* Update write_own_pipeline.mdx (#3323)

* update controlling generation doc with latest goodies. (#3321)

* [Quality] Make style (#3341)

* Fix config dpm (#3343)

* Add the SDE variant of DPM-Solver and DPM-Solver++ (#3344)

* add SDE variant of DPM-Solver and DPM-Solver++

* add test

* fix typo

* fix typo

* Add upsample_size to AttnUpBlock2D, AttnDownBlock2D (#3275)

The argument `upsample_size` needs to be added to these modules to allow compatibility with other blocks that require this argument.

* Rename --only_save_embeds to --save_as_full_pipeline (#3206)

* Set --only_save_embeds to False by default

Due to how the option is named, it makes more sense to behave like this.

* Refactor only_save_embeds to save_as_full_pipeline

* [AudioLDM] Generalise conversion script (#3328)

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Fix TypeError when using prompt_embeds and negative_prompt (#2982)

* test: Added test case

* fix: fixed type checking issue on _encode_prompt

* fix: fixed copies consistency

* fix: one copy was not sufficient

* Fix pipeline class on README (#3345)

Update README.md

* Inpainting: typo in docs (#3331)

Typo in docs

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Add `use_Karras_sigmas` to LMSDiscreteScheduler (#3351)

* add karras sigma to lms discrete scheduler

* add test for lms_scheduler karras

* reformat test lms

* Batched load of textual inversions (#3277)

* Batched load of textual inversions

- Only call resize_token_embeddings once per batch as it is the most expensive operation
- Allow pretrained_model_name_or_path and token to be an optional list
- Remove Dict from type annotation pretrained_model_name_or_path as it was not supported in this function
- Add comment that single files (e.g. .pt/.safetensors) are supported
- Add comment for token parameter
- Convert token override log message from warning to info
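
A hedged usage sketch of the batched form described above; `pipe` is assumed to be an already-loaded pipeline, and the embedding sources and tokens are illustrative.

```python
# load several textual inversion embeddings in one call so that
# resize_token_embeddings only runs once for the whole batch
pipe.load_textual_inversion(
    ["sd-concepts-library/cat-toy", "./my-style.safetensors"],
    token=["<cat-toy>", "<my-style>"],
)
```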

* Update src/diffusers/loaders.py

Check for duplicate tokens

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update condition for None tokens

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* make fix-copies

* [docs] Fix docstring (#3334)

fix docstring

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* if dreambooth lora (#3360)

* update IF stage I pipelines

add fixed variance schedulers and lora loading

* added kv lora attn processor

* allow loading into alternative lora attn processor

* make vae optional

* throw away predicted variance

* allow loading into added kv lora layer

* allow load T5

* allow pre compute text embeddings

* set new variance type in schedulers

* fix copies

* refactor all prompt embedding code

class prompts are now included in pre-encoding code
max tokenizer length is now configurable
embedding attention mask is now configurable

* fix for when variance type is not defined on scheduler

* do not pre compute validation prompt if not present

* add example test for if lora dreambooth

* add check for train text encoder and pre compute text embeddings

* Postprocessing refactor all others (#3337)

* add text2img

* fix-copies

* add

* add all other pipelines

* add

* add

* add

* add

* add

* make style

* style + fix copies

---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>

* [docs] Improve safetensors docstring (#3368)

* clarify safetensor docstring

* fix typo

* apply feedback

* add: a warning message when using xformers in a PT 2.0 env. (#3365)

* add: a warning message when using xformers in a PT 2.0 env.

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* StableDiffusionInpaintingPipeline - resize image w.r.t height and width (#3322)

* StableDiffusionInpaintingPipeline now resizes input images and masks w.r.t. the passed input height and width. The default is already set to 512. This addresses the common tensor mismatch error. Also moved the type check into the relevant function to keep the main pipeline body tidy.
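
A rough sketch of the resize behaviour described above, assuming PIL inputs; the helper is illustrative and omits the tensor conversion the real pipeline performs.

```python
from PIL import Image

def resize_for_inpaint(image: Image.Image, mask: Image.Image, height: int = 512, width: int = 512):
    # resize both inputs to the requested resolution so their tensors line up
    image = image.resize((width, height), resample=Image.LANCZOS)
    mask = mask.resize((width, height), resample=Image.NEAREST)
    return image, mask
```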

* Fixed StableDiffusionInpaintingPrepareMaskAndMaskedImageTests

Due to the previous commit these tests were failing, as height and width now need to be passed into the prepare_mask_and_masked_image function. I have updated the code and added a height/width variable per unit test, as this seemed more appropriate than the previous hard-coded solution.

* Added a resolution test to StableDiffusionInpaintPipelineSlowTests

this unit test simply gets the input and resizes it into something that would otherwise fail (e.g. would throw a tensor mismatch error / not a multiple of 8), then passes it through the pipeline and verifies it produces output with the correct dims w.r.t. the passed height and width

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* make style

* [docs] Adapt a model (#3326)

* first draft

* apply feedback

* conv_in.weight thrown away

* [docs] Load safetensors (#3333)

* safetensors

* apply feedback

* apply feedback

* Apply suggestions from code review

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* make style

* [Docs] Fix stable_diffusion.mdx typo (#3398)

Fix typo in last code block. Correct "prommpts" to "prompt"

* Support ControlNet v1.1 shuffle properly (#3340)

* add inferring_controlnet_cond_batch

* Revert "add inferring_controlnet_cond_batch"

This reverts commit abe8d6311d.

* set guess_mode to True
whenever global_pool_conditions is True

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
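
A hedged one-line sketch of that rule, where `controlnet` and `guess_mode` come from the pipeline call:

```python
# force guess mode when the ControlNet was trained with globally pooled conditions
global_pool_conditions = getattr(controlnet.config, "global_pool_conditions", False)
guess_mode = guess_mode or global_pool_conditions
```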

* nit

* add integration test

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* [Tests] better determinism (#3374)

* enable deterministic pytorch and cuda operations.

* disable manual seeding.

* make style && make quality for unet_2d tests.

* enable determinism for the unet2dconditional model.

* add CUBLAS_WORKSPACE_CONFIG for better reproducibility.
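
A hedged sketch of the determinism setup these commits describe; the helper name and the cuBLAS workspace value are illustrative.

```python
import os
import torch

def enable_full_determinism():
    # cuBLAS needs a fixed workspace size to behave deterministically
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"
    torch.use_deterministic_algorithms(True)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
    torch.backends.cuda.matmul.allow_tf32 = False  # see "disallow tf32 matmul" below
```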

* relax tolerance (very weird issue, though).

* revert to torch manual_seed() where needed.

* relax more tolerance.

* better placement of the cuda variable and relax more tolerance.

* enable determinism for 3d condition model.

* relax tolerance.

* add: determinism to alt_diffusion.

* relax tolerance for alt diffusion.

* dance diffusion.

* dance diffusion is flaky.

* test_dict_tuple_outputs_equivalent edit.

* fix two more tests.

* fix more ddim tests.

* fix: argument.

* change to diff in place of difference.

* fix: test_save_load call.

* test_save_load_float16 call.

* fix: expected_max_diff

* fix: paint by example.

* relax tolerance.

* add determinism to 1d unet model.

* torch 2.0 regressions seem to be brutal

* determinism to vae.

* add reason to skipping.

* up tolerance.

* determinism to vq.

* determinism to cuda.

* determinism to the generic test pipeline file.

* refactor general pipelines testing a bit.

* determinism to alt diffusion i2i

* up tolerance for alt diff i2i and audio diff

* up tolerance.

* determinism to audioldm

* increase tolerance for audioldm lms.

* increase tolerance for paint by example.

* increase tolerance for repaint.

* determinism to cycle diffusion and sd 1.

* relax tol for cycle diffusion 🚲

* relax tol for sd 1.0

* relax tol for controlnet.

* determinism to img var.

* relax tol for img variation.

* tolerance to i2i sd

* make style

* determinism to inpaint.

* relax tolerance for inpainting.

* determinism for inpainting legacy

* relax tolerance.

* determinism to instruct pix2pix

* determinism to model editing.

* model editing tolerance.

* panorama determinism

* determinism to pix2pix zero.

* determinism to sag.

* sd 2. determinism

* sd. tolerance

* disallow tf32 matmul.

* relax tolerance is all you need.

* make style and determinism to sd 2 depth

* relax tolerance for depth.

* tolerance to diffedit.

* tolerance to sd 2 inpaint.

* up tolerance.

* determinism in upscaling.

* tolerance in upscaler.

* more tolerance relaxation.

* determinism to v pred.

* up tol for v_pred

* unclip determinism

* determinism to unclip img2img

* determinism to text to video.

* determinism to last set of tests

* up tol.

* vq cumsum doesn't have a deterministic kernel

* relax tol

* relax tol

* [docs] Add transformers to install (#3388)

add transformers to install

* [deepspeed] partial ZeRO-3 support (#3076)

* [deepspeed] partial ZeRO-3 support

* cleanup

* improve deepspeed fixes

* Improve

* make style

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Add omegaconf for tests (#3400)

Add omegaconf

* Fix various bugs with LoRA Dreambooth and Dreambooth script (#3353)

* Improve checkpointing lora

* fix more

* Improve doc string

* Update src/diffusers/loaders.py

* make style

* Apply suggestions from code review

* Update src/diffusers/loaders.py

* Apply suggestions from code review

* Apply suggestions from code review

* better

* Fix all

* Fix multi-GPU dreambooth

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Fix all

* make style

* make style

---------

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Fix docker file (#3402)

* up

* up

* fix: deepspeed_plugin retrieval from accelerate state (#3410)

* [Docs] Add `sigmoid` beta_scheduler to docstrings of relevant Schedulers (#3399)

* Add `sigmoid` beta scheduler to `DDPMScheduler` docstring

* Add `sigmoid` beta scheduler to `RePaintScheduler` docstring

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Don't install accelerate and transformers from source (#3415)

* Don't install transformers and accelerate from source (#3414)

* Improve fast tests (#3416)

Update pr_tests.yml

* attention refactor: the trilogy  (#3387)

* Replace `AttentionBlock` with `Attention`

* use _from_deprecated_attn_block check re: @patrickvonplaten

* [Docs] update the PT 2.0 optimization doc with latest findings (#3370)

* add: benchmarking stats for A100 and V100.

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* address patrick's comments.

* add: rtx 4090 stats

* ⚔ benchmark reports done

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* 3313 pr link.

* add: plots.

Co-authored-by: Pedro <pedro@huggingface.co>

* fix formatting

* update number percent.

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Fix style rendering (#3433)

* Fix style rendering.

* Fix typo

* unCLIP scheduler do not use note (#3417)

* Replace deprecated command with environment file (#3409)

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* fix warning message pipeline loading (#3446)

* add stable diffusion tensorrt img2img pipeline (#3419)

* add stable diffusion tensorrt img2img pipeline

Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>

* update docstrings

Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>

---------

Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>

* Refactor controlnet and add img2img and inpaint (#3386)

* refactor controlnet and add img2img and inpaint

* First draft to get pipelines to work

* make style

* Fix more

* Fix more

* More tests

* Fix more

* Make inpainting work

* make style and more tests

* Apply suggestions from code review

* up

* make style

* Fix imports

* Fix more

* Fix more

* Improve examples

* add test

* Make sure import is correctly deprecated

* Make sure everything works in compile mode

* make sure authorship is correctly attributed

* [Scheduler] DPM-Solver (++) Inverse Scheduler (#3335)

* Add DPM-Solver Multistep Inverse Scheduler

* Add draft tests for DiffEdit

* Add inverse sde-dpmsolver steps to tune image diversity from inverted latents

* Fix tests

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* [Docs] Fix incomplete docstring for resnet.py (#3438)

Fix incomplete docstrings for resnet.py

* fix tiled vae blend extent range (#3384)

fix tiled vae blend extent range

* Small update to "Next steps" section (#3443)

Small update to "Next steps" section:

- PyTorch 2 is recommended.
- Updated improvement figures.

* Allow arbitrary aspect ratio in IFSuperResolutionPipeline (#3298)

* Update pipeline_if_superresolution.py

Allow arbitrary aspect ratio in IFSuperResolutionPipeline by using the input image shape

* IFSuperResolutionPipeline: allow the user to override the height and width through the arguments

* update IFSuperResolutionPipeline width/height doc string to match StableDiffusionInpaintPipeline conventions
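
A hedged usage sketch of the height/width override; the checkpoint id and sizes are illustrative and `low_res_image` is assumed to be a non-square stage-I output.

```python
from diffusers import IFSuperResolutionPipeline

pipe = IFSuperResolutionPipeline.from_pretrained("DeepFloyd/IF-II-L-v1.0")
image = pipe(
    prompt="a photo of a lighthouse",
    image=low_res_image,  # stage-I output with a non-square aspect ratio
    height=384,
    width=640,
).images[0]
```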

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Adding 'strength' parameter to StableDiffusionInpaintingPipeline  (#3424)

* Added explanation of 'strength' parameter

* Added get_timesteps function which relies on new strength parameter

* Added `strength` parameter which defaults to 1.

* Swapped ordering so `noise_timestep` can be calculated before masking the image

this is required when you aren't applying 100% noise to the masked region, e.g. strength < 1.

* Added strength to check_inputs, throws error if out of range

* Changed `prepare_latents` to initialise latents w.r.t strength

Inspired by the stable diffusion img2img pipeline: init latents are initialised by converting the init image into a VAE latent and adding noise (based on the strength parameter passed in), i.e. pure random noise when strength = 1, or the init image unchanged at strength = 0.
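
A minimal sketch of that strength handling, in the spirit of the img2img pipeline; function and argument names are illustrative rather than the exact pipeline internals.

```python
import torch

def get_timesteps(scheduler, num_inference_steps: int, strength: float):
    # keep only the last `strength` fraction of the denoising schedule
    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    t_start = max(num_inference_steps - init_timestep, 0)
    return scheduler.timesteps[t_start:], num_inference_steps - t_start

def prepare_latents(scheduler, image_latents: torch.Tensor, noise: torch.Tensor, timestep, strength: float):
    if strength >= 1.0:
        return noise  # start from pure noise, as in text-to-image
    # for strength < 1, start from the encoded init image with partial noise
    return scheduler.add_noise(image_latents, noise, timestep)
```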

* WIP: Added a unit test for the new strength parameter in the StableDiffusionInpaintingPipeline

still need to add correct regression values

* Created a is_strength_max to initialise from pure random noise

* Updated unit tests w.r.t new strength parameter + fixed new strength unit test

* renamed parameter to avoid confusion with variable of same name

* Updated regression values for new strength test - now passes

* removed 'copied from' comment as this method is now different and divergent from the copy

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Ensure backwards compatibility for prepare_mask_and_masked_image

created a return_image boolean and initialised to false

* Ensure backwards compatibility for prepare_latents

* Fixed copy check typo

* Fixes w.r.t. backward compatibility changes

* make style

* keep function argument ordering same for backwards compatibility in callees with copied from statements

* make fix-copies

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: William Berman <WLBberman@gmail.com>

* [WIP] Bugfix - Pipeline.from_pretrained is broken when the pipeline is partially downloaded (#3448)

Added bugfix using f-strings.

* Fix gradient checkpointing bugs in freezing part of models (requires_grad=False) (#3404)

* gradient checkpointing bug fix

* bug fix; changes for reviews

* reformat

* reformat

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Make dreambooth lora more robust to orig unet (#3462)

* Make dreambooth lora more robust to orig unet

* up

* Reduce peak VRAM by releasing large attention tensors (as soon as they're unnecessary) (#3463)

Release large tensors in attention (as soon as they're no longer required). Reduces peak VRAM by nearly 2 GB for 1024x1024 (even after slicing), and the savings scale up with image size.

* Add min snr to text2img lora training script (#3459)

add min snr to text2img lora training script

* Add inpaint lora scale support (#3460)

* add inpaint lora scale support

* add inpaint lora scale test

---------

Co-authored-by: yueyang.hyy <yueyang.hyy@alibaba-inc.com>

* [From ckpt] Fix from_ckpt (#3466)

* Correct from_ckpt

* make style

* Update full dreambooth script to work with IF (#3425)

* Add IF dreambooth docs (#3470)

* parameterize pass single args through tuple (#3477)

* attend and excite tests disable determinism on the class level (#3478)

* dreambooth docs torch.compile note (#3471)

* dreambooth docs torch.compile note

* Update examples/dreambooth/README.md

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update examples/dreambooth/README.md

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* add: if entry in the dreambooth training docs. (#3472)

* [docs] Textual inversion inference (#3473)

* add textual inversion inference to docs

* add to toctree

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* [docs] Distributed inference (#3376)

* distributed inference

* move to inference section

* apply feedback

* update with split_between_processes

* apply feedback

* [{Up,Down}sample1d] explicit view kernel size as number elements in flattened indices (#3479)

explicit view kernel size as number elements in flattened indices

* mps & onnx tests rework (#3449)

* Remove ONNX tests from PR.

They are already a part of push_tests.yml.

* Remove mps tests from PRs.

They are already performed on push.

* Fix workflow name for fast push tests.

* Extract mps tests to a workflow.

For better control/filtering.

* Remove --extra-index-url from mps tests

* Increase tolerance of mps test

This test passes on my Mac (Ventura 13.3) but fails on the CI hardware
(Ventura 13.2). I ran the local tests following the same steps that
exist in the CI workflow.

* Temporarily run mps tests on pr

So we can test.

* Revert "Temporarily run mps tests on pr"

Tests passed, go back to running on push.

---------

Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
Co-authored-by: Ilia Larchenko <41329713+IliaLarchenko@users.noreply.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: yiyixuxu <yixu@yis-macbook-pro.lan>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Horace He <horacehe2007@yahoo.com>
Co-authored-by: Umar <55330742+mu94-csl@users.noreply.github.com>
Co-authored-by: Mylo <36931363+gitmylo@users.noreply.github.com>
Co-authored-by: Markus Pobitzer <markuspobitzer@gmail.com>
Co-authored-by: Cheng Lu <lucheng.lc15@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: Isamu Isozaki <isamu.website@gmail.com>
Co-authored-by: Cesar Aybar <csaybar@gmail.com>
Co-authored-by: Will Rice <will@spokestack.io>
Co-authored-by: Adrià Arrufat <1671644+arrufat@users.noreply.github.com>
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
Co-authored-by: At-sushi <dkahw210@kyoto.zaq.ne.jp>
Co-authored-by: Lucca Zenóbio <luccazen@gmail.com>
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
Co-authored-by: Isotr0py <41363108+Isotr0py@users.noreply.github.com>
Co-authored-by: pdoane <pdoane2@gmail.com>
Co-authored-by: Will Berman <wlbberman@gmail.com>
Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: Rupert Menneer <71332436+rupertmenneer@users.noreply.github.com>
Co-authored-by: sudowind <wfpkueecs@163.com>
Co-authored-by: Takuma Mori <takuma104@gmail.com>
Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Laureηt <laurentfainsin@protonmail.com>
Co-authored-by: Jongwoo Han <jongwooo.han@gmail.com>
Co-authored-by: asfiyab-nvidia <117682710+asfiyab-nvidia@users.noreply.github.com>
Co-authored-by: clarencechen <clarencechenct@gmail.com>
Co-authored-by: Laureηt <laurent@fainsin.bzh>
Co-authored-by: superlabs-dev <133080491+superlabs-dev@users.noreply.github.com>
Co-authored-by: Dev Aggarwal <devxpy@gmail.com>
Co-authored-by: Vimarsh Chaturvedi <vimarsh.c@gmail.com>
Co-authored-by: 7eu7d7 <31194890+7eu7d7@users.noreply.github.com>
Co-authored-by: cmdr2 <shashank.shekhar.global@gmail.com>
Co-authored-by: wfng92 <43742196+wfng92@users.noreply.github.com>
Co-authored-by: Glaceon-Hyy <ffheyy0017@gmail.com>
Co-authored-by: yueyang.hyy <yueyang.hyy@alibaba-inc.com>

* [Community] reference only control (#3435)

* add reference only control

* add reference only control

* add reference only control

* fix lint

* fix lint

* reference adain

* bugfix EulerAncestralDiscreteScheduler

* fix style fidelity rule

* fix default output size

* del unused line

* fix deterministic

* Support for cross-attention bias / mask (#2634)

* Cross-attention masks

prefer qualified symbol, fix accidental Optional

prefer qualified symbol in AttentionProcessor

prefer qualified symbol in embeddings.py

qualified symbol in transformed_2d

qualify FloatTensor in unet_2d_blocks

move new transformer_2d params attention_mask, encoder_attention_mask to the end of the section which is assumed (e.g. by functions such as checkpoint()) to have a stable positional param interface. regard return_dict as a special-case which is assumed to be injected separately from positional params (e.g. by create_custom_forward()).

move new encoder_attention_mask param to end of CrossAttn block interfaces and Unet2DCondition interface, to maintain positional param interface.

regenerate modeling_text_unet.py

remove unused import

unet_2d_condition encoder_attention_mask docs

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

versatile_diffusion/modeling_text_unet.py encoder_attention_mask docs

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

transformer_2d encoder_attention_mask docs

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

unet_2d_blocks.py: add parameter name comments

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

revert description. bool-to-bias treatment happens in unet_2d_condition only.

comment parameter names

fix copies, style
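
A hedged sketch of the bool-to-bias conversion mentioned above (performed in unet_2d_condition); the helper name is illustrative and the exact constant used may differ.

```python
import torch

def mask_to_bias(encoder_attention_mask: torch.Tensor, dtype: torch.dtype) -> torch.Tensor:
    # 1 = attend, 0 = ignore -> 0.0 for attended positions, a large negative
    # bias for masked ones, broadcastable over the attention heads
    bias = (1 - encoder_attention_mask.to(dtype)) * torch.finfo(dtype).min
    return bias.unsqueeze(1)
```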

* encoder_attention_mask for SimpleCrossAttnDownBlock2D, SimpleCrossAttnUpBlock2D

* encoder_attention_mask for UNetMidBlock2DSimpleCrossAttn

* support attention_mask, encoder_attention_mask in KCrossAttnDownBlock2D, KCrossAttnUpBlock2D, KAttentionBlock. fix binding of attention_mask, cross_attention_kwargs params in KCrossAttnDownBlock2D, KCrossAttnUpBlock2D checkpoint invocations.

* fix mistake made during merge conflict resolution

* regenerate versatile_diffusion

* pass time embedding into checkpointed attention invocation

* always assume encoder_attention_mask is a mask (i.e. not a bias).

* style, fix-copies

* add tests for cross-attention masks

* add test for padding of attention mask

* explain mask's query_tokens dim. fix explanation about broadcasting over channels; we actually broadcast over query tokens

* support both masks and biases in Transformer2DModel#forward. document behaviour

* fix-copies

* delete attention_mask docs on the basis I never tested self-attention masking myself. not comfortable explaining it, since I don't actually understand how a self-attn mask can work in its current form: the key length will be different in every ResBlock (we don't downsample the mask when we downsample the image).

* review feedback: the standard Unet blocks shouldn't pass temb to attn (only to resnet). remove from KCrossAttnDownBlock2D,KCrossAttnUpBlock2D#forward.

* remove encoder_attention_mask param from SimpleCrossAttn{Up,Down}Block2D,UNetMidBlock2DSimpleCrossAttn, and mask-choice in those blocks' #forward, on the basis that they only do one type of attention, so the consumer can pass whichever type of attention_mask is appropriate.

* put attention mask padding back to how it was (since the SD use-case it enabled wasn't important, and it breaks the original unclip use-case). disable the test which was added.

* fix-copies

* style

* fix-copies

* put encoder_attention_mask param back into Simple block forward interfaces, to ensure consistency of forward interface.

* restore passing of emb to KAttentionBlock#forward, on the basis that removal caused test failures. restore also the passing of emb to checkpointed calls to KAttentionBlock#forward.

* make simple unet2d blocks use encoder_attention_mask, but only when attention_mask is None. this should fix UnCLIP compatibility.

* fix copies

* do not scale the initial global step by gradient accumulation steps when loading from checkpoint (#3506)

* Remove CPU latents logic for UniDiffuserPipelineFastTests.

* make style

* Revert "Clean up code and make slow tests pass."

This reverts commit ec7fb8735b.

* Revert bad commit and clean up code.

* add: contributor note.

* Batched load of textual inversions (#3277)

* Batched load of textual inversions

- Only call resize_token_embeddings once per batch as it is the most expensive operation
- Allow pretrained_model_name_or_path and token to be an optional list
- Remove Dict from type annotation pretrained_model_name_or_path as it was not supported in this function
- Add comment that single files (e.g. .pt/.safetensors) are supported
- Add comment for token parameter
- Convert token override log message from warning to info

* Update src/diffusers/loaders.py

Check for duplicate tokens

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update condition for None tokens

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Revert "add: contributor note."

This reverts commit 302fde9409.

* Re-add contributor note and refactored fast tests fixed latents code to remove CPU specific logic.

* make style

* Refactored the code:
	- Updated the checkpoint ids to the new ids where appropriate
	- Refactored the UniDiffuserTextDecoder methods to return only tensors (and made other changes to support this)
	- Cleaned up the code following suggestions by patrickvonplaten

* make style

* Remove padding logic from UniDiffuserTextDecoder.generate_beam since the inputs are already padded to a consistent length.

* Update checkpoint id for small test v1 checkpoint to hf-internal-testing/unidiffuser-test-v1.

* make style

* Make improvements to the documentation.

* Move ImageTextPipelineOutput documentation from /api/pipelines/unidiffuser.mdx to /api/diffusion_pipeline.mdx.

* Change order of arguments for UniDiffuserTextDecoder.generate_beam.

* make style

* Update docs/source/en/api/pipelines/unidiffuser.mdx

---------

Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>
Co-authored-by: Ernie Chu <51432514+ernestchu@users.noreply.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Andranik Movsisyan <48154088+19and99@users.noreply.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Andreas Steiner <andstein@google.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Joseph Coffland <github@joe.coffland.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: Takuma Mori <takuma104@gmail.com>
Co-authored-by: Will Berman <wlbberman@gmail.com>
Co-authored-by: Tommaso De Rossi <beats.by.morse@gmail.com>
Co-authored-by: Cristian Garcia <cgarcia.e88@gmail.com>
Co-authored-by: cmdr2 <secondary.cmdr2@gmail.com>
Co-authored-by: 1lint <105617163+1lint@users.noreply.github.com>
Co-authored-by: asfiyab-nvidia <117682710+asfiyab-nvidia@users.noreply.github.com>
Co-authored-by: Chanchana Sornsoontorn <off9955555@gmail.com>
Co-authored-by: hwuebben <wbben123@yahoo.de>
Co-authored-by: superhero-7 <57797766+superhero-7@users.noreply.github.com>
Co-authored-by: root <fulong_ye@163.com>
Co-authored-by: nupurkmr9 <nupurkmr9@gmail.com>
Co-authored-by: Nupur Kumari <nupurkumari@Nupurs-MacBook-Pro.local>
Co-authored-by: Nupur Kumari <nupurkumari@nupurs-mbp.wifi.local.cmu.edu>
Co-authored-by: Mishig <mishig.davaadorj@coloradocollege.edu>
Co-authored-by: XinyuYe-Intel <xinyu.ye@intel.com>
Co-authored-by: clarencechen <clarencechenct@gmail.com>
Co-authored-by: regisss <15324346+regisss@users.noreply.github.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Youssef Adarrab <104783077+youssefadr@users.noreply.github.com>
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
Co-authored-by: Chengrui Wang <80876977+crywang@users.noreply.github.com>
Co-authored-by: SkyTNT <SKYTNT@outlook.com>
Co-authored-by: Lucca Zenóbio <luccazen@gmail.com>
Co-authored-by: Isaac <34376531+init-22@users.noreply.github.com>
Co-authored-by: pdoane <pdoane2@gmail.com>
Co-authored-by: Yuchen Fan <fyc0624@gmail.com>
Co-authored-by: Nipun Jindal <jindal.nipun@gmail.com>
Co-authored-by: njindal <njindal@adobe.com>
Co-authored-by: apolinário <joaopaulo.passos@gmail.com>
Co-authored-by: multimodalart <joaopaulo.passos+multimodal@gmail.com>
Co-authored-by: Xie Zejian <xiezej@gmail.com>
Co-authored-by: Jair Trejo <jairtrejo@gmail.com>
Co-authored-by: Robert Dargavel Smith <teticio@gmail.com>
Co-authored-by: yiyixuxu <yixu@yis-macbook-pro.lan>
Co-authored-by: Joqsan <6027118+Joqsan@users.noreply.github.com>
Co-authored-by: NimenDavid <312648004@qq.com>
Co-authored-by: M. Tolga Cangöz <46008593+standardAI@users.noreply.github.com>
Co-authored-by: timegate <timegate@kaist.ac.kr>
Co-authored-by: Jason Kuan <jason9075@users.noreply.github.com>
Co-authored-by: Ilia Larchenko <41329713+IliaLarchenko@users.noreply.github.com>
Co-authored-by: Horace He <horacehe2007@yahoo.com>
Co-authored-by: Umar <55330742+mu94-csl@users.noreply.github.com>
Co-authored-by: Mylo <36931363+gitmylo@users.noreply.github.com>
Co-authored-by: Markus Pobitzer <markuspobitzer@gmail.com>
Co-authored-by: Cheng Lu <lucheng.lc15@gmail.com>
Co-authored-by: Isamu Isozaki <isamu.website@gmail.com>
Co-authored-by: Cesar Aybar <csaybar@gmail.com>
Co-authored-by: Will Rice <will@spokestack.io>
Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: Rupert Menneer <71332436+rupertmenneer@users.noreply.github.com>
Co-authored-by: sudowind <wfpkueecs@163.com>
Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
Co-authored-by: Laureηt <laurentfainsin@protonmail.com>
Co-authored-by: Jongwoo Han <jongwooo.han@gmail.com>
Co-authored-by: Laureηt <laurent@fainsin.bzh>
Co-authored-by: superlabs-dev <133080491+superlabs-dev@users.noreply.github.com>
Co-authored-by: Dev Aggarwal <devxpy@gmail.com>
Co-authored-by: Vimarsh Chaturvedi <vimarsh.c@gmail.com>
Co-authored-by: 7eu7d7 <31194890+7eu7d7@users.noreply.github.com>
Co-authored-by: cmdr2 <shashank.shekhar.global@gmail.com>
Co-authored-by: wfng92 <43742196+wfng92@users.noreply.github.com>
Co-authored-by: Glaceon-Hyy <ffheyy0017@gmail.com>
Co-authored-by: yueyang.hyy <yueyang.hyy@alibaba-inc.com>
Co-authored-by: StAlKeR7779 <stalkek7779@yandex.ru>
Co-authored-by: Isotr0py <41363108+Isotr0py@users.noreply.github.com>
Co-authored-by: w4ffl35 <w4ffl35@ml1.net>
Co-authored-by: Seongsu Park <tjdtnsu@gmail.com>
Co-authored-by: Chanran Kim <seriousran@gmail.com>
Co-authored-by: Ambrosiussen <paul@ambrosiussen.com>
Co-authored-by: Hari Krishna <37787894+hari10599@users.noreply.github.com>
Co-authored-by: Adrià Arrufat <1671644+arrufat@users.noreply.github.com>
Co-authored-by: At-sushi <dkahw210@kyoto.zaq.ne.jp>
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
Co-authored-by: takuoko <to78314910@gmail.com>
Co-authored-by: Birch-san <Birch-san@users.noreply.github.com>
2023-05-26 17:27:30 +05:30
Steven Liu
7948db81c5 [docs] Add AttnProcessor to docs (#3474)
* add attnprocessor to docs

* fix path to class

* create separate page for attnprocessors

* fix path

* fix path for real

* fill in docstrings

* apply feedback

* apply feedback
2023-05-26 17:11:42 +05:30
Patrick von Platen
bf16a97018 Fix controlnet guess mode euler (#3571)
* Fix guess mode controlnet for euler-like schedulers

* make style

* Co-authored-by: Chanchana Sornsoontorn <off.chanchana@gmail.com>

* Add co author Co-authored-by: Chanchana Sornsoontorn <off.chanchana@gmail.com>

* 2nd try
Co-authored-by: Chanchana Sornsoontorn <off.chanchana@gmail.com>
2023-05-26 11:31:51 +01:00
Patrick von Platen
66356e7dd5 Correct inpainting controlnet docs (#3572) 2023-05-26 11:02:30 +01:00
vikasmech
ffa33d631a renamed variable to input_ and output_ (#3507)
* renamed variable to input_ and output_

* changed input_ to inputs and output_ to outputs
2023-05-26 10:34:11 +01:00
Emin Demirci
d8ce53a8c4 Fix loaded_token reference before definition (#3523) 2023-05-26 10:31:02 +01:00
Patrick von Platen
d114d80fd2 [Stable Diffusion Inpainting] Allow standard text-to-img checkpoints to be useable for SD inpainting (#3533)
* Add default to inpaint

* Make sure controlnet also works with normal sd for inpaint

* Add tests

* improve

* Correct encode images function

* Correct inpaint controlnet

* Improve text2img inpaint

* make style

* up

* up

* up

* up

* fix more
2023-05-26 09:47:42 +01:00
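
A hedged usage sketch of what the entry above enables, i.e. running inpainting from a plain text-to-image checkpoint; the checkpoint id is illustrative and `init_image`/`mask_image` are assumed to be PIL images.

```python
from diffusers import StableDiffusionInpaintPipeline

# a standard 4-channel text-to-image checkpoint, not an inpainting-specific one
pipe = StableDiffusionInpaintPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
result = pipe(prompt="a red park bench", image=init_image, mask_image=mask_image).images[0]
```
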
YiYi Xu
e5215dee9a fix broken change for vq pipeline (#3563)
fix vq_model

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2023-05-25 14:55:31 -10:00
YiYi Xu
03b7a84cbe Add Kandinsky 2.1 (#3308)
add kandinsky2.1

---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: Ayush Mangal <43698245+ayushtues@users.noreply.github.com>
Co-authored-by: ayushmangal <ayushmangal@microsoft.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-05-25 11:28:34 -10:00
Patrick von Platen
f19f128735 Add open parti prompts to docs (#3549)
* Add open parti prompts

* More changes
2023-05-25 11:11:20 +01:00
Isotr0py
a94977b8b3 Fix panorama to support all schedulers (#3546)
* refactor blocks init

* refactor blocks loop

* remove unused function and warnings

* fix scheduler update location

* reformat code

* reformat code again

* fix PNDM test case

* reformat pndm test case
2023-05-24 17:58:08 +05:30
Sayak Paul
8e69708b0d [Examples/DreamBooth] refactor save_model_card utility in dreambooth examples (#3543)
refactor save_model_card utility in dreambooth examples.
2023-05-24 16:16:28 +05:30
Will Berman
db56f8a4f5 explicit broadcasts for assignments (#3535) 2023-05-24 11:17:41 +01:00
Will Berman
c13dbd5c3a fix attention mask pad check (#3531) 2023-05-23 13:11:53 -07:00
Pedro Cuenca
bde2cb5d9b Run torch.compile tests in separate subprocesses (#3503)
* Run ControlNet compile test in a separate subprocess

`torch.compile()` spawns several subprocesses and the GPU memory used
was not reclaimed after the test ran. This approach was taken from
`transformers`.
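
A hedged sketch of that pattern; the worker body and checkpoint id are illustrative and diffusers' actual test helper may differ.

```python
import multiprocessing

def _compile_worker(queue):
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("cuda")
    pipe.unet = torch.compile(pipe.unet)
    image = pipe("a prompt", num_inference_steps=2).images[0]
    queue.put(image.size)

def test_sd_compile_runs_in_subprocess():
    ctx = multiprocessing.get_context("spawn")
    queue = ctx.Queue()
    proc = ctx.Process(target=_compile_worker, args=(queue,))
    proc.start()
    proc.join()
    # GPU memory held by torch.compile is reclaimed when the child process exits
    assert queue.get() == (512, 512)
```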

* Style

* Prepare a couple more compile tests to run in subprocess.

* Use require_torch_2 decorator.

* Test inpaint_compile in subprocess.

* Run img2img compile test in subprocess.

* Run stable diffusion compile test in subprocess.

* style

* Temporarily trigger on pr to test.

* Revert "Temporarily trigger on pr to test."

This reverts commit 82d76868dd.
2023-05-23 19:24:17 +02:00
Patrick von Platen
abab61d49e Update README.md 2023-05-23 17:29:18 +01:00
Patrick von Platen
b402604de4 Update README.md (#3525) 2023-05-23 17:28:39 +01:00
Patrick von Platen
84ce50f08e Improve README (#3524)
Update README.md
2023-05-23 16:53:34 +01:00
Patrick von Platen
9e2734a710 Make sure Diffusers works even if Hub is down (#3447)
* Make sure Diffusers works even if Hub is down

* Make sure hub down is well tested
2023-05-23 14:22:43 +01:00
Patrick von Platen
d4197bf4d7 Allow custom pipeline loading (#3504) 2023-05-23 13:20:55 +01:00
takuoko
b134f6a8b6 [Community] ControlNet Reference (#3508)
add controlnet reference and bugfix

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-05-23 13:20:34 +01:00
yingjieh
edc6505193 [Community Pipelines]Accelerate inference of stable diffusion by IPEX on CPU (#3105)
* add stable_diffusion_ipex community pipeline

* Update readme.md

* reformat

* reformat

* Update examples/community/README.md

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update examples/community/README.md

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update examples/community/README.md

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update examples/community/README.md

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update README.md

* Update README.md

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* style

---------

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-05-23 10:55:14 +02:00
Isotr0py
2f997f30ab Fix bug in panorama pipeline when using dpmsolver scheduler (#3499)
fix panorama pipeline with dpmsolver scheduler
2023-05-23 08:55:15 +05:30
Will Berman
67cd460154 do not scale the initial global step by gradient accumulation steps when loading from checkpoint (#3506) 2023-05-22 15:19:56 -07:00
Birch-san
64bf5d33b7 Support for cross-attention bias / mask (#2634)
* Cross-attention masks

prefer qualified symbol, fix accidental Optional

prefer qualified symbol in AttentionProcessor

prefer qualified symbol in embeddings.py

qualified symbol in transformed_2d

qualify FloatTensor in unet_2d_blocks

move new transformer_2d params attention_mask, encoder_attention_mask to the end of the section which is assumed (e.g. by functions such as checkpoint()) to have a stable positional param interface. regard return_dict as a special-case which is assumed to be injected separately from positional params (e.g. by create_custom_forward()).

move new encoder_attention_mask param to end of CrossAttn block interfaces and Unet2DCondition interface, to maintain positional param interface.

regenerate modeling_text_unet.py

remove unused import

unet_2d_condition encoder_attention_mask docs

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

versatile_diffusion/modeling_text_unet.py encoder_attention_mask docs

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

transformer_2d encoder_attention_mask docs

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

unet_2d_blocks.py: add parameter name comments

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

revert description. bool-to-bias treatment happens in unet_2d_condition only.

comment parameter names

fix copies, style

* encoder_attention_mask for SimpleCrossAttnDownBlock2D, SimpleCrossAttnUpBlock2D

* encoder_attention_mask for UNetMidBlock2DSimpleCrossAttn

* support attention_mask, encoder_attention_mask in KCrossAttnDownBlock2D, KCrossAttnUpBlock2D, KAttentionBlock. fix binding of attention_mask, cross_attention_kwargs params in KCrossAttnDownBlock2D, KCrossAttnUpBlock2D checkpoint invocations.

* fix mistake made during merge conflict resolution

* regenerate versatile_diffusion

* pass time embedding into checkpointed attention invocation

* always assume encoder_attention_mask is a mask (i.e. not a bias).

* style, fix-copies

* add tests for cross-attention masks

* add test for padding of attention mask

* explain mask's query_tokens dim. fix explanation about broadcasting over channels; we actually broadcast over query tokens

* support both masks and biases in Transformer2DModel#forward. document behaviour

* fix-copies

* delete attention_mask docs on the basis I never tested self-attention masking myself. not comfortable explaining it, since I don't actually understand how a self-attn mask can work in its current form: the key length will be different in every ResBlock (we don't downsample the mask when we downsample the image).

* review feedback: the standard Unet blocks shouldn't pass temb to attn (only to resnet). remove from KCrossAttnDownBlock2D,KCrossAttnUpBlock2D#forward.

* remove encoder_attention_mask param from SimpleCrossAttn{Up,Down}Block2D,UNetMidBlock2DSimpleCrossAttn, and mask-choice in those blocks' #forward, on the basis that they only do one type of attention, so the consumer can pass whichever type of attention_mask is appropriate.

* put attention mask padding back to how it was (since the SD use-case it enabled wasn't important, and it breaks the original unclip use-case). disable the test which was added.

* fix-copies

* style

* fix-copies

* put encoder_attention_mask param back into Simple block forward interfaces, to ensure consistency of forward interface.

* restore passing of emb to KAttentionBlock#forward, on the basis that removal caused test failures. restore also the passing of emb to checkpointed calls to KAttentionBlock#forward.

* make simple unet2d blocks use encoder_attention_mask, but only when attention_mask is None. this should fix UnCLIP compatibility.

* fix copies
2023-05-22 17:27:15 +01:00
takuoko
c4359d63e3 [Community] reference only control (#3435)
* add reference only control

* add reference only control

* add reference only control

* fix lint

* fix lint

* reference adain

* bugfix EulerAncestralDiscreteScheduler

* fix style fidelity rule

* fix default output size

* del unused line

* fix deterministic
2023-05-22 16:21:54 +01:00
Hari Krishna
f3d570c273 feat: allow disk offload for diffuser models (#3285)
* allow disk offload for diffuser models

* sort import

* add max_memory argument

* Changed sample[0] to images[0] (#3304)

A pipeline object stores the results in `images` not in `sample`.
Current code blocks don't work.

* Typo in tutorial (#3295)

* Torch compile graph fix (#3286)

* fix more

* Fix more

* fix more

* Apply suggestions from code review

* fix

* make style

* make fix-copies

* fix

* make sure torch compile

* Clean

* fix test

* Postprocessing refactor img2img (#3268)

* refactor img2img VaeImageProcessor.postprocess

* remove copy from for init, run_safety_checker, decode_latents

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

---------

Co-authored-by: yiyixuxu <yixu@yis-macbook-pro.lan>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* [Torch 2.0 compile] Fix more torch compile breaks (#3313)

* Fix more torch compile breaks

* add tests

* Fix all

* fix controlnet

* fix more

* Add Horace He as co-author.

Co-authored-by: Horace He <horacehe2007@yahoo.com>

* Add Horace He as co-author.

Co-authored-by: Horace He <horacehe2007@yahoo.com>

---------

Co-authored-by: Horace He <horacehe2007@yahoo.com>

* fix: scale_lr and sync example readme and docs. (#3299)

* fix: scale_lr and sync example readme and docs.

* fix doc link.

* Update stable_diffusion.mdx (#3310)

fixed import statement

* Fix missing variable assign in DeepFloyd-IF-II (#3315)

Fix missing variable assign

lol

* Correct doc build for patch releases (#3316)

Update build_documentation.yml

* Add Stable Diffusion RePaint to community pipelines (#3320)

* Add Stable Diffusion RePaint to community pipelines

- Adds Stable Diffusion RePaint to community pipelines
- Add README entry for pipeline

* Fix: Remove wrong import

- Remove wrong import
- Minor change in comments

* Fix: Code formatting of stable_diffusion_repaint

* Fix: ruff errors in stable_diffusion_repaint

* Fix multistep dpmsolver for cosine schedule (suitable for deepfloyd-if) (#3314)

* fix multistep dpmsolver for cosine schedule (deepfloyd-if)

* fix a typo

* Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* update all dpmsolver (singlestep, multistep, dpm, dpm++) for cosine noise schedule

* add test, fix style

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* [docs] Improve LoRA docs (#3311)

* update docs

* add to toctree

* apply feedback

* Added input perturbation (#3292)

* Added input perturbation

* Fixed spelling

* Update write_own_pipeline.mdx (#3323)

* update controlling generation doc with latest goodies. (#3321)

* [Quality] Make style (#3341)

* Fix config dpm (#3343)

* Add the SDE variant of DPM-Solver and DPM-Solver++ (#3344)

* add SDE variant of DPM-Solver and DPM-Solver++

* add test

* fix typo

* fix typo

* Add upsample_size to AttnUpBlock2D, AttnDownBlock2D (#3275)

The argument `upsample_size` needs to be added to these modules to allow compatibility with other blocks that require this argument.

* Rename --only_save_embeds to --save_as_full_pipeline (#3206)

* Set --only_save_embeds to False by default

Due to how the option is named, it makes more sense to behave like this.

* Refactor only_save_embeds to save_as_full_pipeline

* [AudioLDM] Generalise conversion script (#3328)

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Fix TypeError when using prompt_embeds and negative_prompt (#2982)

* test: Added test case

* fix: fixed type checking issue on _encode_prompt

* fix: fixed copies consistency

* fix: one copy was not sufficient

* Fix pipeline class on README (#3345)

Update README.md

* Inpainting: typo in docs (#3331)

Typo in docs

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Add `use_Karras_sigmas` to LMSDiscreteScheduler (#3351)

* add karras sigma to lms discrete scheduler

* add test for lms_scheduler karras

* reformat test lms

* Batched load of textual inversions (#3277)

* Batched load of textual inversions

- Only call resize_token_embeddings once per batch as it is the most expensive operation
- Allow pretrained_model_name_or_path and token to be an optional list
- Remove Dict from type annotation pretrained_model_name_or_path as it was not supported in this function
- Add comment that single files (e.g. .pt/.safetensors) are supported
- Add comment for token parameter
- Convert token override log message from warning to info

* Update src/diffusers/loaders.py

Check for duplicate tokens

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update condition for None tokens

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* make fix-copies

* [docs] Fix docstring (#3334)

fix docstring

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* if dreambooth lora (#3360)

* update IF stage I pipelines

add fixed variance schedulers and lora loading

* added kv lora attn processor

* allow loading into alternative lora attn processor

* make vae optional

* throw away predicted variance

* allow loading into added kv lora layer

* allow load T5

* allow pre compute text embeddings

* set new variance type in schedulers

* fix copies

* refactor all prompt embedding code

class prompts are now included in pre-encoding code
max tokenizer length is now configurable
embedding attention mask is now configurable

* fix for when variance type is not defined on scheduler

* do not pre compute validation prompt if not present

* add example test for if lora dreambooth

* add check for train text encoder and pre compute text embeddings

* Postprocessing refactor all others (#3337)

* add text2img

* fix-copies

* add

* add all other pipelines

* add

* add

* add

* add

* add

* make style

* style + fix copies

---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>

* [docs] Improve safetensors docstring (#3368)

* clarify safetensor docstring

* fix typo

* apply feedback

* add: a warning message when using xformers in a PT 2.0 env. (#3365)

* add: a warning message when using xformers in a PT 2.0 env.

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* StableDiffusionInpaintingPipeline - resize image w.r.t height and width (#3322)

* StableDiffusionInpaintingPipeline now resizes input images and masks w.r.t. the passed input height and width. The default is already set to 512. This addresses the common tensor mismatch error. Also moved the type check into the relevant function to keep the main pipeline body tidy.

* Fixed StableDiffusionInpaintingPrepareMaskAndMaskedImageTests

Due to the previous commit these tests were failing, as height and width now need to be passed into the prepare_mask_and_masked_image function. I have updated the code and added a height/width variable per unit test, as this seemed more appropriate than the previous hard-coded solution.

* Added a resolution test to StableDiffusionInpaintPipelineSlowTests

this unit test simply gets the input and resizes it into something that would otherwise fail (e.g. would throw a tensor mismatch error / not a multiple of 8), then passes it through the pipeline and verifies it produces output with the correct dims w.r.t. the passed height and width

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* make style

* [docs] Adapt a model (#3326)

* first draft

* apply feedback

* conv_in.weight thrown away

* [docs] Load safetensors (#3333)

* safetensors

* apply feedback

* apply feedback

* Apply suggestions from code review

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* make style

* [Docs] Fix stable_diffusion.mdx typo (#3398)

Fix typo in last code block. Correct "prommpts" to "prompt"

* Support ControlNet v1.1 shuffle properly (#3340)

* add inferring_controlnet_cond_batch

* Revert "add inferring_controlnet_cond_batch"

This reverts commit abe8d6311d.

* set guess_mode to True
whenever global_pool_conditions is True

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* nit

* add integration test

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* [Tests] better determinism (#3374)

* enable deterministic pytorch and cuda operations.

* disable manual seeding.

* make style && make quality for unet_2d tests.

* enable determinism for the unet2dconditional model.

* add CUBLAS_WORKSPACE_CONFIG for better reproducibility.

* relax tolerance (very weird issue, though).

* revert to torch manual_seed() where needed.

* relax more tolerance.

* better placement of the cuda variable and relax more tolerance.

* enable determinism for 3d condition model.

* relax tolerance.

* add: determinism to alt_diffusion.

* relax tolerance for alt diffusion.

* dance diffusion.

* dance diffusion is flaky.

* test_dict_tuple_outputs_equivalent edit.

* fix two more tests.

* fix more ddim tests.

* fix: argument.

* change to diff in place of difference.

* fix: test_save_load call.

* test_save_load_float16 call.

* fix: expected_max_diff

* fix: paint by example.

* relax tolerance.

* add determinism to 1d unet model.

* torch 2.0 regressions seem to be brutal

* determinism to vae.

* add reason to skipping.

* up tolerance.

* determinism to vq.

* determinism to cuda.

* determinism to the generic test pipeline file.

* refactor general pipelines testing a bit.

* determinism to alt diffusion i2i

* up tolerance for alt diff i2i and audio diff

* up tolerance.

* determinism to audioldm

* increase tolerance for audioldm lms.

* increase tolerance for paint by example.

* increase tolerance for repaint.

* determinism to cycle diffusion and sd 1.

* relax tol for cycle diffusion 🚲

* relax tol for sd 1.0

* relax tol for controlnet.

* determinism to img var.

* relax tol for img variation.

* tolerance to i2i sd

* make style

* determinism to inpaint.

* relax tolerance for inpainting.

* determinism for inpainting legacy

* relax tolerance.

* determinism to instruct pix2pix

* determinism to model editing.

* model editing tolerance.

* panorama determinism

* determinism to pix2pix zero.

* determinism to sag.

* sd 2. determinism

* sd. tolerance

* disallow tf32 matmul.

* relax tolerance is all you need.

* make style and determinism to sd 2 depth

* relax tolerance for depth.

* tolerance to diffedit.

* tolerance to sd 2 inpaint.

* up tolerance.

* determinism in upscaling.

* tolerance in upscaler.

* more tolerance relaxation.

* determinism to v pred.

* up tol for v_pred

* unclip determinism

* determinism to unclip img2img

* determinism to text to video.

* determinism to last set of tests

* up tol.

* vq cumsum doesn't have a deterministic kernel

* relax tol

* relax tol

* [docs] Add transformers to install (#3388)

add transformers to install

* [deepspeed] partial ZeRO-3 support (#3076)

* [deepspeed] partial ZeRO-3 support

* cleanup

* improve deepspeed fixes

* Improve

* make style

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Add omegaconf for tests (#3400)

Add omegaconf

* Fix various bugs with LoRA Dreambooth and Dreambooth script (#3353)

* Improve checkpointing lora

* fix more

* Improve doc string

* Update src/diffusers/loaders.py

* make style

* Apply suggestions from code review

* Update src/diffusers/loaders.py

* Apply suggestions from code review

* Apply suggestions from code review

* better

* Fix all

* Fix multi-GPU dreambooth

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Fix all

* make style

* make style

---------

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Fix docker file (#3402)

* up

* up

* fix: deepspeed_plugin retrieval from accelerate state (#3410)

* [Docs] Add `sigmoid` beta_scheduler to docstrings of relevant Schedulers (#3399)

* Add `sigmoid` beta scheduler to `DDPMScheduler` docstring

* Add `sigmoid` beta scheduler to `RePaintScheduler` docstring

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Don't install accelerate and transformers from source (#3415)

* Don't install transformers and accelerate from source (#3414)

* Improve fast tests (#3416)

Update pr_tests.yml

* attention refactor: the trilogy  (#3387)

* Replace `AttentionBlock` with `Attention`

* use _from_deprecated_attn_block check re: @patrickvonplaten

* [Docs] update the PT 2.0 optimization doc with latest findings (#3370)

* add: benchmarking stats for A100 and V100.

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* address patrick's comments.

* add: rtx 4090 stats

* ⚔ benchmark reports done

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* 3313 pr link.

* add: plots.

Co-authored-by: Pedro <pedro@huggingface.co>

* fix formatting

* update number percent.

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Fix style rendering (#3433)

* Fix style rendering.

* Fix typo

* unCLIP scheduler do not use note (#3417)

* Replace deprecated command with environment file (#3409)

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* fix warning message pipeline loading (#3446)

* add stable diffusion tensorrt img2img pipeline (#3419)

* add stable diffusion tensorrt img2img pipeline

Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>

* update docstrings

Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>

---------

Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>

* Refactor controlnet and add img2img and inpaint (#3386)

* refactor controlnet and add img2img and inpaint

* First draft to get pipelines to work

* make style

* Fix more

* Fix more

* More tests

* Fix more

* Make inpainting work

* make style and more tests

* Apply suggestions from code review

* up

* make style

* Fix imports

* Fix more

* Fix more

* Improve examples

* add test

* Make sure import is correctly deprecated

* Make sure everything works in compile mode

* make sure authorship is correctly attributed

* [Scheduler] DPM-Solver (++) Inverse Scheduler (#3335)

* Add DPM-Solver Multistep Inverse Scheduler

* Add draft tests for DiffEdit

* Add inverse sde-dpmsolver steps to tune image diversity from inverted latents

* Fix tests

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* [Docs] Fix incomplete docstring for resnet.py (#3438)

Fix incomplete docstrings for resnet.py

* fix tiled vae blend extent range (#3384)

fix tiled vae blend extent range

* Small update to "Next steps" section (#3443)

Small update to "Next steps" section:

- PyTorch 2 is recommended.
- Updated improvement figures.

* Allow arbitrary aspect ratio in IFSuperResolutionPipeline (#3298)

* Update pipeline_if_superresolution.py

Allow arbitrary aspect ratio in IFSuperResolutionPipeline by using the input image shape

* IFSuperResolutionPipeline: allow the user to override the height and width through the arguments

* update IFSuperResolutionPipeline width/height doc string to match StableDiffusionInpaintPipeline conventions

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Adding 'strength' parameter to StableDiffusionInpaintingPipeline  (#3424)

* Added explanation of 'strength' parameter

* Added get_timesteps function which relies on new strength parameter

* Added `strength` parameter which defaults to 1.

* Swapped ordering so `noise_timestep` can be calculated before masking the image

this is required when you aren't applying 100% noise to the masked region, e.g. strength < 1.

* Added strength to check_inputs, throws error if out of range

* Changed `prepare_latents` to initialise latents w.r.t strength

inspired from the stable diffusion img2img pipeline, init latents are initialised by converting the init image into a VAE latent and adding noise (based upon the strength parameter passed in), e.g. random when strength = 1, or the init image at strength = 0.

* WIP: Added a unit test for the new strength parameter in the StableDiffusionInpaintingPipeline

still need to add correct regression values

* Created a is_strength_max to initialise from pure random noise

* Updated unit tests w.r.t new strength parameter + fixed new strength unit test

* renamed parameter to avoid confusion with variable of same name

* Updated regression values for new strength test - now passes

* removed 'copied from' comment as this method is now different and divergent from the copy

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Ensure backwards compatibility for prepare_mask_and_masked_image

created a return_image boolean and initialised to false

* Ensure backwards compatibility for prepare_latents

* Fixed copy check typo

* Fixes w.r.t. backward compatibility changes

* make style

* keep function argument ordering same for backwards compatibility in callees with copied from statements

* make fix-copies

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: William Berman <WLBberman@gmail.com>

* [WIP] Bugfix - Pipeline.from_pretrained is broken when the pipeline is partially downloaded (#3448)

Added bugfix using f strings.

* Fix gradient checkpointing bugs in freezing part of models (requires_grad=False) (#3404)

* gradient checkpointing bug fix

* bug fix; changes for reviews

* reformat

* reformat

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Make dreambooth lora more robust to orig unet (#3462)

* Make dreambooth lora more robust to orig unet

* up

* Reduce peak VRAM by releasing large attention tensors (as soon as they're unnecessary) (#3463)

Release large tensors in attention (as soon as they're no longer required). Reduces peak VRAM by nearly 2 GB for 1024x1024 (even after slicing), and the savings scale up with image size.

* Add min snr to text2img lora training script (#3459)

add min snr to text2img lora training script

* Add inpaint lora scale support (#3460)

* add inpaint lora scale support

* add inpaint lora scale test

---------

Co-authored-by: yueyang.hyy <yueyang.hyy@alibaba-inc.com>

* [From ckpt] Fix from_ckpt (#3466)

* Correct from_ckpt

* make style

* Update full dreambooth script to work with IF (#3425)

* Add IF dreambooth docs (#3470)

* parameterize pass single args through tuple (#3477)

* attend and excite tests disable determinism on the class level (#3478)

* dreambooth docs torch.compile note (#3471)

* dreambooth docs torch.compile note

* Update examples/dreambooth/README.md

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update examples/dreambooth/README.md

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* add: if entry in the dreambooth training docs. (#3472)

* [docs] Textual inversion inference (#3473)

* add textual inversion inference to docs

* add to toctree

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* [docs] Distributed inference (#3376)

* distributed inference

* move to inference section

* apply feedback

* update with split_between_processes

* apply feedback

* [{Up,Down}sample1d] explicit view kernel size as number elements in flattened indices (#3479)

explicit view kernel size as number elements in flattened indices

* mps & onnx tests rework (#3449)

* Remove ONNX tests from PR.

They are already a part of push_tests.yml.

* Remove mps tests from PRs.

They are already performed on push.

* Fix workflow name for fast push tests.

* Extract mps tests to a workflow.

For better control/filtering.

* Remove --extra-index-url from mps tests

* Increase tolerance of mps test

This test passes on my Mac (Ventura 13.3) but fails on the CI hardware
(Ventura 13.2). I ran the local tests following the same steps that
exist in the CI workflow.

* Temporarily run mps tests on pr

So we can test.

* Revert "Temporarily run mps tests on pr"

Tests passed, go back to running on push.

---------

Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
Co-authored-by: Ilia Larchenko <41329713+IliaLarchenko@users.noreply.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: yiyixuxu <yixu@yis-macbook-pro.lan>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Horace He <horacehe2007@yahoo.com>
Co-authored-by: Umar <55330742+mu94-csl@users.noreply.github.com>
Co-authored-by: Mylo <36931363+gitmylo@users.noreply.github.com>
Co-authored-by: Markus Pobitzer <markuspobitzer@gmail.com>
Co-authored-by: Cheng Lu <lucheng.lc15@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: Isamu Isozaki <isamu.website@gmail.com>
Co-authored-by: Cesar Aybar <csaybar@gmail.com>
Co-authored-by: Will Rice <will@spokestack.io>
Co-authored-by: Adrià Arrufat <1671644+arrufat@users.noreply.github.com>
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
Co-authored-by: At-sushi <dkahw210@kyoto.zaq.ne.jp>
Co-authored-by: Lucca Zenóbio <luccazen@gmail.com>
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
Co-authored-by: Isotr0py <41363108+Isotr0py@users.noreply.github.com>
Co-authored-by: pdoane <pdoane2@gmail.com>
Co-authored-by: Will Berman <wlbberman@gmail.com>
Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: Rupert Menneer <71332436+rupertmenneer@users.noreply.github.com>
Co-authored-by: sudowind <wfpkueecs@163.com>
Co-authored-by: Takuma Mori <takuma104@gmail.com>
Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Laureηt <laurentfainsin@protonmail.com>
Co-authored-by: Jongwoo Han <jongwooo.han@gmail.com>
Co-authored-by: asfiyab-nvidia <117682710+asfiyab-nvidia@users.noreply.github.com>
Co-authored-by: clarencechen <clarencechenct@gmail.com>
Co-authored-by: Laureηt <laurent@fainsin.bzh>
Co-authored-by: superlabs-dev <133080491+superlabs-dev@users.noreply.github.com>
Co-authored-by: Dev Aggarwal <devxpy@gmail.com>
Co-authored-by: Vimarsh Chaturvedi <vimarsh.c@gmail.com>
Co-authored-by: 7eu7d7 <31194890+7eu7d7@users.noreply.github.com>
Co-authored-by: cmdr2 <shashank.shekhar.global@gmail.com>
Co-authored-by: wfng92 <43742196+wfng92@users.noreply.github.com>
Co-authored-by: Glaceon-Hyy <ffheyy0017@gmail.com>
Co-authored-by: yueyang.hyy <yueyang.hyy@alibaba-inc.com>
2023-05-22 16:11:08 +01:00
Patrick von Platen
2b56e8ca68 make style 2023-05-22 16:49:46 +02:00
Ambrosiussen
b8b5daaee3 DataLoader respecting EXIF data in Training Images (#3465)
* DataLoader will now bake in any transforms or image manipulations contained in the EXIF

Images may have rotations stored in EXIF. Training on such images would otherwise ignore those transforms and thus produce unexpected results

* Fixed the Dataloading EXIF issue in main DreamBooth training as well

* Run make style (black & isort)
2023-05-22 15:49:35 +01:00
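The EXIF fix above amounts to normalizing image orientation before the training transforms run. A minimal sketch of that idea (not the exact training-script code), assuming PIL's `ImageOps.exif_transpose` is used:

```python
from PIL import Image, ImageOps

def load_training_image(path):
    image = Image.open(path)
    # exif_transpose applies the EXIF orientation tag (rotation/flip) so the
    # pixel data matches how viewers display the photo.
    image = ImageOps.exif_transpose(image)
    return image.convert("RGB")
```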
Seongsu Park
229fd8cbca [Docs] Korean translation (optimization, training) (#3488)
* feat) optimization kr translation

* fix) typo, italic setting

* feat) dreambooth, text2image kr

* feat) lora kr

* fix) LoRA

* fix) fp16 fix

* fix) doc-builder style

* fix) fp16 일부 단어 수정

* fix) fp16 style fix

* fix) opt, training docs update

* feat) toctree update

* feat) toctree update

---------

Co-authored-by: Chanran Kim <seriousran@gmail.com>
2023-05-22 15:46:16 +01:00
Patrick von Platen
a2874af297 make style 2023-05-22 16:44:48 +02:00
w4ffl35
0160e5146f Adds local_files_only bool to prevent forced online connection (#3486) 2023-05-22 15:44:36 +01:00
Isotr0py
194b0a425d Add use_Karras_sigmas to DPMSolverSinglestepScheduler (#3476)
* add use_karras_sigmas

* add karras test

* add doc
2023-05-22 15:43:56 +01:00
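For reference, opting into the new option from user code looks roughly like this (the model id is just an example):

```python
from diffusers import DiffusionPipeline, DPMSolverSinglestepScheduler

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# Rebuild the scheduler from the pipeline's config, enabling Karras sigmas.
pipe.scheduler = DPMSolverSinglestepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)
```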
Patrick von Platen
6dd3871ae0 Fix DPM single (#3413)
* Fix DPM single

* add test

* fix one more bug

* Apply suggestions from code review

Co-authored-by: StAlKeR7779 <stalkek7779@yandex.ru>

---------

Co-authored-by: StAlKeR7779 <stalkek7779@yandex.ru>
2023-05-22 14:32:39 +01:00
Patrick von Platen
51843fd7d0 Refactor full determinism (#3485)
* up

* fix more

* Apply suggestions from code review

* fix more

* fix more

* Check it

* Remove 16:8

* fix more

* fix more

* fix more

* up

* up

* Test only stable diffusion

* Test only two files

* up

* Try out spinning up processes that can be killed

* up

* Apply suggestions from code review

* up

* up
2023-05-22 11:15:11 +01:00
Sayak Paul
49ad61c204 [Docs] add note on local directory path. (#3397)
add note on local directory path.

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-05-21 15:26:56 +05:30
Sayak Paul
4bbc51d94d [Attention processor] Better warning message when shifting to AttnProcessor2_0 (#3457)
* add: debugging to enabling memory efficient processing

* add: better warning message.
2023-05-21 15:26:47 +05:30
Pedro Cuenca
f7b4f51cc2 mps & onnx tests rework (#3449)
* Remove ONNX tests from PR.

They are already a part of push_tests.yml.

* Remove mps tests from PRs.

They are already performed on push.

* Fix workflow name for fast push tests.

* Extract mps tests to a workflow.

For better control/filtering.

* Remove --extra-index-url from mps tests

* Increase tolerance of mps test

This test passes on my Mac (Ventura 13.3) but fails on the CI hardware
(Ventura 13.2). I ran the local tests following the same steps that
exist in the CI workflow.

* Temporarily run mps tests on pr

So we can test.

* Revert "Temporarily run mps tests on pr"

Tests passed, go back to running on push.
2023-05-20 13:43:07 +02:00
Will Berman
85eff637aa [{Up,Down}sample1d] explicit view kernel size as number elements in flattened indices (#3479)
explicit view kernel size as number elements in flattened indices
2023-05-19 10:45:56 -07:00
Steven Liu
e589bdb956 [docs] Distributed inference (#3376)
* distributed inference

* move to inference section

* apply feedback

* update with split_between_processes

* apply feedback
2023-05-19 10:07:33 -07:00
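The distributed-inference doc above centers on Accelerate's `split_between_processes`; a condensed sketch of the pattern (prompts and model id are placeholders):

```python
from accelerate import PartialState
from diffusers import DiffusionPipeline

state = PartialState()  # one per process; start with `accelerate launch`
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to(state.device)

# Each process receives its own slice of the prompt list.
with state.split_between_processes(["a cat", "a dog", "a frog"]) as prompts:
    for prompt in prompts:
        pipe(prompt).images[0].save(f"{prompt.replace(' ', '_')}.png")
```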
Steven Liu
00c76f6ff1 [docs] Textual inversion inference (#3473)
* add textual inversion inference to docs

* add to toctree

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-05-19 09:47:27 -07:00
Sayak Paul
e343443565 add: if entry in the dreambooth training docs. (#3472) 2023-05-19 07:47:28 +05:30
Will Berman
8d646f2294 dreambooth docs torch.compile note (#3471)
* dreambooth docs torch.compile note

* Update examples/dreambooth/README.md

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update examples/dreambooth/README.md

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-05-19 07:40:14 +05:30
Will Berman
8917769499 attend and excite tests disable determinism on the class level (#3478) 2023-05-18 10:24:49 -07:00
Will Berman
49b7ccfb96 parameterize pass single args through tuple (#3477) 2023-05-18 10:14:29 -07:00
Will Berman
7200985eab Add IF dreambooth docs (#3470) 2023-05-17 11:56:10 -07:00
Will Berman
c9f939bf98 Update full dreambooth script to work with IF (#3425) 2023-05-17 10:42:20 -07:00
Patrick von Platen
2858d7e15e [From ckpt] Fix from_ckpt (#3466)
* Correct from_ckpt

* make style
2023-05-17 13:26:53 +01:00
Glaceon-Hyy
88295f92d9 Add inpaint lora scale support (#3460)
* add inpaint lora scale support

* add inpaint lora scale test

---------

Co-authored-by: yueyang.hyy <yueyang.hyy@alibaba-inc.com>
2023-05-17 16:58:19 +05:30
wfng92
2faf91dbde Add min snr to text2img lora training script (#3459)
add min snr to text2img lora training script
2023-05-17 16:37:45 +05:30
cmdr2
bd78f63a54 Reduce peak VRAM by releasing large attention tensors (as soon as they're unnecessary) (#3463)
Release large tensors in attention (as soon as they're no longer required). Reduces peak VRAM by nearly 2 GB for 1024x1024 (even after slicing), and the savings scale up with image size.
2023-05-17 11:24:59 +01:00
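A minimal sketch of the memory trick described above (illustrative, not the library's exact attention processor): drop large intermediates the moment they are no longer needed so peak VRAM tracks only the live tensors.

```python
import torch

def attention(query, key, value, scale):
    # query: (B, L, D), key/value: (B, S, D)
    scores = torch.baddbmm(
        torch.empty(query.shape[0], query.shape[1], key.shape[1],
                    device=query.device, dtype=query.dtype),
        query, key.transpose(-1, -2), beta=0, alpha=scale,
    )
    probs = scores.softmax(dim=-1)
    del scores  # raw scores are no longer needed once softmax is taken
    out = torch.bmm(probs, value)
    del probs   # likewise for the attention probabilities
    return out
```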
Patrick von Platen
3ebd2d1f9e Make dreambooth lora more robust to orig unet (#3462)
* Make dreambooth lora more robust to orig unet

* up
2023-05-17 11:20:13 +01:00
7eu7d7
15f1bab13b Fix gradient checkpointing bugs in freezing part of models (requires_grad=False) (#3404)
* gradient checkpointing bug fix

* bug fix; changes for reviews

* reformat

* reformat

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-05-17 11:06:04 +01:00
Vimarsh Chaturvedi
415c616712 [WIP] Bugfix - Pipeline.from_pretrained is broken when the pipeline is partially downloaded (#3448)
Added bugfix using f strings.
2023-05-17 11:05:33 +01:00
Rupert Menneer
c09c4f3ab7 Adding 'strength' parameter to StableDiffusionInpaintingPipeline (#3424)
* Added explanation of 'strength' parameter

* Added get_timesteps function which relies on new strength parameter

* Added `strength` parameter which defaults to 1.

* Swapped ordering so `noise_timestep` can be calculated before masking the image

this is required when you aren't applying 100% noise to the masked region, e.g. strength < 1.

* Added strength to check_inputs, throws error if out of range

* Changed `prepare_latents` to initialise latents w.r.t strength

inspired from the stable diffusion img2img pipeline, init latents are initialised by converting the init image into a VAE latent and adding noise (based upon the strength parameter passed in), e.g. random when strength = 1, or the init image at strength = 0.

* WIP: Added a unit test for the new strength parameter in the StableDiffusionInpaintingPipeline

still need to add correct regression values

* Created a is_strength_max to initialise from pure random noise

* Updated unit tests w.r.t new strength parameter + fixed new strength unit test

* renamed parameter to avoid confusion with variable of same name

* Updated regression values for new strength test - now passes

* removed 'copied from' comment as this method is now different and divergent from the copy

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Ensure backwards compatibility for prepare_mask_and_masked_image

created a return_image boolean and initialised to false

* Ensure backwards compatibility for prepare_latents

* Fixed copy check typo

* Fixes w.r.t. backward compatibility changes

* make style

* keep function argument ordering same for backwards compatibility in callees with copied from statements

* make fix-copies

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: William Berman <WLBberman@gmail.com>
2023-05-17 11:05:16 +01:00
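The commit above describes img2img-style latent initialization driven by `strength`. A rough, simplified sketch of that logic (helper names and details are illustrative, not the pipeline's exact code):

```python
import torch

def prepare_latents(vae, scheduler, init_image, strength, num_inference_steps, generator=None):
    # strength == 1.0 -> start from pure noise; strength < 1.0 -> start from the
    # encoded init image with noise added for only part of the schedule.
    scheduler.set_timesteps(num_inference_steps)
    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    t_start = max(num_inference_steps - init_timestep, 0)
    timesteps = scheduler.timesteps[t_start:]

    latents = vae.encode(init_image).latent_dist.sample(generator) * vae.config.scaling_factor
    noise = torch.randn(latents.shape, generator=generator,
                        dtype=latents.dtype, device=latents.device)

    if strength >= 1.0:
        latents = noise * scheduler.init_noise_sigma                   # pure random noise
    else:
        latents = scheduler.add_noise(latents, noise, timesteps[:1])   # partially noised image
    return latents, timesteps
```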
Dev Aggarwal
6070b32fcf Allow arbitrary aspect ratio in IFSuperResolutionPipeline (#3298)
* Update pipeline_if_superresolution.py

Allow arbitrary aspect ratio in IFSuperResolutionPipeline by using the input image shape

* IFSuperResolutionPipeline: allow the user to override the height and width through the arguments

* update IFSuperResolutionPipeline width/height doc string to match StableDiffusionInpaintPipeline conventions

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-05-16 19:21:07 -07:00
Pedro Cuenca
0392eceba8 Small update to "Next steps" section (#3443)
Small update to "Next steps" section:

- PyTorch 2 is recommended.
- Updated improvement figures.
2023-05-16 19:35:47 +01:00
superlabs-dev
92ea5baca2 fix tiled vae blend extent range (#3384)
fix tiled vae blend extent range
2023-05-16 19:33:47 +01:00
Laureηt
754fac82d2 [Docs] Fix incomplete docstring for resnet.py (#3438)
Fix incomplete docstrings for resnet.py
2023-05-16 19:33:34 +01:00
clarencechen
17f9aed79c [Scheduler] DPM-Solver (++) Inverse Scheduler (#3335)
* Add DPM-Solver Multistep Inverse Scheduler

* Add draft tests for DiffEdit

* Add inverse sde-dpmsolver steps to tune image diversity from inverted latents

* Fix tests

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-05-16 19:26:53 +01:00
Patrick von Platen
886575ee43 Refactor controlnet and add img2img and inpaint (#3386)
* refactor controlnet and add img2img and inpaint

* First draft to get pipelines to work

* make style

* Fix more

* Fix more

* More tests

* Fix more

* Make inpainting work

* make style and more tests

* Apply suggestions from code review

* up

* make style

* Fix imports

* Fix more

* Fix more

* Improve examples

* add test

* Make sure import is correctly deprecated

* Make sure everything works in compile mode

* make sure authorship is correctly attributed
2023-05-16 19:07:21 +01:00
asfiyab-nvidia
9d44e2fb66 add stable diffusion tensorrt img2img pipeline (#3419)
* add stable diffusion tensorrt img2img pipeline

Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>

* update docstrings

Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>

---------

Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
2023-05-16 14:28:01 +01:00
Patrick von Platen
d2285f5158 fix warning message pipeline loading (#3446) 2023-05-16 12:58:24 +01:00
Jongwoo Han
326f326e17 Replace deprecated command with environment file (#3409)
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-05-16 12:51:10 +01:00
Will Berman
29b1325a5a unCLIP scheduler do not use note (#3417) 2023-05-15 09:47:14 -06:00
Pedro Cuenca
7a32b6beeb Fix style rendering (#3433)
* Fix style rendering.

* Fix typo
2023-05-15 14:32:34 +05:30
Sayak Paul
bdefabd1a8 [Docs] update the PT 2.0 optimization doc with latest findings (#3370)
* add: benchmarking stats for A100 and V100.

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* address patrick's comments.

* add: rtx 4090 stats

* ⚔ benchmark reports done

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* 3313 pr link.

* add: plots.

Co-authored-by: Pedro <pedro@huggingface.co>

* fix formatting

* update number percent.

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-05-13 15:12:01 +05:30
Will Berman
909742dbd6 attention refactor: the trilogy (#3387)
* Replace `AttentionBlock` with `Attention`

* use _from_deprecated_attn_block check re: @patrickvonplaten
2023-05-12 08:54:09 -06:00
Patrick von Platen
28f404349d Improve fast tests (#3416)
Update pr_tests.yml
2023-05-12 14:01:03 +01:00
Patrick von Platen
03e5126978 Don't install transformers and accelerate from source (#3414) 2023-05-12 13:15:23 +01:00
Patrick von Platen
b1b92f4a98 Don't install accelerate and transformers from source (#3415) 2023-05-12 13:14:04 +01:00
Laureηt
7f6373d264 [Docs] Add sigmoid beta_scheduler to docstrings of relevant Schedulers (#3399)
* Add `sigmoid` beta scheduler to `DDPMScheduler` docstring

* Add `sigmoid` beta scheduler to `RePaintScheduler` docstring

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-05-12 12:48:26 +01:00
Sayak Paul
3a237f4fa2 fix: deepspeed_plugin retrieval from accelerate state (#3410) 2023-05-12 10:02:22 +01:00
Patrick von Platen
1a5797c6d4 Fix docker file (#3402)
* up

* up
2023-05-11 20:28:37 +01:00
Patrick von Platen
f92253015c Fix various bugs with LoRA Dreambooth and Dreambooth script (#3353)
* Improve checkpointing lora

* fix more

* Improve doc string

* Update src/diffusers/loaders.py

* make style

* Apply suggestions from code review

* Update src/diffusers/loaders.py

* Apply suggestions from code review

* Apply suggestions from code review

* better

* Fix all

* Fix multi-GPU dreambooth

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Fix all

* make style

* make style

---------

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-05-11 19:28:09 +01:00
Patrick von Platen
58c6f9cb71 Add omegaconf for tests (#3400)
Add omegaconf
2023-05-11 18:03:27 +01:00
Stas Bekman
af2a237676 [deepspeed] partial ZeRO-3 support (#3076)
* [deepspeed] partial ZeRO-3 support

* cleanup

* improve deepspeed fixes

* Improve

* make style

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-05-11 16:59:20 +01:00
Steven Liu
d71db894eb [docs] Add transformers to install (#3388)
add transformers to install
2023-05-11 08:52:28 -07:00
Sayak Paul
90f5f3c4d4 [Tests] better determinism (#3374)
* enable deterministic pytorch and cuda operations.

* disable manual seeding.

* make style && make quality for unet_2d tests.

* enable determinism for the unet2dconditional model.

* add CUBLAS_WORKSPACE_CONFIG for better reproducibility.

* relax tolerance (very weird issue, though).

* revert to torch manual_seed() where needed.

* relax more tolerance.

* better placement of the cuda variable and relax more tolerance.

* enable determinism for 3d condition model.

* relax tolerance.

* add: determinism to alt_diffusion.

* relax tolerance for alt diffusion.

* dance diffusion.

* dance diffusion is flaky.

* test_dict_tuple_outputs_equivalent edit.

* fix two more tests.

* fix more ddim tests.

* fix: argument.

* change to diff in place of difference.

* fix: test_save_load call.

* test_save_load_float16 call.

* fix: expected_max_diff

* fix: paint by example.

* relax tolerance.

* add determinism to 1d unet model.

* torch 2.0 regressions seem to be brutal

* determinism to vae.

* add reason to skipping.

* up tolerance.

* determinism to vq.

* determinism to cuda.

* determinism to the generic test pipeline file.

* refactor general pipelines testing a bit.

* determinism to alt diffusion i2i

* up tolerance for alt diff i2i and audio diff

* up tolerance.

* determinism to audioldm

* increase tolerance for audioldm lms.

* increase tolerance for paint by example.

* increase tolerance for repaint.

* determinism to cycle diffusion and sd 1.

* relax tol for cycle diffusion 🚲

* relax tol for sd 1.0

* relax tol for controlnet.

* determinism to img var.

* relax tol for img variation.

* tolerance to i2i sd

* make style

* determinism to inpaint.

* relax tolerance for inpainting.

* determinism for inpainting legacy

* relax tolerance.

* determinism to instruct pix2pix

* determinism to model editing.

* model editing tolerance.

* panorama determinism

* determinism to pix2pix zero.

* determinism to sag.

* sd 2. determinism

* sd. tolerance

* disallow tf32 matmul.

* relax tolerance is all you need.

* make style and determinism to sd 2 depth

* relax tolerance for depth.

* tolerance to diffedit.

* tolerance to sd 2 inpaint.

* up tolerance.

* determinism in upscaling.

* tolerance in upscaler.

* more tolerance relaxation.

* determinism to v pred.

* up tol for v_pred

* unclip determinism

* determinism to unclip img2img

* determinism to text to video.

* determinism to last set of tests

* up tol.

* vq cumsum doesn't have a deterministic kernel

* relax tol

* relax tol
2023-05-11 16:38:14 +01:00
Takuma Mori
01c056f094 Support ControlNet v1.1 shuffle properly (#3340)
* add inferring_controlnet_cond_batch

* Revert "add inferring_controlnet_cond_batch"

This reverts commit abe8d6311d.

* set guess_mode to True
whenever global_pool_conditions is True

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* nit

* add integration test

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-05-11 14:58:07 +01:00
sudowind
e0b56d2b18 [Docs] Fix stable_diffusion.mdx typo (#3398)
Fix typo in last code block. Correct "prommpts" to "prompt"
2023-05-11 15:10:16 +02:00
Patrick von Platen
f740d357c9 make style 2023-05-11 11:31:49 +02:00
Steven Liu
5e746753d6 [docs] Load safetensors (#3333)
* safetensors

* apply feedback

* apply feedback

* Apply suggestions from code review

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-05-11 10:31:27 +01:00
Steven Liu
c49e9ede4d [docs] Adapt a model (#3326)
* first draft

* apply feedback

* conv_in.weight thrown away
2023-05-10 16:02:48 -07:00
Patrick von Platen
82e6fa56f0 make style 2023-05-10 20:16:18 +02:00
Rupert Menneer
edb087a217 StableDiffusionInpaintingPipeline - resize image w.r.t height and width (#3322)
* StableDiffusionInpaintingPipeline now resizes input images and masks w.r.t. the passed input height and width. Default is already set to 512. This addresses the common tensor mismatch error. Also moved the type check into the relevant function to keep the main pipeline body tidy.

* Fixed StableDiffusionInpaintingPrepareMaskAndMaskedImageTests

Due to the previous commit these tests were failing, as height and width now need to be passed into the prepare_mask_and_masked_image function. I have updated the code and added a height/width variable per unit test, which seemed more appropriate than the previous hard-coded solution

* Added a resolution test to StableDiffusionInpaintPipelineSlowTests

this unit test simply takes the input and resizes it to a size that would otherwise fail (e.g. would throw a tensor mismatch error / not a multiple of 8), then passes it through the pipeline and verifies that it produces output with the correct dimensions w.r.t. the passed height and width

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-05-10 19:14:25 +01:00
Sayak Paul
94a0c644a8 add: a warning message when using xformers in a PT 2.0 env. (#3365)
* add: a warning message when using xformers in a PT 2.0 env.

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-05-10 07:22:04 +05:30
Steven Liu
26832aa5ef [docs] Improve safetensors docstring (#3368)
* clarify safetensor docstring

* fix typo

* apply feedback
2023-05-09 16:15:05 -07:00
YiYi Xu
c559479592 Postprocessing refactor all others (#3337)
* add text2img

* fix-copies

* add

* add all other pipelines

* add

* add

* add

* add

* add

* make style

* style + fix copies

---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2023-05-09 22:28:30 +01:00
Will Berman
a757b2db6e if dreambooth lora (#3360)
* update IF stage I pipelines

add fixed variance schedulers and lora loading

* added kv lora attn processor

* allow loading into alternative lora attn processor

* make vae optional

* throw away predicted variance

* allow loading into added kv lora layer

* allow load T5

* allow pre compute text embeddings

* set new variance type in schedulers

* fix copies

* refactor all prompt embedding code

class prompts are now included in pre-encoding code
max tokenizer length is now configurable
embedding attention mask is now configurable

* fix for when variance type is not defined on scheduler

* do not pre compute validation prompt if not present

* add example test for if lora dreambooth

* add check for train text encoder and pre compute text embeddings
2023-05-09 10:24:36 -07:00
Steven Liu
571bc1ea11 [docs] Fix docstring (#3334)
fix docstring

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-05-08 12:08:23 -07:00
Patrick von Platen
f381402ec8 make fix-copies 2023-05-08 10:55:02 +02:00
pdoane
3d8b3d7cd8 Batched load of textual inversions (#3277)
* Batched load of textual inversions

- Only call resize_token_embeddings once per batch as it is the most expensive operation
- Allow pretrained_model_name_or_path and token to be an optional list
- Remove Dict from type annotation pretrained_model_name_or_path as it was not supported in this function
- Add comment that single files (e.g. .pt/.safetensors) are supported
- Add comment for token parameter
- Convert token override log message from warning to info

* Update src/diffusers/loaders.py

Check for duplicate tokens

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update condition for None tokens

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-05-08 09:54:30 +01:00
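In user code, the batched API described above looks roughly like this (repo ids, file paths, and tokens are placeholders):

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# Passing lists loads both inversions in one call, so the expensive
# resize_token_embeddings runs once for the whole batch.
pipe.load_textual_inversion(
    ["sd-concepts-library/cat-toy", "./my-style.safetensors"],
    token=["<cat-toy>", "<my-style>"],
)
```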
Isotr0py
0ffac97933 Add use_Karras_sigmas to LMSDiscreteScheduler (#3351)
* add karras sigma to lms discrete scheduler

* add test for lms_scheduler karras

* reformat test lms
2023-05-06 12:19:27 +01:00
Lysandre Debut
b0966f5801 Inpainting: typo in docs (#3331)
Typo in docs

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-05-06 12:13:33 +01:00
Lucca Zenóbio
0407c3e7d0 Fix pipeline class on README (#3345)
Update README.md
2023-05-06 12:06:52 +01:00
At-sushi
7ce3fa010a Fix TypeError when using prompt_embeds and negative_prompt (#2982)
* test: Added test case

* fix: fixed type checking issue on _encode_prompt

* fix: fixed copies consistency

* fix: one copy was not sufficient
2023-05-06 12:04:07 +01:00
Sanchit Gandhi
abd86d1c17 [AudioLDM] Generalise conversion script (#3328)
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-05-06 12:00:42 +01:00
Adrià Arrufat
e9aa0925a8 Rename --only_save_embeds to --save_as_full_pipeline (#3206)
* Set --only_save_embeds to False by default

Due to how the option is named, it makes more sense to behave like this.

* Refactor only_save_embeds to save_as_full_pipeline
2023-05-06 12:00:30 +01:00
Will Rice
36f43ea75a Add upsample_size to AttnUpBlock2D, AttnDownBlock2D (#3275)
The argument `upsample_size` needs to be added to these modules to allow compatibility with other blocks that require this argument.
2023-05-05 19:50:41 +01:00
Cheng Lu
27522b585b Add the SDE variant of DPM-Solver and DPM-Solver++ (#3344)
* add SDE variant of DPM-Solver and DPM-Solver++

* add test

* fix typo

* fix typo
2023-05-05 16:03:47 +01:00
Patrick von Platen
8d4c7d0ea0 Fix config dpm (#3343) 2023-05-05 12:02:33 +01:00
Patrick von Platen
29ad75dc3b [Quality] Make style (#3341) 2023-05-05 10:06:09 +01:00
Sayak Paul
379197a2f0 update controlling generation doc with latest goodies. (#3321) 2023-05-05 11:22:29 +05:30
Cesar Aybar
79c0e24a14 Update write_own_pipeline.mdx (#3323) 2023-05-04 10:58:27 -07:00
Isamu Isozaki
fa9e35fca4 Added input perturbation (#3292)
* Added input perturbation

* Fixed spelling
2023-05-04 18:12:32 +05:30
Steven Liu
4bae76e453 [docs] Improve LoRA docs (#3311)
* update docs

* add to toctree

* apply feedback
2023-05-04 11:28:44 +05:30
Cheng Lu
022479416f Fix multistep dpmsolver for cosine schedule (suitable for deepfloyd-if) (#3314)
* fix multistep dpmsolver for cosine schedule (deepfloy-if)

* fix a typo

* Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* update all dpmsolver (singlestep, multistep, dpm, dpm++) for cosine noise schedule

* add test, fix style

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-05-03 18:00:59 +01:00
Markus Pobitzer
2dd408504a Add Stable Diffusion RePaint to community pipelines (#3320)
* Add Stable Diffusion RePaint to community pipelines

- Adds Stable Diffusion RePaint to community pipelines
- Add README entry for pipeline

* Fix: Remove wrong import

- Remove wrong import
- Minor change in comments

* Fix: Code formatting of stable_diffusion_repaint

* Fix: ruff errors in stable_diffusion_repaint
2023-05-03 17:59:49 +01:00
Patrick von Platen
79bd909dbd Correct doc build for patch releases (#3316)
Update build_documentation.yml
2023-05-03 17:33:41 +01:00
Mylo
63a8ef7b73 Fix missing variable assign in DeepFloyd-IF-II (#3315)
Fix missing variable assign

lol
2023-05-03 17:31:04 +01:00
Umar
0ccad2ad2d Update stable_diffusion.mdx (#3310)
fixed import statement
2023-05-03 15:53:14 +01:00
Sayak Paul
efc48da23b fix: scale_lr and sync example readme and docs. (#3299)
* fix: scale_lr and sync example readme and docs.

* fix doc link.
2023-05-03 10:13:05 +05:30
Patrick von Platen
5c7a35a259 [Torch 2.0 compile] Fix more torch compile breaks (#3313)
* Fix more torch compile breaks

* add tests

* Fix all

* fix controlnet

* fix more

* Add Horace He as co-author.
Co-authored-by: Horace He <horacehe2007@yahoo.com>

* Add Horace He as co-author.

Co-authored-by: Horace He <horacehe2007@yahoo.com>

---------

Co-authored-by: Horace He <horacehe2007@yahoo.com>
2023-05-02 18:51:00 +01:00
YiYi Xu
a7f25b4a88 Postprocessing refactor img2img (#3268)
* refactor img2img VaeImageProcessor.postprocess

* remove copy from for init, run_safety_checker, decode_latents

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

---------

Co-authored-by: yiyixuxu <yixu@yis-macbook-pro.lan>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-05-01 07:54:09 -10:00
Patrick von Platen
0e82fb19e1 Torch compile graph fix (#3286)
* fix more

* Fix more

* fix more

* Apply suggestions from code review

* fix

* make style

* make fix-copies

* fix

* make sure torch compile

* Clean

* fix test
2023-05-01 16:45:43 +02:00
Ilia Larchenko
709cf554f6 Typo in tutorial (#3295) 2023-05-01 15:44:30 +02:00
Ilia Larchenko
536684eb2f Changed sample[0] to images[0] (#3304)
A pipeline object stores the results in `images` not in `sample`.
Current code blocks don't work.
2023-05-01 15:33:51 +02:00
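For context, the corrected call pattern from that commit is simply:

```python
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# Results live on the `images` attribute of the pipeline output, not `sample`.
image = pipe("an astronaut riding a horse").images[0]
image.save("astronaut.png")
```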
Will Berman
384c83aa9a temp disable spectogram diffusion tests (#3278)
The note-seq package throws an error on import because the default installed version of IPython
is not compatible with Python 3.8, which we run in the CI.
https://github.com/huggingface/diffusers/actions/runs/4830121056/jobs/8605954838#step:7:9
2023-04-28 12:05:53 -07:00
YiYi Xu
14b460614b [doc] add link to training script (#3271)
add link to training script

Co-authored-by: yiyixuxu <yixu@yis-macbook-pro.lan>
2023-04-28 07:14:30 -10:00
Patrick von Platen
4d35d7fea3 Allow disabling torch 2_0 attention (#3273)
* Allow disabling torch 2_0 attention

* make style

* Update src/diffusers/models/attention.py
2023-04-28 13:31:11 +02:00
Jason Kuan
a7b0671c07 add constant learning rate with custom rule (#3133)
* add constant lr with rules

* add constant with rules in TYPE_TO_SCHEDULER_FUNCTION

* add constant lr rate with rule

* hotfix code quality

* fix doc style

* change name constant_with_rules to piecewise constant
2023-04-28 16:29:56 +05:30
clarencechen
be0bfcec4d Diffedit Zero-Shot Inpainting Pipeline (#2837)
* Update Pix2PixZero Auto-correlation Loss

* Add Stable Diffusion DiffEdit pipeline

* Add draft documentation and import code

* Bugfixes and refactoring

* Add option to not decode latents in the inversion process

* Harmonize preprocessing

* Revert "Update Pix2PixZero Auto-correlation Loss"

This reverts commit b218062fed.

* Update annotations

* rename `compute_mask` to `generate_mask`

* Update documentation

* Update docs

* Update Docs

* Fix copy

* Change shape of output latents to batch first

* Update docs

* Add first draft for tests

* Bugfix and update tests

* Add `cross_attention_kwargs` support for all pipeline methods

* Fix Copies

* Add support for PIL image latents

Add support for mask broadcasting

Update docs and tests

Align `mask` argument to `mask_image`

Remove height and width arguments

* Enable MPS Tests

* Move example docstrings

* Fix test

* Fix test

* fix pipeline inheritance

* Harmonize `prepare_image_latents` with StableDiffusionPix2PixZeroPipeline

* Register modules set to `None` in config for `test_save_load_optional_components`

* Move fixed logic to specific test class

* Clean changes to other pipelines

* Update new tests to coordinate with #2953

* Update slow tests for better results

* Safety to avoid potential problems with torch.inference_mode

* Add reference in SD Pipeline Overview

* Fix tests again

* Enforce determinism in noise for generate_mask

* Fix copies

* Widen test tolerance for fp16 based on `test_stable_diffusion_upscale_pipeline_fp16`

* Add LoraLoaderMixin and update `prepare_image_latents`

* clean up repeat and reg

* bugfix

* Remove invalid args from docs

Suppress spurious warning by repeating image before latent to mask gen
2023-04-28 16:28:26 +05:30
Patrick von Platen
d464214464 Let's make sure that dreambooth always uploads to the Hub (#3272)
* Update Dreambooth README

* Adapt all docs as well

* automatically write model card

* fix

* make style
2023-04-28 11:39:50 +01:00
timegate
6290668254 Add multiple conditions to StableDiffusionControlNetInpaintPipeline (#3125)
* try multi controlnet inpaint

* multi controlnet inpaint

* multi controlnet inpaint
2023-04-28 10:58:10 +01:00
M. Tolga Cangöz
73cc43109b Update logging.mdx (#2863)
Fix typos
2023-04-28 10:57:27 +01:00
NimenDavid
0614fd2038 [Docs]zh translated docs update (#3245)
* zh translated docs update

* update _toctree
2023-04-28 10:23:02 +01:00
Joqsan
462b4edd31 [Community Pipelines] EDICT pipeline implementation (#3153)
* EDICT pipeline initial commit

- Starting point taken from https://github.com/Joqsan/edict-diffusion

* refactor __init__() method

* minor refactoring

* refactor scheduler code

- remove scheduler and move its methods to the EDICTPipeline class

* make CFG optional
- refactor encode_prompt().
- include optional generator for sampling with vae.
- minor variable renaming

* add EDICT pipeline description to README.md

* replace preprocess() with VaeImageProcessor

* run make style and make quality commands

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-04-28 10:11:29 +01:00
Sayak Paul
71de5b7051 [LoRA] quality of life improvements in the loading semantics and docs (#3180)
* 👽 qol improvements for LoRA.

* better function name?

* fix: LoRA weight loading with the new format.

* address Patrick's comments.

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* change wording around encouraging the use of load_lora_weights().

* fix: function name.

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-04-28 11:36:49 +05:30
Will Berman
256e6960cb [docs] add notes for stateful model changes (#3252)
* [docs] add notes for stateful model changes

* Update docs/source/en/optimization/fp16.mdx

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* link to accelerate docs for discarding hooks

---------

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-04-27 11:05:08 -07:00
YiYi Xu
329d1df8f2 update notebook (#3259)
Co-authored-by: yiyixuxu <yixu@yis-macbook-pro.lan>
2023-04-27 07:03:56 -10:00
Patrick von Platen
364d59d13b Fix community pipelines (#3266) 2023-04-27 17:12:08 +01:00
Patrick von Platen
2ced899cc7 Revert "Revert "[Community Pipelines] Update lpw_stable_diffusion pipeline"" (#3265)
Revert "Revert "[Community Pipelines] Update lpw_stable_diffusion pipeline" (#3201)"

This reverts commit 91a2a80eb2.
2023-04-27 16:45:37 +01:00
Robert Dargavel Smith
b63419a28a AudioDiffusionPipeline - fix encode method after config changes (#3114)
* config fixes

* deprecate get_input_dims
2023-04-27 16:27:41 +01:00
Jair Trejo
eb29dbad17 Fix typo in textual inversion JAX training script (#3123)
The pipeline is built as `pipe` but then used as `pipeline`.
2023-04-27 16:24:12 +01:00
Xie Zejian
d92c4d5ab7 fix typo in score sde pipeline (#3132) 2023-04-27 15:39:14 +01:00
apolinário
eade4308da Update IF name to XL (#3262)
Co-authored-by: multimodalart <joaopaulo.passos+multimodal@gmail.com>
2023-04-27 14:26:58 +01:00
Ernie Chu
fa31da29e5 [docs] Update interface in repaint.mdx (#3119)
Update repaint.mdx

accommodate #1701
2023-04-27 13:24:51 +01:00
Isaac
77bfb56241 adding required parameters while calling the get_up_block and get_down_block (#3210)
* removed unnecessary parameters from get_up_block and get_down_block functions

* adding resnet_skip_time_act, resnet_out_scale_factor and cross_attention_norm to get_up_block and get_down_block functions

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-04-27 17:01:43 +05:30
Pedro Cuenca
70ef774fa0 Remove required from tracker_project_name (#3260)
Remove required from tracker_project_name.

As observed by https://github.com/off99555 in https://github.com/huggingface/diffusers/issues/2695#issuecomment-1470755050, it already has a default value.
2023-04-27 16:59:18 +05:30
Nipun Jindal
0b64c2c6c3 [Stochastic Sampler][Slow Test]: Cuda test fixes (#3257)
[Slow Test]: Cuda test fixes

Co-authored-by: njindal <njindal@adobe.com>
2023-04-27 14:52:38 +05:30
Nipun Jindal
fd512d7461 [2064]: Add stochastic sampler (sample_dpmpp_sde) (#3020)
* [2064]: Add stochastic sampler

* [2064]: Add stochastic sampler

* [2064]: Add stochastic sampler

* [2064]: Add stochastic sampler

* [2064]: Add stochastic sampler

* [2064]: Add stochastic sampler

* [2064]: Add stochastic sampler

* Review comments

* [Review comment]: Add is_torchsde_available()

* [Review comment]: Test and docs

* [Review comment]

* [Review comment]

* [Review comment]

* [Review comment]

* [Review comment]

---------

Co-authored-by: njindal <njindal@adobe.com>
2023-04-27 11:18:38 +05:30
Pedro Cuenca
e0a2bd15f9 Write model card in controlnet training script (#3229)
Write model card in controlnet training script.
2023-04-26 21:22:27 +02:00
Pedro Cuenca
c399de396d [docs] only mention one stage (#3246)
* [docs] only mention one stage

* add blurb on auto accepting

---------

Co-authored-by: William Berman <WLBberman@gmail.com>
2023-04-26 12:06:50 -07:00
Patrick von Platen
f842396367 Post release for 0.16.0 (#3244)
* Post release

* fix more
2023-04-26 17:43:09 +01:00
Patrick von Platen
6ba0efb9a1 Release: v0.16.0 2023-04-26 13:35:01 +02:00
Sanchit Gandhi
46ceba5b35 [AudioLDM] Update docs to use updated ckpt (#3240)
* [AudioLDM] Update docs to use updated ckpt

* make style
2023-04-26 12:33:08 +01:00
Sayak Paul
977162c02b Adds a document on token merging (#3208)
* add document on token merging.

* fix headline.

* fix: headline.

* add some samples for comparison.
2023-04-26 16:25:48 +05:30
Patrick von Platen
744663f8dc fix fast test (#3241) 2023-04-26 11:44:19 +01:00
Patrick von Platen
abbf3c1adf Allow fp16 attn for x4 upscaler (#3239)
* Add all files

* update

* Make sure vae is memory efficient for PT 1

* make style
2023-04-26 11:16:06 +01:00
Patrick von Platen
da2ce1a6b9 Allow return pt x4 (#3236)
* Add all files

* update
2023-04-26 09:34:34 +01:00
Patrick von Platen
e51f19aee8 add model (#3230)
* add

* clean

* up

* clean up more

* fix more tests

* Improve docs further

* improve

* more fixes docs

* Improve docs more

* Update src/diffusers/models/unet_2d_condition.py

* fix

* up

* update doc links

* make fix-copies

* add safety checker and watermarker to stage 3 doc page code snippets

* speed optimizations docs

* memory optimization docs

* make style

* add watermarking snippets to doc string examples

* make style

* use pt_to_pil helper functions in doc strings

* skip mps tests

* Improve safety

* make style

* new logic

* fix

* fix bad onnx design

* make new stable diffusion upscale pipeline model arguments optional

* define has_nsfw_concept when non-pil output type

* lowercase linked to notebook name

---------

Co-authored-by: William Berman <WLBberman@gmail.com>
2023-04-25 14:20:43 -07:00
Patrick von Platen
1ffcc924bc Fix docs text inversion (#3166)
* Fix docs text inversion

* Apply suggestions from code review
2023-04-25 14:18:40 +01:00
Yuchen Fan
730e01ec93 Sync cache version check from transformers (#3179)
sync cache version check from transformers
2023-04-25 14:18:25 +01:00
pdoane
0d196f9f45 Fix issue in maybe_convert_prompt (#3188)
When the token used for textual inversion does not have any special symbols (e.g. it is not surrounded by <>), the tokenizer does not properly split the replacement tokens.  Adding a space for the padding tokens fixes this.
2023-04-25 14:17:57 +01:00
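Roughly, the behaviour being fixed above is the multi-vector prompt expansion; a simplified sketch (not the library's exact implementation) of what `maybe_convert_prompt` has to produce:

```python
def expand_multi_vector_token(prompt: str, token: str, num_vectors: int) -> str:
    # A multi-vector inversion token "t" must expand to "t t_1 t_2 ...".
    # Each extra token needs a separating space so the tokenizer splits them,
    # even when the base token has no special delimiters like <>.
    replacement = token
    for i in range(1, num_vectors):
        replacement += f" {token}_{i}"
    return prompt.replace(token, replacement)

# e.g. expand_multi_vector_token("a photo of my-style", "my-style", 3)
# -> "a photo of my-style my-style_1 my-style_2"
```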
Patrick von Platen
131312caba Add ControlNet v1.1 docs (#3226)
Add v1.1 docs
2023-04-25 14:12:35 +01:00
Isaac
e9edbfc251 adding enable_vae_tiling and disable_vae_tiling functions (#3225)
adding enable_vae_tiling and disable_vae_tiling functions
2023-04-25 14:12:21 +01:00
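Usage of the new toggles, for reference (model id is an example):

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.enable_vae_tiling()   # run the VAE tile by tile to cut peak memory
image = pipe("a detailed landscape", height=1024, width=1024).images[0]
pipe.disable_vae_tiling()  # restore one-shot VAE decoding
```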
Lucca Zenóbio
0ddc5bf7b9 fix mixed precision training on train_dreambooth_inpaint_lora (#3138)
cast to weight dtype
2023-04-25 15:22:57 +05:30
Patrick von Platen
c5933c9c89 [Bug fix] Fix batch size attention head size mismatch (#3214) 2023-04-25 00:44:00 +02:00
Will Berman
91a2a80eb2 Revert "[Community Pipelines] Update lpw_stable_diffusion pipeline" (#3201)
Revert "[Community Pipelines] Update lpw_stable_diffusion pipeline (#3197)"

This reverts commit 9965cb50ea.
2023-04-22 12:36:55 -07:00
Patrick von Platen
425192fe15 Make sure VAE attention works with Torch 2_0 (#3200)
* Make sure attention works with Torch 2_0

* make style

* Fix more
2023-04-22 17:29:29 +01:00
SkyTNT
9965cb50ea [Community Pipelines] Update lpw_stable_diffusion pipeline (#3197)
* Update lpw_stable_diffusion.py

* fix cpu offload
2023-04-22 15:07:45 +01:00
Chengrui Wang
20e426cb5d Fix bug in train_dreambooth_lora (#3183)
* Update train_dreambooth_lora.py

fix bug

* Update train_dreambooth_lora.py
2023-04-22 09:04:28 +05:30
Sanchit Gandhi
90eac14f72 [AudioLDM] Fix dtype of returned waveform (#3189) 2023-04-21 19:24:37 +01:00
Youssef Adarrab
11f527ac0f Add Karras sigmas to HeunDiscreteScheduler (#3160)
* Add karras pattern to discrete heun scheduler

* Add integration test

* Fix failing CI on pytorch test on M1 (mps)

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-04-21 19:21:04 +01:00
Patrick von Platen
2c04e5855c Multi Vector Textual Inversion (#3144)
* Multi Vector

* Improve

* fix multi token

* improve test

* make style

* Update examples/test_examples.py

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* update

* Finish

* Apply suggestions from code review

---------

Co-authored-by: Suraj Patil <surajp815@gmail.com>
2023-04-21 19:06:19 +01:00
Steven Liu
391cfcd7d7 [docs] Clarify training args (#3146)
* clarify training arg

* apply feedback
2023-04-21 11:03:44 -07:00
YiYi Xu
bc0392a0cb make from_flax work for controlnet (#3161)
fix from_flax

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-04-21 19:01:36 +01:00
asfiyab-nvidia
05d9baeacd Fix TensorRT community pipeline device set function (#3157)
pass silence_dtype_warnings as kwarg

Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-04-21 18:53:10 +01:00
Sayak Paul
e573ae06e2 Update custom_diffusion.mdx to credit the author (#3163)
* Update custom_diffusion.mdx

* fix: unnecessary list comprehension.
2023-04-21 18:44:08 +01:00
Steven Liu
2f6351b001 [docs] Deterministic algorithms (#3172)
deterministic algos
2023-04-21 10:38:34 -07:00
Patrick von Platen
9c856118c7 Add model offload to x4 upscaler (#3187)
* Add model offload to x4 upscaler

* fix
2023-04-21 17:47:33 +01:00
regisss
9bce375f77 Update Habana Gaudi documentation (#3169)
* Update Habana Gaudi doc

* Fix tables
2023-04-21 17:24:43 +01:00
Sayak Paul
3045fb2763 [DreamBooth] add text encoder LoRA support in the DreamBooth training script (#3130)
* add: LoRA text encoder support for DreamBooth example.

* fix initialization.

* fix: modification call.

* add: entry in the readme.

* use dog dataset from hub.

* fix: params to clip.

* add entry to the LoRA doc.

* add: tests for lora.

* remove unnecessary list comprehension.
2023-04-20 17:25:17 +05:30
clarencechen
7b0ba4820a Update Noise Autocorrelation Loss Function for Pix2PixZero Pipeline (#2942)
* Update Pix2PixZero Auto-correlation Loss

* Add fast inversion tests

* Clarify purpose and mark as deprecated

Fix inversion prompt broadcasting

* Register modules set to `None` in config for `test_save_load_optional_components`

* Update new tests to coordinate with #2953
2023-04-20 12:13:47 +01:00
Patrick von Platen
8d5906a331 Merge branch 'main' of https://github.com/huggingface/diffusers 2023-04-20 13:09:33 +02:00
Patrick von Platen
17470057d2 make style 2023-04-20 13:09:20 +02:00
XinyuYe-Intel
a5b242d30d Added distillation for quantization example on textual inversion. (#2760)
* Added distillation for quantization example on textual inversion.

Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>

* refined readme and code style.

Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>

* Update text2images.py

* refined code of model load and added compatibility check.

Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>

* fixed code style.

Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>

* fix C403 [*] Unnecessary `list` comprehension (rewrite as a `set` comprehension)

Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>

---------

Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>
2023-04-20 11:55:42 +01:00
Mishig
a121e05feb Update custom_diffusion.mdx (#3165)
Add missing newlines for rendering the links correctly
2023-04-20 11:04:06 +02:00
nupurkmr9
3979aac996 adding custom diffusion training to diffusers examples (#3031)
* diffusers==0.14.0 update

* custom diffusion update

* custom diffusion update

* custom diffusion update

* custom diffusion update

* custom diffusion update

* custom diffusion update

* custom diffusion

* custom diffusion

* custom diffusion

* custom diffusion

* custom diffusion

* apply formatting and get rid of bare except.

* refactor readme and other minor changes.

* misc refactor.

* fix: repo_id issue and loaders logging bug.

* fix: save_model_card.

* fix: save_model_card.

* fix: save_model_card.

* add: doc entry.

* refactor doc,.

* custom diffusion

* custom diffusion

* custom diffusion

* apply style.

* remove trailing whitespace.

* fix: toctree entry.

* remove unnecessary print.

* custom diffusion

* custom diffusion

* custom diffusion test

* custom diffusion xformer update

* custom diffusion xformer update

* custom diffusion xformer update

---------

Co-authored-by: Nupur Kumari <nupurkumari@Nupurs-MacBook-Pro.local>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Nupur Kumari <nupurkumari@nupurs-mbp.wifi.local.cmu.edu>
2023-04-20 09:31:42 +02:00
Will Berman
7e6886f5e9 controlnet training resize inputs to multiple of 8 (#3135)
controlnet training center crop input images to multiple of 8

The pipeline code resizes inputs to multiples of 8.
Not doing this resizing in the training script is causing
the encoded image to have different height/width dimensions
than the encoded conditioning image (which uses a separate
encoder that's part of the controlnet model).

We resize and center crop the inputs to make sure they're the
same size (as well as all other images in the batch). We also
check that the initial resolution is a multiple of 8.
2023-04-19 10:46:51 -07:00
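A sketch of the preprocessing the commit above describes (names are illustrative; the real script has more options): both the image and the conditioning image are resized and center-cropped to the same resolution, which must be a multiple of 8.

```python
from torchvision import transforms

resolution = 512
assert resolution % 8 == 0, "training resolution must be a multiple of 8"

image_transforms = transforms.Compose([
    transforms.Resize(resolution, interpolation=transforms.InterpolationMode.BILINEAR),
    transforms.CenterCrop(resolution),
    transforms.ToTensor(),
    transforms.Normalize([0.5], [0.5]),
])
conditioning_transforms = transforms.Compose([
    transforms.Resize(resolution, interpolation=transforms.InterpolationMode.BILINEAR),
    transforms.CenterCrop(resolution),
    transforms.ToTensor(),
])
```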
superhero-7
a4c91be73b Modified altdiffusion pipeline to support altdiffusion-m18 (#2993)
* Modified altdiffusion pipeline to support altdiffusion-m18

* Modified altdiffusion pipeline to support altdiffusion-m18

* Modified altdiffusion pipeline to support altdiffusion-m18

* Modified altdiffusion pipeline to support altdiffusion-m18

* Modified altdiffusion pipeline to support altdiffusion-m18

* Modified altdiffusion pipeline to support altdiffusion-m18

* Modified altdiffusion pipeline to support altdiffusion-m18

---------

Co-authored-by: root <fulong_ye@163.com>
2023-04-19 18:00:29 +01:00
hwuebben
3becd368b1 Update pipeline_stable_diffusion_inpaint_legacy.py (#2903)
* Update pipeline_stable_diffusion_inpaint_legacy.py

* fix preprocessing of Pil images with adequate batch size

* revert map

* add tests

* reformat

* Update test_stable_diffusion_inpaint_legacy.py

* Update test_stable_diffusion_inpaint_legacy.py

* Update test_stable_diffusion_inpaint_legacy.py

* Update test_stable_diffusion_inpaint_legacy.py

* next try to fix the style

* wth is this

* Update testing_utils.py

* Update testing_utils.py

* Update test_stable_diffusion_inpaint_legacy.py

* Update test_stable_diffusion_inpaint_legacy.py

* Update test_stable_diffusion_inpaint_legacy.py

* Update test_stable_diffusion_inpaint_legacy.py

* Update test_stable_diffusion_inpaint_legacy.py

* Update test_stable_diffusion_inpaint_legacy.py

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-04-19 17:58:13 +01:00
Chanchana Sornsoontorn
c8fdfe4572 Correct Transformer2DModel.forward docstring (#3074)
⚙️chore(transformer_2d) update function signature for encoder_hidden_states
2023-04-19 17:51:58 +01:00
asfiyab-nvidia
bba1c1de15 Add TensorRT SD/txt2img Community Pipeline to diffusers along with TensorRT utils (#2974)
* Add SD/txt2img Community Pipeline to diffusers along with TensorRT utils

Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>

* update installation command

Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>

* update tensorrt installation

Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>

* changes
1. Update setting of cache directory
2. Address comments: merge utils and pipeline code.
3. Address comments: Add section in README

Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>

* apply make style

Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>

---------

Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-04-19 17:51:03 +01:00
1lint
86ecd4b795 add from_ckpt method as Mixin (#2318)
* add mixin class for pipeline from original sd ckpt

* Improve

* make style

* merge main into

* Improve more

* fix more

* up

* Apply suggestions from code review

* finish docs

* rename

* make style

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-04-19 17:07:36 +01:00
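A rough usage sketch of the new mixin, assuming the `from_ckpt` classmethod this PR adds (renamed `from_single_file` in later releases); the checkpoint path is a placeholder:

```python
from diffusers import StableDiffusionPipeline

# Build a diffusers pipeline directly from an original Stable Diffusion checkpoint file.
pipe = StableDiffusionPipeline.from_ckpt("./v1-5-pruned-emaonly.ckpt")
image = pipe("a photo of an astronaut riding a horse").images[0]
```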
cmdr2
bdeff4d64a [ckpt loader] Allow loading the Inpaint and Img2Img pipelines, while loading a ckpt model (#2705)
* [ckpt loader] Allow loading the Inpaint and Img2Img pipelines, while loading a ckpt model

* Address review comment from PR

* PyLint formatting

* Some more pylint fixes, unrelated to our change

* Another pylint fix

* Styling fix
2023-04-19 13:37:07 +01:00
Will Berman
fc1883918f class labels timestep embeddings projection dtype cast (#3137)
This mimics the dtype cast for the standard time embeddings
2023-04-18 15:05:41 -07:00
Will Berman
f0c74e9a75 Add unet act fn to other model components (#3136)
Adding act fn config to the unet timestep class embedding and conv
activation.

The custom activation defaults to silu which is the default
activation function for both the conv act and the timestep class
embeddings so default behavior is not changed.

The only unet which use the custom activation is the stable diffusion
latent upscaler https://huggingface.co/stabilityai/sd-x2-latent-upscaler/blob/main/unet/config.json
(I ran a script against the hub to confirm).
The latent upscaler does not use the conv activation nor the timestep
class embeddings so we don't change its behavior.
2023-04-18 14:13:16 -07:00
Patrick von Platen
4bc157ffa9 Correct textual inversion readme (#3145)
* Update README.md

* Apply suggestions from code review
2023-04-18 16:35:12 +01:00
Patrick von Platen
f2df39fa0e make style 2023-04-18 14:03:17 +02:00
Cristian Garcia
8ecdd3ef65 Optimize log_validation in train_controlnet_flax (#3110)
extract pipeline from log_validation
2023-04-18 13:03:00 +01:00
YiYi Xu
cd8b7507c2 speed up attend-and-excite fast tests (#3079) 2023-04-18 13:02:25 +01:00
Sayak Paul
3b641eabe9 feat: verification of multi-gpu support for select examples. (#3126)
* feat: verification of multi-gpu support for select examples.

* add: multi-gpu training sections to the relevant doc pages.
2023-04-18 08:36:13 +05:30
Patrick von Platen
703307efcc Fix config deprecation (#3129)
* Better deprecation message

* Better deprecation message

* Better doc string

* Fixes

* fix more

* fix more

* Improve __getattr__

* correct more

* fix more

* fix

* Improve more

* more improvements

* fix more

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* make style

* Fix all rest & add tests & remove old deprecation fns

---------

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-04-17 17:16:28 +01:00
Patrick von Platen
ed8fd38337 Improve deprecation warnings (#3131) 2023-04-17 16:19:11 +01:00
Patrick von Platen
ca783a0f1f [Bug fix] Make sure correct timesteps are chosen for img2img (#3128)
Make sure correct timesteps are chosen for img2img
2023-04-17 11:52:40 +01:00
Patrick von Platen
beb848e2b6 [Bug fix] Fix img2img processor with safety checker (#3127)
Fix img2img processor with safety checker
2023-04-17 10:53:04 +01:00
Patrick von Platen
cfc99adf0f Add global pooling to controlnet (#3121) 2023-04-16 19:07:23 +02:00
Tommaso De Rossi
807f69b328 Fix breaking change in pipeline_stable_diffusion_controlnet.py (#3118)
fix breaking change
2023-04-16 19:04:11 +02:00
Will Berman
b811964a7b ddpm custom timesteps (#3007)
add custom timesteps test

add custom timesteps descending order check

docs

timesteps -> custom_timesteps

can only pass one of num_inference_steps and timesteps
2023-04-14 12:39:38 -07:00
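A rough sketch of the custom-timesteps API, assuming `DDPMScheduler.set_timesteps` accepts a `timesteps` list as the commit message describes:

```python
from diffusers import DDPMScheduler

scheduler = DDPMScheduler()

# Either pass a step count...
scheduler.set_timesteps(num_inference_steps=50)

# ...or an explicit, strictly descending list of training timesteps
# (passing both at once is rejected, per the commit message).
scheduler.set_timesteps(timesteps=[999, 750, 500, 250, 0])
print(scheduler.timesteps)
```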
YiYi Xu
1bd4c9e93d remove one line as requested by gc team (#3077)
remove one line
2023-04-14 06:39:25 -10:00
YiYi Xu
eb2ef31606 fix default value for attend-and-excite (#3099)
* fix default
2023-04-13 17:54:54 -10:00
Takuma Mori
5c9dd0af95 Add support for Guess Mode to StableDiffusionControlNetPipeline (#2998)
* add guess mode (WIP)

* fix uncond/cond order

* support guidance_scale=1.0 and batch != 1

* remove magic coeff

* add docstring

* add integration test

* add document to controlnet.mdx

* made the comments a bit more explanatory

* fix table
2023-04-14 08:37:34 +05:30
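A hedged sketch of guess mode in the ControlNet pipeline; the checkpoints and conditioning-image URL follow the diffusers docs of this era and should be treated as illustrative:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

canny_image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/bird_canny.png"
)

# In guess mode the ControlNet infers content from the conditioning image alone,
# so even an empty prompt can work; per the PR it also supports guidance_scale=1.0 and batch != 1.
image = pipe("", image=canny_image, guess_mode=True, guidance_scale=3.0).images[0]
```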
Steven Liu
d0f258206d [docs] Update community pipeline docs (#2989)
* update community pipeline docs

* fix formatting

* explain sharing workflows
2023-04-13 13:46:28 -07:00
Joseph Coffland
3eaead0c4a Allow SD attend and excite pipeline to work with any size output images (#2835)
Allow stable diffusion attend and excite pipeline to work with any size output image. Re: #2476, #2603
2023-04-13 05:54:16 -10:00
Patrick von Platen
3bf5ce21ad Throw deprecation warning for return_cached_folder (#3092)
Throw deprecation warning
2023-04-13 13:33:11 +01:00
Patrick von Platen
3a9d7d9758 [Tests] parallelize (#3078)
* [Tests] parallelize

* finish folder structuring

* Parallelize tests more

* Correct saving of pipelines

* make sure logging level is correct

* try again

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

---------

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-04-13 13:32:57 +01:00
YiYi Xu
e748b3c6e1 doc string example remove from_pt (#3083) 2023-04-13 09:45:23 +02:00
Patrick von Platen
46c52f9b96 [Pipelines] Make sure that None functions are correctly not saved (#3080) 2023-04-13 00:25:10 +02:00
Andreas Steiner
d06e06940b Adds profiling flags, computes train metrics average. (#3053)
* WIP controlnet training

- bugfix --streaming
- bugfix running report_to!='wandb'
- adds memory profile before validation

* Adds final logging statement.

* Sets train epochs to 11.

Looking at a longer ~16ep run, we see only good validation images
after ~11ep:

https://wandb.ai/andsteing/controlnet_fill50k/runs/3j2hx6n8

* Removes --logging_dir (it's not used).

* Adds --profile flags.

* Updates --output_dir=runs/fill-circle-{timestamp}.

* Compute mean of `train_metrics`.

Previously `train_metrics[-1]` was logged, resulting in very bumpy train
metrics.

* Improves logging a bit.

- adds l2_grads gradient norm logging
- adds steps_per_sec
- sets walltime as x coordinate of train/step
- logs controlnet_params config

* Adds --ccache (doesn't really help though).

* minor fix in controlnet flax example (#2986)

* fix the error when push_to_hub but not log validation

* contronet_from_pt & controlnet_revision

* add intermediate checkpointing to the guide

* Bugfix --profile_steps

* Sets `RACKER_PROJECT_NAME='controlnet_fill50k'`.

* Logs fractional epoch.

* Adds relative `walltime` metric.

* Adds `StepTraceAnnotation` and uses `global_step` instead of `step`.

* Applied `black`.

* Streamlines commands in README a bit.

* Removes `--ccache`.

This makes only a very small difference (~1 min) with this model size, so removing
the option introduced in cdb3cc.

* Re-ran `black`.

* Update examples/controlnet/README.md

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Converts spaces to tab.

* Removes repeated args.

* Skips first step (compilation) in profiling

* Updates README with profiling instructions.

* Unifies tabs/spaces in README.

* Re-ran style & quality.

---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-04-12 08:29:18 -10:00
Patrick von Platen
0a73b4d3cd [Post release] v0.16.0dev (#3072) 2023-04-12 17:18:30 +01:00
Sayak Paul
e126a82cc5 [Tests] Speed up panorama tests (#3067)
* fix: norm group test for UNet3D.

* chore: speed up the panorama tests (fast).

* set default value of _test_inference_batch_single_identical.

* fix: batch_sizes default value.
2023-04-12 16:25:54 +01:00
Patrick von Platen
e7534542a2 Release: v0.15.0 2023-04-12 15:15:31 +00:00
Andranik Movsisyan
b9b891621e Text2video zero refinements (#3070)
* fix progress bar issue in pipeline_text_to_video_zero.py. Copy scheduler after first backward

* fix tensor loading in test_text_to_video_zero.py

* make style && make quality
2023-04-12 14:27:09 +01:00
Ernie Chu
a43934371a Fix a bug of pano when not doing CFG (#3030)
* Fix a bug of pano when not doing CFG

* enhance code quality

* apply formatting.

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-04-12 14:20:25 +01:00
Pedro Cuenca
caa5884e8a Update Flax TPU tests (#3069)
Update Flax TPU tests.

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-04-12 14:17:36 +01:00
Sayak Paul
fa736e321d [Docs] refactor text-to-video zero (#3049)
* fix: norm group test for UNet3D.

* refactor text-to-video zero docs.
2023-04-12 14:15:26 +01:00
Patrick von Platen
a4b233e5b5 Finish docs textual inversion (#3068)
* Finish docs textual inversion

* Apply suggestions from code review

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-04-12 13:35:58 +01:00
Nipun Jindal
524535b5f2 [2064]: Add Karras to DPMSolverMultistepScheduler (#3001)
* [2737]: Add Karras DPMSolverMultistepScheduler

* [2737]: Add Karras DPMSolverMultistepScheduler

* Add test

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* fix: repo consistency.

* remove Copied from statement from the set_timestep method.

* fix: test

* Empty commit.

Co-authored-by: njindal <njindal@adobe.com>

---------

Co-authored-by: njindal <njindal@adobe.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-04-12 18:04:51 +05:30
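A short sketch of switching a pipeline to the Karras sigma schedule this PR adds; the model id is illustrative:

```python
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# Karras sigmas are turned on via the new scheduler config flag.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)
```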
Sean Sube
7b2407f4d7 add support for pre-calculated prompt embeds to Stable Diffusion ONNX pipelines (#2597)
* add support for prompt embeds to SD ONNX pipeline

* fix up the pipeline copies

* add prompt embeds param to other ONNX pipelines

* fix up prompt embeds param for SD upscaling ONNX pipeline

* add missing type annotations to ONNX pipes
2023-04-12 12:19:56 +01:00
Will Berman
639f6455b4 fix pipeline __setattr__ value == None (#3063)
* fix pipeline __setattr__

* add test

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-04-12 12:11:09 +01:00
Andy
9d7c08f95e [WIP] implement rest of the test cases (LoRA tests) (#2824)
* initial commit for lora test cases

* help a bit with lora for 3d

* fixed lora tests

* replaced redundant code

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-04-12 15:32:14 +05:30
Pedro Cuenca
dc277501c7 Flax memory efficient attention (#2889)
* add use_memory_efficient params placeholder

* test

* add memory efficient attention jax

* add memory efficient attention jax

* newline

* forgot dot

* Rename use_memory_efficient

* Keep dtype last.

* Actually use key_chunk_size

* Rename symbol

* Apply style

* Rename use_memory_efficient

* Keep dtype last

* Pass `use_memory_efficient_attention` in `from_pretrained`

* Move JAX memory efficient attention to attention_flax.

* Simple test.

* style

---------

Co-authored-by: muhammad_hanif <muhammad_hanif@sofcograha.co.id>
Co-authored-by: MuhHanif <48muhhanif@gmail.com>
2023-04-12 10:17:51 +01:00
Susung Hong
0df47efee2 [Docs] update Self-Attention Guidance docs (#2952)
* Update index.mdx

* Edit docs & add HF space link

* Only change equation numbers in comments
2023-04-12 10:14:32 +01:00
Sayak Paul
5a7d35e29c Fix InstructPix2Pix training in multi-GPU mode (#2978)
* fix: norm group test for UNet3D.

* fix: unet rejig.

* fix: unwrapping when running validation inputs.

* unwrapping the unet too.

* fix: device.

* better unwrapping.

* unwrapping before ema.

* unwrapping.
2023-04-12 10:13:53 +01:00
Patrick von Platen
0c72006e3a fix slow tests (#3066)
* fix slow tests

* make style
2023-04-12 10:23:52 +02:00
Sayak Paul
a89a14fa7a [LoRA] Enabling limited LoRA support for text encoder (#2918)
* add: first draft for a better LoRA enabler.

* make fix-copies.

* feat: backward compatibility.

* add: entry to the docs.

* add: tests.

* fix: docs.

* fix: norm group test for UNet3D.

* feat: add support for flat dicts.

* add deprecation message instead of warning.
2023-04-12 08:29:04 +05:30
Sayak Paul
e607a582cf [Examples] Fix type-casting issue in the ControlNet training script (#2994)
* fix: norm group test for UNet3D.

* fix: type-casting issue in controlnet training.
2023-04-12 06:35:06 +05:30
Will Berman
ea39cd7e64 Attn added kv processor torch 2.0 block (#3023)
add AttnAddedKVProcessor2_0 block
2023-04-11 16:54:22 -07:00
Will Berman
98c5e5da31 Attention processor cross attention norm group norm (#3021)
add group norm type to attention processor cross attention norm

This lets the cross attention norm use both a group norm block and a
layer norm block.

The group norm operates along the channels dimension
and requires input shape (batch size, channels, *) where as the layer norm with a single
`normalized_shape` dimension only operates over the least significant
dimension i.e. (*, channels).

The channels we want to normalize are the hidden dimension of the encoder hidden states.

By convention, the encoder hidden states are always passed as (batch size, sequence
length, hidden states).

This means the layer norm can operate on the tensor without modification, but the group
norm requires flipping the last two dimensions to operate on (batch size, hidden states, sequence length).

All existing attention processors will have the same logic and we can
consolidate it in a helper function `prepare_encoder_hidden_states`

prepare_encoder_hidden_states -> norm_encoder_hidden_states re: @patrickvonplaten

move norm_cross defined check to outside norm_encoder_hidden_states

add missing attn.norm_cross check
2023-04-11 15:51:40 -07:00
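A minimal PyTorch illustration of the shape convention described in the commit above: LayerNorm normalizes the trailing dimension, while GroupNorm normalizes the channel dimension at index 1, so the encoder hidden states must be transposed for the group-norm path (sizes are illustrative):

```python
import torch
from torch import nn

batch, seq_len, hidden = 2, 77, 768
encoder_hidden_states = torch.randn(batch, seq_len, hidden)

# LayerNorm operates on the last dimension, so (batch, seq, hidden) works directly.
layer_norm = nn.LayerNorm(hidden)
out_ln = layer_norm(encoder_hidden_states)

# GroupNorm expects (batch, channels, *), so flip the last two dims and flip them back.
group_norm = nn.GroupNorm(num_groups=32, num_channels=hidden)
out_gn = group_norm(encoder_hidden_states.transpose(1, 2)).transpose(1, 2)
```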
Will Berman
2d52e81cb9 unet time embedding activation function (#3048)
* unet time embedding activation function

* typo act_fn -> time_embedding_act_fn

* flatten conditional
2023-04-11 15:51:29 -07:00
Chanchana Sornsoontorn
52c4d32d41 Fix typo and format BasicTransformerBlock attributes (#2953)
* ⚙️chore(train_controlnet) fix typo in logger message

* ⚙️chore(models) refactor modules order; make them the same as calling order

When printing the BasicTransformerBlock to stdout, it's important that the attributes are shown in their calling order. Also, the "3. Feed Forward" comment previously didn't make sense: it should sit next to self.ff, but it was placed next to self.norm3 instead.

* correct many tests

* remove bogus file

* make style

* correct more tests

* finish tests

* fix one more

* make style

* make unclip deterministic

* ⚙️chore(models/attention) reorganize comments in BasicTransformerBlock class

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-04-12 00:31:05 +02:00
Will Berman
c6180a311c add only cross attention to simple attention blocks (#3011)
* add only cross attention to simple attention blocks

* add test for only_cross_attention re: @patrickvonplaten

* mid_block_only_cross_attention better default

allow mid_block_only_cross_attention to default to
`only_cross_attention` when `only_cross_attention` is given
as a single boolean
2023-04-11 14:38:50 -07:00
Pedro Cuenca
e3095c5f47 Fix invocation of some slow Flax tests (#3058)
* Fix invocation of some slow tests.

We use __call__ rather than pmapping the generation function ourselves
because the number of static arguments is different now.

* style
2023-04-11 23:21:25 +02:00
Pedro Cuenca
526827c3d1 Fix scheduler type mismatch (#3041)
When doing generation manually and using guidance_scale as a static
argument.
2023-04-11 23:20:35 +02:00
George Ogden
cb63febf2e Update documentation (#2996)
* Update documentation

Based on sampling, the width and height must be powers of 2 as the samples halve in size each time

* make style
2023-04-11 19:02:13 +01:00
Will Berman
8c6b47cfde AttentionProcessor.group_norm num_channels should be query_dim (#3046)
* `AttentionProcessor.group_norm` num_channels should be `query_dim`

The group_norm on the attention processor should really norm the number
of channels in the query _not_ the inner dim. This wasn't caught before
because the group_norm is only used by the added kv attention processors
and the added kv attention processors are only used by the karlo models
which are configured such that the inner dim is the same as the query
dim.

* add_{k,v}_proj should be projecting to inner_dim
2023-04-11 10:32:55 -07:00
Will Berman
67ec9cf513 accelerate min version for ProjectConfiguration import (#3042) 2023-04-11 10:12:28 -07:00
Will Berman
80bc0c0ced config fixes (#3060) 2023-04-11 17:54:50 +01:00
Patrick von Platen
091a058236 make style 2023-04-11 15:51:21 +00:00
J N Hearns
881a6b58c3 Fix imports for composable_stable_diffusion pipeline (#3002)
* Update composable_stable_diffusion.py

Fix imports

* Formatting

* Formatting

* Formatting
2023-04-11 16:50:25 +01:00
Steven Liu
cb9d77af23 [docs] Reusing components (#3000)
* reuse-components

* format
2023-04-11 15:34:34 +01:00
Patrick von Platen
8b451eb63b Fix config prints and save, load of pipelines (#2849)
* [Config] Fix config prints and save, load

* Only use potential nn.Modules for dtype and device

* Correct vae image processor

* make sure in_channels is not accessed directly

* make sure in channels is only accessed via config

* Make sure schedulers only access config attributes

* Make sure to access config in SAG

* Fix vae processor and make style

* add tests

* uP

* make style

* Fix more naming issues

* Final fix with vae config

* change more
2023-04-11 13:35:42 +02:00
Patrick von Platen
8369196703 fix report tool (#3047) 2023-04-11 10:55:00 +02:00
Mishig
4f48476dd6 Update contribution.mdx (#3054)
* Update contribution.mdx

hotfix for doc-builder parsing quote in heading bug

* quoteation replace
2023-04-11 09:23:58 +02:00
Pedro Cuenca
fbc9a736dd mps: skip unstable test (#3037) 2023-04-11 06:36:54 +05:30
Rogério Júnior
67c3518f68 Small typo correction in comments (#3012) 2023-04-10 13:48:35 -07:00
Andranik Movsisyan
ba49272db8 [Pipeline] Add TextToVideoZeroPipeline (#2954)
* add TextToVideoZeroPipeline and CrossFrameAttnProcessor

* add docs for text-to-video zero

* add teaser image for text-to-video zero docs

* Fix review changes. Add Documentation. Add test

* clean up the code in pipeline_text_to_video.py. Add descriptive comments and docstrings

* make style && make quality

* make fix-copies

* make requested changes to docs. use huggingface server links for resources, delete res folder

* make style && make quality && make fix-copies

* make style && make quality

* Apply suggestions from code review

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-04-10 22:09:53 +02:00
William Berman
074d281ae0 tests and additional scheduler fixes 2023-04-10 12:59:33 -07:00
William Berman
953c9d14eb [bug fix] dpm multistep solver duplicate timesteps 2023-04-10 12:59:33 -07:00
luanjintai
85f1c19282 find another accelerate parameter error 2023-04-10 12:23:17 -07:00
luanjintai
b5d0a9131d fix wrong parameter name for accelerate 2023-04-10 12:23:17 -07:00
Pedro Cuenca
983a7fbfd8 Initial draft of Core ML docs (#2987)
* Initial draft of Core ML docs.

* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Fix Core ML spelling

* Apply the rest of suggestions.

* Attempt to fix hyperlink inside Tip.

* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Apply suggestions from code review

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2023-04-10 21:09:04 +02:00
William Berman
c413353e8e add encoder_hid_dim to unet
`encoder_hid_dim` provides an additional projection for the input `encoder_hidden_states` from `encoder_hidden_dim` to `cross_attention_dim`
2023-04-09 23:00:16 -07:00
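A small sketch of the new projection, using a deliberately tiny UNet config (all sizes here are illustrative, not a real checkpoint's): `encoder_hid_dim` lets the text-encoder hidden size differ from `cross_attention_dim`:

```python
import torch
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel(
    sample_size=16,
    in_channels=4,
    out_channels=4,
    down_block_types=("CrossAttnDownBlock2D", "DownBlock2D"),
    up_block_types=("UpBlock2D", "CrossAttnUpBlock2D"),
    block_out_channels=(32, 64),
    layers_per_block=1,
    cross_attention_dim=32,
    encoder_hid_dim=512,  # text-encoder hidden size; projected down to cross_attention_dim internally
)

sample = torch.randn(1, 4, 16, 16)
encoder_hidden_states = torch.randn(1, 77, 512)
out = unet(sample, timestep=1, encoder_hidden_states=encoder_hidden_states).sample
```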
William Berman
8db5e5b37d allow unet varying number of layers per block 2023-04-09 22:57:26 -07:00
William Berman
707341aebe resnet skip time activation and output scale factor 2023-04-09 22:55:33 -07:00
William Berman
26b4319ac5 do not overwrite scheduler instance variables with type casted versions 2023-04-09 22:34:29 -07:00
William Berman
18ebd57bd8 add missing AttnProcessor2_0 to AttentionProcessor union 2023-04-09 22:02:14 -07:00
William Berman
b6cc050245 fix simple attention processor encoder hidden states ordering 2023-04-09 21:57:56 -07:00
William Berman
0cbefefac3 clamp comment @sayakpaul 2023-04-09 21:54:50 -07:00
William Berman
1875c35aeb remove extra min arg @sayakpaul 2023-04-09 21:54:50 -07:00
William Berman
1dc856e508 ddpm scheduler variance fixes 2023-04-09 21:54:50 -07:00
Will Berman
2cbdc586de dynamic threshold sampling bug fixes and docs (#3003)
dynamic threshold sampling bug fix and docs
2023-04-09 21:43:40 -07:00
YiYi Xu
dcfa6e1d20 add Min-SNR loss to Controlnet flax train script (#3016)
* add wandb team and min-snr loss

* make style

* apply feedbacks
2023-04-10 07:56:54 +05:30
Patrick von Platen
1c96f82ed9 Update one_step_unet.py
Fix dummy community pipeline
2023-04-09 19:22:18 +01:00
Guspan Tanadi
ce144d6dd0 docs: Link Navigation Path API Pipelines (#2976)
* docs: link navigation Safe Stable Diffusion

Link navigation API pipelines text2img and using diffusers Conditional Image Generation.

* docs: link navigation Versatile Diffusion

Removing exceeding path Stable Diffusion Overview.

* docs: Python extension Spectrogram Diffusion

Link navigation Spectrogram Diffusion Pipeline source code

* docs: Link navigation AltDiffusion Pipelines

Stable Diffusion Overview and Using Diffusers path.
2023-04-07 14:07:42 -07:00
Pedro Cuenca
8c5c30f3b1 Explain how to install test dependencies (#2983)
As pointed out by @Birch-san: https://github.com/huggingface/diffusers/pull/2634#issuecomment-1496517210
2023-04-07 20:41:09 +02:00
YiYi Xu
2de36fae7b minor fix in controlnet flax example (#2986)
* fix the error when push_to_hub but not log validation

* contronet_from_pt & controlnet_revision

* add intermediate checkpointing to the guide
2023-04-06 10:27:41 -10:00
FurryPotato
e40526431a [scheduler] fix some scheduler dtype error (#2992)
Co-authored-by: wangguan <dizhipeng.dzp@alibaba-inc.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-04-06 14:55:33 +01:00
Sayak Paul
24947317a6 [Examples] Add support for Min-SNR weighting strategy for better convergence (#2899)
* improve stable unclip doc.

* feat: support for applying min-snr weighting for faster convergence.

* add: support for validation logging with wandb

* make  not a required arg.

* fix: arg name.

* fix: cli args.

* fix: tracker config.

* fix: loss calculation.

* fix: validation logging.

* fix: unwrap call.

* fix: validation logging.

* fix: interval.

* fix: checkpointing push to hub.

* fix: c8a2856c6d#commitcomment-106913193

* fix: norm group test for UNet3D.

* address PR comments.

* remove unneeded code.

* add: entry in the readme and docs.

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>

---------

Co-authored-by: Suraj Patil <surajp815@gmail.com>
2023-04-06 19:08:40 +05:30
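A condensed sketch of the Min-SNR weighting idea for epsilon-prediction (min(SNR, gamma) / SNR), not the exact code in the example scripts; the helper name and gamma value are illustrative:

```python
import torch
from diffusers import DDPMScheduler

def min_snr_weights(timesteps, alphas_cumprod, gamma=5.0):
    # SNR(t) = alpha_bar_t / (1 - alpha_bar_t); clip at gamma, then normalize by SNR.
    alpha_bar = alphas_cumprod[timesteps]
    snr = alpha_bar / (1.0 - alpha_bar)
    return torch.clamp(snr, max=gamma) / snr

scheduler = DDPMScheduler()
timesteps = torch.randint(0, scheduler.config.num_train_timesteps, (4,))
weights = min_snr_weights(timesteps, scheduler.alphas_cumprod)
# weighted_loss = (weights * per_example_mse).mean()  # how it would plug into the training loss
```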
cmdr2
8826bae655 Update the K-Diffusion SD pipeline, to allow calling it with only prompt_embeds (instead of always requiring a prompt) (#2962) 2023-04-06 11:59:48 +01:00
Nipun Jindal
6e8e1ed77a [2905]: Add Karras pattern to discrete euler (#2956)
* [2905]: Add Karras pattern to discrete euler

* [2905]: Add Karras pattern to discrete euler

* Review comments

* Review comments

* Review comments

* Review comments

---------

Co-authored-by: njindal <njindal@adobe.com>
2023-04-06 16:10:57 +05:30
Kadir Nar
37b359b2bd The variable name has been updated. (#2970) 2023-04-06 10:55:43 +01:00
Patrick von Platen
a9477bbdac [Pipeline download] Improve pipeline download for index and passed co… (#2980)
* [Pipeline download] Improve pipeline download for index and passed components

* correct

* add more tests

* up
2023-04-06 01:31:09 +02:00
YiYi Xu
ee20d1f8b9 update flax controlnet training script (#2951)
* load_from_disk + checkpointing_steps

* apply feedback
2023-04-04 15:49:44 -10:00
Steven Liu
0d0fa2a3e1 [docs] Simplify loading guide (#2694)
* simplify loading guide

* apply feedbacks

* clarify variants

* clarify torch_dtype and variant

* remove conceptual pipeline doc
2023-04-04 14:08:21 -07:00
YiYi Xu
1a6def3ddb fix post-processing (#2968)
Co-authored-by: yiyixuxu <yixu310@gmail,com>
2023-04-04 08:52:55 -10:00
YiYi Xu
0c63c3839a allow use custom local dataset for controlnet training scripts (#2928)
use custom local dataset

Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-04-04 10:37:47 -07:00
Lucain
a87e88b783 Use upload_folder in training scripts (#2934)
use upload folder in training scripts

Co-authored-by: testbot <lucainp@hf.co>
2023-04-04 16:19:12 +01:00
Patrick von Platen
a0263b2e5b make style 2023-04-04 15:18:39 +02:00
Ernie Chu
62c01d267a Ensure validation image RGB not RGBA (#2945)
* ensure validation image RGB not RGBA

* ensure validation image RGB not RGBA

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-04-04 14:17:59 +01:00
Guspan Tanadi
f3e72e9e57 Removing explicit markdown extension (#2944)
Trigger from previous PR. Build the page once again.
2023-04-04 14:15:19 +01:00
M. Tolga Cangöz
4fd7e97f33 Update ddpm.mdx (#2929) 2023-04-04 14:02:30 +01:00
M. Tolga Cangöz
4a1eae07c7 Update ddim.mdx (#2926) 2023-04-04 14:01:55 +01:00
M. Tolga Cangöz
e329edff7e Update score_sde_vp.mdx (#2938) 2023-04-04 14:00:43 +01:00
M. Tolga Cangöz
3e2d1af867 Update score_sde_ve.mdx (#2937) 2023-04-04 14:00:15 +01:00
M. Tolga Cangöz
715c25d344 Update unipc.mdx (#2936) 2023-04-04 13:59:53 +01:00
M. Tolga Cangöz
4274a3a915 Update euler_ancestral.mdx (#2932) 2023-04-04 13:58:58 +01:00
Sayak Paul
7139f0e874 fix: norm group test for UNet3D. (#2959) 2023-04-04 09:01:15 +01:00
Patrick von Platen
8c530fc2f6 make style 2023-03-31 23:46:28 +02:00
Patrick von Platen
723933f5f1 add another import 2023-03-31 23:45:05 +02:00
Patrick von Platen
f23d6eb8f2 fix missing import 2023-03-31 23:37:58 +02:00
wfng92
cd634a8fbb Check for all different packages of opencv (#2901)
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-03-31 15:00:59 +01:00
Patrick von Platen
7447f75b9f Update pipeline_stable_diffusion_controlnet.py (#2917) 2023-03-31 14:59:50 +01:00
Patrick von Platen
a5bdb678c0 fix importing diffusers without transformers installed 2023-03-31 13:56:38 +00:00
M. Tolga Cangöz
c43356267b Update controlnet.mdx (#2912)
.
2023-03-31 14:32:36 +01:00
M. Tolga Cangöz
89b23d9869 Update image_variation.mdx (#2911)
.
2023-03-31 14:31:43 +01:00
Guspan Tanadi
419660c99b Have fix current pipeline link (#2910)
Also capitalize the notebook provider name
2023-03-31 14:31:14 +01:00
Patrick von Platen
d36103a089 [Tests] Speed up test (#2919)
speed up test
2023-03-31 14:20:46 +01:00
Nipun Jindal
b3c437e009 [2884]: Fix cross_attention_kwargs in StableDiffusionImg2ImgPipeline (#2902)
* [2884]: Fix cross_attention_kwargs in StableDiffusionImg2ImgPipeline

* [Build Fix]

* [Build Fix]

---------

Co-authored-by: njindal <njindal@adobe.com>
2023-03-31 13:26:04 +01:00
mengfei25
7b6caca9eb Modify example with intel optimization (#2896)
* modify intel opts inference script

* modify readme

* modify doc

* fix some issues

* reformat

* reformat script

* format issue

* format issue
2023-03-31 13:07:20 +01:00
Sandeep
f3fbf9bfc0 Fix check_inputs in upscaler pipeline to allow embeds (#2892)
* Remove suggestion to use cuDNN benchmark in docs

* removing the wrong line

* add support for embeds

* fix line length
2023-03-31 12:46:20 +01:00
Patrick von Platen
e1144ac20c Fix slow tests text inv (#2915)
* fix slow tests

* uP
2023-03-31 10:03:32 +01:00
Guillermo Cique
1055175a18 Fix textual inversion loading (#2914) 2023-03-31 09:52:48 +01:00
Takuma Mori
0df4ad541f Add support Karras sigmas for StableDiffusionKDiffusionPipeline (#2874)
* add use_karras_sigmas option

thanks @Stax124

* fix sigma_min/max from scheduler.sigmas

* add docstring

* revert to use k_diffusion_model.sigma, to(device)

* add integration test

* make style
2023-03-31 09:12:11 +05:30
YiYi Xu
51d970d60d [docs] add the Stable diffusion with Jax/Flax Guide into the docs (#2487)
* add stable diffusion jax guide


---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-03-30 16:22:40 -10:00
Pi Esposito
a937e1b594 add load textual inversion embeddings to stable diffusion (#2009)
* add load textual inversion embeddings draft

* fix quality

* fix typo

* make fix copies

* move to textual inversion mixin

* make it accept from sd-concept library

* accept list of paths to embeddings

* fix styling of stable diffusion pipeline

* add dummy TextualInversionMixin

* add docstring to textualinversionmixin

* add load textual inversion embeddings draft

* fix quality

* fix typo

* make fix copies

* move to textual inversion mixin

* make it accept from sd-concept library

* accept list of paths to embeddings

* fix styling of stable diffusion pipeline

* add dummy TextualInversionMixin

* add docstring to textualinversionmixin

* add case for parsing embedding from auto1111 UI format

Co-authored-by: Evan Jones <evan.a.jones3@gmail.com>
Co-authored-by: Ana Tamais <aninhamoraestamais@gmail.com>

* fix style after rebase

* move textual inversion mixin to loaders

* move mixin inheritance to DiffusionPipeline from StableDiffusionPipeline)

* update dummy class name

* addressed all comments

* fix old dangling import

* fix style

* proposal

* remove bogus

* Apply suggestions from code review

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Will Berman <wlbberman@gmail.com>

* finish

* make style

* up

* fix code quality

* fix code quality - again

* fix code quality - 3

* fix alt diffusion code quality

* fix model editing pipeline

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Finish

---------

Co-authored-by: Evan Jones <evan.a.jones3@gmail.com>
Co-authored-by: Ana Tamais <aninhamoraestamais@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Will Berman <wlbberman@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-03-30 18:08:39 +01:00
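A short usage sketch of the new loader; the concept repo and placeholder token are illustrative examples from the sd-concepts-library:

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# Load a learned concept; the embedding is registered with the tokenizer/text encoder.
pipe.load_textual_inversion("sd-concepts-library/cat-toy")
image = pipe("a <cat-toy> sitting on a bench").images[0]
```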
Michael Gartsbein
1d033a95f6 img2img.multiple.controlnets.pipeline (#2833)
* img2img.multiple.controlnets.pipeline

* remove comments

---------

Co-authored-by: mishka <gartsocial@gmail.com>
2023-03-30 18:00:12 +01:00
Patrick von Platen
49609768b4 make style 2023-03-30 18:26:41 +02:00
Alon Burg
9062b2847d Support fp16 in conversion from original ckpt (#2733)
add --half to convert_original_stable_diffusion_to_diffusers.py
2023-03-30 17:26:18 +01:00
YiYi Xu
b3d5cc4a36 add flax requirement (#2894)
Co-authored-by: yiyixuxu <yixu310@gmail,com>
2023-03-30 17:10:26 +01:00
Sayak Paul
b2021273eb [Docs] add an example use for StableUnCLIPPipeline in the pipeline docs (#2897)
* improve stable unclip doc.

* add: entry of StableUnCLIPPipeline to the docs

* Apply suggestions from code review

Co-authored-by: apolinario <joaopaulo.passos@gmail.com>

---------

Co-authored-by: apolinario <joaopaulo.passos@gmail.com>
2023-03-30 17:14:04 +05:30
Steven Liu
e47459c80f [docs] Performance tutorial (#2773)
* update performance tutorial

* fix divs

* oops forgot to close tag

* apply feedback

* apply feedback

* apply feedback

* align doc title
2023-03-29 12:48:14 -07:00
Yaman Ahlawat
3be489182e feat: allow offset_noise in dreambooth training example (#2826) 2023-03-29 16:01:02 +05:30
Sayak Paul
d82b032319 [Examples] Add streaming support to the ControlNet training example in JAX (#2859)
* improve stable unclip doc.

* feat: add streaming support to controlnet flax training script.

* fix: CLI arg.

* fix: torch dataloader shuffle setting.

* fix: dataset length.

* fix: wandb config.

* fix: steps_per_epoch in the training loop.

* add: entry about streaming in the readme

* get column names from iterable dataset + fix final logging

---------

Co-authored-by: yiyixuxu <yixu310@gmail.com>
2023-03-29 06:42:08 +05:30
Patrick von Platen
40a7b8629e [Docs] Correct phrasing (#2873) 2023-03-28 17:32:18 +01:00
M. Tolga Cangöz
628fefb232 Update stable_diffusion_safe.mdx (#2870)
Fix typos
2023-03-28 17:23:54 +01:00
M. Tolga Cangöz
03fe36f183 Update paint_by_example.mdx (#2869)
.
2023-03-28 17:23:39 +01:00
M. Tolga Cangöz
ef4c2fa4f1 Update alt_diffusion.mdx (#2865)
Fix typos
2023-03-28 17:17:53 +01:00
M. Tolga Cangöz
3980858ad4 Update overview.mdx (#2864)
Fix typos
2023-03-28 17:17:33 +01:00
M. Tolga Cangöz
37c82480bb Update evaluation.mdx (#2862)
Fix typos
2023-03-28 17:15:37 +01:00
Sayak Paul
13845462db [Tests] Adds a test to check if image_embeds None case is handled properly in StableUnCLIPImg2ImgPipeline (#2861)
* improve stable unclip doc.

* add: test to check if image_embeds None case is handled.

* apply formatting/
2023-03-28 17:14:08 +01:00
Nipun Jindal
53377ef83c [2761]: Add documentation for extra_in_channels UNet1DModel (#2817)
Co-authored-by: njindal <njindal@adobe.com>
2023-03-28 16:56:45 +01:00
dg845
4d0f412d0d [WIP] Check UNet shapes in StableDiffusionInpaintPipeline __init__ (#2853)
Add warning in __init__ if user loads a checkpoint with pipeline.unet.config.in_channels other than 9.
2023-03-28 16:53:52 +01:00
Felix Blanke
25d927aa51 Add last_epoch argument to optimization.get_scheduler (#2850)
Add last_epoch arg to optimization.get_scheduler.

Allows the specification of the index of the last epoch when
resuming training.
2023-03-28 16:46:41 +01:00
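A hedged sketch of resuming a schedule with the new `last_epoch` argument; the model, optimizer, and `resume_step` are placeholders. Note that PyTorch schedulers resumed with `last_epoch != -1` expect `initial_lr` in the optimizer's param groups, which is normally present once the optimizer state is restored from a checkpoint:

```python
import torch
from diffusers.optimization import get_scheduler

model = torch.nn.Linear(4, 4)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# When resuming for real, loading the optimizer state dict provides initial_lr;
# here we set it manually so this standalone sketch runs.
for group in optimizer.param_groups:
    group.setdefault("initial_lr", group["lr"])

resume_step = 1_000  # illustrative resume point
lr_scheduler = get_scheduler(
    "cosine",
    optimizer=optimizer,
    num_warmup_steps=500,
    num_training_steps=10_000,
    last_epoch=resume_step - 1,
)
```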
dg845
663c654577 [WIP][Docs] Use DiffusionPipeline Instead of Child Classes when Loading Pipeline (#2809)
* Change the docs to use the parent DiffusionPipeline class when loading a checkpoint using from_pretrained() instead of a child class (e.g. StableDiffusionPipeline) where possible.

* Run make style to fix style issues.

* Change more docs to use DiffusionPipeline rather than a subclass.

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-03-28 16:44:34 +01:00
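The pattern the docs change recommends, sketched below; the checkpoint id is illustrative:

```python
from diffusers import DiffusionPipeline

# DiffusionPipeline resolves the concrete pipeline class from the checkpoint's
# model_index.json, so the docs don't need to hard-code e.g. StableDiffusionPipeline.
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
```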
John HU
920a15cf70 Fix link to LoRA training guide in DreamBooth training guide (#2836)
Fix link to LoRA training guide
2023-03-28 16:35:41 +01:00
cmdr2
7d756813d4 Update the legacy inpainting SD pipeline, to allow calling it with only prompt_embeds (instead of always requiring a prompt) (#2842)
Fix error 'required positional argument: prompt' when Legacy Inpaint is called only with prompt_embeds
2023-03-28 16:30:49 +01:00
Li-Huai (Allan) Lin
159a0bff34 Remove duplicate sentence in docstrings (#2834)
* Remove duplicate sentence

* format
2023-03-28 16:27:51 +01:00
Sandeep
b76d9fde8d Remove suggestion to use cuDNN benchmark in docs (#2793)
* Remove suggestion to use cuDNN benchmark in docs

* removing the wrong line
2023-03-28 16:01:30 +01:00
Aki Sakurai
0f14335af3 StableDiffusionLongPromptWeightingPipeline: Do not hardcode pad token (#2832) 2023-03-28 16:00:56 +01:00
junhsss
8bdf423645 fix KarrasVePipeline bug (#2828) 2023-03-28 15:58:19 +01:00
Stax124
585f621af2 [Stable Diffusion] Allow users to disable Safety checker if loading model from checkpoint (#2768)
* Allow user to disable SafetyChecker and enable dtypes if loading models from .ckpt or .safetensors

* Fix Import sorting (Ruff error)

* Get rid of the dtype convert method as it was implemented all along

* Fix the docstring

* Fix ruff formatting

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-03-28 15:06:48 +01:00
Kashif Rasul
c0afca2d12 updated onnx pndm test (#2811) 2023-03-28 13:43:24 +01:00
Patrick von Platen
42d950174f [Init] Make sure shape mismatches are caught early (#2847)
Improve init
2023-03-28 09:08:28 +01:00
Pedro Cuenca
81125d8499 Make dynamo wrapped modules work with save_pretrained (#2726)
* Workaround for saving dynamo-wrapped models.

* Accept suggestion from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Apply workaround when overriding pipeline components.

* Ensure the correct config.json is saved to disk.

Instead of the dynamo class.

* Save correct module (not compiled one)

* Add test

* style

* fix docstrings

* Go back to using string comparisons.

PyTorch CPU does not have _dynamo.

* Simple test for save_pretrained of compiled models.

* Helper function to test whether module is compiled.

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-03-28 09:03:21 +02:00
YiYi Xu
d4f846fa74 [WIP]Flax training script for controlnet (#2818)
* add train_controlnet_flax

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-03-27 19:13:35 -10:00
Sayak Paul
58fc824488 add: better warning messages when handling multiple conditionings. (#2804)
* add: better warning messages when handling multiple conditioning.

* fix: handling of controlnet_conditioning_scale
2023-03-28 08:19:39 +05:30
Sayak Paul
fab4f3d6e4 improve stable unclip doc. (#2823) 2023-03-28 08:18:29 +05:30
Pedro Cuenca
b10f527577 Helper function to disable custom attention processors (#2791)
* Helper function to disable custom attention processors.

* Restore code deleted by mistake.

* Format

* Fix modeling_text_unet copy.
2023-03-27 20:31:19 +02:00
Eugene Lyapustin
7bc2fff1a5 Fix StableUnCLIPImg2ImgPipeline handling of explicitly passed image embeddings (#2845) 2023-03-27 19:03:59 +01:00
Patrick von Platen
4c26cb9cc8 [Tests] Fix slow tests (#2846) 2023-03-27 18:45:49 +01:00
Pedro Cuenca
1d7b4b60b7 Ruff: apply same rules as in transformers (#2827)
* Apply same ruff settings as in transformers

See https://github.com/huggingface/transformers/blob/main/pyproject.toml
Co-authored-by: Aaron Gokaslan <aaronGokaslan@gmail.com>

* Apply new style rules

* Style

Co-authored-by: Aaron Gokaslan <aaronGokaslan@gmail.com>

* style

* remove list, ruff wouldn't auto fix.

---------

Co-authored-by: Aaron Gokaslan <aaronGokaslan@gmail.com>
2023-03-27 16:18:57 +02:00
Sayak Paul
abb22b4eeb Update examples README.md to include the latest examples (#2839) 2023-03-27 19:34:58 +05:30
Bahjat Kawar
9fb0217548 StableDiffusionModelEditingPipeline documentation (#2810)
* comment update

* comment update
2023-03-24 22:41:31 +05:30
Sayak Paul
5883d8d4d1 [Docs] update docs (Stable unCLIP) to reflect the updated ckpts. (#2815)
* update docs to reflect the updated ckpts.

* update: point about prompt.

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* remove image resizing.

* Apply suggestions from code review

* Apply suggestions from code review

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-03-24 17:24:19 +01:00
Patrick von Platen
dbcb15c25f [Stable UnCLIP] Finish Stable UnCLIP (#2814)
* up

* fix more 7

* up

* finish
2023-03-24 17:04:41 +01:00
PeixuanZuo
c4892f1855 Update onnxruntime package candidates (#2666)
* update import onnxruntime package, enable onnxruntime-rocm and onnxruntime-training

* add ort_nightly_gpu
2023-03-24 12:23:05 +01:00
Kashif Rasul
f6feb69991 Relax DiT test (#2808)
* Relax DiT test

* relax 2 more tests

* fix style

* skip test on mac due to older protobuf
2023-03-24 11:28:55 +01:00
Bahjat Kawar
37a44bb283 Add ModelEditing pipeline (#2721)
* TIME first commit

* styling.

* styling 2.

* fixes; tests

* apply styling and doc fix.

* remove sups.

* fixes

* remove temp file

* move augmentations to const

* added doc entry

* code quality

* customize augmentations

* quality

* quality

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-03-24 13:01:39 +05:30
Haofan Wang
4a98d6e097 Update train_text_to_image_lora.py (#2795) 2023-03-24 11:45:35 +05:30
Sanchit Gandhi
b94880e536 Add AudioLDM (#2232)
* Add AudioLDM

* up

* add vocoder

* start unet

* unconditional unet

* clap, vocoder and vae

* clean-up: conversion scripts

* fix: conversion script token_type_ids

* clean-up: pipeline docstring

* tests: from SD

* clean-up: cpu offload vocoder instead of safety checker

* feat: adapt tests to audioldm

* feat: add docs

* clean-up: amend pipeline docstrings

* clean-up: make style

* clean-up: make fix-copies

* fix: add doc path to toctree

* clean-up: args for conversion script

* clean-up: paths to checkpoints

* fix: use conditional unet

* clean-up: make style

* fix: type hints for UNet

* clean-up: docstring for UNet

* clean-up: make style

* clean-up: remove duplicate in docstring

* clean-up: make style

* clean-up: make fix-copies

* clean-up: move imports to start in code snippet

* fix: pass cross_attention_dim as a list/tuple to unet

* clean-up: make fix-copies

* fix: update checkpoint path

* fix: unet cross_attention_dim in tests

* film embeddings -> class embeddings

* Apply suggestions from code review

Co-authored-by: Will Berman <wlbberman@gmail.com>

* fix: unet film embed to use existing args

* fix: unet tests to use existing args

* fix: make style

* fix: transformers import and version in init

* clean-up: make style

* Revert "clean-up: make style"

This reverts commit 5d6d1f8b32.

* clean-up: make style

* clean-up: use pipeline tester mixin tests where poss

* clean-up: skip attn slicing test

* fix: add torch dtype to docs

* fix: remove conversion script out of src

* fix: remove .detach from 1d waveform

* fix: reduce default num inf steps

* fix: swap height/width -> audio_length_in_s

* clean-up: make style

* fix: remove nightly tests

* fix: imports in conversion script

* clean-up: slim-down to two slow tests

* clean-up: slim-down fast tests

* fix: batch consistent tests

* clean-up: make style

* clean-up: remove vae slicing fast test

* clean-up: propagate changes to doc

* fix: increase test tol to 1e-2

* clean-up: finish docs

* clean-up: make style

* feat: vocoder / VAE compatibility check

* feat: possibly expand / cut audio waveform

* fix: pipeline call signature test

* fix: slow tests output len

* clean-up: make style

* make style

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: William Berman <WLBberman@gmail.com>
2023-03-23 19:00:21 +01:00
Steven Liu
1870fb05a9 [docs] Add Colab notebooks and Spaces (#2713)
* add colab notebook and spaces

* fix image link
2023-03-23 09:48:58 -07:00
YiYi Xu
df91c44712 Flax controlnet (#2727)
* add contronet flax

---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2023-03-23 05:46:23 -10:00
Pedro Cuenca
aa0531fa8d Skip mps in text-to-video tests (#2792)
* Skip mps in text-to-video tests.

* style

* Skip UNet3D mps tests.
2023-03-23 14:39:03 +01:00
Haofan Wang
dc5b4e2342 Update train_text_to_image_lora.py (#2767)
* Update train_text_to_image_lora.py

* Update train_text_to_image_lora.py

* Update train_text_to_image_lora.py

* Update train_text_to_image_lora.py

* format
2023-03-23 14:28:47 +01:00
Sayak Paul
0d7aac3e8d [Docs] small fixes to the text to video doc. (#2787)
* small fixes to the text to video doc.

* add: Spaces link.

* add: warning on research-only model.
2023-03-23 18:57:02 +05:30
Nipun Jindal
055c90f589 [2737]: Add DPMSolverMultistepScheduler to CLIP guided community pipeline (#2779)
[2737]: Add DPMSolverMultistepScheduler to CLIP guided community pipelines

Co-authored-by: njindal <njindal@adobe.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-03-23 14:20:24 +01:00
Kashif Rasul
2ef9bdd76f Music Spectrogram diffusion pipeline (#1044)
* initial TokenEncoder and ContinuousEncoder

* initial modules

* added ContinuousContextTransformer

* fix copy paste error

* use numpy for get_sequence_length

* initial terminal relative positional encodings

* fix weights keys

* fix assert

* cross attend style: concat encodings

* make style

* concat once

* fix formatting

* Initial SpectrogramPipeline

* fix input_tokens

* make style

* added mel output

* ignore weights for config

* move mel to numpy

* import pipeline

* fix class names and import

* moved models to models folder

* import ContinuousContextTransformer and SpectrogramDiffusionPipeline

* initial spec diffusion converstion script

* renamed config to t5config

* added weight loading

* use arguments instead of t5config

* broadcast noise time to batch dim

* fix call

* added scale_to_features

* fix weights

* transpose layernorm weight

* scale is a vector

* scale the query outputs

* added comment

* undo scaling

* undo depth_scaling

* initial get_extended_attention_mask

* attention_mask is none in self-attention

* cleanup

* manually invert attention

* nn.Linear needs bias=False

* added T5LayerFFCond

* remove to fix conflict

* make style and dummy

* remove unused variables

* remove predict_epsilon

* Move accelerate to a soft-dependency (#1134)

* finish

* finish

* Update src/diffusers/modeling_utils.py

* Update src/diffusers/pipeline_utils.py

Co-authored-by: Anton Lozhkov <anton@huggingface.co>

* more fixes

* fix

Co-authored-by: Anton Lozhkov <anton@huggingface.co>

* fix order

* added initial midi to note token data pipeline

* added int to int tokenizer

* remove duplicate

* added logic for segments

* add melgan to pipeline

* move autoregressive gen into pipeline

* added note_representation_processor_chain

* fix dtypes

* remove immutabledict req

* initial doc

* use np.where

* require note_seq

* fix typo

* update dependency

* added note-seq to test

* added is_note_seq_available

* fix import

* added toc

* added example usage

* undo for now

* moved docs

* fix merge

* fix imports

* predict first segment

* avoid un-needed copy to and from cpu

* make style

* Copyright

* fix style

* add test and fix inference steps

* remove bogus files

* reorder models

* up

* remove transformers dependency

* make work with diffusers cross attention

* clean more

* remove @

* improve further

* up

* uP

* Apply suggestions from code review

* Update tests/pipelines/spectrogram_diffusion/test_spectrogram_diffusion.py

* loop over all tokens

* make style

* Added a section on the model

* fix formatting

* grammar

* formatting

* make fix-copies

* Update src/diffusers/pipelines/__init__.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/spectrogram_diffusion/pipeline_spectrogram_diffusion.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* added callback and optional onnx

* do not squeeze batch dim

* clean up more

* upload

* convert jax to numpy

* make style

* fix warning

* make fix-copies

* fix warning

* add initial fast tests

* add initial pipeline_params

* eval mode due to dropout

* skip batch tests as pipeline runs on a single file

* make style

* fix relative path

* fix doc tests

* Update src/diffusers/models/t5_film_transformer.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/models/t5_film_transformer.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update docs/source/en/api/pipelines/spectrogram_diffusion.mdx

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update tests/pipelines/spectrogram_diffusion/test_spectrogram_diffusion.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update tests/pipelines/spectrogram_diffusion/test_spectrogram_diffusion.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update tests/pipelines/spectrogram_diffusion/test_spectrogram_diffusion.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update tests/pipelines/spectrogram_diffusion/test_spectrogram_diffusion.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* add MidiProcessor

* format

* fix org

* Apply suggestions from code review

* Update tests/pipelines/spectrogram_diffusion/test_spectrogram_diffusion.py

* make style

* pin protobuf to <4

* fix formatting

* white space

* tensorboard needs protobuf

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
2023-03-23 14:06:17 +01:00
Naoki Ainoya
14e3a28c12 Rename 'CLIPFeatureExtractor' class to 'CLIPImageProcessor' (#2732)
The 'CLIPFeatureExtractor' class name has been renamed to 'CLIPImageProcessor' in order to comply with future deprecation. This commit includes the necessary changes to the affected files.
2023-03-23 13:49:22 +01:00
Mishig
8e35ef0142 [doc wip] literalinclude (#2718) 2023-03-23 13:42:54 +01:00
Patrick von Platen
a8315ce1a9 [UNet3DModel] Fix with attn processor (#2790)
* [UNet3DModel] Fix attn processor

* make style
2023-03-23 09:56:02 +01:00
Sayak Paul
0d633a42f4 deduplicate training section in the docs. (#2788) 2023-03-23 11:21:53 +05:30
Sayak Paul
9dc84448ac [Examples] InstructPix2Pix instruct training script (#2478)
* add: initial implementation of the pix2pix instruct training script.

* shorten cli arg.

* fix: main process check.

* fix: dataset column names.

* simplify tokenization.

* proper placement of null conditions.

* apply styling.

* remove debugging message for conditioning do.

* complete license.

* add: requirements.txt

* wandb column name order.

* fix: augmentation.

* change: dataset_id.

* fix: convert_to_np() call.

* fix: reshaping.

* fix: final ema copy.

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* address PR comments.

* add: readme details.

* config fix.

* downgrade version.

* reduce image width in the readme.

* note on hyperparameters during generation.

* add: output images.

* update readme.

* minor edits to readme.

* debugging statement.

* explicitly placement of the pipeline.

* bump minimum diffusers version.

* fix: device attribute error.

* weight dtype.

* debugging.

* add dtype info.

* add separate te and vae.

* add: explicit casting/

* remove casting.

* up.

* up 2.

* up 3.

* autocast.

* disable mixed-precision in the final inference.

* debugging information.

* autocasting.

* add: instructpix2pix training section to the docs.

* Empty-Commit

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-03-23 10:15:01 +05:30
Sayak Paul
c681ad1af2 add: section on multiple controlnets. (#2762)
* add: section on multiple controlnets.

Co-authored-by: William Berman <WLBberman@gmail.com>

* fix: docs.

* fix: docs.

---------

Co-authored-by: William Berman <WLBberman@gmail.com>
2023-03-23 09:55:25 +05:30
Haofan Wang
e0d8c9ef83 Support for Offset Noise in examples (#2753)
* add noise offset

* make style
2023-03-23 09:36:17 +05:30
Pedro Cuenca
92e1164e2e mps: remove warmup passes (#2771)
* Remove warmup passes in mps tests.

* Update mps docs: no warmup pass in PyTorch 2

* Update imports.
2023-03-22 19:29:27 +01:00
Patrick von Platen
ca1a22296d [MS Text To Video] Add first text to video (#2738)
* [MS Text To Video] Add first text to video

* upload

* make first model example

* match unet3d params

* make sure weights are correctly converted

* improve

* forward pass works, but diff result

* make forward work

* fix more

* finish

* refactor video output class.

* feat: add support for a video export utility.

* fix: opencv availability check.

* run make fix-copies.

* add: docs for the model components.

* add: standalone pipeline doc.

* edit docstring of the pipeline.

* add: right path to TransformerTempModel

* add: first set of tests.

* complete fast tests for text to video.

* fix bug

* up

* three fast tests failing.

* add: note on slow tests

* make work with all schedulers

* apply styling.

* add slow tests

* change file name

* update

* more correction

* more fixes

* finish

* up

* Apply suggestions from code review

* up

* finish

* make copies

* fix pipeline tests

* fix more tests

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* apply suggestions

* up

* revert

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-03-22 18:39:33 +01:00
Steven Liu
7fe88613fa [docs] Clarify purpose of reproducibility docs (#2756)
* clarify purpose of repro docs

* apply feedback
2023-03-21 17:35:21 -07:00
Pedro Cuenca
a39d42b91d [docs] update torch 2 benchmark (#2764)
* Update benchmark for A100, 3090, 3090 Ti, 4090.

* Link to PyTorch blog.

* Update install instructions.
2023-03-21 17:41:13 +00:00
Will Berman
ca1e40726e stable diffusion depth batching fix (#2757) 2023-03-21 10:18:44 -07:00
1lint
b33bd91fae Add option to set dtype in pipeline.to() method (#2317)
add test_to_dtype to check pipe.to(fp16)
2023-03-21 15:21:23 +01:00
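A quick sketch of the new dtype option, assuming the `torch_dtype` keyword this PR introduces on `DiffusionPipeline.to()`; the model id is illustrative:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# dtype can now be set through .to(), just like a device.
pipe = pipe.to(torch_dtype=torch.float16)
pipe = pipe.to("cuda")
```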
Pedro Cuenca
1fcf279d74 Fix mps tests on torch 2.0 (#2766) 2023-03-21 15:19:31 +01:00
Hyowon Ha
58bcf46a8f Add guidance start/end parameters to StableDiffusionControlNetImg2ImgPipeline (#2731)
* Add guidance start/end parameters to community controlnet img2img pipeline

* Fix formats
2023-03-21 14:38:43 +01:00
Nipun Jindal
0042efd015 [1929]: Add CLIP guidance for Img2Img stable diffusion pipeline (#2723)
* [Img2Img]: Copyover img2img pipeline

* [Img2Img]: img2img pipeline

* [Img2Img]: img2img pipeline

* [Img2Img]: img2img pipeline

---------

Co-authored-by: njindal <njindal@adobe.com>
2023-03-21 13:53:00 +01:00
Alexander Pivovarov
f024e00398 Fix typos (#2715)
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-03-21 13:45:04 +01:00
Patrick von Platen
2120b4eee3 Improve Contribution Doc (#2043)
* first refactor

* more text

* improve

* finish

* up

* up

* up

* up

* finish

* Apply suggestions from code review

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>

* up

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* finished

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* finished

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-03-21 13:41:29 +01:00
regisss
c10d6854c0 Update numbers for Habana Gaudi in documentation (#2734)
Update numbers for Habana Gaudi in doc
2023-03-21 11:59:28 +01:00
Sayak Paul
73bdad08a1 add: controlnet entry to training section in the docs. (#2677)
* add: controlnet entry to training section in the docs.

* formatting.

* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* wrap in a tip block.

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2023-03-21 07:23:24 +05:30
M. Tolga Cangöz
ba87c1607c Update text_inversion.mdx (#2751)
Fix typos
2023-03-20 13:20:50 -07:00
M. Tolga Cangöz
afe59a920e Update philosophy.mdx (#2752)
Fix typos
2023-03-20 13:19:43 -07:00
M. Tolga Cangöz
25ed7cb08b Update dreambooth.mdx (#2742)
Fix typos
2023-03-20 17:40:56 +00:00
M. Tolga Cangöz
af86b0ccac Update fp16.mdx (#2746)
Fix typos
2023-03-20 17:39:55 +00:00
M. Tolga Cangöz
a9f28b687c Update torch2.0.mdx (#2748)
Fix typos
2023-03-20 17:39:04 +00:00
M. Tolga Cangöz
d91dc57d8a Update mps.mdx (#2749)
Fix typos
2023-03-20 17:33:23 +00:00
Patrick von Platen
fdcff560d0 Fix more slow tests 2023-03-18 19:41:38 +00:00
Patrick von Platen
ec2c1bc95f Update README.md 2023-03-18 19:39:24 +01:00
Patrick von Platen
9ecd924859 [Tests] Correct PT2 (#2724)
* [Tests] Correct PT2

* correct more

* move versatile to nightly

* up

* up

* again

* Apply suggestions from code review
2023-03-18 18:38:04 +01:00
Andy
116f70cbf8 Enabling gradient checkpointing for VAE (#2536)
* updated black format

* update black format

* make style format

* updated line endings

* update code formatting

* Update examples/research_projects/onnxruntime/text_to_image/train_text_to_image.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/models/vae.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/models/vae.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* added vae gradient checkpointing test

* make style

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Will Berman <wlbberman@gmail.com>
2023-03-17 14:59:38 -07:00
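
For reference, gradient checkpointing on the VAE is toggled the same way as on other diffusers models; a minimal sketch, with the checkpoint id as an example only:

```python
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae")
vae.enable_gradient_checkpointing()  # trade extra compute for lower memory during training
vae.train()
```
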
Sayak Paul
a16957159e [docs] Update ONNX doc to use optimum (#2702)
* minor edits to onnx and openvino docs.

* Apply suggestions from code review

Co-authored-by: Ella Charlaix <80481427+echarlaix@users.noreply.github.com>

---------

Co-authored-by: Ella Charlaix <80481427+echarlaix@users.noreply.github.com>
2023-03-17 18:17:42 +01:00
YiYi Xu
f4bbcb29c0 fix image link in inpaint doc (#2693)
fix link

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2023-03-16 19:35:27 -10:00
Patrick von Platen
a41850a21d Improve deprecation error message when using cross_attention import (#2710)
Improve error message
2023-03-17 00:17:53 +01:00
Will Berman
a4b2c2f150 train_unconditional save restore unet parameters (#2706) 2023-03-16 16:15:56 -07:00
Steven Liu
77e0ea8048 [docs] Add safety checker to ethical guidelines (#2699)
add safety checker
2023-03-16 09:39:39 -07:00
Nicolas Patry
d9227cf788 Adding use_safetensors argument to give more control to users (#2123)
* Adding `use_safetensors` argument to give more control to users

about which weights they use.

* Doc style.

* Rebased (not functional).

* Rebased and functional with tests.

* Style.

* Apply suggestions from code review

* Style.

* Addressing comments.

* Update tests/test_pipelines.py

Co-authored-by: Will Berman <wlbberman@gmail.com>

* Black ???

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Will Berman <wlbberman@gmail.com>
2023-03-16 15:57:43 +01:00
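
A minimal sketch of the new `use_safetensors` argument (checkpoint id is only an example): setting it forces the loader to pick `.safetensors` weights instead of silently falling back to pickled `.bin` files.

```python
from diffusers import StableDiffusionPipeline

# Require safetensors weights; loading fails if the repo only ships .bin files.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", use_safetensors=True
)
```
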
Patrick von Platen
e828232780 Rename attention (#2691)
* rename file

* rename attention

* fix more

* rename more

* up

* more deprecation imports

* fixes
2023-03-16 00:35:54 +01:00
Steven Liu
588e50bc57 [docs] Reorganize table of contents (#2671)
* reorg toc

* reorg toc some more

* remove duplicate config
2023-03-15 16:28:18 -07:00
Steven Liu
a72d14fc8d [docs] Create better navigation on index (#2658)
* create updated nav for index

* fix header

* apply feedback
2023-03-15 11:58:04 -07:00
Steven Liu
1c2c594e3d [docs] Add overviews to each section (#2657)
* add overviews to each section

* fix typo in toctree

* apply feedbacks
2023-03-15 11:57:32 -07:00
YiYi Xu
e52cd55615 Add image_processor (#2617)
* add image_processor

---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-03-15 07:55:49 -10:00
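
The `VaeImageProcessor` added here centralizes PIL/numpy/tensor conversion and resizing for pipelines. A rough sketch of the intended usage follows; the input file is hypothetical and exact method signatures may differ between versions.

```python
from PIL import Image
from diffusers.image_processor import VaeImageProcessor

processor = VaeImageProcessor(vae_scale_factor=8)

image = Image.open("input.png").convert("RGB")               # hypothetical input file
tensor = processor.preprocess(image)                          # PIL -> normalized torch tensor
restored = processor.postprocess(tensor, output_type="pil")   # tensor -> list of PIL images
```
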
M. Tolga Cangöz
c0b4d72095 Update unconditional_image_generation.mdx (#2686)
Fix typos
2023-03-15 18:19:57 +01:00
M. Tolga Cangöz
78afb84436 Update controlling_generation.mdx (#2690)
Fix typos
2023-03-15 18:18:41 +01:00
M. Tolga Cangöz
91570b2fda Update conditional_image_generation.mdx (#2687)
Fix typos
2023-03-15 18:16:32 +01:00
M. Tolga Cangöz
3584f6b345 Update img2img.mdx (#2688)
Fix typos
2023-03-15 18:15:59 +01:00
M. Tolga Cangöz
b4bb5345cd Update kerascv.mdx (#2685)
Fix typos
2023-03-15 18:15:51 +01:00
M. Tolga Cangöz
e71f73d8df Update custom_pipeline_overview.mdx (#2684)
Fix typos
2023-03-15 18:14:37 +01:00
Kashif Rasul
cf4227cd1e T5Attention support for cross-attention (#2654)
* fix AttnProcessor2_0

Fix use of AttnProcessor2_0 for cross attention with mask

* added scale_qk and out_bias flags

* fixed for xformers

* check if it has scale argument

* Update cross_attention.py

* check torch version

* fix sliced attn

* style

* set scale

* fix test

* fixed addedKV processor

* revert back AttnProcessor2_0

* if missing if

* fix inner_dim

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-03-15 18:04:05 +01:00
Patrick von Platen
9d1341d69b Update Dockerfile CUDA (#2682)
* Update Dockerfile CUDA

* Apply suggestions from code review
2023-03-15 18:02:56 +01:00
Sayak Paul
4553c29d92 [Tests] fix: slow serialization test (#2678)
fix: slow serialization tests
2023-03-15 22:30:21 +05:30
Sayak Paul
c9477bf8a8 [Docs] Adds a documentation page for evaluating diffusion models (#2516)
* add a documentation page for evaluating diffusion models.

* fix: checkpoint link.

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>

* formatting fixes.

* formatting fixes.

* link to partiprompts dataset on hub.

* reflect on Pedro's comments.

Co-authored-by: Pedro <pedro@huggingface.co>

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* reflect on Pedro's comments.

Co-authored-by: Pedro <pedro@huggingface.co>

* update mention of FID.

* Apply suggestions from code review

Co-authored-by: Will Berman <wlbberman@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>

* minor nit.

* finish edges and add colab notebook.

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* run formatting.

* additional feedback.

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
Co-authored-by: Pedro <pedro@huggingface.co>
Co-authored-by: Will Berman <wlbberman@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
2023-03-15 17:05:01 +05:30
Henrik Forstén
79eb3d07d0 Controlnet training (#2545)
* Controlnet training code initial commit

Works with circle dataset: https://github.com/lllyasviel/ControlNet/blob/main/docs/train.md

* Script for adding a controlnet to existing model

* Fix control image transform

Control image should be in 0..1 range.

* Add license header and remove more unused configs

* controlnet training readme

* Allow nonlocal model in add_controlnet.py

* Formatting

* Remove unused code

* Code quality

* Initialize controlnet in training script

* Formatting

* Address review comments

* doc style

* explicit constructor args and submodule names

* hub dataset

NOTE -  not tested

* empty prompts

* add conditioning image

* rename

* remove instance data dir

* image_transforms -> -1,1 . conditioning_image_transformers -> 0, 1

* nits

* remove local rank config

I think this isn't necessary in any of our training scripts

* validation images

* proportion_empty_prompts typo

* weight copying to controlnet bug

* call log validation fix

* fix

* gitignore wandb

* fix progress bar and resume from checkpoint iteration

* initial step fix

* log multiple images

* fix

* fixes

* tracker project name configurable

* misc

* add controlnet requirements.txt

* update docs

* image labels

* small fixes

* log validation using existing models for pipeline

* fix for deepspeed saving

* memory usage docs

* Update examples/controlnet/train_controlnet.py

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update examples/controlnet/train_controlnet.py

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update examples/controlnet/README.md

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update examples/controlnet/README.md

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update examples/controlnet/README.md

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update examples/controlnet/README.md

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update examples/controlnet/README.md

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update examples/controlnet/README.md

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update examples/controlnet/README.md

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update examples/controlnet/README.md

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* remove extra is main process check

* link to dataset in intro paragraph

* remove unnecessary paragraph

* note on deepspeed

* Update examples/controlnet/README.md

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* assert -> value error

* weights and biases note

* move images out of git

* remove .gitignore

---------

Co-authored-by: William Berman <WLBberman@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-03-14 20:16:30 -07:00
Will Berman
279f744ce5 controlnet integration tests num_inference_steps=3 (#2672) 2023-03-14 14:42:32 -07:00
clarencechen
ee71d9d03d Add support for different model prediction types in DDIMInverseScheduler (#2619)
* Add support for different model prediction types in DDIMInverseScheduler
Resolve alpha_prod_t_prev index issue for final step of inversion

* Fix old bug introduced when prediction type is "sample"

* Add support for sample clipping for numerical stability and deprecate old kwarg

* Detach sample, alphas, betas

Derive predicted noise from model output before dist. regularization

Style cleanup

* Log loss for debugging

* Revert "Log loss for debugging"

This reverts commit 76ea9c856f.

* Add comments

* Add inversion equivalence test

* Add expected data for Pix2PixZero pipeline tests with SD 2

* Update tests/pipelines/stable_diffusion/test_stable_diffusion_pix2pix_zero.py

* Remove cruft and add more explanatory comments

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-03-14 21:25:12 +01:00
aengusng8
268ebcb015 Add ddim noise comparative analysis pipeline (#2665)
* add DDIM Noise Comparative Analysis pipeline

* update README

* add comments

* run BLACK format
2023-03-14 18:09:55 +01:00
Patrick von Platen
d185c0dfa7 [Lora] correct lora saving & loading (#2655)
* [Lora] correct lora saving & loading

* fix final

* Apply suggestions from code review
2023-03-14 17:55:43 +01:00
qwjaskzxl
7c1b347702 Update README.md (#2653)
* Update README.md

fix 2 bugs: (1) "previous_noisy_sample" should be in the FOR loop in line 87; (2) converting image to INT should be before "Image.fromarray" in line 91

* Apply suggestions from code review

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-03-14 17:10:35 +01:00
Ilmari Heikkinen
a7cc468fdb AutoencoderKL: clamp indices of blend_h and blend_v to input size (#2660) 2023-03-14 17:06:51 +01:00
Patrick von Platen
07a0c1cb3f [Hub] Upgrade to 0.13.2 (#2670) 2023-03-14 16:47:58 +01:00
Will Berman
ebd44957fc image generation main process checks (#2631) 2023-03-14 01:28:03 -07:00
Haiwen Huang
e2d9a9bea0 fix the in-place modification in unet condition when using controlnet (#2586)
* fix the in-place modification in unet condition when using controlnet, which will cause backprop errors when training

* add clone to mid block

* fix-copies

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: William Berman <WLBberman@gmail.com>
2023-03-14 01:23:03 -07:00
Sayak Paul
f9cfb5ab8a [Tests] Adds a test suite for EMAModel (#2530)
* ema test cases.

* debugging messages.

* debugging messages.

* add: tests for ema.

* fix: optimization_step arg,

* handle device placement.

* Apply suggestions from code review

Co-authored-by: Will Berman <wlbberman@gmail.com>

* remove del and gc.

* address PR feedback.

* add: tests for serialization.

* fix: typos.

* skip_mps to serialization.

---------

Co-authored-by: Will Berman <wlbberman@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-03-14 10:54:45 +05:30
Takuma Mori
d9b8adc4ca Add support for Multi-ControlNet to StableDiffusionControlNetPipeline (#2627)
* support for List[ControlNetModel] on init()

* Add to support for multiple ControlNetCondition

* rename conditioning_scale to scale

* scaling bugfix

* Manually merge `MultiControlNet` #2621

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_controlnet.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_controlnet.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_controlnet.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_controlnet.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_controlnet.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_controlnet.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* cleanups
- don't expose ControlNetCondition
- move scaling to ControlNetModel

* make style error correct

* remove ControlNetCondition to reduce code diff

* refactoring image/cond_scale

* add explain for `images`

* Add docstrings

* all fast-test passed

* Add a slow test

* nit

* Apply suggestions from code review

* small precision fix

* nits

MultiControlNet -> MultiControlNetModel - Matches existing naming a bit
closer

MultiControlNetModel inherit from model utils class - Don't have to
re-write fp16 test

Skip tests that save multi controlnet pipeline - Clearer than changing
test body

Don't auto-batch the number of input images to the number of controlnets.
We generally like to require the user to pass the expected number of
inputs. This simplifies the processing code a bit more

Use existing image pre-processing code a bit more. We can rely on the
existing image pre-processing code and keep the inference loop a bit
simpler.

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: William Berman <WLBberman@gmail.com>
2023-03-13 21:18:30 +01:00
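
A rough sketch of the resulting Multi-ControlNet API: the `controlnet` argument and the conditioning images are passed as lists of equal length. Checkpoint ids, placeholder images, and scales below are illustrative, not values from the PR.

```python
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Two ControlNets conditioned on different signals (checkpoint ids are examples).
canny = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pose = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16)

pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=[canny, pose], torch_dtype=torch.float16
).to("cuda")

canny_image = Image.new("RGB", (512, 512))  # placeholders; real runs pass actual canny/pose maps
pose_image = Image.new("RGB", (512, 512))

image = pipe(
    "a dancer in a futuristic city",
    image=[canny_image, pose_image],           # one conditioning image per controlnet
    controlnet_conditioning_scale=[1.0, 0.8],  # per-controlnet strength
).images[0]
```
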
Patrick von Platen
4ae54b3789 [attention] Fix attention (#2656)
* [attention] Fix attention

* fix

* correct
2023-03-13 19:10:33 +01:00
M. Tolga Cangöz
fa7a576191 Update schedulers.mdx (#2647)
Fix typos
2023-03-13 16:41:28 +01:00
Aki Sakurai
6766a811ff Support non square image generation for StableDiffusionSAGPipeline (#2629)
* Support non square image generation for StableDiffusionSAGPipeline

* Fix style
2023-03-13 11:49:06 +01:00
M. Tolga Cangöz
bbab855322 Update loading.mdx (#2642)
Fix typos
2023-03-11 16:49:05 +01:00
Steven Liu
d5ce55293c [docs] Build Jax notebooks for real (#2641)
build jax notebooks for real
2023-03-11 01:21:14 +01:00
Patrick von Platen
1a7e9f13fd [Pipeline loading] Remove send_telemetry (#2640)
* [Pipeline loading]

* up
2023-03-10 21:01:59 +01:00
Steven Liu
c460ef61b3 [docs] Update readme (#2612)
* 📝 update readme

* 🖍 apply feedback
2023-03-10 08:32:43 -08:00
Will Berman
a28acb5dcc controlnet sd 2.1 checkpoint conversions (#2593)
* controlnet sd 2.1 checkpoint conversions

* remove global_step -> make config file mandatory
2023-03-10 08:22:02 -08:00
M. Tolga Cangöz
f1ab955f64 Update basic_training.mdx (#2639)
Add 'import os'
2023-03-10 14:19:12 +01:00
M. Tolga Cangöz
9360bb94c3 Update quicktour.mdx (#2637)
Fix typo
2023-03-10 14:17:10 +01:00
Ruizhe Wang
ce08cb72fb [Dreambooth] Editable number of class images (#2251)
* [Dreambooth] Editable number of class images

* 'class_num=None' bug fix

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-03-10 14:15:16 +01:00
Sian
4aa68291a9 add translated docs (#2587)
* add translated docs

* improve translated content

* improve translated content

* Modify the translation content
2023-03-10 13:55:12 +01:00
Patrick von Platen
d761b58bfc [From pretrained] Speed-up loading from cache (#2515)
* [From pretrained] Speed-up loading from cache

* up

* Fix more

* fix one more bug

* make style

* bigger refactor

* factor out function

* Improve more

* better

* deprecate return cache folder

* clean up

* improve tests

* up

* upload

* add nice tests

* simplify

* finish

* correct

* fix version

* rename

* Apply suggestions from code review

Co-authored-by: Lucain <lucainp@gmail.com>

* rename

* correct doc string

* correct more

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* apply code suggestions

* finish

---------

Co-authored-by: Lucain <lucainp@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-03-10 11:56:10 +01:00
Will Berman
7fe638c502 update paint by example docs (#2598) 2023-03-09 15:57:07 -08:00
Peter Lin
c812d97d5b Improve ddim scheduler and fix bug when prediction type is "sample" (#2094)
Improve ddim scheduler

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-03-09 20:32:30 +01:00
Patrick von Platen
c5f6c538fd [Tests] Split scheduler tests (#2630)
* up

* correct some

* up

* finish
2023-03-09 19:11:47 +01:00
Patrick von Platen
6a7a5467ca Up vesion at which we deprecate "revision='fp16'" since transformers is not released yet (#2623)
* improve error message

* upload
2023-03-09 16:13:55 +01:00
Patrick von Platen
0650d641a3 Revert "[docs] Build notebooks from Markdown" (#2625)
Revert "[docs] Build notebooks from Markdown (#2570)"

This reverts commit 78507bda24.
2023-03-09 15:45:24 +01:00
Patrick von Platen
5d550cfd9e Make sure that DEIS, DPM and UniPC can correctly be switched in & out (#2595)
* [Schedulers] Correct config changing

* uP

* add tests
2023-03-09 14:17:19 +01:00
Patrick von Platen
24d624a486 Add cache_dir to docs (#2624)
Improve docs
2023-03-09 14:00:36 +01:00
Steven Liu
251a34add8 Migrate blog content to docs (#2477)
* first draft

* minor edits

* 💄 make style

* oops add to toc

* 🖍 reframe around understanding components

* 🖍 apply feedback

* 🖍 apply feedback
2023-03-09 13:20:49 +01:00
M. Tolga Cangöz
ded3174238 Fix typos (#2608) 2023-03-09 13:19:18 +01:00
Patrick von Platen
ef504c7880 make style 2023-03-09 13:01:00 +01:00
YiYi Xu
a062e47ec3 add flax pipelines to api doc + doc string examples (#2600)
* add api doc for flax pipeline + doc string examples

* make style

---------

Co-authored-by: yiyixuxu <yixu@yis-macbook-pro.lan>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-03-09 13:00:29 +01:00
Antoine Bouthors
75f1210a0c Fixed incorrect width/height assignment in StableDiffusionDepth2ImgPi… (#2558)
Fixed incorrect width/height assignment in StableDiffusionDepth2ImgPipeline when passing in tensor
2023-03-09 10:55:36 +01:00
Víctor Martínez
186689affd fix: un-existing tmp config file in linux, avoid unnecessary disk IO (#2591) 2023-03-08 20:20:09 +01:00
Patrick von Platen
cbbad0af69 correct example 2023-03-08 20:14:19 +01:00
Haofan Wang
00132de359 Support LoRA for text encoder (#2588)
* add lora

* Update examples/research_projects/lora/README.md

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-03-08 20:14:01 +01:00
Ella Charlaix
a5d2ee9d47 Add OpenVINO documentation (#2569)
* Add OpenVINO documentation

* Update docs/source/en/optimization/open_vino.mdx

Co-authored-by: YiYi Xu <yixu310@gmail.com>

---------

Co-authored-by: YiYi Xu <yixu310@gmail.com>
2023-03-08 20:07:44 +01:00
Steven Liu
68545a15d9 [docs] Update unconditional image generation docs (#2592)
* 📝 update and minor refactor

* minor edits
2023-03-08 09:47:49 -08:00
Patrick von Platen
445a176bde [Docs] Fix link to colab (#2604) 2023-03-08 12:59:58 +01:00
Steven Liu
78507bda24 [docs] Build notebooks from Markdown (#2570)
* 📝 add mechanism for building colab notebook

* 🖍 add notebooks to correct folder

* 🖍 fix folder name
2023-03-08 12:13:19 +01:00
YiYi Xu
d2a5247a1f Add notebook doc img2img (#2472)
* convert img2img.mdx into notebook doc

* fix

* Update docs/source/en/using-diffusers/img2img.mdx

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

---------

Co-authored-by: yiyixuxu <yixu@yis-macbook-pro.lan>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-03-07 20:00:56 -10:00
Will Berman
309d8cf9ab add deps table check updated to ci (#2590) 2023-03-07 15:24:44 -08:00
Steven Liu
b285d94e10 [docs] Move Textual Inversion training examples to docs (#2576)
* 📝 add textual inversion examples to docs

* 🖍 apply feedback

* 🖍 add colab link
2023-03-07 14:21:18 -08:00
clarencechen
55660cfb6d Improve dynamic thresholding and extend to DDPM and DDIM Schedulers (#2528)
* Improve dynamic threshold

* Update code

* Add dynamic threshold to ddim and ddpm

* Encapsulate and leverage code copy mechanism

Update style

* Clean up DDPM/DDIM constructor arguments

* add test

* also add to unipc

---------

Co-authored-by: Peter Lin <peterlin9863@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-03-07 23:10:26 +01:00
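
The dynamic thresholding described here is exposed through scheduler config flags; a minimal sketch on DDIM, where the values shown are the library defaults as far as I recall and should be treated as assumptions:

```python
from diffusers import DDIMScheduler

# Imagen-style dynamic thresholding flags added to DDIM/DDPM (and UniPC);
# intended for pixel-space models rather than latent-space ones.
scheduler = DDIMScheduler(
    thresholding=True,
    dynamic_thresholding_ratio=0.995,  # percentile used to clip the predicted sample
    sample_max_value=1.0,              # lower bound on the clipping threshold
)
```
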
Michael Gartsbein
46bef6e31d community stablediffusion controlnet img2img pipeline (#2584)
Co-authored-by: mishka <gartsocial@gmail.com>
2023-03-07 13:31:56 -08:00
Patrick von Platen
22a31760c4 [Docs] Weight prompting using compel (#2574)
* add docs

* correct

* finish

* Apply suggestions from code review

Co-authored-by: Will Berman <wlbberman@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>

* update deps table

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

---------

Co-authored-by: Will Berman <wlbberman@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-03-07 20:26:33 +01:00
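
The prompt-weighting doc added here is built around the external `compel` library; a minimal sketch of the pattern, where the `++` up-weighting syntax and the `Compel` call are compel's API and the checkpoint id is only an example:

```python
from compel import Compel
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
compel_proc = Compel(tokenizer=pipe.tokenizer, text_encoder=pipe.text_encoder)

# "++" up-weights a token; the resulting embeddings are fed in via prompt_embeds.
prompt_embeds = compel_proc("a red cat++ playing with a ball")
image = pipe(prompt_embeds=prompt_embeds, num_inference_steps=30).images[0]
```
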
zxypro
f0b661b8fb [Docs]Fix invalid link to Pokemons dataset (#2583) 2023-03-07 14:26:09 +01:00
Isamu Isozaki
8552fd7efa Added multitoken training for textual inversion. Issue 369 (#661)
* Added multitoken training for textual inversion

* Updated assertion

* Removed duplicate save code

* Fixed undefined bug

* Fixed save

* Added multitoken clip model +util helper

* Removed code splitting

* Removed class

* Fixed errors

* Fixed errors

* Added loading functionality

* Loading via dict instead

* Fixed bug of invalid index being loaded

* Fixed adding placeholder token only adding 1 token

* Fixed bug when initializing tokens

* Fixed bug when initializing tokens

* Removed flawed logic

* Fixed vector shuffle

* Fixed tokenizer's inconsistent __call__ method

* Fixed tokenizer's inconsistent __call__ method

* Handling list input

* Added exception for adding invalid tokens to token map

* Removed unnecessary files and started working on progressive tokens

* Set at minimum load one token

* Changed to global step

* Added method to load automatic1111 tokens

* Fixed bug in load

* Quality+style fixes

* Update quality/style fixes

* Cast embeddings to fp16 when loading

* Fixed quality

* Started moving things over

* Clearing diffs

* Clearing diffs

* Moved everything

* Requested changes
2023-03-07 12:09:36 +01:00
Hu Ye
e09a7d01c8 fix the default value of doc (#2539) 2023-03-07 11:40:22 +01:00
Pedro Cuenca
d3ce6f4b1e Support revision in Flax text-to-image training (#2567)
Support revision in Flax text-to-image training.
2023-03-07 08:16:31 +01:00
Steven Liu
ff91f154ee Update quicktour (#2463)
* first draft of updated quicktour

* 🖍 apply feedbacks

* 🖍 apply feedback and minor edits

* 🖍 add link to safety checker
2023-03-06 13:45:36 -08:00
Steven Liu
62bea2df36 [docs] Move text-to-image LoRA training from blog to docs (#2527)
* include text2image lora training in docs

* 🖍 apply feedback

* 🖍 minor edits
2023-03-06 13:45:07 -08:00
Steven Liu
9136be14a7 [docs] Move DreamBooth training materials to docs (#2547)
* move dbooth github stuff to docs

* add notebooks

* 🖍 minor shuffle

* 🖍 fix markdown table

* 🖍 apply feedback

* make style

* 🖍 minor fix in code snippet
2023-03-06 13:44:30 -08:00
Steven Liu
7004ff55d5 [docs] Move relevant code for text2image to docs (#2537)
* move relevant code from text2image on GitHub to docs

* 🖍 add inference for text2image with flax

* 🖍 apply feedback
2023-03-06 13:43:45 -08:00
Will Berman
ca7ca11bcd community controlnet inpainting pipelines (#2561)
* community controlnet inpainting pipelines

* add community member attribution re: @pcuenca
2023-03-06 12:55:31 -08:00
YiYi Xu
c7da8fd233 add intermediate logging for dreambooth training script (#2557)
* add  intermediate logging
---------

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2023-03-06 08:13:12 -10:00
Patrick von Platen
b8bfef2ab9 make style 2023-03-06 19:11:45 +01:00
haixinxu
f3f626d556 Allow textual_inversion_flax script to use save_steps and revision flag (#2075)
* Update textual_inversion_flax.py

* Update textual_inversion_flax.py

* Typo

sorry.

* Format source
2023-03-06 19:11:27 +01:00
YiYi Xu
b7b4683bdc allow Attend-and-excite pipeline work with different image sizes (#2476)
add attn_res variable

Co-authored-by: yiyixuxu <yixu310@gmail,com>
2023-03-06 08:06:54 -10:00
Patrick von Platen
56958e1177 [Training] Fix tensorboard typo (#2566) 2023-03-06 15:13:38 +01:00
Patrick von Platen
ec021923d2 [Unet1d] correct docs (#2565) 2023-03-06 14:36:28 +01:00
Patrick von Platen
1598a57958 make style 2023-03-06 10:51:03 +00:00
Haofan Wang
63805f8af7 Support convert LoRA safetensors into diffusers format (#2403)
* add lora convertor

* Update convert_lora_safetensor_to_diffusers.py

* Update README.md

* Update convert_lora_safetensor_to_diffusers.py
2023-03-06 11:50:46 +01:00
Sean Sube
9920c333c6 add OnnxStableDiffusionUpscalePipeline pipeline (#2158)
* [Onnx] add Stable Diffusion Upscale pipeline

* add a test for the OnnxStableDiffusionUpscalePipeline

* check for VAE config before adjusting scaling factor

* update test assertions, lint fixes

* run fix-copies target

* switch test checkpoint to one hosted on huggingface

* partially restore attention mask

* reshape embeddings after running text encoder

* add longer nightly test for ONNX upscale pipeline

* use package import to fix tests

* fix scheduler compatibility and class labels dtype

* use more precise type

* remove LMS from fast tests

* lookup latent and timestamp types

* add docs for ONNX upscaling, rename lookup table

* replace deprecated pipeline names in ONNX docs
2023-03-06 11:48:01 +01:00
Patrick von Platen
f38e3626cd make style 2023-03-06 10:40:18 +00:00
ForserX
5f826a35fb Add custom vae (diffusers type) to onnx converter (#2325) 2023-03-06 11:39:55 +01:00
Will Berman
f7278638e4 ema step, don't empty cuda cache (#2563) 2023-03-06 10:54:56 +01:00
Vico Chu
b36cbd4fba Fix: controlnet docs format (#2559) 2023-03-06 09:25:21 +01:00
Naga Sai Abhinay
2e3541d7f4 [Community Pipeline] Unclip Image Interpolation (#2400)
* unclip img interpolation poc

* Added code sample and refactoring.
2023-03-05 16:55:30 -08:00
Sanchit Gandhi
2b4f849db9 [PipelineTesterMixin] Handle non-image outputs for attn slicing test (#2504)
* [PipelineTesterMixin] Handle non-image outputs for batch/single inference test

* style

---------

Co-authored-by: William Berman <WLBberman@gmail.com>
2023-03-05 15:36:47 -08:00
Dhruv Nair
e4c356d3f6 Fix for InstructPix2PixPipeline to allow for prompt embeds to be passed in without prompts. (#2456)
* fix check inputs to allow prompt embeds in instruct pix2pix

* linting

* add reference comment to check inputs

* remove comment

* style changes

---------

Co-authored-by: Will Berman <wlbberman@gmail.com>
2023-03-05 11:42:50 -08:00
Nicolas Patry
2ea1da89ab Fix regression introduced in #2448 (#2551)
* Fix regression introduced in #2448

* Style.
2023-03-04 16:11:57 +01:00
Steven Liu
fa6d52d594 Training tutorial (#2473)
* first draft

* minor edits

* minor fixes

* 🖍 apply feedbacks

* 🖍 apply feedback and minor edits
2023-03-03 15:41:03 -08:00
Will Berman
a72a057d62 move test num_images_per_prompt to pipeline mixin (#2488)
* attend and excite batch test causing timeouts

* move test num_images_per_prompt to pipeline mixin

* style

* prompt_key -> self.batch_params
2023-03-03 11:45:07 -08:00
Laveraaa
2f489571a7 Update pipeline_stable_diffusion_inpaint_legacy.py resize to integer multiple of 8 instead of 32 for init image and mask (#2350)
Update pipeline_stable_diffusion_inpaint_legacy.py

Change resize to integer multiple of 8 instead of 32
2023-03-03 19:08:22 +01:00
alvanli
e75eae3711 Bug Fix: Remove explicit message argument in deprecate (#2421)
Remove explicit message argument
2023-03-03 19:03:16 +01:00
Alex McKinney
5e5ce13e2f adds xformers support to train_unconditional.py (#2520) 2023-03-03 18:35:59 +01:00
Patrick von Platen
7f0f7e1e91 Correct section docs (#2540) 2023-03-03 18:34:34 +01:00
Patrick von Platen
3d2648d743 [Post release] Push post release (#2546) 2023-03-03 18:11:01 +01:00
Nicolas Patry
1f4deb697f Adding support for safetensors and LoRa. (#2448)
* Adding support for `safetensors` and LoRa.

* Adding metadata.
2023-03-03 18:00:19 +01:00
Patrick von Platen
f20c8f5a1a Release: v0.14.0 2023-03-03 16:45:08 +01:00
Patrick von Platen
5b6582cf73 [Model offload] Add nice warning (#2543)
* [Model offload] Add nice warning

* Treat sequential and model offload differently.

Sequential raises an error because the operation would fail with a
cryptic warning later.

* Forcibly move to cpu when offloading.

* make style

* one more fix

* make fix-copies

* up

---------

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-03-03 16:44:10 +01:00
Anton Lozhkov
4f0141a67d Fix ONNX checkpoint loading (#2544)
* Revert "Disable ONNX tests (#2509)"

This reverts commit a0549fea44.

* add external weights

* + pb

* style
2023-03-03 16:08:56 +01:00
Patrick von Platen
1021929313 Small fixes for controlnet (#2542)
* Small fixes for controlnet

* finish links
2023-03-03 14:20:43 +01:00
Fkunn1326
8f21a9f0e2 Fix convert SD to diffusers error (#1979) (#2529)
Fix convert sd 768 error (#1979)

Co-authored-by: tweeter0830 <tweeter0830@users.noreply.github.com>
2023-03-02 19:11:40 +01:00
Isamu Isozaki
d9b9533c7e Textual inv make save log both steps (#2178)
* Initial commit

* removed images

* Made logging the same as save

* Removed logging function

* Quality fixes

* Quality fixes

* Tested

* Added support back for validation_epochs

* Fixing styles

* Did changes

* Change to log_validation

* Add extra space after wandb import

* Add extra space after wandb

Co-authored-by: Will Berman <wlbberman@gmail.com>

* Fixed spacing

---------

Co-authored-by: Will Berman <wlbberman@gmail.com>
2023-03-02 19:04:18 +01:00
Ilmari Heikkinen
801484840a 8k Stable Diffusion with tiled VAE (#1441)
* Tiled VAE for high-res text2img and img2img

* vae tiling, fix formatting

* enable_vae_tiling API and tests

* tiled vae docs, disable tiling for images that would have only one tile

* tiled vae tests, use channels_last memory format

* tiled vae tests, use smaller test image

* tiled vae tests, remove tiling test from fast tests

* up

* up

* make style

* Apply suggestions from code review

* Apply suggestions from code review

* Apply suggestions from code review

* make style

* improve naming

* finish

* apply suggestions

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* up

---------

Co-authored-by: Ilmari Heikkinen <ilmari@fhtr.org>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-03-02 17:42:32 +01:00
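
Tiled VAE decoding is switched on with a single call; a minimal sketch for a large text2img generation, with checkpoint id and resolution as examples:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

pipe.enable_vae_tiling()  # decode latents tile by tile to bound VAE memory at high resolutions
image = pipe("a panoramic photo of a mountain range", height=1024, width=4096).images[0]
```
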
Takuma Mori
8dfff7c015 Add a ControlNet model & pipeline (#2407)
* add scaffold
- copied convert_controlnet_to_diffusers.py from
convert_original_stable_diffusion_to_diffusers.py

* Add support to load ControlNet (WIP)
- this makes Missing Key error on ControlNetModel

* Update to convert ControlNet without error msg
- init impl for StableDiffusionControlNetPipeline
- init impl for ControlNetModel

* cleanup of commented out

* split create_controlnet_diffusers_config()
from create_unet_diffusers_config()

- add config: hint_channels

* Add input_hint_block, input_zero_conv and
middle_block_out
- this makes missing key error on loading model

* add unet_2d_blocks_controlnet.py
- copied from unet_2d_blocks.py as impl CrossAttnDownBlock2D,DownBlock2D
- this makes missing key error on loading model

* Add loading for input_hint_block, zero_convs
and middle_block_out

- this makes no error message on model loading

* Copy from UNet2DConditionalModel except __init__

* Add ultra primitive test for ControlNetModel
inference

* Support ControlNetModel inference
- without exceptions

* copy forward() from UNet2DConditionModel

* Impl ControlledUNet2DConditionModel inference
- test_controlled_unet_inference passed

* Frozen weight & biases for training

* Minimized version of ControlNet/ControlledUnet
- test_modules_controllnet.py passed

* make style

* Add support model loading for minimized ver

* Remove all previous version files

* from_pretrained and inference test passed

* copied from pipeline_stable_diffusion.py
except `__init__()`

* Impl pipeline, pixel match test (almost) passed.

* make style

* make fix-copies

* Fix to add import ControlNet blocks
for `make fix-copies`

* Remove einops dependency

* Support  np.ndarray, PIL.Image for controlnet_hint

* set default config file as lllyasviel's

* Add support grayscale (hw) numpy array

* Add and update docstrings

* add control_net.mdx

* add control_net.mdx to toctree

* Update copyright year

* Fix to add PIL.Image RGB->BGR conversion
- thanks @Mystfit

* make fix-copies

* add basic fast test for controlnet

* add slow test for controlnet/unet

* Ignore down/up_block len check on ControlNet

* add a copy from test_stable_diffusion.py

* Accept controlnet_hint is None

* merge pipeline_stable_diffusion.py diff

* Update class name to SDControlNetPipeline

* make style

* Baseline fast test almost passed (w long desc)

* still needs investigation.

The following didn't pass, as described in the TODO comment:
- test_stable_diffusion_long_prompt
- test_stable_diffusion_no_safety_checker

The following didn't pass, same as for stable_diffusion_pipeline:
- test_attention_slicing_forward_pass
- test_inference_batch_single_identical
- test_xformers_attention_forwardGenerator_pass
These seem to come from calculation accuracy.

* Add note comment related vae_scale_factor

* add test_stable_diffusion_controlnet_ddim

* add assertion for vae_scale_factor != 8

* slow test of pipeline almost passed
Failed: test_stable_diffusion_pipeline_with_model_offloading
- ImportError: `enable_model_offload` requires `accelerate v0.17.0` or higher

but currently latest version == 0.16.0

* test_stable_diffusion_long_prompt passed

* test_stable_diffusion_no_safety_checker passed

- due to its model size, move to slow test

* remove PoC test files

* fix num_of_image, prompt length issue and add test

* add support List[PIL.Image] for controlnet_hint

* wip

* all slow test passed

* make style

* update for slow test

* RGB(PIL)->BGR(ctrlnet) conversion

* fixes

* remove manual num_images_per_prompt test

* add document

* add `image` argument docstring

* make style

* Add line to correct conversion

* add controlnet_conditioning_scale (aka control_scales
strength)

* rgb channel ordering by default

* image batching logic

* Add control image descriptions for each checkpoint

* Only save controlnet model in conversion script

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_controlnet.py

typo

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* add generated image example

* a depth mask -> a depth map

* rename control_net.mdx to controlnet.mdx

* fix toc title

* add ControlNet abstract and link

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_controlnet.py

Co-authored-by: dqueue <dbyqin@gmail.com>

* remove controlnet constructor arguments re: @patrickvonplaten

* [integration tests] test canny

* test_canny fixes

* [integration tests] test_depth

* [integration tests] test_hed

* [integration tests] test_mlsd

* add channel order config to controlnet

* [integration tests] test normal

* [integration tests] test_openpose test_scribble

* change height and width to default to conditioning image

* [integration tests] test seg

* style

* test_depth fix

* [integration tests] size fixes

* [integration tests] cpu offloading

* style

* generalize controlnet embedding

* fix conversion script

* Update docs/source/en/api/pipelines/stable_diffusion/controlnet.mdx

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update docs/source/en/api/pipelines/stable_diffusion/controlnet.mdx

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update docs/source/en/api/pipelines/stable_diffusion/controlnet.mdx

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update docs/source/en/api/pipelines/stable_diffusion/controlnet.mdx

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Style adapted to the documentation of pix2pix

* merge main by hand

* style

* [docs] controlling generation doc nits

* correct some things

* add: controlnetmodel to autodoc.

* finish docs

* finish

* finish 2

* correct images

* finish controlnet

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* uP

* upload model

* up

* up

---------

Co-authored-by: William Berman <WLBberman@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: dqueue <dbyqin@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-03-02 15:34:07 +01:00
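
The end state of this PR is the `ControlNetModel` / `StableDiffusionControlNetPipeline` pair; a minimal canny-conditioned sketch, where the checkpoint ids are illustrative and the conditioning image is a blank placeholder:

```python
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

canny_image = Image.new("RGB", (512, 512))  # placeholder; real runs pass a canny edge map
image = pipe(
    "a bird on a branch, best quality", image=canny_image, num_inference_steps=20
).images[0]
```
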
Will Berman
1a6fa69ab6 PipelineTesterMixin parameter configuration refactor (#2502)
* attend and excite batch test causing timeouts

* PipelineTesterMixin argument configuration refactor

* error message text re: @yiyixuxu

* remove eta re: @patrickvonplaten
2023-03-01 12:44:55 -08:00
Patrick von Platen
664b4de9e2 [Tests] Fix slow tests (#2526)
* [Tests] Fix slow tests

* [Tests] Fix slow tsets
2023-03-01 16:21:33 +01:00
Pedro Cuenca
e4a9fb3b74 Bring Flax attention naming in sync with PyTorch (#2511)
Bring flax attention naming in sync with PyTorch.
2023-03-01 12:28:56 +01:00
Patrick von Platen
eadf0e2555 [Copyright] 2023 (#2524) 2023-03-01 10:31:00 +01:00
Will Berman
856dad57bb is_safetensors_compatible refactor (#2499)
* is_safetensors_compatible refactor

* files list comma
2023-02-28 20:37:42 -08:00
Pedro Cuenca
a75ac3fa8d Sequential cpu offload: require accelerate 0.14.0 (#2517)
* Sequential cpu offload: require accelerate 0.14.0.

* Import is_accelerate_version

* Missing copy.
2023-02-28 20:34:36 +01:00
Pedro Cuenca
477aaa96d0 Use "hub" directory for cache instead of "diffusers" (#2005)
* Use "hub" directory for cache instead of "diffusers"

* Import cache locations from huggingface_hub

I verified that the constants are available in huggingface_hub version
0.10.0, which is the minimum we require.

Co-authored-by: Lucain Pouget <lucainp@gmail.com>

* make style

* Move cached directories to new location.

* make style

* Apply suggestions by @Wauplin

Co-authored-by: Lucain <lucainp@gmail.com>

* Fix is_file

* Ignore symlinks.

Especially important if we want to ensure that the user may want to invoke the
process again later, if they are keeping multiple envs with different
versions.

* Style

---------

Co-authored-by: Lucain Pouget <lucainp@gmail.com>
2023-02-28 20:01:02 +01:00
Sayak Paul
e3a2c7f02c [Docs] Include more information in the "controlling generation" doc (#2434)
* edit controlling generation doc.

* add: demo link to pix2pix zero docs.

* refactor panorama a bit.

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* pix: typo.

---------

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-02-28 19:51:35 +05:30
Will Berman
1586186eea pix2pix tests no write to fs (#2497)
* attend and excite batch test causing timeouts

* pix2pix tests, no write to fs
2023-02-27 15:26:28 -08:00
Will Berman
42beaf1d23 move pipeline based test skips out of pipeline mixin (#2486) 2023-02-27 10:02:34 -08:00
Will Berman
824cb538b1 attend and excite batch test causing timeouts (#2498) 2023-02-27 10:01:59 -08:00
Patrick von Platen
a0549fea44 Disable ONNX tests (#2509) 2023-02-27 18:58:15 +01:00
Patrick von Platen
1c36a1239e [Docs] Improve safetensors (#2508)
* [Docs] Improve safetensors

* Apply suggestions from code review
2023-02-27 18:39:02 +01:00
Pedro Cuenca
48a2eb33f9 Add 4090 benchmark (PyTorch 2.0) (#2503)
* Add 4090 benchmark (PyTorch 2.0)

* Small changes in nomenclature.
2023-02-27 18:26:00 +01:00
Patrick von Platen
0e975e5ff6 [Safetensors] Make sure metadata is saved (#2506)
* [Safetensors] Make sure metadata is saved

* make style
2023-02-27 16:24:40 +01:00
Will Berman
7f43f65235 image_noiser -> image_normalizer comment (#2496) 2023-02-26 23:03:23 -08:00
Omer Bar Tal
6960e72225 add MultiDiffusion to controlling generation (#2490) 2023-02-25 14:28:17 +05:30
Pedro Cuenca
5de4347663 Fix test train_unconditional (#2481)
* Fix tensorboard tracking with `accelerate` @ `main`

* Fix `train_unconditional.py` with accelerate from main.
2023-02-24 14:31:16 -08:00
Pedro Cuenca
54bc882d96 mps test fixes (#2470)
* Skip variant tests (UNet1d, UNetRL) on mps.

mish op not yet supported.

* Exclude a couple of panorama tests on mps

They are too slow for fast CI.

* Exclude mps panorama from more tests.

* mps: exclude all fast panorama tests as they keep failing.
2023-02-24 15:19:53 +01:00
Haofan Wang
589faa8c88 Update train_text_to_image_lora.py (#2464)
* Update train_text_to_image_lora.py

* Update train_text_to_image_lora.py
2023-02-23 11:08:21 +05:30
Sayak Paul
39a3c77e0d fix: code snippet of instruct pix2pix from the docs. (#2446) 2023-02-21 18:07:22 +05:30
YiYi Xu
17ecf72d44 add demo (#2436)
Co-authored-by: yiyixuxu <yixu@yis-macbook-pro.lan>
2023-02-20 06:25:28 -10:00
Michael Gartsbein
f3fac68c55 small bugfix at StableDiffusionDepth2ImgPipeline call to check_inputs and batch size calculation (#2423)
small bugfix in call to check_inputs and batch size calculation

Co-authored-by: mishka <gartsocial@gmail.com>
2023-02-20 11:55:47 +01:00
Sayak Paul
8f1fe75b4c [Docs] Add a note on SDEdit (#2433)
add: note on SDEdit
2023-02-20 16:19:51 +05:30
Patrick von Platen
2ab4fcdb43 fix transformers naming (#2430) 2023-02-20 08:38:30 +01:00
Patrick von Platen
d7cfa0baa2 make style 2023-02-20 09:34:30 +02:00
Haofan Wang
4135558a78 Update pipeline_utils.py (#2415) 2023-02-20 08:34:13 +01:00
YiYi Xu
45572c2485 fix the get_indices function (#2418)
Co-authored-by: yiyixuxu <yixu310@gmail,com>
2023-02-19 18:34:13 -10:00
Sayak Paul
5f65ef4d0a remove author names. (#2428)
* remove author names.

* add: demo link to panorama.
2023-02-20 07:19:57 +05:30
Patrick von Platen
c85efbb9ff Fix deprecation warning (#2426)
Deprecation warning should only hit at version 0.15
2023-02-20 00:13:03 +02:00
Will Berman
1e5eaca754 stable unclip integration tests turn on memory saving (#2408)
* stable unclip integration tests turn on memory saving

* add note on turning on memory saving
2023-02-18 19:24:52 -08:00
Patrick von Platen
55de50921f make style 2023-02-18 00:00:01 +02:00
Patrick von Platen
3231712b7d Post release 0.14 2023-02-17 23:57:46 +02:00
Patrick von Platen
b2c1e0d6d4 Release: v0.13.0 2023-02-17 23:38:05 +02:00
Will Berman
bfdffbea32 add xformers 0.0.16 warning message (#2345)
* add xformers 0.0.16 warning message

* fix version check to check whole version string
2023-02-17 13:25:46 -08:00
YiYi Xu
770d3b3c29 add index page (#2401)
* add index page

* update

---------

Co-authored-by: yiyixuxu <yixu@yis-macbook-pro.lan>
2023-02-17 22:24:16 +01:00
Pedro Cuenca
780b3a4f8c Fix typo in AttnProcessor2_0 symbol (#2404)
Fix typo in AttnProcessor2_0 symbol.
2023-02-17 22:21:18 +02:00
Will Berman
07547dfacd controlling generation doc nits (#2406)
controlling generation docs fixes
2023-02-17 22:20:53 +02:00
Will Berman
5979089713 Revert "Release: v0.13.0" (#2405)
This reverts commit 024c4376fb.
2023-02-17 10:48:16 -08:00
Patrick von Platen
024c4376fb Release: v0.13.0 2023-02-17 18:46:00 +02:00
daquexian
0866e85e76 apply_forward_hook simply returns if no accelerate (#2387)
Signed-off-by: daquexian <daquexian566@gmail.com>
2023-02-17 17:27:23 +01:00
Will Berman
d2e2c611bc controlling generation docs (#2388)
* controlling generation docs

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* up

* up

* uP

* up

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-02-17 17:20:37 +01:00
Amiruddin Nagri
b6b73d97b4 Fixing typos in documentation (#2389)
Fixing typos in outgoing links
2023-02-17 16:42:59 +01:00
Omer Bar Tal
38de964343 add MultiDiffusionPanorama pipeline (#2393)
* add MultiDiffusionPanorama pipeline

* fix docs naming

* update pipeline name, remove redundant tests

* apply styling.

* debugging information.

* fix: assertion values.

* fix-copies.

* update docs

* update docs

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-02-17 16:39:50 +01:00
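
The pipeline landed as `StableDiffusionPanoramaPipeline`; a minimal sketch with the SD 2 base checkpoint and a DDIM scheduler, roughly following the pipeline docs (the width is an example):

```python
import torch
from diffusers import DDIMScheduler, StableDiffusionPanoramaPipeline

model_id = "stabilityai/stable-diffusion-2-base"
scheduler = DDIMScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPanoramaPipeline.from_pretrained(
    model_id, scheduler=scheduler, torch_dtype=torch.float16
).to("cuda")

# MultiDiffusion fuses overlapping denoising windows, so very wide canvases are possible.
image = pipe("a photo of the dolomites", height=512, width=2048).images[0]
```
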
Patrick von Platen
14b950705a Add ddim inversion pix2pix (#2397)
* add

* finish

* add tests

* add tests

* up

* up

* pull from main

* uP

* Apply suggestions from code review

* finish

* Update docs/source/en/_toctree.yml

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* finish

* clean docs

* next

* next

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* up

* up

---------

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-02-17 16:27:51 +01:00
Manuel Brack
01a80807de Add semantic guidance pipeline (#2223)
* Add semantic guidance pipeline

* Fix style

* Refactor Pipeline

* Pipeline documentation

* Add documentation

* Fix style and quality

* Fix doctree

* Add tests for SEGA

* Update src/diffusers/pipelines/semantic_stable_diffusion/pipeline_semantic_stable_diffusion.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/semantic_stable_diffusion/pipeline_semantic_stable_diffusion.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/semantic_stable_diffusion/pipeline_semantic_stable_diffusion.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Make compatible with half precision

* Change deprecation warning to throw an exception

* update

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-02-17 15:54:15 +01:00
patil-suraj
291ecdacd3 quikc doc fix 2023-02-17 15:45:54 +01:00
patil-suraj
350a510335 fix docs 2023-02-17 15:25:55 +01:00
Sayak Paul
867a217d14 add: inversion to pix2pix zero docs. (#2398)
* add: inversion to pix2pix zero docs.

* add: comment to emphasize the use of flan to generate.

* more nits.
2023-02-17 14:51:58 +01:00
Suraj Patil
0c0bb085e1 Torch2.0 scaled_dot_product_attention processor (#2303)
* add sdpa processor

* don't use it by default

* add some checks and style

* typo

* support torch sdpa in dreambooth example

* use torch attn proc by default when available

* typo

* add attn mask

* fix naming

* begin doc

* doc

* Apply suggestions from code review

* polish

* toctree

* Apply suggestions from code review

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* better name

* style

* add benchmark table

* Update docs/source/en/optimization/torch2.0.mdx

* up

* fix example

* check if processor is None

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* add fp32 benchmark

* Apply suggestions from code review

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-02-17 14:22:26 +01:00
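
A minimal sketch of opting into the PyTorch 2.0 attention processor by hand. Note that recent diffusers versions select it automatically when `torch>=2.0` is installed, and the import path shown is the post-rename one, so treat it as version dependent.

```python
import torch
from diffusers import StableDiffusionPipeline
from diffusers.models.attention_processor import AttnProcessor2_0  # older releases: diffusers.models.cross_attention

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.unet.set_attn_processor(AttnProcessor2_0())  # uses torch.nn.functional.scaled_dot_product_attention

image = pipe("a photo of an astronaut riding a horse").images[0]
```
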
Sayak Paul
c5fa13aa0d [Pipelines] Add a section on generating captions and embeddings for Pix2Pix Zero (#2395)
* add: section on generating embeddings.

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* apply changes from code review.

---------

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-02-17 13:21:21 +01:00
Pedro Cuenca
351b37ea73 Fix UniPC tests and remove some test warnings (#2396)
* Change solver_type to match the previous tests.

* Prevent warnings about scale_model_inputs

* Prevent console log about division by zero.
2023-02-17 13:20:20 +01:00
Patrick von Platen
2e0d489a4e [Pix2Pix] Add utility function (#2385)
* [Pix2Pix] Add utility function

* improve

* update

* Apply suggestions from code review

* uP

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_pix2pix_zero.py
2023-02-17 10:49:00 +01:00
Sayak Paul
abd5dcbbf1 [Pix2Pix Zero] Fix slow tests (#2391)
* fix: slow tests.

* retrieving the slices.

* fix: assertion.

* debugging.

* debugging.

* debugging

* debugging.

* debugging

* debugging.

* debugging.

* debugging

* debugging

* change debugging.

* change debugging.

* fix: tests for pix2pix zero.
2023-02-17 10:35:50 +01:00
Wenliang Zhao
d45bb937ab [Docs] Fix UniPC docs (#2386)
* fix typos in the doc

* restyle the code
2023-02-17 08:10:56 +02:00
Tianlei Wu
568b73fdf8 Fix stable diffusion onnx pipeline error when batch_size > 1 (#2366)
fix safety_checker for batch_size > 1
2023-02-16 23:57:33 +01:00
Patrick von Platen
8e1cae5d66 Revert "[Pix2Pix0] Add utility function to get edit vector" (#2384)
Revert "[Pix2Pix0] Add utility function to get edit vector (#2383)"

This reverts commit 857c04cfba.
2023-02-16 23:00:27 +01:00
Patrick von Platen
857c04cfba [Pix2Pix0] Add utility function to get edit vector (#2383)
uP
2023-02-16 22:59:53 +01:00
YiYi Xu
2e7a28652a Attend and excite 2 (#2369)
* attend and excite pipeline

* update

update docstring example

remove visualization

remove the base class attention control

remove dependency on stable diffusion pipeline

always apply gaussian filter with default setting

remove run_standard_sd argument

hardcode attention_res and scale_range (related to step size)

Update docs/source/en/api/pipelines/stable_diffusion/attend_and_excite.mdx

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

Update tests/pipelines/stable_diffusion_2/test_stable_diffusion_attend_and_excite.py

Co-authored-by: Will Berman <wlbberman@gmail.com>

Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py

Co-authored-by: Will Berman <wlbberman@gmail.com>

Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py

Co-authored-by: Will Berman <wlbberman@gmail.com>

revert test_float16_inference

revert change to the batch related tests

fix test_float16_inference

handle batch

remove the deprecation message

remove None check, step_size

remove debugging logging

add slow test

indices_to_alter -> indices

add check_input

* skip mps

* style

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* indices -> token_indices
---------

Co-authored-by: evin <evinpinarornek@gmail.com>
Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-02-16 11:15:54 -10:00
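
The pipeline is exposed as `StableDiffusionAttendAndExcitePipeline`, with the tokens to boost passed as `token_indices`; a minimal sketch, where the indices are illustrative and depend on the tokenizer:

```python
import torch
from diffusers import StableDiffusionAttendAndExcitePipeline

pipe = StableDiffusionAttendAndExcitePipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

prompt = "a cat and a frog"
# Indices of "cat" and "frog" in the tokenized prompt (illustrative values).
image = pipe(prompt, token_indices=[2, 5], guidance_scale=7.5, num_inference_steps=50).images[0]
```
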
Patrick von Platen
f243282e3e [Dummy imports] Add missing if else statements for SD] (#2381)
* [Dummy imports] Add missing if else statements for SD]

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

---------

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-02-16 21:53:07 +01:00
Patrick von Platen
ca980fd0d1 [Examples] Make sure EMA works with any device (#2382)
* Fix EMA

* up

* update
2023-02-16 21:27:47 +01:00
Pedro Cuenca
a60f5555f5 Make diffusers importable with transformers < 4.26 (#2380) 2023-02-16 20:17:33 +01:00
Patrick von Platen
90a624f697 improve tests 2023-02-16 20:42:00 +02:00
fxmarty
d32e9391f9 Replace torch.concat calls by torch.cat (#2378)
replace torch.concat by torch.cat
2023-02-16 19:36:33 +01:00
Wenliang Zhao
aaaec06487 add the UniPC scheduler (#2373)
* add UniPC scheduler

* add the return type to the functions

* code quality check

* add tests

* finish docs

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-02-16 19:19:06 +01:00
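
UniPC is a drop-in replacement like the other multistep schedulers; a minimal sketch of swapping it into an existing pipeline, with checkpoint id and step count as examples:

```python
import torch
from diffusers import StableDiffusionPipeline, UniPCMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

# UniPC targets good sample quality at low step counts.
image = pipe("a photo of a lighthouse at dusk", num_inference_steps=15).images[0]
```
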
Pedro Cuenca
2777264ee8 enable_model_cpu_offload (#2285)
* enable_model_offload PoC

It's surprisingly more involved than expected, see comments in the PR.

* Rename final_offload_hook

* Invoke the vae forward hook manually.

* Completely remove decoder.

* Style

* apply_forward_hook decorator

* Rename method.

* Style

* Copy enable_model_cpu_offload

* Fix copies.

* Remove comment.

* Fix copies

* Missing import

* Fix doc-builder style.

* Merge main and fix again.

* Add docs

* Fix docs.

* Add a couple of tests.

* style
2023-02-16 19:06:36 +01:00
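
A minimal sketch of the new model-level offloading (coarser than the existing sequential CPU offload); the checkpoint id is only an example and a sufficiently recent accelerate is assumed:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
# Keep whole sub-models (text encoder, UNet, VAE) on CPU and move each one to
# the GPU only while it runs; faster than enable_sequential_cpu_offload().
pipe.enable_model_cpu_offload()

image = pipe("a watercolor painting of a fox").images[0]
```
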
Sayak Paul
6eaebe8278 [Utils] Adds store() and restore() methods to EMAModel (#2302)
* add store and restore() methods to EMAModel.

* Update src/diffusers/training_utils.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* make style with doc builder

* remove explicit listing.

* Apply suggestions from code review

Co-authored-by: Will Berman <wlbberman@gmail.com>

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* chore: better variable naming.

* better treatment of temp_stored_params

Co-authored-by: patil-suraj <surajp815@gmail.com>

* make style

* remove temporary params from earth 🌎

* make fix-copies.

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Will Berman <wlbberman@gmail.com>
Co-authored-by: patil-suraj <surajp815@gmail.com>
2023-02-16 15:20:25 +01:00
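A sketch of the intended store/restore pattern during validation (constructor arguments follow the training scripts of the time and may differ between versions):

```python
from diffusers import UNet2DConditionModel
from diffusers.training_utils import EMAModel

unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")
ema_unet = EMAModel(unet.parameters(), decay=0.9999)

# ... in the training loop, after each optimizer step:
# ema_unet.step(unet.parameters())

# At validation time, temporarily swap in the EMA weights:
ema_unet.store(unet.parameters())    # stash the current training weights
ema_unet.copy_to(unet.parameters())  # load the EMA weights into the model
# ... run validation or save a checkpoint with EMA weights ...
ema_unet.restore(unet.parameters())  # put the original training weights back
```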
Will Berman
b214bb25f8 train_text_to_image EMAModel saving (#2341) 2023-02-16 14:40:28 +01:00
Suraj Patil
de9ce9e936 [SchedulingPNDM ] reset cur_model_output after each call (#2376)
reset cur_model_output
2023-02-16 14:38:42 +01:00
Susung Hong
fa35750d3b Add Self-Attention-Guided (SAG) Stable Diffusion pipeline (#2193)
* Add Stable Diffusion w/ Self-Attention Guidance

* Modify __init__.py

* Register attention storing processor

* Update pipeline_stable_diffusion_sag.py

* Editing default value

* Update pipeline_stable_diffusion_sag.py

* Update pipeline_stable_diffusion_sag.py

* Update pipeline_stable_diffusion_sag.py

* Update dummy_torch_and_transformers_objects.py

* Update pipeline_stable_diffusion_sag.py

* Update pipeline_stable_diffusion_sag.py

* Update pipeline_stable_diffusion_sag.py

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update pipeline_stable_diffusion_sag.py

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update pipeline_stable_diffusion_sag.py

* Update pipeline_stable_diffusion_sag.py

* Update pipeline_stable_diffusion_sag.py

* Update pipeline_stable_diffusion_sag.py

* Create test_stable_diffusion_sag.py

* Create self_attention_guidance.py

* Update pipeline_stable_diffusion_sag.py

* Update test_stable_diffusion_sag.py

* Update pipeline_stable_diffusion_sag.py

* Rename self_attention_guidance.py to self_attention_guidance.mdx

* Update self_attention_guidance.mdx

* Update self_attention_guidance.mdx

* Update _toctree.yml

* Update pipeline_stable_diffusion_sag.py

* Update pipeline_stable_diffusion_sag.py

* Update pipeline_stable_diffusion_sag.py

* Update pipeline_stable_diffusion_sag.py

* Fixing order

* Update pipeline_stable_diffusion_sag.py

* fixing import order

* fix order

* Update pipeline_stable_diffusion_sag.py

* Update pipeline_stable_diffusion_sag.py

* Naming change

* Noting pred_x0

* Adding some fast tests

* Update pipeline_stable_diffusion_sag.py

* Update test_stable_diffusion_sag.py

* Update test_stable_diffusion_sag.py

* Update test_stable_diffusion_sag.py

* Update docs/source/en/api/pipelines/stable_diffusion/self_attention_guidance.mdx

* implement gaussian_blur

* Update pipeline_stable_diffusion_sag.py

* Update pipeline_stable_diffusion_sag.py

* fix tests

* Update pipeline_stable_diffusion_sag.py

* Update pipeline_stable_diffusion_sag.py

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Will Berman <wlbberman@gmail.com>
2023-02-16 13:04:49 +01:00
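A minimal usage sketch of the SAG pipeline added above (base checkpoint illustrative); `sag_scale` controls the strength of the self-attention guidance:

```python
import torch
from diffusers import StableDiffusionSAGPipeline

pipe = StableDiffusionSAGPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# sag_scale=0.0 falls back to plain classifier-free guidance
image = pipe("a photo of an astronaut riding a horse on mars", sag_scale=0.75).images[0]
```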
Sayak Paul
fd3d5502d4 [Pipelines] Adds pix2pix zero (#2334)
* add: support for BLIP generation.

* add: support for editing synthetic images.

* remove unnecessary comments.

* add inits and run make fix-copies.

* version change of diffusers.

* fix: condition for loading the captioner.

* default conditions_input_image to False.

* guidance_amount -> cross_attention_guidance_amount

* fix inputs to check_inputs()

* fix: attribute.

* fix: prepare_attention_mask() call.

* debugging.

* better placement of references.

* remove torch.no_grad() decorations.

* put torch.no_grad() context before the first denoising loop.

* detach() latents before decoding them.

* put deocding in a torch.no_grad() context.

* add reconstructed image for debugging.

* no_grad()

* apply formatting.

* address one-off suggestions from the draft PR.

* back to torch.no_grad() and add more elaborate comments.

* refactor prepare_unet() per Patrick's suggestions.

* more elaborate description for .

* formatting.

* add docstrings to the methods specific to pix2pix zero.

* suspecting a redundant noise prediction.

* needed for gradient computation chain.

* less hacks.

* fix: attention mask handling within the processor.

* remove attention reference map computation.

* fix: cross attn args.

* fix: processor.

* store attention maps.

* fix: attention processor.

* update docs and better treatment to xa args.

* update the final noise computation call.

* change xa args call.

* remove xa args option from the pipeline.

* add: docs.

* first test.

* fix: url call.

* fix: argument call.

* remove image conditioning for now.

* 🚨 add: fast tests.

* explicit placement of the xa attn weights.

* add: slow tests 🐢

* fix: tests.

* edited direction embedding should be on the same device as prompt_embeds.

* debugging message.

* debugging.

* add pix2pix zero pipeline for a non-deterministic test.

* debugging/

* remove debugging message.

* make caption generation _

* address comments (part I).

* address PR comments (part II)

* fix: DDPM test assertion.

* refactor doc.

* address PR comments (part III).

* fix: type annotation for the scheduler.

* apply styling.

* skip_mps and add note on embeddings in the docs.
2023-02-16 11:20:38 +01:00
Patrick von Platen
e5810e686e [Variant] Add "variant" as input kwarg so to have better UX when downloading no_ema or fp16 weights (#2305)
* [Variant] Add variant loading mechanism

* clean

* improve further

* up

* add tests

* add some first tests

* up

* up

* use path splitext

* add deprecate

* deprecation warnings

* improve docs

* up

* up

* up

* fix tests

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* correct code format

* fix warning

* finish

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Update docs/source/en/using-diffusers/loading.mdx

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Apply suggestions from code review

Co-authored-by: Will Berman <wlbberman@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>

* correct loading docs

* finish

---------

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Will Berman <wlbberman@gmail.com>
2023-02-16 11:02:58 +01:00
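A sketch of the resulting loading UX (repo ids illustrative): `variant` selects weight files such as `diffusion_pytorch_model.fp16.safetensors` instead of the full-precision ones.

```python
import torch
from diffusers import DiffusionPipeline, UNet2DConditionModel

# Download only the fp16 weight files of the checkpoint
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", variant="fp16", torch_dtype=torch.float16
)

# The same mechanism works for individual models, e.g. non-EMA UNet weights for fine-tuning
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet", variant="non_ema"
)
```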
Damian Stewart
e3ddbe25ed Fix 3-way merging with the checkpoint_merger community pipeline (#2355)
correctly locate 3rd file; also correct misleading docs
2023-02-16 10:52:41 +01:00
Will Berman
46def7265f checkpointing_steps_total_limit->checkpoints_total_limit (#2374) 2023-02-16 00:28:58 -08:00
Will Berman
296b01e1a1 add total number checkpoints to training scripts (#2367)
* add total number checkpoints to training scripts

* Update examples/dreambooth/train_dreambooth.py

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-02-15 23:58:06 -08:00
Will Berman
a3ae46610f schedulers add glide noising schedule (#2347) 2023-02-15 23:51:33 -08:00
meg
c613288c9b Funky spacing issue (#2368)
There isn't a space between the "Scope" paragraph and "Ethical Guidelines", here: https://huggingface.co/docs/diffusers/main/en/conceptual/ethical_guidelines , yet I can't see that in the preview. In this PR, I'm simply adding some spaces in the hopes that it resolves the issue.....
2023-02-15 17:36:31 -08:00
Patrick von Platen
4c52982a0b [Tests] Add MPS skip decorator (#2362)
* finish

* Apply suggestions from code review

* fix indent and import error in test_stable_diffusion_depth

---------

Co-authored-by: William Berman <WLBberman@gmail.com>
2023-02-15 22:17:25 +01:00
Will Berman
2a49fac864 KarrasDiffusionSchedulers type note (#2365) 2023-02-15 12:37:56 -08:00
Kashif Rasul
51b61b69c5 [Docs] initial docs about KarrasDiffusionSchedulers (#2349)
* initial docs about KarrasDiffusionSchedulers

* typo

* grammar

* Update docs/source/en/api/schedulers/overview.mdx

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* do not list the schedulers explicitly

---------

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-02-15 10:19:57 -08:00
Patrick von Platen
666d80a1c8 fix some tests 2023-02-15 10:22:06 +00:00
Patrick von Platen
91925fbb76 Fix callback type hints - no optional function argument (#2357)
replace type hints
2023-02-14 14:35:05 -08:00
Ben Evans
0db19da01f Log Unconditional Image Generation Samples to W&B (#2287)
* Log Unconditional Image Generation Samples to WandB

* Check for wandb installation and parity between onnxruntime script

* Log epoch to wandb

* Check for tensorboard logger early on

* style fixes

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-02-14 23:11:12 +01:00
Will Berman
62b3c9e06a unCLIP variant (#2297)
* pipeline_variant

* Add docs for when clip_stats_path is specified

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_unclip.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_unclip.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_unclip_img2img.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_unclip_img2img.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* prepare_latents # Copied from re: @patrickvonplaten

* NoiseAugmentor->ImageNormalizer

* stable_unclip_prior default to None re: @patrickvonplaten

* prepare_prior_extra_step_kwargs

* prior denoising scale model input

* {DDIM,DDPM}Scheduler -> KarrasDiffusionSchedulers re: @patrickvonplaten

* docs

* Update docs/source/en/api/pipelines/stable_unclip.mdx

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-02-14 11:28:57 -08:00
Will Berman
e55687e1e1 unet check length inputs (#2327)
* unet check length input

* prep test file for changes

* correct all tests

* clean up

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-02-13 16:25:50 -08:00
Will Berman
9e8ee2ace1 dreambooth checkpointing tests and docs (#2339) 2023-02-13 14:16:32 -08:00
Will Berman
6782b70dd3 github issue forum link (#2335)
* github issue forum link

* Update .github/ISSUE_TEMPLATE/config.yml

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-02-13 11:21:14 -08:00
Will Berman
f190714e77 karlo image variation use kakaobrain upload (#2338) 2023-02-13 10:53:33 -08:00
Patrick von Platen
6cbd7b8b27 [Tests] Remove unnecessary tests (#2337) 2023-02-13 18:27:41 +01:00
Patrick von Platen
bc0cee9d1c [Latent Upscaling] Remove unused noise (#2298) 2023-02-13 18:06:26 +01:00
Patrick von Platen
1f5f17c5b4 [Versatile Diffusion] Fix tests (#2336) 2023-02-13 18:04:50 +01:00
Patrick von Platen
98c1a8e793 [Docs] Fix ethical guidelines docs (#2333) 2023-02-13 14:15:53 +01:00
Plat
0850b88fa1 Fix typo in load_pipeline_from_original_stable_diffusion_ckpt() method (#2320)
fix typo
2023-02-13 12:26:56 +01:00
bddppq
5d4f59ee96 Fix running LoRA with xformers (#2286)
* Fix running LoRA with xformers

* support disabling xformers

* reformat

* Add test
2023-02-13 11:58:18 +01:00
Giada Pistilli
f2eae16849 Add ethical guidelines (#2330)
* add ethical guidelines

* update file name

* edit file name

* update toctree

* Update docs/source/en/conceptual/ethical_guidelines.mdx

* Update docs/source/en/conceptual/ethical_guidelines.mdx

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-02-13 10:43:40 +01:00
Patrick von Platen
120844aadf [Tests] Refactor push tests (#2329)
* [Tests] Refactor push tests

* correct
2023-02-13 10:06:11 +01:00
Naga Sai Abhinay
a688c7bdfb [Community Pipeline] UnCLIP Text Interpolation Pipeline (#2257)
* UnCLIP Text Interpolation Pipeline

* Formatter fixes

* Changes based on feedback

* Formatting fix

* Formatting fix

* isort formatting fix(?)

* Remove duplicate code

* Formatting fix

* Refactor __call__ and change example in readme.

* Update examples/community/unclip_text_interpolation.py

Refactor to linter formatting

Co-authored-by: Will Berman <wlbberman@gmail.com>

---------

Co-authored-by: Will Berman <wlbberman@gmail.com>
2023-02-12 22:16:18 -08:00
Will Berman
1e7f965442 convert ckpt script docstring fixes (#2293)
* convert ckpt script docstring fixes

* Update src/diffusers/pipelines/stable_diffusion/convert_from_ckpt.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update src/diffusers/pipelines/stable_diffusion/convert_from_ckpt.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

---------

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-02-10 13:57:49 -08:00
Will Berman
beb59abfa0 remove ddpm test_full_inference (#2291)
* remove ddpm test_full_inference

* style
2023-02-10 13:51:07 -08:00
Patrick von Platen
96c2279bcd Correct fast tests (#2314)
* correct some

* Apply suggestions from code review

* correct

* Update tests/pipelines/altdiffusion/test_alt_diffusion_img2img.py

* Final
2023-02-10 14:12:34 +01:00
Patrick von Platen
716286f19d Fast CPU tests should also run on main (#2313)
add fast tests
2023-02-10 12:46:01 +01:00
Patrick von Platen
e83b43612b make style 2023-02-10 13:07:46 +02:00
erkams
1be7df0205 [LoRA] Freezing the model weights (#2245)
* [LoRA] Freezing the model weights

Freeze the model weights since we don't need to calculate grads for them.

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Apply suggestions from code review

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2023-02-09 11:45:11 +01:00
Patrick von Platen
62a15cec6e make style 2023-02-09 12:27:44 +02:00
Ben Evans
f3c848383a Run same number of DDPM steps in inference as training (#2263)
Resolves ValueError: `num_inference_steps`: 1000 cannot be larger than `self.config.train_timesteps`: 50 as the unet model trained with this scheduler can only handle maximal 50 timesteps.
2023-02-09 10:36:38 +01:00
Will Berman
fd5c3c09af misc fixes (#2282)
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-02-08 09:02:42 -08:00
Patrick von Platen
648090e26e fix pix2pix docs (#2290) 2023-02-08 16:38:18 +01:00
Patrick von Platen
1ed6b77781 [Examples] Test all examples on CPU (#2289)
* [Examples] Test all examples on CPU

* add

* correct

* Apply suggestions from code review
2023-02-08 15:59:13 +01:00
Chenguo Lin
9d0d070996 EMA: fix state_dict() and load_state_dict() & add cur_decay_value (#2146)
* EMA: fix `state_dict()` & add `cur_decay_value`

* EMA: fix a bug in `load_state_dict()`

'float' object (`state_dict["power"]`) has no attribute 'get'.

* del train_unconditional_ort.py
2023-02-08 10:44:50 +01:00
Isamu Isozaki
c1971a53bc Textual inv save log memory (#2184)
* Quality check and adding tokenizer

* Adapted stable diffusion to mixed precision+finished up style fixes

* Fixed based on patrick's review

* Fixed oom from number of validation images

* Removed unnecessary np.array conversion

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-02-08 10:37:10 +01:00
Patrick von Platen
41db2dbf90 correct tests 2023-02-08 11:12:51 +02:00
Patrick von Platen
a7ca03aa85 Replace flake8 with ruff and update black (#2279)
* before running make style

* remove left overs from flake8

* finish

* make fix-copies

* final fix

* more fixes
2023-02-07 23:46:23 +01:00
Patrick von Platen
f5ccffecf7 Use accelerate save & loading hooks to have better checkpoint structure (#2048)
* better accelerated saving

* up

* finish

* finish

* uP

* up

* up

* fix

* Apply suggestions from code review

* correct ema

* Remove @

* up

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update docs/source/en/training/dreambooth.mdx

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

---------

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-02-07 20:03:59 +01:00
Pedro Cuenca
e619db24be mps cross-attention hack: don't crash on fp16 (#2258)
* mps cross-attention hack: don't crash on fp16

* Make conversion explicit.
2023-02-07 19:51:33 +01:00
wfng92
111228cb39 Fix torchvision.transforms and transforms function naming clash (#2274)
* Fix torchvision.transforms and transforms function naming clash

* Update unconditional script for onnx

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-02-07 17:36:32 +01:00
Patrick von Platen
bbb46ad3d5 [Tests] Fix slow tests (#2271) 2023-02-07 14:42:12 +01:00
wfng92
b1dad2e9d3 Make center crop and random flip as args for unconditional image generation (#2259)
* Add center crop and horizontal flip to args

* Update command to use center crop and random flip

* Add center crop and horizontal flip to args

* Update command to use center crop and random flip
2023-02-07 11:58:31 +01:00
Patrick von Platen
cd52475560 [Examples] Remove datasets important that is not needed (#2267)
* [Examples] Remove datasets important that is not needed

* remove from lora tambien
2023-02-07 11:55:34 +01:00
Patrick von Platen
0f04e799dc fix vae pt script 2023-02-07 08:34:19 +00:00
YiYi Xu
1051ca81a6 Stable Diffusion Latent Upscaler (#2059)
* Modify UNet2DConditionModel

- allow skipping mid_block

- adding a norm_group_size argument so that we can set the `num_groups` for group norm using `num_channels//norm_group_size`

- allow user to set dimension for the timestep embedding (`time_embed_dim`)

- the kernel_size for `conv_in` and `conv_out` is now configurable

- add random fourier feature layer (`GaussianFourierProjection`) for `time_proj`

- allow user to add the time and class embeddings before passing through the projection layer together - `time_embedding(t_emb + class_label))`

- added 2 arguments `attn1_types` and `attn2_types`

  * currently we have the argument `only_cross_attention`: when it's set to `True`, the
`BasicTransformerBlock` ends up with 2 cross-attention layers, otherwise we
get a self-attention followed by a cross-attention; in k-upscaler, we need to have blocks that include just one cross-attention, or self-attention -> cross-attention;
so I added `attn1_types` and `attn2_types` to the unet's argument list to allow the user to specify the attention types for the 2 positions in each block; note that I still kept
the `only_cross_attention` argument for the unet for easy configuration, but it will be converted to `attn1_type` and `attn2_type` when passing down to the down blocks

- the position of downsample layer and upsample layer is now configurable

- in k-upscaler unet, there is only one skip connection per each up/down block (instead of each layer in stable diffusion unet), added `skip_freq = "block"` to support
this use case

- if user passes attention_mask to unet, it will prepare the mask and pass a flag to cross attention processer to skip the `prepare_attention_mask` step
inside cross attention block

add up/down blocks for k-upscaler

modify CrossAttention class

- make the `dropout` layer in `to_out` optional

- `use_conv_proj` - use conv instead of linear for all projection layers (i.e. `to_q`, `to_k`, `to_v`, `to_out`) whenever possible. note that when it's used to do cross
attention, to_k, to_v has to be linear because the `encoder_hidden_states` is not 2d

- `cross_attention_norm` - add an optional layernorm on encoder_hidden_states

- `attention_dropout`: add an optional dropout on attention score

adapt BasicTransformerBlock

- add an ada groupnorm layer  to conditioning attention input with timestep embedding

- allow skipping the FeedForward layer in between the attentions

- replaced the only_cross_attention argument with attn1_type and attn2_type for more flexible configuration

update timestep embedding: add new act_fn  gelu and an optional act_2

modified ResnetBlock2D

- refactored with AdaGroupNorm class (the timestep scale shift normalization)

- add `mid_channel` argument - allow the first conv to have a different output dimension from the second conv

- add option to use input AdaGroupNorm on the input instead of groupnorm

- add options to add a dropout layer after each conv

- allow user to set the bias in conv_shortcut (needed for k-upscaler)

- add gelu

adding conversion script for k-upscaler unet

add pipeline

* fix attention mask

* fix a typo

* fix a bug

* make sure model can be used with GPU

* make pipeline work with fp16

* fix an error in BasicTransformerBlock

* make style

* fix typo

* some more fixes

* uP

* up

* correct more

* some clean-up

* clean time proj

* up

* uP

* more changes

* remove the upcast_attention=True from unet config

* remove attn1_types, attn2_types etc

* fix

* revert incorrect changes up/down samplers

* make style

* remove outdated files

* Apply suggestions from code review

* attention refactor

* refactor cross attention

* Apply suggestions from code review

* update

* up

* update

* Apply suggestions from code review

* finish

* Update src/diffusers/models/cross_attention.py

* more fixes

* up

* up

* up

* finish

* more corrections of conversion state

* act_2 -> act_2_fn

* remove dropout_after_conv from ResnetBlock2D

* make style

* simplify KAttentionBlock

* add fast test for latent upscaler pipeline

* add slow test

* slow test fp16

* make style

* add doc string for pipeline_stable_diffusion_latent_upscale

* add api doc page for latent upscaler pipeline

* deprecate attention mask

* clean up embeddings

* simplify resnet

* up

* clean up resnet

* up

* correct more

* up

* up

* improve a bit more

* correct more

* more clean-ups

* Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* add docstrings for new unet config

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* # Copied from

* encode the image if not latent

* remove force casting vae to fp32

* fix

* add comments about preconditioning parameters from k-diffusion paper

* attn1_type, attn2_type -> add_self_attention

* clean up get_down_block and get_up_block

* fix

* fixed a typo(?) in ada group norm

* update slice attention processer for cross attention

* update slice

* fix fast test

* update the checkpoint

* finish tests

* fix-copies

* fix-copy for modeling_text_unet.py

* make style

* make style

* fix f-string

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* fix import

* correct changes

* fix resnet

* make fix-copies

* correct euler scheduler

* add missing #copied from for preprocess

* revert

* fix

* fix copies

* Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update src/diffusers/models/cross_attention.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* clean up conversion script

* KDownsample2d,KUpsample2d -> KDownsample2D,KUpsample2D

* more

* Update src/diffusers/models/unet_2d_condition.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* remove prepare_extra_step_kwargs

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* fix a typo in timestep embedding

* remove num_image_per_prompt

* fix fasttest

* make style + fix-copies

* fix

* fix xformer test

* fix style

* doc string

* make style

* fix-copies

* docstring for time_embedding_norm

* make style

* final finishes

* make fix-copies

* fix tests

---------

Co-authored-by: yiyixuxu <yixu@yis-macbook-pro.lan>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-02-07 09:11:57 +01:00
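Roughly how the new latent upscaler is chained after a base pipeline (model ids illustrative); keeping the base output in latent space lets the upscaler condition on it directly:

```python
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionLatentUpscalePipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(
    "stabilityai/sd-x2-latent-upscaler", torch_dtype=torch.float16
).to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"

# Return latents instead of decoded images so the upscaler works in latent space
low_res_latents = pipe(prompt, output_type="latent").images

image = upscaler(
    prompt=prompt,
    image=low_res_latents,
    num_inference_steps=20,
    guidance_scale=0,
).images[0]
```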
Patrick von Platen
3b66cc0fc1 make style 2023-02-07 08:11:22 +00:00
chavinlo
717a956a02 Create convert_vae_pt_to_diffusers.py (#2215)
* Create convert_vae_pt_to_diffusers.py

Just a simple script to convert VAE.pt files to diffusers format
Tested with: https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/VAEs/orangemix.vae.pt

* Update convert_vae_pt_to_diffusers.py

Forgot to add the function call

* make style

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: chavinlo <example@example.com>
2023-02-07 09:10:34 +01:00
Jorge C. Gomes
d43972ae71 Fixes prompt input checks in StableDiffusion img2img pipeline (#2206)
* Fixes prompt input checks in img2img

Allows providing prompt_embeds instead of the prompt, which is not currently possible as the first check fails.
This becomes the same as the function found in 8267c78445/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py (L393)

* Continues the fix

This also needs to be fixed. Becomes consistent with 8267c78445/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py (L558)

I've now tested this implementation, and it produces the expected results.
2023-02-07 09:10:24 +01:00
Fazzie-Maqianli
ffed2420c4 fix distributed init twice (#2252)
fix colossalai dreambooth
2023-02-07 08:55:39 +01:00
Pedro Cuenca
8178c840f2 Mention training problems with xFormers 0.0.16 (#2254) 2023-02-06 11:19:26 +01:00
nickkolok
3a0d3da66f Fix a typo: bfloa16 -> bfloat16 (#2243) 2023-02-06 09:14:08 +01:00
psychedelicious
22c1ba56c2 Fix k_dpm_2 & k_dpm_2_a on MPS (#2241)
Needed to convert `timesteps` to `float32` a bit sooner.

Fixes #1537
2023-02-05 23:45:15 +01:00
Pedro Cuenca
7386e7730c Show error when loading safety_checker from_flax (#2187)
* Show error when loading safety_checker `from_flax`

* fix style
2023-02-04 20:55:11 +01:00
Pedro Cuenca
154a7865fc [Flax DDPM] Make key optional so default pipelines don't fail (#2176)
Make `key` optional so default pipelines don't fail.
2023-02-04 20:45:20 +01:00
Robin Hutmacher
9baa29e9c0 Fix typo in StableDiffusionInpaintPipeline (#2197)
* Fix typo in StableDiffusionInpaintPipeline

* Add embedded prompt handling

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-02-03 19:03:15 +01:00
Jorge C. Gomes
58c416ab0c Fixes LoRAXFormersCrossAttnProcessor (#2207)
Related to #2124 
The current implementation throws a shape mismatch error, which makes sense, as this line is obviously missing compared to XFormersCrossAttnProcessor and LoRACrossAttnProcessor.

I don't have formal tests, but I compared `LoRACrossAttnProcessor` and `LoRAXFormersCrossAttnProcessor` ad-hoc, and they produce the same results with this fix.
2023-02-03 18:10:48 +01:00
Isamu Isozaki
d46d78c584 Hotfix textual inv logging (#2183) 2023-02-03 18:08:46 +01:00
Patrick von Platen
05168e5d83 make style 2023-02-03 19:03:13 +02:00
Justin Merrell
948022e1e8 fix: flagged_images implementation (#1947)
Flagged images would be set to the blank image instead of the original image that contained the NSFW concept for optional viewing.

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-02-03 18:02:56 +01:00
Patrick von Platen
2f9a70aa85 [LoRA] Make sure validation works in multi GPU setup (#2172)
* [LoRA] Make sure validation works in multi GPU setup

* more fixes

* up
2023-02-03 16:50:10 +01:00
Sayak Paul
e43e206dc7 removes ~s in favor of full-fledged links. (#2229)
remove ~ in favor of full-fledged links.
2023-02-03 20:18:39 +05:30
Will Berman
99c39b4012 [nit] negative_prompt typo (#2227)
* negative_prompt typo

* fix
2023-02-03 14:05:50 +01:00
dymil
7547f9b475 Fix timestep dtype in legacy inpaint (#2120)
* Fix timestep dtype in legacy inpaint

This matches the structure in the text2img, img2img, and inpaint ONNX pipelines

* Fix style in dtype patch
2023-02-03 13:04:21 +01:00
Prathik Rao
a87e87fcbe refactor onnxruntime integration (#2042)
* refactor onnxruntime integration

* fix requirements.txt bug

* make style

* add support for textual_inversion

* make style

* add readme

* cleanup README files

* 1/27/2023 update to training scripts

* make style

* 1/30 update to train_unconditional

* style with black-22.8.0

---------

Co-authored-by: Prathik Rao <prathikrao@microsoft.com>
Co-authored-by: anton- <anton@huggingface.co>
2023-02-03 12:04:59 +01:00
Dudu Moshe
ecadcdefe1 [Bug] scheduling_ddpm: fix variance in the case of learned_range type. (#2090)
scheduling_ddpm: fix variance in the case of learned_range type.

In the case of learned_range variance type, there are missing logs
and exponent comparing to the theory (see "Improved Denoising Diffusion
Probabilistic Models" section 3.1 equation 15:
https://arxiv.org/pdf/2102.09672.pdf).
2023-02-03 09:42:42 +01:00
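For reference, equation 15 of that paper interpolates the log-variance between the two forward-process bounds, which is what the fix restores (notation as in the paper; v is the fraction predicted by the model):

```latex
\Sigma_\theta(x_t, t) = \exp\!\big( v \log \beta_t + (1 - v) \log \tilde{\beta}_t \big),
\qquad
\tilde{\beta}_t = \frac{1 - \bar{\alpha}_{t-1}}{1 - \bar{\alpha}_t}\, \beta_t
```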
Pedro Cuenca
2bbd532990 Docs: short section on changing the scheduler in Flax (#2181)
* Short doc on changing the scheduler in Flax.

* Apply fix from @patil-suraj

Co-authored-by: Suraj Patil <surajp815@gmail.com>

---------

Co-authored-by: Suraj Patil <surajp815@gmail.com>
2023-02-02 18:52:21 +01:00
Adalberto
68ef0666e2 Create train_dreambooth_inpaint_lora.py (#2205)
* Create train_dreambooth_inpaint_lora.py

* Update train_dreambooth_inpaint_lora.py

* Update train_dreambooth_inpaint_lora.py

* Update train_dreambooth_inpaint_lora.py

* Update train_dreambooth_inpaint_lora.py
2023-02-02 13:15:15 +01:00
Kashif Rasul
7ac95703cd add CITATION.cff (#2211)
add citation.cff
2023-02-02 12:46:44 +01:00
Pedro Cuenca
3816c9ad9f Update xFormers docs (#2208)
Update xFormers docs.
2023-02-01 19:56:32 +01:00
Patrick von Platen
8267c78445 [Loading] Better error message on missing keys (#2198)
* up

* finish
2023-02-01 14:22:39 +01:00
Muyang Li
4fc7084875 Fix a dimension bug in Transform2d (#2144)
The dimension does not match when `inner_dim` is not equal to `in_channels`.
2023-02-01 10:11:45 +01:00
Sayak Paul
9213d81bd0 add: guide on kerascv conversion tool. (#2169)
* add: guide on kerascv conversion tool.

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>

* address additional suggestions from review.

* change links to documentation-images.

* add separate links for training and inference goodies from diffusers.

* address Patrick's comments.

---------

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2023-02-01 09:41:00 +01:00
Asad Memon
dd3cae3327 Pass LoRA rank to LoRALinearLayer (#2191) 2023-02-01 09:40:02 +01:00
Patrick von Platen
f73d0b6bec [Docs] remove license (#2188) 2023-01-31 22:11:32 +01:00
Patrick von Platen
d0d7ffffbd [Docs] Add components to docs (#2175) 2023-01-31 22:11:14 +01:00
Abhishek Varma
87cf88ed3d Use requests instead of wget in convert_from_ckpt.py (#2168)
-- This commit adopts `requests` in place of `wget` to fetch config `.yaml`
   files as part of `load_pipeline_from_original_stable_diffusion_ckpt` API.
-- This was done because in Windows PowerShell one needs to explicitly ensure
   that `wget` binary is part of the PATH variable. If not present, this leads
   to the code not being able to download the `.yaml` config file.

Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
Co-authored-by: Abhishek Varma <abhishek@nod-labs.com>
2023-01-31 14:35:45 +01:00
Patrick von Platen
60d915fbed make style 2023-01-31 11:46:48 +00:00
1lint
d1efefe15e [Breaking change] fix legacy inpaint noise and resize mask tensor (#2147)
* fix legacy inpaint noise and resize mask tensor

* updated legacy inpaint pipe test expected_slice
2023-01-31 12:44:35 +01:00
Sayak Paul
7d96b38b70 [examples] Fix CLI argument in the launch script command for text2image with LoRA (#2171)
* Update README.md

* Update README.md
2023-01-31 09:47:09 +01:00
Dudu Moshe
cedafb8600 [Bug]: fix DDPM scheduler arbitrary infer steps count. (#2076)
scheduling_ddpm: fix evaluate with lower timesteps count than train.

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-31 09:13:26 +01:00
Patrick von Platen
69caa96472 fix slow test 2023-01-31 07:39:30 +00:00
hysts
da113364df Add instance prompt to model card of lora dreambooth example (#2112) 2023-01-31 08:14:25 +01:00
Pedro Cuenca
44f6bc81c7 Don't copy when unwrapping model (#2166)
* Don't copy when unwrapping model.

Otherwise an exception is raised when using fp16.

* Remove unused import
2023-01-30 20:18:20 +01:00
Pedro Cuenca
164b6e0532 Section on using LoRA alpha / scale (#2139)
* Section on using LoRA alpha / scale.

* Accept suggestion

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Clarify on merge.

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-01-30 14:14:46 +01:00
Patrick von Platen
a6610db7a8 [Design philosopy] Create official doc (#2140)
* finish more

* finish philosophy

* Apply suggestions from code review

Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Will Berman <wlbberman@gmail.com>

---------

Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Will Berman <wlbberman@gmail.com>
2023-01-30 09:27:37 +01:00
Pedro Cuenca
0b68101a13 [diffusers-cli] Fix typo in accelerate and transformers versions (#2154)
Fix typo in accelerate and transformers versions.
2023-01-30 09:04:45 +01:00
Ayan Das
125d783076 fix typo in EMAModel's load_state_dict() (#2151)
Possible typo introduced in 7c82a16fc1
2023-01-29 13:23:18 +01:00
Pedro Cuenca
fdf70cb54b Fix typo (#2138) 2023-01-27 20:08:56 +01:00
Nicolas Patry
20396e2bd2 Adding some safetensors docs. (#2122)
* Tmp.

* Adding more docs.

* Doc style.

* Remove the argument `use_safetensors=True`.

* doc-builder
2023-01-27 18:20:50 +01:00
Will Berman
2cf34e6db4 [from_pretrained] only load config one time (#2131) 2023-01-27 08:23:55 -08:00
Patrick von Platen
04ad948673 make style 2 - sorry 2023-01-27 16:54:40 +02:00
Patrick von Platen
97ef5e0665 make style 2023-01-27 16:52:04 +02:00
Patrick von Platen
31be42209d Don't call the Hub if local_files_only is specifiied (#2119)
Don't call the Hub if
2023-01-27 09:42:33 +02:00
RahulBhalley
43c5ac2be7 Typo fix: torwards -> towards (#2134) 2023-01-27 08:20:18 +01:00
Ji soo Kim
c750a82374 Fix typos in loaders.py (#2137)
Fix typo in loaders.py
2023-01-27 08:20:07 +01:00
Patrick von Platen
0c39f53cbb Allow lora from pipeline (#2129)
* [LoRA] Allow to use in inference with pipeline

* [LoRA] allow cross attention kwargs passed to pipeline

* finish
2023-01-27 08:19:46 +01:00
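A sketch of what this enables at inference time (the LoRA weight path is hypothetical): LoRA attention processors are loaded into the pipeline's UNet, and their contribution can be scaled per call via `cross_attention_kwargs`.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load LoRA attention processors produced by the LoRA training script (path is hypothetical)
pipe.unet.load_attn_procs("path/to/lora_weights")

# scale=0.0 disables the LoRA layers, 1.0 applies them fully
image = pipe(
    "a photo of sks dog in a bucket",
    cross_attention_kwargs={"scale": 0.5},
).images[0]
```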
Will Berman
0a5948e7f4 remove redundant allow_patterns (#2130) 2023-01-26 13:22:28 -08:00
Patrick von Platen
f653ded7ed [LoRA] Make sure LoRA can be disabled after it's run (#2128) 2023-01-26 21:26:11 +01:00
Will Berman
e92d43feb0 [nit] torch_dtype used twice in doc string (#2126) 2023-01-26 11:19:20 -08:00
hysts
7436e30c72 Fix model card of LoRA (#2114)
Fix
2023-01-26 19:08:45 +01:00
Will Berman
14976500ed fuse attention mask (#2111)
* fuse attention mask

* lint

* use 0 beta when no attention mask re: @Birch-san
2023-01-26 08:36:07 -08:00
Cyberes
96af5bf7d9 Fix unable to save_pretrained when using pathlib (#1972)
* fix PosixPath is not JSON serializable

* use PosixPath

* forgot elif like a dummy
2023-01-26 16:53:34 +01:00
Patrick von Platen
bbc2a03052 [Import Utils] Fix naming (#2118) 2023-01-26 15:54:59 +01:00
Suraj Patil
1e216be895 make scaling factor a config arg of vae/vqvae (#1860)
* make scaling factor config arg of vae

* fix

* make flake happy

* fix ldm

* fix upscaler

* quality

* Apply suggestions from code review

Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* solve conflicts, addres some comments

* examples

* examples min version

* doc

* fix type

* typo

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_upscale.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* remove duplicate line

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-26 14:37:19 +01:00
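With this change the hard-coded 0.18215 latent scaling constant becomes a readable config attribute; a sketch of how it can be used (model id illustrative, random input stands in for a real image in [-1, 1]):

```python
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae")

image = torch.randn(1, 3, 512, 512)  # placeholder for a normalized RGB image

# Encode and scale the latents with the configurable factor (0.18215 for SD v1 VAEs)
latents = vae.encode(image).latent_dist.sample() * vae.config.scaling_factor

# Undo the scaling before decoding back to image space
decoded = vae.decode(latents / vae.config.scaling_factor).sample
```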
Pedro Cuenca
915a563611 Allow UNet2DModel to use arbitrary class embeddings (#2080)
* Allow `UNet2DModel` to use arbitrary class embeddings.

We can currently use class conditioning in `UNet2DConditionModel`, but
not in `UNet2DModel`. However, `UNet2DConditionModel` requires text
conditioning too, which is unrelated to other types of conditioning.
This commit makes it possible for `UNet2DModel` to be conditioned on
entities other than timesteps. This is useful for training /
research purposes. We can currently train models to perform
unconditional image generation or text-to-image generation, but it's not
straightforward to train a model to perform class-conditioned image
generation, if text conditioning is not required.

We could potentially use `UNet2DConditionModel` for class-conditioning
without text embeddings by using down/up blocks without
cross-conditioning. However:
- The mid block currently requires cross attention.
- We are required to provide `encoder_hidden_states` to `forward`.

* Style

* Align class conditioning, add docstring for `num_class_embeds`.

* Copy docstring to versatile_diffusion UNetFlatConditionModel
2023-01-26 13:46:32 +01:00
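A sketch of the new capability (sizes are illustrative): `UNet2DModel` can now be built with `num_class_embeds` and called with `class_labels`, giving class-conditional image generation without any text conditioning.

```python
import torch
from diffusers import UNet2DModel

# A small class-conditional UNet: class labels get their own embedding table
model = UNet2DModel(
    sample_size=32,
    in_channels=3,
    out_channels=3,
    num_class_embeds=10,  # e.g. 10 classes
)

sample = torch.randn(4, 3, 32, 32)
timesteps = torch.randint(0, 1000, (4,))
class_labels = torch.randint(0, 10, (4,))

noise_pred = model(sample, timesteps, class_labels=class_labels).sample
```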
Pedro Cuenca
0856137337 [textual inversion] Allow validation images (#2077)
* [textual inversion] Allow validation images.

* Change key to `validation`

* Specify format instead of transposing.

As discussed with @sayakpaul.

* Style

Co-authored-by: isamu-isozaki <isamu.website@gmail.com>
2023-01-26 09:20:03 +01:00
Suraj Patil
946d1cb200 [dreambooth] check the low-precision guard before preparing model (#2102)
check the dtype before preparing model
2023-01-25 11:06:33 -08:00
Patrick von Platen
09779cbb40 [Bump version] 0.13.0dev0 & Deprecate predict_epsilon (#2109)
* [Bump version] 0.13

* Bump model up

* up
2023-01-25 17:59:02 +01:00
Patrick von Platen
b0cc7c202b make style 2023-01-25 16:03:56 +02:00
Oren WANG
fb98acf03b [lora] Fix bug with training without validation (#2106) 2023-01-25 14:56:13 +01:00
Patrick von Platen
180841bbde Release: v0.12.0 2023-01-25 15:48:00 +02:00
Patrick von Platen
6ba2231d72 Reproducibility 3/3 (#1924)
* make tests deterministic

* run slow tests

* prepare for testing

* finish

* refactor

* add print statements

* finish more

* correct some test failures

* more fixes

* set up to correct tests

* more corrections

* up

* fix more

* more prints

* add

* up

* up

* up

* uP

* uP

* more fixes

* uP

* up

* up

* up

* up

* fix more

* up

* up

* clean tests

* up

* up

* up

* more fixes

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* make

* correct

* finish

* finish

Co-authored-by: Suraj Patil <surajp815@gmail.com>
2023-01-25 13:44:22 +01:00
Patrick von Platen
008c22d334 Improve transformers versions handling (#2104) 2023-01-25 12:50:54 +01:00
Patrick von Platen
b562b6611f Allow directly passing text embeddings to Stable Diffusion Pipeline for prompt weighting (#2071)
* add text embeds to sd

* add text embeds to sd

* finish tests

* finish

* finish

* make style

* fix tests

* make style

* make style

* up

* better docs

* fix

* fix

* new try

* up

* up

* finish
2023-01-25 12:29:49 +01:00
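A sketch of the new entry point: embeddings computed outside the pipeline can be passed directly via `prompt_embeds` / `negative_prompt_embeds`, which is where custom prompt weighting would hook in (model id illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"

# Build the text embeddings manually (re-weight individual token embeddings here if desired)
text_inputs = pipe.tokenizer(
    prompt, padding="max_length", max_length=pipe.tokenizer.model_max_length,
    truncation=True, return_tensors="pt",
)
prompt_embeds = pipe.text_encoder(text_inputs.input_ids.to("cuda"))[0]

negative_inputs = pipe.tokenizer(
    "", padding="max_length", max_length=pipe.tokenizer.model_max_length, return_tensors="pt",
)
negative_prompt_embeds = pipe.text_encoder(negative_inputs.input_ids.to("cuda"))[0]

image = pipe(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_prompt_embeds).images[0]
```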
Sayak Paul
c1184918c5 [docs] Adds a doc on LoRA support for diffusers (#2086)
* add: a doc on LoRA support in diffusers.

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* apply PR suggestions.

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* remove visually incoherent elements.

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-01-25 12:23:12 +01:00
apolinario
263b968041 Add lora tag to the model tags (#2103)
* Add `lora` tag to the model tags

For lora training

* uP

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-25 12:17:59 +01:00
Suraj Patil
480d8846a9 [doc] update example for pix2pix (#2101)
update example for pix2pix
2023-01-25 11:22:09 +01:00
patil-suraj
9dbf78e2f1 Merge branch 'main' of https://github.com/huggingface/diffusers 2023-01-25 09:12:49 +01:00
patil-suraj
9aa6fcab60 fix docs for center_crop 2023-01-25 09:12:47 +01:00
Pedro Cuenca
f37d880f6a Remove wandb from text_to_image requirements.txt (#2092) 2023-01-25 08:54:14 +01:00
Will Berman
febaf86302 [docs] [dreambooth] note random crop (#2085)
* [docs] [dreambooth] note random crop

documenting default random crop behavior
2023-01-24 08:37:34 -08:00
Takuma Mori
16bb5058b9 xFormers attention op arg (#2049)
* allow passing op to xFormers attention

original code by @patil-suraj
huggingface/diffusers@ae0cc0b71f

* correct style by `make style`

* add attention_op arg documents

* add usage example to docstring

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* add usage example to docstring

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* code style correction by `make style`

* Update docstring code to a valid python example

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Update docstring code to a valid python example

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* style correction by `make style`

* Update code example to fully functional

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2023-01-24 17:26:04 +01:00
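The gist of the new argument (requires xformers; the op shown is one of the available implementations and may not support every head size):

```python
import torch
import xformers.ops
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Pin a specific memory-efficient attention implementation instead of letting xformers dispatch
pipe.enable_xformers_memory_efficient_attention(
    attention_op=xformers.ops.MemoryEfficientAttentionFlashAttentionOp
)

image = pipe("a photo of an astronaut riding a horse on mars").images[0]
```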
Lincoln Stein
7533e3d7e6 [Feat] checkpoint_merger works on local models as well as ones that use safetensors (#2060)
* allow a local model directory to be used for merging

* moved checkpoint merge bugfix into main for testing

* possibly fix local variable "config_dict" referenced before assignment

* fix deprecation warning

* debugging...

* debugging

* allow safetensors

* safetensors try again

* fix syntax error

* further debugging

* fix logic error when checkpoint 2 is none

* more debugging...

* more debugging...

* more debugging...

* more debugging...

* debugging

* clean up status reporting

* skip the requires_safety_checker boolean

* moved checkpoint merge bugfix into main for testing

* possibly fix local variable "config_dict" referenced before assignment

* fix deprecation warning

* allow safetensors

* fix logic error when checkpoint 2 is none

* clean up status reporting

* undo hack to use private repo for community pipelines

* allow a local model directory to be used for merging

* allow safetensors

* clean up status reporting

* reformatted with black

* sort imported modules correctly

* Update examples/community/checkpoint_merger.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update examples/community/checkpoint_merger.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update examples/community/checkpoint_merger.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* fix import style error

Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-24 16:35:17 +01:00
Yuta Hayashibe
418331094d Run inference on a specific condition and fix call of manual_seed() (#2074) 2023-01-24 14:19:22 +01:00
Suraj Patil
fc8afa3ab5 [dreambooth] fix multi on gpu. (#2088)
unwrap model on multi gpu
2023-01-24 13:23:56 +01:00
Pedro Cuenca
31336dae3b Fix resume epoch for all training scripts except textual_inversion (#2079) 2023-01-24 12:02:41 +01:00
Pedro Cuenca
0e98e83927 [lora] Log images when using tensorboard (#2078)
* [lora] Log images when using tensorboard.

* Specify image format instead of transposing.

As discussed with @sayakpaul.

* Style
2023-01-24 10:26:39 +01:00
Pedro Cuenca
f4dddaf5ee [textual_inversion] Fix resuming state when using gradient checkpointing (#2072)
* Fix resuming state when using gradient checkpointing.

Also, allow --resume_from_checkpoint to be used when the checkpoint does
not yet exist (a normal training run will be started).

* style
2023-01-24 10:25:41 +01:00
Patrick von Platen
7d8b4f7f8e [Paint by example] Fix cpu offload for paint by example (#2062)
* [Paint by example] Fix paint by example

* fix more

* final fix
2023-01-24 10:05:50 +01:00
Gleb Akhmerov
a66f2baeb7 Dreambooth: reduce VRAM usage (#2039)
* Dreambooth: use `optimizer.zero_grad(set_to_none=True)` to reduce VRAM usage

* Allow the user to control `optimizer.zero_grad(set_to_none=True)` with --set_grads_to_none

* Update Dreambooth readme

* Fix link in readme

* Fix header size in readme
2023-01-23 12:21:03 +01:00
Suraj Patil
6fedb29f11 [examples] add dataloader_num_workers argument (#2070)
add --dataloader_num_workers argument
2023-01-23 10:58:03 +01:00
cafe+ai — かふぇあい
d75ad93ca7 Safetensors loading in "convert_diffusers_to_original_stable_diffusion" (#2054)
* Safetensors loading in "convert_diffusers_to_original_stable_diffusion"

Adds diffusers format safetensors loading support

* Fix import sort order: convert_diffusers_to_original_stable_diffusion.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-23 09:44:55 +01:00
Sayak Paul
ffb3a26c5c [LoRA] Adds example on text2image fine-tuning with LoRA (#2031)
* example on fine-tuning with LoRA.

* apply make quality.

* fix: pipeline loading.

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* apply suggestions for PR review.

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* apply make style and make quality.

* chore: remove mention of dreambooth from text2image.

* add: weight path and wandb run link.

* Apply suggestions from code review

* apply make style.

* make style

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2023-01-23 08:31:07 +01:00
wrong.wang
b15a951a48 add community pipeline: StableUnCLIPPipeline (#2037)
* add community pipeline: StableUnCLIPPipeline

* reformt stable_unclip.py with isort and black
2023-01-22 21:03:42 +01:00
Patrick von Platen
69c76173fa fix tests 2023-01-22 14:31:05 +02:00
Patrick von Platen
926b34b40c improve tests 2023-01-22 14:30:15 +02:00
Patrick von Platen
8d326e61cf Correct Pix2Pix example (#2056)
* Correct Pix2Pix example

- no advertisement of revision -> it'll be deprecated soon
- by default safety checker should be used

* Update docs/source/en/api/pipelines/stable_diffusion/pix2pix.mdx

* up
2023-01-21 15:56:29 +01:00
Patrick von Platen
59b7339a84 [From pretrained] Don't download .safetensors files if safetensors is… (#2057)
* [From pretrained] Don't download .safetensors files if safetensors is not available

* tests

* tests

* up
2023-01-21 15:51:33 +01:00
Suraj Patil
aa265f74bd [StableDiffusionInstructPix2Pix] use cpu generator in slow tests (#2051)
* use cpu generator in slow tests

* fix get_inputs
2023-01-20 21:43:00 +02:00
Damian Stewart
3d2f24b099 Module-ise "original stable diffusion to diffusers" conversion script (#2019)
* convert __main__ to a function call and call it

* add missing type hint

* make style check pass

* move loading to src/diffusers

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-20 17:30:44 +01:00
Lucain
bcb476797c Remove modelcards dependency (#2050)
* Switch to huggingface_hub.ModelCard

* Remove modelcards dependency in favor of Jinja2
2023-01-20 16:39:42 +01:00
Lucain
5ea4be86ab Create repo before cloning in examples (#2047)
* Create repo before cloning in examples

* code quality
2023-01-20 16:38:37 +01:00
Suraj Patil
e5ff75540c Add InstructPix2Pix pipeline (#2040)
* begin pix2pix

* fix

* cfg image_latents

* fix some docstr

* fix

* fix

* hack

* fix

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* add comments to explain the hack

* move __call__ to the top

* doc

* remove height and width

* remove deprecations

* fix doc str

* quality

* fast tests

* change model id

* fast tests

* fix test

* address Pedro's comments

* copyright

* Simple doc page.

* Apply suggestions from code review

* style

* Remove import

* address some review comments

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* style

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-20 16:25:46 +01:00
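A minimal sketch of the pipeline added here (checkpoint id and scales are illustrative, the input path is a placeholder); `image_guidance_scale` controls how closely the edit follows the input image:

```python
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from diffusers.utils import load_image

pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

init_image = load_image("path/or/url/to/input.png")  # any RGB input image

edited = pipe(
    "turn the sky into a starry night",
    image=init_image,
    num_inference_steps=20,
    guidance_scale=7.5,
    image_guidance_scale=1.5,
).images[0]
```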
hysts
3ecbbd6288 Minor fix in the documentation of LoRA (#2045)
Fix
2023-01-20 13:19:54 +01:00
Anton Lozhkov
7c82a16fc1 Fix EMA for multi-gpu training in the unconditional example (#1930)
* improve EMA

* style

* one EMA model

* quality

* fix tests

* fix test

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* re organise the unconditional script

* backwards compatibility

* default to init values for some args

* fix ort script

* issubclass => isinstance

* update state_dict

* docstr

* doc

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* use .to if device is passed

* deprecate device

* make flake happy

* fix typo

Co-authored-by: patil-suraj <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-19 11:35:55 +01:00
Patrick von Platen
f354dd9e2f [Save Pretrained] Remove dead code lines that can accidentally remove pytorch files (#2038)
correct safetensors
2023-01-19 10:11:27 +01:00
Patrick von Platen
007c914c70 [Lora] Model card (#2032)
* [Lora] up lora training

* finish

* finish

* finish model card
2023-01-19 09:44:02 +01:00
Patrick von Platen
3c07840b1b make style 2023-01-19 02:22:47 +01:00
Joqsan
fcb2ec8c2f Fix typos and minor redundancies (#2029)
fix typos and minor redundancies

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-19 02:22:18 +01:00
Patrick von Platen
013955b5a7 [Dit] Fix dit tests (#2034)
* [Dit] Fix dit tests

* up
2023-01-19 01:50:22 +01:00
Patrick von Platen
ed616bd8a8 [LoRA] Add LoRA training script (#1884)
* [Lora] first upload

* add first lora version

* upload

* more

* first training

* up

* correct

* improve

* finish loaders and inference

* up

* up

* fix more

* up

* finish more

* finish more

* up

* up

* change year

* revert year change

* Change lines

* Add cloneofsimo as co-author.

Co-authored-by: Simo Ryu <cloneofsimo@gmail.com>

* finish

* fix docs

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>

* upload

* finish

Co-authored-by: Simo Ryu <cloneofsimo@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2023-01-18 18:05:51 +01:00
Suraj Patil
ac3fc64906 fix dit doc header (#2027)
The link in the main heading doesn't get rendered on the doc page; it displays the text as-is.
2023-01-18 10:31:24 +01:00
Kashif Rasul
37d113cce7 DiT Pipeline (#1806)
* added dit model

* import

* initial pipeline

* initial convert script

* initial pipeline

* make style

* raise valueerror

* single function

* rename classes

* use DDIMScheduler

* timesteps embedder

* samples to cpu

* fix var names

* fix numpy type

* use timesteps class for proj

* fix typo

* fix arg name

* flip_sin_to_cos and better var names

* fix C shape cal

* make style

* remove unused imports

* cleanup

* add back patch_size

* initial dit doc

* typo

* Update docs/source/api/pipelines/dit.mdx

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* added copyright license headers

* added example usage and toc

* fix variable names asserts

* remove comment

* added docs

* fix typo

* upstream changes

* set proper device for drop_ids

* added initial dit pipeline test

* update docs

* fix imports

* make fix-copies

* isort

* fix imports

* get rid of more magic numbers

* fix code when guidance is off

* remove block_kwargs

* cleanup script

* removed to_2tuple

* use FeedForward class instead of another MLP

* style

* work on merging DiTBlock with BasicTransformerBlock

* added missing final_dropout and args to BasicTransformerBlock

* use norm from block

* fix arg

* remove unused arg

* fix call to class_embedder

* use timesteps

* make style

* attn_output gets multiplied

* removed commented code

* use Transformer2D

* use self.is_input_patches

* fix flags

* fixed conversion to use Transformer2DModel

* fixes for pipeline

* remove dit.py

* fix timesteps device

* use randn_tensor and fix fp16 inf.

* timesteps_emb already the right dtype

* fix dit test class

* fix test and style

* fix norm2 usage in vq-diffusion

* added author names to pipeline and imagenet labels link

* fix tests

* use norm_type as string

* rename dit to transformer

* fix name

* fix test

* set  norm_type = "layer" by default

* fix tests

* do not skip common tests

* Update src/diffusers/models/attention.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* revert AdaLayerNorm API

* fix norm_type name

* make sure all components are in eval mode

* revert norm2 API

* compact

* finish deprecation

* add slow tests

* remove @

* refactor some stuff

* upload

* Update src/diffusers/pipelines/dit/pipeline_dit.py

* finish more

* finish docs

* improve docs

* finish docs

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: William Berman <WLBberman@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-17 23:09:29 +01:00
Pedro Cuenca
7e29b747f9 Check k-diffusion version is at least 0.0.12 (#2022)
* Check k-diffusion version is at least 0.0.12

* make style
2023-01-17 21:16:46 +01:00
Jerry Jiarui XU
a43bdd01cd [Flax] Add Flax inpainting impl (#1966)
* [Flax] Add Flax inpainting impl

* fixed copies, add README.md

* fixed README.md

* add test

* format

* update README.md
2023-01-17 10:42:04 +01:00
Patrick von Platen
f77ff56158 [Docs] No more autocast (#2021)
no more autocast
2023-01-17 10:31:25 +01:00
Suraj Patil
f861cde14f [train_unconditional] fix LR scheduler init (#2010)
fix lr scheduler
2023-01-17 10:11:46 +01:00
William Dalheim
b2ea8a84e9 Change PNDMPipeline to use PNDMScheduler (#2003)
* pndmpipeline uses pndmscheduler

* formatted pipeline_pndm
2023-01-16 15:34:59 +01:00
Will Berman
07c0fe4b87 Use pipeline tests mixin for UnCLIP pipeline tests + unCLIP MPS fixes (#1908)
re: https://github.com/huggingface/diffusers/issues/1857

We relax some of the checks to deal with unclip reproducibility issues. Mainly by checking the average pixel difference (measured w/in 0-255) instead of the max pixel difference (measured w/in 0-1).

- [x] add mixin to UnCLIPPipelineFastTests
- [x] add mixin to UnCLIPImageVariationPipelineFastTests
- [x] Move UnCLIPPipeline flags in mixin to base class
- [x] Small MPS fixes for F.pad and F.interpolate
- [x] Made test unCLIP model's dimensions smaller to run tests faster
2023-01-16 15:21:58 +01:00
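The relaxed reproducibility check described in the entry above can be sketched as follows; the helper names and tolerance are illustrative, not the actual test utilities:

```python
import numpy as np

def avg_diff_0_255(a: np.ndarray, b: np.ndarray) -> float:
    # Tolerant check: average per-pixel difference measured on a 0-255 scale.
    return float(np.abs(a * 255.0 - b * 255.0).mean())

def max_diff_0_1(a: np.ndarray, b: np.ndarray) -> float:
    # Strict check: largest per-pixel deviation measured on a 0-1 scale.
    return float(np.abs(a - b).max())

a = np.random.rand(64, 64, 3)
b = a + np.random.normal(scale=1e-3, size=a.shape)
assert avg_diff_0_255(a, b) < 1.0  # averaged check of the kind used for unCLIP
```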
Haofan Wang
1e651ca2c9 Fix typos in ColossalAI example (#2001)
fix typos
2023-01-16 15:21:04 +01:00
Patrick von Platen
522f8aa7b2 [Black] Update black library (#2007) 2023-01-16 15:16:28 +01:00
Patrick von Platen
8a3f0c1f71 [Conversion] Improve safetensors (#1989) 2023-01-16 14:26:56 +01:00
Patrick von Platen
f6a5c359cc [Community] Fix merger (#2006)
* [Community] Fix merger

* finish
2023-01-16 14:25:25 +01:00
蓝色的秋风
651c5adf8a [Conversion] Support convert diffusers to safetensors (#1996)
fix: support diffusers to safetensors
2023-01-16 12:58:01 +01:00
Erin
cc2cc00d20 Add tests for 2D UNet blocks (#1945)
* test unet blocks 2d

* change to randn_tensor

* mps flaky
2023-01-16 12:53:05 +01:00
Pedro Cuenca
8f58159159 Fix a couple typos in Dreambooth readme (#2004)
Fix a couple typos in Dreambooth readme.
2023-01-16 11:38:07 +01:00
Sayak Paul
216d190178 Update README.md to include our blog post (#1998)
* Update README.md

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-01-16 09:16:54 +01:00
Vladimir Sotnikov
9b37ed33b5 [SD Img2Img] resize source images to multiple of 8 instead of 32 (#1571)
* [Stable Diffusion Img2Img] resize source images to integer multiple of 8 instead of 32

* [Alt Diffusion Img2Img] resize source images to multiple of 8 instead of 32

* [Img2Img] fix AltDiffusion Img2Img resolution test

* [Img2Img] add Stable Diffusion Img2Img resolution test

* [Cycle Diffusion] round resolution to multiples of 8 instead of 32

* [ONNX SD Img2Img] round resolution to multiples of 64 instead of 32

* [SD Depth2Img] round resolution to multiples of 8 instead of 32

* [Repaint] round resolution to multiples of 8 instead of 32

* fix make style

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-13 16:02:22 +01:00
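A minimal sketch of the resizing rule this PR adopts (not the exact `preprocess()` implementation): round each side down to the nearest multiple of 8 so less of the source image is cropped away.

```python
from PIL import Image

def resize_to_multiple_of_8(image: Image.Image) -> Image.Image:
    # Round each side down to the nearest multiple of 8 so that the
    # 8x-downsampling VAE receives a compatible resolution.
    w, h = image.size
    w, h = w - w % 8, h - h % 8
    return image.resize((w, h), resample=Image.LANCZOS)

print(resize_to_multiple_of_8(Image.new("RGB", (517, 389))).size)  # (512, 384)
```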
Patrick von Platen
135567f18e make style 2023-01-12 20:02:41 +00:00
Walter Hugo Lopez Pinaya
9a5d3322e7 Update docstring (#1971) 2023-01-12 21:01:40 +01:00
camenduru
f73ed17961 Allow converting Flax to PyTorch by adding a "from_flax" keyword (#1900)
* from_flax

* oops

* oops

* make style with pip install -e ".[dev]"

* oops

* now code quality happy 😋

* allow_patterns += FLAX_WEIGHTS_NAME

* Update src/diffusers/pipelines/pipeline_utils.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/pipeline_utils.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/pipeline_utils.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/pipeline_utils.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/models/modeling_utils.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/pipeline_utils.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* for test

* bye bye is_flax_available()

* oops

* Update src/diffusers/models/modeling_pytorch_flax_utils.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update src/diffusers/models/modeling_pytorch_flax_utils.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update src/diffusers/models/modeling_pytorch_flax_utils.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update src/diffusers/models/modeling_utils.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update src/diffusers/models/modeling_utils.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* make style

* add test

* finish

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-01-12 20:00:35 +01:00
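Usage sketch for the new keyword introduced above, assuming the checkpoint ships Flax (`.msgpack`) weights and `flax` is installed; the repo id is illustrative:

```python
from diffusers import StableDiffusionPipeline

# Load Flax weights and convert them to PyTorch at load time.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", from_flax=True
)
```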
Katsuya
9147c4c954 Fix unused upcast_attn flag in convert_original_stable_diffusion_to_diffusers script (#1942)
Fix unused upcast_attn flag in sd to diffusers script
2023-01-12 19:55:40 +01:00
Patrick von Platen
6d3adf6570 Fix slow tests (#1983)
* [Slow tests] Fix tests

* Update tests/pipelines/karras_ve/test_karras_ve.py
2023-01-12 18:24:51 +01:00
Patrick von Platen
dbdd585cad Example tests (#1982)
* Example tests

* fix
2023-01-12 17:39:37 +01:00
klopsahlong
7f0eb35af3 Research project multi subject dreambooth (#1948)
* implemented multi subject dreambooth in research_projects

* minor edits to readme

* added style and quality fixes

Co-authored-by: Krista Opsahl-Ong <kristaopsahlong@gmail.com>
2023-01-12 11:42:35 +01:00
Haofan Wang
40aa162808 [Docs] Update README.md (#1960)
Update README.md
2023-01-12 11:34:10 +01:00
Patrick von Platen
f06e4e5579 make style 2023-01-12 11:33:11 +01:00
Patrick von Platen
57f7d25934 [CPU offload] correct cpu offload (#1968)
* [CPU offload] correct cpu offload

* [CPU offload] correct cpu offload

* finish

* finish

* Update docs/source/en/optimization/fp16.mdx

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update src/diffusers/pipelines/alt_diffusion/pipeline_alt_diffusion.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-01-12 11:23:52 +01:00
Michael Krasnyk
50b6513531 Update CLIPGuidedStableDiffusion.feature_extractor.size to fix TypeError (#1938)
TypeError: Size should be int or sequence. Got <class 'dict'>
2023-01-11 10:52:48 +01:00
Patrick von Platen
d1d5451b64 [Community] Correct checkpoint merger (#1965) 2023-01-10 15:11:50 +01:00
Patrick von Platen
f6f1ec3a7c allow loading ddpm models into ddim (#1932) 2023-01-10 14:52:32 +01:00
Patrick von Platen
beb932c5d1 [Conversion SD] Make sure weirdly sorted keys work as well (#1959) 2023-01-10 01:23:14 +01:00
andreemic
4401e6aa2b fix typo in imagic_stable_diffusion.py (#1956)
changed named arg to self.tokenizer from `truncaton` to `truncation` according to Tokenizer docs (https://huggingface.co/docs/tokenizers/api/tokenizer#tokenizers.Tokenizer.truncation)
2023-01-09 20:34:19 +01:00
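An illustrative reproduction of the fix above (tokenizer repo id is an assumption): the keyword must be spelled `truncation` for long prompts to actually be clipped to the model's max length.

```python
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
# `truncation=True` (correct spelling) clips prompts to max_length.
input_ids = tokenizer(
    "a photo of a cat",
    padding="max_length",
    truncation=True,
    max_length=tokenizer.model_max_length,
    return_tensors="pt",
).input_ids
```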
Fazzie-Maqianli
089f0f4c98 update to latest colossalai (#1951) 2023-01-09 19:47:41 +01:00
Patrick von Platen
aba2a65d6a Add automatic doc sorting (#1940)
* automatically sort docs

* add new check toc doc

* add new check toc doc

* add

* add new check toc doc

* add

* add new check toc doc

* correct

* finalize
2023-01-06 17:24:47 +01:00
vvssttkk
9f4c4f5e82 fix path to logo (#1939) 2023-01-06 16:12:30 +01:00
Patrick von Platen
409387889d [Conversion] Make sure ema weights are extracted correctly (#1937)
* [Conversion] Make sure ema weights are extracted correctly

* up

* finish
2023-01-06 07:08:39 +01:00
Patrick von Platen
2533f92532 [Stable Diffusion Guide] 101 Stable Diffusion Guide directly into the docs (#1927)
* finish

* improve

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-01-05 21:14:23 +01:00
Patrick von Platen
f6af0d1f33 move to intro 2023-01-05 20:57:43 +01:00
Will Berman
247b5feea1 [dreambooth] low precision guard (#1916)
* [dreambooth] low precision guard

* fix

* add docs to cli args

* Update examples/dreambooth/train_dreambooth.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* style

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-05 16:54:56 +01:00
Shubhamai
7101c7316b [StableDiffusionimg2img] validating input type (#1913)
* [StableDiffusionimg2img] validating input type

* fixing tests

* running make style

* make fix-copies
2023-01-05 12:01:00 +01:00
Chanran Kim
f6f4176294 [Docs] Add TRANSLATING.md file (#1920)
* init for korean docs

* edit build yml file for multi language docs

* edit one more build yml file for multi language docs

* add title for get_frontmatter error

* add translating.md

* default language for docs is en

* Update docs/TRANSLATING.md

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-05 09:42:43 +01:00
Fazzie-Maqianli
d8062ad700 Feature/colossalai (#1793)
Support ColossalAI for Dreambooth
2023-01-05 08:53:42 +01:00
qsh-zh
be99201a56 feat : add log-rho deis multistep scheduler (#1432)
* feat: add log-rho deis multistep scheduler

* docs: fix typo

* docs: add docs for the implemented algorithm

* docs : remove duplicate ref

* finish deis

* add docs

* fix

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-05 00:09:30 +01:00
Patrick von Platen
9b63854886 Improve reproducibility 2/3 (#1906)
* [Repro] Correct reproducibility

* up

* up

* uP

* up

* need better image

* allow conversion from no state dict checkpoints

* up

* up

* up

* up

* check tensors

* check tensors

* check tensors

* check tensors

* next try

* up

* up

* better name

* up

* up

* Apply suggestions from code review

* correct more

* up

* replace all torch randn

* fix

* correct

* correct

* finish

* fix more

* up
2023-01-04 23:51:17 +01:00
Peter Willemsen
67e2f95cc4 New Pipeline: Tiled-upscaling with depth perception to avoid blurry spots (#1615)
* added first version of the tiled upscaling pipeline

* reformatted to pass code quality tests
2023-01-04 23:07:58 +01:00
Chanran Kim
75d53cc839 Init for korean docs (#1910)
* init for korean docs

* edit build yml file for multi language docs

* edit one more build yml file for multi language docs

* add title for get_frontmatter error
2023-01-04 22:59:42 +01:00
Erin
9e17983d9f Test ResnetBlock2D (#1850)
* test resnet block

* fix code format required by isort

* add torch device

* nit
2023-01-04 22:57:32 +01:00
Patrick von Platen
cb8a3dbe34 make style 2023-01-04 21:50:17 +00:00
Yasyf Mohamedali
bcd6f3f9ce Various Fixes for Flax Dreambooth (#1782)
* Various Fixes for Flax Dreambooth

- Correctly update the progress bar every epoch
- Allow specifying a pretrained VAE
- Allow specifying a revision to pretrained models
- Cache compiled models between invocations (speeds up TPU execution a lot!)
- Save intermediate checkpoints by specifying `save_steps`

* Don't die when save_steps is not set.

* Address CR

* Address comments

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-01-04 22:49:56 +01:00
Alex Redden
19a0ce4a47 Fix lr-scaling store_true & default=True cli argument for textual_inversion training. (#1090)
Fix default lr-scaling cli argument
2023-01-04 15:43:41 +01:00
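A sketch of the argparse gotcha this PR addresses (flag name follows the textual inversion script; the exact diff may differ): `action="store_true"` combined with `default=True` means the flag can never be turned off.

```python
import argparse

parser = argparse.ArgumentParser()
# Buggy pattern: store_true only ever sets True, so default=True is sticky.
# parser.add_argument("--scale_lr", action="store_true", default=True)

# One way to express the fix: default to False so passing --scale_lr opts in.
parser.add_argument("--scale_lr", action="store_true", default=False)

args = parser.parse_args([])
print(args.scale_lr)  # False unless --scale_lr is passed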
Yasyf Mohamedali
856331c61b Support training SD V2 with Flax (#1783)
* Support training SD V2 with Flax

Mostly involves supporting a v_prediction scheduler.

The implementation in #1777 doesn't take into account a recent refactor of `scheduling_utils_flax`, so this should be used instead.

* Add to other top-level files.
2023-01-04 13:19:22 +01:00
Manfred Lindmark
f7154f859c Fix --resume_from_checkpoint step in train_text_to_image.py (#1914)
fix resume step in train_text_to_image example
2023-01-04 13:05:55 +01:00
Joqsan
675ef1ffbd fix: DDPMScheduler.set_timesteps() (#1912) 2023-01-04 13:02:50 +01:00
Patrick von Platen
d67c305120 allow conversion from no state dict checkpoints 2023-01-03 19:48:13 +00:00
Patrick von Platen
2bd53a940c [Docs] Remove duplicated API doc string (#1901)
only have api docstring once for sD
2023-01-03 19:11:48 +01:00
Patrick von Platen
8ed08e4270 [Deterministic torch randn] Allow tensors to be generated on CPU (#1902)
* [Deterministic torch randn] Allow tensors to be generated on CPU

* fix more

* up

* fix more

* up

* Update src/diffusers/utils/torch_utils.py

Co-authored-by: Anton Lozhkov <anton@huggingface.co>

* Apply suggestions from code review

* up

* up

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-01-03 18:22:40 +01:00
neverix
0df83c79e4 Fixes in comments in SD2 D2I (#1903) 2023-01-03 16:24:36 +01:00
Robert Dargavel Smith
4a7e4cec38 Add conditional generation to AudioDiffusionPipeline (#1826)
* Add conditional generation

* add fast test for conditional audio generation
2023-01-03 14:09:14 +01:00
aengusng8
f45c675d2c [addresses issue #1642] add add_noise to scheduling-sde-ve (#1827)
* add add_noise to scheduling-sde-ve

* run Black formater
2023-01-03 14:08:41 +01:00
Anton Lozhkov
1bf4f0da7e Add accelerate and xformers versions to diffusers-cli env (#1898)
Add accelerate and xformers to diffusers-cli env
2023-01-03 13:51:27 +01:00
Anton Lozhkov
f17fae641c Add UnCLIPImageVariationPipeline to dummy imports (#1897)
* Add UnCLIPImageVariationPipeline to dummy imports

* style
2023-01-03 11:57:56 +01:00
YiYi Xu
da31075700 updated doc for stable diffusion pipelines (#1770)
* add a doc page for each pipeline under api/pipelines/stable_diffusion
* add pipeline examples to docstrings
* updated stable_diffusion_2 page
* updated default markdown syntax to list methods based on https://github.com/huggingface/diffusers/pull/1870
* add function decorator

Co-authored-by: yiyixuxu <yixu@Yis-MacBook-Pro.lan>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-02 11:35:51 -10:00
Pedro Cuenca
8c14ca3d43 Fixes to the help for report_to in training scripts (#1888)
Fixes to the help for report_to in training scripts.
2023-01-02 15:53:28 +01:00
Suraj Patil
fa1f4701e8 [examples] misc fixes (#1886)
* misc fixes

* more comments

* Update examples/textual_inversion/textual_inversion.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* set transformers verbosity to warning

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-02 14:09:01 +01:00
agizmo
423c3a4cc6 Update ONNX Pipelines to use np.float64 instead of np.float (#1789)
NumPy 1.24 removed the "float" scalar alias, which was deprecated in v1.20.
https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
https://numpy.org/devdocs/release/1.24.0-notes.html#expired-deprecations
2023-01-02 13:21:49 +01:00
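The substitution the PR makes, sketched with illustrative values:

```python
import numpy as np

timesteps = [999, 998, 997]
# np.float was removed in NumPy 1.24 (deprecated since 1.20); use the builtin
# float or an explicit dtype such as np.float64 instead.
sigma_input = np.array(timesteps, dtype=np.float64)
print(sigma_input.dtype)  # float64
```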
Pedro Cuenca
f769d74b0f Fix typo in train_dreambooth_inpaint (#1885)
Fix typo in train_dreambooth_inpaint.
2023-01-02 11:50:58 +01:00
Patrick von Platen
21bbc633c4 [Attention] Finish refactor attention file (#1879)
* [Attention] Finish refactor attention file

* correct more

* fix

* more fixes

* correct

* up
2023-01-01 19:18:10 +01:00
Suraj Patil
62608a9102 [train_text_to_image] allow using non-ema weights for training (#1834)
* allow using non-ema weights for training

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* address more review comment

* reorganise a few lines

* always pad text to max_length to match original training

* fix collate_fn

* remove unused code

* don't prepare ema_unet, don't register lr scheduler

* style

* assert => ValueError

* add allow_tf32

* set log level

* fix comment

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-30 21:49:47 +01:00
Suraj Patil
e4fe941312 [examples] update loss computation (#1861)
update loss computation
2022-12-30 14:32:38 +01:00
Patrick von Platen
ac3738462b [Docs] Improve docs (#1870)
* [Docs] Improve docs

* up
2022-12-30 13:50:01 +01:00
Pedro Cuenca
a6e2c1fe5c Fix ema decay (#1868)
* Fix ema decay and clarify nomenclature.

* Rename var.
2022-12-30 12:42:42 +01:00
Patrick von Platen
b28ab30215 [Unclip] Make sure text_embeddings & image_embeddings can directly be passed to enable interpolation tasks. (#1858)
* [Unclip] Make sure latents can be reused

* allow one to directly pass embeddings

* up

* make unclip for text work

* finish allowing to pass embeddings

* correct more

* make style
2022-12-30 12:18:19 +01:00
Patrick von Platen
29b2c93c90 Make repo structure consistent (#1862)
* move files a bit

* more refactors

* fix more

* more fixes

* fix more onnx

* make style

* upload

* fix

* up

* fix more

* up again

* up

* small fix

* Update src/diffusers/__init__.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* correct

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-30 11:51:08 +01:00
Simon Kirsten
ab0e92fdc8 Flax: Fix img2img and align with other pipeline (#1824)
* Flax: Add components function

* Flax: Fix img2img and align with other pipeline

* Flax: Fix PRNGKey type

* Refactor strength to start_timestep

* Fix preprocess images

* Fix processed_images dimensions

* latents.shape -> latents_shape

* Fix typo

* Remove "static" comment

* Remove unnecessary optional types in _generate

* Apply doc-builder code style.

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-29 18:56:03 +01:00
Suraj Patil
9ea7052f0e [textual inversion] add gradient checkpointing and small fixes. (#1848)
Co-authored-by: Henrik Forstén <henrik.forsten@gmail.com>

* update TI script

* make flake happy

* fix typo
2022-12-29 15:02:29 +01:00
Patrick von Platen
03bf877bf4 [StableDiffusionInpaint] Correct test (#1859) 2022-12-29 14:47:56 +01:00
Patrick von Platen
f2e521c499 [Dtype] Align dtype casting behavior with Transformers and Accelerate (#1725)
* [Dtype] Align automatic dtype

* up

* up

* fix

* re-add accelerate
2022-12-29 14:36:02 +01:00
Patrick von Platen
debc74f442 [Versatile Diffusion] Fix cross_attention_kwargs (#1849)
fix versatile
2022-12-28 18:49:04 +01:00
Partho
2ba42aa9b1 [Community Pipeline] MagicMix (#1839)
* initial

* type hints

* update scheduler type hint

* add to README

* add example generation to README

* v -> mix_factor

* load scheduler from pretrained
2022-12-28 17:02:53 +01:00
Will Berman
53c8147afe unCLIP image variation (#1781)
* unCLIP image variation

* remove prior comment re: @pcuenca

* stable diffusion -> unCLIP re: @pcuenca

* add copy froms re: @patil-suraj
2022-12-28 14:17:09 +01:00
kabachuha
cf5265ad41 Allow selecting precision to make Dreambooth class images (#1832)
* allow selecting precision to make DB class images

addresses #1831

* add prior_generation_precision argument

* correct prior_generation_precision's description

Co-authored-by: Suraj Patil <surajp815@gmail.com>

Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-12-27 19:51:32 +01:00
Katsuya
8874027efc Make xformers optional even if it is available (#1753)
* Make xformers optional even if it is available

* Raise exception if xformers is used but not available

* Rename use_xformers to enable_xformers_memory_efficient_attention

* Add a note about xformers in README

* Reformat code style
2022-12-27 19:47:50 +01:00
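After the rename above, callers opt in explicitly; a minimal usage sketch (repo id is illustrative, and the call raises if xformers is not installed):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# xformers is no longer switched on just because it is available.
pipe.enable_xformers_memory_efficient_attention()
```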
Christopher Friesen
b693aff795 fix: resize transform now preserves aspect ratio (#1804) 2022-12-27 15:10:25 +01:00
William Held
8a4c3e50bd Width was typod as weight (#1800)
* Width was typod as weight

* Run Black
2022-12-27 15:09:21 +01:00
Pedro Cuenca
68e24259af Avoid duplicating PyTorch + safetensors downloads. (#1836) 2022-12-27 14:58:15 +01:00
camenduru
1f1b6c6544 Device to use (e.g. cpu, cuda:0, cuda:1, etc.) (#1844)
* Device to use (e.g. cpu, cuda:0, cuda:1, etc.)

* "cuda" if torch.cuda.is_available() else "cpu"
2022-12-27 14:42:56 +01:00
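A hypothetical flag mirroring the intent of this PR (the argument name and default are an assumption, not the exact diff):

```python
import argparse
import torch

parser = argparse.ArgumentParser()
parser.add_argument(
    "--device",
    type=str,
    default="cuda" if torch.cuda.is_available() else "cpu",
    help="Device to use (e.g. cpu, cuda:0, cuda:1, etc.)",
)
args = parser.parse_args([])
print(args.device)
```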
Pedro Cuenca
df2b548e89 Make safety_checker optional in more pipelines (#1796)
* Make safety_checker optional in more pipelines.

* Remove inappropriate comment in inpaint pipeline.

* InPaint Test: set feature_extractor to None.

* Remove import

* img2img test: set feature_extractor to None.

* inpaint sd2 test: set feature_extractor to None.

Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-12-25 21:58:45 +01:00
Daquan Lin
b6d4702301 fix small mistake in annotation: 32 -> 64 (#1780)
Fix inconsistencies between code and comments in the function 'preprocess'
2022-12-24 19:56:57 +01:00
Suraj Patil
9be94d9c66 [textual_inversion] unwrap_model text encoder before accessing weights (#1816)
* unwrap_model text encoder before accessing weights

* fix another call

* fix the right call
2022-12-23 16:46:24 +01:00
Patrick von Platen
f2acfb67ac Remove hardcoded names from PT scripts (#1778)
* Remove hardcoded names from PT scripts

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-12-23 15:36:29 +01:00
Prathik Rao
8aa4372aea reorder model wrap + bug fix (#1799)
* reorder model wrap

* bug fix

Co-authored-by: Prathik Rao <prathikrao@microsoft.com>
2022-12-22 14:51:47 +01:00
Pedro Cuenca
6043838971 Fix OOM when using PyTorch with JAX installed. (#1795)
Don't initialize Jax on startup.
2022-12-21 14:07:24 +01:00
Patrick von Platen
4125756e88 Refactor cross attention and allow mechanism to tweak cross attention function (#1639)
* first proposal

* rename

* up

* Apply suggestions from code review

* better

* up

* finish

* up

* rename

* correct versatile

* up

* up

* up

* up

* fix

* Apply suggestions from code review

* make style

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* add error message

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-20 18:49:05 +01:00
Dhruv Naik
a9190badf7 Add Flax stable diffusion img2img pipeline (#1355)
* add flax img2img pipeline

* update pipeline

* black format file

* remove argg from get_timesteps

* update get_timesteps

* fix bug: make use of timesteps for for_loop

* black file

* black, isort, flake8

* update docstring

* update readme

* update flax img2img readme

* update sd pipeline init

* Update src/diffusers/pipelines/stable_diffusion/pipeline_flax_stable_diffusion_img2img.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_flax_stable_diffusion_img2img.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* update inits

* revert change

* update var name to image, typo

* update readme

* return new t_start instead of modified timestep

* black format

* isort files

* update docs

* fix-copies

* update prng_seed typing

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-20 16:25:08 +01:00
Suraj Patil
d07f73003d Fix num images per prompt unclip (#1787)
* use repeat_interleave

* fix repeat

* Trigger Build

* don't install accelerate from main

* install released accelerate for mps test

* Remove additional accelerate installation from main.

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-20 16:03:38 +01:00
Pedro Cuenca
a6fb9407fd Dreambooth docs: minor fixes (#1758)
* Section header for in-painting, inference from checkpoint.

* Inference: link to section to perform inference from checkpoint.

* Move Dreambooth in-painting instructions to the proper place.
2022-12-20 08:39:16 +01:00
Patrick von Platen
261a448c6a Correct hf hub download (#1767)
* allow model download when no internet

* up

* make style
2022-12-20 02:07:15 +01:00
Simon Kirsten
f106ab40b3 [Flax] Stateless schedulers, fixes and refactors (#1661)
* [Flax] Stateless schedulers, fixes and refactors

* Remove scheduling_common_flax and some renames

* Update src/diffusers/schedulers/scheduling_pndm_flax.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-20 01:42:41 +01:00
Emil Bogomolov
d87cc15977 expose polynomial:power and cosine_with_restarts:num_cycles params (#1737)
* expose polynomial:power and cosine_with_restarts:num_cycles using get_scheduler func, add it to train_dreambooth.py

* fix formatting

* fix style

* Update src/diffusers/optimization.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-20 01:41:37 +01:00
Patrick von Platen
e29dc97215 make style 2022-12-20 01:38:45 +01:00
Ilmari Heikkinen
8e4733b3c3 Only test for xformers when enabling them #1773 (#1776)
* only check for xformers when xformers are enabled

* only test for xformers when enabling them
2022-12-20 01:38:28 +01:00
Prathik Rao
847daf25c7 update train_unconditional_ort.py (#1775)
* reflect changes

* run make style

Co-authored-by: Prathik Rao <prathikrao@microsoft.com>
Co-authored-by: Prathik Rao <prathikrao@microsoft.com@orttrainingdev7.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
2022-12-19 23:58:55 +01:00
Pedro Cuenca
9f8c915a75 [Dreambooth] flax fixes (#1765)
* Fail if there are less images than the effective batch size.

* Remove lr-scheduler arg as it's currently ignored.

* Make guidance_scale work for batch_size > 1.
2022-12-19 20:42:25 +01:00
Anton Lozhkov
8331da4683 Bump to 0.12.0.dev0 (#1771) 2022-12-19 18:44:08 +01:00
Anton Lozhkov
f1a32203aa [Tests] Fix UnCLIP cpu offload tests (#1769) 2022-12-19 18:25:08 +01:00
Nan Liu
6f15026330 update composable diffusion for an updated diffuser library (#1697)
* update composable diffusion for an updated diffuser library

* fix style/quality for code

* Revert "fix style/quality for code"

This reverts commit 71f2349763.

* update style

* reduce memory usage by computing score sequentially
2022-12-19 18:03:40 +01:00
anton-
a5edb981a7 [Patch] Return import for the unclip pipeline loader 2022-12-19 17:56:42 +01:00
anton-
54796b7e43 Release: v0.11.0 2022-12-19 17:43:22 +01:00
Anton Lozhkov
4cb887e0a7 Transformers version req for UnCLIP (#1766)
* Transformers version req for UnCLIP

* add to the list
2022-12-19 17:11:17 +01:00
Anish Shah
9f657f106d [Examples] Update train_unconditional.py to include logging argument for Wandb (#1719)
Update train_unconditional.py

Add logger flag to choose between tensorboard and wandb
2022-12-19 16:57:03 +01:00
Patrick von Platen
ce1c27adc8 [Revision] Don't recommend using revision (#1764) 2022-12-19 16:25:41 +01:00
Patrick von Platen
b267d28566 [Versatile] fix attention mask (#1763) 2022-12-19 15:58:39 +01:00
Anton Lozhkov
c7b4acfb37 Add CPU offloading to UnCLIP (#1761)
* Add CPU offloading to UnCLIP

* use fp32 for testing the offload
2022-12-19 14:44:08 +01:00
Suraj Patil
be38b2d711 [UnCLIPPipeline] fix num_images_per_prompt (#1762)
duplicate masks for num_images_per_prompt
2022-12-19 14:32:46 +01:00
Anton Lozhkov
32a5d70c42 Support attn2==None for xformers (#1759) 2022-12-19 12:43:30 +01:00
Patrick von Platen
429e5449c1 Add attention mask to uclip (#1756)
* Remove bogus file

* [Unclip] Add efficient attention

* [Unclip] Add efficient attention
2022-12-19 12:10:46 +01:00
Anton Lozhkov
dc7cd893fd Add resnet_time_scale_shift to VD layers (#1757) 2022-12-19 12:01:46 +01:00
Mikołaj Siedlarek
8890758823 Correct help text for scheduler_type flag in scripts. (#1749) 2022-12-19 11:27:23 +01:00
Will Berman
b25843e799 unCLIP docs (#1754)
* [unCLIP docs] markdown

* [unCLIP docs] UnCLIPPipeline
2022-12-19 10:27:32 +01:00
Will Berman
830a9d1f01 [fix] pipeline_unclip generator (#1751)
* [fix] pipeline_unclip generator

pass generator to all schedulers

* fix fast tests test data
2022-12-19 10:27:18 +01:00
Will Berman
2dcf64b72a kakaobrain unCLIP (#1428)
* [wip] attention block updates

* [wip] unCLIP unet decoder and super res

* [wip] unCLIP prior transformer

* [wip] scheduler changes

* [wip] text proj utility class

* [wip] UnCLIPPipeline

* [wip] kakaobrain unCLIP convert script

* [unCLIP pipeline] fixes re: @patrickvonplaten

remove callbacks

move denoising loops into call function

* UNCLIPScheduler re: @patrickvonplaten

Revert changes to DDPMScheduler. Make UNCLIPScheduler a modified
DDPM scheduler with changes to support Karlo

* mask -> attention_mask re: @patrickvonplaten

* [DDPMScheduler] remove leftover change

* [docs] PriorTransformer

* [docs] UNet2DConditionModel and UNet2DModel

* [nit] UNCLIPScheduler -> UnCLIPScheduler

matches existing unclip naming better

* [docs] SchedulingUnCLIP

* [docs] UnCLIPTextProjModel

* refactor

* finish licenses

* rename all to attention_mask and prep in models

* more renaming

* don't expose unused configs

* final renaming fixes

* remove x attn mask when not necessary

* configure kakao script to use new class embedding config

* fix copies

* [tests] UnCLIPScheduler

* finish x attn

* finish

* remove more

* rename condition blocks

* clean more

* Apply suggestions from code review

* up

* fix

* [tests] UnCLIPPipelineFastTests

* remove unused imports

* [tests] UnCLIPPipelineIntegrationTests

* correct

* make style

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-12-18 15:15:30 -08:00
Patrick von Platen
402b9560b2 Remove license accept ticks 2022-12-19 00:10:17 +01:00
Anton Lozhkov
c2a38ef9df Fix/update the LDM pipeline and tests (#1743)
* Fix/update LDM tests

* batched generators
2022-12-18 11:49:53 +01:00
Anton Lozhkov
08cc36ddff Fix MPS fast test warnings (#1744)
* unset level
2022-12-17 22:57:30 +01:00
Peter
723e8f6bb4 Fix ONNX img2img preprocessing (#1736)
Co-authored-by: Peter <peterto@users.noreply.github.com>
2022-12-17 13:12:10 +01:00
Patrick von Platen
c53a850604 [Batched Generators] This PR adds generators that are useful to make batched generation fully reproducible (#1718)
* [Batched Generators] all batched generators

* up

* up

* up

* up

* up

* up

* up

* up

* up

* up

* up

* up

* up

* up

* up

* up

* hey

* up again

* fix tests

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* correct tests

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-17 11:13:16 +01:00
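The batched-generator pattern this PR enables, as a sketch (repo id and step count are illustrative): one seeded generator per prompt makes every image in the batch individually reproducible, independent of batch size.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
prompts = ["an astronaut riding a horse"] * 4
# One generator (and seed) per image in the batch.
generators = [torch.Generator("cpu").manual_seed(seed) for seed in range(4)]
images = pipe(prompts, generator=generators, num_inference_steps=20).images
```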
Anton Lozhkov
086c7f9ea8 Nightly integration tests (#1664)
* [WIP] Nightly integration tests

* initial SD tests

* update SD slow tests

* style

* repaint

* ImageVariations

* style

* finish imgvar

* img2img tests

* debug

* inpaint 1.5

* inpaint legacy

* torch isn't happy about deterministic ops

* allclose -> max diff for shorter logs

* add SD2

* debug

* Update tests/pipelines/stable_diffusion_2/test_stable_diffusion.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update tests/pipelines/stable_diffusion/test_stable_diffusion.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* fix refs

* Update src/diffusers/utils/testing_utils.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* fix refs

* remove debug

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-16 18:51:11 +01:00
Pedro Cuenca
acd317810b Docs: recommend xformers (#1724)
* Fix links to flash attention.

* Add xformers installation instructions.

* Make link to xformers install more prominent.

* Link to xformers install from training docs.
2022-12-16 15:49:01 +01:00
Patrick von Platen
c6d0dff4a3 Fix ldm tests on master by not running the CPU tests on GPU (#1729) 2022-12-16 15:28:40 +01:00
Anton Lozhkov
a40095dd22 Fix ONNX img2img preprocessing and add fast tests coverage (#1727)
* Fix ONNX img2img preprocessing and add fast tests coverage

* revert

* disable progressbars
2022-12-16 15:24:16 +01:00
Partho
727434c206 Accept latents as optional input in Latent Diffusion pipeline (#1723)
* Latent Diffusion pipeline accept latents

* make style

* check for mps

randn does not work reproducibly on mps
2022-12-16 12:13:41 +01:00
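A sketch of the mps workaround mentioned above (the latent shape is illustrative): draw the initial latents on CPU, where `torch.randn` is reproducible, then move them to the device.

```python
import torch

device = "mps" if torch.backends.mps.is_available() else "cpu"
generator = torch.Generator("cpu").manual_seed(0)
# Sample on CPU for reproducibility, then move to the target device.
latents = torch.randn((1, 3, 64, 64), generator=generator).to(device)
```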
YiYi Xu
21e61eb3a9 Added a README page for docs and a "schedulers" page (#1710)
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-15 13:04:40 -10:00
Haihao Shen
c891330f79 Add examples with Intel optimizations (#1579)
* Add examples with Intel optimizations (BF16 fine-tuning and inference)

* Remove unused package

* Add README for intel_opts and refine the description for research projects

* Add notes of intel opts for diffusers
2022-12-15 21:16:27 +01:00
jiqing-feng
c5f04d4e34 apply amp bf16 on textual inversion (#1465)
* add conf.yaml

* enable bf16

enable amp bf16 for unet forward

fix style

fix readme

remove useless file

* change amp to full bf16

* align

* make style

* fix format
2022-12-15 21:15:23 +01:00
CyberMeow
61dec53356 Improve pipeline_stable_diffusion_inpaint_legacy.py (#1585)
* update inpaint_legacy to allow the use of predicted noise to construct intermediate diffused images

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint_legacy.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-12-15 20:59:31 +01:00
Pedro Cuenca
badddee0ef Add state checkpointing to other training scripts (#1687)
* Add state checkpointing to other training scripts

* Fix first_epoch

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update Dreambooth checkpoint help message.

* Dreambooth docs: checkpoints, inference from a checkpoint.

* make style

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-12-15 19:49:40 +01:00
Anton Lozhkov
13994b2d3f RePaint fast tests and API conforming (#1701)
* add fast tests

* better tests and fp16

* batch fixes

* Reuse preprocessing

* quickfix
2022-12-15 18:35:31 +01:00
anton-
ea90bf2ba1 skip mps 2022-12-15 18:01:39 +01:00
Chino
8cecc66a74 Fix the bug that torch version less than 1.12 throws TypeError (#1671) 2022-12-14 21:29:39 +01:00
Anton Lozhkov
35b66c8e32 [Readme] Clarify package owners (#1707)
Specify that we don't actively monitor the conda scripts
2022-12-14 20:49:36 +01:00
Patrick von Platen
013edb641a Update main docs (#1706)
* Remove bogus file

* [Docs] Remove mentioning of gated access since no longer exsits

* add docs to index

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-14 20:33:54 +01:00
Anton Lozhkov
86ac3ea1d7 Delete _ 2022-12-14 13:52:29 +01:00
Anton Lozhkov
ef3fcbb688 Remove all local telemetry (#1702) 2022-12-14 12:56:35 +01:00
Prathik Rao
7c823c2ed7 manually update train_unconditional_ort (#1694)
* manually update train_unconditional_ort

* formatting

Co-authored-by: Prathik Rao <prathikrao@microsoft.com>
2022-12-14 11:35:41 +01:00
Pedro Cuenca
784beee969 Dreambooth: use warnings instead of logger in parse_args() (#1688)
Use warnings instead of logger in parse_args()

logger requires an `Accelerator`.
2022-12-13 22:01:48 +01:00
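A sketch of the pattern (argument names follow the Dreambooth script, but this is simplified): `warnings.warn` works before an `Accelerator`, and therefore the logger, exists.

```python
import argparse
import warnings

def parse_args(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument("--with_prior_preservation", action="store_true")
    parser.add_argument("--class_data_dir", type=str, default=None)
    args = parser.parse_args(argv)
    # No Accelerator has been created yet, so use warnings instead of the logger.
    if not args.with_prior_preservation and args.class_data_dir is not None:
        warnings.warn("You need not use --class_data_dir without --with_prior_preservation.")
    return args

args = parse_args([])
```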
Patrick von Platen
8b7cb962a5 make style 2022-12-13 17:01:18 +00:00
Patrick von Platen
e1bb8f6188 [Community pipeline] Add github mechanism (#1680)
* [Community pipeline] Add github mechanism

* better

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* adapt

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-13 18:01:00 +01:00
Patrick von Platen
e62dd5cfa8 Change one-step dummy pipeline for testing (#1690)
Change the one-step dummy pipeline for testing
2022-12-13 16:55:49 +01:00
w4ffl35
07f95503e5 Disable telemetry when DISABLE_TELEMETRY is set (#1686)
fixed #1685 - disables telemetry when DISABLE_TELEMETRY and HF_HUB_OFFLINE is set
2022-12-13 16:28:07 +01:00
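A hypothetical helper sketching the opt-out check (the helper name and accepted values are assumptions, not the library's actual implementation):

```python
import os

def telemetry_disabled() -> bool:
    # Treat either environment variable as truthy when set to 1/ON/YES/TRUE.
    flags = ("DISABLE_TELEMETRY", "HF_HUB_OFFLINE")
    truthy = ("1", "ON", "YES", "TRUE")
    return any(os.environ.get(flag, "").upper() in truthy for flag in flags)

print(telemetry_disabled())
```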
Pedro Cuenca
e01d6cf295 Dreambooth: save / restore training state (#1668)
* Dreambooth: save / restore training state.

* make style

* Rename vars for clarity.

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Remove unused import

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-12-13 15:16:44 +01:00
Patrick von Platen
244e16a7ab [Version] Bump to 0.11.0.dev0 (#1682)
upgrade version
2022-12-13 13:51:36 +01:00
Patrick von Platen
b345c74d4d Make sure all pipelines can run with batched input (#1669)
* [SD] Make sure batched input works correctly

* uP

* uP

* up

* up

* uP

* up

* fix mask stuff

* up

* uP

* more up

* up

* uP

* up

* finish

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-13 12:50:15 +01:00
apolinario
b417042291 Fix wrong type checking in convert_diffusers_to_original_stable_diffusion.py (#1681)
* Fix type checking remainders

* Remove IS_V20_MODEL flag always being True

Co-authored-by: apolinario <joaopaulo.passos+multimodal@gmail.com>
2022-12-13 12:44:20 +01:00
Suvaditya Mukherjee
40c16ed2f0 Added Community pipeline for comparing Stable Diffusion v1.1-4 checkpoints (#1584)
* Added Community pipeline for comparing Stable Diffusion v1.1-4

Signed-off-by: Suvaditya Mukherjee <suvadityamuk@gmail.com>

* Made changes to provide support for current iteration of from_pretrained and added example

Signed-off-by: Suvaditya Mukherjee <suvadityamuk@gmail.com>

* updated a small spelling error

Signed-off-by: Suvaditya Mukherjee <suvadityamuk@gmail.com>

* added pipeline entry to table

Signed-off-by: Suvaditya Mukherjee <suvadityamuk@gmail.com>

Signed-off-by: Suvaditya Mukherjee <suvadityamuk@gmail.com>
2022-12-13 11:31:30 +01:00
Patrick von Platen
69de9b2eaa [Textual Inversion] Do not update other embeddings (#1665) 2022-12-12 17:44:39 +01:00
Patrick von Platen
3ce6380d3a [SD] Make sure scheduler is correct when converting (#1667) 2022-12-12 16:57:48 +01:00
Cyberes
d2dc4de303 Handle missing global_step key in scripts/convert_original_stable_diffusion_to_diffusers.py (#1612)
handle missing global_step key and don't download config if it already exists
2022-12-12 16:10:52 +01:00
Kangfu Mei
ded3299d68 fix bug if we don't do_classifier_free_guidance (#1601)
* fix bug if we don't do_classifier_free_guidance

* update for copied diffusers.pipelines..alt_diffusion..pipeline_alt_diffusion.AltDiffusionPipeline
2022-12-12 15:02:13 +01:00
Patrick von Platen
8bf5e59931 Deprecate init image correctly (#1649)
Deprecate init image correctly
2022-12-12 15:00:20 +01:00
Prathik Rao
4645e28355 tensor format ort bug fix (#1557)
bug fix

Co-authored-by: Prathik Rao <prathikrao@microsoft.com>
Co-authored-by: anton- <anton@huggingface.co>
2022-12-12 13:56:02 +01:00
Lukas Struppek
589330595d VersatileDiffusion: fix input processing (#1568)
* fix versatile diffusion input

* merge main

* `make fix-copies`

Co-authored-by: anton- <anton@huggingface.co>
2022-12-12 13:45:27 +01:00
lawfordp2017
31444f5790 Add text encoder conversion (#1559)
* Initial code for attempt at improving SD <--> diffusers conversions for v2.0

* Updates to support round-trip between orig. SD 2.0 and diffusers models

* Corrected formatting to Black standard

* Correcting import formatting

* Fixed imports (properly this time)

* add some corrections

* remove inference files

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-12-12 10:07:42 +01:00
M-A
c3b2f97534 Remove unnecessary kwargs in depth2img (#1648)
Since no deprecate() call was done, the typos were silently ignored.
2022-12-11 17:00:50 +01:00
Patrick von Platen
fc94c60c83 Remove unnecessary offset in img2img (#1653)
remove unnecessary offset in img2img
2022-12-10 19:26:25 +01:00
Pedro Cuenca
ea64a7860a Allow k pipeline to generate > 1 images (#1645)
Allow k pipeline to generate > 1 images.
2022-12-10 17:54:02 +01:00
Tim Hinderliter
2868d99181 dreambooth: fix #1566: maintain fp32 wrapper when saving a checkpoint to avoid crash when running fp16 (#1618)
* dreambooth: fix #1566: maintain fp32 wrapper when saving a checkpoint to avoid crash when running fp16

* dreambooth: guard against passing keep_fp32_wrapper arg to older versions of accelerate. part of fix for #1566

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update examples/dreambooth/train_dreambooth.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-10 15:45:45 +01:00
Pedro Cuenca
0c18d02cc9 Remove spurious arg in training scripts (#1644)
Remove spurious arg in training scripts.
2022-12-10 13:57:20 +01:00
Patrick von Platen
6b68afd8e4 do not automatically enable xformers (#1640)
* do not automatically enable xformers

* uP
2022-12-09 18:28:36 +01:00
anton-
63c4944998 Patch release: v0.10.2 2022-12-09 17:59:32 +01:00
Anton Lozhkov
3ebe40fc5f Adapt to forced transformers version in some dependent libraries (#1638)
* Adapt to forced transformers version in some dependent libraries

* style

* Update __init__.py

* update requires_backends
2022-12-09 17:42:53 +01:00
Anton Lozhkov
089252542c V0.10.1 patch (#1637)
* Re-add xformers enable to UNet2DCondition (#1627)

* finish

* fix

* Update tests/models/test_models_unet_2d.py

* style

Co-authored-by: Anton Lozhkov <anton@huggingface.co>

* Release: v0.10.1

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-12-09 17:04:25 +01:00
Patrick von Platen
cd91fc06fe Re-add xformers enable to UNet2DCondition (#1627)
* finish

* fix

* Update tests/models/test_models_unet_2d.py

* style

Co-authored-by: Anton Lozhkov <anton@huggingface.co>
2022-12-09 14:05:38 +01:00
Pedro Cuenca
ff65c2d72b Don't assume 512x512 in k-diffusion pipeline (#1625)
Don't assume 512x512 in k-diffusion pipeline.
2022-12-09 11:50:29 +01:00
Haofan Wang
f1b726e46e Update requirements.txt (#1623)
* Update requirements.txt

* Update requirements_flax.txt

* Update requirements.txt

* Update requirements_flax.txt

* Update requirements.txt

* Update requirements_flax.txt
2022-12-09 08:35:27 +01:00
SkyTNT
f242eba4fd Fix lpw stable diffusion pipeline compatibility (#1622) 2022-12-09 08:30:26 +01:00
anton-
3faf204c49 Release: v0.10.0 2022-12-08 19:24:10 +01:00
Suraj Patil
5383188c7e StableDiffusionDepth2ImgPipeline (#1531)
* begin depth pipeline

* add depth estimation model

* fix prepare_depth_mask

* add a comment about autocast

* copied from, quality, cleanup

* begin tests

* handle tensors

* norm image tensor

* fix batch size

* fix tests

* fix enable_sequential_cpu_offload

* fix save load

* fix test_save_load_float16

* fix test_save_load_optional_components

* fix test_float16_inference

* fix test_cpu_offload_forward_pass

* fix test_dict_tuple_outputs_equivalent

* up

* fix fast tests

* fix test_stable_diffusion_img2img_multiple_init_images

* fix few more fast tests

* don't use device map for DPT

* fix test_stable_diffusion_pipeline_with_sequential_cpu_offloading

* accept external depth maps

* prepare_depth_mask -> prepare_depth_map

* fix file name

* fix file name

* quality

* check transformers version

* fix test names

* use skipif

* fix import

* add docs

* skip tests on mps

* correct version

* uP

* Update docs/source/api/pipelines/stable_diffusion_2.mdx

* fix fix-copies

* fix fix-copies

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: anton- <anton@huggingface.co>
2022-12-08 18:25:12 +01:00
Anton Lozhkov
dbe0719246 Fix PyCharm/VSCode static type checking for dummy objects (#1596)
* Fix PyCharm/VSCode static type checking for dummy objects

* Re-add dummies

* Fix AudioDiffusion imports

* fix import

* fix import

* Update utils/check_dummies.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update src/diffusers/utils/import_utils.py

* Update src/diffusers/__init__.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/__init__.py

* fix double import

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-12-08 14:02:11 +01:00
Anton Lozhkov
03566d8689 Delete hi 2022-12-08 13:07:25 +01:00
Suraj Patil
a934e5bc6c [Versatile Diffusion] add upcast_attention (#1605)
add upcast_attention arg
2022-12-08 13:03:32 +01:00
Patrick von Platen
a643c6300e [K Diffusion] Add k diffusion sampler natively (#1603)
* uP

* uP
2022-12-08 12:48:37 +01:00
Ben Sherman
326de41915 Trivial fix for undefined symbol in train_dreambooth.py (#1598)
easy fix for undefined name in train_dreambooth.py

import_model_class_from_model_name_or_path loads a pretrained model
and refers to args.revision in a context where args is undefined. I modified
the function to take revision as an argument and modified the invocation
of the function to pass in the revision from args. Seems like this was caused
by a cut and paste.
2022-12-07 21:39:48 +01:00
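The shape of the fix described above, simplified relative to the real script: pass `revision` in as a parameter instead of reading the undefined global `args`.

```python
from transformers import PretrainedConfig

def import_model_class_from_model_name_or_path(pretrained_model_name_or_path: str, revision: str):
    # `revision` is now an explicit argument.
    config = PretrainedConfig.from_pretrained(
        pretrained_model_name_or_path, subfolder="text_encoder", revision=revision
    )
    if config.architectures[0] == "CLIPTextModel":
        from transformers import CLIPTextModel

        return CLIPTextModel
    raise ValueError(f"{config.architectures[0]} is not supported.")

# Call site passes the CLI value explicitly, e.g.:
# text_encoder_cls = import_model_class_from_model_name_or_path(
#     args.pretrained_model_name_or_path, args.revision
# )
```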
Anton Lozhkov
eb1abee693 [ONNX] Fix flaky tests (#1593)
* [ONNX] Fix flaky tests

* revert
2022-12-07 19:53:13 +01:00
Pedro Cuenca
5e0369219f Make cross-attention check more robust (#1560)
* Make cross-attention check more robust.

* Fix copies.
2022-12-07 18:33:29 +01:00
Nathan Lambert
bea7eb4314 Update RL docs for better sharing / adding models (#1563)
* init docs update

* style

* fix bad colab formatting, add pipeline comment

* update todo
2022-12-07 09:08:12 -08:00
Randolph-zeng
ca68ab3eef Update scheduling_repaint.py (#1582)
* Update scheduling_repaint.py

* update the expected image

Co-authored-by: anton- <anton@huggingface.co>
2022-12-07 17:41:07 +01:00
Suraj Patil
ced7c9601a fix upcast in slice attention (#1591)
* fix upcast in slice attention

* fix dtype

* add test

* fix test
2022-12-07 15:14:34 +01:00
Cheng Lu
8e74efad01 Add Singlestep DPM-Solver (singlestep high-order schedulers) (#1442)
* add singlestep dpmsolver

* fix a style typo

* fix a style typo

* add docs

* finish

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-12-07 15:03:58 +01:00
Pedro Cuenca
6a7f1f0965 Flax: avoid recompilation when params change (#1096)
* Do not recompile when guidance_scale changes.

* Remove debug for simplicity.

* make style

* Make guidance_scale an array.

* Make DEBUG a constant to avoid passing it down.

* Add comments for clarification.
2022-12-07 14:50:55 +01:00
Suraj Patil
170ebd288f [UNet2DConditionModel] add an option to upcast attention to fp32 (#1590)
upcast attention
2022-12-07 14:36:22 +01:00
Anton Lozhkov
dc87f526d4 Fix common tests for FP16 (#1588)
* Fix common tests for FP16

* revert
2022-12-07 14:09:51 +01:00
Fantasy-Studio
d9b5b43d46 Correct order height & width in pipeline_paint_by_example.py (#1589)
Update pipeline_paint_by_example.py
2022-12-07 13:40:56 +01:00
Anton Lozhkov
bb2d7cacc0 Add from_pretrained telemetry (#1461)
* Add from_pretrained usage logging

* Add classes

* add a telemetry notice

* macos
2022-12-07 11:56:21 +01:00
Patrick von Platen
4f3ddb6cca [Paint by Example] Better default for image width (#1587) 2022-12-07 11:43:28 +01:00
SkyTNT
4eb9ad0d1c [Community Pipeline] fix lpw_stable_diffusion (#1570)
* fix lpw_stable_diffusion

* rollback preprocess_mask resample
2022-12-07 11:20:01 +01:00
Patrick von Platen
896c98a2ae Add paint by example (#1533)
* add paint by example

* make loading possible

* up

* Update src/diffusers/models/attention.py

* up

* finalize weight structure

* make example work

* make it work

* up

* up

* fix

* del

* add

* update

* Apply suggestions from code review

* correct transformer 2d

* finish

* up

* up

* up

* up

* fix

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Apply suggestions from code review

* up

* finish

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-07 11:06:30 +01:00
Anton Lozhkov
02d83c9ff1 Standardize fast pipeline tests with PipelineTestMixin (#1526)
* [WIP] Standardize fast pipeline tests with PipelineTestMixin

* refactor the sd tests a bit

* add more common tests

* add xformers

* add progressbar test

* cleanup

* upd fp16

* CycleDiffusionPipelineFastTests

* DanceDiffusionPipelineFastTests

* AltDiffusionPipelineFastTests

* StableDiffusion2PipelineFastTests

* StableDiffusion2InpaintPipelineFastTests

* StableDiffusionImageVariationPipelineFastTests

* StableDiffusionImg2ImgPipelineFastTests

* StableDiffusionInpaintPipelineFastTests

* remove unused mixins

* quality

* add missing inits

* try to fix mps tests

* fix mps tests

* add mps warmups

* skip for some pipelines

* style

* Update tests/test_pipelines_common.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-12-06 18:35:30 +01:00
Suraj Patil
9e1102990a [dreambooth] make collate_fn global (#1547)
make collate_fn global
2022-12-06 14:41:53 +01:00
Suraj Patil
c228331068 [examples] add check_min_version (#1550)
* add check_min_version for examples

* move __version__ to the top

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* fix comment

* fix error_message

* adapt the install message

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-06 14:36:50 +01:00
Patrick von Platen
ae4112d2bb Mega community pipeline (#1561)
* Mega community pipeline

* fix
2022-12-06 11:18:53 +01:00
Will Berman
af04479e85 [docs] [dreambooth training] default accelerate config (#1564) 2022-12-05 18:24:32 -08:00
Patrick von Platen
9a52e33eb6 Merge branch 'main' of https://github.com/huggingface/diffusers into main 2022-12-05 19:38:05 +00:00
Patrick von Platen
c524fd8589 correct librosa import 2022-12-05 19:37:46 +00:00
Pedro Cuenca
2cfdf37537 Fix typo (#1558)
* Fix typo in pipeline_stable_diffusion.py

Fixes a typo in a warning message

* Fix copies.

* Fix copies

Co-authored-by: Scott <scott@scottinallca.ps>
2022-12-05 20:31:35 +01:00
Patrick von Platen
62b497c418 [Docs] Correct docs (#1554) 2022-12-05 19:54:20 +01:00
Patrick von Platen
922d56a19c Correct type from int to str in conversion script sd 2022-12-05 18:51:29 +00:00
Patrick von Platen
ae854746ab [Community download] Fix cache dir (#1555)
* [Community download] Fix cache dir

* up
2022-12-05 18:52:55 +01:00
Robert Dargavel Smith
48d0123f0f add AudioDiffusionPipeline and LatentAudioDiffusionPipeline #1334 (#1426)
* add AudioDiffusionPipeline and LatentAudioDiffusionPipeline

* add docs to toc

* fix tests

* fix tests

* fix tests

* fix tests

* fix tests

* Update pr_tests.yml

Fix tests

* parent 499ff34b3e
author teticio <teticio@gmail.com> 1668765652 +0000
committer teticio <teticio@gmail.com> 1669041721 +0000

parent 499ff34b3e
author teticio <teticio@gmail.com> 1668765652 +0000
committer teticio <teticio@gmail.com> 1669041704 +0000

add colab notebook

[Flax] Fix loading scheduler from subfolder (#1319)

[FLAX] Fix loading scheduler from subfolder

Fix/Enable all schedulers for in-painting (#1331)

* inpaint fix k lms

* onnox as well

* up

Correct path to scheduler (#1322)

* [Examples] Correct path

* uP

Avoid nested fix-copies (#1332)

* Avoid nested `# Copied from` statements during `make fix-copies`

* style

Fix img2img speed with LMS-Discrete Scheduler (#896)

Casting `self.sigmas` into a different dtype (the one of original_samples) is not advisable. In my img2img pipeline this leads to a long running time in the  `integrate.quad` call later on- by long I mean more than 10x slower.

Co-authored-by: Anton Lozhkov <anton@huggingface.co>

Fix the order of casts for onnx inpainting (#1338)

Legacy Inpainting Pipeline for Onnx Models (#1237)

* Add legacy inpainting pipeline compatibility for onnx

* remove commented out line

* Add onnx legacy inpainting test

* Fix slow decorators

* pep8 styling

* isort styling

* dummy object

* ordering consistency

* style

* docstring styles

* Refactor common prompt encoding pattern

* Update tests to permanent repository home

* support all available schedulers until ONNX IO binding is available

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

* updated styling from PR suggested feedback

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

Jax infer support negative prompt (#1337)

* support negative prompts in sd jax pipeline

* pass batched neg_prompt

* only encode when negative prompt is None

Co-authored-by: Juan Acevedo <jfacevedo@google.com>

Update README.md: Minor change to Imagic code snippet, missing dir error (#1347)

Minor change to Imagic Readme

Missing dir causes an error when running the example code.

make style

change the sample model (#1352)

* Update alt_diffusion.mdx

* Update alt_diffusion.mdx

Add bit diffusion [WIP] (#971)

* Create bit_diffusion.py

Bit diffusion based on the paper, arXiv:2208.04202, Chen2022AnalogBG

* adding bit diffusion to new branch

ran tests

* tests

* tests

* tests

* tests

* removed test folders + added to README

* Update README.md

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* move Mel to module in pipeline construction, make librosa optional

* fix imports

* fix copy & paste error in comment

* fix style

* add missing register_to_config

* fix class docstrings

* fix class docstrings

* tweak docstrings

* tweak docstrings

* update slow test

* put trailing commas back

* respect alphabetical order

* remove LatentAudioDiffusion, make vqvae optional

* move Mel from models back to pipelines :-)

* allow loading of pretrained audiodiffusion models

* fix tests

* fix dummies

* remove reference to latent_audio_diffusion in docs

* unused import

* inherit from SchedulerMixin to make loadable

* Apply suggestions from code review

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-12-05 18:06:30 +01:00
Patrick von Platen
459b8ca81a Research folder (#1553)
* Research folder

* Update examples/research_projects/README.md

* up
2022-12-05 17:58:35 +01:00
Suraj Patil
bce65cd13a [refactor] make set_attention_slice recursive (#1532)
* make attn slice recursive

* remove set_attention_slice from blocks

* fix copies

* make enable_attention_slicing base class method of DiffusionPipeline

* fix set_attention_slice

* fix set_attention_slice

* fix copies

* add tests

* up

* up

* up

* update

* up

* uP

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-12-05 17:31:04 +01:00
Adalberto
e289998932 fix mask discrepancies in train_dreambooth_inpaint (#1529)
The mask and instance image were being cropped in different ways without --center_crop, causing the model to learn to ignore the mask in some cases. This PR fixes that and generates more consistent results.
2022-12-05 17:26:36 +01:00
Suraj Patil
634be6e53d [examples] use from_pretrained to load scheduler (#1549)
use from_pretrained to load scheduler
2022-12-05 15:32:24 +01:00
allo-
d1bcbf38ca [textual_inversion] Add an option for only saving the embeddings (#781)
[textual_inversion] Add an option to only save embeddings

Add a command line option --only_save_embeds to the example script so that
the full model is not saved. Only the learned embeddings are saved, and they
can be added to the original model at runtime in the same way they are
created in the training script.
Saving the full model is forced when --push_to_hub is used. (Implements #759)
2022-12-05 14:45:13 +01:00
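A hypothetical helper sketching what `--only_save_embeds` amounts to (the function name is made up; the real script differs in detail): persist only the learned embedding vector instead of the whole fine-tuned model.

```python
import torch

def save_learned_embeddings(text_encoder, placeholder_token, placeholder_token_id, path):
    # Extract just the embedding row for the learned placeholder token.
    embeds = text_encoder.get_input_embeddings().weight[placeholder_token_id]
    torch.save({placeholder_token: embeds.detach().cpu()}, path)
```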
Patrick von Platen
df7cd5fe3f Update bug-report.yml 2022-12-05 14:39:35 +01:00
Naga Sai Abhinay
c28d6945b8 [Community Pipeline] Checkpoint Merger based on Automatic1111 (#1472)
* Add checkpoint_merger pipeline

* Added missing docs for a parameter.

* Formatting fixes.

* Fixed code quality issues.

* Bug fix: Off by 1 index

* Added docs for pipeline
2022-12-05 14:36:55 +01:00
Patrick von Platen
5177e65ff0 Update bug-report.yml 2022-12-05 14:17:04 +01:00
Patrick von Platen
60ac5fc235 Update bug-report.yml 2022-12-05 14:13:02 +01:00
Patrick von Platen
19b01749f0 Update bug-report.yml 2022-12-05 14:10:25 +01:00
Patrick von Platen
a980ef2f08 Update bug-report.yml (#1548)
* Update bug-report.yml

* Update bug-report.yml

* Update bug-report.yml
2022-12-05 14:03:54 +01:00
Patrick von Platen
7932971542 [Upscaling] Fix batch size (#1525) 2022-12-05 13:28:55 +01:00
Benjamin Lefaudeux
720dbfc985 Compute embedding distances with torch.cdist (#1459)
small but mighty
2022-12-05 12:37:05 +01:00
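What the switch to `torch.cdist` looks like, with illustrative shapes: all pairwise L2 distances in one call instead of a hand-rolled subtraction/norm loop.

```python
import torch

text_embeds = torch.randn(1, 768)
image_embeds = torch.randn(8, 768)

# Pairwise Euclidean distances between every text/image embedding pair.
distances = torch.cdist(text_embeds, image_embeds, p=2)
print(distances.shape)  # torch.Size([1, 8])
```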
Patrick von Platen
513fc68104 [Stable Diffusion Inpaint] Allow tensor as input image & mask (#1527)
up
2022-12-05 12:18:02 +01:00
Anton Lozhkov
cc22bda5f6 [CI] Add slow MPS tests (#1104)
* [CI] Add slow MPS tests

* fix yml

* temporarily resolve caching

* Tests: fix mps crashes.

* Skip test_load_pipeline_from_git on mps.

Not compatible with float16.

* Increase tolerance, use CPU generator, alt. slices.

* Move to nightly

* style

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-05 11:50:24 +01:00
Ilmari Heikkinen
daebee0963 Add xformers attention to VAE (#1507)
* Add xformers attention to VAE

* Simplify VAE xformers code

* Update src/diffusers/models/attention.py

Co-authored-by: Ilmari Heikkinen <ilmari@fhtr.org>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-12-03 15:08:11 +01:00
Matthieu Bizien
ae368e42d2 [Proposal] Support saving to safetensors (#1494)
* Add parameter safe_serialization to DiffusionPipeline.save_pretrained

* Add option safe_serialization on ModelMixin.save_pretrained

* Add test test_save_safe_serialization

* Black

* Re-trigger the CI

* Fix doc-builder

* Validate files are saved as safetensor in test_save_safe_serialization
2022-12-02 18:33:16 +01:00
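A minimal sketch of the new flag; the output path and model id are illustrative.

```python
# safe_serialization=True writes the weights as .safetensors instead of pickled .bin files.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")  # illustrative
pipe.save_pretrained("./sd-safetensors", safe_serialization=True)
```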
Patrick von Platen
cf4664e885 fix tests 2022-12-02 17:27:58 +00:00
Patrick von Platen
7222a8eadf make style 2022-12-02 17:18:50 +00:00
bachr
155d272cc1 Update FlaxLMSDiscreteScheduler (#1474)
- Add the missing `scale_model_input` method to `FlaxLMSDiscreteScheduler`
- Use `jnp.append` for appending to `state.derivatives`
- Use `jnp.delete` to pop from `state.derivatives`
2022-12-02 18:18:30 +01:00
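A minimal sketch of the functional update pattern the commit describes, assuming the derivatives history lives in an immutable JAX array; variable names are illustrative.

```python
# Flax scheduler state is immutable, so appending/popping derivatives returns new arrays.
import jax.numpy as jnp

derivatives = jnp.zeros((0,))                          # empty history
new_derivative = jnp.array(0.5)

derivatives = jnp.append(derivatives, new_derivative)  # "push"
if derivatives.shape[0] > 4:                           # keep only the most recent entries
    derivatives = jnp.delete(derivatives, 0)           # "pop" the oldest
```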
Adalberto
2b30b1090f Create train_dreambooth_inpaint.py (#1091)
* Create train_dreambooth_inpaint.py

train_dreambooth.py adapted to work with the inpaint model, generating random masks during training

* Update train_dreambooth_inpaint.py

refactored train_dreambooth_inpaint with black

* Update train_dreambooth_inpaint.py

* Update train_dreambooth_inpaint.py

* Update train_dreambooth_inpaint.py

Fix prior preservation

* add instructions to readme, fix SD2 compatibility
2022-12-02 18:06:57 +01:00
Antoine Bouthors
3ad49eeedd Fixed mask+masked_image in sd inpaint pipeline (#1516)
* Fixed mask+masked_image in sd inpaint pipeline

Those were left unset when inputs are not PIL images

* Fixed formatting
2022-12-02 17:51:51 +01:00
Patrick von Platen
769f0be8fb Finalize 2nd order schedulers (#1503)
* up

* up

* finish

* finish

* up

* up

* finish
2022-12-02 16:38:35 +01:00
Pedro Gabriel Gengo Lourenço
4f596599f4 Fix training docs to install datasets (#1476)
Fixed doc to install from training packages
2022-12-02 15:52:04 +01:00
Dhruv Naik
f57a2e0745 Fix Imagic example (#1520)
fix typo, remove incorrect arguments from .train()
2022-12-02 15:06:04 +01:00
Pedro Cuenca
3ceaa280bd Do not use torch.long in mps (#1488)
* Do not use torch.long in mps

Addresses #1056.

* Use torch.int instead of float.

* Propagate changes.

* Do not silently change float -> int.

* Propagate changes.

* Apply suggestions from code review

Co-authored-by: Anton Lozhkov <anton@huggingface.co>

Co-authored-by: Anton Lozhkov <anton@huggingface.co>
2022-12-02 13:10:17 +01:00
Benjamin Lefaudeux
a816a87a09 [refactor] Making the xformers mem-efficient attention activation recursive (#1493)
* Moving the mem efficiient attention activation to the top + recursive

* black, too bad there's no pre-commit ?

Co-authored-by: Benjamin Lefaudeux <benjamin@photoroom.com>
2022-12-02 12:30:01 +01:00
Patrick von Platen
f21415d1d9 Update conversion script to correctly handle SD 2 (#1511)
* Conversion SD 2

* finish
2022-12-02 12:28:01 +01:00
Patrick von Platen
22b9cb086b [From pretrained] Allow returning local path (#1450)
Allow returning local path
2022-12-02 12:26:39 +01:00
Will Berman
25f850a23b [docs] [dreambooth training] num_class_images clarification (#1508) 2022-12-02 12:12:28 +01:00
Will Berman
b25ae2e6ab [docs] [dreambooth training] accelerate.utils.write_basic_config (#1513) 2022-12-02 12:11:18 +01:00
Suraj Patil
0f1c24664c fix heun scheduler (#1512) 2022-12-01 22:39:57 +01:00
Anton Lozhkov
e65b71aba4 Add an explicit --image_size to the conversion script (#1509)
* Add an explicit `--image_size` to the conversion script

* style
2022-12-01 19:22:48 +01:00
Akash Gokul
a6a25ceb61 Fix Flax flip_sin_to_cos (#1369)
* Fix Flax flip_sin_to_cos

* Adding flip_sin_to_cos

Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
2022-12-01 18:57:01 +01:00
Suraj Patil
b85bb0753e support v prediction in other schedulers (#1505)
* support v prediction in other schedulers

* v heun

* add tests for v pred

* fix tests

* fix test euler a

* v ddpm
2022-12-01 18:10:39 +01:00
fboulnois
52eb0348e5 Standardize on using image argument in all pipelines (#1361)
* feat: switch core pipelines to use image arg

* test: update tests for core pipelines

* feat: switch examples to use image arg

* docs: update docs to use image arg

* style: format code using black and doc-builder

* fix: deprecate use of init_image in all pipelines
2022-12-01 16:55:22 +01:00
Suraj Patil
2bbf8b67a7 simplify AttentionBlock (#1492) 2022-12-01 16:40:59 +01:00
Patrick von Platen
5a5bf7ef5a [Deprecate] Correct stacklevel (#1483)
* Correct stacklevel

* fix
2022-12-01 16:28:10 +01:00
Anton Lozhkov
9276b1e148 Replace deprecated hub utils in train_unconditional_ort (#1504)
* Replace deprecated hub utils in `train_unconditional_ort`

* typo
2022-12-01 16:00:52 +01:00
regisss
2579d42158 Add doc for Stable Diffusion on Habana Gaudi (#1496)
* Add doc for Stable Diffusion on Habana Gaudi

* Make style

* Add benchmark

* Center-align columns in the benchmark table
2022-12-01 15:43:48 +01:00
Anton Lozhkov
999044596a Bump to 0.10.0.dev0 + deprecations (#1490) 2022-11-30 15:27:56 +01:00
Pedro Cuenca
eeeb28a9ad Remove reminder comment (#1489)
Remove reminder comment.
2022-11-30 14:59:54 +01:00
Patrick von Platen
c05356497a Add better docs xformers (#1487)
* Add better docs xformers

* update

* Apply suggestions from code review

* fix
2022-11-30 13:57:45 +01:00
Patrick von Platen
1d4ad34af0 [Dreambooth] Make compatible with alt diffusion (#1470)
* [Dreambooth] Make compatible with alt diffusion

* make style

* add example
2022-11-30 13:48:17 +01:00
Patrick von Platen
20ce68f945 Fix dtype model loading (#1449)
* Add test

* up

* no bfloat16 for mps

* fix

* rename test
2022-11-30 11:31:50 +01:00
Patrick von Platen
110ffe2589 Allow saving trained betas (#1468) 2022-11-30 10:05:51 +01:00
Anton Lozhkov
0b7225e918 Add ort_nightly_directml to the onnxruntime candidates (#1458)
* Add `ort_nightly_directml` to the `onnxruntime` candidates

* style
2022-11-29 14:00:41 +01:00
Anton Lozhkov
db7b7bd983 [Train unconditional] Unwrap model before EMA (#1469) 2022-11-29 13:45:42 +01:00
Rohan Taori
6a0a312370 Fix bug in half precision for DPMSolverMultistepScheduler (#1349)
* cast to float for quantile method

* add fp16 test for DPMSolverMultistepScheduler fix

* formatting update
2022-11-29 13:29:23 +01:00
Ilmari Heikkinen
c28d3c82ce StableDiffusion: Decode latents separately to run larger batches (#1150)
* StableDiffusion: Decode latents separately to run larger batches

* Move VAE sliced decode under enable_vae_sliced_decode and vae.enable_sliced_decode

* Rename sliced_decode to slicing

* fix whitespace

* fix quality check and repository consistency

* VAE slicing tests and documentation

* API doc hooks for VAE slicing

* reformat vae slicing tests

* Skip VAE slicing for one-image batches

* Documentation tweaks for VAE slicing

Co-authored-by: Ilmari Heikkinen <ilmari@fhtr.org>
2022-11-29 13:28:14 +01:00
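A usage sketch of the slicing API the commit introduces; model id, prompt, and batch size are illustrative.

```python
# With VAE slicing enabled, latents are decoded one image at a time,
# so large batches no longer spike decoder memory.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16  # illustrative checkpoint
).to("cuda")
pipe.enable_vae_slicing()
images = pipe(["a photo of an astronaut riding a horse"] * 8).images
```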
Alex McKinney
bcb6cc16df Updates Image to Image Inpainting community pipeline README (#1370)
* updates img2img_inpainting README

* Adds example image to community pipeline README
2022-11-29 13:17:22 +01:00
Pedro Cuenca
4d1e4e24e5 Flax support for Stable Diffusion 2 (#1423)
* Flax: start adapting to Stable Diffusion 2

* More changes.

* attention_head_dim can be a tuple.

* Fix typos

* Add simple SD 2 integration test.

Slice values taken from my Ampere GPU.

* Add simple UNet integration tests for Flax.

Note that the expected values are taken from the PyTorch results. This
ensures the Flax and PyTorch versions are not too far off.

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Typos and style

* Tests: verify jax is available.

* Style

* Make flake happy

* Remove typo.

* Simple Flax SD 2 pipeline tests.

* Import order

* Remove unused import.

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: @camenduru
2022-11-29 12:33:21 +01:00
Patrick von Platen
a808a85390 fix slow tests (#1467) 2022-11-29 11:48:57 +01:00
Patrick von Platen
4c54519e1a Add 2nd order heun scheduler (#1336)
* Add heun

* Finish first version of heun

* remove bogus

* finish

* finish

* improve

* up

* up

* fix more

* change progress bar

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py

* finish

* up

* up

* up
2022-11-28 22:56:28 +01:00
Pedro Cuenca
25f11424f6 Ensure Flax pipeline always returns numpy array (#1435)
* Ensure Flax pipeline always returns numpy array.

* Clarify documentation.
2022-11-28 18:02:13 +01:00
Pedro Cuenca
89300131d2 Fix Flax from_pt (#1436)
Fix Flax `from_pt`.

It worked for models but not for pipelines.
Accidentally broken in #1107.
2022-11-28 18:01:29 +01:00
Suraj Patil
6c56f05097 v-prediction training support (#1455)
* add get_velocity

* add v prediction for training

* fix saving

* add revision arg

* fix saving

* save checkpoints dreambooth

* fix saving embeds

* add instruction in readme

* quality

* noise_pred -> model_pred
2022-11-28 17:46:54 +01:00
Patrick von Platen
77fc197f70 Speed up test and remove kwargs from call (#1446)
Remove kwargs from call
2022-11-28 17:28:19 +01:00
Anton Lozhkov
edf22c052e Hotfix for AttributeErrors in OnnxStableDiffusionInpaintPipelineLegacy (#1448) 2022-11-28 14:18:14 +01:00
Nicolas Patry
5755d16868 [Proposal] Support loading from safetensors if file is present. (#1357)
* [Proposal] Support loading from safetensors if file is present.

* Style.

* Fix.

* Adding some test to check loading logic.

+ modify download logic to not download pytorch file if not necessary.

* Fixing the logic.

* Adressing comments.

* factor out into a function.

* Remove dead function.

* Typo.

* Extra fetch only if safetensors is there.

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-11-28 10:39:42 +01:00
anton-
6b02323a60 Release: v0.9.0 2022-11-25 17:47:36 +01:00
Kashif Rasul
462a79d39a [Docs] fixed some typos (#1425)
fixed typos
2022-11-25 17:44:07 +01:00
Patrick von Platen
6883294d44 SD2 docs (#1424)
* up

* up

* up

* up
2022-11-25 17:23:21 +01:00
Kashif Rasul
b9e921feea added initial v-pred support to DPM-solver (#1421)
* added initial v-pred support to DPM-solver

* fix sign

* added v_prediction to flax

* fixed typo
2022-11-25 17:12:58 +01:00
Patrick von Platen
7684518377 Merge branch 'main' of https://github.com/huggingface/diffusers into main 2022-11-25 15:15:09 +00:00
Patrick von Platen
520bb082be fixes tests 2022-11-25 15:15:05 +00:00
Suraj Patil
9ec5084a9c StableDiffusionUpscalePipeline (#1396)
* StableDiffusionUpscalePipeline

* fix a few things

* make it better

* fix image batching

* run vae in fp32

* fix docstr

* resize to mul of 64

* doc

* remove safety_checker

* add max_noise_level

* fix Copied

* begin tests

* slow tests

* default max_noise_level

* remove kwargs

* doc

* fix

* fix fast tests

* fix fast tests

* no sf

* don't offload vae

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-11-25 16:13:16 +01:00
Anton Lozhkov
02aa4ef12e Add tests for Stable Diffusion 2 V-prediction 768x768 (#1420) 2022-11-25 15:14:13 +01:00
Patrick von Platen
8faa822ddc Allow to set config params directly in init (#1419)
* fix

* fix deprecated kwargs logic

* add tests

* finish
2022-11-25 15:07:09 +01:00
Anton Lozhkov
86aa747da9 Fix ONNX conversion and inference (#1416) 2022-11-25 14:51:17 +01:00
Pedro Cuenca
d52388f486 Deprecate predict_epsilon (#1393)
* Adapt ddpm, ddpmsolver to prediction_type.

* Deprecate predict_epsilon in __init__.

* Bring FlaxDDIMScheduler up to date with DDIMScheduler.

* Set prediction_type as an ivar for consistency.

* Convert pipeline_ddpm

* Adapt tests.

* Adapt unconditional training script.

* Adapt BitDiffusion example.

* Add missing kwargs in dpmsolver_multistep

* Ugly workaround to accept deprecated predict_epsilon when loading
schedulers using from_pretrained.

* make style

* Remove import no longer in use.

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Use config.prediction_type everywhere

* Add a couple of Flax prediction type tests.

* make style

* fix register deprecated arg

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-11-25 14:02:15 +01:00
Kashif Rasul
babfb8a020 [MPS] call contiguous after permute (#1411)
* call contiguous after permute

Fixes for MPS device

* Fix MPS UserWarning

* make style

* Revert "Fix MPS UserWarning"

This reverts commit b46c32810e.
2022-11-25 13:59:56 +01:00
Patrick von Platen
35099b207e [Versatile Diffusion] Fix remaining tests (#1418)
fix all tests
2022-11-25 13:40:41 +01:00
Patrick von Platen
2c6bc0f13b small fix 2022-11-25 12:04:15 +00:00
Patrick von Platen
2902109061 Fix all stable diffusion (#1415)
* up

* uP
2022-11-25 12:53:10 +01:00
Patrick von Platen
f26cde3dff fix clip guided (#1414) 2022-11-25 12:04:40 +01:00
Patrick von Platen
9f10c545cb Fix sample size conversion script (#1408)
up
2022-11-25 11:26:27 +01:00
Anton Lozhkov
5c10e68a1f Add SD2 inpainting integration tests (#1412)
SD2 inpainting integration tests
2022-11-25 11:25:49 +01:00
Anton Lozhkov
d50e321745 Support SD2 attention slicing (#1397)
* Support SD2 attention slicing

* Support SD2 attention slicing

* Add more copies

* Use attn_num_head_channels in blocks

* fix-copies

* Update tests

* fix imports
2022-11-24 22:42:59 +01:00
Patrick von Platen
8e2c4cd56c Deprecate sample size (#1406)
* up

* up

* fix

* uP

* more fixes

* up

* uP

* up

* up

* uP

* fix final tests
2022-11-24 22:32:44 +01:00
Anton Lozhkov
bb2c64a08c Add the new SD2 attention params to the VD text unet (#1400) 2022-11-24 21:57:27 +01:00
Patrick von Platen
05a36d5c1a Upscaling fixed (#1402)
* Upscaling fixed

* up

* more fixes

* fix

* more fixes

* finish again

* up
2022-11-24 20:33:52 +01:00
Patrick von Platen
cbfed0c256 [Config] Add optional arguments (#1395)
* Optional Components

* uP

* finish

* finish

* finish

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* up

* Update src/diffusers/pipeline_utils.py

* improve

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-11-24 20:05:41 +01:00
Patrick von Platen
e0e86b7470 Make height and width optional (#1401)
* fix

* add test

* fix test

* uP

* up

* fix some tests
2022-11-24 18:23:59 +01:00
Anton Lozhkov
81d8f4a9e1 Version 0.9.0.dev0 (#1394) 2022-11-24 14:54:29 +01:00
Suraj Patil
cecdd8bdd1 Adapt UNet2D for super-resolution (#1385)
* allow disabling self attention

* add class_embedding

* fix copies

* fix condition

* fix copies

* do_self_attention -> only_cross_attention

* fix copies

* num_classes -> num_class_embeds

* fix default value
2022-11-24 14:49:03 +01:00
Suraj Patil
30f6f44104 add v prediction (#1386)
* add v prediction

* adat euler for v pred

* velocity -> v_prediction

* simplify

* fix naming

* Update src/diffusers/schedulers/scheduling_euler_discrete.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* style

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-11-24 12:25:19 +01:00
Patrick von Platen
9f476388fa trailing . fix 2022-11-24 00:53:57 +01:00
Patrick von Platen
9479052dde fix trailing . dep object 2022-11-24 00:33:32 +01:00
Patrick von Platen
35d8186172 [Bad dependencies] Fix imports (#1382)
* fix imports

* better error

* up

* finish
2022-11-24 00:24:05 +01:00
Suraj Patil
1524122532 [Transformer2DModel] don't norm twice (#1381)
don't norm twice
2022-11-24 00:12:45 +01:00
Suraj Patil
f07a16e09b update unet2d (#1376)
* boom boom

* remove duplicate arg

* add use_linear_proj arg

* fix copies

* style

* add fast tests

* use_linear_proj -> use_linear_projection
2022-11-23 20:46:30 +01:00
anton-l
16a32c9dab Release: v0.8.0 2022-11-23 19:12:31 +01:00
Patrick von Platen
2625fb59dc [Versatile Diffusion] Add versatile diffusion model (#1283)
* up

* convert dual unet

* revert dual attn

* adapt for vd-official

* test the full pipeline

* mixed inference

* mixed inference for text2img

* add image prompting

* fix clip norm

* split text2img and img2img

* fix format

* refactor text2img

* mega pipeline

* add optimus

* refactor image var

* wip text_unet

* text unet end to end

* update tests

* reshape

* fix image to text

* add some first docs

* dual guided pipeline

* fix token ratio

* propose change

* dual transformer as a native module

* DualTransformer(nn.Module)

* DualTransformer(nn.Module)

* correct unconditional image

* save-load with mega pipeline

* remove image to text

* up

* uP

* fix

* up

* final fix

* remove_unused_weights

* test updates

* save progress

* uP

* fix dual prompts

* some fixes

* finish

* style

* finish renaming

* up

* fix

* fix

* fix

* finish

Co-authored-by: anton-l <anton@huggingface.co>
2022-11-23 19:03:45 +01:00
Suraj Patil
0eb507f2af StableDiffusionImageVariationPipeline (#1365)
* add StableDiffusionImageVariationPipeline

* add ini init

* use CLIPVisionModelWithProjection

* fix _encode_image

* add copied from

* fix copies

* add doc

* handle tensor in _encode_image

* add tests

* correct model_id

* remove copied from in enable_sequential_cpu_offload

* fix tests

* make slow tests pass

* update slow tests

* use temp model for now

* fix test_stable_diffusion_img_variation_intermediate_state

* fix test_stable_diffusion_img_variation_intermediate_state

* check for torch.Tensor

* quality

* fix name

* fix slow tests

* install transformers from source

* fix install

* fix install

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* input_image -> image

* remove deprication warnings

* fix test_stable_diffusion_img_variation_multiple_images

* make flake happy

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-11-23 14:36:39 +01:00
Suraj Patil
9e234d8048 handle fp16 in UNet2DModel (#1216)
* make sure fp16 runs well

* add fp16 test for superes

* Update src/diffusers/models/unet_2d.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* gen on cuda

* always run fast inference test on cpu

* run on cpu

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-11-23 11:13:34 +01:00
Penn
8fd3a74322 Fix using non-square images with UNet2DModel and DDIM/DDPM pipelines (#1289)
* fix non square images with UNet2DModel and DDIM/DDPM pipelines

* fix unet_2d `sample_size` docstring

* update pipeline tests for unet uncond

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-11-23 11:11:39 +01:00
regisss
44e56de9aa Replace logger.warn by logger.warning (#1366) 2022-11-22 20:44:34 +01:00
Suraj Patil
2d6d4edbbd use memory_efficient_attention by default (#1354)
* use memory_efficient_attention by default

* Update src/diffusers/models/attention.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-11-22 13:37:17 +01:00
Suraj Patil
8b84f85192 [examples] fix mixed_precision arg (#1359)
* use accelerator to check mixed_precision

* default `mixed_precision` to `None`

* pass mixed_precision to accelerate launch
2022-11-22 13:35:23 +01:00
Manuel Brack
e50c25d808 Add Safe Stable Diffusion Pipeline (#1244)
* Add pipeline_stable_diffusion_safe.py to pipelines

* Fix repository consistency

Ran make fix-copies after adding new pipeline

* Add Paper/Equation reference for parameters to doc string

* Ensure code style and quality

* Perform code refactoring

* Fix copies inherited from merge with huggingface/main

* Add docs

* Fix code style

* Fix errors in documentation

* Fix refactoring error

* remove debugging print statement

* added Safe Latent Diffusion tests

* Fix style

* Fix style

* Add pre-defined safety configurations

* Fix line-break

* fix some tests

* finish

* Change safety checker

* Add missing safety_checker.py file

* Remove unused imports

Co-authored-by: PatrickSchrML <patrick_schramowski@hotmail.de>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-11-22 11:51:30 +01:00
Patrick von Platen
182eb959e5 [Community Pipelines] K-Diffusion Pipeline (#1360)
* up

* add readme

* up

* uP
2022-11-21 18:45:50 +01:00
Birch-san
ad93593345 perf: prefer batched matmuls for attention (#1203)
perf: prefer batched matmuls for attention. added fast-path to Decoder when num_heads=1
2022-11-21 15:01:11 +01:00
Stuti R
78a6eed2d7 Add bit diffusion [WIP] (#971)
* Create bit_diffusion.py

Bit diffusion based on the paper, arXiv:2208.04202, Chen2022AnalogBG

* adding bit diffusion to new branch

ran tests

* tests

* tests

* tests

* tests

* removed test folders + added to README

* Update README.md

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-11-21 11:50:32 +01:00
shunxing1234
94b27fb8da change the sample model (#1352)
* Update alt_diffusion.mdx

* Update alt_diffusion.mdx
2022-11-21 11:28:25 +01:00
Patrick von Platen
ab1f01e634 make style 2022-11-20 19:37:28 +01:00
Patrick von Platen
2b31740d54 Merge branch 'main' of https://github.com/huggingface/diffusers 2022-11-20 19:37:14 +01:00
Victor Schmidt
3bec90ff2c Handle batches and Tensors in pipeline_stable_diffusion_inpaint.py:prepare_mask_and_masked_image (#1003)
* Handle batches and Tensors in `prepare_mask_and_masked_image`

* `blackfy`
upgrade `black`

* handle mask as `np.array`

* add docstring

* revert `black` changes with smaller line length

* missing ValueError in docstring

* raise `TypeError` for image as tensor but not mask

* typo in mask shape selection

* check for batch dim

* fix: wrong indentation

* add tests

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-11-20 19:33:09 +01:00
Ki
eb2425b88c Update README.md: Minor change to Imagic code snippet, missing dir error (#1347)
Minor change to Imagic Readme

Missing dir causes an error when running the example code.
2022-11-20 18:59:56 +01:00
Ki
44efcbda0a Update README.md: IMAGIC example code snippet misspelling (#1346)
Update README.md

Minor spelling mistake.
2022-11-20 18:56:57 +01:00
Juan Acevedo
7bbbfbfd18 Jax infer support negative prompt (#1337)
* support negative prompts in sd jax pipeline

* pass batched neg_prompt

* only encode when negative prompt is None

Co-authored-by: Juan Acevedo <jfacevedo@google.com>
2022-11-19 20:51:52 +01:00
Clayton Sims
30220905c4 Legacy Inpainting Pipeline for Onnx Models (#1237)
* Add legacy inpainting pipeline compatibility for onnx

* remove commented out line

* Add onnx legacy inpainting test

* Fix slow decorators

* pep8 styling

* isort styling

* dummy object

* ordering consistency

* style

* docstring styles

* Refactor common prompt encoding pattern

* Update tests to permanent repository home

* support all available schedulers until ONNX IO binding is available

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

* updated styling from PR suggested feedback

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
2022-11-18 16:33:12 +01:00
Anton Lozhkov
7240318179 Fix the order of casts for onnx inpainting (#1338) 2022-11-18 16:30:07 +01:00
NotNANtoN
aa2ce41b99 Fix img2img speed with LMS-Discrete Scheduler (#896)
Casting `self.sigmas` into a different dtype (the one of original_samples) is not advisable. In my img2img pipeline this leads to a long running time in the  `integrate.quad` call later on- by long I mean more than 10x slower.

Co-authored-by: Anton Lozhkov <anton@huggingface.co>
2022-11-18 16:01:57 +01:00
Anton Lozhkov
81fa2d688d Avoid nested fix-copies (#1332)
* Avoid nested `# Copied from` statements during `make fix-copies`

* style
2022-11-18 15:33:57 +01:00
Patrick von Platen
195e437ac5 Correct path to scheduler (#1322)
* [Examples] Correct path

* uP
2022-11-18 12:32:49 +01:00
Patrick von Platen
fcfdd95f0b Fix/Enable all schedulers for in-painting (#1331)
* inpaint fix k lms

* onnx as well

* up
2022-11-18 12:32:17 +01:00
Simon Kirsten
5dcef138bf [Flax] Fix loading scheduler from subfolder (#1319)
[FLAX] Fix loading scheduler from subfolder
2022-11-18 11:31:07 +01:00
Nathan Lambert
0cfbb51b0c add docs for multi-modal examples (#1227)
* add docs for multi-modal

* many changes

* fix docs build

* fix links

* Update docs/source/using-diffusers/other-modalities.mdx

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-11-17 10:25:49 -08:00
Patrick von Platen
b9b7039f0e img2text Typo (#1329)
* make fix copies again

* Fix typo
2022-11-17 16:48:15 +01:00
Patrick von Platen
63b34191b9 Fix typo 2022-11-17 16:47:19 +01:00
Patrick von Platen
b21a463aa9 rg Merge branch 'main' of https://github.com/huggingface/diffusers 2022-11-17 16:46:33 +01:00
Anton Lozhkov
e05ca84f41 [ONNX] Support Euler schedulers (#1328) 2022-11-17 16:37:35 +01:00
Patrick von Platen
3b48620f5e Merge branch 'main' of https://github.com/huggingface/diffusers 2022-11-17 16:14:53 +01:00
Patrick von Platen
632dacea2f [Custom pipeline] Easier loading of local pipelines (#1327)
* [Custom pipeline] Easier loading of local pipelines

* upgrade black
2022-11-17 16:00:26 +01:00
Patrick von Platen
3fb28c44a3 xMerge branch 'main' of https://github.com/huggingface/diffusers 2022-11-17 15:50:36 +01:00
Patrick von Platen
2dd12e38af make fix copies again 2022-11-17 15:50:33 +01:00
Prathik Rao
3346ec3acd integrate ort (#1110)
* integrate ort

* use return_dict=False

* revert unet return value change

* revert unet return value change

* add note to readme

* adjust readme

* add contact

* `make style`

Co-authored-by: Prathik Rao <prathikrao@microsoft.com>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
2022-11-17 15:48:41 +01:00
Anton Lozhkov
61719bf26c Fix gpu_id (#1326) 2022-11-17 15:41:33 +01:00
Patrick von Platen
b3911f89a3 make fix copies 2022-11-17 15:06:23 +01:00
Patrick von Platen
245e9cc7ff fix make style 2022-11-17 15:03:31 +01:00
Pedro Cuenca
1138d63b51 Temporary local test for PIL_INTERPOLATION (#1317)
* Temporary local test for PIL_INTERPOLATION

* Fix examples too.
2022-11-16 18:42:21 +01:00
Dhruv Karan
afdd7bb635 [Community Pipeline] CLIPSeg + StableDiffusionInpainting (#1250)
* text inpainting

* refactor
2022-11-16 18:18:51 +01:00
Kamal Raj
aa5c4c2609 doc string args shape fix (#1243)
* doc string args shape fix

* fix styling
2022-11-16 18:03:44 +01:00
Will Berman
f1fcfdeec5 vq diffusion classifier free sampling (#1294)
* vq diffusion classifier free sampling

* correct

* uP

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-11-16 17:51:43 +01:00
dblunk88
09d0546ad0 cpu offloading: multi GPU support (#1143)
multi GPU support
2022-11-16 17:40:16 +01:00
Patrick von Platen
65d136e067 Add improved handling of pil (#1309)
* Better error message for transformers dummy

* [PIL] Better deprecation functionality

* up
2022-11-16 15:58:22 +01:00
Suraj Patil
46893adacd [AltDiffusion] add tests (#1311)
* begin tests

* fix model ids

* don't use safety checker in tests

* add im2img2 tests

* fix integration tests

* integration tests

* style

* add sentencepiece in test dep

* quality

* 4 decimal points

* fix im2img test

* increase the tol slightly
2022-11-16 15:40:26 +01:00
Mishig
327ddc8770 Revert "Update pr docs actions" (#1307)
Revert "Update pr docs actions (#1194)"

This reverts commit 32b0736d8a.
2022-11-16 11:46:13 +01:00
Patrick von Platen
af9ee8736c Better error message for transformers dummy (#1306) 2022-11-16 10:28:19 +01:00
Patrick von Platen
8a73064576 Add AltDiffusion (#1299)
* add conversion script for vae

* up

* up

* some fixes

* add text model

* use the correct config

* add docs

* move model in it's own file

* move model in its own file

* pass attenion mask to text encoder

* pass attn mask to uncond inputs

* quality

* fix image2image

* add imag2image in init

* fix import

* fix one more import

* fix import, dummy objects

* fix copied from

* up

* finish

Co-authored-by: patil-suraj <surajp815@gmail.com>
2022-11-15 21:32:26 +01:00
Patrick von Platen
4625f04bc0 remove bogus files 2022-11-15 17:34:00 +00:00
Patrick von Platen
554b374d20 Merge branch 'main' of https://github.com/huggingface/diffusers into main 2022-11-15 17:17:47 +00:00
Patrick von Platen
a0520193e1 Add Scheduler.from_pretrained and better scheduler changing (#1286)
* add conversion script for vae

* uP

* uP

* more changes

* push

* up

* finish again

* up

* up

* up

* up

* finish

* up

* uP

* up

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>

* up

* up

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-11-15 18:15:13 +01:00
Glenn 'devalias' Grant
db1cb0b1a2 [dreambooth] link to bitsandbytes readme for installation (#1229)
* add 'conda install cudatoolkit' to dreambooth 'training on 16GB' example 

fixes https://github.com/huggingface/diffusers/issues/1207

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-11-15 12:53:54 +01:00
Dhruv Naik
610e2a6fd9 Fix incorrect link to Stable Diffusion notebook (#1291)
Update README.md
2022-11-15 10:19:35 +01:00
Nan Liu
07f9e56d51 add source link to composable diffusion model (#1293) 2022-11-15 10:19:06 +01:00
Joshua Lochner
57525bb418 Fix documentation typo for UNet2DModel and UNet2DConditionModel (#1275)
* Fix documentation typo

* Fix other typo
2022-11-14 22:54:09 +01:00
Nathan Lambert
7c5fef81e0 Add UNet 1d for RL model for planning + colab (#105)
* re-add RL model code

* match model forward api

* add register_to_config, pass training tests

* fix tests, update forward outputs

* remove unused code, some comments

* add to docs

* remove extra embedding code

* unify time embedding

* remove conv1d output sequential

* remove sequential from conv1dblock

* style and deleting duplicated code

* clean files

* remove unused variables

* clean variables

* add 1d resnet block structure for downsample

* rename as unet1d

* fix renaming

* rename files

* add get_block(...) api

* unify args for model1d like model2d

* minor cleaning

* fix docs

* improve 1d resnet blocks

* fix tests, remove permuts

* fix style

* add output activation

* rename flax blocks file

* Add Value Function and corresponding example script to Diffuser implementation (#884)

* valuefunction code

* start example scripts

* missing imports

* bug fixes and placeholder example script

* add value function scheduler

* load value function from hub and get best actions in example

* very close to working example

* larger batch size for planning

* more tests

* merge unet1d changes

* wandb for debugging, use newer models

* success!

* turns out we just need more diffusion steps

* run on modal

* merge and code cleanup

* use same api for rl model

* fix variance type

* wrong normalization function

* add tests

* style

* style and quality

* edits based on comments

* style and quality

* remove unused var

* hack unet1d into a value function

* add pipeline

* fix arg order

* add pipeline to core library

* community pipeline

* fix couple shape bugs

* style

* Apply suggestions from code review

Co-authored-by: Nathan Lambert <nathan@huggingface.co>

* update post merge of scripts

* add midblock / outblock architecture

* Pipeline cleanup (#947)

* valuefunction code

* start example scripts

* missing imports

* bug fixes and placeholder example script

* add value function scheduler

* load value function from hub and get best actions in example

* very close to working example

* larger batch size for planning

* more tests

* merge unet1d changes

* wandb for debugging, use newer models

* success!

* turns out we just need more diffusion steps

* run on modal

* merge and code cleanup

* use same api for rl model

* fix variance type

* wrong normalization function

* add tests

* style

* style and quality

* edits based on comments

* style and quality

* remove unused var

* hack unet1d into a value function

* add pipeline

* fix arg order

* add pipeline to core library

* community pipeline

* fix couple shape bugs

* style

* Apply suggestions from code review

* clean up comments

* convert older script to using pipeline and add readme

* rename scripts

* style, update tests

* delete unet rl model file

* remove imports in src

Co-authored-by: Nathan Lambert <nathan@huggingface.co>

* Update src/diffusers/models/unet_1d_blocks.py

* Update tests/test_models_unet.py

* RL Cleanup v2 (#965)

* valuefunction code

* start example scripts

* missing imports

* bug fixes and placeholder example script

* add value function scheduler

* load value function from hub and get best actions in example

* very close to working example

* larger batch size for planning

* more tests

* merge unet1d changes

* wandb for debugging, use newer models

* success!

* turns out we just need more diffusion steps

* run on modal

* merge and code cleanup

* use same api for rl model

* fix variance type

* wrong normalization function

* add tests

* style

* style and quality

* edits based on comments

* style and quality

* remove unused var

* hack unet1d into a value function

* add pipeline

* fix arg order

* add pipeline to core library

* community pipeline

* fix couple shape bugs

* style

* Apply suggestions from code review

* clean up comments

* convert older script to using pipeline and add readme

* rename scripts

* style, update tests

* delete unet rl model file

* remove imports in src

* add specific vf block and update tests

* style

* Update tests/test_models_unet.py

Co-authored-by: Nathan Lambert <nathan@huggingface.co>

* fix quality in tests

* fix quality style, split test file

* fix checks / tests

* make timesteps closer to main

* unify block API

* unify forward api

* delete lines in examples

* style

* examples style

* all tests pass

* make style

* make dance_diff test pass

* Refactoring RL PR (#1200)

* init file changes

* add import utils

* finish cleaning files, imports

* remove import flags

* clean examples

* fix imports, tests for merge

* update readmes

* hotfix for tests

* quality

* fix some tests

* change defaults

* more mps test fixes

* unet1d defaults

* do not default import experimental

* defaults for tests

* fix tests

* fix-copies

* fix

* changes per Patrick's comments (#1285)

* changes per Patrick's comments

* update conversion script

* fix renaming

* skip more mps tests

* last test fix

* Update examples/rl/README.md

Co-authored-by: Ben Glickenhaus <benglickenhaus@gmail.com>
2022-11-14 13:48:48 -08:00
Patrick von Platen
d5ab55e437 Merge branch 'main' of https://github.com/huggingface/diffusers into main 2022-11-14 21:10:47 +00:00
Suraj Patil
a8d0977769 [StableDiffusionInpaintPipeline] fix batch_size for mask and masked latents (#1279)
fix bs for mask and masked latents
2022-11-14 22:03:10 +01:00
Patrick von Platen
e4ffadc429 Merge branch 'main' of https://github.com/huggingface/diffusers into main 2022-11-14 21:01:39 +00:00
Patrick von Platen
ec7c8d32b0 add conversion script for vae 2022-11-14 19:43:17 +00:00
Partho
c9b3463703 Fix wrong link in text2img fine-tuning documentation (#1282)
fix link typo
2022-11-14 20:42:14 +01:00
Lime-Cakes
33d7e89c42 Edited attention.py for older xformers (#1270)
Older versions of xformers require query, key, and value to be contiguous, so this calls .contiguous() on q/k/v before passing them to xformers.
2022-11-14 13:35:47 +01:00
Patrick von Platen
b3c5e086e5 Finalize stable diffusion refactor (#1269)
* finish

* cleaner

* more fixes

* refactor

* make fix copies

* refactor cycle diffusion

* finish

* finish2

* Apply suggestions from code review
2022-11-13 23:54:30 +01:00
Patrick von Platen
4c660d16d0 [Stable Diffusion] Fix padding / truncation (#1226)
* [Stable Diffusion] Fix padding / truncation

* finish
2022-11-13 20:19:55 +01:00
ruanrz
8171566163 [Docs] improve img2img example (#1193)
update img2img example
2022-11-11 12:28:20 +01:00
Pedro Cuenca
045157a46f Fix Flax usage comments (#1211)
* Fix Flax usage comments (they didn't work).

* Spell out dtype

* make style
2022-11-10 16:00:17 +01:00
apolinario
a09d47532d Add a reference to the name 'Sampler' (#1172)
* Add a reference to the name 'Sampler'

- Help people who are familiar with the name 'samplers' understand that we call them schedulers
- Better SEO, so people googling for samplers can also find our library

* Update README.md with a reference to 'Sampler'
2022-11-10 14:37:42 +01:00
Anton Lozhkov
2e980ac9a0 [Tests] Adjust TPU test values (#1233)
* [Tests] Adjust TPU test values

* slow tests

* remaining refs
2022-11-10 00:44:42 +01:00
Anton Lozhkov
0feb21a18c [Tests] Fix mps+generator fast tests (#1230)
* [Tests] Fix mps+generator fast tests

* mps for Euler

* retry

* warmup issue again?

* fix reproducible initial noise

* Revert "fix reproducible initial noise"

This reverts commit f300d05cb9.

* fix reproducible initial noise

* fix device
2022-11-10 00:09:22 +01:00
Patrick von Platen
187de44352 Fix device on save/load tests 2022-11-09 22:18:14 +00:00
Anton Lozhkov
7d0c272939 Match the generator device to the pipeline for DDPM and DDIM (#1222)
* Match the generator device to the pipeline for DDPM and DDIM

* style

* fix

* update values

* fix fast tests

* trigger slow tests

* deprecate

* last value fixes

* mps fixes
2022-11-09 23:00:23 +01:00
Patrick von Platen
3d98dc763a Factor out encode text with Copied from (#1224)
* up

* more fixes

* fix

* finalize

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py

* upload models

* up
2022-11-09 22:18:57 +01:00
exo-pla-net
13f388eeb2 Improve documentation for the LPW pipeline (#1182) 2022-11-09 21:39:27 +01:00
Pedro Cuenca
af279434d0 Flax tests: don't hardcode number of devices (#1175)
Flax tests: don't hardcode number of devices.

This makes it possible to test on CPU/GPU. However, expected slices are
only checked when there are 8 devices.
2022-11-09 20:04:43 +01:00
Jesse Casey
4969f46511 apply repeat_interleave fix for mps to stable diffusion image2image pipeline (#1135)
copy from other pipeline
2022-11-09 20:01:31 +01:00
Patrick von Platen
6c0335c7f9 DDIM docs (#1219) 2022-11-09 16:02:11 +01:00
Patrick von Platen
0248541dea [Conversion] Improve conversion script (#1218)
up
2022-11-09 15:46:08 +01:00
Duong A. Nguyen
5a59f9b717 Add LDM Super Resolution pipeline (#1116)
* Add ldm super resolution pipeline

* style

* fix copies

* style

* fix doc

* Update src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion_superresolution.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion_superresolution.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion_superresolution.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Update src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion_superresolution.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* add doc

* address comments

* address comments

* fix doc

* minor

* add tests

* add tests

* load text encoder from subfolder

* fix test

* fix test

* style

* style

* handle mps latents

* unfix typo

* unfix typo

* Update tests/pipelines/latent_diffusion/test_latent_diffusion_superresolution.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* fix set_timesteps mps

* fix set_timesteps mps

* Update src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion_superresolution.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Update src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion_superresolution.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Update src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion_superresolution.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Update src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion_superresolution.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* style

* test 64x64 instead of 256x256

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-11-09 13:42:16 +01:00
Patrick von Platen
b93fe08545 [Loading] Make sure loading edge cases work (#1192)
* [Loading] Make edge cases work

* up

* finish

* up
2022-11-09 12:28:56 +01:00
Duong A. Nguyen
3f7edc5f72 Fix layer names convert LDM script (#1206)
fix script convert LDM
2022-11-09 12:08:30 +01:00
Suraj Patil
cd77a03651 [CLIPGuidedStableDiffusion] support DDIM scheduler (#1190)
add ddim in clip guided
2022-11-09 11:46:12 +01:00
camenduru
663f0c1963 [Flax] fix extra copy pasta 🍝 (#1187) 2022-11-09 11:34:15 +01:00
Patrick von Platen
6cf72a9b1e Fix slow tests (#1210)
* fix tests

* Fix more

* more
2022-11-09 11:22:12 +01:00
Anton Lozhkov
24895a1f49 Fix cpu offloading (#1177)
* Fix cpu offloading

* get offloaded devices locally for SD pipelines
2022-11-09 10:28:10 +01:00
Nathan Lambert
598ff76bbf add licenses to pipelines (#1201)
add licenses
2022-11-09 10:06:49 +01:00
Patrick von Platen
249d9bc0e7 [Scheduler] Move predict epsilon to init (#1155)
* [Scheduler] Move predict epsilon to init

* up

* uP

* uP

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* up

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-11-08 18:08:08 +01:00
Suraj Patil
5786b0e2f7 handle dtype xformers attention (#1196)
handle dtype xformers
2022-11-08 17:15:23 +01:00
Mishig
32b0736d8a Update pr docs actions (#1194) 2022-11-08 16:38:09 +01:00
Pedro Cuenca
614c182f94 Restore compatibility with deprecated StableDiffusionOnnxPipeline (#1191)
* Restore compatibility with old ONNX pipeline.

I think it broke in #552.

* Add missing attribute `vae_encoder`
2022-11-08 15:08:35 +01:00
Anton Lozhkov
11f7d6f3cc [ONNX] Improve ONNXPipeline scheduler compatibility, fix safety_checker (#1173)
* [ONNX] Improve ONNX scheduler compatibility, fix safety_checker

* typo
2022-11-08 14:39:11 +01:00
Yuta Hayashibe
555203e1fa Warning for invalid options without "--with_prior_preservation" (#1065)
* Make errors for invalid options without "--with_prior_preservation"

* Make --instance_prompt required

* Removed needless check because --instance_data_dir is marked as required

* Updated messages

* Use logger.warning instead of raise errors

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-11-08 14:31:13 +01:00
Pedro Cuenca
813744e5f3 MPS schedulers: don't use float64 (#1169)
* Schedulers: don't use float64 on mps

* Test set_timesteps() on device (float schedulers).

* SD pipeline: use device in set_timesteps.

* SD in-painting pipeline: use device in set_timesteps.

* Tests: fix mps crashes.

* Skip test_load_pipeline_from_git on mps.

Not compatible with float16.

* Use device.type instead of str in Euler schedulers.
2022-11-08 13:11:33 +01:00
Suraj Patil
5a8b356922 [DDIMScheduler] fix noise device in ddim step (#1189)
* fix noise device in ddim sched

* fix typo

* self.device -> device

* remove duplicated if

* use str device

* don't use str for device
2022-11-08 13:11:12 +01:00
Pedro Cuenca
20a05d6a50 Fix small typo (#1178)
Unless it's intentional, lol
2022-11-08 12:30:51 +01:00
Patrick von Platen
c3dcb6749b Update config.yml 2022-11-08 11:31:15 +01:00
Pedro Cuenca
fa6e5209a8 Link to Dreambooth blog post instead of W&B report (#1180)
Link to Dreambooth blog post instead of W&B report.
2022-11-07 21:59:36 +01:00
Duong A. Nguyen
ac4c695d97 [Flax examples] Load text encoder from subfolder (#1147)
load text encoder from subfolder
2022-11-07 21:26:59 +01:00
JuanCarlosPi
01733238a6 [Community Pipeline] Add multilingual stable diffusion to community pipelines (#1142)
* Add multilingual_stable_diffusion.py file

* Add multilingual stable diffusion to examples README file

* Update examples/community/README.md

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-11-07 21:11:59 +01:00
Alex McKinney
bcdb3d594c Community pipeline img2img inpainting (#1114)
* adds image-to-image inpainting with `PIL.Image.Image` inputs; the base implementation claims to support `torch.Tensor`, but it seems it would also fail in that case.

* `make style` and `make quality`

* updates community examples readme

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-11-07 21:06:52 +01:00
Patrick von Platen
72eae64d67 Fix dtype safety checker inpaint legacy (#1137)
* [Stable Diffusion Inpaint Legacy] Fiix some things

* uP
2022-11-07 20:57:45 +01:00
Patrick von Platen
de7536281a fix image docs 2022-11-07 17:25:13 +01:00
Patrick von Platen
b500df1155 [Docs] Add loading script (#1174)
* add loading script

* Apply suggestions from code review

* Apply suggestions from code review

Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>

* correct

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* uP

Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-11-07 17:15:41 +01:00
Pedro Cuenca
0dd8c6b4db Fix community pipeline links (#1162)
* Change title to match the sidebar in _toctree.

* Fix custom pipe link, add link to contribute.

* Fix community pipeline links.
2022-11-07 14:32:51 +01:00
Duong A. Nguyen
cd502b25cf Fix typo latens -> latents (#1171)
fix typo
2022-11-07 13:34:45 +01:00
Pedro Cuenca
e86a280c45 Remove warning about half precision on MPS (#1163)
Remove warning about half precision on MPS.
2022-11-07 12:27:17 +01:00
Cheng Lu
b4a1ed8544 Add multistep DPM-Solver discrete scheduler (#1132)
* add dpmsolver discrete pytorch scheduler

* fix some typos in dpm-solver pytorch

* add dpm-solver pytorch in stable-diffusion pipeline

* add jax/flax version dpm-solver

* change code style

* change code style

* add docs

* add `add_noise` method for dpmsolver

* add pytorch unit test for dpmsolver

* add dummy object for pytorch dpmsolver

* Update src/diffusers/schedulers/scheduling_dpmsolver_discrete.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Update tests/test_config.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Update tests/test_config.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* resolve the code comments

* rename the file

* change class name

* fix code style

* add auto docs for dpmsolver multistep

* add more explanations for the stabilizing trick (for steps < 15)

* delete the dummy file

* change the API name of predict_epsilon, algorithm_type and solver_type

* add compatible lists

Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-11-06 22:49:55 +01:00
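A usage sketch of swapping in the new scheduler; the model id and step count are illustrative.

```python
# DPMSolverMultistepScheduler typically reaches good quality in roughly 20-25 inference steps.
from diffusers import DPMSolverMultistepScheduler, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")  # illustrative
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
image = pipe("an oil painting of a snowy harbor", num_inference_steps=20).images[0]
```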
Pedro Cuenca
08a6dc8a58 Flax: Flip sin to cos in time embeddings (#1149)
Flip sin to cos in t embeddings.

This was assumed in the previous implementation, but now the default is
the opposite.

Fixes #1145.
2022-11-05 22:17:41 +01:00
Chen Wu (吴尘)
9d8943b7e7 Add CycleDiffusion pipeline using Stable Diffusion (#888)
* Add CycleDiffusion pipeline for Stable Diffusion

* Add the option of passing noise to DDIMScheduler

Add the option of providing the noise itself to DDIMScheduler, instead of the random seed generator.

* Update README.md

* Update README.md

* Update pipeline_stable_diffusion_cycle_diffusion.py

* Update pipeline_stable_diffusion_cycle_diffusion.py

* Update pipeline_stable_diffusion_cycle_diffusion.py

* Update pipeline_stable_diffusion_cycle_diffusion.py

* Update scheduling_ddim.py

* Update import format

* Update pipeline_stable_diffusion_cycle_diffusion.py

* Update scheduling_ddim.py

* Update src/diffusers/schedulers/scheduling_ddim.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/schedulers/scheduling_ddim.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/schedulers/scheduling_ddim.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/schedulers/scheduling_ddim.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/schedulers/scheduling_ddim.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update scheduling_ddim.py

* Update scheduling_ddim.py

* Update scheduling_ddim.py

* add two tests

* Update pipeline_stable_diffusion_cycle_diffusion.py

* Update pipeline_stable_diffusion_cycle_diffusion.py

* Update README.md

* Rename pipeline name as suggested in the latest reviewer comment

* Update test_pipelines.py

* Update test_pipelines.py

* Update test_pipelines.py

* Update pipeline_stable_diffusion_cycle_diffusion.py

* Remove the generator

This generator does not control all randomness during sampling, which can be misleading.

* Update optimal hyperparameters

* Update src/diffusers/pipelines/stable_diffusion/README.md

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/README.md

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/README.md

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Apply suggestions from code review

* uP

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_cycle_diffusion.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* up

* up

* Replace assert with ValueError

* finish docs

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-11-04 20:51:06 +01:00
Pi Esposito
1172c9634b add enable sequential cpu offloading to other stable diffusion pipelines (#1085)
* add enable sequential cpu offloading to other stable diffusion pipelines

* trigger ci

* fix styling

* interpolate before converting to device to avoid breaking when cpu_offload is enabled with fp16

Co-authored-by: Pedro Gengo  <pedro.gabriel.lourenco@hotmail.com>

* style again, I need to stop forgetting this thing

* fix inpainting bug that could cause device misalignment

Co-authored-by: Pedro Gengo  <pedro.gabriel.lourenco@hotmail.com>

* Apply suggestions from code review

Co-authored-by: Pedro Gengo  <pedro.gabriel.lourenco@hotmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-11-04 19:25:28 +01:00
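A usage sketch of the offloading call the commit extends to more pipelines; the model id is illustrative.

```python
# Sequential CPU offload keeps submodules on the CPU and moves each one to the GPU
# only while it runs, trading speed for a much smaller VRAM footprint.
import torch
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16  # illustrative checkpoint
)
pipe.enable_sequential_cpu_offload()  # no explicit .to("cuda") needed afterwards
```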
Anton Lozhkov
2fcae69f2a Bump to 0.8.0.dev0 (#1131)
* Bump to 0.8.0.dev0

* deprecate int timesteps

* style
2022-11-04 19:06:24 +01:00
SkyTNT
a480229463 [Community Pipeline] lpw_stable_diffusion: add xformers_memory_efficient_attention and sequential_cpu_offload (#1130)
lpw_stable_diffusion: xformers and cpu_offload
2022-11-04 18:38:37 +01:00
Chenguo Lin
5b20d3b3d7 fix the parameter naming in self.downsamplers (#1108)
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-11-04 18:05:19 +01:00
Lewington-pitsos
2c108693cc Test precision increases (#1113)
* increase the precision of slice-based tests and make the default test case easier to single out

* increase precision of unit tests which already rely on float comparisons

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-11-04 17:54:01 +01:00
webbigdata-jp
af7b1c3bf2 fix 404 link in examples/README.md (#1136)
fix 404 link in README.md
2022-11-04 16:45:58 +01:00
Patrick von Platen
1d0f3c211e Move accelerate to a soft-dependency (#1134)
* finish

* finish

* Update src/diffusers/modeling_utils.py

* Update src/diffusers/pipeline_utils.py

Co-authored-by: Anton Lozhkov <anton@huggingface.co>

* more fixes

* fix

Co-authored-by: Anton Lozhkov <anton@huggingface.co>
2022-11-04 14:58:52 +01:00
Duong A. Nguyen
c62b3a2e7e [Flax] Fix sample batch size DreamBooth (#1129)
fix sample batch size
2022-11-04 13:49:57 +01:00
1278 changed files with 398004 additions and 23680 deletions


@@ -1,11 +1,25 @@
name: "\U0001F41B Bug Report"
description: Report a bug on diffusers
description: Report a bug on Diffusers
labels: [ "bug" ]
body:
- type: markdown
attributes:
value: |
Thanks for taking the time to fill out this bug report!
Thanks a lot for taking the time to file this issue 🤗.
Issues do not only help to improve the library, but also publicly document common problems, questions, workflows for the whole community!
Thus, issues are of the same importance as pull requests when contributing to this library ❤️.
In order to make your issue as **useful for the community as possible**, let's try to stick to some simple guidelines:
- 1. Please try to be as precise and concise as possible.
*Give your issue a fitting title. Assume that someone with very limited knowledge of Diffusers can understand your issue. Add links to the source code, documentation, other issues, pull requests etc...*
- 2. If your issue is about something not working, **always** provide a reproducible code snippet. The reader should be able to reproduce your issue by **only copy-pasting your code snippet into a Python shell**.
*The community cannot solve your issue if it cannot reproduce it. If your bug is related to training, add your training script and make everything needed to train public. Otherwise, just add a simple Python code snippet.*
- 3. Add the **minimum** amount of code / context that is needed to understand, reproduce your issue.
*Make the life of maintainers easy. `diffusers` is getting many issues every day. Make sure your issue is about one bug and one bug only. Make sure you add only the context, code needed to understand your issues - nothing more. Generally, every issue is a way of documenting this library, try to make it a good documentation entry.*
- 4. For issues related to community pipelines (i.e., the pipelines located in the `examples/community` folder), please tag the author of the pipeline in your issue thread as those pipelines are not maintained.
- type: markdown
attributes:
value: |
For more in-detail information on how to write good issues you can have a look [here](https://huggingface.co/course/chapter8/5?fw=pt).
- type: textarea
id: bug-description
attributes:
@@ -20,6 +34,8 @@ body:
label: Reproduction
description: Please provide a minimal reproducible code which we can copy/paste and reproduce the issue.
placeholder: Reproduction
validations:
required: true
- type: textarea
id: logs
attributes:
@@ -31,6 +47,60 @@ body:
attributes:
label: System Info
description: Please share your system info with us. You can run the command `diffusers-cli env` and copy-paste its output below.
placeholder: diffusers version, platform, python version, ...
placeholder: Diffusers version, platform, Python version, ...
validations:
required: true
- type: textarea
id: who-can-help
attributes:
label: Who can help?
description: |
Your issue will be replied to more quickly if you can figure out the right person to tag with @.
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
All issues are read by one of the core maintainers, so if you don't know who to tag, just leave this blank and
a core maintainer will ping the right person.
Please tag a maximum of 2 people.
Questions on DiffusionPipeline (Saving, Loading, From pretrained, ...):
Questions on pipelines:
- Stable Diffusion @yiyixuxu @DN6 @sayakpaul @patrickvonplaten
- Stable Diffusion XL @yiyixuxu @sayakpaul @DN6 @patrickvonplaten
- Kandinsky @yiyixuxu @patrickvonplaten
- ControlNet @sayakpaul @yiyixuxu @DN6 @patrickvonplaten
- T2I Adapter @sayakpaul @yiyixuxu @DN6 @patrickvonplaten
- IF @DN6 @patrickvonplaten
- Text-to-Video / Video-to-Video @DN6 @sayakpaul @patrickvonplaten
- Wuerstchen @DN6 @patrickvonplaten
- Other: @yiyixuxu @DN6
Questions on models:
- UNet @DN6 @yiyixuxu @sayakpaul @patrickvonplaten
- VAE @sayakpaul @DN6 @yiyixuxu @patrickvonplaten
- Transformers/Attention @DN6 @yiyixuxu @sayakpaul @DN6 @patrickvonplaten
Questions on Schedulers: @yiyixuxu @patrickvonplaten
Questions on LoRA: @sayakpaul @patrickvonplaten
Questions on Textual Inversion: @sayakpaul @patrickvonplaten
Questions on Training:
- DreamBooth @sayakpaul @patrickvonplaten
- Text-to-Image Fine-tuning @sayakpaul @patrickvonplaten
- Textual Inversion @sayakpaul @patrickvonplaten
- ControlNet @sayakpaul @patrickvonplaten
Questions on Tests: @DN6 @sayakpaul @yiyixuxu
Questions on Documentation: @stevhliu
Questions on JAX- and MPS-related things: @pcuenca
Questions on audio pipelines: @DN6 @patrickvonplaten
placeholder: "@Username ..."


@@ -1,7 +1,4 @@
contact_links:
- name: Forum
url: https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers/63
- name: Questions / Discussions
url: https://github.com/huggingface/diffusers/discussions
about: General usage questions and community discussions
- name: Blank issue
url: https://github.com/huggingface/diffusers/issues/new
about: Please note that the Forum is in most cases the right place for discussions


@@ -1,5 +1,5 @@
---
name: "\U0001F680 Feature request"
name: "\U0001F680 Feature Request"
about: Suggest an idea for this project
title: ''
labels: ''
@@ -8,13 +8,13 @@ assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...].
**Describe the solution you'd like**
**Describe the solution you'd like.**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
**Describe alternatives you've considered.**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
**Additional context.**
Add any other context or screenshots about the feature request here.


@@ -1,5 +1,5 @@
name: "\U0001F31F New model/pipeline/scheduler addition"
description: Submit a proposal/request to implement a new diffusion model / pipeline / scheduler
name: "\U0001F31F New Model/Pipeline/Scheduler Addition"
description: Submit a proposal/request to implement a new diffusion model/pipeline/scheduler
labels: [ "New model/pipeline/scheduler" ]
body:
@@ -19,7 +19,7 @@ body:
description: |
Please note that if the model implementation isn't available or if the weights aren't open-source, we are less likely to implement it in `diffusers`.
options:
- label: "The model implementation is available"
- label: "The model implementation is available."
- label: "The model weights are available (Only relevant if addition is not a scheduler)."
- type: textarea

.github/ISSUE_TEMPLATE/translate.md

@@ -0,0 +1,29 @@
---
name: 🌐 Translating a New Language?
about: Start a new translation effort in your language
title: '[<languageCode>] Translating docs to <languageName>'
labels: WIP
assignees: ''
---
<!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the <languageName>-speaking community 🌐.
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/diffusers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about Diffusers 🤗).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/diffusers/tree/main/docs/source).
* Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/diffusers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @stevhliu for review.
* 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers/63).
Thank you so much for your help! 🤗

.github/PULL_REQUEST_TEMPLATE.md

@@ -0,0 +1,60 @@
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/diffusers/blob/main/CONTRIBUTING.md)?
- [ ] Did you read our [philosophy doc](https://github.com/huggingface/diffusers/blob/main/PHILOSOPHY.md) (important for complex PRs)?
- [ ] Was this discussed/approved via a GitHub issue or the [forum](https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers/63)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/diffusers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/diffusers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @.
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Core library:
- Schedulers: @yiyixuxu and @patrickvonplaten
- Pipelines: @patrickvonplaten and @sayakpaul
- Training examples: @sayakpaul and @patrickvonplaten
- Docs: @stevhliu and @yiyixuxu
- JAX and MPS: @pcuenca
- Audio: @sanchit-gandhi
- General functionalities: @patrickvonplaten and @sayakpaul
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- transformers: [different repo](https://github.com/huggingface/transformers)
- safetensors: [different repo](https://github.com/huggingface/safetensors)
-->


@@ -4,7 +4,7 @@ description: Sets up miniconda in your ${RUNNER_TEMP} environment and gives you
inputs:
python-version:
description: If set to any value, dont use sudo to clean the workspace
description: If set to any value, don't use sudo to clean the workspace
required: false
type: string
default: "3.9"
@@ -27,7 +27,7 @@ runs:
- name: Get date
id: get-date
shell: bash
run: echo "::set-output name=today::$(/bin/date -u '+%Y%m%d')d"
run: echo "today=$(/bin/date -u '+%Y%m%d')d" >> $GITHUB_OUTPUT
- name: Setup miniconda cache
id: miniconda-cache
uses: actions/cache@v2
@@ -143,4 +143,4 @@ runs:
echo "There is ${AVAIL}KB free space left in $MOUNT, continue"
fi
fi
done
done

.github/workflows/benchmark.yml

@@ -0,0 +1,52 @@
name: Benchmarking tests
on:
schedule:
- cron: "30 1 1,15 * *" # every 2 weeks on the 1st and the 15th of every month at 1:30 AM
env:
DIFFUSERS_IS_CI: yes
HF_HOME: /mnt/cache
OMP_NUM_THREADS: 8
MKL_NUM_THREADS: 8
jobs:
torch_pipelines_cuda_benchmark_tests:
name: Torch Core Pipelines CUDA Benchmarking Tests
strategy:
fail-fast: false
max-parallel: 1
runs-on: [single-gpu, nvidia-gpu, a10, ci]
container:
image: diffusers/diffusers-pytorch-cuda
options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ --gpus 0
steps:
- name: Checkout diffusers
uses: actions/checkout@v3
with:
fetch-depth: 2
- name: NVIDIA-SMI
run: |
nvidia-smi
- name: Install dependencies
run: |
apt-get update && apt-get install libsndfile1-dev libgl1 -y
python -m pip install -e .[quality,test]
python -m pip install pandas
- name: Environment
run: |
python utils/print_env.py
- name: Diffusers Benchmarking
env:
HUGGING_FACE_HUB_TOKEN: ${{ secrets.DIFFUSERS_BOT_TOKEN }}
BASE_PATH: benchmark_outputs
run: |
export TOTAL_GPU_MEMORY=$(python -c "import torch; print(torch.cuda.get_device_properties(0).total_memory / (1024**3))")
cd benchmarks && mkdir ${BASE_PATH} && python run_all.py && python push_results.py
- name: Test suite reports artifacts
if: ${{ always() }}
uses: actions/upload-artifact@v2
with:
name: benchmark_test_reports
path: benchmarks/benchmark_outputs


@@ -26,6 +26,8 @@ jobs:
image-name:
- diffusers-pytorch-cpu
- diffusers-pytorch-cuda
- diffusers-pytorch-compile-cuda
- diffusers-pytorch-xformers-cuda
- diffusers-flax-cpu
- diffusers-flax-tpu
- diffusers-onnxruntime-cpu


@@ -6,12 +6,18 @@ on:
- main
- doc-builder*
- v*-release
- v*-patch
jobs:
build:
build:
uses: huggingface/doc-builder/.github/workflows/build_main_documentation.yml@main
with:
commit_sha: ${{ github.sha }}
install_libgl1: true
package: diffusers
notebook_folder: diffusers_doc
languages: en ko zh ja pt
secrets:
token: ${{ secrets.HUGGINGFACE_PUSH }}
hf_token: ${{ secrets.HF_DOC_BUILD_PUSH }}


@@ -13,4 +13,6 @@ jobs:
with:
commit_sha: ${{ github.event.pull_request.head.sha }}
pr_number: ${{ github.event.number }}
install_libgl1: true
package: diffusers
languages: en ko zh ja pt


@@ -1,13 +0,0 @@
name: Delete dev documentation
on:
pull_request:
types: [ closed ]
jobs:
delete:
uses: huggingface/doc-builder/.github/workflows/delete_doc_comment.yml@main
with:
pr_number: ${{ github.event.number }}
package: diffusers

.github/workflows/nightly_tests.yml

@@ -0,0 +1,162 @@
name: Nightly tests on main
on:
schedule:
- cron: "0 0 * * *" # every day at midnight
env:
DIFFUSERS_IS_CI: yes
HF_HOME: /mnt/cache
OMP_NUM_THREADS: 8
MKL_NUM_THREADS: 8
PYTEST_TIMEOUT: 600
RUN_SLOW: yes
RUN_NIGHTLY: yes
jobs:
run_nightly_tests:
strategy:
fail-fast: false
matrix:
config:
- name: Nightly PyTorch CUDA tests on Ubuntu
framework: pytorch
runner: docker-gpu
image: diffusers/diffusers-pytorch-cuda
report: torch_cuda
- name: Nightly Flax TPU tests on Ubuntu
framework: flax
runner: docker-tpu
image: diffusers/diffusers-flax-tpu
report: flax_tpu
- name: Nightly ONNXRuntime CUDA tests on Ubuntu
framework: onnxruntime
runner: docker-gpu
image: diffusers/diffusers-onnxruntime-cuda
report: onnx_cuda
name: ${{ matrix.config.name }}
runs-on: ${{ matrix.config.runner }}
container:
image: ${{ matrix.config.image }}
options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ ${{ matrix.config.runner == 'docker-tpu' && '--privileged' || '--gpus 0'}}
defaults:
run:
shell: bash
steps:
- name: Checkout diffusers
uses: actions/checkout@v3
with:
fetch-depth: 2
- name: NVIDIA-SMI
if: ${{ matrix.config.runner == 'docker-gpu' }}
run: |
nvidia-smi
- name: Install dependencies
run: |
python -m pip install -e .[quality,test]
python -m pip install -U git+https://github.com/huggingface/transformers
python -m pip install git+https://github.com/huggingface/accelerate
- name: Environment
run: |
python utils/print_env.py
- name: Run nightly PyTorch CUDA tests
if: ${{ matrix.config.framework == 'pytorch' }}
env:
HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
run: |
python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
-s -v -k "not Flax and not Onnx" \
--make-reports=tests_${{ matrix.config.report }} \
tests/
- name: Run nightly Flax TPU tests
if: ${{ matrix.config.framework == 'flax' }}
env:
HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
run: |
python -m pytest -n 0 \
-s -v -k "Flax" \
--make-reports=tests_${{ matrix.config.report }} \
tests/
- name: Run nightly ONNXRuntime CUDA tests
if: ${{ matrix.config.framework == 'onnxruntime' }}
env:
HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
run: |
python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
-s -v -k "Onnx" \
--make-reports=tests_${{ matrix.config.report }} \
tests/
- name: Failure short reports
if: ${{ failure() }}
run: cat reports/tests_${{ matrix.config.report }}_failures_short.txt
- name: Test suite reports artifacts
if: ${{ always() }}
uses: actions/upload-artifact@v2
with:
name: ${{ matrix.config.report }}_test_reports
path: reports
run_nightly_tests_apple_m1:
name: Nightly PyTorch MPS tests on MacOS
runs-on: [ self-hosted, apple-m1 ]
steps:
- name: Checkout diffusers
uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Clean checkout
shell: arch -arch arm64 bash {0}
run: |
git clean -fxd
- name: Setup miniconda
uses: ./.github/actions/setup-miniconda
with:
python-version: 3.9
- name: Install dependencies
shell: arch -arch arm64 bash {0}
run: |
${CONDA_RUN} python -m pip install --upgrade pip
${CONDA_RUN} python -m pip install -e .[quality,test]
${CONDA_RUN} python -m pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cpu
${CONDA_RUN} python -m pip install git+https://github.com/huggingface/accelerate
- name: Environment
shell: arch -arch arm64 bash {0}
run: |
${CONDA_RUN} python utils/print_env.py
- name: Run nightly PyTorch tests on M1 (MPS)
shell: arch -arch arm64 bash {0}
env:
HF_HOME: /System/Volumes/Data/mnt/cache
HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
run: |
${CONDA_RUN} python -m pytest -n 1 -s -v --make-reports=tests_torch_mps tests/
- name: Failure short reports
if: ${{ failure() }}
run: cat reports/tests_torch_mps_failures_short.txt
- name: Test suite reports artifacts
if: ${{ always() }}
uses: actions/upload-artifact@v2
with:
name: torch_mps_test_reports
path: reports


@@ -0,0 +1,32 @@
name: Run dependency tests
on:
pull_request:
branches:
- main
push:
branches:
- main
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
check_dependencies:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: "3.8"
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -e .
pip install pytest
- name: Check for soft dependencies
run: |
pytest tests/others/test_dependencies.py


@@ -0,0 +1,34 @@
name: Run Flax dependency tests
on:
pull_request:
branches:
- main
push:
branches:
- main
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
check_flax_dependencies:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: "3.8"
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -e .
pip install "jax[cpu]>=0.2.16,!=0.3.2"
pip install "flax>=0.4.1"
pip install "jaxlib>=0.1.65"
pip install pytest
- name: Check for soft dependencies
run: |
pytest tests/others/test_dependencies.py


@@ -20,17 +20,15 @@ jobs:
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: "3.7"
python-version: "3.8"
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install .[quality]
- name: Check quality
run: |
black --check --preview examples tests src utils scripts
isort --check-only examples tests src utils scripts
flake8 examples tests src utils scripts
doc-builder style src/diffusers docs/source --max_len 119 --check_only --path_to_docs docs/source
ruff check examples tests src utils scripts
ruff format examples tests src utils scripts --check
check_repository_consistency:
runs-on: ubuntu-latest
@@ -39,7 +37,7 @@ jobs:
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: "3.7"
python-version: "3.8"
- name: Install dependencies
run: |
python -m pip install --upgrade pip
@@ -48,3 +46,4 @@ jobs:
run: |
python utils/check_copies.py
python utils/check_dummies.py
make deps_table_check_updated

.github/workflows/pr_test_fetcher.yml

@@ -0,0 +1,170 @@
name: Fast tests for PRs - Test Fetcher
on: workflow_dispatch
env:
DIFFUSERS_IS_CI: yes
OMP_NUM_THREADS: 4
MKL_NUM_THREADS: 4
PYTEST_TIMEOUT: 60
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
setup_pr_tests:
name: Setup PR Tests
runs-on: docker-cpu
container:
image: diffusers/diffusers-pytorch-cpu
options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/
defaults:
run:
shell: bash
outputs:
matrix: ${{ steps.set_matrix.outputs.matrix }}
test_map: ${{ steps.set_matrix.outputs.test_map }}
steps:
- name: Checkout diffusers
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Install dependencies
run: |
apt-get update && apt-get install libsndfile1-dev libgl1 -y
python -m pip install -e .[quality,test]
- name: Environment
run: |
python utils/print_env.py
echo $(git --version)
- name: Fetch Tests
run: |
python utils/tests_fetcher.py | tee test_preparation.txt
- name: Report fetched tests
uses: actions/upload-artifact@v3
with:
name: test_fetched
path: test_preparation.txt
- id: set_matrix
name: Create Test Matrix
# The `keys` output is used as the GitHub Actions matrix for jobs, i.e. `models`, `pipelines`, etc.
# The `test_map` output is used to get the actual identified test files under each key.
# If there are no tests to run (so no `test_map.json` file), create a dummy map (an empty matrix would fail). See the sketch after this step.
run: |
if [ -f test_map.json ]; then
keys=$(python3 -c 'import json; fp = open("test_map.json"); test_map = json.load(fp); fp.close(); d = list(test_map.keys()); print(json.dumps(d))')
test_map=$(python3 -c 'import json; fp = open("test_map.json"); test_map = json.load(fp); fp.close(); print(json.dumps(test_map))')
else
keys=$(python3 -c 'keys = ["dummy"]; print(keys)')
test_map=$(python3 -c 'test_map = {"dummy": []}; print(test_map)')
fi
echo $keys
echo $test_map
echo "matrix=$keys" >> $GITHUB_OUTPUT
echo "test_map=$test_map" >> $GITHUB_OUTPUT
run_pr_tests:
name: Run PR Tests
needs: setup_pr_tests
if: contains(fromJson(needs.setup_pr_tests.outputs.matrix), 'dummy') != true
strategy:
fail-fast: false
max-parallel: 2
matrix:
modules: ${{ fromJson(needs.setup_pr_tests.outputs.matrix) }}
runs-on: docker-cpu
container:
image: diffusers/diffusers-pytorch-cpu
options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/
defaults:
run:
shell: bash
steps:
- name: Checkout diffusers
uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Install dependencies
run: |
apt-get update && apt-get install libsndfile1-dev libgl1 -y
python -m pip install -e .[quality,test]
python -m pip install accelerate
- name: Environment
run: |
python utils/print_env.py
- name: Run all selected tests on CPU
run: |
python -m pytest -n 2 --dist=loadfile -v --make-reports=${{ matrix.modules }}_tests_cpu ${{ fromJson(needs.setup_pr_tests.outputs.test_map)[matrix.modules] }}
- name: Failure short reports
if: ${{ failure() }}
continue-on-error: true
run: |
cat reports/${{ matrix.modules }}_tests_cpu_stats.txt
cat reports/${{ matrix.modules }}_tests_cpu_failures_short.txt
- name: Test suite reports artifacts
if: ${{ always() }}
uses: actions/upload-artifact@v3
with:
name: ${{ matrix.modules }}_test_reports
path: reports
run_staging_tests:
strategy:
fail-fast: false
matrix:
config:
- name: Hub tests for models, schedulers, and pipelines
framework: hub_tests_pytorch
runner: docker-cpu
image: diffusers/diffusers-pytorch-cpu
report: torch_hub
name: ${{ matrix.config.name }}
runs-on: ${{ matrix.config.runner }}
container:
image: ${{ matrix.config.image }}
options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/
defaults:
run:
shell: bash
steps:
- name: Checkout diffusers
uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Install dependencies
run: |
apt-get update && apt-get install libsndfile1-dev libgl1 -y
python -m pip install -e .[quality,test]
- name: Environment
run: |
python utils/print_env.py
- name: Run Hub tests for models, schedulers, and pipelines on a staging env
if: ${{ matrix.config.framework == 'hub_tests_pytorch' }}
run: |
HUGGINGFACE_CO_STAGING=true python -m pytest \
-m "is_staging_test" \
--make-reports=tests_${{ matrix.config.report }} \
tests
- name: Failure short reports
if: ${{ failure() }}
run: cat reports/tests_${{ matrix.config.report }}_failures_short.txt
- name: Test suite reports artifacts
if: ${{ always() }}
uses: actions/upload-artifact@v2
with:
name: pr_${{ matrix.config.report }}_test_reports
path: reports


@@ -0,0 +1,65 @@
name: Fast tests for PRs - PEFT backend
on:
pull_request:
branches:
- main
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
env:
DIFFUSERS_IS_CI: yes
OMP_NUM_THREADS: 4
MKL_NUM_THREADS: 4
PYTEST_TIMEOUT: 60
jobs:
run_fast_tests:
strategy:
fail-fast: false
matrix:
lib-versions: ["main", "latest"]
name: LoRA - ${{ matrix.lib-versions }}
runs-on: docker-cpu
container:
image: diffusers/diffusers-pytorch-cpu
options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/
defaults:
run:
shell: bash
steps:
- name: Checkout diffusers
uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Install dependencies
run: |
apt-get update && apt-get install libsndfile1-dev libgl1 -y
python -m pip install -e .[quality,test]
if [ "${{ matrix.lib-versions }}" == "main" ]; then
python -m pip install -U git+https://github.com/huggingface/peft.git
python -m pip install -U git+https://github.com/huggingface/transformers.git
python -m pip install -U git+https://github.com/huggingface/accelerate.git
else
python -m pip install -U peft transformers accelerate
fi
- name: Environment
run: |
python utils/print_env.py
- name: Run fast PyTorch LoRA CPU tests with PEFT backend
run: |
python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
-s -v \
--make-reports=tests_${{ matrix.config.report }} \
tests/lora/test_lora_layers_peft.py


@@ -1,9 +1,12 @@
name: Run fast tests
name: Fast tests for PRs
on:
pull_request:
branches:
- main
push:
branches:
- ci-*
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
@@ -14,7 +17,6 @@ env:
OMP_NUM_THREADS: 4
MKL_NUM_THREADS: 4
PYTEST_TIMEOUT: 60
MPS_TORCH_VERSION: 1.13.0
jobs:
run_fast_tests:
@@ -22,21 +24,26 @@ jobs:
fail-fast: false
matrix:
config:
- name: Fast PyTorch CPU tests on Ubuntu
framework: pytorch
- name: Fast PyTorch Pipeline CPU tests
framework: pytorch_pipelines
runner: docker-cpu
image: diffusers/diffusers-pytorch-cpu
report: torch_cpu
- name: Fast Flax CPU tests on Ubuntu
report: torch_cpu_pipelines
- name: Fast PyTorch Models & Schedulers CPU tests
framework: pytorch_models
runner: docker-cpu
image: diffusers/diffusers-pytorch-cpu
report: torch_cpu_models_schedulers
- name: Fast Flax CPU tests
framework: flax
runner: docker-cpu
image: diffusers/diffusers-flax-cpu
report: flax_cpu
- name: Fast ONNXRuntime CPU tests on Ubuntu
framework: onnxruntime
- name: PyTorch Example CPU tests
framework: pytorch_examples
runner: docker-cpu
image: diffusers/diffusers-onnxruntime-cpu
report: onnx_cpu
image: diffusers/diffusers-pytorch-cpu
report: torch_example_cpu
name: ${{ matrix.config.name }}
@@ -58,20 +65,29 @@ jobs:
- name: Install dependencies
run: |
apt-get update && apt-get install libsndfile1-dev libgl1 -y
python -m pip install -e .[quality,test]
python -m pip install git+https://github.com/huggingface/accelerate
python -m pip install accelerate
- name: Environment
run: |
python utils/print_env.py
- name: Run fast PyTorch CPU tests
if: ${{ matrix.config.framework == 'pytorch' }}
- name: Run fast PyTorch Pipeline CPU tests
if: ${{ matrix.config.framework == 'pytorch_pipelines' }}
run: |
python -m pytest -n 2 --max-worker-restart=0 --dist=loadfile \
-s -v -k "not Flax and not Onnx" \
--make-reports=tests_${{ matrix.config.report }} \
tests/
tests/pipelines
- name: Run fast PyTorch Model Scheduler CPU tests
if: ${{ matrix.config.framework == 'pytorch_models' }}
run: |
python -m pytest -n 2 --max-worker-restart=0 --dist=loadfile \
-s -v -k "not Flax and not Onnx and not Dependency" \
--make-reports=tests_${{ matrix.config.report }} \
tests/models tests/schedulers tests/others
- name: Run fast Flax TPU tests
if: ${{ matrix.config.framework == 'flax' }}
@@ -79,15 +95,15 @@ jobs:
python -m pytest -n 2 --max-worker-restart=0 --dist=loadfile \
-s -v -k "Flax" \
--make-reports=tests_${{ matrix.config.report }} \
tests/
tests
- name: Run fast ONNXRuntime CPU tests
if: ${{ matrix.config.framework == 'onnxruntime' }}
- name: Run example PyTorch CPU tests
if: ${{ matrix.config.framework == 'pytorch_examples' }}
run: |
python -m pip install peft
python -m pytest -n 2 --max-worker-restart=0 --dist=loadfile \
-s -v -k "Onnx" \
--make-reports=tests_${{ matrix.config.report }} \
tests/
examples
- name: Failure short reports
if: ${{ failure() }}
@@ -100,9 +116,28 @@ jobs:
name: pr_${{ matrix.config.report }}_test_reports
path: reports
run_fast_tests_apple_m1:
name: Fast PyTorch MPS tests on MacOS
runs-on: [ self-hosted, apple-m1 ]
run_staging_tests:
strategy:
fail-fast: false
matrix:
config:
- name: Hub tests for models, schedulers, and pipelines
framework: hub_tests_pytorch
runner: docker-cpu
image: diffusers/diffusers-pytorch-cpu
report: torch_hub
name: ${{ matrix.config.name }}
runs-on: ${{ matrix.config.runner }}
container:
image: ${{ matrix.config.image }}
options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/
defaults:
run:
shell: bash
steps:
- name: Checkout diffusers
@@ -110,41 +145,30 @@ jobs:
with:
fetch-depth: 2
- name: Clean checkout
shell: arch -arch arm64 bash {0}
run: |
git clean -fxd
- name: Setup miniconda
uses: ./.github/actions/setup-miniconda
with:
python-version: 3.9
- name: Install dependencies
shell: arch -arch arm64 bash {0}
run: |
${CONDA_RUN} python -m pip install --upgrade pip
${CONDA_RUN} python -m pip install -e .[quality,test]
${CONDA_RUN} python -m pip install --pre torch==${MPS_TORCH_VERSION} --extra-index-url https://download.pytorch.org/whl/test/cpu
${CONDA_RUN} python -m pip install git+https://github.com/huggingface/accelerate
apt-get update && apt-get install libsndfile1-dev libgl1 -y
python -m pip install -e .[quality,test]
- name: Environment
shell: arch -arch arm64 bash {0}
run: |
${CONDA_RUN} python utils/print_env.py
python utils/print_env.py
- name: Run fast PyTorch tests on M1 (MPS)
shell: arch -arch arm64 bash {0}
- name: Run Hub tests for models, schedulers, and pipelines on a staging env
if: ${{ matrix.config.framework == 'hub_tests_pytorch' }}
run: |
${CONDA_RUN} python -m pytest -n 1 -s -v --make-reports=tests_torch_mps tests/
HUGGINGFACE_CO_STAGING=true python -m pytest \
-m "is_staging_test" \
--make-reports=tests_${{ matrix.config.report }} \
tests
- name: Failure short reports
if: ${{ failure() }}
run: cat reports/tests_torch_mps_failures_short.txt
run: cat reports/tests_${{ matrix.config.report }}_failures_short.txt
- name: Test suite reports artifacts
if: ${{ always() }}
uses: actions/upload-artifact@v2
with:
name: pr_torch_mps_test_reports
name: pr_${{ matrix.config.report }}_test_reports
path: reports


@@ -0,0 +1,32 @@
name: Run Torch dependency tests
on:
pull_request:
branches:
- main
push:
branches:
- main
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
check_torch_dependencies:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: "3.8"
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -e .
pip install torch torchvision torchaudio
pip install pytest
- name: Check for soft dependencies
run: |
pytest tests/others/test_dependencies.py


@@ -1,51 +1,313 @@
name: Run all tests
name: Slow Tests on main
on:
push:
branches:
- main
env:
DIFFUSERS_IS_CI: yes
HF_HOME: /mnt/cache
OMP_NUM_THREADS: 8
MKL_NUM_THREADS: 8
PYTEST_TIMEOUT: 1000
PYTEST_TIMEOUT: 600
RUN_SLOW: yes
PIPELINE_USAGE_CUTOFF: 50000
jobs:
run_slow_tests:
setup_torch_cuda_pipeline_matrix:
name: Setup Torch Pipelines CUDA Slow Tests Matrix
runs-on: docker-gpu
container:
image: diffusers/diffusers-pytorch-cpu # this is a CPU image, but we need it to fetch the matrix
options: --shm-size "16gb" --ipc host
outputs:
pipeline_test_matrix: ${{ steps.fetch_pipeline_matrix.outputs.pipeline_test_matrix }}
steps:
- name: Checkout diffusers
uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Install dependencies
run: |
apt-get update && apt-get install libsndfile1-dev libgl1 -y
python -m pip install -e .[quality,test]
python -m pip install git+https://github.com/huggingface/accelerate.git
- name: Environment
run: |
python utils/print_env.py
- name: Fetch Pipeline Matrix
id: fetch_pipeline_matrix
run: |
matrix=$(python utils/fetch_torch_cuda_pipeline_test_matrix.py)
echo $matrix
echo "pipeline_test_matrix=$matrix" >> $GITHUB_OUTPUT
- name: Pipeline Tests Artifacts
if: ${{ always() }}
uses: actions/upload-artifact@v2
with:
name: test-pipelines.json
path: reports
torch_pipelines_cuda_tests:
name: Torch Pipelines CUDA Slow Tests
needs: setup_torch_cuda_pipeline_matrix
strategy:
fail-fast: false
max-parallel: 1
matrix:
config:
- name: Slow PyTorch CUDA tests on Ubuntu
framework: pytorch
runner: docker-gpu
image: diffusers/diffusers-pytorch-cuda
report: torch_cuda
- name: Slow Flax TPU tests on Ubuntu
framework: flax
runner: docker-tpu
image: diffusers/diffusers-flax-tpu
report: flax_tpu
- name: Slow ONNXRuntime CUDA tests on Ubuntu
framework: onnxruntime
runner: docker-gpu
image: diffusers/diffusers-onnxruntime-cuda
report: onnx_cuda
name: ${{ matrix.config.name }}
runs-on: ${{ matrix.config.runner }}
module: ${{ fromJson(needs.setup_torch_cuda_pipeline_matrix.outputs.pipeline_test_matrix) }}
runs-on: docker-gpu
container:
image: ${{ matrix.config.image }}
options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ ${{ matrix.config.runner == 'docker-tpu' && '--privileged' || '--gpus 0'}}
image: diffusers/diffusers-pytorch-cuda
options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ --gpus 0
steps:
- name: Checkout diffusers
uses: actions/checkout@v3
with:
fetch-depth: 2
- name: NVIDIA-SMI
run: |
nvidia-smi
- name: Install dependencies
run: |
apt-get update && apt-get install libsndfile1-dev libgl1 -y
python -m pip install -e .[quality,test]
python -m pip install git+https://github.com/huggingface/accelerate.git
- name: Environment
run: |
python utils/print_env.py
- name: Slow PyTorch CUDA checkpoint tests on Ubuntu
env:
HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
# https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms
CUBLAS_WORKSPACE_CONFIG: :16:8
run: |
python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
-s -v -k "not Flax and not Onnx" \
--make-reports=tests_pipeline_${{ matrix.module }}_cuda \
tests/pipelines/${{ matrix.module }}
- name: Failure short reports
if: ${{ failure() }}
run: |
cat reports/tests_pipeline_${{ matrix.module }}_cuda_stats.txt
cat reports/tests_pipeline_${{ matrix.module }}_cuda_failures_short.txt
- name: Test suite reports artifacts
if: ${{ always() }}
uses: actions/upload-artifact@v2
with:
name: pipeline_${{ matrix.module }}_test_reports
path: reports
torch_cuda_tests:
name: Torch CUDA Tests
runs-on: docker-gpu
container:
image: diffusers/diffusers-pytorch-cuda
options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ --gpus 0
defaults:
run:
shell: bash
strategy:
matrix:
module: [models, schedulers, lora, others]
steps:
- name: Checkout diffusers
uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Install dependencies
run: |
apt-get update && apt-get install libsndfile1-dev libgl1 -y
python -m pip install -e .[quality,test]
python -m pip install git+https://github.com/huggingface/accelerate.git
- name: Environment
run: |
python utils/print_env.py
- name: Run slow PyTorch CUDA tests
env:
HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
# https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms
CUBLAS_WORKSPACE_CONFIG: :16:8
run: |
python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
-s -v -k "not Flax and not Onnx" \
--make-reports=tests_torch_cuda \
tests/${{ matrix.module }}
- name: Failure short reports
if: ${{ failure() }}
run: |
cat reports/tests_torch_cuda_stats.txt
cat reports/tests_torch_cuda_failures_short.txt
- name: Test suite reports artifacts
if: ${{ always() }}
uses: actions/upload-artifact@v2
with:
name: torch_cuda_test_reports
path: reports
peft_cuda_tests:
name: PEFT CUDA Tests
runs-on: docker-gpu
container:
image: diffusers/diffusers-pytorch-cuda
options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ --gpus 0
defaults:
run:
shell: bash
steps:
- name: Checkout diffusers
uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Install dependencies
run: |
apt-get update && apt-get install libsndfile1-dev libgl1 -y
python -m pip install -e .[quality,test]
python -m pip install git+https://github.com/huggingface/accelerate.git
python -m pip install git+https://github.com/huggingface/peft.git
- name: Environment
run: |
python utils/print_env.py
- name: Run slow PEFT CUDA tests
env:
HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
# https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms
CUBLAS_WORKSPACE_CONFIG: :16:8
run: |
python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
-s -v -k "not Flax and not Onnx and not PEFTLoRALoading" \
--make-reports=tests_peft_cuda \
tests/lora/
- name: Failure short reports
if: ${{ failure() }}
run: |
cat reports/tests_peft_cuda_stats.txt
cat reports/tests_peft_cuda_failures_short.txt
- name: Test suite reports artifacts
if: ${{ always() }}
uses: actions/upload-artifact@v2
with:
name: torch_peft_test_reports
path: reports
flax_tpu_tests:
name: Flax TPU Tests
runs-on: docker-tpu
container:
image: diffusers/diffusers-flax-tpu
options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ --privileged
defaults:
run:
shell: bash
steps:
- name: Checkout diffusers
uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Install dependencies
run: |
apt-get update && apt-get install libsndfile1-dev libgl1 -y
python -m pip install -e .[quality,test]
python -m pip install git+https://github.com/huggingface/accelerate.git
- name: Environment
run: |
python utils/print_env.py
- name: Run slow Flax TPU tests
env:
HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
run: |
python -m pytest -n 0 \
-s -v -k "Flax" \
--make-reports=tests_flax_tpu \
tests/
- name: Failure short reports
if: ${{ failure() }}
run: |
cat reports/tests_flax_tpu_stats.txt
cat reports/tests_flax_tpu_failures_short.txt
- name: Test suite reports artifacts
if: ${{ always() }}
uses: actions/upload-artifact@v2
with:
name: flax_tpu_test_reports
path: reports
onnx_cuda_tests:
name: ONNX CUDA Tests
runs-on: docker-gpu
container:
image: diffusers/diffusers-onnxruntime-cuda
options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ --gpus 0
defaults:
run:
shell: bash
steps:
- name: Checkout diffusers
uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Install dependencies
run: |
apt-get update && apt-get install libsndfile1-dev libgl1 -y
python -m pip install -e .[quality,test]
python -m pip install git+https://github.com/huggingface/accelerate.git
- name: Environment
run: |
python utils/print_env.py
- name: Run slow ONNXRuntime CUDA tests
env:
HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
run: |
python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
-s -v -k "Onnx" \
--make-reports=tests_onnx_cuda \
tests/
- name: Failure short reports
if: ${{ failure() }}
run: |
cat reports/tests_onnx_cuda_stats.txt
cat reports/tests_onnx_cuda_failures_short.txt
- name: Test suite reports artifacts
if: ${{ always() }}
uses: actions/upload-artifact@v2
with:
name: onnx_cuda_test_reports
path: reports
run_torch_compile_tests:
name: PyTorch Compile CUDA tests
runs-on: docker-gpu
container:
image: diffusers/diffusers-pytorch-compile-cuda
options: --gpus 0 --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/
steps:
- name: Checkout diffusers
@@ -54,58 +316,68 @@ jobs:
fetch-depth: 2
- name: NVIDIA-SMI
if : ${{ matrix.config.runner == 'docker-gpu' }}
run: |
nvidia-smi
- name: Install dependencies
run: |
python -m pip install -e .[quality,test]
python -m pip install git+https://github.com/huggingface/accelerate
python -m pip install -e .[quality,test,training]
- name: Environment
run: |
python utils/print_env.py
- name: Run slow PyTorch CUDA tests
if: ${{ matrix.config.framework == 'pytorch' }}
- name: Run example tests on GPU
env:
HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
run: |
python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
-s -v -k "not Flax and not Onnx" \
--make-reports=tests_${{ matrix.config.report }} \
tests/
- name: Run slow Flax TPU tests
if: ${{ matrix.config.framework == 'flax' }}
env:
HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
run: |
python -m pytest -n 0 \
-s -v -k "Flax" \
--make-reports=tests_${{ matrix.config.report }} \
tests/
- name: Run slow ONNXRuntime CUDA tests
if: ${{ matrix.config.framework == 'onnxruntime' }}
env:
HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
run: |
python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
-s -v -k "Onnx" \
--make-reports=tests_${{ matrix.config.report }} \
tests/
python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile -s -v -k "compile" --make-reports=tests_torch_compile_cuda tests/
- name: Failure short reports
if: ${{ failure() }}
run: cat reports/tests_${{ matrix.config.report }}_failures_short.txt
run: cat reports/tests_torch_compile_cuda_failures_short.txt
- name: Test suite reports artifacts
if: ${{ always() }}
uses: actions/upload-artifact@v2
with:
name: ${{ matrix.config.report }}_test_reports
name: torch_compile_test_reports
path: reports
run_xformers_tests:
name: PyTorch xformers CUDA tests
runs-on: docker-gpu
container:
image: diffusers/diffusers-pytorch-xformers-cuda
options: --gpus 0 --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/
steps:
- name: Checkout diffusers
uses: actions/checkout@v3
with:
fetch-depth: 2
- name: NVIDIA-SMI
run: |
nvidia-smi
- name: Install dependencies
run: |
python -m pip install -e .[quality,test,training]
- name: Environment
run: |
python utils/print_env.py
- name: Run example tests on GPU
env:
HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
run: |
python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile -s -v -k "xformers" --make-reports=tests_torch_xformers_cuda tests/
- name: Failure short reports
if: ${{ failure() }}
run: cat reports/tests_torch_xformers_cuda_failures_short.txt
- name: Test suite reports artifacts
if: ${{ always() }}
uses: actions/upload-artifact@v2
with:
name: torch_xformers_test_reports
path: reports
run_examples_tests:
@@ -130,7 +402,6 @@ jobs:
- name: Install dependencies
run: |
python -m pip install -e .[quality,test,training]
python -m pip install git+https://github.com/huggingface/accelerate
- name: Environment
run: |
@@ -144,11 +415,13 @@ jobs:
- name: Failure short reports
if: ${{ failure() }}
run: cat reports/examples_torch_cuda_failures_short.txt
run: |
cat reports/examples_torch_cuda_stats.txt
cat reports/examples_torch_cuda_failures_short.txt
- name: Test suite reports artifacts
if: ${{ always() }}
uses: actions/upload-artifact@v2
with:
name: examples_test_reports
path: reports
path: reports

.github/workflows/push_tests_fast.yml

@@ -0,0 +1,115 @@
name: Fast tests on main
on:
push:
branches:
- main
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
env:
DIFFUSERS_IS_CI: yes
HF_HOME: /mnt/cache
OMP_NUM_THREADS: 8
MKL_NUM_THREADS: 8
PYTEST_TIMEOUT: 600
RUN_SLOW: no
jobs:
run_fast_tests:
strategy:
fail-fast: false
matrix:
config:
- name: Fast PyTorch CPU tests on Ubuntu
framework: pytorch
runner: docker-cpu
image: diffusers/diffusers-pytorch-cpu
report: torch_cpu
- name: Fast Flax CPU tests on Ubuntu
framework: flax
runner: docker-cpu
image: diffusers/diffusers-flax-cpu
report: flax_cpu
- name: Fast ONNXRuntime CPU tests on Ubuntu
framework: onnxruntime
runner: docker-cpu
image: diffusers/diffusers-onnxruntime-cpu
report: onnx_cpu
- name: PyTorch Example CPU tests on Ubuntu
framework: pytorch_examples
runner: docker-cpu
image: diffusers/diffusers-pytorch-cpu
report: torch_example_cpu
name: ${{ matrix.config.name }}
runs-on: ${{ matrix.config.runner }}
container:
image: ${{ matrix.config.image }}
options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/
defaults:
run:
shell: bash
steps:
- name: Checkout diffusers
uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Install dependencies
run: |
apt-get update && apt-get install libsndfile1-dev libgl1 -y
python -m pip install -e .[quality,test]
- name: Environment
run: |
python utils/print_env.py
- name: Run fast PyTorch CPU tests
if: ${{ matrix.config.framework == 'pytorch' }}
run: |
python -m pytest -n 2 --max-worker-restart=0 --dist=loadfile \
-s -v -k "not Flax and not Onnx" \
--make-reports=tests_${{ matrix.config.report }} \
tests/
- name: Run fast Flax TPU tests
if: ${{ matrix.config.framework == 'flax' }}
run: |
python -m pytest -n 2 --max-worker-restart=0 --dist=loadfile \
-s -v -k "Flax" \
--make-reports=tests_${{ matrix.config.report }} \
tests/
- name: Run fast ONNXRuntime CPU tests
if: ${{ matrix.config.framework == 'onnxruntime' }}
run: |
python -m pytest -n 2 --max-worker-restart=0 --dist=loadfile \
-s -v -k "Onnx" \
--make-reports=tests_${{ matrix.config.report }} \
tests/
- name: Run example PyTorch CPU tests
if: ${{ matrix.config.framework == 'pytorch_examples' }}
run: |
python -m pip install peft
python -m pytest -n 2 --max-worker-restart=0 --dist=loadfile \
--make-reports=tests_${{ matrix.config.report }} \
examples
- name: Failure short reports
if: ${{ failure() }}
run: cat reports/tests_${{ matrix.config.report }}_failures_short.txt
- name: Test suite reports artifacts
if: ${{ always() }}
uses: actions/upload-artifact@v2
with:
name: pr_${{ matrix.config.report }}_test_reports
path: reports

.github/workflows/push_tests_mps.yml

@@ -0,0 +1,72 @@
name: Fast mps tests on main
on:
push:
branches:
- main
env:
DIFFUSERS_IS_CI: yes
HF_HOME: /mnt/cache
OMP_NUM_THREADS: 8
MKL_NUM_THREADS: 8
PYTEST_TIMEOUT: 600
RUN_SLOW: no
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
run_fast_tests_apple_m1:
name: Fast PyTorch MPS tests on MacOS
runs-on: [ self-hosted, apple-m1 ]
steps:
- name: Checkout diffusers
uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Clean checkout
shell: arch -arch arm64 bash {0}
run: |
git clean -fxd
- name: Setup miniconda
uses: ./.github/actions/setup-miniconda
with:
python-version: 3.9
- name: Install dependencies
shell: arch -arch arm64 bash {0}
run: |
${CONDA_RUN} python -m pip install --upgrade pip
${CONDA_RUN} python -m pip install -e .[quality,test]
${CONDA_RUN} python -m pip install torch torchvision torchaudio
${CONDA_RUN} python -m pip install git+https://github.com/huggingface/accelerate.git
${CONDA_RUN} python -m pip install transformers --upgrade
- name: Environment
shell: arch -arch arm64 bash {0}
run: |
${CONDA_RUN} python utils/print_env.py
- name: Run fast PyTorch tests on M1 (MPS)
shell: arch -arch arm64 bash {0}
env:
HF_HOME: /System/Volumes/Data/mnt/cache
HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
run: |
${CONDA_RUN} python -m pytest -n 0 -s -v --make-reports=tests_torch_mps tests/
- name: Failure short reports
if: ${{ failure() }}
run: cat reports/tests_torch_mps_failures_short.txt
- name: Test suite reports artifacts
if: ${{ always() }}
uses: actions/upload-artifact@v2
with:
name: pr_torch_mps_test_reports
path: reports


@@ -17,7 +17,7 @@ jobs:
- name: Setup Python
uses: actions/setup-python@v1
with:
python-version: 3.7
python-version: 3.8
- name: Install requirements
run: |


@@ -0,0 +1,16 @@
name: Upload PR Documentation
on:
workflow_run:
workflows: ["Build PR Documentation"]
types:
- completed
jobs:
build:
uses: huggingface/doc-builder/.github/workflows/upload_pr_documentation.yml@main
with:
package_name: diffusers
secrets:
hf_token: ${{ secrets.HF_DOC_BUILD_PUSH }}
comment_bot_token: ${{ secrets.COMMENT_BOT_TOKEN }}

.gitignore

@@ -1,4 +1,4 @@
# Initially taken from Github's Python gitignore file
# Initially taken from GitHub's Python gitignore file
# Byte-compiled / optimized / DLL files
__pycache__/
@@ -34,7 +34,7 @@ wheels/
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# Usually these files are written by a Python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
@@ -153,7 +153,7 @@ debug.env
# vim
.*.swp
#ctags
# ctags
tags
# pre-commit
@@ -163,4 +163,16 @@ tags
*.lock
# DS_Store (MacOS)
.DS_Store
.DS_Store
# RL pipelines may produce mp4 outputs
*.mp4
# dependencies
/transformers
# ruff
.ruff_cache
# wandb
wandb

CITATION.cff

@@ -0,0 +1,42 @@
cff-version: 1.2.0
title: 'Diffusers: State-of-the-art diffusion models'
message: >-
If you use this software, please cite it using the
metadata from this file.
type: software
authors:
- given-names: Patrick
family-names: von Platen
- given-names: Suraj
family-names: Patil
- given-names: Anton
family-names: Lozhkov
- given-names: Pedro
family-names: Cuenca
- given-names: Nathan
family-names: Lambert
- given-names: Kashif
family-names: Rasul
- given-names: Mishig
family-names: Davaadorj
- given-names: Thomas
family-names: Wolf
repository-code: 'https://github.com/huggingface/diffusers'
abstract: >-
Diffusers provides pretrained diffusion models across
multiple modalities, such as vision and audio, and serves
as a modular toolbox for inference and training of
diffusion models.
keywords:
- deep-learning
- pytorch
- image-generation
- hacktoberfest
- diffusion
- text2image
- image2image
- score-based-generative-modeling
- stable-diffusion
- stable-diffusion-diffusers
license: Apache-2.0
version: 0.12.1


@@ -7,7 +7,7 @@ We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
nationality, personal appearance, race, caste, color, religion, or sexual identity
and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
@@ -24,7 +24,7 @@ community include:
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
overall community
overall Diffusers community
Examples of unacceptable behavior include:
@@ -34,6 +34,7 @@ Examples of unacceptable behavior include:
* Public or private harassment
* Publishing others' private information, such as a physical or email
address, without their explicit permission
* Spamming issues or PRs with links to projects unrelated to this library
* Other conduct which could reasonably be considered inappropriate in a
professional setting
@@ -116,8 +117,8 @@ the community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
version 2.1, available at
https://www.contributor-covenant.org/version/2/1/code_of_conduct.html.
Community Impact Guidelines were inspired by [Mozilla's code of conduct
enforcement ladder](https://github.com/mozilla/diversity).


@@ -1,94 +1,350 @@
<!---
Copyright 2022 The HuggingFace Team. All rights reserved.
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# How to contribute to diffusers?
# How to contribute to Diffusers 🧨
Everyone is welcome to contribute, and we value everybody's contribution. Code
is thus not the only way to help the community. Answering questions, helping
others, reaching out and improving the documentations are immensely valuable to
the community.
We ❤️ contributions from the open-source community! Everyone is welcome, and all types of participation, not just code, are valued and appreciated. Answering questions, helping others, reaching out, and improving the documentation are all immensely valuable to the community, so don't be afraid and get involved if you're up for it!
It also helps us if you spread the word: reference the library from blog posts
on the awesome projects it made possible, shout out on Twitter every time it has
helped you, or simply star the repo to say "thank you".
Everyone is encouraged to start by saying 👋 in our public Discord channel. We discuss the latest trends in diffusion models, ask questions, show off personal projects, help each other with contributions, or just hang out ☕. <a href="https://discord.gg/G7tWnz98XR"><img alt="Join us on Discord" src="https://img.shields.io/discord/823813159592001537?color=5865F2&logo=Discord&logoColor=white"></a>
Whichever way you choose to contribute, please be mindful to respect our
[code of conduct](https://github.com/huggingface/diffusers/blob/main/CODE_OF_CONDUCT.md).
Whichever way you choose to contribute, we strive to be part of an open, welcoming, and kind community. Please, read our [code of conduct](https://github.com/huggingface/diffusers/blob/main/CODE_OF_CONDUCT.md) and be mindful to respect it during your interactions. We also recommend you become familiar with the [ethical guidelines](https://huggingface.co/docs/diffusers/conceptual/ethical_guidelines) that guide our project and ask you to adhere to the same principles of transparency and responsibility.
## You can contribute in so many ways!
We enormously value feedback from the community, so please do not be afraid to speak up if you believe you have valuable feedback that can help improve the library - every message, comment, issue, and pull request (PR) is read and considered.
There are 4 ways you can contribute to diffusers:
* Fixing outstanding issues with the existing code;
* Implementing [new diffusion pipelines](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines#contribution), [new schedulers](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers) or [new models](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models)
* [Contributing to the examples](https://github.com/huggingface/diffusers/tree/main/examples) or to the documentation;
* Submitting issues related to bugs or desired new features.
## Overview
In particular there is a special [Good First Issue](https://github.com/huggingface/diffusers/contribute) listing.
It will give you a list of open Issues that are open to anybody to work on. Just comment in the issue that you'd like to work on it.
In that same listing you will also find some Issues with `Good Second Issue` label. These are
typically slightly more complicated than the Issues with just `Good First Issue` label. But if you
feel you know what you're doing, go for it.
You can contribute in many ways ranging from answering questions on issues to adding new diffusion models to
the core library.
*All are equally valuable to the community.*
In the following, we give an overview of different ways to contribute, ranked by difficulty in ascending order. All of them are valuable to the community.
## Submitting a new issue or feature request
* 1. Asking and answering questions on [the Diffusers discussion forum](https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers) or on [Discord](https://discord.gg/G7tWnz98XR).
* 2. Opening new issues on [the GitHub Issues tab](https://github.com/huggingface/diffusers/issues/new/choose).
* 3. Answering issues on [the GitHub Issues tab](https://github.com/huggingface/diffusers/issues).
* 4. Fix a simple issue, marked by the "Good first issue" label, see [here](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22).
* 5. Contribute to the [documentation](https://github.com/huggingface/diffusers/tree/main/docs/source).
* 6. Contribute a [Community Pipeline](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3Acommunity-examples).
* 7. Contribute to the [examples](https://github.com/huggingface/diffusers/tree/main/examples).
* 8. Fix a more difficult issue, marked by the "Good second issue" label, see [here](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22Good+second+issue%22).
* 9. Add a new pipeline, model, or scheduler, see ["New Pipeline/Model"](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+pipeline%2Fmodel%22) and ["New scheduler"](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+scheduler%22) issues. For this contribution, please have a look at [Design Philosophy](https://github.com/huggingface/diffusers/blob/main/PHILOSOPHY.md).
Do your best to follow these guidelines when submitting an issue or a feature
request. It will make it easier for us to come back to you quickly and with good
feedback.
As said before, **all contributions are valuable to the community**.
In the following, we will explain each contribution a bit more in detail.
### Did you find a bug?
For all contributions 4-9, you will need to open a PR. It is explained in detail how to do so in [Opening a pull request](#how-to-open-a-pr).
### 1. Asking and answering questions on the Diffusers discussion forum or on the Diffusers Discord
Any question or comment related to the Diffusers library can be asked on the [discussion forum](https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers/) or on [Discord](https://discord.gg/G7tWnz98XR). Such questions and comments include (but are not limited to):
- Reports of training or inference experiments in an attempt to share knowledge
- Presentation of personal projects
- Questions to non-official training examples
- Project proposals
- General feedback
- Paper summaries
- Asking for help on personal projects that build on top of the Diffusers library
- General questions
- Ethical questions regarding diffusion models
- ...
Every question that is asked on the forum or on Discord actively encourages the community to publicly
share knowledge and might very well help a beginner in the future that has the same question you're
having. Please do pose any questions you might have.
In the same spirit, you are of immense help to the community by answering such questions because this way you are publicly documenting knowledge for everybody to learn from.
**Please** keep in mind that the more effort you put into asking or answering a question, the higher
the quality of the publicly documented knowledge. In the same way, well-posed and well-answered questions create a high-quality knowledge database accessible to everybody, while badly posed questions or answers reduce the overall quality of the public knowledge database.
In short, a high-quality question or answer is *precise*, *concise*, *relevant*, *easy-to-understand*, *accessible*, and *well-formatted/well-posed*. For more information, please have a look through the [How to write a good issue](#how-to-write-a-good-issue) section.
**NOTE about channels**:
[*The forum*](https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers/63) is much better indexed by search engines, such as Google. Posts are ranked by popularity rather than chronologically. Hence, it's easier to look up questions and answers that were posted some time ago.
In addition, questions and answers posted in the forum can easily be linked to.
In contrast, *Discord* has a chat-like format that invites fast back-and-forth communication.
While it will most likely take less time for you to get an answer to your question on Discord, your
question won't be visible anymore over time. Also, it's much harder to find information that was posted a while back on Discord. We therefore strongly recommend using the forum for high-quality questions and answers in an attempt to create long-lasting knowledge for the community. If discussions on Discord lead to very interesting answers and conclusions, we recommend posting the results on the forum to make the information more available for future readers.
### 2. Opening new issues on the GitHub issues tab
The 🧨 Diffusers library is robust and reliable thanks to the users who notify us of
the problems they encounter. So thank you for reporting an issue.
Remember, GitHub issues are reserved for technical questions directly related to the Diffusers library, bug reports, feature requests, or feedback on the library design.
In a nutshell, this means that everything that is **not** related to the **code of the Diffusers library** (including the documentation) should **not** be asked on GitHub, but rather on either the [forum](https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers/63) or [Discord](https://discord.gg/G7tWnz98XR).
**Please consider the following guidelines when opening a new issue**:
- Make sure you have searched whether your issue has already been asked before (use the search bar on GitHub under Issues).
- Please never report a new issue on another (related) issue. If another issue is highly related, please
open a new issue nevertheless and link to the related issue.
- Make sure your issue is written in English. Please use one of the great, free online translation services, such as [DeepL](https://www.deepl.com/translator) to translate from your native language to English if you are not comfortable in English.
- Check whether your issue might be solved by updating to the newest Diffusers version. Before posting your issue, please make sure that `python -c "import diffusers; print(diffusers.__version__)"` is higher or matches the latest Diffusers version.
- Remember that the more effort you put into opening a new issue, the higher the quality of your answer will be and the better the overall quality of the Diffusers issues.
New issues usually include the following.
#### 2.1. Reproducible, minimal bug reports
A bug report should always have a reproducible code snippet and be as minimal and concise as possible.
This means in more detail:
- Narrow the bug down as much as you can, **do not just dump your whole code file**.
- Format your code.
- Do not include any external libraries except for Diffusers itself and the libraries it depends on.
- **Always** provide all necessary information about your environment; for this, you can run: `diffusers-cli env` in your shell and copy-paste the displayed information to the issue.
- Explain the issue. If the reader doesn't know what the issue is and why it is an issue, she cannot solve it.
- **Always** make sure the reader can reproduce your issue with as little effort as possible. If your code snippet cannot be run because of missing libraries or undefined variables, the reader cannot help you. Make sure your reproducible code snippet is as minimal as possible and can be copy-pasted into a simple Python shell.
- If in order to reproduce your issue a model and/or dataset is required, make sure the reader has access to that model or dataset. You can always upload your model or dataset to the [Hub](https://huggingface.co) to make it easily downloadable. Try to keep your model and dataset as small as possible, to make the reproduction of your issue as effortless as possible.
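For illustration, a reproducible snippet for a bug report can be as small as the following sketch (the checkpoint and prompt are only placeholders; paste the full error message you see directly underneath it):

```python
import torch
from diffusers import DiffusionPipeline

# A small, publicly available checkpoint so that anyone can reproduce the problem.
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe.to("cuda")

# This call raises the error pasted below.
image = pipe("a photo of an astronaut riding a horse").images[0]
```

Together with the output of `diffusers-cli env`, a snippet like this is usually everything a maintainer needs to start debugging.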
For more information, please have a look through the [How to write a good issue](#how-to-write-a-good-issue) section.
You can open a bug report [here](https://github.com/huggingface/diffusers/issues/new?assignees=&labels=bug&projects=&template=bug-report.yml).
#### 2.2. Feature requests
A world-class feature request addresses the following points:
1. Motivation first:
* Is it related to a problem/frustration with the library? If so, please explain
why. Providing a code snippet that demonstrates the problem is best.
* Is it related to something you would need for a project? We'd love to hear
about it!
* Is it something you worked on and think could benefit the community?
Awesome! Tell us what problem it solved for you.
2. Write a *full paragraph* describing the feature;
3. Provide a **code snippet** that demonstrates its future use;
4. In case this is related to a paper, please attach a link;
5. Attach any additional information (drawings, screenshots, etc.) you think may help.
If your issue is well written, we're already 80% of the way there by the time you post it.
You can open a feature request [here](https://github.com/huggingface/diffusers/issues/new?assignees=&labels=&template=feature_request.md&title=).
#### 2.3 Feedback
Feedback about the library design and why it is good or not good helps the core maintainers immensely to build a user-friendly library. To understand the philosophy behind the current design, please have a look [here](https://huggingface.co/docs/diffusers/conceptual/philosophy). If you feel like a certain design choice does not fit with the current design philosophy, please explain why and how it should be changed. If a certain design choice follows the design philosophy too much, hence restricting use cases, explain why and how it should be changed.
If a certain design choice is very useful for you, please also leave a note as this is great feedback for future design decisions.
You can open an issue about feedback [here](https://github.com/huggingface/diffusers/issues/new?assignees=&labels=&template=feedback.md&title=).
#### 2.4 Technical questions
Technical questions are mainly about why certain code of the library was written in a certain way, or what a certain part of the code does. Please make sure to link to the code in question and please provide detail on
why this part of the code is difficult to understand.
You can open an issue about a technical question [here](https://github.com/huggingface/diffusers/issues/new?assignees=&labels=bug&template=bug-report.yml).
#### 2.5 Proposal to add a new model, scheduler, or pipeline
If the diffusion model community released a new model, pipeline, or scheduler that you would like to see in the Diffusers library, please provide the following information:
* Short description of the diffusion pipeline, model, or scheduler and link to the paper or public release.
* Link to any of its open-source implementations.
* Link to the model weights if they are available.
If you are willing to contribute the model yourself, let us know so we can best guide you. Also, don't forget
to tag the original author of the component (model, scheduler, pipeline, etc.) by GitHub handle if you can find it.
You can open a request for a model/pipeline/scheduler [here](https://github.com/huggingface/diffusers/issues/new?assignees=&labels=New+model%2Fpipeline%2Fscheduler&template=new-model-addition.yml).
### 3. Answering issues on the GitHub issues tab
Answering issues on GitHub might require some technical knowledge of Diffusers, but we encourage everybody to give it a try even if you are not 100% certain that your answer is correct.
Some tips to give a high-quality answer to an issue:
- Be as concise and minimal as possible.
- Stay on topic. An answer to the issue should concern the issue and only the issue.
- Provide links to code, papers, or other sources that prove or encourage your point.
- Answer in code. If a simple code snippet is the answer to the issue or shows how the issue can be solved, please provide a fully reproducible code snippet.
Also, many issues tend to be simply off-topic, duplicates of other issues, or irrelevant. It is of great
help to the maintainers if you can answer such issues, encouraging the author of the issue to be
more precise, provide the link to a duplicated issue or redirect them to [the forum](https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers/63) or [Discord](https://discord.gg/G7tWnz98XR).
If you have verified that a reported bug is correct and requires a fix in the source code,
please have a look at the next sections.
For all of the following contributions, you will need to open a PR. It is explained in detail how to do so in the [Opening a pull request](#how-to-open-a-pr) section.
### 4. Fixing a "Good first issue"
*Good first issues* are marked by the [Good first issue](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) label. Usually, the issue already
explains how a potential solution should look so that it is easier to fix.
If the issue hasn't been closed and you would like to try to fix this issue, you can just leave a message "I would like to try this issue.". There are usually three scenarios:
- a.) The issue description already proposes a fix. In this case and if the solution makes sense to you, you can open a PR or draft PR to fix it.
- b.) The issue description does not propose a fix. In this case, you can ask what a proposed fix could look like and someone from the Diffusers team should answer shortly. If you have a good idea of how to fix it, feel free to directly open a PR.
- c.) There is already an open PR to fix the issue, but the issue hasn't been closed yet. If the PR has gone stale, you can simply open a new PR and link to the stale PR. PRs often go stale if the original contributor who wanted to fix the issue suddenly cannot find the time anymore to proceed. This often happens in open-source and is very normal. In this case, the community will be very happy if you give it a new try and leverage the knowledge of the existing PR. If there is already a PR and it is active, you can help the author by giving suggestions, reviewing the PR or even asking whether you can contribute to the PR.
### 5. Contribute to the documentation
A good library **always** has good documentation! The official documentation is often one of the first points of contact for new users of the library, and therefore contributing to the documentation is a **highly
valuable contribution**.
Contributing to the documentation can take many forms:
- Correcting spelling or grammatical errors.
- Correcting incorrect formatting of docstrings. If you see that the official documentation is weirdly displayed or a link is broken, we are very happy if you take some time to correct it.
- Correcting the shape or dimensions of a docstring input or output tensor.
- Clarifying documentation that is hard to understand or incorrect.
- Updating outdated code examples.
- Translating the documentation to another language.
Anything displayed on [the official Diffusers doc page](https://huggingface.co/docs/diffusers/index) is part of the official documentation and can be corrected or adjusted in the respective [documentation source](https://github.com/huggingface/diffusers/tree/main/docs/source).
Please have a look at [this page](https://github.com/huggingface/diffusers/tree/main/docs) on how to verify changes made to the documentation locally.
### 6. Contribute a community pipeline
[Pipelines](https://huggingface.co/docs/diffusers/api/pipelines/overview) are usually the first point of contact between the Diffusers library and the user.
Pipelines are examples of how to use Diffusers [models](https://huggingface.co/docs/diffusers/api/models/overview) and [schedulers](https://huggingface.co/docs/diffusers/api/schedulers/overview).
We support two types of pipelines:
- Official Pipelines
- Community Pipelines
Both official and community pipelines follow the same design and consist of the same type of components.
Official pipelines are tested and maintained by the core maintainers of Diffusers. Their code
resides in [src/diffusers/pipelines](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines).
In contrast, community pipelines are contributed and maintained purely by the **community** and are **not** tested.
They reside in [examples/community](https://github.com/huggingface/diffusers/tree/main/examples/community) and while they can be accessed via the [PyPI diffusers package](https://pypi.org/project/diffusers/), their code is not part of the PyPI distribution.
The reason for the distinction is that the core maintainers of the Diffusers library cannot maintain and test all
possible ways diffusion models can be used for inference, but some of them may be of interest to the community.
Officially released diffusion pipelines, such as Stable Diffusion, are added to the core `src/diffusers/pipelines` package, which ensures
high quality of maintenance, no backward-breaking code changes, and testing.
More bleeding edge pipelines should be added as community pipelines. If usage for a community pipeline is high, the pipeline can be moved to the official pipelines upon request from the community. This is one of the ways we strive to be a community-driven library.
To add a community pipeline, one should add a `<name-of-the-community>.py` file to [examples/community](https://github.com/huggingface/diffusers/tree/main/examples/community) and adapt the [examples/community/README.md](https://github.com/huggingface/diffusers/tree/main/examples/community/README.md) to include an example of the new pipeline.
An example can be seen [here](https://github.com/huggingface/diffusers/pull/2400).
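As a rough sketch only (the class name and components below are illustrative, not a real pipeline), a community pipeline file usually defines a single class that inherits from `DiffusionPipeline`, registers its components in `__init__`, and implements `__call__`:

```python
import torch
from diffusers import DDPMScheduler, DiffusionPipeline, UNet2DModel


class MyCommunityPipeline(DiffusionPipeline):
    """Illustrative only: a minimal unconditional pipeline built from a UNet and a scheduler."""

    def __init__(self, unet: UNet2DModel, scheduler: DDPMScheduler):
        super().__init__()
        # register_modules exposes the components as attributes and records them
        # in model_index.json when the pipeline is saved with save_pretrained().
        self.register_modules(unet=unet, scheduler=scheduler)

    @torch.no_grad()
    def __call__(self, batch_size: int = 1, num_inference_steps: int = 50):
        # Start from pure noise and denoise step by step.
        sample = torch.randn(
            batch_size,
            self.unet.config.in_channels,
            self.unet.config.sample_size,
            self.unet.config.sample_size,
            device=self.device,
        )
        self.scheduler.set_timesteps(num_inference_steps)
        for t in self.scheduler.timesteps:
            noise_pred = self.unet(sample, t).sample
            sample = self.scheduler.step(noise_pred, t, sample).prev_sample
        return sample
```

Registering the components this way is what lets the pipeline be saved and reloaded with the usual `save_pretrained`/`from_pretrained` workflow.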
Community pipeline PRs are only checked at a superficial level and ideally they should be maintained by their original authors.
Contributing a community pipeline is a great way to understand how Diffusers models and schedulers work. Having contributed a community pipeline is usually the first stepping stone to contributing an official pipeline to the
core package.
### 7. Contribute to training examples
Diffusers examples are a collection of training scripts that reside in [examples](https://github.com/huggingface/diffusers/tree/main/examples).
We support two types of training examples:
- Official training examples
- Research training examples
Research training examples are located in [examples/research_projects](https://github.com/huggingface/diffusers/tree/main/examples/research_projects) whereas official training examples include all folders under [examples](https://github.com/huggingface/diffusers/tree/main/examples) except the `research_projects` and `community` folders.
The official training examples are maintained by the Diffusers' core maintainers whereas the research training examples are maintained by the community.
This is because of the same reasons put forward in [6. Contribute a community pipeline](#6-contribute-a-community-pipeline) for official pipelines vs. community pipelines: It is not feasible for the core maintainers to maintain all possible training methods for diffusion models.
If the Diffusers core maintainers and the community consider a certain training paradigm to be too experimental or not popular enough, the corresponding training code should be put in the `research_projects` folder and maintained by the author.
Both official training and research examples consist of a directory that contains one or more training scripts, a requirements.txt file, and a README.md file. In order for the user to make use of the
training examples, it is required to clone the repository:
```bash
git clone https://github.com/huggingface/diffusers
```
as well as to install all additional dependencies required for training:
```bash
pip install -r examples/<your-example-folder>/requirements.txt
```
Therefore when adding an example, the `requirements.txt` file shall define all pip dependencies required for your training example so that once all those are installed, the user can run the example's training script. See, for example, the [DreamBooth `requirements.txt` file](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/requirements.txt).
Training examples of the Diffusers library should adhere to the following philosophy:
- All the code necessary to run the examples should be found in a single Python file.
- One should be able to run the example from the command line with `python <your-example>.py --args`.
- Examples should be kept simple and serve as **an example** on how to use Diffusers for training. The purpose of example scripts is **not** to create state-of-the-art diffusion models, but rather to reproduce known training schemes without adding too much custom logic. As a byproduct of this point, our examples also strive to serve as good educational materials.
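In practice, the points above usually boil down to a single self-contained script built around `argparse` that can be launched directly from the command line. The skeleton below is only a sketch (argument names and defaults are illustrative, not a real training recipe):

```python
import argparse

from accelerate import Accelerator


def parse_args():
    parser = argparse.ArgumentParser(description="Minimal Diffusers training example skeleton.")
    parser.add_argument("--pretrained_model_name_or_path", type=str, required=True)
    parser.add_argument("--train_data_dir", type=str, required=True)
    parser.add_argument("--output_dir", type=str, default="trained-model")
    parser.add_argument("--learning_rate", type=float, default=1e-4)
    parser.add_argument("--num_train_epochs", type=int, default=1)
    return parser.parse_args()


def main():
    args = parse_args()
    accelerator = Accelerator()
    # ... build the model, dataset, and optimizer here, wrap them with
    # accelerator.prepare(...), run the training loop, then save to args.output_dir ...
    accelerator.print(f"Would train {args.pretrained_model_name_or_path} for {args.num_train_epochs} epoch(s).")


if __name__ == "__main__":
    main()
```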
To contribute an example, it is highly recommended to look at already existing examples such as [dreambooth](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth.py) to get an idea of what they should look like.
We strongly advise contributors to make use of the [Accelerate library](https://github.com/huggingface/accelerate) as it's tightly integrated
with Diffusers.
Once an example script works, please make sure to add a comprehensive `README.md` that states how to use the example exactly. This README should include:
- An example command showing how to run the example script, as shown, for example, [here](https://github.com/huggingface/diffusers/tree/main/examples/dreambooth#running-locally-with-pytorch).
- A link to some training results (logs, models, ...) that show what the user can expect, as shown, for example, [here](https://api.wandb.ai/report/patrickvonplaten/xm6cd5q5).
- If you are adding a non-official/research training example, **please don't forget** to add a sentence stating that you are maintaining this training example, including your git handle, as shown [here](https://github.com/huggingface/diffusers/tree/main/examples/research_projects/intel_opts#diffusers-examples-with-intel-optimizations).
If you are contributing to the official training examples, please also make sure to add a test to [examples/test_examples.py](https://github.com/huggingface/diffusers/blob/main/examples/test_examples.py). This is not necessary for non-official training examples.
### 8. Fixing a "Good second issue"
*Good second issues* are marked by the [Good second issue](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22Good+second+issue%22) label. Good second issues are
usually more complicated to solve than [Good first issues](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22).
The issue description usually gives less guidance on how to fix the issue and requires
a decent understanding of the library by the interested contributor.
If you are interested in tackling a good second issue, feel free to open a PR to fix it and link the PR to the issue. If you see that a PR has already been opened for this issue but did not get merged, have a look to understand why it wasn't merged and try to open an improved PR.
Good second issues are usually more difficult to get merged compared to good first issues, so don't hesitate to ask for help from the core maintainers. If your PR is almost finished, the core maintainers can also jump into your PR and commit to it in order to get it merged.
### 9. Adding pipelines, models, schedulers
Pipelines, models, and schedulers are the most important pieces of the Diffusers library.
They provide easy access to state-of-the-art diffusion technologies and thus allow the community to
build powerful generative AI applications.
By adding a new model, pipeline, or scheduler you might enable a new powerful use case for any of the user interfaces relying on Diffusers which can be of immense value for the whole generative AI ecosystem.
Diffusers has a couple of open feature requests for all three components - feel free to look through them
if you don't yet know which specific component you would like to add:
- [Model or pipeline](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+pipeline%2Fmodel%22)
- [Scheduler](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+scheduler%22)
Before adding any of the three components, it is strongly recommended that you give the [Philosophy guide](https://github.com/huggingface/diffusers/blob/main/PHILOSOPHY.md) a read to better understand the design of any of the three components. Please be aware that
we cannot merge model, scheduler, or pipeline additions that strongly diverge from our design philosophy
as it will lead to API inconsistencies. If you fundamentally disagree with a design choice, please
open a [Feedback issue](https://github.com/huggingface/diffusers/issues/new?assignees=&labels=&template=feedback.md&title=) instead so that it can be discussed whether a certain design
pattern/design choice shall be changed everywhere in the library and whether we shall update our design philosophy. Consistency across the library is very important for us.
Please make sure to add links to the original codebase/paper to the PR and ideally also ping the
original author directly on the PR so that they can follow the progress and potentially help with questions.
If you are unsure or stuck in the PR, don't hesitate to leave a message to ask for a first review or help.
## How to write a good issue
**The better your issue is written, the higher the chances that it will be quickly resolved.**
1. Make sure that you've used the correct template for your issue. You can pick between *Bug Report*, *Feature Request*, *Feedback about API Design*, *New model/pipeline/scheduler addition*, *Forum*, or a blank issue. Make sure to pick the correct one when opening [a new issue](https://github.com/huggingface/diffusers/issues/new/choose).
2. **Be precise**: Give your issue a fitting title. Try to formulate your issue description as simply as possible. The more precise you are when submitting an issue, the less time it takes to understand the issue and potentially solve it. Make sure to open an issue for one issue only and not for multiple issues. If you found multiple issues, simply open multiple issues. If your issue is a bug, try to be as precise as possible about what bug it is - you should not just write "Error in diffusers".
3. **Reproducibility**: No reproducible code snippet == no solution. If you encounter a bug, maintainers **have to be able to reproduce** it. Make sure that you include a code snippet that can be copy-pasted into a Python interpreter to reproduce the issue. Make sure that your code snippet works, *i.e.* that there are no missing imports or missing links to images, ... Your issue should contain an error message **and** a code snippet that can be copy-pasted without any changes to reproduce the exact same error message. If your issue is using local model weights or local data that cannot be accessed by the reader, the issue cannot be solved. If you cannot share your data or model, try to make a dummy model or dummy data.
4. **Minimalistic**: Try to help the reader as much as you can to understand the issue as quickly as possible by staying as concise as possible. Remove all code / all information that is irrelevant to the issue. If you have found a bug, try to create the easiest code example you can to demonstrate your issue, do not just dump your whole workflow into the issue as soon as you have found a bug. E.g., if you train a model and get an error at some point during the training, you should first try to understand what part of the training code is responsible for the error and try to reproduce it with a couple of lines. Try to use dummy data instead of full datasets.
5. Add links. If you are referring to a specific name, method, or model, make sure to provide a link so that the reader can better understand what you mean. If you are referring to a specific PR or issue, make sure to link it to your issue. Do not assume that the reader knows what you are talking about. The more links you add to your issue the better.
6. Formatting. Make sure to nicely format your issue by formatting code into Python code syntax, and error messages into normal code syntax. See the [official GitHub formatting docs](https://docs.github.com/en/get-started/writing-on-github/getting-started-with-writing-and-formatting-on-github/basic-writing-and-formatting-syntax) for more information.
7. Think of your issue not as a ticket to be solved, but rather as a beautiful entry to a well-written encyclopedia. Every added issue is a contribution to publicly available knowledge. By adding a nicely written issue you not only make it easier for maintainers to solve your issue, but you are helping the whole community to better understand a certain aspect of the library.
## How to write a good PR
1. Be a chameleon. Understand existing design patterns and syntax and make sure your code additions flow seamlessly into the existing code base. Pull requests that significantly diverge from existing design patterns or user interfaces will not be merged.
2. Be laser focused. A pull request should solve one problem and one problem only. Make sure to not fall into the trap of "also fixing another problem while we're adding it". It is much more difficult to review pull requests that solve multiple, unrelated problems at once.
3. If helpful, try to add a code snippet that displays an example of how your addition can be used.
4. The title of your pull request should be a summary of its contribution.
5. If your pull request addresses an issue, please mention the issue number in
the pull request description to make sure they are linked (and people
consulting the issue know you are working on it);
6. To indicate a work in progress please prefix the title with `[WIP]`. These
are useful to avoid duplicated work, and to differentiate it from PRs ready
to be merged;
7. Try to formulate and format your text as explained in [How to write a good issue](#how-to-write-a-good-issue).
8. Make sure existing tests pass;
9. Add high-coverage tests. No quality testing = no merge.
- If you are adding new `@slow` tests, make sure they pass using
`RUN_SLOW=1 python -m pytest tests/test_my_new_model.py`.
CircleCI does not run the slow tests, but GitHub Actions does every night!
10. All public methods must have informative docstrings that work nicely with markdown. See [`pipeline_latent_diffusion.py`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion.py) for an example.
11. Due to the rapidly growing repository, it is important to make sure that no files that would significantly weigh down the repository are added. This includes images, videos, and other non-text files. We prefer to leverage a hf.co hosted `dataset` like
[`hf-internal-testing`](https://huggingface.co/hf-internal-testing) or [huggingface/documentation-images](https://huggingface.co/datasets/huggingface/documentation-images) to place these files.
If your contribution is external, feel free to add the images to your PR and ask a Hugging Face member to migrate them
to this dataset.
## How to open a PR
Before writing code, we strongly advise you to search through the existing PRs or
issues to make sure that nobody is already working on the same thing. If you are
unsure, it is always a good idea to open an issue to get some feedback.

You will need basic `git` proficiency to be able to contribute to
🧨 Diffusers. `git` is not the easiest tool to use but it has the greatest
manual. Type `git --help` in a shell and enjoy. If you prefer books, [Pro
Git](https://git-scm.com/book/en/v2) is a very good reference.
Follow these steps to start contributing ([supported Python versions](https://github.com/huggingface/diffusers/blob/main/setup.py#L265)):
1. Fork the [repository](https://github.com/huggingface/diffusers) by
clicking on the 'Fork' button on the repository's page. This creates a copy of the code
under your GitHub user account.
2. Clone your fork to your local disk, and add the base repository as a remote:
```bash
$ git clone git@github.com:<your GitHub handle>/diffusers.git
$ cd diffusers
$ git remote add upstream https://github.com/huggingface/diffusers.git
```
3. Create a new branch to hold your development changes:
```bash
$ git checkout -b a-descriptive-name-for-my-changes
```
**Do not** work on the `main` branch.
4. Set up a development environment by running the following command in a virtual environment:
```bash
$ pip install -e ".[dev]"
```
(If diffusers was already installed in the virtual environment, remove
it with `pip uninstall diffusers` before reinstalling it in editable
mode with the `-e` flag.)
To run the full test suite, you might need the additional dependency on `transformers` and `datasets` which requires a separate source
install:
```bash
$ git clone https://github.com/huggingface/transformers
$ cd transformers
$ pip install -e .
```
```bash
$ git clone https://github.com/huggingface/datasets
$ cd datasets
$ pip install -e .
```
If you have already cloned the repo, you might need to `git pull` to get the most recent changes in the
library.
5. Develop the features on your branch.
As you work on the features, you should make sure that the test suite
passes. You should run the tests impacted by your changes like this:
```bash
$ pytest tests/<TEST_TO_RUN>.py
```
Before you run the tests, please make sure you install the dependencies required for testing. You can do so
with this command:
```bash
$ pip install -e ".[test]"
```
For more information about tests, check out the
[dedicated documentation](https://huggingface.co/docs/diffusers/testing)
You can also run the full test suite with the following command, but it takes
a beefy machine to produce a result in a decent amount of time now that
Diffusers has grown a lot. Here is the command for it:
```bash
$ make test
```
🧨 Diffusers relies on `ruff` and `isort` to format its source code
consistently. After you make changes, apply automatic style corrections and code verifications
that can't be automated in one go with:
```bash
$ make style
```
🧨 Diffusers also uses `ruff` and a few custom scripts to check for coding mistakes. Quality
control runs in CI, however, you can also run the same checks with:
```bash
$ make quality
```
Once you're happy with your changes, add changed files using `git add` and
make a commit with `git commit` to record your changes locally:
```bash
$ git add modified_file.py
$ git commit -m "A descriptive message about your changes."
```
It is a good idea to sync your copy of the code with the original
repository regularly. This way you can quickly account for changes:
```bash
$ git pull upstream main
```
Push the changes to your account using:
```bash
$ git push -u origin a-descriptive-name-for-my-changes
```
6. Once you are satisfied, go to the
webpage of your fork on GitHub. Click on 'Pull request' to send your changes
to the project maintainers for review.
7. It's ok if maintainers ask you for changes. It happens to core contributors
too! So everyone can see the changes in the Pull request, work in your local
branch and push the changes to your fork. They will automatically appear in
the pull request.
### Tests
From the root of the repository, here's how to run tests with `pytest` for the library:

```bash
$ python -m pytest -n auto --dist=loadfile -s -v ./tests/
```
In fact, that's how `make test` is implemented!
You can specify a smaller set of tests in order to test only the feature
you're working on.
By default, slow tests are skipped. Set the `RUN_SLOW` environment variable to `yes` to run them. This will download many gigabytes of models, so make sure you have enough disk space and a good Internet connection, or a lot of patience!

```bash
$ RUN_SLOW=yes python -m pytest -n auto --dist=loadfile -s -v ./tests/
```
`unittest` is fully supported, here's how to run tests with it:
```bash
$ python -m unittest discover -s tests -t . -v
$ python -m unittest discover -s examples -t examples -v
```
**This guide was heavily inspired by the awesome [scikit-learn guide to contributing](https://github.com/scikit-learn/scikit-learn/blob/main/CONTRIBUTING.md).**
### Syncing forked main with upstream (HuggingFace) main
To avoid pinging the upstream repository which adds reference notes to each upstream PR and sends unnecessary notifications to the developers involved in these PRs,
when syncing the main branch of a forked repository, please, follow these steps:
1. When possible, avoid syncing with the upstream using a branch and PR on the forked repository. Instead, merge directly into the forked main.
2. If a PR is absolutely necessary, use the following steps after checking out your branch:
```bash
$ git checkout -b your-branch-for-syncing
$ git pull --squash --no-commit upstream main
$ git commit -m '<your message without GitHub references>'
$ git push --set-upstream origin your-branch-for-syncing
```
### Style guide
For documentation strings, 🧨 Diffusers follows the [Google style](https://google.github.io/styleguide/pyguide.html).
Makefile
# make sure to test the local checkout in scripts and not the pre-installed one (don't use quotes!)
export PYTHONPATH = src
check_dirs := examples scripts src tests utils benchmarks
modified_only_fixup:
$(eval modified_py_files := $(shell python utils/get_modified_files.py $(check_dirs)))
@if test -n "$(modified_py_files)"; then \
echo "Checking/fixing $(modified_py_files)"; \
ruff check $(modified_py_files) --fix; \
ruff format $(modified_py_files);\
else \
echo "No library .py files were modified"; \
fi
# this target runs checks on all files
quality:
doc-builder style src/diffusers docs/source --max_len 119 --check_only --path_to_docs docs/source
ruff check $(check_dirs) setup.py
ruff format --check $(check_dirs) setup.py
python utils/check_doc_toc.py
# Format source code automatically and check if there are any problems left that need manual fixing
extra_style_checks:
python utils/custom_init_isort.py
doc-builder style src/diffusers docs/source --max_len 119 --path_to_docs docs/source
python utils/check_doc_toc.py --fix_and_overwrite
# this target runs checks on all files and potentially modifies some of them
style:
ruff check $(check_dirs) setup.py --fix
ruff format $(check_dirs) setup.py
${MAKE} autogenerate_code
${MAKE} extra_style_checks
# Run tests for examples
test-examples:
python -m pytest -n auto --dist=loadfile -s -v ./examples/
# Release stuff
PHILOSOPHY.md

<!--Copyright 2024 The HuggingFace Team. All rights reserved.
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Philosophy
🧨 Diffusers provides **state-of-the-art** pretrained diffusion models across multiple modalities.
Its purpose is to serve as a **modular toolbox** for both inference and training.
We aim at building a library that stands the test of time and therefore take API design very seriously.
In a nutshell, Diffusers is built to be a natural extension of PyTorch. Therefore, most of our design choices are based on [PyTorch's Design Principles](https://pytorch.org/docs/stable/community/design.html#pytorch-design-philosophy). Let's go over the most important ones:
## Usability over Performance
- While Diffusers has many built-in performance-enhancing features (see [Memory and Speed](https://huggingface.co/docs/diffusers/optimization/fp16)), models are always loaded with the highest precision and lowest optimization. Therefore, by default diffusion pipelines are always instantiated on CPU with float32 precision if not otherwise defined by the user. This ensures usability across different platforms and accelerators and means that no complex installations are required to run the library.
- Diffusers aims to be a **light-weight** package and therefore has very few required dependencies, but many soft dependencies that can improve performance (such as `accelerate`, `safetensors`, `onnx`, etc...). We strive to keep the library as lightweight as possible so that it can be added without much concern as a dependency on other packages.
- Diffusers prefers simple, self-explainable code over condensed, magic code. This means that short-hand code syntaxes such as lambda functions and advanced PyTorch operators are often not desired.
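As a concrete illustration of the first point above (a sketch; the checkpoint id is only a placeholder): pipelines load on CPU in float32 by default, and faster settings are an explicit opt-in:

```python
import torch
from diffusers import DiffusionPipeline

# Default behaviour: CPU placement and float32 weights, favouring portability over speed.
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Better performance is an explicit user decision, e.g. half precision on a GPU.
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe.to("cuda")
```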
## Simple over easy
As PyTorch states, **explicit is better than implicit** and **simple is better than complex**. This design philosophy is reflected in multiple parts of the library:
- We follow PyTorch's API with methods like [`DiffusionPipeline.to`](https://huggingface.co/docs/diffusers/main/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.to) to let the user handle device management.
- Raising concise error messages is preferred to silently correct erroneous input. Diffusers aims at teaching the user, rather than making the library as easy to use as possible.
- Complex model vs. scheduler logic is exposed instead of magically handled inside. Schedulers/Samplers are separated from diffusion models with minimal dependencies on each other. This forces the user to write the unrolled denoising loop. However, the separation allows for easier debugging and gives the user more control over adapting the denoising process or switching out diffusion models or schedulers.
- Separately trained components of the diffusion pipeline, *e.g.* the text encoder, the UNet, and the variational autoencoder, each have their own model class. This forces the user to handle the interaction between the different model components, and the serialization format separates the model components into different files. However, this allows for easier debugging and customization. DreamBooth or Textual Inversion training
is very simple thanks to Diffusers' ability to separate single components of the diffusion pipeline.
## Tweakable, contributor-friendly over abstraction
For large parts of the library, Diffusers adopts an important design principle of the [Transformers library](https://github.com/huggingface/transformers), which is to prefer copy-pasted code over hasty abstractions. This design principle is very opinionated and stands in stark contrast to popular design principles such as [Don't repeat yourself (DRY)](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself).
In short, just like Transformers does for modeling files, Diffusers prefers to keep an extremely low level of abstraction and very self-contained code for pipelines and schedulers.
Functions, long code blocks, and even classes can be copied across multiple files which at first can look like a bad, sloppy design choice that makes the library unmaintainable.
**However**, this design has proven to be extremely successful for Transformers and makes a lot of sense for community-driven, open-source machine learning libraries because:
- Machine Learning is an extremely fast-moving field in which paradigms, model architectures, and algorithms are changing rapidly, which therefore makes it very difficult to define long-lasting code abstractions.
- Machine Learning practitioners like to be able to quickly tweak existing code for ideation and research and therefore prefer self-contained code over one that contains many abstractions.
- Open-source libraries rely on community contributions and therefore must build a library that is easy to contribute to. The more abstract the code, the more dependencies, the harder to read, and the harder to contribute to. Contributors simply stop contributing to very abstract libraries out of fear of breaking vital functionality. If contributing to a library cannot break other fundamental code, not only is it more inviting for potential new contributors, but it is also easier to review and contribute to multiple parts in parallel.
At Hugging Face, we call this design the **single-file policy** which means that almost all of the code of a certain class should be written in a single, self-contained file. To read more about the philosophy, you can have a look
at [this blog post](https://huggingface.co/blog/transformers-design-philosophy).
In Diffusers, we follow this philosophy for both pipelines and schedulers, but only partly for diffusion models. The reason we don't follow this design fully for diffusion models is that almost all diffusion pipelines, such
as [DDPM](https://huggingface.co/docs/diffusers/api/pipelines/ddpm), [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/overview#stable-diffusion-pipelines), [unCLIP (DALL·E 2)](https://huggingface.co/docs/diffusers/api/pipelines/unclip) and [Imagen](https://imagen.research.google/), rely on the same diffusion model, the [UNet](https://huggingface.co/docs/diffusers/api/models/unet2d-cond).
Great, now you should have generally understood why 🧨 Diffusers is designed the way it is 🤗.
We try to apply these design principles consistently across the library. Nevertheless, there are some minor exceptions to the philosophy or some unlucky design choices. If you have feedback regarding the design, we would ❤️ to hear it [directly on GitHub](https://github.com/huggingface/diffusers/issues/new?assignees=&labels=&template=feedback.md&title=).
## Design Philosophy in Details
Now, let's look a bit into the nitty-gritty details of the design philosophy. Diffusers essentially consists of three major classes: [pipelines](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines), [models](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models), and [schedulers](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers).
Let's walk through more detailed design decisions for each class.
### Pipelines
Pipelines are designed to be easy to use (therefore do not follow [*Simple over easy*](#simple-over-easy) 100%), are not feature complete, and should loosely be seen as examples of how to use [models](#models) and [schedulers](#schedulers) for inference.
The following design principles are followed:
- Pipelines follow the single-file policy. All pipelines can be found in individual directories under `src/diffusers/pipelines`. One pipeline folder corresponds to one diffusion paper/project/release. Multiple pipeline files can be gathered in one pipeline folder, as is done for [`src/diffusers/pipelines/stable-diffusion`](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/stable_diffusion). If pipelines share similar functionality, one can make use of the [#Copied from mechanism](https://github.com/huggingface/diffusers/blob/125d783076e5bd9785beb05367a2d2566843a271/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py#L251).
- Pipelines all inherit from [`DiffusionPipeline`].
- Every pipeline consists of different model and scheduler components that are documented in the [`model_index.json` file](https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/model_index.json), are accessible under the same name as attributes of the pipeline, and can be shared between pipelines with the [`DiffusionPipeline.components`](https://huggingface.co/docs/diffusers/main/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.components) property.
- Every pipeline should be loadable via the [`DiffusionPipeline.from_pretrained`](https://huggingface.co/docs/diffusers/main/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.from_pretrained) function.
- Pipelines should be used **only** for inference.
- Pipelines should be very readable, self-explanatory, and easy to tweak.
- Pipelines should be designed to build on top of each other and be easy to integrate into higher-level APIs.
- Pipelines are **not** intended to be feature-complete user interfaces. For feature-complete user interfaces one should rather have a look at [InvokeAI](https://github.com/invoke-ai/InvokeAI), [Diffuzers](https://github.com/abhishekkrthakur/diffuzers), and [lama-cleaner](https://github.com/Sanster/lama-cleaner).
- Every pipeline should have one and only one way to run it via a `__call__` method. The naming of the `__call__` arguments should be shared across all pipelines.
- Pipelines should be named after the task they are intended to solve.
- In almost all cases, novel diffusion pipelines shall be implemented in a new pipeline folder/file.
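A brief usage sketch of these principles (the checkpoint id is a placeholder): pipelines load with `from_pretrained`, expose their components as attributes, share them via `components`, and run through a single `__call__`:

```python
from diffusers import DiffusionPipeline, StableDiffusionImg2ImgPipeline

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Components declared in model_index.json are plain attributes of the pipeline...
unet, scheduler = pipe.unet, pipe.scheduler

# ...and can be handed to another pipeline instead of being loaded a second time.
img2img = StableDiffusionImg2ImgPipeline(**pipe.components)

# A single __call__ runs the pipeline.
image = pipe("a photo of an astronaut riding a horse").images[0]
```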
### Models
Models are designed as configurable toolboxes that are natural extensions of [PyTorch's Module class](https://pytorch.org/docs/stable/generated/torch.nn.Module.html). They only partly follow the **single-file policy**.
The following design principles are followed:
- Models correspond to **a type of model architecture**. *E.g.* the [`UNet2DConditionModel`] class is used for all UNet variations that expect 2D image inputs and are conditioned on some context.
- All models can be found in [`src/diffusers/models`](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models) and every model architecture shall be defined in its own file, e.g. [`unet_2d_condition.py`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unet_2d_condition.py), [`transformer_2d.py`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformer_2d.py), etc...
- Models **do not** follow the single-file policy and should make use of smaller model building blocks, such as [`attention.py`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention.py), [`resnet.py`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/resnet.py), [`embeddings.py`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/embeddings.py), etc... **Note**: This is in stark contrast to Transformers' modeling files and shows that models do not really follow the single-file policy.
- Models intend to expose complexity, just like PyTorch's `Module` class, and give clear error messages.
- Models all inherit from `ModelMixin` and `ConfigMixin`.
- Models can be optimized for performance when it doesn't demand major code changes, keeps backward compatibility, and gives significant memory or compute gains.
- Models should by default have the highest precision and lowest performance setting.
- To integrate new model checkpoints whose general architecture can be classified as an architecture that already exists in Diffusers, the existing model architecture shall be adapted to make it work with the new checkpoint. One should only create a new file if the model architecture is fundamentally different.
- Models should be designed to be easily extendable to future changes. This can be achieved by limiting public function arguments, configuration arguments, and "foreseeing" future changes, *e.g.* it is usually better to add `string` "...type" arguments that can easily be extended to new future types instead of boolean `is_..._type` arguments. Only the minimum amount of changes shall be made to existing architectures to make a new model checkpoint work.
- The model design is a difficult trade-off between keeping code readable and concise and supporting many model checkpoints. For most parts of the modeling code, classes shall be adapted for new model checkpoints, while there are some exceptions where it is preferred to add new classes to make sure the code is kept concise and
readable long-term, such as [UNet blocks](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unet_2d_blocks.py) and [Attention processors](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
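For instance (a sketch; the repo id is a placeholder), the same `UNet2DConditionModel` class is reused for many checkpoints and is driven by its configuration rather than by subclassing:

```python
from diffusers import UNet2DConditionModel

# One architecture class covers every 2D conditional UNet checkpoint.
unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")

# Behaviour differences between checkpoints live in the config, not in new classes.
print(unet.config.sample_size, unet.config.cross_attention_dim)
```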
### Schedulers
Schedulers are responsible for guiding the denoising process at inference time as well as for defining the noise schedule for training. They are designed as individual classes with loadable configuration files and strongly follow the **single-file policy**.
The following design principles are followed:
- All schedulers are found in [`src/diffusers/schedulers`](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers).
- Schedulers are **not** allowed to import from large utils files and shall be kept very self-contained.
- One scheduler Python file corresponds to one scheduler algorithm (as might be defined in a paper).
- If schedulers share similar functionalities, we can make use of the `#Copied from` mechanism.
- Schedulers all inherit from `SchedulerMixin` and `ConfigMixin`.
- Schedulers can be easily swapped out with the [`ConfigMixin.from_config`](https://huggingface.co/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin.from_config) method as explained in detail [here](./docs/source/en/using-diffusers/schedulers.md).
- Every scheduler has to have a `set_timesteps`, and a `step` function. `set_timesteps(...)` sets the number of inference steps and has to be called before every denoising process, *i.e.* before `step(...)` is called.
- Every scheduler exposes the timesteps to be "looped over" via a `timesteps` attribute, which is an array of timesteps the model will be called upon.
- The `step(...)` function takes a predicted model output and the "current" sample (x_t) and returns the "previous", slightly more denoised sample (x_t-1).
- Given the complexity of diffusion schedulers, the `step` function does not expose all the complexity and can be a bit of a "black box".
- In almost all cases, novel schedulers shall be implemented in a new scheduling file.
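Put together, the scheduler contract above looks roughly like the following schematic loop (a sketch; random tensors stand in for a real model and sample):

```python
import torch
from diffusers import DDPMScheduler

scheduler = DDPMScheduler(num_train_timesteps=1000)
scheduler.set_timesteps(50)  # must be called before the denoising loop

sample = torch.randn(1, 3, 64, 64)
for t in scheduler.timesteps:
    model_output = torch.randn_like(sample)  # stand-in for model(sample, t).sample
    # step() turns the model prediction and x_t into the slightly less noisy x_(t-1).
    sample = scheduler.step(model_output, t, sample).prev_sample
```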
README.md

<!---
<!---
Copyright 2022 - The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<p align="center">
<br>
<img src="https://github.com/huggingface/diffusers/raw/main/docs/source/imgs/diffusers_library.jpg" width="400"/>
<img src="https://raw.githubusercontent.com/huggingface/diffusers/main/docs/source/en/imgs/diffusers_library.jpg" width="400"/>
<br>
<p>
<p align="center">
@@ -10,465 +26,202 @@
<a href="https://github.com/huggingface/diffusers/releases">
<img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/diffusers.svg">
</a>
<a href="https://pepy.tech/project/diffusers">
<img alt="GitHub release" src="https://static.pepy.tech/badge/diffusers/month">
</a>
<a href="CODE_OF_CONDUCT.md">
<img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-2.0-4baaaa.svg">
<img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-2.1-4baaaa.svg">
</a>
<a href="https://twitter.com/diffuserslib">
<img alt="X account" src="https://img.shields.io/twitter/url/https/twitter.com/diffuserslib.svg?style=social&label=Follow%20%40diffuserslib">
</a>
</p>
🤗 Diffusers provides pretrained diffusion models across multiple modalities, such as vision and audio, and serves
as a modular toolbox for inference and training of diffusion models.
🤗 Diffusers is the go-to library for state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules. Whether you're looking for a simple inference solution or training your own diffusion models, 🤗 Diffusers is a modular toolbox that supports both. Our library is designed with a focus on [usability over performance](https://huggingface.co/docs/diffusers/conceptual/philosophy#usability-over-performance), [simple over easy](https://huggingface.co/docs/diffusers/conceptual/philosophy#simple-over-easy), and [customizability over abstractions](https://huggingface.co/docs/diffusers/conceptual/philosophy#tweakable-contributorfriendly-over-abstraction).
More precisely, 🤗 Diffusers offers:
🤗 Diffusers offers three core components:
- State-of-the-art diffusion pipelines that can be run in inference with just a couple of lines of code (see [src/diffusers/pipelines](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines)). Check [this overview](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/README.md#pipelines-summary) to see all supported pipelines and their corresponding official papers.
- Various noise schedulers that can be used interchangeably for the preferred speed vs. quality trade-off in inference (see [src/diffusers/schedulers](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers)).
- Multiple types of models, such as UNet, can be used as building blocks in an end-to-end diffusion system (see [src/diffusers/models](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models)).
- Training examples to show how to train the most popular diffusion model tasks (see [examples](https://github.com/huggingface/diffusers/tree/main/examples), *e.g.* [unconditional-image-generation](https://github.com/huggingface/diffusers/tree/main/examples/unconditional_image_generation)).
- State-of-the-art [diffusion pipelines](https://huggingface.co/docs/diffusers/api/pipelines/overview) that can be run in inference with just a few lines of code.
- Interchangeable noise [schedulers](https://huggingface.co/docs/diffusers/api/schedulers/overview) for different diffusion speeds and output quality.
- Pretrained [models](https://huggingface.co/docs/diffusers/api/models/overview) that can be used as building blocks, and combined with schedulers, for creating your own end-to-end diffusion systems.
## Installation
### For PyTorch
We recommend installing 🤗 Diffusers in a virtual environment from PyPI or Conda. For more details about installing [PyTorch](https://pytorch.org/get-started/locally/) and [Flax](https://flax.readthedocs.io/en/latest/#installation), please refer to their official documentation.
### PyTorch
With `pip` (official package):
**With `pip`**
```bash
pip install --upgrade diffusers[torch]
```
**With `conda`**
With `conda` (maintained by the community):
```sh
conda install -c conda-forge diffusers
```
### For Flax
### Flax
**With `pip`**
With `pip` (official package):
```bash
pip install --upgrade diffusers[flax]
```
**Apple Silicon (M1/M2) support**
### Apple Silicon (M1/M2) support
Please, refer to [the documentation](https://huggingface.co/docs/diffusers/optimization/mps).
Please refer to the [How to use Stable Diffusion in Apple Silicon](https://huggingface.co/docs/diffusers/optimization/mps) guide.
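As a minimal sketch (assuming a recent PyTorch build with MPS support), running a pipeline on Apple Silicon mostly amounts to moving it to the `mps` device:
```python
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("mps")  # use the Metal Performance Shaders backend on Apple Silicon
image = pipe("a photo of an astronaut riding a horse on mars").images[0]
```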
## Contributing
## Quickstart
We ❤️ contributions from the open-source community!
Generating outputs is super easy with 🤗 Diffusers. To generate an image from text, use the `from_pretrained` method to load any pretrained diffusion model (browse the [Hub](https://huggingface.co/models?library=diffusers&sort=downloads) for 19000+ checkpoints):
```python
from diffusers import DiffusionPipeline
import torch
pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipeline.to("cuda")
pipeline("An image of a squirrel in Picasso style").images[0]
```
You can also dig into the models and schedulers toolbox to build your own diffusion system:
```python
from diffusers import DDPMScheduler, UNet2DModel
from PIL import Image
import torch
scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256")
model = UNet2DModel.from_pretrained("google/ddpm-cat-256").to("cuda")
scheduler.set_timesteps(50)
sample_size = model.config.sample_size
noise = torch.randn((1, 3, sample_size, sample_size), device="cuda")
input = noise
for t in scheduler.timesteps:
with torch.no_grad():
noisy_residual = model(input, t).sample
prev_noisy_sample = scheduler.step(noisy_residual, t, input).prev_sample
input = prev_noisy_sample
image = (input / 2 + 0.5).clamp(0, 1)
image = image.cpu().permute(0, 2, 3, 1).numpy()[0]
image = Image.fromarray((image * 255).round().astype("uint8"))
image
```
Check out the [Quickstart](https://huggingface.co/docs/diffusers/quicktour) to launch your diffusion journey today!
## How to navigate the documentation
| **Documentation** | **What can I learn?** |
|---------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [Tutorial](https://huggingface.co/docs/diffusers/tutorials/tutorial_overview) | A basic crash course for learning how to use the library's most important features like using models and schedulers to build your own diffusion system, and training your own diffusion model. |
| [Loading](https://huggingface.co/docs/diffusers/using-diffusers/loading_overview) | Guides for how to load and configure all the components (pipelines, models, and schedulers) of the library, as well as how to use different schedulers. |
| [Pipelines for inference](https://huggingface.co/docs/diffusers/using-diffusers/pipeline_overview) | Guides for how to use pipelines for different inference tasks, batched generation, controlling generated outputs and randomness, and how to contribute a pipeline to the library. |
| [Optimization](https://huggingface.co/docs/diffusers/optimization/opt_overview) | Guides for how to optimize your diffusion model to run faster and consume less memory. |
| [Training](https://huggingface.co/docs/diffusers/training/overview) | Guides for how to train a diffusion model for different tasks with different training techniques. |
## Contribution
We ❤️ contributions from the open-source community!
If you want to contribute to this library, please check out our [Contribution guide](https://github.com/huggingface/diffusers/blob/main/CONTRIBUTING.md).
You can look out for [issues](https://github.com/huggingface/diffusers/issues) you'd like to tackle to contribute to the library.
- See [Good first issues](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) for general opportunities to contribute
- See [New model/pipeline](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+pipeline%2Fmodel%22) to contribute exciting new diffusion models / diffusion pipelines
- See [New scheduler](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+scheduler%22)
Also, say 👋 in our public Discord channel <a href="https://discord.gg/G7tWnz98XR"><img alt="Join us on Discord" src="https://img.shields.io/discord/823813159592001537?color=5865F2&logo=discord&logoColor=white"></a>. We discuss the hottest trends about diffusion models, help each other with contributions, personal projects or
just hang out ☕.
## Quickstart
In order to get started, we recommend taking a look at two notebooks:
- The [Getting started with Diffusers](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb) notebook, which showcases an end-to-end example of usage for diffusion models, schedulers and pipelines.
Take a look at this notebook to learn how to use the pipeline abstraction, which takes care of everything (model, scheduler, noise handling) for you, and also to understand each independent building block in the library.
- The [Training a diffusers model](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb) notebook summarizes diffusion models training methods. This notebook takes a step-by-step approach to training your
diffusion models on an image dataset, with explanatory graphics.
## Stable Diffusion is fully compatible with `diffusers`!
Stable Diffusion is a text-to-image latent diffusion model created by the researchers and engineers from [CompVis](https://github.com/CompVis), [Stability AI](https://stability.ai/), [LAION](https://laion.ai/) and [RunwayML](https://runwayml.com/). It's trained on 512x512 images from a subset of the [LAION-5B](https://laion.ai/blog/laion-5b/) database. This model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With its 860M UNet and 123M text encoder, the model is relatively lightweight and runs on a GPU with at least 4GB VRAM.
See the [model card](https://huggingface.co/CompVis/stable-diffusion) for more information.
You need to accept the model license before downloading or using the Stable Diffusion weights. Please visit the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5), read the license carefully and tick the checkbox if you agree. You have to be a registered user on the 🤗 Hugging Face Hub, and you'll also need an access token for the code to work. For more information on access tokens, please refer to [this section](https://huggingface.co/docs/hub/security-tokens) of the documentation.
### Text-to-Image generation with Stable Diffusion
First, let's install the required libraries:
```bash
pip install --upgrade diffusers transformers scipy
```
Run this command to log in with your HF Hub token if you haven't done so before (you can skip this step if you prefer to run the model locally; in that case follow [this](#running-the-model-locally) instead):
```bash
huggingface-cli login
```
We recommend using the model in [half-precision (`fp16`)](https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/) as it almost always gives the same results as full precision while being roughly twice as fast and requiring half the amount of GPU RAM.
```python
import torch
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, revision="fp16")
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
```
#### Running the model locally
If you don't want to log in to Hugging Face, you can also simply download the model folder
(after having [accepted the license](https://huggingface.co/runwayml/stable-diffusion-v1-5)) and pass
the path to the local folder to the `StableDiffusionPipeline`.
```
git lfs install
git clone https://huggingface.co/runwayml/stable-diffusion-v1-5
```
Assuming the folder is stored locally under `./stable-diffusion-v1-5`, you can also run stable diffusion
without requiring an authentication token:
```python
pipe = StableDiffusionPipeline.from_pretrained("./stable-diffusion-v1-5")
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
```
If you are limited by GPU memory, you might want to consider chunking the attention computation in addition
to using `fp16`.
The following snippet should result in less than 4GB VRAM.
```python
pipe = StableDiffusionPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5",
revision="fp16",
torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
pipe.enable_attention_slicing()
image = pipe(prompt).images[0]
```
If you wish to use a different scheduler (e.g.: DDIM, LMS, PNDM/PLMS), you can instantiate
it before the pipeline and pass it to `from_pretrained`.
```python
from diffusers import LMSDiscreteScheduler
lms = LMSDiscreteScheduler.from_config("CompVis/stable-diffusion-v1-4", subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5",
revision="fp16",
torch_dtype=torch.float16,
scheduler=lms,
)
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
If you want to run Stable Diffusion on CPU or you want to have maximum precision on GPU,
please run the model in the default *full-precision* setting:
```python
# make sure you're logged in with `huggingface-cli login`
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# disable the following line if you run on CPU
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
### JAX/Flax
Diffusers offers a JAX / Flax implementation of Stable Diffusion for very fast inference. JAX shines especially on TPU hardware because each TPU server has 8 accelerators working in parallel, but it runs great on GPUs too.
Running the pipeline with the default PNDMScheduler:
```python
import jax
import numpy as np
from flax.jax_utils import replicate
from flax.training.common_utils import shard
from diffusers import FlaxStableDiffusionPipeline
pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5", revision="flax", dtype=jax.numpy.bfloat16
)
prompt = "a photo of an astronaut riding a horse on mars"
prng_seed = jax.random.PRNGKey(0)
num_inference_steps = 50
num_samples = jax.device_count()
prompt = num_samples * [prompt]
prompt_ids = pipeline.prepare_inputs(prompt)
# shard inputs and rng
params = replicate(params)
prng_seed = jax.random.split(prng_seed, jax.device_count())
prompt_ids = shard(prompt_ids)
images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images
images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:])))
```
**Note**:
If you are limited by TPU memory, please make sure to load the `FlaxStableDiffusionPipeline` in `bfloat16` precision instead of the default `float32` precision as done above. You can do so by telling diffusers to load the weights from the "bf16" branch.
```python
import jax
import numpy as np
from flax.jax_utils import replicate
from flax.training.common_utils import shard
from diffusers import FlaxStableDiffusionPipeline
pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5", revision="bf16", dtype=jax.numpy.bfloat16
)
prompt = "a photo of an astronaut riding a horse on mars"
prng_seed = jax.random.PRNGKey(0)
num_inference_steps = 50
num_samples = jax.device_count()
prompt = num_samples * [prompt]
prompt_ids = pipeline.prepare_inputs(prompt)
# shard inputs and rng
params = replicate(params)
prng_seed = jax.random.split(prng_seed, jax.device_count())
prompt_ids = shard(prompt_ids)
images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images
images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:])))
```
### Image-to-Image text-guided generation with Stable Diffusion
The `StableDiffusionImg2ImgPipeline` lets you pass a text prompt and an initial image to condition the generation of new images.
```python
import requests
import torch
from PIL import Image
from io import BytesIO
from diffusers import StableDiffusionImg2ImgPipeline
# load the pipeline
device = "cuda"
model_id_or_path = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
model_id_or_path,
revision="fp16",
torch_dtype=torch.float16,
)
# or download via git clone https://huggingface.co/runwayml/stable-diffusion-v1-5
# and pass `model_id_or_path="./stable-diffusion-v1-5"`.
pipe = pipe.to(device)
# let's download an initial image
url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
response = requests.get(url)
init_image = Image.open(BytesIO(response.content)).convert("RGB")
init_image = init_image.resize((768, 512))
prompt = "A fantasy landscape, trending on artstation"
images = pipe(prompt=prompt, init_image=init_image, strength=0.75, guidance_scale=7.5).images
images[0].save("fantasy_landscape.png")
```
You can also run this example on colab [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/image_2_image_using_diffusers.ipynb)
### In-painting using Stable Diffusion
The `StableDiffusionInpaintPipeline` lets you edit specific parts of an image by providing a mask and a text prompt. It uses a model optimized for this particular task, whose license you need to accept before use.
Please visit the [model card](https://huggingface.co/runwayml/stable-diffusion-inpainting), read the license carefully and tick the checkbox if you agree. Note that this is an additional license: you need to accept it even if you accepted the text-to-image Stable Diffusion license in the past. You have to be a registered user on the 🤗 Hugging Face Hub, and you'll also need an access token for the code to work. For more information on access tokens, please refer to [this section](https://huggingface.co/docs/hub/security-tokens) of the documentation.
```python
import PIL
import requests
import torch
from io import BytesIO
from diffusers import StableDiffusionInpaintPipeline
def download_image(url):
response = requests.get(url)
return PIL.Image.open(BytesIO(response.content)).convert("RGB")
img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
init_image = download_image(img_url).resize((512, 512))
mask_image = download_image(mask_url).resize((512, 512))
pipe = StableDiffusionInpaintPipeline.from_pretrained(
"runwayml/stable-diffusion-inpainting",
revision="fp16",
torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")
prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0]
```
### Tweak prompts reusing seeds and latents
You can generate your own latents to reproduce results, or tweak your prompt on a specific result you liked. [This notebook](https://github.com/pcuenca/diffusers-examples/blob/main/notebooks/stable-diffusion-seeds.ipynb) shows how to do it step by step. You can also run it in Google Colab [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pcuenca/diffusers-examples/blob/main/notebooks/stable-diffusion-seeds.ipynb).
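A minimal sketch of the idea, assuming a Stable Diffusion pipeline has already been loaded on `"cuda"` as `pipe` (as in the snippets above): re-seeding the generator reproduces the same initial latents, so only the prompt tweak changes the result.
```python
import torch

generator = torch.Generator(device="cuda").manual_seed(1024)
image = pipe("a photo of an astronaut riding a horse on mars", generator=generator).images[0]

# Re-seed with the same value to reuse the same latents, then tweak the prompt.
generator = torch.Generator(device="cuda").manual_seed(1024)
tweaked = pipe("an oil painting of an astronaut riding a horse on mars", generator=generator).images[0]
```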
For more details, check out [the Stable Diffusion notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_diffusion.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_diffusion.ipynb)
and have a look into the [release notes](https://github.com/huggingface/diffusers/releases/tag/v0.2.0).
## Fine-Tuning Stable Diffusion
Fine-tuning techniques make it possible to adapt Stable Diffusion to your own dataset, or add new subjects to it. These are some of the techniques supported in `diffusers`:
Textual Inversion is a technique for capturing novel concepts from a small number of example images in a way that can later be used to control text-to-image pipelines. It does so by learning new 'words' in the embedding space of the pipeline's text encoder. These special words can then be used within text prompts to achieve very fine-grained control of the resulting images.
- Textual Inversion. Capture novel concepts from a small set of sample images, and associate them with new "words" in the embedding space of the text encoder (a short loading sketch follows this list). Please refer to [our training examples](https://github.com/huggingface/diffusers/tree/main/examples/textual_inversion) or [documentation](https://huggingface.co/docs/diffusers/training/text_inversion) to try for yourself.
- Dreambooth. Another technique to capture new concepts in Stable Diffusion. This method fine-tunes the UNet (and, optionally, also the text encoder) of the pipeline to achieve impressive results. Please refer to [our training examples](https://github.com/huggingface/diffusers/tree/main/examples/dreambooth) and [training report](https://wandb.ai/psuraj/dreambooth/reports/Dreambooth-Training-Analysis--VmlldzoyNzk0NDc3) for additional details and training recommendations.
- Full Stable Diffusion fine-tuning. If you have a more sizable dataset with a specific look or style, you can fine-tune Stable Diffusion so that it outputs images following those examples. This was the approach taken to create [a Pokémon Stable Diffusion model](https://huggingface.co/justinpinkney/pokemon-stable-diffusion) (by Justin Pinkney / Lambda Labs) and [a Japanese-specific version of Stable Diffusion](https://huggingface.co/spaces/rinna/japanese-stable-diffusion) (by [Rinna Co.](https://github.com/rinnakk/japanese-stable-diffusion/) and others). You can start at [our text-to-image fine-tuning example](https://github.com/huggingface/diffusers/tree/main/examples/text_to_image) and go from there.
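As a short sketch of the first technique, a learned Textual Inversion embedding can be loaded into an existing pipeline and used via its placeholder token; the concept repository and token below are illustrative:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
# Load a learned concept; its placeholder token (here "<cat-toy>") can then be used in prompts.
pipe.load_textual_inversion("sd-concepts-library/cat-toy")
image = pipe("a photo of a <cat-toy> on a beach").images[0]
```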
## Stable Diffusion Community Pipelines
The release of Stable Diffusion as an open source model has fostered a lot of interesting ideas and experimentation. Our [Community Examples folder](https://github.com/huggingface/diffusers/tree/main/examples/community) contains many ideas worth exploring, like interpolating to create animated videos, using CLIP Guidance for additional prompt fidelity, term weighting, and much more! Take a look and [contribute your own](https://huggingface.co/docs/diffusers/using-diffusers/custom_pipelines).
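For example, a community pipeline can be loaded through the `custom_pipeline` argument of `from_pretrained`; the pipeline name below is illustrative (see the community folder for what is available):
```python
from diffusers import DiffusionPipeline

# Loads the pipeline code from the community folder in addition to the model weights.
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    custom_pipeline="lpw_stable_diffusion",  # community pipeline with long, weighted prompt support
)
```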
## Other Examples
There are many ways to try running Diffusers! Here we outline code-focused tools (primarily using `DiffusionPipeline`s and Google Colab) and interactive web-tools.
### Running Code
If you want to run the code yourself 💻, you can try out:
- [Text-to-Image Latent Diffusion](https://huggingface.co/CompVis/ldm-text2im-large-256)
```python
# !pip install diffusers["torch"] transformers
from diffusers import DiffusionPipeline
device = "cuda"
model_id = "CompVis/ldm-text2im-large-256"
# load model and scheduler
ldm = DiffusionPipeline.from_pretrained(model_id)
ldm = ldm.to(device)
# run pipeline in inference (sample random noise and denoise)
prompt = "A painting of a squirrel eating a burger"
image = ldm([prompt], num_inference_steps=50, eta=0.3, guidance_scale=6).images[0]
# save image
image.save("squirrel.png")
```
- [Unconditional Diffusion with discrete scheduler](https://huggingface.co/google/ddpm-celebahq-256)
```python
# !pip install diffusers["torch"]
from diffusers import DDPMPipeline, DDIMPipeline, PNDMPipeline
model_id = "google/ddpm-celebahq-256"
device = "cuda"
# load model and scheduler
ddpm = DDPMPipeline.from_pretrained(model_id) # you can replace DDPMPipeline with DDIMPipeline or PNDMPipeline for faster inference
ddpm.to(device)
# run pipeline in inference (sample random noise and denoise)
image = ddpm().images[0]
# save image
image.save("ddpm_generated_image.png")
```
- [Unconditional Latent Diffusion](https://huggingface.co/CompVis/ldm-celebahq-256)
- [Unconditional Diffusion with continuous scheduler](https://huggingface.co/google/ncsnpp-ffhq-1024)
**Other Notebooks**:
* [image-to-image generation with Stable Diffusion](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/image_2_image_using_diffusers.ipynb) ![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg),
* [tweak images via repeated Stable Diffusion seeds](https://colab.research.google.com/github/pcuenca/diffusers-examples/blob/main/notebooks/stable-diffusion-seeds.ipynb) ![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg),
### Web Demos
If you just want to play around with some web demos, you can try out the following 🚀 Spaces:
| Model | Hugging Face Spaces |
|-------------------------------- |------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Text-to-Image Latent Diffusion | [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/CompVis/text2img-latent-diffusion) |
| Faces generator | [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/CompVis/celeba-latent-diffusion) |
| DDPM with different schedulers | [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/fusing/celeba-diffusion) |
| Conditional generation from sketch | [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/huggingface/diffuse-the-rest) |
| Composable diffusion | [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/Shuang59/Composable-Diffusion) |
## Definitions
**Models**: Neural network that models $p_\theta(\mathbf{x}_{t-1}|\mathbf{x}_t)$ (see image below) and is trained end-to-end to *denoise* a noisy input to an image.
*Examples*: UNet, Conditioned UNet, 3D UNet, Transformer UNet
<p align="center">
<img src="https://user-images.githubusercontent.com/10695622/174349667-04e9e485-793b-429a-affe-096e8199ad5b.png" width="800"/>
<br>
<em> Figure from DDPM paper (https://arxiv.org/abs/2006.11239). </em>
<p>
**Schedulers**: Algorithm class for both **inference** and **training**.
The class provides functionality to compute the previous image according to the alpha/beta schedule, as well as to predict the noise for training.
*Examples*: [DDPM](https://arxiv.org/abs/2006.11239), [DDIM](https://arxiv.org/abs/2010.02502), [PNDM](https://arxiv.org/abs/2202.09778), [DEIS](https://arxiv.org/abs/2204.13902)
<p align="center">
<img src="https://user-images.githubusercontent.com/10695622/174349706-53d58acc-a4d1-4cda-b3e8-432d9dc7ad38.png" width="800"/>
<br>
<em> Sampling and training algorithms. Figure from DDPM paper (https://arxiv.org/abs/2006.11239). </em>
<p>
**Diffusion Pipeline**: End-to-end pipeline that includes multiple diffusion models, possibly text encoders, ...
*Examples*: Glide, Latent-Diffusion, Imagen, DALL-E 2
<p align="center">
<img src="https://user-images.githubusercontent.com/10695622/174348898-481bd7c2-5457-4830-89bc-f0907756f64c.jpeg" width="550"/>
<br>
<em> Figure from Imagen (https://imagen.research.google/). </em>
<p>
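These three building blocks surface directly as attributes of a loaded pipeline; a minimal sketch (the checkpoint choice is illustrative):
```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
print(pipe.unet.__class__.__name__)       # the denoising model
print(pipe.scheduler.__class__.__name__)  # the noise scheduler
print(list(pipe.components.keys()))       # all components bundled by the pipeline
```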
## Philosophy
- Readability and clarity are preferred over highly optimized code. A strong emphasis is placed on providing readable, intuitive and elementary code design. *E.g.*, the provided [schedulers](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers) are separated from the provided [models](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models) and provide well-commented code that can be read alongside the original paper.
- Diffusers is **modality independent** and focuses on providing pretrained models and tools to build systems that generate **continuous outputs**, *e.g.* vision and audio.
- Diffusion models and schedulers are provided as concise, elementary building blocks. In contrast, diffusion pipelines are a collection of end-to-end diffusion systems that can be used out-of-the-box, should stay as close as possible to their original implementation and can include components of another library, such as text-encoders. Examples for diffusion pipelines are [Glide](https://github.com/openai/glide-text2im) and [Latent Diffusion](https://github.com/CompVis/latent-diffusion).
## In the works
For the first release, 🤗 Diffusers focuses on text-to-image diffusion techniques. However, diffusers can be used for much more than that! Over the upcoming releases, we'll be focusing on:
- Diffusers for audio
- Diffusers for reinforcement learning (initial work happening in https://github.com/huggingface/diffusers/pull/105).
- Diffusers for video generation
- Diffusers for molecule generation (initial work happening in https://github.com/huggingface/diffusers/pull/54)
A few pipeline components are already being worked on, namely:
- BDDMPipeline for spectrogram-to-sound vocoding
- GLIDEPipeline to support OpenAI's GLIDE model
- Grad-TTS for text to audio generation / conditional audio generation
We want Diffusers to be a toolbox useful for diffusion models in general; if you find yourself limited in any way by the current API, or would like to see additional models, schedulers, or techniques, please open a [GitHub issue](https://github.com/huggingface/diffusers/issues) mentioning what you would like to see.
Also, say 👋 in our public Discord channel <a href="https://discord.gg/G7tWnz98XR"><img alt="Join us on Discord" src="https://img.shields.io/discord/823813159592001537?color=5865F2&logo=discord&logoColor=white"></a>. We discuss the hottest trends about diffusion models, help each other with contributions, personal projects or just hang out ☕.
## Popular Tasks & Pipelines
<table>
<tr>
<th>Task</th>
<th>Pipeline</th>
<th>🤗 Hub</th>
</tr>
<tr style="border-top: 2px solid black">
<td>Unconditional Image Generation</td>
<td><a href="https://huggingface.co/docs/diffusers/api/pipelines/ddpm"> DDPM </a></td>
<td><a href="https://huggingface.co/google/ddpm-ema-church-256"> google/ddpm-ema-church-256 </a></td>
</tr>
<tr style="border-top: 2px solid black">
<td>Text-to-Image</td>
<td><a href="https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/text2img">Stable Diffusion Text-to-Image</a></td>
<td><a href="https://huggingface.co/runwayml/stable-diffusion-v1-5"> runwayml/stable-diffusion-v1-5 </a></td>
</tr>
<tr>
<td>Text-to-Image</td>
<td><a href="https://huggingface.co/docs/diffusers/api/pipelines/unclip">unCLIP</a></td>
<td><a href="https://huggingface.co/kakaobrain/karlo-v1-alpha"> kakaobrain/karlo-v1-alpha </a></td>
</tr>
<tr>
<td>Text-to-Image</td>
<td><a href="https://huggingface.co/docs/diffusers/api/pipelines/deepfloyd_if">DeepFloyd IF</a></td>
<td><a href="https://huggingface.co/DeepFloyd/IF-I-XL-v1.0"> DeepFloyd/IF-I-XL-v1.0 </a></td>
</tr>
<tr>
<td>Text-to-Image</td>
<td><a href="https://huggingface.co/docs/diffusers/api/pipelines/kandinsky">Kandinsky</a></td>
<td><a href="https://huggingface.co/kandinsky-community/kandinsky-2-2-decoder"> kandinsky-community/kandinsky-2-2-decoder </a></td>
</tr>
<tr style="border-top: 2px solid black">
<td>Text-guided Image-to-Image</td>
<td><a href="https://huggingface.co/docs/diffusers/api/pipelines/controlnet">ControlNet</a></td>
<td><a href="https://huggingface.co/lllyasviel/sd-controlnet-canny"> lllyasviel/sd-controlnet-canny </a></td>
</tr>
<tr>
<td>Text-guided Image-to-Image</td>
<td><a href="https://huggingface.co/docs/diffusers/api/pipelines/pix2pix">InstructPix2Pix</a></td>
<td><a href="https://huggingface.co/timbrooks/instruct-pix2pix"> timbrooks/instruct-pix2pix </a></td>
</tr>
<tr>
<td>Text-guided Image-to-Image</td>
<td><a href="https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/img2img">Stable Diffusion Image-to-Image</a></td>
<td><a href="https://huggingface.co/runwayml/stable-diffusion-v1-5"> runwayml/stable-diffusion-v1-5 </a></td>
</tr>
<tr style="border-top: 2px solid black">
<td>Text-guided Image Inpainting</td>
<td><a href="https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/inpaint">Stable Diffusion Inpainting</a></td>
<td><a href="https://huggingface.co/runwayml/stable-diffusion-inpainting"> runwayml/stable-diffusion-inpainting </a></td>
</tr>
<tr style="border-top: 2px solid black">
<td>Image Variation</td>
<td><a href="https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/image_variation">Stable Diffusion Image Variation</a></td>
<td><a href="https://huggingface.co/lambdalabs/sd-image-variations-diffusers"> lambdalabs/sd-image-variations-diffusers </a></td>
</tr>
<tr style="border-top: 2px solid black">
<td>Super Resolution</td>
<td><a href="https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/upscale">Stable Diffusion Upscale</a></td>
<td><a href="https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler"> stabilityai/stable-diffusion-x4-upscaler </a></td>
</tr>
<tr>
<td>Super Resolution</td>
<td><a href="https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/latent_upscale">Stable Diffusion Latent Upscale</a></td>
<td><a href="https://huggingface.co/stabilityai/sd-x2-latent-upscaler"> stabilityai/sd-x2-latent-upscaler </a></td>
</tr>
</table>
## Popular libraries using 🧨 Diffusers
- https://github.com/microsoft/TaskMatrix
- https://github.com/invoke-ai/InvokeAI
- https://github.com/apple/ml-stable-diffusion
- https://github.com/Sanster/lama-cleaner
- https://github.com/IDEA-Research/Grounded-Segment-Anything
- https://github.com/ashawkey/stable-dreamfusion
- https://github.com/deep-floyd/IF
- https://github.com/bentoml/BentoML
- https://github.com/bmaltais/kohya_ss
- +8000 other amazing GitHub repositories 💪
Thank you for using us ❤️.
## Credits
@@ -476,7 +229,7 @@ This library concretizes previous work by many different authors and would not h
- @CompVis' latent diffusion models library, available [here](https://github.com/CompVis/latent-diffusion)
- @hojonathanho original DDPM implementation, available [here](https://github.com/hojonathanho/diffusion) as well as the extremely useful translation into PyTorch by @pesser, available [here](https://github.com/pesser/pytorch_diffusion)
- @ermongroup's DDIM implementation, available [here](https://github.com/ermongroup/ddim).
- @ermongroup's DDIM implementation, available [here](https://github.com/ermongroup/ddim)
- @yang-song's Score-VE and Score-VP implementations, available [here](https://github.com/yang-song/score_sde_pytorch)
We also want to thank @heejkoo for the very helpful overview of papers, code and resources on diffusion models, available [here](https://github.com/heejkoo/Awesome-Diffusion-Models) as well as @crowsonkb and @rromb for useful discussions and insights.

benchmarks/base_classes.py

@@ -0,0 +1,316 @@
import os
import sys
import torch
from diffusers import (
AutoPipelineForImage2Image,
AutoPipelineForInpainting,
AutoPipelineForText2Image,
ControlNetModel,
LCMScheduler,
StableDiffusionAdapterPipeline,
StableDiffusionControlNetPipeline,
StableDiffusionXLAdapterPipeline,
StableDiffusionXLControlNetPipeline,
T2IAdapter,
WuerstchenCombinedPipeline,
)
from diffusers.utils import load_image
sys.path.append(".")
from utils import ( # noqa: E402
BASE_PATH,
PROMPT,
BenchmarkInfo,
benchmark_fn,
bytes_to_giga_bytes,
flush,
generate_csv_dict,
write_to_csv,
)
RESOLUTION_MAPPING = {
"runwayml/stable-diffusion-v1-5": (512, 512),
"lllyasviel/sd-controlnet-canny": (512, 512),
"diffusers/controlnet-canny-sdxl-1.0": (1024, 1024),
"TencentARC/t2iadapter_canny_sd14v1": (512, 512),
"TencentARC/t2i-adapter-canny-sdxl-1.0": (1024, 1024),
"stabilityai/stable-diffusion-2-1": (768, 768),
"stabilityai/stable-diffusion-xl-base-1.0": (1024, 1024),
"stabilityai/stable-diffusion-xl-refiner-1.0": (1024, 1024),
"stabilityai/sdxl-turbo": (512, 512),
}
class BaseBenchmark:
pipeline_class = None
def __init__(self, args):
super().__init__()
def run_inference(self, args):
raise NotImplementedError
def benchmark(self, args):
raise NotImplementedError
def get_result_filepath(self, args):
pipeline_class_name = str(self.pipe.__class__.__name__)
name = (
args.ckpt.replace("/", "_")
+ "_"
+ pipeline_class_name
+ f"-bs@{args.batch_size}-steps@{args.num_inference_steps}-mco@{args.model_cpu_offload}-compile@{args.run_compile}.csv"
)
filepath = os.path.join(BASE_PATH, name)
return filepath
class TextToImageBenchmark(BaseBenchmark):
pipeline_class = AutoPipelineForText2Image
def __init__(self, args):
pipe = self.pipeline_class.from_pretrained(args.ckpt, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
if args.run_compile:
if not isinstance(pipe, WuerstchenCombinedPipeline):
pipe.unet.to(memory_format=torch.channels_last)
print("Run torch compile")
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
if hasattr(pipe, "movq") and getattr(pipe, "movq", None) is not None:
pipe.movq.to(memory_format=torch.channels_last)
pipe.movq = torch.compile(pipe.movq, mode="reduce-overhead", fullgraph=True)
else:
print("Run torch compile")
pipe.decoder = torch.compile(pipe.decoder, mode="reduce-overhead", fullgraph=True)
pipe.vqgan = torch.compile(pipe.vqgan, mode="reduce-overhead", fullgraph=True)
pipe.set_progress_bar_config(disable=True)
self.pipe = pipe
def run_inference(self, pipe, args):
_ = pipe(
prompt=PROMPT,
num_inference_steps=args.num_inference_steps,
num_images_per_prompt=args.batch_size,
)
def benchmark(self, args):
flush()
print(f"[INFO] {self.pipe.__class__.__name__}: Running benchmark with: {vars(args)}\n")
time = benchmark_fn(self.run_inference, self.pipe, args) # in seconds.
memory = bytes_to_giga_bytes(torch.cuda.max_memory_allocated()) # in GBs.
benchmark_info = BenchmarkInfo(time=time, memory=memory)
pipeline_class_name = str(self.pipe.__class__.__name__)
flush()
csv_dict = generate_csv_dict(
pipeline_cls=pipeline_class_name, ckpt=args.ckpt, args=args, benchmark_info=benchmark_info
)
filepath = self.get_result_filepath(args)
write_to_csv(filepath, csv_dict)
print(f"Logs written to: {filepath}")
flush()
class TurboTextToImageBenchmark(TextToImageBenchmark):
def __init__(self, args):
super().__init__(args)
def run_inference(self, pipe, args):
_ = pipe(
prompt=PROMPT,
num_inference_steps=args.num_inference_steps,
num_images_per_prompt=args.batch_size,
guidance_scale=0.0,
)
class LCMLoRATextToImageBenchmark(TextToImageBenchmark):
lora_id = "latent-consistency/lcm-lora-sdxl"
def __init__(self, args):
super().__init__(args)
self.pipe.load_lora_weights(self.lora_id)
self.pipe.fuse_lora()
self.pipe.scheduler = LCMScheduler.from_config(self.pipe.scheduler.config)
def get_result_filepath(self, args):
pipeline_class_name = str(self.pipe.__class__.__name__)
name = (
self.lora_id.replace("/", "_")
+ "_"
+ pipeline_class_name
+ f"-bs@{args.batch_size}-steps@{args.num_inference_steps}-mco@{args.model_cpu_offload}-compile@{args.run_compile}.csv"
)
filepath = os.path.join(BASE_PATH, name)
return filepath
def run_inference(self, pipe, args):
_ = pipe(
prompt=PROMPT,
num_inference_steps=args.num_inference_steps,
num_images_per_prompt=args.batch_size,
guidance_scale=1.0,
)
def benchmark(self, args):
flush()
print(f"[INFO] {self.pipe.__class__.__name__}: Running benchmark with: {vars(args)}\n")
time = benchmark_fn(self.run_inference, self.pipe, args) # in seconds.
memory = bytes_to_giga_bytes(torch.cuda.max_memory_allocated()) # in GBs.
benchmark_info = BenchmarkInfo(time=time, memory=memory)
pipeline_class_name = str(self.pipe.__class__.__name__)
flush()
csv_dict = generate_csv_dict(
pipeline_cls=pipeline_class_name, ckpt=self.lora_id, args=args, benchmark_info=benchmark_info
)
filepath = self.get_result_filepath(args)
write_to_csv(filepath, csv_dict)
print(f"Logs written to: {filepath}")
flush()
class ImageToImageBenchmark(TextToImageBenchmark):
pipeline_class = AutoPipelineForImage2Image
url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/benchmarking/1665_Girl_with_a_Pearl_Earring.jpg"
image = load_image(url).convert("RGB")
def __init__(self, args):
super().__init__(args)
self.image = self.image.resize(RESOLUTION_MAPPING[args.ckpt])
def run_inference(self, pipe, args):
_ = pipe(
prompt=PROMPT,
image=self.image,
num_inference_steps=args.num_inference_steps,
num_images_per_prompt=args.batch_size,
)
class TurboImageToImageBenchmark(ImageToImageBenchmark):
def __init__(self, args):
super().__init__(args)
def run_inference(self, pipe, args):
_ = pipe(
prompt=PROMPT,
image=self.image,
num_inference_steps=args.num_inference_steps,
num_images_per_prompt=args.batch_size,
guidance_scale=0.0,
strength=0.5,
)
class InpaintingBenchmark(ImageToImageBenchmark):
pipeline_class = AutoPipelineForInpainting
mask_url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/benchmarking/overture-creations-5sI6fQgYIuo_mask.png"
mask = load_image(mask_url).convert("RGB")
def __init__(self, args):
super().__init__(args)
self.image = self.image.resize(RESOLUTION_MAPPING[args.ckpt])
self.mask = self.mask.resize(RESOLUTION_MAPPING[args.ckpt])
def run_inference(self, pipe, args):
_ = pipe(
prompt=PROMPT,
image=self.image,
mask_image=self.mask,
num_inference_steps=args.num_inference_steps,
num_images_per_prompt=args.batch_size,
)
class ControlNetBenchmark(TextToImageBenchmark):
pipeline_class = StableDiffusionControlNetPipeline
aux_network_class = ControlNetModel
root_ckpt = "runwayml/stable-diffusion-v1-5"
url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/benchmarking/canny_image_condition.png"
image = load_image(url).convert("RGB")
def __init__(self, args):
aux_network = self.aux_network_class.from_pretrained(args.ckpt, torch_dtype=torch.float16)
pipe = self.pipeline_class.from_pretrained(self.root_ckpt, controlnet=aux_network, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
pipe.set_progress_bar_config(disable=True)
self.pipe = pipe
if args.run_compile:
pipe.unet.to(memory_format=torch.channels_last)
pipe.controlnet.to(memory_format=torch.channels_last)
print("Run torch compile")
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
pipe.controlnet = torch.compile(pipe.controlnet, mode="reduce-overhead", fullgraph=True)
self.image = self.image.resize(RESOLUTION_MAPPING[args.ckpt])
def run_inference(self, pipe, args):
_ = pipe(
prompt=PROMPT,
image=self.image,
num_inference_steps=args.num_inference_steps,
num_images_per_prompt=args.batch_size,
)
class ControlNetSDXLBenchmark(ControlNetBenchmark):
pipeline_class = StableDiffusionXLControlNetPipeline
root_ckpt = "stabilityai/stable-diffusion-xl-base-1.0"
def __init__(self, args):
super().__init__(args)
class T2IAdapterBenchmark(ControlNetBenchmark):
pipeline_class = StableDiffusionAdapterPipeline
aux_network_class = T2IAdapter
root_ckpt = "CompVis/stable-diffusion-v1-4"
url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/benchmarking/canny_for_adapter.png"
image = load_image(url).convert("L")
def __init__(self, args):
aux_network = self.aux_network_class.from_pretrained(args.ckpt, torch_dtype=torch.float16)
pipe = self.pipeline_class.from_pretrained(self.root_ckpt, adapter=aux_network, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
pipe.set_progress_bar_config(disable=True)
self.pipe = pipe
if args.run_compile:
pipe.unet.to(memory_format=torch.channels_last)
pipe.adapter.to(memory_format=torch.channels_last)
print("Run torch compile")
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
pipe.adapter = torch.compile(pipe.adapter, mode="reduce-overhead", fullgraph=True)
self.image = self.image.resize(RESOLUTION_MAPPING[args.ckpt])
class T2IAdapterSDXLBenchmark(T2IAdapterBenchmark):
pipeline_class = StableDiffusionXLAdapterPipeline
root_ckpt = "stabilityai/stable-diffusion-xl-base-1.0"
url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/benchmarking/canny_for_adapter_sdxl.png"
image = load_image(url)
def __init__(self, args):
super().__init__(args)


@@ -0,0 +1,26 @@
import argparse
import sys
sys.path.append(".")
from base_classes import ControlNetBenchmark, ControlNetSDXLBenchmark # noqa: E402
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"--ckpt",
type=str,
default="lllyasviel/sd-controlnet-canny",
choices=["lllyasviel/sd-controlnet-canny", "diffusers/controlnet-canny-sdxl-1.0"],
)
parser.add_argument("--batch_size", type=int, default=1)
parser.add_argument("--num_inference_steps", type=int, default=50)
parser.add_argument("--model_cpu_offload", action="store_true")
parser.add_argument("--run_compile", action="store_true")
args = parser.parse_args()
benchmark_pipe = (
ControlNetBenchmark(args) if args.ckpt == "lllyasviel/sd-controlnet-canny" else ControlNetSDXLBenchmark(args)
)
benchmark_pipe.benchmark(args)


@@ -0,0 +1,29 @@
import argparse
import sys
sys.path.append(".")
from base_classes import ImageToImageBenchmark, TurboImageToImageBenchmark # noqa: E402
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"--ckpt",
type=str,
default="runwayml/stable-diffusion-v1-5",
choices=[
"runwayml/stable-diffusion-v1-5",
"stabilityai/stable-diffusion-2-1",
"stabilityai/stable-diffusion-xl-refiner-1.0",
"stabilityai/sdxl-turbo",
],
)
parser.add_argument("--batch_size", type=int, default=1)
parser.add_argument("--num_inference_steps", type=int, default=50)
parser.add_argument("--model_cpu_offload", action="store_true")
parser.add_argument("--run_compile", action="store_true")
args = parser.parse_args()
benchmark_pipe = ImageToImageBenchmark(args) if "turbo" not in args.ckpt else TurboImageToImageBenchmark(args)
benchmark_pipe.benchmark(args)


@@ -0,0 +1,28 @@
import argparse
import sys
sys.path.append(".")
from base_classes import InpaintingBenchmark # noqa: E402
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"--ckpt",
type=str,
default="runwayml/stable-diffusion-v1-5",
choices=[
"runwayml/stable-diffusion-v1-5",
"stabilityai/stable-diffusion-2-1",
"stabilityai/stable-diffusion-xl-base-1.0",
],
)
parser.add_argument("--batch_size", type=int, default=1)
parser.add_argument("--num_inference_steps", type=int, default=50)
parser.add_argument("--model_cpu_offload", action="store_true")
parser.add_argument("--run_compile", action="store_true")
args = parser.parse_args()
benchmark_pipe = InpaintingBenchmark(args)
benchmark_pipe.benchmark(args)


@@ -0,0 +1,28 @@
import argparse
import sys
sys.path.append(".")
from base_classes import T2IAdapterBenchmark, T2IAdapterSDXLBenchmark # noqa: E402
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"--ckpt",
type=str,
default="TencentARC/t2iadapter_canny_sd14v1",
choices=["TencentARC/t2iadapter_canny_sd14v1", "TencentARC/t2i-adapter-canny-sdxl-1.0"],
)
parser.add_argument("--batch_size", type=int, default=1)
parser.add_argument("--num_inference_steps", type=int, default=50)
parser.add_argument("--model_cpu_offload", action="store_true")
parser.add_argument("--run_compile", action="store_true")
args = parser.parse_args()
benchmark_pipe = (
T2IAdapterBenchmark(args)
if args.ckpt == "TencentARC/t2iadapter_canny_sd14v1"
else T2IAdapterSDXLBenchmark(args)
)
benchmark_pipe.benchmark(args)


@@ -0,0 +1,23 @@
import argparse
import sys
sys.path.append(".")
from base_classes import LCMLoRATextToImageBenchmark # noqa: E402
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"--ckpt",
type=str,
default="stabilityai/stable-diffusion-xl-base-1.0",
)
parser.add_argument("--batch_size", type=int, default=1)
parser.add_argument("--num_inference_steps", type=int, default=4)
parser.add_argument("--model_cpu_offload", action="store_true")
parser.add_argument("--run_compile", action="store_true")
args = parser.parse_args()
benchmark_pipe = LCMLoRATextToImageBenchmark(args)
benchmark_pipe.benchmark(args)


@@ -0,0 +1,40 @@
import argparse
import sys
sys.path.append(".")
from base_classes import TextToImageBenchmark, TurboTextToImageBenchmark # noqa: E402
ALL_T2I_CKPTS = [
"runwayml/stable-diffusion-v1-5",
"segmind/SSD-1B",
"stabilityai/stable-diffusion-xl-base-1.0",
"kandinsky-community/kandinsky-2-2-decoder",
"warp-ai/wuerstchen",
"stabilityai/sdxl-turbo",
]
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"--ckpt",
type=str,
default="runwayml/stable-diffusion-v1-5",
choices=ALL_T2I_CKPTS,
)
parser.add_argument("--batch_size", type=int, default=1)
parser.add_argument("--num_inference_steps", type=int, default=50)
parser.add_argument("--model_cpu_offload", action="store_true")
parser.add_argument("--run_compile", action="store_true")
args = parser.parse_args()
benchmark_cls = None
if "turbo" in args.ckpt:
benchmark_cls = TurboTextToImageBenchmark
else:
benchmark_cls = TextToImageBenchmark
benchmark_pipe = benchmark_cls(args)
benchmark_pipe.benchmark(args)


@@ -0,0 +1,72 @@
import glob
import sys
import pandas as pd
from huggingface_hub import hf_hub_download, upload_file
from huggingface_hub.utils._errors import EntryNotFoundError
sys.path.append(".")
from utils import BASE_PATH, FINAL_CSV_FILE, GITHUB_SHA, REPO_ID, collate_csv # noqa: E402
def has_previous_benchmark() -> str:
csv_path = None
try:
csv_path = hf_hub_download(repo_id=REPO_ID, repo_type="dataset", filename=FINAL_CSV_FILE)
except EntryNotFoundError:
csv_path = None
return csv_path
def filter_float(value):
if isinstance(value, str):
return float(value.split()[0])
return value
def push_to_hf_dataset():
all_csvs = sorted(glob.glob(f"{BASE_PATH}/*.csv"))
collate_csv(all_csvs, FINAL_CSV_FILE)
# If there's an existing benchmark file, we should report the changes.
csv_path = has_previous_benchmark()
if csv_path is not None:
current_results = pd.read_csv(FINAL_CSV_FILE)
previous_results = pd.read_csv(csv_path)
numeric_columns = current_results.select_dtypes(include=["float64", "int64"]).columns
numeric_columns = [
c for c in numeric_columns if c not in ["batch_size", "num_inference_steps", "actual_gpu_memory (gbs)"]
]
for column in numeric_columns:
previous_results[column] = previous_results[column].map(lambda x: filter_float(x))
# Calculate the percentage change
current_results[column] = current_results[column].astype(float)
previous_results[column] = previous_results[column].astype(float)
percent_change = ((current_results[column] - previous_results[column]) / previous_results[column]) * 100
# Format the values with '+' or '-' sign and append to original values
current_results[column] = current_results[column].map(str) + percent_change.map(
lambda x: f" ({'+' if x > 0 else ''}{x:.2f}%)"
)
# There might be newly added rows. So, filter out the NaNs.
current_results[column] = current_results[column].map(lambda x: x.replace(" (nan%)", ""))
# Overwrite the current result file.
current_results.to_csv(FINAL_CSV_FILE, index=False)
commit_message = f"upload from sha: {GITHUB_SHA}" if GITHUB_SHA is not None else "upload benchmark results"
upload_file(
repo_id=REPO_ID,
path_in_repo=FINAL_CSV_FILE,
path_or_fileobj=FINAL_CSV_FILE,
repo_type="dataset",
commit_message=commit_message,
)
if __name__ == "__main__":
push_to_hf_dataset()

benchmarks/run_all.py

@@ -0,0 +1,97 @@
import glob
import subprocess
import sys
from typing import List
sys.path.append(".")
from benchmark_text_to_image import ALL_T2I_CKPTS # noqa: E402
PATTERN = "benchmark_*.py"
class SubprocessCallException(Exception):
pass
# Taken from `test_examples_utils.py`
def run_command(command: List[str], return_stdout=False):
"""
Runs `command` with `subprocess.check_output` and will potentially return the `stdout`. Will also properly capture
if an error occurred while running `command`
"""
try:
output = subprocess.check_output(command, stderr=subprocess.STDOUT)
if return_stdout:
if hasattr(output, "decode"):
output = output.decode("utf-8")
return output
except subprocess.CalledProcessError as e:
raise SubprocessCallException(
f"Command `{' '.join(command)}` failed with the following error:\n\n{e.output.decode()}"
) from e
def main():
python_files = glob.glob(PATTERN)
for file in python_files:
print(f"****** Running file: {file} ******")
# Run with canonical settings.
if file != "benchmark_text_to_image.py":
command = f"python {file}"
run_command(command.split())
command += " --run_compile"
run_command(command.split())
# Run variants.
for file in python_files:
if file == "benchmark_text_to_image.py":
for ckpt in ALL_T2I_CKPTS:
command = f"python {file} --ckpt {ckpt}"
if "turbo" in ckpt:
command += " --num_inference_steps 1"
run_command(command.split())
command += " --run_compile"
run_command(command.split())
elif file == "benchmark_sd_img.py":
for ckpt in ["stabilityai/stable-diffusion-xl-refiner-1.0", "stabilityai/sdxl-turbo"]:
command = f"python {file} --ckpt {ckpt}"
if ckpt == "stabilityai/sdxl-turbo":
command += " --num_inference_steps 2"
run_command(command.split())
command += " --run_compile"
run_command(command.split())
elif file == "benchmark_sd_inpainting.py":
sdxl_ckpt = "stabilityai/stable-diffusion-xl-base-1.0"
command = f"python {file} --ckpt {sdxl_ckpt}"
run_command(command.split())
command += " --run_compile"
run_command(command.split())
elif file in ["benchmark_controlnet.py", "benchmark_t2i_adapter.py"]:
sdxl_ckpt = (
"diffusers/controlnet-canny-sdxl-1.0"
if "controlnet" in file
else "TencentARC/t2i-adapter-canny-sdxl-1.0"
)
command = f"python {file} --ckpt {sdxl_ckpt}"
run_command(command.split())
command += " --run_compile"
run_command(command.split())
if __name__ == "__main__":
main()

benchmarks/utils.py

@@ -0,0 +1,98 @@
import argparse
import csv
import gc
import os
from dataclasses import dataclass
from typing import Dict, List, Union
import torch
import torch.utils.benchmark as benchmark
GITHUB_SHA = os.getenv("GITHUB_SHA", None)
BENCHMARK_FIELDS = [
"pipeline_cls",
"ckpt_id",
"batch_size",
"num_inference_steps",
"model_cpu_offload",
"run_compile",
"time (secs)",
"memory (gbs)",
"actual_gpu_memory (gbs)",
"github_sha",
]
PROMPT = "ghibli style, a fantasy landscape with castles"
BASE_PATH = os.getenv("BASE_PATH", ".")
TOTAL_GPU_MEMORY = float(os.getenv("TOTAL_GPU_MEMORY", torch.cuda.get_device_properties(0).total_memory / (1024**3)))
REPO_ID = "diffusers/benchmarks"
FINAL_CSV_FILE = "collated_results.csv"
@dataclass
class BenchmarkInfo:
time: float
memory: float
def flush():
"""Wipes off memory."""
gc.collect()
torch.cuda.empty_cache()
torch.cuda.reset_max_memory_allocated()
torch.cuda.reset_peak_memory_stats()
def bytes_to_giga_bytes(bytes):
return f"{(bytes / 1024 / 1024 / 1024):.3f}"
def benchmark_fn(f, *args, **kwargs):
t0 = benchmark.Timer(
stmt="f(*args, **kwargs)",
globals={"args": args, "kwargs": kwargs, "f": f},
num_threads=torch.get_num_threads(),
)
return f"{(t0.blocked_autorange().mean):.3f}"
def generate_csv_dict(
pipeline_cls: str, ckpt: str, args: argparse.Namespace, benchmark_info: BenchmarkInfo
) -> Dict[str, Union[str, bool, float]]:
"""Packs benchmarking data into a dictionary for latter serialization."""
data_dict = {
"pipeline_cls": pipeline_cls,
"ckpt_id": ckpt,
"batch_size": args.batch_size,
"num_inference_steps": args.num_inference_steps,
"model_cpu_offload": args.model_cpu_offload,
"run_compile": args.run_compile,
"time (secs)": benchmark_info.time,
"memory (gbs)": benchmark_info.memory,
"actual_gpu_memory (gbs)": f"{(TOTAL_GPU_MEMORY):.3f}",
"github_sha": GITHUB_SHA,
}
return data_dict
def write_to_csv(file_name: str, data_dict: Dict[str, Union[str, bool, float]]):
"""Serializes a dictionary into a CSV file."""
with open(file_name, mode="w", newline="") as csvfile:
writer = csv.DictWriter(csvfile, fieldnames=BENCHMARK_FIELDS)
writer.writeheader()
writer.writerow(data_dict)
def collate_csv(input_files: List[str], output_file: str):
"""Collates multiple identically structured CSVs into a single CSV file."""
with open(output_file, mode="w", newline="") as outfile:
writer = csv.DictWriter(outfile, fieldnames=BENCHMARK_FIELDS)
writer.writeheader()
for file in input_files:
with open(file, mode="r") as infile:
reader = csv.DictReader(infile)
for row in reader:
writer.writerow(row)
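Taken together, these helpers are what the individual benchmark scripts build on: `flush` resets CUDA memory statistics, `benchmark_fn` times a callable, `bytes_to_giga_bytes` formats peak memory, and the CSV helpers serialize one row per run. A rough sketch of how they compose (the pipeline class, checkpoint id, argument values, and output file name are assumptions made for illustration, not part of this module):

```python
# Hypothetical benchmark script built on the helpers above.
import argparse

import torch
from diffusers import DiffusionPipeline

args = argparse.Namespace(batch_size=1, num_inference_steps=50, model_cpu_offload=False, run_compile=False)
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")

flush()  # clear cached memory and reset the peak-memory counters before measuring
time_secs = benchmark_fn(pipe, PROMPT, num_inference_steps=args.num_inference_steps)
memory_gbs = bytes_to_giga_bytes(torch.cuda.max_memory_allocated())

info = BenchmarkInfo(time=time_secs, memory=memory_gbs)
row = generate_csv_dict("DiffusionPipeline", "runwayml/stable-diffusion-v1-5", args, info)
write_to_csv("benchmark_text_to_image.csv", row)
```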


@@ -11,6 +11,7 @@ RUN apt update && \
git-lfs \
curl \
ca-certificates \
libsndfile1-dev \
python3.8 \
python3-pip \
python3.8-venv && \
@@ -33,7 +34,8 @@ RUN python3 -m pip install --no-cache-dir --upgrade pip && \
datasets \
hf-doc-builder \
huggingface-hub \
modelcards \
Jinja2 \
librosa \
numpy \
scipy \
tensorboard \


@@ -11,6 +11,7 @@ RUN apt update && \
git-lfs \
curl \
ca-certificates \
libsndfile1-dev \
python3.8 \
python3-pip \
python3.8-venv && \
@@ -35,7 +36,8 @@ RUN python3 -m pip install --no-cache-dir --upgrade pip && \
datasets \
hf-doc-builder \
huggingface-hub \
modelcards \
Jinja2 \
librosa \
numpy \
scipy \
tensorboard \


@@ -11,6 +11,7 @@ RUN apt update && \
git-lfs \
curl \
ca-certificates \
libsndfile1-dev \
python3.8 \
python3-pip \
python3.8-venv && \
@@ -23,9 +24,9 @@ ENV PATH="/opt/venv/bin:$PATH"
# pre-install the heavy dependencies (these can later be overridden by the deps from setup.py)
RUN python3 -m pip install --no-cache-dir --upgrade pip && \
python3 -m pip install --no-cache-dir \
torch \
torchvision \
torchaudio \
torch==2.1.2 \
torchvision==0.16.2 \
torchaudio==2.1.2 \
onnxruntime \
--extra-index-url https://download.pytorch.org/whl/cpu && \
python3 -m pip install --no-cache-dir \
@@ -33,7 +34,8 @@ RUN python3 -m pip install --no-cache-dir --upgrade pip && \
datasets \
hf-doc-builder \
huggingface-hub \
modelcards \
Jinja2 \
librosa \
numpy \
scipy \
tensorboard \


@@ -11,6 +11,7 @@ RUN apt update && \
git-lfs \
curl \
ca-certificates \
libsndfile1-dev \
python3.8 \
python3-pip \
python3.8-venv && \
@@ -23,9 +24,9 @@ ENV PATH="/opt/venv/bin:$PATH"
# pre-install the heavy dependencies (these can later be overridden by the deps from setup.py)
RUN python3 -m pip install --no-cache-dir --upgrade pip && \
python3 -m pip install --no-cache-dir \
torch \
torchvision \
torchaudio \
torch==2.1.2 \
torchvision==0.16.2 \
torchaudio==2.1.2 \
"onnxruntime-gpu>=1.13.1" \
--extra-index-url https://download.pytorch.org/whl/cu117 && \
python3 -m pip install --no-cache-dir \
@@ -33,7 +34,8 @@ RUN python3 -m pip install --no-cache-dir --upgrade pip && \
datasets \
hf-doc-builder \
huggingface-hub \
modelcards \
Jinja2 \
librosa \
numpy \
scipy \
tensorboard \


@@ -0,0 +1,45 @@
FROM nvidia/cuda:12.1.0-runtime-ubuntu20.04
LABEL maintainer="Hugging Face"
LABEL repository="diffusers"
ENV DEBIAN_FRONTEND=noninteractive
RUN apt update && \
apt install -y bash \
build-essential \
git \
git-lfs \
curl \
ca-certificates \
libsndfile1-dev \
libgl1 \
python3.9 \
python3.9-dev \
python3-pip \
python3.9-venv && \
rm -rf /var/lib/apt/lists
# make sure to use venv
RUN python3.9 -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
# pre-install the heavy dependencies (these can later be overridden by the deps from setup.py)
RUN python3.9 -m pip install --no-cache-dir --upgrade pip && \
python3.9 -m pip install --no-cache-dir \
torch \
torchvision \
torchaudio \
invisible_watermark && \
python3.9 -m pip install --no-cache-dir \
accelerate \
datasets \
hf-doc-builder \
huggingface-hub \
Jinja2 \
librosa \
numpy \
scipy \
tensorboard \
transformers
CMD ["/bin/bash"]


@@ -11,8 +11,10 @@ RUN apt update && \
git-lfs \
curl \
ca-certificates \
libsndfile1-dev \
python3.8 \
python3-pip \
libgl1 \
python3.8-venv && \
rm -rf /var/lib/apt/lists
@@ -26,16 +28,18 @@ RUN python3 -m pip install --no-cache-dir --upgrade pip && \
torch \
torchvision \
torchaudio \
invisible_watermark \
--extra-index-url https://download.pytorch.org/whl/cpu && \
python3 -m pip install --no-cache-dir \
accelerate \
datasets \
hf-doc-builder \
huggingface-hub \
modelcards \
Jinja2 \
librosa \
numpy \
scipy \
tensorboard \
transformers
CMD ["/bin/bash"]
CMD ["/bin/bash"]


@@ -1,4 +1,4 @@
FROM nvidia/cuda:11.7.1-cudnn8-runtime-ubuntu20.04
FROM nvidia/cuda:12.1.0-runtime-ubuntu20.04
LABEL maintainer="Hugging Face"
LABEL repository="diffusers"
@@ -6,14 +6,16 @@ ENV DEBIAN_FRONTEND=noninteractive
RUN apt update && \
apt install -y bash \
build-essential \
git \
git-lfs \
curl \
ca-certificates \
python3.8 \
python3-pip \
python3.8-venv && \
build-essential \
git \
git-lfs \
curl \
ca-certificates \
libsndfile1-dev \
libgl1 \
python3.8 \
python3-pip \
python3.8-venv && \
rm -rf /var/lib/apt/lists
# make sure to use venv
@@ -23,19 +25,21 @@ ENV PATH="/opt/venv/bin:$PATH"
# pre-install the heavy dependencies (these can later be overridden by the deps from setup.py)
RUN python3 -m pip install --no-cache-dir --upgrade pip && \
python3 -m pip install --no-cache-dir \
torch \
torchvision \
torchaudio \
--extra-index-url https://download.pytorch.org/whl/cu117 && \
torch \
torchvision \
torchaudio \
invisible_watermark && \
python3 -m pip install --no-cache-dir \
accelerate \
datasets \
hf-doc-builder \
huggingface-hub \
modelcards \
numpy \
scipy \
tensorboard \
transformers
accelerate \
datasets \
hf-doc-builder \
huggingface-hub \
Jinja2 \
librosa \
numpy \
scipy \
tensorboard \
transformers \
pytorch-lightning
CMD ["/bin/bash"]
CMD ["/bin/bash"]


@@ -0,0 +1,45 @@
FROM nvidia/cuda:12.1.0-runtime-ubuntu20.04
LABEL maintainer="Hugging Face"
LABEL repository="diffusers"
ENV DEBIAN_FRONTEND=noninteractive
RUN apt update && \
apt install -y bash \
build-essential \
git \
git-lfs \
curl \
ca-certificates \
libsndfile1-dev \
libgl1 \
python3.8 \
python3-pip \
python3.8-venv && \
rm -rf /var/lib/apt/lists
# make sure to use venv
RUN python3 -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
# pre-install the heavy dependencies (these can later be overridden by the deps from setup.py)
RUN python3 -m pip install --no-cache-dir --upgrade pip && \
python3 -m pip install --no-cache-dir \
torch \
torchvision \
torchaudio \
invisible_watermark && \
python3 -m pip install --no-cache-dir \
accelerate \
datasets \
hf-doc-builder \
huggingface-hub \
Jinja2 \
librosa \
numpy \
scipy \
tensorboard \
transformers \
xformers
CMD ["/bin/bash"]

docs/README.md Normal file

@@ -0,0 +1,268 @@
<!---
Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
# Generating the documentation
To generate the documentation, you first have to build it. Several packages are necessary to build the docs;
you can install them with the following command at the root of the code repository:
```bash
pip install -e ".[docs]"
```
Then you need to install our open source documentation builder tool:
```bash
pip install git+https://github.com/huggingface/doc-builder
```
---
**NOTE**
You only need to generate the documentation to inspect it locally (if you're planning changes and want to
check how they look before committing for instance). You don't have to commit the built documentation.
---
## Previewing the documentation
To preview the docs, first install the `watchdog` module with:
```bash
pip install watchdog
```
Then run the following command:
```bash
doc-builder preview {package_name} {path_to_docs}
```
For example:
```bash
doc-builder preview diffusers docs/source/en
```
The docs will be viewable at [http://localhost:3000](http://localhost:3000). You can also preview the docs once you have opened a PR. You will see a bot add a comment to a link where the documentation with your changes lives.
---
**NOTE**
The `preview` command only works with existing doc files. When you add a completely new file, you need to update `_toctree.yml` and restart the `preview` command (`ctrl-c` to stop it, then call `doc-builder preview ...` again).
---
## Adding a new element to the navigation bar
Accepted files are Markdown (.md).
Create a file with its extension and put it in the source directory. You can then link it to the toc-tree by putting
the filename without the extension in the [`_toctree.yml`](https://github.com/huggingface/diffusers/blob/main/docs/source/en/_toctree.yml) file.
## Renaming section headers and moving sections
It helps to keep old links working when renaming a section header and/or moving sections from one document to another. Old links are likely to be used in issues, forums, and social media, and it makes for a much better user experience if users reading them months later can still easily navigate to the originally intended information.
Therefore, we simply keep a little map of moved sections at the end of the document where the original section was. The key is to preserve the original anchor.
So if you renamed a section from: "Section A" to "Section B", then you can add at the end of the file:
```md
Sections that were moved:
[ <a href="#section-b">Section A</a><a id="section-a"></a> ]
```
and of course, if you moved it to another file, then:
```md
Sections that were moved:
[ <a href="../new-file#section-b">Section A</a><a id="section-a"></a> ]
```
Use the relative style to link to the new file so that the versioned docs continue to work.
For an example of a rich moved section set please see the very end of [the transformers Trainer doc](https://github.com/huggingface/transformers/blob/main/docs/source/en/main_classes/trainer.md).
## Writing Documentation - Specification
The `huggingface/diffusers` documentation follows the
[Google documentation](https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html) style for docstrings,
although we can write them directly in Markdown.
### Adding a new tutorial
Adding a new tutorial or section is done in two steps:
- Add a new Markdown (.md) file under `docs/source/<languageCode>`.
- Link that file in `docs/source/<languageCode>/_toctree.yml` on the correct toc-tree.
Make sure to put your new file under the proper section. It's unlikely to go in the first section (*Get Started*), so
depending on the intended targets (beginners, more advanced users, or researchers) it should go in sections two, three, or four.
### Adding a new pipeline/scheduler
When adding a new pipeline:
- Create a file `xxx.md` under `docs/source/<languageCode>/api/pipelines` (don't hesitate to copy an existing file as template).
- Link that file in the (*Diffusers Summary*) section of `docs/source/api/pipelines/overview.md`, along with a link to the paper and a Colab notebook (if available).
- Write a short overview of the diffusion model:
- Overview with paper & authors
- Paper abstract
- Tips and tricks and how to use it best
- Possibly an end-to-end example of how to use it
- Add all the pipeline classes that should be linked in the diffusion model. These classes should be added using our Markdown syntax. By default as follows:
```
[[autodoc]] XXXPipeline
- all
- __call__
```
This will include every public method of the pipeline that is documented, as well as the `__call__` method that is not documented by default. If you just want to add additional methods that are not documented, you can put the list of all methods to add in a list that contains `all`.
```
[[autodoc]] XXXPipeline
- all
- __call__
- enable_attention_slicing
- disable_attention_slicing
- enable_xformers_memory_efficient_attention
- disable_xformers_memory_efficient_attention
```
You can follow the same process to create a new scheduler under the `docs/source/<languageCode>/api/schedulers` folder.
### Writing source documentation
Values that should be put in `code` should either be surrounded by backticks: \`like so\`. Note that argument names
and objects like True, None, or any strings should usually be put in `code`.
When mentioning a class, function, or method, it is recommended to use our syntax for internal links so that our tool
adds a link to its documentation with this syntax: \[\`XXXClass\`\] or \[\`function\`\]. This requires the class or
function to be in the main package.
If you want to create a link to some internal class or function, you need to
provide its path. For instance: \[\`pipelines.ImagePipelineOutput\`\]. This will be converted into a link with
`pipelines.ImagePipelineOutput` in the description. To get rid of the path and only keep the name of the object you are
linking to in the description, add a ~: \[\`~pipelines.ImagePipelineOutput\`\] will generate a link with `ImagePipelineOutput` in the description.
The same works for methods so you can either use \[\`XXXClass.method\`\] or \[\`~XXXClass.method\`\].
#### Defining arguments in a method
Arguments should be defined with the `Args:` (or `Arguments:` or `Parameters:`) prefix, followed by a line return and
an indentation. The argument should be followed by its type, with its shape if it is a tensor, a colon, and its
description:
```
Args:
    n_layers (`int`): The number of layers of the model.
```
If the description is too long to fit in one line, another indentation is necessary before writing the description
after the argument.
Here's an example showcasing everything so far:
```
Args:
    input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
        Indices of input sequence tokens in the vocabulary.

        Indices can be obtained using [`AlbertTokenizer`]. See [`~PreTrainedTokenizer.encode`] and
        [`~PreTrainedTokenizer.__call__`] for details.

        [What are input IDs?](../glossary#input-ids)
```
For optional arguments or arguments with defaults we follow the following syntax: imagine we have a function with the
following signature:
```py
def my_function(x: str=None, a: float=3.14):
```
then its documentation should look like this:
```
Args:
    x (`str`, *optional*):
        This argument controls ...
    a (`float`, *optional*, defaults to `3.14`):
        This argument is used to ...
```
Note that we always omit the "defaults to \`None\`" when None is the default for any argument. Also note that even
if the first line describing your argument type and its default gets long, you can't break it on several lines. You can
however write as many lines as you want in the indented description (see the example above with `input_ids`).
#### Writing a multi-line code block
Multi-line code blocks can be useful for displaying examples. They are done between two lines of three backticks as usual in Markdown:
````
```
# first line of code
# second line
# etc
```
````
#### Writing a return block
The return block should be introduced with the `Returns:` prefix, followed by a line return and an indentation.
The first line should be the type of the return, followed by a line return. No need to indent further for the elements
building the return.
Here's an example of a single value return:
```
Returns:
    `List[int]`: A list of integers in the range [0, 1] --- 1 for a special token, 0 for a sequence token.
```
Here's an example of a tuple return, comprising several objects:
```
Returns:
    `tuple(torch.FloatTensor)` comprising various elements depending on the configuration ([`BertConfig`]) and inputs:
    - **loss** (*optional*, returned when `masked_lm_labels` is provided) `torch.FloatTensor` of shape `(1,)` --
      Total loss is the sum of the masked language modeling loss and the next sequence prediction (classification) loss.
    - **prediction_scores** (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`) --
      Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
```
#### Adding an image
Due to the rapidly growing repository, it is important to make sure that no files that would significantly weigh down the repository are added. This includes images, videos, and other non-text files. We prefer to leverage a hf.co hosted `dataset` like
the ones hosted on [`hf-internal-testing`](https://huggingface.co/hf-internal-testing) in which to place these files and reference
them by URL. We recommend putting them in the following dataset: [huggingface/documentation-images](https://huggingface.co/datasets/huggingface/documentation-images).
If you are making an external contribution, feel free to add the images to your PR and ask a Hugging Face member to migrate them
to this dataset.
## Styling the docstring
We have an automatic script running with the `make style` command that will make sure that:
- the docstrings fully take advantage of the line width
- all code examples are formatted using black, like the code of the Transformers library
This script may have some weird failures if you made a syntax mistake or if you uncover a bug. Therefore, it's
recommended to commit your changes before running `make style`, so you can revert the changes done by that script
easily.

docs/TRANSLATING.md Normal file

@@ -0,0 +1,69 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
### Translating the Diffusers documentation into your language
As part of our mission to democratize machine learning, we'd love to make the Diffusers library available in many more languages! Follow the steps below if you want to help translate the documentation into your language 🙏.
**🗞️ Open an issue**
To get started, navigate to the [Issues](https://github.com/huggingface/diffusers/issues) page of this repo and check if anyone else has opened an issue for your language. If not, open a new issue by selecting the "🌐 Translating a New Language?" from the "New issue" button.
Once an issue exists, post a comment to indicate which chapters you'd like to work on, and we'll add your name to the list.
**🍴 Fork the repository**
First, you'll need to [fork the Diffusers repo](https://docs.github.com/en/get-started/quickstart/fork-a-repo). You can do this by clicking on the **Fork** button on the top-right corner of this repo's page.
Once you've forked the repo, you'll want to get the files on your local machine for editing. You can do that by cloning the fork with Git as follows:
```bash
git clone https://github.com/<YOUR-USERNAME>/diffusers.git
```
**📋 Copy-paste the English version with a new language code**
The documentation files are organized in one main directory:
- [`docs/source`](https://github.com/huggingface/diffusers/tree/main/docs/source): All the documentation materials are organized here by language.
You'll only need to copy the files in the [`docs/source/en`](https://github.com/huggingface/diffusers/tree/main/docs/source/en) directory, so first navigate to your fork of the repo and run the following:
```bash
cd ~/path/to/diffusers/docs
cp -r source/en source/<LANG-ID>
```
Here, `<LANG-ID>` should be one of the ISO 639-1 or ISO 639-2 language codes -- see [here](https://www.loc.gov/standards/iso639-2/php/code_list.php) for a handy table.
**✍️ Start translating**
The fun part comes - translating the text!
The first thing we recommend is translating the part of the `_toctree.yml` file that corresponds to your doc chapter. This file is used to render the table of contents on the website.
> 🙋 If the `_toctree.yml` file doesn't yet exist for your language, you can create one by copy-pasting from the English version and deleting the sections unrelated to your chapter. Just make sure it exists in the `docs/source/<LANG-ID>/` directory!
The fields you should add are `local` (with the name of the file containing the translation; e.g. `autoclass_tutorial`), and `title` (with the title of the doc in your language; e.g. `Load pretrained instances with an AutoClass`) -- as a reference, here is the `_toctree.yml` for [English](https://github.com/huggingface/diffusers/blob/main/docs/source/en/_toctree.yml):
```yaml
- sections:
- local: pipeline_tutorial # Do not change this! Use the same name for your .md file
title: Pipelines for inference # Translate this!
...
title: Tutorials # Translate this!
```
Once you have translated the `_toctree.yml` file, you can start translating the [MDX](https://mdxjs.com/) files associated with your docs chapter.
> 🙋 If you'd like others to help you with the translation, you should [open an issue](https://github.com/huggingface/diffusers/issues) and tag @patrickvonplaten.

docs/source/_config.py Normal file

@@ -0,0 +1,9 @@
# docstyle-ignore
INSTALL_CONTENT = """
# Diffusers installation
! pip install diffusers transformers datasets accelerate
# To install from source instead of the last release, comment the command above and uncomment the following one.
# ! pip install git+https://github.com/huggingface/diffusers.git
"""
notebook_first_cells = [{"type": "code", "content": INSTALL_CONTENT}]


@@ -1,104 +0,0 @@
- sections:
- local: index
title: "🧨 Diffusers"
- local: quicktour
title: "Quicktour"
- local: installation
title: "Installation"
title: "Get started"
- sections:
- sections:
- local: using-diffusers/loading
title: "Loading Pipelines, Models, and Schedulers"
- local: using-diffusers/configuration
title: "Configuring Pipelines, Models, and Schedulers"
- local: using-diffusers/custom_pipeline_overview
title: "Loading and Adding Custom Pipelines"
title: "Loading & Hub"
- sections:
- local: using-diffusers/unconditional_image_generation
title: "Unconditional Image Generation"
- local: using-diffusers/conditional_image_generation
title: "Text-to-Image Generation"
- local: using-diffusers/img2img
title: "Text-Guided Image-to-Image"
- local: using-diffusers/inpaint
title: "Text-Guided Image-Inpainting"
- local: using-diffusers/custom_pipeline_examples
title: "Community Pipelines"
- local: using-diffusers/contribute_pipeline
title: "How to contribute a Pipeline"
title: "Pipelines for Inference"
title: "Using Diffusers"
- sections:
- local: optimization/fp16
title: "Memory and Speed"
- local: optimization/onnx
title: "ONNX"
- local: optimization/open_vino
title: "OpenVINO"
- local: optimization/mps
title: "MPS"
title: "Optimization/Special Hardware"
- sections:
- local: training/overview
title: "Overview"
- local: training/unconditional_training
title: "Unconditional Image Generation"
- local: training/text_inversion
title: "Textual Inversion"
- local: training/dreambooth
title: "Dreambooth"
- local: training/text2image
title: "Text-to-image fine-tuning"
title: "Training"
- sections:
- local: conceptual/stable_diffusion
title: "Stable Diffusion"
- local: conceptual/philosophy
title: "Philosophy"
- local: conceptual/contribution
title: "How to contribute?"
title: "Conceptual Guides"
- sections:
- sections:
- local: api/models
title: "Models"
- local: api/schedulers
title: "Schedulers"
- local: api/diffusion_pipeline
title: "Diffusion Pipeline"
- local: api/logging
title: "Logging"
- local: api/configuration
title: "Configuration"
- local: api/outputs
title: "Outputs"
title: "Main Classes"
- sections:
- local: api/pipelines/overview
title: "Overview"
- local: api/pipelines/ddim
title: "DDIM"
- local: api/pipelines/ddpm
title: "DDPM"
- local: api/pipelines/latent_diffusion
title: "Latent Diffusion"
- local: api/pipelines/latent_diffusion_uncond
title: "Unconditional Latent Diffusion"
- local: api/pipelines/pndm
title: "PNDM"
- local: api/pipelines/score_sde_ve
title: "Score SDE VE"
- local: api/pipelines/stable_diffusion
title: "Stable Diffusion"
- local: api/pipelines/stochastic_karras_ve
title: "Stochastic Karras VE"
- local: api/pipelines/dance_diffusion
title: "Dance Diffusion"
- local: api/pipelines/vq_diffusion
title: "VQ Diffusion"
- local: api/pipelines/repaint
title: "RePaint"
title: "Pipelines"
title: "API"


@@ -1,42 +0,0 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Pipelines
The [`DiffusionPipeline`] is the easiest way to load any pretrained diffusion pipeline from the [Hub](https://huggingface.co/models?library=diffusers) and to use it in inference.
<Tip>
One should not use the Diffusion Pipeline class for training or fine-tuning a diffusion model. Individual
components of diffusion pipelines are usually trained individually, so we suggest working directly
with [`UNetModel`] and [`UNetConditionModel`].
</Tip>
Any diffusion pipeline that is loaded with [`~DiffusionPipeline.from_pretrained`] will automatically
detect the pipeline type, *e.g.* [`StableDiffusionPipeline`] and consequently load each component of the
pipeline and pass them into the `__init__` function of the pipeline, *e.g.* [`~StableDiffusionPipeline.__init__`].
Any pipeline object can be saved locally with [`~DiffusionPipeline.save_pretrained`].
## DiffusionPipeline
[[autodoc]] DiffusionPipeline
- from_pretrained
- save_pretrained
- to
- device
- components
## ImagePipelineOutput
By default diffusion pipelines return an object of class
[[autodoc]] pipeline_utils.ImagePipelineOutput


@@ -1,98 +0,0 @@
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Logging
🧨 Diffusers has a centralized logging system, so that you can easily set up the verbosity of the library.
Currently the default verbosity of the library is `WARNING`.
To change the level of verbosity, just use one of the direct setters. For instance, here is how to change the verbosity
to the INFO level.
```python
import diffusers
diffusers.logging.set_verbosity_info()
```
You can also use the environment variable `DIFFUSERS_VERBOSITY` to override the default verbosity. You can set it
to one of the following: `debug`, `info`, `warning`, `error`, `critical`. For example:
```bash
DIFFUSERS_VERBOSITY=error ./myprogram.py
```
Additionally, some `warnings` can be disabled by setting the environment variable
`DIFFUSERS_NO_ADVISORY_WARNINGS` to a true value, like *1*. This will disable any warning that is logged using
[`logger.warning_advice`]. For example:
```bash
DIFFUSERS_NO_ADVISORY_WARNINGS=1 ./myprogram.py
```
Here is an example of how to use the same logger as the library in your own module or script:
```python
from diffusers.utils import logging
logging.set_verbosity_info()
logger = logging.get_logger("diffusers")
logger.info("INFO")
logger.warning("WARN")
```
All the methods of this logging module are documented below. The main ones are
[`logging.get_verbosity`], to get the current level of verbosity in the logger, and
[`logging.set_verbosity`], to set the verbosity to the level of your choice (a short usage sketch follows the list below). In order (from the least
verbose to the most verbose), those levels (with their corresponding int values in parentheses) are:
- `diffusers.logging.CRITICAL` or `diffusers.logging.FATAL` (int value, 50): only report the most
critical errors.
- `diffusers.logging.ERROR` (int value, 40): only report errors.
- `diffusers.logging.WARNING` or `diffusers.logging.WARN` (int value, 30): only report errors and
warnings. This is the default level used by the library.
- `diffusers.logging.INFO` (int value, 20): report errors, warnings, and basic information.
- `diffusers.logging.DEBUG` (int value, 10): report all information.
By default, `tqdm` progress bars will be displayed during model download. [`logging.disable_progress_bar`] and [`logging.enable_progress_bar`] can be used to suppress or unsuppress this behavior.
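As a small sketch of the two main functions mentioned above (building on the import convention already shown in this document):

```python
import diffusers
from diffusers.utils import logging

print(logging.get_verbosity())  # 30, i.e. WARNING, by default
logging.set_verbosity(diffusers.logging.DEBUG)  # switch to the most verbose level (10)
```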
## Base setters
[[autodoc]] logging.set_verbosity_error
[[autodoc]] logging.set_verbosity_warning
[[autodoc]] logging.set_verbosity_info
[[autodoc]] logging.set_verbosity_debug
## Other functions
[[autodoc]] logging.get_verbosity
[[autodoc]] logging.set_verbosity
[[autodoc]] logging.get_logger
[[autodoc]] logging.enable_default_handler
[[autodoc]] logging.disable_default_handler
[[autodoc]] logging.enable_explicit_format
[[autodoc]] logging.reset_format
[[autodoc]] logging.enable_progress_bar
[[autodoc]] logging.disable_progress_bar


@@ -1,74 +0,0 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Models
Diffusers contains pretrained models for popular algorithms and modules for creating the next set of diffusion models.
The primary function of these models is to denoise an input sample, by modeling the distribution $p_\theta(\mathbf{x}_{t-1}|\mathbf{x}_t)$.
The models are built on the base class [`ModelMixin`], which is a `torch.nn.Module` with basic functionality for saving and loading models both locally and from the Hugging Face Hub.
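As a minimal sketch of that save/load behaviour (the checkpoint id is only illustrative and assumed to expose a `UNet2DModel` directly):

```python
from diffusers import UNet2DModel

# Load pretrained weights from the Hub, then write them back to a local directory;
# both from_pretrained and save_pretrained come from ModelMixin.
model = UNet2DModel.from_pretrained("google/ddpm-cifar10-32")
model.save_pretrained("./ddpm-cifar10-32-local")
```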
## ModelMixin
[[autodoc]] ModelMixin
## UNet2DOutput
[[autodoc]] models.unet_2d.UNet2DOutput
## UNet1DModel
[[autodoc]] UNet1DModel
## UNet2DModel
[[autodoc]] UNet2DModel
## UNet2DConditionOutput
[[autodoc]] models.unet_2d_condition.UNet2DConditionOutput
## UNet2DConditionModel
[[autodoc]] UNet2DConditionModel
## DecoderOutput
[[autodoc]] models.vae.DecoderOutput
## VQEncoderOutput
[[autodoc]] models.vae.VQEncoderOutput
## VQModel
[[autodoc]] VQModel
## AutoencoderKLOutput
[[autodoc]] models.vae.AutoencoderKLOutput
## AutoencoderKL
[[autodoc]] AutoencoderKL
## Transformer2DModel
[[autodoc]] Transformer2DModel
## Transformer2DModelOutput
[[autodoc]] models.attention.Transformer2DModelOutput
## FlaxModelMixin
[[autodoc]] FlaxModelMixin
## FlaxUNet2DConditionOutput
[[autodoc]] models.unet_2d_condition_flax.FlaxUNet2DConditionOutput
## FlaxUNet2DConditionModel
[[autodoc]] FlaxUNet2DConditionModel
## FlaxDecoderOutput
[[autodoc]] models.vae_flax.FlaxDecoderOutput
## FlaxAutoencoderKLOutput
[[autodoc]] models.vae_flax.FlaxAutoencoderKLOutput
## FlaxAutoencoderKL
[[autodoc]] FlaxAutoencoderKL


@@ -1,55 +0,0 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# BaseOutputs
All models have outputs that are instances of subclasses of [`~utils.BaseOutput`]. Those are
data structures containing all the information returned by the model, but that can also be used as tuples or
dictionaries.
Let's see how this looks in an example:
```python
from diffusers import DDIMPipeline
pipeline = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32")
outputs = pipeline()
```
The `outputs` object is a [`~pipeline_utils.ImagePipelineOutput`]; as we can see in the
documentation of that class below, it has an `images` attribute.
You can access each attribute as you would usually do, and if that attribute has not been returned by the model, you will get `None`:
```python
outputs.images
```
or via keyword lookup
```python
outputs["images"]
```
When considering our `outputs` object as a tuple, it only takes into account the attributes that don't have `None` values.
For instance, we can retrieve the images via indexing:
```python
outputs[:1]
```
which returns the tuple `(outputs.images,)`.
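Since the class also exposes `to_tuple`, the same data can be unpacked explicitly (a small sketch continuing the example above):

```python
# Convert the output dataclass into a plain tuple of its non-None fields.
images = outputs.to_tuple()[0]
```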
## BaseOutput
[[autodoc]] utils.BaseOutput
- to_tuple


@@ -1,34 +0,0 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# DDIM
## Overview
[Denoising Diffusion Implicit Models](https://arxiv.org/abs/2010.02502) (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon.
The abstract of the paper is the following:
Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space.
The original codebase of this paper can be found [here](https://github.com/ermongroup/ddim).
## Available Pipelines:
| Pipeline | Tasks | Colab
|---|---|:---:|
| [pipeline_ddim.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ddim/pipeline_ddim.py) | *Unconditional Image Generation* | - |
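As a short usage sketch of the pipeline documented below (the checkpoint id matches the one used elsewhere in these docs; the step count is only illustrative of the speed/quality trade-off described in the abstract):

```python
from diffusers import DDIMPipeline

pipe = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32").to("cuda")
# DDIM can sample in far fewer steps than the 1000 used during training.
image = pipe(num_inference_steps=50).images[0]
image.save("ddim_sample.png")
```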
## DDIMPipeline
[[autodoc]] DDIMPipeline
- __call__


@@ -1,36 +0,0 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# DDPM
## Overview
[Denoising Diffusion Probabilistic Models](https://arxiv.org/abs/2006.11239)
(DDPM) by Jonathan Ho, Ajay Jain and Pieter Abbeel proposes the diffusion based model of the same name, but in the context of the 🤗 Diffusers library, DDPM refers to the discrete denoising scheduler from the paper as well as the pipeline.
The abstract of the paper is the following:
We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN.
The original codebase of this paper can be found [here](https://github.com/hojonathanho/diffusion).
## Available Pipelines:
| Pipeline | Tasks | Colab
|---|---|:---:|
| [pipeline_ddpm.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ddpm/pipeline_ddpm.py) | *Unconditional Image Generation* | - |
# DDPMPipeline
[[autodoc]] DDPMPipeline
- __call__


@@ -1,41 +0,0 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Unconditional Latent Diffusion
## Overview
Unconditional Latent Diffusion was proposed in [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752) by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer.
The abstract of the paper is the following:
*By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs.*
The original codebase can be found [here](https://github.com/CompVis/latent-diffusion).
## Tips:
-
-
-
## Available Pipelines:
| Pipeline | Tasks | Colab
|---|---|:---:|
| [pipeline_latent_diffusion_uncond.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/latent_diffusion_uncond/pipeline_latent_diffusion_uncond.py) | *Unconditional Image Generation* | - |
## Examples:
## LDMPipeline
[[autodoc]] LDMPipeline
- __call__


@@ -1,186 +0,0 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Pipelines
Pipelines provide a simple way to run state-of-the-art diffusion models in inference.
Most diffusion systems consist of multiple independently-trained models and highly adaptable scheduler
components - all of which are needed to have a functioning end-to-end diffusion system.
As an example, [Stable Diffusion](https://huggingface.co/blog/stable_diffusion) has three independently trained models:
- [Autoencoder](./api/models#vae)
- [Conditional Unet](./api/models#UNet2DConditionModel)
- [CLIP text encoder](https://huggingface.co/docs/transformers/v4.21.2/en/model_doc/clip#transformers.CLIPTextModel)
- a scheduler component, [scheduler](./api/scheduler#pndm),
- a [CLIPFeatureExtractor](https://huggingface.co/docs/transformers/v4.21.2/en/model_doc/clip#transformers.CLIPFeatureExtractor),
- as well as a [safety checker](./stable_diffusion#safety_checker).
All of these components are necessary to run stable diffusion in inference even though they were trained
or created independently from each other.
To that end, we strive to offer all open-sourced, state-of-the-art diffusion systems under a unified API.
More specifically, we strive to provide pipelines that
- 1. can load the officially published weights and yield 1-to-1 the same outputs as the original implementation according to the corresponding paper (*e.g.* [LDMTextToImagePipeline](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/latent_diffusion), uses the officially released weights of [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752)),
- 2. have a simple user interface to run the model in inference (see the [Pipelines API](#pipelines-api) section),
- 3. are easy to understand with code that is self-explanatory and can be read along-side the official paper (see [Pipelines summary](#pipelines-summary)),
- 4. can easily be contributed by the community (see the [Contribution](#contribution) section).
**Note** that pipelines do not (and should not) offer any training functionality.
If you are looking for *official* training examples, please have a look at [examples](https://github.com/huggingface/diffusers/tree/main/examples).
## 🧨 Diffusers Summary
The following table summarizes all officially supported pipelines, their corresponding paper, and if
available a colab notebook to directly try them out.
| Pipeline | Paper | Tasks | Colab
|---|---|:---:|:---:|
| [ddpm](./ddpm) | [**Denoising Diffusion Probabilistic Models**](https://arxiv.org/abs/2006.11239) | Unconditional Image Generation |
| [ddim](./ddim) | [**Denoising Diffusion Implicit Models**](https://arxiv.org/abs/2010.02502) | Unconditional Image Generation | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb)
| [latent_diffusion](./latent_diffusion) | [**High-Resolution Image Synthesis with Latent Diffusion Models**](https://arxiv.org/abs/2112.10752)| Text-to-Image Generation |
| [latent_diffusion_uncond](./latent_diffusion_uncond) | [**High-Resolution Image Synthesis with Latent Diffusion Models**](https://arxiv.org/abs/2112.10752) | Unconditional Image Generation |
| [pndm](./pndm) | [**Pseudo Numerical Methods for Diffusion Models on Manifolds**](https://arxiv.org/abs/2202.09778) | Unconditional Image Generation |
| [score_sde_ve](./score_sde_ve) | [**Score-Based Generative Modeling through Stochastic Differential Equations**](https://openreview.net/forum?id=PxTIG12RRHS) | Unconditional Image Generation |
| [score_sde_vp](./score_sde_vp) | [**Score-Based Generative Modeling through Stochastic Differential Equations**](https://openreview.net/forum?id=PxTIG12RRHS) | Unconditional Image Generation |
| [stable_diffusion](./stable_diffusion) | [**Stable Diffusion**](https://stability.ai/blog/stable-diffusion-public-release) | Text-to-Image Generation | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb)
| [stable_diffusion](./stable_diffusion) | [**Stable Diffusion**](https://stability.ai/blog/stable-diffusion-public-release) | Image-to-Image Text-Guided Generation | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/image_2_image_using_diffusers.ipynb)
| [stable_diffusion](./stable_diffusion) | [**Stable Diffusion**](https://stability.ai/blog/stable-diffusion-public-release) | Text-Guided Image Inpainting | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/in_painting_with_stable_diffusion_using_diffusers.ipynb)
| [stochastic_karras_ve](./stochastic_karras_ve) | [**Elucidating the Design Space of Diffusion-Based Generative Models**](https://arxiv.org/abs/2206.00364) | Unconditional Image Generation |
| [vq_diffusion](./vq_diffusion) | [**Vector Quantized Diffusion Model for Text-to-Image Synthesis**](https://arxiv.org/abs/2111.14822) | Text-to-Image Generation |
| [repaint](./repaint) | [**RePaint: Inpainting using Denoising Diffusion Probabilistic Models**](https://arxiv.org/abs/2201.09865) | Image Inpainting |
**Note**: Pipelines are simple examples of how to play around with the diffusion systems as described in the corresponding papers.
However, most of them can be adapted to use different scheduler components or even different model components. Some pipeline examples are shown in the [Examples](#examples) below.
## Pipelines API
Diffusion models often consist of multiple independently-trained models or other previously existing components.
Each model has been trained independently on a different task and the scheduler can easily be swapped out and replaced with a different one.
During inference, we however want to be able to easily load all components and use them in inference - even if one component, *e.g.* CLIP's text encoder, originates from a different library, such as [Transformers](https://github.com/huggingface/transformers). To that end, all pipelines provide the following functionality:
- [`from_pretrained` method](../diffusion_pipeline) that accepts a Hugging Face Hub repository id, *e.g.* [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) or a path to a local directory, *e.g.*
"./stable-diffusion". To correctly retrieve which models and components should be loaded, one has to provide a `model_index.json` file, *e.g.* [runwayml/stable-diffusion-v1-5/model_index.json](https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/model_index.json), which defines all components that should be
loaded into the pipelines. More specifically, for each model/component one needs to define the format `<name>: ["<library>", "<class name>"]`. `<name>` is the attribute name given to the loaded instance of `<class name>` which can be found in the library or pipeline folder called `"<library>"`.
- [`save_pretrained`](../diffusion_pipeline) that accepts a local path, *e.g.* `./stable-diffusion` under which all models/components of the pipeline will be saved. For each component/model a folder is created inside the local path that is named after the given attribute name, *e.g.* `./stable_diffusion/unet`.
In addition, a `model_index.json` file is created at the root of the local path, *e.g.* `./stable_diffusion/model_index.json` so that the complete pipeline can again be instantiated
from the local path.
- [`to`](../diffusion_pipeline) which accepts a `string` or `torch.device` to move all models that are of type `torch.nn.Module` to the passed device. The behavior is fully analogous to [PyTorch's `to` method](https://pytorch.org/docs/stable/generated/torch.nn.Module.html#torch.nn.Module.to).
- [`__call__`] method to use the pipeline in inference. `__call__` defines inference logic of the pipeline and should ideally encompass all aspects of it, from pre-processing to forwarding tensors to the different models and schedulers, as well as post-processing. The API of the `__call__` method can strongly vary from pipeline to pipeline. *E.g.* a text-to-image pipeline, such as [`StableDiffusionPipeline`](./stable_diffusion) should accept among other things the text prompt to generate the image. A pure image generation pipeline, such as [DDPMPipeline](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/ddpm) on the other hand can be run without providing any inputs. To better understand what inputs can be adapted for
each pipeline, one should look directly into the respective pipeline.
**Note**: All pipelines have PyTorch's autograd disabled by decorating the `__call__` method with a [`torch.no_grad`](https://pytorch.org/docs/stable/generated/torch.no_grad.html) decorator because pipelines should
not be used for training. If you want to store the gradients during the forward pass, we recommend writing your own pipeline, see also our [community-examples](https://github.com/huggingface/diffusers/tree/main/examples/community)
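Put together, the API surface described above amounts to only a few calls. A minimal sketch (the repository id and paths are the same example values used throughout this page):

```python
import torch
from diffusers import DiffusionPipeline

# from_pretrained resolves the components listed in model_index.json,
# `to` moves every torch.nn.Module component to the device,
# __call__ runs inference, and save_pretrained serializes everything back to disk.
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
image = pipe("a photo of an astronaut riding a horse on mars").images[0]
pipe.save_pretrained("./stable-diffusion")
```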
## Contribution
We are more than happy about any contribution to the officially supported pipelines 🤗. We aspire
all of our pipelines to be **self-contained**, **easy-to-tweak**, **beginner-friendly** and for **one-purpose-only**.
- **Self-contained**: A pipeline shall be as self-contained as possible. More specifically, this means that all functionality should either be directly defined in the pipeline file itself, be inherited from (and only from) the [`DiffusionPipeline` class](../diffusion_pipeline), or be directly attached to the model and scheduler components of the pipeline.
- **Easy-to-use**: Pipelines should be extremely easy to use - one should be able to load the pipeline and
use it for its designated task, *e.g.* text-to-image generation, in just a couple of lines of code. Most
logic including pre-processing, an unrolled diffusion loop, and post-processing should all happen inside the `__call__` method.
- **Easy-to-tweak**: Certain pipelines will not be able to handle all use cases and tasks that you might like them to. If you want to use a certain pipeline for a specific use case that is not yet supported, you might have to copy the pipeline file and tweak the code to your needs. We try to make the pipeline code as readable as possible so that each part from pre-processing to diffusing to post-processing can easily be adapted. If you would like the community to benefit from your customized pipeline, we would love to see a contribution to our [community-examples](https://github.com/huggingface/diffusers/tree/main/examples/community). If you feel that an important pipeline should be part of the official pipelines but isn't, a contribution to the [official pipelines](./overview) would be even better.
- **One-purpose-only**: Pipelines should be used for one task and one task only. Even if two tasks are very similar from a modeling point of view, *e.g.* image2image translation and in-painting, pipelines shall be used for one task only to keep them *easy-to-tweak* and *readable*.
## Examples
### Text-to-Image generation with Stable Diffusion
```python
# make sure you're logged in with `huggingface-cli login`
from diffusers import StableDiffusionPipeline, LMSDiscreteScheduler
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
### Image-to-Image text-guided generation with Stable Diffusion
The `StableDiffusionImg2ImgPipeline` lets you pass a text prompt and an initial image to condition the generation of new images.
```python
import requests
import torch
from PIL import Image
from io import BytesIO

from diffusers import StableDiffusionImg2ImgPipeline

# load the pipeline
device = "cuda"
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", revision="fp16", torch_dtype=torch.float16
).to(device)
# let's download an initial image
url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
response = requests.get(url)
init_image = Image.open(BytesIO(response.content)).convert("RGB")
init_image = init_image.resize((768, 512))
prompt = "A fantasy landscape, trending on artstation"
images = pipe(prompt=prompt, init_image=init_image, strength=0.75, guidance_scale=7.5).images
images[0].save("fantasy_landscape.png")
```
You can also run this example on colab [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/image_2_image_using_diffusers.ipynb)
### Tweak prompts reusing seeds and latents
You can generate your own latents to reproduce results, or tweak your prompt on a specific result you liked. [This notebook](https://github.com/pcuenca/diffusers-examples/blob/main/notebooks/stable-diffusion-seeds.ipynb) shows how to do it step by step. You can also run it in Google Colab [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pcuenca/diffusers-examples/blob/main/notebooks/stable-diffusion-seeds.ipynb).
### In-painting using Stable Diffusion
The `StableDiffusionInpaintPipeline` lets you edit specific parts of an image by providing a mask and text prompt.
```python
import PIL
import requests
import torch
from io import BytesIO
from diffusers import StableDiffusionInpaintPipeline
def download_image(url):
    response = requests.get(url)
    return PIL.Image.open(BytesIO(response.content)).convert("RGB")


img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"

init_image = download_image(img_url).resize((512, 512))
mask_image = download_image(mask_url).resize((512, 512))

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    revision="fp16",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")
prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0]
```
You can also run this example on colab [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/in_painting_with_stable_diffusion_using_diffusers.ipynb)


@@ -1,35 +0,0 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# PNDM
## Overview
[Pseudo Numerical methods for Diffusion Models on manifolds](https://arxiv.org/abs/2202.09778) (PNDM) by Luping Liu, Yi Ren, Zhijie Lin and Zhou Zhao.
The abstract of the paper is the following:
Denoising Diffusion Probabilistic Models (DDPMs) can generate high-quality samples such as image and audio samples. However, DDPMs require hundreds to thousands of iterations to produce final samples. Several prior works have successfully accelerated DDPMs through adjusting the variance schedule (e.g., Improved Denoising Diffusion Probabilistic Models) or the denoising equation (e.g., Denoising Diffusion Implicit Models (DDIMs)). However, these acceleration methods cannot maintain the quality of samples and even introduce new noise at a high speedup rate, which limit their practicability. To accelerate the inference process while keeping the sample quality, we provide a fresh perspective that DDPMs should be treated as solving differential equations on manifolds. Under such a perspective, we propose pseudo numerical methods for diffusion models (PNDMs). Specifically, we figure out how to solve differential equations on manifolds and show that DDIMs are simple cases of pseudo numerical methods. We change several classical numerical methods to corresponding pseudo numerical methods and find that the pseudo linear multi-step method is the best in most situations. According to our experiments, by directly using pre-trained models on Cifar10, CelebA and LSUN, PNDMs can generate higher quality synthetic images with only 50 steps compared with 1000-step DDIMs (20x speedup), significantly outperform DDIMs with 250 steps (by around 0.4 in FID) and have good generalization on different variance schedules.
The original codebase can be found [here](https://github.com/luping-liu/PNDM).
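For a rough illustration of the 50-step sampling discussed above, here is a minimal unconditional sketch; the checkpoint (`google/ddpm-cifar10-32`) and the pairing of its UNet with a freshly instantiated `PNDMScheduler` are assumptions made for demonstration:
```python
from diffusers import PNDMPipeline, PNDMScheduler, UNet2DModel

# Assumed setup: reuse a pretrained unconditional DDPM UNet and sample it with PNDM
unet = UNet2DModel.from_pretrained("google/ddpm-cifar10-32", subfolder="unet")
scheduler = PNDMScheduler()
pipe = PNDMPipeline(unet=unet, scheduler=scheduler).to("cuda")

# 50 PNDM steps instead of the ~1000 steps a plain DDPM sampler would need
image = pipe(num_inference_steps=50).images[0]
image.save("pndm_sample.png")
```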
## Available Pipelines:
| Pipeline | Tasks | Colab
|---|---|:---:|
| [pipeline_pndm.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pndm/pipeline_pndm.py) | *Unconditional Image Generation* | - |
## PNDMPipeline
[[autodoc]] pipelines.pndm.pipeline_pndm.PNDMPipeline
- __call__

View File

@@ -1,77 +0,0 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# RePaint
## Overview
[RePaint: Inpainting using Denoising Diffusion Probabilistic Models](https://arxiv.org/abs/2201.09865) (RePaint) by Andreas Lugmayr, Martin Danelljan, Andres Romero, Fisher Yu, Radu Timofte, Luc Van Gool.
The abstract of the paper is the following:
Free-form inpainting is the task of adding new content to an image in the regions specified by an arbitrary binary mask. Most existing approaches train for a certain distribution of masks, which limits their generalization capabilities to unseen mask types. Furthermore, training with pixel-wise and perceptual losses often leads to simple textural extensions towards the missing areas instead of semantically meaningful generation. In this work, we propose RePaint: A Denoising Diffusion Probabilistic Model (DDPM) based inpainting approach that is applicable to even extreme masks. We employ a pretrained unconditional DDPM as the generative prior. To condition the generation process, we only alter the reverse diffusion iterations by sampling the unmasked regions using the given image information. Since this technique does not modify or condition the original DDPM network itself, the model produces high-quality and diverse output images for any inpainting form. We validate our method for both faces and general-purpose image inpainting using standard and extreme masks.
RePaint outperforms state-of-the-art Autoregressive, and GAN approaches for at least five out of six mask distributions.
The original codebase can be found [here](https://github.com/andreas128/RePaint).
## Available Pipelines:
| Pipeline | Tasks | Colab
|-------------------------------------------------------------------------------------------------------------------------------|--------------------|:---:|
| [pipeline_repaint.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/repaint/pipeline_repaint.py) | *Image Inpainting* | - |
## Usage example
```python
from io import BytesIO
import torch
import PIL
import requests
from diffusers import RePaintPipeline, RePaintScheduler
def download_image(url):
response = requests.get(url)
return PIL.Image.open(BytesIO(response.content)).convert("RGB")
img_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/celeba_hq_256.png"
mask_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/mask_256.png"
# Load the original image and the mask as PIL images
original_image = download_image(img_url).resize((256, 256))
mask_image = download_image(mask_url).resize((256, 256))
# Load the RePaint scheduler and pipeline based on a pretrained DDPM model
scheduler = RePaintScheduler.from_config("google/ddpm-ema-celebahq-256")
pipe = RePaintPipeline.from_pretrained("google/ddpm-ema-celebahq-256", scheduler=scheduler)
pipe = pipe.to("cuda")
generator = torch.Generator(device="cuda").manual_seed(0)
output = pipe(
original_image=original_image,
mask_image=mask_image,
num_inference_steps=250,
eta=0.0,
jump_length=10,
jump_n_sample=10,
generator=generator,
)
inpainted_image = output.images[0]
```
## RePaintPipeline
[[autodoc]] pipelines.repaint.pipeline_repaint.RePaintPipeline
- __call__

View File

@@ -1,36 +0,0 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Score SDE VE
## Overview
[Score-Based Generative Modeling through Stochastic Differential Equations](https://arxiv.org/abs/2011.13456) (Score SDE) by Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon and Ben Poole.
The abstract of the paper is the following:
Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (\aka, score) of the perturbed data distribution. By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples. We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE. We also derive an equivalent neural ODE that samples from the same distribution as the SDE, but additionally enables exact likelihood computation, and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model.
The original codebase can be found [here](https://github.com/yang-song/score_sde_pytorch).
This pipeline implements the Variance Exploding (VE) variant of the method.
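A minimal unconditional sampling sketch; the checkpoint (`google/ncsnpp-celebahq-256`) and the step count are illustrative assumptions:
```python
from diffusers import ScoreSdeVePipeline

# Assumed checkpoint: an NCSN++ model trained with the VE SDE formulation
pipe = ScoreSdeVePipeline.from_pretrained("google/ncsnpp-celebahq-256").to("cuda")

# Score SDE VE sampling typically uses many predictor-corrector steps
image = pipe(num_inference_steps=2000).images[0]
image.save("sde_ve_sample.png")
```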
## Available Pipelines:
| Pipeline | Tasks | Colab
|---|---|:---:|
| [pipeline_score_sde_ve.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/score_sde_ve/pipeline_score_sde_ve.py) | *Unconditional Image Generation* | - |
## ScoreSdeVePipeline
[[autodoc]] ScoreSdeVePipeline
- __call__

View File

@@ -1,86 +0,0 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Stable diffusion pipelines
Stable Diffusion is a text-to-image _latent diffusion_ model created by the researchers and engineers from [CompVis](https://github.com/CompVis), [Stability AI](https://stability.ai/) and [LAION](https://laion.ai/). It's trained on 512x512 images from a subset of the [LAION-5B](https://laion.ai/blog/laion-5b/) dataset. This model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With its 860M UNet and 123M text encoder, the model is relatively lightweight and can run on consumer GPUs.
Latent diffusion is the research on top of which Stable Diffusion was built. It was proposed in [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752) by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. You can learn more details about it in the [specific pipeline for latent diffusion](pipelines/latent_diffusion) that is part of 🤗 Diffusers.
For more details about how Stable Diffusion works and how it differs from the base latent diffusion model, please refer to the official [launch announcement post](https://stability.ai/blog/stable-diffusion-announcement) and [this section of our own blog post](https://huggingface.co/blog/stable_diffusion#how-does-stable-diffusion-work).
*Tips*:
- To tweak your prompts on a specific result you liked, you can generate your own latents, as demonstrated in the following notebook: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pcuenca/diffusers-examples/blob/main/notebooks/stable-diffusion-seeds.ipynb)
*Overview*:
| Pipeline | Tasks | Colab | Demo
|---|---|:---:|:---:|
| [pipeline_stable_diffusion.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py) | *Text-to-Image Generation* | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_diffusion.ipynb) | [🤗 Stable Diffusion](https://huggingface.co/spaces/stabilityai/stable-diffusion)
| [pipeline_stable_diffusion_img2img.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py) | *Image-to-Image Text-Guided Generation* | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/image_2_image_using_diffusers.ipynb) | [🤗 Diffuse the Rest](https://huggingface.co/spaces/huggingface/diffuse-the-rest)
| [pipeline_stable_diffusion_inpaint.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py) | **Experimental** *Text-Guided Image Inpainting* | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/in_painting_with_stable_diffusion_using_diffusers.ipynb) | Coming soon
## Tips
### How to load and use different schedulers.
The Stable Diffusion pipeline uses the [`PNDMScheduler`] by default, but `diffusers` provides many other schedulers that can be used with it, such as [`DDIMScheduler`], [`LMSDiscreteScheduler`], [`EulerDiscreteScheduler`], [`EulerAncestralDiscreteScheduler`], etc.
To use a different scheduler, pass the `scheduler` argument to the pipeline's `from_pretrained` method. For example, to use the [`EulerDiscreteScheduler`], you can do the following:
```python
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler
euler_scheduler = EulerDiscreteScheduler.from_config("CompVis/stable-diffusion-v1-4", subfolder="scheduler")
pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=euler_scheduler)
```
### How to cover all use cases with a single or multiple pipelines
If you want to use all possible use cases in a single `DiffusionPipeline` you can either:
- Make use of the [Stable Diffusion Mega Pipeline](https://github.com/huggingface/diffusers/tree/main/examples/community#stable-diffusion-mega) or
- Make use of the `components` functionality to instantiate all components in the most memory-efficient way:
```python
>>> from diffusers import (
... StableDiffusionPipeline,
... StableDiffusionImg2ImgPipeline,
... StableDiffusionInpaintPipeline,
... )
>>> text2img = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
>>> img2img = StableDiffusionImg2ImgPipeline(**text2img.components)
>>> inpaint = StableDiffusionInpaintPipeline(**text2img.components)
>>> # now you can use text2img(...), img2img(...), inpaint(...) just like the call methods of each respective pipeline
```
## StableDiffusionPipelineOutput
[[autodoc]] pipelines.stable_diffusion.StableDiffusionPipelineOutput
## StableDiffusionPipeline
[[autodoc]] StableDiffusionPipeline
- __call__
- enable_attention_slicing
- disable_attention_slicing
## StableDiffusionImg2ImgPipeline
[[autodoc]] StableDiffusionImg2ImgPipeline
- __call__
- enable_attention_slicing
- disable_attention_slicing
## StableDiffusionInpaintPipeline
[[autodoc]] StableDiffusionInpaintPipeline
- __call__
- enable_attention_slicing
- disable_attention_slicing

View File

@@ -1,35 +0,0 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Stochastic Karras VE
## Overview
[Elucidating the Design Space of Diffusion-Based Generative Models](https://arxiv.org/abs/2206.00364) by Tero Karras, Miika Aittala, Timo Aila and Samuli Laine.
The abstract of the paper is the following:
We argue that the theory and practice of diffusion-based generative models are currently unnecessarily convoluted and seek to remedy the situation by presenting a design space that clearly separates the concrete design choices. This lets us identify several changes to both the sampling and training processes, as well as preconditioning of the score networks. Together, our improvements yield new state-of-the-art FID of 1.79 for CIFAR-10 in a class-conditional setting and 1.97 in an unconditional setting, with much faster sampling (35 network evaluations per image) than prior designs. To further demonstrate their modular nature, we show that our design changes dramatically improve both the efficiency and quality obtainable with pre-trained score networks from previous work, including improving the FID of an existing ImageNet-64 model from 2.07 to near-SOTA 1.55.
This pipeline implements the stochastic sampling tailored to Variance-Exploding (VE) models.
## Available Pipelines:
| Pipeline | Tasks | Colab
|---|---|:---:|
| [pipeline_stochastic_karras_ve.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stochastic_karras_ve/pipeline_stochastic_karras_ve.py) | *Unconditional Image Generation* | - |
## KarrasVePipeline
[[autodoc]] KarrasVePipeline
- __call__

View File

@@ -1,34 +0,0 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# VQDiffusion
## Overview
[Vector Quantized Diffusion Model for Text-to-Image Synthesis](https://arxiv.org/abs/2111.14822) by Shuyang Gu, Dong Chen, Jianmin Bao, Fang Wen, Bo Zhang, Dongdong Chen, Lu Yuan, Baining Guo
The abstract of the paper is the following:
We present the vector quantized diffusion (VQ-Diffusion) model for text-to-image generation. This method is based on a vector quantized variational autoencoder (VQ-VAE) whose latent space is modeled by a conditional variant of the recently developed Denoising Diffusion Probabilistic Model (DDPM). We find that this latent-space method is well-suited for text-to-image generation tasks because it not only eliminates the unidirectional bias with existing methods but also allows us to incorporate a mask-and-replace diffusion strategy to avoid the accumulation of errors, which is a serious problem with existing methods. Our experiments show that the VQ-Diffusion produces significantly better text-to-image generation results when compared with conventional autoregressive (AR) models with similar numbers of parameters. Compared with previous GAN-based text-to-image methods, our VQ-Diffusion can handle more complex scenes and improve the synthesized image quality by a large margin. Finally, we show that the image generation computation in our method can be made highly efficient by reparameterization. With traditional AR methods, the text-to-image generation time increases linearly with the output image resolution and hence is quite time consuming even for normal size images. The VQ-Diffusion allows us to achieve a better trade-off between quality and speed. Our experiments indicate that the VQ-Diffusion model with the reparameterization is fifteen times faster than traditional AR methods while achieving a better image quality.
The original codebase can be found [here](https://github.com/microsoft/VQ-Diffusion).
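A minimal text-to-image sketch; the checkpoint (`microsoft/vq-diffusion-ithq`) and the prompt are illustrative assumptions:
```python
from diffusers import VQDiffusionPipeline

# Assumed checkpoint: the ITHQ VQ-Diffusion weights ported to diffusers
pipe = VQDiffusionPipeline.from_pretrained("microsoft/vq-diffusion-ithq").to("cuda")

image = pipe(prompt="teddy bear playing in the pool").images[0]
image.save("vq_diffusion_sample.png")
```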
## Available Pipelines:
| Pipeline | Tasks | Colab
|---|---|:---:|
| [pipeline_vq_diffusion.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/vq_diffusion/pipeline_vq_diffusion.py) | *Text-to-Image Generation* | - |
## VQDiffusionPipeline
[[autodoc]] pipelines.vq_diffusion.pipeline_vq_diffusion.VQDiffusionPipeline
- __call__

View File

@@ -1,145 +0,0 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Schedulers
Diffusers contains multiple pre-built schedule functions for the diffusion process.
## What is a scheduler?
The schedule functions, denoted *Schedulers* in the library, take in the output of a trained model, a sample which the diffusion process is iterating on, and a timestep, and return a denoised sample.
- Schedulers define the methodology for iteratively adding noise to an image or for updating a sample based on model outputs.
    - during training, adding noise in different manners defines the algorithmic process used to corrupt images so that a diffusion model learns to denoise them.
    - during inference, the scheduler defines how to update a sample based on the output of a pretrained model.
- Schedulers are often defined by a *noise schedule* and an *update rule* that solve the differential equation (a short sketch of both roles follows this list).
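A minimal sketch of these two roles using [`DDPMScheduler`]; the shapes, the random timestep, and the stand-in model output are placeholders:
```python
import torch
from diffusers import DDPMScheduler

scheduler = DDPMScheduler(num_train_timesteps=1000)

# Training side: corrupt a clean sample with noise at a randomly drawn timestep
clean = torch.randn(1, 3, 32, 32)
noise = torch.randn_like(clean)
timesteps = torch.randint(0, scheduler.config.num_train_timesteps, (1,))
noisy = scheduler.add_noise(clean, noise, timesteps)

# Inference side: given a model's noise prediction, update the sample one step backwards in time
model_output = torch.randn_like(noisy)  # stand-in for unet(noisy, t).sample
previous_sample = scheduler.step(model_output, timesteps[0], noisy).prev_sample
```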
### Discrete versus continuous schedulers
All schedulers take in a timestep to predict the updated version of the sample being diffused.
The timesteps dictate where in the diffusion process the step is, where data is generated by iterating forward in time and inference is executed by propagating backwards through timesteps.
Different algorithms use timesteps that can be both discrete (accepting `int` inputs), such as [`DDPMScheduler`] or [`PNDMScheduler`], and continuous (accepting `float` inputs), such as the score-based schedulers [`ScoreSdeVeScheduler`] or [`ScoreSdeVpScheduler`].
## Designing Re-usable schedulers
The core design principle behind the schedule functions is to be model-, system-, and framework-independent.
This allows for rapid experimentation and cleaner abstractions in the code, where the model prediction is separated from the sample update.
To this end, the design of schedulers is such that:
- Schedulers can be used interchangeably between diffusion models in inference to find the preferred trade-off between speed and generation quality.
- Schedulers are currently by default in PyTorch, but are designed to be framework independent (partial Jax support currently exists).
## API
The core API for any new scheduler must follow a limited structure; a minimal usage sketch follows the list below.
- Schedulers should provide one or more `def step(...)` functions that should be called to update the generated sample iteratively.
- Schedulers should provide a `set_timesteps(...)` method that configures the parameters of a schedule function for a specific inference task.
- Schedulers should be framework-specific.
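For illustration, here is a minimal denoising loop built around the `set_timesteps(...)` and `step(...)` contract described above; the checkpoint (`google/ddpm-cat-256`), the step count, and the model/scheduler pairing are illustrative assumptions rather than a prescribed recipe:
```python
import torch
from diffusers import DDPMScheduler, UNet2DModel

# Assumed checkpoint: any unconditional UNet2DModel with a matching scheduler config
model = UNet2DModel.from_pretrained("google/ddpm-cat-256", subfolder="unet").to("cuda")
scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256", subfolder="scheduler")

scheduler.set_timesteps(num_inference_steps=50)  # configure the schedule for this inference run

sample = torch.randn(1, model.config.in_channels, model.config.sample_size, model.config.sample_size).to("cuda")
for t in scheduler.timesteps:
    with torch.no_grad():
        noise_pred = model(sample, t).sample
    sample = scheduler.step(noise_pred, t, sample).prev_sample  # one reverse-diffusion update
```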
The base class [`SchedulerMixin`] implements low level utilities used by multiple schedulers.
### SchedulerMixin
[[autodoc]] SchedulerMixin
### SchedulerOutput
The class [`SchedulerOutput`] contains the outputs from any schedulers `step(...)` call.
[[autodoc]] schedulers.scheduling_utils.SchedulerOutput
### Implemented Schedulers
#### Denoising diffusion implicit models (DDIM)
Original paper can be found [here](https://arxiv.org/abs/2010.02502).
[[autodoc]] DDIMScheduler
#### Denoising diffusion probabilistic models (DDPM)
Original paper can be found [here](https://arxiv.org/abs/2006.11239).
[[autodoc]] DDPMScheduler
#### Variance exploding, stochastic sampling from Karras et. al
Original paper can be found [here](https://arxiv.org/abs/2206.00364).
[[autodoc]] KarrasVeScheduler
#### Linear multistep scheduler for discrete beta schedules
Original implementation can be found [here](https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L181).
[[autodoc]] LMSDiscreteScheduler
#### Pseudo numerical methods for diffusion models (PNDM)
Original implementation can be found [here](https://github.com/luping-liu/PNDM).
[[autodoc]] PNDMScheduler
#### variance exploding stochastic differential equation (VE-SDE) scheduler
Original paper can be found [here](https://arxiv.org/abs/2011.13456).
[[autodoc]] ScoreSdeVeScheduler
#### improved pseudo numerical methods for diffusion models (iPNDM)
Original implementation can be found [here](https://github.com/crowsonkb/v-diffusion-pytorch/blob/987f8985e38208345c1959b0ea767a625831cc9b/diffusion/sampling.py#L296).
[[autodoc]] IPNDMScheduler
#### variance preserving stochastic differential equation (VP-SDE) scheduler
Original paper can be found [here](https://arxiv.org/abs/2011.13456).
<Tip warning={true}>
Score SDE-VP is under construction.
</Tip>
[[autodoc]] schedulers.scheduling_sde_vp.ScoreSdeVpScheduler
#### Euler scheduler
Euler scheduler (Algorithm 2) from the paper [Elucidating the Design Space of Diffusion-Based Generative Models](https://arxiv.org/abs/2206.00364) by Karras et al. (2022). Based on the original [k-diffusion](https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L51) implementation by Katherine Crowson.
A fast scheduler which often generates good outputs in 20-30 steps.
[[autodoc]] EulerDiscreteScheduler
#### Euler Ancestral scheduler
Ancestral sampling with Euler method steps. Based on the original [k-diffusion](https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L72) implementation by Katherine Crowson.
A fast scheduler which often generates good outputs in 20-30 steps.
[[autodoc]] EulerAncestralDiscreteScheduler
#### VQDiffusionScheduler
Original paper can be found [here](https://arxiv.org/abs/2111.14822)
[[autodoc]] VQDiffusionScheduler
#### RePaint scheduler
DDPM-based inpainting scheduler for unsupervised inpainting with extreme masks.
Intended for use with [`RePaintPipeline`].
Based on the paper [RePaint: Inpainting using Denoising Diffusion Probabilistic Models](https://arxiv.org/abs/2201.09865)
and the original implementation by Andreas Lugmayr et al.: [andreas128/RePaint](https://github.com/andreas128/RePaint).
[[autodoc]] RePaintScheduler

View File

@@ -1,291 +0,0 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# How to contribute to Diffusers 🧨
We ❤️ contributions from the open-source community! Everyone is welcome, and all types of participation, not just code, are valued and appreciated. Answering questions, helping others, reaching out, and improving the documentation are all immensely valuable to the community, so don't be afraid to get involved if you're up for it!
It also helps us if you spread the word: reference the library from blog posts
on the awesome projects it made possible, shout out on Twitter every time it has
helped you, or simply star the repo to say "thank you".
We encourage everyone to start by saying 👋 in our public Discord channel. We discuss the hottest trends about diffusion models, ask questions, show-off personal projects, help each other with contributions, or just hang out ☕. <a href="https://discord.gg/G7tWnz98XR"><img alt="Join us on Discord" src="https://img.shields.io/discord/823813159592001537?color=5865F2&logo=discord&logoColor=white"></a>
Whichever way you choose to contribute, we strive to be part of an open, welcoming and kind community. Please, read our [code of conduct](https://github.com/huggingface/diffusers/blob/main/CODE_OF_CONDUCT.md) and be mindful to respect it during your interactions.
## Overview
You can contribute in so many ways! Just to name a few:
* Fixing outstanding issues with the existing code.
* Implementing [new diffusion pipelines](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines#contribution), [new schedulers](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers) or [new models](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models).
* [Contributing to the examples](https://github.com/huggingface/diffusers/tree/main/examples).
* [Contributing to the documentation](https://github.com/huggingface/diffusers/tree/main/docs/source).
* Submitting issues related to bugs or desired new features.
*All are equally valuable to the community.*
### Browse GitHub issues for suggestions
If you need inspiration, you can look out for [issues](https://github.com/huggingface/diffusers/issues) you'd like to tackle to contribute to the library. There are a few filters that can be helpful:
- See [Good first issues](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) for general opportunities to contribute and getting started with the codebase.
- See [New pipeline/model](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+pipeline%2Fmodel%22) to contribute exciting new diffusion models or diffusion pipelines.
- See [New scheduler](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+scheduler%22) to work on new samplers and schedulers.
## Submitting a new issue or feature request
Do your best to follow these guidelines when submitting an issue or a feature
request. It will make it easier for us to come back to you quickly and with good
feedback.
### Did you find a bug?
The 🧨 Diffusers library is robust and reliable thanks to the users who notify us of
the problems they encounter. So thank you for reporting an issue.
First, we would really appreciate it if you could **make sure the bug was not
already reported** (use the search bar on GitHub under Issues).
### Do you want to implement a new diffusion pipeline / diffusion model?
Awesome! Please provide the following information:
* Short description of the diffusion pipeline and link to the paper;
* Link to the implementation if it is open-source;
* Link to the model weights if they are available.
If you are willing to contribute the model yourself, let us know so we can best
guide you.
### Do you want a new feature (that is not a model)?
A world-class feature request addresses the following points:
1. Motivation first:
* Is it related to a problem/frustration with the library? If so, please explain
why. Providing a code snippet that demonstrates the problem is best.
* Is it related to something you would need for a project? We'd love to hear
about it!
* Is it something you worked on and think could benefit the community?
Awesome! Tell us what problem it solved for you.
2. Write a *full paragraph* describing the feature;
3. Provide a **code snippet** that demonstrates its future use;
4. In case this is related to a paper, please attach a link;
5. Attach any additional information (drawings, screenshots, etc.) you think may help.
If your issue is well written we're already 80% of the way there by the time you
post it.
## Start contributing! (Pull Requests)
Before writing code, we strongly advise you to search through the existing PRs or
issues to make sure that nobody is already working on the same thing. If you are
unsure, it is always a good idea to open an issue to get some feedback.
You will need basic `git` proficiency to be able to contribute to
🧨 Diffusers. `git` is not the easiest tool to use but it has the greatest
manual. Type `git --help` in a shell and enjoy. If you prefer books, [Pro
Git](https://git-scm.com/book/en/v2) is a very good reference.
Follow these steps to start contributing ([supported Python versions](https://github.com/huggingface/diffusers/blob/main/setup.py#L212)):
1. Fork the [repository](https://github.com/huggingface/diffusers) by
clicking on the 'Fork' button on the repository's page. This creates a copy of the code
under your GitHub user account.
2. Clone your fork to your local disk, and add the base repository as a remote:
```bash
$ git clone git@github.com:<your Github handle>/diffusers.git
$ cd diffusers
$ git remote add upstream https://github.com/huggingface/diffusers.git
```
3. Create a new branch to hold your development changes:
```bash
$ git checkout -b a-descriptive-name-for-my-changes
```
**Do not** work on the `main` branch.
4. Set up a development environment by running the following command in a virtual environment:
```bash
$ pip install -e ".[dev]"
```
(If Diffusers was already installed in the virtual environment, remove
it with `pip uninstall diffusers` before reinstalling it in editable
mode with the `-e` flag.)
To run the full test suite, you might need the additional dependencies `transformers` and `datasets`, which require separate source installs:
```bash
$ git clone https://github.com/huggingface/transformers
$ cd transformers
$ pip install -e .
```
```bash
$ git clone https://github.com/huggingface/datasets
$ cd datasets
$ pip install -e .
```
If you have already cloned that repo, you might need to `git pull` to get the most recent changes in the `datasets`
library.
5. Develop the features on your branch.
As you work on the features, you should make sure that the test suite
passes. You should run the tests impacted by your changes like this:
```bash
$ pytest tests/<TEST_TO_RUN>.py
```
You can also run the full suite with the following command, but it takes
a beefy machine to produce a result in a decent amount of time now that
Diffusers has grown a lot. Here is the command for it:
```bash
$ make test
```
For more information about tests, check out the
[dedicated documentation](https://huggingface.co/docs/diffusers/testing)
🧨 Diffusers relies on `black` and `isort` to format its source code
consistently. After you make changes, apply automatic style corrections and code verifications
that can't be automated in one go with:
```bash
$ make style
```
🧨 Diffusers also uses `flake8` and a few custom scripts to check for coding mistakes. Quality
control runs in CI, however you can also run the same checks with:
```bash
$ make quality
```
Once you're happy with your changes, add changed files using `git add` and
make a commit with `git commit` to record your changes locally:
```bash
$ git add modified_file.py
$ git commit
```
It is a good idea to sync your copy of the code with the original
repository regularly. This way you can quickly account for changes:
```bash
$ git fetch upstream
$ git rebase upstream/main
```
Push the changes to your account using:
```bash
$ git push -u origin a-descriptive-name-for-my-changes
```
6. Once you are satisfied (**and the checklist below is happy too**), go to the
webpage of your fork on GitHub. Click on 'Pull request' to send your changes
to the project maintainers for review.
7. It's ok if maintainers ask you for changes. It happens to core contributors
too! So that everyone can see the changes in the pull request, work in your local
branch and push the changes to your fork. They will automatically appear in
the pull request.
### Checklist
1. The title of your pull request should be a summary of its contribution;
2. If your pull request addresses an issue, please mention the issue number in
the pull request description to make sure they are linked (and people
consulting the issue know you are working on it);
3. To indicate a work in progress please prefix the title with `[WIP]`. These
are useful to avoid duplicated work, and to differentiate it from PRs ready
to be merged;
4. Make sure existing tests pass;
5. Add high-coverage tests. No quality testing = no merge.
- If you are adding new `@slow` tests, make sure they pass using
`RUN_SLOW=1 python -m pytest tests/test_my_new_model.py`.
- If you are adding a new tokenizer, write tests, and make sure
`RUN_SLOW=1 python -m pytest tests/test_tokenization_{your_model_name}.py` passes.
CircleCI does not run the slow tests, but GitHub actions does every night!
6. All public methods must have informative docstrings that work nicely with Sphinx. See [pipeline_latent_diffusion.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion.py) for an example.
7. Due to the rapidly growing repository, it is important to make sure that no files that would significantly weigh down the repository are added. This includes images, videos and other non-text files. We prefer to place such files in a hf.co hosted `dataset`, such as the ones in [`hf-internal-testing`](https://huggingface.co/hf-internal-testing) or [huggingface/documentation-images](https://huggingface.co/datasets/huggingface/documentation-images), and reference them from there.
If you are an external contributor, feel free to add the images to your PR and ask a Hugging Face member to migrate them to this dataset.
### Tests
An extensive test suite is included to test the library behavior and several examples. Library tests can be found in
the [tests folder](https://github.com/huggingface/diffusers/tree/main/tests).
We like `pytest` and `pytest-xdist` because it's faster. From the root of the
repository, here's how to run tests with `pytest` for the library:
```bash
$ python -m pytest -n auto --dist=loadfile -s -v ./tests/
```
In fact, that's how `make test` is implemented!
You can specify a smaller set of tests in order to test only the feature
you're working on.
By default, slow tests are skipped. Set the `RUN_SLOW` environment variable to
`yes` to run them. This will download many gigabytes of models — make sure you
have enough disk space and a good Internet connection, or a lot of patience!
```bash
$ RUN_SLOW=yes python -m pytest -n auto --dist=loadfile -s -v ./tests/
```
`unittest` is fully supported, here's how to run tests with it:
```bash
$ python -m unittest discover -s tests -t . -v
$ python -m unittest discover -s examples -t examples -v
```
### Syncing forked main with upstream (HuggingFace) main
To avoid pinging the upstream repository which adds reference notes to each upstream PR and sends unnecessary notifications to the developers involved in these PRs,
when syncing the main branch of a forked repository, please, follow these steps:
1. When possible, avoid syncing with the upstream using a branch and PR on the forked repository. Instead, merge directly into the forked main.
2. If a PR is absolutely necessary, use the following steps after checking out your branch:
```
$ git checkout -b your-branch-for-syncing
$ git pull --squash --no-commit upstream main
$ git commit -m '<your message without GitHub references>'
$ git push --set-upstream origin your-branch-for-syncing
```
### Style guide
For documentation strings, 🧨 Diffusers follows the [google style](https://google.github.io/styleguide/pyguide.html).
**This guide was heavily inspired by the awesome [scikit-learn guide to contributing](https://github.com/scikit-learn/scikit-learn/blob/main/CONTRIBUTING.md).**

View File

@@ -1,17 +0,0 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Philosophy
- Readability and clarity are preferred over highly optimized code. A strong importance is put on providing readable, intuitive and elementary code design. *E.g.*, the provided [schedulers](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers) are separated from the provided [models](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models) and use well-commented code that can be read alongside the original paper.
- Diffusers is **modality independent** and focuses on providing pretrained models and tools to build systems that generate **continuous outputs**, *e.g.* vision and audio. This is one of the guiding goals even if the initial pipelines are devoted to vision tasks.
- Diffusion models and schedulers are provided as concise, elementary building blocks. In contrast, diffusion pipelines are a collection of end-to-end diffusion systems that can be used out-of-the-box, should stay as close as possible to their original implementations and can include components of other libraries, such as text encoders. Examples of diffusion pipelines are [Glide](https://github.com/openai/glide-text2im), [Latent Diffusion](https://github.com/CompVis/latent-diffusion) and [Stable Diffusion](https://github.com/compvis/stable-diffusion).

docs/source/en/_toctree.yml Normal file
View File

@@ -0,0 +1,434 @@
- sections:
- local: index
title: 🧨 Diffusers
- local: quicktour
title: Quicktour
- local: stable_diffusion
title: Effective and efficient diffusion
- local: installation
title: Installation
title: Get started
- sections:
- local: tutorials/tutorial_overview
title: Overview
- local: using-diffusers/write_own_pipeline
title: Understanding pipelines, models and schedulers
- local: tutorials/autopipeline
title: AutoPipeline
- local: tutorials/basic_training
title: Train a diffusion model
- local: tutorials/using_peft_for_inference
title: Inference with PEFT
- local: tutorials/fast_diffusion
title: Accelerate inference of text-to-image diffusion models
title: Tutorials
- sections:
- sections:
- local: using-diffusers/loading_overview
title: Overview
- local: using-diffusers/loading
title: Load pipelines, models, and schedulers
- local: using-diffusers/schedulers
title: Load and compare different schedulers
- local: using-diffusers/custom_pipeline_overview
title: Load community pipelines and components
- local: using-diffusers/using_safetensors
title: Load safetensors
- local: using-diffusers/other-formats
title: Load different Stable Diffusion formats
- local: using-diffusers/loading_adapters
title: Load adapters
- local: using-diffusers/push_to_hub
title: Push files to the Hub
title: Loading & Hub
- sections:
- local: using-diffusers/pipeline_overview
title: Overview
- local: using-diffusers/unconditional_image_generation
title: Unconditional image generation
- local: using-diffusers/conditional_image_generation
title: Text-to-image
- local: using-diffusers/img2img
title: Image-to-image
- local: using-diffusers/inpaint
title: Inpainting
- local: using-diffusers/depth2img
title: Depth-to-image
title: Tasks
- sections:
- local: using-diffusers/textual_inversion_inference
title: Textual inversion
- local: training/distributed_inference
title: Distributed inference with multiple GPUs
- local: using-diffusers/reusing_seeds
title: Improve image quality with deterministic generation
- local: using-diffusers/control_brightness
title: Control image brightness
- local: using-diffusers/weighted_prompts
title: Prompt weighting
- local: using-diffusers/freeu
title: Improve generation quality with FreeU
title: Techniques
- sections:
- local: using-diffusers/pipeline_overview
title: Overview
- local: using-diffusers/sdxl
title: Stable Diffusion XL
- local: using-diffusers/sdxl_turbo
title: SDXL Turbo
- local: using-diffusers/kandinsky
title: Kandinsky
- local: using-diffusers/controlnet
title: ControlNet
- local: using-diffusers/shap-e
title: Shap-E
- local: using-diffusers/diffedit
title: DiffEdit
- local: using-diffusers/distilled_sd
title: Distilled Stable Diffusion inference
- local: using-diffusers/callback
title: Pipeline callbacks
- local: using-diffusers/reproducibility
title: Create reproducible pipelines
- local: using-diffusers/custom_pipeline_examples
title: Community pipelines
- local: using-diffusers/contribute_pipeline
title: Contribute a community pipeline
- local: using-diffusers/inference_with_lcm_lora
title: Latent Consistency Model-LoRA
- local: using-diffusers/inference_with_lcm
title: Latent Consistency Model
- local: using-diffusers/svd
title: Stable Video Diffusion
title: Specific pipeline examples
- sections:
- local: training/overview
title: Overview
- local: training/create_dataset
title: Create a dataset for training
- local: training/adapt_a_model
title: Adapt a model to a new task
- sections:
- local: training/unconditional_training
title: Unconditional image generation
- local: training/text2image
title: Text-to-image
- local: training/sdxl
title: Stable Diffusion XL
- local: training/kandinsky
title: Kandinsky 2.2
- local: training/wuerstchen
title: Wuerstchen
- local: training/controlnet
title: ControlNet
- local: training/t2i_adapters
title: T2I-Adapters
- local: training/instructpix2pix
title: InstructPix2Pix
title: Models
- sections:
- local: training/text_inversion
title: Textual Inversion
- local: training/dreambooth
title: DreamBooth
- local: training/lora
title: LoRA
- local: training/custom_diffusion
title: Custom Diffusion
- local: training/lcm_distill
title: Latent Consistency Distillation
- local: training/ddpo
title: Reinforcement learning training with DDPO
title: Methods
title: Training
- sections:
- local: using-diffusers/other-modalities
title: Other Modalities
title: Taking Diffusers Beyond Images
title: Using Diffusers
- sections:
- local: optimization/opt_overview
title: Overview
- sections:
- local: optimization/fp16
title: Speed up inference
- local: optimization/memory
title: Reduce memory usage
- local: optimization/torch2.0
title: PyTorch 2.0
- local: optimization/xformers
title: xFormers
- local: optimization/tome
title: Token merging
- local: optimization/deepcache
title: DeepCache
title: General optimizations
- sections:
- local: using-diffusers/stable_diffusion_jax_how_to
title: JAX/Flax
- local: optimization/onnx
title: ONNX
- local: optimization/open_vino
title: OpenVINO
- local: optimization/coreml
title: Core ML
title: Optimized model types
- sections:
- local: optimization/mps
title: Metal Performance Shaders (MPS)
- local: optimization/habana
title: Habana Gaudi
title: Optimized hardware
title: Optimization
- sections:
- local: conceptual/philosophy
title: Philosophy
- local: using-diffusers/controlling_generation
title: Controlled generation
- local: conceptual/contribution
title: How to contribute?
- local: conceptual/ethical_guidelines
title: Diffusers' Ethical Guidelines
- local: conceptual/evaluation
title: Evaluating Diffusion Models
title: Conceptual Guides
- sections:
- sections:
- local: api/configuration
title: Configuration
- local: api/logging
title: Logging
- local: api/outputs
title: Outputs
title: Main Classes
- sections:
- local: api/loaders/ip_adapter
title: IP-Adapter
- local: api/loaders/lora
title: LoRA
- local: api/loaders/single_file
title: Single files
- local: api/loaders/textual_inversion
title: Textual Inversion
- local: api/loaders/unet
title: UNet
- local: api/loaders/peft
title: PEFT
title: Loaders
- sections:
- local: api/models/overview
title: Overview
- local: api/models/unet
title: UNet1DModel
- local: api/models/unet2d
title: UNet2DModel
- local: api/models/unet2d-cond
title: UNet2DConditionModel
- local: api/models/unet3d-cond
title: UNet3DConditionModel
- local: api/models/unet-motion
title: UNetMotionModel
- local: api/models/uvit2d
title: UViT2DModel
- local: api/models/vq
title: VQModel
- local: api/models/autoencoderkl
title: AutoencoderKL
- local: api/models/asymmetricautoencoderkl
title: AsymmetricAutoencoderKL
- local: api/models/autoencoder_tiny
title: Tiny AutoEncoder
- local: api/models/consistency_decoder_vae
title: ConsistencyDecoderVAE
- local: api/models/transformer2d
title: Transformer2D
- local: api/models/transformer_temporal
title: Transformer Temporal
- local: api/models/prior_transformer
title: Prior Transformer
- local: api/models/controlnet
title: ControlNet
title: Models
- sections:
- local: api/pipelines/overview
title: Overview
- local: api/pipelines/amused
title: aMUSEd
- local: api/pipelines/animatediff
title: AnimateDiff
- local: api/pipelines/attend_and_excite
title: Attend-and-Excite
- local: api/pipelines/audioldm
title: AudioLDM
- local: api/pipelines/audioldm2
title: AudioLDM 2
- local: api/pipelines/auto_pipeline
title: AutoPipeline
- local: api/pipelines/blip_diffusion
title: BLIP-Diffusion
- local: api/pipelines/consistency_models
title: Consistency Models
- local: api/pipelines/controlnet
title: ControlNet
- local: api/pipelines/controlnet_sdxl
title: ControlNet with Stable Diffusion XL
- local: api/pipelines/dance_diffusion
title: Dance Diffusion
- local: api/pipelines/ddim
title: DDIM
- local: api/pipelines/ddpm
title: DDPM
- local: api/pipelines/deepfloyd_if
title: DeepFloyd IF
- local: api/pipelines/diffedit
title: DiffEdit
- local: api/pipelines/dit
title: DiT
- local: api/pipelines/i2vgenxl
title: I2VGen-XL
- local: api/pipelines/pix2pix
title: InstructPix2Pix
- local: api/pipelines/kandinsky
title: Kandinsky 2.1
- local: api/pipelines/kandinsky_v22
title: Kandinsky 2.2
- local: api/pipelines/kandinsky3
title: Kandinsky 3
- local: api/pipelines/latent_consistency_models
title: Latent Consistency Models
- local: api/pipelines/latent_diffusion
title: Latent Diffusion
- local: api/pipelines/panorama
title: MultiDiffusion
- local: api/pipelines/musicldm
title: MusicLDM
- local: api/pipelines/paint_by_example
title: Paint by Example
- local: api/pipelines/pia
title: Personalized Image Animator (PIA)
- local: api/pipelines/pixart
title: PixArt-α
- local: api/pipelines/self_attention_guidance
title: Self-Attention Guidance
- local: api/pipelines/semantic_stable_diffusion
title: Semantic Guidance
- local: api/pipelines/shap_e
title: Shap-E
- sections:
- local: api/pipelines/stable_diffusion/overview
title: Overview
- local: api/pipelines/stable_diffusion/text2img
title: Text-to-image
- local: api/pipelines/stable_diffusion/img2img
title: Image-to-image
- local: api/pipelines/stable_diffusion/inpaint
title: Inpainting
- local: api/pipelines/stable_diffusion/depth2img
title: Depth-to-image
- local: api/pipelines/stable_diffusion/image_variation
title: Image variation
- local: api/pipelines/stable_diffusion/stable_diffusion_safe
title: Safe Stable Diffusion
- local: api/pipelines/stable_diffusion/stable_diffusion_2
title: Stable Diffusion 2
- local: api/pipelines/stable_diffusion/stable_diffusion_xl
title: Stable Diffusion XL
- local: api/pipelines/stable_diffusion/sdxl_turbo
title: SDXL Turbo
- local: api/pipelines/stable_diffusion/latent_upscale
title: Latent upscaler
- local: api/pipelines/stable_diffusion/upscale
title: Super-resolution
- local: api/pipelines/stable_diffusion/k_diffusion
title: K-Diffusion
- local: api/pipelines/stable_diffusion/ldm3d_diffusion
title: LDM3D Text-to-(RGB, Depth), Text-to-(RGB-pano, Depth-pano), LDM3D Upscaler
- local: api/pipelines/stable_diffusion/adapter
title: Stable Diffusion T2I-Adapter
- local: api/pipelines/stable_diffusion/gligen
title: GLIGEN (Grounded Language-to-Image Generation)
title: Stable Diffusion
- local: api/pipelines/stable_unclip
title: Stable unCLIP
- local: api/pipelines/text_to_video
title: Text-to-video
- local: api/pipelines/text_to_video_zero
title: Text2Video-Zero
- local: api/pipelines/unclip
title: unCLIP
- local: api/pipelines/unidiffuser
title: UniDiffuser
- local: api/pipelines/value_guided_sampling
title: Value-guided sampling
- local: api/pipelines/wuerstchen
title: Wuerstchen
title: Pipelines
- sections:
- local: api/schedulers/overview
title: Overview
- local: api/schedulers/cm_stochastic_iterative
title: CMStochasticIterativeScheduler
- local: api/schedulers/consistency_decoder
title: ConsistencyDecoderScheduler
- local: api/schedulers/ddim_inverse
title: DDIMInverseScheduler
- local: api/schedulers/ddim
title: DDIMScheduler
- local: api/schedulers/ddpm
title: DDPMScheduler
- local: api/schedulers/deis
title: DEISMultistepScheduler
- local: api/schedulers/multistep_dpm_solver_inverse
title: DPMSolverMultistepInverse
- local: api/schedulers/multistep_dpm_solver
title: DPMSolverMultistepScheduler
- local: api/schedulers/dpm_sde
title: DPMSolverSDEScheduler
- local: api/schedulers/singlestep_dpm_solver
title: DPMSolverSinglestepScheduler
- local: api/schedulers/euler_ancestral
title: EulerAncestralDiscreteScheduler
- local: api/schedulers/euler
title: EulerDiscreteScheduler
- local: api/schedulers/heun
title: HeunDiscreteScheduler
- local: api/schedulers/ipndm
title: IPNDMScheduler
- local: api/schedulers/stochastic_karras_ve
title: KarrasVeScheduler
- local: api/schedulers/dpm_discrete_ancestral
title: KDPM2AncestralDiscreteScheduler
- local: api/schedulers/dpm_discrete
title: KDPM2DiscreteScheduler
- local: api/schedulers/lcm
title: LCMScheduler
- local: api/schedulers/lms_discrete
title: LMSDiscreteScheduler
- local: api/schedulers/pndm
title: PNDMScheduler
- local: api/schedulers/repaint
title: RePaintScheduler
- local: api/schedulers/score_sde_ve
title: ScoreSdeVeScheduler
- local: api/schedulers/score_sde_vp
title: ScoreSdeVpScheduler
- local: api/schedulers/unipc
title: UniPCMultistepScheduler
- local: api/schedulers/vq_diffusion
title: VQDiffusionScheduler
title: Schedulers
- sections:
- local: api/internal_classes_overview
title: Overview
- local: api/attnprocessor
title: Attention Processor
- local: api/activations
title: Custom activation functions
- local: api/normalization
title: Custom normalization layers
- local: api/utilities
title: Utilities
- local: api/image_processor
title: VAE Image Processor
title: Internal classes
title: API

View File

@@ -1,4 +1,4 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
@@ -10,12 +10,18 @@ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express o
specific language governing permissions and limitations under the License.
-->
# Activation functions
Customized activation functions for supporting various models in 🤗 Diffusers.
# Configuration
## GELU
The handling of configurations in Diffusers is with the `ConfigMixin` class.
[[autodoc]] models.activations.GELU
[[autodoc]] ConfigMixin
## GEGLU
Under further construction 🚧, open a [PR](https://github.com/huggingface/diffusers/compare) if you want to contribute!
[[autodoc]] models.activations.GEGLU
## ApproximateGELU
[[autodoc]] models.activations.ApproximateGELU

View File

@@ -0,0 +1,60 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Attention Processor
An attention processor is a class for applying different types of attention mechanisms.
## AttnProcessor
[[autodoc]] models.attention_processor.AttnProcessor
## AttnProcessor2_0
[[autodoc]] models.attention_processor.AttnProcessor2_0
## FusedAttnProcessor2_0
[[autodoc]] models.attention_processor.FusedAttnProcessor2_0
## LoRAAttnProcessor
[[autodoc]] models.attention_processor.LoRAAttnProcessor
## LoRAAttnProcessor2_0
[[autodoc]] models.attention_processor.LoRAAttnProcessor2_0
## CustomDiffusionAttnProcessor
[[autodoc]] models.attention_processor.CustomDiffusionAttnProcessor
## CustomDiffusionAttnProcessor2_0
[[autodoc]] models.attention_processor.CustomDiffusionAttnProcessor2_0
## AttnAddedKVProcessor
[[autodoc]] models.attention_processor.AttnAddedKVProcessor
## AttnAddedKVProcessor2_0
[[autodoc]] models.attention_processor.AttnAddedKVProcessor2_0
## LoRAAttnAddedKVProcessor
[[autodoc]] models.attention_processor.LoRAAttnAddedKVProcessor
## XFormersAttnProcessor
[[autodoc]] models.attention_processor.XFormersAttnProcessor
## LoRAXFormersAttnProcessor
[[autodoc]] models.attention_processor.LoRAXFormersAttnProcessor
## CustomDiffusionXFormersAttnProcessor
[[autodoc]] models.attention_processor.CustomDiffusionXFormersAttnProcessor
## SlicedAttnProcessor
[[autodoc]] models.attention_processor.SlicedAttnProcessor
## SlicedAttnAddedKVProcessor
[[autodoc]] models.attention_processor.SlicedAttnAddedKVProcessor

View File

@@ -1,4 +1,4 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
@@ -12,12 +12,19 @@ specific language governing permissions and limitations under the License.
# Configuration
In Diffusers, schedulers of type [`schedulers.scheduling_utils.SchedulerMixin`], and models of type [`ModelMixin`] inherit from [`ConfigMixin`] which conveniently takes care of storing all parameters that are
passed to the respective `__init__` methods in a JSON-configuration file.
Schedulers from [`~schedulers.scheduling_utils.SchedulerMixin`] and models from [`ModelMixin`] inherit from [`ConfigMixin`] which stores all the parameters that are passed to their respective `__init__` methods in a JSON-configuration file.
TODO(PVP) - add example and better info here
<Tip>
To use private or [gated](https://huggingface.co/docs/hub/models-gated#gated-models) models, log-in with `huggingface-cli login`.
</Tip>
## ConfigMixin
[[autodoc]] ConfigMixin
- load_config
- from_config
- save_config
- to_json_file
- to_json_string

View File

@@ -0,0 +1,27 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# VAE Image Processor
The [`VaeImageProcessor`] provides a unified API for [`StableDiffusionPipeline`]s to prepare image inputs for VAE encoding and post-processing outputs once they're decoded. This includes transformations such as resizing, normalization, and conversion between PIL Image, PyTorch, and NumPy arrays.
All pipelines with [`VaeImageProcessor`] accept PIL Image, PyTorch tensor, or NumPy arrays as image inputs and return outputs based on the `output_type` argument specified by the user. You can pass encoded image latents directly to the pipeline and return latents from the pipeline as a specific output with the `output_type` argument (for example `output_type="latent"`). This allows you to take the generated latents from one pipeline and pass them to another pipeline as input without leaving the latent space. It also makes it much easier to use multiple pipelines together by passing PyTorch tensors directly between different pipelines.
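For example, here is a minimal sketch of requesting latents from a pipeline and post-processing them yourself with the pipeline's [`VaeImageProcessor`] (the checkpoint name below is only an example):
```py
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# return the raw latents instead of decoded PIL images
latents = pipe("a photo of an astronaut", output_type="latent").images

# decode manually and let the VaeImageProcessor handle denormalization and PIL conversion
with torch.no_grad():
    decoded = pipe.vae.decode(latents / pipe.vae.config.scaling_factor, return_dict=False)[0]
image = pipe.image_processor.postprocess(decoded, output_type="pil")[0]
```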
## VaeImageProcessor
[[autodoc]] image_processor.VaeImageProcessor
## VaeImageProcessorLDM3D
The [`VaeImageProcessorLDM3D`] accepts RGB and depth inputs and returns RGB and depth outputs.
[[autodoc]] image_processor.VaeImageProcessorLDM3D

View File

@@ -0,0 +1,15 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Overview
The APIs in this section are more experimental and prone to breaking changes. Most of them are used internally for development, but they may also be useful to you if you're interested in building a diffusion model with some custom parts or if you're interested in some of our helper utilities for working with 🤗 Diffusers.

View File

@@ -0,0 +1,25 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# IP-Adapter
[IP-Adapter](https://hf.co/papers/2308.06721) is a lightweight adapter that enables prompting a diffusion model with an image. This method decouples the cross-attention layers of the image and text features. The image features are generated from an image encoder. Files generated from IP-Adapter are only ~100 MB.
<Tip>
Learn how to load an IP-Adapter checkpoint and image in the [IP-Adapter](../../using-diffusers/loading_adapters#ip-adapter) loading guide.
</Tip>
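A rough sketch of the typical workflow (the repository, subfolder, and weight names below follow the official IP-Adapter checkpoints, and the reference image path is a placeholder):
```py
import torch
from diffusers import AutoPipelineForText2Image
from diffusers.utils import load_image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
pipeline.set_ip_adapter_scale(0.6)

# any reference image works here; replace the path with your own
ip_image = load_image("path/to/reference.png")
image = pipeline(prompt="best quality, high quality", ip_adapter_image=ip_image).images[0]
```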
## IPAdapterMixin
[[autodoc]] loaders.ip_adapter.IPAdapterMixin

View File

@@ -0,0 +1,32 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# LoRA
LoRA is a fast and lightweight training method that inserts and trains a significantly smaller number of parameters instead of all the model parameters. This produces a smaller file (~100 MB) and makes it easier to quickly train a model to learn a new concept. LoRA weights are typically loaded into the UNet, text encoder or both. There are two classes for loading LoRA weights:
- [`LoraLoaderMixin`] provides functions for loading and unloading, fusing and unfusing, enabling and disabling, and more functions for managing LoRA weights. This class can be used with any model.
- [`StableDiffusionXLLoraLoaderMixin`] is a [Stable Diffusion (SDXL)](../../api/pipelines/stable_diffusion/stable_diffusion_xl) version of the [`LoraLoaderMixin`] class for loading and saving LoRA weights. It can only be used with the SDXL model.
<Tip>
To learn more about how to load LoRA weights, see the [LoRA](../../using-diffusers/loading_adapters#lora) loading guide.
</Tip>
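A minimal sketch of loading and unloading LoRA weights in an SDXL pipeline (the LoRA repository and weight name are examples; substitute your own):
```py
import torch
from diffusers import StableDiffusionXLPipeline

pipeline = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# loads the LoRA into the UNet and text encoders
pipeline.load_lora_weights("ostris/super-cereal-sdxl-lora", weight_name="cereal_box_sdxl_v1.safetensors")
image = pipeline("bears, pizza bites").images[0]
# removes the LoRA weights again
pipeline.unload_lora_weights()
```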
## LoraLoaderMixin
[[autodoc]] loaders.lora.LoraLoaderMixin
## StableDiffusionXLLoraLoaderMixin
[[autodoc]] loaders.lora.StableDiffusionXLLoraLoaderMixin

View File

@@ -0,0 +1,25 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# PEFT
Diffusers supports loading adapters such as [LoRA](../../using-diffusers/loading_adapters) through the [PEFT](https://huggingface.co/docs/peft/index) library with the [`~loaders.peft.PeftAdapterMixin`] class. This allows modeling classes in Diffusers like [`UNet2DConditionModel`] to load an adapter.
<Tip>
Refer to the [Inference with PEFT](../../tutorials/using_peft_for_inference.md) tutorial for an overview of how to use PEFT in Diffusers for inference.
</Tip>
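A minimal sketch of what the PEFT integration enables, assuming the PEFT library is installed (the LoRA repositories and adapter names below are examples):
```py
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# each LoRA is registered as a named adapter on the underlying models
pipeline.load_lora_weights("ostris/super-cereal-sdxl-lora", adapter_name="cereal")
pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel")
# combine both adapters with individual weights
pipeline.set_adapters(["cereal", "pixel"], adapter_weights=[1.0, 0.5])
```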
## PeftAdapterMixin
[[autodoc]] loaders.peft.PeftAdapterMixin

View File

@@ -0,0 +1,37 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Single files
Diffusers supports loading pretrained pipeline (or model) weights stored in a single file, such as a `ckpt` or `safetensors` file. These single file types are typically produced from community trained models. There are three classes for loading single file weights:
- [`FromSingleFileMixin`] supports loading pretrained pipeline weights stored in a single file, which can either be a `ckpt` or `safetensors` file.
- [`FromOriginalVAEMixin`] supports loading a pretrained [`AutoencoderKL`] from pretrained VAE weights stored in a single file, which can either be a `ckpt` or `safetensors` file.
- [`FromOriginalControlnetMixin`] supports loading pretrained ControlNet weights stored in a single file, which can either be a `ckpt` or `safetensors` file.
<Tip>
To learn more about how to load single file weights, see the [Load different Stable Diffusion formats](../../using-diffusers/other-formats) loading guide.
</Tip>
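For instance, a sketch of loading a full pipeline from a single `safetensors` file (the URL points at the SDXL base checkpoint and is only an example; a local path works too):
```py
from diffusers import StableDiffusionXLPipeline

pipeline = StableDiffusionXLPipeline.from_single_file(
    "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/sd_xl_base_1.0.safetensors"
)
```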
## FromSingleFileMixin
[[autodoc]] loaders.single_file.FromSingleFileMixin
## FromOriginalVAEMixin
[[autodoc]] loaders.autoencoder.FromOriginalVAEMixin
## FromOriginalControlnetMixin
[[autodoc]] loaders.controlnet.FromOriginalControlNetMixin

View File

@@ -0,0 +1,27 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Textual Inversion
Textual Inversion is a training method for personalizing models by learning new text embeddings from a few example images. The file produced from training is extremely small (a few KBs) and the new embeddings can be loaded into the text encoder.
[`TextualInversionLoaderMixin`] provides a function for loading Textual Inversion embeddings from Diffusers and Automatic1111 into the text encoder and loading a special token to activate the embeddings.
<Tip>
To learn more about how to load Textual Inversion embeddings, see the [Textual Inversion](../../using-diffusers/loading_adapters#textual-inversion) loading guide.
</Tip>
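A minimal sketch, assuming the `sd-concepts-library/cat-toy` embedding (whose activation token is `<cat-toy>`) as an example:
```py
import torch
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# loads the embedding into the text encoder and registers its activation token
pipeline.load_textual_inversion("sd-concepts-library/cat-toy")
image = pipeline("a <cat-toy> sitting on a shelf").images[0]
```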
## TextualInversionLoaderMixin
[[autodoc]] loaders.textual_inversion.TextualInversionLoaderMixin

View File

@@ -0,0 +1,27 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# UNet
Some training methods - like LoRA and Custom Diffusion - typically target the UNet's attention layers, but these training methods can also target other non-attention layers. Instead of training all of a model's parameters, only a subset of the parameters are trained, which is faster and more efficient. This class is useful if you're *only* loading weights into a UNet. If you need to load weights into the text encoder or a text encoder and UNet, try using the [`~loaders.LoraLoaderMixin.load_lora_weights`] function instead.
The [`UNet2DConditionLoadersMixin`] class provides functions for loading and saving weights, fusing and unfusing LoRAs, disabling and enabling LoRAs, and setting and deleting adapters.
<Tip>
To learn more about how to load LoRA weights, see the [LoRA](../../using-diffusers/loading_adapters#lora) loading guide.
</Tip>
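A minimal sketch of loading LoRA weights into the UNet only (the path and weight name are placeholders for your own files):
```py
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# only the UNet receives the LoRA weights here
pipeline.unet.load_attn_procs("path/to/lora", weight_name="pytorch_lora_weights.safetensors")
```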
## UNet2DConditionLoadersMixin
[[autodoc]] loaders.unet.UNet2DConditionLoadersMixin

View File

@@ -0,0 +1,96 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Logging
🤗 Diffusers has a centralized logging system to easily manage the verbosity of the library. The default verbosity is set to `WARNING`.
To change the verbosity level, use one of the direct setters. For instance, to change the verbosity to the `INFO` level:
```python
import diffusers
diffusers.logging.set_verbosity_info()
```
You can also use the environment variable `DIFFUSERS_VERBOSITY` to override the default verbosity. You can set it
to one of the following: `debug`, `info`, `warning`, `error`, `critical`. For example:
```bash
DIFFUSERS_VERBOSITY=error ./myprogram.py
```
Additionally, some `warnings` can be disabled by setting the environment variable
`DIFFUSERS_NO_ADVISORY_WARNINGS` to a true value, like `1`. This disables any warning logged by
[`logger.warning_advice`]. For example:
```bash
DIFFUSERS_NO_ADVISORY_WARNINGS=1 ./myprogram.py
```
Here is an example of how to use the same logger as the library in your own module or script:
```python
from diffusers.utils import logging
logging.set_verbosity_info()
logger = logging.get_logger("diffusers")
logger.info("INFO")
logger.warning("WARN")
```
All methods of the logging module are documented below. The main methods are
[`logging.get_verbosity`] to get the current level of verbosity in the logger and
[`logging.set_verbosity`] to set the verbosity to the level of your choice.
In order from the least verbose to the most verbose:
| Method | Integer value | Description |
|----------------------------------------------------------:|--------------:|----------------------------------------------------:|
| `diffusers.logging.CRITICAL` or `diffusers.logging.FATAL` | 50 | only report the most critical errors |
| `diffusers.logging.ERROR` | 40 | only report errors |
| `diffusers.logging.WARNING` or `diffusers.logging.WARN` | 30 | only report errors and warnings (default) |
| `diffusers.logging.INFO` | 20 | only report errors, warnings, and basic information |
| `diffusers.logging.DEBUG` | 10 | report all information |
By default, `tqdm` progress bars are displayed during model download. [`logging.disable_progress_bar`] and [`logging.enable_progress_bar`] are used to enable or disable this behavior.
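For example, to temporarily hide the bars:
```python
from diffusers.utils import logging

logging.disable_progress_bar()  # hide tqdm bars during model downloads
# ... download or load models here ...
logging.enable_progress_bar()  # restore the default behavior
```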
## Base setters
[[autodoc]] utils.logging.set_verbosity_error
[[autodoc]] utils.logging.set_verbosity_warning
[[autodoc]] utils.logging.set_verbosity_info
[[autodoc]] utils.logging.set_verbosity_debug
## Other functions
[[autodoc]] utils.logging.get_verbosity
[[autodoc]] utils.logging.set_verbosity
[[autodoc]] utils.logging.get_logger
[[autodoc]] utils.logging.enable_default_handler
[[autodoc]] utils.logging.disable_default_handler
[[autodoc]] utils.logging.enable_explicit_format
[[autodoc]] utils.logging.reset_format
[[autodoc]] utils.logging.enable_progress_bar
[[autodoc]] utils.logging.disable_progress_bar

View File

@@ -0,0 +1,60 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# AsymmetricAutoencoderKL
Improved larger variational autoencoder (VAE) model with KL loss for the inpainting task: [Designing a Better Asymmetric VQGAN for StableDiffusion](https://arxiv.org/abs/2306.04632) by Zixin Zhu, Xuelu Feng, Dongdong Chen, Jianmin Bao, Le Wang, Yinpeng Chen, Lu Yuan, Gang Hua.
The abstract from the paper is:
*StableDiffusion is a revolutionary text-to-image generator that is causing a stir in the world of image generation and editing. Unlike traditional methods that learn a diffusion model in pixel space, StableDiffusion learns a diffusion model in the latent space via a VQGAN, ensuring both efficiency and quality. It not only supports image generation tasks, but also enables image editing for real images, such as image inpainting and local editing. However, we have observed that the vanilla VQGAN used in StableDiffusion leads to significant information loss, causing distortion artifacts even in non-edited image regions. To this end, we propose a new asymmetric VQGAN with two simple designs. Firstly, in addition to the input from the encoder, the decoder contains a conditional branch that incorporates information from task-specific priors, such as the unmasked image region in inpainting. Secondly, the decoder is much heavier than the encoder, allowing for more detailed recovery while only slightly increasing the total inference cost. The training cost of our asymmetric VQGAN is cheap, and we only need to retrain a new asymmetric decoder while keeping the vanilla VQGAN encoder and StableDiffusion unchanged. Our asymmetric VQGAN can be widely used in StableDiffusion-based inpainting and local editing methods. Extensive experiments demonstrate that it can significantly improve the inpainting and editing performance, while maintaining the original text-to-image capability. The code is available at https://github.com/buxiangzhiren/Asymmetric_VQGAN*
Evaluation results can be found in section 4.1 of the original paper.
## Available checkpoints
* [https://huggingface.co/cross-attention/asymmetric-autoencoder-kl-x-1-5](https://huggingface.co/cross-attention/asymmetric-autoencoder-kl-x-1-5)
* [https://huggingface.co/cross-attention/asymmetric-autoencoder-kl-x-2](https://huggingface.co/cross-attention/asymmetric-autoencoder-kl-x-2)
## Example Usage
```python
from diffusers import AsymmetricAutoencoderKL, StableDiffusionInpaintPipeline
from diffusers.utils import load_image, make_image_grid
prompt = "a photo of a person with beard"
img_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/celeba_hq_256.png"
mask_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/mask_256.png"
original_image = load_image(img_url).resize((512, 512))
mask_image = load_image(mask_url).resize((512, 512))
pipe = StableDiffusionInpaintPipeline.from_pretrained("runwayml/stable-diffusion-inpainting")
pipe.vae = AsymmetricAutoencoderKL.from_pretrained("cross-attention/asymmetric-autoencoder-kl-x-1-5")
pipe.to("cuda")
image = pipe(prompt=prompt, image=original_image, mask_image=mask_image).images[0]
make_image_grid([original_image, mask_image, image], rows=1, cols=3)
```
## AsymmetricAutoencoderKL
[[autodoc]] models.autoencoders.autoencoder_asym_kl.AsymmetricAutoencoderKL
## AutoencoderKLOutput
[[autodoc]] models.autoencoders.autoencoder_kl.AutoencoderKLOutput
## DecoderOutput
[[autodoc]] models.autoencoders.vae.DecoderOutput

View File

@@ -0,0 +1,57 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Tiny AutoEncoder
Tiny AutoEncoder for Stable Diffusion (TAESD) was introduced in [madebyollin/taesd](https://github.com/madebyollin/taesd) by Ollin Boer Bohan. It is a tiny distilled version of Stable Diffusion's VAE that can decode the latents in a [`StableDiffusionPipeline`] or [`StableDiffusionXLPipeline`] almost instantly.
To use it with Stable Diffusion v2.1:
```python
import torch
from diffusers import DiffusionPipeline, AutoencoderTiny
pipe = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
)
pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesd", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "slice of delicious New York-style berry cheesecake"
image = pipe(prompt, num_inference_steps=25).images[0]
image
```
To use it with Stable Diffusion XL 1.0:
```python
import torch
from diffusers import DiffusionPipeline, AutoencoderTiny
pipe = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)
pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesdxl", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "slice of delicious New York-style berry cheesecake"
image = pipe(prompt, num_inference_steps=25).images[0]
image
```
## AutoencoderTiny
[[autodoc]] AutoencoderTiny
## AutoencoderTinyOutput
[[autodoc]] models.autoencoders.autoencoder_tiny.AutoencoderTinyOutput

View File

@@ -0,0 +1,58 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# AutoencoderKL
The variational autoencoder (VAE) model with KL loss was introduced in [Auto-Encoding Variational Bayes](https://arxiv.org/abs/1312.6114v11) by Diederik P. Kingma and Max Welling. The model is used in 🤗 Diffusers to encode images into latents and to decode latent representations into images.
The abstract from the paper is:
*How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions are two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results.*
## Loading from the original format
By default the [`AutoencoderKL`] should be loaded with [`~ModelMixin.from_pretrained`], but it can also be loaded
from the original format using [`FromOriginalVAEMixin.from_single_file`] as follows:
```py
from diffusers import AutoencoderKL
url = "https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.safetensors" # can also be a local file
model = AutoencoderKL.from_single_file(url)
```
## AutoencoderKL
[[autodoc]] AutoencoderKL
- decode
- encode
- all
## AutoencoderKLOutput
[[autodoc]] models.autoencoders.autoencoder_kl.AutoencoderKLOutput
## DecoderOutput
[[autodoc]] models.autoencoders.vae.DecoderOutput
## FlaxAutoencoderKL
[[autodoc]] FlaxAutoencoderKL
## FlaxAutoencoderKLOutput
[[autodoc]] models.vae_flax.FlaxAutoencoderKLOutput
## FlaxDecoderOutput
[[autodoc]] models.vae_flax.FlaxDecoderOutput

View File

@@ -0,0 +1,18 @@
# Consistency Decoder
The consistency decoder can be used to decode the latents from the denoising UNet in the [`StableDiffusionPipeline`]. This decoder was introduced in the [DALL-E 3 technical report](https://openai.com/dall-e-3).
The original codebase can be found at [openai/consistencydecoder](https://github.com/openai/consistencydecoder).
<Tip warning={true}>
Inference currently only supports 2 iterations.
</Tip>
The pipeline could not have been contributed without the help of [madebyollin](https://github.com/madebyollin) and [mrsteyk](https://github.com/mrsteyk) from [this issue](https://github.com/openai/consistencydecoder/issues/1).
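A minimal usage sketch, assuming the `openai/consistency-decoder` checkpoint and a Stable Diffusion v1-5 pipeline as examples:
```python
import torch
from diffusers import StableDiffusionPipeline, ConsistencyDecoderVAE

vae = ConsistencyDecoderVAE.from_pretrained("openai/consistency-decoder", torch_dtype=torch.float16)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", vae=vae, torch_dtype=torch.float16
).to("cuda")
image = pipe("horse", generator=torch.manual_seed(0)).images[0]
```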
## ConsistencyDecoderVAE
[[autodoc]] ConsistencyDecoderVAE
- all
- decode

View File

@@ -0,0 +1,50 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# ControlNet
The ControlNet model was introduced in [Adding Conditional Control to Text-to-Image Diffusion Models](https://huggingface.co/papers/2302.05543) by Lvmin Zhang, Anyi Rao, Maneesh Agrawala. It provides a greater degree of control over text-to-image generation by conditioning the model on additional inputs such as edge maps, depth maps, segmentation maps, and keypoints for pose detection.
The abstract from the paper is:
*We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with "zero convolutions" (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, eg, edges, depth, segmentation, human pose, etc, with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models.*
## Loading from the original format
By default the [`ControlNetModel`] should be loaded with [`~ModelMixin.from_pretrained`], but it can also be loaded
from the original format using [`FromOriginalControlnetMixin.from_single_file`] as follows:
```py
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
url = "https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11p_sd15_canny.pth" # can also be a local path
controlnet = ControlNetModel.from_single_file(url)
url = "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned.safetensors" # can also be a local path
pipe = StableDiffusionControlNetPipeline.from_single_file(url, controlnet=controlnet)
```
## ControlNetModel
[[autodoc]] ControlNetModel
## ControlNetOutput
[[autodoc]] models.controlnet.ControlNetOutput
## FlaxControlNetModel
[[autodoc]] FlaxControlNetModel
## FlaxControlNetOutput
[[autodoc]] models.controlnet_flax.FlaxControlNetOutput

View File

@@ -0,0 +1,28 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Models
🤗 Diffusers provides pretrained models for popular algorithms and modules to create custom diffusion systems. The primary function of models is to denoise an input sample as modeled by the distribution \\(p_{\theta}(x_{t-1}|x_{t})\\).
All models are built from the base [`ModelMixin`] class which is a [`torch.nn.Module`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html) providing basic functionality for saving and loading models, locally and from the Hugging Face Hub.
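For example, a sketch of loading a single model component from a pipeline repository and saving it locally (the repository name is only an example):
```py
from diffusers import UNet2DConditionModel

# loads just the UNet from the pipeline repository's "unet" subfolder
unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")
unet.save_pretrained("./local-unet")
```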
## ModelMixin
[[autodoc]] ModelMixin
## FlaxModelMixin
[[autodoc]] FlaxModelMixin
## PushToHubMixin
[[autodoc]] utils.PushToHubMixin

View File

@@ -0,0 +1,27 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Prior Transformer
The Prior Transformer was originally introduced in [Hierarchical Text-Conditional Image Generation with CLIP Latents](https://huggingface.co/papers/2204.06125) by Ramesh et al. It is used to predict CLIP image embeddings from CLIP text embeddings; image embeddings are predicted through a denoising diffusion process.
The abstract from the paper is:
*Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Moreover, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples.*
## PriorTransformer
[[autodoc]] PriorTransformer
## PriorTransformerOutput
[[autodoc]] models.transformers.prior_transformer.PriorTransformerOutput

View File

@@ -0,0 +1,41 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Transformer2D
A Transformer model for image-like data from [CompVis](https://huggingface.co/CompVis) that is based on the [Vision Transformer](https://huggingface.co/papers/2010.11929) introduced by Dosovitskiy et al. The [`Transformer2DModel`] accepts discrete (classes of vector embeddings) or continuous (actual embeddings) inputs.
When the input is **continuous**:
1. Project the input and reshape it to `(batch_size, sequence_length, feature_dimension)`.
2. Apply the Transformer blocks in the standard way.
3. Reshape to image.
When the input is **discrete**:
<Tip>
It is assumed one of the input classes is the masked latent pixel. The predicted classes of the unnoised image don't contain a prediction for the masked pixel because the unnoised image cannot be masked.
</Tip>
1. Convert input (classes of latent pixels) to embeddings and apply positional embeddings.
2. Apply the Transformer blocks in the standard way.
3. Predict classes of unnoised image.
## Transformer2DModel
[[autodoc]] Transformer2DModel
## Transformer2DModelOutput
[[autodoc]] models.transformers.transformer_2d.Transformer2DModelOutput

View File

@@ -1,4 +1,4 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
@@ -10,6 +10,14 @@ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express o
specific language governing permissions and limitations under the License.
-->
# Stable Diffusion
# Transformer Temporal
Please visit this [very in-detail blog post](https://huggingface.co/blog/stable_diffusion) on Stable Diffusion!
A Transformer model for video-like data.
## TransformerTemporalModel
[[autodoc]] models.transformers.transformer_temporal.TransformerTemporalModel
## TransformerTemporalModelOutput
[[autodoc]] models.transformers.transformer_temporal.TransformerTemporalModelOutput

View File

@@ -0,0 +1,25 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# UNetMotionModel
The [UNet](https://huggingface.co/papers/1505.04597) model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on its number of dimensions and whether it is a conditional model or not. [`UNetMotionModel`] extends the 2D UNet with temporal motion modules so it can be used for video generation.
The abstract from the paper is:
*There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net.*
## UNetMotionModel
[[autodoc]] UNetMotionModel
## UNet3DConditionOutput
[[autodoc]] models.unets.unet_3d_condition.UNet3DConditionOutput

View File

@@ -0,0 +1,25 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# UNet1DModel
The [UNet](https://huggingface.co/papers/1505.04597) model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on its number of dimensions and whether it is a conditional model or not. This is a 1D UNet model.
The abstract from the paper is:
*There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net.*
## UNet1DModel
[[autodoc]] UNet1DModel
## UNet1DOutput
[[autodoc]] models.unets.unet_1d.UNet1DOutput

View File

@@ -0,0 +1,31 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# UNet2DConditionModel
The [UNet](https://huggingface.co/papers/1505.04597) model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on its number of dimensions and whether it is a conditional model or not. This is a 2D UNet conditional model.
The abstract from the paper is:
*There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net.*
## UNet2DConditionModel
[[autodoc]] UNet2DConditionModel
## UNet2DConditionOutput
[[autodoc]] models.unets.unet_2d_condition.UNet2DConditionOutput
## FlaxUNet2DConditionModel
[[autodoc]] models.unets.unet_2d_condition_flax.FlaxUNet2DConditionModel
## FlaxUNet2DConditionOutput
[[autodoc]] models.unets.unet_2d_condition_flax.FlaxUNet2DConditionOutput

View File

@@ -0,0 +1,25 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# UNet2DModel
The [UNet](https://huggingface.co/papers/1505.04597) model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on its number of dimensions and whether it is a conditional model or not. This is a 2D UNet model.
The abstract from the paper is:
*There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net.*
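As a minimal sketch of the point above that the UNet returns outputs with the same spatial size as its inputs, you can instantiate a tiny, randomly initialized [`UNet2DModel`] (the configuration values below are illustrative only, not a trained checkpoint):
```py
import torch
from diffusers import UNet2DModel

model = UNet2DModel(
    sample_size=32,
    in_channels=3,
    out_channels=3,
    layers_per_block=1,
    block_out_channels=(32, 64),
    down_block_types=("DownBlock2D", "AttnDownBlock2D"),
    up_block_types=("AttnUpBlock2D", "UpBlock2D"),
)
noisy_sample = torch.randn(1, 3, 32, 32)
output = model(noisy_sample, timestep=10).sample
print(output.shape)  # torch.Size([1, 3, 32, 32]) -- same size as the input
```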
## UNet2DModel
[[autodoc]] UNet2DModel
## UNet2DOutput
[[autodoc]] models.unets.unet_2d.UNet2DOutput

View File

@@ -0,0 +1,25 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# UNet3DConditionModel
The [UNet](https://huggingface.co/papers/1505.04597) model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on its number of dimensions and whether it is a conditional model or not. This is a 3D UNet conditional model.
The abstract from the paper is:
*There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net.*
## UNet3DConditionModel
[[autodoc]] UNet3DConditionModel
## UNet3DConditionOutput
[[autodoc]] models.unets.unet_3d_condition.UNet3DConditionOutput

View File

@@ -0,0 +1,39 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# UVit2DModel
The [U-ViT](https://hf.co/papers/2301.11093) model is a vision transformer (ViT) based UNet. This model incorporates elements from ViT (considers all inputs such as time, conditions and noisy image patches as tokens) and a UNet (long skip connections between the shallow and deep layers). The skip connection is important for predicting pixel-level features. An additional 3x3 convolutional block is applied prior to the final output to improve image quality.
The abstract from the paper is:
*Currently, applying diffusion models in pixel space of high resolution images is difficult. Instead, existing approaches focus on diffusion in lower dimensional spaces (latent diffusion), or have multiple super-resolution levels of generation referred to as cascades. The downside is that these approaches add additional complexity to the diffusion framework. This paper aims to improve denoising diffusion for high resolution images while keeping the model as simple as possible. The paper is centered around the research question: How can one train a standard denoising diffusion models on high resolution images, and still obtain performance comparable to these alternate approaches? The four main findings are: 1) the noise schedule should be adjusted for high resolution images, 2) It is sufficient to scale only a particular part of the architecture, 3) dropout should be added at specific locations in the architecture, and 4) downsampling is an effective strategy to avoid high resolution feature maps. Combining these simple yet effective techniques, we achieve state-of-the-art on image generation among diffusion models without sampling modifiers on ImageNet.*
## UVit2DModel
[[autodoc]] UVit2DModel
## UVit2DConvEmbed
[[autodoc]] models.unets.uvit_2d.UVit2DConvEmbed
## UVitBlock
[[autodoc]] models.unets.uvit_2d.UVitBlock
## ConvNextBlock
[[autodoc]] models.unets.uvit_2d.ConvNextBlock
## ConvMlmLayer
[[autodoc]] models.unets.uvit_2d.ConvMlmLayer

View File

@@ -0,0 +1,27 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# VQModel
The VQ-VAE model was introduced in [Neural Discrete Representation Learning](https://huggingface.co/papers/1711.00937) by Aaron van den Oord, Oriol Vinyals and Koray Kavukcuoglu. The model is used in 🤗 Diffusers to decode latent representations into images. Unlike [`AutoencoderKL`], the [`VQModel`] works in a quantized latent space.
The abstract from the paper is:
*Learning useful representations without supervision remains a key challenge in machine learning. In this paper, we propose a simple yet powerful generative model that learns such discrete representations. Our model, the Vector Quantised-Variational AutoEncoder (VQ-VAE), differs from VAEs in two key ways: the encoder network outputs discrete, rather than continuous, codes; and the prior is learnt rather than static. In order to learn a discrete latent representation, we incorporate ideas from vector quantisation (VQ). Using the VQ method allows the model to circumvent issues of "posterior collapse" -- where the latents are ignored when they are paired with a powerful autoregressive decoder -- typically observed in the VAE framework. Pairing these representations with an autoregressive prior, the model can generate high quality images, videos, and speech as well as doing high quality speaker conversion and unsupervised learning of phonemes, providing further evidence of the utility of the learnt representations.*
## VQModel
[[autodoc]] VQModel
## VQEncoderOutput
[[autodoc]] models.vq_model.VQEncoderOutput

Some files were not shown because too many files have changed in this diff.