Sayak Paul
140bdfde06
[docs] add details concerning diffusers-specific bits. ( #6375 )
...
add details concerning diffusers-specific bits.
2024-12-23 13:02:05 +05:30
Sayak Paul
dda1746d63
remove delete documentation trigger workflows. ( #6373 )
2024-12-23 13:02:05 +05:30
Adrian Punga
d553a488cc
Fix support for MPS in KDPM2AncestralDiscreteScheduler ( #6365 )
...
Fix support for MPS
MPS doesn't support float64
2024-12-23 13:02:05 +05:30
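#6365 above makes KDPM2AncestralDiscreteScheduler usable on Apple Silicon by keeping its schedule math out of float64, which the MPS backend does not implement. A minimal sketch of exercising the fixed scheduler on MPS; the checkpoint id is illustrative:
```python
import torch
from diffusers import StableDiffusionPipeline, KDPM2AncestralDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
# Swap in the scheduler the fix targets; on MPS its internal sigma/timestep
# tensors must stay in float32 because the backend lacks float64.
pipe.scheduler = KDPM2AncestralDiscreteScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("mps")

image = pipe("an astronaut riding a horse", num_inference_steps=30).images[0]
```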
YiYi Xu
2b469f7d3f
[refactor embeddings] gligen + ip-adapter ( #6244 )
...
* refactor ip-adapter-imageproj, gligen
---------
Co-authored-by: yiyixuxu <yixu310@gmail.com>
2024-12-23 13:02:05 +05:30
Sayak Paul
f72503473e
[Training examples] Follow up of #6306 ( #6346 )
...
* add to dreambooth lora.
* add: t2i lora.
* add: sdxl t2i lora.
* style
* lcm lora sdxl.
* unwrap
* fix: enable_adapters().
2024-12-23 13:02:05 +05:30
apolinário
a7b0d8f714
Fix keys for lora format on advanced training scripts ( #6361 )
...
fix keys for lora format on advanced training scripts
2024-12-23 13:02:05 +05:30
apolinário
372bb17ebc
Add PEFT to advanced training script ( #6294 )
...
* Fix ProdigyOPT in SDXL Dreambooth script
* style
* style
* Add PEFT to Advanced Training Script
* style
* style
* ✨ style ✨
* change order for logic operation
* add lora alpha
* style
* Align PEFT to new format
* Update train_dreambooth_lora_sdxl_advanced.py
Apply #6355 fix
---------
Co-authored-by: multimodalart <joaopaulo.passos+multimodal@gmail.com >
2024-12-23 13:02:05 +05:30
Dhruv Nair
5df4485ecb
Fix chunking in SVD ( #6350 )
...
fix
2024-12-23 13:02:05 +05:30
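For context on the chunking that #6350 repairs: Stable Video Diffusion decodes its latent frames through the VAE in chunks to bound memory. A sketch assuming the public img2vid-xt checkpoint; the input URL is a placeholder:
```python
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image

pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.enable_model_cpu_offload()

image = load_image("https://example.com/conditioning.png")  # placeholder image
# decode_chunk_size bounds how many frames the VAE decodes at once:
# smaller chunks lower peak memory at some cost in speed.
frames = pipe(image, decode_chunk_size=8).frames[0]
```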
Andy W
ffe5b44159
Fix "push_to_hub only create repo in consistency model lora SDXL training script" ( #6102 )
...
* fix
* style fix
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-12-23 13:02:05 +05:30
dg845
fbc43ee3aa
Fix LCM distillation bug when creating the guidance scale embeddings using multiple GPUs. ( #6279 )
...
Fix bug when creating the guidance embeddings using multiple GPUs.
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-12-23 13:02:05 +05:30
Jianqi Pan
02124fe691
fix: use retrieve_latents ( #6337 )
2024-12-23 13:02:05 +05:30
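retrieve_latents, referenced by #6337, is a small helper duplicated across pipeline files that normalizes VAE encode() outputs: it samples from a latent distribution when one is returned and passes plain latents through otherwise. A sketch; note the import path is one of several copies and may move between versions:
```python
import torch
from diffusers import AutoencoderKL
# One of several copies of this helper; the exact module may differ by version.
from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_img2img import (
    retrieve_latents,
)

vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae")
image = torch.randn(1, 3, 512, 512)  # stand-in for a preprocessed input image

generator = torch.Generator().manual_seed(0)
# Handles both AutoencoderKL (latent_dist) and VAEs whose encode()
# already returns latents, instead of calling .latent_dist.sample() directly.
latents = retrieve_latents(vae.encode(image), generator=generator)
latents = latents * vae.config.scaling_factor
```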
Dhruv Nair
47308a91f6
Move ControlNetXS into Community Folder ( #6316 )
...
* update
* update
* update
* update
* update
* make style
* remove docs
* update
* move to research folder.
* fix-copies
* remove _toctree entry.
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-12-23 13:02:05 +05:30
Sayak Paul
0af5b440f9
[LoRA] make LoRAs trained with peft loadable when peft isn't installed ( #6306 )
...
* spit out diffusers-native format from the get-go.
* rejig the peft_to_diffusers mapping.
2024-12-23 13:02:05 +05:30
Will Berman
1de21a5996
amused update links to new repo ( #6344 )
...
* amused update links to new repo
* lint
2024-12-23 13:02:05 +05:30
Justin Ruan
a5dd7b50ab
Remove unused parameters and fixed FutureWarning ( #6317 )
...
* Remove unused parameters and fixed `FutureWarning`
* Fixed wrong config instance
* update unittest for `DDIMInverseScheduler`
2024-12-23 13:02:05 +05:30
YiYi Xu
1256065580
adding auto1111 features to inpainting pipeline ( #6072 )
...
* add inpaint_full_res
* fix
* update
* move get_crop_region to image processor
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
* move apply_overlay to image processor
---------
Co-authored-by: yiyixuxu <yixu310@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
2024-12-23 13:02:05 +05:30
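The Auto1111-style features in #6072 surface through the inpainting pipeline's padding_mask_crop argument: generation is cropped to the masked region plus padding (get_crop_region) and the result is pasted back over the original (apply_overlay), both now living on the image processor. A hedged sketch; checkpoint and image URLs are illustrative:
```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from diffusers.utils import load_image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

init_image = load_image("https://example.com/scene.png")  # placeholder inputs
mask_image = load_image("https://example.com/mask.png")

# padding_mask_crop restricts generation to the masked crop (Auto1111's
# "inpaint only masked"), then overlays the result on the full image.
image = pipe(
    "a red sofa",
    image=init_image,
    mask_image=mask_image,
    padding_mask_crop=32,
).images[0]
```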
priprapre
a673191219
[SDXL-IP2P] Update README_sdxl, Replace the link for wandb log with the correct run ( #6270 )
...
Replace the link for wandb log with the correct run
2024-12-23 13:02:05 +05:30
Sayak Paul
7570fcef40
[Docs] fix: video rendering on svd. ( #6330 )
...
fix: video rendering on svd.
2024-12-23 13:02:05 +05:30
Will Berman
98f2ae0011
amused other pipelines docs ( #6343 )
...
other pipelines
2024-12-23 13:02:05 +05:30
Dhruv Nair
bfcc067f50
Add AnimateDiff conversion scripts ( #6340 )
...
* add scripts
* update
2024-12-23 13:02:05 +05:30
Dhruv Nair
f959a96fe4
Update Animatediff docs ( #6341 )
...
* update
* update
* update
2024-12-23 13:02:05 +05:30
Dhruv Nair
d310bf3589
Interruptable Pipelines ( #5867 )
...
* add interruptable pipelines
* add tests
* updatemsmq
* add interrupt property
* make fix copies
* Revert "make fix copies"
This reverts commit 914b35332b.
* add docs
* add tutorial
* Update docs/source/en/tutorials/interrupting_diffusion_process.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/tutorials/interrupting_diffusion_process.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* update
* fix quality issues
* fix
* update
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-12-23 13:02:05 +05:30
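The interrupt property added in #5867 is driven from a step-end callback: set the pipeline's interrupt flag and the denoising loop returns early on the next step. A sketch close to the tutorial this PR adds:
```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.enable_model_cpu_offload()

def interrupt_callback(pipeline, i, t, callback_kwargs):
    # Stop after 10 of the 50 scheduled steps; the pipeline checks the
    # flag each step and skips the remaining iterations once it is set.
    if i == 10:
        pipeline._interrupt = True
    return callback_kwargs

pipe(
    "A photo of a cat",
    num_inference_steps=50,
    callback_on_step_end=interrupt_callback,
)
```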
dg845
ccdc5a2635
Add rescale_betas_zero_snr Argument to DDPMScheduler ( #6305 )
...
* Add rescale_betas_zero_snr argument to DDPMScheduler.
* Propagate rescale_betas_zero_snr changes to DDPMParallelScheduler.
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-12-23 13:02:05 +05:30
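rescale_betas_zero_snr from #6305 enforces zero terminal SNR on the beta schedule (after "Common Diffusion Noise Schedules and Sample Steps Are Flawed"), which mainly matters for v-prediction models. A sketch of passing the new flag when loading a DDPMScheduler; the checkpoint id is illustrative:
```python
from diffusers import DDPMScheduler

# Config kwargs passed to from_pretrained override the stored config,
# so the new flag can be enabled on an existing scheduler config.
scheduler = DDPMScheduler.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    subfolder="scheduler",
    rescale_betas_zero_snr=True,
    prediction_type="v_prediction",
)
```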
Sayak Paul
14ab0ead5e
[Diffusion fast] add doc for diffusion fast ( #6311 )
...
* add doc for diffusion fast
* add entry to _toctree
* Apply suggestions from code review
* fix title
* fix: title entry
* add note about fuse_qkv_projections
2024-12-23 13:02:05 +05:30
Younes Belkada
c07374f725
[Peft / Lora] Add adapter_names in fuse_lora ( #5823 )
...
* add adapter_name in fuse
* add test
* up
* fix CI
* adapt from suggestion
* Update src/diffusers/utils/testing_utils.py
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
* change to `require_peft_version_greater`
* change variable names in test
* Update src/diffusers/loaders/lora.py
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
* break into 2 lines
* final comments
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
2024-12-23 13:02:05 +05:30
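With adapter_names on fuse_lora (#5823), only a chosen subset of loaded LoRAs is fused into the base weights instead of all of them. A sketch; the LoRA repos and adapter names are illustrative:
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

pipe.load_lora_weights("nerijs/pixel-art-xl", adapter_name="pixel")
pipe.load_lora_weights("ostris/crayon_style_lora_sdxl", adapter_name="crayon")

# Fuse only "pixel" into the base weights; previously fuse_lora fused
# every loaded adapter unconditionally.
pipe.fuse_lora(adapter_names=["pixel"], lora_scale=1.0)
```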
Sayak Paul
cdece7835d
[Training] Add datasets version of LCM LoRA SDXL ( #5778 )
...
* add: script to train lcm lora for sdxl with 🤗 datasets
* suit up the args.
* remove comments.
* fix num_update_steps
* fix batch unmarshalling
* fix num_update_steps_per_epoch
* fix: dataloading.
* fix microconditions.
* unconditional predictions debug
* fix batch size.
* no need to use use_auth_token
* Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com >
* make vae encoding batch size an arg
* final serialization in kohya
* style
* state dict rejigging
* feat: no separate teacher unet.
* debug
* fix state dict serialization
* debug
* debug
* debug
* remove prints.
* remove kohya utility and make style
* fix serialization
* fix
* add test
* add peft dependency.
* add: peft
* remove peft
* autocast device determination from accelerator
* autocast
* reduce lora rank.
* remove unneeded space
* Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com >
* style
* remove prompt dropout.
* also save in native diffusers ckpt format.
* debug
* debug
* debug
* better formation of the null embeddings.
* remove space.
* autocast fixes.
* autocast fix.
* hacky
* remove lora_sayak
* Apply suggestions from code review
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com >
* style
* make log validation leaner.
* move back enabled in.
* fix: log_validation call.
* add: checkpointing tests
* taking my chances to see if disabling autocasting has any effect?
* start debugging
* name
* name
* name
* more debug
* more debug
* index
* remove index.
* print length
* print length
* print length
* move unet.train() after add_adapter()
* disable some prints.
* enable_adapters() manually.
* remove prints.
* some changes.
* fix params_to_optimize
* more fixes
* debug
* debug
* remove print
* disable grad for certain contexts.
* Add support for IPAdapterFull (#5911 )
* Add support for IPAdapterFull
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com >
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
* Fix a bug in `add_noise` function (#6085 )
* fix
* copies
---------
Co-authored-by: yiyixuxu <yixu310@gmail.com>
* [Advanced Diffusion Script] Add Widget default text (#6100 )
add widget
* [Advanced Training Script] Fix pipe example (#6106 )
* IP-Adapter for StableDiffusionControlNetImg2ImgPipeline (#5901 )
* adapter for StableDiffusionControlNetImg2ImgPipeline
* fix-copies
* fix-copies
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* IP adapter support for most pipelines (#5900 )
* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_upscale.py
* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py
* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_instruct_pix2pix.py
* update tests
* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_panorama.py
* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py
* support ip-adapter in src/diffusers/pipelines/stable_diffusion_safe/pipeline_stable_diffusion_safe.py
* support ip-adapter in src/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_text2img.py
* support ip-adapter in src/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_img2img.py
* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py
* revert changes to sd_attend_and_excite and sd_upscale
* make style
* fix broken tests
* update ip-adapter implementation to latest
* apply suggestions from review
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com >
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* fix: lora_alpha
* make vae casting conditional.
* param upcasting
* propagate comments from https://github.com/huggingface/diffusers/pull/6145
Co-authored-by: dg845 <dgu8957@gmail.com >
* [Peft] fix saving / loading when unet is not "unet" (#6046 )
* [Peft] fix saving / loading when unet is not "unet"
* Update src/diffusers/loaders/lora.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* undo stablediffusion-xl changes
* use unet_name to get unet for lora helpers
* use unet_name
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* [Wuerstchen] fix fp16 training and correct lora args (#6245 )
fix fp16 training
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* [docs] fix: animatediff docs (#6339 )
fix: animatediff docs
* add: note about the new script in readme_sdxl.
* Revert "[Peft] fix saving / loading when unet is not "unet" (#6046 )"
This reverts commit 4c7e983bb5 .
* Revert "[Wuerstchen] fix fp16 training and correct lora args (#6245 )"
This reverts commit 0bb9cf0216 .
* Revert "[docs] fix: animatediff docs (#6339 )"
This reverts commit 11659a6f74 .
* remove tokenize_prompt().
* assistive comments around enable_adapters() and disable_adapters().
---------
Co-authored-by: Suraj Patil <surajp815@gmail.com >
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com >
Co-authored-by: Fabio Rigano <57982783+fabiorigano@users.noreply.github.com >
Co-authored-by: YiYi Xu <yixu310@gmail.com >
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
Co-authored-by: yiyixuxu <yixu310@gmail.com>
Co-authored-by: apolinário <joaopaulo.passos@gmail.com >
Co-authored-by: Charchit Sharma <charchitsharma11@gmail.com >
Co-authored-by: Aryan V S <contact.aryanvs@gmail.com >
Co-authored-by: dg845 <dgu8957@gmail.com >
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com >
2024-12-23 13:02:05 +05:30
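A LoRA produced by this LCM distillation script is consumed like the published LCM-LoRA: swap in LCMScheduler and sample with very few steps. A sketch using the official pre-trained LCM-LoRA as a stand-in for a locally trained checkpoint:
```python
import torch
from diffusers import StableDiffusionXLPipeline, LCMScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# LCM needs its own scheduler; replace the repo id with a local output_dir
# to load a LoRA trained by the script above.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")

# Few steps and guidance_scale ~1 are the point of LCM distillation.
image = pipe(
    "close-up photo of a fox", num_inference_steps=4, guidance_scale=1.0
).images[0]
```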
Sayak Paul
4bcd9f1385
[docs] fix: animatediff docs ( #6339 )
...
fix: animatediff docs
2024-12-23 13:02:05 +05:30
Kashif Rasul
e2463883bf
[Wuerstchen] fix fp16 training and correct lora args ( #6245 )
...
fix fp16 training
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-12-23 13:02:05 +05:30
Kashif Rasul
6c979d83d7
[Peft] fix saving / loading when unet is not "unet" ( #6046 )
...
* [Peft] fix saving / loading when unet is not "unet"
* Update src/diffusers/loaders/lora.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* undo stablediffusion-xl changes
* use unet_name to get unet for lora helpers
* use unet_name
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-12-23 13:02:05 +05:30
dg845
bc2c7fb89b
Change LCM-LoRA README Script Example Learning Rates to 1e-4 ( #6304 )
...
Change README LCM-LoRA example learning rates to 1e-4.
2024-12-23 13:02:05 +05:30
Jianqi Pan
ed89b4dfc4
fix: cannot set guidance_scale ( #6326 )
...
fix: set guidance_scale
2024-12-23 13:02:05 +05:30
Sayak Paul
5ed8dcfa1d
[Tests] Speed up example tests ( #6319 )
...
* remove validation args from textual inversion tests
* reduce number of train steps in textual inversion tests
* fix: directories.
* debug
* fix: directories.
* remove validation tests from textual inversion
* try reducing the time of test_text_to_image_checkpointing_use_ema
* fix: directories
* speed up test_text_to_image_checkpointing
* speed up test_text_to_image_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints
* fix
* speed up test_instruct_pix2pix_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints
* set checkpoints_total_limit to 2.
* test_text_to_image_lora_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints speed up
* speed up test_unconditional_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints
* debug
* fix: directories.
* speed up test_instruct_pix2pix_checkpointing_checkpoints_total_limit
* speed up: test_controlnet_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints
* speed up test_controlnet_sdxl
* speed up dreambooth tests
* speed up test_dreambooth_lora_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints
* speed up test_custom_diffusion_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints
* speed up test_text_to_image_lora_sdxl_text_encoder_checkpointing_checkpoints_total_limit
* speed up # checkpoint-2 should have been deleted
* speed up examples/text_to_image/test_text_to_image.py::TextToImage::test_text_to_image_checkpointing_checkpoints_total_limit
* additional speed ups
* style
2024-12-23 13:02:05 +05:30
Sayak Paul
bba008c1e1
fix: lora peft dummy components ( #6308 )
...
* fix: lora peft dummy components
* fix: dummy components
2024-12-23 13:02:05 +05:30
Sayak Paul
7727fa6926
fix: t2i adapter paper link ( #6314 )
2024-12-23 13:02:05 +05:30
mwkldeveloper
a1a427c097
fix RuntimeError: Input type (float) and bias type (c10::Half) should be the same in train_text_to_image_lora.py ( #6259 )
...
* fix RuntimeError: Input type (float) and bias type (c10::Half) should be the same
* format source code
* format code
* remove the autocast blocks within the pipeline
* add autocast blocks to pipeline caller in train_text_to_image_lora.py
2024-12-23 13:02:05 +05:30
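The dtype mismatch in #6259 comes from feeding float32 validation inputs to half-precision weights; the fix moves autocast out of the pipeline and wraps the call site instead. A self-contained sketch of the pattern:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Autocasting the call site (not inside the pipeline) keeps input and
# bias dtypes consistent, avoiding the float/c10::Half RuntimeError.
with torch.autocast("cuda"):
    image = pipe("a validation prompt", num_inference_steps=30).images[0]
```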
Celestial Phineas
e01916d369
Fix typos in the ValueError for a nested image list as StableDiffusionControlNetPipeline input. ( #6286 )
...
Fixed typos in the `ValueError` for a nested image list as input.
2024-12-23 13:02:05 +05:30
Dhruv Nair
aeb2cf6dc6
LoRA Unfusion test fix ( #6291 )
...
update
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-12-23 13:02:05 +05:30
Sayak Paul
6943a96d92
[LoRA PEFT] fix LoRA loading so that correct alphas are parsed ( #6225 )
...
* initialize alpha too.
* add: test
* remove config parsing
* store rank
* debug
* remove faulty test
2024-12-23 13:02:04 +05:30
apolinário
d53f44cd12
Fix Prodigy optimizer in SDXL Dreambooth script ( #6290 )
...
* Fix ProdigyOPT in SDXL Dreambooth script
* style
* style
2024-12-23 13:02:04 +05:30
Bingxin Ke
f151182f46
[Community Pipeline] Add Marigold Monocular Depth Estimation ( #6249 )
...
* [Community Pipeline] Add Marigold Monocular Depth Estimation
- add single-file pipeline
- update README
* fix format - add one blank line
* format script with ruff
* use direct image link in example code
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-12-23 13:02:04 +05:30
Pedro Cuenca
1a68c0b174
Allow diffusers to load with Flax, w/o PyTorch ( #6272 )
2024-12-23 13:02:04 +05:30
Dhruv Nair
4b3bec158c
Remove ONNX inpaint legacy ( #6269 )
...
update
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-12-23 13:02:04 +05:30
Will Berman
8d0bf4f3da
open muse ( #5437 )
...
amused
rename
Update docs/source/en/api/pipelines/amused.md
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
AdaLayerNormContinuous default values
custom micro conditioning
micro conditioning docs
put lookup from codebook in constructor
fix conversion script
remove manual fused flash attn kernel
add training script
temp remove training script
add dummy gradient checkpointing func
clarify temperatures is an instance variable by setting it
remove additional SkipFF block args
hardcode norm args
rename tests folder
fix paths and samples
fix tests
add training script
training readme
lora saving and loading
non-lora saving/loading
some readme fixes
guards
Update docs/source/en/api/pipelines/amused.md
Co-authored-by: Suraj Patil <surajp815@gmail.com >
Update examples/amused/README.md
Co-authored-by: Suraj Patil <surajp815@gmail.com >
Update examples/amused/train_amused.py
Co-authored-by: Suraj Patil <surajp815@gmail.com >
vae upcasting
add fp16 integration tests
use tuple for micro cond
copyrights
remove casts
delegate to torch.nn.LayerNorm
move temperature to pipeline call
upsampling/downsampling changes
2024-12-23 13:02:04 +05:30
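aMUSEd, the open MUSE reproduction landed by #5437, ships as its own pipeline class. A minimal sketch; the fp16 variant flag is assumed to follow the usual checkpoint convention:
```python
import torch
from diffusers import AmusedPipeline

pipe = AmusedPipeline.from_pretrained(
    "amused/amused-512", variant="fp16", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# aMUSEd samples tokens rather than running a diffusion chain,
# so a small number of steps is the norm.
image = pipe("a cowboy riding a dinosaur", num_inference_steps=12).images[0]
```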
Sayak Paul
b7ff3022ee
[Refactor] move ldm3d out of stable_diffusion. ( #6263 )
...
ldm3d.
2024-12-23 13:02:04 +05:30
Dhruv Nair
df906657bc
update
2023-12-21 13:41:54 +00:00
Sayak Paul
ab0459f2b7
[Deprecated pipelines] remove pix2pix zero from init ( #6268 )
...
remove pix2pix zero from init
2023-12-21 18:17:28 +05:30
Sayak Paul
9c7cc36011
[Refactor] move panorama out of stable_diffusion ( #6262 )
...
* move panorama out.
* fix: diffedit
* fix: import.
* fix: import
2023-12-21 18:17:05 +05:30
Sayak Paul
325f6c53ed
[Refactor] move attend and excite out of stable_diffusion. ( #6261 )
...
* move attend and excite out.
* fix: import
* fix diffedit
2023-12-21 16:49:32 +05:30
Benjamin Bossan
43979c2890
TST Fix LoRA test that fails with PEFT >= 0.7.0 ( #6216 )
...
See #6185 for context.
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2023-12-21 11:50:05 +01:00
Sayak Paul
9ea6ac1b07
[Refactor] move sag out of stable_diffusion ( #6264 )
...
move sag out of stable_diffusion.
2023-12-21 16:09:49 +05:30