Compare commits

...

147 Commits

Author SHA1 Message Date
Patrick von Platen
180841bbde Release: v0.12.0 2023-01-25 15:48:00 +02:00
Patrick von Platen
6ba2231d72 Reproducibility 3/3 (#1924)
* make tests deterministic

* run slow tests

* prepare for testing

* finish

* refactor

* add print statements

* finish more

* correct some test failures

* more fixes

* set up to correct tests

* more corrections

* up

* fix more

* more prints

* add

* up

* up

* up

* up

* up

* more fixes

* up

* up

* up

* up

* up

* fix more

* up

* up

* clean tests

* up

* up

* up

* more fixes

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* make

* correct

* finish

* finish

Co-authored-by: Suraj Patil <surajp815@gmail.com>
2023-01-25 13:44:22 +01:00
Patrick von Platen
008c22d334 Improve transformers versions handling (#2104) 2023-01-25 12:50:54 +01:00
Patrick von Platen
b562b6611f Allow directly passing text embeddings to Stable Diffusion Pipeline for prompt weighting (#2071)
* add text embeds to sd

* add text embeds to sd

* finish tests

* finish

* finish

* make style

* fix tests

* make style

* make style

* up

* better docs

* fix

* fix

* new try

* up

* up

* finish
2023-01-25 12:29:49 +01:00
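For reference, #2071 lets callers precompute the text embeddings (e.g. to re-weight individual tokens) instead of passing a string prompt. A minimal sketch, assuming the `prompt_embeds` argument from the v0.12 API and a standard v1-5 checkpoint:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Encode the prompt manually so the embeddings can be scaled or mixed first.
tokens = pipe.tokenizer(
    "a photo of an astronaut riding a horse",
    padding="max_length",
    max_length=pipe.tokenizer.model_max_length,
    return_tensors="pt",
)
with torch.no_grad():
    prompt_embeds = pipe.text_encoder(tokens.input_ids.to("cuda"))[0]

image = pipe(prompt_embeds=prompt_embeds).images[0]  # no string `prompt` needed
```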
Sayak Paul
c1184918c5 [docs] Adds a doc on LoRA support for diffusers (#2086)
* add: a doc on LoRA support in diffusers.

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* apply PR suggestions.

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* remove visually incoherent elements.

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-01-25 12:23:12 +01:00
apolinario
263b968041 Add lora tag to the model tags (#2103)
* Add `lora` tag to the model tags

For lora training

* up

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-25 12:17:59 +01:00
Suraj Patil
480d8846a9 [doc] update example for pix2pix (#2101)
update example for pix2pix
2023-01-25 11:22:09 +01:00
patil-suraj
9dbf78e2f1 Merge branch 'main' of https://github.com/huggingface/diffusers 2023-01-25 09:12:49 +01:00
patil-suraj
9aa6fcab60 fix docs for center_crop 2023-01-25 09:12:47 +01:00
Pedro Cuenca
f37d880f6a Remove wandb from text_to_image requirements.txt (#2092) 2023-01-25 08:54:14 +01:00
Will Berman
febaf86302 [docs] [dreambooth] note random crop (#2085)
* [docs] [dreambooth] note random crop

documenting default random crop behavior
2023-01-24 08:37:34 -08:00
Takuma Mori
16bb5058b9 xFormers attention op arg (#2049)
* allow passing op to xFormers attention

original code by @patil-suraj
huggingface/diffusers@ae0cc0b71f

* correct style by `make style`

* add attention_op arg documents

* add usage example to docstring

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* add usage example to docstring

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* code style correction by `make style`

* Update docstring code to a valid python example

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Update docstring code to a valid python example

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* style correction by `make style`

* Update code example to be fully functional

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2023-01-24 17:26:04 +01:00
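The new argument from #2049 pins a specific xFormers operator instead of letting xFormers auto-dispatch. A hedged sketch (the op class comes from xformers and may vary by version):

```python
import torch
from diffusers import StableDiffusionPipeline
from xformers.ops import MemoryEfficientAttentionFlashAttentionOp

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Force the flash-attention op rather than the dispatcher's default choice.
pipe.enable_xformers_memory_efficient_attention(
    attention_op=MemoryEfficientAttentionFlashAttentionOp
)
```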
Lincoln Stein
7533e3d7e6 [Feat] checkpoint_merger works on local models as well as ones that use safetensors (#2060)
* allow a local model directory to be used for merging

* moved checkpoint merge bugfix into main for testing

* possibly fix local variable "config_dict" referenced before assignment

* fix deprecation warning

* debugging...

* debugging

* allow safetensors

* safetensors try again

* fix syntax error

* further debugging

* fix logic error when checkpoint 2 is none

* more debugging...

* more debugging...

* more debugging...

* more debugging...

* debugging

* clean up status reporting

* skip the requires_safety_checker boolean

* moved checkpoint merge bugfix into main for testing

* possibly fix local variable "config_dict" referenced before assignment

* fix deprecation warning

* allow safetensors

* fix logic error when checkpoint 2 is none

* clean up status reporting

* undo hack to use private repo for community pipelines

* allow a local model directory to be used for merging

* allow safetensors

* clean up status reporting

* reformatted with black

* sort imported modules correctly

* Update examples/community/checkpoint_merger.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update examples/community/checkpoint_merger.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update examples/community/checkpoint_merger.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* fix import style error

Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-24 16:35:17 +01:00
Yuta Hayashibe
418331094d Run inference on a specific condition and fix call of manual_seed() (#2074) 2023-01-24 14:19:22 +01:00
Suraj Patil
fc8afa3ab5 [dreambooth] fix multi on gpu. (#2088)
unwrap model on multi gpu
2023-01-24 13:23:56 +01:00
Pedro Cuenca
31336dae3b Fix resume epoch for all training scripts except textual_inversion (#2079) 2023-01-24 12:02:41 +01:00
Pedro Cuenca
0e98e83927 [lora] Log images when using tensorboard (#2078)
* [lora] Log images when using tensorboard.

* Specify image format instead of transposing.

As discussed with @sayakpaul.

* Style
2023-01-24 10:26:39 +01:00
Pedro Cuenca
f4dddaf5ee [textual_inversion] Fix resuming state when using gradient checkpointing (#2072)
* Fix resuming state when using gradient checkpointing.

Also, allow --resume_from_checkpoint to be used when the checkpoint does
not yet exist (a normal training run will be started).

* style
2023-01-24 10:25:41 +01:00
Patrick von Platen
7d8b4f7f8e [Paint by example] Fix cpu offload for paint by example (#2062)
* [Paint by example] Fix paint by example

* fix more

* final fix
2023-01-24 10:05:50 +01:00
Gleb Akhmerov
a66f2baeb7 Dreambooth: reduce VRAM usage (#2039)
* Dreambooth: use `optimizer.zero_grad(set_to_none=True)` to reduce VRAM usage

* Allow the user to control `optimizer.zero_grad(set_to_none=True)` with --set_grads_to_none

* Update Dreambooth readme

* Fix link in readme

* Fix header size in readme
2023-01-23 12:21:03 +01:00
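What the new `--set_grads_to_none` flag from #2039 toggles, in plain PyTorch terms: gradients are dropped (set to `None`) rather than zeroed in place, so their memory can be freed between steps. A minimal illustration:

```python
import torch

model = torch.nn.Linear(4, 4)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

loss = model(torch.randn(2, 4)).sum()
loss.backward()
optimizer.step()
# Drop the gradient tensors entirely instead of filling them with zeros;
# this is the behavior the Dreambooth flag opts into.
optimizer.zero_grad(set_to_none=True)
```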
Suraj Patil
6fedb29f11 [examples] add dataloader_num_workers argument (#2070)
add --dataloader_num_workers argument
2023-01-23 10:58:03 +01:00
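The flag from #2070 is forwarded to PyTorch's `DataLoader`; a small illustration with a stand-in dataset:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(64, 3, 32, 32))  # stand-in dataset
# num_workers > 0 moves batch loading into worker subprocesses;
# 0 (the default) keeps it in the main training process.
loader = DataLoader(dataset, batch_size=16, shuffle=True, num_workers=4)
```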
cafe+ai — かふぇあい
d75ad93ca7 Safetensors loading in "convert_diffusers_to_original_stable_diffusion" (#2054)
* Safetensors loading in "convert_diffusers_to_original_stable_diffusion"

Adds diffusers-format safetensors loading support

* Fix import sort order: convert_diffusers_to_original_stable_diffusion.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-23 09:44:55 +01:00
Sayak Paul
ffb3a26c5c [LoRA] Adds example on text2image fine-tuning with LoRA (#2031)
* example on fine-tuning with LoRA.

* apply make quality.

* fix: pipeline loading.

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* apply suggestions for PR review.

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* apply make style and make quality.

* chore: remove mention of dreambooth from text2image.

* add: weight path and wandb run link.

* Apply suggestions from code review

* apply make style.

* make style

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2023-01-23 08:31:07 +01:00
wrong.wang
b15a951a48 add community pipeline: StableUnCLIPPipeline (#2037)
* add community pipeline: StableUnCLIPPipeline

* reformt stable_unclip.py with isort and black
2023-01-22 21:03:42 +01:00
Patrick von Platen
69c76173fa fix tests 2023-01-22 14:31:05 +02:00
Patrick von Platen
926b34b40c improve tests 2023-01-22 14:30:15 +02:00
Patrick von Platen
8d326e61cf Correct Pix2Pix example (#2056)
* Correct Pix2Pix example

- no advertisement of revision -> it'll be deprecated soon
- by default safety checker should be used

* Update docs/source/en/api/pipelines/stable_diffusion/pix2pix.mdx

* up
2023-01-21 15:56:29 +01:00
Patrick von Platen
59b7339a84 [From pretrained] Don't download .safetensors files if safetensors is… (#2057)
* [From pretrained] Don't download .safetensors files if safetensors is not available

* tests

* tests

* up
2023-01-21 15:51:33 +01:00
Suraj Patil
aa265f74bd [StableDiffusionInstructPix2Pix] use cpu generator in slow tests (#2051)
* use cpu generator in slow tests

* fix get_inputs
2023-01-20 21:43:00 +02:00
Damian Stewart
3d2f24b099 Module-ise "original stable diffusion to diffusers" conversion script (#2019)
* convert __main__ to a function call and call it

* add missing type hint

* make style check pass

* move loading to src/diffusers

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-20 17:30:44 +01:00
Lucain
bcb476797c Remove modelcards dependency (#2050)
* Switch to huggingface_hub.ModelCard

* Remove modelcards dependency in favor of Jinja2
2023-01-20 16:39:42 +01:00
Lucain
5ea4be86ab Create repo before cloning in examples (#2047)
* Create repo before cloning in examples

* code quality
2023-01-20 16:38:37 +01:00
Suraj Patil
e5ff75540c Add InstructPix2Pix pipeline (#2040)
* begin pix2pix

* fix

* cfg image_latents

* fix some docstr

* fix

* fix

* hack

* fix

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* add comments to explain the hack

* move __call__ to the top

* doc

* remove height and width

* remove deprecations

* fix doc str

* quality

* fast tests

* change model id

* fast tests

* fix test

* address Pedro's comments

* copyright

* Simple doc page.

* Apply suggestions from code review

* style

* Remove import

* address some review comments

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* style

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-20 16:25:46 +01:00
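A minimal usage sketch of the new pipeline from #2040, assuming the public `timbrooks/instruct-pix2pix` checkpoint:

```python
from io import BytesIO

import requests
import torch
from PIL import Image
from diffusers import StableDiffusionInstructPix2PixPipeline

pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
init_image = Image.open(BytesIO(requests.get(url).content)).convert("RGB")

# image_guidance_scale controls how strongly the output sticks to the input image.
edited = pipe(
    "make it a winter scene",
    image=init_image,
    num_inference_steps=20,
    image_guidance_scale=1.5,
).images[0]
```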
hysts
3ecbbd6288 Minor fix in the documentation of LoRA (#2045)
Fix
2023-01-20 13:19:54 +01:00
Anton Lozhkov
7c82a16fc1 Fix EMA for multi-gpu training in the unconditional example (#1930)
* improve EMA

* style

* one EMA model

* quality

* fix tests

* fix test

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* reorganise the unconditional script

* backwards compatibility

* default to init values for some args

* fix ort script

* issubclass => isinstance

* update state_dict

* docstr

* doc

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* use .to if device is passed

* deprecate device

* make flake happy

* fix typo

Co-authored-by: patil-suraj <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-19 11:35:55 +01:00
Patrick von Platen
f354dd9e2f [Save Pretrained] Remove dead code lines that can accidentally remove pytorch files (#2038)
correct safetensors
2023-01-19 10:11:27 +01:00
Patrick von Platen
007c914c70 [Lora] Model card (#2032)
* [Lora] up lora training

* finish

* finish

* finish model card
2023-01-19 09:44:02 +01:00
Patrick von Platen
3c07840b1b make style 2023-01-19 02:22:47 +01:00
Joqsan
fcb2ec8c2f Fix typos and minor redundancies (#2029)
fix typos and minor redundancies

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-19 02:22:18 +01:00
Patrick von Platen
013955b5a7 [Dit] Fix dit tests (#2034)
* [Dit] Fix dit tests

* up
2023-01-19 01:50:22 +01:00
Patrick von Platen
ed616bd8a8 [LoRA] Add LoRA training script (#1884)
* [Lora] first upload

* add first lora version

* upload

* more

* first training

* up

* correct

* improve

* finish loaders and inference

* up

* up

* fix more

* up

* finish more

* finish more

* up

* up

* change year

* revert year change

* Change lines

* Add cloneofsimo as co-author.

Co-authored-by: Simo Ryu <cloneofsimo@gmail.com>

* finish

* fix docs

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>

* upload

* finish

Co-authored-by: Simo Ryu <cloneofsimo@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2023-01-18 18:05:51 +01:00
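At inference time, weights produced by the LoRA training script are loaded onto the UNet's attention layers. A sketch assuming the `load_attn_procs` loader that shipped with this release; the output path is a placeholder:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Placeholder path: the --output_dir the training script wrote to.
pipe.unet.load_attn_procs("path/to/lora-output-dir")
# `scale` blends the LoRA weights with the frozen base weights (0 = base only).
image = pipe("a photo of sks dog", cross_attention_kwargs={"scale": 0.8}).images[0]
```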
Suraj Patil
ac3fc64906 fix dit doc header (#2027)
The link in the main heading isn't rendered on the doc page; it displays the text as is.
2023-01-18 10:31:24 +01:00
Kashif Rasul
37d113cce7 DiT Pipeline (#1806)
* added dit model

* import

* initial pipeline

* initial convert script

* initial pipeline

* make style

* raise valueerror

* single function

* rename classes

* use DDIMScheduler

* timesteps embedder

* samples to cpu

* fix var names

* fix numpy type

* use timesteps class for proj

* fix typo

* fix arg name

* flip_sin_to_cos and better var names

* fix C shape cal

* make style

* remove unused imports

* cleanup

* add back patch_size

* initial dit doc

* typo

* Update docs/source/api/pipelines/dit.mdx

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* added copyright license headers

* added example usage and toc

* fix variable names asserts

* remove comment

* added docs

* fix typo

* upstream changes

* set proper device for drop_ids

* added initial dit pipeline test

* update docs

* fix imports

* make fix-copies

* isort

* fix imports

* get rid of more magic numbers

* fix code when guidance is off

* remove block_kwargs

* cleanup script

* removed to_2tuple

* use FeedForward class instead of another MLP

* style

* work on merging DiTBlock with BasicTransformerBlock

* added missing final_dropout and args to BasicTransformerBlock

* use norm from block

* fix arg

* remove unused arg

* fix call to class_embedder

* use timesteps

* make style

* attn_output gets multiplied

* removed commented code

* use Transformer2D

* use self.is_input_patches

* fix flags

* fixed conversion to use Transformer2DModel

* fixes for pipeline

* remove dit.py

* fix timesteps device

* use randn_tensor and fix fp16 inf.

* timesteps_emb already the right dtype

* fix dit test class

* fix test and style

* fix norm2 usage in vq-diffusion

* added author names to pipeline and imagenet labels link

* fix tests

* use norm_type as string

* rename dit to transformer

* fix name

* fix test

* set  norm_type = "layer" by default

* fix tests

* do not skip common tests

* Update src/diffusers/models/attention.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* revert AdaLayerNorm API

* fix norm_type name

* make sure all components are in eval mode

* revert norm2 API

* compact

* finish deprecation

* add slow tests

* remove @

* refactor some stuff

* upload

* Update src/diffusers/pipelines/dit/pipeline_dit.py

* finish more

* finish docs

* improve docs

* finish docs

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: William Berman <WLBberman@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-17 23:09:29 +01:00
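A minimal usage sketch for the new class-conditional pipeline, assuming the `facebook/DiT-XL-2-256` checkpoint:

```python
import torch
from diffusers import DiTPipeline

pipe = DiTPipeline.from_pretrained(
    "facebook/DiT-XL-2-256", torch_dtype=torch.float16
).to("cuda")

# DiT is conditioned on ImageNet classes; ids can be looked up by name.
class_ids = pipe.get_label_ids(["white shark", "golden retriever"])
images = pipe(class_labels=class_ids, num_inference_steps=25).images
```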
Pedro Cuenca
7e29b747f9 Check k-diffusion version is at least 0.0.12 (#2022)
* Check k-diffusion version is at least 0.0.12

* make style
2023-01-17 21:16:46 +01:00
Jerry Jiarui XU
a43bdd01cd [Flax] Add Flax inpainting impl (#1966)
* [Flax] Add Flax inpainting impl

* fixed copies, add README.md

* fixed README.md

* add test

* format

* update README.md
2023-01-17 10:42:04 +01:00
Patrick von Platen
f77ff56158 [Docs] No more autocast (#2021)
no more autocast
2023-01-17 10:31:25 +01:00
Suraj Patil
f861cde14f [train_unconditional] fix LR scheduler init (#2010)
fix lr scheduler
2023-01-17 10:11:46 +01:00
William Dalheim
b2ea8a84e9 Change PNDMPipeline to use PNDMScheduler (#2003)
* pndmpipeline uses pndmscheduler

* formatted pipeline_pndm
2023-01-16 15:34:59 +01:00
Will Berman
07c0fe4b87 Use pipeline tests mixin for UnCLIP pipeline tests + unCLIP MPS fixes (#1908)
re: https://github.com/huggingface/diffusers/issues/1857

We relax some of the checks to deal with unCLIP reproducibility issues, mainly by checking the average pixel difference (measured within 0-255) instead of the max pixel difference (measured within 0-1).

- [x] add mixin to UnCLIPPipelineFastTests
- [x] add mixin to UnCLIPImageVariationPipelineFastTests
- [x] Move UnCLIPPipeline flags in mixin to base class
- [x] Small MPS fixes for F.pad and F.interpolate
- [x] Made test unCLIP model's dimensions smaller to run tests faster
2023-01-16 15:21:58 +01:00
Haofan Wang
1e651ca2c9 Fix typos in ColossalAI example (#2001)
fix typos
2023-01-16 15:21:04 +01:00
Patrick von Platen
522f8aa7b2 [Black] Update black library (#2007) 2023-01-16 15:16:28 +01:00
Patrick von Platen
8a3f0c1f71 [Conversion] Improve safetensors (#1989) 2023-01-16 14:26:56 +01:00
Patrick von Platen
f6a5c359cc [Community] Fix merger (#2006)
* [Community] Fix merger

* finish
2023-01-16 14:25:25 +01:00
蓝色的秋风
651c5adf8a [Conversion] Support convert diffusers to safetensors (#1996)
fix: support diffusers to safetensors
2023-01-16 12:58:01 +01:00
Erin
cc2cc00d20 Add tests for 2D UNet blocks (#1945)
* test unet blocks 2d

* change to randn_tensor

* mps flaky
2023-01-16 12:53:05 +01:00
Pedro Cuenca
8f58159159 Fix a couple typos in Dreambooth readme (#2004)
Fix a couple typos in Dreambooth readme.
2023-01-16 11:38:07 +01:00
Sayak Paul
216d190178 Update README.md to include our blog post (#1998)
* Update README.md

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-01-16 09:16:54 +01:00
Vladimir Sotnikov
9b37ed33b5 [SD Img2Img] resize source images to multiple of 8 instead of 32 (#1571)
* [Stable Diffusion Img2Img] resize source images to integer multiple of 8 instead of 32

* [Alt Diffusion Img2Img] resize source images to multiple of 8 instead of 32

* [Img2Img] fix AltDiffusion Img2Img resolution test

* [Img2Img] add Stable Diffusion Img2Img resolution test

* [Cycle Diffusion] round resolution to multiples of 8 instead of 32

* [ONNX SD Img2Img] round resolution to multiples of 64 instead of 32

* [SD Depth2Img] round resolution to multiples of 8 instead of 32

* [Repaint] round resolution to multiples of 8 instead of 32

* fix make style

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-13 16:02:22 +01:00
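The rounding behavior this PR changes, shown in isolation (the sizes here are made up):

```python
from PIL import Image

img = Image.new("RGB", (515, 517))  # stand-in for a user-supplied image
# Round each side down to the nearest multiple of 8 (previously 32).
w, h = (x - x % 8 for x in img.size)
img = img.resize((w, h))  # -> 512 x 512
```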
Patrick von Platen
135567f18e make style 2023-01-12 20:02:41 +00:00
Walter Hugo Lopez Pinaya
9a5d3322e7 Update docstring (#1971) 2023-01-12 21:01:40 +01:00
camenduru
f73ed17961 Allow converting Flax to PyTorch by adding a "from_flax" keyword (#1900)
* from_flax

* oops

* oops

* make style with pip install -e ".[dev]"

* oops

* now code quality happy 😋

* allow_patterns += FLAX_WEIGHTS_NAME

* Update src/diffusers/pipelines/pipeline_utils.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/pipeline_utils.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/pipeline_utils.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/pipeline_utils.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/models/modeling_utils.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/pipeline_utils.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* for test

* bye bye is_flax_available()

* oops

* Update src/diffusers/models/modeling_pytorch_flax_utils.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update src/diffusers/models/modeling_pytorch_flax_utils.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update src/diffusers/models/modeling_pytorch_flax_utils.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update src/diffusers/models/modeling_utils.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update src/diffusers/models/modeling_utils.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* make style

* add test

* finish

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-01-12 20:00:35 +01:00
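A sketch of the new keyword, assuming a checkpoint that ships Flax weights (the v1-4 `flax` revision also appears in the repository README):

```python
from diffusers import StableDiffusionPipeline

# Convert Flax weights to PyTorch at load time; requires flax to be installed.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", revision="flax", from_flax=True
)
```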
Katsuya
9147c4c954 Fix unused upcast_attn flag in convert_original_stable_diffusion_to_diffusers script (#1942)
Fix unused upcast_attn flag in sd to diffusers script
2023-01-12 19:55:40 +01:00
Patrick von Platen
6d3adf6570 Fix slow tests (#1983)
* [Slow tests] Fix tests

* Update tests/pipelines/karras_ve/test_karras_ve.py
2023-01-12 18:24:51 +01:00
Patrick von Platen
dbdd585cad Example tests (#1982)
* Example tests

* fix
2023-01-12 17:39:37 +01:00
klopsahlong
7f0eb35af3 Research project multi subject dreambooth (#1948)
* implemented multi subject dreambooth in research_projects

* minor edits to readme

* added style and quality fixes

Co-authored-by: Krista Opsahl-Ong <kristaopsahlong@gmail.com>
2023-01-12 11:42:35 +01:00
Haofan Wang
40aa162808 [Docs] Update README.md (#1960)
Update README.md
2023-01-12 11:34:10 +01:00
Patrick von Platen
f06e4e5579 make style 2023-01-12 11:33:11 +01:00
Patrick von Platen
57f7d25934 [CPU offload] correct cpu offload (#1968)
* [CPU offload] correct cpu offload

* [CPU offload] correct cpu offload

* finish

* finish

* Update docs/source/en/optimization/fp16.mdx

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update src/diffusers/pipelines/alt_diffusion/pipeline_alt_diffusion.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-01-12 11:23:52 +01:00
Michael Krasnyk
50b6513531 Update CLIPGuidedStableDiffusion.feature_extractor.size to fix TypeError (#1938)
TypeError: Size should be int or sequence. Got <class 'dict'>
2023-01-11 10:52:48 +01:00
Patrick von Platen
d1d5451b64 [Community] Correct checkpoint merger (#1965) 2023-01-10 15:11:50 +01:00
Patrick von Platen
f6f1ec3a7c allow loading ddpm models into ddim (#1932) 2023-01-10 14:52:32 +01:00
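A sketch of what #1932 enables — initializing a DDIM scheduler from a checkpoint that was saved with a DDPM scheduler config (the repo id here is an assumption):

```python
from diffusers import DDIMScheduler

# Read the DDPM checkpoint's scheduler config into a DDIM scheduler.
scheduler = DDIMScheduler.from_pretrained("google/ddpm-cifar10-32", subfolder="scheduler")
```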
Patrick von Platen
beb932c5d1 [Conversion SD] Make sure weirdly sorted keys work as well (#1959) 2023-01-10 01:23:14 +01:00
andreemic
4401e6aa2b fix typo in imagic_stable_diffusion.py (#1956)
changed the named arg passed to self.tokenizer from `truncaton` to `truncation`, per the Tokenizer docs (https://huggingface.co/docs/tokenizers/api/tokenizer#tokenizers.Tokenizer.truncation)
2023-01-09 20:34:19 +01:00
Fazzie-Maqianli
089f0f4c98 update to latest colossalai (#1951) 2023-01-09 19:47:41 +01:00
Patrick von Platen
aba2a65d6a Add automatic doc sorting (#1940)
* automatically sort docs

* add new check toc doc

* add new check toc doc

* add

* add new check toc doc

* add

* add new check toc doc

* correct

* finalize
2023-01-06 17:24:47 +01:00
vvssttkk
9f4c4f5e82 fix path to logo (#1939) 2023-01-06 16:12:30 +01:00
Patrick von Platen
409387889d [Conversion] Make sure ema weights are extracted correctly (#1937)
* [Conversion] Make sure ema weights are extracted correctly

* up

* finish
2023-01-06 07:08:39 +01:00
Patrick von Platen
2533f92532 [Stable Diffusion Guide] 101 Stable Diffusion Guide directly into the docs (#1927)
* finish

* improve

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-01-05 21:14:23 +01:00
Patrick von Platen
f6af0d1f33 move to intro 2023-01-05 20:57:43 +01:00
Will Berman
247b5feea1 [dreambooth] low precision guard (#1916)
* [dreambooth] low precision guard

* fix

* add docs to cli args

* Update examples/dreambooth/train_dreambooth.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* style

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-05 16:54:56 +01:00
Shubhamai
7101c7316b [StableDiffusionImg2Img] validating input type (#1913)
* [StableDiffusionImg2Img] validating input type

* fixing tests

* running make style

* make fix-copies
2023-01-05 12:01:00 +01:00
Chanran Kim
f6f4176294 [Docs] Add TRANSLATING.md file (#1920)
* init for korean docs

* edit build yml file for multi language docs

* edit one more build yml file for multi language docs

* add title for get_frontmatter error

* add translating.md

* default language for docs is en

* Update docs/TRANSLATING.md

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-05 09:42:43 +01:00
Fazzie-Maqianli
d8062ad700 Feature/colossalai (#1793)
Support ColossalAI for Dreambooth
2023-01-05 08:53:42 +01:00
qsh-zh
be99201a56 feat : add log-rho deis multistep scheduler (#1432)
* feat: add log-rho DEIS multistep scheduler

* docs: fix typo

* docs: add docs for the implemented algorithm

* docs: remove duplicate ref

* finish deis

* add docs

* fix

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-05 00:09:30 +01:00
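The new scheduler drops into existing pipelines via the usual config swap; a sketch (model id assumed):

```python
from diffusers import DEISMultistepScheduler, DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# Replace the default scheduler, reusing its saved configuration.
pipe.scheduler = DEISMultistepScheduler.from_config(pipe.scheduler.config)
image = pipe("an astronaut in a garden", num_inference_steps=25).images[0]
```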
Patrick von Platen
9b63854886 Improve reproduceability 2/3 (#1906)
* [Repro] Correct reproducibility

* up

* up

* up

* up

* need better image

* allow conversion from no state dict checkpoints

* up

* up

* up

* up

* check tensors

* check tensors

* check tensors

* check tensors

* next try

* up

* up

* better name

* up

* up

* Apply suggestions from code review

* correct more

* up

* replace all torch randn

* fix

* correct

* correct

* finish

* fix more

* up
2023-01-04 23:51:17 +01:00
Peter Willemsen
67e2f95cc4 New Pipeline: Tiled-upscaling with depth perception to avoid blurry spots (#1615)
* added first version of the tiled upscaling pipeline

* reformatted to pass code quality tests
2023-01-04 23:07:58 +01:00
Chanran Kim
75d53cc839 Init for korean docs (#1910)
* init for korean docs

* edit build yml file for multi language docs

* edit one more build yml file for multi language docs

* add title for get_frontmatter error
2023-01-04 22:59:42 +01:00
Erin
9e17983d9f Test ResnetBlock2D (#1850)
* test resnet block

* fix code format required by isort

* add torch device

* nit
2023-01-04 22:57:32 +01:00
Patrick von Platen
cb8a3dbe34 make style 2023-01-04 21:50:17 +00:00
Yasyf Mohamedali
bcd6f3f9ce Various Fixes for Flax Dreambooth (#1782)
* Various Fixes for Flax Dreambooth

- Correctly update the progress bar every epoch
- Allow specifying a pretrained VAE
- Allow specifying a revision to pretrained models
- Cache compiled models between invocations (speeds up TPU execution a lot!)
- Save intermediate checkpoints by specifying `save_steps`

* Don't die when save_steps is not set.

* Address CR

* Address comments

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-01-04 22:49:56 +01:00
Alex Redden
19a0ce4a47 Fix lr-scaling store_true & default=True cli argument for textual_inversion training. (#1090)
Fix default lr-scaling cli argument
2023-01-04 15:43:41 +01:00
Yasyf Mohamedali
856331c61b Support training SD V2 with Flax (#1783)
* Support training SD V2 with Flax

Mostly involves supporting a v_prediction scheduler.

The implementation in #1777 doesn't take into account a recent refactor of `scheduling_utils_flax`, so this should be used instead.

* Add to other top-level files.
2023-01-04 13:19:22 +01:00
Manfred Lindmark
f7154f859c Fix --resume_from_checkpoint step in train_text_to_image.py (#1914)
fix resume step in train_text_to_image example
2023-01-04 13:05:55 +01:00
Joqsan
675ef1ffbd fix: DDPMScheduler.set_timesteps() (#1912) 2023-01-04 13:02:50 +01:00
Patrick von Platen
d67c305120 allow conversion from no state dict checkpoints 2023-01-03 19:48:13 +00:00
Patrick von Platen
2bd53a940c [Docs] Remove duplicated API doc string (#1901)
only have api docstring once for sD
2023-01-03 19:11:48 +01:00
Patrick von Platen
8ed08e4270 [Deterministic torch randn] Allow tensors to be generated on CPU (#1902)
* [Deterministic torch randn] Allow tensors to be generated on CPU

* fix more

* up

* fix more

* up

* Update src/diffusers/utils/torch_utils.py

Co-authored-by: Anton Lozhkov <anton@huggingface.co>

* Apply suggestions from code review

* up

* up

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-01-03 18:22:40 +01:00
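The practical upshot of #1902: a CPU-seeded generator can drive a pipeline running on GPU, so initial latents match across machines. A minimal sketch:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("cuda")
# The generator lives on CPU even though sampling runs on CUDA, which keeps
# the initial noise identical across GPU models and driver versions.
generator = torch.Generator(device="cpu").manual_seed(0)
image = pipe("a photo of a cat", generator=generator).images[0]
```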
neverix
0df83c79e4 Fixes in comments in SD2 D2I (#1903) 2023-01-03 16:24:36 +01:00
Robert Dargavel Smith
4a7e4cec38 Add conditional generation to AudioDiffusionPipeline (#1826)
* Add conditional generation

* add fast test for conditional audio generation
2023-01-03 14:09:14 +01:00
aengusng8
f45c675d2c [addresses issue #1642] add add_noise to scheduling-sde-ve (#1827)
* add add_noise to scheduling-sde-ve

* run Black formater
2023-01-03 14:08:41 +01:00
Anton Lozhkov
1bf4f0da7e Add accelerate and xformers versions to diffusers-cli env (#1898)
Add accelerate and xformers to diffusers-cli env
2023-01-03 13:51:27 +01:00
Anton Lozhkov
f17fae641c Add UnCLIPImageVariationPipeline to dummy imports (#1897)
* Add UnCLIPImageVariationPipeline to dummy imports

* style
2023-01-03 11:57:56 +01:00
YiYi Xu
da31075700 updated doc for stable diffusion pipelines (#1770)
* add a doc page for each pipeline under api/pipelines/stable_diffusion
* add pipeline examples to docstrings
* updated stable_diffusion_2 page
* updated default markdown syntax to list methods based on https://github.com/huggingface/diffusers/pull/1870
* add function decorator

Co-authored-by: yiyixuxu <yixu@Yis-MacBook-Pro.lan>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-02 11:35:51 -10:00
Pedro Cuenca
8c14ca3d43 Fixes to the help for report_to in training scripts (#1888)
Fixes to the help for report_to in training scripts.
2023-01-02 15:53:28 +01:00
Suraj Patil
fa1f4701e8 [examples] misc fixes (#1886)
* misc fixes

* more comments

* Update examples/textual_inversion/textual_inversion.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* set transformers verbosity to warning

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-02 14:09:01 +01:00
agizmo
423c3a4cc6 Update ONNX Pipelines to use np.float64 instead of np.float (#1789)
NumPy 1.24 removed the "float" scalar alias, as it was deprecated in v1.20.
https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
https://numpy.org/devdocs/release/1.24.0-notes.html#expired-deprecations
2023-01-02 13:21:49 +01:00
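The replacement in plain NumPy terms:

```python
import numpy as np

# np.float was merely an alias for the builtin float; it was deprecated in
# NumPy 1.20 and removed in 1.24, so the explicit dtype is used instead.
arr = np.array([1.0, 2.0, 3.0], dtype=np.float64)
```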
Pedro Cuenca
f769d74b0f Fix typo in train_dreambooth_inpaint (#1885)
Fix typo in train_dreambooth_inpaint.
2023-01-02 11:50:58 +01:00
Patrick von Platen
21bbc633c4 [Attention] Finish refactor attention file (#1879)
* [Attention] Finish refactor attention file

* correct more

* fix

* more fixes

* correct

* up
2023-01-01 19:18:10 +01:00
Suraj Patil
62608a9102 [train_text_to_image] allow using non-ema weights for training (#1834)
* allow using non-ema weights for training

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* address more review comment

* reorganise a few lines

* always pad text to max_length to match original training

* fix collate_fn

* remove unused code

* don't prepare ema_unet, don't register lr scheduler

* style

* assert => ValueError

* add allow_tf32

* set log level

* fix comment

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-30 21:49:47 +01:00
Suraj Patil
e4fe941312 [examples] update loss computation (#1861)
update loss computation
2022-12-30 14:32:38 +01:00
Patrick von Platen
ac3738462b [Docs] Improve docs (#1870)
* [Docs] Improve docs

* up
2022-12-30 13:50:01 +01:00
Pedro Cuenca
a6e2c1fe5c Fix ema decay (#1868)
* Fix ema decay and clarify nomenclature.

* Rename var.
2022-12-30 12:42:42 +01:00
Patrick von Platen
b28ab30215 [Unclip] Make sure text_embeddings & image_embeddings can directly be passed to enable interpolation tasks. (#1858)
* [Unclip] Make sure latents can be reused

* allow one to directly pass embeddings

* up

* make unclip for text work

* finish allowing to pass embeddings

* correct more

* make style
2022-12-30 12:18:19 +01:00
Patrick von Platen
29b2c93c90 Make repo structure consistent (#1862)
* move files a bit

* more refactors

* fix more

* more fixes

* fix more onnx

* make style

* upload

* fix

* up

* fix more

* up again

* up

* small fix

* Update src/diffusers/__init__.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* correct

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-30 11:51:08 +01:00
Simon Kirsten
ab0e92fdc8 Flax: Fix img2img and align with other pipeline (#1824)
* Flax: Add components function

* Flax: Fix img2img and align with other pipeline

* Flax: Fix PRNGKey type

* Refactor strength to start_timestep

* Fix preprocess images

* Fix processed_images dimensions

* latents.shape -> latents_shape

* Fix typo

* Remove "static" comment

* Remove unnecessary optional types in _generate

* Apply doc-builder code style.

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-29 18:56:03 +01:00
Suraj Patil
9ea7052f0e [textual inversion] add gradient checkpointing and small fixes. (#1848)
Co-authored-by: Henrik Forstén <henrik.forsten@gmail.com>

* update TI script

* make flake happy

* fix typo
2022-12-29 15:02:29 +01:00
Patrick von Platen
03bf877bf4 [StableDiffusionInpaint] Correct test (#1859) 2022-12-29 14:47:56 +01:00
Patrick von Platen
f2e521c499 [Dtype] Align dtype casting behavior with Transformers and Accelerate (#1725)
* [Dtype] Align automatic dtype

* up

* up

* fix

* re-add accelerate
2022-12-29 14:36:02 +01:00
Patrick von Platen
debc74f442 [Versatile Diffusion] Fix cross_attention_kwargs (#1849)
fix versatile
2022-12-28 18:49:04 +01:00
Partho
2ba42aa9b1 [Community Pipeline] MagicMix (#1839)
* initial

* type hints

* update scheduler type hint

* add to README

* add example generation to README

* v -> mix_factor

* load scheduler from pretrained
2022-12-28 17:02:53 +01:00
Will Berman
53c8147afe unCLIP image variation (#1781)
* unCLIP image variation

* remove prior comment re: @pcuenca

* stable diffusion -> unCLIP re: @pcuenca

* add copy froms re: @patil-suraj
2022-12-28 14:17:09 +01:00
kabachuha
cf5265ad41 Allow selecting precision to make Dreambooth class images (#1832)
* allow selecting precision to make DB class images

addresses #1831

* add prior_generation_precision argument

* correct prior_generation_precision's description

Co-authored-by: Suraj Patil <surajp815@gmail.com>

Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-12-27 19:51:32 +01:00
Katsuya
8874027efc Make xformers optional even if it is available (#1753)
* Make xformers optional even if it is available

* Raise exception if xformers is used but not available

* Rename use_xformers to enable_xformers_memory_efficient_attention

* Add a note about xformers in README

* Reformat code style
2022-12-27 19:47:50 +01:00
Christopher Friesen
b693aff795 fix: resize transform now preserves aspect ratio (#1804) 2022-12-27 15:10:25 +01:00
William Held
8a4c3e50bd Width was typoed as weight (#1800)
* Width was typoed as weight

* Run Black
2022-12-27 15:09:21 +01:00
Pedro Cuenca
68e24259af Avoid duplicating PyTorch + safetensors downloads. (#1836) 2022-12-27 14:58:15 +01:00
camenduru
1f1b6c6544 Device to use (e.g. cpu, cuda:0, cuda:1, etc.) (#1844)
* Device to use (e.g. cpu, cuda:0, cuda:1, etc.)

* "cuda" if torch.cuda.is_available() else "cpu"
2022-12-27 14:42:56 +01:00
Pedro Cuenca
df2b548e89 Make safety_checker optional in more pipelines (#1796)
* Make safety_checker optional in more pipelines.

* Remove inappropriate comment in inpaint pipeline.

* InPaint Test: set feature_extractor to None.

* Remove import

* img2img test: set feature_extractor to None.

* inpaint sd2 test: set feature_extractor to None.

Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-12-25 21:58:45 +01:00
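With this change, the checker can be opted out of at load time in more pipelines; a sketch (disable at your own risk):

```python
from diffusers import StableDiffusionPipeline

# Passing None disables the safety checker; the matching feature_extractor
# can then be dropped as well, as the tests in this PR do.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", safety_checker=None, feature_extractor=None
)
```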
Daquan Lin
b6d4702301 fix small mistake in annotation: 32 -> 64 (#1780)
Fix inconsistencies between code and comments in the function 'preprocess'
2022-12-24 19:56:57 +01:00
Suraj Patil
9be94d9c66 [textual_inversion] unwrap_model text encoder before accessing weights (#1816)
* unwrap_model text encoder before accessing weights

* fix another call

* fix the right call
2022-12-23 16:46:24 +01:00
Patrick von Platen
f2acfb67ac Remove hardcoded names from PT scripts (#1778)
* Remove hardcoded names from PT scripts

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-12-23 15:36:29 +01:00
Prathik Rao
8aa4372aea reorder model wrap + bug fix (#1799)
* reorder model wrap

* bug fix

Co-authored-by: Prathik Rao <prathikrao@microsoft.com>
2022-12-22 14:51:47 +01:00
Pedro Cuenca
6043838971 Fix OOM when using PyTorch with JAX installed. (#1795)
Don't initialize Jax on startup.
2022-12-21 14:07:24 +01:00
Patrick von Platen
4125756e88 Refactor cross attention and allow mechanism to tweak cross attention function (#1639)
* first proposal

* rename

* up

* Apply suggestions from code review

* better

* up

* finish

* up

* rename

* correct versatile

* up

* up

* up

* up

* fix

* Apply suggestions from code review

* make style

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* add error message

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-20 18:49:05 +01:00
Dhruv Naik
a9190badf7 Add Flax stable diffusion img2img pipeline (#1355)
* add flax img2img pipeline

* update pipeline

* black format file

* remove argg from get_timesteps

* update get_timesteps

* fix bug: make use of timesteps for for_loop

* black file

* black, isort, flake8

* update docstring

* update readme

* update flax img2img readme

* update sd pipeline init

* Update src/diffusers/pipelines/stable_diffusion/pipeline_flax_stable_diffusion_img2img.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_flax_stable_diffusion_img2img.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* update inits

* revert change

* update var name to image, typo

* update readme

* return new t_start instead of modified timestep

* black format

* isort files

* update docs

* fix-copies

* update prng_seed typing

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-20 16:25:08 +01:00
Suraj Patil
d07f73003d Fix num images per prompt unclip (#1787)
* use repeat_interleave

* fix repeat

* Trigger Build

* don't install accelerate from main

* install released accelerate for MPS test

* Remove additional accelerate installation from main.

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-20 16:03:38 +01:00
Pedro Cuenca
a6fb9407fd Dreambooth docs: minor fixes (#1758)
* Section header for in-painting, inference from checkpoint.

* Inference: link to section to perform inference from checkpoint.

* Move Dreambooth in-painting instructions to the proper place.
2022-12-20 08:39:16 +01:00
Patrick von Platen
261a448c6a Correct hf hub download (#1767)
* allow model download when no internet

* up

* make style
2022-12-20 02:07:15 +01:00
Simon Kirsten
f106ab40b3 [Flax] Stateless schedulers, fixes and refactors (#1661)
* [Flax] Stateless schedulers, fixes and refactors

* Remove scheduling_common_flax and some renames

* Update src/diffusers/schedulers/scheduling_pndm_flax.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-20 01:42:41 +01:00
Emil Bogomolov
d87cc15977 expose polynomial:power and cosine_with_restarts:num_cycles params (#1737)
* expose polynomial:power and cosine_with_restarts:num_cycles using get_scheduler func, add it to train_dreambooth.py

* fix formatting

* fix style

* Update src/diffusers/optimization.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-20 01:41:37 +01:00
Patrick von Platen
e29dc97215 make style 2022-12-20 01:38:45 +01:00
Ilmari Heikkinen
8e4733b3c3 Only test for xformers when enabling them #1773 (#1776)
* only check for xformers when xformers are enabled

* only test for xformers when enabling them
2022-12-20 01:38:28 +01:00
Prathik Rao
847daf25c7 update train_unconditional_ort.py (#1775)
* reflect changes

* run make style

Co-authored-by: Prathik Rao <prathikrao@microsoft.com>
Co-authored-by: Prathik Rao <prathikrao@microsoft.com@orttrainingdev7.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
2022-12-19 23:58:55 +01:00
Pedro Cuenca
9f8c915a75 [Dreambooth] flax fixes (#1765)
* Fail if there are fewer images than the effective batch size.

* Remove lr-scheduler arg as it's currently ignored.

* Make guidance_scale work for batch_size > 1.
2022-12-19 20:42:25 +01:00
Anton Lozhkov
8331da4683 Bump to 0.12.0.dev0 (#1771) 2022-12-19 18:44:08 +01:00
Anton Lozhkov
f1a32203aa [Tests] Fix UnCLIP cpu offload tests (#1769) 2022-12-19 18:25:08 +01:00
Nan Liu
6f15026330 update composable diffusion for an updated diffuser library (#1697)
* update composable diffusion for an updated diffuser library

* fix style/quality for code

* Revert "fix style/quality for code"

This reverts commit 71f2349763.

* update style

* reduce memory usage by computing score sequentially
2022-12-19 18:03:40 +01:00
354 changed files with 23704 additions and 7427 deletions


@@ -13,5 +13,6 @@ jobs:
with:
commit_sha: ${{ github.sha }}
package: diffusers
languages: en ko
secrets:
token: ${{ secrets.HUGGINGFACE_PUSH }}


@@ -14,3 +14,4 @@ jobs:
commit_sha: ${{ github.event.pull_request.head.sha }}
pr_number: ${{ github.event.number }}
package: diffusers
languages: en ko


@@ -61,8 +61,8 @@ jobs:
- name: Install dependencies
run: |
python -m pip install -e .[quality,test]
python -m pip install git+https://github.com/huggingface/accelerate
python -m pip install -U git+https://github.com/huggingface/transformers
python -m pip install git+https://github.com/huggingface/accelerate
- name: Environment
run: |
@@ -159,4 +159,4 @@ jobs:
uses: actions/upload-artifact@v2
with:
name: torch_mps_test_reports
path: reports
path: reports


@@ -59,8 +59,8 @@ jobs:
run: |
apt-get update && apt-get install libsndfile1-dev -y
python -m pip install -e .[quality,test]
python -m pip install git+https://github.com/huggingface/accelerate
python -m pip install -U git+https://github.com/huggingface/transformers
python -m pip install git+https://github.com/huggingface/accelerate
- name: Environment
run: |


@@ -61,8 +61,8 @@ jobs:
- name: Install dependencies
run: |
python -m pip install -e .[quality,test]
python -m pip install git+https://github.com/huggingface/accelerate
python -m pip install -U git+https://github.com/huggingface/transformers
python -m pip install git+https://github.com/huggingface/accelerate
- name: Environment
run: |
@@ -153,4 +153,4 @@ jobs:
uses: actions/upload-artifact@v2
with:
name: examples_test_reports
path: reports
path: reports


@@ -45,12 +45,14 @@ quality:
isort --check-only $(check_dirs)
flake8 $(check_dirs)
doc-builder style src/diffusers docs/source --max_len 119 --check_only --path_to_docs docs/source
python utils/check_doc_toc.py
# Format source code automatically and check is there are any problems left that need manual fixing
extra_style_checks:
python utils/custom_init_isort.py
doc-builder style src/diffusers docs/source --max_len 119 --path_to_docs docs/source
python utils/check_doc_toc.py --fix_and_overwrite
# this target runs checks on all files and potentially modifies some of them


@@ -1,6 +1,6 @@
<p align="center">
<br>
<img src="https://github.com/huggingface/diffusers/raw/main/docs/source/imgs/diffusers_library.jpg" width="400"/>
<img src="./docs/source/en/imgs/diffusers_library.jpg" width="400"/>
<br>
<p>
<p align="center">
@@ -235,6 +235,102 @@ images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).
images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:])))
```
Diffusers also has an Image-to-Image generation pipeline with Flax/Jax
```python
import jax
import numpy as np
import jax.numpy as jnp
from flax.jax_utils import replicate
from flax.training.common_utils import shard
import requests
from io import BytesIO
from PIL import Image
from diffusers import FlaxStableDiffusionImg2ImgPipeline
def create_key(seed=0):
return jax.random.PRNGKey(seed)
rng = create_key(0)
url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
response = requests.get(url)
init_img = Image.open(BytesIO(response.content)).convert("RGB")
init_img = init_img.resize((768, 512))
prompts = "A fantasy landscape, trending on artstation"
pipeline, params = FlaxStableDiffusionImg2ImgPipeline.from_pretrained(
"CompVis/stable-diffusion-v1-4", revision="flax",
dtype=jnp.bfloat16,
)
num_samples = jax.device_count()
rng = jax.random.split(rng, jax.device_count())
prompt_ids, processed_image = pipeline.prepare_inputs(prompt=[prompts]*num_samples, image=[init_img]*num_samples)
p_params = replicate(params)
prompt_ids = shard(prompt_ids)
processed_image = shard(processed_image)
output = pipeline(
prompt_ids=prompt_ids,
image=processed_image,
params=p_params,
prng_seed=rng,
strength=0.75,
num_inference_steps=50,
jit=True,
height=512,
width=768).images
output_images = pipeline.numpy_to_pil(np.asarray(output.reshape((num_samples,) + output.shape[-3:])))
```
Diffusers also has a Text-guided inpainting pipeline with Flax/Jax
```python
import jax
import numpy as np
from flax.jax_utils import replicate
from flax.training.common_utils import shard
import PIL
import requests
from io import BytesIO
from diffusers import FlaxStableDiffusionInpaintPipeline
def download_image(url):
response = requests.get(url)
return PIL.Image.open(BytesIO(response.content)).convert("RGB")
img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
init_image = download_image(img_url).resize((512, 512))
mask_image = download_image(mask_url).resize((512, 512))
pipeline, params = FlaxStableDiffusionInpaintPipeline.from_pretrained("xvjiarui/stable-diffusion-2-inpainting")
prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
prng_seed = jax.random.PRNGKey(0)
num_inference_steps = 50
num_samples = jax.device_count()
prompt = num_samples * [prompt]
init_image = num_samples * [init_image]
mask_image = num_samples * [mask_image]
prompt_ids, processed_masked_images, processed_masks = pipeline.prepare_inputs(prompt, init_image, mask_image)
# shard inputs and rng
params = replicate(params)
prng_seed = jax.random.split(prng_seed, jax.device_count())
prompt_ids = shard(prompt_ids)
processed_masked_images = shard(processed_masked_images)
processed_masks = shard(processed_masks)
images = pipeline(prompt_ids, processed_masks, processed_masked_images, params, prng_seed, num_inference_steps, jit=True).images
images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:])))
```
### Image-to-Image text-guided generation with Stable Diffusion
The `StableDiffusionImg2ImgPipeline` lets you pass a text prompt and an initial image to condition the generation of new images.


@@ -34,8 +34,8 @@ RUN python3 -m pip install --no-cache-dir --upgrade pip && \
datasets \
hf-doc-builder \
huggingface-hub \
Jinja2 \
librosa \
modelcards \
numpy \
scipy \
tensorboard \


@@ -36,8 +36,8 @@ RUN python3 -m pip install --no-cache-dir --upgrade pip && \
datasets \
hf-doc-builder \
huggingface-hub \
Jinja2 \
librosa \
modelcards \
numpy \
scipy \
tensorboard \


@@ -34,8 +34,8 @@ RUN python3 -m pip install --no-cache-dir --upgrade pip && \
datasets \
hf-doc-builder \
huggingface-hub \
Jinja2 \
librosa \
modelcards \
numpy \
scipy \
tensorboard \


@@ -34,8 +34,8 @@ RUN python3 -m pip install --no-cache-dir --upgrade pip && \
datasets \
hf-doc-builder \
huggingface-hub \
Jinja2 \
librosa \
modelcards \
numpy \
scipy \
tensorboard \


@@ -33,8 +33,8 @@ RUN python3 -m pip install --no-cache-dir --upgrade pip && \
datasets \
hf-doc-builder \
huggingface-hub \
Jinja2 \
librosa \
modelcards \
numpy \
scipy \
tensorboard \


@@ -33,8 +33,8 @@ RUN python3 -m pip install --no-cache-dir --upgrade pip && \
datasets \
hf-doc-builder \
huggingface-hub \
Jinja2 \
librosa \
modelcards \
numpy \
scipy \
tensorboard \


@@ -54,7 +54,7 @@ doc-builder preview {package_name} {path_to_docs}
For example:
```bash
doc-builder preview diffusers docs/source/
doc-builder preview diffusers docs/source/en
```
The docs will be viewable at [http://localhost:3000](http://localhost:3000). You can also preview the docs once you have opened a PR. You will see a bot add a comment to a link where the documentation with your changes lives.
@@ -126,23 +126,28 @@ When adding a new pipeline:
- Paper abstract
- Tips and tricks and how to use it best
- Possible an end-to-end example of how to use it
- Add all the pipeline classes that should be linked in the diffusion model. These classes should be added using our Markdown syntax. Usually as follows:
```
## XXXPipeline
[[autodoc]] XXXPipeline
```
This will include every public method of the pipeline that is documented. You can specify which methods should be in the docs:
- Add all the pipeline classes that should be linked in the diffusion model. These classes should be added using our Markdown syntax. By default as follows:
```
## XXXPipeline
[[autodoc]] XXXPipeline
- all
- __call__
```
This will include every public method of the pipeline that is documented, as well as the `__call__` method that is not documented by default. If you just want to add additional methods that are not documented, you can put the list of all methods to add in a list that contains `all`.
```
[[autodoc]] XXXPipeline
- all
- __call__
- enable_attention_slicing
- disable_attention_slicing
- enable_xformers_memory_efficient_attention
- disable_xformers_memory_efficient_attention
```
You can follow the same process to create a new scheduler under the `docs/source/api/schedulers` folder
### Writing source documentation
@@ -155,9 +160,9 @@ adds a link to its documentation with this syntax: \[\`XXXClass\`\] or \[\`funct
function to be in the main package.
If you want to create a link to some internal class or function, you need to
provide its path. For instance: \[\`pipeline_utils.ImagePipelineOutput\`\]. This will be converted into a link with
`pipeline_utils.ImagePipelineOutput` in the description. To get rid of the path and only keep the name of the object you are
linking to in the description, add a ~: \[\`~pipeline_utils.ImagePipelineOutput\`\] will generate a link with `ImagePipelineOutput` in the description.
provide its path. For instance: \[\`pipelines.ImagePipelineOutput\`\]. This will be converted into a link with
`pipelines.ImagePipelineOutput` in the description. To get rid of the path and only keep the name of the object you are
linking to in the description, add a ~: \[\`~pipelines.ImagePipelineOutput\`\] will generate a link with `ImagePipelineOutput` in the description.
The same works for methods so you can either use \[\`XXXClass.method\`\] or \[~\`XXXClass.method\`\].

docs/TRANSLATING.md (new file, 57 lines)

@@ -0,0 +1,57 @@
### Translating the Diffusers documentation into your language
As part of our mission to democratize machine learning, we'd love to make the Diffusers library available in many more languages! Follow the steps below if you want to help translate the documentation into your language 🙏.
**🗞️ Open an issue**
To get started, navigate to the [Issues](https://github.com/huggingface/diffusers/issues) page of this repo and check if anyone else has opened an issue for your language. If not, open a new issue by selecting the "Translation template" from the "New issue" button.
Once an issue exists, post a comment to indicate which chapters you'd like to work on, and we'll add your name to the list.
**🍴 Fork the repository**
First, you'll need to [fork the Diffusers repo](https://docs.github.com/en/get-started/quickstart/fork-a-repo). You can do this by clicking on the **Fork** button on the top-right corner of this repo's page.
Once you've forked the repo, you'll want to get the files on your local machine for editing. You can do that by cloning the fork with Git as follows:
```bash
git clone https://github.com/YOUR-USERNAME/diffusers.git
```
**📋 Copy-paste the English version with a new language code**
The documentation files are in one leading directory:
- [`docs/source`](https://github.com/huggingface/diffusers/tree/main/docs/source): All the documentation materials are organized here by language.
You'll only need to copy the files in the [`docs/source/en`](https://github.com/huggingface/diffusers/tree/main/docs/source/en) directory, so first navigate to your fork of the repo and run the following:
```bash
cd ~/path/to/diffusers/docs
cp -r source/en source/LANG-ID
```
Here, `LANG-ID` should be one of the ISO 639-1 or ISO 639-2 language codes -- see [here](https://www.loc.gov/standards/iso639-2/php/code_list.php) for a handy table.
**✍️ Start translating**
The fun part comes - translating the text!
The first thing we recommend is translating the part of the `_toctree.yml` file that corresponds to your doc chapter. This file is used to render the table of contents on the website.
> 🙋 If the `_toctree.yml` file doesn't yet exist for your language, you can create one by copy-pasting from the English version and deleting the sections unrelated to your chapter. Just make sure it exists in the `docs/source/LANG-ID/` directory!
The fields you should add are `local` (with the name of the file containing the translation; e.g. `autoclass_tutorial`), and `title` (with the title of the doc in your language; e.g. `Load pretrained instances with an AutoClass`) -- as a reference, here is the `_toctree.yml` for [English](https://github.com/huggingface/diffusers/blob/main/docs/source/en/_toctree.yml):
```yaml
- sections:
- local: pipeline_tutorial # Do not change this! Use the same name for your .md file
title: Pipelines for inference # Translate this!
...
title: Tutorials # Translate this!
```
Once you have translated the `_toctree.yml` file, you can start translating the [MDX](https://mdxjs.com/) files associated with your docs chapter.
> 🙋 If you'd like others to help you with the translation, you should [open an issue](https://github.com/huggingface/diffusers/issues) and tag @patrickvonplaten.


@@ -1,178 +0,0 @@
- sections:
- local: index
title: "🧨 Diffusers"
- local: quicktour
title: "Quicktour"
- local: installation
title: "Installation"
title: "Get started"
- sections:
- sections:
- local: using-diffusers/loading
title: "Loading Pipelines, Models, and Schedulers"
- local: using-diffusers/schedulers
title: "Using different Schedulers"
- local: using-diffusers/configuration
title: "Configuring Pipelines, Models, and Schedulers"
- local: using-diffusers/custom_pipeline_overview
title: "Loading and Adding Custom Pipelines"
title: "Loading & Hub"
- sections:
- local: using-diffusers/unconditional_image_generation
title: "Unconditional Image Generation"
- local: using-diffusers/conditional_image_generation
title: "Text-to-Image Generation"
- local: using-diffusers/img2img
title: "Text-Guided Image-to-Image"
- local: using-diffusers/inpaint
title: "Text-Guided Image-Inpainting"
- local: using-diffusers/depth2img
title: "Text-Guided Depth-to-Image"
- local: using-diffusers/reusing_seeds
title: "Reusing seeds for deterministic generation"
- local: using-diffusers/custom_pipeline_examples
title: "Community Pipelines"
- local: using-diffusers/contribute_pipeline
title: "How to contribute a Pipeline"
title: "Pipelines for Inference"
- sections:
- local: using-diffusers/rl
title: "Reinforcement Learning"
- local: using-diffusers/audio
title: "Audio"
- local: using-diffusers/other-modalities
title: "Other Modalities"
title: "Taking Diffusers Beyond Images"
title: "Using Diffusers"
- sections:
- local: optimization/fp16
title: "Memory and Speed"
- local: optimization/xformers
title: "xFormers"
- local: optimization/onnx
title: "ONNX"
- local: optimization/open_vino
title: "OpenVINO"
- local: optimization/mps
title: "MPS"
- local: optimization/habana
title: "Habana Gaudi"
title: "Optimization/Special Hardware"
- sections:
- local: training/overview
title: "Overview"
- local: training/unconditional_training
title: "Unconditional Image Generation"
- local: training/text_inversion
title: "Textual Inversion"
- local: training/dreambooth
title: "Dreambooth"
- local: training/text2image
title: "Text-to-image fine-tuning"
title: "Training"
- sections:
- local: conceptual/stable_diffusion
title: "Stable Diffusion"
- local: conceptual/philosophy
title: "Philosophy"
- local: conceptual/contribution
title: "How to contribute?"
title: "Conceptual Guides"
- sections:
- sections:
- local: api/models
title: "Models"
- local: api/diffusion_pipeline
title: "Diffusion Pipeline"
- local: api/logging
title: "Logging"
- local: api/configuration
title: "Configuration"
- local: api/outputs
title: "Outputs"
title: "Main Classes"
- sections:
- local: api/pipelines/overview
title: "Overview"
- local: api/pipelines/alt_diffusion
title: "AltDiffusion"
- local: api/pipelines/cycle_diffusion
title: "Cycle Diffusion"
- local: api/pipelines/ddim
title: "DDIM"
- local: api/pipelines/ddpm
title: "DDPM"
- local: api/pipelines/latent_diffusion
title: "Latent Diffusion"
- local: api/pipelines/latent_diffusion_uncond
title: "Unconditional Latent Diffusion"
- local: api/pipelines/paint_by_example
title: "PaintByExample"
- local: api/pipelines/pndm
title: "PNDM"
- local: api/pipelines/score_sde_ve
title: "Score SDE VE"
- local: api/pipelines/stable_diffusion
title: "Stable Diffusion"
- local: api/pipelines/stable_diffusion_2
title: "Stable Diffusion 2"
- local: api/pipelines/stable_diffusion_safe
title: "Safe Stable Diffusion"
- local: api/pipelines/stochastic_karras_ve
title: "Stochastic Karras VE"
- local: api/pipelines/dance_diffusion
title: "Dance Diffusion"
- local: api/pipelines/unclip
title: "UnCLIP"
- local: api/pipelines/versatile_diffusion
title: "Versatile Diffusion"
- local: api/pipelines/vq_diffusion
title: "VQ Diffusion"
- local: api/pipelines/repaint
title: "RePaint"
- local: api/pipelines/audio_diffusion
title: "Audio Diffusion"
title: "Pipelines"
- sections:
- local: api/schedulers/overview
title: "Overview"
- local: api/schedulers/ddim
title: "DDIM"
- local: api/schedulers/ddpm
title: "DDPM"
- local: api/schedulers/singlestep_dpm_solver
title: "Singlestep DPM-Solver"
- local: api/schedulers/multistep_dpm_solver
title: "Multistep DPM-Solver"
- local: api/schedulers/heun
title: "Heun Scheduler"
- local: api/schedulers/dpm_discrete
title: "DPM Discrete Scheduler"
- local: api/schedulers/dpm_discrete_ancestral
title: "DPM Discrete Scheduler with ancestral sampling"
- local: api/schedulers/stochastic_karras_ve
title: "Stochastic Kerras VE"
- local: api/schedulers/lms_discrete
title: "Linear Multistep"
- local: api/schedulers/pndm
title: "PNDM"
- local: api/schedulers/score_sde_ve
title: "VE-SDE"
- local: api/schedulers/ipndm
title: "IPNDM"
- local: api/schedulers/score_sde_vp
title: "VP-SDE"
- local: api/schedulers/euler
title: "Euler scheduler"
- local: api/schedulers/euler_ancestral
title: "Euler Ancestral Scheduler"
- local: api/schedulers/vq_diffusion
title: "VQDiffusionScheduler"
- local: api/schedulers/repaint
title: "RePaint Scheduler"
title: "Schedulers"
- sections:
- local: api/experimental/rl
title: "RL Planning"
title: "Experimental Features"
title: "API"

docs/source/en/_toctree.yml (new file, 204 lines)

@@ -0,0 +1,204 @@
- sections:
- local: index
title: 🧨 Diffusers
- local: quicktour
title: Quicktour
- local: stable_diffusion
title: Stable Diffusion
- local: installation
title: Installation
title: Get started
- sections:
- sections:
- local: using-diffusers/loading
title: Loading Pipelines, Models, and Schedulers
- local: using-diffusers/schedulers
title: Using different Schedulers
- local: using-diffusers/configuration
title: Configuring Pipelines, Models, and Schedulers
- local: using-diffusers/custom_pipeline_overview
title: Loading and Adding Custom Pipelines
title: Loading & Hub
- sections:
- local: using-diffusers/unconditional_image_generation
title: Unconditional Image Generation
- local: using-diffusers/conditional_image_generation
title: Text-to-Image Generation
- local: using-diffusers/img2img
title: Text-Guided Image-to-Image
- local: using-diffusers/inpaint
title: Text-Guided Image-Inpainting
- local: using-diffusers/depth2img
title: Text-Guided Depth-to-Image
- local: using-diffusers/reusing_seeds
title: Reusing seeds for deterministic generation
- local: using-diffusers/reproducibility
title: Reproducibility
- local: using-diffusers/custom_pipeline_examples
title: Community Pipelines
- local: using-diffusers/contribute_pipeline
title: How to contribute a Pipeline
title: Pipelines for Inference
- sections:
- local: using-diffusers/rl
title: Reinforcement Learning
- local: using-diffusers/audio
title: Audio
- local: using-diffusers/other-modalities
title: Other Modalities
title: Taking Diffusers Beyond Images
title: Using Diffusers
- sections:
- local: optimization/fp16
title: Memory and Speed
- local: optimization/xformers
title: xFormers
- local: optimization/onnx
title: ONNX
- local: optimization/open_vino
title: OpenVINO
- local: optimization/mps
title: MPS
- local: optimization/habana
title: Habana Gaudi
title: Optimization/Special Hardware
- sections:
- local: training/overview
title: Overview
- local: training/unconditional_training
title: Unconditional Image Generation
- local: training/text_inversion
title: Textual Inversion
- local: training/dreambooth
title: Dreambooth
- local: training/text2image
title: Text-to-image fine-tuning
- local: training/lora
title: LoRA Support in Diffusers
title: Training
- sections:
- local: conceptual/philosophy
title: Philosophy
- local: conceptual/contribution
title: How to contribute?
title: Conceptual Guides
- sections:
- sections:
- local: api/models
title: Models
- local: api/diffusion_pipeline
title: Diffusion Pipeline
- local: api/logging
title: Logging
- local: api/configuration
title: Configuration
- local: api/outputs
title: Outputs
- local: api/loaders
title: Loaders
title: Main Classes
- sections:
- local: api/pipelines/overview
title: Overview
- local: api/pipelines/alt_diffusion
title: AltDiffusion
- local: api/pipelines/audio_diffusion
title: Audio Diffusion
- local: api/pipelines/cycle_diffusion
title: Cycle Diffusion
- local: api/pipelines/dance_diffusion
title: Dance Diffusion
- local: api/pipelines/ddim
title: DDIM
- local: api/pipelines/ddpm
title: DDPM
- local: api/pipelines/dit
title: DiT
- local: api/pipelines/latent_diffusion
title: Latent Diffusion
- local: api/pipelines/paint_by_example
title: PaintByExample
- local: api/pipelines/pndm
title: PNDM
- local: api/pipelines/repaint
title: RePaint
- local: api/pipelines/stable_diffusion_safe
title: Safe Stable Diffusion
- local: api/pipelines/score_sde_ve
title: Score SDE VE
- sections:
- local: api/pipelines/stable_diffusion/overview
title: Overview
- local: api/pipelines/stable_diffusion/text2img
title: Text-to-Image
- local: api/pipelines/stable_diffusion/img2img
title: Image-to-Image
- local: api/pipelines/stable_diffusion/inpaint
title: Inpaint
- local: api/pipelines/stable_diffusion/depth2img
title: Depth-to-Image
- local: api/pipelines/stable_diffusion/image_variation
title: Image-Variation
- local: api/pipelines/stable_diffusion/upscale
title: Super-Resolution
- local: api/pipelines/stable_diffusion/pix2pix
title: InstructPix2Pix
title: Stable Diffusion
- local: api/pipelines/stable_diffusion_2
title: Stable Diffusion 2
- local: api/pipelines/stochastic_karras_ve
title: Stochastic Karras VE
- local: api/pipelines/unclip
title: UnCLIP
- local: api/pipelines/latent_diffusion_uncond
title: Unconditional Latent Diffusion
- local: api/pipelines/versatile_diffusion
title: Versatile Diffusion
- local: api/pipelines/vq_diffusion
title: VQ Diffusion
title: Pipelines
- sections:
- local: api/schedulers/overview
title: Overview
- local: api/schedulers/ddim
title: DDIM
- local: api/schedulers/ddpm
title: DDPM
- local: api/schedulers/deis
title: DEIS
- local: api/schedulers/dpm_discrete
title: DPM Discrete Scheduler
- local: api/schedulers/dpm_discrete_ancestral
title: DPM Discrete Scheduler with ancestral sampling
- local: api/schedulers/euler_ancestral
title: Euler Ancestral Scheduler
- local: api/schedulers/euler
title: Euler scheduler
- local: api/schedulers/heun
title: Heun Scheduler
- local: api/schedulers/ipndm
title: IPNDM
- local: api/schedulers/lms_discrete
title: Linear Multistep
- local: api/schedulers/multistep_dpm_solver
title: Multistep DPM-Solver
- local: api/schedulers/pndm
title: PNDM
- local: api/schedulers/repaint
title: RePaint Scheduler
- local: api/schedulers/singlestep_dpm_solver
title: Singlestep DPM-Solver
- local: api/schedulers/stochastic_karras_ve
title: Stochastic Karras VE
- local: api/schedulers/score_sde_ve
title: VE-SDE
- local: api/schedulers/score_sde_vp
title: VP-SDE
- local: api/schedulers/vq_diffusion
title: VQDiffusionScheduler
title: Schedulers
- sections:
- local: api/experimental/rl
title: RL Planning
title: Experimental Features
title: API


@@ -30,13 +30,17 @@ Any pipeline object can be saved locally with [`~DiffusionPipeline.save_pretrain
## DiffusionPipeline
[[autodoc]] DiffusionPipeline
- from_pretrained
- save_pretrained
- to
- all
- __call__
- device
- components
- to
## ImagePipelineOutput
By default diffusion pipelines return an object of class
[[autodoc]] pipeline_utils.ImagePipelineOutput
[[autodoc]] pipelines.ImagePipelineOutput
## AudioPipelineOutput
By default diffusion pipelines return an object of class
[[autodoc]] pipelines.AudioPipelineOutput


@@ -0,0 +1,30 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Loaders
There are many ways to train adapter neural networks for diffusion models, such as
- [Textual Inversion](./training/text_inversion.mdx)
- [LoRA](https://github.com/cloneofsimo/lora)
- [Hypernetworks](https://arxiv.org/abs/1609.09106)
Such adapter neural networks often contain only a fraction of the weights of the pretrained model and are therefore very portable. The Diffusers library offers an easy-to-use
API to load such adapter neural networks via the [`loaders.py` module](https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders.py).
**Note**: This module is still highly experimental and prone to future changes.
## LoaderMixins
### UNet2DConditionLoadersMixin
[[autodoc]] loaders.UNet2DConditionLoadersMixin
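As a quick orientation, here is a minimal sketch of how LoRA attention weights could be loaded through this mixin; the checkpoint repository name is hypothetical and the base model is only an example:
```python
import torch
from diffusers import StableDiffusionPipeline

# hypothetical Hub repository containing trained LoRA attention weights
lora_model_path = "your-username/sd-model-finetuned-lora"

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
# `load_attn_procs` is provided by `UNet2DConditionLoadersMixin`
pipe.unet.load_attn_procs(lora_model_path)
pipe.to("cuda")

image = pipe("a photo of a dog in a bucket", num_inference_steps=25).images[0]
```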


@@ -1,4 +1,4 @@
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at


@@ -41,13 +41,13 @@ The models are built on the base class ['ModelMixin'] that is a `torch.nn.module
[[autodoc]] models.vae.DecoderOutput
## VQEncoderOutput
[[autodoc]] models.vae.VQEncoderOutput
[[autodoc]] models.vq_model.VQEncoderOutput
## VQModel
[[autodoc]] VQModel
## AutoencoderKLOutput
[[autodoc]] models.vae.AutoencoderKLOutput
[[autodoc]] models.autoencoder_kl.AutoencoderKLOutput
## AutoencoderKL
[[autodoc]] AutoencoderKL
@@ -56,7 +56,7 @@ The models are built on the base class ['ModelMixin'] that is a `torch.nn.module
[[autodoc]] Transformer2DModel
## Transformer2DModelOutput
[[autodoc]] models.attention.Transformer2DModelOutput
[[autodoc]] models.transformer_2d.Transformer2DModelOutput
## PriorTransformer
[[autodoc]] models.prior_transformer.PriorTransformer


@@ -25,7 +25,7 @@ pipeline = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32")
outputs = pipeline()
```
The `outputs` object is a [`~pipeline_utils.ImagePipelineOutput`], as we can see in the
The `outputs` object is a [`~pipelines.ImagePipelineOutput`], as we can see in the
documentation of that class below. This means it has an `images` attribute.
You can access each attribute as you would usually do, and if that attribute has not been returned by the model, you will get `None`:


@@ -28,7 +28,7 @@ The abstract of the paper is the following:
## Tips
- AltDiffusion is conceptually exactly the same as [Stable Diffusion](./api/pipelines/stable_diffusion).
- AltDiffusion is conceptually exactly the same as [Stable Diffusion](./api/pipelines/stable_diffusion/overview).
- *Run AltDiffusion*
@@ -69,15 +69,15 @@ If you want to use all possible use cases in a single `DiffusionPipeline` we rec
## AltDiffusionPipelineOutput
[[autodoc]] pipelines.alt_diffusion.AltDiffusionPipelineOutput
- all
- __call__
## AltDiffusionPipeline
[[autodoc]] AltDiffusionPipeline
- all
- __call__
- enable_attention_slicing
- disable_attention_slicing
## AltDiffusionImg2ImgPipeline
[[autodoc]] AltDiffusionImg2ImgPipeline
- all
- __call__
- enable_attention_slicing
- disable_attention_slicing


@@ -91,12 +91,8 @@ display(Audio(output.audios[0], rate=pipe.mel.get_sample_rate()))
## AudioDiffusionPipeline
[[autodoc]] AudioDiffusionPipeline
- __call__
- encode
- slerp
- all
- __call__
## Mel
[[autodoc]] Mel
- audio_slice_to_image
- image_to_audio


@@ -96,4 +96,5 @@ image.save("black_to_blue.png")
## CycleDiffusionPipeline
[[autodoc]] CycleDiffusionPipeline
- all
- __call__


@@ -30,4 +30,5 @@ The original codebase of this implementation can be found [here](https://github.
## DanceDiffusionPipeline
[[autodoc]] DanceDiffusionPipeline
- __call__
- all
- __call__


@@ -32,4 +32,5 @@ For questions, feel free to contact the author on [tsong.me](https://tsong.me/).
## DDIMPipeline
[[autodoc]] DDIMPipeline
- __call__
- all
- __call__


@@ -33,4 +33,5 @@ The original codebase of this paper can be found [here](https://github.com/hojon
# DDPMPipeline
[[autodoc]] DDPMPipeline
- __call__
- all
- __call__


@@ -0,0 +1,59 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Scalable Diffusion Models with Transformers (DiT)
## Overview
[Scalable Diffusion Models with Transformers](https://arxiv.org/abs/2212.09748) (DiT) by William Peebles and Saining Xie.
The abstract of the paper is the following:
*We explore a new class of diffusion models based on the transformer architecture. We train latent diffusion models of images, replacing the commonly-used U-Net backbone with a transformer that operates on latent patches. We analyze the scalability of our Diffusion Transformers (DiTs) through the lens of forward pass complexity as measured by Gflops. We find that DiTs with higher Gflops -- through increased transformer depth/width or increased number of input tokens -- consistently have lower FID. In addition to possessing good scalability properties, our largest DiT-XL/2 models outperform all prior diffusion models on the class-conditional ImageNet 512x512 and 256x256 benchmarks, achieving a state-of-the-art FID of 2.27 on the latter.*
The original codebase of this paper can be found here: [facebookresearch/dit](https://github.com/facebookresearch/dit).
## Available Pipelines:
| Pipeline | Tasks | Colab
|---|---|:---:|
| [pipeline_dit.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/dit/pipeline_dit.py) | *Conditional Image Generation* | - |
## Usage example
```python
from diffusers import DiTPipeline, DPMSolverMultistepScheduler
import torch
pipe = DiTPipeline.from_pretrained("facebook/DiT-XL-2-256", torch_dtype=torch.float16)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")
# pick words from the ImageNet class labels
pipe.labels  # inspect the available class names
words = ["white shark", "umbrella"]
class_ids = pipe.get_label_ids(words)
generator = torch.manual_seed(33)
output = pipe(class_labels=class_ids, num_inference_steps=25, generator=generator)
image = output.images[0] # label 'white shark'
```
## DiTPipeline
[[autodoc]] DiTPipeline
- all
- __call__


@@ -40,8 +40,10 @@ The original codebase can be found [here](https://github.com/CompVis/latent-diff
## LDMTextToImagePipeline
[[autodoc]] LDMTextToImagePipeline
- __call__
- all
- __call__
## LDMSuperResolutionPipeline
[[autodoc]] LDMSuperResolutionPipeline
- __call__
- all
- __call__


@@ -38,4 +38,5 @@ The original codebase can be found [here](https://github.com/CompVis/latent-diff
## LDMPipeline
[[autodoc]] LDMPipeline
- __call__
- all
- __call__


@@ -69,5 +69,6 @@ image
```
## PaintByExamplePipeline
[[autodoc]] pipelines.paint_by_example.pipeline_paint_by_example.PaintByExamplePipeline
- __call__
[[autodoc]] PaintByExamplePipeline
- all
- __call__


@@ -30,6 +30,6 @@ The original codebase can be found [here](https://github.com/luping-liu/PNDM).
## PNDMPipeline
[[autodoc]] pipelines.pndm.pipeline_pndm.PNDMPipeline
- __call__
[[autodoc]] PNDMPipeline
- all
- __call__


@@ -72,6 +72,6 @@ inpainted_image = output.images[0]
```
## RePaintPipeline
[[autodoc]] pipelines.repaint.pipeline_repaint.RePaintPipeline
- __call__
[[autodoc]] RePaintPipeline
- all
- __call__


@@ -32,5 +32,5 @@ This pipeline implements the Variance Expanding (VE) variant of the method.
## ScoreSdeVePipeline
[[autodoc]] ScoreSdeVePipeline
- __call__
- all
- __call__


@@ -0,0 +1,33 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Depth-to-Image Generation
## StableDiffusionDepth2ImgPipeline
The depth-guided stable diffusion model was created by the researchers and engineers from [CompVis](https://github.com/CompVis), [Stability AI](https://stability.ai/), and [LAION](https://laion.ai/), as part of Stable Diffusion 2.0. It uses [MiDaS](https://github.com/isl-org/MiDaS) to infer depth based on an image.
[`StableDiffusionDepth2ImgPipeline`] lets you pass a text prompt and an initial image to condition the generation of new images, as well as a `depth_map` to preserve the image's structure.
The original codebase can be found here:
- *Stable Diffusion v2*: [Stability-AI/stablediffusion](https://github.com/Stability-AI/stablediffusion#depth-conditional-stable-diffusion)
Available Checkpoints are:
- *stable-diffusion-2-depth*: [stabilityai/stable-diffusion-2-depth](https://huggingface.co/stabilityai/stable-diffusion-2-depth)
[[autodoc]] StableDiffusionDepth2ImgPipeline
- all
- __call__
- enable_attention_slicing
- disable_attention_slicing
- enable_xformers_memory_efficient_attention
- disable_xformers_memory_efficient_attention
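For orientation, a minimal usage sketch is shown below; the input image URL and the prompts are illustrative rather than part of the official example:
```python
import torch
import requests
from PIL import Image
from diffusers import StableDiffusionDepth2ImgPipeline

pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16
).to("cuda")

# any RGB photo works as the conditioning image
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
init_image = Image.open(requests.get(url, stream=True).raw)

# if no `depth_map` is passed, MiDaS estimates one from the input image
image = pipe(prompt="two tigers", image=init_image, negative_prompt="bad, deformed", strength=0.7).images[0]
```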


@@ -0,0 +1,31 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Image Variation
## StableDiffusionImageVariationPipeline
[`StableDiffusionImageVariationPipeline`] lets you generate variations from an input image using Stable Diffusion. It uses a fine-tuned version of the Stable Diffusion model, trained by [Justin Pinkney](https://www.justinpinkney.com/) (@Buntworthy) at [Lambda](https://lambdalabs.com/).
The original codebase can be found here:
[Stable Diffusion Image Variations](https://github.com/LambdaLabsML/lambda-diffusers#stable-diffusion-image-variations)
Available Checkpoints are:
- *sd-image-variations-diffusers*: [lambdalabs/sd-image-variations-diffusers](https://huggingface.co/lambdalabs/sd-image-variations-diffusers)
[[autodoc]] StableDiffusionImageVariationPipeline
- all
- __call__
- enable_attention_slicing
- disable_attention_slicing
- enable_xformers_memory_efficient_attention
- disable_xformers_memory_efficient_attention
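A minimal sketch of generating variations might look as follows (the input URL is just an example image):
```python
import requests
from PIL import Image
from diffusers import StableDiffusionImageVariationPipeline

pipe = StableDiffusionImageVariationPipeline.from_pretrained(
    "lambdalabs/sd-image-variations-diffusers"
).to("cuda")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # example input image
init_image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# no text prompt is needed; the input image itself is the conditioning signal
image = pipe(image=init_image, guidance_scale=3.0).images[0]
```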


@@ -0,0 +1,29 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Image-to-Image Generation
## StableDiffusionImg2ImgPipeline
The Stable Diffusion model was created by the researchers and engineers from [CompVis](https://github.com/CompVis), [Stability AI](https://stability.ai/), [runway](https://github.com/runwayml), and [LAION](https://laion.ai/). The [`StableDiffusionImg2ImgPipeline`] lets you pass a text prompt and an initial image to condition the generation of new images using Stable Diffusion.
The original codebase can be found here: [CompVis/stable-diffusion](https://github.com/CompVis/stable-diffusion/blob/main/scripts/img2img.py)
[`StableDiffusionImg2ImgPipeline`] is compatible with all Stable Diffusion checkpoints for [Text-to-Image](./text2img)
[[autodoc]] StableDiffusionImg2ImgPipeline
- all
- __call__
- enable_attention_slicing
- disable_attention_slicing
- enable_xformers_memory_efficient_attention
- disable_xformers_memory_efficient_attention
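As a rough usage sketch (the sketch-to-image asset URL and the prompt are illustrative):
```python
import torch
import requests
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
init_image = Image.open(requests.get(url, stream=True).raw).convert("RGB").resize((768, 512))

# `strength` controls how much the initial image is altered
image = pipe(prompt="A fantasy landscape, trending on artstation", image=init_image, strength=0.75, guidance_scale=7.5).images[0]
```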


@@ -0,0 +1,33 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Text-Guided Image Inpainting
## StableDiffusionInpaintPipeline
The Stable Diffusion model was created by the researchers and engineers from [CompVis](https://github.com/CompVis), [Stability AI](https://stability.ai/), [runway](https://github.com/runwayml), and [LAION](https://laion.ai/). The [`StableDiffusionInpaintPipeline`] lets you edit specific parts of an image by providing a mask and a text prompt using Stable Diffusion.
The original codebase can be found here:
- *Stable Diffusion V1*: [runwayml/stable-diffusion](https://github.com/runwayml/stable-diffusion#inpainting-with-stable-diffusion)
- *Stable Diffusion V2*: [Stability-AI/stablediffusion](https://github.com/Stability-AI/stablediffusion#image-inpainting-with-stable-diffusion)
Available checkpoints are:
- *stable-diffusion-inpainting (512x512 resolution)*: [runwayml/stable-diffusion-inpainting](https://huggingface.co/runwayml/stable-diffusion-inpainting)
- *stable-diffusion-2-inpainting (512x512 resolution)*: [stabilityai/stable-diffusion-2-inpainting](https://huggingface.co/stabilityai/stable-diffusion-2-inpainting)
[[autodoc]] StableDiffusionInpaintPipeline
- all
- __call__
- enable_attention_slicing
- disable_attention_slicing
- enable_xformers_memory_efficient_attention
- disable_xformers_memory_efficient_attention
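A minimal sketch (the demo image and mask URLs are illustrative):
```python
import torch
import requests
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
init_image = Image.open(requests.get(img_url, stream=True).raw).resize((512, 512))
mask_image = Image.open(requests.get(mask_url, stream=True).raw).resize((512, 512))  # white pixels are repainted

image = pipe(prompt="Face of a yellow cat, high resolution, sitting on a park bench", image=init_image, mask_image=mask_image).images[0]
```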


@@ -25,9 +25,15 @@ For more details about how Stable Diffusion works and how it differs from the ba
| Pipeline | Tasks | Colab | Demo
|---|---|:---:|:---:|
| [pipeline_stable_diffusion.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py) | *Text-to-Image Generation* | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_diffusion.ipynb) | [🤗 Stable Diffusion](https://huggingface.co/spaces/stabilityai/stable-diffusion)
| [pipeline_stable_diffusion_img2img.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py) | *Image-to-Image Text-Guided Generation* | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/image_2_image_using_diffusers.ipynb) | [🤗 Diffuse the Rest](https://huggingface.co/spaces/huggingface/diffuse-the-rest)
| [pipeline_stable_diffusion_inpaint.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py) | **Experimental** *Text-Guided Image Inpainting* | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/in_painting_with_stable_diffusion_using_diffusers.ipynb) | Coming soon
| [StableDiffusionPipeline](./text2img) | *Text-to-Image Generation* | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_diffusion.ipynb) | [🤗 Stable Diffusion](https://huggingface.co/spaces/stabilityai/stable-diffusion)
| [StableDiffusionImg2ImgPipeline](./img2img) | *Image-to-Image Text-Guided Generation* | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/image_2_image_using_diffusers.ipynb) | [🤗 Diffuse the Rest](https://huggingface.co/spaces/huggingface/diffuse-the-rest)
| [StableDiffusionInpaintPipeline](./inpaint) | **Experimental** *Text-Guided Image Inpainting* | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/in_painting_with_stable_diffusion_using_diffusers.ipynb) | Coming soon
| [StableDiffusionDepth2ImgPipeline](./depth2img) | **Experimental** *Depth-to-Image Text-Guided Generation* | | Coming soon
| [StableDiffusionImageVariationPipeline](./image_variation) | **Experimental** *Image Variation Generation* | | [🤗 Stable Diffusion Image Variations](https://huggingface.co/spaces/lambdalabs/stable-diffusion-image-variations)
| [StableDiffusionUpscalePipeline](./upscale) | **Experimental** *Text-Guided Image Super-Resolution* | | Coming soon
| [StableDiffusionInstructPix2PixPipeline](./pix2pix) | **Experimental** *Text-Based Image Editing* | | [InstructPix2Pix: Learning to Follow Image Editing Instructions](https://huggingface.co/spaces/timbrooks/instruct-pix2pix)
## Tips
@@ -70,54 +76,3 @@ If you want to use all possible use cases in a single `DiffusionPipeline` you ca
## StableDiffusionPipelineOutput
[[autodoc]] pipelines.stable_diffusion.StableDiffusionPipelineOutput
## StableDiffusionPipeline
[[autodoc]] StableDiffusionPipeline
- __call__
- enable_attention_slicing
- disable_attention_slicing
- enable_vae_slicing
- disable_vae_slicing
- enable_xformers_memory_efficient_attention
- disable_xformers_memory_efficient_attention
## StableDiffusionImg2ImgPipeline
[[autodoc]] StableDiffusionImg2ImgPipeline
- __call__
- enable_attention_slicing
- disable_attention_slicing
- enable_xformers_memory_efficient_attention
- disable_xformers_memory_efficient_attention
## StableDiffusionInpaintPipeline
[[autodoc]] StableDiffusionInpaintPipeline
- __call__
- enable_attention_slicing
- disable_attention_slicing
- enable_xformers_memory_efficient_attention
- disable_xformers_memory_efficient_attention
## StableDiffusionDepth2ImgPipeline
[[autodoc]] StableDiffusionDepth2ImgPipeline
- __call__
- enable_attention_slicing
- disable_attention_slicing
- enable_xformers_memory_efficient_attention
- disable_xformers_memory_efficient_attention
## StableDiffusionImageVariationPipeline
[[autodoc]] StableDiffusionImageVariationPipeline
- __call__
- enable_attention_slicing
- disable_attention_slicing
- enable_xformers_memory_efficient_attention
- disable_xformers_memory_efficient_attention
## StableDiffusionUpscalePipeline
[[autodoc]] StableDiffusionUpscalePipeline
- __call__
- enable_attention_slicing
- disable_attention_slicing
- enable_xformers_memory_efficient_attention
- disable_xformers_memory_efficient_attention


@@ -0,0 +1,70 @@
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# InstructPix2Pix: Learning to Follow Image Editing Instructions
## Overview
[InstructPix2Pix: Learning to Follow Image Editing Instructions](https://arxiv.org/abs/2211.09800) by Tim Brooks, Aleksander Holynski and Alexei A. Efros.
The abstract of the paper is the following:
*We propose a method for editing images from human instructions: given an input image and a written instruction that tells the model what to do, our model follows these instructions to edit the image. To obtain training data for this problem, we combine the knowledge of two large pretrained models -- a language model (GPT-3) and a text-to-image model (Stable Diffusion) -- to generate a large dataset of image editing examples. Our conditional diffusion model, InstructPix2Pix, is trained on our generated data, and generalizes to real images and user-written instructions at inference time. Since it performs edits in the forward pass and does not require per example fine-tuning or inversion, our model edits images quickly, in a matter of seconds. We show compelling editing results for a diverse collection of input images and written instructions.*
Resources:
* [Project Page](https://www.timothybrooks.com/instruct-pix2pix).
* [Paper](https://arxiv.org/abs/2211.09800).
* [Original Code](https://github.com/timothybrooks/instruct-pix2pix).
* [Demo](https://huggingface.co/spaces/timbrooks/instruct-pix2pix).
## Available Pipelines:
| Pipeline | Tasks | Demo
|---|---|:---:|
| [StableDiffusionInstructPix2PixPipeline](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_instruct_pix2pix.py) | *Text-Based Image Editing* | [🤗 Space](https://huggingface.co/spaces/timbrooks/instruct-pix2pix) |
<!-- TODO: add Colab -->
## Usage example
```python
from PIL import Image, ImageOps
import requests
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
model_id = "timbrooks/instruct-pix2pix"
pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
url = "https://huggingface.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png"
def download_image(url):
    image = Image.open(requests.get(url, stream=True).raw)
    image = ImageOps.exif_transpose(image)
    image = image.convert("RGB")
    return image
image = download_image(url)
prompt = "make the mountains snowy"
edit = pipe(prompt, image=image, num_inference_steps=20, image_guidance_scale=1.5, guidance_scale=7).images[0]
edit.save("snowy_mountains.png")
```
## StableDiffusionInstructPix2PixPipeline
[[autodoc]] StableDiffusionInstructPix2PixPipeline
- __call__
- all


@@ -0,0 +1,39 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Text-to-Image Generation
## StableDiffusionPipeline
The Stable Diffusion model was created by the researchers and engineers from [CompVis](https://github.com/CompVis), [Stability AI](https://stability.ai/), [runway](https://github.com/runwayml), and [LAION](https://laion.ai/). The [`StableDiffusionPipeline`] is capable of generating photo-realistic images given any text input using Stable Diffusion.
The original codebase can be found here:
- *Stable Diffusion V1*: [CompVis/stable-diffusion](https://github.com/CompVis/stable-diffusion)
- *Stable Diffusion v2*: [Stability-AI/stablediffusion](https://github.com/Stability-AI/stablediffusion)
Available Checkpoints are:
- *stable-diffusion-v1-4 (512x512 resolution)* [CompVis/stable-diffusion-v1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4)
- *stable-diffusion-v1-5 (512x512 resolution)* [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
- *stable-diffusion-2-base (512x512 resolution)*: [stabilityai/stable-diffusion-2-base](https://huggingface.co/stabilityai/stable-diffusion-2-base)
- *stable-diffusion-2 (768x768 resolution)*: [stabilityai/stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2)
- *stable-diffusion-2-1-base (512x512 resolution)* [stabilityai/stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base)
- *stable-diffusion-2-1 (768x768 resolution)*: [stabilityai/stable-diffusion-2-1](https://huggingface.co/stabilityai/stable-diffusion-2-1)
[[autodoc]] StableDiffusionPipeline
- all
- __call__
- enable_attention_slicing
- disable_attention_slicing
- enable_vae_slicing
- disable_vae_slicing
- enable_xformers_memory_efficient_attention
- disable_xformers_memory_efficient_attention
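A minimal sketch using one of the checkpoints above (prompt and filename are illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photograph of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```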


@@ -0,0 +1,32 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Super-Resolution
## StableDiffusionUpscalePipeline
The upscaler diffusion model was created by the researchers and engineers from [CompVis](https://github.com/CompVis), [Stability AI](https://stability.ai/), and [LAION](https://laion.ai/), as part of Stable Diffusion 2.0. [`StableDiffusionUpscalePipeline`] can be used to enhance the resolution of input images by a factor of 4.
The original codebase can be found here:
- *Stable Diffusion v2*: [Stability-AI/stablediffusion](https://github.com/Stability-AI/stablediffusion#image-upscaling-with-stable-diffusion)
Available Checkpoints are:
- *stabilityai/stable-diffusion-x4-upscaler (x4 resolution)*: [stable-diffusion-x4-upscaler](https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler)
[[autodoc]] StableDiffusionUpscalePipeline
- all
- __call__
- enable_attention_slicing
- disable_attention_slicing
- enable_xformers_memory_efficient_attention
- disable_xformers_memory_efficient_attention
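A minimal sketch of upscaling a low-resolution input (the example image URL is illustrative):
```python
import torch
import requests
from io import BytesIO
from PIL import Image
from diffusers import StableDiffusionUpscalePipeline

pipeline = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale/low_res_cat.png"
low_res_img = Image.open(BytesIO(requests.get(url).content)).convert("RGB").resize((128, 128))

# the text prompt guides the added high-frequency detail
upscaled_image = pipeline(prompt="a white cat", image=low_res_img).images[0]
```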


@@ -24,17 +24,20 @@ For more details about how Stable Diffusion 2 works and how it differs from Stab
### Available checkpoints:
Note that the architecture is more or less identical to [Stable Diffusion 1](./api/pipelines/stable_diffusion) so please refer to [this page](./api/pipelines/stable_diffusion) for API documentation.
Note that the architecture is more or less identical to [Stable Diffusion 1](./stable_diffusion/overview) so please refer to [this page](./stable_diffusion/overview) for API documentation.
- *Text-to-Image (512x512 resolution)*: [stabilityai/stable-diffusion-2-base](https://huggingface.co/stabilityai/stable-diffusion-2-base) with [`StableDiffusionPipeline`]
- *Text-to-Image (768x768 resolution)*: [stabilityai/stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2) with [`StableDiffusionPipeline`]
- *Image Inpainting (512x512 resolution)*: [stabilityai/stable-diffusion-2-inpainting](https://huggingface.co/stabilityai/stable-diffusion-2-inpainting) with [`StableDiffusionInpaintPipeline`]
- *Image Upscaling (x4 resolution)*: [stable-diffusion-x4-upscaler](https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler) with [`StableDiffusionUpscalePipeline`]
- *Super-Resolution (x4 resolution)*: [stable-diffusion-x4-upscaler](https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler) with [`StableDiffusionUpscalePipeline`]
- *Depth-to-Image (512x512 resolution)*: [stabilityai/stable-diffusion-2-depth](https://huggingface.co/stabilityai/stable-diffusion-2-depth) with [`StableDiffusionDepth2ImgPipeline`]
We recommend using the [`DPMSolverMultistepScheduler`], as it is currently the fastest scheduler available.
- *Text-to-Image (512x512 resolution)*:
### Text-to-Image
- *Text-to-Image (512x512 resolution)*: [stabilityai/stable-diffusion-2-base](https://huggingface.co/stabilityai/stable-diffusion-2-base) with [`StableDiffusionPipeline`]
```python
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
@@ -51,7 +54,7 @@ image = pipe(prompt, num_inference_steps=25).images[0]
image.save("astronaut.png")
```
- *Text-to-Image (768x768 resolution)*:
- *Text-to-Image (768x768 resolution)*: [stabilityai/stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2) with [`StableDiffusionPipeline`]
```python
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
@@ -68,7 +71,9 @@ image = pipe(prompt, guidance_scale=9, num_inference_steps=25).images[0]
image.save("astronaut.png")
```
- *Image Inpainting (512x512 resolution)*:
### Image Inpainting
- *Image Inpainting (512x512 resolution)*: [stabilityai/stable-diffusion-2-inpainting](https://huggingface.co/stabilityai/stable-diffusion-2-inpainting) with [`StableDiffusionInpaintPipeline`]
```python
import PIL
@@ -102,7 +107,10 @@ image = pipe(prompt=prompt, image=init_image, mask_image=mask_image, num_inferen
image.save("yellow_cat.png")
```
- *Image Upscaling (x4 resolution)*: [stable-diffusion-x4-upscaler](https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler) with [`StableDiffusionUpscalePipeline`]
### Super-Resolution
- *Image Upscaling (x4 resolution)*: [stable-diffusion-x4-upscaler](https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler) with [`StableDiffusionUpscalePipeline`]
```python
import requests
@@ -126,16 +134,10 @@ upscaled_image = pipeline(prompt=prompt, image=low_res_img).images[0]
upscaled_image.save("upsampled_cat.png")
```
### Depth-to-Image
- *Depth-Guided Text-to-Image*: [stabilityai/stable-diffusion-2-depth](https://huggingface.co/stabilityai/stable-diffusion-2-depth) with [`StableDiffusionDepth2ImgPipeline`]
**Installation**
```bash
!pip install -U git+https://github.com/huggingface/transformers.git
!pip install diffusers[torch]
```
**Example**
```python
import torch


@@ -28,7 +28,7 @@ The abstract of the paper is the following:
## Tips
- Safe Stable Diffusion may also be used with weights of [Stable Diffusion](./api/pipelines/stable_diffusion).
- Safe Stable Diffusion may also be used with weights of [Stable Diffusion](./api/pipelines/stable_diffusion/text2img).
### Run Safe Stable Diffusion
@@ -81,10 +81,10 @@ To use a different scheduler, you can either change it via the [`ConfigMixin.fro
## StableDiffusionSafePipelineOutput
[[autodoc]] pipelines.stable_diffusion_safe.StableDiffusionSafePipelineOutput
- all
- __call__
## StableDiffusionPipelineSafe
[[autodoc]] StableDiffusionPipelineSafe
- all
- __call__
- enable_attention_slicing
- disable_attention_slicing


@@ -32,4 +32,5 @@ This pipeline implements the Stochastic sampling tailored to the Variance-Expand
## KarrasVePipeline
[[autodoc]] KarrasVePipeline
- __call__
- all
- __call__


@@ -24,8 +24,14 @@ The unCLIP model in diffusers comes from kakaobrain's karlo and the original cod
| Pipeline | Tasks | Colab
|---|---|:---:|
| [pipeline_unclip.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/unclip/pipeline_unclip.py) | *Text-to-Image Generation* | - |
| [pipeline_unclip_image_variation.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/unclip/pipeline_unclip_image_variation.py) | *Image-Guided Image Generation* | - |
## UnCLIPPipeline
[[autodoc]] pipelines.unclip.pipeline_unclip.UnCLIPPipeline
- __call__
[[autodoc]] UnCLIPPipeline
- all
- __call__
[[autodoc]] UnCLIPImageVariationPipeline
- all
- __call__


@@ -20,7 +20,7 @@ The abstract of the paper is the following:
## Tips
- VersatileDiffusion is conceptually very similar to [Stable Diffusion](./api/pipelines/stable_diffusion), but instead of providing just an image data stream conditioned on text, VersatileDiffusion provides both an image and a text data stream and can be conditioned on both text and image.
- VersatileDiffusion is conceptually very similar to [Stable Diffusion](./api/pipelines/stable_diffusion/overview), but instead of providing just an image data stream conditioned on text, VersatileDiffusion provides both an image and a text data stream and can be conditioned on both text and image.
### *Run VersatileDiffusion*
@@ -56,18 +56,15 @@ To use a different scheduler, you can either change it via the [`ConfigMixin.fro
## VersatileDiffusionTextToImagePipeline
[[autodoc]] VersatileDiffusionTextToImagePipeline
- all
- __call__
- enable_attention_slicing
- disable_attention_slicing
## VersatileDiffusionImageVariationPipeline
[[autodoc]] VersatileDiffusionImageVariationPipeline
- all
- __call__
- enable_attention_slicing
- disable_attention_slicing
## VersatileDiffusionDualGuidedPipeline
[[autodoc]] VersatileDiffusionDualGuidedPipeline
- all
- __call__
- enable_attention_slicing
- disable_attention_slicing

View File

@@ -30,5 +30,6 @@ The original codebase can be found [here](https://github.com/microsoft/VQ-Diffus
## VQDiffusionPipeline
[[autodoc]] pipelines.vq_diffusion.pipeline_vq_diffusion.VQDiffusionPipeline
- __call__
[[autodoc]] VQDiffusionPipeline
- all
- __call__


@@ -0,0 +1,22 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# DEIS
Fast Sampling of Diffusion Models with Exponential Integrator.
## Overview
Original paper can be found [here](https://arxiv.org/abs/2204.13902). The original implementation can be found [here](https://github.com/qsh-zh/deis).
## DEISMultistepScheduler
[[autodoc]] DEISMultistepScheduler
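Since the scheduler shares the standard configuration interface, a sketch of swapping it into an existing pipeline could look like this (the model id is only an example):
```python
from diffusers import DiffusionPipeline, DEISMultistepScheduler

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# reuse the current scheduler's config so the timestep settings carry over
pipe.scheduler = DEISMultistepScheduler.from_config(pipe.scheduler.config)
```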


@@ -37,6 +37,7 @@ To this end, the design of schedulers is such that:
- Schedulers can be used interchangeably between diffusion models in inference to find the preferred trade-off between speed and generation quality.
- Schedulers are currently by default in PyTorch, but are designed to be framework independent (partial Jax support currently exists).
- Many diffusion pipelines, such as [`StableDiffusionPipeline`] and [`DiTPipeline`], can use any of the [`KarrasDiffusionSchedulers`].
## Schedulers Summary
@@ -80,4 +81,6 @@ The class [`SchedulerOutput`] contains the outputs from any schedulers `step(...
[[autodoc]] schedulers.scheduling_utils.SchedulerOutput
### KarrasDiffusionSchedulers
[[autodoc]] schedulers.scheduling_utils.KarrasDiffusionSchedulers
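To see which schedulers a given pipeline accepts, one option is to inspect `scheduler.compatibles`, as in this sketch (the model id is an example):
```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# every class listed here can be swapped in via `SchedulerClass.from_config(pipe.scheduler.config)`
print(pipe.scheduler.compatibles)
```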


@@ -47,9 +47,9 @@ available a colab notebook to directly try them out.
| [pndm](./api/pipelines/pndm) | [**Pseudo Numerical Methods for Diffusion Models on Manifolds**](https://arxiv.org/abs/2202.09778) | Unconditional Image Generation |
| [score_sde_ve](./api/pipelines/score_sde_ve) | [**Score-Based Generative Modeling through Stochastic Differential Equations**](https://openreview.net/forum?id=PxTIG12RRHS) | Unconditional Image Generation |
| [score_sde_vp](./api/pipelines/score_sde_vp) | [**Score-Based Generative Modeling through Stochastic Differential Equations**](https://openreview.net/forum?id=PxTIG12RRHS) | Unconditional Image Generation |
| [stable_diffusion](./api/pipelines/stable_diffusion) | [**Stable Diffusion**](https://stability.ai/blog/stable-diffusion-public-release) | Text-to-Image Generation | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb)
| [stable_diffusion](./api/pipelines/stable_diffusion) | [**Stable Diffusion**](https://stability.ai/blog/stable-diffusion-public-release) | Image-to-Image Text-Guided Generation | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/image_2_image_using_diffusers.ipynb)
| [stable_diffusion](./api/pipelines/stable_diffusion) | [**Stable Diffusion**](https://stability.ai/blog/stable-diffusion-public-release) | Text-Guided Image Inpainting | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/in_painting_with_stable_diffusion_using_diffusers.ipynb)
| [stable_diffusion](./api/pipelines/stable_diffusion/text2img) | [**Stable Diffusion**](https://stability.ai/blog/stable-diffusion-public-release) | Text-to-Image Generation | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb)
| [stable_diffusion](./api/pipelines/stable_diffusion/img2img) | [**Stable Diffusion**](https://stability.ai/blog/stable-diffusion-public-release) | Image-to-Image Text-Guided Generation | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/image_2_image_using_diffusers.ipynb)
| [stable_diffusion](./api/pipelines/stable_diffusion/inpaint) | [**Stable Diffusion**](https://stability.ai/blog/stable-diffusion-public-release) | Text-Guided Image Inpainting | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/in_painting_with_stable_diffusion_using_diffusers.ipynb)
| [stable_diffusion_2](./api/pipelines/stable_diffusion_2) | [**Stable Diffusion 2**](https://stability.ai/blog/stable-diffusion-v2-release) | Text-to-Image Generation |
| [stable_diffusion_2](./api/pipelines/stable_diffusion_2) | [**Stable Diffusion 2**](https://stability.ai/blog/stable-diffusion-v2-release) | Text-Guided Image Inpainting |
| [stable_diffusion_2](./api/pipelines/stable_diffusion_2) | [**Stable Diffusion 2**](https://stability.ai/blog/stable-diffusion-v2-release) | Text-Guided Super Resolution Image-to-Image |


@@ -20,7 +20,6 @@ We'll discuss how the following settings impact performance and memory.
| ---------------- | ------- | ------- |
| original | 9.50s | x1 |
| cuDNN auto-tuner | 9.37s | x1.01 |
| autocast (fp16) | 5.47s | x1.74 |
| fp16 | 3.61s | x2.63 |
| channels last | 3.30s | x2.88 |
| traced UNet | 3.21s | x2.96 |
@@ -54,27 +53,9 @@ import torch
torch.backends.cuda.matmul.allow_tf32 = True
```
## Automatic mixed precision (AMP)
If you use a CUDA GPU, you can take advantage of `torch.autocast` to perform inference roughly twice as fast at the cost of slightly lower precision. All you need to do is put your inference call inside an `autocast` context manager. The following example shows how to do it using Stable Diffusion text-to-image generation as an example:
```Python
from torch import autocast
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
with autocast("cuda"):
image = pipe(prompt).images[0]
```
Despite the precision loss, in our experience the final image results look the same as the `float32` versions. Feel free to experiment and report back!
## Half precision weights
To save more GPU memory and get even more speed, you can load and run the model weights directly in half precision. This involves loading the float16 version of the weights, which was saved to a branch named `fp16`, and telling PyTorch to use the `float16` type when loading them:
To save more GPU memory and get more speed, you can load and run the model weights directly in half precision. This involves loading the float16 version of the weights, which was saved to a branch named `fp16`, and telling PyTorch to use the `float16` type when loading them:
```Python
pipe = StableDiffusionPipeline.from_pretrained(
@@ -88,6 +69,11 @@ prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
```
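For reference, a complete version of this half-precision loading might look like the following (model id assumed from the other examples on this page):
```Python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    revision="fp16",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe("a photo of an astronaut riding a horse on mars").images[0]
```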
<Tip warning={true}>
It is strongly discouraged to make use of [`torch.autocast`](https://pytorch.org/docs/stable/amp.html#torch.autocast) in any of the pipelines as it can lead to black images and is always slower than using pure
float16 precision.
</Tip>
## Sliced attention for additional memory savings
For further memory savings, you can use a sliced version of attention that performs the computation in steps instead of all at once.
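A minimal sketch of enabling sliced attention, assuming a pipeline loaded as in the previous section:
```Python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.enable_attention_slicing()  # compute attention in slices to reduce peak memory

image = pipe("a photo of an astronaut riding a horse on mars").images[0]
```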
@@ -149,7 +135,7 @@ You may see a small performance boost in VAE decode on multi-image batches. Ther
## Offloading to CPU with accelerate for memory savings
For additional memory savings, you can offload the weights to CPU and load them to GPU when performing the forward pass.
For additional memory savings, you can offload the weights to CPU and only load them to GPU when performing the forward pass.
To perform CPU offloading, all you have to do is invoke [`~StableDiffusionPipeline.enable_sequential_cpu_offload`]:
@@ -162,16 +148,15 @@ pipe = StableDiffusionPipeline.from_pretrained(
torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
pipe.enable_sequential_cpu_offload()
image = pipe(prompt).images[0]
```
And you can bring the memory consumption down to < 2GB.
And you can bring the memory consumption down to < 3GB.
It is also possible to chain it with attention slicing for minimal memory consumption, running it in as little as < 800MB of GPU VRAM:
It is also possible to chain it with attention slicing for minimal memory consumption (< 2GB).
```Python
import torch
@@ -182,7 +167,6 @@ pipe = StableDiffusionPipeline.from_pretrained(
torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
pipe.enable_sequential_cpu_offload()
@@ -191,6 +175,8 @@ pipe.enable_attention_slicing(1)
image = pipe(prompt).images[0]
```
**Note**: When using `enable_sequential_cpu_offload()`, it is important to **not** move the pipeline to CUDA beforehand or else the gain in memory consumption will only be minimal. See [this issue](https://github.com/huggingface/diffusers/issues/1934) for more information.
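For reference, an end-to-end sketch consistent with the note above might be:
```Python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
# do NOT call pipe.to("cuda") here; offloading manages device placement itself
pipe.enable_sequential_cpu_offload()
pipe.enable_attention_slicing(1)

image = pipe("a photo of an astronaut riding a horse on mars").images[0]
```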
## Using Channels Last memory format
Channels last memory format is an alternative way of ordering NCHW tensors in memory that preserves the logical dimension order. Channels last tensors are arranged so that channels become the densest dimension (i.e. images are stored pixel by pixel). Since not all operators currently support the channels last format, using it may result in worse performance, so it's best to try it and see whether it helps for your model.
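A minimal sketch of switching the UNet to channels last (the model id is an example; the stride check is one way to verify the layout):
```Python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.unet.to(memory_format=torch.channels_last)  # in-place reordering of the UNet weights
# a stride of 1 in the second dimension indicates channels-last layout
print(pipe.unet.conv_out.state_dict()["weight"].stride())
```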
@@ -357,4 +343,4 @@ with torch.inference_mode():
# optional: You can disable it via
# pipe.disable_xformers_memory_efficient_attention()
```
```
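For completeness, a minimal sketch of enabling xFormers attention; this requires the `xformers` package to be installed, and the model id is an example:
```Python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.enable_xformers_memory_efficient_attention()

with torch.inference_mode():
    image = pipe("a photo of an astronaut riding a horse on mars").images[0]

# optional: disable again via pipe.disable_xformers_memory_efficient_attention()
```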


@@ -22,7 +22,7 @@ pip install --upgrade diffusers accelerate transformers
```
- [`accelerate`](https://huggingface.co/docs/accelerate/index) speeds up model loading for inference and training
- [`transformers`](https://huggingface.co/docs/transformers/index) is required to run the most popular diffusion models, such as [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion)
- [`transformers`](https://huggingface.co/docs/transformers/index) is required to run the most popular diffusion models, such as [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/overview)
## DiffusionPipeline


@@ -0,0 +1,333 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# The Stable Diffusion Guide 🎨
<a target="_blank" href="https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_101_guide.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
## Intro
Stable Diffusion is a [Latent Diffusion model](https://github.com/CompVis/latent-diffusion) developed by researchers from the Machine Vision and Learning group at LMU Munich, *a.k.a* CompVis.
Model checkpoints were publicly released at the end of August 2022 by a collaboration of Stability AI, CompVis, and Runway with support from EleutherAI and LAION. For more information, you can check out [the official blog post](https://stability.ai/blog/stable-diffusion-public-release).
Since its public release the community has done an incredible job at working together to make the stable diffusion checkpoints **faster**, **more memory efficient**, and **more performant**.
🧨 Diffusers offers a simple API to run stable diffusion with all memory, computing, and quality improvements.
This notebook walks you through the improvements one-by-one so you can best leverage [`StableDiffusionPipeline`] for **inference**.
## Prompt Engineering 🎨
When running *Stable Diffusion* in inference, we usually want to generate a certain type or style of image and then improve upon it. Improving upon a previously generated image means running inference over and over again with a different prompt and potentially a different seed until we are happy with our generation.
So to begin with, it is most important to speed up Stable Diffusion as much as possible to generate as many pictures as possible in a given amount of time.
This can be done by both improving the **computational efficiency** (speed) and the **memory efficiency** (GPU RAM).
Let's start by looking into computational efficiency first.
Throughout the notebook, we will focus on [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5):
``` python
model_id = "runwayml/stable-diffusion-v1-5"
```
Let's load the pipeline.
## Speed Optimization
``` python
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained(model_id)
```
We aim to generate a beautiful photograph of an *old warrior chief* and will later try to find the best prompt to generate such a photograph. For now, let's keep the prompt simple:
``` python
prompt = "portrait photo of a old warrior chief"
```
To begin with, we should make sure we run inference on a GPU, so let's move the pipeline to the GPU, just like you would with any PyTorch module.
``` python
pipe = pipe.to("cuda")
```
To generate an image, you should use the [`~StableDiffusionPipeline.__call__`] method.
To make sure we can reproduce more or less the same image in every call, let's make use of a seeded generator. See the documentation on reproducibility [here](./conceptual/reproducibility) for more information.
``` python
import torch

generator = torch.Generator("cuda").manual_seed(0)
```
Now, let's give it a spin.
``` python
image = pipe(prompt, generator=generator).images[0]
image
```
![img](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/stable_diffusion_101/sd_101_1.png)
Cool, this now took roughly 30 seconds on a T4 GPU (you might see faster inference if your allocated GPU is better than a T4).
The default run we did above used full float32 precision and ran the default number of inference steps (50). The easiest speed-ups come from switching to float16 (or half) precision and simply running fewer inference steps. Let's load the model now in float16 instead.
``` python
import torch
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
```
And we can again call the pipeline to generate an image.
``` python
generator = torch.Generator("cuda").manual_seed(0)
image = pipe(prompt, generator=generator).images[0]
image
```
![img](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/stable_diffusion_101/sd_101_2.png)
Cool, this is almost three times as fast for arguably the same image quality.
We strongly suggest always running your pipelines in float16, as we have so far very rarely seen it degrade output quality.
Next, let's see if we need to use 50 inference steps or whether we could use significantly fewer. The number of inference steps is associated with the denoising scheduler we use. Choosing a more efficient scheduler could help us decrease the number of steps.
Let's have a look at all the schedulers the stable diffusion pipeline is compatible with.
``` python
pipe.scheduler.compatibles
```
```
[diffusers.schedulers.scheduling_dpmsolver_singlestep.DPMSolverSinglestepScheduler,
diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler,
diffusers.schedulers.scheduling_heun_discrete.HeunDiscreteScheduler,
diffusers.schedulers.scheduling_pndm.PNDMScheduler,
diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler,
diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler,
diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler,
diffusers.schedulers.scheduling_ddpm.DDPMScheduler,
diffusers.schedulers.scheduling_ddim.DDIMScheduler]
```
Cool, that's a lot of schedulers.
🧨 Diffusers is constantly adding a bunch of novel schedulers/samplers that can be used with Stable Diffusion. For more information, we recommend taking a look at the official documentation [here](https://huggingface.co/docs/diffusers/main/en/api/schedulers/overview).
Alright, right now Stable Diffusion is using the `PNDMScheduler`, which usually requires around 50 inference steps. However, other schedulers such as `DPMSolverMultistepScheduler` or `DPMSolverSinglestepScheduler` seem to get away with just 20 to 25 inference steps. Let's try them out.
You can set a new scheduler by making use of the [from_config](https://huggingface.co/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin.from_config) function.
``` python
from diffusers import DPMSolverMultistepScheduler
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
```
Now, let's try to reduce the number of inference steps to just 20.
``` python
generator = torch.Generator("cuda").manual_seed(0)
image = pipe(prompt, generator=generator, num_inference_steps=20).images[0]
image
```
![img](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/stable_diffusion_101/sd_101_3.png)
The image now does look a little different, but it's arguably still of equally high quality. We now cut inference time to just 4 seconds though 😍.
## Memory Optimization
Less memory used in generation indirectly implies more speed, since we're often trying to maximize how many images we can generate per second. Usually, the more images per inference run, the more images per second too.
The easiest way to see how many images we can generate at once is to simply try it out and see when we get an *"Out-of-memory (OOM)"* error.
We can run batched inference by simply passing a list of prompts and generators. Let's define a quick function that generates a batch for us.
``` python
def get_inputs(batch_size=1):
    generator = [torch.Generator("cuda").manual_seed(i) for i in range(batch_size)]
    prompts = batch_size * [prompt]
    num_inference_steps = 20

    return {"prompt": prompts, "generator": generator, "num_inference_steps": num_inference_steps}
```
This function returns a list of prompts and a list of generators, so we can reuse the generator that produced a result we like.
We also need a method that allows us to easily display a batch of images.
``` python
from PIL import Image
def image_grid(imgs, rows=2, cols=2):
    w, h = imgs[0].size
    grid = Image.new('RGB', size=(cols * w, rows * h))
    for i, img in enumerate(imgs):
        grid.paste(img, box=(i % cols * w, i // cols * h))
    return grid
```
Cool, let's see how much memory we can use starting with `batch_size=4`.
``` python
images = pipe(**get_inputs(batch_size=4)).images
image_grid(images)
```
![img](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/stable_diffusion_101/sd_101_4.png)
Going over a batch_size of 4 will error out in this notebook (assuming we are running it on a T4 GPU). Also, throughput improves only slightly: roughly 3.75 seconds per image compared to 4 seconds per image previously.
However, the community has found some nice tricks to reduce memory usage further. After Stable Diffusion was released, the community found improvements within days and shared them freely over GitHub - open-source at its finest! I believe the original idea came from [this](https://github.com/basujindal/stable-diffusion/pull/117) GitHub thread.
By far most of the memory is taken up by the cross-attention layers. Instead of running this operation in batch, one can run it sequentially to save a significant amount of memory.
It can easily be enabled by calling `enable_attention_slicing` as is documented [here](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.StableDiffusionPipeline.enable_attention_slicing).
``` python
pipe.enable_attention_slicing()
```
Great, now that attention slicing is enabled, let's try to double the batch size again, going for `batch_size=8`.
``` python
images = pipe(**get_inputs(batch_size=8)).images
image_grid(images, rows=2, cols=4)
```
![img](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/stable_diffusion_101/sd_101_5.png)
Nice, it works. However, the speed gain is again not very big (though it might be much more significant on other GPUs).
We're at roughly 3.5 seconds per image 🔥, which is probably the fastest we can go on a simple T4 without sacrificing quality.
Next, let's look into how to improve the quality!
## Quality Improvements
Now that our image generation pipeline is blazing fast, let's try to get maximum image quality.
First of all, image quality is extremely subjective, so it's difficult to make general claims here.
The most obvious step to take to improve quality is to use *better checkpoints*. Since the release of Stable Diffusion, many improved versions have been released, which are summarized here:
- *Official Release - 22 Aug 2022*: [Stable-Diffusion 1.4](https://huggingface.co/CompVis/stable-diffusion-v1-4)
- *20 October 2022*: [Stable-Diffusion 1.5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
- *24 Nov 2022*: [Stable-Diffusion 2.0](https://huggingface.co/stabilityai/stable-diffusion-2)
- *7 Dec 2022*: [Stable-Diffusion 2.1](https://huggingface.co/stabilityai/stable-diffusion-2-1)
Newer versions don't necessarily mean better image quality with the same parameters. People mentioned that *2.0* is slightly worse than *1.5* for certain prompts, but given the right prompt engineering *2.0* and *2.1* seem to be better.
Overall, we strongly recommend just trying the models out and reading up on advice online (e.g. it has been shown that using negative prompts is very important for 2.0 and 2.1 to get the highest possible quality; see for example [this nice blog post](https://minimaxir.com/2022/11/stable-diffusion-negative-prompt/)).
Additionally, the community has started fine-tuning many of the above versions on certain styles, some of which are of extremely high quality and have gained a lot of traction.
We recommend having a look at all [diffusers checkpoints sorted by downloads and trying out the different checkpoints](https://huggingface.co/models?library=diffusers).
For the following, we will stick to v1.5 for simplicity.
Next, we can also try to optimize single components of the pipeline, e.g. switching out the latent decoder. For more details on how the whole Stable Diffusion pipeline works, please have a look at [this blog post](https://huggingface.co/blog/stable_diffusion).
Let's load [stabilityai's fine-tuned autoencoder](https://huggingface.co/stabilityai/sd-vae-ft-mse).
``` python
from diffusers import AutoencoderKL
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16).to("cuda")
```
Now we can set it as the pipeline's VAE.
``` python
pipe.vae = vae
```
Let's run the same prompt as before to compare quality.
``` python
images = pipe(**get_inputs(batch_size=8)).images
image_grid(images, rows=2, cols=4)
```
![img](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/stable_diffusion_101/sd_101_6.png)
Seems like the difference is only very minor, but the new generations are arguably a bit *sharper*.
Cool, finally, let's look a bit into prompt engineering.
Our goal was to generate a photo of an old warrior chief. Let's now try to bring a bit more color into the photos and make them look more impressive.
Originally our prompt was "*portrait photo of a old warrior chief*".
To improve the prompt, it often helps to add cues of the kind that accompany high-quality photos online, as well as more details.
Essentially, when doing prompt engineering, one has to think:
- How would the photo I want, or similar photos, likely have been captioned or tagged on the internet?
- What additional details can I give that steer the model toward the style I want?
Cool, let's add more details.
``` python
prompt += ", tribal panther make up, blue on red, side profile, looking away, serious eyes"
```
and let's also add some cues that usually help to generate higher quality images.
``` python
prompt += " 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta"
prompt
```
Cool, let's now try this prompt.
``` python
images = pipe(**get_inputs(batch_size=8)).images
image_grid(images, rows=2, cols=4)
```
![img](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/stable_diffusion_101/sd_101_7.png)
Pretty impressive! We got some very high-quality image generations there. The 2nd image is my personal favorite, so I'll re-use its seed and see whether I can tweak the prompts slightly by using "oldest", "old", no adjective, and "young" to describe the warrior chief.
``` python
prompts = [
"portrait photo of the oldest warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta",
"portrait photo of a old warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta",
"portrait photo of a warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta",
"portrait photo of a young warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta",
]
generator = [torch.Generator("cuda").manual_seed(1) for _ in range(len(prompts))] # 1 because we want the 2nd image
images = pipe(prompt=prompts, generator=generator, num_inference_steps=25).images
image_grid(images)
```
![img](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/stable_diffusion_101/sd_101_8.png)
The first picture looks great! The eye position changed slightly but still looks natural. This finishes up our 101 guide on how to use Stable Diffusion 🤗.
For more information on optimization or other guides, I recommend taking a look at the following:
- [Blog post about Stable Diffusion](https://huggingface.co/blog/stable_diffusion): In-detail blog post explaining Stable Diffusion.
- [FlashAttention](https://huggingface.co/docs/diffusers/optimization/xformers): xFormers' flash attention can optimize your model even further with more speed and memory improvements.
- [Dreambooth](https://huggingface.co/docs/diffusers/training/dreambooth) - Quickly customize the model by fine-tuning it.
- [General info on Stable Diffusion](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/overview) - Info on other tasks that are powered by Stable Diffusion.


@@ -283,3 +283,5 @@ image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("dog-bucket.png")
```
You may also run inference from [any of the saved training checkpoints](#performing-inference-using-a-saved-checkpoint).


@@ -0,0 +1,155 @@
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# LoRA Support in Diffusers
Diffusers supports LoRA for faster fine-tuning of Stable Diffusion, allowing greater memory efficiency and easier portability.
Low-Rank Adaptation of Large Language Models (LoRA) was first introduced by Microsoft in
[LoRA: Low-Rank Adaptation of Large Language Models](https://arxiv.org/abs/2106.09685) by *Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen*.
In a nutshell, LoRA allows adapting pretrained models by adding pairs of rank-decomposition weight matrices (called **update matrices**)
to existing weights and **only** training those newly added weights. This has a couple of advantages:
- Previous pretrained weights are kept frozen so that the model is not so prone to [catastrophic forgetting](https://www.pnas.org/doi/10.1073/pnas.1611835114).
- Rank-decomposition matrices have significantly fewer parameters than the original model, which means that trained LoRA weights are easily portable.
- LoRA matrices are generally added to the attention layers of the original model, and they control the extent to which the model is adapted toward new training images via a `scale` parameter.
**__Note that the use of LoRA is not limited to attention layers. In the original LoRA work, the authors found that amending
just the attention layers of a language model is sufficient to obtain good downstream performance with great efficiency. This is why it's common
to add the LoRA weights only to the attention layers of a model.__**
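To make the idea concrete, here is a toy sketch in plain PyTorch (not the diffusers API; all dimensions are illustrative) of a frozen weight adapted by a pair of rank-decomposition matrices and a `scale` parameter:

```py
import torch

# a frozen pretrained weight W is adapted by the product of two small
# trainable rank-decomposition matrices A and B
d_in, d_out, rank = 768, 768, 4                   # rank << feature dims
W = torch.randn(d_out, d_in)                      # pretrained weight, kept frozen
A = torch.randn(rank, d_in, requires_grad=True)   # "down" projection
B = torch.zeros(d_out, rank, requires_grad=True)  # "up" projection, zero-init

scale = 1.0                                       # the `scale` parameter above
x = torch.randn(1, d_in)
y = x @ (W + scale * B @ A).T                     # adapted forward pass

# only A and B are trained: rank * (d_in + d_out) parameters
# instead of d_in * d_out
```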
[cloneofsimo](https://github.com/cloneofsimo) was the first to try out LoRA training for Stable Diffusion in the popular [lora](https://github.com/cloneofsimo/lora) GitHub repository.
<Tip>
LoRA allows us to achieve greater memory efficiency since the pretrained weights are kept frozen and only the LoRA weights are trained, thereby
allowing us to run fine-tuning on consumer GPUs like Tesla T4, RTX 3080 or even RTX 2080 Ti! One can get access to GPUs like T4 in the free
tiers of Kaggle Kernels and Google Colab Notebooks.
</Tip>
## Getting started with LoRA for fine-tuning
Stable Diffusion can be fine-tuned in different ways:
* [Textual inversion](https://huggingface.co/docs/diffusers/main/en/training/text_inversion)
* [DreamBooth](https://huggingface.co/docs/diffusers/main/en/training/dreambooth)
* [Text2Image fine-tuning](https://huggingface.co/docs/diffusers/main/en/training/text2image)
We provide two end-to-end examples that show how to run fine-tuning with LoRA:
* [DreamBooth](https://github.com/huggingface/diffusers/tree/main/examples/dreambooth#training-with-low-rank-adaptation-of-large-language-models-lora)
* [Text2Image](https://github.com/huggingface/diffusers/tree/main/examples/text_to_image#training-with-lora)
If you want to perform DreamBooth training with LoRA, for instance, you would run:
```bash
export MODEL_NAME="runwayml/stable-diffusion-v1-5"
export INSTANCE_DIR="path-to-instance-images"
export OUTPUT_DIR="path-to-save-model"
accelerate launch train_dreambooth_lora.py \
--pretrained_model_name_or_path=$MODEL_NAME \
--instance_data_dir=$INSTANCE_DIR \
--output_dir=$OUTPUT_DIR \
--instance_prompt="a photo of sks dog" \
--resolution=512 \
--train_batch_size=1 \
--gradient_accumulation_steps=1 \
--checkpointing_steps=100 \
--learning_rate=1e-4 \
--report_to="wandb" \
--lr_scheduler="constant" \
--lr_warmup_steps=0 \
--max_train_steps=500 \
--validation_prompt="A photo of sks dog in a bucket" \
--validation_epochs=50 \
--seed="0" \
--push_to_hub
```
A similar process can be followed to fine-tune Stable Diffusion with LoRA on a custom dataset using the
`examples/text_to_image/train_text_to_image_lora.py` script.
Refer to the respective examples linked above to learn more.
<Tip>
When using LoRA, we can use a much higher learning rate (typically 1e-4 as opposed to ~1e-6) compared to non-LoRA DreamBooth fine-tuning.
</Tip>
But there is no free lunch. For a given dataset and expected generation quality, you'd still need to experiment with
different hyperparameters. Here are some important ones:

* Training time
    * Learning rate
    * Number of training steps
* Inference time
    * Number of steps
    * Scheduler type
Additionally, you can follow [this blog](https://huggingface.co/blog/dreambooth) that documents some of our experimental
findings for performing DreamBooth training of Stable Diffusion.
When fine-tuning, the LoRA update matrices are only added to the attention layers. To enable this, we added new weight
loading functionalities. Their details are available [here](https://huggingface.co/docs/diffusers/main/en/api/loaders).
## Inference
Assuming you used the `examples/text_to_image/train_text_to_image_lora.py` script to fine-tune Stable Diffusion on the [Pokemon
dataset](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions), you can perform inference like so:
```py
from diffusers import StableDiffusionPipeline
import torch
model_path = "sayakpaul/sd-model-finetuned-lora-t4"
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16)
pipe.unet.load_attn_procs(model_path)
pipe.to("cuda")
prompt = "A pokemon with blue eyes."
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("pokemon.png")
```
Here are some example images you can expect:
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pokemon-collage.png"/>
[`sayakpaul/sd-model-finetuned-lora-t4`](https://huggingface.co/sayakpaul/sd-model-finetuned-lora-t4) contains [LoRA fine-tuned update matrices](https://huggingface.co/sayakpaul/sd-model-finetuned-lora-t4/blob/main/pytorch_lora_weights.bin)
in a file that is only 3 MB in size. During inference, the pre-trained Stable Diffusion checkpoint is loaded alongside these update
matrices, and the two are combined to run inference.
You can use the [`huggingface_hub`](https://github.com/huggingface/huggingface_hub) library to retrieve the base model
from [`sayakpaul/sd-model-finetuned-lora-t4`](https://huggingface.co/sayakpaul/sd-model-finetuned-lora-t4) like so:
```py
from huggingface_hub.repocard import RepoCard
card = RepoCard.load("sayakpaul/sd-model-finetuned-lora-t4")
base_model = card.data.to_dict()["base_model"]
# 'CompVis/stable-diffusion-v1-4'
```
And then you can use `pipe = StableDiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.float16)`.
This is especially useful when you don't want to hardcode the base model identifier when initializing the `StableDiffusionPipeline`.
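Putting the pieces together, a minimal sketch of this dynamic loading flow (reusing the example repository from above) could look like this:

```py
from huggingface_hub.repocard import RepoCard
from diffusers import StableDiffusionPipeline
import torch

lora_model_id = "sayakpaul/sd-model-finetuned-lora-t4"

# look up the base model from the LoRA repo's model card instead of hardcoding it
base_model = RepoCard.load(lora_model_id).data.to_dict()["base_model"]

pipe = StableDiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.float16)
pipe.unet.load_attn_procs(lora_model_id)
pipe.to("cuda")
```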
Inference for DreamBooth training remains the same. Check
[this section](https://github.com/huggingface/diffusers/tree/main/examples/dreambooth#inference-1) for more details.
## Known limitations
* Currently, we only support LoRA for the attention layers of [`UNet2DConditionModel`](https://huggingface.co/docs/diffusers/main/en/api/models#diffusers.UNet2DConditionModel).


@@ -37,6 +37,7 @@ Training examples show how to pretrain or fine-tune diffusion models for a varie
- [Text-to-Image Training](./text2image)
- [Text Inversion](./text_inversion)
- [Dreambooth](./dreambooth)
- [LoRA Support](./lora)
If possible, please [install xFormers](../optimization/xformers) for memory-efficient attention. This could help make your training faster and less memory intensive.
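For instance, once xFormers is installed (e.g. via `pip install xformers`), memory-efficient attention can typically be enabled with a single call; `pipe` here stands for any loaded diffusers pipeline:

```python
# sketch: enable xFormers memory-efficient attention on an existing pipeline
pipe.enable_xformers_memory_efficient_attention()
```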
