Compare commits

1407 Commits

Author SHA1 Message Date
Patrick von Platen
180841bbde Release: v0.12.0 2023-01-25 15:48:00 +02:00
Patrick von Platen
6ba2231d72 Reproducibility 3/3 (#1924)
* make tests deterministic

* run slow tests

* prepare for testing

* finish

* refactor

* add print statements

* finish more

* correct some test failures

* more fixes

* set up to correct tests

* more corrections

* up

* fix more

* more prints

* add

* up

* up

* up

* uP

* uP

* more fixes

* uP

* up

* up

* up

* up

* fix more

* up

* up

* clean tests

* up

* up

* up

* more fixes

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* make

* correct

* finish

* finish

Co-authored-by: Suraj Patil <surajp815@gmail.com>
2023-01-25 13:44:22 +01:00
Patrick von Platen
008c22d334 Improve transformers versions handling (#2104) 2023-01-25 12:50:54 +01:00
Patrick von Platen
b562b6611f Allow directly passing text embeddings to Stable Diffusion Pipeline for prompt weighting (#2071)
* add text embeds to sd

* add text embeds to sd

* finish tests

* finish

* finish

* make style

* fix tests

* make style

* make style

* up

* better docs

* fix

* fix

* new try

* up

* up

* finish
2023-01-25 12:29:49 +01:00
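
A minimal sketch of the call pattern this PR enables; the `prompt_embeds` argument name and the model id are assumptions based on the v0.12-era API:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("cuda")

# Encode the prompt manually instead of passing a string to the pipeline.
text_inputs = pipe.tokenizer(
    "a photo of an astronaut riding a horse",
    padding="max_length",
    max_length=pipe.tokenizer.model_max_length,
    truncation=True,
    return_tensors="pt",
)
with torch.no_grad():
    prompt_embeds = pipe.text_encoder(text_inputs.input_ids.to("cuda"))[0]

# A prompt-weighting implementation would rescale per-token slices of
# `prompt_embeds` here before handing them to the pipeline.
image = pipe(prompt_embeds=prompt_embeds).images[0]
```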
Sayak Paul
c1184918c5 [docs] Adds a doc on LoRA support for diffusers (#2086)
* add: a doc on LoRA support in diffusers.

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* apply PR suggestions.

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* remove visually incoherent elements.

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-01-25 12:23:12 +01:00
apolinario
263b968041 Add lora tag to the model tags (#2103)
* Add `lora` tag to the model tags

For lora training

* uP

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-25 12:17:59 +01:00
Suraj Patil
480d8846a9 [doc] update example for pix2pix (#2101)
update example for pix2pix
2023-01-25 11:22:09 +01:00
patil-suraj
9dbf78e2f1 Merge branch 'main' of https://github.com/huggingface/diffusers 2023-01-25 09:12:49 +01:00
patil-suraj
9aa6fcab60 fix docs for center_crop 2023-01-25 09:12:47 +01:00
Pedro Cuenca
f37d880f6a Remove wandb from text_to_image requirements.txt (#2092) 2023-01-25 08:54:14 +01:00
Will Berman
febaf86302 [docs] [dreambooth] note random crop (#2085)
* [docs] [dreambooth] note random crop

documenting default random crop behavior
2023-01-24 08:37:34 -08:00
Takuma Mori
16bb5058b9 xFormers attention op arg (#2049)
* allow passing op to xFormers attention

original code by @patil-suraj
huggingface/diffusers@ae0cc0b71f

* correct style by `make style`

* add attention_op arg documents

* add usage example to docstring

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* add usage example to docstring

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* code style correction by `make style`

* Update docstring code to a valid python example

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Update docstring code to a valid python example

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* style correction by `make style`

* Update code example to be fully functional

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2023-01-24 17:26:04 +01:00
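
The new `attention_op` argument lets callers pick a specific xformers kernel rather than the default dispatch. A usage sketch, assuming xformers is installed and provides `MemoryEfficientAttentionFlashAttentionOp`:

```python
import xformers.ops
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("cuda")
# Force the Flash-Attention kernel instead of letting xformers dispatch.
pipe.enable_xformers_memory_efficient_attention(
    attention_op=xformers.ops.MemoryEfficientAttentionFlashAttentionOp
)
image = pipe("a red panda playing a guitar").images[0]
```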
Lincoln Stein
7533e3d7e6 [Feat] checkpoint_merger works on local models as well as ones that use safetensors (#2060)
* allow a local model directory to be used for merging

* moved checkpoint merge bugfix into main for testing

* possibly fix local variable "config_dict" referenced before assignment

* fix deprecation warning

* debugging...

* debugging

* allow safetensors

* safetensors try again

* fix syntax error

* further debugging

* fix logic error when checkpoint 2 is none

* more debugging...

* more debugging...

* more debugging...

* more debugging...

* debugging

* clean up status reporting

* skip the requires_safety_checker boolean

* moved checkpoint merge bugfix into main for testing

* possibly fix local variable "config_dict" referenced before assignment

* fix deprecation warning

* allow safetensors

* fix logic error when checkpoint 2 is none

* clean up status reporting

* undo hack to use private repo for community pipelines

* allow a local model directory to be used for merging

* allow safetensors

* clean up status reporting

* reformatted with black

* sort imported modules correctly

* Update examples/community/checkpoint_merger.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update examples/community/checkpoint_merger.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update examples/community/checkpoint_merger.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* fix import style error

Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-24 16:35:17 +01:00
Yuta Hayashibe
418331094d Run inference on a specific condition and fix call of manual_seed() (#2074) 2023-01-24 14:19:22 +01:00
Suraj Patil
fc8afa3ab5 [dreambooth] fix multi on gpu. (#2088)
unwrap model on multi gpu
2023-01-24 13:23:56 +01:00
Pedro Cuenca
31336dae3b Fix resume epoch for all training scripts except textual_inversion (#2079) 2023-01-24 12:02:41 +01:00
Pedro Cuenca
0e98e83927 [lora] Log images when using tensorboard (#2078)
* [lora] Log images when using tensorboard.

* Specify image format instead of transposing.

As discussed with @sayakpaul.

* Style
2023-01-24 10:26:39 +01:00
Pedro Cuenca
f4dddaf5ee [textual_inversion] Fix resuming state when using gradient checkpointing (#2072)
* Fix resuming state when using gradient checkpointing.

Also, allow --resume_from_checkpoint to be used when the checkpoint does
not yet exist (a normal training run will be started).

* style
2023-01-24 10:25:41 +01:00
Patrick von Platen
7d8b4f7f8e [Paint by example] Fix cpu offload for paint by example (#2062)
* [Paint by example] Fix paint by example

* fix more

* final fix
2023-01-24 10:05:50 +01:00
Gleb Akhmerov
a66f2baeb7 Dreambooth: reduce VRAM usage (#2039)
* Dreambooth: use `optimizer.zero_grad(set_to_none=True)` to reduce VRAM usage

* Allow the user to control `optimizer.zero_grad(set_to_none=True)` with --set_grads_to_none

* Update Dreambooth readme

* Fix link in readme

* Fix header size in readme
2023-01-23 12:21:03 +01:00
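
The saving comes from standard PyTorch behavior rather than a new diffusers API: `zero_grad(set_to_none=True)` drops the gradient buffers instead of zero-filling them in place. A self-contained sketch of the pattern the new `--set_grads_to_none` flag toggles:

```python
import torch

model = torch.nn.Linear(4, 4)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

for step in range(10):
    loss = model(torch.randn(8, 4)).pow(2).mean()
    loss.backward()
    optimizer.step()
    # Drop .grad tensors instead of zeroing them, so their memory can be
    # freed between steps -- the VRAM reduction this PR targets.
    optimizer.zero_grad(set_to_none=True)
```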
Suraj Patil
6fedb29f11 [examples] add dataloader_num_workers argument (#2070)
add --dataloader_num_workers argument
2023-01-23 10:58:03 +01:00
cafe+ai — かふぇあい
d75ad93ca7 Safetensors loading in "convert_diffusers_to_original_stable_diffusion" (#2054)
* Safetensors loading in "convert_diffusers_to_original_stable_diffusion"

Adds diffusers-format safetensors loading support

* Fix import sort order: convert_diffusers_to_original_stable_diffusion.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-23 09:44:55 +01:00
Sayak Paul
ffb3a26c5c [LoRA] Adds example on text2image fine-tuning with LoRA (#2031)
* example on fine-tuning with LoRA.

* apply make quality.

* fix: pipeline loading.

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* apply suggestions for PR review.

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* apply make style and make quality.

* chore: remove mention of dreambooth from text2image.

* add: weight path and wandb run link.

* Apply suggestions from code review

* apply make style.

* make style

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2023-01-23 08:31:07 +01:00
wrong.wang
b15a951a48 add community pipeline: StableUnCLIPPipeline (#2037)
* add community pipeline: StableUnCLIPPipeline

* reformat stable_unclip.py with isort and black
2023-01-22 21:03:42 +01:00
Patrick von Platen
69c76173fa fix tests 2023-01-22 14:31:05 +02:00
Patrick von Platen
926b34b40c improve tests 2023-01-22 14:30:15 +02:00
Patrick von Platen
8d326e61cf Correct Pix2Pix example (#2056)
* Correct Pix2Pix example

- no advertisement of revision -> it'll be deprecated soon
- by default safety checker should be used

* Update docs/source/en/api/pipelines/stable_diffusion/pix2pix.mdx

* up
2023-01-21 15:56:29 +01:00
Patrick von Platen
59b7339a84 [From pretrained] Don't download .safetensors files if safetensors is… (#2057)
* [From pretrained] Don't download .safetensors files if safetensors is not available

* tests

* tests

* up
2023-01-21 15:51:33 +01:00
Suraj Patil
aa265f74bd [StableDiffusionInstructPix2Pix] use cpu generator in slow tests (#2051)
* use cpu generator in slow tests

* fix get_inputs
2023-01-20 21:43:00 +02:00
Damian Stewart
3d2f24b099 Module-ise "original stable diffusion to diffusers" conversion script (#2019)
* convert __main__ to a function call and call it

* add missing type hint

* make style check pass

* move loading to src/diffusers

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-20 17:30:44 +01:00
Lucain
bcb476797c Remove modelcards dependency (#2050)
* Switch to huggingface_hub.ModelCard

* Remove modelcards dependency in favor of Jinja2
2023-01-20 16:39:42 +01:00
Lucain
5ea4be86ab Create repo before cloning in examples (#2047)
* Create repo before cloning in examples

* code quality
2023-01-20 16:38:37 +01:00
Suraj Patil
e5ff75540c Add InstructPix2Pix pipeline (#2040)
* begin pix2pix

* fix

* cfg image_latents

* fix some docstr

* fix

* fix

* hack

* fix

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* add comments to explain the hack

* move __call__ to the top

* doc

* remove height and width

* remove deprecations

* fix doc str

* quality

* fast tests

* change model id

* fast tests

* fix test

* address Pedro's comments

* copyright

* Simple doc page.

* Apply suggestions from code review

* style

* Remove import

* address some review comments

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* style

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-20 16:25:46 +01:00
hysts
3ecbbd6288 Minor fix in the documentation of LoRA (#2045)
Fix
2023-01-20 13:19:54 +01:00
Anton Lozhkov
7c82a16fc1 Fix EMA for multi-gpu training in the unconditional example (#1930)
* improve EMA

* style

* one EMA model

* quality

* fix tests

* fix test

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* reorganise the unconditional script

* backwards compatibility

* default to init values for some args

* fix ort script

* issubclass => isinstance

* update state_dict

* docstr

* doc

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* use .to if device is passed

* deprecate device

* make flake happy

* fix typo

Co-authored-by: patil-suraj <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-19 11:35:55 +01:00
Patrick von Platen
f354dd9e2f [Save Pretrained] Remove dead code lines that can accidentally remove pytorch files (#2038)
correct safetensors
2023-01-19 10:11:27 +01:00
Patrick von Platen
007c914c70 [Lora] Model card (#2032)
* [Lora] up lora training

* finish

* finish

* finish model card
2023-01-19 09:44:02 +01:00
Patrick von Platen
3c07840b1b make style 2023-01-19 02:22:47 +01:00
Joqsan
fcb2ec8c2f Fix typos and minor redundancies (#2029)
fix typos and minor redundancies

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-19 02:22:18 +01:00
Patrick von Platen
013955b5a7 [Dit] Fix dit tests (#2034)
* [Dit] Fix dit tests

* up
2023-01-19 01:50:22 +01:00
Patrick von Platen
ed616bd8a8 [LoRA] Add LoRA training script (#1884)
* [Lora] first upload

* add first lora version

* upload

* more

* first training

* up

* correct

* improve

* finish loaders and inference

* up

* up

* fix more

* up

* finish more

* finish more

* up

* up

* change year

* revert year change

* Change lines

* Add cloneofsimo as co-author.

Co-authored-by: Simo Ryu <cloneofsimo@gmail.com>

* finish

* fix docs

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>

* upload

* finish

Co-authored-by: Simo Ryu <cloneofsimo@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2023-01-18 18:05:51 +01:00
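
For orientation, a sketch of how weights produced by this training script were meant to be consumed at inference; the `load_attn_procs` loader name is assumed from the v0.12 loaders module, and the checkpoint path is hypothetical:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Load the LoRA attention-processor weights saved by the training script.
pipe.unet.load_attn_procs("path/to/lora-output-dir")  # hypothetical path
image = pipe("a pokemon with blue eyes").images[0]
```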
Suraj Patil
ac3fc64906 fix dit doc header (#2027)
The link in the main heading is not rendered on the doc page; it displays the raw text as-is.
2023-01-18 10:31:24 +01:00
Kashif Rasul
37d113cce7 DiT Pipeline (#1806)
* added dit model

* import

* initial pipeline

* initial convert script

* initial pipeline

* make style

* raise valueerror

* single function

* rename classes

* use DDIMScheduler

* timesteps embedder

* samples to cpu

* fix var names

* fix numpy type

* use timesteps class for proj

* fix typo

* fix arg name

* flip_sin_to_cos and better var names

* fix C shape cal

* make style

* remove unused imports

* cleanup

* add back patch_size

* initial dit doc

* typo

* Update docs/source/api/pipelines/dit.mdx

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* added copyright license headers

* added example usage and toc

* fix variable names asserts

* remove comment

* added docs

* fix typo

* upstream changes

* set proper device for drop_ids

* added initial dit pipeline test

* update docs

* fix imports

* make fix-copies

* isort

* fix imports

* get rid of more magic numbers

* fix code when guidance is off

* remove block_kwargs

* cleanup script

* removed to_2tuple

* use FeedForward class instead of another MLP

* style

* work on merging DiTBlock with BasicTransformerBlock

* added missing final_dropout and args to BasicTransformerBlock

* use norm from block

* fix arg

* remove unused arg

* fix call to class_embedder

* use timesteps

* make style

* attn_output gets multiplied

* removed commented code

* use Transformer2D

* use self.is_input_patches

* fix flags

* fixed conversion to use Transformer2DModel

* fixes for pipeline

* remove dit.py

* fix timesteps device

* use randn_tensor and fix fp16 inf.

* timesteps_emb already the right dtype

* fix dit test class

* fix test and style

* fix norm2 usage in vq-diffusion

* added author names to pipeline and ImageNet labels link

* fix tests

* use norm_type as string

* rename dit to transformer

* fix name

* fix test

* set norm_type = "layer" by default

* fix tests

* do not skip common tests

* Update src/diffusers/models/attention.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* revert AdaLayerNorm API

* fix norm_type name

* make sure all components are in eval mode

* revert norm2 API

* compact

* finish deprecation

* add slow tests

* remove @

* refactor some stuff

* upload

* Update src/diffusers/pipelines/dit/pipeline_dit.py

* finish more

* finish docs

* improve docs

* finish docs

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: William Berman <WLBberman@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-17 23:09:29 +01:00
Pedro Cuenca
7e29b747f9 Check k-diffusion version is at least 0.0.12 (#2022)
* Check k-diffusion version is at least 0.0.12

* make style
2023-01-17 21:16:46 +01:00
Jerry Jiarui XU
a43bdd01cd [Flax] Add Flax inpainting impl (#1966)
* [Flax] Add Flax inpainting impl

* fixed copies, add README.md

* fixed README.md

* add test

* format

* update README.md
2023-01-17 10:42:04 +01:00
Patrick von Platen
f77ff56158 [Docs] No more autocast (#2021)
no more autocast
2023-01-17 10:31:25 +01:00
Suraj Patil
f861cde14f [train_unconditional] fix LR scheduler init (#2010)
fix lr scheduler
2023-01-17 10:11:46 +01:00
William Dalheim
b2ea8a84e9 Change PNDMPipeline to use PNDMScheduler (#2003)
* pndmpipeline uses pndmscheduler

* formatted pipeline_pndm
2023-01-16 15:34:59 +01:00
Will Berman
07c0fe4b87 Use pipeline tests mixin for UnCLIP pipeline tests + unCLIP MPS fixes (#1908)
re: https://github.com/huggingface/diffusers/issues/1857

We relax some of the checks to deal with unclip reproducibility issues. Mainly by checking the average pixel difference (measured w/in 0-255) instead of the max pixel difference (measured w/in 0-1).

- [x] add mixin to UnCLIPPipelineFastTests
- [x] add mixin to UnCLIPImageVariationPipelineFastTests
- [x] Move UnCLIPPipeline flags in mixin to base class
- [x] Small MPS fixes for F.pad and F.interpolate
- [x] Made test unCLIP model's dimensions smaller to run tests faster
2023-01-16 15:21:58 +01:00
Haofan Wang
1e651ca2c9 Fix typos in ColossalAI example (#2001)
fix typos
2023-01-16 15:21:04 +01:00
Patrick von Platen
522f8aa7b2 [Black] Update black library (#2007) 2023-01-16 15:16:28 +01:00
Patrick von Platen
8a3f0c1f71 [Conversion] Improve safetensors (#1989) 2023-01-16 14:26:56 +01:00
Patrick von Platen
f6a5c359cc [Community] Fix merger (#2006)
* [Community] Fix merger

* finish
2023-01-16 14:25:25 +01:00
蓝色的秋风
651c5adf8a [Conversion] Support convert diffusers to safetensors (#1996)
fix: support diffusers to safetensors
2023-01-16 12:58:01 +01:00
Erin
cc2cc00d20 Add tests for 2D UNet blocks (#1945)
* test unet blocks 2d

* change to randn_tensor

* mps flaky
2023-01-16 12:53:05 +01:00
Pedro Cuenca
8f58159159 Fix a couple typos in Dreambooth readme (#2004)
Fix a couple typos in Dreambooth readme.
2023-01-16 11:38:07 +01:00
Sayak Paul
216d190178 Update README.md to include our blog post (#1998)
* Update README.md

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-01-16 09:16:54 +01:00
Vladimir Sotnikov
9b37ed33b5 [SD Img2Img] resize source images to multiple of 8 instead of 32 (#1571)
* [Stable Diffusion Img2Img] resize source images to integer multiple of 8 instead of 32

* [Alt Diffusion Img2Img] resize source images to multiple of 8 instead of 32

* [Img2Img] fix AltDiffusion Img2Img resolution test

* [Img2Img] add Stable Diffusion Img2Img resolution test

* [Cycle Diffusion] round resolution to multiplies of 8 instead of 32

* [ONNX SD Img2Img] round resolution to multiplies of 64 instead of 32

* [SD Depth2Img] round resolution to multiplies of 8 instead of 32

* [Repaint] round resolution to multiplies of 8 instead of 32

* fix make style

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-13 16:02:22 +01:00
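
The change in a nutshell, as a standalone sketch: source images are rounded down to a multiple of 8 (the VAE's spatial downsampling factor) instead of 32, so less of the image is cropped away:

```python
from PIL import Image

def resize_to_multiple_of_8(image: Image.Image) -> Image.Image:
    w, h = image.size
    # Round each side down to the nearest multiple of 8, e.g. 516 -> 512.
    w, h = w - w % 8, h - h % 8
    return image.resize((w, h))
```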
Patrick von Platen
135567f18e make style 2023-01-12 20:02:41 +00:00
Walter Hugo Lopez Pinaya
9a5d3322e7 Update docstring (#1971) 2023-01-12 21:01:40 +01:00
camenduru
f73ed17961 Allow converting Flax to PyTorch by adding a "from_flax" keyword (#1900)
* from_flax

* oops

* oops

* make style with pip install -e ".[dev]"

* oops

* now code quality happy 😋

* allow_patterns += FLAX_WEIGHTS_NAME

* Update src/diffusers/pipelines/pipeline_utils.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/pipeline_utils.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/pipeline_utils.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/pipeline_utils.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/models/modeling_utils.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/pipeline_utils.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* for test

* bye bye is_flax_available()

* oops

* Update src/diffusers/models/modeling_pytorch_flax_utils.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update src/diffusers/models/modeling_pytorch_flax_utils.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update src/diffusers/models/modeling_pytorch_flax_utils.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update src/diffusers/models/modeling_utils.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update src/diffusers/models/modeling_utils.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* make style

* add test

* finish

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-01-12 20:00:35 +01:00
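
A usage sketch of the new keyword; it assumes the referenced repo hosts Flax-format weights, which are converted to PyTorch on the fly:

```python
from diffusers import StableDiffusionPipeline

# Load Flax weights into a PyTorch pipeline when no PyTorch checkpoint exists.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", from_flax=True
)
```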
Katsuya
9147c4c954 Fix unused upcast_attn flag in convert_original_stable_diffusion_to_diffusers script (#1942)
Fix unused upcast_attn flag in sd to diffusers script
2023-01-12 19:55:40 +01:00
Patrick von Platen
6d3adf6570 Fix slow tests (#1983)
* [Slow tests] Fix tests

* Update tests/pipelines/karras_ve/test_karras_ve.py
2023-01-12 18:24:51 +01:00
Patrick von Platen
dbdd585cad Example tests (#1982)
* Example tests

* fix
2023-01-12 17:39:37 +01:00
klopsahlong
7f0eb35af3 Research project multi subject dreambooth (#1948)
* implemented multi subject dreambooth in research_projects

* minor edits to readme

* added style and quality fixes

Co-authored-by: Krista Opsahl-Ong <kristaopsahlong@gmail.com>
2023-01-12 11:42:35 +01:00
Haofan Wang
40aa162808 [Docs] Update README.md (#1960)
Update README.md
2023-01-12 11:34:10 +01:00
Patrick von Platen
f06e4e5579 make style 2023-01-12 11:33:11 +01:00
Patrick von Platen
57f7d25934 [CPU offload] correct cpu offload (#1968)
* [CPU offload] correct cpu offload

* [CPU offload] correct cpu offload

* finish

* finish

* Update docs/source/en/optimization/fp16.mdx

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update src/diffusers/pipelines/alt_diffusion/pipeline_alt_diffusion.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-01-12 11:23:52 +01:00
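
For context, a sketch of the API being fixed, assuming the v0.12-era `enable_sequential_cpu_offload` interface (requires accelerate):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
# Note: the pipeline is *not* moved to CUDA first; offloading moves each
# submodule to the GPU only while it runs, then back to the CPU.
pipe.enable_sequential_cpu_offload()
image = pipe("a lighthouse at dusk").images[0]
```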
Michael Krasnyk
50b6513531 Update CLIPGuidedStableDiffusion.feature_extractor.size to fix TypeError (#1938)
TypeError: Size should be int or sequence. Got <class 'dict'>
2023-01-11 10:52:48 +01:00
Patrick von Platen
d1d5451b64 [Community] Correct checkpoint merger (#1965) 2023-01-10 15:11:50 +01:00
Patrick von Platen
f6f1ec3a7c allow loading ddpm models into ddim (#1932) 2023-01-10 14:52:32 +01:00
Patrick von Platen
beb932c5d1 [Conversion SD] Make sure weirdly sorted keys work as well (#1959) 2023-01-10 01:23:14 +01:00
andreemic
4401e6aa2b fix typo in imagic_stable_diffusion.py (#1956)
changed the named arg passed to self.tokenizer from `truncaton` to `truncation`, according to the Tokenizer docs (https://huggingface.co/docs/tokenizers/api/tokenizer#tokenizers.Tokenizer.truncation)
2023-01-09 20:34:19 +01:00
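
For reference, a correct tokenizer call of the kind this fix restores; because the misspelled `truncaton` kwarg was not recognized, no truncation was applied before the fix:

```python
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_inputs = tokenizer(
    "a very long prompt " * 50,
    padding="max_length",
    max_length=tokenizer.model_max_length,
    truncation=True,  # correctly spelled, so over-long prompts are clipped
    return_tensors="pt",
)
```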
Fazzie-Maqianli
089f0f4c98 update to latest colossalai (#1951) 2023-01-09 19:47:41 +01:00
Patrick von Platen
aba2a65d6a Add automatic doc sorting (#1940)
* automatically sort docs

* add new check toc doc

* add new check toc doc

* add

* add new check toc doc

* add

* add new check toc doc

* correct

* finalize
2023-01-06 17:24:47 +01:00
vvssttkk
9f4c4f5e82 fix path to logo (#1939) 2023-01-06 16:12:30 +01:00
Patrick von Platen
409387889d [Conversion] Make sure ema weights are extracted correctly (#1937)
* [Conversion] Make sure ema weights are extracted correctly

* up

* finish
2023-01-06 07:08:39 +01:00
Patrick von Platen
2533f92532 [Stable Diffusion Guide] 101 Stable Diffusion Guide directly into the docs (#1927)
* finish

* improve

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-01-05 21:14:23 +01:00
Patrick von Platen
f6af0d1f33 move to intro 2023-01-05 20:57:43 +01:00
Will Berman
247b5feea1 [dreambooth] low precision guard (#1916)
* [dreambooth] low precision guard

* fix

* add docs to cli args

* Update examples/dreambooth/train_dreambooth.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* style

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-05 16:54:56 +01:00
Shubhamai
7101c7316b [StableDiffusionimg2img] validating input type (#1913)
* [StableDiffusionimg2img] validating input type

* fixing tests

* running make style

* make fix-copies
2023-01-05 12:01:00 +01:00
Chanran Kim
f6f4176294 [Docs] Add TRANSLATING.md file (#1920)
* init for korean docs

* edit build yml file for multi language docs

* edit one more build yml file for multi language docs

* add title for get_frontmatter error

* add translating.md

* default language for docs is en

* Update docs/TRANSLATING.md

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-05 09:42:43 +01:00
Fazzie-Maqianli
d8062ad700 Feature/colossalai (#1793)
Support ColossalAI for Dreambooth
2023-01-05 08:53:42 +01:00
qsh-zh
be99201a56 feat: add log-rho DEIS multistep scheduler (#1432)
* feat: add log-rho DEIS multistep scheduler

* docs: fix typo

* docs: add docs for the implemented algorithm

* docs: remove duplicate ref

* finish deis

* add docs

* fix

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-05 00:09:30 +01:00
Patrick von Platen
9b63854886 Improve reproducibility 2/3 (#1906)
* [Repro] Correct reproducibility

* up

* up

* uP

* up

* need better image

* allow conversion from no state dict checkpoints

* up

* up

* up

* up

* check tensors

* check tensors

* check tensors

* check tensors

* next try

* up

* up

* better name

* up

* up

* Apply suggestions from code review

* correct more

* up

* replace all torch randn

* fix

* correct

* correct

* finish

* fix more

* up
2023-01-04 23:51:17 +01:00
Peter Willemsen
67e2f95cc4 New Pipeline: Tiled-upscaling with depth perception to avoid blurry spots (#1615)
* added first version of the tiled upscaling pipeline

* reformatted to pass code quality tests
2023-01-04 23:07:58 +01:00
Chanran Kim
75d53cc839 Init for korean docs (#1910)
* init for korean docs

* edit build yml file for multi language docs

* edit one more build yml file for multi language docs

* add title for get_frontmatter error
2023-01-04 22:59:42 +01:00
Erin
9e17983d9f Test ResnetBlock2D (#1850)
* test resnet block

* fix code format required by isort

* add torch device

* nit
2023-01-04 22:57:32 +01:00
Patrick von Platen
cb8a3dbe34 make style 2023-01-04 21:50:17 +00:00
Yasyf Mohamedali
bcd6f3f9ce Various Fixes for Flax Dreambooth (#1782)
* Various Fixes for Flax Dreambooth

- Correctly update the progress bar every epoch
- Allow specifying a pretrained VAE
- Allow specifying a revision to pretrained models
- Cache compiled models between invocations (speeds up TPU execution a lot!)
- Save intermediate checkpoints by specifying `save_steps`

* Don't die when save_steps is not set.

* Address CR

* Address comments

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-01-04 22:49:56 +01:00
Alex Redden
19a0ce4a47 Fix lr-scaling store_true & default=True cli argument for textual_inversion training. (#1090)
Fix default lr-scaling cli argument
2023-01-04 15:43:41 +01:00
Yasyf Mohamedali
856331c61b Support training SD V2 with Flax (#1783)
* Support training SD V2 with Flax

Mostly involves supporting a v_prediction scheduler.

The implementation in #1777 doesn't take into account a recent refactor of `scheduling_utils_flax`, so this should be used instead.

* Add to other top-level files.
2023-01-04 13:19:22 +01:00
Manfred Lindmark
f7154f859c Fix --resume_from_checkpoint step in train_text_to_image.py (#1914)
fix resume step in train_text_to_image example
2023-01-04 13:05:55 +01:00
Joqsan
675ef1ffbd fix: DDPMScheduler.set_timesteps() (#1912) 2023-01-04 13:02:50 +01:00
Patrick von Platen
d67c305120 allow conversion from no state dict checkpoints 2023-01-03 19:48:13 +00:00
Patrick von Platen
2bd53a940c [Docs] Remove duplicated API doc string (#1901)
only have api docstring once for sD
2023-01-03 19:11:48 +01:00
Patrick von Platen
8ed08e4270 [Deterministic torch randn] Allow tensors to be generated on CPU (#1902)
* [Deterministic torch randn] Allow tensors to be generated on CPU

* fix more

* up

* fix more

* up

* Update src/diffusers/utils/torch_utils.py

Co-authored-by: Anton Lozhkov <anton@huggingface.co>

* Apply suggestions from code review

* up

* up

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-01-03 18:22:40 +01:00
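
Together with the other reproducibility PRs in this range, this enables seeding on the CPU while running the pipeline on CUDA; a sketch assuming the standard `generator` argument:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("cuda")

# Initial latents are drawn on the CPU, so the same seed yields the same
# image across machines and GPU types.
generator = torch.Generator(device="cpu").manual_seed(0)
image = pipe("a photo of a corgi", generator=generator).images[0]
```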
neverix
0df83c79e4 Fixes in comments in SD2 D2I (#1903) 2023-01-03 16:24:36 +01:00
Robert Dargavel Smith
4a7e4cec38 Add conditional generation to AudioDiffusionPipeline (#1826)
* Add conditional generation

* add fast test for conditional audio generation
2023-01-03 14:09:14 +01:00
aengusng8
f45c675d2c [addresses issue #1642] add add_noise to scheduling-sde-ve (#1827)
* add add_noise to scheduling-sde-ve

* run Black formater
2023-01-03 14:08:41 +01:00
Anton Lozhkov
1bf4f0da7e Add accelerate and xformers versions to diffusers-cli env (#1898)
Add accelerate and xformers to diffusers-cli env
2023-01-03 13:51:27 +01:00
Anton Lozhkov
f17fae641c Add UnCLIPImageVariationPipeline to dummy imports (#1897)
* Add UnCLIPImageVariationPipeline to dummy imports

* style
2023-01-03 11:57:56 +01:00
YiYi Xu
da31075700 updated doc for stable diffusion pipelines (#1770)
* add a doc page for each pipeline under api/pipelines/stable_diffusion
* add pipeline examples to docstrings
* updated stable_diffusion_2 page
* updated default markdown syntax to list methods based on https://github.com/huggingface/diffusers/pull/1870
* add function decorator

Co-authored-by: yiyixuxu <yixu@Yis-MacBook-Pro.lan>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-02 11:35:51 -10:00
Pedro Cuenca
8c14ca3d43 Fixes to the help for report_to in training scripts (#1888)
Fixes to the help for report_to in training scripts.
2023-01-02 15:53:28 +01:00
Suraj Patil
fa1f4701e8 [examples] misc fixes (#1886)
* misc fixes

* more comments

* Update examples/textual_inversion/textual_inversion.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* set transformers verbosity to warning

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-02 14:09:01 +01:00
agizmo
423c3a4cc6 Update ONNX Pipelines to use np.float64 instead of np.float (#1789)
NumPy 1.24 removed the "float" scalar alias, which had been deprecated in v1.20.
https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
https://numpy.org/devdocs/release/1.24.0-notes.html#expired-deprecations
2023-01-02 13:21:49 +01:00
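
A minimal illustration of the change (pure NumPy, no diffusers specifics):

```python
import numpy as np

# np.float was an alias for the builtin float: deprecated in NumPy 1.20 and
# removed in 1.24, where it raises AttributeError. Name a concrete dtype:
timesteps = np.array([0, 250, 500], dtype=np.float64)
sigma = np.float64(0.5)
```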
Pedro Cuenca
f769d74b0f Fix typo in train_dreambooth_inpaint (#1885)
Fix typo in train_dreambooth_inpaint.
2023-01-02 11:50:58 +01:00
Patrick von Platen
21bbc633c4 [Attention] Finish refactor attention file (#1879)
* [Attention] Finish refactor attention file

* correct more

* fix

* more fixes

* correct

* up
2023-01-01 19:18:10 +01:00
Suraj Patil
62608a9102 [train_text_to_image] allow using non-ema weights for training (#1834)
* allow using non-ema weights for training

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* address more review comment

* reorganise a few lines

* always pad text to max_length to match original training

* fix collate_fn

* remove unused code

* don't prepare ema_unet, don't register lr scheduler

* style

* assert => ValueError

* add allow_tf32

* set log level

* fix comment

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-30 21:49:47 +01:00
Suraj Patil
e4fe941312 [examples] update loss computation (#1861)
update loss computation
2022-12-30 14:32:38 +01:00
Patrick von Platen
ac3738462b [Docs] Improve docs (#1870)
* [Docs] Improve docs

* up
2022-12-30 13:50:01 +01:00
Pedro Cuenca
a6e2c1fe5c Fix ema decay (#1868)
* Fix ema decay and clarify nomenclature.

* Rename var.
2022-12-30 12:42:42 +01:00
Patrick von Platen
b28ab30215 [Unclip] Make sure text_embeddings & image_embeddings can directly be passed to enable interpolation tasks. (#1858)
* [Unclip] Make sure latents can be reused

* allow one to directly pass embeddings

* up

* make unclip for text work

* finish allowing to pass embeddings

* correct more

* make style
2022-12-30 12:18:19 +01:00
Patrick von Platen
29b2c93c90 Make repo structure consistent (#1862)
* move files a bit

* more refactors

* fix more

* more fixes

* fix more onnx

* make style

* upload

* fix

* up

* fix more

* up again

* up

* small fix

* Update src/diffusers/__init__.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* correct

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-30 11:51:08 +01:00
Simon Kirsten
ab0e92fdc8 Flax: Fix img2img and align with other pipeline (#1824)
* Flax: Add components function

* Flax: Fix img2img and align with other pipeline

* Flax: Fix PRNGKey type

* Refactor strength to start_timestep

* Fix preprocess images

* Fix processed_images dimensions

* latents.shape -> latents_shape

* Fix typo

* Remove "static" comment

* Remove unnecessary optional types in _generate

* Apply doc-builder code style.

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-29 18:56:03 +01:00
Suraj Patil
9ea7052f0e [textual inversion] add gradient checkpointing and small fixes. (#1848)
Co-authored-by: Henrik Forstén <henrik.forsten@gmail.com>

* update TI script

* make flake happy

* fix typo
2022-12-29 15:02:29 +01:00
Patrick von Platen
03bf877bf4 [StableDiffusionInpaint] Correct test (#1859) 2022-12-29 14:47:56 +01:00
Patrick von Platen
f2e521c499 [Dtype] Align dtype casting behavior with Transformers and Accelerate (#1725)
* [Dtype] Align automatic dtype

* up

* up

* fix

* re-add accelerate
2022-12-29 14:36:02 +01:00
Patrick von Platen
debc74f442 [Versatile Diffusion] Fix cross_attention_kwargs (#1849)
fix versatile
2022-12-28 18:49:04 +01:00
Partho
2ba42aa9b1 [Community Pipeline] MagicMix (#1839)
* initial

* type hints

* update scheduler type hint

* add to README

* add example generation to README

* v -> mix_factor

* load scheduler from pretrained
2022-12-28 17:02:53 +01:00
Will Berman
53c8147afe unCLIP image variation (#1781)
* unCLIP image variation

* remove prior comment re: @pcuenca

* stable diffusion -> unCLIP re: @pcuenca

* add copy froms re: @patil-suraj
2022-12-28 14:17:09 +01:00
kabachuha
cf5265ad41 Allow selecting precision to make Dreambooth class images (#1832)
* allow selecting precision to make DB class images

addresses #1831

* add prior_generation_precision argument

* correct prior_generation_precision's description

Co-authored-by: Suraj Patil <surajp815@gmail.com>

Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-12-27 19:51:32 +01:00
Katsuya
8874027efc Make xformers optional even if it is available (#1753)
* Make xformers optional even if it is available

* Raise exception if xformers is used but not available

* Rename use_xformers to enable_xformers_memory_efficient_attention

* Add a note about xformers in README

* Reformat code style
2022-12-27 19:47:50 +01:00
Christopher Friesen
b693aff795 fix: resize transform now preserves aspect ratio (#1804) 2022-12-27 15:10:25 +01:00
William Held
8a4c3e50bd Width was typod as weight (#1800)
* Width was typod as weight

* Run Black
2022-12-27 15:09:21 +01:00
Pedro Cuenca
68e24259af Avoid duplicating PyTorch + safetensors downloads. (#1836) 2022-12-27 14:58:15 +01:00
camenduru
1f1b6c6544 Device to use (e.g. cpu, cuda:0, cuda:1, etc.) (#1844)
* Device to use (e.g. cpu, cuda:0, cuda:1, etc.)

* "cuda" if torch.cuda.is_available() else "cpu"
2022-12-27 14:42:56 +01:00
Pedro Cuenca
df2b548e89 Make safety_checker optional in more pipelines (#1796)
* Make safety_checker optional in more pipelines.

* Remove inappropriate comment in inpaint pipeline.

* InPaint Test: set feature_extractor to None.

* Remove import

* img2img test: set feature_extractor to None.

* inpaint sd2 test: set feature_extractor to None.

Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-12-25 21:58:45 +01:00
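
What "optional" means in practice; a sketch assuming the v0.11-era loading interface (disabling the checker is explicit and expected to log a warning):

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    safety_checker=None,     # opt out of the checker
    feature_extractor=None,  # only needed by the checker in this pipeline
)
```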
Daquan Lin
b6d4702301 fix small mistake in annotation: 32 -> 64 (#1780)
Fix inconsistencies between code and comments in the function 'preprocess'
2022-12-24 19:56:57 +01:00
Suraj Patil
9be94d9c66 [textual_inversion] unwrap_model text encoder before accessing weights (#1816)
* unwrap_model text encoder before accessing weights

* fix another call

* fix the right call
2022-12-23 16:46:24 +01:00
Patrick von Platen
f2acfb67ac Remove hardcoded names from PT scripts (#1778)
* Remove hardcoded names from PT scripts

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-12-23 15:36:29 +01:00
Prathik Rao
8aa4372aea reorder model wrap + bug fix (#1799)
* reorder model wrap

* bug fix

Co-authored-by: Prathik Rao <prathikrao@microsoft.com>
2022-12-22 14:51:47 +01:00
Pedro Cuenca
6043838971 Fix OOM when using PyTorch with JAX installed. (#1795)
Don't initialize Jax on startup.
2022-12-21 14:07:24 +01:00
Patrick von Platen
4125756e88 Refactor cross attention and allow mechanism to tweak cross attention function (#1639)
* first proposal

* rename

* up

* Apply suggestions from code review

* better

* up

* finish

* up

* rename

* correct versatile

* up

* up

* up

* up

* fix

* Apply suggestions from code review

* make style

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* add error message

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-20 18:49:05 +01:00
Dhruv Naik
a9190badf7 Add Flax stable diffusion img2img pipeline (#1355)
* add flax img2img pipeline

* update pipeline

* black format file

* remove argg from get_timesteps

* update get_timesteps

* fix bug: make use of timesteps for for_loop

* black file

* black, isort, flake8

* update docstring

* update readme

* update flax img2img readme

* update sd pipeline init

* Update src/diffusers/pipelines/stable_diffusion/pipeline_flax_stable_diffusion_img2img.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_flax_stable_diffusion_img2img.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* update inits

* revert change

* update var name to image, typo

* update readme

* return new t_start instead of modified timestep

* black format

* isort files

* update docs

* fix-copies

* update prng_seed typing

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-20 16:25:08 +01:00
Suraj Patil
d07f73003d Fix num images per prompt unclip (#1787)
* use repeat_interleave

* fix repeat

* Trigger Build

* don't install accelerate from main

* install released accelerate for mps test

* Remove additional accelerate installation from main.

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-20 16:03:38 +01:00
Pedro Cuenca
a6fb9407fd Dreambooth docs: minor fixes (#1758)
* Section header for in-painting, inference from checkpoint.

* Inference: link to section to perform inference from checkpoint.

* Move Dreambooth in-painting instructions to the proper place.
2022-12-20 08:39:16 +01:00
Patrick von Platen
261a448c6a Correct hf hub download (#1767)
* allow model download when no internet

* up

* make style
2022-12-20 02:07:15 +01:00
Simon Kirsten
f106ab40b3 [Flax] Stateless schedulers, fixes and refactors (#1661)
* [Flax] Stateless schedulers, fixes and refactors

* Remove scheduling_common_flax and some renames

* Update src/diffusers/schedulers/scheduling_pndm_flax.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-20 01:42:41 +01:00
Emil Bogomolov
d87cc15977 expose polynomial:power and cosine_with_restarts:num_cycles params (#1737)
* expose polynomial:power and cosine_with_restarts:num_cycles using get_scheduler func, add it to train_dreambooth.py

* fix formatting

* fix style

* Update src/diffusers/optimization.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-20 01:41:37 +01:00
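
A sketch of the newly exposed knobs; the kwarg names (`num_cycles` for cosine_with_restarts, `power` for polynomial) are taken from this PR's description:

```python
import torch
from diffusers.optimization import get_scheduler

model = torch.nn.Linear(4, 4)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

lr_scheduler = get_scheduler(
    "cosine_with_restarts",
    optimizer=optimizer,
    num_warmup_steps=500,
    num_training_steps=10_000,
    num_cycles=3,  # previously fixed inside get_scheduler
)
```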
Patrick von Platen
e29dc97215 make style 2022-12-20 01:38:45 +01:00
Ilmari Heikkinen
8e4733b3c3 Only test for xformers when enabling them #1773 (#1776)
* only check for xformers when xformers are enabled

* only test for xformers when enabling them
2022-12-20 01:38:28 +01:00
Prathik Rao
847daf25c7 update train_unconditional_ort.py (#1775)
* reflect changes

* run make style

Co-authored-by: Prathik Rao <prathikrao@microsoft.com>
Co-authored-by: Prathik Rao <prathikrao@microsoft.com@orttrainingdev7.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
2022-12-19 23:58:55 +01:00
Pedro Cuenca
9f8c915a75 [Dreambooth] flax fixes (#1765)
* Fail if there are fewer images than the effective batch size.

* Remove lr-scheduler arg as it's currently ignored.

* Make guidance_scale work for batch_size > 1.
2022-12-19 20:42:25 +01:00
Anton Lozhkov
8331da4683 Bump to 0.12.0.dev0 (#1771) 2022-12-19 18:44:08 +01:00
Anton Lozhkov
f1a32203aa [Tests] Fix UnCLIP cpu offload tests (#1769) 2022-12-19 18:25:08 +01:00
Nan Liu
6f15026330 update composable diffusion for an updated diffuser library (#1697)
* update composable diffusion for an updated diffuser library

* fix style/quality for code

* Revert "fix style/quality for code"

This reverts commit 71f2349763.

* update style

* reduce memory usage by computing score sequentially
2022-12-19 18:03:40 +01:00
anton-
a5edb981a7 [Patch] Return import for the unclip pipeline loader 2022-12-19 17:56:42 +01:00
anton-
54796b7e43 Release: v0.11.0 2022-12-19 17:43:22 +01:00
Anton Lozhkov
4cb887e0a7 Transformers version req for UnCLIP (#1766)
* Transformers version req for UnCLIP

* add to the list
2022-12-19 17:11:17 +01:00
Anish Shah
9f657f106d [Examples] Update train_unconditional.py to include logging argument for Wandb (#1719)
Update train_unconditional.py

Add logger flag to choose between tensorboard and wandb
2022-12-19 16:57:03 +01:00
Patrick von Platen
ce1c27adc8 [Revision] Don't recommend using revision (#1764) 2022-12-19 16:25:41 +01:00
Patrick von Platen
b267d28566 [Versatile] fix attention mask (#1763) 2022-12-19 15:58:39 +01:00
Anton Lozhkov
c7b4acfb37 Add CPU offloading to UnCLIP (#1761)
* Add CPU offloading to UnCLIP

* use fp32 for testing the offload
2022-12-19 14:44:08 +01:00
Suraj Patil
be38b2d711 [UnCLIPPipeline] fix num_images_per_prompt (#1762)
duplicate mask for num_images_per_prompt
2022-12-19 14:32:46 +01:00
Anton Lozhkov
32a5d70c42 Support attn2==None for xformers (#1759) 2022-12-19 12:43:30 +01:00
Patrick von Platen
429e5449c1 Add attention mask to uclip (#1756)
* Remove bogus file

* [Unclip] Add efficient attention

* [Unclip] Add efficient attention
2022-12-19 12:10:46 +01:00
Anton Lozhkov
dc7cd893fd Add resnet_time_scale_shift to VD layers (#1757) 2022-12-19 12:01:46 +01:00
Mikołaj Siedlarek
8890758823 Correct help text for scheduler_type flag in scripts. (#1749) 2022-12-19 11:27:23 +01:00
Will Berman
b25843e799 unCLIP docs (#1754)
* [unCLIP docs] markdown

* [unCLIP docs] UnCLIPPipeline
2022-12-19 10:27:32 +01:00
Will Berman
830a9d1f01 [fix] pipeline_unclip generator (#1751)
* [fix] pipeline_unclip generator

pass generator to all schedulers

* fix fast tests test data
2022-12-19 10:27:18 +01:00
Will Berman
2dcf64b72a kakaobrain unCLIP (#1428)
* [wip] attention block updates

* [wip] unCLIP unet decoder and super res

* [wip] unCLIP prior transformer

* [wip] scheduler changes

* [wip] text proj utility class

* [wip] UnCLIPPipeline

* [wip] kakaobrain unCLIP convert script

* [unCLIP pipeline] fixes re: @patrickvonplaten

remove callbacks

move denoising loops into call function

* UNCLIPScheduler re: @patrickvonplaten

Revert changes to DDPMScheduler. Make UNCLIPScheduler, a modified
DDPM scheduler with changes to support karlo

* mask -> attention_mask re: @patrickvonplaten

* [DDPMScheduler] remove leftover change

* [docs] PriorTransformer

* [docs] UNet2DConditionModel and UNet2DModel

* [nit] UNCLIPScheduler -> UnCLIPScheduler

matches existing unclip naming better

* [docs] SchedulingUnCLIP

* [docs] UnCLIPTextProjModel

* refactor

* finish licenses

* rename all to attention_mask and prep in models

* more renaming

* don't expose unused configs

* final renaming fixes

* remove x attn mask when not necessary

* configure kakao script to use new class embedding config

* fix copies

* [tests] UnCLIPScheduler

* finish x attn

* finish

* remove more

* rename condition blocks

* clean more

* Apply suggestions from code review

* up

* fix

* [tests] UnCLIPPipelineFastTests

* remove unused imports

* [tests] UnCLIPPipelineIntegrationTests

* correct

* make style

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-12-18 15:15:30 -08:00
Patrick von Platen
402b9560b2 Remove license accept ticks 2022-12-19 00:10:17 +01:00
Anton Lozhkov
c2a38ef9df Fix/update the LDM pipeline and tests (#1743)
* Fix/update LDM tests

* batched generators
2022-12-18 11:49:53 +01:00
Anton Lozhkov
08cc36ddff Fix MPS fast test warnings (#1744)
* unset level
2022-12-17 22:57:30 +01:00
Peter
723e8f6bb4 Fix ONNX img2img preprocessing (#1736)
Co-authored-by: Peter <peterto@users.noreply.github.com>
2022-12-17 13:12:10 +01:00
Patrick von Platen
c53a850604 [Batched Generators] This PR adds generators that are useful to make batched generation fully reproducible (#1718)
* [Batched Generators] all batched generators

* up

* up

* up

* up

* up

* up

* up

* up

* up

* up

* up

* up

* up

* up

* up

* up

* hey

* up again

* fix tests

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* correct tests

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-17 11:13:16 +01:00
Anton Lozhkov
086c7f9ea8 Nightly integration tests (#1664)
* [WIP] Nightly integration tests

* initial SD tests

* update SD slow tests

* style

* repaint

* ImageVariations

* style

* finish imgvar

* img2img tests

* debug

* inpaint 1.5

* inpaint legacy

* torch isn't happy about deterministic ops

* allclose -> max diff for shorter logs

* add SD2

* debug

* Update tests/pipelines/stable_diffusion_2/test_stable_diffusion.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update tests/pipelines/stable_diffusion/test_stable_diffusion.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* fix refs

* Update src/diffusers/utils/testing_utils.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* fix refs

* remove debug

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-16 18:51:11 +01:00
Pedro Cuenca
acd317810b Docs: recommend xformers (#1724)
* Fix links to flash attention.

* Add xformers installation instructions.

* Make link to xformers install more prominent.

* Link to xformers install from training docs.
2022-12-16 15:49:01 +01:00
Patrick von Platen
c6d0dff4a3 Fix ldm tests on master by not running the CPU tests on GPU (#1729) 2022-12-16 15:28:40 +01:00
Anton Lozhkov
a40095dd22 Fix ONNX img2img preprocessing and add fast tests coverage (#1727)
* Fix ONNX img2img preprocessing and add fast tests coverage

* revert

* disable progressbars
2022-12-16 15:24:16 +01:00
Partho
727434c206 Accept latents as optional input in Latent Diffusion pipeline (#1723)
* Latent Diffusion pipeline accept latents

* make style

* check for mps

randn does not work reproducibly on mps
2022-12-16 12:13:41 +01:00
YiYi Xu
21e61eb3a9 Added a README page for docs and a "schedulers" page (#1710)
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-15 13:04:40 -10:00
Haihao Shen
c891330f79 Add examples with Intel optimizations (#1579)
* Add examples with Intel optimizations (BF16 fine-tuning and inference)

* Remove unused package

* Add README for intel_opts and refine the description for research projects

* Add notes of intel opts for diffusers
2022-12-15 21:16:27 +01:00
jiqing-feng
c5f04d4e34 apply amp bf16 on textual inversion (#1465)
* add conf.yaml

* enable bf16

enable amp bf16 for unet forward

fix style

fix readme

remove useless file

* change amp to full bf16

* align

* make style

* fix format
2022-12-15 21:15:23 +01:00
CyberMeow
61dec53356 Improve pipeline_stable_diffusion_inpaint_legacy.py (#1585)
* update inpaint_legacy to allow the use of predicted noise to construct intermediate diffused images

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint_legacy.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-12-15 20:59:31 +01:00
Pedro Cuenca
badddee0ef Add state checkpointing to other training scripts (#1687)
* Add state checkpointing to other training scripts

* Fix first_epoch

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update Dreambooth checkpoint help message.

* Dreambooth docs: checkpoints, inference from a checkpoint.

* make style

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-12-15 19:49:40 +01:00
Anton Lozhkov
13994b2d3f RePaint fast tests and API conforming (#1701)
* add fast tests

* better tests and fp16

* batch fixes

* Reuse preprocessing

* quickfix
2022-12-15 18:35:31 +01:00
anton-
ea90bf2ba1 skip mps 2022-12-15 18:01:39 +01:00
Chino
8cecc66a74 Fix the bug that torch version less than 1.12 throws TypeError (#1671) 2022-12-14 21:29:39 +01:00
Anton Lozhkov
35b66c8e32 [Readme] Clarify package owners (#1707)
Specify that we don't actively monitor the conda scripts
2022-12-14 20:49:36 +01:00
Patrick von Platen
013edb641a Update main docs (#1706)
* Remove bogus file

* [Docs] Remove mentioning of gated access since no longer exsits

* add docs to index

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-14 20:33:54 +01:00
Anton Lozhkov
86ac3ea1d7 Delete _ 2022-12-14 13:52:29 +01:00
Anton Lozhkov
ef3fcbb688 Remove all local telemetry (#1702) 2022-12-14 12:56:35 +01:00
Prathik Rao
7c823c2ed7 manually update train_unconditional_ort (#1694)
* manually update train_unconditional_ort

* formatting

Co-authored-by: Prathik Rao <prathikrao@microsoft.com>
2022-12-14 11:35:41 +01:00
Pedro Cuenca
784beee969 Dreambooth: use warnings instead of logger in parse_args() (#1688)
Use warnings instead of logger in parse_args()

logger requires an `Accelerator`.
2022-12-13 22:01:48 +01:00
Patrick von Platen
8b7cb962a5 make style 2022-12-13 17:01:18 +00:00
Patrick von Platen
e1bb8f6188 [Community pipeline] Add github mechanism (#1680)
* [Community pipeline] Add github mechanism

* better

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* adapt

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-13 18:01:00 +01:00
Patrick von Platen
e62dd5cfa8 Change one-step dummy pipeline for testing (#1690)
Change the one-step dummy pipeline for testing
2022-12-13 16:55:49 +01:00
w4ffl35
07f95503e5 Disable telemetry when DISABLE_TELEMETRY is set (#1686)
fixed #1685 - disables telemetry when DISABLE_TELEMETRY or HF_HUB_OFFLINE is set
2022-12-13 16:28:07 +01:00
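
A minimal sketch of opting out; the variable must be set before any diffusers call that would phone home, and per this fix HF_HUB_OFFLINE=1 has the same effect:

```python
import os

os.environ["DISABLE_TELEMETRY"] = "1"  # or: os.environ["HF_HUB_OFFLINE"] = "1"

from diffusers import StableDiffusionPipeline  # subsequent loads skip telemetry
```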
Pedro Cuenca
e01d6cf295 Dreambooth: save / restore training state (#1668)
* Dreambooth: save / restore training state.

* make style

* Rename vars for clarity.

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Remove unused import

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-12-13 15:16:44 +01:00
Patrick von Platen
244e16a7ab [Version] Bump to 0.11.0.dev0 (#1682)
upgrade version
2022-12-13 13:51:36 +01:00
Patrick von Platen
b345c74d4d Make sure all pipelines can run with batched input (#1669)
* [SD] Make sure batched input works correctly

* uP

* uP

* up

* up

* uP

* up

* fix mask stuff

* up

* uP

* more up

* up

* uP

* up

* finish

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-13 12:50:15 +01:00
apolinario
b417042291 Fix wrong type checking in convert_diffusers_to_original_stable_diffusion.py (#1681)
* Fix type checking remainders

* Remove IS_V20_MODEL flag always being True

Co-authored-by: apolinario <joaopaulo.passos+multimodal@gmail.com>
2022-12-13 12:44:20 +01:00
Suvaditya Mukherjee
40c16ed2f0 Added Community pipeline for comparing Stable Diffusion v1.1-4 checkpoints (#1584)
* Added Community pipeline for comparing Stable Diffusion v1.1-4

Signed-off-by: Suvaditya Mukherjee <suvadityamuk@gmail.com>

* Made changes to provide support for current iteration of from_pretrained and added example

Signed-off-by: Suvaditya Mukherjee <suvadityamuk@gmail.com>

* fixed a small spelling error

Signed-off-by: Suvaditya Mukherjee <suvadityamuk@gmail.com>

* added pipeline entry to table

Signed-off-by: Suvaditya Mukherjee <suvadityamuk@gmail.com>

Signed-off-by: Suvaditya Mukherjee <suvadityamuk@gmail.com>
2022-12-13 11:31:30 +01:00
Patrick von Platen
69de9b2eaa [Textual Inversion] Do not update other embeddings (#1665) 2022-12-12 17:44:39 +01:00
Patrick von Platen
3ce6380d3a [SD] Make sure scheduler is correct when converting (#1667) 2022-12-12 16:57:48 +01:00
Cyberes
d2dc4de303 Handle missing global_step key in scripts/convert_original_stable_diffusion_to_diffusers.py (#1612)
handle missing global_step key and don't download config if it already exists
2022-12-12 16:10:52 +01:00
Kangfu Mei
ded3299d68 fix bug if we don't do_classifier_free_guidance (#1601)
* fix bug if we don't do_classifier_free_guidance

* update for copied diffusers.pipelines..alt_diffusion..pipeline_alt_diffusion.AltDiffusionPipeline
2022-12-12 15:02:13 +01:00
Patrick von Platen
8bf5e59931 Deprecate init image correctly (#1649)
Deprecate init image correctly
2022-12-12 15:00:20 +01:00
Prathik Rao
4645e28355 tensor format ort bug fix (#1557)
bug fix

Co-authored-by: Prathik Rao <prathikrao@microsoft.com>
Co-authored-by: anton- <anton@huggingface.co>
2022-12-12 13:56:02 +01:00
Lukas Struppek
589330595d VersatileDiffusion: fix input processing (#1568)
* fix versatile diffusion input

* merge main

* `make fix-copies`

Co-authored-by: anton- <anton@huggingface.co>
2022-12-12 13:45:27 +01:00
lawfordp2017
31444f5790 Add text encoder conversion (#1559)
* Initial code for attempt at improving SD <--> diffusers conversions for v2.0

* Updates to support round-trip between orig. SD 2.0 and diffusers models

* Corrected formatting to Black standard

* Correcting import formatting

* Fixed imports (properly this time)

* add some corrections

* remove inference files

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-12-12 10:07:42 +01:00
M-A
c3b2f97534 Remove unnecessary kwargs in depth2img (#1648)
Since no deprecate() call was done, the typos were silently ignored.
2022-12-11 17:00:50 +01:00
Patrick von Platen
fc94c60c83 Remove unnecessary offset in img2img (#1653)
remove unnecessary offset in img2img
2022-12-10 19:26:25 +01:00
Pedro Cuenca
ea64a7860a Allow k pipeline to generate > 1 images (#1645)
Allow k pipeline to generate > 1 images.
2022-12-10 17:54:02 +01:00
Tim Hinderliter
2868d99181 dreambooth: fix #1566: maintain fp32 wrapper when saving a checkpoint to avoid crash when running fp16 (#1618)
* dreambooth: fix #1566: maintain fp32 wrapper when saving a checkpoint to avoid crash when running fp16

* dreambooth: guard against passing keep_fp32_wrapper arg to older versions of accelerate. part of fix for #1566

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update examples/dreambooth/train_dreambooth.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-10 15:45:45 +01:00
Pedro Cuenca
0c18d02cc9 Remove spurious arg in training scripts (#1644)
Remove spurious arg in training scripts.
2022-12-10 13:57:20 +01:00
Patrick von Platen
6b68afd8e4 do not automatically enable xformers (#1640)
* do not automatically enable xformers

* uP
2022-12-09 18:28:36 +01:00
anton-
63c4944998 Patch release: v0.10.2 2022-12-09 17:59:32 +01:00
Anton Lozhkov
3ebe40fc5f Adapt to forced transformers version in some dependent libraries (#1638)
* Adapt to forced transformers version in some dependent libraries

* style

* Update __init__.py

* update requires_backends
2022-12-09 17:42:53 +01:00
Anton Lozhkov
089252542c V0.10.1 patch (#1637)
* Re-add xformers enable to UNet2DCondition (#1627)

* finish

* fix

* Update tests/models/test_models_unet_2d.py

* style

Co-authored-by: Anton Lozhkov <anton@huggingface.co>

* Release: v0.10.1

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-12-09 17:04:25 +01:00
Patrick von Platen
cd91fc06fe Re-add xformers enable to UNet2DCondition (#1627)
* finish

* fix

* Update tests/models/test_models_unet_2d.py

* style

Co-authored-by: Anton Lozhkov <anton@huggingface.co>
2022-12-09 14:05:38 +01:00
Pedro Cuenca
ff65c2d72b Don't assume 512x512 in k-diffusion pipeline (#1625)
Don't assume 512x512 in k-diffusion pipeline.
2022-12-09 11:50:29 +01:00
Haofan Wang
f1b726e46e Update requirements.txt (#1623)
* Update requirements.txt

* Update requirements_flax.txt

* Update requirements.txt

* Update requirements_flax.txt

* Update requirements.txt

* Update requirements_flax.txt
2022-12-09 08:35:27 +01:00
SkyTNT
f242eba4fd Fix lpw stable diffusion pipeline compatibility (#1622) 2022-12-09 08:30:26 +01:00
anton-
3faf204c49 Release: v0.10.0 2022-12-08 19:24:10 +01:00
Suraj Patil
5383188c7e StableDiffusionDepth2ImgPipeline (#1531)
* begin depth pipeline

* add depth estimation model

* fix prepare_depth_mask

* add a comment about autocast

* copied from, quality, cleanup

* begin tests

* handle tensors

* norm image tensor

* fix batch size

* fix tests

* fix enable_sequential_cpu_offload

* fix save load

* fix test_save_load_float16

* fix test_save_load_optional_components

* fix test_float16_inference

* fix test_cpu_offload_forward_pass

* fix test_dict_tuple_outputs_equivalent

* up

* fix fast tests

* fix test_stable_diffusion_img2img_multiple_init_images

* fix few more fast tests

* don't use device map for DPT

* fix test_stable_diffusion_pipeline_with_sequential_cpu_offloading

* accept external depth maps

* prepare_depth_mask -> prepare_depth_map

* fix file name

* fix file name

* quality

* check transformers version

* fix test names

* use skipif

* fix import

* add docs

* skip tests on mps

* correct version

* uP

* Update docs/source/api/pipelines/stable_diffusion_2.mdx

* fix fix-copies

* fix fix-copies

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: anton- <anton@huggingface.co>
2022-12-08 18:25:12 +01:00
Anton Lozhkov
dbe0719246 Fix PyCharm/VSCode static type checking for dummy objects (#1596)
* Fix PyCharm/VSCode static type checking for dummy objects

* Re-add dummies

* Fix AudioDiffusion imports

* fix import

* fix import

* Update utils/check_dummies.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update src/diffusers/utils/import_utils.py

* Update src/diffusers/__init__.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/__init__.py

* fix double import

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-12-08 14:02:11 +01:00
Anton Lozhkov
03566d8689 Delete hi 2022-12-08 13:07:25 +01:00
Suraj Patil
a934e5bc6c [Versatile Diffusion] add upcast_attention (#1605)
add upcast_attention arg
2022-12-08 13:03:32 +01:00
Patrick von Platen
a643c6300e [K Diffusion] Add k diffusion sampler natively (#1603)
* uP

* uP
2022-12-08 12:48:37 +01:00
Ben Sherman
326de41915 Trivial fix for undefined symbol in train_dreambooth.py (#1598)
easy fix for undefined name in train_dreambooth.py

import_model_class_from_model_name_or_path loads a pretrained model
and refers to args.revision in a context where args is undefined. I modified
the function to take revision as an argument and modified the invocation
of the function to pass in the revision from args. Seems like this was caused
by a cut and paste.
2022-12-07 21:39:48 +01:00
Anton Lozhkov
eb1abee693 [ONNX] Fix flaky tests (#1593)
* [ONNX] Fix flaky tests

* revert
2022-12-07 19:53:13 +01:00
Pedro Cuenca
5e0369219f Make cross-attention check more robust (#1560)
* Make cross-attention check more robust.

* Fix copies.
2022-12-07 18:33:29 +01:00
Nathan Lambert
bea7eb4314 Update RL docs for better sharing / adding models (#1563)
* init docs update

* style

* fix bad colab formatting, add pipeline comment

* update todo
2022-12-07 09:08:12 -08:00
Randolph-zeng
ca68ab3eef Update scheduling_repaint.py (#1582)
* Update scheduling_repaint.py

* update the expected image

Co-authored-by: anton- <anton@huggingface.co>
2022-12-07 17:41:07 +01:00
Suraj Patil
ced7c9601a fix upcast in slice attention (#1591)
* fix upcast in slice attention

* fix dtype

* add test

* fix test
2022-12-07 15:14:34 +01:00
Cheng Lu
8e74efad01 Add Singlestep DPM-Solver (singlestep high-order schedulers) (#1442)
* add singlestep dpmsolver

* fix a style typo

* fix a style typo

* add docs

* finish

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-12-07 15:03:58 +01:00
Pedro Cuenca
6a7f1f0965 Flax: avoid recompilation when params change (#1096)
* Do not recompile when guidance_scale changes.

* Remove debug for simplicity.

* make style

* Make guidance_scale an array.

* Make DEBUG a constant to avoid passing it down.

* Add comments for clarification.
2022-12-07 14:50:55 +01:00
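
The idea, sketched under the assumption that guidance happens inside a jitted step function: pass guidance_scale as a traced array rather than a static Python scalar, so new values reuse the compiled program:

```python
import jax
import jax.numpy as jnp

@jax.jit
def guide(noise_uncond, noise_text, guidance_scale):
    # guidance_scale arrives as traced data, so changing its value
    # does not trigger a recompile.
    return noise_uncond + guidance_scale * (noise_text - noise_uncond)

u, t = jnp.zeros((1, 4)), jnp.ones((1, 4))
guide(u, t, jnp.array(7.5, dtype=jnp.float32))
guide(u, t, jnp.array(8.0, dtype=jnp.float32))  # same compiled function
```
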
Suraj Patil
170ebd288f [UNet2DConditionModel] add an option to upcast attention to fp32 (#1590)
upcast attention
2022-12-07 14:36:22 +01:00
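
What upcasting the attention amounts to, in a simplified sketch (shapes illustrative): compute the attention scores and softmax in fp32 to avoid fp16 overflow, then cast back:

```python
import torch

q = torch.randn(2, 8, 64, 40, dtype=torch.float16)
k = torch.randn(2, 8, 64, 40, dtype=torch.float16)

# Scores and softmax in fp32; the fp16 softmax can overflow or saturate.
scores = q.float() @ k.float().transpose(-1, -2) / (q.shape[-1] ** 0.5)
attn = scores.softmax(dim=-1).to(q.dtype)
```
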
Anton Lozhkov
dc87f526d4 Fix common tests for FP16 (#1588)
* Fix common tests for FP16

* revert
2022-12-07 14:09:51 +01:00
Fantasy-Studio
d9b5b43d46 Correct order height & width in pipeline_paint_by_example.py (#1589)
Update pipeline_paint_by_example.py
2022-12-07 13:40:56 +01:00
Anton Lozhkov
bb2d7cacc0 Add from_pretrained telemetry (#1461)
* Add from_pretrained usage logging

* Add classes

* add a telemetry notice

* macos
2022-12-07 11:56:21 +01:00
Patrick von Platen
4f3ddb6cca [Paint by Example] Better default for image width (#1587) 2022-12-07 11:43:28 +01:00
SkyTNT
4eb9ad0d1c [Community Pipeline] fix lpw_stable_diffusion (#1570)
* fix lpw_stable_diffusion

* rollback preprocess_mask resample
2022-12-07 11:20:01 +01:00
Patrick von Platen
896c98a2ae Add paint by example (#1533)
* add paint by example

* make loading possible

* up

* Update src/diffusers/models/attention.py

* up

* finalize weight structure

* make example work

* make it work

* up

* up

* fix

* del

* add

* update

* Apply suggestions from code review

* correct transformer 2d

* finish

* up

* up

* up

* up

* fix

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Apply suggestions from code review

* up

* finish

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-07 11:06:30 +01:00
Anton Lozhkov
02d83c9ff1 Standardize fast pipeline tests with PipelineTestMixin (#1526)
* [WIP] Standardize fast pipeline tests with PipelineTestMixin

* refactor the sd tests a bit

* add more common tests

* add xformers

* add progressbar test

* cleanup

* upd fp16

* CycleDiffusionPipelineFastTests

* DanceDiffusionPipelineFastTests

* AltDiffusionPipelineFastTests

* StableDiffusion2PipelineFastTests

* StableDiffusion2InpaintPipelineFastTests

* StableDiffusionImageVariationPipelineFastTests

* StableDiffusionImg2ImgPipelineFastTests

* StableDiffusionInpaintPipelineFastTests

* remove unused mixins

* quality

* add missing inits

* try to fix mps tests

* fix mps tests

* add mps warmups

* skip for some pipelines

* style

* Update tests/test_pipelines_common.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-12-06 18:35:30 +01:00
Suraj Patil
9e1102990a [dreambooth] make collate_fn global (#1547)
make collate_fn global
2022-12-06 14:41:53 +01:00
Suraj Patil
c228331068 [examples] add check_min_version (#1550)
* add check_min_version for examples

* move __version__ to the top

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* fix comment

* fix error_message

* adapt the install message

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-06 14:36:50 +01:00
Patrick von Platen
ae4112d2bb Mega community pipeline (#1561)
* Mega community pipeline

* fix
2022-12-06 11:18:53 +01:00
Will Berman
af04479e85 [docs] [dreambooth training] default accelerate config (#1564) 2022-12-05 18:24:32 -08:00
Patrick von Platen
9a52e33eb6 Merge branch 'main' of https://github.com/huggingface/diffusers into main 2022-12-05 19:38:05 +00:00
Patrick von Platen
c524fd8589 correct librosa import 2022-12-05 19:37:46 +00:00
Pedro Cuenca
2cfdf37537 Fix typo (#1558)
* Fix typo in pipeline_stable_diffusion.py

Fixes a typo in a warning message

* Fix copies.

* Fix copies

Co-authored-by: Scott <scott@scottinallca.ps>
2022-12-05 20:31:35 +01:00
Patrick von Platen
62b497c418 [Docs] Correct docs (#1554) 2022-12-05 19:54:20 +01:00
Patrick von Platen
922d56a19c Correct type from int to str in SD conversion script 2022-12-05 18:51:29 +00:00
Patrick von Platen
ae854746ab [Community download] Fix cache dir (#1555)
* [Community download] Fix cache dir

* up
2022-12-05 18:52:55 +01:00
Robert Dargavel Smith
48d0123f0f add AudioDiffusionPipeline and LatentAudioDiffusionPipeline #1334 (#1426)
* add AudioDiffusionPipeline and LatentAudioDiffusionPipeline

* add docs to toc

* fix tests

* fix tests

* fix tests

* fix tests

* fix tests

* Update pr_tests.yml

Fix tests

* add colab notebook

* move Mel to module in pipeline construction, make librosa optional

* fix imports

* fix copy & paste error in comment

* fix style

* add missing register_to_config

* fix class docstrings

* fix class docstrings

* tweak docstrings

* tweak docstrings

* update slow test

* put trailing commas back

* respect alphabetical order

* remove LatentAudioDiffusion, make vqvae optional

* move Mel from models back to pipelines :-)

* allow loading of pretrained audiodiffusion models

* fix tests

* fix dummies

* remove reference to latent_audio_diffusion in docs

* unused import

* inherit from SchedulerMixin to make loadable

* Apply suggestions from code review

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-12-05 18:06:30 +01:00
Patrick von Platen
459b8ca81a Research folder (#1553)
* Research folder

* Update examples/research_projects/README.md

* up
2022-12-05 17:58:35 +01:00
Suraj Patil
bce65cd13a [refactor] make set_attention_slice recursive (#1532)
* make attn slice recursive

* remove set_attention_slice from blocks

* fix copies

* make enable_attention_slicing base class method of DiffusionPipeline

* fix set_attention_slice

* fix set_attention_slice

* fix copies

* add tests

* up

* up

* up

* update

* up

* uP

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-12-05 17:31:04 +01:00
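
The refactor's pattern, sketched generically (the attribute names are illustrative, not the exact internals): walk the module tree once instead of giving every block its own setter:

```python
import torch.nn as nn

def set_attention_slice(module: nn.Module, slice_size) -> None:
    # Apply wherever a submodule supports slicing, then recurse so nested
    # down/mid/up blocks need no per-block plumbing.
    if hasattr(module, "sliceable_head_dim"):
        module.slice_size = slice_size
    for child in module.children():
        set_attention_slice(child, slice_size)
```
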
Adalberto
e289998932 fix mask discrepancies in train_dreambooth_inpaint (#1529)
The mask and instance image were being cropped in different ways without --center_crop, causing the model to learn to ignore the mask in some cases. This PR fixes that and generates more consistent results.
2022-12-05 17:26:36 +01:00
Suraj Patil
634be6e53d [examples] use from_pretrained to load scheduler (#1549)
use from_pretrained to load scheduler
2022-12-05 15:32:24 +01:00
allo-
d1bcbf38ca [textual_inversion] Add an option for only saving the embeddings (#781)
[textual_inversion] Add an option to only save embeddings

Add an command line option --only_save_embeds to the example script, for
not saving the full model. Then only the learned embeddings are saved,
which can be added to the original model at runtime in a similar way as
they are created in the training script.
Saving the full model is forced when --push_to_hub is used. (Implements #759)
2022-12-05 14:45:13 +01:00
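
What saving only the embeddings amounts to, as a self-contained sketch; the embedding table, token, and id below are hypothetical stand-ins for the trained text encoder's state:

```python
import torch
from torch import nn

# Hypothetical stand-ins for the trained text encoder's embedding table.
embeddings = nn.Embedding(num_embeddings=49409, embedding_dim=768)
placeholder_token, placeholder_token_id = "<my-concept>", 49408

# --only_save_embeds: persist just the learned row, not the whole model.
learned = embeddings.weight[placeholder_token_id].detach().cpu()
torch.save({placeholder_token: learned}, "learned_embeds.bin")
```
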
Patrick von Platen
df7cd5fe3f Update bug-report.yml 2022-12-05 14:39:35 +01:00
Naga Sai Abhinay
c28d6945b8 [Community Pipeline] Checkpoint Merger based on Automatic1111 (#1472)
* Add checkpoint_merger pipeline

* Added missing docs for a parameter.

* Formatting fixes.

* Fixed code quality issues.

* Bug fix: Off by 1 index

* Added docs for pipeline
2022-12-05 14:36:55 +01:00
Patrick von Platen
5177e65ff0 Update bug-report.yml 2022-12-05 14:17:04 +01:00
Patrick von Platen
60ac5fc235 Update bug-report.yml 2022-12-05 14:13:02 +01:00
Patrick von Platen
19b01749f0 Update bug-report.yml 2022-12-05 14:10:25 +01:00
Patrick von Platen
a980ef2f08 Update bug-report.yml (#1548)
* Update bug-report.yml

* Update bug-report.yml

* Update bug-report.yml
2022-12-05 14:03:54 +01:00
Patrick von Platen
7932971542 [Upscaling] Fix batch size (#1525) 2022-12-05 13:28:55 +01:00
Benjamin Lefaudeux
720dbfc985 Compute embedding distances with torch.cdist (#1459)
small but mighty
2022-12-05 12:37:05 +01:00
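
For reference, `torch.cdist` computes all pairwise distances in one vectorized call, replacing a per-pair loop:

```python
import torch

text_embeds = torch.randn(4, 768)
image_embeds = torch.randn(16, 768)

# All 4 x 16 pairwise L2 distances at once:
dists = torch.cdist(text_embeds, image_embeds, p=2.0)
print(dists.shape)  # torch.Size([4, 16])
```
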
Patrick von Platen
513fc68104 [Stable Diffusion Inpaint] Allow tensor as input image & mask (#1527)
up
2022-12-05 12:18:02 +01:00
Anton Lozhkov
cc22bda5f6 [CI] Add slow MPS tests (#1104)
* [CI] Add slow MPS tests

* fix yml

* temporarily resolve caching

* Tests: fix mps crashes.

* Skip test_load_pipeline_from_git on mps.

Not compatible with float16.

* Increase tolerance, use CPU generator, alt. slices.

* Move to nightly

* style

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-05 11:50:24 +01:00
Ilmari Heikkinen
daebee0963 Add xformers attention to VAE (#1507)
* Add xformers attention to VAE

* Simplify VAE xformers code

* Update src/diffusers/models/attention.py

Co-authored-by: Ilmari Heikkinen <ilmari@fhtr.org>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-12-03 15:08:11 +01:00
Matthieu Bizien
ae368e42d2 [Proposal] Support saving to safetensors (#1494)
* Add parameter safe_serialization to DiffusionPipeline.save_pretrained

* Add option safe_serialization on ModelMixin.save_pretrained

* Add test test_save_safe_serialization

* Black

* Re-trigger the CI

* Fix doc-builder

* Validate files are saved as safetensor in test_save_safe_serialization
2022-12-02 18:33:16 +01:00
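
Usage sketch of the new flag (the model id is illustrative; requires the `safetensors` package):

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# Write weights as safetensors files instead of pickled .bin checkpoints:
pipe.save_pretrained("./sd-safetensors", safe_serialization=True)
```
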
Patrick von Platen
cf4664e885 fix tests 2022-12-02 17:27:58 +00:00
Patrick von Platen
7222a8eadf make style 2022-12-02 17:18:50 +00:00
bachr
155d272cc1 Update FlaxLMSDiscreteScheduler (#1474)
- Add the missing `scale_model_input` method to `FlaxLMSDiscreteScheduler`
- Use `jnp.append` for appending to `state.derivatives`
- Use `jnp.delete` to pop from `state.derivatives`
2022-12-02 18:18:30 +01:00
Adalberto
2b30b1090f Create train_dreambooth_inpaint.py (#1091)
* Create train_dreambooth_inpaint.py

train_dreambooth.py adapted to work with the inpaint model, generating random masks during training

* Update train_dreambooth_inpaint.py

refactored train_dreambooth_inpaint with black

* Update train_dreambooth_inpaint.py

* Update train_dreambooth_inpaint.py

* Update train_dreambooth_inpaint.py

Fix prior preservation

* add instructions to readme, fix SD2 compatibility
2022-12-02 18:06:57 +01:00
Antoine Bouthors
3ad49eeedd Fixed mask+masked_image in sd inpaint pipeline (#1516)
* Fixed mask+masked_image in sd inpaint pipeline

Those were left unset when inputs are not PIL images

* Fixed formatting
2022-12-02 17:51:51 +01:00
Patrick von Platen
769f0be8fb Finalize 2nd order schedulers (#1503)
* up

* up

* finish

* finish

* up

* up

* finish
2022-12-02 16:38:35 +01:00
Pedro Gabriel Gengo Lourenço
4f596599f4 Fix training docs to install datasets (#1476)
Fixed the docs to install the training packages
2022-12-02 15:52:04 +01:00
Dhruv Naik
f57a2e0745 Fix Imagic example (#1520)
fix typo, remove incorrect arguments from .train()
2022-12-02 15:06:04 +01:00
Pedro Cuenca
3ceaa280bd Do not use torch.long in mps (#1488)
* Do not use torch.long in mps

Addresses #1056.

* Use torch.int instead of float.

* Propagate changes.

* Do not silently change float -> int.

* Propagate changes.

* Apply suggestions from code review

Co-authored-by: Anton Lozhkov <anton@huggingface.co>

Co-authored-by: Anton Lozhkov <anton@huggingface.co>
2022-12-02 13:10:17 +01:00
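
The gist of the fix, sketched: keep integer tensors at 32 bits on MPS, where int64 has been a recurring source of failures:

```python
import torch

device = "mps" if torch.backends.mps.is_available() else "cpu"
timesteps = torch.arange(0, 1000, 20)                     # int64 by default
timesteps = timesteps.to(device=device, dtype=torch.int)  # int32 is MPS-safe
```
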
Benjamin Lefaudeux
a816a87a09 [refactor] Making the xformers mem-efficient attention activation recursive (#1493)
* Moving the mem efficiient attention activation to the top + recursive

* black, too bad there's no pre-commit ?

Co-authored-by: Benjamin Lefaudeux <benjamin@photoroom.com>
2022-12-02 12:30:01 +01:00
Patrick von Platen
f21415d1d9 Update conversion script to correctly handle SD 2 (#1511)
* Conversion SD 2

* finish
2022-12-02 12:28:01 +01:00
Patrick von Platen
22b9cb086b [From pretrained] Allow returning local path (#1450)
Allow returning local path
2022-12-02 12:26:39 +01:00
Will Berman
25f850a23b [docs] [dreambooth training] num_class_images clarification (#1508) 2022-12-02 12:12:28 +01:00
Will Berman
b25ae2e6ab [docs] [dreambooth training] accelerate.utils.write_basic_config (#1513) 2022-12-02 12:11:18 +01:00
Suraj Patil
0f1c24664c fix heun scheduler (#1512) 2022-12-01 22:39:57 +01:00
Anton Lozhkov
e65b71aba4 Add an explicit --image_size to the conversion script (#1509)
* Add an explicit `--image_size` to the conversion script

* style
2022-12-01 19:22:48 +01:00
Akash Gokul
a6a25ceb61 Fix Flax flip_sin_to_cos (#1369)
* Fix Flax flip_sin_to_cos

* Adding flip_sin_to_cos

Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
2022-12-01 18:57:01 +01:00
Suraj Patil
b85bb0753e support v prediction in other schedulers (#1505)
* support v prediction in other schedulers

* v heun

* add tests for v pred

* fix tests

* fix test euler a

* v ddpm
2022-12-01 18:10:39 +01:00
fboulnois
52eb0348e5 Standardize on using image argument in all pipelines (#1361)
* feat: switch core pipelines to use image arg

* test: update tests for core pipelines

* feat: switch examples to use image arg

* docs: update docs to use image arg

* style: format code using black and doc-builder

* fix: deprecate use of init_image in all pipelines
2022-12-01 16:55:22 +01:00
Suraj Patil
2bbf8b67a7 simplify AttentionBlock (#1492) 2022-12-01 16:40:59 +01:00
Patrick von Platen
5a5bf7ef5a [Deprecate] Correct stacklevel (#1483)
* Correct stacklevel

* fix
2022-12-01 16:28:10 +01:00
Anton Lozhkov
9276b1e148 Replace deprecated hub utils in train_unconditional_ort (#1504)
* Replace deprecated hub utils in `train_unconditional_ort`

* typo
2022-12-01 16:00:52 +01:00
regisss
2579d42158 Add doc for Stable Diffusion on Habana Gaudi (#1496)
* Add doc for Stable Diffusion on Habana Gaudi

* Make style

* Add benchmark

* Center-align columns in the benchmark table
2022-12-01 15:43:48 +01:00
Anton Lozhkov
999044596a Bump to 0.10.0.dev0 + deprecations (#1490) 2022-11-30 15:27:56 +01:00
Pedro Cuenca
eeeb28a9ad Remove reminder comment (#1489)
Remove reminder comment.
2022-11-30 14:59:54 +01:00
Patrick von Platen
c05356497a Add better docs xformers (#1487)
* Add better docs xformers

* update

* Apply suggestions from code review

* fix
2022-11-30 13:57:45 +01:00
Patrick von Platen
1d4ad34af0 [Dreambooth] Make compatible with alt diffusion (#1470)
* [Dreambooth] Make compatible with alt diffusion

* make style

* add example
2022-11-30 13:48:17 +01:00
Patrick von Platen
20ce68f945 Fix dtype model loading (#1449)
* Add test

* up

* no bfloat16 for mps

* fix

* rename test
2022-11-30 11:31:50 +01:00
Patrick von Platen
110ffe2589 Allow saving trained betas (#1468) 2022-11-30 10:05:51 +01:00
Anton Lozhkov
0b7225e918 Add ort_nightly_directml to the onnxruntime candidates (#1458)
* Add `ort_nightly_directml` to the `onnxruntime` candidates

* style
2022-11-29 14:00:41 +01:00
Anton Lozhkov
db7b7bd983 [Train unconditional] Unwrap model before EMA (#1469) 2022-11-29 13:45:42 +01:00
Rohan Taori
6a0a312370 Fix bug in half precision for DPMSolverMultistepScheduler (#1349)
* cast to float for quantile method

* add fp16 test for DPMSolverMultistepScheduler fix

* formatting update
2022-11-29 13:29:23 +01:00
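
The underlying constraint, sketched (the 0.995 ratio here is illustrative): `torch.quantile` rejects half-precision input, so the thresholding math upcasts first:

```python
import torch

latents = torch.randn(4, 4, 64, 64, dtype=torch.float16)

# torch.quantile requires float/double input, so cast before thresholding:
flat = latents.abs().reshape(latents.shape[0], -1).float()
s = torch.quantile(flat, 0.995, dim=1)
print(s.shape)  # torch.Size([4])
```
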
Ilmari Heikkinen
c28d3c82ce StableDiffusion: Decode latents separately to run larger batches (#1150)
* StableDiffusion: Decode latents separately to run larger batches

* Move VAE sliced decode under enable_vae_sliced_decode and vae.enable_sliced_decode

* Rename sliced_decode to slicing

* fix whitespace

* fix quality check and repository consistency

* VAE slicing tests and documentation

* API doc hooks for VAE slicing

* reformat vae slicing tests

* Skip VAE slicing for one-image batches

* Documentation tweaks for VAE slicing

Co-authored-by: Ilmari Heikkinen <ilmari@fhtr.org>
2022-11-29 13:28:14 +01:00
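
The core idea in a few lines: decode latents one at a time so peak decoder memory stays flat as batch size grows (a sketch against `AutoencoderKL.decode`, which returns an object with a `.sample` field):

```python
import torch

def decode_latents_sliced(vae, latents: torch.Tensor) -> torch.Tensor:
    # One forward pass per latent instead of one big batched pass.
    images = [vae.decode(latents[i : i + 1]).sample for i in range(latents.shape[0])]
    return torch.cat(images)
```

Per the rename noted above, the pipeline-level switch for this behavior is `enable_vae_slicing()`.
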
Alex McKinney
bcb6cc16df Updates Image to Image Inpainting community pipeline README (#1370)
* updates img2img_inpainting README

* Adds example image to community pipeline README
2022-11-29 13:17:22 +01:00
Pedro Cuenca
4d1e4e24e5 Flax support for Stable Diffusion 2 (#1423)
* Flax: start adapting to Stable Diffusion 2

* More changes.

* attention_head_dim can be a tuple.

* Fix typos

* Add simple SD 2 integration test.

Slice values taken from my Ampere GPU.

* Add simple UNet integration tests for Flax.

Note that the expected values are taken from the PyTorch results. This
ensures the Flax and PyTorch versions are not too far off.

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Typos and style

* Tests: verify jax is available.

* Style

* Make flake happy

* Remove typo.

* Simple Flax SD 2 pipeline tests.

* Import order

* Remove unused import.

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: @camenduru
2022-11-29 12:33:21 +01:00
Patrick von Platen
a808a85390 fix slow tests (#1467) 2022-11-29 11:48:57 +01:00
Patrick von Platen
4c54519e1a Add 2nd order heun scheduler (#1336)
* Add heun

* Finish first version of heun

* remove bogus

* finish

* finish

* improve

* up

* up

* fix more

* change progress bar

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py

* finish

* up

* up

* up
2022-11-28 22:56:28 +01:00
Pedro Cuenca
25f11424f6 Ensure Flax pipeline always returns numpy array (#1435)
* Ensure Flax pipeline always returns numpy array.

* Clarify documentation.
2022-11-28 18:02:13 +01:00
Pedro Cuenca
89300131d2 Fix Flax from_pt (#1436)
Fix Flax `from_pt`.

It worked for models but not for pipelines.
Accidentally broken in #1107.
2022-11-28 18:01:29 +01:00
Suraj Patil
6c56f05097 v-prediction training support (#1455)
* add get_velocity

* add v prediction for training

* fix saving

* add revision arg

* fix saving

* save checkpoints dreambooth

* fix saving embeds

* add instruction in readme

* quality

* noise_pred -> model_pred
2022-11-28 17:46:54 +01:00
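
The v-prediction target follows the standard definition v = sqrt(alpha_bar_t) * eps - sqrt(1 - alpha_bar_t) * x0; a sketch consistent with the scheduler's `get_velocity`:

```python
import torch

def get_velocity(sample, noise, alphas_cumprod, timesteps):
    # v = sqrt(alpha_bar_t) * noise - sqrt(1 - alpha_bar_t) * sample
    alpha = alphas_cumprod[timesteps] ** 0.5
    sigma = (1.0 - alphas_cumprod[timesteps]) ** 0.5
    while alpha.dim() < sample.dim():            # broadcast over (B, C, H, W)
        alpha, sigma = alpha.unsqueeze(-1), sigma.unsqueeze(-1)
    return alpha * noise - sigma * sample
```
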
Patrick von Platen
77fc197f70 Speed up test and remove kwargs from call (#1446)
Remove kwargs from call
2022-11-28 17:28:19 +01:00
Anton Lozhkov
edf22c052e Hotfix for AttributeErrors in OnnxStableDiffusionInpaintPipelineLegacy (#1448) 2022-11-28 14:18:14 +01:00
Nicolas Patry
5755d16868 [Proposal] Support loading from safetensors if file is present. (#1357)
* [Proposal] Support loading from safetensors if file is present.

* Style.

* Fix.

* Adding some test to check loading logic.

+ modify download logic to not download pytorch file if not necessary.

* Fixing the logic.

* Addressing comments.

* factor out into a function.

* Remove dead function.

* Typo.

* Extra fetch only if safetensors is there.

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-11-28 10:39:42 +01:00
anton-
6b02323a60 Release: v0.9.0 2022-11-25 17:47:36 +01:00
Kashif Rasul
462a79d39a [Docs] fixed some typos (#1425)
fixed typos
2022-11-25 17:44:07 +01:00
Patrick von Platen
6883294d44 SD2 docs (#1424)
* up

* up

* up

* up
2022-11-25 17:23:21 +01:00
Kashif Rasul
b9e921feea added initial v-pred support to DPM-solver (#1421)
* added initial v-pred support to DPM-solver

* fix sign

* added v_prediction to flax

* fixed typo
2022-11-25 17:12:58 +01:00
Patrick von Platen
7684518377 Merge branch 'main' of https://github.com/huggingface/diffusers into main 2022-11-25 15:15:09 +00:00
Patrick von Platen
520bb082be fixes tests 2022-11-25 15:15:05 +00:00
Suraj Patil
9ec5084a9c StableDiffusionUpscalePipeline (#1396)
* StableDiffusionUpscalePipeline

* fix a few things

* make it better

* fix image batching

* run vae in fp32

* fix docstr

* resize to mul of 64

* doc

* remove safety_checker

* add max_noise_level

* fix Copied

* begin tests

* slow tests

* default max_noise_level

* remove kwargs

* doc

* fix

* fix fast tests

* fix fast tests

* no sf

* don't offload vae

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-11-25 16:13:16 +01:00
Anton Lozhkov
02aa4ef12e Add tests for Stable Diffusion 2 V-prediction 768x768 (#1420) 2022-11-25 15:14:13 +01:00
Patrick von Platen
8faa822ddc Allow to set config params directly in init (#1419)
* fix

* fix deprecated kwargs logic

* add tests

* finish
2022-11-25 15:07:09 +01:00
Anton Lozhkov
86aa747da9 Fix ONNX conversion and inference (#1416) 2022-11-25 14:51:17 +01:00
Pedro Cuenca
d52388f486 Deprecate predict_epsilon (#1393)
* Adapt ddpm, ddpmsolver to prediction_type.

* Deprecate predict_epsilon in __init__.

* Bring FlaxDDIMScheduler up to date with DDIMScheduler.

* Set prediction_type as an ivar for consistency.

* Convert pipeline_ddpm

* Adapt tests.

* Adapt unconditional training script.

* Adapt BitDiffusion example.

* Add missing kwargs in dpmsolver_multistep

* Ugly workaround to accept deprecated predict_epsilon when loading
schedulers using from_pretrained.

* make style

* Remove import no longer in use.

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Use config.prediction_type everywhere

* Add a couple of Flax prediction type tests.

* make style

* fix register deprecated arg

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-11-25 14:02:15 +01:00
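
A sketch of the dispatch that `prediction_type` enables in place of the boolean `predict_epsilon`, using x_t = alpha_t * x0 + sigma_t * eps and v = alpha_t * eps - sigma_t * x0:

```python
def to_epsilon(model_output, sample, alpha_t, sigma_t, prediction_type="epsilon"):
    # Normalize the model's output to an epsilon estimate.
    if prediction_type == "epsilon":
        return model_output
    if prediction_type == "sample":          # model predicts x0 directly
        return (sample - alpha_t * model_output) / sigma_t
    if prediction_type == "v_prediction":    # model predicts v
        return alpha_t * model_output + sigma_t * sample
    raise ValueError(f"unknown prediction_type: {prediction_type}")
```
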
Kashif Rasul
babfb8a020 [MPS] call contiguous after permute (#1411)
* call contiguous after permute

Fixes for MPS device

* Fix MPS UserWarning

* make style

* Revert "Fix MPS UserWarning"

This reverts commit b46c32810e.
2022-11-25 13:59:56 +01:00
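
The fix in miniature: `permute` returns a non-contiguous view, which some MPS kernels mishandle downstream, so a contiguous copy is forced right away:

```python
import torch

x = torch.randn(2, 64, 8, 40)
x = x.permute(0, 2, 1, 3).contiguous()  # materialize the layout for MPS
```
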
Patrick von Platen
35099b207e [Versatile Diffusion] Fix remaining tests (#1418)
fix all tests
2022-11-25 13:40:41 +01:00
Patrick von Platen
2c6bc0f13b small fix 2022-11-25 12:04:15 +00:00
Patrick von Platen
2902109061 Fix all stable diffusion (#1415)
* up

* uP
2022-11-25 12:53:10 +01:00
Patrick von Platen
f26cde3dff fix clip guided (#1414) 2022-11-25 12:04:40 +01:00
Patrick von Platen
9f10c545cb Fix sample size conversion script (#1408)
up
2022-11-25 11:26:27 +01:00
Anton Lozhkov
5c10e68a1f Add SD2 inpainting integration tests (#1412)
SD2 inpainting integration tests
2022-11-25 11:25:49 +01:00
Anton Lozhkov
d50e321745 Support SD2 attention slicing (#1397)
* Support SD2 attention slicing

* Support SD2 attention slicing

* Add more copies

* Use attn_num_head_channels in blocks

* fix-copies

* Update tests

* fix imports
2022-11-24 22:42:59 +01:00
Patrick von Platen
8e2c4cd56c Deprecate sample size (#1406)
* up

* up

* fix

* uP

* more fixes

* up

* uP

* up

* up

* uP

* fix final tests
2022-11-24 22:32:44 +01:00
Anton Lozhkov
bb2c64a08c Add the new SD2 attention params to the VD text unet (#1400) 2022-11-24 21:57:27 +01:00
Patrick von Platen
05a36d5c1a Upscaling fixed (#1402)
* Upscaling fixed

* up

* more fixes

* fix

* more fixes

* finish again

* up
2022-11-24 20:33:52 +01:00
Patrick von Platen
cbfed0c256 [Config] Add optional arguments (#1395)
* Optional Components

* uP

* finish

* finish

* finish

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* up

* Update src/diffusers/pipeline_utils.py

* improve

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-11-24 20:05:41 +01:00
Patrick von Platen
e0e86b7470 Make height and width optional (#1401)
* fix

* add test

* fix test

* uP

* up

* fix some tests
2022-11-24 18:23:59 +01:00
Anton Lozhkov
81d8f4a9e1 Version 0.9.0.dev0 (#1394) 2022-11-24 14:54:29 +01:00
Suraj Patil
cecdd8bdd1 Adapt UNet2D for super-resolution (#1385)
* allow disabling self attention

* add class_embedding

* fix copies

* fix condition

* fix copies

* do_self_attention -> only_cross_attention

* fix copies

* num_classes -> num_class_embeds

* fix default value
2022-11-24 14:49:03 +01:00
Suraj Patil
30f6f44104 add v prediction (#1386)
* add v prediction

* adat euler for v pred

* velocity -> v_prediction

* simplify

* fix naming

* Update src/diffusers/schedulers/scheduling_euler_discrete.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* style

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-11-24 12:25:19 +01:00
Patrick von Platen
9f476388fa trailing . fix 2022-11-24 00:53:57 +01:00
Patrick von Platen
9479052dde fix trailing . dep object 2022-11-24 00:33:32 +01:00
Patrick von Platen
35d8186172 [Bad dependencies] Fix imports (#1382)
* fix imports

* better error

* up

* finish
2022-11-24 00:24:05 +01:00
Suraj Patil
1524122532 [Transformer2DModel] don't norm twice (#1381)
don't norm twice
2022-11-24 00:12:45 +01:00
Suraj Patil
f07a16e09b update unet2d (#1376)
* boom boom

* remove duplicate arg

* add use_linear_proj arg

* fix copies

* style

* add fast tests

* use_linear_proj -> use_linear_projection
2022-11-23 20:46:30 +01:00
anton-l
16a32c9dab Release: v0.8.0 2022-11-23 19:12:31 +01:00
Patrick von Platen
2625fb59dc [Versatile Diffusion] Add versatile diffusion model (#1283)
* up

* convert dual unet

* revert dual attn

* adapt for vd-official

* test the full pipeline

* mixed inference

* mixed inference for text2img

* add image prompting

* fix clip norm

* split text2img and img2img

* fix format

* refactor text2img

* mega pipeline

* add optimus

* refactor image var

* wip text_unet

* text unet end to end

* update tests

* reshape

* fix image to text

* add some first docs

* dual guided pipeline

* fix token ratio

* propose change

* dual transformer as a native module

* DualTransformer(nn.Module)

* DualTransformer(nn.Module)

* correct unconditional image

* save-load with mega pipeline

* remove image to text

* up

* uP

* fix

* up

* final fix

* remove_unused_weights

* test updates

* save progress

* uP

* fix dual prompts

* some fixes

* finish

* style

* finish renaming

* up

* fix

* fix

* fix

* finish

Co-authored-by: anton-l <anton@huggingface.co>
2022-11-23 19:03:45 +01:00
Suraj Patil
0eb507f2af StableDiffusionImageVariationPipeline (#1365)
* add StableDiffusionImageVariationPipeline

* add ini init

* use CLIPVisionModelWithProjection

* fix _encode_image

* add copied from

* fix copies

* add doc

* handle tensor in _encode_image

* add tests

* correct model_id

* remove copied from in enable_sequential_cpu_offload

* fix tests

* make slow tests pass

* update slow tests

* use temp model for now

* fix test_stable_diffusion_img_variation_intermediate_state

* fix test_stable_diffusion_img_variation_intermediate_state

* check for torch.Tensor

* quality

* fix name

* fix slow tests

* install transformers from source

* fix install

* fix install

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* input_image -> image

* remove deprecation warnings

* fix test_stable_diffusion_img_variation_multiple_images

* make flake happy

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-11-23 14:36:39 +01:00
Suraj Patil
9e234d8048 handle fp16 in UNet2DModel (#1216)
* make sure fp16 runs well

* add fp16 test for superes

* Update src/diffusers/models/unet_2d.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* gen on cuda

* always run fast inference test on cpu

* run on cpu

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-11-23 11:13:34 +01:00
Penn
8fd3a74322 Fix using non-square images with UNet2DModel and DDIM/DDPM pipelines (#1289)
* fix non square images with UNet2DModel and DDIM/DDPM pipelines

* fix unet_2d `sample_size` docstring

* update pipeline tests for unet uncond

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-11-23 11:11:39 +01:00
regisss
44e56de9aa Replace logger.warn by logger.warning (#1366) 2022-11-22 20:44:34 +01:00
Suraj Patil
2d6d4edbbd use memory_efficient_attention by default (#1354)
* use memory_efficient_attention by default

* Update src/diffusers/models/attention.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-11-22 13:37:17 +01:00
Suraj Patil
8b84f85192 [examples] fix mixed_precision arg (#1359)
* use accelerator to check mixed_precision

* default `mixed_precision` to `None`

* pass mixed_precision to accelerate launch
2022-11-22 13:35:23 +01:00
Manuel Brack
e50c25d808 Add Safe Stable Diffusion Pipeline (#1244)
* Add pipeline_stable_diffusion_safe.py to pipelines

* Fix repository consistency

Ran make fix-copies after adding new pipline

* Add Paper/Equation reference for parameters to doc string

* Ensure code style and quality

* Perform code refactoring

* Fix copies inherited from merge with huggingface/main

* Add docs

* Fix code style

* Fix errors in documentation

* Fix refactoring error

* remove debugging print statement

* added Safe Latent Diffusion tests

* Fix style

* Fix style

* Add pre-defined safety configurations

* Fix line-break

* fix some tests

* finish

* Change safety checker

* Add missing safety_checker.py file

* Remove unused imports

Co-authored-by: PatrickSchrML <patrick_schramowski@hotmail.de>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-11-22 11:51:30 +01:00
Patrick von Platen
182eb959e5 [Community Pipelines] K-Diffusion Pipeline (#1360)
* up

* add readme

* up

* uP
2022-11-21 18:45:50 +01:00
Birch-san
ad93593345 perf: prefer batched matmuls for attention (#1203)
perf: prefer batched matmuls for attention. added fast-path to Decoder when num_heads=1
2022-11-21 15:01:11 +01:00
Stuti R
78a6eed2d7 Add bit diffusion [WIP] (#971)
* Create bit_diffusion.py

Bit diffusion, based on the paper arXiv:2208.04202 (Chen2022AnalogBG)

* adding bit diffusion to new branch

ran tests

* tests

* tests

* tests

* tests

* removed test folders + added to README

* Update README.md

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-11-21 11:50:32 +01:00
shunxing1234
94b27fb8da change the sample model (#1352)
* Update alt_diffusion.mdx

* Update alt_diffusion.mdx
2022-11-21 11:28:25 +01:00
Patrick von Platen
ab1f01e634 make style 2022-11-20 19:37:28 +01:00
Patrick von Platen
2b31740d54 Merge branch 'main' of https://github.com/huggingface/diffusers 2022-11-20 19:37:14 +01:00
Victor Schmidt
3bec90ff2c Handle batches and Tensors in pipeline_stable_diffusion_inpaint.py:prepare_mask_and_masked_image (#1003)
* Handle batches and Tensors in `prepare_mask_and_masked_image`

* `blackfy`
upgrade `black`

* handle mask as `np.array`

* add docstring

* revert `black` changes with smaller line length

* missing ValueError in docstring

* raise `TypeError` for image as tensor but not mask

* typo in mask shape selection

* check for batch dim

* fix: wrong indentation

* add tests

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-11-20 19:33:09 +01:00
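
For orientation, the tensor path of `prepare_mask_and_masked_image` boils down to binarizing the mask and blanking the region to repaint; a simplified sketch with the in-painting conventions (image in [-1, 1], mask in [0, 1]):

```python
import torch

def prepare_mask_and_masked_image(image: torch.Tensor, mask: torch.Tensor):
    # image: (B, 3, H, W) in [-1, 1]; mask: (B, 1, H, W) in [0, 1]
    mask = (mask >= 0.5).to(image.dtype)   # binarize: 1 = region to repaint
    masked_image = image * (1.0 - mask)    # zero out the masked region
    return mask, masked_image
```
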
Ki
eb2425b88c Update README.md: Minor change to Imagic code snippet, missing dir error (#1347)
Minor change to Imagic Readme

Missing dir causes an error when running the example code.
2022-11-20 18:59:56 +01:00
Ki
44efcbda0a Update README.md: IMAGIC example code snippet misspelling (#1346)
Update README.md

Minor spelling mistake.
2022-11-20 18:56:57 +01:00
Juan Acevedo
7bbbfbfd18 Jax infer support negative prompt (#1337)
* support negative prompts in sd jax pipeline

* pass batched neg_prompt

* only encode when negative prompt is None

Co-authored-by: Juan Acevedo <jfacevedo@google.com>
2022-11-19 20:51:52 +01:00
Clayton Sims
30220905c4 Legacy Inpainting Pipeline for Onnx Models (#1237)
* Add legacy inpainting pipeline compatibility for onnx

* remove commented out line

* Add onnx legacy inpainting test

* Fix slow decorators

* pep8 styling

* isort styling

* dummy object

* ordering consistency

* style

* docstring styles

* Refactor common prompt encoding pattern

* Update tests to permanent repository home

* support all available schedulers until ONNX IO binding is available

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

* updated styling from PR suggested feedback

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
2022-11-18 16:33:12 +01:00
Anton Lozhkov
7240318179 Fix the order of casts for onnx inpainting (#1338) 2022-11-18 16:30:07 +01:00
NotNANtoN
aa2ce41b99 Fix img2img speed with LMS-Discrete Scheduler (#896)
Casting `self.sigmas` into a different dtype (the one of original_samples) is not advisable. In my img2img pipeline this leads to a long running time in the `integrate.quad` call later on; by long I mean more than 10x slower.

Co-authored-by: Anton Lozhkov <anton@huggingface.co>
2022-11-18 16:01:57 +01:00
Anton Lozhkov
81fa2d688d Avoid nested fix-copies (#1332)
* Avoid nested `# Copied from` statements during `make fix-copies`

* style
2022-11-18 15:33:57 +01:00
Patrick von Platen
195e437ac5 Correct path to scheduler (#1322)
* [Examples] Correct path

* uP
2022-11-18 12:32:49 +01:00
Patrick von Platen
fcfdd95f0b Fix/Enable all schedulers for in-painting (#1331)
* inpaint fix k lms

* onnox as well

* up
2022-11-18 12:32:17 +01:00
Simon Kirsten
5dcef138bf [Flax] Fix loading scheduler from subfolder (#1319)
[FLAX] Fix loading scheduler from subfolder
2022-11-18 11:31:07 +01:00
Nathan Lambert
0cfbb51b0c add docs for multi-modal examples (#1227)
* add docs for multi-modal

* many changes

* fix docs build

* fix links

* Update docs/source/using-diffusers/other-modalities.mdx

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-11-17 10:25:49 -08:00
Patrick von Platen
b9b7039f0e img2text Typo (#1329)
* make fix copies again

* Fix typo
2022-11-17 16:48:15 +01:00
Patrick von Platen
63b34191b9 Fix typo 2022-11-17 16:47:19 +01:00
Patrick von Platen
b21a463aa9 Merge branch 'main' of https://github.com/huggingface/diffusers 2022-11-17 16:46:33 +01:00
Anton Lozhkov
e05ca84f41 [ONNX] Support Euler schedulers (#1328) 2022-11-17 16:37:35 +01:00
Patrick von Platen
3b48620f5e Merge branch 'main' of https://github.com/huggingface/diffusers 2022-11-17 16:14:53 +01:00
Patrick von Platen
632dacea2f [Custom pipeline] Easier loading of local pipelines (#1327)
* [Custom pipeline] Easier loading of local pipelines

* upgrade black
2022-11-17 16:00:26 +01:00
Patrick von Platen
3fb28c44a3 Merge branch 'main' of https://github.com/huggingface/diffusers 2022-11-17 15:50:36 +01:00
Patrick von Platen
2dd12e38af make fix copies again 2022-11-17 15:50:33 +01:00
Prathik Rao
3346ec3acd integrate ort (#1110)
* integrate ort

* use return_dict=False

* revert unet return value change

* revert unet return value change

* add note to readme

* adjust readme

* add contact

* `make style`

Co-authored-by: Prathik Rao <prathikrao@microsoft.com>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
2022-11-17 15:48:41 +01:00
Anton Lozhkov
61719bf26c Fix gpu_id (#1326) 2022-11-17 15:41:33 +01:00
Patrick von Platen
b3911f89a3 make fix copies 2022-11-17 15:06:23 +01:00
Patrick von Platen
245e9cc7ff fix make style 2022-11-17 15:03:31 +01:00
Pedro Cuenca
1138d63b51 Temporary local test for PIL_INTERPOLATION (#1317)
* Temporary local test for PIL_INTERPOLATION

* Fix examples too.
2022-11-16 18:42:21 +01:00
Dhruv Karan
afdd7bb635 [Community Pipeline] CLIPSeg + StableDiffusionInpainting (#1250)
* text inpainting

* refactor
2022-11-16 18:18:51 +01:00
Kamal Raj
aa5c4c2609 doc string args shape fix (#1243)
* doc string args shape fix

* fix styling
2022-11-16 18:03:44 +01:00
Will Berman
f1fcfdeec5 vq diffusion classifier free sampling (#1294)
* vq diffusion classifier free sampling

* correct

* uP

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-11-16 17:51:43 +01:00
dblunk88
09d0546ad0 cpu offloading: mutli GPU support (#1143)
mutli GPU support
2022-11-16 17:40:16 +01:00
Patrick von Platen
65d136e067 Add improved handling of pil (#1309)
* Better error message for transformers dummy

* [PIL] Better deprecation functionality

* up
2022-11-16 15:58:22 +01:00
Suraj Patil
46893adacd [AltDiffusion] add tests (#1311)
* begin tests

* fix model ids

* don't use safety checker in tests

* add im2img2 tests

* fix integration tests

* integration tests

* style

* add sentencepiece in test dep

* quality

* 4 decimal points

* fix im2img test

* increase the tolerance slightly
2022-11-16 15:40:26 +01:00
Mishig
327ddc8770 Revert "Update pr docs actions" (#1307)
Revert "Update pr docs actions (#1194)"

This reverts commit 32b0736d8a.
2022-11-16 11:46:13 +01:00
Patrick von Platen
af9ee8736c Better error message for transformers dummy (#1306) 2022-11-16 10:28:19 +01:00
Patrick von Platen
8a73064576 Add AltDiffusion (#1299)
* add conversion script for vae

* up

* up

* some fixes

* add text model

* use the correct config

* add docs

* move model in it's own file

* move model in its own file

* pass attenion mask to text encoder

* pass attn mask to uncond inputs

* quality

* fix image2image

* add imag2image in init

* fix import

* fix one more import

* fix import, dummy objects

* fix copied from

* up

* finish

Co-authored-by: patil-suraj <surajp815@gmail.com>
2022-11-15 21:32:26 +01:00
Patrick von Platen
4625f04bc0 remove bogus files 2022-11-15 17:34:00 +00:00
Patrick von Platen
554b374d20 Merge branch 'main' of https://github.com/huggingface/diffusers into main 2022-11-15 17:17:47 +00:00
Patrick von Platen
a0520193e1 Add Scheduler.from_pretrained and better scheduler changing (#1286)
* add conversion script for vae

* uP

* uP

* more changes

* push

* up

* finish again

* up

* up

* up

* up

* finish

* up

* uP

* up

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>

* up

* up

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-11-15 18:15:13 +01:00
Glenn 'devalias' Grant
db1cb0b1a2 [dreambooth] link to bitsandbytes readme for installation (#1229)
* add 'conda install cudatoolkit' to dreambooth 'training on 16GB' example 

fixes https://github.com/huggingface/diffusers/issues/1207

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-11-15 12:53:54 +01:00
Dhruv Naik
610e2a6fd9 Fix incorrect link to Stable Diffusion notebook (#1291)
Update README.md
2022-11-15 10:19:35 +01:00
Nan Liu
07f9e56d51 add source link to composable diffusion model (#1293) 2022-11-15 10:19:06 +01:00
Joshua Lochner
57525bb418 Fix documentation typo for UNet2DModel and UNet2DConditionModel (#1275)
* Fix documentation typo

* Fix other typo
2022-11-14 22:54:09 +01:00
Nathan Lambert
7c5fef81e0 Add UNet 1d for RL model for planning + colab (#105)
* re-add RL model code

* match model forward api

* add register_to_config, pass training tests

* fix tests, update forward outputs

* remove unused code, some comments

* add to docs

* remove extra embedding code

* unify time embedding

* remove conv1d output sequential

* remove sequential from conv1dblock

* style and deleting duplicated code

* clean files

* remove unused variables

* clean variables

* add 1d resnet block structure for downsample

* rename as unet1d

* fix renaming

* rename files

* add get_block(...) api

* unify args for model1d like model2d

* minor cleaning

* fix docs

* improve 1d resnet blocks

* fix tests, remove permuts

* fix style

* add output activation

* rename flax blocks file

* Add Value Function and corresponding example script to Diffuser implementation (#884)

* valuefunction code

* start example scripts

* missing imports

* bug fixes and placeholder example script

* add value function scheduler

* load value function from hub and get best actions in example

* very close to working example

* larger batch size for planning

* more tests

* merge unet1d changes

* wandb for debugging, use newer models

* success!

* turns out we just need more diffusion steps

* run on modal

* merge and code cleanup

* use same api for rl model

* fix variance type

* wrong normalization function

* add tests

* style

* style and quality

* edits based on comments

* style and quality

* remove unused var

* hack unet1d into a value function

* add pipeline

* fix arg order

* add pipeline to core library

* community pipeline

* fix couple shape bugs

* style

* Apply suggestions from code review

Co-authored-by: Nathan Lambert <nathan@huggingface.co>

* update post merge of scripts

* add mdiblock / outblock architecture

* Pipeline cleanup (#947)

* Apply suggestions from code review

* clean up comments

* convert older script to using pipeline and add readme

* rename scripts

* style, update tests

* delete unet rl model file

* remove imports in src

Co-authored-by: Nathan Lambert <nathan@huggingface.co>

* Update src/diffusers/models/unet_1d_blocks.py

* Update tests/test_models_unet.py

* RL Cleanup v2 (#965)

* add specific vf block and update tests

* style

* Update tests/test_models_unet.py

Co-authored-by: Nathan Lambert <nathan@huggingface.co>

* fix quality in tests

* fix quality style, split test file

* fix checks / tests

* make timesteps closer to main

* unify block API

* unify forward api

* delete lines in examples

* style

* examples style

* all tests pass

* make style

* make dance_diff test pass

* Refactoring RL PR (#1200)

* init file changes

* add import utils

* finish cleaning files, imports

* remove import flags

* clean examples

* fix imports, tests for merge

* update readmes

* hotfix for tests

* quality

* fix some tests

* change defaults

* more mps test fixes

* unet1d defaults

* do not default import experimental

* defaults for tests

* fix tests

* fix-copies

* fix

* changes per Patrick's comments (#1285)

* changes per Patrick's comments

* update conversion script

* fix renaming

* skip more mps tests

* last test fix

* Update examples/rl/README.md

Co-authored-by: Ben Glickenhaus <benglickenhaus@gmail.com>
2022-11-14 13:48:48 -08:00
Patrick von Platen
d5ab55e437 Merge branch 'main' of https://github.com/huggingface/diffusers into main 2022-11-14 21:10:47 +00:00
Suraj Patil
a8d0977769 [StableDiffusionInpaintPipeline] fix batch_size for mask and masked latents (#1279)
fix bs for mask and masked latents
2022-11-14 22:03:10 +01:00
Patrick von Platen
e4ffadc429 Merge branch 'main' of https://github.com/huggingface/diffusers into main 2022-11-14 21:01:39 +00:00
Patrick von Platen
ec7c8d32b0 add conversion script for vae 2022-11-14 19:43:17 +00:00
Partho
c9b3463703 Fix wrong link in text2img fine-tuning documentation (#1282)
fix link typo
2022-11-14 20:42:14 +01:00
Lime-Cakes
33d7e89c42 Edited attention.py for older xformers (#1270)
Older versions of xformers require query, key, and value to be contiguous, so this calls .contiguous() on q/k/v before passing them to xformers.
2022-11-14 13:35:47 +01:00
Patrick von Platen
b3c5e086e5 Finalize stable diffusion refactor (#1269)
* finish

* cleaner

* more fixes

* refactor

* make fix copies

* refactor cycle diffusion

* finish

* finish2

* Apply suggestions from code review
2022-11-13 23:54:30 +01:00
Patrick von Platen
4c660d16d0 [Stable Diffusion] Fix padding / truncation (#1226)
* [Stable Diffusion] Fix padding / truncation

* finish
2022-11-13 20:19:55 +01:00
ruanrz
8171566163 [Docs] improve img2img example (#1193)
update img2img example
2022-11-11 12:28:20 +01:00
Pedro Cuenca
045157a46f Fix Flax usage comments (#1211)
* Fix Flax usage comments (they didn't work).

* Spell out dtype

* make style
2022-11-10 16:00:17 +01:00
apolinario
a09d47532d Add a reference to the name 'Sampler' (#1172)
* Add a reference to the name 'Sampler'

- Help people who are familiar with the name 'samplers' understand that we call them schedulers
- Better SEO, so people googling for samplers can find our library as well

* Update README.md with a reference to 'Sampler'
2022-11-10 14:37:42 +01:00
Anton Lozhkov
2e980ac9a0 [Tests] Adjust TPU test values (#1233)
* [Tests] Adjust TPU test values

* slow tests

* remaining refs
2022-11-10 00:44:42 +01:00
Anton Lozhkov
0feb21a18c [Tests] Fix mps+generator fast tests (#1230)
* [Tests] Fix mps+generator fast tests

* mps for Euler

* retry

* warmup issue again?

* fix reproducible initial noise

* Revert "fix reproducible initial noise"

This reverts commit f300d05cb9.

* fix reproducible initial noise

* fix device
2022-11-10 00:09:22 +01:00
Patrick von Platen
187de44352 Fix device on save/load tests 2022-11-09 22:18:14 +00:00
Anton Lozhkov
7d0c272939 Match the generator device to the pipeline for DDPM and DDIM (#1222)
* Match the generator device to the pipeline for DDPM and DDIM

* style

* fix

* update values

* fix fast tests

* trigger slow tests

* deprecate

* last value fixes

* mps fixes
2022-11-09 23:00:23 +01:00
Patrick von Platen
3d98dc763a Factor out encode text with Copied from (#1224)
* up

* more fixes

* fix

* finalize

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py

* upload models

* up
2022-11-09 22:18:57 +01:00
exo-pla-net
13f388eeb2 Improve documentation for the LPW pipeline (#1182) 2022-11-09 21:39:27 +01:00
Pedro Cuenca
af279434d0 Flax tests: don't hardcode number of devices (#1175)
Flax tests: don't hardcode number of devices.

This makes it possible to test on CPU/GPU. However, expected slices are
only checked when there are 8 devices.
2022-11-09 20:04:43 +01:00
Jesse Casey
4969f46511 apply repeat_interleave fix for mps to stable diffusion image2image pipeline (#1135)
copy from other pipeline
2022-11-09 20:01:31 +01:00
Patrick von Platen
6c0335c7f9 DDIM docs (#1219) 2022-11-09 16:02:11 +01:00
Patrick von Platen
0248541dea [Conversion] Improve conversion script (#1218)
up
2022-11-09 15:46:08 +01:00
Duong A. Nguyen
5a59f9b717 Add LDM Super Resolution pipeline (#1116)
* Add ldm super resolution pipeline

* style

* fix copies

* style

* fix doc

* Update src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion_superresolution.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion_superresolution.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion_superresolution.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Update src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion_superresolution.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* add doc

* address comments

* address comments

* fix doc

* minor

* add tests

* add tests

* load text encoder from subfolder

* fix test

* fix test

* style

* style

* handle mps latents

* unfix typo

* unfix typo

* Update tests/pipelines/latent_diffusion/test_latent_diffusion_superresolution.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* fix set_timesteps mps

* fix set_timesteps mps

* Update src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion_superresolution.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Update src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion_superresolution.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Update src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion_superresolution.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Update src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion_superresolution.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* style

* test 64x64 instead of 256x256

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-11-09 13:42:16 +01:00
Patrick von Platen
b93fe08545 [Loading] Make sure loading edge cases work (#1192)
* [Loading] Make edge cases work

* up

* finish

* up
2022-11-09 12:28:56 +01:00
Duong A. Nguyen
3f7edc5f72 Fix layer names in LDM conversion script (#1206)
fix layer names in the LDM conversion script
2022-11-09 12:08:30 +01:00
Suraj Patil
cd77a03651 [CLIPGuidedStableDiffusion] support DDIM scheduler (#1190)
add ddim in clip guided
2022-11-09 11:46:12 +01:00
camenduru
663f0c1963 [Flax] fix extra copy pasta 🍝 (#1187) 2022-11-09 11:34:15 +01:00
Patrick von Platen
6cf72a9b1e Fix slow tests (#1210)
* fix tests

* Fix more

* more
2022-11-09 11:22:12 +01:00
Anton Lozhkov
24895a1f49 Fix cpu offloading (#1177)
* Fix cpu offloading

* get offloaded devices locally for SD pipelines
2022-11-09 10:28:10 +01:00
Nathan Lambert
598ff76bbf add licenses to pipelines (#1201)
add licenses
2022-11-09 10:06:49 +01:00
Patrick von Platen
249d9bc0e7 [Scheduler] Move predict epsilon to init (#1155)
* [Scheduler] Move predict epsilon to init

* up

* uP

* uP

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* up

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-11-08 18:08:08 +01:00
Suraj Patil
5786b0e2f7 handle dtype xformers attention (#1196)
handle dtype xformers
2022-11-08 17:15:23 +01:00
Mishig
32b0736d8a Update pr docs actions (#1194) 2022-11-08 16:38:09 +01:00
Pedro Cuenca
614c182f94 Restore compatibility with deprecated StableDiffusionOnnxPipeline (#1191)
* Restore compatibility with old ONNX pipeline.

I think it broke in #552.

* Add missing attribute `vae_encoder`
2022-11-08 15:08:35 +01:00
Anton Lozhkov
11f7d6f3cc [ONNX] Improve ONNXPipeline scheduler compatibility, fix safety_checker (#1173)
* [ONNX] Improve ONNX scheduler compatibility, fix safety_checker

* typo
2022-11-08 14:39:11 +01:00
Yuta Hayashibe
555203e1fa Warning for invalid options without "--with_prior_preservation" (#1065)
* Make errors for invalid options without "--with_prior_preservation"

* Make --instance_prompt required

* Removed needless check because --instance_data_dir is marked as required

* Updated messages

* Use logger.warning instead of raise errors

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-11-08 14:31:13 +01:00
Pedro Cuenca
813744e5f3 MPS schedulers: don't use float64 (#1169)
* Schedulers: don't use float64 on mps

* Test set_timesteps() on device (float schedulers).

* SD pipeline: use device in set_timesteps.

* SD in-painting pipeline: use device in set_timesteps.

* Tests: fix mps crashes.

* Skip test_load_pipeline_from_git on mps.

Not compatible with float16.

* Use device.type instead of str in Euler schedulers.
2022-11-08 13:11:33 +01:00
Suraj Patil
5a8b356922 [DDIMScheduler] fix noise device in ddim step (#1189)
* fix noise device in ddim sched

* fix typo

* self.device -> device

* remove duplicated if

* use str device

* don't use str for device
2022-11-08 13:11:12 +01:00
Pedro Cuenca
20a05d6a50 Fix small typo (#1178)
Unless it's intentional, lol
2022-11-08 12:30:51 +01:00
Patrick von Platen
c3dcb6749b Update config.yml 2022-11-08 11:31:15 +01:00
Pedro Cuenca
fa6e5209a8 Link to Dreambooth blog post instead of W&B report (#1180)
Link to Dreambooth blog post instead of W&B report.
2022-11-07 21:59:36 +01:00
Duong A. Nguyen
ac4c695d97 [Flax examples] Load text encoder from subfolder (#1147)
load text encoder from subfolder
2022-11-07 21:26:59 +01:00
JuanCarlosPi
01733238a6 [Community Pipeline] Add multilingual stable diffusion to community pipelines (#1142)
* Add multilingual_stable_diffusion.py file

* Add multilingual stable diffusion to examples README file

* Update examples/community/README.md

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-11-07 21:11:59 +01:00
Alex McKinney
bcdb3d594c Community pipeline img2img inpainting (#1114)
* adds image to image inpainting with `PIL.Image.Image` inputs
the base implementation claims to support `torch.Tensor`, but it seems it
would also fail in this case.

* `make style` and `make quality`

* updates community examples readme

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-11-07 21:06:52 +01:00
Patrick von Platen
72eae64d67 Fix dtype safety checker inpaint legacy (#1137)
* [Stable Diffusion Inpaint Legacy] Fix some things

* uP
2022-11-07 20:57:45 +01:00
Patrick von Platen
de7536281a fix image docs 2022-11-07 17:25:13 +01:00
Patrick von Platen
b500df1155 [Docs] Add loading script (#1174)
* add loading script

* Apply suggestions from code review

* Apply suggestions from code review

Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>

* correct

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* uP

Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-11-07 17:15:41 +01:00
Pedro Cuenca
0dd8c6b4db Fix community pipeline links (#1162)
* Change title to match the sidebar in _toctree.

* Fix custom pipe link, add link to contribute.

* Fix community pipeline links.
2022-11-07 14:32:51 +01:00
Duong A. Nguyen
cd502b25cf Fix typo latens -> latents (#1171)
fix typo
2022-11-07 13:34:45 +01:00
Pedro Cuenca
e86a280c45 Remove warning about half precision on MPS (#1163)
Remove warning about half precision on MPS.
2022-11-07 12:27:17 +01:00
Cheng Lu
b4a1ed8544 Add multistep DPM-Solver discrete scheduler (#1132)
* add dpmsolver discrete pytorch scheduler

* fix some typos in dpm-solver pytorch

* add dpm-solver pytorch in stable-diffusion pipeline

* add jax/flax version dpm-solver

* change code style

* change code style

* add docs

* add `add_noise` method for dpmsolver

* add pytorch unit test for dpmsolver

* add dummy object for pytorch dpmsolver

* Update src/diffusers/schedulers/scheduling_dpmsolver_discrete.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Update tests/test_config.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Update tests/test_config.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* resolve the code comments

* rename the file

* change class name

* fix code style

* add auto docs for dpmsolver multistep

* add more explanations for the stabilizing trick (for steps < 15)

* delete the dummy file

* change the API name of predict_epsilon, algorithm_type and solver_type

* add compatible lists

Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-11-06 22:49:55 +01:00
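A sketch of how the new multistep DPM-Solver can be dropped into a Stable Diffusion pipeline, assuming the scheduler-swap API of later diffusers releases and an illustrative checkpoint id:

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# reuse the existing scheduler config; DPM-Solver typically needs only ~20 steps
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

image = pipe("an astronaut riding a horse", num_inference_steps=20).images[0]
```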
Pedro Cuenca
08a6dc8a58 Flax: Flip sin to cos in time embeddings (#1149)
Flip sin to cos in t embeddings.

This was assumed in the previous implementation, but now the default is
the opposite.

Fixes #1145.
2022-11-05 22:17:41 +01:00
Chen Wu (吴尘)
9d8943b7e7 Add CycleDiffusion pipeline using Stable Diffusion (#888)
* Add CycleDiffusion pipeline for Stable Diffusion

* Add the option of passing noise to DDIMScheduler

Add the option of providing the noise itself to DDIMScheduler, instead of the random seed generator.

* Update README.md

* Update README.md

* Update pipeline_stable_diffusion_cycle_diffusion.py

* Update pipeline_stable_diffusion_cycle_diffusion.py

* Update pipeline_stable_diffusion_cycle_diffusion.py

* Update pipeline_stable_diffusion_cycle_diffusion.py

* Update scheduling_ddim.py

* Update import format

* Update pipeline_stable_diffusion_cycle_diffusion.py

* Update scheduling_ddim.py

* Update src/diffusers/schedulers/scheduling_ddim.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/schedulers/scheduling_ddim.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/schedulers/scheduling_ddim.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/schedulers/scheduling_ddim.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/schedulers/scheduling_ddim.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update scheduling_ddim.py

* Update scheduling_ddim.py

* Update scheduling_ddim.py

* add two tests

* Update pipeline_stable_diffusion_cycle_diffusion.py

* Update pipeline_stable_diffusion_cycle_diffusion.py

* Update README.md

* Rename pipeline name as suggested in the latest reviewer comment

* Update test_pipelines.py

* Update test_pipelines.py

* Update test_pipelines.py

* Update pipeline_stable_diffusion_cycle_diffusion.py

* Remove the generator

This generator does not control all randomness during sampling, which can be misleading.

* Update optimal hyperparameters

* Update src/diffusers/pipelines/stable_diffusion/README.md

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/README.md

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/README.md

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Apply suggestions from code review

* uP

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_cycle_diffusion.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* up

* up

* Replace assert with ValueError

* finish docs

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-11-04 20:51:06 +01:00
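An illustrative CycleDiffusion call, assuming the argument names shown in the pipeline docs (`source_prompt`, `source_guidance_scale`) and a local input image. CycleDiffusion is built around DDIM, so a DDIM scheduler is loaded explicitly:

```python
import torch
from PIL import Image
from diffusers import CycleDiffusionPipeline, DDIMScheduler

scheduler = DDIMScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler")
pipe = CycleDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", scheduler=scheduler
).to("cuda")

init_image = Image.open("blue_car.png").convert("RGB").resize((512, 512))
image = pipe(
    prompt="A black colored car",
    source_prompt="A blue colored car",  # describes the input image
    image=init_image,
    num_inference_steps=100,
    strength=0.85,
    guidance_scale=3,
    source_guidance_scale=1,
).images[0]
```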
Pi Esposito
1172c9634b add enable sequential cpu offloading to other stable diffusion pipelines (#1085)
* add enable sequential cpu offloading to other stable diffusion pipelines

* trigger ci

* fix styling

* interpolate before converting to device to avoid breaking when cpu_offload is enabled with fp16

Co-authored-by: Pedro Gengo  <pedro.gabriel.lourenco@hotmail.com>

* style again, I need to stop forgetting this thing

* fix inpainting bug that could cause device misalignment

Co-authored-by: Pedro Gengo  <pedro.gabriel.lourenco@hotmail.com>

* Apply suggestions from code review

Co-authored-by: Pedro Gengo  <pedro.gabriel.lourenco@hotmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-11-04 19:25:28 +01:00
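What enabling sequential CPU offload looks like on any of the pipelines this PR covers (a sketch; it requires `accelerate`, and the pipeline should not be moved to the GPU manually afterwards):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
# submodules are moved to the GPU one at a time and back to CPU when done,
# trading throughput for a much smaller peak VRAM footprint
pipe.enable_sequential_cpu_offload()

image = pipe("a photo of a cat").images[0]
```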
Anton Lozhkov
2fcae69f2a Bump to 0.8.0.dev0 (#1131)
* Bump to 0.8.0.dev0

* deprecate int timesteps

* style
2022-11-04 19:06:24 +01:00
SkyTNT
a480229463 [Community Pipeline] lpw_stable_diffusion: add xformers_memory_efficient_attention and sequential_cpu_offload (#1130)
lpw_stable_diffusion: xformers and cpu_offload
2022-11-04 18:38:37 +01:00
Chenguo Lin
5b20d3b3d7 fix the parameter naming in self.downsamplers (#1108)
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-11-04 18:05:19 +01:00
Lewington-pitsos
2c108693cc Test precision increases (#1113)
* increase the precision of slice-based tests and make the default test case easier to single out

* increase precision of unit tests which already rely on float comparisons

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-11-04 17:54:01 +01:00
webbigdata-jp
af7b1c3bf2 fix 404 link in example/README.md (#1136)
fix 404 link in README.md
2022-11-04 16:45:58 +01:00
Patrick von Platen
1d0f3c211e Move accelerate to a soft-dependency (#1134)
* finish

* finish

* Update src/diffusers/modeling_utils.py

* Update src/diffusers/pipeline_utils.py

Co-authored-by: Anton Lozhkov <anton@huggingface.co>

* more fixes

* fix

Co-authored-by: Anton Lozhkov <anton@huggingface.co>
2022-11-04 14:58:52 +01:00
Duong A. Nguyen
c62b3a2e7e [Flax] Fix sample batch size DreamBooth (#1129)
fix sample batch size
2022-11-04 13:49:57 +01:00
Patrick von Platen
bde4880c9c make style 2022-11-03 17:57:51 +00:00
Patrick von Platen
a24862cdaf Correct VQDiffusion Pipeline import 2022-11-03 17:55:14 +00:00
Patrick von Platen
9eb389f298 Merge branch 'main' of https://github.com/huggingface/diffusers into main 2022-11-03 17:55:03 +00:00
Patrick von Platen
33108bfa6b Correct VQDiffusion Pipeline import 2022-11-03 17:54:48 +00:00
anton-l
1578679ff4 Release: v0.7.0 2022-11-03 18:47:20 +01:00
Pedro Cuenca
118c5be94a Docs: Do not require PyTorch nightlies (#1123)
Do not require PyTorch nightlies.
2022-11-03 18:17:23 +01:00
Suraj Patil
7b030a7d68 handle device for randn in euler step (#1124)
* handle device for randn in euler step

* convert device to str
2022-11-03 18:13:18 +01:00
Patrick von Platen
42bb459457 [Low cpu memory] Correct naming and improve default usage (#1122)
* correct naming

* finish

* Apply suggestions from code review

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>

Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-11-03 18:11:18 +01:00
Patrick von Platen
988c82227d fix copies 2022-11-03 17:32:39 +01:00
Suraj Patil
7482178162 default fast model loading 🔥 (#1115)
* make accelerate hard dep

* default fast init

* move params to cpu when device map is None

* handle device_map=None

* handle torch < 1.9

* remove device_map="auto"

* style

* add accelerate in torch extra

* remove accelerate from extras["test"]

* raise an error if torch is available but not accelerate

* update installation docs

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* improve default loading speed even further, allow disabling fast loading

* address review comments

* adapt the tests

* fix test_stable_diffusion_fast_load

* fix test_read_init

* temp fix for dummy checks

* Trigger Build

* Apply suggestions from code review

Co-authored-by: Anton Lozhkov <anton@huggingface.co>

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
2022-11-03 17:25:57 +01:00
Will Berman
ef2ea33c3b VQ-diffusion (#658)
* Changes for VQ-diffusion VQVAE

Add specify dimension of embeddings to VQModel:
`VQModel` will by default set the dimension of embeddings to the number
of latent channels. The VQ-diffusion VQVAE has a smaller
embedding dimension, 128, than number of latent channels, 256.

Add AttnDownEncoderBlock2D and AttnUpDecoderBlock2D to the up and down
unet block helpers. VQ-diffusion's VQVAE uses those two block types.

* Changes for VQ-diffusion transformer

Modify attention.py so SpatialTransformer can be used for
VQ-diffusion's transformer.

SpatialTransformer:
- Can now operate over discrete inputs (classes of vector embeddings) as well as continuous.
- `in_channels` was made optional in the constructor so two locations where it was passed as a positional arg were moved to kwargs
- modified forward pass to take optional timestep embeddings

ImagePositionalEmbeddings:
- added to provide positional embeddings to discrete inputs for latent pixels

BasicTransformerBlock:
- norm layers were made configurable so that the VQ-diffusion could use AdaLayerNorm with timestep embeddings
- modified forward pass to take optional timestep embeddings

CrossAttention:
- now may optionally take a bias parameter for its query, key, and value linear layers

FeedForward:
- Internal layers are now configurable

ApproximateGELU:
- Activation function in VQ-diffusion's feedforward layer

AdaLayerNorm:
- Norm layer modified to incorporate timestep embeddings

* Add VQ-diffusion scheduler

* Add VQ-diffusion pipeline

* Add VQ-diffusion convert script to diffusers

* Add VQ-diffusion dummy objects

* Add VQ-diffusion markdown docs

* Add VQ-diffusion tests

* some renaming

* some fixes

* more renaming

* correct

* fix typo

* correct weights

* finalize

* fix tests

* Apply suggestions from code review

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* finish

* finish

* up

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-11-03 16:10:28 +01:00
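A minimal run of the new VQ-diffusion pipeline; the checkpoint id is the one published on the Hub for this port (`microsoft/vq-diffusion-ithq`), assumed rather than taken from the PR:

```python
from diffusers import VQDiffusionPipeline

pipe = VQDiffusionPipeline.from_pretrained("microsoft/vq-diffusion-ithq").to("cuda")
image = pipe("teddy bear playing in the pool", num_inference_steps=100).images[0]
image.save("teddy.png")
```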
Pedro Cuenca
269109dbfb Continuation of #1035 (#1120)
* remove batch size from repeat

* repeat empty string if uncond_tokens is none

* fix inpaint pipes

* return back whitespace to pass code quality

* Apply suggestions from code review

* Fix typos.

Co-authored-by: Had <had-95@yandex.ru>
2022-11-03 15:49:20 +01:00
Revist
d38c804320 feat: add repaint (#974)
* feat: add repaint

* fix: fix quality check with `make fix-copies`

* fix: remove old unnecessary arg

* chore: change default to DDPM (looks better in experiments)

* ".to(device)" changed to "device="

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

* make generator device-specific

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

* make generator device-specific and change shape

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

* fix: add preprocessing for image and mask

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

* fix: update test

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

* Update src/diffusers/pipelines/repaint/pipeline_repaint.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Add docs and examples

* Fix toctree

Co-authored-by: fja <fja@zurich.ibm.com>
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
2022-11-03 15:42:46 +01:00
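A sketch of the RePaint pipeline added here; the file names are placeholders, and the mask polarity should be checked against the pipeline docs. `jump_length` and `jump_n_sample` control RePaint's resampling schedule:

```python
import torch
from PIL import Image
from diffusers import RePaintPipeline, RePaintScheduler

scheduler = RePaintScheduler.from_pretrained("google/ddpm-ema-celebahq-256")
pipe = RePaintPipeline.from_pretrained(
    "google/ddpm-ema-celebahq-256", scheduler=scheduler
).to("cuda")

original = Image.open("face_256.png")  # 256x256 input
mask = Image.open("mask_256.png")      # binary mask; see docs for which value is inpainted

result = pipe(
    image=original,
    mask_image=mask,
    num_inference_steps=250,
    eta=0.0,
    jump_length=10,    # how far the sampler jumps back when resampling
    jump_n_sample=10,  # how many times each segment is resampled
    generator=torch.Generator().manual_seed(0),
).images[0]
```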
Anton Lozhkov
4a38166afe Allow saving None pipeline components (#1118)
* Allow saving `None` pipeline components

* support flax as well

* style
2022-11-03 15:41:33 +01:00
Anton Lozhkov
0edf9ca082 Fix hub-dependent tests for PRs (#1119)
* Remove the hub token

* replace repos

* style
2022-11-03 15:24:32 +01:00
Patrick von Platen
c39a511b5f [Loading] Ignore unneeded files (#1107)
* [Loading] Ignore unneeded files

* up
2022-11-02 19:20:42 +01:00
Denis
cbcd0512f0 Training to predict x0 in training example (#1031)
* changed training example to add option to train model that predicts x0 (instead of eps), changed DDPM pipeline accordingly

* Revert "changed training example to add option to train model that predicts x0 (instead of eps), changed DDPM pipeline accordingly"

This reverts commit c5efb52564.

* changed training example to add option to train model that predicts x0 (instead of eps), changed DDPM pipeline accordingly

* fixed code style

Co-authored-by: lukovnikov <lukovnikov@users.noreply.github.com>
2022-11-02 17:43:40 +01:00
Kashif Rasul
0b61cea347 [Flax] time embedding (#1081)
* initial get_sinusoidal_embeddings

* added asserts

* better var name

* fix docs
2022-11-02 16:54:30 +01:00
Yuta Hayashibe
33c487455e Fix padding in dreambooth (#1030) 2022-11-02 16:37:05 +01:00
Grigory Sizov
5cd29d623a Fix tests for equivalence of DDIM and DDPM pipelines (#1069)
* Fix equality test for ddim and ddpm

* add docs for use_clipped_model_output in DDIM

* fix inline comment

* reorder imports in test_pipelines.py

* Ignore use_clipped_model_output if scheduler doesn't take it
2022-11-02 14:50:32 +01:00
Omiita
1216a3b122 Fix a small typo of a variable name (#1063)
Fix a small typo

fix a typo in `models/attention.py`.
weight -> width
2022-11-02 14:46:52 +01:00
Anton Lozhkov
4e59bcc680 [CI] Framework and hardware-specific CI tests (#997)
* [WIP][CI] Framework and hardware-specific docker images for CI tests

* username

* fix cpu

* try out the image

* push latest

* update workspace

* no root isolation for actions

* add a flax image

* flax and onnx matrix

* fix runners

* add reports

* onnxruntime image

* retry tpu

* fix

* fix

* build onnxruntime

* naming

* onnxruntime-gpu image

* onnxruntime-gpu image, slow tests

* latest jax version

* trigger flax

* run flax tests in one thread

* fast flax tests on cpu

* fast flax tests on cpu

* trigger slow tests

* rebuild torch cuda

* force cuda provider

* fix onnxruntime tests

* trigger slow

* don't specify gpu for tpu

* optimize

* memory limit

* fix flax tests

* disable docker cache
2022-11-02 14:07:07 +01:00
Suraj Patil
b1ec61ee45 fix model card url in text inversion readme. (#1103)
Update README.md
2022-11-02 14:02:52 +01:00
Jonathan Rahn
0025626cd9 fix typo in examples dreambooth README.md (#1073)
Update README.md

fixed typo
2022-11-02 13:15:30 +01:00
Patrick von Platen
d53ffbbdf4 Rename latent (#1102)
* Rename latent

* uP
2022-11-02 11:59:00 +01:00
rafael
bdbcaa9852 lpw_stable_diffusion: Add is_cancelled_callback (#1053)
* [Community Pipelines] lpw_stable_diffusion: Add is_cancelled_callback

* [Community pipelines] lpw_stable_diffusion_onnx: Add is_cancelled_callback
2022-11-02 11:51:18 +01:00
Lewington-pitsos
8ee21915bf Integration tests precision improvement for inpainting (#1052)
* improve test precision

get tests passing with greater precision using lewington images

* make old numpy load function a wrapper around a more flexible numpy loading function

* adhere to black formatting

* add more black formatting

* adhere to isort

* loosen precision and replace path

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-11-02 11:47:26 +01:00
Suraj Patil
8608795711 [docs] add euler scheduler in docs, how to use different schedulers (#1089)
* add euler scheduler in docs

* add a section for how to use different scheds

* address patrck's comments
2022-11-02 11:32:46 +01:00
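The pattern the new docs section describes, sketched with the scheduler-swap API of later releases; `compatibles` lists which schedulers can stand in for the loaded one:

```python
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

print(pipe.scheduler.compatibles)  # schedulers that can be swapped in

# exchange the scheduler while keeping the trained config
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
```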
MatthieuTPHR
98c42134a5 Up to 2x speedup on GPUs using memory efficient attention (#532)
* 2x speedup using memory efficient attention

* remove einops dependency

* Swap K, M in op instantiation

* Simplify code, remove unnecessary maybe_init call and function, remove unused self.scale parameter

* make xformers a soft dependency

* remove one-liner functions

* change one letter variable to appropriate names

* Remove Env variable dependency, remove MemoryEfficientCrossAttention class and use enable_xformers_memory_efficient_attention method

* Add memory efficient attention toggle to img2img and inpaint pipelines

* Clearer management of xformers' availability

* update optimizations markdown to add info about memory efficient attention

* add benchmarks for TITAN RTX

* More detailed explanation of how the mem eff benchmark were ran

* Removing autocast from optimization markdown

* import_utils: import torch only if is available

Co-authored-by: Nouamane Tazi <nouamane98@gmail.com>
2022-11-02 10:29:06 +01:00
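Enabling the memory-efficient attention this PR introduces is a one-liner on the pipeline (requires the `xformers` package to be installed):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.enable_xformers_memory_efficient_attention()  # raises if xformers is missing

image = pipe("an astronaut riding a horse").images[0]
```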
MarkRich
a793b1fe7e Add imagic to community pipelines (#958)
* initial commit to add imagic to stable diffusion community pipelines

* remove some testing changes

* comments from PR review for imagic stable diffusion

* remove changes from pipeline_stable_diffusion as part of imagic pipeline

* clean up example code and add line back in to pipeline_stable_diffusion for imagic pipeline

* remove unused functions

* small code quality changes for imagic pipeline

* clean up readme

* remove hardcoded logging values for imagic community example

* undo change for DDIMScheduler
2022-11-01 11:17:51 +01:00
Laurent Mazare
7fb4b882b9 Remove some unused parameter in CrossAttnUpBlock2D (#1034)
Remove some unused parameter

The `downsample_padding` parameter does not seem to be used in `CrossAttnUpBlock2D` (or by any up block for that matter) so removing it.
2022-10-31 19:15:15 +01:00
Patrick von Platen
888468dd90 Remove nn sequential (#1086)
* Remove nn sequential

* up
2022-10-31 19:01:42 +01:00
Patrick von Platen
17c2c0600b [Tests] Fix slow tests (#1087) 2022-10-31 18:59:58 +01:00
Patrick von Platen
010bc4ea19 incorrect model id 2022-10-31 16:35:59 +00:00
Patrick von Platen
c18941b01a [Better scheduler docs] Improve usage examples of schedulers (#890)
* [Better scheduler docs] Improve usage examples of schedulers

* finish

* fix warnings and add test

* finish

* more replacements

* adapt fast tests hf token

* correct more

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Integrate compatibility with euler

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-10-31 17:26:30 +01:00
hlky
a1ea8c01c3 k-diffusion-euler (#1019)
* k-diffusion-euler

* make style make quality

* make fix-copies

* fix tests for euler a

* Update src/diffusers/schedulers/scheduling_euler_ancestral_discrete.py

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

* Update src/diffusers/schedulers/scheduling_euler_ancestral_discrete.py

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

* Update src/diffusers/schedulers/scheduling_euler_discrete.py

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

* Update src/diffusers/schedulers/scheduling_euler_discrete.py

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

* remove unused arg and method

* update doc

* quality

* make flake happy

* use logger instead of warn

* raise error instead of deprecation

* don't require scipy

* pass generator in step

* fix tests

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update tests/test_scheduler.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* remove unused generator

* pass generator as extra_step_kwargs

* update tests

* pass generator as kwarg

* pass generator as kwarg

* quality

* fix test for lms

* fix tests

Co-authored-by: patil-suraj <surajp815@gmail.com>
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-10-31 16:20:38 +01:00
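Since the ancestral Euler sampler draws fresh noise at every step (the `generator` threading discussed in this PR), a seeded generator is what makes runs reproducible. A sketch, assuming the later scheduler-swap API:

```python
import torch
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

# the generator is forwarded to scheduler.step(), seeding the per-step noise
generator = torch.Generator(device="cuda").manual_seed(0)
image = pipe("a fantasy landscape", generator=generator, num_inference_steps=30).images[0]
```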
Pedro Cuenca
bf7b0bc25b Allow safety_checker to be None when using CPU offload (#1078)
Allow None safety_checker when using CPU offload.
2022-10-31 15:03:33 +01:00
Patrick von Platen
e4d264e4eb [GitBot] Automatically close issues after inactivity (#1079)
* [GitBot] Automatically close issues after inactivity

* improve

* Add unstale

* typo

Co-authored-by: anton-l <anton@huggingface.co>
2022-10-31 14:06:03 +01:00
Anton Lozhkov
1606eb994a Fix pipelines user_agent, ignore CI requests (#1058)
* Fix pipelines user_agent, ignore CI requests

* fix circular import

* N/A versions

* N/A versions
2022-10-31 13:38:43 +01:00
Patrick von Platen
82d56cf192 Merge branch 'main' of https://github.com/huggingface/diffusers into main 2022-10-31 09:13:40 +00:00
Patrick von Platen
707b8684b3 fix slow test 2022-10-31 09:13:37 +00:00
Jonatan Kłosko
8e4fd686e0 Move safety detection to model call in Flax safety checker (#1023)
* Move safety detection to model call in Flax safety checker

* Update src/diffusers/pipelines/stable_diffusion/safety_checker_flax.py
2022-10-30 20:07:55 +01:00
Pedro Cuenca
95414bd6bf Experimental: allow fp16 in mps (#961)
* Docs: refer to pre-RC version of PyTorch 1.13.0.

* Remove temporary workaround for unavailable op.

* Update comment to make it less ambiguous.

* Remove use of contiguous in mps.

It appears to no longer be necessary.

* Special case: use einsum for much better performance in mps

* Update mps docs.

* MPS: make pipeline work in half precision.
2022-10-29 21:09:32 +02:00
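With this change, half precision works on Apple Silicon roughly like this (a sketch; the single-step warm-up pass follows the mps guidance in the docs):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("mps")

# recommended on mps: a one-step warm-up pass before the real generation
_ = pipe("warmup", num_inference_steps=1)

image = pipe("a photo of a corgi on the beach").images[0]
```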
Pedro Cuenca
a59f9990fc Tests: upgrade PyTorch cuda to 11.7 to fix examples tests. (#1048)
Tests: upgrade PyTorch cuda to 11.7.

Otherwise the cuda versions of torch and torchvision mismatch, and
examples tests fail. We were requesting cuda 11.6 for PyTorch, and the
default torchvision (via setup.py).

Another option would be to include torchvision in the same pip install
line as torch.
2022-10-29 20:27:00 +02:00
MarkRich
1fc208825d Add seed resizing to community pipelines (#1011)
* add seed resizing to community examples

* actually add the file responsible for seed resizing
2022-10-29 09:31:42 +02:00
Nathan Lambert
12fd0736dc clean incomplete pages (#1008) 2022-10-29 09:28:26 +02:00
Minwoo Byeon
fc0ca47456 Fix speedup ratio in fp16.mdx (#837) 2022-10-29 09:26:23 +02:00
Pedro Cuenca
6b185b6acd Update training and fine-tuning docs (#1020)
* Update training and fine-tuning docs.

* Update examples README.

* Update README.

* Add Flax fine-tuning section.

* Accept suggestion

Co-authored-by: Anton Lozhkov <anton@huggingface.co>

* Accept suggestion

Co-authored-by: Anton Lozhkov <anton@huggingface.co>

Co-authored-by: Anton Lozhkov <anton@huggingface.co>
2022-10-28 21:02:08 +02:00
Patrick von Platen
81b6fbf19d higher precision for vae 2022-10-28 18:19:06 +00:00
Patrick von Platen
a7ae808ee2 increase tolerance 2022-10-28 17:50:22 +00:00
Patrick von Platen
ea01a4c7f9 fix 2022-10-28 16:55:43 +00:00
Patrick von Platen
cbbb29398a hot fix 2022-10-28 16:55:21 +00:00
Patrick von Platen
d37f08da72 [Tests] no random latents anymore (#1045) 2022-10-28 18:52:25 +02:00
Patrick von Platen
c4ef1efe46 [Tests] Better prints (#1043) 2022-10-28 17:38:31 +02:00
Patrick von Platen
8d6487f3cb Fix some failing tests (#1041)
* up

* up

* up

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py

* Apply suggestions from code review
2022-10-28 17:05:00 +02:00
Patrick von Platen
d2d9764f35 [Tests] Speed up slow tests (#1040)
* [Tests] Speed up slow tests

* Up

* up
2022-10-28 14:46:39 +02:00
Patrick von Platen
a80480f0f2 [Tests] Improve unet / vae tests (#1018)
* improve tests

* up

* finish

* upload

* add init

* up

* finish vae

* finish

* reduce loading time with device_map

* remove device_map from CPU

* uP
2022-10-28 13:43:26 +02:00
Nouamane Tazi
ab079f27cf fix F.interpolate() for large batch sizes (#1006)
* fix `upsample_nearest_nhwc` for large bsz

* fix `upsample_nearest_nhwc` for large bsz
2022-10-28 11:25:21 +02:00
Duong A. Nguyen
1e07b6b334 [Flax SD finetune] Fix dtype (#1038)
fix jnp dtype
2022-10-28 11:21:34 +02:00
Anton Lozhkov
fb38bb1621 Support grayscale images in numpy_to_pil (#1025) 2022-10-27 22:44:35 +02:00
Pi Esposito
de00c63217 Document sequential CPU offload method on Stable Diffusion pipeline (#1024)
* document cpu offloading method

* address review comments

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-10-27 16:52:21 +02:00
Anton Lozhkov
a6314a8d4e Add --dataloader_num_workers to the DDPM training example (#1027) 2022-10-27 15:55:36 +02:00
Denis
939ec17e91 Probably nicer to specify dependency on tensorboard in the training example (#998)
tensorboard import in readme, otherwise accelerator.trackers[0] out of range

Co-authored-by: lukovnikov <lukovnikov@users.noreply.github.com>
2022-10-27 15:55:18 +02:00
Suraj Patil
eceeebdf91 Update train_dreambooth.py 2022-10-27 15:51:11 +02:00
Suraj Patil
52f2128dc6 update readme for flax examples (#1026) 2022-10-27 15:25:25 +02:00
Anton Lozhkov
fbcc383340 Deprecate init_git_repo, refactor train_unconditional.py (#1022)
Deprecate `init_git_repo` and `push_to_hub`, refactor `train_unconditional.py`
2022-10-27 15:16:59 +02:00
Duong A. Nguyen
90f91adb0e [Flax] Add DreamBooth (#1001)
* [Flax] Add DreamBooth

* fix sample rng

* style

* not reuse rng

* add dtype for mixed precision training

* Add Flax example
2022-10-27 14:25:04 +02:00
Duong A. Nguyen
4623f095f3 [DreamBooth] Set train mode for text encoder (#1012)
Set train mode for text encoder
2022-10-27 14:19:13 +02:00
Duong A. Nguyen
abe058221c [Flax] Add finetune Stable Diffusion (#999)
* [Flax] Add finetune Stable Diffusion

* temporary fix

* drop_last and seed

* add dtype for mixed precision training

* style

* Add Flax example
2022-10-27 14:08:21 +02:00
Patrick von Platen
3be9fa97d6 [Accelerate model loading] Fix meta device and super low memory usage (#1016)
* [Accelerate model loading] Fix meta device and super low memory usage

* better naming
2022-10-27 12:11:42 +02:00
Suraj Patil
e92a603cab fix dreambooth script. (#1017)
make input_args optional
2022-10-27 11:44:06 +02:00
Pedro Cuenca
1d04e1b4de Continuation of #942: additional float64 failure (#996)
* Add failing test for #940.

* Do not use torch.float64 in mps.

* style

* Temporarily skip add_noise for IPNDMScheduler.

Until #990 is addressed.

* Fix additional float64 error in mps.

* Improve add_noise test

* Slight edit – I think it's clearer this way.
2022-10-27 10:21:40 +02:00
Duong A. Nguyen
a23ad87d7a [Flax] Add Textual Inversion (#880)
* add textual inversion flax

* make style

* make style

* replicate vae and unet params

* make style

* minor

* save after end of training

* style

* Temporary fix

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Add Flax instruction

Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-10-26 22:28:55 +02:00
Brian Whicheloe
d3d22ce5a8 Small modification to enable usage by external scripts (#956)
* Make training code usable by external scripts

Add parameter inputs to training and argument parsing function to allow this script to be used by an external call.

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-10-26 18:46:56 +02:00
Simon Kirsten
8332c1a6d9 Enable multi-process DataLoader for dreambooth (#950) 2022-10-26 17:24:48 +02:00
Hu Ye
bd06dd023f [inpaint pipeline] fix bug for multiple prompts inputs (#959) 2022-10-26 16:41:57 +02:00
Pi Esposito
b2e2d1411c minimal stable diffusion GPU memory usage with accelerate hooks (#850)
* add method to enable cuda with minimal gpu usage to stable diffusion

* add test to minimal cuda memory usage

* ensure all models but unet are on torch.float32

* move to cpu_offload along with minor internal changes to make it work

* make it test against accelerate master branch

* coming back, it's official: I don't know how to make it test against the master branch from accelerate

* make it install accelerate from master on tests

* go back to accelerate>=0.11

* undo prettier formatting on yml files

* undo prettier formatting on yml files again
2022-10-26 15:52:57 +02:00
Julien Simon
2f0fcf4fa8 Add missing import (#979) 2022-10-26 15:45:39 +02:00
Yuta Hayashibe
cc436087d3 Fix typos (#978) 2022-10-26 15:32:47 +02:00
Hu Ye
d7d6841406 fix a bug in the new version (#957)
remove tensor_format in the new version
2022-10-26 14:26:17 +02:00
Patrick von Platen
d9cfe325a5 CompVis -> diffusers script - allow converting from merged checkpoint to either EMA or non-EMA (#991)
* improve script

* up
2022-10-26 12:32:07 +02:00
Pedro Cuenca
0343d8f531 Do not use torch.float64 on the mps device (#942)
* Add failing test for #940.

* Do not use torch.float64 in mps.

* style

* Temporarily skip add_noise for IPNDMScheduler.

Until #990 is addressed.
2022-10-26 11:56:43 +02:00
Yuta Hayashibe
4b9f58952a Add --pretrained_model_name_revision option to train_dreambooth.py (#933)
* Add --pretrained_model_name_revision option to train_dreambooth.py

* Renamed --pretrained_model_name_revision to --revision
2022-10-25 21:38:23 +02:00
Ella Charlaix
e2243de5f2 Fix typo in documentation title (#975) 2022-10-25 20:20:16 +02:00
Patrick von Platen
59f0ce82eb [Dance Diffusion] Better naming (#981)
uP
2022-10-25 19:52:41 +02:00
Patrick von Platen
365ff8f76d [Dance Diffusion] FP16 (#980)
* add in fp16

* up
2022-10-25 19:33:43 +02:00
Patrick von Platen
88fa6b7d68 [Dance Diffusion] Add dance diffusion (#803)
* start

* add more logic

* Update src/diffusers/models/unet_2d_condition_flax.py

* match weights

* up

* make model work

* making class more general, fixing missed file rename

* small fix

* make new conversion work

* up

* finalize conversion

* up

* first batch of variable renamings

* remove c and c_prev var names

* add mid and out block structure

* add pipeline

* up

* finish conversion

* finish

* upload

* more fixes

* Apply suggestions from code review

* add attr

* up

* uP

* up

* finish tests

* finish

* uP

* finish

* fix test

* up

* naming consistency in tests

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Nathan Lambert <nathan@huggingface.co>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>

* remove hardcoded 16

* Remove bogus

* fix some stuff

* finish

* improve logging

* docs

* upload

Co-authored-by: Nathan Lambert <nol@berkeley.edu>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Nathan Lambert <nathan@huggingface.co>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
2022-10-25 18:39:25 +02:00
SkyTNT
0b42b074b4 [Onnx] support half-precision and fix bugs for onnx pipelines (#932)
* [Onnx] support half-precision and fix bugs for onnx pipelines

* Update convert_stable_diffusion_checkpoint_to_onnx.py

* style

* fix has_nsfw_concept

* Update convert_stable_diffusion_checkpoint_to_onnx.py

* fix style
2022-10-25 16:48:53 +02:00
Pedro Cuenca
3d02c92187 mps changes for PyTorch 1.13 (#926)
* Docs: refer to pre-RC version of PyTorch 1.13.0.

* Remove temporary workaround for unavailable op.

* Update comment to make it less ambiguous.

* Remove use of contiguous in mps.

It appears to no longer be necessary.

* Special case: use einsum for much better performance in mps

* Update mps docs.

* Minor doc update.

* Accept suggestion

Co-authored-by: Anton Lozhkov <anton@huggingface.co>

Co-authored-by: Anton Lozhkov <anton@huggingface.co>
2022-10-25 16:41:51 +02:00
Anton Lozhkov
28b134e627 [Tests] Fix mps reproducibility issue when running with pytest-xdist (#976)
* [WIP] Debugging mps DDIM tests

* revert num_steps

* check warmup with a generator

* more warmup!

* remove xdist

* just use a single process
2022-10-25 15:28:08 +02:00
Kashif Rasul
240abddfbc [Flax] added broadcast_to_shape_from_left helper and Scheduler tests (#864)
* added broadcast_to_shape_from_left helper

* initial tests

* fixed pndm tests

* shape required for pndm

* added require_flax

* fix style

* fix more imports

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-10-25 13:43:24 +02:00
MarkRich
38ae5a25da Add Composable diffusion to community pipeline examples (#951)
* Initial composable diffusion pipeline

* add composable stable diffusion to readme table

* Update examples/community/README.md

* Apply suggestions from code review

* Update examples/community/README.md

* Update examples/community/README.md

* Update examples/community/README.md

* up

* Update examples/community/README.md

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-10-25 13:30:27 +02:00
Tanishq Abraham
6e099e2c8c add num_inference_steps arg to DDPM (#935) 2022-10-25 13:08:56 +02:00
Pedro Cuenca
82044153df Fix typo: torch_type -> torch_dtype (#972)
Fix typo: torch_type -> torch_dtype
2022-10-25 13:05:44 +02:00
Nathan Lambert
2fb8fafa4b add community pipeline docs; add minimal text to some empty doc pages (#930)
* add community pipeline docs

* fix style in code snippets (lol)

* clean up loading docs

* add license to doc files

* fix some weird links
2022-10-24 14:20:08 -07:00
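The loading mechanism those docs describe: `custom_pipeline` fetches the named file from the repository's `examples/community` folder and uses it as the pipeline class. The `one_step_unet` example is the tiny test pipeline added in #840:

```python
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "google/ddpm-cifar10-32", custom_pipeline="one_step_unet"
)
pipe()
```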
apolinario
8aac1f99d7 v1-5 docs updates (#921)
* Update README.md

Additionally add FLAX so the model card can be slimmer and point to this page

* Find and replace all

* v-1-5 -> v1-5

* revert test changes

* Update README.md

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update docs/source/quicktour.mdx

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update README.md

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update docs/source/quicktour.mdx

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update README.md

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Revert certain references to v1-5

* Docs changes

* Apply suggestions from code review

Co-authored-by: apolinario <joaopaulo.passos+multimodal@gmail.com>
Co-authored-by: anton-l <anton@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-10-24 22:50:23 +02:00
Anton Lozhkov
2c82e0c4eb Reorganize pipeline tests (#963)
* Reorganize pipeline tests

* fix vq
2022-10-24 16:34:01 +02:00
Chenguo Lin
2d35f6733a fix a small typo in pipeline_ddpm.py (#948)
one small typo in pipeline_ddpm.py

just a small typo in one comment
2022-10-24 11:18:32 +02:00
Kashif Rasul
9bca40296e [MPS] fix mps failing tests (#934)
fix mps failing tests
2022-10-22 09:33:40 +02:00
Shyam Sudhakaran
2fdd094c10 Wildcard stable diffusion pipeline (#900)
* Initial Wildcard Stable Diffusion Pipeline

* Added some additional example usage

* style

* Added links in README and additional documentation

* Initial Wildcard Stable Diffusion Pipeline

* Added some additional example usage

* style

* Added links in README and additional documentation

* cleanup readme again

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-10-21 17:43:19 +02:00
mkshing
31af4d17e8 Support LMSDiscreteScheduler in LDMPipeline (#891)
* Support LMSDiscreteScheduler in LDMPipeline

This is a small change to support all schedulers such as LMSDiscreteScheduler in LDMPipeline.

What's changed
-------
* Add the `scale_model_input` function before `step` to ensure correct denoising (L77)

* Add "scale the initial noise by the standard deviation required by the scheduler"

* run `make style`

Co-authored-by: Anton Lozhkov <anton@huggingface.co>
2022-10-21 15:38:09 +02:00
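The two additions this PR names (`scale_model_input` before `step`, and scaling the initial noise by the scheduler's standard deviation) are what make a sampling loop scheduler-agnostic. A minimal unconditional sketch; the checkpoint id is illustrative:

```python
import torch
from diffusers import UNet2DModel, LMSDiscreteScheduler

model = UNet2DModel.from_pretrained("google/ddpm-cifar10-32")
scheduler = LMSDiscreteScheduler()
scheduler.set_timesteps(50)

sample = torch.randn(
    1, model.config.in_channels, model.config.sample_size, model.config.sample_size
)
# scale the initial noise by the standard deviation required by the scheduler
sample = sample * scheduler.init_noise_sigma

for t in scheduler.timesteps:
    # a no-op for DDPM/DDIM-style schedulers, required for LMS/Euler-style ones
    model_input = scheduler.scale_model_input(sample, t)
    with torch.no_grad():
        noise_pred = model(model_input, t).sample
    sample = scheduler.step(noise_pred, t, sample).prev_sample
```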
Suraj Patil
dec18c8632 [Flax] dont warn for bf16 weights (#923)
dont warn for bf16 weights
2022-10-21 13:13:36 +02:00
Patrick von Platen
25dfd0f8dc [Tests] Move stable diffusion into their own files (#936)
* [Tests] Move stable diffusion into their own files

* up
2022-10-21 12:49:52 +02:00
Anton Lozhkov
32bf4fdc43 Introduce the copy mechanism (#924)
* Introduce the copy mechanism

* init tests

* fix dummy tests

* with

* update copies tests
2022-10-20 20:26:03 +02:00
Anton Lozhkov
cc36f2e7ff Bump the version to 0.7.0.dev0 (#912)
* Bump the version to 0.7.0.dev0

* deprecate offsets

* deprecate LMS timesteps

* LMS 0.7.0->0.8.0
2022-10-20 20:25:20 +02:00
SkyTNT
ba74a8be7a [Community Pipelines] Fix pad_tokens_and_weights in lpw_stable_diffusion (#925)
[Community Pipelines] fix pad_tokens_and_weights in lpw_stable_diffusion
2022-10-20 19:26:04 +02:00
Krishna Penukonda
6f6eef747c Fix Compatibility with Nvidia NGC Containers (#919)
Check if MPS backend is registered before calling is_available()
2022-10-20 19:23:42 +02:00
Suraj Patil
8be48507a0 fix test_components (#928) 2022-10-20 16:25:12 +02:00
Hanusz Leszek
4bf675f465 Dreambooth class image generation: using unique names to avoid overwriting existing image (#847)
* Add an underscore to filename if it already exists

* Use sha1sum hash instead of adding underscores
2022-10-20 15:56:15 +02:00
Suraj Patil
7674a36a34 [dreambooth] don't use safety check when generating prior images (#922)
don't use safety check when generating prior images
2022-10-20 13:52:11 +02:00
Mikail Duzenli
a5eb7f4293 [Examples] add speech to image pipeline example (#897)
* First draft

* created the SpeechToImagePipeline class

* Corrected speech_to_image_diffusion.py style

* Added safety checker

* Corrected style

* Adding examples to README
2022-10-20 13:47:13 +02:00
Hanusz Leszek
ce7d96681c DOC Dreambooth Add --sample_batch_size=1 to the 8 GB dreambooth example script (#829)
Add --sample_batch_size=1 to the 8 GB dreambooth script
2022-10-20 13:44:37 +02:00
Patrick von Platen
db19a9d9d7 [DiffusionPipeline.from_pretrained] add warning when passing unused k… (#870)
[DiffusionPipeline.from_pretrained] add warning when passing unused kwargs
2022-10-20 13:30:01 +02:00
Patrick von Platen
4a76e5d49b [PNDM Scheduler] Make sure list cannot grow forever (#882) 2022-10-20 13:29:04 +02:00
Patrick von Platen
83f8a5ff70 [Stable Diffusion] Add components function (#889)
* [Stable Diffusion] Add components function

* uP
2022-10-20 13:28:11 +02:00
SkyTNT
2a0c823527 [Community Pipelines] Long Prompt Weighting Stable Diffusion Pipelines (#907)
* [Community Pipelines] Long Prompt Weighting

* Update README.md

* fix

* style

* fix style

* Update examples/community/README.md

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-10-19 22:30:46 +02:00
anton-l
ad9d7ce476 Release: 0.6.0 2022-10-19 17:38:55 +02:00
Pedro Cuenca
8124863d1f Initial docs update for new in-painting pipeline (#910)
Docs update for new in-painting pipeline.

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-10-19 17:31:23 +02:00
Anton Lozhkov
89d124945a ONNX supervised inpainting (#906)
* ONNX supervised inpainting

* sync with the torch pipeline

* fix concat

* update ref values

* back to 8 steps

* type fix

* make fix-copies
2022-10-19 17:03:31 +02:00
Patrick von Platen
46557121e6 finish tests (#909) 2022-10-19 16:36:51 +02:00
Suraj Patil
b35d88c536 Stable diffusion inpainting. (#904)
* begin pipe

* add new pipeline

* add tests

* correct fast test

* up

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py

* Update tests/test_pipelines.py

* up

* up

* make style

* add fp16 test

* doc, comments

* up

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
2022-10-19 16:11:50 +02:00
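Basic usage of the new supervised inpainting pipeline; the checkpoint id is the one this pipeline was released alongside, and the file names are placeholders. In the mask, white pixels mark the region to repaint:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

image = Image.open("input.png").convert("RGB").resize((512, 512))
mask = Image.open("mask.png").convert("RGB").resize((512, 512))  # white = repaint

result = pipe(
    prompt="Face of a yellow cat, high resolution",
    image=image,
    mask_image=mask,
).images[0]
```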
Patrick von Platen
83b696e6c0 [Communit Pipeline] Make sure "mega" uses correct inpaint pipeline (#908) 2022-10-19 15:54:07 +02:00
Patrick von Platen
6ea83608ad [Stable Diffusion Inpainting] Deprecate inpainting pipeline in favor of official one (#903)
* finish

* up

* Apply suggestions from code review

Co-authored-by: Anton Lozhkov <anton@huggingface.co>

* Update src/diffusers/pipeline_utils.py

* Finish

Co-authored-by: Anton Lozhkov <anton@huggingface.co>
2022-10-19 12:55:37 +02:00
Patrick von Platen
bd216073fe make fix copies 2022-10-19 12:31:53 +02:00
Anton Lozhkov
8eb9d9703d Improve ONNX img2img numpy handling, temporarily fix the tests (#899)
* [WIP] Onnx img2img determinism

* more numpy + seed

* numpy inpainting, tolerance

* revert test workflow
2022-10-19 11:26:32 +02:00
Žilvinas Ledas
a9908ecfc1 Stable Diffusion image-to-image and inpaint using onnx. (#552)
* * Stable Diffusion img2img using onnx.

* * Stable Diffusion inpaint using onnx.

* Export vae_encoder, upgrade img2img, add test

* updated inpainting pipeline + test

* style

Co-authored-by: anton-l <anton@huggingface.co>
2022-10-18 17:44:01 +02:00
Suraj Patil
fbe807bf57 [dreambooth] allow fine-tuning text encoder (#883)
* allow fine-tuning text encoder

* fix a few things

* update readme
2022-10-18 17:28:51 +02:00
Hamish Friedlander
a3efa433ea Fix DDIM on Windows not using int64 for timesteps (#819) 2022-10-18 12:06:46 +02:00
Anton Lozhkov
728a3f3ec1 Rename StableDiffusionOnnxPipeline -> OnnxStableDiffusionPipeline (#887)
Rename and deprecate
2022-10-18 09:14:30 +02:00
Pedro Cuenca
100e094cc9 Fix autoencoder test (#886)
Fix autoencoder test.
2022-10-17 21:47:13 +02:00
Anton Lozhkov
cca59ce3a2 Add Apple M1 tests (#796)
* [CI] Add Apple M1 tests

* setup-python

* python build

* conda install

* remove branch

* only 3.8 is built for osx-arm

* try fetching prebuilt tokenizers

* use user cache

* update shells

* Reports and cleanup

* -> MPS

* Disable parallel tests

* Better naming

* investigate worker crash

* return xdist

* restart

* num_workers=2

* still crashing?

* faulthandler for segfaults

* faulthandler for segfaults

* remove restarts, stop on segfault

* torch version

* change installation order

* Use pre-RC version of PyTorch.

To be updated when it is released.

* Skip crashing test on MPS, add new one that works.

* Skip cuda tests in mps device.

* Actually use generator in test.

I think this was a typo.

* make style

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-10-17 20:27:30 +02:00
Nathan Raw
627ad6e8ea Rename frame filename in interpolation community example (#881)
🎨 rename frame filename
2022-10-17 20:08:58 +02:00
apolinario
fd26624f3b Add generic inference example to community pipeline readme (#874)
Update README.md
2022-10-17 17:16:50 +02:00
Nathan Raw
dff91ee9a9 Fix table in community README.md (#879)
Update README.md
2022-10-17 16:51:25 +02:00
Pedro Cuenca
4dce37432b Fix training push_to_hub (unconditional image generation): models were not saved before pushing to hub (#868)
Fix: models were not saved before pushing to hub.
2022-10-17 15:28:56 +02:00
Patrick von Platen
52e8fdb8ae Update README.md 2022-10-17 15:25:04 +02:00
Patrick von Platen
ed6c61c6a0 Fix small community pipeline import bug and finish README (#869)
* up

* Finish
2022-10-17 15:07:48 +02:00
Patrick von Platen
146419f741 All in one Stable Diffusion Pipeline (#821)
* uP

* correct

* make style

* small change
2022-10-17 14:37:25 +02:00
Patrick von Platen
ad0e9ac7f6 Update README.md 2022-10-17 14:21:44 +02:00
Nathan Raw
ee9875ee9b Add Stable Diffusion Interpolation Example (#862)
*  Add Stable Diffusion Interpolation Example

* 💄 style

* Update examples/community/interpolate_stable_diffusion.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-10-17 13:48:42 +02:00
Patrick von Platen
5b94450ec3 Update README.md 2022-10-17 13:41:13 +02:00
Patrick von Platen
765a446dee Update README.md 2022-10-17 13:34:15 +02:00
Patrick von Platen
2b7d4a5c21 [DeviceMap] Make sure stable diffusion can be loaded from older trans… (#860)
[DeviceMap] Make sure stable diffusion can be loaded from older transformers versions
2022-10-17 00:52:17 +02:00
camenduru
93a81a3f5a Fix Flax pipeline: width and height are ignored #838 (#848)
* Fix Flax pipeline: width and height are ignored #838

* Fix Flax pipeline: width and height are ignored
2022-10-14 21:43:56 +02:00
Anton Lozhkov
1d3234cbca Remove the last of ["sample"] (#842) 2022-10-14 14:45:43 +02:00
Anton Lozhkov
52394b53e2 Bump to 0.6.0.dev0 (#831)
* Bump to 0.6.0.dev0

* Deprecate tensor_format and .samples

* style

* upd

* upd

* style

* sample -> images

* Update src/diffusers/schedulers/scheduling_ddpm.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/schedulers/scheduling_ddim.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/schedulers/scheduling_karras_ve.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/schedulers/scheduling_lms_discrete.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/schedulers/scheduling_pndm.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/schedulers/scheduling_sde_ve.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/schedulers/scheduling_sde_vp.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-10-14 13:43:52 +02:00
Omar Sanseviero
b8c4d5801c Remove unneeded use_auth_token (#839) 2022-10-14 13:27:03 +02:00
Patrick von Platen
d3eb3b35be [Community] One step unet (#840) 2022-10-14 13:09:21 +02:00
Patrick von Platen
e48ca0f0a2 Release 0 5 1 (#833)
Patch Release: 0.5.1
2022-10-13 21:17:03 +02:00
Suraj Patil
effe9d66eb [FlaxStableDiffusionPipeline] fix bug when nsfw is detected (#832)
fix nsfw bug
2022-10-13 21:05:17 +02:00
Anton Lozhkov
0679d09083 Release: 5.0.0 (#830) 2022-10-13 18:48:50 +02:00
Patrick von Platen
1d51224403 [Flax] Complete tests (#828) 2022-10-13 18:18:32 +02:00
Patrick von Platen
7c2262640b Align PT and Flax API - allow loading checkpoint from PyTorch configs (#827)
* up

* finish

* add more tests

* up

* up

* finish
2022-10-13 17:43:06 +02:00
Pedro Cuenca
78db11dbf3 Flax safety checker (#825)
* Remove set_format in Flax pipeline.

* Remove DummyChecker.

* Run safety_checker in pipeline.

* Don't pmap on every call.

We could have decorated `generate` with `pmap`, but I wanted to keep it
in case someone wants to invoke it in non-parallel mode.

* Remove commented line

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Replicate outside __call__, prepare for optional jitting.

* Remove unnecessary clipping.

As suggested by @kashif.

* Do not jit unless requested.

* Send all args to generate.

* make style

* Remove unused imports.

* Fix docstring.

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-10-13 17:01:47 +02:00
Patrick von Platen
e713346ad1 Give more customizable options for safety checker (#815)
* Give more customizable options for safety checker

* Apply suggestions from code review

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py

* Finish

* make style

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* up

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-10-13 15:52:26 +02:00
Anton Lozhkov
26c7df5d82 Fix type mismatch error, add tests for negative prompts (#823) 2022-10-13 15:45:42 +02:00
Anton Lozhkov
e001fededf Fix dreambooth loss type with prior_preservation and fp16 (#826)
Fix dreambooth loss type with prior preservation
2022-10-13 15:41:19 +02:00
Suraj Patil
0a09af2f0a update flax scheduler API (#822)
* update flax scheduler API

* remove set_format

* fix call to scale_model_input

* update flax pndm

* use int32

* update docstr
2022-10-13 15:40:01 +02:00
Patrick von Platen
f1d4289be8 [Flax] Add test (#824) 2022-10-13 13:55:39 +02:00
Anton Lozhkov
323a9e1f6d Add diffusers version and pipeline class to the Hub UA (#814)
* Add diffusers version and pipeline class to the Hub UA

* Fallback to class name for pipelines

* Update src/diffusers/modeling_utils.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/modeling_flax_utils.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Remove autoclass

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-10-12 21:54:40 +02:00
pink-red
60c384bcd2 Fix fine-tuning compatibility with deepspeed (#816) 2022-10-12 21:43:37 +02:00
Suraj Patil
008b608f15 [train_text2image] Fix EMA and make it compatible with deepspeed. (#813)
* fix ema

* style

* add comment about copy

* style

* quality
2022-10-12 19:13:22 +02:00
Nathan Lambert
5afc2b60cd add or fix license formatting in models directory (#808)
* add or fix license formatting

* fix quality
2022-10-12 08:19:35 -07:00
anton-l
96598639c0 Revert an accidental commit
This reverts commit 679c77f8ea.
2022-10-12 17:20:44 +02:00
anton-l
80be0744a6 Merge remote-tracking branch 'origin/main' 2022-10-12 17:18:42 +02:00
anton-l
679c77f8ea Add diffusers version and pipeline class to the Hub UA 2022-10-12 17:18:32 +02:00
Patrick von Platen
db47b1e4d9 [Dummy imports] Better error message (#795)
* [Dummy imports] Better error message

* Test: load pipeline with LMS scheduler.

Fails with a cryptic message if scipy is not installed.

* Correct

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-10-12 14:41:16 +02:00
Anton Lozhkov
966e2fc461 Minor package fixes (#809) 2022-10-12 13:22:51 +02:00
Patrick von Platen
6bc11782b7 [Img2Img] Fix batch size mismatch prompts vs. init images (#793)
* [Img2Img] Fix batch size mismatch prompts vs. init images

* Remove bogus folder

* fix

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-10-12 13:00:36 +02:00
Patrick von Platen
c1b6ea3dce Update img2img.mdx 2022-10-12 00:52:30 +02:00
Pedro Cuenca
24b8b5cf5e mps: Alternative implementation for repeat_interleave (#766)
* mps: alt. implementation for repeat_interleave

* style

* Bump mps version of PyTorch in the documentation.

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Simplify: do not check for device.

* style

* Fix repeat dimensions:

- The unconditional embeddings are always created from a single prompt.
- I was shadowing the batch_size var.

* Split long lines as suggested by Suraj.

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-10-11 20:30:09 +02:00
Omar Sanseviero
757babfcad Fix indentation in the code example (#802)
Update custom_pipelines.mdx
2022-10-11 20:26:52 +02:00
spezialspezial
e895952816 Eventually preserve this typo? :) (#804) 2022-10-11 20:06:24 +02:00
Akash Pannu
a124204490 Flax: Trickle down norm_num_groups (#789)
* pass norm_num_groups param and add tests

* set resnet_groups for FlaxUNetMidBlock2D

* fixed docstrings

* fixed typo

* using is_flax_available util and created require_flax decorator
2022-10-11 20:05:10 +02:00
Suraj Patil
66a5279a94 stable diffusion fine-tuning (#356)
* begin text2image script

* loading the datasets, preprocessing & transforms

* handle input features correctly

* add gradient checkpointing support

* fix output names

* run unet in train mode not text encoder

* use no_grad instead of freezing params

* default max steps None

* pad to longest

* don't pad when tokenizing

* fix encode on multi gpu

* fix stupid bug

* add random flip

* add ema

* fix ema

* put ema on cpu

* improve EMA model

* contiguous_format

* don't wrap vae and text encoder in accelerate

* remove no_grad

* use randn_like

* fix resize

* improve few things

* log epoch loss

* set log level

* don't log each step

* remove max_length from collate

* style

* add report_to option

* make scale_lr false by default

* add grad clipping

* add an option to use 8bit adam

* fix logging in multi-gpu, log every step

* more comments

* remove eval for now

* address review comments

* add requirements file

* begin readme

* begin readme

* fix typo

* fix push to hub

* populate readme

* update readme

* remove use_auth_token from the script

* address some review comments

* better mixed precision support

* remove redundant to

* create ema model early

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* better description for train_data_dir

* add diffusers in requirements

* update dataset_name_mapping

* update readme

* add inference example

Co-authored-by: anton-l <anton@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-10-11 19:03:39 +02:00
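For context, a checkpoint produced by this fine-tuning script loads through the usual pipeline API; a minimal sketch, where "path/to/fine-tuned-model" is a placeholder for the training run's output_dir:

```python
import torch

from diffusers import StableDiffusionPipeline

# Load the fine-tuned weights; half precision is optional but typical on GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "path/to/fine-tuned-model", torch_dtype=torch.float16
).to("cuda")
image = pipe(prompt="yoda").images[0]
image.save("yoda.png")
```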
Suraj Patil
797b290ed0 support bf16 for stable diffusion (#792)
* support bf16 for stable diffusion

* fix typo

* address review comments
2022-10-11 12:02:12 +02:00
Henrik Forstén
81bdbb5e2a DreamBooth DeepSpeed support for under 8 GB VRAM training (#735)
* Support deepspeed

* Dreambooth DeepSpeed documentation

* Remove unnecessary casts, documentation

Due to recent commits, some casts to half precision are no longer
necessary.

Mention that DeepSpeed's version of Adam is about 2x faster.

* Review comments
2022-10-10 21:29:27 +02:00
Nathan Lambert
71ca10c6a4 fix typo docstring in unet2d (#798)
fix typo docstring
2022-10-10 11:25:20 -07:00
Patrick von Platen
22963ed826 Fix gradient checkpointing test (#797)
* Fix gradient checkpointing test

* more tests
2022-10-10 19:40:33 +02:00
Patrick von Platen
fab17528da [Low CPU memory] + device map (#772)
* add accelerate to load models with smaller memory footprint

* remove low_cpu_mem_usage as it is redundant

* move accelerate init weights context to modelling utils

* add test to ensure results are the same when loading with accelerate

* add tests to ensure ram usage gets lower when using accelerate

* move accelerate logic to single snippet under modelling utils and remove it from configuration utils

* format code using to pass quality check

* fix imports with isort

* add accelerate to test extra deps

* only import accelerate if device_map is set to auto

* move accelerate availability check to diffusers import utils

* format code

* add device map to pipeline abstraction

* lint it to pass PR quality check

* fix class check to use accelerate when using diffusers ModelMixin subclasses

* use low_cpu_mem_usage in transformers if device_map is not available

* NoModuleLayer

* comment out tests

* up

* uP

* finish

* Update src/diffusers/pipelines/stable_diffusion/safety_checker.py

* finish

* uP

* make style

Co-authored-by: Pi Esposito <piero.skywalker@gmail.com>
2022-10-10 18:05:49 +02:00
Nathan Lambert
feaa73243d add sigmoid betas (#777)
* add sigmoid betas

* convert to torch

* add comment on source
2022-10-10 08:28:10 -07:00
Nathan Lambert
a73f8b7251 Clean up resnet.py file (#780)
* clean up resnet.py

* make style and quality

* minor formatting
2022-10-10 08:27:50 -07:00
lowinli
5af6eed9ee debug an exception (#638)
* debug an exception

if dst_path is not a file, src_path.samefile raises an exception:
FileNotFoundError: [Errno 2] No such file or directory: '/home/lilongwei/notebook/onnx_diffusion/vae_decoder/model.onnx'

* Update src/diffusers/onnx_utils.py

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
2022-10-10 13:02:18 +02:00
Patrick von Platen
f3983d16ee [Tests] Fix tests (#774)
* Fix tests

* remove bogus file
2022-10-07 19:38:40 +02:00
Suraj Patil
92d7086366 [img2img, inpainting] fix fp16 inference (#769)
* handle dtype in vae and image2image pipeline

* fix inpaint in fp16

* dtype should be handled in add_noise

* style

* address review comments

* add simple fast tests to check fp16

* fix test name

* put mask in fp16
2022-10-07 17:01:51 +02:00
Suraj Patil
ec831b6a72 [schedulers] handle dtype in add_noise (#767)
* handle dtype in vae and image2image pipeline

* handle dtype in add noise

* don't modify vae and pipeline

* remove the if
2022-10-07 16:44:19 +02:00
Kevin Turner
cb0bf0bd0b fix(DDIM scheduler): use correct dtype for noise (#742)
Otherwise, it crashes when eta > 0 with float16.
2022-10-07 16:02:32 +02:00
James R T
e0fece2b26 Add final latent slice checks to SD pipeline intermediate state tests (#731)
This is to ensure that the final latent slices stay somewhat consistent as more changes are introduced into the library.

Signed-off-by: James R T <jamestiotio@gmail.com>

Signed-off-by: James R T <jamestiotio@gmail.com>
2022-10-07 15:50:20 +02:00
Justin Chu
75bb6d2d46 Fix ONNX conversion script opset argument type (#739)
The opset argument should be an `int` but was set as a `str`.
2022-10-07 15:47:43 +02:00
YaYaB
906e4105d7 Fix push_to_hub for dreambooth and textual_inversion (#748)
* Fix push_to_hub for dreambooth and textual_inversion

* Use repo.push_to_hub instead of push_to_hub
2022-10-07 11:50:28 +02:00
Patrick von Platen
7258dc4943 remove bogus folder no.2 2022-10-07 11:21:24 +02:00
Patrick von Platen
c93a8cc901 remove bogus folder 2022-10-07 11:20:26 +02:00
Patrick von Platen
9a95414ea1 Bump to v0.5.0dev0 2022-10-07 11:17:55 +02:00
Patrick von Platen
91ddd2a25b Release: v0.4.1 2022-10-07 10:37:31 +02:00
apolinario
fdfa7c8f15 Change fp16 error to warning (#764)
* Swap fp16 error to warning

Also remove the associated test

* Formatting

* warn -> warning

* Update src/diffusers/pipeline_utils.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* make style

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-10-07 10:31:52 +02:00
anton-l
d3f1a4c0f0 Revert "Bump to v0.5.0.dev0"
This reverts commit 9531150128.
2022-10-06 20:42:14 +02:00
Patrick von Platen
ae672d58ef [Tests] Lower required memory for clip guided and fix super edge-case git pipeline module bug (#754)
* [Tests] Lower required memory

* fix

* up

* uP
2022-10-06 19:15:26 +02:00
anton-l
2fa55fc7d4 Merge remote-tracking branch 'origin/main' 2022-10-06 19:12:21 +02:00
anton-l
9531150128 Bump to v0.5.0.dev0 2022-10-06 19:12:01 +02:00
Suraj Patil
737195dd2e Created using Colaboratory 2022-10-06 19:08:00 +02:00
Suraj Patil
435433cefd Update clip_guided_stable_diffusion.py 2022-10-06 18:38:09 +02:00
anton-l
970e30606c Revert "[v0.4.0] Temporarily remove Flax modules from the public API (#755)"
This reverts commit 2e209c30cf.
2022-10-06 18:35:40 +02:00
anton-l
c15cda03ca Bump to v0.4.1.dev0 2022-10-06 18:34:59 +02:00
anton-l
0fe59b679e Merge remote-tracking branch 'origin/main' 2022-10-06 18:22:08 +02:00
anton-l
3b1d2ca1eb Release: v0.4.0 2022-10-06 18:21:57 +02:00
Suraj Patil
4581f147a6 Update clip_guided_stable_diffusion.py 2022-10-06 18:12:54 +02:00
Anton Lozhkov
2e209c30cf [v0.4.0] Temporarily remove Flax modules from the public API (#755)
Temporarily remove Flax modules from the public API
2022-10-06 18:10:36 +02:00
Patrick von Platen
9c9462f388 Python 3.7 doesn't like keys() + keys() 2022-10-06 17:43:40 +02:00
Patrick von Platen
6613a8c7ff make CI happy 2022-10-06 17:16:01 +02:00
Patrick von Platen
d9c449ea30 Custom Pipelines (#744)
* [Custom Pipelines]

* uP

* make style

* finish

* finish

* remove ipdb

* upload

* fix

* finish docs

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: apolinario <joaopaulo.passos@gmail.com>

* finish

* final uploads

* remove unnecessary test

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: apolinario <joaopaulo.passos@gmail.com>
2022-10-06 16:54:02 +02:00
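A minimal sketch of the custom-pipeline loading this commit introduces, assuming the toy `one_step_unet` community pipeline referenced in the accompanying docs (model id and pipeline name are illustrative):

```python
from diffusers import DiffusionPipeline

# custom_pipeline selects a community pipeline to drive the loaded weights.
pipeline = DiffusionPipeline.from_pretrained(
    "google/ddpm-cifar10-32", custom_pipeline="one_step_unet"
)
output = pipeline()
```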
Suraj Patil
f3128c8788 Actually fix the grad ckpt test (#734)
* use_deterministic_algorithms for grad ckpt test

* remove eval

* Apply suggestions from code review

* Update tests/test_models_unet.py
2022-10-06 16:04:00 +02:00
Anton Lozhkov
088396824d Better steps deprecation for LMS (#753)
* Better steps deprecation for LMS

* upd
2022-10-06 15:51:25 +02:00
Anton Lozhkov
6c64741933 Raise an error when moving an fp16 pipeline to CPU (#749)
* Raise an error when moving an fp16 pipeline to CPU

* Raise an error when moving an fp16 pipeline to CPU

* style

* Update src/diffusers/pipeline_utils.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update src/diffusers/pipeline_utils.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Improve the message

* cuda

* Update tests/test_pipelines.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-10-06 15:51:03 +02:00
Suraj Patil
3383f77441 update the clip guided PR according to the new API (#751) 2022-10-06 15:43:48 +02:00
Anton Lozhkov
df9c070174 Add back-compatibility to LMS timesteps (#750)
* Add back-compatibility to LMS timesteps

* style
2022-10-06 14:43:55 +02:00
Suraj Patil
c119dc4c04 allow multiple generations per prompt (#741)
* compute text embeds per prompt

* don't repeat uncond prompts

* repeat separately

* update image2image

* fix repeat uncond embeds

* adapt inpaint pipeline

* fix uncond tokens in img2img

* add tests and fix uncond embeds in img2img and inpaint pipe
2022-10-06 14:01:45 +02:00
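A minimal usage sketch of the batching this PR enables, assuming the feature is exposed as the `num_images_per_prompt` call argument:

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4").to("cuda")
# One prompt, several images from a single call.
images = pipe(
    "a photo of an astronaut riding a horse",
    num_images_per_prompt=4,
).images
assert len(images) == 4
```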
Suraj Patil
367a671a06 remove use_auth_token from TI test (#747)
remove auth token from TI test
2022-10-06 11:13:24 +02:00
Patrick von Platen
916754ea5e make style 2022-10-06 00:51:11 +02:00
Patrick von Platen
4deb16e830 [Docs] Advertise fp16 instead of autocast (#740)
up
2022-10-05 22:20:53 +02:00
Pedro Cuenca
5493524b71 Replace messages that have empty backquotes (#738)
Replace message with empty backquotes.

This was part of #733, I was too slow to review :)
2022-10-05 20:16:30 +02:00
Suraj Patil
19e559d5e9 remove use_auth_token from remaining places (#737)
remove use_auth_token
2022-10-05 17:40:49 +02:00
Patrick von Platen
78744b6a8f No more use_auth_token=True (#733)
* up

* uP

* uP

* make style

* Apply suggestions from code review

* up

* finish
2022-10-05 17:16:15 +02:00
Nicolas Patry
3dcc75cbd4 Removing autocast for 25-35% speedup. (autocast considered harmful). (#511)
* Removing `autocast` for a `25-35% speedup`.

* iQuality

* Adding a slow test.

* Fixing mps noise generation.

* Raising error on wrong device, instead of just casting on behalf of user.

* Quality.

* fix merge

Co-authored-by: Nouamane Tazi <nouamane98@gmail.com>
2022-10-05 15:33:13 +02:00
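Instead of wrapping inference in `torch.autocast`, weights are loaded directly in half precision; a minimal sketch (the model id and `fp16` revision are illustrative of the era's convention):

```python
import torch

from diffusers import StableDiffusionPipeline

# Half-precision weights, no autocast context needed.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", revision="fp16", torch_dtype=torch.float16
).to("cuda")
image = pipe("a red panda riding a bicycle").images[0]
```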
Anton Lozhkov
6b09f370c4 [Scheduler design] The pragmatic approach (#719)
* init

* improve add_noise

* [debug start] run slow test

* [debug end]

* quick revert

* Add docstrings and warnings + API tests

* Make the warning less spammy
2022-10-05 14:41:19 +02:00
Kashif Rasul
726aba089d [Pytorch] pytorch only timesteps (#724)
* pytorch timesteps

* style

* get rid of if-else

* fix test

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-10-05 12:55:51 +02:00
Yuta Hayashibe
60c9634a5e Avoid negative strides for tensors (#717)
* Avoid negative strides for tensors

* Changed to not create a torch.tensor

* Removed a needless copy
2022-10-05 12:45:01 +02:00
Kane Wallmann
b9eea06e9f Include CLIPTextModel parameters in conversion (#695) 2022-10-05 12:22:07 +02:00
Pierre LeMoine
08d4fb6e9f [dreambooth] Using already created Path in dataset (#681)
using already created `Path` in dataset
2022-10-05 12:14:30 +02:00
Patrick von Platen
a8a3a20d36 [Tests] Add accelerate to testing (#729)
* fix accelerate for testing

* fix copies

* uP
2022-10-05 11:35:02 +02:00
NIKHIL A V
7265dd8cc8 renamed x to meaningful variable in resnet.py (#677)
* renamed single letter variables

* renamed x to meaningful variable in resnet.py

Hello @patil-suraj, can you verify it?
Thanks

* Reformatted using black

* renamed x to meaningful variable in resnet.py

Hello @patil-suraj, can you verify it?
Thanks

* reformatted the files

* fixed UnboundLocalError at line 374

* removed 'referenced before assignment' error

* renamed single-letter variables x -> hidden_state, p -> pad_value

Co-authored-by: Nikhil A V <nikhilav@Nikhils-MacBook-Pro.local>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-10-04 23:52:24 +02:00
Suraj Patil
14b9754923 [train_unconditional] fix applying clip_grad_norm_ (#721)
fix clip_grad_norm_
2022-10-04 19:04:05 +02:00
Pedro Cuenca
6b221920d7 Remove comments no longer appropriate (#716)
Remove comments no longer appropriate.

There were casting operations before, they are now gone.
2022-10-04 17:00:09 +02:00
Pedro Cuenca
215bb40882 Fix import if PyTorch is not installed (#715)
* Fix import if PyTorch is not installed.

* Style (blank line)
2022-10-04 16:59:49 +02:00
Yuta Hayashibe
5ac1f61cde Add an argument "negative_prompt" (#549)
* Add an argument "negative_prompt"

* Fix argument order

* Fix to use TypeError instead of ValueError

* Removed needless batch_size multiplying

* Fix to multiply by batch_size

* Add truncation=True for long negative prompt

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_onnx.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_onnx.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Fix styles

* Renamed ucond_tokens to uncond_tokens

* Added description about "negative_prompt"

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-10-04 16:55:38 +02:00
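A minimal sketch of the `negative_prompt` argument introduced here; model id and prompts are illustrative:

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4").to("cuda")
# Concepts listed in negative_prompt are steered away from during guidance.
image = pipe(
    prompt="portrait photo of a warrior",
    negative_prompt="blurry, low quality, extra limbs",
).images[0]
```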
Yuta Hayashibe
7e92c5bc73 Fix typos (#718)
* Fix typos

* Update examples/dreambooth/train_dreambooth.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-10-04 15:22:14 +02:00
Pi Esposito
4d1cce2fd0 add accelerate to load models with smaller memory footprint (#361)
* add accelerate to load models with smaller memory footprint

* remove low_cpu_mem_usage as it is redundant

* move accelerate init weights context to modelling utils

* add test to ensure results are the same when loading with accelerate

* add tests to ensure ram usage gets lower when using accelerate

* move accelerate logic to single snippet under modelling utils and remove it from configuration utils

* format code using to pass quality check

* fix imports with isort

* add accelerate to test extra deps

* only import accelerate if device_map is set to auto

* move accelerate availability check to diffusers import utils

* format code

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-10-04 15:21:40 +02:00
Tanishq Abraham
09859a3cd0 Update schedulers README.md (#694)
* Update links in schedulers README.md

* Update src/diffusers/schedulers/README.md

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-10-04 15:06:21 +02:00
Kashif Rasul
f1b9ee7ed9 [Docs] fix docstring for issue #709 (#710)
fix docstring

fixes #709
2022-10-04 15:06:11 +02:00
Josh Achiam
4ff4d4db12 Checkpoint conversion script from Diffusers => Stable Diffusion (CompVis) (#701)
* Conversion script

* ran black

* ran isort

* remove unused import

* map location so everything gets loaded onto CPU before conversion

* ran black again

* Update setup.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-10-04 13:33:38 +02:00
Patrick von Platen
f1484b81b0 [Utils] Add deprecate function and move testing_utils under utils (#659)
* [Utils] Add deprecate function

* up

* up

* uP

* up

* up

* up

* up

* uP

* up

* fix

* up

* move to deprecation utils file

* fix

* fix

* fix more
2022-10-03 23:44:24 +02:00
Anton Lozhkov
1070e1a38a [CI] Speed up slow tests (#708)
* [CI] Localize the HF cache

* pip cache

* de-env

* refactor matrix

* fix fast cache

* less onnx steps

* revert

* revert pip cache

* revert pip cache

* remove debugging trigger
2022-10-03 22:16:23 +02:00
Patrick von Platen
b35bac4d3b [Support PyTorch 1.8] Remove inference mode (#707) 2022-10-03 22:14:58 +02:00
Pedro Cuenca
688031c592 Fix import with Flax but without PyTorch (#688)
* Don't use `load_state_dict` if torch is not installed.

* Define `SchedulerOutput` to use torch or flax arrays.

* Don't import LMSDiscreteScheduler without torch.

* Create distinct FlaxSchedulerOutput.

* Additional changes required for FlaxSchedulerMixin

* Do not import torch pipelines in Flax.

* Revert "Define `SchedulerOutput` to use torch or flax arrays."

This reverts commit f653140134.

* Prefix Flax scheduler outputs for consistency.

* make style

* FlaxSchedulerOutput is now a dataclass.

* Don't use f-string without placeholders.

* Add blank line.

* Style (docstrings)
2022-10-03 16:23:45 +02:00
Krishna Penukonda
7d0ba5921b Fix type annotations on StableDiffusionPipeline.__call__ (#682)
Fixed type annotations on StableDiffusionPipeline::__call__
2022-10-03 15:38:24 +02:00
Pedro Cuenca
249b36cc38 Flax: add shape argument to set_timesteps (#690)
* Flax: add shape argument to set_timesteps

* style
2022-10-03 15:07:09 +02:00
Patrick von Platen
500ca5a907 Forgot to add the OG! 2022-10-03 13:15:07 +02:00
Suraj Patil
14f4af8f5b [dreambooth] fix applying clip_grad_norm_ (#686)
fix applying clip grad norm
2022-10-03 10:54:01 +02:00
James R T
2558977bc7 Add callback parameters for Stable Diffusion pipelines (#521)
* Add callback parameters for Stable Diffusion pipelines

Signed-off-by: James R T <jamestiotio@gmail.com>

* Lint code with `black --preview`

Signed-off-by: James R T <jamestiotio@gmail.com>

* Refactor callback implementation for Stable Diffusion pipelines

* Fix missing imports

Signed-off-by: James R T <jamestiotio@gmail.com>

* Fix documentation format

Signed-off-by: James R T <jamestiotio@gmail.com>

* Add kwargs parameter to standardize with other pipelines

Signed-off-by: James R T <jamestiotio@gmail.com>

* Modify Stable Diffusion pipeline callback parameters

Signed-off-by: James R T <jamestiotio@gmail.com>

* Remove useless imports

Signed-off-by: James R T <jamestiotio@gmail.com>

* Change types for timestep and onnx latents

* Fix docstring style

* Return decode_latents and run_safety_checker back into __call__

* Remove unused imports

* Add intermediate state tests for Stable Diffusion pipelines

Signed-off-by: James R T <jamestiotio@gmail.com>

* Fix intermediate state tests for Stable Diffusion pipelines

Signed-off-by: James R T <jamestiotio@gmail.com>

Signed-off-by: James R T <jamestiotio@gmail.com>
2022-10-02 19:56:36 +02:00
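A minimal sketch of the callback hook this PR adds, assuming the `callback(step, timestep, latents)` signature and the `callback_steps` interval it describes:

```python
import torch

from diffusers import StableDiffusionPipeline

def log_progress(step: int, timestep: int, latents: torch.FloatTensor):
    # Inspect intermediate latents as denoising proceeds.
    print(f"step={step} timestep={timestep} latents_mean={latents.mean().item():.4f}")

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4").to("cuda")
image = pipe("a castle at dusk", callback=log_progress, callback_steps=5).images[0]
```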
Omar Sanseviero
5156acc476 Fix BibText citation (#693)
* Fix BibText citation

* Update README.md
2022-10-01 10:15:32 +02:00
Nouamane Tazi
b2cfc7a04c Fix slow tests (#689)
* revert using baddbmm in attention
- to fix `test_stable_diffusion_memory_chunking` test

* styling
2022-09-30 18:45:02 +02:00
Patrick von Platen
552b967020 Update README.md 2022-09-30 16:37:13 +02:00
Patrick von Platen
bb0f2a0f54 Update README.md 2022-09-30 16:35:55 +02:00
Nouamane Tazi
daa22050c7 [docs] fix table in fp16.mdx (#683) 2022-09-30 15:15:22 +02:00
Ryan Russell
877bec8a91 refactor: update ldm-bert config.json url closes #675 (#680)
refactor: update ldm-bert `config.json` url

Signed-off-by: Ryan Russell <git@ryanrussell.org>

Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-09-30 11:52:14 +02:00
Josh Achiam
a784be2ebe Allow resolutions that are not multiples of 64 (#505)
* Allow resolutions that are not multiples of 64

* ran black

* fix bug

* add test

* more explanation

* more comments

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-09-30 09:54:40 +02:00
Nouamane Tazi
9ebaea545f Optimize Stable Diffusion (#371)
* initial commit

* make UNet stream capturable

* try to fix noise_pred value

* remove cuda graph and keep NB

* non blocking unet with PNDMScheduler

* make timesteps np arrays for pndm scheduler
because lists don't get formatted to tensors in `self.set_format`

* make max async in pndm

* use channel last format in unet

* avoid moving timesteps to device in each unet call

* avoid memcpy op in `get_timestep_embedding`

* add `channels_last` kwarg to `DiffusionPipeline.from_pretrained`

* update TODO

* replace `channels_last` kwarg with `memory_format` for more generality

* revert the channels_last changes to leave it for another PR

* remove non_blocking when moving input ids to device

* remove blocking from all .to() operations at beginning of pipeline

* fix merging

* fix merging

* model can run in other precisions without autocast

* attn refactoring

* Revert "attn refactoring"

This reverts commit 0c70c0e189.

* remove restriction to run conv_norm in fp32

* use `baddbmm` instead of `matmul` in attention for better perf

* removing all reshapes to test perf

* Revert "removing all reshapes to test perf"

This reverts commit 006ccb8a8c.

* add shapes comments

* hardcode what's needed for jitting

* Revert "hardcore whats needed for jitting"

This reverts commit 2fa9c698ea.

* Revert "remove restriction to run conv_norm in fp32"

This reverts commit cec592890c.

* revert using baddmm in attention's forward

* cleanup comment

* remove restriction to run conv_norm in fp32. no quality loss was noticed

This reverts commit cc9bc1339c.

* add more optimizations techniques to docs

* Revert "add shapes comments"

This reverts commit 31c58eadb8.

* apply suggestions

* make quality

* apply suggestions

* styling

* `scheduler.timesteps` are now arrays so we don't need .to()

* remove useless .type()

* use mean instead of max in `test_stable_diffusion_inpaint_pipeline_k_lms`

* move scheduler timestamps to correct device if tensors

* add device to `set_timesteps` in LMSD scheduler

* `self.scheduler.set_timesteps` now uses device arg for schedulers that accept it

* quick fix

* styling

* remove kwargs from schedulers `set_timesteps`

* revert to using max in K-LMS inpaint pipeline test

* Revert "`self.scheduler.set_timesteps` now uses device arg for schedulers that accept it"

This reverts commit 00d5a51e5c.

* move timesteps to correct device before loop in SD pipeline

* apply previous fix to other SD pipelines

* UNet now accepts tensor timesteps even on wrong device, to avoid errors
- it shouldn't affect performance if timesteps are already on the correct device
- it does slow down performance if they're on the wrong device

* fix pipeline when timesteps are arrays with strides
2022-09-30 09:49:13 +02:00
Partho
a7058f42e1 Renamed x -> hidden_states in resnet.py (#676)
renamed x to hidden_states
2022-09-29 21:19:09 +02:00
V Vishnu Anirudh
3dacbb94ca trained_betas ignored in some schedulers (#635)
* correcting the beta value assignment

* updating DDIM and LMSDiscreteFlax schedulers

* bringing back the changes that were lost as part of main branch merge
2022-09-29 19:21:04 +02:00
Pedro Cuenca
f10576ad5c Flax from_pretrained: clean up mismatched_keys. (#630)
Flax from_pretrained: clean up `mismatched_keys`.

Originally removed in 73e0bc692c.
2022-09-29 16:06:19 +02:00
Suraj Patil
84b9df57a7 [gradient checkpointing] lower tolerance for test (#652)
* lower tolerance

* put model in eval mode
2022-09-29 11:57:37 +02:00
Suraj Patil
210be4fe71 [examples] update transformers version (#665)
update transformers version in example
2022-09-29 11:16:28 +02:00
Tanishq Abraham
f5b9bc8b49 Update index.mdx (#670) 2022-09-29 09:17:52 +02:00
Suraj Patil
c16761e9d9 [CLIPGuidedStableDiffusion] take the correct text embeddings (#667)
take the correct text embeddings
2022-09-28 17:41:34 +02:00
Isamu Isozaki
7f31142c2e Added script to save during textual inversion training. Issue 524 (#645)
* Added script to save during training

* Suggested changes
2022-09-28 17:26:02 +02:00
Anton Lozhkov
765506ce28 Fix the LMS pytorch regression (#664)
* Fix the LMS pytorch regression

* Copy over the changes from #637

* Copy over the changes from #637

* Fix betas test
2022-09-28 14:07:26 +02:00
Pedro Cuenca
235770dd84 Fix main: stable diffusion pipelines cannot be loaded (#655)
* Replace deprecation warning f-string with class name.

When `__repr__` is invoked on the instance, serialization of
`config_dict` fails because it contains `kwargs` of type `<class
inspect._empty>`.

* Revert "Replace deprecation warning f-string with class name."

This reverts commit 1c4eb8cb10.

* Do not attempt to register `"kwargs"` as an attribute.

Otherwise serialization could fail.
This may happen for other attributes, so we should create a better
solution.
2022-09-27 20:19:04 +02:00
Anton Lozhkov
d8572f20c7 Fix onnx tensor format (#654)
fix np onnx
2022-09-27 19:09:13 +02:00
Suraj Patil
c0c98df9a1 [CLIPGuidedStableDiffusion] remove set_format from pipeline (#653)
remove set_format from pipeline
2022-09-27 18:56:47 +02:00
Kashif Rasul
85494e8818 [Pytorch] add dep. warning for pytorch schedulers (#651)
* add dep. warning for schedulers

* fix format
2022-09-27 18:39:34 +02:00
Suraj Patil
3304538229 [DDIM, DDPM] fix add_noise (#648)
fix add noise
2022-09-27 17:32:43 +02:00
Suraj Patil
e5eed5235b [dreambooth] update install section (#650)
update install section
2022-09-27 17:32:21 +02:00
Suraj Patil
ac665b6484 [examples/dreambooth] don't pass tensor_format to scheduler. (#649)
don't pass tensor_format
2022-09-27 17:24:12 +02:00
Kashif Rasul
bd8df2da89 [Pytorch] Pytorch only schedulers (#534)
* pytorch only schedulers

* fix style

* remove match_shape

* pytorch only ddpm

* remove SchedulerMixin

* remove numpy from karras_ve

* fix types

* remove numpy from lms_discrete

* remove numpy from pndm

* fix typo

* remove mixin and numpy from sde_vp and ve

* remove remaining tensor_format

* fix style

* sigmas has to be a torch tensor

* removed set_format in readme

* remove set format from docs

* remove set_format from pipelines

* update tests

* fix typo

* continue to use mixin

* fix imports

* removed unused imports

* match shape instead of assuming image shapes

* remove import typo

* update call to add_noise

* use math instead of numpy

* fix t_index

* removed commented out numpy tests

* timesteps need to be discrete

* cast timesteps to int in flax scheduler too

* fix device mismatch issue

* small fix

* Update src/diffusers/schedulers/scheduling_pndm.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-09-27 15:27:34 +02:00
Zhenhuan Liu
3b747de845 Add training example for DreamBooth. (#554)
* Add training example for DreamBooth.

* Fix bugs.

* Update readme and default hyperparameters.

* Reformatting code with black.

* Update for multi-gpu training.

* Apply suggestions from code review

* improve sampling

* fix autocast

* improve sampling more

* fix saving

* actually fix saving

* fix saving

* improve dataset

* fix collate fn

* fix collate_fn

* fix collate fn

* fix key name

* fix dataset

* fix collate fn

* concat batch in collate fn

* add grad ckpt

* add option for 8bit adam

* do two forward passes for prior preservation

* Revert "do two forward passes for prior preservation"

This reverts commit 661ca4677e.

* add option for prior_loss_weight

* add option for clip grad norm

* add more comments

* update readme

* update readme

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* add docstr for dataset

* update the saving logic

* Update examples/dreambooth/README.md

* remove unused imports

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-09-27 15:01:18 +02:00
Yih-Dar
d886e49782 Fix SpatialTransformer (#578)
* Fix SpatialTransformer

* Fix SpatialTransformer

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-09-27 14:42:43 +02:00
Pedro Cuenca
ab3fd671d7 Flax pipeline pndm (#583)
* WIP: flax FlaxDiffusionPipeline & FlaxStableDiffusionPipeline

* todo comment

* Fix imports

* Fix imports

* add dummies

* Fix empty init

* make pipeline work

* up

* Allow dtype to be overridden on model load.

This may be a temporary solution until #567 is addressed.

* Convert params to bfloat16 or fp16 after loading.

This deals with the weights, not the model.

* Use Flax schedulers (typing, docstring)

* PNDM: replace control flow with jax functions.

Otherwise jitting/parallelization don't work properly as they don't know
how to deal with traced objects.

I temporarily removed `step_prk`.

* Pass latents shape to scheduler set_timesteps()

PNDMScheduler uses it to reserve space, other schedulers will just
ignore it.

* Wrap model imports inside availability checks.

* Optionally return state in from_config.

Useful for Flax schedulers.

* Do not convert model weights to dtype.

* Re-enable PRK steps with functional implementation.

Values returned still not verified for correctness.

* Remove left over has_state var.

* make style

* Apply suggestion list -> tuple

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Apply suggestion list -> tuple

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Remove unused comments.

* Use zeros instead of empty.

Co-authored-by: Mishig Davaadorj <dmishig@gmail.com>
Co-authored-by: Mishig Davaadorj <mishig.davaadorj@coloradocollege.edu>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-09-27 14:16:11 +02:00
Pedro Cuenca
c070e5f0c5 Remove inappropriate docstrings in LMS docstrings. (#634) 2022-09-27 13:22:05 +02:00
Ryan Russell
b6945310c9 refactor: custom_init_isort readability fixups (#631)
Signed-off-by: Ryan Russell <git@ryanrussell.org>

Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-09-27 13:13:36 +02:00
Pedro Cuenca
b671cb0920 Remove deprecated torch_device kwarg (#623)
* Remove deprecated `torch_device` kwarg.

* Remove unused imports.
2022-09-27 12:07:41 +02:00
Abdullah Alfaraj
bb0c5d1595 Fix docs link to train_unconditional.py (#642)
the link points to an old location of the train_unconditional.py file
2022-09-27 11:23:09 +02:00
Yuta Hayashibe
f7ebe56921 Warning for too long prompts in DiffusionPipelines (Resolve #447) (#472)
* Return encoded texts by DiffusionPipelines

* Updated README to show how to use encoded_text_input

* Reverted examples in README.md

* Reverted all

* Warning for long prompts

* Fix bugs

* Formatted
2022-09-27 11:14:16 +02:00
Anton Lozhkov
57b70c599c [CI] Fix onnxruntime installation order (#633) 2022-09-24 18:32:03 +02:00
Grigory Sizov
35e9209601 Fix formula for noise levels in Karras scheduler and tests (#627)
fix formula for noise levels in karras scheduler and tests
2022-09-24 18:24:08 +02:00
Ryan Russell
d0aa899f0e docs: src/diffusers readability improvements (#629)
* docs: `src/diffusers` readability improvements

Signed-off-by: Ryan Russell <git@ryanrussell.org>

* docs: `make style` lint

Signed-off-by: Ryan Russell <git@ryanrussell.org>

Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-09-24 16:21:28 +02:00
Pedro Cuenca
1e152030bd Fix breaking error: "ort is not defined" (#626)
Fix "ort is not defined" issue.
2022-09-23 17:02:03 +02:00
cloudhan
8211b62227 Allow passing session_options for ORT backend (#620) 2022-09-23 15:28:31 +02:00
Ryan Russell
ce31f83d8c refactor: pipelines readability improvements (#622)
* refactor: pipelines readability improvements

Signed-off-by: Ryan Russell <git@ryanrussell.org>

* docs: remove todo comment from flax pipeline

Signed-off-by: Ryan Russell <git@ryanrussell.org>

Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-09-23 15:02:12 +02:00
Abdullah Alfaraj
b00382e2a7 fix docs: change sample to images (#613)
the result of running the pipeline is stored in StableDiffusionPipelineOutput.images
2022-09-23 14:27:29 +02:00
Younes Belkada
8b0be93596 Flax documentation (#589)
* documenting `attention_flax.py` file

* documenting `embeddings_flax.py`

* documenting `unet_blocks_flax.py`

* Add new objs to doc page

* document `vae_flax.py`

* Apply suggestions from code review

* modify `unet_2d_condition_flax.py`

* make style

* Apply suggestions from code review

* make style

* Apply suggestions from code review

* fix indent

* fix typo

* fix indent unet

* Update src/diffusers/models/vae_flax.py

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

Co-authored-by: Mishig Davaadorj <dmishig@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-09-23 13:24:16 +02:00
Ryan Russell
df80ccf7de docs: .md readability fixups (#619)
Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-09-23 12:02:27 +02:00
Jonathan Whitaker
91db81894b Adding pred_original_sample to SchedulerOutput for some samplers (#614)
* Adding pred_original_sample to SchedulerOutput of DDPMScheduler, DDIMScheduler, LMSDiscreteScheduler, KarrasVeScheduler step methods so we can access the predicted denoised outputs

* Gave DDPMScheduler, DDIMScheduler and LMSDiscreteScheduler their own output dataclasses so the default SchedulerOutput in scheduling_utils does not need pred_original_sample as an optional extra

* Reordered library imports to follow standard

* didn't get import order quite right apparently

* Forgot to change name of LMSDiscreteSchedulerOutput

* Aha, needed some extra libs for make style to fully work
2022-09-22 18:53:40 +02:00
Ryan Russell
f149d037de docs: fix stochastic_karras_ve ref (#618)
Signed-off-by: Ryan Russell <git@ryanrussell.org>

Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-09-22 18:36:29 +02:00
Suraj Patil
e7120bae95 [UNet2DConditionModel] add gradient checkpointing (#461)
* add grad ckpt to downsample blocks

* make it work

* don't pass gradient_checkpointing to upsample block

* add tests for UNet2DConditionModel

* add test_gradient_checkpointing

* add gradient_checkpointing for up and down blocks

* add functions to enable and disable grad ckpt

* remove the forward argument

* better naming

* make supports_gradient_checkpointing private
2022-09-22 15:36:47 +02:00
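A minimal sketch of the enable/disable functions this PR adds to the model (model id is illustrative):

```python
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "CompVis/stable-diffusion-v1-4", subfolder="unet"
)
# Recompute activations in the backward pass to trade compute for memory.
unet.enable_gradient_checkpointing()
# ... training loop ...
unet.disable_gradient_checkpointing()
```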
Mishig Davaadorj
534512bedb [flax] 'dtype' should not be part of self._internal_dict (#609) 2022-09-22 11:46:31 +02:00
Mishig Davaadorj
4b8880a306 Make flax from_pretrained work with local subfolder (#608) 2022-09-22 11:44:22 +02:00
Anton Lozhkov
dd350c8afe Handle the PIL.Image.Resampling deprecation (#588)
* Handle the PIL.Image.Resampling deprecation

* style
2022-09-22 00:02:14 +02:00
Ryan Russell
80183ca58b docs: fix Berkeley ref (#611)
Signed-off-by: Ryan Russell <git@ryanrussell.org>

Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-09-21 22:55:32 +02:00
Anton Lozhkov
6bd005ebbe [ONNX] Collate the external weights, speed up loading from the hub (#610) 2022-09-21 22:26:30 +02:00
Pedro Cuenca
a9fdb3de9e Return Flax scheduler state (#601)
* Optionally return state in from_config.

Useful for Flax schedulers.

* has_state is now a property, make check more strict.

I don't check that the class is `SchedulerMixin` to prevent circular
dependencies. It should be enough that the class name starts with "Flax",
the object declares "has_state", and "create_state" exists too.

* Use state in pipeline from_pretrained.

* Make style
2022-09-21 22:25:27 +02:00
Anton Lozhkov
e72f1a8a71 Add torchvision to training deps (#607) 2022-09-21 13:54:32 +02:00
Anton Lozhkov
4f1c989ffb Add smoke tests for the training examples (#585)
* Add smoke tests for the training examples

* upd

* use a dummy dataset

* mark as slow

* cleanup

* Update test cases

* naming
2022-09-21 13:36:59 +02:00
Younes Belkada
3fc8ef7297 Replace dropout_prob by dropout in vae (#595)
replace `dropout_prob` by `dropout` in `vae`
2022-09-21 11:43:28 +02:00
Mishig Davaadorj
8685699392 Mv weights name consts to diffusers.utils (#605) 2022-09-21 11:30:14 +02:00
Mishig Davaadorj
f810060006 Fix flax from_pretrained pytorch weight check (#603) 2022-09-21 11:17:15 +02:00
Pedro Cuenca
fb2fbab10b Allow dtype to be specified in Flax pipeline (#600)
* Fix typo in docstring.

* Allow dtype to be overridden on model load.

This may be a temporary solution until #567 is addressed.

* Create latents in float32

The denoising loop always computes the next step in float32, so this
would fail when using `bfloat16`.
2022-09-21 10:57:01 +02:00
Pedro Cuenca
fb03aad8b4 Fix params replication when using the dummy checker (#602)
Fix params replication when using the dummy checker.
2022-09-21 09:38:10 +02:00
Patrick von Platen
2345481c0e [Flax] Fix unet and ddim scheduler (#594)
* [Flax] Fix unet and ddim scheduler

* correct

* finish
2022-09-20 23:29:09 +02:00
Mishig Davaadorj
d934d3d795 FlaxDiffusionPipeline & FlaxStableDiffusionPipeline (#559)
* WIP: flax FlaxDiffusionPipeline & FlaxStableDiffusionPipeline

* todo comment

* Fix imports

* Fix imports

* add dummies

* Fix empty init

* make pipeline work

* up

* Use Flax schedulers (typing, docstring)

* Wrap model imports inside availability checks.

* more updates

* make sure flax is not broken

* make style

* more fixes

* up

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@latenitesoft.com>
2022-09-20 21:28:07 +02:00
Suraj Patil
c6629e6f11 [flax safety checker] Use FlaxPreTrainedModel for saving/loading (#591)
* use FlaxPreTrainedModel for flax safety module

* fix name

* fix one more

* Apply suggestions from code review
2022-09-20 20:11:32 +02:00
Anton Lozhkov
8a6833b85c Add the K-LMS scheduler to the inpainting pipeline + tests (#587)
* Add the K-LMS scheduler to the inpainting pipeline + tests

* Remove redundant casts
2022-09-20 19:10:44 +02:00
Anton Lozhkov
a45dca077c Fix BaseOutput initialization from dict (#570)
* Fix BaseOutput initialization from dict

* style

* Simplify post-init, add tests

* remove debug
2022-09-20 18:32:16 +02:00
Suraj Patil
c01ec2d119 [FlaxAutoencoderKL] rename weights to align with PT (#584)
* rename weights to align with PT

* DiagonalGaussianDistribution => FlaxDiagonalGaussianDistribution

* fix name
2022-09-20 13:04:16 +02:00
Younes Belkada
0902449ef8 Add from_pt argument in .from_pretrained (#527)
* first commit:

- add `from_pt` argument in `from_pretrained` function
- add `modeling_flax_pytorch_utils.py` file

* small nit

- fix a small nit so we don't enter the second if condition

* major changes

- modify FlaxUnet modules
- first conversion script
- more keys to be matched

* keys match

- now all keys match
- change module names for correct matching
- upsample module name changed

* working v1

- tests pass with atol and rtol = `4e-02`

* replace unused arg

* make quality

* add small docstring

* add more comments

- add TODO for embedding layers

* small change

- use `jnp.expand_dims` for converting `timesteps` in case it is a 0-dimensional array

* add more conditions on conversion

- add better test to check for keys conversion

* make shapes consistent

- output `img_w x img_h x n_channels` from the VAE

* Revert "make shapes consistent"

This reverts commit 4cad1aeb4a.

* fix unet shape

- channels first!
2022-09-20 12:39:25 +02:00
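A minimal sketch of the `from_pt` flag added to `from_pretrained` here, assuming both torch and flax are installed and that the Flax model class returns a `(model, params)` pair (model id is illustrative):

```python
from diffusers import FlaxUNet2DConditionModel

# Convert the repo's PyTorch weights to Flax on the fly while loading.
model, params = FlaxUNet2DConditionModel.from_pretrained(
    "CompVis/stable-diffusion-v1-4", subfolder="unet", from_pt=True
)
```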
Yuta Hayashibe
ca74951323 Fix typos (#568)
* Fix a setting bug

* Fix typos

* Reverted params to parms
2022-09-19 21:58:41 +02:00
Yih-Dar
84616b5de5 Fix CrossAttention._sliced_attention (#563)
* Fix CrossAttention._sliced_attention

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-09-19 18:07:32 +02:00
Suraj Patil
8d36d5adb1 Update clip_guided_stable_diffusion.py 2022-09-19 18:03:00 +02:00
Suraj Patil
dc2a1c1d07 [examples/community] add CLIPGuidedStableDiffusion (#561)
* add CLIPGuidedStableDiffusion

* add credits

* add readme

* style

* add clip prompt

* fix cond_fn

* fix cond fn

* fix cond fn for lms
2022-09-19 17:29:19 +02:00
Anton Lozhkov
9727cda678 [Tests] Mark the ncsnpp model tests as slow (#575)
* [Tests] Mark the ncsnpp model tests as slow

* style
2022-09-19 17:20:58 +02:00
Anton Lozhkov
0a2c42f3e2 [Tests] Upload custom test artifacts (#572)
* make_reports

* add test utils

* style

* style
2022-09-19 17:08:29 +02:00
Patrick von Platen
2a8477de5c [Flax] Solve problem with VAE (#574) 2022-09-19 16:50:22 +02:00
Patrick von Platen
bf5ca036fa [Flax] Add Vae for Stable Diffusion (#555)
* [Flax] Add Vae

* correct

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Finish

Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-09-19 16:00:54 +02:00
Yih-Dar
b17d49f863 Fix _upsample_2d (#535)
* Fix _upsample_2d

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-09-19 15:52:52 +02:00
Anton Lozhkov
b8d1f2d344 Remove check_tf_utils to avoid an unnecessary TF import for now (#566) 2022-09-19 15:37:36 +02:00
Pedro Cuenca
5b3f249659 Flax: ignore dtype for configuration (#565)
Flax: ignore dtype for configuration.

This makes it possible to save models and configuration files.
2022-09-19 15:37:07 +02:00
Pedro Cuenca
fde9abcbba JAX/Flax safety checker (#558)
* Starting to integrate safety checker.

* Fix initialization of CLIPVisionConfig

* Remove commented lines.

* make style

* Remove unused import

* Pass dtype to modules

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Pass dtype to modules

Co-authored-by: Suraj Patil <surajp815@gmail.com>

Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-09-19 15:26:49 +02:00
Kashif Rasul
b1182bcf21 [Flax] fix Flax scheduler (#564)
* remove match_shape

* ported fixes from #479 to flax

* remove unused argument

* typo

* remove warnings
2022-09-19 14:48:00 +02:00
ydshieh
0424615a5d revert the accidental commit 2022-09-19 14:16:10 +02:00
ydshieh
8187865aef Fix CrossAttention._sliced_attention 2022-09-19 14:08:29 +02:00
Mishig Davaadorj
0c0c222432 FlaxUNet2DConditionOutput @flax.struct.dataclass (#550) 2022-09-18 19:35:37 +02:00
Younes Belkada
d09bbae515 make fixup support (#546)
* add `get_modified_files.py`

- file copied from https://github.com/huggingface/transformers/blob/main/utils/get_modified_files.py

* make fixup
2022-09-18 19:34:51 +02:00
Patrick von Platen
429dace10a [Configuration] Better logging (#545)
* [Config] improve logging

* finish
2022-09-17 14:09:13 +02:00
Jonatan Kłosko
d7dcba4a13 Unify offset configuration in DDIM and PNDM schedulers (#479)
* Unify offset configuration in DDIM and PNDM schedulers

* Format

Add missing variables

* Fix pipeline test

* Update src/diffusers/schedulers/scheduling_ddim.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Default set_alpha_to_one to false

* Format

* Add tests

* Format

* add deprecation warning

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-09-17 14:07:43 +02:00
Patrick von Platen
9e439d8c60 [Hub] Update hub version (#538) 2022-09-16 20:29:01 +02:00
Patrick von Platen
e5902ed11a [Download] Smart downloading (#512)
* [Download] Smart downloading

* add test

* finish test

* update

* make style
2022-09-16 19:32:40 +02:00
Sid Sahai
a54cfe6828 Add LMSDiscreteSchedulerTest (#467)
* [WIP] add LMSDiscreteSchedulerTest

* fixes for comments

* add torch numpy test

* rebase

* Update tests/test_scheduler.py

* Update tests/test_scheduler.py

* style

* return residuals

Co-authored-by: Anton Lozhkov <anton@huggingface.co>
2022-09-16 19:10:56 +02:00
Patrick von Platen
88972172d8 Revert "adding more typehints to DDIM scheduler" (#533)
Revert "adding more typehints to DDIM scheduler (#456)"

This reverts commit a0558b1146.
2022-09-16 17:48:02 +02:00
V Vishnu Anirudh
a0558b1146 adding more typehints to DDIM scheduler (#456)
* adding more typehints

* resolving mypy issues

* resolving formatting issue

* fixing isort issue

Co-authored-by: V Vishnu Anirudh <git.vva@gmail.com>
Co-authored-by: V Vishnu Anirudh <vvani@kth.se>
2022-09-16 17:41:58 +02:00
Suraj Patil
06924c6a4f [StableDiffusionInpaintPipeline] accept tensors for init and mask image (#439)
* accept tensors

* fix mask handling

* make device placement cleaner

* update doc for mask image
2022-09-16 17:35:41 +02:00
Anton Lozhkov
761f0297b0 [Tests] Fix spatial transformer tests on GPU (#531) 2022-09-16 16:04:37 +02:00
Anton Lozhkov
c1796efd5f Quick fix for the img2img tests (#530)
* Quick fix for the img2img tests

* Remove debug lines
2022-09-16 15:52:26 +02:00
Yuta Hayashibe
76d492ea49 Fix typos and add Typo check GitHub Action (#483)
* Fix typos

* Add a typo check action

* Fix a bug

* Changed to a manual typo check for now

Ref: https://github.com/huggingface/diffusers/pull/483#pullrequestreview-1104468010

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

* Removed a confusing message

* Renamed "nin_shortcut" to "in_shortcut"

* Add memo about NIN

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
2022-09-16 15:36:51 +02:00
Yih-Dar
c0493723f7 Remove the usage of numpy in up/down sample_2d (#503)
* Fix PT up/down sample_2d

* empty commit

* style

* style

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-09-16 15:15:05 +02:00
Anton Lozhkov
c727a6a5fb Finally fix the image-based SD tests (#509)
* Finally fix the image-based SD tests

* Remove autocast

* Remove autocast in image tests
2022-09-16 14:37:12 +02:00
Sid Sahai
f73ca908e5 [Tests] Test attention.py (#368)
* add test for AttentionBlock, SpatialTransformer

* add context_dim, handle device

* removed dropout test

* fixes, add dropout test
2022-09-16 12:59:42 +02:00
SkyTNT
37c9d789aa Fix is_onnx_available (#440)
* Fix is_onnx_available

Fix: if a user installs onnxruntime-gpu, is_onnx_available() will return False.

* add more onnxruntime candidates

* Run `make style`

Co-authored-by: anton-l <anton@huggingface.co>
2022-09-16 12:13:22 +02:00
Anton Lozhkov
214520c66a [CI] Add stalebot (#481)
* Add stalebot

* style

* Remove the closing logic

* Make sure not to spam
2022-09-16 12:03:04 +02:00
Suraj Patil
039958eae5 Stable diffusion text2img conversion script. (#154)
* begin text2img conversion script

* add fn to convert config

* create config if not provided

* update imports and use UNet2DConditionModel

* fix imports, layer names

* fix unet conversion

* add function to convert VAE

* fix vae conversion

* update main

* create text model

* update config creating logic for unet

* fix config creation

* update script to create and save pipeline

* remove unused imports

* fix checkpoint loading

* better name

* save progress

* finish

* up

* up

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-09-16 00:07:32 +02:00
Pedro Cuenca
d8b0e4f433 UNet Flax with FlaxModelMixin (#502)
* First UNet Flax modeling blocks.

Mimic the structure of the PyTorch files.
The model classes themselves need work, depending on what we do about
configuration and initialization.

* Remove FlaxUNet2DConfig class.

* ignore_for_config non-config args.

* Implement `FlaxModelMixin`

* Use new mixins for Flax UNet.

For some reason the configuration is not correctly applied; the
signature of the `__init__` method does not contain all the parameters
by the time it's inspected in `extract_init_dict`.

* Import `FlaxUNet2DConditionModel` if flax is available.

* Rm unused method `framework`

* Update src/diffusers/modeling_flax_utils.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Indicate types in flax.struct.dataclass as pointed out by @mishig25

Co-authored-by: Mishig Davaadorj <mishig.davaadorj@coloradocollege.edu>

* Fix typo in transformer block.

* make style

* some more changes

* make style

* Add comment

* Update src/diffusers/modeling_flax_utils.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Rm unneeded comment

* Update docstrings

* correct ignore kwargs

* make style

* Update docstring examples

* Make style

* Style: remove empty line.

* Apply style (after upgrading black from pinned version)

* Remove some commented code and unused imports.

* Add init_weights (not yet in use until #513).

* Trickle down deterministic to blocks.

* Rename q, k, v according to the latest PyTorch version.

Note that weights were exported with the old names, so we need to be
careful.

* Flax UNet docstrings, default props as in PyTorch.

* Fix minor typos in PyTorch docstrings.

* Use FlaxUNet2DConditionOutput as output from UNet.

* make style

Co-authored-by: Mishig Davaadorj <dmishig@gmail.com>
Co-authored-by: Mishig Davaadorj <mishig.davaadorj@coloradocollege.edu>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-09-15 18:07:15 +02:00
Mishig Davaadorj
fb5468a6aa Add init_weights method to FlaxMixin (#513)
* Add `init_weights` method to `FlaxMixin`

* Rename `random_state` -> `shape_state`

* `PRNGKey(0)` for `jax.eval_shape`

* Don't allow mismatched sizes

* Update src/diffusers/modeling_flax_utils.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Update src/diffusers/modeling_flax_utils.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* docstring diffusers

Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-09-15 17:01:41 +02:00
Suraj Patil
d144c46a59 [UNet2DConditionModel, UNet2DModel] pass norm_num_groups to all the blocks (#442)
* pass norm_num_groups to unet blocks and attention

* fix UNet2DConditionModel

* add norm_num_groups arg in vae

* add tests

* remove comment

* Apply suggestions from code review
2022-09-15 16:35:14 +02:00
Kashif Rasul
b34be039f9 Karras VE, DDIM and DDPM flax schedulers (#508)
* beta never changes, removed from state

* fix typos in docs

* removed unused var

* initial ddim flax scheduler

* import

* added dummy objects

* fix style

* fix typo

* docs

* fix typo in comment

* set return type

* added flax ddpm

* fix style

* remake

* pass PRNG key as argument and split before use

* fix doc string

* use config

* added flax Karras VE scheduler

* make style

* fix dummy

* fix ndarray type annotation

* `replace` returns a new state

* added lms_discrete scheduler

* use self.config

* add_noise needs state

* use config

* use config

* docstring

* added flax score sde ve

* fix imports

* fix typos
2022-09-15 15:55:48 +02:00
Mishig Davaadorj
83a7bb2aba Implement FlaxModelMixin (#493)
* Implement `FlaxModelMixin`

* Rm unused method `framework`

* Update src/diffusers/modeling_flax_utils.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* some more changes

* make style

* Add comment

* Update src/diffusers/modeling_flax_utils.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Rm unneeded comment

* Update docstrings

* correct ignore kwargs

* make style

* Update docstring examples

* Make style

* Update src/diffusers/modeling_flax_utils.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Rm incorrect docstring

* Add FlaxModelMixin to __init__.py

* make fix-copies

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-09-14 16:34:44 +02:00
Suraj Patil
8b45096927 [CrossAttention] add different method for sliced attention (#446)
* add different method for sliced attention

* Update src/diffusers/models/attention.py

* Apply suggestions from code review

* Update src/diffusers/models/attention.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-09-14 16:01:24 +02:00
Pedro Cuenca
1a69c6ff0e Fix MPS scheduler indexing when using mps (#450)
* Fix LMS scheduler indexing in `add_noise` #358.

* Fix DDIM and DDPM indexing with mps device.

* Verify format is PyTorch before using `.to()`
2022-09-14 14:33:37 +02:00
Nicolas Patry
7c4b38baca Removing .float() (autocast in fp16 will discard this (I think)). (#495) 2022-09-14 08:20:27 +02:00
Jithin James
ab7a78e8f1 docs: broken doc links for relative links (#504)
fix: broken doc links for relative links
2022-09-14 00:50:02 +02:00
Patrick von Platen
d12e9ebc90 [Docs] Add subfolder docs (#500)
* [Docs] Add subfolder docs

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* up

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-09-13 19:18:02 +02:00
Kashif Rasul
da7e3994ad Fix vae tests for cpu and gpu (#480) 2022-09-13 19:14:20 +02:00
Kashif Rasul
55f7ca3bb9 initial flax pndm schedular (#492)
* initial flax pndm

* fix typo

* use state

* return state

* add FlaxSchedulerOutput

* fix style

* add flax imports

* make style

* fix typos

* return created state

* make style

* add torch/flax imports

* docs

* fixed typo

* remove tensor_format

* round instead of cast

* ets is jnp array

* remove copy
2022-09-13 19:11:45 +02:00
Nathan Lambert
b56f102765 Fix scheduler inference steps error with power of 3 (#466)
* initial attempt at solving

* fix pndm power of 3 inference_step

* add power of 3 test

* fix index in pndm test, remove ddim test

* add comments, change to round()
2022-09-13 09:48:33 -06:00
Nathan Lambert
da990633a9 Scheduler docs update (#464)
* update scheduler docs TODOs, fix typos

* fix another typo
2022-09-13 08:34:33 -06:00
Pedro Cuenca
e335f05fb1 Rename test_scheduler_outputs_equivalence in model tests. (#451) 2022-09-13 15:03:36 +02:00
Pedro Cuenca
f7cd6b87e1 Fix disable_attention_slicing in pipelines (#498)
Fix `disable_attention_slicing` in pipelines.
2022-09-13 14:25:22 +02:00
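For reference, a minimal sketch of the attention-slicing toggles involved in this fix (model id is illustrative):

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4").to("cuda")
pipe.enable_attention_slicing()   # compute attention in slices: lower peak memory
# ... generate ...
pipe.disable_attention_slicing()  # restore full-batch attention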
Patrick von Platen
721e017401 [Flax] Make room for more frameworks (#494)
* start

* finish
2022-09-13 13:24:27 +02:00
Kashif Rasul
f4781a0b27 update expected results of slow tests (#268)
* update expected results of slow tests

* relax sum and mean tests

* Print shapes when reporting exception

* formatting

* fix sentence

* relax test_stable_diffusion_fast_ddim for gpu fp16

* relax flaky tests on GPU

* added comment on large tolerances

* black

* format

* set scheduler seed

* added generator

* use np.isclose

* set num_inference_steps to 50

* fix dep. warning

* update expected_slice

* preprocess if image

* updated expected results

* updated expected from CI

* pass generator to VAE

* undo change back to orig

* use original

* revert back the expected on cpu

* revert back values for CPU

* more undo

* update result after using gen

* update mean

* set generator for mps

* update expected on CI server

* undo

* use new seed every time

* cpu manual seed

* reduce num_inference_steps

* style

* use generator for randn

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-09-12 15:49:39 +02:00
Nathan Lambert
25a51b63ca fix table formatting for stable diffusion pipeline doc (add blank line) (#471)
fix table formatting (add blank line)
2022-09-12 10:28:27 +02:00
Partho
8eaaa546d8 Docs: fix installation typo (#453)
installation doc typo fix
2022-09-09 15:17:17 -06:00
Partho
58434879e1 Renamed variables from single letter to better naming (#449)
* renamed variable names

q -> query
k -> key
v -> value
b -> batch
c -> channel
h -> height
w -> weight

* rename variable names

missed some in the initial commit

* renamed more variable names

As per code review suggestions, renamed x -> hidden_states and x_in -> residual

* fixed minor typo
2022-09-09 22:16:44 +05:30
Suraj Patil
5adb0a7bf7 use torch.matmul instead of einsum in attention. (#445)
* use torch.matmul instead of einsum

* fix softmax
2022-09-09 17:16:06 +05:30
Patrick von Platen
b2b3b1a8ab [Black] Update black (#433)
* Update black

* update table
2022-09-08 22:10:01 +02:00
Patrick von Platen
44968e4204 [Docs] Correct links (#432) 2022-09-08 21:29:24 +02:00
anton-l
5e71fb7752 Version bump: 0.4.0.dev0 2022-09-08 19:14:29 +02:00
anton-l
3f55d1359f Release: 0.3.0 2022-09-08 18:20:05 +02:00
Patrick von Platen
195ebe5a02 Mark in painting experimental (#430) 2022-09-08 18:12:46 +02:00
Patrick von Platen
1e98723e12 finish 2022-09-08 17:47:54 +02:00
Patrick von Platen
4e2c1f3a4d Add config docs (#429)
* advance

* finish

* finish
2022-09-08 17:46:03 +02:00
Kashif Rasul
5e6417e988 [Docs] Models (#416)
* docs for attention

* types for embeddings

* unet2d docstrings

* UNet2DConditionModel docstrings

* fix typos

* style and vq-vae docstrings

* docstrings  for VAE

* Update src/diffusers/models/unet_2d.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* make style

* added inherits from sentence

* docstring to forward

* make style

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* finish model docs

* up

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-09-08 17:28:11 +02:00
Patrick von Platen
234e90cca7 [Docs] Using diffusers (#428)
* [Docs] Using diffusers

* up
2022-09-08 17:27:36 +02:00
Patrick von Platen
f6fb3282b1 [Outputs] Improve syntax (#423)
* [Outputs] Improve syntax

* improve more

* fix docstring return

* correct all

* uP

Co-authored-by: Mishig Davaadorj <dmishig@gmail.com>
2022-09-08 16:46:38 +02:00
Pedro Cuenca
1a79969d23 Initial ONNX doc (TODO: Installation) (#426) 2022-09-08 16:46:24 +02:00
Patrick von Platen
f55190b275 [Tests] Correct image folder tests (#427)
* [Tests] Correct image folder tests

* up
2022-09-08 16:45:29 +02:00
Patrick von Platen
f8325cfd7b [MPS] Make sure it doesn't break torch < 1.12 (#425)
* [MPS] Make sure it doesn't break torch < 1.12

* up
2022-09-08 16:22:23 +02:00
Anton Lozhkov
8d9c4a531b [ONNX] Stable Diffusion exporter and pipeline (#399)
* initial export and design

* update imports

* custom provider, import fixes

* Update src/diffusers/onnx_utils.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/onnx_utils.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* remove push_to_hub

* Update src/diffusers/onnx_utils.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* remove torch_device

* numpify the rest of the pipeline

* torchify the safety checker

* revert tensor

* Code review suggestions + quality

* fix tests

* fix provider, add an end-to-end test

* style

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-09-08 15:17:28 +02:00
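As a rough sketch of how the exported pipeline was used around this release (the class name, `revision="onnx"`, and the provider string are assumptions based on the docs of this era; the class was renamed in later versions):

```python
from diffusers import StableDiffusionOnnxPipeline

# Swap in "CUDAExecutionProvider" for GPU inference.
pipe = StableDiffusionOnnxPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="onnx",
    provider="CPUExecutionProvider",
)
image = pipe("a photo of an astronaut riding a horse").images[0]
```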
Anton Lozhkov
7bcc873bb5 [Tests] Make image-based SD tests reproducible with fixed datasets (#424)
nicer datasets
2022-09-08 15:14:24 +02:00
Patrick von Platen
43c585111d [Docs] Outputs.mdx (#422)
* up

* remove bogus file
2022-09-08 14:47:14 +02:00
Patrick von Platen
46013e8e3f [Docs] Fix scheduler docs (#421)
* [Docs] Fix scheduler docs

* up

* Apply suggestions from code review
2022-09-08 14:04:09 +02:00
Patrick von Platen
e7457b377d [Docs] DiffusionPipeline (#418)
* Start

* up

* up

* finish
2022-09-08 13:50:06 +02:00
Satpal Singh Rathore
1d7adf1329 Improve unconditional diffusers example (#414)
* use gpu and improve

* Update README.md

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update README.md

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-09-08 13:42:44 +02:00
Satpal Singh Rathore
f4a785cec7 Improve latent diff example (#413)
* improve latent diff example

* use gpu

* Update README.md

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update README.md

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-09-08 13:42:16 +02:00
Pedro Cuenca
5dda1735fd Inference support for mps device (#355)
* Initial support for mps in Stable Diffusion pipeline.

* Initial "warmup" implementation when using mps.

* Make some deterministic tests pass with mps.

* Disable training tests when using mps.

* SD: generate latents on the CPU, then move to device.

This is especially important when using the mps device, because
generators are not supported there. See for example
https://github.com/pytorch/pytorch/issues/84288.

In addition, the other pipelines seem to use the same approach: generate
the random samples then move to the appropriate device.

After this change, generating an image on MPS produces the same result
as when using the CPU, if the same seed is used.

* Remove prints.

* Pass AutoencoderKL test_output_pretrained with mps.

Sampling from `posterior` must be done on the CPU.

* Style

* Do not use torch.long for log op in mps device.

* Perform incompatible padding ops on the CPU.

UNet tests now pass.
See https://github.com/pytorch/pytorch/issues/84535

* Style: fix import order.

* Remove unused symbols.

* Remove MPSWarmupMixin, do not apply automatically.

We do apply warmup in the tests, but not during normal use.
This adopts some PR suggestions by @patrickvonplaten.

* Add comment for mps fallback to CPU step.

* Add README_mps.md for mps installation and use.

* Apply `black` to modified files.

* Restrict README_mps to SD, show measures in table.

* Make PNDM indexing compatible with mps.

Addresses #239.

* Do not use float64 when using LDMScheduler.

Fixes #358.

* Fix typo identified by @patil-suraj

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Adapt example to new output style.

* Restore 1:1 results reproducibility with CompVis.

However, mps latents need to be generated on the CPU because generators
don't work on the mps device.

* Move PyTorch nightly to requirements.

* Adapt `test_scheduler_outputs_equivalence` to MPS.

* mps: skip training tests instead of ignoring silently.

* Make VQModel tests pass on mps.

* mps ddim tests: warmup, increase tolerance.

* ScoreSdeVeScheduler indexing made mps compatible.

* Make ldm pipeline tests pass using warmup.

* Style

* Simplify casting as suggested in PR.

* Add Known Issues to readme.

* `isort` import order.

* Remove _mps_warmup helpers from ModelMixin.

And just make changes to the tests.

* Skip tests using unittest decorator for consistency.

* Remove temporary var.

* Remove spurious blank space.

* Remove unused symbol.

* Remove README_mps.

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-09-08 13:37:36 +02:00
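The CPU-latents trick at the heart of this PR can be sketched as follows (shape and seed are illustrative):

```python
import torch

# Generators are not supported on the mps device, so seed and sample on the CPU...
generator = torch.Generator(device="cpu").manual_seed(0)
latents = torch.randn((1, 4, 64, 64), generator=generator)  # 512x512 SD latent shape

# ...then move the latents over; the same seed now reproduces the CPU result on mps.
latents = latents.to("mps")
```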
Patrick von Platen
98f346835a [Docs] Minor fixes in optimization section (#420)
* uP

* more
2022-09-08 13:13:46 +02:00
Satpal Singh Rathore
6b9906f6c2 [Docs] Pipelines for inference (#417)
* Update conditional_image_generation.mdx

* Update unconditional_image_generation.mdx
2022-09-08 12:42:13 +02:00
Patrick von Platen
a353c46ec0 [Docs] Training docs (#415)
finish training docs
2022-09-08 10:17:37 +02:00
Pedro Cuenca
c29d81c3e3 Docs: fp16 page (#404)
* Initial version of `fp16` page.

* Fix typo in README.

* Change titles of fp16 section in toctree.

* PR suggestion

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* PR suggestion

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Clarify attention slicing is useful even for batches of 1

Explained by @patrickvonplaten after a suggestion by @keturn.

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Do not talk about `batches` in `enable_attention_slicing`.

* Use Tip (just for fun), add link to method.

* Comment about fp16 results looking the same as float32 in practice.

* Style: docstring line wrapping.

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-09-08 09:17:51 +02:00
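Putting the two techniques from this page together, a minimal sketch (model id illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Slicing the attention computation lowers peak memory, even at batch size 1,
# at the cost of a small slowdown.
pipe.enable_attention_slicing()

image = pipe("a photo of an astronaut riding a horse").images[0]
```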
Daniel Hug
a127363dca Add typing to scheduling_sde_ve: init, set_timesteps, and set_sigmas function definitions (#412)
Add typing to scheduling_sde_ve init, set_timesteps, and set_sigmas functions

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-09-08 09:17:14 +02:00
Nathan Lambert
b8894f181d Docs fix some typos (#408)
* fix small typos

* capitalize Diffusers
2022-09-08 09:08:35 +02:00
Nathan Lambert
e6110f6856 [docs sprint] schedulers docs, will update (#376)
* init schedulers docs

* add some docstrings, fix sidebar formatting

* add docstrings

* [Type hint] PNDM schedulers (#335)

* [Type hint] PNDM Schedulers

* ran make style

* updated timesteps type hint

* apply suggestions from code review

* ran make style

* removed unused import

* [Type hint] scheduling ddim (#343)

* [Type hint] scheduling ddim

* apply suggestions from code review

apply suggestions to also return the return type

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* make style

* update class docstrings

* add docstrings

* missed merge edit

* add general docs page

* modify headings for right sidebar

Co-authored-by: Partho <parthodas6176@gmail.com>
Co-authored-by: Santiago Víquez <santi.viquez@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-09-08 09:07:44 +02:00
Nathan Lambert
cee3aa0dd4 Docs: fix undefined in toctree (#406)
fix undefined in toctree
2022-09-07 23:02:36 +02:00
Patrick von Platen
8ff777d3c1 Attention slicing (#407)
uup
2022-09-07 22:48:13 +02:00
Rashmi Margani
1a431ae886 Rename variables from single letter to meaningful name fix (#395)
Co-authored-by: Rashmi S <rashmis@Rashmis-MacBook-Pro.local>
2022-09-07 18:50:56 +02:00
Pedro Cuenca
8d14edf27f Docs: Stable Diffusion pipeline (#386)
* Initial description of Stable Diffusion pipeline.

* Placeholder docstrings to test preview.

* Add docstrings to Stable Diffusion pipeline.

* Style

* Docs for all the SD pipelines + attention slicing.

* Style: wrap long lines.
2022-09-07 18:48:49 +02:00
Pedro Cuenca
58d627aed6 Small changes to Philosophy (#403)
Small changes to Philosophy.
2022-09-07 18:47:38 +02:00
Kashif Rasul
65ed5d2845 karras-ve docs (#401)
* karras-ve docs

for issue #293

* make style
2022-09-07 18:34:54 +02:00
Kashif Rasul
44091d8b2a Score sde ve doc (#400)
* initial score_sde_ve docs

* fixed typo

* fix VE term
2022-09-07 18:34:34 +02:00
Patrick von Platen
e0d836c813 [Docs] Finish Intro Section (#402)
* up

* up

* finish
2022-09-07 18:00:49 +02:00
Patrick von Platen
8603ca6b09 [Docs] Quicktour (#397)
* uP

* better

* up

* finish

* up
2022-09-07 16:29:34 +02:00
Kashif Rasul
fead3ba386 ddim docs (#396)
* ddim docs

for issue #293

* space
2022-09-07 16:29:06 +02:00
Pedro Cuenca
492f5c9a6c Docs: optimization / special hardware (#390)
Add mps documentation.
2022-09-07 16:27:14 +02:00
Kashif Rasul
71d737bfe2 added pndm docs (#391)
for issue  #293
2022-09-07 15:33:17 +02:00
Jonathan Whitaker
5b4f5951a9 Update text_inversion.mdx (#393)
* Update text_inversion.mdx

Getting in a bit of background info

* fixed typo mode -> model

* Link SD and re-write a few bits for clarity

* Copied in info from the example script

As suggested by surajpatil :)

* removed an unnecessary heading
2022-09-07 18:48:34 +05:30
Patrick von Platen
3dcc5e9a5a [Docs] Logging (#394)
up
2022-09-07 14:58:21 +02:00
Kashif Rasul
9288fb1df8 [Pipeline Docs] ddpm docs for sprint (#382)
* initial ddpm

for issue #293

* initial ddpm pipeline doc

* added docstrings

* Update docs/source/api/pipelines/ddpm.mdx

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* make style

* fix docs

* make style

* fix doc strings

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-09-07 14:43:29 +02:00
Satpal Singh Rathore
a0592a13ee [Pipeline Docs] Unconditional Latent Diffusion (#388)
* initial description

* add doc strings
2022-09-07 14:42:24 +02:00
Pedro Cuenca
cdb371f07b Docs: Conceptual section (#392)
Add contribution.mdx by copy/pasting and adapting.
2022-09-07 14:41:17 +02:00
Patrick von Platen
8ef1ee812d [Pipeline Docs] Latent Diffusion (#377)
* up

* up

* up

* up

* up

* up

* up
2022-09-07 12:53:03 +02:00
Suraj Patil
ac84c2fa5a [textual-inversion] fix saving embeds (#387)
fix saving embeds
2022-09-07 15:49:16 +05:30
Patrick von Platen
5a38033de4 [Docs] Let's go (#385) 2022-09-07 11:31:13 +02:00
apolinario
7bd50cabaf Add colab links to textual inversion (#375) 2022-09-06 22:23:02 +05:30
Patrick von Platen
5c4ea00de7 Efficient Attention (#366)
* up

* add tests

* correct

* up

* finish

* better naming

* Update README.md

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-09-06 18:06:47 +02:00
Pedro Cuenca
56c003705f Use expand instead of ones to broadcast tensor (#373)
Use `expand` instead of ones to broadcast tensor.

As suggested by @bes-dev. According to the documentation, this shouldn't
take any memory - it just plays with the strides.
2022-09-06 17:36:32 +02:00
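The difference is easy to see in isolation (shapes illustrative):

```python
import torch

mask = torch.randn(1, 77)

# Multiplying by ones materializes a brand-new (8, 77) tensor:
broadcast_copy = mask * torch.ones(8, 77)

# expand() returns a view with adjusted strides; no extra memory is allocated:
broadcast_view = mask.expand(8, 77)

assert torch.equal(broadcast_copy, broadcast_view)
```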
Anton Lozhkov
7a1229fa29 [Tests] Fix SD slow tests (#364)
move to fp16, update ddim
2022-09-06 17:01:04 +02:00
Partho
f085d2f5c6 [Type Hint] VAE models (#365)
* [Type Hint] VAE models

* Update src/diffusers/models/vae.py

* apply suggestions from code review

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
2022-09-05 19:09:48 +02:00
Santiago Víquez
be52be7215 [Type hint] scheduling lms discrete (#360)
* [Type hint] scheduling karras ve

* [Type hint] scheduling lms discrete
2022-09-05 18:28:49 +02:00
Santiago Víquez
3c1cdd3359 [Type hint] scheduling karras ve (#359) 2022-09-05 18:20:57 +02:00
Samuel Ajisegiri
07f8ebd543 type hints: models/vae.py (#346)
* type hints: models/vae.py

* modify typings in vae.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
2022-09-05 16:46:12 +02:00
Sid Sahai
ada09bd3f0 [Type Hints] DDIM pipelines (#345)
* type hints

* Apply suggestions from code review

Co-authored-by: Anton Lozhkov <anton@huggingface.co>
2022-09-05 16:07:37 +02:00
Patrick von Platen
cc59b05635 [ModelOutputs] Replace dict outputs with Dict/Dataclass and allow to return tuples (#334)
* add outputs for models

* add for pipelines

* finish schedulers

* better naming

* adapt tests as well

* replace dict access with . access

* make schedulers work

* finish

* correct readme

* make bcp compatible

* up

* small fix

* finish

* more fixes

* more fixes

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update src/diffusers/models/vae.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Adapt model outputs

* Apply more suggestions

* finish examples

* correct

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-09-05 14:49:26 +02:00
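After this refactor, pipeline outputs can be consumed either way; a sketch (model id illustrative):

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

# Dataclass-style output with attribute access (replaces dict indexing):
output = pipe("a photo of a cat")
image = output.images[0]

# Or opt back into a plain tuple:
images, nsfw_flags = pipe("a photo of a cat", return_dict=False)
```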
Mishig Davaadorj
daddd98b88 package version on main should have .dev0 suffix (#354)
* package `version` on main should have `.dev0` suffix

package `version` on main should have `.dev0` suffix, which is the convention followed in transformers [here](https://github.com/huggingface/transformers/blob/main/setup.py#L403)

which will also cause the docs to be built into the `main` folder in [doc-build diffusers](https://github.com/huggingface/doc-build/tree/main/diffusers)

* dev version should be incremented

* Update version in `__init__.py`
2022-09-05 11:26:23 +02:00
Suraj Patil
55d6453fce [textual_inversion] use tokenizer.add_tokens to add placeholder_token (#357)
use add_tokens
2022-09-05 13:12:49 +05:30
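`add_tokens` reports how many tokens were actually added, which is what the script now checks; a sketch (tokenizer id and placeholder string are illustrative):

```python
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

placeholder_token = "<my-new-concept>"
num_added = tokenizer.add_tokens(placeholder_token)
if num_added == 0:
    raise ValueError(f"Tokenizer already contains {placeholder_token}; pick another token.")
```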
Santiago Víquez
9ea9c6d1c2 [Type hint] scheduling ddim (#343)
* [Type hint] scheduling ddim

* apply suggestions from code review

apply suggestions to also return the return type

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-09-04 18:07:54 +02:00
Partho
5791f4acde [Type Hints] VAE models (#344)
* [Type Hints] VAE models

* apply suggestions from code review

apply suggestions to also return the return type
2022-09-04 18:06:16 +02:00
Partho
878af0e113 [Type Hint] DDPM schedulers (#349) 2022-09-04 18:05:13 +02:00
Partho
dea5ec508f [Type hint] PNDM schedulers (#335)
* [Type hint] PNDM Schedulers

* ran make style

* updated timesteps type hint

* apply suggestions from code review

* ran make style

* removed unused import
2022-09-04 18:01:57 +02:00
Yuntian Deng
6c0ca5efa6 Fix typo in unet_blocks.py (#353)
Update unet_blocks.py

fix typo
2022-09-04 18:01:14 +02:00
Patrick von Platen
cab7650524 Update bug-report.yml 2022-09-04 17:52:56 +02:00
Patrick von Platen
ed8ef6226d Update bug-report.yml 2022-09-04 17:50:59 +02:00
Patrick von Platen
59c1af77e8 [Commands] Add env command (#352)
* [Commands] Add env command

* Apply suggestions from code review
2022-09-04 17:43:51 +02:00
Patrick von Platen
fd76845651 Add transformers and scipy to dependency table (#348)
uP
2022-09-04 09:46:20 +02:00
Sid Sahai
b1fe170642 [Type Hint] Unet Models (#330)
* add void check

* remove void, add types for params
2022-09-03 12:31:38 +02:00
Patrick von Platen
9b704f7688 [Img2Img2] Re-add K LMS scheduler (#340) 2022-09-03 12:19:58 +02:00
Pedro Cuenca
e49dd03d2d Use ONNX / Core ML compatible method to broadcast (#310)
* Use ONNX / Core ML compatible method to broadcast.

Unfortunately `tile` could not be used either, it's still not compatible
with ONNX.

See #284.

* Add comment about why broadcast_to is not used.

Also, apply style to changed files.

* Make sure broadcast remains in same device.
2022-09-02 18:22:57 +02:00
Partho
7b628a225a [Type hint] PNDM pipeline (#327)
* [Type hint] PNDM pipeline

* ran make style

* Revert "ran make style" wrong black version
2022-09-02 17:45:33 +02:00
Santiago Víquez
033b77ebc4 [Type hint] Latent Diffusion Uncond pipeline (#333) 2022-09-02 16:39:34 +02:00
Patrick von Platen
e54206d095 Update README.md
Remove joke
2022-09-02 13:20:00 +02:00
Patrick von Platen
6b5baa9332 Add contributions to README and re-order a bit (#316)
* up

* thanks Clau

* finish

* finish

* up
2022-09-02 13:19:13 +02:00
Anton Lozhkov
66fd3ec70d [CI] try to fix GPU OOMs between tests and excessive tqdm logging (#323)
* Fix tqdm and OOM

* tqdm auto

* tqdm is still spamming; try to disable it altogether

* rather just set the pipe config, to keep the global tqdm clean

* style
2022-09-02 13:18:49 +02:00
Pedro Cuenca
3a536ac8f1 README: stable diffusion version v1-3 -> v1-4 (#331)
Prose: stable diffusion version v1-3 -> v1-4

The code examples use `v1-4`, but the license text was referring to
`v1-3`.
2022-09-02 13:18:09 +02:00
Suraj Patil
30e7c78ac3 Update README.md 2022-09-02 14:29:27 +05:30
Suraj Patil
d0d3e24ec1 Textual inversion (#266)
* add textual inversion script

* make the loop work

* make coarse_loss optional

* save pipeline after training

* add arg pretrained_model_name_or_path

* fix saving

* fix gradient_accumulation_steps

* style

* fix progress bar steps

* scale lr

* add argument to accept style

* remove unused args

* scale lr using num gpus

* load tokenizer using args

* add checks when converting init token to id

* improve comments and style

* document args

* more cleanup

* fix default adamw args

* TextualInversionWrapper -> CLIPTextualInversionWrapper

* fix tokenizer loading

* Use the CLIPTextModel instead of wrapper

* clean dataset

* remove commented code

* fix accessing grads for multi-gpu

* more cleanup

* fix saving on multi-GPU

* init_placeholder_token_embeds

* add seed

* fix flip

* fix multi-gpu

* add utility methods in wrapper

* remove ipynb

* don't use wrapper

* don't pass vae and unet to accelerate prepare

* bring back accelerator.accumulate

* scale latents

* use only one progress bar for steps

* push_to_hub at the end of training

* remove unused args

* log some important stats

* store args in tensorboard

* pretty comments

* save the trained embeddings

* move the script up

* add requirements file

* more cleanup

* fix typo

* begin readme

* style -> learnable_property

* keep vae and unet in eval mode

* address review comments

* address more comments

* removed unused args

* add train command in readme

* update readme
2022-09-02 14:23:52 +05:30
Santiago Víquez
5164c9faa9 [Type hint] Score SDE VE pipeline (#325) 2022-09-01 22:17:00 +02:00
Anton Lozhkov
93debd301d [CI] Cancel pending jobs for PRs on new commits (#324)
Cancel pending jobs for PRs on new commits
2022-09-01 16:14:53 +02:00
Suraj Patil
1b1d6444c6 [train_unconditional] fix gradient accumulation. (#308)
fix grad accum
2022-09-01 16:02:15 +02:00
Anton Lozhkov
4724250980 Fix nondeterministic tests for GPU runs (#314)
* Fix nondeterministic tests for GPU runs

* force SD fast tests to the CPU
2022-09-01 15:25:39 +02:00
Patrick von Platen
64270eff34 Improve README to show how to use SD without an access token (#315)
* Readme sd

* Apply suggestions from code review

* Apply suggestions from code review

* Apply suggestions from code review

Co-authored-by: Anton Lozhkov <anton@huggingface.co>

Co-authored-by: Anton Lozhkov <anton@huggingface.co>
2022-09-01 15:06:04 +02:00
Anton Lozhkov
3c138a4d2b Fix flake8 F401 imported but unused (#317)
* Fix flake8 F401 '...' imported but unused

* One more F403
2022-09-01 14:56:25 +02:00
Patrick von Platen
2fa4476525 Add new issue template 2022-09-01 12:51:55 +00:00
okalldal
d799084a9a Allow downloading of revisions for models. (#303) 2022-09-01 13:52:30 +02:00
Kirill
1e5d91d577 Fix more links (#312) 2022-09-01 16:41:19 +05:30
Patrick von Platen
d8f8b9aac9 Merge branch 'main' of https://github.com/huggingface/diffusers 2022-09-01 12:43:26 +02:00
Patrick von Platen
4d1b1b46f4 improve issue guide 2022-09-01 12:43:22 +02:00
Juan Carrasquilla
1f196a09fe Changed variable name from "h" to "hidden_states" (#285)
* Changed variable name from "h" to "hidden_states"

Per issue #198, changed the variable name from "h" to "hidden_states" in the forward function only. I am happy to change any other variable names; please advise on recommended new names.

* Update src/diffusers/models/resnet.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-09-01 15:01:02 +05:30
Kirill
034673bbeb Fix stable-diffusion-seeds.ipynb link (#309) 2022-09-01 14:59:34 +05:30
Patrick von Platen
17b8adeb0e Update README.md 2022-09-01 10:32:25 +02:00
Patrick von Platen
e8140304b9 [Tests] Add fast pipeline tests (#302)
* add fast tests

* Finish
2022-08-31 21:17:02 +02:00
Patrick von Platen
bc2ad5a661 Improve README (#301) 2022-08-31 21:02:46 +02:00
Patrick von Platen
f3937bc8f3 [Refactor] Remove set_seed (#289)
* [Refactor] Remove set_seed and class attributes

* apply anton's suggestions

* fix

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* up

* update

* make style

* Apply suggestions from code review

Co-authored-by: Anton Lozhkov <anton@huggingface.co>

* make fix-copies

* make style

* make style and new copies

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
2022-08-31 19:29:38 +02:00
Patrick von Platen
384fcac6df [Stable Diffusion] Hotfix (#299) 2022-08-31 19:27:49 +02:00
Patrick von Platen
0b1a843d32 Check dummy file (#297)
fix line type
2022-08-31 18:54:36 +02:00
Patrick von Platen
2299951e6d Update README.md 2022-08-31 18:34:35 +02:00
Anton Lozhkov
ab7857019a Add missing auth tokens for two SD tests (#296) 2022-08-31 17:57:46 +02:00
Anton Lozhkov
c7a3b2ed31 Fix GPU tests (token + single-process) (#294) 2022-08-31 17:26:20 +02:00
Nouamane Tazi
b64c522759 [PNDM Scheduler] format timesteps attrs to np arrays (#273)
* format timesteps attrs to np arrays in pndm scheduler
because lists don't get formatted to tensors in `self.set_format`

* convert to long type to use timesteps as indices for tensors

* add scheduler set_format test

* fix `_timesteps` type

* make style with black 22.3.0 and isort 5.10.1

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-08-31 14:12:08 +02:00
Kirill
7eb6dfc607 Fix link (#286)
Fix img2img link
2022-08-31 12:50:36 +02:00
Patrick von Platen
06bc1daf6c [Type hint] Karras VE pipeline (#288)
* [Type hint] Karras VE pipeline

* Apply suggestions from code review

Co-authored-by: Anton Lozhkov <anton@huggingface.co>

Co-authored-by: Anton Lozhkov <anton@huggingface.co>
2022-08-31 12:50:11 +02:00
Anton Lozhkov
7e1b202d5e Add datasets + transformers + scipy to test deps (#279)
Add datasets + transformers to test deps
2022-08-30 20:19:21 +02:00
Richard Löwenström
170af08e7f Easily understandable error if inference steps not set before using scheduler (#263) (#264)
* Helpful exception if inference steps not set in schedulers (#263)

* Apply suggestions from codereview by patrickvonplaten

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-08-30 23:17:24 +05:30
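The failure mode this PR guards against, sketched (scheduler choice illustrative):

```python
from diffusers import PNDMScheduler

scheduler = PNDMScheduler()

# Calling scheduler.step(...) at this point now raises a descriptive error
# instead of an opaque one, because no timestep schedule has been configured.
scheduler.set_timesteps(num_inference_steps=50)  # must run before the first step()
```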
Patrick von Platen
76985bc87a [Docs] Add some guides (#276) 2022-08-30 19:13:07 +02:00
Nathan Lambert
851b968630 readme: remove soon tag for diffuse-the-rest 2022-08-30 10:08:03 -07:00
Patrick von Platen
3a5eff9022 Update README.md 2022-08-30 19:02:14 +02:00
Patrick von Platen
6e808719d2 Update README.md 2022-08-30 19:01:58 +02:00
Patrick von Platen
eb64e201b8 [README] Add readme for SD (#274)
* [README] Add readme for SD

* fix

* fix

* up

* uP
2022-08-30 18:50:19 +02:00
Patrick von Platen
a4d5b59f13 Refactor Pipelines / Community pipelines and add better explanations. (#257)
* [Examples readme]

* Improve

* more

* save

* save

* save more

* up

* up

* Apply suggestions from code review

Co-authored-by: Nathan Lambert <nathan@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* up

* make deterministic

* up

* better

* up

* add generator to img2img pipe

* save

* make pipelines deterministic

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py

Co-authored-by: Anton Lozhkov <anton@huggingface.co>

* apply all changes

* more corrections

* finish

* improve table

* more fixes

* up

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>

* Update src/diffusers/pipelines/README.md

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* add better links

* fix more

* finish

Co-authored-by: Nathan Lambert <nathan@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-08-30 18:43:42 +02:00
hysts
5e84353eba Refactor progress bar (#242)
* Refactor progress bar of pipeline __call__

* Make any tqdm configs available

* remove init

* add some tests

* remove file

* finish

* make style

* improve progress bar test

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-08-30 12:30:06 +02:00
Anton Lozhkov
efa773afd2 Support K-LMS in img2img (#270)
* Support K-LMS in img2img

* Apply review suggestions
2022-08-29 17:17:05 +02:00
nicolas-dufour
da7d4cf200 [BugFix]: Fixed add_noise in LMSDiscreteScheduler (#253)
* Fixed add_noise in LMSDiscreteScheduler

* Linting

* Update src/diffusers/schedulers/scheduling_lms_discrete.py

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
2022-08-29 16:40:49 +02:00
Patrick von Platen
9e1b1ca49d [Tests] Make sure tests are on GPU (#269)
* [Tests] Make sure tests are on GPU

* move more models

* speed up tests
2022-08-29 15:58:11 +02:00
Pulkit Mishra
16172c1c7e Adds missing torch imports to inpainting and image_to_image example (#265)
adds missing torch import to example
2022-08-29 10:56:37 +02:00
Evita
28f730520e Fix typo in README.md (#260) 2022-08-26 18:54:45 -07:00
Suraj Patil
5cbed8e0d1 Fix inpainting script (#258)
* expand latents before the check, style

* update readme
2022-08-26 21:16:43 +05:30
Anton Lozhkov
11133dcca1 Initialize CI for code quality and testing (#256)
* Init CI

* clarify cpu

* style

* Check scripts quality too

* Drop smi for cpu tests

* Run PR tests on cpu docker envs

* Update .github/workflows/push_tests.yml

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Try minimal python container

* Print env, install stable GPU torch

* Manual torch install

* remove deprecated platform.dist()

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-08-26 17:34:58 +02:00
Logan
bb4d605dfc add inpainting example script (#241)
* add inpainting

* added proper noising of init_latent as recommended by jackloomen (https://github.com/huggingface/diffusers/pull/241#issuecomment-1226283542)

* move image preprocessing inside the pipeline and allow non-512x512 masks
2022-08-26 20:32:46 +05:30
Nathan Lambert
e5b5deaea6 Update README.md with examples (#252)
* Update README.md

* Update README.md

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update README.md

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-08-26 15:27:25 +05:30
Pedro Cuenca
bfe37f3159 Reproducible images by supplying latents to pipeline (#247)
* Accept latents as input for StableDiffusionPipeline.

* Notebook to demonstrate reusable seeds (latents).

* More accurate type annotation

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Review comments: move to device, raise instead of assert.

* Actually commit the test notebook.

I had mistakenly pushed an empty file instead.

* Adapt notebook to Colab.

* Update examples readme.

* Move notebook to personal repo.

Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-08-25 19:17:05 +05:30
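A sketch of the reusable-seeds workflow this PR enables (model id and latent shape are illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4").to("cuda")

generator = torch.Generator(device="cuda").manual_seed(42)
latents = torch.randn(
    (1, pipe.unet.config.in_channels, 64, 64), generator=generator, device="cuda"
)

# Passing the same latents again reproduces the same image, even across sessions.
image = pipe("a photo of an astronaut riding a horse", latents=latents).images[0]
```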
Anton Lozhkov
89793a97e2 Style the scripts directory (#250)
Style scripts
2022-08-25 15:46:09 +02:00
Anton Lozhkov
365f75233f Pin black==22.3 to keep a stable --preview flag (#249)
Pin black==22.3
2022-08-25 15:19:59 +02:00
Patrick von Platen
c1efda70b5 [Clean up] Clean unused code (#245)
* CleanResNet

* refactor more

* correct
2022-08-25 15:25:57 +05:30
Kashif Rasul
47893164ab added test workflow and fixed failing test (#237)
* added test workflow and fixed failing test

* 4 decimal places
2022-08-24 13:46:53 +02:00
Kashif Rasul
102cabeb23 split tests_modeling_utils (#223)
* split tests_modeling_utils

* Fix SD tests .to(device)

* fix merge

* Fix style

Co-authored-by: anton-l <anton@huggingface.co>
2022-08-24 13:27:16 +02:00
Suraj Patil
511bd3aaf2 [example/image2image] raise error if strength is not in desired range (#238)
raise error if strength is not in desired range
2022-08-23 19:52:52 +05:30
Suraj Patil
4674fdf807 Add image2image example script. (#231)
* boom boom

* reorganise examples

* add image2image in example inference

* add readme

* fix example

* update colab url

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* fix init_timestep

* update colab url

* update main readme

* rename readme

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-08-23 16:27:28 +05:30
Yih-Dar
6028d58cb0 Remove dead code in resnet.py (#218)
remove dead code in resnet.py

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-08-23 12:08:37 +05:30
anton-l
60a147343f Release: v0.2.4 2022-08-22 18:45:43 +02:00
anton-l
eb5267f377 Style quickfix 2022-08-22 18:40:04 +02:00
Patrick von Platen
db5fa43079 [Loading] allow modules to be loaded in fp16 (#230) 2022-08-22 18:27:17 +02:00
Anton Lozhkov
0ab948568d Add more visibility to the colab links with badges 2022-08-22 14:15:24 +02:00
anton-l
ebd80e2618 Release: v0.2.3 2022-08-22 10:49:38 +02:00
anton-l
89509230db Merge remote-tracking branch 'origin/main' 2022-08-22 10:22:36 +02:00
anton-l
577a6a65d6 Fix SD tests .to(device) 2022-08-22 10:22:28 +02:00
Anton Lozhkov
62b3efe351 Fix SD example typo 2022-08-22 09:25:55 +02:00
anton-l
21ceda3f6c Remove duplicate add_noise 2022-08-22 09:12:42 +02:00
Suraj Patil
5321f3e203 add add_noise method in LMSDiscreteScheduler, PNDMScheduler (#227)
add add_noise method in more schedulers
2022-08-22 08:38:07 +02:00
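The schedulers now share the same forward-noising helper; a minimal sketch:

```python
import torch
from diffusers import PNDMScheduler

scheduler = PNDMScheduler(num_train_timesteps=1000)

clean_images = torch.randn(4, 3, 32, 32)   # (batch, channels, height, width)
noise = torch.randn_like(clean_images)
timesteps = torch.randint(0, 1000, (4,))   # one diffusion step per sample

noisy_images = scheduler.add_noise(clean_images, noise, timesteps)
```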
Nathan Lambert
3f1861ee46 hotfix for pndm test (#220) 2022-08-22 07:23:59 +02:00
Pedro Cuenca
6a03060c45 Restore is_modelcards_available in .utils (#224)
Restore `is_modelcards_available` in `.utils`.

Otherwise attempting to import `hub_utils` (in training scripts, for
example) fails.

This was removed during the refactor in df90f0c.
2022-08-22 07:21:29 +02:00
Pedro Cuenca
2b7669183e Update README for 0.2.3 release (#225)
* Update README for 0.2.3 release:

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-08-21 23:59:46 +02:00
Patrick von Platen
e7b69cbe19 [Safety Checker] Lower adjustment value 2022-08-21 15:29:10 +00:00
Anton Lozhkov
3cde81408f Add incompatibility note for SD (temporary) 2022-08-20 12:02:32 +02:00
Pedro Cuenca
71ba8aec55 Pipeline to device (#210)
* Implement `pipeline.to(device)`

* DiffusionPipeline.to() decides best device on None.

* Breaking change: torch_device removed from __call__

`pipeline.to()` now has PyTorch semantics.

* Use kwargs and deprecation notice

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Apply torch_device compatibility to all pipelines.

* style

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: anton-l <anton@huggingface.co>
2022-08-19 18:39:08 +02:00
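After this change, device placement follows PyTorch semantics; a sketch (model id illustrative):

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
pipe = pipe.to("cuda")  # moves every module in the pipeline, like nn.Module.to()

# The old torch_device argument to __call__ is deprecated in favor of .to():
image = pipe("a photo of an astronaut riding a horse").images[0]
```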
Suraj Patil
89e9521048 fix safety check (#217) 2022-08-19 18:04:58 +05:30
Suraj Patil
65ea7d6b62 Add safety module (#213)
* add SafetyChecker

* better name, fix checker

* add checker in main init

* remove from main init

* update logic to detect pipeline module

* style

* handle all safety logic in safety checker

* draw text

* can't draw

* small fixes

* treat special care as nsfw

* remove commented lines

* update safety checker
2022-08-19 15:24:03 +05:30
Anton Lozhkov
e30e1b89d0 Support one-string prompts and custom image size in LDM (#212)
* Support one-string prompts in LDM

* Add other features from SD too
2022-08-18 17:55:15 +02:00
Anton Lozhkov
df90f0ce98 Add is_torch_available, is_flax_available (#204)
* Add is_<framework>_available, refactor import utils

* deps

* quality
2022-08-17 16:47:20 +02:00
Anton Lozhkov
ed22b4fd07 Revive make quality (#203)
* Revive Make utils

* Add datasets for training too
2022-08-17 15:22:04 +02:00
Suraj Patil
f9522d825c [StableDiffusionPipeline] use default params in __call__ (#196)
use default params in __call__
2022-08-17 17:06:12 +05:30
Suraj Patil
80e0c8ba9e fix stable-diffusion code snippet format. 2022-08-17 14:15:00 +05:30
Suraj Patil
3cd20d59d7 fix test_from_pretrained_hub_pass_model (#194)
init pipeline once
2022-08-17 13:58:18 +05:30
apolinario
e36a36788e Match params with official Stable Diffusion lib (#192)
https://github.com/CompVis/stable-diffusion
2022-08-16 22:52:22 +02:00
Patrick von Platen
4b02f53e62 Release: v0.2.2 2022-08-16 19:30:08 +02:00
Patrick von Platen
27d11a0094 [K-LMS Scheduler] fix import (#191) 2022-08-16 19:25:45 +02:00
Patrick von Platen
554e67cb06 Update README.md 2022-08-16 19:12:25 +02:00
Patrick von Platen
45cb500667 Update README.md 2022-08-16 19:10:35 +02:00
Patrick von Platen
8c78e73fef Update README.md 2022-08-16 19:09:09 +02:00
anton-l
c1b378db69 Release: v0.2.1 2022-08-16 18:22:45 +02:00
Patrick von Platen
b50a9ae383 [Stable diffusion] Hot fix 2022-08-16 16:17:32 +00:00
anton-l
ea2e177c1d Release: v0.2.0 2022-08-16 17:39:50 +02:00
Pedro Cuenca
513f1fbfb0 Allow passing non-default modules to pipeline (#188)
* Allow passing non-default modules to pipeline.

Override modules are recognized and replaced in the pipeline. However,
no check is performed about mismatched classes yet. This is because the
override module is already instantiated and we have no library or class
name to compare against.

* up

* add test

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-08-16 17:25:25 +02:00
Anton Lozhkov
d7b692083c Add K-LMS scheduler from k-diffusion (#185)
* test LMS with LDM

* test LMS with LDM

* Interchangeable sigma and timestep. Added dummy objects

* Debug

* cuda generator

* Fix derivatives

* Update tests

* Rename Lms->LMS
2022-08-16 16:48:35 +02:00
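A sketch of swapping the new scheduler into a pipeline (the beta values shown are the ones commonly used with Stable Diffusion, stated here as an assumption):

```python
from diffusers import LMSDiscreteScheduler, StableDiffusionPipeline

lms = LMSDiscreteScheduler(
    beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear"
)

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", scheduler=lms
)
image = pipe("a photo of an astronaut riding a horse").images[0]
```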
Patrick von Platen
9070c394aa [Naming] correct config naming of DDIM pipeline (#187) 2022-08-16 15:50:36 +02:00
Patrick von Platen
194ed794d8 [PNDM] Stable diffusion (#186)
* [PNDM] Stable diffusion

* finish
2022-08-16 15:33:13 +02:00
Patrick von Platen
051b34635f [Half precision] Make sure half-precision is correct (#182)
* [Half precision] Make sure half-precision is correct

* Update src/diffusers/models/unet_2d.py

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py

* correct some tests

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* finalize

* finish

Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-08-16 10:42:24 +02:00
Suraj Patil
5f25818a0f allow custom height, width in StableDiffusionPipeline (#179)
* allow custom height width

* raise if height width are not mul of 8
2022-08-15 10:28:03 +05:30
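A sketch of the new arguments (values illustrative):

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4").to("cuda")

# height and width must each be a multiple of 8, otherwise the pipeline raises.
image = pipe("a photo of a castle", height=576, width=448).images[0]
```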
Suraj Patil
c25d8c905c add tests for stable diffusion pipeline (#178)
add tests for sd pipeline
2022-08-14 18:51:02 +05:30
Suraj Patil
5782e0393d Stable diffusion pipeline (#168)
* add stable diffusion pipeline

* get rid of multiple if/else

* batch_size is unused

* add type hints

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py

* fix some bugs

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-08-14 14:43:14 +02:00
Suraj Patil
92b6dbba1a [LDM pipeline] fix eta condition. (#171)
fix typo in condition
2022-08-13 12:32:01 +05:30
Suraj Patil
c72e343085 [PNDM in LDM pipeline] use inspect in pipeline instead of unused kwargs (#167)
use inspect instead of unused kwargs
2022-08-12 20:29:54 +05:30
Suraj Patil
3228eb1609 allow pndm scheduler to be used with ldm pipeline (#165) 2022-08-11 14:58:14 +05:30
Suraj Patil
c1488ff348 add scaled_linear schedule in PNDM and DDPM (#164) 2022-08-11 14:56:12 +05:30
Suraj Patil
b344c953a8 add attention up/down blocks for VAE (#161) 2022-08-10 16:38:32 +05:30
Anton Lozhkov
dd10da76a7 Add an alternative Karras et al. stochastic scheduler for VE models (#160)
* karras + VE, not flexible yet

* Fix inputs incompatibility with the original unet

* Roll back sigma scaling

* Apply suggestions from code review

* Old comment

* Fix doc
2022-08-09 15:58:30 +02:00
Suraj Patil
543ee1e092 [LDMTextToImagePipeline] make text model generic (#162)
make text model generic
2022-08-09 19:16:17 +05:30
Pedro Cuenca
75b6c16567 Minor typos (#159) 2022-08-06 21:59:41 +02:00
Pedro Cuenca
c4ae7c2421 Fix arg key for dataset_name in create_model_card (#158)
Fix arg key for `dataset_name`

The example training script was changed in #152, but not
`create_model_card`.
2022-08-06 21:59:12 +02:00
Suraj Patil
a2090375ca [VAE] fix the downsample block in Encoder. (#156)
* pass downsample_padding in encoder

* update tests
2022-08-06 17:36:07 +05:30
Suraj Patil
c4a3b09a36 [UNet2DConditionModel] add cross_attention_dim as an argument (#155)
add cross_attention_dim as an argument
2022-08-05 18:12:03 +05:30
Sugato Ray
616c3a42cb Added diffusers to conda-forge and updated README for installation instruction (#129)
add instruction to install with conda

Co-authored-by: Anton Lozhkov <anton@huggingface.co>
2022-08-03 16:46:23 +02:00
Omar Sanseviero
d23cf98769 Add issue templates for feature requests and bug reports (#153)
* Add issue template for feature requests and bug reports

* Update .github/ISSUE_TEMPLATE/config.yml

Co-authored-by: Anton Lozhkov <anton@huggingface.co>
2022-08-03 16:38:37 +02:00
Anton Lozhkov
eeb9264acd Support training with a local image folder (#152)
* Support training with an image folder

* style
2022-08-03 15:25:00 +02:00
Eyal Mazuz
b6447fa87e Allow DDPM scheduler to use model's predicated variance (#132)
* Extended the ability of the ddpm scheduler
to utilize models that also predict the variance.

* Update src/diffusers/schedulers/scheduling_ddpm.py

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
2022-08-03 12:40:04 +02:00
anton-l
b6cadcef98 Release: 0.1.3 2022-07-28 10:27:32 +02:00
Patrick von Platen
3100bc9670 [Vae and AutoencoderKL] Final clean of LDM checkpoints (#137)
* [Vae and AutoencoderKL clean]

* save intermediate finished work

* more progress

* more progress

* finish modeling code

* save intermediate

* finish

* Correct tests
2022-07-28 10:14:34 +02:00
Anton Lozhkov
e05f03ae41 Disable test_ddpm_ddim_equality_batched until resolved (#142)
disable test_ddpm_ddim_equality_batched
2022-07-28 09:29:29 +02:00
Anton Lozhkov
6c15636b0b Add training and batched inference test for DDPM vs DDIM (#140)
* Add torch_device to the VE pipeline

* Mark the training test with slow
2022-07-27 15:01:56 +02:00
r8bhavneet
89f2011ced Update README.md (#134)
Hey, I really liked the project and was reading through the README.md file when I came across some spelling and grammatical errors that you might have missed while editing the documentation. It would be a great opportunity for me to contribute to this project. Thank you.

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-07-25 12:17:26 +02:00
Mario Šaško
0f8547c2af Add syntax highlighting to code blocks in README (#131) 2022-07-24 16:20:56 +02:00
Omar Sanseviero
343180c2cf Fix manifest to include model card (#136)
Update MANIFEST.in
2022-07-24 16:18:59 +02:00
Yue Zhao
27782bc18e fix some errors and rewrite sentences in README.md (#133)
* Update README.md

line 23, 24 and 25: Remove "that" because "that" is unnecessary in these three sentences.
line 33: Rewrite this sentence and make it more straightforward.
line 34: The first sentence is incomplete.
line 117: "focusses" -> "focuses"
line 118: "continous" -> "continuous"
line 119: "consise" -> "concise"

* Update README.md
2022-07-24 12:02:39 +02:00
Anton Lozhkov
cde0ed162a Add a step about accelerate config to the examples (#130) 2022-07-22 13:48:26 +02:00
Omar Sanseviero
570d3f1eb9 Expose LR schedulers (#80)
* Expose schedulers

* Update __init__.py

Co-authored-by: Anton Lozhkov <anton@huggingface.co>
2022-07-22 13:29:42 +02:00
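The exposed helper wraps the familiar warmup schedules; a sketch (hyperparameters illustrative):

```python
import torch
from diffusers.optimization import get_scheduler

model = torch.nn.Linear(16, 16)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

lr_scheduler = get_scheduler(
    "cosine",
    optimizer=optimizer,
    num_warmup_steps=500,
    num_training_steps=10_000,
)

# Inside the training loop, step it once per optimizer update:
optimizer.step()
lr_scheduler.step()
```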
John Haugeland
85244d4a59 Documentation cross-reference (#127)
In https://github.com/huggingface/diffusers/issues/124 I incorrectly suggested that the image set creation process was undocumented. In reality, I just hadn't located it. @patrickvonplaten did so for me.

This PR places a hotlink so that people like me can be shoehorned over to where they need to be.
2022-07-21 21:46:15 +02:00
David Marx
1a84bd2a0f fixed URLs broken by bdecc3 folder move (#77) 2022-07-21 20:20:04 +02:00
Manuel Romero
3247eadde4 Fix var name (#119) 2022-07-21 19:33:30 +02:00
Anton Lozhkov
a487b5095a Update images 2022-07-21 17:11:36 +02:00
Patrick von Platen
04fa7baea8 Update README.md 2022-07-21 16:54:55 +02:00
apolinario
9a04a8a6a8 Update README.md with examples (#121)
Update README.md
2022-07-21 16:53:59 +02:00
Omar Sanseviero
a05a5fb9ba Update main README (#120)
* Update README.md

* Update README.md
2022-07-21 16:43:47 +02:00
Patrick von Platen
71faf347fd Update README.md 2022-07-21 16:25:17 +02:00
Patrick von Platen
2f1f7b01d6 Release: 0.1.2 2022-07-21 15:03:11 +02:00
Patrick von Platen
5311f564ed Final fixes (#118)
final fixes before release
2022-07-21 14:36:43 +02:00
Lysandre Debut
3b7f514a1c Beef up quickstart (#117) 2022-07-21 13:53:31 +02:00
anton-l
7c0a861894 Add torch_device to the VE pipeline 2022-07-21 13:53:09 +02:00
anton-l
a73ae3e5b0 Better default for AdamW 2022-07-21 13:36:16 +02:00
anton-l
06505ba4b4 Less eval steps during training 2022-07-21 11:47:40 +02:00
anton-l
13457002c0 Merge branch 'main' of github.com:huggingface/diffusers 2022-07-21 11:07:41 +02:00
anton-l
302b86bd0b Adapt training to the new UNet API 2022-07-21 11:07:21 +02:00
Lysandre Debut
d87d5edf66 README improvements: credits and roadmap (#116)
* Typos

* Credits and roadmap

* Second version
2022-07-21 10:06:16 +02:00
Patrick von Platen
e795a4c6f8 Fix import metadatalib 2022-07-21 04:56:46 +02:00
Patrick von Platen
4293b9f54f Release: 0.1.1 2022-07-21 04:51:37 +02:00
Patrick von Platen
0e5f2daee7 Release: 0.1.0 2022-07-21 02:35:27 +00:00
Patrick von Platen
416749ff96 modelcards and tensorboard are optional 2022-07-21 02:30:55 +00:00
Patrick von Platen
b1b99b59ac some more cleaning 2022-07-21 02:11:28 +00:00
Patrick von Platen
606ac57e50 finish pndm sampler 2022-07-21 01:51:58 +00:00
Patrick von Platen
394243ce98 finish pndm sampler 2022-07-21 01:50:12 +00:00
Nathan Lambert
fe98574622 fixing tests for numpy and making them deterministic (ddpm) (#106)
* work in progress, fixing tests for numpy and making them deterministic

* make tests pass via pytorch

* make pytorch == numpy test cleaner

* change default tensor format pndm --> pt
2022-07-21 02:24:59 +02:00
Patrick von Platen
c5c9399610 correct paths for tests 2022-07-21 00:20:10 +00:00
Patrick von Platen
836f3f35c2 Rename pipelines (#115)
up
2022-07-21 01:39:46 +02:00
Patrick von Platen
9c3820d05a Big Model Renaming (#109)
* up

* change model name

* renaming

* more changes

* up

* up

* up

* save checkpoint

* finish api / naming

* finish config renaming

* rename all weights

* finish really
2022-07-21 01:30:45 +02:00
Patrick von Platen
13e37cabe0 Merge branch 'main' of https://github.com/huggingface/diffusers into main 2022-07-20 21:02:43 +00:00
Patrick von Platen
760dcb1ffc fix score sde ve scheduler 2022-07-20 21:02:40 +00:00
Nathan Lambert
889aa6008c PNDM API Updates, Tests Cleaning (#103)
* organize PNDM tests, begin API change

* clean timestep API PNDM

* update pipeline PNDM

* fix typo

* API clean round 2

* small nit
2022-07-20 12:47:39 -07:00
Anton Lozhkov
76f9b52289 Update the training examples (#102)
* New unet, gradient accumulation

* Save every n epochs

* Remove find_unused_params, hooray!

* Update examples

* Switch to DDPM completely
2022-07-20 19:51:23 +02:00
anton-l
6b275fca49 make PIL the default output type 2022-07-20 18:28:22 +02:00
Anton Lozhkov
1b42732ced PIL-ify the pipeline outputs (#111) 2022-07-20 18:09:51 +02:00
anton-l
9e9d2dbc59 Fix np.abs 2022-07-20 17:38:03 +02:00
Anton Lozhkov
8b4371f70f Refactor pipeline outputs, return LDM guidance_scale (#110) 2022-07-20 17:28:06 +02:00
Patrick von Platen
919e27d357 re-add super.__init__ for all PyTorch modules 2022-07-20 13:49:00 +00:00
Sylvain Gugger
ad9d252596 Add a decorator for register_to_config (#108)
* Add a decorator for register_to_config

* All models and test
2022-07-20 15:42:50 +02:00
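A sketch of the decorator pattern this PR introduces (class and field names are illustrative):

```python
from diffusers.configuration_utils import ConfigMixin, register_to_config

class MyScheduler(ConfigMixin):
    config_name = "scheduler_config.json"

    @register_to_config
    def __init__(self, num_train_timesteps: int = 1000, beta_start: float = 0.0001):
        # The decorator captures the init arguments and registers them on
        # self.config, so save_config() / from_config() can round-trip them.
        self.num_train_timesteps = num_train_timesteps
        self.beta_start = beta_start
```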
Patrick von Platen
7e11392dfd fix ddpm scheduler 2022-07-19 23:47:04 +00:00
Patrick von Platen
1f49a343b5 hotfix 2022-07-19 23:14:03 +00:00
Patrick von Platen
936cd08488 improve loading a bit 2022-07-19 22:02:54 +00:00
Patrick von Platen
3a32b8c916 align API 2022-07-19 16:54:10 +00:00
Patrick von Platen
c3a15437f8 automatic logits verification >> visual logits verification 2022-07-19 16:14:17 +00:00
Patrick von Platen
8c31925b3b Get diffusers ready 🚀🚀🚀 (#101)
* big purge

* more fixes

* finish for now
2022-07-19 18:02:12 +02:00
Arthur
33344ed916 logits for google and compvis models (#100)
* initial commit

* quick fix
2022-07-19 18:02:04 +02:00
anton-l
7353b74ec2 Merge remote-tracking branch 'origin/main' 2022-07-19 17:12:48 +02:00
anton-l
44bb38fd8b Include model_card_template.md with the package 2022-07-19 17:07:54 +02:00
Patrick von Platen
2ea64a08ed Prepare code for big cleaning 2022-07-19 15:07:46 +00:00
Patrick von Platen
37fe8e00b2 upload 2022-07-19 15:05:40 +00:00
anton-l
0ea78f0d3b Include MANIFEST.in to package the modelcard template 2022-07-19 17:01:16 +02:00
anton-l
0e5a99bb5a Quick hacks for push_to_hub from notebooks - follow-up 2022-07-19 16:52:39 +02:00
anton-l
e3c982ee29 Quick hacks for push_to_hub from notebooks 2022-07-19 16:41:13 +02:00
anton-l
ab00f5d3e1 Update model names for CompVis and google 2022-07-19 15:13:22 +02:00
Patrick von Platen
3f0b44b322 improve ddpm conversion script 2022-07-19 11:24:13 +00:00
Patrick von Platen
cb90fd69b4 upload code 2022-07-19 10:34:52 +00:00
Arthur
f794432e81 Conversion script for ncsnpp models (#98)
* added kwargs for easier initialisation of a random model

* initial commit for conversion script

* current debug script

* update

* Update

* done

* add updated debug conversion script

* style

* clean conversion script
2022-07-19 12:19:36 +02:00
Nathan Lambert
182b164f32 Fix VE SDE tests, clean API (#95)
* clean ddpm api to match ddim

* correct ve sde class

* update pipeline API for ve sde

* make style

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-07-19 12:12:45 +02:00
Patrick von Platen
8b42c7cecc make all tests pass 2022-07-19 00:24:10 +00:00
Patrick von Platen
66d5a1804c small fixes 2022-07-19 00:08:41 +00:00
Patrick von Platen
d5acb4110a Finalize ldm (#96)
* upload

* make checkpoint work

* finalize
2022-07-19 02:02:23 +02:00
Lysandre Debut
6cabc599a2 DDPM Conversion (#94)
* DDPM

* Fixes

* Edit tests
2022-07-19 01:59:58 +02:00
anton-l
36b459f6e6 Make tqdm calls notebook-compatible - follow-up 2022-07-18 18:43:18 +02:00
anton-l
1820024005 Make tqdm calls notebook-compatible 2022-07-18 18:39:39 +02:00
anton-l
ffe7b93b60 DDIM resolution->image_size 2022-07-18 12:23:27 +02:00
Patrick von Platen
f82ebb9a03 fix some model tests 2022-07-18 01:29:40 +00:00
Nathan Lambert
63c68d979a VE/VP SDE updates (#90)
* improve comments for sde_ve scheduler, init tests

* more comments, tweaking pipelines

* timesteps --> num_training_timesteps, some comments

* merge cpu test, add m1 data

* fix scheduler tests with num_train_timesteps

* make np compatible, add tests for sde ve

* minor default variable fixes

* make style and fix-copies

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-07-18 03:08:08 +02:00
Patrick von Platen
ba3c9a9a3a [SDE] Merge to unconditional model (#89)
* up

* more

* uP

* make dummy test pass

* save intermediate

* p

* p

* finish

* finish

* finish
2022-07-18 02:52:37 +02:00
Patrick von Platen
b5c684f042 fix flaky cpu test 2022-07-15 19:49:05 +00:00
Patrick von Platen
da8e87e201 use real checkpoint 2022-07-15 19:13:39 +00:00
Patrick von Platen
43bbc78123 adapt test 2022-07-15 18:37:15 +00:00
Patrick von Platen
1c14ce9509 fix local subfolder 2022-07-15 17:55:20 +00:00
Patrick von Platen
29628acbec renaming of api 2022-07-15 17:29:14 +00:00
Patrick von Platen
9d2fc6b535 some fixes 2022-07-15 17:22:28 +00:00
Patrick von Platen
3f1e95928e Fix conversion script 2022-07-15 17:00:41 +00:00
Lysandre Debut
87060e6a9c LDM conversion script (#92)
Conversion script

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-07-15 17:29:34 +02:00
Patrick von Platen
e5f3415fbd Update README.md 2022-07-15 17:28:04 +02:00
Patrick von Platen
f5ca5af6ce add to readme 2022-07-15 14:06:45 +00:00
Patrick von Platen
2ac19ff190 Merge branch 'main' of https://github.com/huggingface/diffusers into main 2022-07-15 14:03:11 +00:00
Patrick von Platen
badc5517ff fix small bug 2022-07-15 14:03:08 +00:00
Patrick von Platen
a8fc1560c6 Update README.md 2022-07-15 15:06:38 +02:00
Patrick von Platen
f448360bd0 Finish scheduler API (#91)
* finish

* up
2022-07-15 15:04:01 +02:00
Patrick von Platen
97e1e3ba76 finalize model API 2022-07-15 10:48:30 +00:00
Nathan Lambert
dacabaa47f readme vp typo fix 2022-07-14 14:39:19 -07:00
Patrick von Platen
6d5ef87e6b [DDPM] Make DDPM work (#88)
* up

* finish

* uP
2022-07-14 19:46:04 +02:00
Patrick von Platen
e7fe901e5e save intermediate (#87)
* save intermediate

* up

* up
2022-07-14 12:29:06 +02:00
Nathan Lambert
c3d78cd306 Docs (#45)
* first pass at docs structure

* minor reformatting, add github actions for docs

* populate docs (primarily from README, some writing)
2022-07-13 08:42:05 -07:00
Patrick von Platen
2a69c0b7b8 The big purge -> remove everything except vision for now 2022-07-13 11:42:40 +00:00
Patrick von Platen
c8c0c0e846 quick fix 2022-07-13 10:28:46 +00:00
Patrick von Platen
5e12d5c691 Clean uncond unet more (#85)
* up

* finished clean up

* remove @
2022-07-13 12:21:11 +02:00
Patrick von Platen
8aed37c1bd some more refactor 2022-07-12 19:35:47 +00:00
Patrick von Platen
06c79730d0 Add unconditional image generation (#79)
* uP

* finish downsampling layers

* finish major refactor

* remove bogus file
2022-07-12 18:34:41 +02:00
Patrick von Platen
ea8d58ea91 [MidBlock] Fix mid block (#78)
* upload files

* finish
2022-07-05 15:05:41 +02:00
Patrick von Platen
c352faeae3 Add MidBlock to Grad-TTS (#74)
Finish
2022-07-04 15:06:00 +02:00
Anton Lozhkov
107986639d Fix attention for Glide (#75) 2022-07-04 14:55:56 +02:00
Anton Lozhkov
d9316bf8bc Fix mutable proj_out weight in the Attention layer (#73)
* Catch unused params in DDP

* Fix proj_out, add test
2022-07-04 12:36:37 +02:00
Tanishq Abraham
3abf4bc439 EMA model stepping updated to keep track of current step (#64)
ema model stepping done automatically now
2022-07-04 11:53:15 +02:00
Patrick von Platen
94566e6dd8 update mid block (#70)
* update mid block

* finish mid block
2022-07-04 11:52:22 +02:00
Suraj Patil
4e2674934f add tests for 1D Up/Downsample blocks (#72) 2022-07-04 11:41:04 +02:00
Suraj Patil
53a42d0a0c Simplify FirUp/down, unet sde (#71)
* refactor fir up/down sample

* remove variance scaling

* remove variance scaling from unet sde

* refactor Linear

* style

* actually remove variance scaling

* add back upsample_2d, downsample_2d

* style

* fix FirUpsample2D
2022-07-04 11:23:19 +02:00
Patrick von Platen
321f9791d6 Downsample / Upsample - clean to 1D and 2D (#68)
* make unet rl work

* upload files / code

* upload files

* make style correct

* finish
2022-07-03 22:26:33 +02:00
Patrick von Platen
c524244f49 [Resnet] Remove unnecessary functions / classes (#67)
Remove unnecessary functions / classes
2022-07-03 19:17:25 +02:00
Patrick von Platen
d224c6373f Resnet => Resnet2D (#66) 2022-07-03 18:58:58 +02:00
Patrick von Platen
44705a648b [ResNet] Refactor resnet from VAE (#65) 2022-07-03 18:43:43 +02:00
Patrick von Platen
a7b0047e0f some clean up 2022-07-01 18:14:46 +00:00
Patrick von Platen
dcb9070bc2 quick fix to include non-fir kernels for sde-vp 2022-07-01 17:56:59 +00:00
Patrick von Platen
11667d08d3 Merge pull request #59 from huggingface/fuse_final_resnets
[Resnet] Merge final 2D resnet
2022-07-01 19:32:36 +02:00
Patrick von Platen
221de0edee correct 2022-07-01 17:28:29 +00:00
Patrick von Platen
0eac7bd682 small fix 2022-07-01 17:20:30 +00:00
Patrick von Platen
1e7e23a9c6 Merge branch 'fuse_final_resnets' of https://github.com/huggingface/diffusers into fuse_final_resnets 2022-07-01 16:42:26 +00:00
Patrick von Platen
b8415bb480 remove bogus files 2022-07-01 16:42:24 +00:00
Patrick von Platen
3a15afacab delete bogus files 2022-07-01 16:20:46 +00:00
Patrick von Platen
571e4062e5 merge from master 2022-07-01 16:20:05 +00:00
Patrick von Platen
14bd3567b0 update 2022-07-01 15:45:40 +00:00
Suraj Patil
c2bc59d2b1 Merge pull request #63 from huggingface/bddm-conversion-script
add conversion script for BDDMPipeline
2022-07-01 17:45:10 +02:00
patil-suraj
ab946575b1 add conversion script for BDDMPipeline 2022-07-01 17:44:38 +02:00
Patrick von Platen
1468f754e0 finish resnet 2022-07-01 15:40:54 +00:00
Patrick von Platen
fa7443c899 finish resnet 2022-07-01 15:39:57 +00:00
Patrick von Platen
8d7771d8b0 make work with first resnet 2022-07-01 15:24:26 +00:00
Suraj Patil
a1b5ef5ddc Merge pull request #62 from huggingface/fix-ldm-uncond
fix ldm uncond pipeline
2022-07-01 17:20:26 +02:00
patil-suraj
f26d3011c7 fix ldm uncond pipeline 2022-07-01 17:19:26 +02:00
Patrick von Platen
9da575d63c correct more 2022-07-01 17:07:41 +02:00
Suraj Patil
979c48be04 Merge pull request #61 from huggingface/conversion-scripts
add conversion script for LatentDiffusionUncondPipeline
2022-07-01 16:54:20 +02:00
patil-suraj
099d3eab49 add conversion script for LatentDiffusionUncondPipeline 2022-07-01 16:53:41 +02:00
Patrick von Platen
61dc657461 more fixes 2022-07-01 14:35:14 +00:00
patil-suraj
23904d54d0 Merge branch 'main' of https://github.com/huggingface/diffusers into conversion-scripts 2022-07-01 15:18:16 +02:00
Suraj Patil
c691bb2f42 Merge pull request #60 from huggingface/add-fir-back
fix unet sde for vp model.
2022-07-01 14:01:35 +02:00
patil-suraj
4c293e0e1b fix bias when using fir up/down sample 2022-07-01 13:54:33 +02:00
patil-suraj
516cb9e7f8 fix Upsample 2022-07-01 12:58:50 +02:00
patil-suraj
60a981343e actually fix the typo 2022-07-01 12:55:30 +02:00
patil-suraj
db5a05742e fix typo 2022-07-01 12:54:47 +02:00
patil-suraj
0dbc4779c8 add centered back 2022-07-01 12:50:34 +02:00
patil-suraj
5018abff6e add fir=False back 2022-07-01 12:01:59 +02:00
Patrick von Platen
f1aade0596 up 2022-07-01 09:04:18 +00:00
Patrick von Platen
abedfb08f1 Merge pull request #57 from huggingface/big_clean_up
[Clean up] Clean up unused code
2022-07-01 00:44:24 +02:00
Patrick von Platen
61ea57c5a7 clean up lots of dead code 2022-06-30 22:42:06 +00:00
Patrick von Platen
810c0e4fda Merge pull request #56 from huggingface/correct_tests
Slighly increase tolerance for tests
2022-07-01 00:29:33 +02:00
Patrick von Platen
db7ec72dd8 up 2022-06-30 22:29:18 +00:00
Patrick von Platen
52e0c5b294 update 2022-06-30 22:28:28 +00:00
Patrick von Platen
fb188cd3f5 Merge pull request #55 from huggingface/refactor_glide
[Resnet] Merge glide resnet into general resnet
2022-07-01 00:26:05 +02:00
Patrick von Platen
efe1e60e12 merge glide into resnets 2022-06-30 22:24:22 +00:00
Patrick von Platen
fd6f93b2b1 all glide passes 2022-06-30 22:09:49 +00:00
Patrick von Platen
db934c6750 fix more tests 2022-06-30 21:47:40 +00:00
Patrick von Platen
185347e411 up 2022-06-30 17:01:06 +00:00
Patrick von Platen
c1c4dea98d correct tests ncsnpp 2022-06-30 15:54:00 +00:00
Patrick von Platen
f4cd5a20d0 Merge pull request #53 from huggingface/more_aggressive_tests
[Testing] Make tests more aggressive
2022-06-30 16:55:06 +02:00
Patrick von Platen
3dbd6a8f4d up 2022-06-30 14:54:31 +00:00
patil-suraj
c54f36f087 style 2022-06-30 13:52:16 +02:00
Suraj Patil
8b0bc596de Merge pull request #52 from huggingface/clean-unet-sde
Clean UNetNCSNpp
2022-06-30 13:34:42 +02:00
patil-suraj
f35387b33f clean Linear 2022-06-30 13:31:47 +02:00
patil-suraj
3e2cff4da2 better names and more cleanup 2022-06-30 13:26:05 +02:00
patil-suraj
639b861129 get rid of the custom conv2d layer for up/down sampling 2022-06-30 13:18:09 +02:00
patil-suraj
663393e28a remove fir option 2022-06-30 12:33:52 +02:00
patil-suraj
c50d997591 remove unused args 2022-06-30 12:29:45 +02:00
patil-suraj
f1cb807496 remove get_act 2022-06-30 12:24:47 +02:00
patil-suraj
13ac40ed8e style 2022-06-30 12:21:04 +02:00
patil-suraj
ebe683432f cleanup conv1x1 and conv3x3 2022-06-30 12:20:49 +02:00
patil-suraj
b897008122 more cleanup 2022-06-30 12:01:27 +02:00
patil-suraj
8830af1168 get rid of ResnetBlockDDPMpp and related functions 2022-06-30 11:54:32 +02:00
patil-suraj
81e7144783 remove naive up/down sample 2022-06-30 11:46:01 +02:00
patil-suraj
c9bd4d4338 remove if fir from resnet block and upsample, downsample for sde unet 2022-06-30 11:41:06 +02:00
anton-l
7e0fd19ffe Merge remote-tracking branch 'origin/main' 2022-06-30 10:21:51 +02:00
anton-l
21aac1aca9 fix setup 2022-06-30 10:21:37 +02:00
Patrick von Platen
b65eb377dd Merge pull request #46 from huggingface/merge_ldm_resnet
[ResNet Refactor] Merge ldm into resnet
2022-06-29 19:34:13 +02:00
Patrick von Platen
26ce60c46d up 2022-06-29 17:30:48 +00:00
Patrick von Platen
358531be9d up 2022-06-29 17:30:35 +00:00
patil-suraj
66ee73eebc refactor up/down sample blocks in unet_rl 2022-06-29 17:17:00 +02:00
patil-suraj
32b93da875 begin conversion script 2022-06-29 17:10:08 +02:00
Patrick von Platen
597b7ae2fb remove wrong import 2022-06-29 14:40:46 +00:00
Patrick von Platen
519bd41ff3 make style 2022-06-29 14:39:39 +00:00
Patrick von Platen
eb90d3be13 Merge pull request #44 from huggingface/unify_resnet
Unify resnet [GradTTS & Unet.py]
2022-06-29 16:37:13 +02:00
Patrick von Platen
df2e145e5f Merge branch 'main' of https://github.com/huggingface/diffusers into unify_resnet 2022-06-29 14:36:58 +00:00
Patrick von Platen
046dc43075 make style 2022-06-29 14:36:35 +00:00
Patrick von Platen
c174bcf4bf finish 2022-06-29 14:35:18 +00:00
Patrick von Platen
466214d2d6 Remove bogus file 2022-06-29 14:29:35 +00:00
Patrick von Platen
4e125f72ab Remove bogus file 2022-06-29 14:28:51 +00:00
Patrick von Platen
0926dc2418 save intermediate grad tts 2022-06-29 14:28:40 +00:00
Anton Lozhkov
8cba133f36 Add the model card template (#43)
* add a metrics logger

* fix LatentDiffusionUncondPipeline

* add VQModel in init

* add image logging to tensorboard

* switch manual templates to the modelcards package

* hide ldm example

Co-authored-by: patil-suraj <surajp815@gmail.com>
2022-06-29 15:37:23 +02:00
Suraj Patil
f47066f707 Merge pull request #42 from huggingface/ldm-uncond-text
add test for ldm uncond
2022-06-29 15:34:29 +02:00
patil-suraj
859ffea2b1 add test for ldm uncond 2022-06-29 15:25:51 +02:00
patil-suraj
65788e46ed add scaled_linear schedule in DDIM 2022-06-29 15:12:58 +02:00
Suraj Patil
eceeb97242 move the VAE models in src/models
move the VAE models in src/models
2022-06-29 13:59:41 +02:00
patil-suraj
333a8da678 add tests for AutoencoderKL 2022-06-29 13:52:04 +02:00
Patrick von Platen
814133ec9c Merge pull request #41 from huggingface/fix_comments
[Resnets] Fix comments
2022-06-29 13:47:06 +02:00
Patrick von Platen
f15ab901a0 fix comments 2022-06-29 11:46:23 +00:00
Patrick von Platen
d1f2e3e47b up 2022-06-29 11:43:30 +00:00
Patrick von Platen
1899457b24 Merge pull request #40 from huggingface/start_resnet_unificiation
resnet in one file
2022-06-29 12:47:07 +02:00
Patrick von Platen
ebf3717c37 resnet in one file 2022-06-29 10:46:29 +00:00
patil-suraj
976173a4bf style 2022-06-29 12:34:28 +02:00
patil-suraj
bae04ea9d8 add test for VQModel 2022-06-29 12:34:24 +02:00
patil-suraj
0b7daa6de9 add forward for vq model 2022-06-29 11:56:19 +02:00
patil-suraj
99568c5a39 cleanup vae file 2022-06-29 11:53:58 +02:00
patil-suraj
2ac9b02609 remove AutoencoderKL from pipe __init__ 2022-06-29 11:43:04 +02:00
patil-suraj
17e5b4921a remove vae from ldm uncond pipe 2022-06-29 11:38:48 +02:00
patil-suraj
36e1893c6f remove vae from ldm pipeline 2022-06-29 11:38:38 +02:00
patil-suraj
4d1536bb2e add vae model 2022-06-29 11:38:27 +02:00
Patrick von Platen
e5d9baf0fe Merge pull request #38 from huggingface/one_attentino_module
Unify attention modules
2022-06-29 01:10:33 +02:00
Patrick von Platen
c482d7bd4f some clean up 2022-06-28 23:09:50 +00:00
Patrick von Platen
e47c97a451 no inference mode doesn't always work 2022-06-28 23:05:08 +00:00
Patrick von Platen
740326d2a2 Update README.md 2022-06-29 01:01:41 +02:00
Patrick von Platen
31d1f3c8c0 final fix 2022-06-28 22:59:21 +00:00
Patrick von Platen
635da72374 one attention module only 2022-06-28 22:41:39 +00:00
Patrick von Platen
79db3eb6ca fix tests 2022-06-28 17:36:56 +00:00
Patrick von Platen
e372767c4d Merge pull request #37 from huggingface/merg_unet_attn_into_glide
merge unet attention into glide attention
2022-06-28 19:33:06 +02:00
Patrick von Platen
c45fd7498c merge unet attention into glide attention 2022-06-28 17:31:44 +00:00
Patrick von Platen
9dccc7dc42 refactor unet's attention 2022-06-28 17:19:53 +00:00
Patrick von Platen
52b3ff5eb9 unify ldm and glide attention 2022-06-28 11:29:16 +00:00
Patrick von Platen
fff981df2f all attentions collected 2022-06-28 11:08:51 +00:00
Patrick von Platen
a42b900d27 finish pos embeddings 2022-06-28 11:03:53 +00:00
Patrick von Platen
bdecc3cffd move pipelines into folders 2022-06-28 10:47:47 +00:00
Patrick von Platen
0efac0aac9 remove einops fully 2022-06-28 09:52:55 +00:00
Patrick von Platen
d74b804d05 Merge branch 'main' of https://github.com/huggingface/diffusers into main 2022-06-28 09:50:24 +00:00
Patrick von Platen
a859b1992b fix rl model tests 2022-06-28 09:50:21 +00:00
patil-suraj
22b63d155a add LatentDiffusionUncondPipeline 2022-06-28 11:45:48 +02:00
Nathan Lambert
85d991a12a Update README.md 2022-06-27 15:21:46 -04:00
Nathan Lambert
3a5c87055c add RL test, remove conds from RL model input 2022-06-27 14:48:15 -04:00
Patrick von Platen
a2b72faff7 Merge branch 'main' of https://github.com/huggingface/diffusers into main 2022-06-27 17:20:20 +00:00
Patrick von Platen
c9504bba10 add tests for sde ve vp models 2022-06-27 17:20:15 +00:00
patil-suraj
26ea58d4e1 Merge branch 'main' of https://github.com/huggingface/diffusers into main 2022-06-27 18:04:49 +02:00
patil-suraj
d1fb309381 consolidate downsample 2022-06-27 18:03:59 +02:00
patil-suraj
7b9b946cb2 add tests for downsample block 2022-06-27 18:03:51 +02:00
patil-suraj
b9de7172ba add Downsample 2022-06-27 18:03:41 +02:00
Patrick von Platen
4261c3aadf Make style 2022-06-27 15:59:04 +00:00
Patrick von Platen
932ce05d97 cancel einops 2022-06-27 15:39:41 +00:00
Patrick von Platen
4e08e0ca42 merge 2022-06-27 15:34:47 +00:00
Patrick von Platen
af6c143919 remove einops 2022-06-27 15:34:11 +00:00
anton-l
07ff0abff4 Glide and LDM training experiments 2022-06-27 17:25:59 +02:00
anton-l
3286dac6bf Merge remote-tracking branch 'origin/main' 2022-06-27 17:11:11 +02:00
anton-l
1cf7933ea2 Framework-agnostic timestep broadcasting 2022-06-27 17:11:01 +02:00
Patrick von Platen
d726857f7e remove einops from unet_ldm 2022-06-27 15:09:33 +00:00
patil-suraj
ee010726ab cleanup 2022-06-27 16:27:24 +02:00
patil-suraj
abcb25978a Merge branch 'main' of https://github.com/huggingface/diffusers into main 2022-06-27 16:25:52 +02:00
patil-suraj
183056f243 consolidate Upsample 2022-06-27 16:25:47 +02:00
patil-suraj
dc7c49e4e4 add tests for upsample blocks 2022-06-27 15:50:54 +02:00
Patrick von Platen
c991ffd4f0 Merge branch 'main' of https://github.com/huggingface/diffusers into main 2022-06-27 13:25:28 +00:00
Patrick von Platen
3986741b8b add another ldm fast test 2022-06-27 13:25:26 +00:00
anton-l
0e13d3293c Merge remote-tracking branch 'origin/main'
# Conflicts:
#	tests/test_modeling_utils.py
2022-06-27 15:23:33 +02:00
anton-l
3f9e3d8ad6 add EMA during training 2022-06-27 15:23:01 +02:00
patil-suraj
e13ee8b5b3 Merge branch 'main' of https://github.com/huggingface/diffusers into main 2022-06-27 14:48:22 +02:00
patil-suraj
0027993e91 add upsample and downsample blocks 2022-06-27 14:48:20 +02:00
Patrick von Platen
6846ee2ac4 finalize position embeddings 2022-06-27 11:43:08 +00:00
Patrick von Platen
c7a39d38ad refactor all sinus embeddings 2022-06-27 11:37:37 +00:00
Patrick von Platen
02a76c2c81 consolidate timestep embeds 2022-06-27 10:14:54 +00:00
patil-suraj
9b9afc9726 actually fix test_ldm_text2img_fast 2022-06-27 11:46:50 +02:00
patil-suraj
b7f0ce5b39 fix test_ldm_text2img_fast 2022-06-27 11:44:05 +02:00
patil-suraj
6921393ae2 add fast test for ldm 2022-06-27 11:42:52 +02:00
patil-suraj
17bf65e186 skip test_ldm_text2img for now 2022-06-27 11:39:19 +02:00
Patrick von Platen
014ebc594d Merge branch 'main' of https://github.com/huggingface/diffusers into main 2022-06-27 09:23:14 +00:00
Patrick von Platen
168e5b7ffa add embeddings 2022-06-27 09:23:10 +00:00
patil-suraj
43bf361a7a fix more LatentDiffusionPipeline 2022-06-27 11:10:10 +02:00
patil-suraj
8199f09c22 Merge branch 'main' of https://github.com/huggingface/diffusers into main 2022-06-27 11:09:22 +02:00
patil-suraj
7c120874be fix LatentDiffusionPipeline 2022-06-27 11:09:21 +02:00
Patrick von Platen
3562a3e661 Merge branch 'main' of https://github.com/huggingface/diffusers into main 2022-06-27 09:07:59 +00:00
Patrick von Platen
1a0331a78a fix some tests on gpu 2022-06-27 09:07:57 +00:00
patil-suraj
fbb103deb6 add the bert model in latent diffusion pipeline 2022-06-27 10:59:22 +02:00
Patrick von Platen
45a09bebf3 add first files 2022-06-27 10:46:39 +02:00
Patrick von Platen
0183bf13c7 Merge branch 'main' of https://github.com/huggingface/diffusers 2022-06-27 10:46:18 +02:00
Patrick von Platen
f6e8c8c09c add layers 2022-06-27 10:46:13 +02:00
Patrick von Platen
9a4d53a476 Update README.md 2022-06-27 02:09:49 +02:00
Patrick von Platen
ba264419f4 finish vp 2022-06-27 00:07:57 +00:00
Patrick von Platen
dc6d028654 add vp sampler 2022-06-26 23:41:55 +00:00
Patrick von Platen
d5c527a499 clean up 2022-06-26 11:02:57 +00:00
Patrick von Platen
135acd83af fix bug 2022-06-26 00:56:18 +00:00
Patrick von Platen
433cb3f801 clean up sde ve more 2022-06-25 18:25:43 +00:00
Patrick von Platen
de810814da finish first version sde ve 2022-06-25 02:50:42 +00:00
Patrick von Platen
bc2d586dcb remove more dependencies 2022-06-25 00:53:55 +00:00
Patrick von Platen
49a81f9f1a port first 1024 model 2022-06-24 19:44:17 +00:00
Patrick von Platen
78e99a997b adapt run.py 2022-06-24 18:48:26 +00:00
Patrick von Platen
fc67917a18 up 2022-06-24 17:35:19 +00:00
Patrick von Platen
7ca832cac9 save intermediate state score_sde 2022-06-24 17:20:25 +00:00
Patrick von Platen
b296f2d4f3 Merge branch 'main' of https://github.com/huggingface/diffusers into main 2022-06-24 15:55:29 +00:00
Patrick von Platen
ac796924df add score estimation model 2022-06-24 15:55:26 +00:00
Anton Lozhkov
3618d33039 Merge pull request #34 from kashif/patch-1
fixed typo in comment
2022-06-24 11:24:24 +02:00
Kashif Rasul
c3c1bdf8e2 fixed typo in comment 2022-06-24 10:44:52 +02:00
Patrick von Platen
bd9c9fbfbe Merge branch 'main' of https://github.com/huggingface/diffusers 2022-06-22 23:16:05 +02:00
Patrick von Platen
f941fc9917 refactor tts sampler a bit 2022-06-22 23:15:57 +02:00
Nathan Lambert
e29fc44635 Merge branch 'main' of https://github.com/huggingface/diffusers 2022-06-22 14:17:01 -04:00
Nathan Lambert
7b4e049eb0 adding properties, formatting 2022-06-22 14:16:53 -04:00
Patrick von Platen
4fbf8c815e Merge branch 'main' of https://github.com/huggingface/diffusers 2022-06-22 18:41:17 +02:00
Patrick von Platen
0244e2af4c correct diffusion test 2022-06-22 18:41:14 +02:00
Patrick von Platen
6e456b7a7a Update README.md 2022-06-22 18:38:32 +02:00
Anton Lozhkov
3a17775454 TODO: Add FID and KID metrics 2022-06-22 17:26:07 +02:00
Patrick von Platen
40e28e8bf4 only remove module if necessary 2022-06-22 13:42:09 +00:00
Patrick von Platen
fc596c8625 merge conflict 2022-06-22 13:41:01 +00:00
Patrick von Platen
48269070d2 more fixes 2022-06-22 13:40:08 +00:00
anton-l
c31736a4a4 Merge remote-tracking branch 'origin/main'
# Conflicts:
#	src/diffusers/pipelines/pipeline_glide.py
2022-06-22 15:17:10 +02:00
anton-l
7b43035bcb init text2im script 2022-06-22 15:15:54 +02:00
Patrick von Platen
e45dae7dc0 Merge branch 'main' of https://github.com/huggingface/diffusers into main 2022-06-22 12:38:44 +00:00
Patrick von Platen
d0032c6095 refactor naming 2022-06-22 12:38:36 +00:00
Anton Lozhkov
33abc79515 Update README.md 2022-06-22 13:52:45 +02:00
anton-l
0d80fe9327 Merge remote-tracking branch 'origin/main' 2022-06-22 13:38:24 +02:00
anton-l
848c86ca0a batched forward diffusion step 2022-06-22 13:38:14 +02:00
Patrick von Platen
320506c75a Merge pull request #27 from PROxZIMA/PROxZIMA-fix-todo-checklist-checkbox
Fix: TODO checklist checkbox
2022-06-21 22:22:35 +02:00
Patrick von Platen
30fbd39f0c Merge pull request #26 from maloyan/fix/scheduling_ddpm
fix alphas_cumprod
2022-06-21 22:17:18 +02:00
anton-l
62c2c547db Merge branch 'main' of github.com:huggingface/diffusers 2022-06-21 14:08:08 +02:00
anton-l
9e31c6a749 refactor GLIDE text2im pipeline, remove classifier_free_guidance 2022-06-21 14:07:58 +02:00
patil-suraj
e3bf932404 don't hardcode device in tests 2022-06-21 12:02:21 +02:00
patil-suraj
dc966cc447 Merge branch 'main' of https://github.com/huggingface/diffusers into main 2022-06-21 12:01:19 +02:00
patil-suraj
ac00dad756 add GLIDETextToImageUNetModelTests 2022-06-21 12:01:07 +02:00
anton-l
072d75196c move conversion_glide.py to scripts 2022-06-21 11:42:01 +02:00
anton-l
da4aebeda7 Merge remote-tracking branch 'origin/main' 2022-06-21 11:36:08 +02:00
anton-l
71289ba06e add lr schedule utils 2022-06-21 11:35:56 +02:00
patil-suraj
bfb4ddca35 Merge branch 'main' of https://github.com/huggingface/diffusers into main 2022-06-21 11:28:45 +02:00
patil-suraj
c982fb8262 fix quality command 2022-06-21 11:28:38 +02:00
anton-l
0417baf23d additional hub arguments 2022-06-21 11:21:10 +02:00
anton-l
9c82c32ba7 make style 2022-06-21 10:43:40 +02:00
anton-l
1a099e5e0e make einops optional for RL 2022-06-21 10:40:29 +02:00
anton-l
b09b152f77 Merge branch 'main' of github.com:huggingface/diffusers 2022-06-21 10:38:40 +02:00
anton-l
a2117cb797 add push_to_hub 2022-06-21 10:38:34 +02:00
Pratik Pingale
ee902ddf3a Fix: TODO checklist checkbox 2022-06-21 12:53:26 +05:30
Narek Maloyan
e1ef122260 fix alphas_cumprod 2022-06-20 20:11:43 +00:00
Nathan Lambert
4497e78d00 merge unet-rl formatting 2022-06-20 14:37:30 -04:00
Nathan Lambert
49718b4704 add imports for RL UNet 2022-06-20 14:35:39 -04:00
Patrick von Platen
77aadfee6a Merge branch 'main' of https://github.com/huggingface/diffusers 2022-06-20 16:13:54 +02:00
Patrick von Platen
452339e20e fix typo 2022-06-20 16:13:44 +02:00
patil-suraj
80898b5234 add UNetGradTTSModelTests 2022-06-20 15:57:58 +02:00
patil-suraj
e5675fad5d remove prints from tests 2022-06-20 14:47:13 +02:00
patil-suraj
27359ae049 remove wrong file 2022-06-20 14:46:35 +02:00
patil-suraj
95a45f5b3a add UNetLDMModelTests 2022-06-20 14:45:58 +02:00
patil-suraj
646e16fe06 fix test_output_pretrained for GLIDESuperResUNetModel 2022-06-20 14:27:37 +02:00
Patrick von Platen
08c852290a add license disclaimers to schedulers 2022-06-20 13:06:31 +02:00
Patrick von Platen
2b8bc91cf8 removed get alpha / get beta 2022-06-20 12:48:04 +02:00
Patrick von Platen
5b8ce1e7e6 remove one-liner functions 2022-06-20 12:09:34 +02:00
Patrick von Platen
05e265fbc8 Merge branch 'main' of https://github.com/huggingface/diffusers 2022-06-20 11:58:39 +02:00
Nathan Lambert
694ad9849b Update README.md 2022-06-17 13:40:20 -04:00
Nathan Lambert
808b49a7dc Update README.md for RL example colab 2022-06-17 13:22:55 -04:00
Suraj Patil
1c953bc3ea Add tests for GLIDESuperResUNetModel # 22
Add tests for GLIDESuperResUNetModel
2022-06-17 19:04:40 +02:00
patil-suraj
e007c797b1 add GLIDESuperResUNetModel 2022-06-17 19:04:07 +02:00
patil-suraj
44e64f9464 fix warning in model utils 2022-06-17 19:03:51 +02:00
Patrick von Platen
a677565f16 Merge branch 'main' of https://github.com/huggingface/diffusers 2022-06-17 17:22:52 +02:00
Patrick von Platen
ff885b0e26 add dummy imports 2022-06-17 17:22:48 +02:00
Patrick von Platen
b4e6a7403d save intermediate 2022-06-17 16:58:45 +02:00
Suraj Patil
d182a6ad91 Add model tests
Add model tests
2022-06-17 16:41:02 +02:00
patil-suraj
12da0fe10d Merge branch 'main' into model-tests 2022-06-17 16:37:45 +02:00
patil-suraj
cf6cd39572 finish tests for UNet 2022-06-17 16:36:51 +02:00
patil-suraj
eef2327a47 update input names 2022-06-17 16:36:35 +02:00
Nathan Lambert
9c96682a51 ddpm changes for rl, add rl unet 2022-06-17 10:07:27 -04:00
Patrick von Platen
1997b90838 image->sample in scheduler tests 2022-06-17 15:51:33 +02:00
Patrick von Platen
b2274ece73 finish pndm scheduler 2022-06-17 15:51:03 +02:00
patil-suraj
7dc71897b3 add UnetModelTests 2022-06-17 13:49:26 +02:00
patil-suraj
800b27703e wrap inflect in try catch 2022-06-17 13:48:51 +02:00
patil-suraj
d76bc43720 add skeleton for model tests 2022-06-17 13:36:59 +02:00
Patrick von Platen
de22d4cd5d make sure config attributes are only accessed via the config in schedulers 2022-06-17 12:42:54 +02:00
Patrick von Platen
8c1f51978c make clip name shorter 2022-06-17 12:11:40 +02:00
Patrick von Platen
dcb23b2d72 rename image to sample in schedulers 2022-06-17 12:10:35 +02:00
Patrick von Platen
13a78b3cd3 rename image to sample 2022-06-17 12:09:13 +02:00
Patrick von Platen
fe7d136324 correct dict 2022-06-17 11:55:02 +02:00
Patrick von Platen
e660a05fed remove onnx 2022-06-17 11:00:01 +02:00
Patrick von Platen
5e6f500038 rename register to register_to_config 2022-06-17 10:58:43 +02:00
Suraj Patil
0ffda1dfcc Update README.md 2022-06-16 18:34:56 +02:00
Suraj Patil
20c722c601 update speech example 2022-06-16 18:33:49 +02:00
Suraj Patil
7cabc0cddc Add GradTTS
Add GradTTS
2022-06-16 18:28:13 +02:00
patil-suraj
c2e48b23f8 remove unused import 2022-06-16 18:27:47 +02:00
patil-suraj
ace07110c1 style 2022-06-16 18:26:00 +02:00
Suraj Patil
988369a01c Merge branch 'main' into grad-tts 2022-06-16 18:24:08 +02:00
patil-suraj
5a3467e623 add default params for GradTTS 2022-06-16 18:17:45 +02:00
patil-suraj
e26782759c add GradTTS in init 2022-06-16 18:14:01 +02:00
patil-suraj
1d2551d716 finish GradTTS pipeline 2022-06-16 18:08:33 +02:00
patil-suraj
8007393614 wrap transformers import with try/catch 2022-06-16 18:08:21 +02:00
patil-suraj
cdf26c55f5 remove unused import 2022-06-16 18:07:59 +02:00
Suraj Patil
bed32182f6 render latex in readme
render latex in readme
2022-06-16 18:02:00 +02:00
Kashif Rasul
cf3fdb8479 use inference_mode 2022-06-16 17:55:20 +02:00
Kashif Rasul
d2940c23fe Merge branch 'main' into latex 2022-06-16 17:50:16 +02:00
Kashif Rasul
13f003c9bd use bold 2022-06-16 17:49:35 +02:00
Kashif Rasul
a1e1806575 render latex in readme 2022-06-16 17:45:31 +02:00
patil-suraj
cc45831ec6 add GradTTSScheduler 2022-06-16 17:10:36 +02:00
patil-suraj
2d8d82f93e update grad tts pipeline 2022-06-16 16:48:23 +02:00
patil-suraj
71ecc7aed8 add speaker emb in unet 2022-06-16 16:48:00 +02:00
patil-suraj
3f2d46a14e fix tokenizer 2022-06-16 16:47:04 +02:00
Patrick von Platen
ebbba62c36 Merge pull request #18 from vvvm23/logging-transformers-to-diffusers
changes comments and env vars in `utils/logging.py`
2022-06-16 14:17:00 +02:00
patil-suraj
7b55d334d5 begin pipeline 2022-06-16 14:08:53 +02:00
patil-suraj
986cc9b2f4 add tokenizer 2022-06-16 14:08:41 +02:00
Alexander McKinney
c3cc8eb23c changes comments and env vars in utils/logging
replaces mentions of 🤗Transformers with the 🤗Diffusers equivalent.
2022-06-16 10:54:00 +01:00
Patrick von Platen
926658665f Merge pull request #16 from Muhtasham/patch-1
Update README.md
2022-06-16 10:44:53 +02:00
Suraj Patil
acb2faaefa Update README.md 2022-06-16 10:22:55 +02:00
Suraj Patil
4c16b3a5fd Fix some little typos
Fix some little typos
2022-06-16 10:07:19 +02:00
milyiyo
c5e54c200a Fix some little typos 2022-06-15 20:23:27 -04:00
Muhtasham Oblokulov
4bf6bea52a Update README.md
small typo fixed and added Idea to ToDo
2022-06-15 23:47:20 +02:00
Anton Lozhkov
7d4bafa8a4 Merge pull request #15 from mrm8488/patch-1
Fix output path name
2022-06-15 22:52:24 +02:00
Manuel Romero
57aba1ef50 Fix output path name 2022-06-15 21:45:49 +02:00
Suraj Patil
71c6b36254 Update README.md 2022-06-15 17:01:48 +02:00
Anton Lozhkov
1112699149 add a training examples doc 2022-06-15 16:51:37 +02:00
Patrick von Platen
52a9acfa8e Update README.md 2022-06-15 16:28:58 +02:00
516 changed files with 107481 additions and 10790 deletions

51
.github/ISSUE_TEMPLATE/bug-report.yml vendored Normal file
View File

@@ -0,0 +1,51 @@
name: "\U0001F41B Bug Report"
description: Report a bug on diffusers
labels: [ "bug" ]
body:
- type: markdown
attributes:
value: |
Thanks a lot for taking the time to file this issue 🤗.
Issues not only help to improve the library, but also publicly document common problems, questions, and workflows for the whole community!
Thus, issues are of the same importance as pull requests when contributing to this library ❤️.
In order to make your issue as **useful for the community as possible**, let's try to stick to some simple guidelines:
- 1. Please try to be as precise and concise as possible.
*Give your issue a fitting title. Assume that someone with very limited knowledge of diffusers can understand your issue. Add links to the source code, documentation, other issues, pull requests, etc...*
- 2. If your issue is about something not working, **always** provide a reproducible code snippet. The reader should be able to reproduce your issue by **only copy-pasting your code snippet into a Python shell**.
*The community cannot solve your issue if it cannot reproduce it. If your bug is related to training, add your training script and make everything needed to train public. Otherwise, just add a simple Python code snippet.*
- 3. Add the **minimum amount of code / context that is needed to understand, reproduce your issue**.
*Make the life of maintainers easy. `diffusers` gets many issues every day. Make sure your issue is about one bug and one bug only, and add only the context and code needed to understand it - nothing more. Generally, every issue is a way of documenting this library; try to make it a good documentation entry.*
- type: markdown
attributes:
value: |
For more detailed information on how to write good issues, have a look [here](https://huggingface.co/course/chapter8/5?fw=pt)
- type: textarea
id: bug-description
attributes:
label: Describe the bug
description: A clear and concise description of what the bug is. If you intend to submit a pull request for this issue, tell us in the description. Thanks!
placeholder: Bug description
validations:
required: true
- type: textarea
id: reproduction
attributes:
label: Reproduction
description: Please provide a minimal reproducible code example which we can copy/paste to reproduce the issue.
placeholder: Reproduction
validations:
required: true
- type: textarea
id: logs
attributes:
label: Logs
description: "Please include the Python logs if you can."
render: shell
- type: textarea
id: system-info
attributes:
label: System Info
description: Please share your system info with us. You can run the command `diffusers-cli env` and copy-paste its output below.
placeholder: diffusers version, platform, python version, ...
validations:
required: true
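As a point of reference, the kind of copy-pasteable reproduction this template asks for might look like the minimal sketch below. The checkpoint id, prompt, and output filename are illustrative placeholders, not part of the template itself:

```python
# Minimal reproduction sketch for a pipeline bug report.
# The checkpoint id and prompt below are illustrative placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# The smallest call that still triggers the behavior being reported
image = pipe("a photo of an astronaut riding a horse", num_inference_steps=25).images[0]
image.save("repro.png")
```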

4
.github/ISSUE_TEMPLATE/config.yml vendored Normal file
View File

@@ -0,0 +1,4 @@
contact_links:
- name: Blank issue
url: https://github.com/huggingface/diffusers/issues/new
about: General usage questions and community discussions

View File

@@ -0,0 +1,20 @@
---
name: "\U0001F680 Feature request"
about: Suggest an idea for this project
title: ''
labels: ''
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.

12
.github/ISSUE_TEMPLATE/feedback.md vendored Normal file
View File

@@ -0,0 +1,12 @@
---
name: "💬 Feedback about API Design"
about: Give feedback about the current API design
title: ''
labels: ''
assignees: ''
---
**What API design would you like to have changed or added to the library? Why?**
**What use case would this enable or better enable? Can you give us a code example?**

View File

@@ -0,0 +1,31 @@
name: "\U0001F31F New model/pipeline/scheduler addition"
description: Submit a proposal/request to implement a new diffusion model / pipeline / scheduler
labels: [ "New model/pipeline/scheduler" ]
body:
- type: textarea
id: description-request
validations:
required: true
attributes:
label: Model/Pipeline/Scheduler description
description: |
Put any and all important information relevant to the model/pipeline/scheduler
- type: checkboxes
id: information-tasks
attributes:
label: Open source status
description: |
Please note that if the model implementation isn't available or if the weights aren't open-source, we are less likely to implement it in `diffusers`.
options:
- label: "The model implementation is available"
- label: "The model weights are available (Only relevant if addition is not a scheduler)."
- type: textarea
id: additional-info
attributes:
label: Provide useful links for the implementation
description: |
Please provide information regarding the implementation, the weights, and the authors.
Please mention the authors by @gh-username if you're aware of their usernames.

View File

@@ -0,0 +1,146 @@
name: Set up conda environment for testing
description: Sets up miniconda in your ${RUNNER_TEMP} environment and gives you the ${CONDA_RUN} environment variable so you don't have to worry about polluting non-ephemeral runners anymore
inputs:
python-version:
description: Python version to install in the conda environment
required: false
type: string
default: "3.9"
miniconda-version:
description: Miniconda version to install
required: false
type: string
default: "4.12.0"
environment-file:
description: Environment file to install dependencies from
required: false
type: string
default: ""
runs:
using: composite
steps:
# Use the same trick from https://github.com/marketplace/actions/setup-miniconda
# to refresh the cache daily. This is kind of optional though
- name: Get date
id: get-date
shell: bash
run: echo "::set-output name=today::$(/bin/date -u '+%Y%m%d')d"
- name: Setup miniconda cache
id: miniconda-cache
uses: actions/cache@v2
with:
path: ${{ runner.temp }}/miniconda
key: miniconda-${{ runner.os }}-${{ runner.arch }}-${{ inputs.python-version }}-${{ steps.get-date.outputs.today }}
- name: Install miniconda (${{ inputs.miniconda-version }})
if: steps.miniconda-cache.outputs.cache-hit != 'true'
env:
MINICONDA_VERSION: ${{ inputs.miniconda-version }}
shell: bash -l {0}
run: |
MINICONDA_INSTALL_PATH="${RUNNER_TEMP}/miniconda"
mkdir -p "${MINICONDA_INSTALL_PATH}"
case ${RUNNER_OS}-${RUNNER_ARCH} in
Linux-X64)
MINICONDA_ARCH="Linux-x86_64"
;;
macOS-ARM64)
MINICONDA_ARCH="MacOSX-arm64"
;;
macOS-X64)
MINICONDA_ARCH="MacOSX-x86_64"
;;
*)
echo "::error::Platform ${RUNNER_OS}-${RUNNER_ARCH} currently unsupported using this action"
exit 1
;;
esac
MINICONDA_URL="https://repo.anaconda.com/miniconda/Miniconda3-py39_${MINICONDA_VERSION}-${MINICONDA_ARCH}.sh"
curl -fsSL "${MINICONDA_URL}" -o "${MINICONDA_INSTALL_PATH}/miniconda.sh"
bash "${MINICONDA_INSTALL_PATH}/miniconda.sh" -b -u -p "${MINICONDA_INSTALL_PATH}"
rm -rf "${MINICONDA_INSTALL_PATH}/miniconda.sh"
- name: Update GitHub path to include miniconda install
shell: bash
run: |
MINICONDA_INSTALL_PATH="${RUNNER_TEMP}/miniconda"
echo "${MINICONDA_INSTALL_PATH}/bin" >> $GITHUB_PATH
- name: Setup miniconda env cache (with env file)
id: miniconda-env-cache-env-file
if: runner.os == 'macOS' && inputs.environment-file != ''
uses: actions/cache@v2
with:
path: ${{ runner.temp }}/conda-python-${{ inputs.python-version }}
key: miniconda-env-${{ runner.os }}-${{ runner.arch }}-${{ inputs.python-version }}-${{ steps.get-date.outputs.today }}-${{ hashFiles(inputs.environment-file) }}
- name: Setup miniconda env cache (without env file)
id: miniconda-env-cache
if: runner.os == 'macOS' && inputs.environment-file == ''
uses: actions/cache@v2
with:
path: ${{ runner.temp }}/conda-python-${{ inputs.python-version }}
key: miniconda-env-${{ runner.os }}-${{ runner.arch }}-${{ inputs.python-version }}-${{ steps.get-date.outputs.today }}
- name: Setup conda environment with python (v${{ inputs.python-version }})
if: steps.miniconda-env-cache-env-file.outputs.cache-hit != 'true' && steps.miniconda-env-cache.outputs.cache-hit != 'true'
shell: bash
env:
PYTHON_VERSION: ${{ inputs.python-version }}
ENV_FILE: ${{ inputs.environment-file }}
run: |
CONDA_BASE_ENV="${RUNNER_TEMP}/conda-python-${PYTHON_VERSION}"
ENV_FILE_FLAG=""
if [[ -f "${ENV_FILE}" ]]; then
ENV_FILE_FLAG="--file ${ENV_FILE}"
elif [[ -n "${ENV_FILE}" ]]; then
echo "::warning::Specified env file (${ENV_FILE}) not found, not going to include it"
fi
conda create \
--yes \
--prefix "${CONDA_BASE_ENV}" \
"python=${PYTHON_VERSION}" \
${ENV_FILE_FLAG} \
cmake=3.22 \
conda-build=3.21 \
ninja=1.10 \
pkg-config=0.29 \
wheel=0.37
- name: Clone the base conda environment and update GitHub env
shell: bash
env:
PYTHON_VERSION: ${{ inputs.python-version }}
CONDA_BASE_ENV: ${{ runner.temp }}/conda-python-${{ inputs.python-version }}
run: |
CONDA_ENV="${RUNNER_TEMP}/conda_environment_${GITHUB_RUN_ID}"
conda create \
--yes \
--prefix "${CONDA_ENV}" \
--clone "${CONDA_BASE_ENV}"
# TODO: conda-build could not be cloned because it hardcodes the path, so it
# could not be cached
conda install --yes -p ${CONDA_ENV} conda-build=3.21
echo "CONDA_ENV=${CONDA_ENV}" >> "${GITHUB_ENV}"
echo "CONDA_RUN=conda run -p ${CONDA_ENV} --no-capture-output" >> "${GITHUB_ENV}"
echo "CONDA_BUILD=conda run -p ${CONDA_ENV} conda-build" >> "${GITHUB_ENV}"
echo "CONDA_INSTALL=conda install -p ${CONDA_ENV}" >> "${GITHUB_ENV}"
- name: Get disk space usage and throw an error for low disk space
shell: bash
run: |
echo "Print the available disk space for manual inspection"
df -h
# Set the minimum requirement space to 4GB
MINIMUM_AVAILABLE_SPACE_IN_GB=4
MINIMUM_AVAILABLE_SPACE_IN_KB=$(($MINIMUM_AVAILABLE_SPACE_IN_GB * 1024 * 1024))
# Use KB to avoid floating point warning like 3.1GB
df -k | tr -s ' ' | cut -d' ' -f 4,9 | while read -r LINE;
do
AVAIL=$(echo $LINE | cut -f1 -d' ')
MOUNT=$(echo $LINE | cut -f2 -d' ')
if [ "$MOUNT" = "/" ]; then
if [ "$AVAIL" -lt "$MINIMUM_AVAILABLE_SPACE_IN_KB" ]; then
echo "There is only ${AVAIL}KB free space left in $MOUNT, which is less than the minimum requirement of ${MINIMUM_AVAILABLE_SPACE_IN_KB}KB. Please help create an issue to PyTorch Release Engineering via https://github.com/pytorch/test-infra/issues and provide the link to the workflow run."
exit 1;
else
echo "There is ${AVAIL}KB free space left in $MOUNT, continue"
fi
fi
done

View File

@@ -0,0 +1,50 @@
name: Build Docker images (nightly)
on:
workflow_dispatch:
schedule:
- cron: "0 0 * * *" # every day at midnight
concurrency:
group: docker-image-builds
cancel-in-progress: false
env:
REGISTRY: diffusers
jobs:
build-docker-images:
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
strategy:
fail-fast: false
matrix:
image-name:
- diffusers-pytorch-cpu
- diffusers-pytorch-cuda
- diffusers-flax-cpu
- diffusers-flax-tpu
- diffusers-onnxruntime-cpu
- diffusers-onnxruntime-cuda
steps:
- name: Checkout repository
uses: actions/checkout@v3
- name: Login to Docker Hub
uses: docker/login-action@v2
with:
username: ${{ env.REGISTRY }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build and push
uses: docker/build-push-action@v3
with:
no-cache: true
context: ./docker/${{ matrix.image-name }}
push: true
tags: ${{ env.REGISTRY }}/${{ matrix.image-name }}:latest

View File

@@ -0,0 +1,18 @@
name: Build documentation
on:
push:
branches:
- main
- doc-builder*
- v*-release
jobs:
build:
uses: huggingface/doc-builder/.github/workflows/build_main_documentation.yml@main
with:
commit_sha: ${{ github.sha }}
package: diffusers
languages: en ko
secrets:
token: ${{ secrets.HUGGINGFACE_PUSH }}

View File

@@ -0,0 +1,17 @@
name: Build PR Documentation
on:
pull_request:
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
build:
uses: huggingface/doc-builder/.github/workflows/build_pr_documentation.yml@main
with:
commit_sha: ${{ github.event.pull_request.head.sha }}
pr_number: ${{ github.event.number }}
package: diffusers
languages: en ko

View File

@@ -0,0 +1,13 @@
name: Delete dev documentation
on:
pull_request:
types: [ closed ]
jobs:
delete:
uses: huggingface/doc-builder/.github/workflows/delete_doc_comment.yml@main
with:
pr_number: ${{ github.event.number }}
package: diffusers

162
.github/workflows/nightly_tests.yml vendored Normal file
View File

@@ -0,0 +1,162 @@
name: Nightly tests on main
on:
schedule:
- cron: "0 0 * * *" # every day at midnight
env:
DIFFUSERS_IS_CI: yes
HF_HOME: /mnt/cache
OMP_NUM_THREADS: 8
MKL_NUM_THREADS: 8
PYTEST_TIMEOUT: 600
RUN_SLOW: yes
RUN_NIGHTLY: yes
jobs:
run_nightly_tests:
strategy:
fail-fast: false
matrix:
config:
- name: Nightly PyTorch CUDA tests on Ubuntu
framework: pytorch
runner: docker-gpu
image: diffusers/diffusers-pytorch-cuda
report: torch_cuda
- name: Nightly Flax TPU tests on Ubuntu
framework: flax
runner: docker-tpu
image: diffusers/diffusers-flax-tpu
report: flax_tpu
- name: Nightly ONNXRuntime CUDA tests on Ubuntu
framework: onnxruntime
runner: docker-gpu
image: diffusers/diffusers-onnxruntime-cuda
report: onnx_cuda
name: ${{ matrix.config.name }}
runs-on: ${{ matrix.config.runner }}
container:
image: ${{ matrix.config.image }}
options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ ${{ matrix.config.runner == 'docker-tpu' && '--privileged' || '--gpus 0'}}
defaults:
run:
shell: bash
steps:
- name: Checkout diffusers
uses: actions/checkout@v3
with:
fetch-depth: 2
- name: NVIDIA-SMI
if: ${{ matrix.config.runner == 'docker-gpu' }}
run: |
nvidia-smi
- name: Install dependencies
run: |
python -m pip install -e .[quality,test]
python -m pip install -U git+https://github.com/huggingface/transformers
python -m pip install git+https://github.com/huggingface/accelerate
- name: Environment
run: |
python utils/print_env.py
- name: Run nightly PyTorch CUDA tests
if: ${{ matrix.config.framework == 'pytorch' }}
env:
HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
run: |
python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
-s -v -k "not Flax and not Onnx" \
--make-reports=tests_${{ matrix.config.report }} \
tests/
- name: Run nightly Flax TPU tests
if: ${{ matrix.config.framework == 'flax' }}
env:
HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
run: |
python -m pytest -n 0 \
-s -v -k "Flax" \
--make-reports=tests_${{ matrix.config.report }} \
tests/
- name: Run nightly ONNXRuntime CUDA tests
if: ${{ matrix.config.framework == 'onnxruntime' }}
env:
HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
run: |
python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
-s -v -k "Onnx" \
--make-reports=tests_${{ matrix.config.report }} \
tests/
- name: Failure short reports
if: ${{ failure() }}
run: cat reports/tests_${{ matrix.config.report }}_failures_short.txt
- name: Test suite reports artifacts
if: ${{ always() }}
uses: actions/upload-artifact@v2
with:
name: ${{ matrix.config.report }}_test_reports
path: reports
run_nightly_tests_apple_m1:
name: Nightly PyTorch MPS tests on MacOS
runs-on: [ self-hosted, apple-m1 ]
steps:
- name: Checkout diffusers
uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Clean checkout
shell: arch -arch arm64 bash {0}
run: |
git clean -fxd
- name: Setup miniconda
uses: ./.github/actions/setup-miniconda
with:
python-version: 3.9
- name: Install dependencies
shell: arch -arch arm64 bash {0}
run: |
${CONDA_RUN} python -m pip install --upgrade pip
${CONDA_RUN} python -m pip install -e .[quality,test]
${CONDA_RUN} python -m pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cpu
${CONDA_RUN} python -m pip install git+https://github.com/huggingface/accelerate
- name: Environment
shell: arch -arch arm64 bash {0}
run: |
${CONDA_RUN} python utils/print_env.py
- name: Run nightly PyTorch tests on M1 (MPS)
shell: arch -arch arm64 bash {0}
env:
HF_HOME: /System/Volumes/Data/mnt/cache
HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
run: |
${CONDA_RUN} python -m pytest -n 1 -s -v --make-reports=tests_torch_mps tests/
- name: Failure short reports
if: ${{ failure() }}
run: cat reports/tests_torch_mps_failures_short.txt
- name: Test suite reports artifacts
if: ${{ always() }}
uses: actions/upload-artifact@v2
with:
name: torch_mps_test_reports
path: reports

50
.github/workflows/pr_quality.yml vendored Normal file
View File

@@ -0,0 +1,50 @@
name: Run code quality checks
on:
pull_request:
branches:
- main
push:
branches:
- main
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
check_code_quality:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: "3.7"
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install .[quality]
- name: Check quality
run: |
black --check --preview examples tests src utils scripts
isort --check-only examples tests src utils scripts
flake8 examples tests src utils scripts
doc-builder style src/diffusers docs/source --max_len 119 --check_only --path_to_docs docs/source
check_repository_consistency:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: "3.7"
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install .[quality]
- name: Check quality
run: |
python utils/check_copies.py
python utils/check_dummies.py

155
.github/workflows/pr_tests.yml vendored Normal file
View File

@@ -0,0 +1,155 @@
name: Fast tests for PRs
on:
pull_request:
branches:
- main
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
env:
DIFFUSERS_IS_CI: yes
OMP_NUM_THREADS: 4
MKL_NUM_THREADS: 4
PYTEST_TIMEOUT: 60
jobs:
run_fast_tests:
strategy:
fail-fast: false
matrix:
config:
- name: Fast PyTorch CPU tests on Ubuntu
framework: pytorch
runner: docker-cpu
image: diffusers/diffusers-pytorch-cpu
report: torch_cpu
- name: Fast Flax CPU tests on Ubuntu
framework: flax
runner: docker-cpu
image: diffusers/diffusers-flax-cpu
report: flax_cpu
- name: Fast ONNXRuntime CPU tests on Ubuntu
framework: onnxruntime
runner: docker-cpu
image: diffusers/diffusers-onnxruntime-cpu
report: onnx_cpu
name: ${{ matrix.config.name }}
runs-on: ${{ matrix.config.runner }}
container:
image: ${{ matrix.config.image }}
options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/
defaults:
run:
shell: bash
steps:
- name: Checkout diffusers
uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Install dependencies
run: |
apt-get update && apt-get install libsndfile1-dev -y
python -m pip install -e .[quality,test]
python -m pip install -U git+https://github.com/huggingface/transformers
python -m pip install git+https://github.com/huggingface/accelerate
- name: Environment
run: |
python utils/print_env.py
- name: Run fast PyTorch CPU tests
if: ${{ matrix.config.framework == 'pytorch' }}
run: |
python -m pytest -n 2 --max-worker-restart=0 --dist=loadfile \
-s -v -k "not Flax and not Onnx" \
--make-reports=tests_${{ matrix.config.report }} \
tests/
- name: Run fast Flax CPU tests
if: ${{ matrix.config.framework == 'flax' }}
run: |
python -m pytest -n 2 --max-worker-restart=0 --dist=loadfile \
-s -v -k "Flax" \
--make-reports=tests_${{ matrix.config.report }} \
tests/
- name: Run fast ONNXRuntime CPU tests
if: ${{ matrix.config.framework == 'onnxruntime' }}
run: |
python -m pytest -n 2 --max-worker-restart=0 --dist=loadfile \
-s -v -k "Onnx" \
--make-reports=tests_${{ matrix.config.report }} \
tests/
- name: Failure short reports
if: ${{ failure() }}
run: cat reports/tests_${{ matrix.config.report }}_failures_short.txt
- name: Test suite reports artifacts
if: ${{ always() }}
uses: actions/upload-artifact@v2
with:
name: pr_${{ matrix.config.report }}_test_reports
path: reports
run_fast_tests_apple_m1:
name: Fast PyTorch MPS tests on MacOS
runs-on: [ self-hosted, apple-m1 ]
steps:
- name: Checkout diffusers
uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Clean checkout
shell: arch -arch arm64 bash {0}
run: |
git clean -fxd
- name: Setup miniconda
uses: ./.github/actions/setup-miniconda
with:
python-version: 3.9
- name: Install dependencies
shell: arch -arch arm64 bash {0}
run: |
${CONDA_RUN} python -m pip install --upgrade pip
${CONDA_RUN} python -m pip install -e .[quality,test]
${CONDA_RUN} python -m pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cpu
${CONDA_RUN} python -m pip install git+https://github.com/huggingface/accelerate
${CONDA_RUN} python -m pip install -U git+https://github.com/huggingface/transformers
- name: Environment
shell: arch -arch arm64 bash {0}
run: |
${CONDA_RUN} python utils/print_env.py
- name: Run fast PyTorch tests on M1 (MPS)
shell: arch -arch arm64 bash {0}
env:
HF_HOME: /System/Volumes/Data/mnt/cache
HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
run: |
${CONDA_RUN} python -m pytest -n 0 -s -v --make-reports=tests_torch_mps tests/
- name: Failure short reports
if: ${{ failure() }}
run: cat reports/tests_torch_mps_failures_short.txt
- name: Test suite reports artifacts
if: ${{ always() }}
uses: actions/upload-artifact@v2
with:
name: pr_torch_mps_test_reports
path: reports

156
.github/workflows/push_tests.yml vendored Normal file
View File

@@ -0,0 +1,156 @@
name: Slow tests on main
on:
push:
branches:
- main
env:
DIFFUSERS_IS_CI: yes
HF_HOME: /mnt/cache
OMP_NUM_THREADS: 8
MKL_NUM_THREADS: 8
PYTEST_TIMEOUT: 600
RUN_SLOW: yes
jobs:
run_slow_tests:
strategy:
fail-fast: false
matrix:
config:
- name: Slow PyTorch CUDA tests on Ubuntu
framework: pytorch
runner: docker-gpu
image: diffusers/diffusers-pytorch-cuda
report: torch_cuda
- name: Slow Flax TPU tests on Ubuntu
framework: flax
runner: docker-tpu
image: diffusers/diffusers-flax-tpu
report: flax_tpu
- name: Slow ONNXRuntime CUDA tests on Ubuntu
framework: onnxruntime
runner: docker-gpu
image: diffusers/diffusers-onnxruntime-cuda
report: onnx_cuda
name: ${{ matrix.config.name }}
runs-on: ${{ matrix.config.runner }}
container:
image: ${{ matrix.config.image }}
options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ ${{ matrix.config.runner == 'docker-tpu' && '--privileged' || '--gpus 0'}}
defaults:
run:
shell: bash
steps:
- name: Checkout diffusers
uses: actions/checkout@v3
with:
fetch-depth: 2
- name: NVIDIA-SMI
if: ${{ matrix.config.runner == 'docker-gpu' }}
run: |
nvidia-smi
- name: Install dependencies
run: |
python -m pip install -e .[quality,test]
python -m pip install -U git+https://github.com/huggingface/transformers
python -m pip install git+https://github.com/huggingface/accelerate
- name: Environment
run: |
python utils/print_env.py
- name: Run slow PyTorch CUDA tests
if: ${{ matrix.config.framework == 'pytorch' }}
env:
HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
run: |
python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
-s -v -k "not Flax and not Onnx" \
--make-reports=tests_${{ matrix.config.report }} \
tests/
- name: Run slow Flax TPU tests
if: ${{ matrix.config.framework == 'flax' }}
env:
HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
run: |
python -m pytest -n 0 \
-s -v -k "Flax" \
--make-reports=tests_${{ matrix.config.report }} \
tests/
- name: Run slow ONNXRuntime CUDA tests
if: ${{ matrix.config.framework == 'onnxruntime' }}
env:
HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
run: |
python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
-s -v -k "Onnx" \
--make-reports=tests_${{ matrix.config.report }} \
tests/
- name: Failure short reports
if: ${{ failure() }}
run: cat reports/tests_${{ matrix.config.report }}_failures_short.txt
- name: Test suite reports artifacts
if: ${{ always() }}
uses: actions/upload-artifact@v2
with:
name: ${{ matrix.config.report }}_test_reports
path: reports
run_examples_tests:
name: Examples PyTorch CUDA tests on Ubuntu
runs-on: docker-gpu
container:
image: diffusers/diffusers-pytorch-cuda
options: --gpus 0 --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/
steps:
- name: Checkout diffusers
uses: actions/checkout@v3
with:
fetch-depth: 2
- name: NVIDIA-SMI
run: |
nvidia-smi
- name: Install dependencies
run: |
python -m pip install -e .[quality,test,training]
python -m pip install git+https://github.com/huggingface/accelerate
python -m pip install -U git+https://github.com/huggingface/transformers
- name: Environment
run: |
python utils/print_env.py
- name: Run example tests on GPU
env:
HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
run: |
python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile -s -v --make-reports=examples_torch_cuda examples/
- name: Failure short reports
if: ${{ failure() }}
run: cat reports/examples_torch_cuda_failures_short.txt
- name: Test suite reports artifacts
if: ${{ always() }}
uses: actions/upload-artifact@v2
with:
name: examples_test_reports
path: reports

27
.github/workflows/stale.yml vendored Normal file
View File

@@ -0,0 +1,27 @@
name: Stale Bot
on:
schedule:
- cron: "0 15 * * *"
jobs:
close_stale_issues:
name: Close Stale Issues
if: github.repository == 'huggingface/diffusers'
runs-on: ubuntu-latest
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
steps:
- uses: actions/checkout@v2
- name: Setup Python
uses: actions/setup-python@v1
with:
python-version: 3.7
- name: Install requirements
run: |
pip install PyGithub
- name: Close stale issues
run: |
python utils/stale.py

14
.github/workflows/typos.yml vendored Normal file
View File

@@ -0,0 +1,14 @@
name: Check typos
on:
workflow_dispatch:
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: typos-action
uses: crate-ci/typos@v1.12.4

7
.gitignore vendored
View File

@@ -163,4 +163,9 @@ tags
*.lock
# DS_Store (MacOS)
.DS_Store
.DS_Store
# RL pipelines may produce mp4 outputs
*.mp4
# dependencies
/transformers

129
CODE_OF_CONDUCT.md Normal file
View File

@@ -0,0 +1,129 @@
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
overall community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or
advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
feedback@huggingface.co.
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series
of actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within
the community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
Community Impact Guidelines were inspired by [Mozilla's code of conduct
enforcement ladder](https://github.com/mozilla/diversity).
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see the FAQ at
https://www.contributor-covenant.org/faq. Translations are available at
https://www.contributor-covenant.org/translations.

294
CONTRIBUTING.md Normal file
View File

@@ -0,0 +1,294 @@
<!---
Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
# How to contribute to diffusers?
Everyone is welcome to contribute, and we value everybody's contribution. Code
is thus not the only way to help the community. Answering questions, helping
others, reaching out, and improving the documentation are immensely valuable to
the community.
It also helps us if you spread the word: reference the library from blog posts
on the awesome projects it made possible, shout out on Twitter every time it has
helped you, or simply star the repo to say "thank you".
Whichever way you choose to contribute, please be mindful to respect our
[code of conduct](https://github.com/huggingface/diffusers/blob/main/CODE_OF_CONDUCT.md).
## You can contribute in so many ways!
There are 4 ways you can contribute to diffusers:
* Fixing outstanding issues with the existing code;
* Implementing [new diffusion pipelines](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines#contribution), [new schedulers](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers) or [new models](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models)
* [Contributing to the examples](https://github.com/huggingface/diffusers/tree/main/examples) or to the documentation;
* Submitting issues related to bugs or desired new features.
In particular, there is a special [Good First Issue](https://github.com/huggingface/diffusers/contribute) listing.
It gives you a list of open Issues that anybody can work on. Just comment on the issue that you'd like to work on it.
In that same listing you will also find some Issues with the `Good Second Issue` label. These are
typically slightly more complicated than the Issues with just the `Good First Issue` label. But if you
feel you know what you're doing, go for it.
*All are equally valuable to the community.*
## Submitting a new issue or feature request
Do your best to follow these guidelines when submitting an issue or a feature
request. It will make it easier for us to come back to you quickly and with good
feedback.
### Did you find a bug?
The 🧨 Diffusers library is robust and reliable thanks to the users who notify us of
the problems they encounter. So thank you for reporting an issue.
First, we would really appreciate it if you could **make sure the bug was not
already reported** (use the search bar on Github under Issues).
### Do you want to implement a new diffusion pipeline / diffusion model?
Awesome! Please provide the following information:
* Short description of the diffusion pipeline and link to the paper;
* Link to the implementation if it is open-source;
* Link to the model weights if they are available.
If you are willing to contribute the model yourself, let us know so we can best
guide you.
### Do you want a new feature (that is not a model)?
A world-class feature request addresses the following points:
1. Motivation first:
* Is it related to a problem/frustration with the library? If so, please explain
why. Providing a code snippet that demonstrates the problem is best.
* Is it related to something you would need for a project? We'd love to hear
about it!
* Is it something you worked on and think could benefit the community?
Awesome! Tell us what problem it solved for you.
2. Write a *full paragraph* describing the feature;
3. Provide a **code snippet** that demonstrates its future use (see the example after this list);
4. In case this is related to a paper, please attach a link;
5. Attach any additional information (drawings, screenshots, etc.) you think may help.
If your issue is well written, we're already 80% of the way there by the time you
post it.
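To illustrate point 3 above, a feature request snippet might look like the sketch below. The `image_callback` argument is invented purely for illustration and does not exist in `diffusers`; the point is to show the level of concreteness that helps maintainers:

```python
# Hypothetical usage snippet for a feature request.
# NOTE: `image_callback` is NOT a real diffusers argument; it only
# illustrates what the requested feature could look like in practice.
from diffusers import DDPMPipeline

pipe = DDPMPipeline.from_pretrained("google/ddpm-cat-256")

def image_callback(step, image):
    # Desired behavior: inspect intermediate samples during denoising
    image.save(f"step_{step}.png")

pipe(num_inference_steps=1000, image_callback=image_callback)
```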
## Start contributing! (Pull Requests)
Before writing code, we strongly advise you to search through the existing PRs or
issues to make sure that nobody is already working on the same thing. If you are
unsure, it is always a good idea to open an issue to get some feedback.
You will need basic `git` proficiency to be able to contribute to
🧨 Diffusers. `git` is not the easiest tool to use but it has the greatest
manual. Type `git --help` in a shell and enjoy. If you prefer books, [Pro
Git](https://git-scm.com/book/en/v2) is a very good reference.
Follow these steps to start contributing ([supported Python versions](https://github.com/huggingface/diffusers/blob/main/setup.py#L426)):
1. Fork the [repository](https://github.com/huggingface/diffusers) by
clicking on the 'Fork' button on the repository's page. This creates a copy of the code
under your GitHub user account.
2. Clone your fork to your local disk, and add the base repository as a remote:
```bash
$ git clone git@github.com:<your Github handle>/diffusers.git
$ cd diffusers
$ git remote add upstream https://github.com/huggingface/diffusers.git
```
3. Create a new branch to hold your development changes:
```bash
$ git checkout -b a-descriptive-name-for-my-changes
```
**Do not** work on the `main` branch.
4. Set up a development environment by running the following command in a virtual environment:
```bash
$ pip install -e ".[dev]"
```
(If diffusers was already installed in the virtual environment, remove
it with `pip uninstall diffusers` before reinstalling it in editable
mode with the `-e` flag.)
To run the full test suite, you might also need source installs of the `transformers` and `datasets` libraries:
```bash
$ git clone https://github.com/huggingface/transformers
$ cd transformers
$ pip install -e .
```
```bash
$ git clone https://github.com/huggingface/datasets
$ cd datasets
$ pip install -e .
```
If you have already cloned those repos, you might need to `git pull` to get the most recent changes in the `transformers` and `datasets` libraries.
5. Develop the features on your branch.
As you work on the features, you should make sure that the test suite
passes. You should run the tests impacted by your changes like this:
```bash
$ pytest tests/<TEST_TO_RUN>.py
```
You can also run the full suite, but it takes a beefy machine to produce a result in a decent amount of time now that
Diffusers has grown a lot:
```bash
$ make test
```
For more information about tests, check out the
[dedicated documentation](https://huggingface.co/docs/diffusers/testing).
🧨 Diffusers relies on `black` and `isort` to format its source code
consistently. After you make changes, apply automatic style corrections and code verifications
that can't be automated in one go with:
```bash
$ make style
```
🧨 Diffusers also uses `flake8` and a few custom scripts to check for coding mistakes. Quality
checks run in CI; however, you can also run the same checks with:
```bash
$ make quality
```
Once you're happy with your changes, add changed files using `git add` and
make a commit with `git commit` to record your changes locally:
```bash
$ git add modified_file.py
$ git commit
```
It is a good idea to sync your copy of the code with the original
repository regularly. This way you can quickly account for changes:
```bash
$ git fetch upstream
$ git rebase upstream/main
```
Push the changes to your account using:
```bash
$ git push -u origin a-descriptive-name-for-my-changes
```
6. Once you are satisfied (**and the checklist below is happy too**), go to the
webpage of your fork on GitHub. Click on 'Pull request' to send your changes
to the project maintainers for review.
7. It's ok if maintainers ask you for changes. It happens to core contributors
too! So everyone can see the changes in the Pull request, work in your local
branch and push the changes to your fork. They will automatically appear in
the pull request.
### Checklist
1. The title of your pull request should be a summary of its contribution;
2. If your pull request addresses an issue, please mention the issue number in
the pull request description to make sure they are linked (and people
consulting the issue know you are working on it);
3. To indicate a work in progress please prefix the title with `[WIP]`. These
are useful to avoid duplicated work, and to differentiate it from PRs ready
to be merged;
4. Make sure existing tests pass;
5. Add high-coverage tests. No quality testing = no merge.
- If you are adding new `@slow` tests, make sure they pass using
`RUN_SLOW=1 python -m pytest tests/test_my_new_model.py`.
- If you are adding a new tokenizer, write tests, and make sure
`RUN_SLOW=1 python -m pytest tests/test_tokenization_{your_model_name}.py` passes.
CircleCI does not run the slow tests, but GitHub Actions does every night!
6. All public methods must have informative docstrings that work nicely with Sphinx. See `modeling_bert.py` for an
example.
7. Due to the rapidly growing repository, it is important to make sure that no files that would significantly weigh down the repository are added. This includes images, videos, and other non-text files. We prefer to leverage a hf.co hosted `dataset` like
the ones hosted on [`hf-internal-testing`](https://huggingface.co/hf-internal-testing) in which to place these files and reference
them by URL. We recommend putting them in the following dataset: [huggingface/documentation-images](https://huggingface.co/datasets/huggingface/documentation-images).
If your PR is an external contribution, feel free to add the images to it and ask a Hugging Face member to migrate them
to this dataset (see the upload sketch below).
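For maintainers with write access to that dataset, uploading an image and getting a URL to reference can be done with `huggingface_hub`; a minimal sketch, where the local filename and target path are placeholders:
```python
from huggingface_hub import upload_file

# uploads a local image to the documentation-images dataset and returns a URL to it
url = upload_file(
    path_or_fileobj="my_figure.png",  # placeholder local file
    path_in_repo="diffusers/my_figure.png",  # placeholder path inside the dataset
    repo_id="huggingface/documentation-images",
    repo_type="dataset",
)
```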
### Tests
An extensive test suite is included to test the library behavior and several examples. Library tests can be found in
the [tests folder](https://github.com/huggingface/diffusers/tree/main/tests).
We like `pytest` and `pytest-xdist` because they run tests in parallel, which is faster. From the root of the
repository, here's how to run tests with `pytest` for the library:
```bash
$ python -m pytest -n auto --dist=loadfile -s -v ./tests/
```
In fact, that's how `make test` is implemented (sans the `pip install` line)!
You can specify a smaller set of tests in order to test only the feature
you're working on.
By default, slow tests are skipped. Set the `RUN_SLOW` environment variable to
`yes` to run them. This will download many gigabytes of models — make sure you
have enough disk space and a good Internet connection, or a lot of patience!
```bash
$ RUN_SLOW=yes python -m pytest -n auto --dist=loadfile -s -v ./tests/
```
`unittest` is fully supported as well. Here's how to run tests with
`unittest`:
```bash
$ python -m unittest discover -s tests -t . -v
$ python -m unittest discover -s examples -t examples -v
```
### Style guide
For documentation strings, 🧨 Diffusers follows the [google style](https://google.github.io/styleguide/pyguide.html).
**This guide was heavily inspired by the awesome [scikit-learn guide to contributing](https://github.com/scikit-learn/scikit-learn/blob/main/CONTRIBUTING.md).**
### Syncing forked main with upstream (HuggingFace) main
Syncing through a branch and PR pings the upstream repository: it adds reference notes to each upstream PR and sends unnecessary notifications to the developers involved in those PRs.
To avoid this when syncing the main branch of a forked repository, please follow these steps:
1. When possible, avoid syncing with the upstream using a branch and PR on the forked repository. Instead merge directly into the forked main.
2. If a PR is absolutely necessary, use the following steps after checking out your branch:
```bash
$ git checkout -b your-branch-for-syncing
$ git pull --squash --no-commit upstream main
$ git commit -m '<your message without GitHub references>'
$ git push --set-upstream origin your-branch-for-syncing
```

MANIFEST.in (new file)

```
include LICENSE
include src/diffusers/utils/model_card_template.md

```

Makefile

```diff
@@ -3,7 +3,7 @@
 # make sure to test the local checkout in scripts and not the pre-installed one (don't use quotes!)
 export PYTHONPATH = src
-check_dirs := examples tests src utils
+check_dirs := examples scripts src tests utils
 modified_only_fixup:
 	$(eval modified_py_files := $(shell python utils/get_modified_files.py $(check_dirs)))
@@ -34,30 +34,25 @@ autogenerate_code: deps_table_update
 # Check that the repo is in a good state
 repo-consistency:
 	python utils/check_copies.py
-	python utils/check_table.py
 	python utils/check_dummies.py
 	python utils/check_repo.py
 	python utils/check_inits.py
 	python utils/check_config_docstrings.py
-	python utils/tests_fetcher.py --sanity_check
 # this target runs checks on all files
 quality:
 	black --check --preview $(check_dirs)
 	isort --check-only $(check_dirs)
 	python utils/custom_init_isort.py --check_only
-	python utils/sort_auto_mappings.py --check_only
 	flake8 $(check_dirs)
-	doc-builder style src/transformers docs/source --max_len 119 --check_only --path_to_docs docs/source
+	doc-builder style src/diffusers docs/source --max_len 119 --check_only --path_to_docs docs/source
 	python utils/check_doc_toc.py
 # Format source code automatically and check is there are any problems left that need manual fixing
 extra_style_checks:
 	python utils/custom_init_isort.py
-	python utils/sort_auto_mappings.py
-	doc-builder style src/transformers docs/source --max_len 119 --path_to_docs docs/source
+	doc-builder style src/diffusers docs/source --max_len 119 --path_to_docs docs/source
 	python utils/check_doc_toc.py --fix_and_overwrite
 # this target runs checks on all files and potentially modifies some of them
@@ -75,7 +70,6 @@ fixup: modified_only_fixup extra_style_checks autogenerate_code repo-consistency
 fix-copies:
 	python utils/check_copies.py --fix_and_overwrite
-	python utils/check_table.py --fix_and_overwrite
 	python utils/check_dummies.py --fix_and_overwrite
 # Run tests for the library
@@ -88,11 +82,6 @@ test:
 test-examples:
 	python -m pytest -n auto --dist=loadfile -s -v ./examples/pytorch/
-# Run tests for SageMaker DLC release
-test-sagemaker: # install sagemaker dependencies in advance with pip install .[sagemaker]
-	TEST_SAGEMAKER=True python -m pytest -n auto -s -v ./tests/sagemaker
 # Release stuff
```

README.md

<p align="center">
    <br>
    <img src="./docs/source/en/imgs/diffusers_library.jpg" width="400"/>
    <br>
<p>
<p align="center">
🤗 Diffusers provides pretrained diffusion models across multiple modalities, such as vision and audio, and serves as a modular toolbox for inference and training of diffusion models.
More precisely, 🤗 Diffusers offers:
- State-of-the-art diffusion pipelines that can be run in inference with just a couple of lines of code (see [src/diffusers/pipelines](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines)). Check [this overview](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/README.md#pipelines-summary) to see all supported pipelines and their corresponding official papers.
- Various noise schedulers that can be used interchangeably for the preferred speed vs. quality trade-off in inference (see [src/diffusers/schedulers](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers)).
- Multiple types of models, such as UNet, that can be used as building blocks in an end-to-end diffusion system (see [src/diffusers/models](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models)).
- Training examples to show how to train the most popular diffusion model tasks (see [examples](https://github.com/huggingface/diffusers/tree/main/examples), *e.g.* [unconditional-image-generation](https://github.com/huggingface/diffusers/tree/main/examples/unconditional_image_generation)).
## Installation

### For PyTorch

**With `pip`** (official package)

```bash
pip install --upgrade diffusers[torch]
```

**Note**: If you want to run PyTorch on GPU on a CUDA-compatible machine, please make sure to install the corresponding `torch` version from the
[official website](https://pytorch.org/).

**With `conda`** (maintained by the community)

```sh
conda install -c conda-forge diffusers
```

### For Flax

**With `pip`**

```bash
pip install --upgrade diffusers[flax]
```

**Apple Silicon (M1/M2) support**

Please, refer to [the documentation](https://huggingface.co/docs/diffusers/optimization/mps).

## Contributing

We ❤️ contributions from the open-source community!
If you want to contribute to this library, please check out our [Contribution guide](https://github.com/huggingface/diffusers/blob/main/CONTRIBUTING.md).
You can look out for [issues](https://github.com/huggingface/diffusers/issues) you'd like to tackle to contribute to the library.
- See [Good first issues](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) for general opportunities to contribute
- See [New model/pipeline](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+pipeline%2Fmodel%22) to contribute exciting new diffusion models / diffusion pipelines
- See [New scheduler](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+scheduler%22)

Also, say 👋 in our public Discord channel <a href="https://discord.gg/G7tWnz98XR"><img alt="Join us on Discord" src="https://img.shields.io/discord/823813159592001537?color=5865F2&logo=discord&logoColor=white"></a>. We discuss the hottest trends about diffusion models, help each other with contributions, personal projects or just hang out ☕.
## Quickstart

In order to get started, we recommend taking a look at two notebooks:

- The [Getting started with Diffusers](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb) notebook, which showcases an end-to-end example of usage for diffusion models, schedulers and pipelines.
  Take a look at this notebook to learn how to use the pipeline abstraction, which takes care of everything (model, scheduler, noise handling) for you, and also to understand each independent building block in the library.
- The [Training a diffusers model](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb) notebook summarizes diffusion models training methods. This notebook takes a step-by-step approach to training your diffusion models on an image dataset, with explanatory graphics.
## Stable Diffusion is fully compatible with `diffusers`!
Stable Diffusion is a text-to-image latent diffusion model created by the researchers and engineers from [CompVis](https://github.com/CompVis), [Stability AI](https://stability.ai/), [LAION](https://laion.ai/) and [RunwayML](https://runwayml.com/). It's trained on 512x512 images from a subset of the [LAION-5B](https://laion.ai/blog/laion-5b/) database. This model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With its 860M UNet and 123M text encoder, the model is relatively lightweight and runs on a GPU with at least 4GB VRAM.
See the [model card](https://huggingface.co/CompVis/stable-diffusion) for more information.
### Text-to-Image generation with Stable Diffusion
First, let's install the required packages:
```bash
pip install --upgrade diffusers transformers accelerate
```
We recommend using the model in [half-precision (`fp16`)](https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/) as it gives almost always the same results as full
precision while being roughly twice as fast and requiring half the amount of GPU RAM.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
```
#### Running the model locally
You can also simply download the model folder and pass the path to the local folder to the `StableDiffusionPipeline`.
```bash
git lfs install
git clone https://huggingface.co/runwayml/stable-diffusion-v1-5
```
Assuming the folder is stored locally under `./stable-diffusion-v1-5`, you can run stable diffusion
as follows:
```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("./stable-diffusion-v1-5")
pipe = pipe.to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
```
If you are limited by GPU memory, you might want to consider chunking the attention computation in addition
to using `fp16`.
The following snippet should result in less than 4GB VRAM.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
pipe.enable_attention_slicing()
image = pipe(prompt).images[0]
```
If you wish to use a different scheduler (e.g.: DDIM, LMS, PNDM/PLMS), you can either swap it on an existing pipeline, as below, or instantiate it before the pipeline and pass it to `from_pretrained`.
```python
from diffusers import LMSDiscreteScheduler

# reuse the `pipe` from the previous snippet and swap in the new scheduler
pipe.scheduler = LMSDiscreteScheduler.from_config(pipe.scheduler.config)
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
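To follow the `from_pretrained` route instead, a minimal sketch (assuming the checkpoint stores its scheduler config in a `scheduler` subfolder, as the official Stable Diffusion repos do):
```python
import torch
from diffusers import StableDiffusionPipeline, LMSDiscreteScheduler

# instantiate the scheduler first, then hand it to the pipeline
scheduler = LMSDiscreteScheduler.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", scheduler=scheduler, torch_dtype=torch.float16
)
```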
If you want to run Stable Diffusion on CPU or you want to have maximum precision on GPU,
please run the model in the default *full-precision* setting:
```python
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# disable the following line if you run on CPU
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
### JAX/Flax
Diffusers offers a JAX / Flax implementation of Stable Diffusion for very fast inference. JAX shines especially on TPU hardware because each TPU server has 8 accelerators working in parallel, but it runs great on GPUs too.
Running the pipeline with the default PNDMScheduler:
```python
import jax
import numpy as np
from flax.jax_utils import replicate
from flax.training.common_utils import shard

from diffusers import FlaxStableDiffusionPipeline

pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", revision="flax", dtype=jax.numpy.bfloat16
)

prompt = "a photo of an astronaut riding a horse on mars"

prng_seed = jax.random.PRNGKey(0)
num_inference_steps = 50

num_samples = jax.device_count()
prompt = num_samples * [prompt]
prompt_ids = pipeline.prepare_inputs(prompt)

# shard inputs and rng
params = replicate(params)
prng_seed = jax.random.split(prng_seed, jax.device_count())
prompt_ids = shard(prompt_ids)

images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images
images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:])))
```
**Note**:
If you are limited by TPU memory, please make sure to load the `FlaxStableDiffusionPipeline` in `bfloat16` precision instead of the default `float32` precision as done above. You can do so by telling diffusers to load the weights from "bf16" branch.
```python
import jax
import numpy as np
from flax.jax_utils import replicate
from flax.training.common_utils import shard

from diffusers import FlaxStableDiffusionPipeline

pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", revision="bf16", dtype=jax.numpy.bfloat16
)

prompt = "a photo of an astronaut riding a horse on mars"

prng_seed = jax.random.PRNGKey(0)
num_inference_steps = 50

num_samples = jax.device_count()
prompt = num_samples * [prompt]
prompt_ids = pipeline.prepare_inputs(prompt)

# shard inputs and rng
params = replicate(params)
prng_seed = jax.random.split(prng_seed, jax.device_count())
prompt_ids = shard(prompt_ids)

images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images
images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:])))
```
Diffusers also has an Image-to-Image generation pipeline with Flax/JAX:
```python
import jax
import numpy as np
import jax.numpy as jnp
from flax.jax_utils import replicate
from flax.training.common_utils import shard
import requests
from io import BytesIO
from PIL import Image
from diffusers import FlaxStableDiffusionImg2ImgPipeline
def create_key(seed=0):
    return jax.random.PRNGKey(seed)


rng = create_key(0)
url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
response = requests.get(url)
init_img = Image.open(BytesIO(response.content)).convert("RGB")
init_img = init_img.resize((768, 512))
prompts = "A fantasy landscape, trending on artstation"
pipeline, params = FlaxStableDiffusionImg2ImgPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="flax",
    dtype=jnp.bfloat16,
)

num_samples = jax.device_count()
rng = jax.random.split(rng, jax.device_count())
prompt_ids, processed_image = pipeline.prepare_inputs(prompt=[prompts] * num_samples, image=[init_img] * num_samples)
p_params = replicate(params)
prompt_ids = shard(prompt_ids)
processed_image = shard(processed_image)

output = pipeline(
    prompt_ids=prompt_ids,
    image=processed_image,
    params=p_params,
    prng_seed=rng,
    strength=0.75,
    num_inference_steps=50,
    jit=True,
    height=512,
    width=768,
).images

output_images = pipeline.numpy_to_pil(np.asarray(output.reshape((num_samples,) + output.shape[-3:])))
```
Diffusers also has a Text-guided inpainting pipeline with Flax/JAX:
```python
import jax
import numpy as np
from flax.jax_utils import replicate
from flax.training.common_utils import shard
import PIL
import requests
from io import BytesIO
from diffusers import FlaxStableDiffusionInpaintPipeline
def download_image(url):
    response = requests.get(url)
    return PIL.Image.open(BytesIO(response.content)).convert("RGB")
img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
init_image = download_image(img_url).resize((512, 512))
mask_image = download_image(mask_url).resize((512, 512))
pipeline, params = FlaxStableDiffusionInpaintPipeline.from_pretrained("xvjiarui/stable-diffusion-2-inpainting")
prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
prng_seed = jax.random.PRNGKey(0)
num_inference_steps = 50
num_samples = jax.device_count()
prompt = num_samples * [prompt]
init_image = num_samples * [init_image]
mask_image = num_samples * [mask_image]
prompt_ids, processed_masked_images, processed_masks = pipeline.prepare_inputs(prompt, init_image, mask_image)
# shard inputs and rng
params = replicate(params)
prng_seed = jax.random.split(prng_seed, jax.device_count())
prompt_ids = shard(prompt_ids)
processed_masked_images = shard(processed_masked_images)
processed_masks = shard(processed_masks)
images = pipeline(prompt_ids, processed_masks, processed_masked_images, params, prng_seed, num_inference_steps, jit=True).images
images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:])))
```
### Image-to-Image text-guided generation with Stable Diffusion
The `StableDiffusionImg2ImgPipeline` lets you pass a text prompt and an initial image to condition the generation of new images.
```python
import requests
import torch
from PIL import Image
from io import BytesIO
from diffusers import StableDiffusionImg2ImgPipeline
# load the pipeline
device = "cuda"
model_id_or_path = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16)
# or download via git clone https://huggingface.co/runwayml/stable-diffusion-v1-5
# and pass `model_id_or_path="./stable-diffusion-v1-5"`.
pipe = pipe.to(device)
# let's download an initial image
url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
response = requests.get(url)
init_image = Image.open(BytesIO(response.content)).convert("RGB")
init_image = init_image.resize((768, 512))
prompt = "A fantasy landscape, trending on artstation"
images = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images
images[0].save("fantasy_landscape.png")
```
You can also run this example on colab [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/image_2_image_using_diffusers.ipynb)
### In-painting using Stable Diffusion
The `StableDiffusionInpaintPipeline` lets you edit specific parts of an image by providing a mask and a text prompt.
```python
import PIL
import requests
import torch
from io import BytesIO
from diffusers import StableDiffusionInpaintPipeline
def download_image(url):
    response = requests.get(url)
    return PIL.Image.open(BytesIO(response.content)).convert("RGB")
img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
init_image = download_image(img_url).resize((512, 512))
mask_image = download_image(mask_url).resize((512, 512))
pipe = StableDiffusionInpaintPipeline.from_pretrained("runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0]
```
### Tweak prompts reusing seeds and latents
You can generate your own latents to reproduce results, or tweak your prompt on a specific result you liked.
Please have a look at [Reusing seeds for deterministic generation](https://huggingface.co/docs/diffusers/main/en/using-diffusers/reusing_seeds).
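As a minimal sketch, fixing the `torch.Generator` seed makes the initial latents reproducible, so reusing the same seed with a tweaked prompt lets you iterate on a result you liked (assuming the pipeline setup from the snippets above):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# the seeded generator makes the initial latents deterministic
generator = torch.Generator("cuda").manual_seed(1024)
image = pipe("a photo of an astronaut riding a horse on mars", generator=generator).images[0]

# same seed, tweaked prompt: the composition stays similar while the style changes
generator = torch.Generator("cuda").manual_seed(1024)
image = pipe("a photo of an astronaut riding a horse on mars, oil painting", generator=generator).images[0]
```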
## Fine-Tuning Stable Diffusion
Fine-tuning techniques make it possible to adapt Stable Diffusion to your own dataset, or add new subjects to it. These are some of the techniques supported in `diffusers`:
- Textual Inversion. Capture novel concepts from a small set of sample images, and associate them with new "words" in the embedding space of the text encoder. Please, refer to [our training examples](https://github.com/huggingface/diffusers/tree/main/examples/textual_inversion) or [documentation](https://huggingface.co/docs/diffusers/training/text_inversion) to try for yourself.
- Dreambooth. Another technique to capture new concepts in Stable Diffusion. This method fine-tunes the UNet (and, optionally, also the text encoder) of the pipeline to achieve impressive results. Please, refer to [our training example](https://github.com/huggingface/diffusers/tree/main/examples/dreambooth) and [training report](https://huggingface.co/blog/dreambooth) for additional details and training recommendations.
- Full Stable Diffusion fine-tuning. If you have a more sizable dataset with a specific look or style, you can fine-tune Stable Diffusion so that it outputs images following those examples. This was the approach taken to create [a Pokémon Stable Diffusion model](https://huggingface.co/justinpinkney/pokemon-stable-diffusion) (by Justin Pinkney / Lambda Labs), [a Japanese specific version of Stable Diffusion](https://huggingface.co/spaces/rinna/japanese-stable-diffusion) (by [Rinna Co.](https://github.com/rinnakk/japanese-stable-diffusion/)), and others. You can start at [our text-to-image fine-tuning example](https://github.com/huggingface/diffusers/tree/main/examples/text_to_image) and go from there.
## Stable Diffusion Community Pipelines
The release of Stable Diffusion as an open source model has fostered a lot of interesting ideas and experimentation.
Our [Community Examples folder](https://github.com/huggingface/diffusers/tree/main/examples/community) contains many ideas worth exploring, like interpolating to create animated videos, using CLIP Guidance for additional prompt fidelity, term weighting, and much more! [Take a look](https://huggingface.co/docs/diffusers/using-diffusers/custom_pipeline_overview) and [contribute your own](https://huggingface.co/docs/diffusers/using-diffusers/contribute_pipeline).
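Custom pipelines from that folder can be loaded through the `custom_pipeline` argument of `DiffusionPipeline.from_pretrained`; a minimal sketch, using the long-prompt-weighting community pipeline as an example:
```python
from diffusers import DiffusionPipeline

# `custom_pipeline` pulls the pipeline code from the community examples folder
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    custom_pipeline="lpw_stable_diffusion",
)
```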
## Other Examples
There are many ways to try running Diffusers! Here we outline code-focused tools (primarily using `DiffusionPipeline`s and Google Colab) and interactive web-tools.
### Running Code
If you want to run the code yourself 💻, you can try out:
- [Text-to-Image Latent Diffusion](https://huggingface.co/CompVis/ldm-text2im-large-256)
```python
# !pip install diffusers["torch"] transformers
from diffusers import DiffusionPipeline

device = "cuda"
model_id = "CompVis/ldm-text2im-large-256"

# load model and scheduler
ldm = DiffusionPipeline.from_pretrained(model_id)
ldm = ldm.to(device)

# run pipeline in inference (sample random noise and denoise)
prompt = "A painting of a squirrel eating a burger"
image = ldm([prompt], num_inference_steps=50, eta=0.3, guidance_scale=6).images[0]

# save image
image.save("squirrel.png")
```
- [Unconditional Diffusion with discrete scheduler](https://huggingface.co/google/ddpm-celebahq-256)

```python
# !pip install diffusers["torch"]
from diffusers import DDPMPipeline, DDIMPipeline, PNDMPipeline

model_id = "google/ddpm-celebahq-256"
device = "cuda"

# load model and scheduler
ddpm = DDPMPipeline.from_pretrained(model_id)  # you can replace DDPMPipeline with DDIMPipeline or PNDMPipeline for faster inference
ddpm.to(device)

# run pipeline in inference (sample random noise and denoise)
image = ddpm().images[0]

# save image
image.save("ddpm_generated_image.png")
```
- [Unconditional Latent Diffusion](https://huggingface.co/CompVis/ldm-celebahq-256)
- [Unconditional Diffusion with continuous scheduler](https://huggingface.co/google/ncsnpp-ffhq-1024)
**Other Image Notebooks**:
* [image-to-image generation with Stable Diffusion](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/image_2_image_using_diffusers.ipynb) ![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg),
* [tweak images via repeated Stable Diffusion seeds](https://colab.research.google.com/github/pcuenca/diffusers-examples/blob/main/notebooks/stable-diffusion-seeds.ipynb) ![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg),

**Diffusers for Other Modalities**:
* [Molecule conformation generation](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/geodiff_molecule_conformation.ipynb) ![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg),
* [Model-based reinforcement learning](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/reinforcement_learning_with_diffusers.ipynb) ![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg),
### Web Demos
If you just want to play around with some web demos, you can try out the following 🚀 Spaces:
| Model | Hugging Face Spaces |
|-------------------------------- |------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Text-to-Image Latent Diffusion | [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/CompVis/text2img-latent-diffusion) |
| Faces generator | [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/CompVis/celeba-latent-diffusion) |
| DDPM with different schedulers | [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/fusing/celeba-diffusion) |
| Conditional generation from sketch | [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/huggingface/diffuse-the-rest) |
| Composable diffusion | [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/Shuang59/Composable-Diffusion) |
## Definitions
**Models**: Neural network that models $p_\theta(\mathbf{x}_{t-1}|\mathbf{x}_t)$ (see image below) and is trained end-to-end to *denoise* a noisy input to an image.
*Examples*: UNet, Conditioned UNet, 3D UNet, Transformer UNet
<p align="center">
<img src="https://user-images.githubusercontent.com/10695622/174349667-04e9e485-793b-429a-affe-096e8199ad5b.png" width="800"/>
<br>
<em> Figure from DDPM paper (https://arxiv.org/abs/2006.11239). </em>
<p>
**Schedulers**: Algorithm class for both **inference** and **training**.
The class provides functionality to compute previous image according to alpha, beta schedule as well as predict noise for training. Also known as **Samplers**.
*Examples*: [DDPM](https://arxiv.org/abs/2006.11239), [DDIM](https://arxiv.org/abs/2010.02502), [PNDM](https://arxiv.org/abs/2202.09778), [DEIS](https://arxiv.org/abs/2204.13902)
<p align="center">
<img src="https://user-images.githubusercontent.com/10695622/174349706-53d58acc-a4d1-4cda-b3e8-432d9dc7ad38.png" width="800"/>
<br>
<em> Sampling and training algorithms. Figure from DDPM paper (https://arxiv.org/abs/2006.11239). </em>
<p>
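To make the model/scheduler interplay concrete, here is a minimal denoising-loop sketch (assuming a checkpoint such as `google/ddpm-cat-256` that stores its components in `unet` and `scheduler` subfolders):
```python
import torch
from diffusers import UNet2DModel, DDPMScheduler

unet = UNet2DModel.from_pretrained("google/ddpm-cat-256", subfolder="unet")
scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256", subfolder="scheduler")

scheduler.set_timesteps(num_inference_steps=50)
sample = torch.randn(1, unet.config.in_channels, unet.config.sample_size, unet.config.sample_size)

for t in scheduler.timesteps:
    with torch.no_grad():
        residual = unet(sample, t).sample  # the model predicts the noise residual
    sample = scheduler.step(residual, t, sample).prev_sample  # the scheduler computes the previous, less noisy sample
```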
**Diffusion Pipeline**: End-to-end pipeline that includes multiple diffusion models, possible text encoders, ...
*Examples*: Glide, Latent-Diffusion, Imagen, DALL-E 2
<p align="center">
<img src="https://user-images.githubusercontent.com/10695622/174348898-481bd7c2-5457-4830-89bc-f0907756f64c.jpeg" width="550"/>
<br>
<em> Figure from ImageGen (https://imagen.research.google/). </em>
<p>
## Philosophy
- Readability and clarity is preferred over highly optimized code. A strong importance is put on providing readable, intuitive and elementary code design. *E.g.*, the provided [schedulers](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers) are separated from the provided [models](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models) and provide well-commented code that can be read alongside the original paper.
- Diffusers is **modality independent** and focuses on providing pretrained models and tools to build systems that generate **continuous outputs**, *e.g.* vision and audio.
- Diffusion models and schedulers are provided as concise, elementary building blocks. In contrast, diffusion pipelines are a collection of end-to-end diffusion systems that can be used out-of-the-box, should stay as close as possible to their original implementation and can include components of another library, such as text-encoders. Examples for diffusion pipelines are [Glide](https://github.com/openai/glide-text2im) and [Latent Diffusion](https://github.com/CompVis/latent-diffusion).
## In the works
For the first release, 🤗 Diffusers focuses on text-to-image diffusion techniques. However, diffusers can be used for much more than that! Over the upcoming releases, we'll be focusing on:
- Diffusers for audio
- Diffusers for reinforcement learning (initial work happening in https://github.com/huggingface/diffusers/pull/105).
- Diffusers for video generation
- Diffusers for molecule generation (initial work happening in https://github.com/huggingface/diffusers/pull/54)
A few pipeline components are already being worked on, namely:
- BDDMPipeline for spectrogram-to-sound vocoding
- GLIDEPipeline to support OpenAI's GLIDE model
- Grad-TTS for text to audio generation / conditional audio generation
We want diffusers to be a toolbox useful for diffusion models in general; if you find yourself limited in any way by the current API, or would like to see additional models, schedulers, or techniques, please open a [GitHub issue](https://github.com/huggingface/diffusers/issues) mentioning what you would like to see.
## Credits
This library concretizes previous work by many different authors and would not have been possible without their great research and implementations. We'd like to thank, in particular, the following implementations which have helped us in our development and without which the API could not have been as polished today:
- @CompVis' latent diffusion models library, available [here](https://github.com/CompVis/latent-diffusion)
- @hojonathanho's original DDPM implementation, available [here](https://github.com/hojonathanho/diffusion), as well as the extremely useful translation into PyTorch by @pesser, available [here](https://github.com/pesser/pytorch_diffusion)
- @ermongroup's DDIM implementation, available [here](https://github.com/ermongroup/ddim).
- @yang-song's Score-VE and Score-VP implementations, available [here](https://github.com/yang-song/score_sde_pytorch)
We also want to thank @heejkoo for the very helpful overview of papers, code and resources on diffusion models, available [here](https://github.com/heejkoo/Awesome-Diffusion-Models) as well as @crowsonkb and @rromb for useful discussions and insights.
## Citation
```bibtex
@misc{von-platen-etal-2022-diffusers,
author = {Patrick von Platen and Suraj Patil and Anton Lozhkov and Pedro Cuenca and Nathan Lambert and Kashif Rasul and Mishig Davaadorj and Thomas Wolf},
title = {Diffusers: State-of-the-art diffusion models},
year = {2022},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/huggingface/diffusers}}
}
```

_typos.toml (new file)

```toml
# Files for typos
# Instruction: https://github.com/marketplace/actions/typos-action#getting-started
[default.extend-identifiers]
[default.extend-words]
NIN="NIN" # NIN is used in scripts/convert_ncsnpp_original_checkpoint_to_diffusers.py
nd="np" # nd may be np (numpy)
parms="parms" # parms is used in scripts/convert_original_stable_diffusion_to_diffusers.py
[files]
extend-exclude = ["_typos.toml"]

```

Dockerfile (Flax CPU, new file)

```Dockerfile
FROM ubuntu:20.04
LABEL maintainer="Hugging Face"
LABEL repository="diffusers"
ENV DEBIAN_FRONTEND=noninteractive
RUN apt update && \
apt install -y bash \
build-essential \
git \
git-lfs \
curl \
ca-certificates \
libsndfile1-dev \
python3.8 \
python3-pip \
python3.8-venv && \
rm -rf /var/lib/apt/lists
# make sure to use venv
RUN python3 -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
# pre-install the heavy dependencies (these can later be overridden by the deps from setup.py)
# follow the instructions here: https://cloud.google.com/tpu/docs/run-in-container#train_a_jax_model_in_a_docker_container
RUN python3 -m pip install --no-cache-dir --upgrade pip && \
python3 -m pip install --upgrade --no-cache-dir \
clu \
"jax[cpu]>=0.2.16,!=0.3.2" \
"flax>=0.4.1" \
"jaxlib>=0.1.65" && \
python3 -m pip install --no-cache-dir \
accelerate \
datasets \
hf-doc-builder \
huggingface-hub \
Jinja2 \
librosa \
numpy \
scipy \
tensorboard \
transformers
CMD ["/bin/bash"]

```

Dockerfile (Flax TPU, new file)

```Dockerfile
FROM ubuntu:20.04
LABEL maintainer="Hugging Face"
LABEL repository="diffusers"
ENV DEBIAN_FRONTEND=noninteractive
RUN apt update && \
apt install -y bash \
build-essential \
git \
git-lfs \
curl \
ca-certificates \
libsndfile1-dev \
python3.8 \
python3-pip \
python3.8-venv && \
rm -rf /var/lib/apt/lists
# make sure to use venv
RUN python3 -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
# pre-install the heavy dependencies (these can later be overridden by the deps from setup.py)
# follow the instructions here: https://cloud.google.com/tpu/docs/run-in-container#train_a_jax_model_in_a_docker_container
RUN python3 -m pip install --no-cache-dir --upgrade pip && \
python3 -m pip install --no-cache-dir \
"jax[tpu]>=0.2.16,!=0.3.2" \
-f https://storage.googleapis.com/jax-releases/libtpu_releases.html && \
python3 -m pip install --upgrade --no-cache-dir \
clu \
"flax>=0.4.1" \
"jaxlib>=0.1.65" && \
python3 -m pip install --no-cache-dir \
accelerate \
datasets \
hf-doc-builder \
huggingface-hub \
Jinja2 \
librosa \
numpy \
scipy \
tensorboard \
transformers
CMD ["/bin/bash"]

```

Dockerfile (ONNX Runtime CPU, new file)

```Dockerfile
FROM ubuntu:20.04
LABEL maintainer="Hugging Face"
LABEL repository="diffusers"
ENV DEBIAN_FRONTEND=noninteractive
RUN apt update && \
apt install -y bash \
build-essential \
git \
git-lfs \
curl \
ca-certificates \
libsndfile1-dev \
python3.8 \
python3-pip \
python3.8-venv && \
rm -rf /var/lib/apt/lists
# make sure to use venv
RUN python3 -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
# pre-install the heavy dependencies (these can later be overridden by the deps from setup.py)
RUN python3 -m pip install --no-cache-dir --upgrade pip && \
python3 -m pip install --no-cache-dir \
torch \
torchvision \
torchaudio \
onnxruntime \
--extra-index-url https://download.pytorch.org/whl/cpu && \
python3 -m pip install --no-cache-dir \
accelerate \
datasets \
hf-doc-builder \
huggingface-hub \
Jinja2 \
librosa \
numpy \
scipy \
tensorboard \
transformers
CMD ["/bin/bash"]

```

Dockerfile (ONNX Runtime CUDA, new file)

```Dockerfile
FROM nvidia/cuda:11.6.2-cudnn8-devel-ubuntu20.04
LABEL maintainer="Hugging Face"
LABEL repository="diffusers"
ENV DEBIAN_FRONTEND=noninteractive
RUN apt update && \
apt install -y bash \
build-essential \
git \
git-lfs \
curl \
ca-certificates \
libsndfile1-dev \
python3.8 \
python3-pip \
python3.8-venv && \
rm -rf /var/lib/apt/lists
# make sure to use venv
RUN python3 -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
# pre-install the heavy dependencies (these can later be overridden by the deps from setup.py)
RUN python3 -m pip install --no-cache-dir --upgrade pip && \
python3 -m pip install --no-cache-dir \
torch \
torchvision \
torchaudio \
"onnxruntime-gpu>=1.13.1" \
--extra-index-url https://download.pytorch.org/whl/cu117 && \
python3 -m pip install --no-cache-dir \
accelerate \
datasets \
hf-doc-builder \
huggingface-hub \
Jinja2 \
librosa \
numpy \
scipy \
tensorboard \
transformers
CMD ["/bin/bash"]

```

Dockerfile (PyTorch CPU, new file)

```Dockerfile
FROM ubuntu:20.04
LABEL maintainer="Hugging Face"
LABEL repository="diffusers"
ENV DEBIAN_FRONTEND=noninteractive
RUN apt update && \
apt install -y bash \
build-essential \
git \
git-lfs \
curl \
ca-certificates \
libsndfile1-dev \
python3.8 \
python3-pip \
python3.8-venv && \
rm -rf /var/lib/apt/lists
# make sure to use venv
RUN python3 -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
# pre-install the heavy dependencies (these can later be overridden by the deps from setup.py)
RUN python3 -m pip install --no-cache-dir --upgrade pip && \
python3 -m pip install --no-cache-dir \
torch \
torchvision \
torchaudio \
--extra-index-url https://download.pytorch.org/whl/cpu && \
python3 -m pip install --no-cache-dir \
accelerate \
datasets \
hf-doc-builder \
huggingface-hub \
Jinja2 \
librosa \
numpy \
scipy \
tensorboard \
transformers
CMD ["/bin/bash"]

```

Dockerfile (PyTorch CUDA, new file)

```Dockerfile
FROM nvidia/cuda:11.7.1-cudnn8-runtime-ubuntu20.04
LABEL maintainer="Hugging Face"
LABEL repository="diffusers"
ENV DEBIAN_FRONTEND=noninteractive
RUN apt update && \
apt install -y bash \
build-essential \
git \
git-lfs \
curl \
ca-certificates \
libsndfile1-dev \
python3.8 \
python3-pip \
python3.8-venv && \
rm -rf /var/lib/apt/lists
# make sure to use venv
RUN python3 -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
# pre-install the heavy dependencies (these can later be overridden by the deps from setup.py)
RUN python3 -m pip install --no-cache-dir --upgrade pip && \
python3 -m pip install --no-cache-dir \
torch \
torchvision \
torchaudio \
--extra-index-url https://download.pytorch.org/whl/cu117 && \
python3 -m pip install --no-cache-dir \
accelerate \
datasets \
hf-doc-builder \
huggingface-hub \
Jinja2 \
librosa \
numpy \
scipy \
tensorboard \
transformers
CMD ["/bin/bash"]

```

docs/README.md (new file)
<!---
Copyright 2022- The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
# Generating the documentation
To generate the documentation, you first have to build it. Several packages are necessary to build the docs; you can
install them with the following command, at the root of the code repository:
```bash
pip install -e ".[docs]"
```
Then you need to install our open source documentation builder tool:
```bash
pip install git+https://github.com/huggingface/doc-builder
```
---
**NOTE**
You only need to generate the documentation to inspect it locally (if you're planning changes and want to
check how they look before committing for instance). You don't have to commit the built documentation.
---
## Previewing the documentation
To preview the docs, first install the `watchdog` module with:
```bash
pip install watchdog
```
Then run the following command:
```bash
doc-builder preview {package_name} {path_to_docs}
```
For example:
```bash
doc-builder preview diffusers docs/source/en
```
The docs will be viewable at [http://localhost:3000](http://localhost:3000). You can also preview the docs once you have opened a PR. You will see a bot add a comment to a link where the documentation with your changes lives.
---
**NOTE**
The `preview` command only works with existing doc files. When you add a completely new file, you need to update `_toctree.yml` & restart `preview` command (`ctrl-c` to stop it & call `doc-builder preview ...` again).
---
## Adding a new element to the navigation bar
Accepted files are Markdown (.md or .mdx).
Create a file with its extension and put it in the source directory. You can then link it to the toc-tree by putting
the filename without the extension in the [`_toctree.yml`](https://github.com/huggingface/diffusers/blob/main/docs/source/_toctree.yml) file.
## Renaming section headers and moving sections
It helps to keep the old links working when renaming the section header and/or moving sections from one document to another. This is because the old links are likely to be used in Issues, Forums, and Social media, and it'd make for a much better user experience if users reading those months later could still easily navigate to the originally intended information.
Therefore, we simply keep a little map of moved sections at the end of the document where the original section was. The key is to preserve the original anchor.
So if you renamed a section from: "Section A" to "Section B", then you can add at the end of the file:
```
Sections that were moved:
[ <a href="#section-b">Section A</a><a id="section-a"></a> ]
```
and of course, if you moved it to another file, then:
```
Sections that were moved:
[ <a href="../new-file#section-b">Section A</a><a id="section-a"></a> ]
```
Use the relative style to link to the new file so that the versioned docs continue to work.
For an example of a rich moved section set please see the very end of [the transformers Trainer doc](https://github.com/huggingface/transformers/blob/main/docs/source/en/main_classes/trainer.mdx).
## Writing Documentation - Specification
The `huggingface/diffusers` documentation follows the
[Google documentation](https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html) style for docstrings,
although we can write them directly in Markdown.
### Adding a new tutorial
Adding a new tutorial or section is done in two steps:
- Add a new file under `docs/source`. This file should be Markdown (.md or .mdx).
- Link that file in `docs/source/_toctree.yml` on the correct toc-tree.
Make sure to put your new file under the proper section. It's unlikely to go in the first section (*Get Started*), so
depending on the intended targets (beginners, more advanced users, or researchers) it should go in sections two, three, or four.
### Adding a new pipeline/scheduler
When adding a new pipeline:
- create a file `xxx.mdx` under `docs/source/api/pipelines` (don't hesitate to copy an existing file as template).
- Link that file in the (*Diffusers Summary*) section of `docs/source/api/pipelines/overview.mdx`, along with a link to the paper and a Colab notebook (if available).
- Write a short overview of the diffusion model:
- Overview with paper & authors
- Paper abstract
- Tips and tricks and how to use it best
- Possibly an end-to-end example of how to use it
- Add all the pipeline classes that should be linked in the diffusion model. These classes should be added using our Markdown syntax. By default as follows:
```
## XXXPipeline
[[autodoc]] XXXPipeline
- all
- __call__
```
This will include every documented public method of the pipeline, as well as the `__call__` method, which is not documented by default. If you want to add additional methods that are not documented, you can list them after `all`:
```
[[autodoc]] XXXPipeline
- all
- __call__
- enable_attention_slicing
- disable_attention_slicing
- enable_xformers_memory_efficient_attention
- disable_xformers_memory_efficient_attention
```
You can follow the same process to create a new scheduler under the `docs/source/api/schedulers` folder.
### Writing source documentation
Values that should be put in `code` should either be surrounded by backticks: \`like so\`. Note that argument names
and objects like True, None, or any strings should usually be put in `code`.
When mentioning a class, function, or method, it is recommended to use our syntax for internal links so that our tool
adds a link to its documentation with this syntax: \[\`XXXClass\`\] or \[\`function\`\]. This requires the class or
function to be in the main package.
If you want to create a link to some internal class or function, you need to
provide its path. For instance: \[\`pipelines.ImagePipelineOutput\`\]. This will be converted into a link with
`pipelines.ImagePipelineOutput` in the description. To get rid of the path and only keep the name of the object you are
linking to in the description, add a ~: \[\`~pipelines.ImagePipelineOutput\`\] will generate a link with `ImagePipelineOutput` in the description.
The same works for methods, so you can use either \[\`XXXClass.method\`\] or \[\`~XXXClass.method\`\].
#### Defining arguments in a method
Arguments should be defined with the `Args:` (or `Arguments:` or `Parameters:`) prefix, followed by a line return and
an indentation. The argument should be followed by its type, with its shape if it is a tensor, a colon, and its
description:
```
Args:
    n_layers (`int`): The number of layers of the model.
```
If the description is too long to fit in one line, another indentation is necessary before writing the description
after the argument.
Here's an example showcasing everything so far:
```
Args:
    input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
        Indices of input sequence tokens in the vocabulary.

        Indices can be obtained using [`AlbertTokenizer`]. See [`~PreTrainedTokenizer.encode`] and
        [`~PreTrainedTokenizer.__call__`] for details.

        [What are input IDs?](../glossary#input-ids)
```
For optional arguments or arguments with defaults we follow the following syntax: imagine we have a function with the
following signature:
```
def my_function(x: str = None, a: float = 1):
```
then its documentation should look like this:
```
Args:
    x (`str`, *optional*):
        This argument controls ...
    a (`float`, *optional*, defaults to 1):
        This argument is used to ...
```
Note that we always omit the "defaults to \`None\`" when None is the default for any argument. Also note that even
if the first line describing your argument type and its default gets long, you can't break it into several lines. You
can, however, write as many lines as you want in the indented description (see the example above with `input_ids`).
#### Writing a multi-line code block
Multi-line code blocks can be useful for displaying examples. They are done between two lines of three backticks as usual in Markdown:
````
```
# first line of code
# second line
# etc
```
````
#### Writing a return block
The return block should be introduced with the `Returns:` prefix, followed by a line return and an indentation.
The first line should be the type of the return, followed by a line return. No need to indent further for the elements
building the return.
Here's an example of a single value return:
```
Returns:
    `List[int]`: A list of integers in the range [0, 1] --- 1 for a special token, 0 for a sequence token.
```
Here's an example of a tuple return, comprising several objects:
```
Returns:
    `tuple(torch.FloatTensor)` comprising various elements depending on the configuration ([`BertConfig`]) and inputs:
    - **loss** (*optional*, returned when `masked_lm_labels` is provided) `torch.FloatTensor` of shape `(1,)` --
      Total loss is the sum of the masked language modeling loss and the next sequence prediction (classification) loss.
    - **prediction_scores** (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`) --
      Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
```
#### Adding an image
Due to the rapidly growing repository, it is important to make sure that no files that would significantly weigh down the repository are added. This includes images, videos, and other non-text files. We prefer to leverage an hf.co-hosted `dataset`, like
the ones hosted on [`hf-internal-testing`](https://huggingface.co/hf-internal-testing), in which to place these files and reference
them by URL. We recommend putting them in the following dataset: [huggingface/documentation-images](https://huggingface.co/datasets/huggingface/documentation-images).
If you are making an external contribution, feel free to add the images to your PR and ask a Hugging Face member to migrate them
to this dataset.
## Styling the docstring
We have an automatic script running with the `make style` command that will make sure that:
- the docstrings fully take advantage of the line width
- all code examples are formatted using black, like the code of the Transformers library
This script may have some weird failures if you made a syntax mistake or if you uncover a bug. Therefore, it's
recommended to commit your changes before running `make style`, so you can revert the changes done by that script
easily.

docs/TRANSLATING.md Normal file

@@ -0,0 +1,57 @@
### Translating the Diffusers documentation into your language
As part of our mission to democratize machine learning, we'd love to make the Diffusers library available in many more languages! Follow the steps below if you want to help translate the documentation into your language 🙏.
**🗞️ Open an issue**
To get started, navigate to the [Issues](https://github.com/huggingface/diffusers/issues) page of this repo and check if anyone else has opened an issue for your language. If not, open a new issue by selecting the "Translation template" from the "New issue" button.
Once an issue exists, post a comment to indicate which chapters you'd like to work on, and we'll add your name to the list.
**🍴 Fork the repository**
First, you'll need to [fork the Diffusers repo](https://docs.github.com/en/get-started/quickstart/fork-a-repo). You can do this by clicking on the **Fork** button on the top-right corner of this repo's page.
Once you've forked the repo, you'll want to get the files on your local machine for editing. You can do that by cloning the fork with Git as follows:
```bash
git clone https://github.com/YOUR-USERNAME/diffusers.git
```
**📋 Copy-paste the English version with a new language code**
The documentation files all live in one top-level directory:
- [`docs/source`](https://github.com/huggingface/diffusers/tree/main/docs/source): All the documentation materials are organized here by language.
You'll only need to copy the files in the [`docs/source/en`](https://github.com/huggingface/diffusers/tree/main/docs/source/en) directory, so first navigate to your fork of the repo and run the following:
```bash
cd ~/path/to/diffusers/docs
cp -r source/en source/LANG-ID
```
Here, `LANG-ID` should be one of the ISO 639-1 or ISO 639-2 language codes -- see [here](https://www.loc.gov/standards/iso639-2/php/code_list.php) for a handy table.
**✍️ Start translating**
Now comes the fun part: translating the text!
The first thing we recommend is translating the part of the `_toctree.yml` file that corresponds to your doc chapter. This file is used to render the table of contents on the website.
> 🙋 If the `_toctree.yml` file doesn't yet exist for your language, you can create one by copy-pasting from the English version and deleting the sections unrelated to your chapter. Just make sure it exists in the `docs/source/LANG-ID/` directory!
The fields you should add are `local` (with the name of the file containing the translation; e.g. `autoclass_tutorial`), and `title` (with the title of the doc in your language; e.g. `Load pretrained instances with an AutoClass`) -- as a reference, here is the `_toctree.yml` for [English](https://github.com/huggingface/diffusers/blob/main/docs/source/en/_toctree.yml):
```yaml
- sections:
  - local: pipeline_tutorial # Do not change this! Use the same name for your .md file
    title: Pipelines for inference # Translate this!
  ...
  title: Tutorials # Translate this!
```
Once you have translated the `_toctree.yml` file, you can start translating the [MDX](https://mdxjs.com/) files associated with your docs chapter.
> 🙋 If you'd like others to help you with the translation, you should [open an issue](https://github.com/huggingface/diffusers/issues) and tag @patrickvonplaten.

docs/source/en/_toctree.yml Normal file

@@ -0,0 +1,204 @@
- sections:
  - local: index
    title: 🧨 Diffusers
  - local: quicktour
    title: Quicktour
  - local: stable_diffusion
    title: Stable Diffusion
  - local: installation
    title: Installation
  title: Get started
- sections:
  - sections:
    - local: using-diffusers/loading
      title: Loading Pipelines, Models, and Schedulers
    - local: using-diffusers/schedulers
      title: Using different Schedulers
    - local: using-diffusers/configuration
      title: Configuring Pipelines, Models, and Schedulers
    - local: using-diffusers/custom_pipeline_overview
      title: Loading and Adding Custom Pipelines
    title: Loading & Hub
  - sections:
    - local: using-diffusers/unconditional_image_generation
      title: Unconditional Image Generation
    - local: using-diffusers/conditional_image_generation
      title: Text-to-Image Generation
    - local: using-diffusers/img2img
      title: Text-Guided Image-to-Image
    - local: using-diffusers/inpaint
      title: Text-Guided Image-Inpainting
    - local: using-diffusers/depth2img
      title: Text-Guided Depth-to-Image
    - local: using-diffusers/reusing_seeds
      title: Reusing seeds for deterministic generation
    - local: using-diffusers/reproducibility
      title: Reproducibility
    - local: using-diffusers/custom_pipeline_examples
      title: Community Pipelines
    - local: using-diffusers/contribute_pipeline
      title: How to contribute a Pipeline
    title: Pipelines for Inference
  - sections:
    - local: using-diffusers/rl
      title: Reinforcement Learning
    - local: using-diffusers/audio
      title: Audio
    - local: using-diffusers/other-modalities
      title: Other Modalities
    title: Taking Diffusers Beyond Images
  title: Using Diffusers
- sections:
  - local: optimization/fp16
    title: Memory and Speed
  - local: optimization/xformers
    title: xFormers
  - local: optimization/onnx
    title: ONNX
  - local: optimization/open_vino
    title: OpenVINO
  - local: optimization/mps
    title: MPS
  - local: optimization/habana
    title: Habana Gaudi
  title: Optimization/Special Hardware
- sections:
  - local: training/overview
    title: Overview
  - local: training/unconditional_training
    title: Unconditional Image Generation
  - local: training/text_inversion
    title: Textual Inversion
  - local: training/dreambooth
    title: Dreambooth
  - local: training/text2image
    title: Text-to-image fine-tuning
  - local: training/lora
    title: LoRA Support in Diffusers
  title: Training
- sections:
  - local: conceptual/philosophy
    title: Philosophy
  - local: conceptual/contribution
    title: How to contribute?
  title: Conceptual Guides
- sections:
  - sections:
    - local: api/models
      title: Models
    - local: api/diffusion_pipeline
      title: Diffusion Pipeline
    - local: api/logging
      title: Logging
    - local: api/configuration
      title: Configuration
    - local: api/outputs
      title: Outputs
    - local: api/loaders
      title: Loaders
    title: Main Classes
  - sections:
    - local: api/pipelines/overview
      title: Overview
    - local: api/pipelines/alt_diffusion
      title: AltDiffusion
    - local: api/pipelines/audio_diffusion
      title: Audio Diffusion
    - local: api/pipelines/cycle_diffusion
      title: Cycle Diffusion
    - local: api/pipelines/dance_diffusion
      title: Dance Diffusion
    - local: api/pipelines/ddim
      title: DDIM
    - local: api/pipelines/ddpm
      title: DDPM
    - local: api/pipelines/dit
      title: DiT
    - local: api/pipelines/latent_diffusion
      title: Latent Diffusion
    - local: api/pipelines/paint_by_example
      title: PaintByExample
    - local: api/pipelines/pndm
      title: PNDM
    - local: api/pipelines/repaint
      title: RePaint
    - local: api/pipelines/stable_diffusion_safe
      title: Safe Stable Diffusion
    - local: api/pipelines/score_sde_ve
      title: Score SDE VE
    - sections:
      - local: api/pipelines/stable_diffusion/overview
        title: Overview
      - local: api/pipelines/stable_diffusion/text2img
        title: Text-to-Image
      - local: api/pipelines/stable_diffusion/img2img
        title: Image-to-Image
      - local: api/pipelines/stable_diffusion/inpaint
        title: Inpaint
      - local: api/pipelines/stable_diffusion/depth2img
        title: Depth-to-Image
      - local: api/pipelines/stable_diffusion/image_variation
        title: Image-Variation
      - local: api/pipelines/stable_diffusion/upscale
        title: Super-Resolution
      - local: api/pipelines/stable_diffusion/pix2pix
        title: InstructPix2Pix
      title: Stable Diffusion
    - local: api/pipelines/stable_diffusion_2
      title: Stable Diffusion 2
    - local: api/pipelines/stochastic_karras_ve
      title: Stochastic Karras VE
    - local: api/pipelines/unclip
      title: UnCLIP
    - local: api/pipelines/latent_diffusion_uncond
      title: Unconditional Latent Diffusion
    - local: api/pipelines/versatile_diffusion
      title: Versatile Diffusion
    - local: api/pipelines/vq_diffusion
      title: VQ Diffusion
    title: Pipelines
  - sections:
    - local: api/schedulers/overview
      title: Overview
    - local: api/schedulers/ddim
      title: DDIM
    - local: api/schedulers/ddpm
      title: DDPM
    - local: api/schedulers/deis
      title: DEIS
    - local: api/schedulers/dpm_discrete
      title: DPM Discrete Scheduler
    - local: api/schedulers/dpm_discrete_ancestral
      title: DPM Discrete Scheduler with ancestral sampling
    - local: api/schedulers/euler_ancestral
      title: Euler Ancestral Scheduler
    - local: api/schedulers/euler
      title: Euler scheduler
    - local: api/schedulers/heun
      title: Heun Scheduler
    - local: api/schedulers/ipndm
      title: IPNDM
    - local: api/schedulers/lms_discrete
      title: Linear Multistep
    - local: api/schedulers/multistep_dpm_solver
      title: Multistep DPM-Solver
    - local: api/schedulers/pndm
      title: PNDM
    - local: api/schedulers/repaint
      title: RePaint Scheduler
    - local: api/schedulers/singlestep_dpm_solver
      title: Singlestep DPM-Solver
    - local: api/schedulers/stochastic_karras_ve
      title: Stochastic Karras VE
    - local: api/schedulers/score_sde_ve
      title: VE-SDE
    - local: api/schedulers/score_sde_vp
      title: VP-SDE
    - local: api/schedulers/vq_diffusion
      title: VQDiffusionScheduler
    title: Schedulers
  - sections:
    - local: api/experimental/rl
      title: RL Planning
    title: Experimental Features
  title: API


@@ -0,0 +1,23 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Configuration
In Diffusers, schedulers of type [`schedulers.scheduling_utils.SchedulerMixin`] and models of type [`ModelMixin`] inherit from [`ConfigMixin`], which conveniently takes care of storing all the parameters that are
passed to their respective `__init__` methods in a JSON configuration file.
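As a minimal sketch of what this enables (the scheduler and its arguments here are just examples):
```python
from diffusers import DDIMScheduler

scheduler = DDIMScheduler(beta_schedule="scaled_linear")

# every `__init__` argument is recorded on the config object
print(scheduler.config.beta_schedule)  # "scaled_linear"

# `save_config` writes a scheduler_config.json into the directory
scheduler.save_config("ddim-config")

# `load_config` reads the JSON back as a dict; `from_config` re-instantiates the class
config = DDIMScheduler.load_config("ddim-config")
scheduler = DDIMScheduler.from_config(config)
```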
## ConfigMixin
[[autodoc]] ConfigMixin
- load_config
- from_config
- save_config


@@ -0,0 +1,46 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Pipelines
The [`DiffusionPipeline`] is the easiest way to load any pretrained diffusion pipeline from the [Hub](https://huggingface.co/models?library=diffusers) and to use it in inference.
<Tip>
One should not use the [`DiffusionPipeline`] class for training or fine-tuning a diffusion model. Individual
components of diffusion pipelines are usually trained individually, so we suggest working directly
with [`UNet2DModel`] and [`UNet2DConditionModel`].
</Tip>
Any diffusion pipeline that is loaded with [`~DiffusionPipeline.from_pretrained`] will automatically
detect the pipeline type, *e.g.* [`StableDiffusionPipeline`] and consequently load each component of the
pipeline and pass them into the `__init__` function of the pipeline, *e.g.* [`~StableDiffusionPipeline.__init__`].
Any pipeline object can be saved locally with [`~DiffusionPipeline.save_pretrained`].
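As a minimal sketch (the checkpoint name is only an example):
```python
from diffusers import DiffusionPipeline

# `from_pretrained` reads the repository's `model_index.json`, detects the
# pipeline class, and instantiates it with all of its components
pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
print(pipeline.__class__.__name__)  # StableDiffusionPipeline

# `save_pretrained` writes every component plus the pipeline config to disk
pipeline.save_pretrained("./my-pipeline")
```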
## DiffusionPipeline
[[autodoc]] DiffusionPipeline
- all
- __call__
- device
- to
## ImagePipelineOutput
By default diffusion pipelines return an object of class
[[autodoc]] pipelines.ImagePipelineOutput
## AudioPipelineOutput
By default diffusion pipelines return an object of class
[[autodoc]] pipelines.AudioPipelineOutput


@@ -0,0 +1,15 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# TODO
Coming soon!


@@ -0,0 +1,30 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Loaders
There are many ways to train adapter neural networks for diffusion models, such as
- [Textual Inversion](./training/text_inversion.mdx)
- [LoRA](https://github.com/cloneofsimo/lora)
- [Hypernetworks](https://arxiv.org/abs/1609.09106)
Such adapter neural networks often only consist of a fraction of the number of weights compared
to the pretrained model and as such are very portable. The Diffusers library offers an easy-to-use
API to load such adapter neural networks via the [`loaders.py` module](https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders.py).
**Note**: This module is still highly experimental and prone to future changes.
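As a rough sketch of the intended usage (the weights path below is a placeholder, not a real checkpoint):
```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# the pipeline's UNet inherits from `UNet2DConditionLoadersMixin`, so adapter
# (e.g. LoRA) attention weights can be loaded directly into it
pipe.unet.load_attn_procs("path/to/lora-weights")  # placeholder path
```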
## LoaderMixins
### UNet2DConditionLoadersMixin
[[autodoc]] loaders.UNet2DConditionLoadersMixin


@@ -0,0 +1,98 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Logging
🧨 Diffusers has a centralized logging system, so that you can set up the verbosity of the library easily.
Currently the default verbosity of the library is `WARNING`.
To change the level of verbosity, just use one of the direct setters. For instance, here is how to change the verbosity
to the INFO level.
```python
import diffusers
diffusers.logging.set_verbosity_info()
```
You can also use the environment variable `DIFFUSERS_VERBOSITY` to override the default verbosity. You can set it
to one of the following: `debug`, `info`, `warning`, `error`, `critical`. For example:
```bash
DIFFUSERS_VERBOSITY=error ./myprogram.py
```
Additionally, some `warnings` can be disabled by setting the environment variable
`DIFFUSERS_NO_ADVISORY_WARNINGS` to a true value, like *1*. This will disable any warning that is logged using
[`logger.warning_advice`]. For example:
```bash
DIFFUSERS_NO_ADVISORY_WARNINGS=1 ./myprogram.py
```
Here is an example of how to use the same logger as the library in your own module or script:
```python
from diffusers.utils import logging
logging.set_verbosity_info()
logger = logging.get_logger("diffusers")
logger.info("INFO")
logger.warning("WARN")
```
All the methods of this logging module are documented below; the main ones are
[`logging.get_verbosity`] to get the current level of verbosity in the logger and
[`logging.set_verbosity`] to set the verbosity to the level of your choice (see the short example after the list below). In order from the least
verbose to the most verbose, those levels (with their corresponding int values in parentheses) are:
- `diffusers.logging.CRITICAL` or `diffusers.logging.FATAL` (int value, 50): only reports the most
critical errors.
- `diffusers.logging.ERROR` (int value, 40): only reports errors.
- `diffusers.logging.WARNING` or `diffusers.logging.WARN` (int value, 30): only reports errors and
warnings. This is the default level used by the library.
- `diffusers.logging.INFO` (int value, 20): reports errors, warnings, and basic information.
- `diffusers.logging.DEBUG` (int value, 10): reports all information.
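For example, to query the current level and then switch to `DEBUG`:
```python
import diffusers

current_level = diffusers.logging.get_verbosity()  # e.g. 30 for WARNING
diffusers.logging.set_verbosity(diffusers.logging.DEBUG)
```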
By default, `tqdm` progress bars will be displayed during model download. [`logging.disable_progress_bar`] and [`logging.enable_progress_bar`] can be used to suppress or unsuppress this behavior.
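For example:
```python
from diffusers.utils import logging

logging.disable_progress_bar()  # hide tqdm bars, e.g. during model download
logging.enable_progress_bar()   # show them again
```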
## Base setters
[[autodoc]] logging.set_verbosity_error
[[autodoc]] logging.set_verbosity_warning
[[autodoc]] logging.set_verbosity_info
[[autodoc]] logging.set_verbosity_debug
## Other functions
[[autodoc]] logging.get_verbosity
[[autodoc]] logging.set_verbosity
[[autodoc]] logging.get_logger
[[autodoc]] logging.enable_default_handler
[[autodoc]] logging.disable_default_handler
[[autodoc]] logging.enable_explicit_format
[[autodoc]] logging.reset_format
[[autodoc]] logging.enable_progress_bar
[[autodoc]] logging.disable_progress_bar


@@ -0,0 +1,83 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Models
Diffusers contains pretrained models for popular algorithms and modules for creating the next set of diffusion models.
The primary function of these models is to denoise an input sample by modeling the distribution $p_\theta(\mathbf{x}_{t-1}|\mathbf{x}_t)$.
The models are built on the base class [`ModelMixin`], which is a `torch.nn.Module` with basic functionality for saving and loading models, both locally and from the Hugging Face Hub.
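For instance, a single model component can be loaded and saved like this (a minimal sketch; the checkpoint and subfolder are just examples):
```python
from diffusers import UNet2DModel

# `from_pretrained` / `save_pretrained` come from `ModelMixin`
model = UNet2DModel.from_pretrained("google/ddpm-cifar10-32", subfolder="unet")
model.save_pretrained("./my-unet")
```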
## ModelMixin
[[autodoc]] ModelMixin
## UNet2DOutput
[[autodoc]] models.unet_2d.UNet2DOutput
## UNet2DModel
[[autodoc]] UNet2DModel
## UNet1DOutput
[[autodoc]] models.unet_1d.UNet1DOutput
## UNet1DModel
[[autodoc]] UNet1DModel
## UNet2DConditionOutput
[[autodoc]] models.unet_2d_condition.UNet2DConditionOutput
## UNet2DConditionModel
[[autodoc]] UNet2DConditionModel
## DecoderOutput
[[autodoc]] models.vae.DecoderOutput
## VQEncoderOutput
[[autodoc]] models.vq_model.VQEncoderOutput
## VQModel
[[autodoc]] VQModel
## AutoencoderKLOutput
[[autodoc]] models.autoencoder_kl.AutoencoderKLOutput
## AutoencoderKL
[[autodoc]] AutoencoderKL
## Transformer2DModel
[[autodoc]] Transformer2DModel
## Transformer2DModelOutput
[[autodoc]] models.transformer_2d.Transformer2DModelOutput
## PriorTransformer
[[autodoc]] models.prior_transformer.PriorTransformer
## PriorTransformerOutput
[[autodoc]] models.prior_transformer.PriorTransformerOutput
## FlaxModelMixin
[[autodoc]] FlaxModelMixin
## FlaxUNet2DConditionOutput
[[autodoc]] models.unet_2d_condition_flax.FlaxUNet2DConditionOutput
## FlaxUNet2DConditionModel
[[autodoc]] FlaxUNet2DConditionModel
## FlaxDecoderOutput
[[autodoc]] models.vae_flax.FlaxDecoderOutput
## FlaxAutoencoderKLOutput
[[autodoc]] models.vae_flax.FlaxAutoencoderKLOutput
## FlaxAutoencoderKL
[[autodoc]] FlaxAutoencoderKL


@@ -0,0 +1,55 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# BaseOutputs
All models have outputs that are instances of subclasses of [`~utils.BaseOutput`]. Those are
data structures containing all the information returned by the model, but that can also be used as tuples or
dictionaries.
Let's see how this looks in an example:
```python
from diffusers import DDIMPipeline
pipeline = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32")
outputs = pipeline()
```
The `outputs` object is a [`~pipelines.ImagePipelineOutput`]; as we can see in the
documentation of that class below, it has an `images` attribute.
You can access each attribute as you would usually do, and if that attribute has not been returned by the model, you will get `None`:
```python
outputs.images
```
or via keyword lookup
```python
outputs["images"]
```
When considering our `outputs` object as a tuple, it only considers the attributes that don't have `None` values.
Here, for instance, we could retrieve the images via indexing:
```python
outputs[:1]
```
which will return the tuple `(outputs.images,)`.
## BaseOutput
[[autodoc]] utils.BaseOutput
- to_tuple


@@ -0,0 +1,83 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# AltDiffusion
AltDiffusion was proposed in [AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities](https://arxiv.org/abs/2211.06679) by Zhongzhi Chen, Guang Liu, Bo-Wen Zhang, Fulong Ye, Qinghong Yang, Ledell Wu
The abstract of the paper is the following:
*In this work, we present a conceptually simple and effective method to train a strong bilingual multimodal representation model. Starting from the pretrained multimodal representation model CLIP released by OpenAI, we switched its text encoder with a pretrained multilingual text encoder XLM-R, and aligned both languages and image representations by a two-stage training schema consisting of teacher learning and contrastive learning. We validate our method through evaluations of a wide range of tasks. We set new state-of-the-art performances on a bunch of tasks including ImageNet-CN, Flicker30k- CN, and COCO-CN. Further, we obtain very close performances with CLIP on almost all tasks, suggesting that one can simply alter the text encoder in CLIP for extended capabilities such as multilingual understanding.*
*Overview*:
| Pipeline | Tasks | Colab | Demo
|---|---|:---:|:---:|
| [pipeline_alt_diffusion.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/alt_diffusion/pipeline_alt_diffusion.py) | *Text-to-Image Generation* | - | -
| [pipeline_alt_diffusion_img2img.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/alt_diffusion/pipeline_alt_diffusion_img2img.py) | *Image-to-Image Text-Guided Generation* | - |-
## Tips
- AltDiffusion is conceptually exactly the same as [Stable Diffusion](./api/pipelines/stable_diffusion/overview).
- *Run AltDiffusion*
AltDiffusion can be tested very easily with the [`AltDiffusionPipeline`], [`AltDiffusionImg2ImgPipeline`] and the `"BAAI/AltDiffusion-m9"` checkpoint exactly in the same way it is shown in the [Conditional Image Generation Guide](./using-diffusers/conditional_image_generation) and the [Image-to-Image Generation Guide](./using-diffusers/img2img).
- *How to load and use different schedulers.*
The AltDiffusion pipeline uses the [`DDIMScheduler`] by default, but `diffusers` provides many other schedulers that can be used with it, such as [`PNDMScheduler`], [`LMSDiscreteScheduler`], [`EulerDiscreteScheduler`], [`EulerAncestralDiscreteScheduler`], etc.
To use a different scheduler, you can either change it via the [`ConfigMixin.from_config`] method or pass the `scheduler` argument to the `from_pretrained` method of the pipeline. For example, to use the [`EulerDiscreteScheduler`], you can do the following:
```python
>>> from diffusers import AltDiffusionPipeline, EulerDiscreteScheduler
>>> pipeline = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9")
>>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config)
>>> # or
>>> euler_scheduler = EulerDiscreteScheduler.from_pretrained("BAAI/AltDiffusion-m9", subfolder="scheduler")
>>> pipeline = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9", scheduler=euler_scheduler)
```
- *How to cover all use cases with a single or multiple pipelines*
If you want to use all possible use cases in a single `DiffusionPipeline` we recommend using the `components` functionality to instantiate all components in the most memory-efficient way:
```python
>>> from diffusers import (
...     AltDiffusionPipeline,
...     AltDiffusionImg2ImgPipeline,
... )
>>> text2img = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9")
>>> img2img = AltDiffusionImg2ImgPipeline(**text2img.components)
>>> # now you can use text2img(...) and img2img(...) just like the call methods of each respective pipeline
```
## AltDiffusionPipelineOutput
[[autodoc]] pipelines.alt_diffusion.AltDiffusionPipelineOutput
- all
- __call__
## AltDiffusionPipeline
[[autodoc]] AltDiffusionPipeline
- all
- __call__
## AltDiffusionImg2ImgPipeline
[[autodoc]] AltDiffusionImg2ImgPipeline
- all
- __call__


@@ -0,0 +1,98 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Audio Diffusion
## Overview
[Audio Diffusion](https://github.com/teticio/audio-diffusion) by Robert Dargavel Smith.
Audio Diffusion leverages the recent advances in image generation using diffusion models by converting audio samples to
and from mel spectrogram images.
The original codebase of this implementation can be found [here](https://github.com/teticio/audio-diffusion), including
training scripts and example notebooks.
## Available Pipelines:
| Pipeline | Tasks | Colab
|---|---|:---:|
| [pipeline_audio_diffusion.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/audio_diffusion/pipeline_audio_diffusion.py) | *Unconditional Audio Generation* | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/teticio/audio-diffusion/blob/master/notebooks/audio_diffusion_pipeline.ipynb) |
## Examples:
### Audio Diffusion
```python
import torch
from IPython.display import Audio
from diffusers import DiffusionPipeline
device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = DiffusionPipeline.from_pretrained("teticio/audio-diffusion-256").to(device)
output = pipe()
display(output.images[0])
display(Audio(output.audios[0], rate=pipe.mel.get_sample_rate()))
```
### Latent Audio Diffusion
```python
import torch
from IPython.display import Audio
from diffusers import DiffusionPipeline
device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = DiffusionPipeline.from_pretrained("teticio/latent-audio-diffusion-256").to(device)
output = pipe()
display(output.images[0])
display(Audio(output.audios[0], rate=pipe.mel.get_sample_rate()))
```
### Audio Diffusion with DDIM (faster)
```python
import torch
from IPython.display import Audio
from diffusers import DiffusionPipeline
device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = DiffusionPipeline.from_pretrained("teticio/audio-diffusion-ddim-256").to(device)
output = pipe()
display(output.images[0])
display(Audio(output.audios[0], rate=pipe.mel.get_sample_rate()))
```
### Variations, in-painting, out-painting etc.
```python
output = pipe(
    raw_audio=output.audios[0, 0],
    start_step=int(pipe.get_default_steps() / 2),
    mask_start_secs=1,
    mask_end_secs=1,
)
display(output.images[0])
display(Audio(output.audios[0], rate=pipe.mel.get_sample_rate()))
```
## AudioDiffusionPipeline
[[autodoc]] AudioDiffusionPipeline
- all
- __call__
## Mel
[[autodoc]] Mel


@@ -0,0 +1,100 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Cycle Diffusion
## Overview
Cycle Diffusion is a Text-Guided Image-to-Image Generation model proposed in [Unifying Diffusion Models' Latent Space, with Applications to CycleDiffusion and Guidance](https://arxiv.org/abs/2210.05559) by Chen Henry Wu, Fernando De la Torre.
The abstract of the paper is the following:
*Diffusion models have achieved unprecedented performance in generative modeling. The commonly-adopted formulation of the latent code of diffusion models is a sequence of gradually denoised samples, as opposed to the simpler (e.g., Gaussian) latent space of GANs, VAEs, and normalizing flows. This paper provides an alternative, Gaussian formulation of the latent space of various diffusion models, as well as an invertible DPM-Encoder that maps images into the latent space. While our formulation is purely based on the definition of diffusion models, we demonstrate several intriguing consequences. (1) Empirically, we observe that a common latent space emerges from two diffusion models trained independently on related domains. In light of this finding, we propose CycleDiffusion, which uses DPM-Encoder for unpaired image-to-image translation. Furthermore, applying CycleDiffusion to text-to-image diffusion models, we show that large-scale text-to-image diffusion models can be used as zero-shot image-to-image editors. (2) One can guide pre-trained diffusion models and GANs by controlling the latent codes in a unified, plug-and-play formulation based on energy-based models. Using the CLIP model and a face recognition model as guidance, we demonstrate that diffusion models have better coverage of low-density sub-populations and individuals than GANs.*
*Tips*:
- The Cycle Diffusion pipeline is fully compatible with any [Stable Diffusion](./stable_diffusion) checkpoint
- Currently Cycle Diffusion only works with the [`DDIMScheduler`].
*Example*:
In the following we show how to best use the [`CycleDiffusionPipeline`]:
```python
import requests
import torch
from PIL import Image
from io import BytesIO
from diffusers import CycleDiffusionPipeline, DDIMScheduler
# load the pipeline
# make sure you're logged in with `huggingface-cli login`
model_id_or_path = "CompVis/stable-diffusion-v1-4"
scheduler = DDIMScheduler.from_pretrained(model_id_or_path, subfolder="scheduler")
pipe = CycleDiffusionPipeline.from_pretrained(model_id_or_path, scheduler=scheduler).to("cuda")
# let's download an initial image
url = "https://raw.githubusercontent.com/ChenWu98/cycle-diffusion/main/data/dalle2/An%20astronaut%20riding%20a%20horse.png"
response = requests.get(url)
init_image = Image.open(BytesIO(response.content)).convert("RGB")
init_image = init_image.resize((512, 512))
init_image.save("horse.png")
# let's specify a prompt
source_prompt = "An astronaut riding a horse"
prompt = "An astronaut riding an elephant"
# call the pipeline
image = pipe(
    prompt=prompt,
    source_prompt=source_prompt,
    image=init_image,
    num_inference_steps=100,
    eta=0.1,
    strength=0.8,
    guidance_scale=2,
    source_guidance_scale=1,
).images[0]
image.save("horse_to_elephant.png")
# let's try another example
# See more samples at the original repo: https://github.com/ChenWu98/cycle-diffusion
url = "https://raw.githubusercontent.com/ChenWu98/cycle-diffusion/main/data/dalle2/A%20black%20colored%20car.png"
response = requests.get(url)
init_image = Image.open(BytesIO(response.content)).convert("RGB")
init_image = init_image.resize((512, 512))
init_image.save("black.png")
source_prompt = "A black colored car"
prompt = "A blue colored car"
# call the pipeline
torch.manual_seed(0)
image = pipe(
    prompt=prompt,
    source_prompt=source_prompt,
    image=init_image,
    num_inference_steps=100,
    eta=0.1,
    strength=0.85,
    guidance_scale=3,
    source_guidance_scale=1,
).images[0]
image.save("black_to_blue.png")
```
## CycleDiffusionPipeline
[[autodoc]] CycleDiffusionPipeline
- all
- __call__


@@ -0,0 +1,34 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Dance Diffusion
## Overview
[Dance Diffusion](https://github.com/Harmonai-org/sample-generator) by Zach Evans.
Dance Diffusion is the first in a suite of generative audio tools for producers and musicians to be released by Harmonai.
For more info or to get involved in the development of these tools, please visit https://harmonai.org and fill out the form on the front page.
The original codebase of this implementation can be found [here](https://github.com/Harmonai-org/sample-generator).
## Available Pipelines:
| Pipeline | Tasks | Colab
|---|---|:---:|
| [pipeline_dance_diffusion.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/dance_diffusion/pipeline_dance_diffusion.py) | *Unconditional Audio Generation* | - |
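## Usage example
A minimal sketch, assuming the `harmonai/maestro-150k` checkpoint (any Dance Diffusion checkpoint should work the same way):
```python
import torch
from diffusers import DanceDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = DanceDiffusionPipeline.from_pretrained("harmonai/maestro-150k").to(device)

# generate roughly four seconds of audio
output = pipe(audio_length_in_s=4.0)
audio = output.audios[0]
```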
## DanceDiffusionPipeline
[[autodoc]] DanceDiffusionPipeline
- all
- __call__


@@ -0,0 +1,36 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# DDIM
## Overview
[Denoising Diffusion Implicit Models](https://arxiv.org/abs/2010.02502) (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon.
The abstract of the paper is the following:
Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space.
The original codebase of this paper can be found here: [ermongroup/ddim](https://github.com/ermongroup/ddim).
For questions, feel free to contact the author on [tsong.me](https://tsong.me/).
## Available Pipelines:
| Pipeline | Tasks | Colab
|---|---|:---:|
| [pipeline_ddim.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ddim/pipeline_ddim.py) | *Unconditional Image Generation* | - |
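## Usage example
A minimal sketch (the checkpoint name is only an example):
```python
from diffusers import DDIMPipeline

pipe = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32")

# DDIM trades off sample quality against speed via `num_inference_steps`
image = pipe(num_inference_steps=50).images[0]
image.save("ddim_generated.png")
```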
## DDIMPipeline
[[autodoc]] DDIMPipeline
- all
- __call__


@@ -0,0 +1,37 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# DDPM
## Overview
[Denoising Diffusion Probabilistic Models](https://arxiv.org/abs/2006.11239)
(DDPM) by Jonathan Ho, Ajay Jain and Pieter Abbeel proposes the diffusion-based model of the same name, but in the context of the 🤗 Diffusers library, DDPM refers to the discrete denoising scheduler from the paper as well as the pipeline.
The abstract of the paper is the following:
We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN.
The original codebase of this paper can be found [here](https://github.com/hojonathanho/diffusion).
## Available Pipelines:
| Pipeline | Tasks | Colab
|---|---|:---:|
| [pipeline_ddpm.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ddpm/pipeline_ddpm.py) | *Unconditional Image Generation* | - |
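## Usage example
A minimal sketch (the checkpoint name is only an example):
```python
from diffusers import DDPMPipeline

pipe = DDPMPipeline.from_pretrained("google/ddpm-cat-256")

# DDPM samples with the full chain of denoising steps (slow but simple)
image = pipe().images[0]
image.save("ddpm_generated.png")
```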
## DDPMPipeline
[[autodoc]] DDPMPipeline
- all
- __call__


@@ -0,0 +1,59 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Scalable Diffusion Models with Transformers (DiT)
## Overview
[Scalable Diffusion Models with Transformers](https://arxiv.org/abs/2212.09748) (DiT) by William Peebles and Saining Xie.
The abstract of the paper is the following:
*We explore a new class of diffusion models based on the transformer architecture. We train latent diffusion models of images, replacing the commonly-used U-Net backbone with a transformer that operates on latent patches. We analyze the scalability of our Diffusion Transformers (DiTs) through the lens of forward pass complexity as measured by Gflops. We find that DiTs with higher Gflops -- through increased transformer depth/width or increased number of input tokens -- consistently have lower FID. In addition to possessing good scalability properties, our largest DiT-XL/2 models outperform all prior diffusion models on the class-conditional ImageNet 512x512 and 256x256 benchmarks, achieving a state-of-the-art FID of 2.27 on the latter.*
The original codebase of this paper can be found here: [facebookresearch/dit](https://github.com/facebookresearch/dit).
## Available Pipelines:
| Pipeline | Tasks | Colab
|---|---|:---:|
| [pipeline_dit.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/dit/pipeline_dit.py) | *Conditional Image Generation* | - |
## Usage example
```python
from diffusers import DiTPipeline, DPMSolverMultistepScheduler
import torch
pipe = DiTPipeline.from_pretrained("facebook/DiT-XL-2-256", torch_dtype=torch.float16)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")
# pick words from ImageNet class labels
pipe.labels # to print all available words
# pick words that exist in ImageNet
words = ["white shark", "umbrella"]
class_ids = pipe.get_label_ids(words)
generator = torch.manual_seed(33)
output = pipe(class_labels=class_ids, num_inference_steps=25, generator=generator)
image = output.images[0] # label 'white shark'
```
## DiTPipeline
[[autodoc]] DiTPipeline
- all
- __call__


@@ -0,0 +1,49 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Latent Diffusion
## Overview
Latent Diffusion was proposed in [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752) by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer.
The abstract of the paper is the following:
*By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs.*
The original codebase can be found [here](https://github.com/CompVis/latent-diffusion).
## Available Pipelines:
| Pipeline | Tasks | Colab
|---|---|:---:|
| [pipeline_latent_diffusion.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion.py) | *Text-to-Image Generation* | - |
| [pipeline_latent_diffusion_superresolution.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion_superresolution.py) | *Super Resolution* | - |
## Examples:
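A minimal text-to-image sketch, assuming the `CompVis/ldm-text2im-large-256` checkpoint:
```python
from diffusers import LDMTextToImagePipeline

pipe = LDMTextToImagePipeline.from_pretrained("CompVis/ldm-text2im-large-256")

prompt = "a painting of a squirrel eating a banana"
image = pipe(prompt, num_inference_steps=50, guidance_scale=6.0).images[0]
image.save("squirrel.png")
```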
## LDMTextToImagePipeline
[[autodoc]] LDMTextToImagePipeline
- all
- __call__
## LDMSuperResolutionPipeline
[[autodoc]] LDMSuperResolutionPipeline
- all
- __call__


@@ -0,0 +1,42 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Unconditional Latent Diffusion
## Overview
Unconditional Latent Diffusion was proposed in [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752) by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer.
The abstract of the paper is the following:
*By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs.*
The original codebase can be found [here](https://github.com/CompVis/latent-diffusion).
## Available Pipelines:
| Pipeline | Tasks | Colab
|---|---|:---:|
| [pipeline_latent_diffusion_uncond.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/latent_diffusion_uncond/pipeline_latent_diffusion_uncond.py) | *Unconditional Image Generation* | - |
## Examples:
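A minimal unconditional sketch, assuming the `CompVis/ldm-celebahq-256` checkpoint:
```python
from diffusers import LDMPipeline

pipe = LDMPipeline.from_pretrained("CompVis/ldm-celebahq-256")

image = pipe(num_inference_steps=200).images[0]
image.save("ldm_generated.png")
```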
## LDMPipeline
[[autodoc]] LDMPipeline
- all
- __call__


@@ -0,0 +1,200 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Pipelines
Pipelines provide a simple way to run state-of-the-art diffusion models in inference.
Most diffusion systems consist of multiple independently-trained models and highly adaptable scheduler
components - all of which are needed to have a functioning end-to-end diffusion system.
As an example, [Stable Diffusion](https://huggingface.co/blog/stable_diffusion) has three independently trained models:
- [Autoencoder](./api/models#vae)
- [Conditional UNet](./api/models#UNet2DConditionModel)
- [CLIP text encoder](https://huggingface.co/docs/transformers/v4.21.2/en/model_doc/clip#transformers.CLIPTextModel)

as well as several other components:
- a [scheduler](./api/scheduler#pndm),
- a [CLIPFeatureExtractor](https://huggingface.co/docs/transformers/v4.21.2/en/model_doc/clip#transformers.CLIPFeatureExtractor),
- and a [safety checker](./stable_diffusion#safety_checker).
All of these components are necessary to run stable diffusion in inference even though they were trained
or created independently from each other.
To that end, we strive to offer all open-sourced, state-of-the-art diffusion systems under a unified API.
More specifically, we strive to provide pipelines that
- 1. can load the officially published weights and yield 1-to-1 the same outputs as the original implementation according to the corresponding paper (*e.g.* [LDMTextToImagePipeline](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/latent_diffusion), uses the officially released weights of [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752)),
- 2. have a simple user interface to run the model in inference (see the [Pipelines API](#pipelines-api) section),
- 3. are easy to understand with code that is self-explanatory and can be read along-side the official paper (see [Pipelines summary](#pipelines-summary)),
- 4. can easily be contributed by the community (see the [Contribution](#contribution) section).
**Note** that pipelines do not (and should not) offer any training functionality.
If you are looking for *official* training examples, please have a look at [examples](https://github.com/huggingface/diffusers/tree/main/examples).
## 🧨 Diffusers Summary
The following table summarizes all officially supported pipelines, their corresponding paper, and if
available a colab notebook to directly try them out.
| Pipeline | Paper | Tasks | Colab
|---|---|:---:|:---:|
| [alt_diffusion](./alt_diffusion) | [**AltDiffusion**](https://arxiv.org/abs/2211.06679) | Image-to-Image Text-Guided Generation | -
| [audio_diffusion](./audio_diffusion) | [**Audio Diffusion**](https://github.com/teticio/audio_diffusion.git) | Unconditional Audio Generation |
| [cycle_diffusion](./cycle_diffusion) | [**Cycle Diffusion**](https://arxiv.org/abs/2210.05559) | Image-to-Image Text-Guided Generation |
| [dance_diffusion](./dance_diffusion) | [**Dance Diffusion**](https://github.com/williamberman/diffusers.git) | Unconditional Audio Generation |
| [ddpm](./ddpm) | [**Denoising Diffusion Probabilistic Models**](https://arxiv.org/abs/2006.11239) | Unconditional Image Generation |
| [ddim](./ddim) | [**Denoising Diffusion Implicit Models**](https://arxiv.org/abs/2010.02502) | Unconditional Image Generation |
| [latent_diffusion](./latent_diffusion) | [**High-Resolution Image Synthesis with Latent Diffusion Models**](https://arxiv.org/abs/2112.10752)| Text-to-Image Generation |
| [latent_diffusion](./latent_diffusion) | [**High-Resolution Image Synthesis with Latent Diffusion Models**](https://arxiv.org/abs/2112.10752)| Super Resolution Image-to-Image |
| [latent_diffusion_uncond](./latent_diffusion_uncond) | [**High-Resolution Image Synthesis with Latent Diffusion Models**](https://arxiv.org/abs/2112.10752) | Unconditional Image Generation |
| [paint_by_example](./paint_by_example) | [**Paint by Example: Exemplar-based Image Editing with Diffusion Models**](https://arxiv.org/abs/2211.13227) | Image-Guided Image Inpainting |
| [pndm](./pndm) | [**Pseudo Numerical Methods for Diffusion Models on Manifolds**](https://arxiv.org/abs/2202.09778) | Unconditional Image Generation |
| [score_sde_ve](./score_sde_ve) | [**Score-Based Generative Modeling through Stochastic Differential Equations**](https://openreview.net/forum?id=PxTIG12RRHS) | Unconditional Image Generation |
| [score_sde_vp](./score_sde_vp) | [**Score-Based Generative Modeling through Stochastic Differential Equations**](https://openreview.net/forum?id=PxTIG12RRHS) | Unconditional Image Generation |
| [stable_diffusion](./stable_diffusion) | [**Stable Diffusion**](https://stability.ai/blog/stable-diffusion-public-release) | Text-to-Image Generation | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb)
| [stable_diffusion](./stable_diffusion) | [**Stable Diffusion**](https://stability.ai/blog/stable-diffusion-public-release) | Image-to-Image Text-Guided Generation | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/image_2_image_using_diffusers.ipynb)
| [stable_diffusion](./stable_diffusion) | [**Stable Diffusion**](https://stability.ai/blog/stable-diffusion-public-release) | Text-Guided Image Inpainting | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/in_painting_with_stable_diffusion_using_diffusers.ipynb)
| [stable_diffusion_2](./stable_diffusion_2) | [**Stable Diffusion 2**](https://stability.ai/blog/stable-diffusion-v2-release) | Text-to-Image Generation |
| [stable_diffusion_2](./stable_diffusion_2) | [**Stable Diffusion 2**](https://stability.ai/blog/stable-diffusion-v2-release) | Text-Guided Image Inpainting |
| [stable_diffusion_2](./stable_diffusion_2) | [**Stable Diffusion 2**](https://stability.ai/blog/stable-diffusion-v2-release) | Text-Guided Super Resolution Image-to-Image |
| [stable_diffusion_safe](./stable_diffusion_safe) | [**Safe Stable Diffusion**](https://arxiv.org/abs/2211.05105) | Text-Guided Generation | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ml-research/safe-latent-diffusion/blob/main/examples/Safe%20Latent%20Diffusion.ipynb)
| [stochastic_karras_ve](./stochastic_karras_ve) | [**Elucidating the Design Space of Diffusion-Based Generative Models**](https://arxiv.org/abs/2206.00364) | Unconditional Image Generation |
| [unclip](./unclip) | [Hierarchical Text-Conditional Image Generation with CLIP Latents](https://arxiv.org/abs/2204.06125) | Text-to-Image Generation |
| [versatile_diffusion](./versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Text-to-Image Generation |
| [versatile_diffusion](./versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Image Variations Generation |
| [versatile_diffusion](./versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Dual Image and Text Guided Generation |
| [vq_diffusion](./vq_diffusion) | [Vector Quantized Diffusion Model for Text-to-Image Synthesis](https://arxiv.org/abs/2111.14822) | Text-to-Image Generation |
**Note**: Pipelines are simple examples of how to play around with the diffusion systems as described in the corresponding papers.
However, most of them can be adapted to use different scheduler components or even different model components. Some pipeline examples are shown in the [Examples](#examples) below.
## Pipelines API
Diffusion models often consist of multiple independently-trained models or other previously existing components.
Each model has been trained independently on a different task and the scheduler can easily be swapped out and replaced with a different one.
During inference, we however want to be able to easily load all components and use them in inference - even if one component, *e.g.* CLIP's text encoder, originates from a different library, such as [Transformers](https://github.com/huggingface/transformers). To that end, all pipelines provide the following functionality:
- [`from_pretrained` method](../diffusion_pipeline) that accepts a Hugging Face Hub repository id, *e.g.* [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) or a path to a local directory, *e.g.*
"./stable-diffusion". To correctly retrieve which models and components should be loaded, one has to provide a `model_index.json` file, *e.g.* [runwayml/stable-diffusion-v1-5/model_index.json](https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/model_index.json), which defines all components that should be
loaded into the pipelines. More specifically, for each model/component one needs to define the format `<name>: ["<library>", "<class name>"]`. `<name>` is the attribute name given to the loaded instance of `<class name>` which can be found in the library or pipeline folder called `"<library>"`.
- [`save_pretrained`](../diffusion_pipeline) that accepts a local path, *e.g.* `./stable-diffusion`, under which all models/components of the pipeline will be saved. For each component/model, a folder is created inside the local path that is named after the given attribute name, *e.g.* `./stable-diffusion/unet`.
In addition, a `model_index.json` file is created at the root of the local path, *e.g.* `./stable-diffusion/model_index.json`, so that the complete pipeline can again be instantiated
from the local path (a minimal sketch of this round-trip is shown after this list).
- [`to`](../diffusion_pipeline) which accepts a `string` or `torch.device` to move all models that are of type `torch.nn.Module` to the passed device. The behavior is fully analogous to [PyTorch's `to` method](https://pytorch.org/docs/stable/generated/torch.nn.Module.html#torch.nn.Module.to).
- [`__call__`] method to use the pipeline in inference. `__call__` defines inference logic of the pipeline and should ideally encompass all aspects of it, from pre-processing to forwarding tensors to the different models and schedulers, as well as post-processing. The API of the `__call__` method can strongly vary from pipeline to pipeline. *E.g.* a text-to-image pipeline, such as [`StableDiffusionPipeline`](./stable_diffusion) should accept among other things the text prompt to generate the image. A pure image generation pipeline, such as [DDPMPipeline](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/ddpm) on the other hand can be run without providing any inputs. To better understand what inputs can be adapted for
each pipeline, one should look directly into the respective pipeline.
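A minimal sketch of the loading and saving round-trip described above (the local directory name is arbitrary):
```python
from diffusers import DiffusionPipeline

# downloads every component declared in model_index.json from the Hub
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# each key corresponds to one `<name>` entry of model_index.json
print(list(pipe.components.keys()))

# serializes all components plus a fresh model_index.json to a local folder ...
pipe.save_pretrained("./stable-diffusion-v1-5")

# ... from which the identical pipeline can be re-instantiated
pipe = DiffusionPipeline.from_pretrained("./stable-diffusion-v1-5")
```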
**Note**: All pipelines have PyTorch's autograd disabled by decorating the `__call__` method with a [`torch.no_grad`](https://pytorch.org/docs/stable/generated/torch.no_grad.html) decorator because pipelines should
not be used for training. If you want to store the gradients during the forward pass, we recommend writing your own pipeline; see also our [community-examples](https://github.com/huggingface/diffusers/tree/main/examples/community).
## Contribution
We are more than happy about any contribution to the officially supported pipelines 🤗. We aspire
for all of our pipelines to be **self-contained**, **easy-to-tweak**, **beginner-friendly** and **one-purpose-only**.
- **Self-contained**: A pipeline shall be as self-contained as possible. More specifically, this means that all functionality should be either directly defined in the pipeline file itself, should be inherited from (and only from) the [`DiffusionPipeline` class](../diffusion_pipeline) or be directly attached to the model and scheduler components of the pipeline.
- **Easy-to-use**: Pipelines should be extremely easy to use - one should be able to load the pipeline and
use it for its designated task, *e.g.* text-to-image generation, in just a couple of lines of code. Most
logic including pre-processing, an unrolled diffusion loop, and post-processing should all happen inside the `__call__` method.
- **Easy-to-tweak**: Certain pipelines will not be able to handle all use cases and tasks that you might like them to. If you want to use a certain pipeline for a specific use case that is not yet supported, you might have to copy the pipeline file and tweak the code to your needs. We try to make the pipeline code as readable as possible so that each part from pre-processing to diffusing to post-processing can easily be adapted. If you would like the community to benefit from your customized pipeline, we would love to see a contribution to our [community-examples](https://github.com/huggingface/diffusers/tree/main/examples/community). If you feel that an important pipeline should be part of the official pipelines but isn't, a contribution to the [official pipelines](./overview) would be even better.
- **One-purpose-only**: Pipelines should be used for one task and one task only. Even if two tasks are very similar from a modeling point of view, *e.g.* image2image translation and in-painting, pipelines shall be used for one task only to keep them *easy-to-tweak* and *readable*.
## Examples
### Text-to-Image generation with Stable Diffusion
```python
# make sure you're logged in with `huggingface-cli login`
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
### Image-to-Image text-guided generation with Stable Diffusion
The `StableDiffusionImg2ImgPipeline` lets you pass a text prompt and an initial image to condition the generation of new images.
```python
import requests
import torch

from PIL import Image
from io import BytesIO
from diffusers import StableDiffusionImg2ImgPipeline
# load the pipeline
device = "cuda"
pipe = StableDiffusionImg2ImgPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to(
device
)
# let's download an initial image
url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
response = requests.get(url)
init_image = Image.open(BytesIO(response.content)).convert("RGB")
init_image = init_image.resize((768, 512))
prompt = "A fantasy landscape, trending on artstation"
images = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images
images[0].save("fantasy_landscape.png")
```
You can also run this example on colab [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/image_2_image_using_diffusers.ipynb)
### Tweak prompts reusing seeds and latents
You can generate your own latents to reproduce results, or tweak your prompt on a specific result you liked. [This notebook](https://github.com/pcuenca/diffusers-examples/blob/main/notebooks/stable-diffusion-seeds.ipynb) shows how to do it step by step. You can also run it in Google Colab [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pcuenca/diffusers-examples/blob/main/notebooks/stable-diffusion-seeds.ipynb).
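As a minimal sketch of the idea (the seed, prompts, and the 512x512 latent shape below are illustrative):
```python
import torch

from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("cuda")

# fix the starting latents once so that different prompts share the same composition
generator = torch.Generator(device="cuda").manual_seed(0)
latents = torch.randn(
    (1, pipe.unet.config.in_channels, 64, 64),  # 64 = 512 / VAE downscaling factor of 8
    generator=generator,
    device="cuda",
)

image_a = pipe("a photo of a dog", latents=latents).images[0]
image_b = pipe("an oil painting of a dog", latents=latents).images[0]  # same latents, tweaked prompt
```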
### In-painting using Stable Diffusion
The `StableDiffusionInpaintPipeline` lets you edit specific parts of an image by providing a mask and text prompt.
```python
import PIL
import requests
import torch
from io import BytesIO
from diffusers import StableDiffusionInpaintPipeline
def download_image(url):
response = requests.get(url)
return PIL.Image.open(BytesIO(response.content)).convert("RGB")
img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
init_image = download_image(img_url).resize((512, 512))
mask_image = download_image(mask_url).resize((512, 512))
pipe = StableDiffusionInpaintPipeline.from_pretrained(
"runwayml/stable-diffusion-inpainting",
torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")
prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0]
```
You can also run this example on colab [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/in_painting_with_stable_diffusion_using_diffusers.ipynb)


@@ -0,0 +1,74 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# PaintByExample
## Overview
[Paint by Example: Exemplar-based Image Editing with Diffusion Models](https://arxiv.org/abs/2211.13227) by Binxin Yang, Shuyang Gu, Bo Zhang, Ting Zhang, Xuejin Chen, Xiaoyan Sun, Dong Chen, Fang Wen
The abstract of the paper is the following:
*Language-guided image editing has achieved great success recently. In this paper, for the first time, we investigate exemplar-guided image editing for more precise control. We achieve this goal by leveraging self-supervised training to disentangle and re-organize the source image and the exemplar. However, the naive approach will cause obvious fusing artifacts. We carefully analyze it and propose an information bottleneck and strong augmentations to avoid the trivial solution of directly copying and pasting the exemplar image. Meanwhile, to ensure the controllability of the editing process, we design an arbitrary shape mask for the exemplar image and leverage the classifier-free guidance to increase the similarity to the exemplar image. The whole framework involves a single forward of the diffusion model without any iterative optimization. We demonstrate that our method achieves an impressive performance and enables controllable editing on in-the-wild images with high fidelity.*
The original codebase can be found [here](https://github.com/Fantasy-Studio/Paint-by-Example).
## Available Pipelines:
| Pipeline | Tasks | Colab
|---|---|:---:|
| [pipeline_paint_by_example.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/paint_by_example/pipeline_paint_by_example.py) | *Image-Guided Image Painting* | - |
## Tips
- PaintByExample is supported by the official [Fantasy-Studio/Paint-by-Example](https://huggingface.co/Fantasy-Studio/Paint-by-Example) checkpoint. The checkpoint has been warm-started from [CompVis/stable-diffusion-v1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4) and trained to inpaint partly masked images conditioned on example / reference images.
- To quickly try out *PaintByExample*, please have a look at [this demo](https://huggingface.co/spaces/Fantasy-Studio/Paint-by-Example)
- You can run the following code snippet as an example:
```python
# !pip install diffusers transformers
import PIL
import requests
import torch
from io import BytesIO
from diffusers import DiffusionPipeline
def download_image(url):
response = requests.get(url)
return PIL.Image.open(BytesIO(response.content)).convert("RGB")
img_url = "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/image/example_1.png"
mask_url = "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/mask/example_1.png"
example_url = "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/reference/example_1.jpg"
init_image = download_image(img_url).resize((512, 512))
mask_image = download_image(mask_url).resize((512, 512))
example_image = download_image(example_url).resize((512, 512))
pipe = DiffusionPipeline.from_pretrained(
"Fantasy-Studio/Paint-by-Example",
torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")
image = pipe(image=init_image, mask_image=mask_image, example_image=example_image).images[0]
image
```
## PaintByExamplePipeline
[[autodoc]] PaintByExamplePipeline
- all
- __call__


@@ -0,0 +1,35 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# PNDM
## Overview
[Pseudo Numerical Methods for Diffusion Models on Manifolds](https://arxiv.org/abs/2202.09778) (PNDM) by Luping Liu, Yi Ren, Zhijie Lin and Zhou Zhao.
The abstract of the paper is the following:
*Denoising Diffusion Probabilistic Models (DDPMs) can generate high-quality samples such as image and audio samples. However, DDPMs require hundreds to thousands of iterations to produce final samples. Several prior works have successfully accelerated DDPMs through adjusting the variance schedule (e.g., Improved Denoising Diffusion Probabilistic Models) or the denoising equation (e.g., Denoising Diffusion Implicit Models (DDIMs)). However, these acceleration methods cannot maintain the quality of samples and even introduce new noise at a high speedup rate, which limit their practicability. To accelerate the inference process while keeping the sample quality, we provide a fresh perspective that DDPMs should be treated as solving differential equations on manifolds. Under such a perspective, we propose pseudo numerical methods for diffusion models (PNDMs). Specifically, we figure out how to solve differential equations on manifolds and show that DDIMs are simple cases of pseudo numerical methods. We change several classical numerical methods to corresponding pseudo numerical methods and find that the pseudo linear multi-step method is the best in most situations. According to our experiments, by directly using pre-trained models on Cifar10, CelebA and LSUN, PNDMs can generate higher quality synthetic images with only 50 steps compared with 1000-step DDIMs (20x speedup), significantly outperform DDIMs with 250 steps (by around 0.4 in FID) and have good generalization on different variance schedules.*
The original codebase can be found [here](https://github.com/luping-liu/PNDM).
## Available Pipelines:
| Pipeline | Tasks | Colab
|---|---|:---:|
| [pipeline_pndm.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pndm/pipeline_pndm.py) | *Unconditional Image Generation* | - |
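## Usage example
A minimal sketch, assuming the unconditional DDPM CIFAR-10 checkpoint below; any `UNet2DModel` checkpoint can be substituted:
```python
from diffusers import PNDMPipeline, PNDMScheduler, UNet2DModel

# reuse a pretrained DDPM UNet and sample from it with the PNDM scheduler
unet = UNet2DModel.from_pretrained("google/ddpm-cifar10-32", subfolder="unet")
scheduler = PNDMScheduler()
pipe = PNDMPipeline(unet=unet, scheduler=scheduler).to("cuda")

# 50 PNDM steps instead of the ~1000 steps a plain DDPM would need
image = pipe(num_inference_steps=50).images[0]
image.save("pndm_generated_image.png")
```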
## PNDMPipeline
[[autodoc]] PNDMPipeline
- all
- __call__


@@ -0,0 +1,77 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# RePaint
## Overview
[RePaint: Inpainting using Denoising Diffusion Probabilistic Models](https://arxiv.org/abs/2201.09865) by Andreas Lugmayr, Martin Danelljan, Andres Romero, Fisher Yu, Radu Timofte, Luc Van Gool.
The abstract of the paper is the following:
*Free-form inpainting is the task of adding new content to an image in the regions specified by an arbitrary binary mask. Most existing approaches train for a certain distribution of masks, which limits their generalization capabilities to unseen mask types. Furthermore, training with pixel-wise and perceptual losses often leads to simple textural extensions towards the missing areas instead of semantically meaningful generation. In this work, we propose RePaint: A Denoising Diffusion Probabilistic Model (DDPM) based inpainting approach that is applicable to even extreme masks. We employ a pretrained unconditional DDPM as the generative prior. To condition the generation process, we only alter the reverse diffusion iterations by sampling the unmasked regions using the given image information. Since this technique does not modify or condition the original DDPM network itself, the model produces high-quality and diverse output images for any inpainting form. We validate our method for both faces and general-purpose image inpainting using standard and extreme masks.
RePaint outperforms state-of-the-art Autoregressive, and GAN approaches for at least five out of six mask distributions.*
The original codebase can be found [here](https://github.com/andreas128/RePaint).
## Available Pipelines:
| Pipeline | Tasks | Colab
|-------------------------------------------------------------------------------------------------------------------------------|--------------------|:---:|
| [pipeline_repaint.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/repaint/pipeline_repaint.py) | *Image Inpainting* | - |
## Usage example
```python
from io import BytesIO
import torch
import PIL
import requests
from diffusers import RePaintPipeline, RePaintScheduler
def download_image(url):
response = requests.get(url)
return PIL.Image.open(BytesIO(response.content)).convert("RGB")
img_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/celeba_hq_256.png"
mask_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/mask_256.png"
# Load the original image and the mask as PIL images
original_image = download_image(img_url).resize((256, 256))
mask_image = download_image(mask_url).resize((256, 256))
# Load the RePaint scheduler and pipeline based on a pretrained DDPM model
scheduler = RePaintScheduler.from_pretrained("google/ddpm-ema-celebahq-256")
pipe = RePaintPipeline.from_pretrained("google/ddpm-ema-celebahq-256", scheduler=scheduler)
pipe = pipe.to("cuda")
generator = torch.Generator(device="cuda").manual_seed(0)
output = pipe(
original_image=original_image,
mask_image=mask_image,
num_inference_steps=250,
eta=0.0,
jump_length=10,
jump_n_sample=10,
generator=generator,
)
inpainted_image = output.images[0]
```
## RePaintPipeline
[[autodoc]] RePaintPipeline
- all
- __call__


@@ -0,0 +1,36 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Score SDE VE
## Overview
[Score-Based Generative Modeling through Stochastic Differential Equations](https://arxiv.org/abs/2011.13456) (Score SDE) by Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon and Ben Poole.
The abstract of the paper is the following:
*Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (a.k.a., score) of the perturbed data distribution. By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples. We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE. We also derive an equivalent neural ODE that samples from the same distribution as the SDE, but additionally enables exact likelihood computation, and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model.*
The original codebase can be found [here](https://github.com/yang-song/score_sde_pytorch).
This pipeline implements the Variance Exploding (VE) variant of the method.
## Available Pipelines:
| Pipeline | Tasks | Colab
|---|---|:---:|
| [pipeline_score_sde_ve.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/score_sde_ve/pipeline_score_sde_ve.py) | *Unconditional Image Generation* | - |
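## Usage example
A minimal sketch, assuming the unconditional NCSN++ CelebA-HQ checkpoint below; note that VE sampling is iteration-heavy by design:
```python
from diffusers import ScoreSdeVePipeline

pipe = ScoreSdeVePipeline.from_pretrained("google/ncsnpp-celebahq-256")
pipe = pipe.to("cuda")

# the predictor-corrector sampler defaults to 2000 steps; fewer steps run faster at some quality cost
image = pipe(num_inference_steps=2000).images[0]
image.save("sde_ve_generated_image.png")
```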
## ScoreSdeVePipeline
[[autodoc]] ScoreSdeVePipeline
- all
- __call__


@@ -0,0 +1,33 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Depth-to-Image Generation
## StableDiffusionDepth2ImgPipeline
The depth-guided stable diffusion model was created by the researchers and engineers from [CompVis](https://github.com/CompVis), [Stability AI](https://stability.ai/), and [LAION](https://laion.ai/), as part of Stable Diffusion 2.0. It uses [MiDaS](https://github.com/isl-org/MiDaS) to infer depth based on an image.
[`StableDiffusionDepth2ImgPipeline`] lets you pass a text prompt and an initial image to condition the generation of new images, as well as a `depth_map` to preserve the image's structure.
The original codebase can be found here:
- *Stable Diffusion v2*: [Stability-AI/stablediffusion](https://github.com/Stability-AI/stablediffusion#depth-conditional-stable-diffusion)
Available Checkpoints are:
- *stable-diffusion-2-depth*: [stabilityai/stable-diffusion-2-depth](https://huggingface.co/stabilityai/stable-diffusion-2-depth)
[[autodoc]] StableDiffusionDepth2ImgPipeline
- all
- __call__
- enable_attention_slicing
- disable_attention_slicing
- enable_xformers_memory_efficient_attention
- disable_xformers_memory_efficient_attention


@@ -0,0 +1,31 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Image Variation
## StableDiffusionImageVariationPipeline
[`StableDiffusionImageVariationPipeline`] lets you generate variations from an input image using Stable Diffusion. It uses a fine-tuned version of the Stable Diffusion model, trained by [Justin Pinkney](https://www.justinpinkney.com/) (@Buntworthy) at [Lambda](https://lambdalabs.com/).
The original codebase can be found here:
[Stable Diffusion Image Variations](https://github.com/LambdaLabsML/lambda-diffusers#stable-diffusion-image-variations)
Available Checkpoints are:
- *sd-image-variations-diffusers*: [lambdalabs/sd-image-variations-diffusers](https://huggingface.co/lambdalabs/sd-image-variations-diffusers)
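A minimal sketch, assuming the checkpoint above; the input image URL is illustrative:
```python
import requests
from io import BytesIO

from PIL import Image
from diffusers import StableDiffusionImageVariationPipeline

pipe = StableDiffusionImageVariationPipeline.from_pretrained("lambdalabs/sd-image-variations-diffusers")
pipe = pipe.to("cuda")

url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
init_image = Image.open(BytesIO(requests.get(url).content)).convert("RGB")

# a CLIP image encoder conditions generation on the input image instead of a text prompt
image = pipe(image=init_image).images[0]
image.save("image_variation.png")
```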
[[autodoc]] StableDiffusionImageVariationPipeline
- all
- __call__
- enable_attention_slicing
- disable_attention_slicing
- enable_xformers_memory_efficient_attention
- disable_xformers_memory_efficient_attention


@@ -0,0 +1,29 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Image-to-Image Generation
## StableDiffusionImg2ImgPipeline
The Stable Diffusion model was created by the researchers and engineers from [CompVis](https://github.com/CompVis), [Stability AI](https://stability.ai/), [runway](https://github.com/runwayml), and [LAION](https://laion.ai/). The [`StableDiffusionImg2ImgPipeline`] lets you pass a text prompt and an initial image to condition the generation of new images using Stable Diffusion.
The original codebase can be found here: [CompVis/stable-diffusion](https://github.com/CompVis/stable-diffusion/blob/main/scripts/img2img.py)
[`StableDiffusionImg2ImgPipeline`] is compatible with all Stable Diffusion checkpoints for [Text-to-Image](./text2img).
[[autodoc]] StableDiffusionImg2ImgPipeline
- all
- __call__
- enable_attention_slicing
- disable_attention_slicing
- enable_xformers_memory_efficient_attention
- disable_xformers_memory_efficient_attention


@@ -0,0 +1,33 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Text-Guided Image Inpainting
## StableDiffusionInpaintPipeline
The Stable Diffusion model was created by the researchers and engineers from [CompVis](https://github.com/CompVis), [Stability AI](https://stability.ai/), [runway](https://github.com/runwayml), and [LAION](https://laion.ai/). The [`StableDiffusionInpaintPipeline`] lets you edit specific parts of an image by providing a mask and a text prompt using Stable Diffusion.
The original codebase can be found here:
- *Stable Diffusion V1*: [runwayml/stable-diffusion](https://github.com/runwayml/stable-diffusion#inpainting-with-stable-diffusion)
- *Stable Diffusion V2*: [Stability-AI/stablediffusion](https://github.com/Stability-AI/stablediffusion#image-inpainting-with-stable-diffusion)
Available checkpoints are:
- *stable-diffusion-inpainting (512x512 resolution)*: [runwayml/stable-diffusion-inpainting](https://huggingface.co/runwayml/stable-diffusion-inpainting)
- *stable-diffusion-2-inpainting (512x512 resolution)*: [stabilityai/stable-diffusion-2-inpainting](https://huggingface.co/stabilityai/stable-diffusion-2-inpainting)
[[autodoc]] StableDiffusionInpaintPipeline
- all
- __call__
- enable_attention_slicing
- disable_attention_slicing
- enable_xformers_memory_efficient_attention
- disable_xformers_memory_efficient_attention


@@ -0,0 +1,78 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Stable diffusion pipelines
Stable Diffusion is a text-to-image _latent diffusion_ model created by the researchers and engineers from [CompVis](https://github.com/CompVis), [Stability AI](https://stability.ai/) and [LAION](https://laion.ai/). It's trained on 512x512 images from a subset of the [LAION-5B](https://laion.ai/blog/laion-5b/) dataset. The model uses a frozen CLIP ViT-L/14 text encoder to condition generation on text prompts. With its 860M UNet and 123M text encoder, the model is relatively lightweight and can run on consumer GPUs.
Latent diffusion is the research on top of which Stable Diffusion was built. It was proposed in [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752) by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. You can learn more details about it in the [specific pipeline for latent diffusion](pipelines/latent_diffusion) that is part of 🤗 Diffusers.
For more details about how Stable Diffusion works and how it differs from the base latent diffusion model, please refer to the official [launch announcement post](https://stability.ai/blog/stable-diffusion-announcement) and [this section of our own blog post](https://huggingface.co/blog/stable_diffusion#how-does-stable-diffusion-work).
*Tips*:
- To tweak your prompts on a specific result you liked, you can generate your own latents, as demonstrated in the following notebook: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pcuenca/diffusers-examples/blob/main/notebooks/stable-diffusion-seeds.ipynb)
*Overview*:
| Pipeline | Tasks | Colab | Demo
|---|---|:---:|:---:|
| [StableDiffusionPipeline](./text2img) | *Text-to-Image Generation* | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_diffusion.ipynb) | [🤗 Stable Diffusion](https://huggingface.co/spaces/stabilityai/stable-diffusion)
| [StableDiffusionImg2ImgPipeline](./img2img) | *Image-to-Image Text-Guided Generation* | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/image_2_image_using_diffusers.ipynb) | [🤗 Diffuse the Rest](https://huggingface.co/spaces/huggingface/diffuse-the-rest)
| [StableDiffusionInpaintPipeline](./inpaint) | **Experimental** *Text-Guided Image Inpainting* | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/in_painting_with_stable_diffusion_using_diffusers.ipynb) | Coming soon
| [StableDiffusionDepth2ImgPipeline](./depth2img) | **Experimental** *Depth-to-Image Text-Guided Generation* | | Coming soon
| [StableDiffusionImageVariationPipeline](./image_variation) | **Experimental** *Image Variation Generation* | | [🤗 Stable Diffusion Image Variations](https://huggingface.co/spaces/lambdalabs/stable-diffusion-image-variations)
| [StableDiffusionUpscalePipeline](./upscale) | **Experimental** *Text-Guided Image Super-Resolution* | | Coming soon
| [StableDiffusionInstructPix2PixPipeline](./pix2pix) | **Experimental** *Text-Based Image Editing* | | [InstructPix2Pix: Learning to Follow Image Editing Instructions](https://huggingface.co/spaces/timbrooks/instruct-pix2pix)
## Tips
### How to load and use different schedulers.
The stable diffusion pipeline uses the [`PNDMScheduler`] by default, but `diffusers` provides many other schedulers that can be used with it, such as [`DDIMScheduler`], [`LMSDiscreteScheduler`], [`EulerDiscreteScheduler`], [`EulerAncestralDiscreteScheduler`], etc.
To use a different scheduler, you can either change it via the [`ConfigMixin.from_config`] method or pass the `scheduler` argument to the `from_pretrained` method of the pipeline. For example, to use the [`EulerDiscreteScheduler`], you can do the following:
```python
>>> from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler
>>> pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
>>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config)
>>> # or
>>> euler_scheduler = EulerDiscreteScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler")
>>> pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=euler_scheduler)
```
### How to cover all use cases with a single or multiple pipelines
If you want to use all possible use cases in a single `DiffusionPipeline` you can either:
- Make use of the [Stable Diffusion Mega Pipeline](https://github.com/huggingface/diffusers/tree/main/examples/community#stable-diffusion-mega) or
- Make use of the `components` functionality to instantiate all components in the most memory-efficient way:
```python
>>> from diffusers import (
... StableDiffusionPipeline,
... StableDiffusionImg2ImgPipeline,
... StableDiffusionInpaintPipeline,
... )
>>> text2img = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
>>> img2img = StableDiffusionImg2ImgPipeline(**text2img.components)
>>> inpaint = StableDiffusionInpaintPipeline(**text2img.components)
>>> # now you can use text2img(...), img2img(...), inpaint(...) just like the call methods of each respective pipeline
```
## StableDiffusionPipelineOutput
[[autodoc]] pipelines.stable_diffusion.StableDiffusionPipelineOutput


@@ -0,0 +1,70 @@
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# InstructPix2Pix: Learning to Follow Image Editing Instructions
## Overview
[InstructPix2Pix: Learning to Follow Image Editing Instructions](https://arxiv.org/abs/2211.09800) by Tim Brooks, Aleksander Holynski and Alexei A. Efros.
The abstract of the paper is the following:
*We propose a method for editing images from human instructions: given an input image and a written instruction that tells the model what to do, our model follows these instructions to edit the image. To obtain training data for this problem, we combine the knowledge of two large pretrained models -- a language model (GPT-3) and a text-to-image model (Stable Diffusion) -- to generate a large dataset of image editing examples. Our conditional diffusion model, InstructPix2Pix, is trained on our generated data, and generalizes to real images and user-written instructions at inference time. Since it performs edits in the forward pass and does not require per example fine-tuning or inversion, our model edits images quickly, in a matter of seconds. We show compelling editing results for a diverse collection of input images and written instructions.*
Resources:
* [Project Page](https://www.timothybrooks.com/instruct-pix2pix).
* [Paper](https://arxiv.org/abs/2211.09800).
* [Original Code](https://github.com/timothybrooks/instruct-pix2pix).
* [Demo](https://huggingface.co/spaces/timbrooks/instruct-pix2pix).
## Available Pipelines:
| Pipeline | Tasks | Demo
|---|---|:---:|
| [StableDiffusionInstructPix2PixPipeline](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_instruct_pix2pix.py) | *Text-Based Image Editing* | [🤗 Space](https://huggingface.co/spaces/timbrooks/instruct-pix2pix) |
<!-- TODO: add Colab -->
## Usage example
```python
import requests
import torch

from PIL import Image, ImageOps
from diffusers import StableDiffusionInstructPix2PixPipeline

model_id = "timbrooks/instruct-pix2pix"
pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

url = "https://huggingface.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png"


def download_image(url):
    image = Image.open(requests.get(url, stream=True).raw)
    image = ImageOps.exif_transpose(image)  # respect the EXIF orientation flag
    return image.convert("RGB")


image = download_image(url)
prompt = "make the mountains snowy"
edit = pipe(prompt, image=image, num_inference_steps=20, image_guidance_scale=1.5, guidance_scale=7).images[0]
edit.save("snowy_mountains.png")
```
## StableDiffusionInstructPix2PixPipeline
[[autodoc]] StableDiffusionInstructPix2PixPipeline
- __call__
- all


@@ -0,0 +1,39 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Text-to-Image Generation
## StableDiffusionPipeline
The Stable Diffusion model was created by the researchers and engineers from [CompVis](https://github.com/CompVis), [Stability AI](https://stability.ai/), [runway](https://github.com/runwayml), and [LAION](https://laion.ai/). The [`StableDiffusionPipeline`] is capable of generating photo-realistic images given any text input using Stable Diffusion.
The original codebase can be found here:
- *Stable Diffusion V1*: [CompVis/stable-diffusion](https://github.com/CompVis/stable-diffusion)
- *Stable Diffusion v2*: [Stability-AI/stablediffusion](https://github.com/Stability-AI/stablediffusion)
Available Checkpoints are:
- *stable-diffusion-v1-4 (512x512 resolution)*: [CompVis/stable-diffusion-v1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4)
- *stable-diffusion-v1-5 (512x512 resolution)*: [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
- *stable-diffusion-2-base (512x512 resolution)*: [stabilityai/stable-diffusion-2-base](https://huggingface.co/stabilityai/stable-diffusion-2-base)
- *stable-diffusion-2 (768x768 resolution)*: [stabilityai/stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2)
- *stable-diffusion-2-1-base (512x512 resolution)*: [stabilityai/stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base)
- *stable-diffusion-2-1 (768x768 resolution)*: [stabilityai/stable-diffusion-2-1](https://huggingface.co/stabilityai/stable-diffusion-2-1)
[[autodoc]] StableDiffusionPipeline
- all
- __call__
- enable_attention_slicing
- disable_attention_slicing
- enable_vae_slicing
- disable_vae_slicing
- enable_xformers_memory_efficient_attention
- disable_xformers_memory_efficient_attention


@@ -0,0 +1,32 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Super-Resolution
## StableDiffusionUpscalePipeline
The upscaler diffusion model was created by the researchers and engineers from [CompVis](https://github.com/CompVis), [Stability AI](https://stability.ai/), and [LAION](https://laion.ai/), as part of Stable Diffusion 2.0. [`StableDiffusionUpscalePipeline`] can be used to enhance the resolution of input images by a factor of 4.
The original codebase can be found here:
- *Stable Diffusion v2*: [Stability-AI/stablediffusion](https://github.com/Stability-AI/stablediffusion#image-upscaling-with-stable-diffusion)
Available Checkpoints are:
- *stabilityai/stable-diffusion-x4-upscaler (x4 resolution)*: [stable-diffusion-x4-upscaler](https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler)
[[autodoc]] StableDiffusionUpscalePipeline
- all
- __call__
- enable_attention_slicing
- disable_attention_slicing
- enable_xformers_memory_efficient_attention
- disable_xformers_memory_efficient_attention


@@ -0,0 +1,176 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Stable diffusion 2
Stable Diffusion 2 is a text-to-image _latent diffusion_ model built upon the work of [Stable Diffusion 1](https://stability.ai/blog/stable-diffusion-public-release).
The project to train Stable Diffusion 2 was led by Robin Rombach and Katherine Crowson from [Stability AI](https://stability.ai/) and [LAION](https://laion.ai/).
*The Stable Diffusion 2.0 release includes robust text-to-image models trained using a brand new text encoder (OpenCLIP), developed by LAION with support from Stability AI, which greatly improves the quality of the generated images compared to earlier V1 releases. The text-to-image models in this release can generate images with default resolutions of both 512x512 pixels and 768x768 pixels.
These models are trained on an aesthetic subset of the [LAION-5B dataset](https://laion.ai/blog/laion-5b/) created by the DeepFloyd team at Stability AI, which is then further filtered to remove adult content using [LAION's NSFW filter](https://openreview.net/forum?id=M3Y74vmsMcY).*
For more details about how Stable Diffusion 2 works and how it differs from Stable Diffusion 1, please refer to the official [launch announcement post](https://stability.ai/blog/stable-diffusion-v2-release).
## Tips
### Available checkpoints:
Note that the architecture is more or less identical to [Stable Diffusion 1](./stable_diffusion/overview) so please refer to [this page](./stable_diffusion/overview) for API documentation.
- *Text-to-Image (512x512 resolution)*: [stabilityai/stable-diffusion-2-base](https://huggingface.co/stabilityai/stable-diffusion-2-base) with [`StableDiffusionPipeline`]
- *Text-to-Image (768x768 resolution)*: [stabilityai/stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2) with [`StableDiffusionPipeline`]
- *Image Inpainting (512x512 resolution)*: [stabilityai/stable-diffusion-2-inpainting](https://huggingface.co/stabilityai/stable-diffusion-2-inpainting) with [`StableDiffusionInpaintPipeline`]
- *Super-Resolution (x4 resolution)*: [stable-diffusion-x4-upscaler](https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler) with [`StableDiffusionUpscalePipeline`]
- *Depth-to-Image (512x512 resolution)*: [stabilityai/stable-diffusion-2-depth](https://huggingface.co/stabilityai/stable-diffusion-2-depth) with [`StableDiffusionDepth2ImgPipeline`]
We recommend using the [`DPMSolverMultistepScheduler`] as it's currently the fastest scheduler there is.
### Text-to-Image
- *Text-to-Image (512x512 resolution)*: [stabilityai/stable-diffusion-2-base](https://huggingface.co/stabilityai/stable-diffusion-2-base) with [`StableDiffusionPipeline`]
```python
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
import torch
repo_id = "stabilityai/stable-diffusion-2-base"
pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")
prompt = "High quality photo of an astronaut riding a horse in space"
image = pipe(prompt, num_inference_steps=25).images[0]
image.save("astronaut.png")
```
- *Text-to-Image (768x768 resolution)*: [stabilityai/stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2) with [`StableDiffusionPipeline`]
```python
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
import torch
repo_id = "stabilityai/stable-diffusion-2"
pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")
prompt = "High quality photo of an astronaut riding a horse in space"
image = pipe(prompt, guidance_scale=9, num_inference_steps=25).images[0]
image.save("astronaut.png")
```
### Image Inpainting
- *Image Inpainting (512x512 resolution)*: [stabilityai/stable-diffusion-2-inpainting](https://huggingface.co/stabilityai/stable-diffusion-2-inpainting) with [`StableDiffusionInpaintPipeline`]
```python
import PIL
import requests
import torch
from io import BytesIO
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
def download_image(url):
response = requests.get(url)
return PIL.Image.open(BytesIO(response.content)).convert("RGB")
img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
init_image = download_image(img_url).resize((512, 512))
mask_image = download_image(mask_url).resize((512, 512))
repo_id = "stabilityai/stable-diffusion-2-inpainting"
pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")
prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
image = pipe(prompt=prompt, image=init_image, mask_image=mask_image, num_inference_steps=25).images[0]
image.save("yellow_cat.png")
```
### Super-Resolution
- *Image Upscaling (x4 resolution)*: [stable-diffusion-x4-upscaler](https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler) with [`StableDiffusionUpscalePipeline`]
```python
import requests
from PIL import Image
from io import BytesIO
from diffusers import StableDiffusionUpscalePipeline
import torch
# load model and scheduler
model_id = "stabilityai/stable-diffusion-x4-upscaler"
pipeline = StableDiffusionUpscalePipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipeline = pipeline.to("cuda")
# let's download an image
url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale/low_res_cat.png"
response = requests.get(url)
low_res_img = Image.open(BytesIO(response.content)).convert("RGB")
low_res_img = low_res_img.resize((128, 128))
prompt = "a white cat"
upscaled_image = pipeline(prompt=prompt, image=low_res_img).images[0]
upscaled_image.save("upsampled_cat.png")
```
### Depth-to-Image
- *Depth-Guided Text-to-Image*: [stabilityai/stable-diffusion-2-depth](https://huggingface.co/stabilityai/stable-diffusion-2-depth) with [`StableDiffusionDepth2ImgPipeline`]
```python
import torch
import requests
from PIL import Image
from diffusers import StableDiffusionDepth2ImgPipeline
pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
"stabilityai/stable-diffusion-2-depth",
torch_dtype=torch.float16,
).to("cuda")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
init_image = Image.open(requests.get(url, stream=True).raw)
prompt = "two tigers"
n_prompt = "bad, deformed, ugly, bad anatomy"
image = pipe(prompt=prompt, image=init_image, negative_prompt=n_prompt, strength=0.7).images[0]
```
### How to load and use different schedulers.
The Stable Diffusion pipeline uses the [`DDIMScheduler`] by default, but `diffusers` provides many other schedulers that can be used with it, such as [`PNDMScheduler`], [`LMSDiscreteScheduler`], [`EulerDiscreteScheduler`], [`EulerAncestralDiscreteScheduler`], etc.
To use a different scheduler, you can either change it via the [`ConfigMixin.from_config`] method or pass the `scheduler` argument to the pipeline's `from_pretrained` method. For example, to use the [`EulerDiscreteScheduler`], you can do the following:
```python
>>> from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler
>>> pipeline = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2")
>>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config)
>>> # or
>>> euler_scheduler = EulerDiscreteScheduler.from_pretrained("stabilityai/stable-diffusion-2", subfolder="scheduler")
>>> pipeline = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2", scheduler=euler_scheduler)
```


@@ -0,0 +1,90 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Safe Stable Diffusion
Safe Stable Diffusion was proposed in [Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models](https://arxiv.org/abs/2211.05105) and mitigates the well-known issue that models like Stable Diffusion, which are trained on unfiltered, web-crawled datasets, tend to suffer from inappropriate degeneration. For instance, Stable Diffusion may unexpectedly generate nudity, violence, images depicting self-harm, or otherwise offensive content.
Safe Stable Diffusion is an extension of Stable Diffusion that drastically reduces this kind of content.
The abstract of the paper is the following:
*Text-conditioned image generation models have recently achieved astonishing results in image quality and text alignment and are consequently employed in a fast-growing number of applications. Since they are highly data-driven, relying on billion-sized datasets randomly scraped from the internet, they also suffer, as we demonstrate, from degenerated and biased human behavior. In turn, they may even reinforce such biases. To help combat these undesired side effects, we present safe latent diffusion (SLD). Specifically, to measure the inappropriate degeneration due to unfiltered and imbalanced training sets, we establish a novel image generation test bed-inappropriate image prompts (I2P)-containing dedicated, real-world image-to-text prompts covering concepts such as nudity and violence. As our exhaustive empirical evaluation demonstrates, the introduced SLD removes and suppresses inappropriate image parts during the diffusion process, with no additional training required and no adverse effect on overall image quality or text alignment.*
*Overview*:
| Pipeline | Tasks | Colab | Demo
|---|---|:---:|:---:|
| [pipeline_stable_diffusion_safe.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_safe/pipeline_stable_diffusion_safe.py) | *Text-to-Image Generation* | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ml-research/safe-latent-diffusion/blob/main/examples/Safe%20Latent%20Diffusion.ipynb) | -
## Tips
- Safe Stable Diffusion may also be used with weights of [Stable Diffusion](./api/pipelines/stable_diffusion/text2img).
### Run Safe Stable Diffusion
Safe Stable Diffusion can be tested very easily with [`StableDiffusionPipelineSafe`] and the `"AIML-TUDA/stable-diffusion-safe"` checkpoint, in exactly the same way as shown in the [Conditional Image Generation Guide](./using-diffusers/conditional_image_generation).
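For instance, a minimal text-to-image sketch (the prompt, dtype, and output filename here are illustrative):
```python
import torch
from diffusers import StableDiffusionPipelineSafe

pipeline = StableDiffusionPipelineSafe.from_pretrained(
    "AIML-TUDA/stable-diffusion-safe", torch_dtype=torch.float16
).to("cuda")

# generation proceeds exactly as with the standard text-to-image pipeline,
# but inappropriate content is suppressed during the diffusion process
image = pipeline(prompt="portrait photo of a hiker in the mountains").images[0]
image.save("safe_generation.png")
```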
### Interacting with the Safety Concept
To check and edit the currently used safety concept, use the `safety_concept` property of [`StableDiffusionPipelineSafe`]:
```python
>>> from diffusers import StableDiffusionPipelineSafe
>>> pipeline = StableDiffusionPipelineSafe.from_pretrained("AIML-TUDA/stable-diffusion-safe")
>>> pipeline.safety_concept
```
For each image generation, the active concept is also contained in the [`StableDiffusionSafePipelineOutput`].
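For example, assuming the output exposes the active concept under an `applied_safety_concept` attribute (check your version's [`StableDiffusionSafePipelineOutput`] for the exact field name):
```python
>>> out = pipeline(prompt="a photo of an astronaut riding a horse")
>>> out.applied_safety_concept  # safety concept active for this generation (assumed attribute name)
```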
### Using pre-defined safety configurations
You may use the 4 configurations defined in the [Safe Latent Diffusion paper](https://arxiv.org/abs/2211.05105) as follows:
```python
>>> from diffusers import StableDiffusionPipelineSafe
>>> from diffusers.pipelines.stable_diffusion_safe import SafetyConfig
>>> pipeline = StableDiffusionPipelineSafe.from_pretrained("AIML-TUDA/stable-diffusion-safe")
>>> prompt = "the four horsewomen of the apocalypse, painting by tom of finland, gaston bussiere, craig mullins, j. c. leyendecker"
>>> out = pipeline(prompt=prompt, **SafetyConfig.MAX)
```
The following configurations are available: `SafetyConfig.WEAK`, `SafetyConfig.MEDIUM`, `SafetyConfig.STRONG`, and `SafetyConfig.MAX`.
### How to load and use different schedulers.
The Safe Stable Diffusion pipeline uses the [`PNDMScheduler`] by default, but `diffusers` provides many other schedulers that can be used with it, such as [`DDIMScheduler`], [`LMSDiscreteScheduler`], [`EulerDiscreteScheduler`], [`EulerAncestralDiscreteScheduler`], etc.
To use a different scheduler, you can either change it via the [`ConfigMixin.from_config`] method or pass the `scheduler` argument to the `from_pretrained` method of the pipeline. For example, to use the [`EulerDiscreteScheduler`], you can do the following:
```python
>>> from diffusers import StableDiffusionPipelineSafe, EulerDiscreteScheduler
>>> pipeline = StableDiffusionPipelineSafe.from_pretrained("AIML-TUDA/stable-diffusion-safe")
>>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config)
>>> # or
>>> euler_scheduler = EulerDiscreteScheduler.from_pretrained("AIML-TUDA/stable-diffusion-safe", subfolder="scheduler")
>>> pipeline = StableDiffusionPipelineSafe.from_pretrained(
... "AIML-TUDA/stable-diffusion-safe", scheduler=euler_scheduler
... )
```
## StableDiffusionSafePipelineOutput
[[autodoc]] pipelines.stable_diffusion_safe.StableDiffusionSafePipelineOutput
- all
- __call__
## StableDiffusionPipelineSafe
[[autodoc]] StableDiffusionPipelineSafe
- all
- __call__


@@ -0,0 +1,36 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Stochastic Karras VE
## Overview
[Elucidating the Design Space of Diffusion-Based Generative Models](https://arxiv.org/abs/2206.00364) by Tero Karras, Miika Aittala, Timo Aila and Samuli Laine.
The abstract of the paper is the following:
We argue that the theory and practice of diffusion-based generative models are currently unnecessarily convoluted and seek to remedy the situation by presenting a design space that clearly separates the concrete design choices. This lets us identify several changes to both the sampling and training processes, as well as preconditioning of the score networks. Together, our improvements yield new state-of-the-art FID of 1.79 for CIFAR-10 in a class-conditional setting and 1.97 in an unconditional setting, with much faster sampling (35 network evaluations per image) than prior designs. To further demonstrate their modular nature, we show that our design changes dramatically improve both the efficiency and quality obtainable with pre-trained score networks from previous work, including improving the FID of an existing ImageNet-64 model from 2.07 to near-SOTA 1.55.
This pipeline implements the stochastic sampling tailored to variance-exploding (VE) models.
## Available Pipelines:
| Pipeline | Tasks | Colab
|---|---|:---:|
| [pipeline_stochastic_karras_ve.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stochastic_karras_ve/pipeline_stochastic_karras_ve.py) | *Unconditional Image Generation* | - |
## KarrasVePipeline
[[autodoc]] KarrasVePipeline
- all
- __call__


@@ -0,0 +1,37 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# unCLIP
## Overview
[Hierarchical Text-Conditional Image Generation with CLIP Latents](https://arxiv.org/abs/2204.06125) by Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, Mark Chen
The abstract of the paper is the following:
Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Moreover, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples.
The unCLIP model in diffusers comes from kakaobrain's Karlo; the original codebase can be found [here](https://github.com/kakaobrain/karlo). Additionally, lucidrains has a DALL-E 2 recreation [here](https://github.com/lucidrains/DALLE2-pytorch).
## Available Pipelines:
| Pipeline | Tasks | Colab
|---|---|:---:|
| [pipeline_unclip.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/unclip/pipeline_unclip.py) | *Text-to-Image Generation* | - |
| [pipeline_unclip_image_variation.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/unclip/pipeline_unclip_image_variation.py) | *Image-Guided Image Generation* | - |
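As a rough sketch of text-to-image generation with the Karlo checkpoint mentioned above (the prompt and filename are illustrative):
```python
import torch
from diffusers import UnCLIPPipeline

pipe = UnCLIPPipeline.from_pretrained("kakaobrain/karlo-v1-alpha", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# the pipeline runs the prior (text -> CLIP image embedding), the decoder,
# and the super-resolution stages described in the paper
image = pipe("a high-resolution photograph of a red panda").images[0]
image.save("red_panda.png")
```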
## UnCLIPPipeline
[[autodoc]] UnCLIPPipeline
- all
- __call__
[[autodoc]] UnCLIPImageVariationPipeline
- all
- __call__


@@ -0,0 +1,70 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# VersatileDiffusion
VersatileDiffusion was proposed in [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) by Xingqian Xu, Zhangyang Wang, Eric Zhang, Kai Wang, Humphrey Shi .
The abstract of the paper is the following:
*The recent advances in diffusion models have set an impressive milestone in many generation tasks. Trending works such as DALL-E2, Imagen, and Stable Diffusion have attracted great interest in academia and industry. Despite the rapid landscape changes, recent new approaches focus on extensions and performance rather than capacity, thus requiring separate models for separate tasks. In this work, we expand the existing single-flow diffusion pipeline into a multi-flow network, dubbed Versatile Diffusion (VD), that handles text-to-image, image-to-text, image-variation, and text-variation in one unified model. Moreover, we generalize VD to a unified multi-flow multimodal diffusion framework with grouped layers, swappable streams, and other propositions that can process modalities beyond images and text. Through our experiments, we demonstrate that VD and its underlying framework have the following merits: a) VD handles all subtasks with competitive quality; b) VD initiates novel extensions and applications such as disentanglement of style and semantic, image-text dual-guided generation, etc.; c) Through these experiments and applications, VD provides more semantic insights of the generated outputs.*
## Tips
- VersatileDiffusion is conceptually very similar to [Stable Diffusion](./api/pipelines/stable_diffusion/overview), but instead of providing just an image data stream conditioned on text, VersatileDiffusion provides both an image and a text data stream and can be conditioned on both text and image.
### *Run VersatileDiffusion*
You can either load the memory-intensive "all-in-one" [`VersatileDiffusionPipeline`] that runs all tasks
with the same class, as shown in [`VersatileDiffusionPipeline.text_to_image`], [`VersatileDiffusionPipeline.image_variation`], and [`VersatileDiffusionPipeline.dual_guided`] (a minimal sketch follows the list below),
**or**
you can run the individual pipelines, which are much more memory efficient:
- *Text-to-Image*: [`VersatileDiffusionTextToImagePipeline.__call__`]
- *Image Variation*: [`VersatileDiffusionImageVariationPipeline.__call__`]
- *Dual Text and Image Guided Generation*: [`VersatileDiffusionDualGuidedPipeline.__call__`]
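As a minimal sketch of the all-in-one pipeline (the prompt and filename are illustrative):
```python
import torch
from diffusers import VersatileDiffusionPipeline

pipe = VersatileDiffusionPipeline.from_pretrained(
    "shi-labs/versatile-diffusion", torch_dtype=torch.float16
).to("cuda")

# the same pipeline object also exposes image_variation() and dual_guided()
image = pipe.text_to_image("a painting of a squirrel eating a burger").images[0]
image.save("squirrel.png")
```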
### *How to load and use different schedulers.*
The Versatile Diffusion pipeline uses the [`DDIMScheduler`] by default, but `diffusers` provides many other schedulers that can be used with it, such as [`PNDMScheduler`], [`LMSDiscreteScheduler`], [`EulerDiscreteScheduler`], [`EulerAncestralDiscreteScheduler`], etc.
To use a different scheduler, you can either change it via the [`ConfigMixin.from_config`] method or pass the `scheduler` argument to the `from_pretrained` method of the pipeline. For example, to use the [`EulerDiscreteScheduler`], you can do the following:
```python
>>> from diffusers import VersatileDiffusionPipeline, EulerDiscreteScheduler
>>> pipeline = VersatileDiffusionPipeline.from_pretrained("shi-labs/versatile-diffusion")
>>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config)
>>> # or
>>> euler_scheduler = EulerDiscreteScheduler.from_pretrained("shi-labs/versatile-diffusion", subfolder="scheduler")
>>> pipeline = VersatileDiffusionPipeline.from_pretrained("shi-labs/versatile-diffusion", scheduler=euler_scheduler)
```
## VersatileDiffusionPipeline
[[autodoc]] VersatileDiffusionPipeline
## VersatileDiffusionTextToImagePipeline
[[autodoc]] VersatileDiffusionTextToImagePipeline
- all
- __call__
## VersatileDiffusionImageVariationPipeline
[[autodoc]] VersatileDiffusionImageVariationPipeline
- all
- __call__
## VersatileDiffusionDualGuidedPipeline
[[autodoc]] VersatileDiffusionDualGuidedPipeline
- all
- __call__


@@ -0,0 +1,35 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# VQDiffusion
## Overview
[Vector Quantized Diffusion Model for Text-to-Image Synthesis](https://arxiv.org/abs/2111.14822) by Shuyang Gu, Dong Chen, Jianmin Bao, Fang Wen, Bo Zhang, Dongdong Chen, Lu Yuan, Baining Guo
The abstract of the paper is the following:
We present the vector quantized diffusion (VQ-Diffusion) model for text-to-image generation. This method is based on a vector quantized variational autoencoder (VQ-VAE) whose latent space is modeled by a conditional variant of the recently developed Denoising Diffusion Probabilistic Model (DDPM). We find that this latent-space method is well-suited for text-to-image generation tasks because it not only eliminates the unidirectional bias with existing methods but also allows us to incorporate a mask-and-replace diffusion strategy to avoid the accumulation of errors, which is a serious problem with existing methods. Our experiments show that the VQ-Diffusion produces significantly better text-to-image generation results when compared with conventional autoregressive (AR) models with similar numbers of parameters. Compared with previous GAN-based text-to-image methods, our VQ-Diffusion can handle more complex scenes and improve the synthesized image quality by a large margin. Finally, we show that the image generation computation in our method can be made highly efficient by reparameterization. With traditional AR methods, the text-to-image generation time increases linearly with the output image resolution and hence is quite time consuming even for normal size images. The VQ-Diffusion allows us to achieve a better trade-off between quality and speed. Our experiments indicate that the VQ-Diffusion model with the reparameterization is fifteen times faster than traditional AR methods while achieving a better image quality.
The original codebase can be found [here](https://github.com/microsoft/VQ-Diffusion).
## Available Pipelines:
| Pipeline | Tasks | Colab
|---|---|:---:|
| [pipeline_vq_diffusion.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/vq_diffusion/pipeline_vq_diffusion.py) | *Text-to-Image Generation* | - |
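As a rough sketch of usage (the `microsoft/vq-diffusion-ithq` checkpoint name, prompt, and filename are assumptions for illustration):
```python
from diffusers import VQDiffusionPipeline

pipe = VQDiffusionPipeline.from_pretrained("microsoft/vq-diffusion-ithq")
pipe = pipe.to("cuda")

# discrete diffusion over VQ-VAE latent codes, conditioned on the text prompt
image = pipe("teddy bear playing in the pool").images[0]
image.save("teddy_bear.png")
```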
## VQDiffusionPipeline
[[autodoc]] VQDiffusionPipeline
- all
- __call__


@@ -0,0 +1,27 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Denoising diffusion implicit models (DDIM)
## Overview
[Denoising Diffusion Implicit Models](https://arxiv.org/abs/2010.02502) (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon.
The abstract of the paper is the following:
Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space.
The original codebase of this paper can be found here: [ermongroup/ddim](https://github.com/ermongroup/ddim).
For questions, feel free to contact the author on [tsong.me](https://tsong.me/).
## DDIMScheduler
[[autodoc]] DDIMScheduler


@@ -0,0 +1,27 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Denoising diffusion probabilistic models (DDPM)
## Overview
[Denoising Diffusion Probabilistic Models](https://arxiv.org/abs/2006.11239)
(DDPM) by Jonathan Ho, Ajay Jain and Pieter Abbeel proposes the diffusion-based model of the same name, but in the context of the 🤗 Diffusers library, DDPM refers to the discrete denoising scheduler from the paper as well as the pipeline.
The abstract of the paper is the following:
We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN.
The original paper can be found [here](https://arxiv.org/abs/2006.11239).
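As a quick sketch of unconditional generation with this scheduler via [`DDPMPipeline`] (the checkpoint choice and filename are illustrative):
```python
from diffusers import DDPMPipeline

# the pipeline pairs a pretrained UNet with the DDPMScheduler documented below
pipe = DDPMPipeline.from_pretrained("google/ddpm-cat-256").to("cuda")
image = pipe().images[0]
image.save("ddpm_generated.png")
```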
## DDPMScheduler
[[autodoc]] DDPMScheduler


@@ -0,0 +1,22 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# DEIS
Fast Sampling of Diffusion Models with Exponential Integrator.
## Overview
Original paper can be found [here](https://arxiv.org/abs/2204.13902). The original implementation can be found [here](https://github.com/qsh-zh/deis).
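As a rough sketch, DEIS can be swapped into an existing pipeline like any other scheduler (the Stable Diffusion checkpoint here is an illustrative assumption):
```python
import torch
from diffusers import DiffusionPipeline, DEISMultistepScheduler

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe.scheduler = DEISMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

# the exponential-integrator sampler typically needs few steps
image = pipe("an astronaut riding a horse", num_inference_steps=20).images[0]
```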
## DEISMultistepScheduler
[[autodoc]] DEISMultistepScheduler


@@ -0,0 +1,22 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# DPM Discrete Scheduler inspired by the Karras et al. paper
## Overview
Inspired by [Karras et al.](https://arxiv.org/abs/2206.00364). Scheduler ported from @crowsonkb's [k-diffusion](https://github.com/crowsonkb/k-diffusion) library.
All credit for making this scheduler work goes to [Katherine Crowson](https://github.com/crowsonkb/).
## KDPM2DiscreteScheduler
[[autodoc]] KDPM2DiscreteScheduler


@@ -0,0 +1,22 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# DPM Discrete Scheduler with ancestral sampling inspired by the Karras et al. paper
## Overview
Inspired by [Karras et al.](https://arxiv.org/abs/2206.00364). Scheduler ported from @crowsonkb's [k-diffusion](https://github.com/crowsonkb/k-diffusion) library.
All credit for making this scheduler work goes to [Katherine Crowson](https://github.com/crowsonkb/).
## KDPM2AncestralDiscreteScheduler
[[autodoc]] KDPM2AncestralDiscreteScheduler


@@ -0,0 +1,21 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Euler scheduler
## Overview
Euler scheduler (Algorithm 2) from the paper [Elucidating the Design Space of Diffusion-Based Generative Models](https://arxiv.org/abs/2206.00364) by Karras et al. (2022). Based on the original [k-diffusion](https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L51) implementation by Katherine Crowson.
A fast scheduler which oftentimes generates good outputs with 20-30 steps.
## EulerDiscreteScheduler
[[autodoc]] EulerDiscreteScheduler


@@ -0,0 +1,21 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Euler Ancestral scheduler
## Overview
Ancestral sampling with Euler method steps. Based on the original [k-diffusion](https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L72) implementation by Katherine Crowson.
A fast scheduler which oftentimes generates good outputs with 20-30 steps.
## EulerAncestralDiscreteScheduler
[[autodoc]] EulerAncestralDiscreteScheduler


@@ -0,0 +1,23 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Heun scheduler inspired by the Karras et al. paper
## Overview
Algorithm 1 of [Karras et al.](https://arxiv.org/abs/2206.00364).
Scheduler ported from @crowsonkb's [k-diffusion](https://github.com/crowsonkb/k-diffusion) library.
All credit for making this scheduler work goes to [Katherine Crowson](https://github.com/crowsonkb/).
## HeunDiscreteScheduler
[[autodoc]] HeunDiscreteScheduler


@@ -0,0 +1,20 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Improved pseudo numerical methods for diffusion models (iPNDM)
## Overview
Original implementation can be found [here](https://github.com/crowsonkb/v-diffusion-pytorch/blob/987f8985e38208345c1959b0ea767a625831cc9b/diffusion/sampling.py#L296).
## IPNDMScheduler
[[autodoc]] IPNDMScheduler


@@ -0,0 +1,20 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Linear multistep scheduler for discrete beta schedules
## Overview
The original paper can be found [here](https://arxiv.org/abs/2206.00364).
## LMSDiscreteScheduler
[[autodoc]] LMSDiscreteScheduler


@@ -0,0 +1,20 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Multistep DPM-Solver
## Overview
Original paper can be found [here](https://arxiv.org/abs/2206.00927) and the [improved version](https://arxiv.org/abs/2211.01095). The original implementation can be found [here](https://github.com/LuChengTHU/dpm-solver).
## DPMSolverMultistepScheduler
[[autodoc]] DPMSolverMultistepScheduler


@@ -0,0 +1,86 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Schedulers
Diffusers contains multiple pre-built schedule functions for the diffusion process.
## What is a scheduler?
The schedule functions, denoted *Schedulers* in the library, take in the output of a trained model, the sample the diffusion process is iterating on, and a timestep, and return a denoised sample. That is why schedulers may also be called *Samplers* in other diffusion model implementations.
- Schedulers define the methodology for iteratively adding noise to an image or for updating a sample based on model outputs.
  - Adding noise in different manners represents the algorithmic processes used to train a diffusion model.
  - For inference, the scheduler defines how to update a sample based on the output of a pretrained model.
- Schedulers are often defined by a *noise schedule* and an *update rule* to solve the differential equation.
### Discrete versus continuous schedulers
All schedulers take in a timestep to predict the updated version of the sample being diffused.
The timesteps dictate where in the diffusion process the step is, where data is generated by iterating forward in time and inference is executed by propagating backwards through timesteps.
Different algorithms use timesteps that can be discrete (accepting `int` inputs), such as the [`DDPMScheduler`] or [`PNDMScheduler`], or continuous (accepting `float` inputs), such as the score-based schedulers [`ScoreSdeVeScheduler`] or [`ScoreSdeVpScheduler`].
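As a quick way to see the difference, one can inspect the dtype of the timesteps a scheduler produces (a sketch; exact dtypes may vary across versions):
```python
from diffusers import DDPMScheduler, ScoreSdeVeScheduler

ddpm = DDPMScheduler()
ddpm.set_timesteps(50)
print(ddpm.timesteps.dtype)  # discrete, integer timesteps (e.g. torch.int64)

sde_ve = ScoreSdeVeScheduler()
sde_ve.set_timesteps(50)
print(sde_ve.timesteps.dtype)  # continuous, float timesteps (e.g. torch.float32)
```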
## Designing Re-usable schedulers
The core design principle behind the schedule functions is to be model, system, and framework independent.
This allows for rapid experimentation and cleaner abstractions in the code, where the model prediction is separated from the sample update.
To this end, the design of schedulers is such that:
- Schedulers can be used interchangeably between diffusion models in inference to find the preferred trade-off between speed and generation quality.
- Schedulers are currently implemented in PyTorch by default, but are designed to be framework independent (partial JAX support currently exists).
- Many diffusion pipelines, such as [`StableDiffusionPipeline`] and [`DiTPipeline`], can use any of the [`KarrasDiffusionSchedulers`].
## Schedulers Summary
The following table summarizes all officially supported schedulers and their corresponding papers:
| Scheduler | Paper |
|---|---|
| [ddim](./ddim) | [**Denoising Diffusion Implicit Models**](https://arxiv.org/abs/2010.02502) |
| [ddpm](./ddpm) | [**Denoising Diffusion Probabilistic Models**](https://arxiv.org/abs/2006.11239) |
| [singlestep_dpm_solver](./singlestep_dpm_solver) | [**Singlestep DPM-Solver**](https://arxiv.org/abs/2206.00927) |
| [multistep_dpm_solver](./multistep_dpm_solver) | [**Multistep DPM-Solver**](https://arxiv.org/abs/2206.00927) |
| [heun](./heun) | [**Heun scheduler inspired by the Karras et al. paper**](https://arxiv.org/abs/2206.00364) |
| [dpm_discrete](./dpm_discrete) | [**DPM Discrete Scheduler inspired by the Karras et al. paper**](https://arxiv.org/abs/2206.00364) |
| [dpm_discrete_ancestral](./dpm_discrete_ancestral) | [**DPM Discrete Scheduler with ancestral sampling inspired by the Karras et al. paper**](https://arxiv.org/abs/2206.00364) |
| [stochastic_karras_ve](./stochastic_karras_ve) | [**Variance exploding, stochastic sampling from Karras et al.**](https://arxiv.org/abs/2206.00364) |
| [lms_discrete](./lms_discrete) | [**Linear multistep scheduler for discrete beta schedules**](https://arxiv.org/abs/2206.00364) |
| [pndm](./pndm) | [**Pseudo numerical methods for diffusion models (PNDM)**](https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L181) |
| [score_sde_ve](./score_sde_ve) | [**variance exploding stochastic differential equation (VE-SDE) scheduler**](https://arxiv.org/abs/2011.13456) |
| [ipndm](./ipndm) | [**improved pseudo numerical methods for diffusion models (iPNDM)**](https://github.com/crowsonkb/v-diffusion-pytorch/blob/987f8985e38208345c1959b0ea767a625831cc9b/diffusion/sampling.py#L296) |
| [score_sde_vp](./score_sde_vp) | [**Variance preserving stochastic differential equation (VP-SDE) scheduler**](https://arxiv.org/abs/2011.13456) |
| [euler](./euler) | [**Euler scheduler**](https://arxiv.org/abs/2206.00364) |
| [euler_ancestral](./euler_ancestral) | [**Euler Ancestral scheduler**](https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L72) |
| [vq_diffusion](./vq_diffusion) | [**VQDiffusionScheduler**](https://arxiv.org/abs/2111.14822) |
| [repaint](./repaint) | [**RePaint scheduler**](https://arxiv.org/abs/2201.09865) |
## API
The core API for any new scheduler must follow a limited structure.
- Schedulers should provide one or more `def step(...)` functions that should be called to update the generated sample iteratively.
- Schedulers should provide a `set_timesteps(...)` method that configures the parameters of a schedule function for a specific inference task.
- Schedulers should be framework-specific.
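A minimal sketch of this `set_timesteps(...)`/`step(...)` contract in use, assuming a DDPM-style UNet (the checkpoint choice is a hypothetical example):
```python
import torch
from diffusers import DDPMScheduler, UNet2DModel

model = UNet2DModel.from_pretrained("google/ddpm-cat-256")  # illustrative checkpoint
scheduler = DDPMScheduler()  # default config; a real setup would load the matching scheduler config
scheduler.set_timesteps(50)  # configure the schedule for this inference task

sample = torch.randn(1, model.config.in_channels, model.config.sample_size, model.config.sample_size)
for t in scheduler.timesteps:
    with torch.no_grad():
        noise_pred = model(sample, t).sample          # model prediction
    sample = scheduler.step(noise_pred, t, sample).prev_sample  # scheduler update
```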
The base class [`SchedulerMixin`] implements low level utilities used by multiple schedulers.
### SchedulerMixin
[[autodoc]] SchedulerMixin
### SchedulerOutput
The class [`SchedulerOutput`] contains the outputs from any scheduler's `step(...)` call.
[[autodoc]] schedulers.scheduling_utils.SchedulerOutput
### KarrasDiffusionSchedulers
[[autodoc]] schedulers.scheduling_utils.KarrasDiffusionSchedulers


@@ -0,0 +1,20 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Pseudo numerical methods for diffusion models (PNDM)
## Overview
Original implementation can be found [here](https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L181).
## PNDMScheduler
[[autodoc]] PNDMScheduler


@@ -0,0 +1,23 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# RePaint scheduler
## Overview
DDPM-based inpainting scheduler for unsupervised inpainting with extreme masks.
Intended for use with [`RePaintPipeline`].
Based on the paper [RePaint: Inpainting using Denoising Diffusion Probabilistic Models](https://arxiv.org/abs/2201.09865)
and the original implementation by Andreas Lugmayr et al.: https://github.com/andreas128/RePaint
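As a rough sketch of how this scheduler pairs with [`RePaintPipeline`] (the image/mask URLs and checkpoint follow the RePaint example and are assumptions; argument names may differ across versions):
```python
from io import BytesIO

import requests
from PIL import Image
from diffusers import RePaintPipeline, RePaintScheduler

def download_image(url):
    response = requests.get(url)
    return Image.open(BytesIO(response.content)).convert("RGB")

img_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/celeba_hq_256.png"
mask_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/mask_256.png"
original_image = download_image(img_url).resize((256, 256))
mask_image = download_image(mask_url).resize((256, 256))

scheduler = RePaintScheduler.from_pretrained("google/ddpm-ema-celebahq-256")
pipe = RePaintPipeline.from_pretrained("google/ddpm-ema-celebahq-256", scheduler=scheduler).to("cuda")

output = pipe(
    image=original_image,
    mask_image=mask_image,
    num_inference_steps=250,
    eta=0.0,
    jump_length=10,    # how far to jump back for resampling
    jump_n_sample=10,  # how many times to resample each jump
)
inpainted_image = output.images[0]
```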
## RePaintScheduler
[[autodoc]] RePaintScheduler


@@ -0,0 +1,20 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Variance exploding stochastic differential equation (VE-SDE) scheduler
## Overview
Original paper can be found [here](https://arxiv.org/abs/2011.13456).
## ScoreSdeVeScheduler
[[autodoc]] ScoreSdeVeScheduler


@@ -0,0 +1,26 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Variance preserving stochastic differential equation (VP-SDE) scheduler
## Overview
Original paper can be found [here](https://arxiv.org/abs/2011.13456).
<Tip warning={true}>
Score SDE-VP is under construction.
</Tip>
## ScoreSdeVpScheduler
[[autodoc]] schedulers.scheduling_sde_vp.ScoreSdeVpScheduler


@@ -0,0 +1,20 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Singlestep DPM-Solver
## Overview
Original paper can be found [here](https://arxiv.org/abs/2206.00927) and the [improved version](https://arxiv.org/abs/2211.01095). The original implementation can be found [here](https://github.com/LuChengTHU/dpm-solver).
## DPMSolverSinglestepScheduler
[[autodoc]] DPMSolverSinglestepScheduler


@@ -0,0 +1,20 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Variance exploding, stochastic sampling from Karras et al.
## Overview
Original paper can be found [here](https://arxiv.org/abs/2206.00364).
## KarrasVeScheduler
[[autodoc]] KarrasVeScheduler


@@ -0,0 +1,20 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# VQDiffusionScheduler
## Overview
Original paper can be found [here](https://arxiv.org/abs/2111.14822).
## VQDiffusionScheduler
[[autodoc]] VQDiffusionScheduler


@@ -0,0 +1,291 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# How to contribute to Diffusers 🧨
We ❤️ contributions from the open-source community! Everyone is welcome, and all types of participation, not just code, are valued and appreciated. Answering questions, helping others, reaching out and improving the documentation are all immensely valuable to the community, so don't be afraid and get involved if you're up for it!
It also helps us if you spread the word: reference the library from blog posts
on the awesome projects it made possible, shout out on Twitter every time it has
helped you, or simply star the repo to say "thank you".
We encourage everyone to start by saying 👋 in our public Discord channel. We discuss the hottest trends in diffusion models, ask questions, show off personal projects, help each other with contributions, or just hang out ☕. <a href="https://discord.gg/G7tWnz98XR"><img alt="Join us on Discord" src="https://img.shields.io/discord/823813159592001537?color=5865F2&logo=discord&logoColor=white"></a>
Whichever way you choose to contribute, we strive to be part of an open, welcoming and kind community. Please, read our [code of conduct](https://github.com/huggingface/diffusers/blob/main/CODE_OF_CONDUCT.md) and be mindful to respect it during your interactions.
## Overview
You can contribute in so many ways! Just to name a few:
* Fixing outstanding issues with the existing code.
* Implementing [new diffusion pipelines](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines#contribution), [new schedulers](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers) or [new models](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models).
* [Contributing to the examples](https://github.com/huggingface/diffusers/tree/main/examples).
* [Contributing to the documentation](https://github.com/huggingface/diffusers/tree/main/docs/source).
* Submitting issues related to bugs or desired new features.
*All are equally valuable to the community.*
### Browse GitHub issues for suggestions
If you need inspiration, you can look out for [issues](https://github.com/huggingface/diffusers/issues) you'd like to tackle to contribute to the library. There are a few filters that can be helpful:
- See [Good first issues](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) for general opportunities to contribute and to get started with the codebase.
- See [New pipeline/model](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+pipeline%2Fmodel%22) to contribute exciting new diffusion models or diffusion pipelines.
- See [New scheduler](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+scheduler%22) to work on new samplers and schedulers.
## Submitting a new issue or feature request
Do your best to follow these guidelines when submitting an issue or a feature
request. It will make it easier for us to come back to you quickly and with good
feedback.
### Did you find a bug?
The 🧨 Diffusers library is robust and reliable thanks to the users who notify us of
the problems they encounter. So thank you for reporting an issue.
First, we would really appreciate it if you could **make sure the bug was not
already reported** (use the search bar on GitHub under Issues).
### Do you want to implement a new diffusion pipeline / diffusion model?
Awesome! Please provide the following information:
* Short description of the diffusion pipeline and link to the paper;
* Link to the implementation if it is open-source;
* Link to the model weights if they are available.
If you are willing to contribute the model yourself, let us know so we can best
guide you.
### Do you want a new feature (that is not a model)?
A world-class feature request addresses the following points:
1. Motivation first:
* Is it related to a problem/frustration with the library? If so, please explain
why. Providing a code snippet that demonstrates the problem is best.
* Is it related to something you would need for a project? We'd love to hear
about it!
* Is it something you worked on and think could benefit the community?
Awesome! Tell us what problem it solved for you.
2. Write a *full paragraph* describing the feature;
3. Provide a **code snippet** that demonstrates its future use;
4. In case this is related to a paper, please attach a link;
5. Attach any additional information (drawings, screenshots, etc.) you think may help.
If your issue is well written, we're already 80% of the way there by the time you post it.
## Start contributing! (Pull Requests)
Before writing code, we strongly advise you to search through the existing PRs or
issues to make sure that nobody is already working on the same thing. If you are
unsure, it is always a good idea to open an issue to get some feedback.
You will need basic `git` proficiency to be able to contribute to
🧨 Diffusers. `git` is not the easiest tool to use but it has the greatest
manual. Type `git --help` in a shell and enjoy. If you prefer books, [Pro
Git](https://git-scm.com/book/en/v2) is a very good reference.
Follow these steps to start contributing ([supported Python versions](https://github.com/huggingface/diffusers/blob/main/setup.py#L212)):
1. Fork the [repository](https://github.com/huggingface/diffusers) by
clicking on the 'Fork' button on the repository's page. This creates a copy of the code
under your GitHub user account.
2. Clone your fork to your local disk, and add the base repository as a remote:
```bash
$ git clone git@github.com:<your Github handle>/diffusers.git
$ cd diffusers
$ git remote add upstream https://github.com/huggingface/diffusers.git
```
3. Create a new branch to hold your development changes:
```bash
$ git checkout -b a-descriptive-name-for-my-changes
```
**Do not** work on the `main` branch.
4. Set up a development environment by running the following command in a virtual environment:
```bash
$ pip install -e ".[dev]"
```
(If Diffusers was already installed in the virtual environment, remove
it with `pip uninstall diffusers` before reinstalling it in editable
mode with the `-e` flag.)
To run the full test suite, you might need the additional dependencies `transformers` and `datasets`, which require a separate source
install:
```bash
$ git clone https://github.com/huggingface/transformers
$ cd transformers
$ pip install -e .
```
```bash
$ git clone https://github.com/huggingface/datasets
$ cd datasets
$ pip install -e .
```
If you have already cloned those repos, you might need to `git pull` to get the most recent changes in the `transformers` and `datasets`
libraries.
5. Develop the features on your branch.
As you work on the features, you should make sure that the test suite
passes. You should run the tests impacted by your changes like this:
```bash
$ pytest tests/<TEST_TO_RUN>.py
```
You can also run the full suite, but now that Diffusers has grown a lot, it takes
a beefy machine to produce a result in a decent amount of time. Here is the
command for it:
```bash
$ make test
```
For more information about tests, check out the
[dedicated documentation](https://huggingface.co/docs/diffusers/testing)
🧨 Diffusers relies on `black` and `isort` to format its source code
consistently. After you make changes, apply automatic style corrections and code verifications
that can't be automated in one go with:
```bash
$ make style
```
🧨 Diffusers also uses `flake8` and a few custom scripts to check for coding mistakes. Quality
control runs in CI, however you can also run the same checks with:
```bash
$ make quality
```
Once you're happy with your changes, add changed files using `git add` and
make a commit with `git commit` to record your changes locally:
```bash
$ git add modified_file.py
$ git commit
```
It is a good idea to sync your copy of the code with the original
repository regularly. This way you can quickly account for changes:
```bash
$ git fetch upstream
$ git rebase upstream/main
```
Push the changes to your account using:
```bash
$ git push -u origin a-descriptive-name-for-my-changes
```
6. Once you are satisfied (**and the checklist below is happy too**), go to the
webpage of your fork on GitHub. Click on 'Pull request' to send your changes
to the project maintainers for review.
7. It's ok if maintainers ask you for changes. It happens to core contributors
too! So everyone can see the changes in the Pull request, work in your local
branch and push the changes to your fork. They will automatically appear in
the pull request.
### Checklist
1. The title of your pull request should be a summary of its contribution;
2. If your pull request addresses an issue, please mention the issue number in
the pull request description to make sure they are linked (and people
consulting the issue know you are working on it);
3. To indicate a work in progress please prefix the title with `[WIP]`. These
are useful to avoid duplicated work, and to differentiate it from PRs ready
to be merged;
4. Make sure existing tests pass;
5. Add high-coverage tests. No quality testing = no merge.
- If you are adding new `@slow` tests, make sure they pass using
`RUN_SLOW=1 python -m pytest tests/test_my_new_model.py`.
- If you are adding a new tokenizer, write tests, and make sure
`RUN_SLOW=1 python -m pytest tests/test_tokenization_{your_model_name}.py` passes.
CircleCI does not run the slow tests, but GitHub Actions does every night!
6. All public methods must have informative docstrings that work nicely with sphinx. See [pipeline_latent_diffusion.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion.py) for an example.
7. Due to the rapidly growing repository, it is important to make sure that no files that would significantly weigh down the repository are added. This includes images, videos and other non-text files. We prefer to place such files in a hf.co-hosted `dataset`, like the ones on [`hf-internal-testing`](https://huggingface.co/hf-internal-testing) or [huggingface/documentation-images](https://huggingface.co/datasets/huggingface/documentation-images), and reference them from there.
If your contribution is external, feel free to add the images to your PR and ask a Hugging Face member to migrate them
to one of these datasets.
### Tests
An extensive test suite is included to test the library behavior and several examples. Library tests can be found in
the [tests folder](https://github.com/huggingface/diffusers/tree/main/tests).
We like `pytest` and `pytest-xdist` because they're faster. From the root of the
repository, here's how to run tests with `pytest` for the library:
```bash
$ python -m pytest -n auto --dist=loadfile -s -v ./tests/
```
In fact, that's how `make test` is implemented!
You can specify a smaller set of tests in order to test only the feature
you're working on.
By default, slow tests are skipped. Set the `RUN_SLOW` environment variable to
`yes` to run them. This will download many gigabytes of models — make sure you
have enough disk space and a good Internet connection, or a lot of patience!
```bash
$ RUN_SLOW=yes python -m pytest -n auto --dist=loadfile -s -v ./tests/
```
`unittest` is fully supported, here's how to run tests with it:
```bash
$ python -m unittest discover -s tests -t . -v
$ python -m unittest discover -s examples -t examples -v
```
### Syncing forked main with upstream (HuggingFace) main
To avoid pinging the upstream repository, which adds reference notes to each upstream PR and sends unnecessary notifications to the developers involved in these PRs,
please follow these steps when syncing the main branch of a forked repository:
1. When possible, avoid syncing with the upstream using a branch and PR on the forked repository. Instead, merge directly into the forked main.
2. If a PR is absolutely necessary, use the following steps after checking out your branch:
```bash
$ git checkout -b your-branch-for-syncing
$ git pull --squash --no-commit upstream main
$ git commit -m '<your message without GitHub references>'
$ git push --set-upstream origin your-branch-for-syncing
```
### Style guide
For documentation strings, 🧨 Diffusers follows the [google style](https://google.github.io/styleguide/pyguide.html).
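As a minimal sketch of that layout (the function, its arguments, and its behavior are hypothetical and exist only to illustrate the Google docstring format):
```python
def scale_noise(sample, noise, timestep):
    """Scale and add noise to a sample.

    Hypothetical helper used only to illustrate the Google docstring style.

    Args:
        sample (float): The clean input value.
        noise (float): The noise value to add.
        timestep (int): The diffusion timestep controlling the noise strength.

    Returns:
        float: The noised value.
    """
    return sample + timestep * noise
```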
**This guide was heavily inspired by the awesome [scikit-learn guide to contributing](https://github.com/scikit-learn/scikit-learn/blob/main/CONTRIBUTING.md).**

@@ -0,0 +1,17 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Philosophy
- Readability and clarity are preferred over highly optimized code. A strong importance is put on providing readable, intuitive and elementary code design. *E.g.*, the provided [schedulers](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers) are separated from the provided [models](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models) and use well-commented code that can be read alongside the original paper.
- Diffusers is **modality independent** and focuses on providing pretrained models and tools to build systems that generate **continuous outputs**, *e.g.* vision and audio. This is one of the guiding goals even if the initial pipelines are devoted to vision tasks.
- Diffusion models and schedulers are provided as concise, elementary building blocks. In contrast, diffusion pipelines are a collection of end-to-end diffusion systems that can be used out-of-the-box, should stay as close as possible to their original implementations and can include components of other libraries, such as text encoders. Examples of diffusion pipelines are [Glide](https://github.com/openai/glide-text2im), [Latent Diffusion](https://github.com/CompVis/latent-diffusion) and [Stable Diffusion](https://github.com/compvis/stable-diffusion).

docs/source/en/index.mdx Normal file

@@ -0,0 +1,64 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
<p align="center">
<br>
<img src="https://raw.githubusercontent.com/huggingface/diffusers/77aadfee6a891ab9fcfb780f87c693f7a5beeb8e/docs/source/imgs/diffusers_library.jpg" width="400"/>
<br>
</p>
# 🧨 Diffusers
🤗 Diffusers provides pretrained vision and audio diffusion models, and serves as a modular toolbox for inference and training.
More precisely, 🤗 Diffusers offers:
- State-of-the-art diffusion pipelines that can be run in inference with just a couple of lines of code (see [**Using Diffusers**](./using-diffusers/conditional_image_generation)) or have a look at [**Pipelines**](#pipelines) to get an overview of all supported pipelines and their corresponding papers.
- Various noise schedulers that can be used interchangeably for the preferred speed vs. quality trade-off in inference. For more information see [**Schedulers**](./api/schedulers/overview).
- Multiple types of models, such as UNet, can be used as building blocks in an end-to-end diffusion system. See [**Models**](./api/models) for more details
- Training examples to show how to train the most popular diffusion model tasks. For more information see [**Training**](./training/overview).
## 🧨 Diffusers Pipelines
The following table summarizes all officially supported pipelines, their corresponding paper, and if
available a colab notebook to directly try them out.
| Pipeline | Paper | Tasks | Colab
|---|---|:---:|:---:|
| [alt_diffusion](./api/pipelines/alt_diffusion) | [**AltDiffusion**](https://arxiv.org/abs/2211.06679) | Image-to-Image Text-Guided Generation |
| [audio_diffusion](./api/pipelines/audio_diffusion) | [**Audio Diffusion**](https://github.com/teticio/audio-diffusion.git) | Unconditional Audio Generation | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/teticio/audio-diffusion/blob/master/notebooks/audio_diffusion_pipeline.ipynb)
| [cycle_diffusion](./api/pipelines/cycle_diffusion) | [**Cycle Diffusion**](https://arxiv.org/abs/2210.05559) | Image-to-Image Text-Guided Generation |
| [dance_diffusion](./api/pipelines/dance_diffusion) | [**Dance Diffusion**](https://github.com/williamberman/diffusers.git) | Unconditional Audio Generation |
| [ddpm](./api/pipelines/ddpm) | [**Denoising Diffusion Probabilistic Models**](https://arxiv.org/abs/2006.11239) | Unconditional Image Generation |
| [ddim](./api/pipelines/ddim) | [**Denoising Diffusion Implicit Models**](https://arxiv.org/abs/2010.02502) | Unconditional Image Generation |
| [latent_diffusion](./api/pipelines/latent_diffusion) | [**High-Resolution Image Synthesis with Latent Diffusion Models**](https://arxiv.org/abs/2112.10752)| Text-to-Image Generation |
| [latent_diffusion](./api/pipelines/latent_diffusion) | [**High-Resolution Image Synthesis with Latent Diffusion Models**](https://arxiv.org/abs/2112.10752)| Super Resolution Image-to-Image |
| [latent_diffusion_uncond](./api/pipelines/latent_diffusion_uncond) | [**High-Resolution Image Synthesis with Latent Diffusion Models**](https://arxiv.org/abs/2112.10752) | Unconditional Image Generation |
| [paint_by_example](./api/pipelines/paint_by_example) | [**Paint by Example: Exemplar-based Image Editing with Diffusion Models**](https://arxiv.org/abs/2211.13227) | Image-Guided Image Inpainting |
| [pndm](./api/pipelines/pndm) | [**Pseudo Numerical Methods for Diffusion Models on Manifolds**](https://arxiv.org/abs/2202.09778) | Unconditional Image Generation |
| [score_sde_ve](./api/pipelines/score_sde_ve) | [**Score-Based Generative Modeling through Stochastic Differential Equations**](https://openreview.net/forum?id=PxTIG12RRHS) | Unconditional Image Generation |
| [score_sde_vp](./api/pipelines/score_sde_vp) | [**Score-Based Generative Modeling through Stochastic Differential Equations**](https://openreview.net/forum?id=PxTIG12RRHS) | Unconditional Image Generation |
| [stable_diffusion](./api/pipelines/stable_diffusion/text2img) | [**Stable Diffusion**](https://stability.ai/blog/stable-diffusion-public-release) | Text-to-Image Generation | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb)
| [stable_diffusion](./api/pipelines/stable_diffusion/img2img) | [**Stable Diffusion**](https://stability.ai/blog/stable-diffusion-public-release) | Image-to-Image Text-Guided Generation | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/image_2_image_using_diffusers.ipynb)
| [stable_diffusion](./api/pipelines/stable_diffusion/inpaint) | [**Stable Diffusion**](https://stability.ai/blog/stable-diffusion-public-release) | Text-Guided Image Inpainting | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/in_painting_with_stable_diffusion_using_diffusers.ipynb)
| [stable_diffusion_2](./api/pipelines/stable_diffusion_2) | [**Stable Diffusion 2**](https://stability.ai/blog/stable-diffusion-v2-release) | Text-to-Image Generation |
| [stable_diffusion_2](./api/pipelines/stable_diffusion_2) | [**Stable Diffusion 2**](https://stability.ai/blog/stable-diffusion-v2-release) | Text-Guided Image Inpainting |
| [stable_diffusion_2](./api/pipelines/stable_diffusion_2) | [**Stable Diffusion 2**](https://stability.ai/blog/stable-diffusion-v2-release) | Text-Guided Super Resolution Image-to-Image |
| [stable_diffusion_safe](./api/pipelines/stable_diffusion_safe) | [**Safe Stable Diffusion**](https://arxiv.org/abs/2211.05105) | Text-Guided Generation | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ml-research/safe-latent-diffusion/blob/main/examples/Safe%20Latent%20Diffusion.ipynb)
| [stochastic_karras_ve](./api/pipelines/stochastic_karras_ve) | [**Elucidating the Design Space of Diffusion-Based Generative Models**](https://arxiv.org/abs/2206.00364) | Unconditional Image Generation |
| [unclip](./api/pipelines/unclip) | [Hierarchical Text-Conditional Image Generation with CLIP Latents](https://arxiv.org/abs/2204.06125) | Text-to-Image Generation |
| [versatile_diffusion](./api/pipelines/versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Text-to-Image Generation |
| [versatile_diffusion](./api/pipelines/versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Image Variations Generation |
| [versatile_diffusion](./api/pipelines/versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Dual Image and Text Guided Generation |
| [vq_diffusion](./api/pipelines/vq_diffusion) | [Vector Quantized Diffusion Model for Text-to-Image Synthesis](https://arxiv.org/abs/2111.14822) | Text-to-Image Generation |
**Note**: Pipelines are simple examples of how to play around with the diffusion systems as described in the corresponding papers.
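As a quick, hedged sketch of what that looks like in practice, any pipeline in the table can be loaded by its Hub id through `DiffusionPipeline`; the checkpoint below, `google/ddpm-cat-256`, is just one public example:
```python
from diffusers import DiffusionPipeline

# Load a pipeline from the table above by its Hub checkpoint id.
pipeline = DiffusionPipeline.from_pretrained("google/ddpm-cat-256")

# Unconditional generation: fewer steps trades quality for speed.
image = pipeline(num_inference_steps=25).images[0]
image.save("ddpm_cat.png")
```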

@@ -0,0 +1,144 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Installation
Install 🤗 Diffusers for whichever deep learning library you're working with.
🤗 Diffusers is tested on Python 3.7+, PyTorch 1.7.0+ and Flax. Follow the installation instructions below for the deep learning library you are using:
- [PyTorch](https://pytorch.org/get-started/locally/) installation instructions.
- [Flax](https://flax.readthedocs.io/en/latest/) installation instructions.
## Install with pip
You should install 🤗 Diffusers in a [virtual environment](https://docs.python.org/3/library/venv.html).
If you're unfamiliar with Python virtual environments, take a look at this [guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/).
A virtual environment makes it easier to manage different projects, and avoid compatibility issues between dependencies.
Start by creating a virtual environment in your project directory:
```bash
python -m venv .env
```
Activate the virtual environment:
```bash
source .env/bin/activate
```
Now you're ready to install 🤗 Diffusers with the following command:
**For PyTorch**
```bash
pip install diffusers["torch"]
```
**For Flax**
```bash
pip install diffusers["flax"]
```
## Install from source
Before installing `diffusers` from source, make sure you have `torch` and `accelerate` installed.
For `torch` installation, refer to the `torch` [docs](https://pytorch.org/get-started/locally/#start-locally).
To install `accelerate`:
```bash
pip install accelerate
```
Install 🤗 Diffusers from source with the following command:
```bash
pip install git+https://github.com/huggingface/diffusers
```
This command installs the bleeding edge `main` version rather than the latest `stable` version.
The `main` version is useful for staying up-to-date with the latest developments.
For instance, if a bug has been fixed since the last official release but a new release hasn't been rolled out yet.
However, this means the `main` version may not always be stable.
We strive to keep the `main` version operational, and most issues are usually resolved within a few hours or a day.
If you run into a problem, please open an [Issue](https://github.com/huggingface/diffusers/issues), so we can fix it even sooner!
## Editable install
You will need an editable install if you'd like to:
* Use the `main` version of the source code.
* Contribute to 🤗 Diffusers and need to test changes in the code.
Clone the repository and install 🤗 Diffusers with the following commands:
```bash
git clone https://github.com/huggingface/diffusers.git
cd diffusers
```
**For PyTorch**
```bash
pip install -e ".[torch]"
```
**For Flax**
```bash
pip install -e ".[flax]"
```
These commands link the folder you cloned the repository into with your Python library paths.
Python will now look inside the folder you cloned in addition to the normal library paths.
For example, if your Python packages are typically installed in `~/anaconda3/envs/main/lib/python3.7/site-packages/`, Python will also search the `~/diffusers/` folder you cloned to.
<Tip warning={true}>
You must keep the `diffusers` folder if you want to keep using the library.
</Tip>
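To confirm which copy Python is actually importing (a quick sanity check, not part of the official instructions), you can inspect the module's path:
```python
import diffusers

# With an editable install, this should point inside the folder you cloned,
# e.g. ~/diffusers/src/diffusers/__init__.py
print(diffusers.__file__)
print(diffusers.__version__)
```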
Now you can easily update your clone to the latest version of 🤗 Diffusers with the following command:
```bash
cd ~/diffusers/
git pull
```
Your Python environment will find the `main` version of 🤗 Diffusers on the next run.
## Notice on telemetry logging
Our library gathers telemetry information during `from_pretrained()` requests.
This data includes the version of Diffusers and PyTorch/Flax, the requested model or pipeline class,
and the path to a pretrained checkpoint if it is hosted on the Hub.
This usage data helps us debug issues and prioritize new features.
Telemetry is only sent when loading models and pipelines from the HuggingFace Hub,
and is not collected during local usage.
We understand that not everyone wants to share additional information, and we respect your privacy,
so you can disable telemetry collection by setting the `DISABLE_TELEMETRY` environment variable from your terminal:
On Linux/MacOS:
```bash
export DISABLE_TELEMETRY=YES
```
On Windows:
```bash
set DISABLE_TELEMETRY=YES
```
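If you prefer to stay in Python, setting the variable in `os.environ` before any models are loaded should work as well (a sketch, assuming the environment variable is read at request time rather than at import):
```python
import os

# Disable telemetry for this process before any pipelines are loaded.
os.environ["DISABLE_TELEMETRY"] = "YES"

from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
```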

@@ -0,0 +1,346 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Memory and speed
We present some techniques and ideas to optimize 🤗 Diffusers _inference_ for memory or speed. As a general rule, we recommend the use of [xFormers](https://github.com/facebookresearch/xformers) for memory efficient attention; please see the recommended [installation instructions](xformers).
We'll discuss how the following settings impact performance and memory.
| | Latency | Speedup |
| ---------------- | ------- | ------- |
| original | 9.50s | x1 |
| cuDNN auto-tuner | 9.37s | x1.01 |
| fp16 | 3.61s | x2.63 |
| channels last | 3.30s | x2.88 |
| traced UNet | 3.21s | x2.96 |
| memory efficient attention | 2.63s | x3.61 |
<em>
obtained on NVIDIA TITAN RTX by generating a single image of size 512x512 from
the prompt "a photo of an astronaut riding a horse on mars" with 50 DDIM
steps.
</em>
## Enable cuDNN auto-tuner
[NVIDIA cuDNN](https://developer.nvidia.com/cudnn) supports many algorithms to compute a convolution. The autotuner runs a short benchmark and selects the kernel with the best performance on the given hardware for a given input size.
Since we're using **convolutional networks** (other types are currently not supported), we can enable the cuDNN autotuner before launching the inference by setting:
```python
import torch
torch.backends.cudnn.benchmark = True
```
### Use tf32 instead of fp32 (on Ampere and later CUDA devices)
On Ampere and later CUDA devices matrix multiplications and convolutions can use the TensorFloat32 (TF32) mode for faster but slightly less accurate computations. By default PyTorch enables TF32 mode for convolutions but not matrix multiplications, and unless a network requires full float32 precision we recommend enabling this setting for matrix multiplications, too. It can significantly speed up computations with typically negligible loss of numerical accuracy. You can read more about it [here](https://huggingface.co/docs/transformers/v4.18.0/en/performance#tf32). All you need to do is to add this before your inference:
```python
import torch
torch.backends.cuda.matmul.allow_tf32 = True
```
## Half precision weights
To save more GPU memory and get more speed, you can load and run the model weights directly in half precision. This involves loading the float16 version of the weights, which was saved to a branch named `fp16`, and telling PyTorch to use the `float16` type when loading them:
```Python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
```
<Tip warning={true}>
It is strongly discouraged to make use of [`torch.autocast`](https://pytorch.org/docs/stable/amp.html#torch.autocast) in any of the pipelines as it can lead to black images and is always slower than using pure
float16 precision.
</Tip>
## Sliced attention for additional memory savings
For even additional memory savings, you can use a sliced version of attention that performs the computation in steps instead of all at once.
<Tip>
Attention slicing is useful even if a batch size of just 1 is used - as long
as the model uses more than one attention head. If there is more than one
attention head the *QK^T* attention matrix can be computed sequentially for
each head which can save a significant amount of memory.
</Tip>
To perform the attention computation sequentially over each head, you only need to invoke [`~StableDiffusionPipeline.enable_attention_slicing`] in your pipeline before inference, like here:
```Python
import torch
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
pipe.enable_attention_slicing()
image = pipe(prompt).images[0]
```
There's a small performance penalty of about 10% slower inference times, but this method allows you to use Stable Diffusion in as little as 3.2 GB of VRAM!
## Sliced VAE decode for larger batches
To decode large batches of images with limited VRAM, or to enable batches with 32 images or more, you can use sliced VAE decode that decodes the batch latents one image at a time.
You likely want to couple this with [`~StableDiffusionPipeline.enable_attention_slicing`] or [`~StableDiffusionPipeline.enable_xformers_memory_efficient_attention`] to further minimize memory use.
To perform the VAE decode one image at a time, invoke [`~StableDiffusionPipeline.enable_vae_slicing`] in your pipeline before inference. For example:
```Python
import torch
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
pipe.enable_vae_slicing()
images = pipe([prompt] * 32).images
```
You may see a small performance boost in VAE decode on multi-image batches. There should be no performance impact on single-image batches.
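As mentioned above, VAE slicing composes with the attention optimizations; here is a minimal sketch enabling both slicing options together:
```Python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Keep peak memory low on large batches by slicing both attention and VAE decode.
pipe.enable_attention_slicing()
pipe.enable_vae_slicing()

images = pipe(["a photo of an astronaut riding a horse on mars"] * 32).images
```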
## Offloading to CPU with accelerate for memory savings
For additional memory savings, you can offload the weights to CPU and only load them to GPU when performing the forward pass.
To perform CPU offloading, all you have to do is invoke [`~StableDiffusionPipeline.enable_sequential_cpu_offload`]:
```Python
import torch
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
prompt = "a photo of an astronaut riding a horse on mars"
pipe.enable_sequential_cpu_offload()
image = pipe(prompt).images[0]
```
With this, memory consumption drops to under 3 GB.
It is also possible to chain it with attention slicing for minimal memory consumption (< 2 GB):
```Python
import torch
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
prompt = "a photo of an astronaut riding a horse on mars"
pipe.enable_sequential_cpu_offload()
pipe.enable_attention_slicing(1)
image = pipe(prompt).images[0]
```
**Note**: When using `enable_sequential_cpu_offload()`, it is important to **not** move the pipeline to CUDA beforehand or else the gain in memory consumption will only be minimal. See [this issue](https://github.com/huggingface/diffusers/issues/1934) for more information.
## Using Channels Last memory format
Channels last memory format is an alternative way of ordering NCHW tensors in memory that preserves the dimension ordering. Channels last tensors are ordered in such a way that the channels become the densest dimension (i.e., storing images pixel-by-pixel). Since not all operators currently support the channels last format, it may result in worse performance, so it's better to try it and see if it works for your model.
For example, in order to set the UNet model in our pipeline to use channels last format, we can use the following:
```python
print(pipe.unet.conv_out.state_dict()["weight"].stride()) # (2880, 9, 3, 1)
pipe.unet.to(memory_format=torch.channels_last) # in-place operation
print(
    pipe.unet.conv_out.state_dict()["weight"].stride()
)  # (2880, 1, 960, 320) having a stride of 1 for the 2nd dimension proves that it works
```
## Tracing
Tracing runs an example input tensor through your model and captures the operations invoked as that input makes its way through the model's layers, returning an executable or `ScriptFunction` that is optimized with just-in-time compilation.
To trace our UNet model, we can use the following:
```python
import functools
import time

import torch
from diffusers import StableDiffusionPipeline

# torch disable grad
torch.set_grad_enabled(False)

# set variables
n_experiments = 2
unet_runs_per_experiment = 50


# load inputs
def generate_inputs():
    sample = torch.randn(2, 4, 64, 64).half().cuda()
    timestep = torch.rand(1).half().cuda() * 999
    encoder_hidden_states = torch.randn(2, 77, 768).half().cuda()
    return sample, timestep, encoder_hidden_states


pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")
unet = pipe.unet
unet.eval()
unet.to(memory_format=torch.channels_last)  # use channels_last memory format
unet.forward = functools.partial(unet.forward, return_dict=False)  # set return_dict=False as default

# warmup
for _ in range(3):
    with torch.inference_mode():
        inputs = generate_inputs()
        orig_output = unet(*inputs)

# trace
print("tracing..")
unet_traced = torch.jit.trace(unet, inputs)
unet_traced.eval()
print("done tracing")

# warmup and optimize graph
for _ in range(5):
    with torch.inference_mode():
        inputs = generate_inputs()
        orig_output = unet_traced(*inputs)

# benchmarking
with torch.inference_mode():
    for _ in range(n_experiments):
        torch.cuda.synchronize()
        start_time = time.time()
        for _ in range(unet_runs_per_experiment):
            orig_output = unet_traced(*inputs)
        torch.cuda.synchronize()
        print(f"unet traced inference took {time.time() - start_time:.2f} seconds")
    for _ in range(n_experiments):
        torch.cuda.synchronize()
        start_time = time.time()
        for _ in range(unet_runs_per_experiment):
            orig_output = unet(*inputs)
        torch.cuda.synchronize()
        print(f"unet inference took {time.time() - start_time:.2f} seconds")

# save the model
unet_traced.save("unet_traced.pt")
```
Then we can replace the `unet` attribute of the pipeline with the traced model as follows:
```python
from dataclasses import dataclass

import torch
from diffusers import StableDiffusionPipeline


@dataclass
class UNet2DConditionOutput:
    sample: torch.FloatTensor


pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# use jitted unet
unet_traced = torch.jit.load("unet_traced.pt")


# del pipe.unet
class TracedUNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.in_channels = pipe.unet.in_channels
        self.device = pipe.unet.device

    def forward(self, latent_model_input, t, encoder_hidden_states):
        sample = unet_traced(latent_model_input, t, encoder_hidden_states)[0]
        return UNet2DConditionOutput(sample=sample)


pipe.unet = TracedUNet()

prompt = "a photo of an astronaut riding a horse on mars"
with torch.inference_mode():
    image = pipe([prompt] * 1, num_inference_steps=50).images[0]
```
## Memory Efficient Attention
Recent work on optimizing the bandwidth in the attention block has generated huge speed-ups and gains in GPU memory usage. The most recent type of memory efficient attention is [Flash Attention](https://arxiv.org/pdf/2205.14135.pdf) from @tridao ([code](https://github.com/HazyResearch/flash-attention)).
Here are the speedups we obtain on a few Nvidia GPUs when running the inference at 512x512 with a batch size of 1 (one prompt):
| GPU | Base Attention FP16 | Memory Efficient Attention FP16 |
|------------------ |--------------------- |--------------------------------- |
| NVIDIA Tesla T4 | 3.5it/s | 5.5it/s |
| NVIDIA 3060 RTX | 4.6it/s | 7.8it/s |
| NVIDIA A10G | 8.88it/s | 15.6it/s |
| NVIDIA RTX A6000 | 11.7it/s | 21.09it/s |
| NVIDIA TITAN RTX | 12.51it/s | 18.22it/s |
| A100-SXM4-40GB    | 18.6it/s             | 29.0it/s                         |
| A100-SXM-80GB | 18.7it/s | 29.5it/s |
To leverage it, just make sure you have:
- PyTorch > 1.12
- CUDA available
- [installed the xFormers library](xformers).
```python
from diffusers import StableDiffusionPipeline
import torch
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

pipe.enable_xformers_memory_efficient_attention()

with torch.inference_mode():
    sample = pipe("a small cat")

# optional: You can disable it via
# pipe.disable_xformers_memory_efficient_attention()
```

@@ -0,0 +1,70 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# How to use Stable Diffusion on Habana Gaudi
🤗 Diffusers is compatible with Habana Gaudi through 🤗 [Optimum Habana](https://huggingface.co/docs/optimum/habana/usage_guides/stable_diffusion).
## Requirements
- Optimum Habana 1.3 or later; see [here](https://huggingface.co/docs/optimum/habana/installation) for how to install it.
- SynapseAI 1.7.
## Inference Pipeline
To generate images with Stable Diffusion 1 and 2 on Gaudi, you need to instantiate two components:
- A pipeline with [`GaudiStableDiffusionPipeline`](https://huggingface.co/docs/optimum/habana/package_reference/stable_diffusion_pipeline). This pipeline supports *text-to-image generation*.
- A scheduler with [`GaudiDDIMScheduler`](https://huggingface.co/docs/optimum/habana/package_reference/stable_diffusion_pipeline#optimum.habana.diffusers.GaudiDDIMScheduler). This scheduler has been optimized for Habana Gaudi.
When initializing the pipeline, you have to specify `use_habana=True` to deploy it on HPUs.
Furthermore, in order to get the fastest possible generations you should enable **HPU graphs** with `use_hpu_graphs=True`.
Finally, you will need to specify a [Gaudi configuration](https://huggingface.co/docs/optimum/habana/package_reference/gaudi_config) which can be downloaded from the [Hugging Face Hub](https://huggingface.co/Habana).
```python
from optimum.habana import GaudiConfig
from optimum.habana.diffusers import GaudiDDIMScheduler, GaudiStableDiffusionPipeline
model_name = "stabilityai/stable-diffusion-2-base"
scheduler = GaudiDDIMScheduler.from_pretrained(model_name, subfolder="scheduler")
pipeline = GaudiStableDiffusionPipeline.from_pretrained(
    model_name,
    scheduler=scheduler,
    use_habana=True,
    use_hpu_graphs=True,
    gaudi_config="Habana/stable-diffusion",
)
```
You can then call the pipeline to generate images by batches from one or several prompts:
```python
outputs = pipeline(
    prompt=[
        "High quality photo of an astronaut riding a horse in space",
        "Face of a yellow cat, high resolution, sitting on a park bench",
    ],
    num_images_per_prompt=10,
    batch_size=4,
)
```
For more information, check out Optimum Habana's [documentation](https://huggingface.co/docs/optimum/habana/usage_guides/stable_diffusion) and the [example](https://github.com/huggingface/optimum-habana/tree/main/examples/stable-diffusion) provided in the official Github repository.
## Benchmark
Here are the latencies for Habana Gaudi 1 and Gaudi 2 with the [Habana/stable-diffusion](https://huggingface.co/Habana/stable-diffusion) Gaudi configuration (mixed precision bf16/fp32):
| | Latency | Batch size |
| ------- |:-------:|:----------:|
| Gaudi 1 | 4.37s | 4/8 |
| Gaudi 2 | 1.19s | 4/8 |

@@ -0,0 +1,63 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# How to use Stable Diffusion on Apple Silicon (M1/M2)
🤗 Diffusers is compatible with Apple silicon for Stable Diffusion inference, using the PyTorch `mps` device. These are the steps you need to follow to use your M1 or M2 computer with Stable Diffusion.
## Requirements
- Mac computer with Apple silicon (M1/M2) hardware.
- macOS 12.6 or later (13.0 or later recommended).
- arm64 version of Python.
- PyTorch 1.13. You can install it with `pip` or `conda` using the instructions in https://pytorch.org/get-started/locally/.
## Inference Pipeline
The snippet below demonstrates how to use the `mps` backend using the familiar `to()` interface to move the Stable Diffusion pipeline to your M1 or M2 device.
We recommend to "prime" the pipeline using an additional one-time pass through it. This is a temporary workaround for a weird issue we have detected: the first inference pass produces slightly different results than subsequent ones. You only need to do this pass once, and it's ok to use just one inference step and discard the result.
```python
# make sure you're logged in with `huggingface-cli login`
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("mps")
# Recommended if your computer has < 64 GB of RAM
pipe.enable_attention_slicing()
prompt = "a photo of an astronaut riding a horse on mars"
# First-time "warmup" pass (see explanation above)
_ = pipe(prompt, num_inference_steps=1)
# Results match those from the CPU device after the warmup pass.
image = pipe(prompt).images[0]
```
## Performance Recommendations
M1/M2 performance is very sensitive to memory pressure. The system will automatically swap if it needs to, but performance will degrade significantly when it does.
We recommend you use _attention slicing_ to reduce memory pressure during inference and prevent swapping, particularly if your computer has less than 64 GB of system RAM, or if you generate images at non-standard resolutions larger than 512 × 512 pixels. Attention slicing performs the costly attention operation in multiple steps instead of all at once. It usually has a performance impact of ~20% in computers without universal memory, but we have observed _better performance_ in most Apple Silicon computers, unless you have 64 GB or more.
```python
pipeline.enable_attention_slicing()
```
## Known Issues
- As mentioned above, we are investigating a strange [first-time inference issue](https://github.com/huggingface/diffusers/issues/372).
- Generating multiple prompts in a batch [crashes or doesn't work reliably](https://github.com/huggingface/diffusers/issues/363). We believe this is related to the [`mps` backend in PyTorch](https://github.com/pytorch/pytorch/issues/84039). This is being resolved, but for now we recommend iterating instead of batching, as in the sketch below.
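A minimal sketch of that workaround, reusing the `pipe` object created above (the prompts are illustrative):
```python
prompts = [
    "a photo of an astronaut riding a horse on mars",
    "a watercolor painting of a lighthouse at dawn",
]

# Generate one image per call instead of passing the whole batch at once.
images = [pipe(p).images[0] for p in prompts]
```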

@@ -0,0 +1,42 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# How to use the ONNX Runtime for inference
🤗 Diffusers provides a Stable Diffusion pipeline compatible with the ONNX Runtime. This allows you to run Stable Diffusion on any hardware that supports ONNX (including CPUs), and where an accelerated version of PyTorch is not available.
## Installation
- TODO
## Stable Diffusion Inference
The snippet below demonstrates how to use the ONNX runtime. You need to use `StableDiffusionOnnxPipeline` instead of `StableDiffusionPipeline`. You also need to download the weights from the `onnx` branch of the repository, and indicate the runtime provider you want to use.
```python
# make sure you're logged in with `huggingface-cli login`
from diffusers import StableDiffusionOnnxPipeline
pipe = StableDiffusionOnnxPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    revision="onnx",
    provider="CUDAExecutionProvider",
)
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
```
## Known Issues
- Generating multiple prompts in a batch seems to take too much memory. While we look into it, you may need to iterate instead of batching.

@@ -0,0 +1,15 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# OpenVINO
Under construction 🚧

@@ -0,0 +1,26 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Installing xFormers
We recommend the use of [xFormers](https://github.com/facebookresearch/xformers) for both inference and training. In our tests, the optimizations performed in the attention blocks allow for both faster speed and reduced memory consumption.
Installing xFormers has historically been a bit involved, as binary distributions were not always up to date. Fortunately, the project has [very recently](https://github.com/facebookresearch/xformers/pull/591) integrated a process to build pip wheels as part of the project's continuous integration, so this should improve a lot starting from xFormers version 0.0.16.
Until xFormers 0.0.16 is deployed, you can install pip wheels from [TestPyPI](https://test.pypi.org/project/xformers/). These are the steps that worked for us on a Linux computer to install xFormers version 0.0.15:
```bash
pip install pyre-extensions==0.0.23
pip install -i https://test.pypi.org/simple/ xformers==0.0.15.dev376
```
We'll update these instructions when the wheels are published to the official PyPI repository.

@@ -0,0 +1,130 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Quicktour
Get up and running with 🧨 Diffusers quickly!
Whether you're a developer or an everyday user, this quick tour will help you get started and show you how to use [`DiffusionPipeline`] for inference.
Before you begin, make sure you have all the necessary libraries installed:
```bash
pip install --upgrade diffusers accelerate transformers
```
- [`accelerate`](https://huggingface.co/docs/accelerate/index) speeds up model loading for inference and training
- [`transformers`](https://huggingface.co/docs/transformers/index) is required to run the most popular diffusion models, such as [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/overview)
## DiffusionPipeline
The [`DiffusionPipeline`] is the easiest way to use a pre-trained diffusion system for inference. You can use the [`DiffusionPipeline`] out-of-the-box for many tasks across different modalities. Take a look at the table below for some supported tasks:
| **Task** | **Description** | **Pipeline**
|------------------------------|--------------------------------------------------------------------------------------------------------------|-----------------|
| Unconditional Image Generation | generate an image from Gaussian noise | [unconditional_image_generation](./using-diffusers/unconditional_image_generation) |
| Text-Guided Image Generation | generate an image given a text prompt | [conditional_image_generation](./using-diffusers/conditional_image_generation) |
| Text-Guided Image-to-Image Translation | adapt an image guided by a text prompt | [img2img](./using-diffusers/img2img) |
| Text-Guided Image-Inpainting | fill the masked part of an image given the image, the mask and a text prompt | [inpaint](./using-diffusers/inpaint) |
| Text-Guided Depth-to-Image Translation | adapt parts of an image guided by a text prompt while preserving structure via depth estimation | [depth2image](./using-diffusers/depth2image) |
For more detailed information on how diffusion pipelines work for the different tasks, please have a look at the [**Using Diffusers**](./using-diffusers/overview) section.
As an example, start by creating an instance of [`DiffusionPipeline`] and specify which pipeline checkpoint you would like to download.
You can use the [`DiffusionPipeline`] for any [Diffusers' checkpoint](https://huggingface.co/models?library=diffusers&sort=downloads).
In this guide though, you'll use [`DiffusionPipeline`] for text-to-image generation with [Stable Diffusion](https://huggingface.co/CompVis/stable-diffusion).
For [Stable Diffusion](https://huggingface.co/CompVis/stable-diffusion), please carefully read its [license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) before running the model.
This is due to the improved image generation capabilities of the model and the potentially harmful content that could be produced with it.
Please, head over to your stable diffusion model of choice, *e.g.* [`runwayml/stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5), and read the license.
You can load the model as follows:
```python
>>> from diffusers import DiffusionPipeline
>>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
```
The [`DiffusionPipeline`] downloads and caches all modeling, tokenization, and scheduling components.
Because the model consists of roughly 1.4 billion parameters, we strongly recommend running it on GPU.
You can move the generator object to GPU, just like you would in PyTorch.
```python
>>> pipeline.to("cuda")
```
Now you can use the `pipeline` on your text prompt:
```python
>>> image = pipeline("An image of a squirrel in Picasso style").images[0]
```
The output is by default wrapped into a [PIL Image object](https://pillow.readthedocs.io/en/stable/reference/Image.html?highlight=image#the-image-class).
You can save the image by simply calling:
```python
>>> image.save("image_of_squirrel_painting.png")
```
**Note**: You can also use the pipeline locally by downloading the weights via:
```bash
git lfs install
git clone https://huggingface.co/runwayml/stable-diffusion-v1-5
```
and then loading the saved weights into the pipeline.
```python
>>> pipeline = DiffusionPipeline.from_pretrained("./stable-diffusion-v1-5")
```
Running the pipeline is then identical to the code above as it's the same model architecture.
```python
>>> pipeline.to("cuda")
>>> image = pipeline("An image of a squirrel in Picasso style").images[0]
>>> image.save("image_of_squirrel_painting.png")
```
Diffusion systems can be used with multiple different [schedulers](./api/schedulers/overview) each with their
pros and cons. By default, Stable Diffusion runs with [`PNDMScheduler`], but it's very simple to
use a different scheduler. *E.g.* if you would instead like to use the [`EulerDiscreteScheduler`] scheduler,
you could use it as follows:
```python
>>> from diffusers import EulerDiscreteScheduler, StableDiffusionPipeline

>>> pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
>>> # change scheduler to Euler
>>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config)
```
For more detailed information on how to change between schedulers, please refer to the [Using Schedulers](./using-diffusers/schedulers) guide.
[Stability AI's](https://stability.ai/) Stable Diffusion model is an impressive image generation model
and can do much more than just generating images from text. We have dedicated a whole documentation page,
just for Stable Diffusion [here](./conceptual/stable_diffusion).
If you want to know how to optimize Stable Diffusion to run on less memory, higher inference speeds, on specific hardware, such as Mac, or with [ONNX Runtime](https://onnxruntime.ai/), please have a look at our
optimization pages:
- [Optimized PyTorch on GPU](./optimization/fp16)
- [Mac OS with PyTorch](./optimization/mps)
- [ONNX](./optimization/onnx)
- [OpenVINO](./optimization/open_vino)
If you want to fine-tune or train your diffusion model, please have a look at the [**training section**](./training/overview).
Finally, please be considerate when distributing generated images publicly 🤗.

@@ -0,0 +1,333 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# The Stable Diffusion Guide 🎨
<a target="_blank" href="https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_101_guide.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
## Intro
Stable Diffusion is a [Latent Diffusion model](https://github.com/CompVis/latent-diffusion) developed by researchers from the Machine Vision and Learning group at LMU Munich, *a.k.a* CompVis.
Model checkpoints were publicly released at the end of August 2022 by a collaboration of Stability AI, CompVis, and Runway with support from EleutherAI and LAION. For more information, you can check out [the official blog post](https://stability.ai/blog/stable-diffusion-public-release).
Since its public release the community has done an incredible job at working together to make the stable diffusion checkpoints **faster**, **more memory efficient**, and **more performant**.
🧨 Diffusers offers a simple API to run stable diffusion with all memory, computing, and quality improvements.
This notebook walks you through the improvements one-by-one so you can best leverage [`StableDiffusionPipeline`] for **inference**.
## Prompt Engineering 🎨
When running *Stable Diffusion* in inference, we usually want to generate a certain type, or style of image and then improve upon it. Improving upon a previously generated image means running inference over and over again with a different prompt and potentially a different seed until we are happy with our generation.
So to begin with, it is most important to speed up stable diffusion as much as possible to generate as many pictures as possible in a given amount of time.
This can be done by both improving the **computational efficiency** (speed) and the **memory efficiency** (GPU RAM).
Let's start by looking into computational efficiency first.
Throughout the notebook, we will focus on [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5):
``` python
model_id = "runwayml/stable-diffusion-v1-5"
```
Let's load the pipeline.
## Speed Optimization
``` python
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained(model_id)
```
We aim to generate a beautiful photograph of an *old warrior chief* and will later try to find the best prompt to generate such a photograph. For now, let's keep the prompt simple:
``` python
prompt = "portrait photo of a old warrior chief"
```
To begin with, we should make sure we run inference on GPU, so let's move the pipeline to GPU, just like you would with any PyTorch module.
``` python
pipe = pipe.to("cuda")
```
To generate an image, you should use the [`~StableDiffusionPipeline.__call__`] method.
To make sure we can reproduce more or less the same image in every call, let's make use of the generator. See the documentation on reproducibility [here](./conceptual/reproducibility) for more information.
``` python
import torch

generator = torch.Generator("cuda").manual_seed(0)
```
Now, let's give it a spin.
``` python
image = pipe(prompt, generator=generator).images[0]
image
```
![img](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/stable_diffusion_101/sd_101_1.png)
Cool, this now took roughly 30 seconds on a T4 GPU (you might see faster inference if your allocated GPU is better than a T4).
The default run we did above used full float32 precision and ran the default number of inference steps (50). The easiest speed-ups come from switching to float16 (or half) precision and simply running fewer inference steps. Let's load the model now in float16 instead.
``` python
import torch
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
```
And we can again call the pipeline to generate an image.
``` python
generator = torch.Generator("cuda").manual_seed(0)
image = pipe(prompt, generator=generator).images[0]
image
```
![img](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/stable_diffusion_101/sd_101_2.png)
Cool, this is almost three times as fast for arguably the same image quality.
We strongly suggest always running your pipelines in float16, as we have so far very rarely seen any degradation in quality because of it.
Next, let's see if we need to use 50 inference steps or whether we could use significantly fewer. The number of inference steps is associated with the denoising scheduler we use. Choosing a more efficient scheduler could help us decrease the number of steps.
Let's have a look at all the schedulers the stable diffusion pipeline is compatible with.
``` python
pipe.scheduler.compatibles
```
```
[diffusers.schedulers.scheduling_dpmsolver_singlestep.DPMSolverSinglestepScheduler,
diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler,
diffusers.schedulers.scheduling_heun_discrete.HeunDiscreteScheduler,
diffusers.schedulers.scheduling_pndm.PNDMScheduler,
diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler,
diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler,
diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler,
diffusers.schedulers.scheduling_ddpm.DDPMScheduler,
diffusers.schedulers.scheduling_ddim.DDIMScheduler]
```
Cool, that's a lot of schedulers.
🧨 Diffusers is constantly adding a bunch of novel schedulers/samplers that can be used with Stable Diffusion. For more information, we recommend taking a look at the official documentation [here](https://huggingface.co/docs/diffusers/main/en/api/schedulers/overview).
Alright, right now Stable Diffusion is using the `PNDMScheduler` which usually requires around 50 inference steps. However, other schedulers such as `DPMSolverMultistepScheduler` or `DPMSolverSinglestepScheduler` seem to get away with just 20 to 25 inference steps. Let's try them out.
You can set a new scheduler by making use of the [from_config](https://huggingface.co/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin.from_config) function.
``` python
from diffusers import DPMSolverMultistepScheduler
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
```
Now, let's try to reduce the number of inference steps to just 20.
``` python
generator = torch.Generator("cuda").manual_seed(0)
image = pipe(prompt, generator=generator, num_inference_steps=20).images[0]
image
```
![img](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/stable_diffusion_101/sd_101_3.png)
The image now does look a little different, but it's arguably still of equally high quality, and we cut the inference time down to just 4 seconds 😍.
## Memory Optimization
Less memory used in generation indirectly implies more speed, since we're often trying to maximize how many images we can generate per second. Usually, the more images per inference run, the more images per second too.
The easiest way to see how many images we can generate at once is to simply try it out, and see when we get an *"Out-of-memory" (OOM)* error.
We can run batched inference by simply passing a list of prompts and generators. Let's define a quick function that generates a batch for us.
``` python
def get_inputs(batch_size=1):
    generator = [torch.Generator("cuda").manual_seed(i) for i in range(batch_size)]
    prompts = batch_size * [prompt]
    num_inference_steps = 20
    return {"prompt": prompts, "generator": generator, "num_inference_steps": num_inference_steps}
```
This function returns a list of prompts and a list of generators, so we can reuse the generator that produced a result we like.
We also need a method that allows us to easily display a batch of images.
``` python
from PIL import Image


def image_grid(imgs, rows=2, cols=2):
    w, h = imgs[0].size
    grid = Image.new("RGB", size=(cols * w, rows * h))
    for i, img in enumerate(imgs):
        grid.paste(img, box=(i % cols * w, i // cols * h))
    return grid
```
Cool, let's see how much memory we can use starting with `batch_size=4`.
``` python
images = pipe(**get_inputs(batch_size=4)).images
image_grid(images)
```
![img](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/stable_diffusion_101/sd_101_4.png)
Going over a batch_size of 4 will error out in this notebook (assuming we are running it on a T4 GPU). Also, we can see we only generate slightly more images per second (3.75s/image) compared to 4s/image previously.
However, the community has found some nice tricks to improve the memory constraints further. After stable diffusion was released, the community found improvements within days and shared them freely over GitHub - open-source at its finest! I believe the original idea came from [this](https://github.com/basujindal/stable-diffusion/pull/117) GitHub thread.
By far most of the memory is taken up by the cross-attention layers. Instead of running this operation in batch, one can run it sequentially to save a significant amount of memory.
It can easily be enabled by calling `enable_attention_slicing` as is documented [here](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.StableDiffusionPipeline.enable_attention_slicing).
``` python
pipe.enable_attention_slicing()
```
Great, now that attention slicing is enabled, let's try to double the batch size again, going for `batch_size=8`.
``` python
images = pipe(**get_inputs(batch_size=8)).images
image_grid(images, rows=2, cols=4)
```
![img](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/stable_diffusion_101/sd_101_5.png)
Nice, it works. However, the speed gain is again not very big (it might however be much more significant on other GPUs).
We're at roughly 3.5 seconds per image 🔥 which is probably the fastest we can be with a simple T4 without sacrificing quality.
Next, let's look into how to improve the quality!
## Quality Improvements
Now that our image generation pipeline is blazing fast, let's try to get maximum image quality.
First of all, image quality is extremely subjective, so it's difficult to make general claims here.
The most obvious step to take to improve quality is to use *better checkpoints*. Since the release of Stable Diffusion, many improved versions have been released, which are summarized here:
- *Official Release - 22 Aug 2022*: [Stable-Diffusion 1.4](https://huggingface.co/CompVis/stable-diffusion-v1-4)
- *20 October 2022*: [Stable-Diffusion 1.5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
- *24 Nov 2022*: [Stable-Diffusion 2.0](https://huggingface.co/stabilityai/stable-diffusion-2-0)
- *7 Dec 2022*: [Stable-Diffusion 2.1](https://huggingface.co/stabilityai/stable-diffusion-2-1)
Newer versions don't necessarily mean better image quality with the same parameters. People mentioned that *2.0* is slightly worse than *1.5* for certain prompts, but given the right prompt engineering *2.0* and *2.1* seem to be better.
Overall, we strongly recommend just trying the models out and reading up on advice online (e.g., it has been shown that using negative prompts is very important for 2.0 and 2.1 to get the highest possible quality; see for example [this nice blog post](https://minimaxir.com/2022/11/stable-diffusion-negative-prompt/)).
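Since negative prompts come up so often, here is a minimal sketch of passing one via the pipeline's `negative_prompt` argument (the negative prompt text is illustrative, and `pipe` is the float16 pipeline from above):
``` python
generator = torch.Generator("cuda").manual_seed(0)
image = pipe(
    prompt,
    negative_prompt="lowres, bad anatomy, worst quality, blurry",  # illustrative values
    generator=generator,
    num_inference_steps=20,
).images[0]
```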
Additionally, the community has started fine-tuning many of the above versions on certain styles with some of them having an extremely high quality and gaining a lot of traction.
We recommend having a look at all [diffusers checkpoints sorted by downloads and trying out the different checkpoints](https://huggingface.co/models?library=diffusers).
For the following, we will stick to v1.5 for simplicity.
Next, we can also try to optimize single components of the pipeline, e.g. switching out the latent decoder. For more details on how the whole Stable Diffusion pipeline works, please have a look at [this blog post](https://huggingface.co/blog/stable_diffusion).
Let's load [stabilityai's fine-tuned autoencoder](https://huggingface.co/stabilityai/sd-vae-ft-mse).
``` python
from diffusers import AutoencoderKL
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16).to("cuda")
```
Now we can assign it to the pipeline's `vae` attribute to use it.
``` python
pipe.vae = vae
```
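Alternatively, if you are constructing the pipeline from scratch anyway, the same VAE can be passed directly to `from_pretrained`. A minimal sketch:
``` python
import torch
from diffusers import AutoencoderKL, StableDiffusionPipeline

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", vae=vae, torch_dtype=torch.float16
).to("cuda")
```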
Let's run the same prompt as before to compare quality.
``` python
images = pipe(**get_inputs(batch_size=8)).images
image_grid(images, rows=2, cols=4)
```
![img](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/stable_diffusion_101/sd_101_6.png)
The difference seems quite minor, but the new generations are arguably a bit *sharper*.
Cool, finally, let's take a look at prompt engineering.
Our goal was to generate a photo of an old warrior chief. Let's now try to bring a bit more color into the photos and make the overall look more impressive.
Originally our prompt was "*portrait photo of an old warrior chief*".
To improve the prompt, it often helps to add cues that high-quality photos of this kind would likely have been tagged with online, as well as more details.
Essentially, when doing prompt engineering, one has to ask:
- How would the photo I want, or similar photos, likely have been stored and described on the internet?
- What additional details can I give that steer the model toward the style I want?
Cool, let's add more details.
``` python
prompt += ", tribal panther make up, blue on red, side profile, looking away, serious eyes"
```
and let's also add some cues that usually help to generate higher-quality images.
``` python
prompt += " 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta"
prompt
```
Cool, let's now try this prompt.
``` python
images = pipe(**get_inputs(batch_size=8)).images
image_grid(images, rows=2, cols=4)
```
![img](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/stable_diffusion_101/sd_101_7.png)
Pretty impressive! We got some very high-quality generations there. The 2nd image is my personal favorite, so I'll re-use its seed and see whether I can tweak the prompt slightly by varying the age: "oldest", "old", no age qualifier at all, and "young".
``` python
prompts = [
"portrait photo of the oldest warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta",
"portrait photo of a old warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta",
"portrait photo of a warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta",
"portrait photo of a young warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta",
]
generator = [torch.Generator("cuda").manual_seed(1) for _ in range(len(prompts))]  # seed 1 reproduces the 2nd image from the grid above
images = pipe(prompt=prompts, generator=generator, num_inference_steps=25).images
image_grid(images)
```
![img](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/stable_diffusion_101/sd_101_8.png)
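As an aside, fixing the generator seed is what makes this comparison meaningful: every prompt starts from the identical initial latent noise, so any difference in the output comes from the prompt alone. To re-generate a single variant on its own, a minimal sketch reusing `prompts` from above:
``` python
generator = torch.Generator("cuda").manual_seed(1)
image = pipe(prompt=prompts[0], generator=generator, num_inference_steps=25).images[0]
```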
The first picture looks nice! The eye movement changed slightly but still looks good. This wraps up our 101 guide on how to use Stable Diffusion 🤗.
For more information on optimization or other guides, I recommend taking a look at the following:
- [Blog post about Stable Diffusion](https://huggingface.co/blog/stable_diffusion): In-detail blog post explaining Stable Diffusion.
- [FlashAttention](https://huggingface.co/docs/diffusers/optimization/xformers): xFormers' flash attention can optimize your model even further with more speed and memory improvements.
- [DreamBooth](https://huggingface.co/docs/diffusers/training/dreambooth): Quickly customize the model by fine-tuning it.
- [General info on Stable Diffusion](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/overview): Info on other tasks that are powered by Stable Diffusion.
