* Fix overflow in rgblike_to_depthmap by safe dtype casting (torch & NumPy)
* Fix: store original dtype and cast back after safe computation
* Apply style fixes
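A minimal sketch of the safe-cast fix described above (the helper body is illustrative — standard RGB-like → depth decoding — not the exact diffusers diff):

```python
import numpy as np
import torch

def rgblike_to_depthmap(image):
    # upcast before the arithmetic so narrow integer dtypes can't overflow,
    # then cast back to the original dtype once the computation is safe
    orig_dtype = image.dtype
    if isinstance(image, torch.Tensor):
        image = image.to(torch.int32)
        return (image[..., 1] * 2**8 + image[..., 2]).to(orig_dtype)
    image = image.astype(np.int32)
    return (image[..., 1] * 2**8 + image[..., 2]).astype(orig_dtype)
```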
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
* Changing the way we infer dtype to avoid forcing evaluation of lazy tensors
* changing the way we infer dtype to ensure type consistency
* more robust inferring of dtype
* removing the upscale dtype entirely
* Bria FIBO pipeline
* style fixes
* fix CR
* Refactor BriaFibo classes and update pipeline parameters
- Updated BriaFiboAttnProcessor and BriaFiboAttention classes to reflect changes from Flux equivalents.
- Modified the _unpack_latents method in BriaFiboPipeline to improve clarity.
- Increased the default max_sequence_length to 3000 and added a new optional parameter do_patching.
- Cleaned up test_pipeline_bria_fibo.py by removing unused imports and skipping unsupported tests.
* edit the docs of FIBO
* Remove unused BriaFibo imports and update CPU offload method in BriaFiboPipeline
* Refactor FIBO classes to BriaFibo naming convention
- Updated class names from FIBO to BriaFibo for consistency across the module.
- Modified instances of FIBOEmbedND, FIBOTimesteps, TextProjection, and TimestepProjEmbeddings to reflect the new naming.
- Ensured all references in the BriaFiboTransformer2DModel are updated accordingly.
* Add BriaFiboTransformer2DModel import to transformers module
* Remove unused BriaFibo imports from modular pipelines and add BriaFiboTransformer2DModel and BriaFiboPipeline classes to dummy objects for enhanced compatibility with torch and transformers.
* Update BriaFibo classes with copied documentation and fix import typo in pipeline module
- Added documentation comments indicating the source of copied code in BriaFiboTransformerBlock and _pack_latents methods.
- Corrected the import statement for BriaFiboPipeline in the pipelines module.
* Remove unused BriaFibo imports from __init__.py to streamline modular pipelines.
* Refactor documentation comments in BriaFibo classes to indicate inspiration from existing implementations
- Updated comments in BriaFiboAttnProcessor, BriaFiboAttention, and BriaFiboPipeline to reflect that the code is inspired by other modules rather than copied.
- Enhanced clarity on the origins of the methods to maintain proper attribution.
* change Inspired by to Based on
* add reference link and fix trailing whitespace
* Add BriaFiboTransformer2DModel documentation and update comments in BriaFibo classes
- Introduced a new documentation file for BriaFiboTransformer2DModel.
- Updated comments in BriaFiboAttnProcessor, BriaFiboAttention, and BriaFiboPipeline to clarify the origins of the code, indicating copied sources for better attribution.
---------
Co-authored-by: sayakpaul <spsayakpaul@gmail.com>
* rename photon to prx
* rename photon into prx
* Revert .gitignore to state before commit b7fb0fe9d6
* make fix-copies
* purge HF_HUB_ENABLE_HF_TRANSFER; promote Xet
* purge HF_HUB_ENABLE_HF_TRANSFER; promote Xet x2
* restrict docker build test to the ones we actually use in CI.
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Add Photon model and pipeline support
This commit adds support for the Photon image generation model:
- PhotonTransformer2DModel: Core transformer architecture
- PhotonPipeline: Text-to-image generation pipeline
- Attention processor updates for Photon-specific attention mechanism
- Conversion script for loading Photon checkpoints
- Documentation and tests
* just store the T5Gemma encoder
* enhance_vae_properties if vae is provided only
* remove autocast for text encoder forward
* BF16 example
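The BF16 example presumably boils down to something like this sketch (the checkpoint id is a placeholder, not a confirmed Hub repo):

```python
import torch
from diffusers import PhotonPipeline

# placeholder checkpoint id for illustration
pipe = PhotonPipeline.from_pretrained("photoroom/photon", torch_dtype=torch.bfloat16)
pipe.to("cuda")
image = pipe(prompt="a red bicycle leaning against a brick wall").images[0]
image.save("bicycle.png")
```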
* conditioned CFG
* remove enhance vae and use vae.config directly when possible
* move PhotonAttnProcessor2_0 in transformer_photon
* remove einops dependency and now inherits from AttentionMixin
* unify the structure of the forward block
* update doc
* update doc
* fix T5Gemma loading from hub
* fix timestep shift
* remove lora support from doc
* Rename EmbedND to PhotonEmbedND
* remove modulation dataclass
* put _attn_forward and _ffn_forward logic in PhotonBlock's forward
* rename LastLayer to FinalLayer
* remove lora related code
* rename vae_spatial_compression_ratio to vae_scale_factor
* support prompt_embeds in call
* move cross-attention conditioning computation out of the denoising loop
* add negative prompts
* Use _import_structure for lazy loading
* make quality + style
* add pipeline test + corresponding fixes
* utility function that determines the default resolution given the VAE
* Refactor PhotonAttention to match Flux pattern
* built-in RMSNorm
* Revert accidental .gitignore change
* parameter names match the standard diffusers conventions
* renaming and removing unnecessary attribute settings
* Update docs/source/en/api/pipelines/photon.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* quantization example
* added doc to toctree
* Update docs/source/en/api/pipelines/photon.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/api/pipelines/photon.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/api/pipelines/photon.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* use dispatch_attention_fn for multiple attention backend support
* naming changes
* make fix-copies
* Update docs/source/en/api/pipelines/photon.md
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
* Add PhotonTransformer2DModel to TYPE_CHECKING imports
* make fix-copies
* Use Tuple instead of tuple
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
* restrict the version of transformers
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
* Update tests/pipelines/photon/test_pipeline_photon.py
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
* Update tests/pipelines/photon/test_pipeline_photon.py
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
* change `|` to `Optional`
* fix nits.
* use typing Dict
---------
Co-authored-by: davidb <davidb@worker-10.soperator-worker-svc.soperator.svc.cluster.local>
Co-authored-by: David Briand <david@photoroom.com>
Co-authored-by: davidb <davidb@worker-8.soperator-worker-svc.soperator.svc.cluster.local>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
Co-authored-by: sayakpaul <spsayakpaul@gmail.com>
Fix incorrect temporary variable key used when replacing adapter name in state dict within load_lora_adapter function
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* fix dockerfile definitions.
* python 3.10 slim.
* up
* up
* up
* up
* up
* revert pr_tests.yml changes
* up
* up
* reduce python version for torch 2.1.0
* fix bug when offload and cache_latents both enabled
I noticed that the test should be for the option check_compiled="ignore"
but it was using check_compiled="warn". This has been fixed; the correct
argument is now passed.
However, the fact that the test passed means that it was incorrect to
begin with. The way that logs are collected does not collect the
logger.warning call here (not sure why). To amend this, I'm now using
assertNoLogs. With this change, the test correctly fails when the wrong
argument is passed.
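Roughly, the corrected test pattern (the call site, target rank, and logger name are illustrative, not the test's verbatim code):

```python
from diffusers.utils import logging as diffusers_logging

logger = diffusers_logging.get_logger("diffusers.loaders.peft")  # illustrative logger name

# inside a unittest.TestCase (assertNoLogs requires Python >= 3.10);
# this fails if *any* WARNING is logged, catching what log capture missed
with self.assertNoLogs(logger, level="WARNING"):
    pipe.enable_lora_hotswap(target_rank=8, check_compiled="ignore")
```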
* cache non lora pipeline outputs.
* up
* up
* up
* up
* Revert "up"
This reverts commit 772c32e433.
* up
* Revert "up"
This reverts commit cca03df7fc.
* up
* up
* add .
* up
* up
* up
* up
* up
* up
* docs: introduce cache-dit to diffusers
* misc: update examples link
* Refine documentation for CacheDiT features
Updated the wording for clarity and consistency in the documentation. Adjusted sections on cache acceleration, automatic block adapter, patch functor, and hybrid cache configuration.
* Upgrade huggingface-hub to version 0.35.0
Updated huggingface-hub version from 0.26.1 to 0.35.0.
* Add uvicorn and accelerate to requirements
* Fix install instructions for server
* Convert alphas for embedders for sd-scripts to ai toolkit conversion
* Add kohya embedders conversion test
* Apply style fixes
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
* Basic implementation of request scheduling
* Basic editing in SD and Flux Pipelines
* Small Fix
* Fix
* Update for more pipelines
* Add examples/server-async
* Updated RequestScopedPipeline to handle a single tokenizer lock to avoid race conditions
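A hypothetical sketch of the single-lock idea (class and method names are illustrative, not the example's actual code):

```python
import threading

class RequestScopedPipeline:
    """Hypothetical sketch: per-request pipeline wrapper sharing one tokenizer lock."""

    _tokenizer_lock = threading.Lock()  # a single lock shared by all requests

    def __init__(self, pipeline):
        self._pipeline = pipeline

    def tokenize(self, prompt: str):
        # tokenizers aren't guaranteed thread-safe, so serialize access to them
        with self._tokenizer_lock:
            return self._pipeline.tokenizer(prompt, return_tensors="pt")
```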
* Fix
* Fix _TokenizerLockWrapper
* Delete _TokenizerLockWrapper
* Fix tokenizer
* Update examples/server-async
* Fix server-async
* Optimizations in examples/server-async
* We keep the implementation simple in examples/server-async
* Update examples/server-async/README.md
* Update examples/server-async/README.md for changes to tokenizer locks and backward-compatible retrieve_timesteps
* The changes to the diffusers core have been undone and all logic is being moved to examples/server-async
* Update examples/server-async/utils/*
* Fix BaseAsyncScheduler
* Rollback in the core of the diffusers
* Update examples/server-async/README.md
* Complete rollback of diffusers core files
* Simple implementation of an asynchronous server compatible with SD3-3.5 and Flux Pipelines
* Update examples/server-async/README.md
* Fixed import errors in 'examples/server-async/serverasync.py'
* Flux Pipeline Discard
* Update examples/server-async/README.md
* Apply style fixes
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
* fix hidream type hint
* fix hunyuan-video type hint
* fix many type hint
* fix many type hint errors
* fix many type hint errors
* fix many type hint errors
* make style & make quality
* Update autoencoder_kl_wan.py
When using the Wan2.2 VAE, the spatial compression ratio calculated here is incorrect. It should be 16 instead of 8. Pass it in directly via the config to ensure it’s correct here.
* Update autoencoder_kl_wan.py
* support Wan2.2-VACE-Fun-A14B
* Apply style fixes
* test
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
* Use SDP on BF16 in GPU/HPU migration
Signed-off-by: Daniel Socek <daniel.socek@intel.com>
* Formatting fix for enabling SDP with BF16 precision on HPU
Signed-off-by: Daniel Socek <daniel.socek@intel.com>
---------
Signed-off-by: Daniel Socek <daniel.socek@intel.com>
* Add AttentionMixin to WanVACETransformer3DModel
to enable methods like `set_attn_processor()`.
* Import AttentionMixin in transformer_wan_vace.py
Special thanks to @tolgacangoz 🙇♂️
* make modular pipeline work with model_index.json
* up
* style
* up
* up
* style
* up more
* Fix MultiControlNet import (#12118)
fix
---------
Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* fix: update SkyReels-V2 documentation and moving into attn dispatcher
* Refactors SkyReelsV2's attention implementation
* style
* up
* Fixes formatting in SkyReels-V2 documentation
Wraps the visual demonstration section in a Markdown code block.
This change corrects the rendering of ASCII diagrams and examples, improving the overall readability of the document.
* Docs: Condense example arrays in skyreels_v2 guide
Improves the readability of the `step_matrix` examples by replacing long sequences of repeated numbers with a more compact `value×count` notation.
This change makes the underlying data patterns in the examples easier to understand at a glance.
* Add _repeated_blocks attribute to SkyReelsV2Transformer3DModel
* Refactor rotary embedding calculations in SkyReelsV2 to separate cosine and sine frequencies
* Enhance SkyReels-V2 documentation: update model loading for GPU support and remove outdated notes
* up
* up
* Update model_id in SkyReels-V2 documentation
* up
* refactor: remove device_map parameter for model loading and add pipeline.to("cuda") for GPU allocation
* fix: update copyright year to 2025 in skyreels_v2.md
* docs: enhance parameter examples and formatting in skyreels_v2.md
* docs: update example formatting and add notes on LoRA support in skyreels_v2.md
* refactor: remove copied comments from transformer_wan in SkyReelsV2 classes
* Clean up comments in skyreels_v2.md
Removed comments about acceleration helpers and Flash Attention installation.
* Add deprecation warning for `SkyReelsV2AttnProcessor2_0` class
* Fix PyTorch 2.3.1 compatibility: add version guard for torch.library.custom_op
- Add hasattr() check for torch.library.custom_op and register_fake
- These functions were added in PyTorch 2.4, causing import failures in 2.3.1
- Both decorators and functions are now properly guarded with version checks
- Maintains backward compatibility while preserving functionality
Fixes #12195
* Use dummy decorators approach for PyTorch version compatibility
- Replace hasattr check with version string comparison
- Add no-op decorator functions for PyTorch < 2.4.0
- Follows pattern from #11941 as suggested by reviewer
- Maintains cleaner code structure without indentation changes
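The dummy-decorator pattern looks roughly like this (a sketch; the stub signatures approximate the PyTorch 2.4 APIs):

```python
import torch
from packaging import version

_is_torch_2_4 = version.parse(torch.__version__) >= version.parse("2.4.0")

if _is_torch_2_4:
    _custom_op = torch.library.custom_op
    _register_fake = torch.library.register_fake
else:
    # no-op stand-ins so importing the module doesn't fail on PyTorch < 2.4
    def _custom_op(name, fn=None, /, *, mutates_args=None, device_types=None, schema=None):
        def wrap(func):
            return func
        return wrap if fn is None else fn

    def _register_fake(op, fn=None, /, *, lib=None):
        def wrap(func):
            return func
        return wrap if fn is None else fn
```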
* Update src/diffusers/models/attention_dispatch.py
Update all the decorator usages
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
* Update src/diffusers/models/attention_dispatch.py
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
* Update src/diffusers/models/attention_dispatch.py
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
* Update src/diffusers/models/attention_dispatch.py
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
* Move version check to top of file and use private naming as requested
* Apply style fixes
---------
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
Co-authored-by: Aryan <aryan@huggingface.co>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
* Add Bria model and pipeline to diffusers
- Introduced `BriaTransformer2DModel` and `BriaPipeline` for enhanced image generation capabilities.
- Updated import structures across various modules to include the new Bria components.
- Added utility functions and output classes specific to the Bria pipeline.
- Implemented tests for the Bria pipeline to ensure functionality and output integrity.
* with working tests
* style and quality pass
* adding docs
* add to overview
* fixes from "make fix-copies"
* Refactor transformer_bria.py and pipeline_bria.py: Introduce new EmbedND class for rotary position embedding, and enhance Timestep and TimestepProjEmbeddings classes. Add utility functions for handling negative prompts and generating original sigmas in pipeline_bria.py.
* remove redundant and duplicate tests and fix bf16 slow test
* style fixes
* small doc update
* Enhance Bria 3.2 documentation and implementation
- Updated the GitHub repository link for Bria 3.2.
- Added usage instructions for the gated model access.
- Introduced the BriaTransformerBlock and BriaAttention classes to the model architecture.
- Refactored existing classes to integrate Bria-specific components, including BriaEmbedND and BriaPipeline.
- Updated the pipeline output class to reflect Bria-specific functionality.
- Adjusted test cases to align with the new Bria model structure.
* Refactor Bria model components and update documentation
- Removed outdated inference example from Bria 3.2 documentation.
- Introduced the BriaTransformerBlock class to enhance model architecture.
- Updated attention handling to use `attention_kwargs` instead of `joint_attention_kwargs`.
- Improved import structure in the Bria pipeline to handle optional dependencies.
- Adjusted test cases to reflect changes in model dtype assertions.
* Update Bria model reference in documentation to reflect new file naming convention
* Update docs/source/en/_toctree.yml
* Refactor BriaPipeline to inherit from DiffusionPipeline instead of FluxPipeline, updating imports accordingly.
* move the __call__ func to the end of file
* Update BriaPipeline example to use bfloat16 for precision sensitivity for better result
* make style && make quality && make fix-copies
---------
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
- Modify offload_models function to handle DiffusionPipeline correctly
- Ensure compatibility with both single and multiple module inputs
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* CogView4: remove SiLU in final AdaLN (match Megatron); add switch to AdaLayerNormContinuous; split temb_raw/temb_blocks
* CogView4: use local final AdaLN (no SiLU) per review; keep generic AdaLN unchanged
* re-add configs as normal files (no LFS)
* Apply suggestions from code review
* Apply style fixes
---------
Co-authored-by: 武嘉涵 <lambert@wujiahandeMacBook-Pro.local>
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
Co-authored-by: Aryan <aryan@huggingface.co>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
* try to use deepseek with an agent to auto i18n to zh
Signed-off-by: SamYuan1990 <yy19902439@126.com>
* add two more docs
Signed-off-by: SamYuan1990 <yy19902439@126.com>
* fix, updated some prompt for better translation
Signed-off-by: SamYuan1990 <yy19902439@126.com>
* Try to pass CI check
Signed-off-by: SamYuan1990 <yy19902439@126.com>
* fix up for human review process
Signed-off-by: SamYuan1990 <yy19902439@126.com>
* fix up
Signed-off-by: SamYuan1990 <yy19902439@126.com>
* fix review comments
Signed-off-by: SamYuan1990 <yy19902439@126.com>
---------
Signed-off-by: SamYuan1990 <yy19902439@126.com>
* Initial commit implementing frequency-decoupled guidance (FDG) as a guider
* Update FrequencyDecoupledGuidance docstring to describe FDG
* Update project so that it accepts any number of non-batch dims
* Change guidance_scale and other params to accept a list of params for each freq level
* Add comment with Laplacian pyramid shapes
* Add function to import_utils to check if the kornia package is available
* Only import from kornia if package is available
* Fix bug: use pred_cond/uncond in freq space rather than data space
* Allow guidance rescaling to be done in data space or frequency space (speculative)
* Add kornia install instructions to kornia import error message
* Add config to control whether operations are upcast to fp64
* Add parallel_weights recommended values to docstring
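A rough sketch of the FDG idea using kornia's Laplacian pyramid (illustrative only — the scales and the recombination step are example choices, not the guider's exact code):

```python
import torch.nn.functional as F
from kornia.geometry.transform import build_laplacian_pyramid  # pip install kornia

def fdg(pred_cond, pred_uncond, guidance_scales=(10.0, 5.0)):
    levels = len(guidance_scales)
    cond_pyr = build_laplacian_pyramid(pred_cond, max_level=levels)
    uncond_pyr = build_laplacian_pyramid(pred_uncond, max_level=levels)
    # guide each frequency band with its own scale, in frequency space
    guided = [u + s * (c - u) for c, u, s in zip(cond_pyr, uncond_pyr, guidance_scales)]
    # collapse the pyramid: upsample the coarse level, add detail bands back
    out = guided[-1]
    for band in reversed(guided[:-1]):
        out = band + F.interpolate(out, size=band.shape[-2:], mode="bilinear", align_corners=False)
    return out
```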
* Apply style fixes
* make fix-copies
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Aryan <aryan@huggingface.co>
* feat: support lora in qwen image and training script
* up
* up
* up
* up
* up
* up
* add lora tests
* fix
* add tests
* fix
* reviewer feedback
* up
* Apply suggestions from code review
Co-authored-by: Aryan <aryan@huggingface.co>
---------
Co-authored-by: Aryan <aryan@huggingface.co>
[Examples] uniform naming notations
since the input parameter `size` represents `args.resolution`, replace `args.resolution` inside DreamBoothData with `size`, and revise some notations such as `center_crop`.
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
* style
* Fix class name casing for SkyReelsV2 components in multiple files to ensure consistency and correct functionality.
* cleaning
* cleansing
* Refactor `get_timestep_embedding` to move modifications into `SkyReelsV2TimeTextImageEmbedding`.
* Remove unnecessary line break in `get_timestep_embedding` function for cleaner code.
* Remove `skyreels_v2` entry from `_import_structure` and update its initialization to directly assign the list of SkyReelsV2 components.
* cleansing
* Refactor attention processing in `SkyReelsV2AttnProcessor2_0` to always convert query, key, and value to `torch.bfloat16`, simplifying the code and improving clarity.
* Enhance example usage in `pipeline_skyreels_v2_diffusion_forcing.py` by adding VAE initialization and detailed prompt for video generation, improving clarity and usability of the documentation.
* Refactor import structure in `__init__.py` for SkyReelsV2 components and improve formatting in `pipeline_skyreels_v2_diffusion_forcing.py` to enhance code readability and maintainability.
* Update `guidance_scale` parameter in `SkyReelsV2DiffusionForcingPipeline` from 5.0 to 6.0 to enhance video generation quality.
* Update `guidance_scale` parameter in example documentation and class definition of `SkyReelsV2DiffusionForcingPipeline` to ensure consistency and improve video generation quality.
* Update `causal_block_size` parameter in `SkyReelsV2DiffusionForcingPipeline` to default to `None`.
* up
* Fix dtype conversion for `timestep_proj` in `SkyReelsV2Transformer3DModel` to *ensure* correct tensor operations.
* Optimize causal mask generation by replacing repeated tensor with `repeat_interleave` for improved efficiency in `SkyReelsV2Transformer3DModel`.
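For illustration, a block-causal mask built this way (toy sizes):

```python
import torch

num_blocks, block_size = 4, 3  # toy values
block_mask = torch.tril(torch.ones(num_blocks, num_blocks, dtype=torch.bool))
# expand the block-level mask to token level in two vectorized calls
causal_mask = block_mask.repeat_interleave(block_size, dim=0).repeat_interleave(block_size, dim=1)
```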
* style
* Enhance example documentation in `SkyReelsV2DiffusionForcingPipeline` with guidance scale and shift parameters for T2V and I2V. Remove unused `retrieve_latents` function to streamline the code.
* Refactor sample scheduler creation in `SkyReelsV2DiffusionForcingPipeline` to use `deepcopy` for improved state management during inference steps.
* Enhance error handling and documentation in `SkyReelsV2DiffusionForcingPipeline` for `overlap_history` and `addnoise_condition` parameters to improve long video generation guidance.
* Update documentation and progress bar handling in `SkyReelsV2DiffusionForcingPipeline` to clarify asynchronous inference settings and improve progress tracking during denoising steps.
* Refine progress bar calculation in `SkyReelsV2DiffusionForcingPipeline` by rounding the step size to one decimal place for improved readability during denoising steps.
* Update import statements in `SkyReelsV2DiffusionForcingPipeline` documentation for improved clarity and organization.
* Refactor progress bar handling in `SkyReelsV2DiffusionForcingPipeline` to use total steps instead of calculated step size.
* update templates for i2v, v2v
* Add `retrieve_latents` function to streamline latent retrieval in `SkyReelsV2DiffusionForcingPipeline`. Update video latent processing to utilize this new function for improved clarity and maintainability.
* Add `retrieve_latents` function to both i2v and v2v pipelines for consistent latent retrieval. Update video latent processing to utilize this function, enhancing clarity and maintainability across the SkyReelsV2DiffusionForcingPipeline implementations.
* Remove redundant ValueError for `overlap_history` in `SkyReelsV2DiffusionForcingPipeline` to streamline error handling and improve user guidance for long video generation.
* Update default video dimensions and flow matching scheduler parameter in `SkyReelsV2DiffusionForcingPipeline` to enhance video generation capabilities.
* Refactor `SkyReelsV2DiffusionForcingPipeline` to support Image-to-Video (i2v) generation. Update class name, add image encoding functionality, and adjust parameters for improved video generation. Enhance error handling for image inputs and update documentation accordingly.
* Improve organization for image-last_image condition.
* Refactor `SkyReelsV2DiffusionForcingImageToVideoPipeline` to improve latent preparation and video condition handling integration.
* style
* style
* Add example usage of PIL for image input in `SkyReelsV2DiffusionForcingImageToVideoPipeline` documentation.
* Refactor `SkyReelsV2DiffusionForcingPipeline` to `SkyReelsV2DiffusionForcingVideoToVideoPipeline`, enhancing support for Video-to-Video (v2v) generation. Introduce video input handling, update latent preparation logic, and improve error handling for input parameters.
* Refactor `SkyReelsV2DiffusionForcingImageToVideoPipeline` by removing the `image_encoder` and `image_processor` dependencies. Update the CPU offload sequence accordingly.
* Refactor `SkyReelsV2DiffusionForcingImageToVideoPipeline` to enhance latent preparation logic and condition handling. Update image input type to `Optional`, streamline video condition processing, and improve handling of `last_image` during latent generation.
* Enhance `SkyReelsV2DiffusionForcingPipeline` by refining latent preparation for long video generation. Introduce new parameters for video handling, overlap history, and causal block size. Update logic to accommodate both short and long video scenarios, ensuring compatibility and improved processing.
* refactor
* fix num_frames
* fix prefix_video_latents
* up
* refactor
* Fix typo in scheduler method call within `SkyReelsV2DiffusionForcingVideoToVideoPipeline` to ensure proper noise scaling during latent generation.
* up
* Enhance `SkyReelsV2DiffusionForcingImageToVideoPipeline` by adding support for `last_image` parameter and refining latent frame calculations. Update preprocessing logic.
* add statistics
* Refine latent frame handling in `SkyReelsV2DiffusionForcingImageToVideoPipeline` by correcting variable names and reintroducing latent mean and standard deviation calculations. Update logic for frame preparation and sampling to ensure accurate video generation.
* up
* refactor
* up
* Refactor `SkyReelsV2DiffusionForcingVideoToVideoPipeline` to improve latent handling by enforcing tensor input for video, updating frame preparation logic, and adjusting default frame count. Enhance preprocessing and postprocessing steps for better integration.
* style
* fix vae output indexing
* upup
* up
* Fix tensor concatenation and repetition logic in `SkyReelsV2DiffusionForcingImageToVideoPipeline` to ensure correct dimensionality for video conditions and latent conditions.
* Refactor latent retrieval logic in `SkyReelsV2DiffusionForcingVideoToVideoPipeline` to handle tensor dimensions more robustly, ensuring compatibility with both 3D and 4D video inputs.
* Enhance logging in `SkyReelsV2DiffusionForcing` pipelines by adding iteration print statements for better debugging. Clean up unused code related to prefix video latents length calculation in `SkyReelsV2DiffusionForcingImageToVideoPipeline`.
* Update latent handling in `SkyReelsV2DiffusionForcingImageToVideoPipeline` to conditionally set latents based on video iteration state, improving flexibility for video input processing.
* Refactor `SkyReelsV2TimeTextImageEmbedding` to utilize `get_1d_sincos_pos_embed_from_grid` for timestep projection.
* Enhance `get_1d_sincos_pos_embed_from_grid` function to include an optional parameter `flip_sin_to_cos` for flipping sine and cosine embeddings, improving flexibility in positional embedding generation.
* Update timestep projection in `SkyReelsV2TimeTextImageEmbedding` to include `flip_sin_to_cos` parameter, enhancing the flexibility of time embedding generation.
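A hedged sketch of the extended helper, following the classic NumPy sincos embedding (the `flip_sin_to_cos` branch is the new optional behavior):

```python
import numpy as np

def get_1d_sincos_pos_embed_from_grid(embed_dim, pos, flip_sin_to_cos=False):
    omega = np.arange(embed_dim // 2, dtype=np.float64)
    omega = 1.0 / 10000 ** (omega / (embed_dim / 2.0))
    out = np.einsum("m,d->md", pos, omega)          # (M, D/2)
    emb = np.concatenate([np.sin(out), np.cos(out)], axis=1)  # (M, D), sin first
    if flip_sin_to_cos:
        # put the cosine half first instead
        emb = np.concatenate([emb[:, embed_dim // 2:], emb[:, :embed_dim // 2]], axis=1)
    return emb
```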
* Refactor tensor type handling in `SkyReelsV2AttnProcessor2_0` and `SkyReelsV2TransformerBlock` to ensure consistent use of `torch.float32` and `torch.bfloat16`, improving integration.
* Update tensor type in `SkyReelsV2RotaryPosEmbed` to use `torch.float32` for frequency calculations, ensuring consistency in data types across the model.
* Refactor `SkyReelsV2TimeTextImageEmbedding` to utilize automatic mixed precision for timestep projection.
* down
* down
* style
* Add debug tensor tracking to `SkyReelsV2Transformer3DModel` for enhanced debugging and output analysis; update `Transformer2DModelOutput` to include debug tensors.
* up
* Refactor indentation in `SkyReelsV2AttnProcessor2_0` to improve code readability and maintain consistency in style.
* Convert query, key, and value tensors to bfloat16 in `SkyReelsV2AttnProcessor2_0` for improved performance.
* Add debug print statements in `SkyReelsV2TransformerBlock` to track tensor shapes and values for improved debugging and analysis.
* debug
* debug
* Remove commented-out debug tensor tracking from `SkyReelsV2TransformerBlock`
* Add functionality to save processed video latents as a Safetensors file in `SkyReelsV2DiffusionForcingPipeline`.
* up
* Add functionality to save output latents as a Safetensors file in `SkyReelsV2DiffusionForcingPipeline`.
* up
* Remove additional commented-out debug tensor tracking from `SkyReelsV2TransformerBlock` and `SkyReelsV2Transformer3DModel` for cleaner code.
* style
* cleansing
* Update example documentation and parameters in `SkyReelsV2Pipeline`. Adjusted example code for loading models, modified default values for height, width, num_frames, and guidance_scale, and improved output video quality settings.
* Update shift parameter in example documentation and default values across SkyReels V2 pipelines. Adjusted shift values for I2V from 3.0 to 5.0 and updated related example code for consistency.
* Update example documentation in SkyReels V2 pipelines to include available model options and update model references for loading. Adjusted model names to reflect the latest versions across I2V, V2V, and T2V pipelines.
* Add test templates
* style
* Add docs template
* Add SkyReels V2 Diffusion Forcing Video-to-Video Pipeline to imports
* style
* fix-copies
* convert i2v 1.3b
* Update transformer configuration to include `image_dim` for SkyReels V2 models and refactor imports to use `SkyReelsV2Transformer3DModel`.
* Refactor transformer import in SkyReels V2 pipeline to use `SkyReelsV2Transformer3DModel` for consistency.
* Update transformer configuration in SkyReels V2 to increase `in_channels` from 16 to 36 for the i2v config.
* Update transformer configuration in SkyReels V2 to set `added_kv_proj_dim` values for different model types.
* up
* up
* up
* Add SkyReelsV2Pipeline support for T2V model type in conversion script
* upp
* Refactor model type checks in conversion script to use substring matching for improved flexibility
* upp
* Fix shard path formatting in conversion script to accommodate varying model types by dynamically adjusting zero padding.
* Update sharded safetensors loading logic in conversion script to use substring matching for model directory checks
* Update scheduler parameters in SkyReels V2 test files for consistency across image and video pipelines
* Refactor conversion script to initialize text encoder, tokenizer, and scheduler for SkyReels pipelines, enhancing model integration
* style
* Update documentation for SkyReels-V2, introducing the Infinite-length Film Generative model, enhancing text-to-video generation examples, and updating model references throughout the API documentation.
* Add SkyReelsV2Transformer3DModel and FlowMatchUniPCMultistepScheduler documentation, updating TOC and introducing new model and scheduler files.
* style
* Update documentation for SkyReelsV2DiffusionForcingPipeline to correct flow matching scheduler parameter for I2V from 3.0 to 5.0, ensuring clarity in usage examples.
* Add documentation for causal_block_size parameter in SkyReelsV2DF pipelines, clarifying its role in asynchronous inference.
* Simplify min_ar_step calculation in SkyReelsV2DiffusionForcingPipeline to improve clarity.
* style and fix-copies
* style
* Add documentation for SkyReelsV2Transformer3DModel
Introduced a new markdown file detailing the SkyReelsV2Transformer3DModel, including usage instructions and model output specifications.
* Update test configurations for SkyReelsV2 pipelines
- Adjusted `in_channels` from 36 to 16 in `test_skyreels_v2_df_image_to_video.py`.
- Added new parameters: `overlap_history`, `num_frames`, and `base_num_frames` in `test_skyreels_v2_df_video_to_video.py`.
- Updated expected output shape in video tests from (17, 3, 16, 16) to (41, 3, 16, 16).
* Refines SkyReelsV2DF test parameters
* Update src/diffusers/models/modeling_outputs.py
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
* Refactor `grid_sizes` processing by using already-calculated post-patch parameters to simplify the computation
* Update docs/source/en/api/pipelines/skyreels_v2.md
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
* Refactor parameter naming for diffusion forcing in SkyReelsV2 pipelines
- Changed `flag_df` to `enable_diffusion_forcing` for clarity in the SkyReelsV2Transformer3DModel and associated pipelines.
- Updated all relevant method calls to reflect the new parameter name.
* Revert _toctree.yml to adjust section expansion states
* style
* Update docs/source/en/api/models/skyreels_v2_transformer_3d.md
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Add copying label to SkyReelsV2ImageEmbedding from WanImageEmbedding.
* Refactor transformer block processing in SkyReelsV2Transformer3DModel
- Ensured proper handling of hidden states during both gradient checkpointing and standard processing.
* Update SkyReels V2 documentation to remove VRAM requirement and streamline imports
- Removed the mention of ~13GB VRAM requirement for the SkyReels-V2 model.
- Simplified import statements by removing unused `load_image` import.
* Add SkyReelsV2LoraLoaderMixin for loading and managing LoRA layers in SkyReelsV2Transformer3DModel
- Introduced SkyReelsV2LoraLoaderMixin class to handle loading, saving, and fusing of LoRA weights specific to the SkyReelsV2 model.
- Implemented methods for state dict management, including compatibility checks for various LoRA formats.
- Enhanced functionality for loading weights with options for low CPU memory usage and hotswapping.
- Added detailed docstrings for clarity on parameters and usage.
* Update SkyReelsV2 documentation and loader mixin references
- Corrected the documentation to reference the new `SkyReelsV2LoraLoaderMixin` for loading LoRA weights.
- Updated comments in the `SkyReelsV2LoraLoaderMixin` class to reflect changes in model references from `WanTransformer3DModel` to `SkyReelsV2Transformer3DModel`.
* Enhance SkyReelsV2 integration by adding SkyReelsV2LoraLoaderMixin references
- Added `SkyReelsV2LoraLoaderMixin` to the documentation and loader imports for improved LoRA weight management.
- Updated multiple pipeline classes to inherit from `SkyReelsV2LoraLoaderMixin` instead of `WanLoraLoaderMixin`.
* Update SkyReelsV2 model references in documentation
- Replaced placeholder model paths with actual paths for SkyReels-V2 models in multiple pipeline files.
- Ensured consistency across the documentation for loading models in the SkyReelsV2 pipelines.
* style
* fix-copies
* Refactor `fps_projection` in `SkyReelsV2Transformer3DModel`
- Replaced the sequential linear layers for `fps_projection` with a `FeedForward` layer using `SiLU` activation for better integration.
* Update docs
* Refactor video processing in SkyReelsV2DiffusionForcingPipeline
- Renamed parameters for clarity: `video` to `video_latents` and `overlap_history` to `overlap_history_latent_frames`.
- Updated logic for handling long video generation, including adjustments to latent frame calculations and accumulation.
- Consolidated handling of latents for both long and short video generation scenarios.
- Final decoding step now consistently converts latents to pixels, ensuring proper output format.
* Update activation function in `fps_projection` of `SkyReelsV2Transformer3DModel`
- Changed activation function from `silu` to `linear-silu` in the `fps_projection` layer for improved performance and integration.
* Add fps_projection layer renaming in convert_skyreelsv2_to_diffusers.py
- Updated key mappings for the `fps_projection` layer to align with new naming conventions, ensuring consistency in model integration.
* Fix fps_projection assignment in SkyReelsV2Transformer3DModel
- Corrected the assignment of the `fps_projection` layer to ensure it is properly cast to the appropriate data type, enhancing model functionality.
* Update _keep_in_fp32_modules in SkyReelsV2Transformer3DModel
- Added `fps_projection` to the list of modules that should remain in FP32 precision, ensuring proper handling of data types during model operations.
* Remove integration test classes from SkyReelsV2 test files
- Deleted the `SkyReelsV2DiffusionForcingPipelineIntegrationTests` and `SkyReelsV2PipelineIntegrationTests` classes along with their associated setup, teardown, and test methods, as they were not implemented and not needed for current testing.
* style
* Refactor: Remove hardcoded `torch.bfloat16` cast in attention
* Refactor: Simplify data type handling in transformer model
Removes unnecessary data type conversions for the FPS embedding and timestep projection.
This change simplifies the forward pass by relying on the inherent data types of the tensors.
* Refactor: Remove `fps_projection` from `_keep_in_fp32_modules` in `SkyReelsV2Transformer3DModel`
* Update src/diffusers/models/transformers/transformer_skyreels_v2.py
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
* Refactor: Remove unused flags and simplify attention mask handling in SkyReelsV2AttnProcessor2_0 and SkyReelsV2Transformer3DModel
Refactor: Simplify causal attention logic in SkyReelsV2
Removes the `flag_causal_attention` and `_flag_ar_attention` flags to simplify the implementation.
The decision to apply a causal attention mask is now based directly on the `num_frame_per_block` configuration, eliminating redundant flags and conditional checks. This streamlines the attention mechanism and simplifies the `set_ar_attention` methods.
* Refactor: Clarify variable names for latent frames
Renames `base_num_frames` to `base_latent_num_frames` to make it explicit that the variable refers to the number of frames in the latent space.
This change improves code readability and reduces potential confusion between latent frames and decoded video frames.
The `num_frames` parameter in `generate_timestep_matrix` is also renamed to `num_latent_frames` for consistency.
* Enhance documentation: Add detailed docstring for timestep matrix generation in SkyReelsV2DiffusionForcingPipeline
* Docs: Clarify long video chunking in pipeline docstring
Improves the explanation of long video processing within the pipeline's docstring.
The update replaces the abstract description with a concrete example, illustrating how the sliding window mechanism works with overlapping chunks. This makes the roles of `base_num_frames` and `overlap_history` clearer for users.
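The chunking arithmetic can be sketched as follows (latent-frame counts; the helper name is illustrative):

```python
def chunk_bounds(num_latent_frames, base_num_frames, overlap_history):
    # each chunk re-conditions on the last `overlap_history` frames of the previous one
    step = base_num_frames - overlap_history
    starts = range(0, max(num_latent_frames - overlap_history, 1), step)
    return [(s, min(s + base_num_frames, num_latent_frames)) for s in starts]

print(chunk_bounds(97, 57, 17))  # [(0, 57), (40, 97)]
```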
* Docs: Move visual demonstration and processing details for SkyReelsV2DiffusionForcingPipeline to docs page from the code
* Docs: Update asynchronous processing timeline and examples for long video handling in SkyReels-V2 documentation
* Enhance timestep matrix generation documentation and logic for synchronous/asynchronous video processing
* Update timestep matrix documentation and enhance analysis for clarity in SkyReelsV2DiffusionForcingPipeline
* Docs: Update visual demonstration section and add detailed step matrix construction example for asynchronous processing in SkyReelsV2DiffusionForcingPipeline
* style
* fix-copies
* Refactor parameter names for clarity in SkyReelsV2DiffusionForcingImageToVideoPipeline and SkyReelsV2DiffusionForcingVideoToVideoPipeline
* Refactor: Avoid VAE roundtrip in long video generation
Improves performance and quality for long video generation by operating entirely in latent space during the iterative generation process.
Instead of decoding latents to video and then re-encoding the overlapping section for the next chunk, this change passes the generated latents directly between iterations.
This avoids a computationally expensive and potentially lossy VAE decode/encode cycle within the loop. The full video is now decoded only once from the accumulated latents at the end of the process.
* Refactor: Rename prefix_video_latents_length to prefix_video_latents_frames for clarity
* Refactor: Rename num_latent_frames to current_num_latent_frames for clarity in SkyReelsV2DiffusionForcingImageToVideoPipeline
* Refactor: Enhance long video generation logic and improve latent handling in SkyReelsV2DiffusionForcingImageToVideoPipeline
Refactor: Unify video generation and pass latents directly
Unifies the separate code paths for short and long video generation into a single, streamlined loop.
This change eliminates the inefficient decode-encode cycle during long video generation. Instead of converting latents to pixel-space video between chunks, the pipeline now passes the generated latents directly to the next iteration.
This improves performance, avoids potential quality loss from intermediate VAE steps, and enhances code maintainability by removing significant duplication.
* style
* Refactor: Remove overlap_history parameter and streamline long video generation logic in SkyReelsV2DiffusionForcingImageToVideoPipeline
Refactor: Streamline long video generation logic
Removes the `overlap_history` parameter and simplifies the conditioning process for long video generation.
This change avoids a redundant VAE encoding step by directly using latent frames from the previous chunk for conditioning. It also moves image preprocessing outside the main generation loop to prevent repeated computations and clarifies the handling of prefix latents.
* style
* Refactor latent handling in i2v diffusion forcing pipeline
Improves the latent conditioning and accumulation logic within the image-to-video diffusion forcing loop.
- Corrects the splitting of the initial conditioning tensor to robustly handle both even and odd lengths.
- Simplifies how latents are accumulated across iterations for long video generation.
- Ensures the final latents are trimmed correctly before decoding only when a `last_image` is provided.
* Refactor: Remove overlap_history parameter from SkyReelsV2DiffusionForcingImageToVideoPipeline
* Refactor: Adjust video_latents parameter handling in prepare_latents method
* style
* Refactor: Update long video iteration print statements for clarity
* Fix: Update transformer config with dynamic causal block size
Updates the SkyReelsV2 pipelines to correctly set the `causal_block_size` in the transformer's configuration when it's provided during a pipeline call.
This ensures the model configuration reflects the user's specified setting for the inference run. The `set_ar_attention` method is also renamed to `_set_ar_attention` to mark it as an internal helper.
* style
* Refactor: Adjust video input size and expected output shape in inference test
* Refactor: Rename video variables for clarity in SkyReelsV2DiffusionForcingVideoToVideoPipeline
* Docs: Clarify time embedding logic in SkyReelsV2
Adds comments to explain the handling of different time embedding tensor dimensions.
A 2D tensor is used for standard models with a single time embedding per batch, while a 3D tensor is used for Diffusion Forcing models where each frame has its own time embedding. This clarifies the expected input for different model variations.
* Docs: Update SkyReels V2 pipeline examples
Updates the docstring examples for the SkyReels V2 pipelines to reflect current best practices and API changes.
- Removes the `shift` parameter from pipeline call examples, as it is now configured directly on the scheduler.
- Replaces the `set_ar_attention` method call with the `causal_block_size` argument in the pipeline call for diffusion forcing examples.
- Adjusts recommended parameters for I2V and V2V examples, including inference steps, guidance scale, and `ar_step`.
* Refactor: Remove `shift` parameter from SkyReelsV2 pipelines
Removes the `shift` parameter from the call signature of all SkyReelsV2 pipelines.
This parameter is a scheduler-specific configuration and should be set directly on the scheduler during its initialization, rather than being passed at runtime through the pipeline. This change simplifies the pipeline API.
Usage examples are updated to reflect that the `shift` value should now be passed when creating the `FlowMatchUniPCMultistepScheduler`.
* Refactors SkyReelsV2 image-to-video tests and adds last image case
Simplifies the test suite by removing a duplicated test class and streamlining the dummy component and input generation.
Adds a new test to verify the pipeline's behavior when a `last_image` is provided as input for conditioning.
* test: Add image components to SkyReelsV2 pipeline test
Adds the `image_encoder` and `image_processor` to the test components for the image-to-video pipeline.
Also replaces a hardcoded value for the positional embedding sequence length with a more descriptive calculation, improving clarity.
* test: Add callback configuration test for SkyReelsV2DiffusionForcingVideoToVideoPipeline
test: Add callback test for SkyReelsV2DFV2V pipeline
Adds a test to validate the callback functionality for the `SkyReelsV2DiffusionForcingVideoToVideoPipeline`.
This test confirms that `callback_on_step_end` is invoked correctly and can modify the pipeline's state during inference. It uses a callback to dynamically increase the `guidance_scale` and asserts that the final value is as expected.
The implementation correctly accounts for the nested denoising loops present in diffusion forcing pipelines.
* style
* fix: Update image_encoder type to CLIPVisionModelWithProjection in SkyReelsV2ImageToVideoPipeline
* UP
* Add conversion support for SkyReels-V2-FLF2V models
Adds configurations for three new FLF2V model variants (1.3B-540P, 14B-540P, and 14B-720P) to the conversion script.
This change also introduces specific handling to zero out the image positional embeddings for these models and updates the main script to correctly initialize the image-to-video pipeline.
* Docs: Update and simplify SkyReels V2 usage examples
Simplifies the text-to-video example by removing the manual group offloading configuration, making it more straightforward.
Adds comments to pipeline parameters to clarify their purpose and provides guidance for different resolutions and long video generation.
Introduces a new section with a code example for the video-to-video pipeline.
* style
* docs: Add SkyReels-V2 FLF2V 1.3B model to supported models list
* docs: Update SkyReels-V2 documentation
* Move the initialization of the `gradient_checkpointing` attribute to its suggested location.
* Refactor: Use logger for long video progress messages
Replaces `print()` calls with `logger.debug()` for reporting progress during long video generation in SkyReelsV2DF pipelines.
This change reduces console output verbosity for standard runs while allowing developers to view progress by enabling debug-level logging.
* Refactor SkyReelsV2 timestep embedding into a module
Extract the sinusoidal timestep embedding logic into a new `SkyReelsV2Timesteps` `nn.Module`.
This change encapsulates the embedding generation, which simplifies the `SkyReelsV2TimeTextImageEmbedding` class and improves code modularity.
* Fix: Preserve original shape in timestep embeddings
Reshapes the timestep embedding tensor to match the original input shape.
This ensures that batched timestep inputs retain their batch dimension after embedding, preventing potential shape mismatches.
* style
* Refactor: Move SkyReelsV2Timesteps to model file
Colocates the `SkyReelsV2Timesteps` class with the SkyReelsV2 transformer model.
This change moves model-specific timestep embedding logic from the general embeddings module to the transformer's own file, improving modularity and making the model more self-contained.
* Refactor parameter dtype retrieval to use utility function
Replaces manual parameter iteration with the `get_parameter_dtype` helper to determine the time embedder's data type.
This change improves code readability and centralizes the logic.
* Add comments to track the tensor shape transformations
* Add copied froms
* style
* fix-copies
* up
* Remove FlowMatchUniPCMultistepScheduler
Deletes the `FlowMatchUniPCMultistepScheduler` as it is no longer being used.
* Refactor: Replace FlowMatchUniPC scheduler with UniPC
Removes the `FlowMatchUniPCMultistepScheduler` and integrates its functionality into the existing `UniPCMultistepScheduler`.
This consolidation is achieved by using the `use_flow_sigmas=True` parameter in `UniPCMultistepScheduler`, simplifying the scheduler API and reducing code duplication. All usages, documentation, and tests are updated accordingly.
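After the consolidation, the setup presumably looks like this (the `flow_shift` value follows the examples above; `pipeline` is illustrative):

```python
from diffusers import UniPCMultistepScheduler

# flow-matching behavior via flags instead of a dedicated scheduler class
scheduler = UniPCMultistepScheduler(
    prediction_type="flow_prediction", use_flow_sigmas=True, flow_shift=5.0
)
pipeline.scheduler = scheduler
```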
* style
* Remove text_encoder parameter from SkyReelsV2DiffusionForcingPipeline initialization
* Docs: Rename `pipe` to `pipeline` in SkyReels examples
Updates the variable name from `pipe` to `pipeline` across all SkyReels V2 documentation examples. This change improves clarity and consistency.
* Fix: Rename shift parameter to flow_shift in SkyReels-V2 examples
* Fix: Rename shift parameter to flow_shift in example documentation across SkyReels-V2 files
* Fix: Rename shift parameter to flow_shift in UniPCMultistepScheduler initialization across SkyReels test files
* Removes unused generator argument from scheduler step
The `generator` parameter is not used by the scheduler's `step` method within the SkyReelsV2 diffusion forcing pipelines. This change removes the unnecessary argument from the method call for code clarity and consistency.
* Fix: Update time_embedder_dtype assignment to use the first parameter's dtype in SkyReelsV2TimeTextImageEmbedding
* style
* Refactor: Use get_parameter_dtype utility function
Replaces manual parameter iteration with the `get_parameter_dtype` helper.
* Fix: Prevent (potential) error in parameter dtype check
Adds a check to ensure the `_keep_in_fp32_modules` attribute exists on a parameter before it is accessed.
This prevents a potential `AttributeError`, making the utility function more robust when used with models that do not define this attribute.
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
* Update pipeline_onnx_stable_diffusion.py to remove float64
init_noise_sigma was being set as float64 before being multiplied with the latents, which promoted the latents to float64 as well and caused errors with onnxruntime, since it expects float16.
* Update pipeline_onnx_stable_diffusion_inpaint.py to remove float64
init_noise_sigma was being set as float64 before being multiplied with the latents, which promoted the latents to float64 as well and caused errors with onnxruntime, since it expects float16.
* Update pipeline_onnx_stable_diffusion_upscale.py to remove float64
init_noise_sigma was being set as float64 before being multiplied with the latents, which promoted the latents to float64 as well and caused errors with onnxruntime, since it expects float16.
* Update pipeline_onnx_stable_diffusion.py with comment for previous commit
Added comment on purpose of init_noise_sigma. This comment exists in related scripts that use the same line of code, but it was missing here.
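A paraphrased sketch of the before/after (not the verbatim diff):

```python
import numpy as np

# before: multiplying by a float64 scalar promoted the latents to float64
# latents = latents * np.float64(scheduler.init_noise_sigma)

# after: scale the initial noise while staying in the latents' dtype
# (float16 for onnxruntime)
latents = latents * np.array(scheduler.init_noise_sigma, dtype=latents.dtype)
```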
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* remove k-diffusion as we don't use it from the core.
* Revert "remove k-diffusion as we don't use it from the core."
This reverts commit 8bc86925a0.
* pin k-diffusion
* FIX set_lora_device when target layers differ
Resolves #11833
Fixes a bug that occurs after calling set_lora_device when multiple LoRA
adapters are loaded that target different layers.
Note: Technically, the accompanying test does not require a GPU because
the bug is triggered even if the parameters are already on the
corresponding device, i.e. loading on CPU and then changing the device
to CPU is sufficient to cause the bug. However, this may be optimized
away in the future, so I decided to test with GPU.
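A repro sketch under these assumptions (adapter repos hypothetical):

```python
# two adapters that target different sets of layers
pipe.load_lora_weights("user/lora-text", adapter_name="text")    # hypothetical repo
pipe.load_lora_weights("user/lora-style", adapter_name="style")  # hypothetical repo

# previously misbehaved because layers targeted by only one adapter were still
# visited for the other; with the fix, such layers are skipped
pipe.set_lora_device(["text", "style"], device="cpu")
```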
* Update docstring to warn about device mismatch
* Extend docstring with an example
* Fix docstring
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* ENH Improve speed of expanding LoRA scales
Resolves#11816
The following call proved to be a bottleneck when setting a lot of LoRA
adapters in diffusers:
cdaf84a708/src/diffusers/loaders/peft.py (L482)
This is because we would repeatedly call unet.state_dict(), even though
in the standard case, it is not necessary:
cdaf84a708/src/diffusers/loaders/unet_loader_utils.py (L55)
This PR fixes this by deferring this call, so that it is only run when
it's necessary, not earlier.
* Small fix
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* feat: use exclude modules to loraconfig.
* version-guard.
* tests and version guard.
* remove print.
* describe the test
* more detailed warning message + shift to debug
* update
* update
* update
* remove test
* ⚡️ Speed up method `AutoencoderKLWan.clear_cache` by 886%
**Key optimizations:**
- Compute the number of `WanCausalConv3d` modules in each model (`encoder`/`decoder`) **only once during initialization**, store in `self._cached_conv_counts`. This removes unnecessary repeated tree traversals at every `clear_cache` call, which was the main bottleneck (from profiling).
- The internal helper `_count_conv3d_fast` is optimized via a generator expression with `sum` for efficiency.
All comments from the original code are preserved, except for updated or removed local docstrings/comments relevant to changed lines.
**Function signatures and outputs remain unchanged.**
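Sketch of the cached counting (names follow the commit message; `WanCausalConv3d` is the class from `autoencoder_kl_wan.py`):

```python
import torch.nn as nn

def _count_conv3d_fast(model: nn.Module) -> int:
    # single traversal, generator expression + sum
    return sum(1 for m in model.modules() if isinstance(m, WanCausalConv3d))

# in AutoencoderKLWan.__init__:
#     self._cached_conv_counts = {
#         "decoder": _count_conv3d_fast(self.decoder),
#         "encoder": _count_conv3d_fast(self.encoder),
#     }
# clear_cache then reads self._cached_conv_counts instead of re-walking the tree
```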
* Apply style fixes
* Apply suggestions from code review
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
* Apply style fixes
---------
Co-authored-by: codeflash-ai[bot] <148906541+codeflash-ai[bot]@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Aryan <aryan@huggingface.co>
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
Co-authored-by: Aseem Saxena <aseem.bits@gmail.com>
* Add Pruna optimization framework documentation
- Introduced a new section for Pruna in the table of contents.
- Added comprehensive documentation for Pruna, detailing its optimization techniques, installation instructions, and examples for optimizing and evaluating models
* Enhance Pruna documentation with image alt text and code block formatting
- Added alt text to images for better accessibility and context.
- Changed code block syntax from diff to python for improved clarity.
* Add installation section to Pruna documentation
- Introduced a new installation section in the Pruna documentation to guide users on how to install the framework.
- Enhanced the overall clarity and usability of the documentation for new users.
* Update pruna.md
* Update pruna.md
* Update Pruna documentation for model optimization and evaluation
- Changed section titles for consistency and clarity, from "Optimizing models" to "Optimize models" and "Evaluating and benchmarking optimized models" to "Evaluate and benchmark models".
- Enhanced descriptions to clarify the use of `diffusers` models and the evaluation process.
- Added a new example for evaluating standalone `diffusers` models.
- Updated references and links for better navigation within the documentation.
* Refactor Pruna documentation for clarity and consistency
- Removed outdated references to FLUX-juiced and streamlined the explanation of benchmarking.
- Enhanced the description of evaluating standalone `diffusers` models.
- Cleaned up code examples by removing unnecessary imports and comments for better readability.
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Enhance Pruna documentation with new examples and clarifications
- Added an image to illustrate the optimization process.
- Updated the explanation for sharing and loading optimized models on the Hugging Face Hub.
- Clarified the evaluation process for optimized models using the EvaluationAgent.
- Improved descriptions for defining metrics and evaluating standalone diffusers models.
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* support text-to-image
* update example
* make fix-copies
* support use_flow_sigmas in EDM scheduler instead of maintain cosmos-specific scheduler
* support video-to-world
* update
* rename text2image pipeline
* make fix-copies
* add t2i test
* add test for v2w pipeline
* support edm dpmsolver multistep
* update
* update
* update
* update tests
* fix tests
* safety checker
* make conversion script work without guardrail
* add clarity in documentation for device_map
* docs
* fix how compiler tester mixins are used.
* propagate
* more
* typo.
* fix tests
* fix order of decorators.
* clarify more.
* more test cases.
* fix doc
* fix device_map docstring in pipeline_utils.
* more examples
* more
* update
* remove code for stuff that is already supported.
* fix stuff.
* allow loading from repo with dot in name
* put new arg at the end to avoid breaking compatibility
* add test for loading repo with dot in name
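For reference, the case this enables looks like the following (the repo id is hypothetical):
```py
from diffusers import DiffusionPipeline

# Hypothetical repo id: the dot in "v2.5" previously broke loading.
pipe = DiffusionPipeline.from_pretrained("some-org/model-v2.5")
```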
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Update pipeline_flux_inpaint.py to fix padding_mask_crop returning only the inpainted area and not the entire image.
* Apply style fixes
* Update src/diffusers/pipelines/flux/pipeline_flux_inpaint.py
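For context, a sketch of the call path this fix concerns: `padding_mask_crop` restricts denoising to the padded mask region, and after the fix the full image (with the inpainted patch pasted back) is returned. Checkpoint and inputs are illustrative:
```py
import torch
from diffusers import FluxInpaintPipeline
from diffusers.utils import load_image

pipe = FluxInpaintPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

init_image = load_image("https://example.com/photo.png")  # illustrative URL
mask_image = load_image("https://example.com/mask.png")   # white = region to inpaint

# With padding_mask_crop set, only the masked region (plus margin) is denoised;
# after the fix, the output is the full image, not just the cropped area.
image = pipe(
    prompt="a red sofa",
    image=init_image,
    mask_image=mask_image,
    padding_mask_crop=32,
).images[0]
```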
* Add community class StableDiffusionXL_T5Pipeline
Will be used with base model opendiffusionai/stablediffusionxl_t5
* Changed pooled_embeds to use projection instead of slice
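For illustration, the difference between the two approaches might look like this hypothetical sketch (names and dimensions are assumptions, not the pipeline's actual code):
```py
import torch

# SDXL conditioning expects pooled text embeddings of a fixed width, but T5
# has no pooled output. Rather than slicing the hidden states
# (e.g. hidden[:, 0, :1280]), mean-pool the sequence and project it with a
# learned linear layer. All names and dims here are illustrative assumptions.
t5_hidden = torch.randn(1, 77, 2048)         # (batch, seq_len, t5_hidden_dim)
proj = torch.nn.Linear(2048, 1280)           # t5_hidden_dim -> SDXL pooled dim
pooled_embeds = proj(t5_hidden.mean(dim=1))  # (batch, 1280)
```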
* "make style" tweaks
* Added comments to top of code
* Apply style fixes
echo "Quality check failed. Please ensure the right dependency versions are installed with 'pip install -e .[quality]' and run 'make style && make quality'" >> $GITHUB_STEP_SUMMARY
check_repository_consistency:
  needs: check_code_quality
  runs-on: ubuntu-22.04
  steps:
    - uses: actions/checkout@v3
    - name: Set up Python
      uses: actions/setup-python@v4
      with:
        python-version: "3.10"
    - name: Install dependencies
      run: |
        pip install --upgrade pip
        pip install .[quality]
    - name: Check repo consistency
      run: |
        python utils/check_copies.py
        python utils/check_dummies.py
        python utils/check_support_list.py
        make deps_table_check_updated
    - name: Check if failure
      if: ${{ failure() }}
      run: |
        echo "Repo consistency check failed. Please ensure the right dependency versions are installed with 'pip install -e .[quality]' and run 'make fix-copies'" >> $GITHUB_STEP_SUMMARY
We recommend installing 🤗 Diffusers in a virtual environment from PyPI or Conda. For more details about installing [PyTorch](https://pytorch.org/get-started/locally/) and [Flax](https://flax.readthedocs.io/en/latest/#installation), please refer to their official documentation.
We recommend installing 🤗 Diffusers in a virtual environment from PyPI or Conda. For more details about installing [PyTorch](https://pytorch.org/get-started/locally/), please refer to their official documentation.
### PyTorch
@@ -53,14 +53,6 @@ With `conda` (maintained by the community):
conda install -c conda-forge diffusers
```
### Flax
With `pip` (official package):
```bash
pip install --upgrade diffusers[flax]
```
### Apple Silicon (M1/M2) support
Please refer to the [How to use Stable Diffusion in Apple Silicon](https://huggingface.co/docs/diffusers/optimization/mps) guide.
@@ -179,7 +171,7 @@ Also, say 👋 in our public Discord channel <a href="https://discord.gg/G7tWnz9
Welcome to Diffusers Benchmarks. These benchmarks are used to obtain latency and memory information for the most popular models across different scenarios such as:
* Base case i.e., when using `torch.bfloat16` and `torch.nn.functional.scaled_dot_product_attention`.
* Base + `torch.compile()`
* NF4 quantization
* Layerwise upcasting
Instead of full diffusion pipelines, only the forward pass of the respective model classes (such as `FluxTransformer2DModel`) is tested with the real checkpoints (such as `"black-forest-labs/FLUX.1-dev"`).
The entrypoint to running all the currently available benchmarks is `run_all.py`. However, one can also run individual benchmarks, e.g., `python benchmarking_flux.py`. Each run produces a CSV file containing various information about the benchmarks that were run.
The benchmarks are run on a weekly basis and the CI is defined in [benchmark.yml](../.github/workflows/benchmark.yml).
## Running the benchmarks manually
First set up `torch` and install `diffusers` from the root of the directory:
```sh
pip install -e ".[quality,test]"
```
Then make sure the other dependencies are installed:
```sh
cd benchmarks/
pip install -r requirements.txt
```
We need to be authenticated to access some of the checkpoints used during benchmarking:
```sh
hf auth login
```
We use a machine with an L40 GPU and 128GB of RAM to run the benchmark CI, and the benchmarks are configured for NVIDIA GPUs. Make sure you have access to a similar machine (or modify the benchmarking scripts accordingly).
Then you can either launch the entire benchmarking suite by running:
```sh
python run_all.py
```
Or, you can run the individual benchmarks.
## Customizing the benchmarks
We define "scenarios" to cover the most common ways in which these models are used. You can
define a new scenario, modifying an existing benchmark file:
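A hypothetical sketch of such a scenario is shown below; the helper and field names are assumptions based on the pattern in the benchmark files, so check `benchmarking_utils.py` for the actual interface:
```py
# Hypothetical sketch: a scenario appended to an existing benchmark file such
# as benchmarking_flux.py. Field names here are assumptions; consult
# benchmarking_utils.py in this directory for the real definition.
from benchmarking_utils import BenchmarkScenario  # assumed helper

scenario = BenchmarkScenario(
    name="flux-bf16-compile",            # label that ends up in the CSV output
    model_cls="FluxTransformer2DModel",  # model class under test
    compile_kwargs={"fullgraph": True},  # enable torch.compile()
)
```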
You can also configure a new model-level benchmark and add it to the existing suite. To do so, defining a valid benchmarking file like `benchmarking_flux.py` is enough.
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
<!--Copyright 2025 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
@@ -14,11 +14,8 @@ specific language governing permissions and limitations under the License.
Schedulers from [`~schedulers.scheduling_utils.SchedulerMixin`] and models from [`ModelMixin`] inherit from [`ConfigMixin`] which stores all the parameters that are passed to their respective `__init__` methods in a JSON-configuration file.
<Tip>
To use private or [gated](https://huggingface.co/docs/hub/models-gated#gated-models) models, log-in with `huggingface-cli login`.
</Tip>
> [!TIP]
> To use private or [gated](https://huggingface.co/docs/hub/models-gated#gated-models) models, log-in with `hf auth login`.
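For instance, the stored configuration is available on any loaded scheduler or model (a minimal example; the checkpoint id is illustrative):
```py
from diffusers import DDIMScheduler

# Checkpoint id is illustrative.
scheduler = DDIMScheduler.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", subfolder="scheduler"
)
# Every __init__ parameter was stored in scheduler_config.json and is
# exposed as a read-only FrozenDict on .config.
print(scheduler.config.num_train_timesteps)
```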
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
<!--Copyright 2025 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
@@ -20,6 +20,12 @@ All pipelines with [`VaeImageProcessor`] accept PIL Image, PyTorch tensor, or Nu
[[autodoc]] image_processor.VaeImageProcessor
## InpaintProcessor
The [`InpaintProcessor`] accepts `mask` and `image` inputs and processes them together. Optionally, it can accept `padding_mask_crop` and apply a mask overlay.
[[autodoc]] image_processor.InpaintProcessor
## VaeImageProcessorLDM3D
The [`VaeImageProcessorLDM3D`] accepts RGB and depth inputs and returns RGB and depth outputs.
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
<!--Copyright 2025 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
@@ -14,11 +14,8 @@ specific language governing permissions and limitations under the License.
[IP-Adapter](https://hf.co/papers/2308.06721) is a lightweight adapter that enables prompting a diffusion model with an image. This method decouples the cross-attention layers of the image and text features. The image features are generated from an image encoder.
<Tip>
Learn how to load an IP-Adapter checkpoint and image in the IP-Adapter [loading](../../using-diffusers/loading_adapters#ip-adapter) guide, and you can see how to use it in the [usage](../../using-diffusers/ip_adapter) guide.
</Tip>
> [!TIP]
> Learn how to load and use an IP-Adapter checkpoint and image in the [IP-Adapter](../../using-diffusers/ip_adapter) guide.
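A minimal usage sketch (the base checkpoint id and image URL are illustrative):
```py
import torch
from diffusers import StableDiffusionPipeline
from diffusers.utils import load_image

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
pipe.set_ip_adapter_scale(0.6)  # how strongly the image prompt conditions generation

ip_image = load_image("https://example.com/reference.png")  # illustrative URL
image = pipe("a dog in the snow", ip_adapter_image=ip_image).images[0]
```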
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
<!--Copyright 2025 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
@@ -26,16 +26,19 @@ LoRA is a fast and lightweight training method that inserts and trains a signifi
- [`HunyuanVideoLoraLoaderMixin`] provides similar functions for [HunyuanVideo](https://huggingface.co/docs/diffusers/main/en/api/pipelines/hunyuan_video).
- [`Lumina2LoraLoaderMixin`] provides similar functions for [Lumina2](https://huggingface.co/docs/diffusers/main/en/api/pipelines/lumina2).
- [`WanLoraLoaderMixin`] provides similar functions for [Wan](https://huggingface.co/docs/diffusers/main/en/api/pipelines/wan).
- [`SkyReelsV2LoraLoaderMixin`] provides similar functions for [SkyReels-V2](https://huggingface.co/docs/diffusers/main/en/api/pipelines/skyreels_v2).
- [`CogView4LoraLoaderMixin`] provides similar functions for [CogView4](https://huggingface.co/docs/diffusers/main/en/api/pipelines/cogview4).
- [`AmusedLoraLoaderMixin`] is for the [`AmusedPipeline`].
- [`HiDreamImageLoraLoaderMixin`] provides similar functions for [HiDream Image](https://huggingface.co/docs/diffusers/main/en/api/pipelines/hidream).
- [`QwenImageLoraLoaderMixin`] provides similar functions for [Qwen Image](https://huggingface.co/docs/diffusers/main/en/api/pipelines/qwen).
- [`LoraBaseMixin`] provides a base class with several utility methods to fuse, unfuse, and unload LoRAs, and more.
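As a quick illustration of the shared interface these mixins provide (the LoRA repo id is illustrative):
```py
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# load_lora_weights is provided by the pipeline's LoRA loader mixin.
pipe.load_lora_weights("some-user/sdxl-lora")  # illustrative repo id
image = pipe("a watercolor fox").images[0]
```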
<Tip>
To learn more about how to load LoRA weights, see the [LoRA](../../using-diffusers/loading_adapters#lora) loading guide.
</Tip>
> [!TIP]
> To learn more about how to load LoRA weights, see the [LoRA](../../tutorials/using_peft_for_inference) loading guide.
## LoraBaseMixin
[[autodoc]] loaders.lora_base.LoraBaseMixin
## StableDiffusionLoraLoaderMixin
@@ -88,6 +91,10 @@ To learn more about how to load LoRA weights, see the [LoRA](../../using-diffuse
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
<!--Copyright 2025 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
@@ -12,13 +12,10 @@ specific language governing permissions and limitations under the License.
# PEFT
Diffusers supports loading adapters such as [LoRA](../../using-diffusers/loading_adapters) with the [PEFT](https://huggingface.co/docs/peft/index) library with the [`~loaders.peft.PeftAdapterMixin`] class. This allows modeling classes in Diffusers like [`UNet2DConditionModel`], [`SD3Transformer2DModel`] to operate with an adapter.
Diffusers supports loading adapters such as [LoRA](../../tutorials/using_peft_for_inference) with the [PEFT](https://huggingface.co/docs/peft/index) library with the [`~loaders.peft.PeftAdapterMixin`] class. This allows modeling classes in Diffusers like [`UNet2DConditionModel`], [`SD3Transformer2DModel`] to operate with an adapter.
<Tip>
Refer to the [Inference with PEFT](../../tutorials/using_peft_for_inference.md) tutorial for an overview of how to use PEFT in Diffusers for inference.
</Tip>
> [!TIP]
> Refer to the [Inference with PEFT](../../tutorials/using_peft_for_inference.md) tutorial for an overview of how to use PEFT in Diffusers for inference.
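A short sketch of the PEFT-backed adapter workflow (adapter repo ids are illustrative):
```py
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# Each LoRA is registered under an adapter name via PeftAdapterMixin.
pipe.load_lora_weights("some-user/watercolor-lora", adapter_name="watercolor")  # illustrative
pipe.load_lora_weights("some-user/pixel-lora", adapter_name="pixel")            # illustrative
# Activate both adapters with per-adapter scales.
pipe.set_adapters(["watercolor", "pixel"], adapter_weights=[1.0, 0.6])
```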
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
<!--Copyright 2025 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
@@ -16,11 +16,8 @@ Textual Inversion is a training method for personalizing models by learning new
[`TextualInversionLoaderMixin`] provides a function for loading Textual Inversion embeddings from Diffusers and Automatic1111 into the text encoder and loading a special token to activate the embeddings.
<Tip>
To learn more about how to load Textual Inversion embeddings, see the [Textual Inversion](../../using-diffusers/loading_adapters#textual-inversion) loading guide.
</Tip>
> [!TIP]
> To learn more about how to load Textual Inversion embeddings, see the [Textual Inversion](../../using-diffusers/textual_inversion_inference) loading guide.
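A minimal example (the concept repo and its `<cat-toy>` token follow the common docs example; the base checkpoint id is illustrative):
```py
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Loads the learned embedding and registers its activation token.
pipe.load_textual_inversion("sd-concepts-library/cat-toy")
image = pipe("a photo of a <cat-toy> on a beach").images[0]
```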
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
<!--Copyright 2025 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
@@ -16,11 +16,8 @@ Some training methods - like LoRA and Custom Diffusion - typically target the UN
The [`UNet2DConditionLoadersMixin`] class provides functions for loading and saving weights, fusing and unfusing LoRAs, disabling and enabling LoRAs, and setting and deleting adapters.
<Tip>
To learn more about how to load LoRA weights, see the [LoRA](../../using-diffusers/loading_adapters#lora) loading guide.
</Tip>
> [!TIP]
> To learn more about how to load LoRA weights, see the [LoRA](../../tutorials/using_peft_for_inference) guide.
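For example, fusing and unfusing a loaded LoRA through the pipeline-level wrappers around this mixin (repo ids are illustrative):
```py
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("some-user/sdxl-lora")  # illustrative repo id
pipe.fuse_lora()    # merge LoRA weights into the UNet for faster inference
# ... run inference ...
pipe.unfuse_lora()  # restore the original, unfused weights
```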
<!-- Copyright 2025 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License. -->
# AutoencoderKLHunyuanImageRefiner
The 3D variational autoencoder (VAE) model with KL loss used in [HunyuanImage2.1](https://github.com/Tencent-Hunyuan/HunyuanImage-2.1) for its refiner pipeline.
The model can be loaded with the following code snippet.
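A sketch of such a loading snippet, assuming the class name given above (the checkpoint id and subfolder are assumptions; check the model card for the actual values):
```py
import torch
from diffusers import AutoencoderKLHunyuanImageRefiner

# Checkpoint id and subfolder below are assumptions, not confirmed values.
vae = AutoencoderKLHunyuanImageRefiner.from_pretrained(
    "tencent/HunyuanImage-2.1", subfolder="vae_refiner", torch_dtype=torch.bfloat16
)
```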
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
<!--Copyright 2025 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
@@ -16,11 +16,8 @@ Consistency decoder can be used to decode the latents from the denoising UNet in
The original codebase can be found at [openai/consistencydecoder](https://github.com/openai/consistencydecoder).
<Tip warning={true}>
Inference is only supported for 2 iterations as of now.
</Tip>
> [!WARNING]
> Inference is only supported for 2 iterations as of now.
The pipeline could not have been contributed without the help of [madebyollin](https://github.com/madebyollin) and [mrsteyk](https://github.com/mrsteyk) from [this issue](https://github.com/openai/consistencydecoder/issues/1).
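A minimal usage sketch (the Stable Diffusion checkpoint id is illustrative):
```py
import torch
from diffusers import ConsistencyDecoderVAE, StableDiffusionPipeline

vae = ConsistencyDecoderVAE.from_pretrained(
    "openai/consistency-decoder", torch_dtype=torch.float16
)
# Swap the consistency decoder in as the pipeline's VAE.
pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", vae=vae, torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of an astronaut riding a horse").images[0]
```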
<!-- Copyright 2024 The HuggingFace Team. All rights reserved.
<!-- Copyright 2025 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at