Commit Graph

1702 Commits

Author SHA1 Message Date
Dhruv Nair
4ac80a1ddb update 2026-04-07 12:20:31 +02:00
Dhruv Nair
bc24d8b463 update 2026-04-07 12:19:05 +02:00
DN6
d6fd4cb6f6 update 2026-03-26 15:56:29 +05:30
Sayak Paul
85ffcf1db2 [tests] Tests for conditional pipeline blocks (#13247)
* implement test suite for conditional blocks.

* remove

* another fix.

* Revert "another fix."

This reverts commit ab07b603ab.
2026-03-26 08:48:16 +05:30
Kashif Rasul
5d207e756e [Discrete Diffusion] Add LLaDA2 pipeline (#13226)
* feat: add LLaDA2 and BlockRefinement pipelines for discrete text diffusion

Add support for LLaDA2/LLaDA2.1 discrete diffusion text generation:
- BlockRefinementPipeline: block-wise iterative refinement with confidence-based
  token commitment, supporting editing threshold for LLaDA2.1 models
- LLaDA2Pipeline: convenience wrapper with LLaDA2-specific defaults
- DiscreteDiffusionPipelineMixin: shared SAR sampling utilities (top-k, top-p,
  temperature) and prompt/prefix helpers
- compute_confidence_aware_loss: CAP-style training loss
- Examples: sampling scripts for LLaDA2 and block refinement, training scripts
  with Qwen causal LM
- Docs and tests included
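
A minimal sketch of the confidence-based token commitment described above (illustrative only; the tensor names and `mask_token_id` handling are assumptions, not the pipeline's actual internals):

```python
import torch
import torch.nn.functional as F

def commit_most_confident(logits, tokens, mask_token_id, num_to_commit):
    # logits: (batch, seq, vocab); tokens: (batch, seq) with mask_token_id at unfilled positions
    probs = F.softmax(logits, dim=-1)
    confidence, candidates = probs.max(dim=-1)                                   # per-position confidence and argmax token
    confidence = confidence.masked_fill(tokens != mask_token_id, float("-inf"))  # only masked slots compete
    commit_idx = confidence.topk(num_to_commit, dim=-1).indices
    return tokens.scatter(1, commit_idx, candidates.gather(1, commit_idx))       # fill the most confident slots
```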

* feat: add BlockRefinementScheduler for commit-by-confidence scheduling

Extract the confidence-based token commit logic from BlockRefinementPipeline
into a dedicated BlockRefinementScheduler, following diffusers conventions.

The scheduler owns:
- Transfer schedule computation (get_num_transfer_tokens)
- Timestep management (set_timesteps)
- Step logic: confidence-based mask-filling and optional token editing

The pipeline now delegates scheduling to self.scheduler.step() and accepts
a scheduler parameter in __init__.
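
A hedged sketch of what a `get_num_transfer_tokens`-style transfer schedule computes, assuming the block's masked positions are split evenly across steps with the remainder given to the earliest steps (not necessarily the scheduler's exact rule):

```python
def get_num_transfer_tokens(num_masked: int, num_steps: int) -> list[int]:
    # Evenly distribute the masked positions of one block over the refinement steps.
    base, remainder = divmod(num_masked, num_steps)
    return [base + 1 if i < remainder else base for i in range(num_steps)]

# e.g. a 32-token block refined in 5 steps -> [7, 7, 6, 6, 6]
```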

* test: add unit tests for BlockRefinementScheduler

12 tests covering set_timesteps, get_num_transfer_tokens, step logic
(confidence-based commits, threshold behavior, editing, prompt masking,
batched inputs, tuple output).

* docs: add toctree entries and standalone scheduler doc page

- Add BlockRefinement and LLaDA2 to docs sidebar navigation
- Add BlockRefinementScheduler to schedulers sidebar navigation
- Move scheduler autodoc to its own page under api/schedulers/

* feat: add --revision flag and fix dtype deprecation in sample_llada2.py

- Add --revision argument for loading model revisions from the Hub
- Replace deprecated torch_dtype with dtype for transformers 5.x compat
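
The deprecation fix is just a keyword swap when loading the model (sketch; the model ID is a placeholder):

```python
import torch
from transformers import AutoModelForCausalLM

# transformers 5.x: `torch_dtype` is deprecated in favor of `dtype`
model = AutoModelForCausalLM.from_pretrained("<model-id>", dtype=torch.bfloat16)
```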

* fix: use 1/0 attention mask instead of 0/-inf for LLaDA2 compat

LLaDA2 models expect a boolean-style (1/0) attention mask, not an
additive (0/-inf) mask. The model internally converts to additive,
so passing 0/-inf caused double-masking and gibberish output.
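
In other words, the pipeline should pass the tokenizer-style mask and let the model build the additive bias itself; a small illustration (not the pipeline code):

```python
import torch

attention_mask = torch.tensor([[1, 1, 1, 0, 0]])   # 1 = attend, 0 = padding; what LLaDA2 expects
additive_mask = (1 - attention_mask) * torch.finfo(torch.float32).min  # the 0 / -inf form the model builds internally
# Feeding `additive_mask` to the model makes it convert an already-additive mask again -> double-masking.
```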

* refactor: consolidate training scripts into single train_block_refinement.py

- Remove toy train_block_refinement_cap.py (self-contained demo with tiny model)
- Rename train_block_refinement_qwen_cap.py to train_block_refinement.py
  (already works with any causal LM via AutoModelForCausalLM)
- Fix torch_dtype deprecation and update README with correct script names

* fix formatting

* docs: improve LLaDA2 and BlockRefinement documentation

- Add usage examples with real model IDs and working code
- Add recommended parameters table for LLaDA2.1 quality/speed modes
- Note that editing is LLaDA2.1-only (not for LLaDA2.0 models)
- Remove misleading config defaults section from BlockRefinement docs

* feat: set LLaDA2Pipeline defaults to recommended model parameters

- threshold: 0.95 -> 0.7 (quality mode)
- max_post_steps: 0 -> 16 (recommended for LLaDA2.1, harmless for 2.0)
- eos_early_stop: False -> True (stop at EOS token)

block_length=32, steps=32, temperature=0.0 were already correct.
editing_threshold remains None (users enable for LLaDA2.1 models).

* feat: default editing_threshold=0.5 for LLaDA2.1 quality mode

LLaDA2.1 is the current generation. Users with LLaDA2.0 models can
disable editing by passing editing_threshold=None.
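
Putting these defaults together, a hedged usage example (argument names follow the ones listed in these commits; the import path, call signature, and model ID are assumptions):

```python
import torch
from diffusers import LLaDA2Pipeline

pipe = LLaDA2Pipeline.from_pretrained("<LLaDA2.1-model-id>", dtype=torch.bfloat16)
output = pipe(
    "Explain discrete diffusion language models in one paragraph.",
    block_length=32, steps=32, temperature=0.0,      # unchanged defaults
    threshold=0.7, max_post_steps=16, eos_early_stop=True,
    editing_threshold=0.5,                           # pass None for LLaDA2.0 checkpoints
)
```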

* fix: align sampling utilities with official LLaDA2 implementation

- top_p filtering: add shift-right to preserve at least one token above
  threshold (matches official code line 1210)
- temperature ordering: apply scaling before top-k/top-p filtering so
  filtering operates on scaled logits (matches official code lines 1232-1235)
- greedy branch: return argmax directly when temperature=0 without
  filtering (matches official code lines 1226-1230)
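
A sketch of the adjusted sampling order (greedy path at temperature 0, temperature scaling before filtering, and nucleus filtering with the shift-right that keeps at least one token); the function name is illustrative, not the mixin's API:

```python
import torch
import torch.nn.functional as F

def sample_tokens(logits, temperature=1.0, top_p=0.95):
    # logits: (batch, vocab) for the positions being sampled
    if temperature == 0:
        return logits.argmax(dim=-1)                     # greedy branch: no filtering at all
    logits = logits / temperature                        # scale first so filtering sees scaled logits
    sorted_logits, sorted_idx = torch.sort(logits, descending=True, dim=-1)
    remove = F.softmax(sorted_logits, dim=-1).cumsum(dim=-1) > top_p
    remove[..., 1:] = remove[..., :-1].clone()           # shift right: at least one token stays above threshold
    remove[..., 0] = False
    logits = logits.masked_fill(remove.scatter(-1, sorted_idx, remove), float("-inf"))
    return torch.multinomial(F.softmax(logits, dim=-1), num_samples=1).squeeze(-1)
```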

* refactor: remove duplicate prompt encoding, reuse mixin's _prepare_input_ids

LLaDA2Pipeline._prepare_prompt_ids was a near-copy of
DiscreteDiffusionPipelineMixin._prepare_input_ids. Remove the duplicate
and call the mixin method directly. Also simplify _extract_input_ids
since we always pass return_dict=True.

* formatting

* fix: replace deprecated torch_dtype with dtype in examples and docstrings

- Update EXAMPLE_DOC_STRING to use dtype= and LLaDA2.1-mini model ID
- Fix sample_block_refinement.py to use dtype=

* remove BlockRefinementPipeline

* cleanup

* fix readme

* Update src/diffusers/pipelines/llada2/pipeline_llada2.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* Update src/diffusers/pipelines/llada2/pipeline_llada2.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* Update src/diffusers/pipelines/llada2/pipeline_llada2.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* Update src/diffusers/pipelines/llada2/pipeline_llada2.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* Update src/diffusers/pipelines/llada2/pipeline_llada2.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* Update src/diffusers/pipelines/llada2/pipeline_llada2.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* Update src/diffusers/pipelines/llada2/pipeline_llada2.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* Update src/diffusers/pipelines/llada2/pipeline_llada2.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* Update src/diffusers/pipelines/llada2/pipeline_llada2.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* Update src/diffusers/pipelines/llada2/pipeline_llada2.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* Update src/diffusers/pipelines/llada2/pipeline_llada2.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* Update src/diffusers/pipelines/llada2/pipeline_llada2.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* Update src/diffusers/pipelines/llada2/pipeline_llada2.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* Update src/diffusers/pipelines/llada2/pipeline_llada2.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* Update src/diffusers/pipelines/llada2/pipeline_llada2.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* removed DiscreteDiffusionPipelineMixin

* add support for 2d masks for flash attn

* Update src/diffusers/training_utils.py

Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>

* Update src/diffusers/training_utils.py

Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>

* fix issues from review

* added tests

* formatting

* add check_eos_finished to scheduler

* Update src/diffusers/pipelines/llada2/pipeline_llada2.py

Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>

* Update src/diffusers/pipelines/llada2/pipeline_llada2.py

Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>

* Update src/diffusers/pipelines/llada2/pipeline_llada2.py

Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>

* Update src/diffusers/pipelines/llada2/pipeline_llada2.py

Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>

* Update src/diffusers/schedulers/scheduling_block_refinement.py

Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>

* Update src/diffusers/pipelines/llada2/pipeline_llada2.py

Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>

* Update src/diffusers/pipelines/llada2/pipeline_llada2.py

Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>

* Update src/diffusers/pipelines/llada2/pipeline_llada2.py

Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>

* Update src/diffusers/schedulers/scheduling_block_refinement.py

Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>

* fix renaming issues and types

* remove duplicate check

* Update docs/source/en/api/pipelines/llada2.md

Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>

* Update src/diffusers/pipelines/llada2/pipeline_llada2.py

Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>

* Update src/diffusers/pipelines/llada2/pipeline_llada2.py

Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>

---------

Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
2026-03-25 16:17:50 +05:30
Sayak Paul
e358ddcce6 fix to device and to dtype tests. (#13323) 2026-03-25 11:47:02 +05:30
Sayak Paul
f1fd515257 [tests] fix lora logging tests for models. (#13318)
* fix lora logging tests for models.

* make style
2026-03-24 15:48:03 +05:30
Cheung Ka Wai
afdda57f61 Fix the attention mask in ulysses SP for QwenImage (#13278)
* fix mask in SP

* change the modification to qwen specific

* drop xfail since qwen-image mask is fixed

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2026-03-24 02:12:50 -07:00
Sayak Paul
6350a7690a [chore] properly deprecate src.diffusers.utils.testing_utils. (#13314)
properly deprecate src.diffusers.utils.testing_utils.
2026-03-24 10:54:35 +05:30
Cheung Ka Wai
9d4c9dcf21 change QwenImageTransformer UT to batch inputs (#13312)
* UT expands to batch inputs

* update according to suggestion

* update according to suggestion 2

* fix CI

* update according to suggestion 3

* clean line
2026-03-24 08:56:40 +05:30
ddavidchick
ef309a1bb0 Add KVAE 1.0 (#13033)
* add kvae2d

* add kvae3d video

* add docs for kvae2d and kvae3d video

* style fixes

* fix kvae3d docs

* fix normalization

* fix kvae video for code style

* fix kvae video

* kvae minor fixes

* add gradient ckpting for kvaes

* get rid of inplace ops kvae video

* add tests for KVAEs

* kvae2d normalization style change

* kvaes fix style

* update dummy_pt_objects test for kvaes

---------

Co-authored-by: YiYi Xu <yixu310@gmail.com>
2026-03-23 12:56:49 -10:00
Dhruv Nair
52558b45d8 [CI] Flux2 Model Test Refactor (#13071)
* update

* update

* update

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2026-03-23 16:56:08 +05:30
Sayak Paul
c02c17c6ee [tests] test load_components in modular (#13245)
* test load_components.

* fix

* fix

* up

* up
2026-03-21 09:41:48 +05:30
Sayak Paul
522b523e40 [ci] hoping to fix is_flaky with wanvace. (#13294)
* hoping to fix is_flaky with wanvace.

* revert changes in src/diffusers/utils/testing_utils.py and propagate them to tests/testing_utils.py.

* up
2026-03-20 16:02:16 +05:30
Dhruv Nair
32b4cfc81c [Modular] Test for catching dtype and device issues with AutoModel type hints (#13287)
* update

* update

* update
2026-03-20 10:36:03 +05:30
dg845
072d15ee42 Add Support for LTX-2.3 Models (#13217)
* Initial implementation of perturbed attn processor for LTX 2.3

* Update DiT block for LTX 2.3 + add self_attention_mask

* Add flag to control using perturbed attn processor for now

* Add support for new video upsampling blocks used by LTX-2.3

* Support LTX-2.3 Big-VGAN V2-style vocoder

* Initial implementation of LTX-2.3 vocoder with bandwidth extender

* Initial support for LTX-2.3 per-modality feature extractor

* Refactor so that text connectors own all text encoder hidden_states normalization logic

* Fix some bugs for inference

* Fix LTX-2.X DiT block forward pass

* Support prompt timestep embeds and prompt cross attn modulation

* Add LTX-2.3 configs to conversion script

* Support converting LTX-2.3 DiT checkpoints

* Support converting LTX-2.3 Video VAE checkpoints

* Support converting LTX-2.3 Vocoder with bandwidth extender

* Support converting LTX-2.3 text connectors

* Don't convert any upsamplers for now

* Support self attention mask for LTX2Pipeline

* Fix some inference bugs

* Support self attn mask and sigmas for LTX-2.3 I2V, Cond pipelines

* Support STG and modality isolation guidance for LTX-2.3

* make style and make quality

* Make audio guidance values default to the video values

* Update to LTX-2.3 style guidance rescaling

* Support cross timesteps for LTX-2.3 cross attention modulation

* Fix RMS norm bug for LTX-2.3 text connectors

* Perform guidance rescale in sample (x0) space following original code

* Support LTX-2.3 Latent Spatial Upsampler model

* Support LTX-2.3 distilled LoRA

* Support LTX-2.3 Distilled checkpoint

* Support LTX-2.3 prompt enhancement

* Make LTX-2.X processor non-required so that tests pass

* Fix test_components_function tests for LTX2 T2V and I2V

* Fix LTX-2.3 Video VAE configuration bug causing pixel jitter

* Apply suggestions from code review

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Refactor LTX-2.X Video VAE upsampler block init logic

* Refactor LTX-2.X guidance rescaling to use rescale_noise_cfg

* Use generator initial seed to control prompt enhancement if available

* Remove self attention mask logic as it is not used in any current pipelines

* Commit fixes suggested by claude code (guidance in sample (x0) space, denormalize after timestep conditioning)

* Use constant shift following original code

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2026-03-19 14:58:29 -07:00
kaixuanliu
8e4b5607ed skip invalid test case for helios pipeline (#13218)
* skip invalid test case for helios pipeline

Signed-off-by: Liu, Kaixuan <kaixuan.liu@intel.com>

* update skip reason

Signed-off-by: Liu, Kaixuan <kaixuan.liu@intel.com>

---------

Signed-off-by: Liu, Kaixuan <kaixuan.liu@intel.com>
2026-03-17 20:58:35 -10:00
Dhruv Nair
11a3284cee [CI] Qwen Image Model Test Refactor (#13069)
* update

* update

* update

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2026-03-17 16:44:04 +05:30
Sayak Paul
16e7067647 [tests] fix llava kwargs in the hunyuan tests (#13275)
fix llava kwargs in the hunyuan tests
2026-03-17 10:11:47 +05:30
Wang, Yi
9677859ebf fix parallelism case failure in xpu (#13270)
* fix parallelism case failure in xpu

Signed-off-by: Wang, Yi <yi.a.wang@intel.com>

* updated

Signed-off-by: Wang, Yi <yi.a.wang@intel.com>

---------

Signed-off-by: Wang, Yi <yi.a.wang@intel.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2026-03-17 08:52:15 +05:30
Sayak Paul
764f7ede33 [core] Flux2 klein kv followups (#13264)
* implement Flux2Transformer2DModelOutput.

* add output class to docs.

* add Flux2KleinKV to docs.

* add pipeline tests for klein kv.
2026-03-13 10:05:11 +05:30
Dhruv Nair
07c5ba8eee [Context Parallel] Add support for custom device mesh (#13064)
* add custom mesh support

* update

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2026-03-11 16:42:11 +05:30
YiYi Xu
068c6ef6c1 [modular] helios (#13216)
* add helios modular

* upup

* revert change in guider

* up

* fix for real

* fix batch test

* Apply suggestion from @yiyixuxu

---------

Co-authored-by: yiyi@huggingface.co <yiyi@ip-26-0-163-127.ec2.internal>
2026-03-09 10:37:56 -10:00
Sayak Paul
a08c274c33 [tests] Use tmp_path fixture modular tests (#13194)
* add a test to check modular index consistency

* check for compulsory keys.

* use fixture for tmp_path in modular tests.

* remove unneeded test.

* fix code quality.

* up

* up
2026-03-09 15:17:59 +05:30
annitang1997
7f92d81320 Add VidTok AutoEncoders (#11261)
* add_autoencoder_vidtok

* format standardization

* remove small functions

* making the code style more diffusers-like

* Apply style fixes

* Add dummy objects for AutoencoderVidTok

* Fix AutoencoderVidTok avg_pool3d BFloat16 CPU compatibility

* skip test_layerwise_casting_training test

* Apply style fixes

---------

Co-authored-by: annitang1997 <memory97@sjtu.edu.cn>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
2026-03-08 20:23:49 -07:00
Ando
8ec0a5ccad feat: implement rae autoencoder. (#13046)
* feat: implement three RAE encoders (dinov2, siglip2, mae)

* feat: finish first version of autoencoder_rae

* fix formatting

* make fix-copies

* initial doc

* fix latent_mean / latent_var init types to accept config-friendly inputs

* use mean and std convention

* cleanup

* add rae to diffusers script

* use imports

* use attention

* remove unneeded class

* example traiing script

* input and ground truth sizes have to be the same

* fix argument

* move loss to training script

* cleanup

* simplify mixins

* fix training script

* fix entrypoint for instantiating the AutoencoderRAE

* added encoder_image_size config

* undo last change

* fixes from pretrained weights

* cleanups

* address reviews

* fix train script to use pretrained

* fix conversion script review

* latent normalization buffers are now always registered with no-op defaults

* Update examples/research_projects/autoencoder_rae/README.md

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update src/diffusers/models/autoencoders/autoencoder_rae.py

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* use image url

* Encoder is frozen

* fix slow test

* remove config

* use ModelTesterMixin and AutoencoderTesterMixin

* make quality

* strip final layernorm when converting

* _strip_final_layernorm_affine for training script

* fix test

* add dispatch forward and update conversion script

* update training script

* error out as soon as possible and add comments

* Update src/diffusers/models/autoencoders/autoencoder_rae.py

Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>

* use buffer

* inline

* Update src/diffusers/models/autoencoders/autoencoder_rae.py

Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>

* remove optional

* _noising takes a generator

* Update src/diffusers/models/autoencoders/autoencoder_rae.py

Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>

* fix api

* rename

* remove unittest

* use randn_tensor

* fix device map on multigpu

* check if the key is missing in the original state dict and only then add to the allow_missing set

* remove initialize_weights

---------

Co-authored-by: wangyuqi <wangyuqi@MBP-FJDQNJTWYN-0208.local>
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
2026-03-05 20:17:14 +05:30
dg845
33f785b444 Add Helios-14B Video Generation Pipelines (#13208)
* [1/N] add helios

* fix test

* make fix-copies

* change script path

* fix cus script

* update docs

* fix documented check

* update links for docs and examples

* change default config

* small refactor

* add test

* Update src/diffusers/models/transformers/transformer_helios.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* remove register_buffer for _scale_cache

* fix non-cuda devices error

* remove "handle the case when timestep is 2D"

* refactor HeliosMultiTermMemoryPatch and process_input_hidden_states

* Update src/diffusers/pipelines/helios/pipeline_helios.py

Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>

* Update src/diffusers/models/transformers/transformer_helios.py

Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>

* Update src/diffusers/pipelines/helios/pipeline_helios.py

Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>

* fix calculate_shift

* Update src/diffusers/pipelines/helios/pipeline_helios.py

Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>

* rewritten `einops` in pure `torch` (see the sketch after this change list)

* fix: pass patch_size to apply_schedule_shift instead of hardcoding

* remove the logics of 'vae_decode_type'

* move some validation into check_inputs()

* rename helios scheduler & merge all into one step()

* add some details to doc

* move dmd step() logic from pipeline to scheduler

* change to Python 3.9+ style type

* fix NoneType error

* refactor DMD scheduler's set_timestep

* change rope related vars name

* fix stage2 sample

* fix dmd sample

* Update src/diffusers/models/transformers/transformer_helios.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* Update src/diffusers/models/transformers/transformer_helios.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* remove redundant & refactor norm_out

* Update src/diffusers/pipelines/helios/pipeline_helios.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* change "is_keep_x0" to "keep_first_frame"

* use a more intuitive name

* refactor dynamic_time_shifting

* remove use_dynamic_shifting args

* remove usage of UniPCMultistepScheduler

* separate stage2 sample to HeliosPyramidPipeline

* Update src/diffusers/models/transformers/transformer_helios.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* Update src/diffusers/models/transformers/transformer_helios.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* Update src/diffusers/models/transformers/transformer_helios.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* Update src/diffusers/models/transformers/transformer_helios.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* fix transformer

* use a more intuitive name

* update example script

* fix requirements

* remove redundant attention mask

* fix

* optimize pipelines

* make style

* update TYPE_CHECKING

* change to use torch.split

Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>

* derive memory patch sizes from patch_size multiples

* remove some hardcoding

* move some checks into check_inputs

* refactor sample_block_noise

* optimize encoding chunk logic for v2v

* use num_history_latent_frames = sum(history_sizes)

* Update src/diffusers/pipelines/helios/pipeline_helios.py

Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>

* remove redudant optimized_scale

* Update src/diffusers/pipelines/helios/pipeline_helios_pyramid.py

Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>

* use more descriptive name

* optimize history_latents

* remove unused "num_inference_steps"

* removed redundant "pyramid_num_stages"

* add "is_cfg_zero_star" and "is_distilled" to HeliosPyramidPipeline

* remove redundant

* change example scripts name

* change example scripts name

* correct docs

* update example

* update docs

* Update tests/models/transformers/test_models_transformer_helios.py

Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>

* Update tests/models/transformers/test_models_transformer_helios.py

Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>

* separate HeliosDMDScheduler

* fix numerical stability issue

* Update src/diffusers/schedulers/scheduling_helios_dmd.py

Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>

* Update src/diffusers/schedulers/scheduling_helios_dmd.py

Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>

* Update src/diffusers/schedulers/scheduling_helios_dmd.py

Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>

* Update src/diffusers/schedulers/scheduling_helios_dmd.py

Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>

* Update src/diffusers/schedulers/scheduling_helios_dmd.py

Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>

* remove redundant

* small refactor

* remove use_interpolate_prompt logic

* simplified model test

* fallback to BaseModelTesterConfig

* remove _maybe_expand_t2v_lora_for_i2v

* fix HeliosLoraLoaderMixin

* update docs

* use randn_tensor for test

* fix doc typo

* optimize code

* mark torch.compile xfail

* change paper name

* Make get_dummy_inputs deterministic using self.generator

* Set less strict threshold for test_save_load_float16 test for Helios pipeline

* make style and make quality

* Preparation for merging

* add torch.Generator

* Fix HeliosPipelineOutput doc path

* Fix Helios related (optimize docs & remove redundant) (#13210)

* fix docs

* remove redundant

* remove redundant

* fix group offload

* Removed fixes for group offload
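
As flagged in the "rewritten `einops` in pure `torch`" item earlier in this change list, rearrange-style calls become plain permute/flatten ops; a generic illustration (the pattern is an example, not taken from the Helios code):

```python
import torch

x = torch.randn(2, 16, 4, 8, 8)                 # (batch, channels, time, height, width)

# einops: rearrange(x, "b c t h w -> b t (h w) c")
y = x.permute(0, 2, 3, 4, 1).flatten(2, 3)      # pure torch equivalent
print(y.shape)                                  # torch.Size([2, 4, 64, 16])
```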

---------

Co-authored-by: yuanshenghai <yuanshenghai@bytedance.com>
Co-authored-by: Shenghai Yuan <140951558+SHYuanBest@users.noreply.github.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: SHYuanBest <shyuan-cs@hotmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2026-03-04 21:31:43 +05:30
Shenghai Yuan
06ccde9490 Fix group-offloading bug (#13211)
* Implement synchronous onload for offloaded parameters

Add fallback synchronous onload for conditionally-executed modules.
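
The idea, very roughly (a generic PyTorch sketch of the fallback, not the actual group-offloading hook): if a module may be skipped at runtime, its parameters might never have been prefetched asynchronously, so onload them synchronously right before its forward runs.

```python
import torch

def sync_onload_pre_hook(module: torch.nn.Module, args):
    # Fallback: if async prefetch never ran for this conditionally-executed module,
    # move its parameters to the execution device before the forward pass.
    device = torch.device("cuda")                      # execution device is assumed here
    if any(p.device != device for p in module.parameters()):
        module.to(device)

# conditional_module.register_forward_pre_hook(sync_onload_pre_hook)  # hypothetical registration
```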

* add test for new code path about group-offloading

* Update tests/hooks/test_group_offloading.py

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* use unittest.skipIf and update the comment

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2026-03-04 20:34:13 +05:30
Sayak Paul
4a2833c1c2 [Modular] implement requirements validation for custom blocks (#12196)
* feat: implement requirements validation for custom blocks.

* up

* unify.

* up

* add tests

* Apply suggestions from code review

Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>

* reviewer feedback.

* [docs] validation for custom blocks (#13156)

validation

* move to tmp_path fixture.

* propagate to conditional and loopsequential blocks.

* up

* remove collected tests

---------

Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2026-03-04 12:19:08 +05:30
YiYi Xu
1fe688a651 [modular] not pass trust_remote_code to external repos (#13204)
* add

* update warn

* add a test

* update

* update_component with custom model

* add more tests

* Apply suggestion from @DN6

Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>

* up

---------

Co-authored-by: yiyi@huggingface.co <yiyi@ip-26-0-161-123.ec2.internal>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2026-03-03 02:36:36 -10:00
YiYi Xu
bbbcdd87bd [modular]Update model card to include workflow (#13195)
* up

* up

* update

* remove test

---------

Co-authored-by: yiyi@huggingface.co <yiyi@ip-26-0-161-123.ec2.internal>
Co-authored-by: yiyi@huggingface.co <yiyi@ip-26-0-160-103.ec2.internal>
2026-03-02 20:50:07 -10:00
Dhruv Nair
84ff061b1d [Modular] Save Modular Pipeline weights to Hub (#13168)
* update

* update

* update

* update

* update

* update
2026-03-02 22:20:42 +05:30
Dhruv Nair
3fd14f1acf [AutoModel] Allow registering auto_map to model config (#13186)
* update

* update
2026-03-02 22:13:25 +05:30
Dhruv Nair
e7fe4ce92f [AutoModel] Fix bug with subfolders and local model paths when loading custom code (#13197)
* update

* update
2026-03-02 17:44:25 +05:30
Sayak Paul
5b54496131 [tests] enable cpu offload test in torchao without compilation. (#12704)
enable cpu offload test in torchao without compilation.
2026-03-02 15:03:58 +05:30
YiYi Xu
39188248a7 [modular] fallback to default_blocks_name when loading base block classes in ModularPipeline (#13193)
up

Co-authored-by: yiyi@huggingface.co <yiyi@ip-26-0-161-123.ec2.internal>
2026-02-27 18:58:01 -10:00
Sayak Paul
9b97932424 [tests] consistency tests for modular index (#13192)
* add a test to check modular index consistency

* check for compulsory keys.
2026-02-28 08:47:21 +05:30
Jerry Song
40e96454f1 Fix LTX-2 image-to-video generation failure in two stages generation (#13187)
* Fix LTX-2 image-to-video generation failure in two stages generation

In LTX-2's two-stage image-to-video generation task, specifically after
the upsampling step, a shape mismatch occurs between the `latents` and
the `conditioning_mask`, which causes an error in function
`_create_noised_state`.

Fix it by creating the `conditioning_mask` based on the shape of the
`latents`.
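
Schematically, the fix derives the mask from the (possibly upsampled) latents themselves so the two shapes can never diverge (names follow the commit message; the exact latent layout is an assumption):

```python
import torch

latents = torch.randn(1, 8, 128, 16, 16)   # e.g. (batch, frames, channels, height, width) after upsampling

# Build the conditioning mask from the latents' own shape, then mark the conditioning (first) frame.
conditioning_mask = torch.zeros(latents.shape[0], latents.shape[1], device=latents.device)
conditioning_mask[:, 0] = 1.0
```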

* Add unit test for LTX-2 i2v two stages inference with upsampler

* Downscaling the upsampler in LTX-2 image-to-video unit test

* Apply style fixes

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-27 00:55:01 -08:00
Miguel Martin
212db7b999 Cosmos Transfer2.5 Auto-Regressive Inference Pipeline (#13114)
* AR

* address comments

* address comments 2
2026-02-25 14:42:29 -10:00
Sayak Paul
5e94d62eb4 migrate to transformers v5 (#12976)
* switch to transformers main again./

* more

* up

* up

* fix group offloading.

* attributes

* up

* up

* tie embedding issue.

* fix t5 stuff for more.

* matrix configuration to see differences between 4.57.3 and main failures.

* change qwen expected slice because of how init is handled in v5.

* same stuff.

* up

* up

* Revert "up"

This reverts commit 515dd06db5.

* Revert "up"

This reverts commit 5274ffdd7f.

* up

* up

* fix with peft_format.

* just keep main for easier debugging.

* remove torchvision.

* empty

* up

* up with skyreelsv2 fixes.

* fix skyreels type annotation.

* up

* up

* fix variant loading issues.

* more fixes.

* fix dduf

* fix

* fix

* fix

* more fixes

* fixes

* up

* up

* fix dduf test

* up

* more

* update

* hopefully, final?

* one last breath

* always install from main

* up

* audioldm tests

* up

* fix PRX tests.

* up

* kandinsky fixes

* qwen fixes.

* prx

* hidream
2026-02-24 10:53:56 +05:30
Dhruv Nair
4890e9bf70 Allow Automodel to use from_config with custom code. (#13123)
* update

* update
2026-02-23 21:55:59 +05:30
Dhruv Nair
db2d7e7bc4 [CI] Fix new LoRAHotswap tests (#13163)
update

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2026-02-20 09:01:20 +05:30
dg845
fe78a7b7c6 Fix ftfy import for PRX Pipeline (#13154)
* Guard ftfy import with is_ftfy_available

* Remove xfail for PRX pipeline tests as they appear to work on transformers>4.57.1

* make style and make quality
2026-02-18 20:44:33 -08:00
Sayak Paul
35086ac06a [core] support device type device_maps to work with offloading. (#12811)
* support device type device_maps to work with offloading.

* add tests.

* fix tests

* skip tests where it's not supported.

* empty

* up

* up

* fix allegro.
2026-02-16 16:31:45 +05:30
Sayak Paul
e390646f25 [tests] accept recompile_limit from the user in tests (#13150)
accept recompile_limit from the user in tests
2026-02-16 14:48:21 +05:30
YiYi Xu
c919ec0611 [Modular] add explicit workflow support (#13028)
* up

* up up

* update outputs

* style

* add modular_auto_docstring!

* more auto docstring

* style

* up up up

* more more

* up

* address feedbacks

* add TODO in the description for empty docstring

* refactor based on dhruv's feedback: remove the class method

* add template method

* up

* up up up

* apply auto docstring

* make style

* remove space in make docstring

* Apply suggestions from code review

* revert change in z

* fix

* Apply style fixes

* include auto-docstring check in the modular ci. (#13004)

* initial support: workflow

* up up

* treat loop sequential pipeline blocks as leaf

* update qwen image docstring note

* add workflow support for sdxl

* add a test suit

* add test for qwen-image

* refactor flux a bit, separate modular_blocks into modular_blocks_flux and modular_blocks_flux_kontext + support workflow

* refactor flux2: separate blocks for klein_base + workflow

* qwen: remove import support for stuff other than the default blocks

* add workflow support for wan

* sdxl: remove some imports

* refactor z

* update flux2 auto core denoise

* add workflow test for z and flux2

* Apply suggestions from code review

* Apply suggestions from code review

* add test for flux

* add workflow test for flux

* add test for flux-klein

* sdxl: modular_blocks.py -> modular_blocks_stable_diffusion_xl.py

* style

* up

* add auto docstring

* workflow_names -> available_workflows

* fix workflow test for klein base

* Apply suggestions from code review

Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>

* fix workflow tests

* qwen: edit -> image_conditioned to be consistent with flux kontext/flux2

* remove Optional

* update type hints

* update guider update_components

* fix more

* update docstring auto again

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: yiyi@huggingface.co <yiyi@ip-26-0-160-103.ec2.internal>
Co-authored-by: yiyi@huggingface.co <yiyi@ip-26-0-161-123.ec2.internal>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2026-02-14 16:18:48 -10:00
Sayak Paul
2843b3d37a Sunset Python 3.8 & get rid of explicit typing exports where possible (#12524)
* drop python 3.8

* remove list, tuple, dict from typing

* fold Unions into | (see the sketch after this entry)

* up

* fix a bunch and please me.

* up

* up

* up

* up

* up

* up

* enforce 3.10.0.

* up

* up

* up

* up

* up

* up

* up

* up

* Update setup.py

Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>

* up.

* python 3.10.

* fix

* up

* up

* up

* up

* final

* up

* fix typing utils.

* up

* up

* up

* up

* up

* up

* fix

* up

* up

* up

* up

* up

* up

* handle modern types.

* up

* up

* fix ip adapter type checking.

* up

* up

* up

* up

* up

* up

* up

* revert docstring changes.

* keep deleted files deleted.

* keep deleted files deleted.

---------

Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2026-02-13 18:16:51 +05:30
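
The typing modernization in the commit above (builtin generics and `|` unions instead of `typing` exports) looks like this in practice:

```python
# Before (Python 3.8-era typing)
from typing import Dict, List, Optional, Union

def load(paths: Optional[List[str]] = None) -> Dict[str, Union[int, float]]: ...

# After (Python 3.10+)
def load(paths: list[str] | None = None) -> dict[str, int | float]: ...
```
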
Sayak Paul
ed77a246c9 [modular] add tests for robust model loading. (#13120)
* add tests for robust model loading.

* apply review feedback.
2026-02-12 10:04:29 +05:30
Miguel Martin
a1816166a5 Cosmos Transfer2.5 inference pipeline: general/{seg, depth, blur, edge} (#13066)
* initial conversion script

* cosmos control net block

* CosmosAttention

* base model conversion

* wip

* pipeline updates

* convert controlnet

* pipeline: working without controls

* wip

* debugging

* Almost working

* temp

* control working

* cleanup + detail on neg_encoder_hidden_states

* convert edge

* pos emb for control latents

* convert all chkpts

* resolve TODOs

* remove prints

* Docs

* add siglip image reference encoder

* Add unit tests

* controlnet: add duplicate layers

* Additional tests

* skip less

* skip less

* remove image_ref

* minor

* docs

* remove skipped test in transfer

* Don't crash process

* formatting

* revert some changes

* remove skipped test

* make style

* Address comment + fix example

* CosmosAttnProcessor2_0 revert + CosmosAttnProcessor2_5 changes

* make style

* make fix-copies
2026-02-11 18:33:09 -10:00
Jared Wen
d32483913a [Fix]Allow prompt and prior_token_ids to be provided simultaneously in GlmImagePipeline (#13092)
* allow loose input

Signed-off-by: JaredforReal <w13431838023@gmail.com>

* add tests

Signed-off-by: JaredforReal <w13431838023@gmail.com>

* format test_glm_image

Signed-off-by: JaredforReal <w13431838023@gmail.com>

---------

Signed-off-by: JaredforReal <w13431838023@gmail.com>
2026-02-11 08:29:36 -10:00