dg845
33f785b444
Add Helios-14B Video Generation Pipelines (#13208)
...
* [1/N] add helios
* fix test
* make fix-copies
* change script path
* fix cus script
* update docs
* fix documentation check
* update links for docs and examples
* change default config
* small refactor
* add test
* Update src/diffusers/models/transformers/transformer_helios.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* remove register_buffer for _scale_cache
* fix error on non-CUDA devices
* remove "handle the case when timestep is 2D"
* refactor HeliosMultiTermMemoryPatch and process_input_hidden_states
* Update src/diffusers/pipelines/helios/pipeline_helios.py
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
* Update src/diffusers/models/transformers/transformer_helios.py
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
* Update src/diffusers/pipelines/helios/pipeline_helios.py
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
* fix calculate_shift
* Update src/diffusers/pipelines/helios/pipeline_helios.py
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
* rewrite `einops` operations in pure `torch`
* fix: pass patch_size to apply_schedule_shift instead of hardcoding
* remove the logic of 'vae_decode_type'
* move some validation into check_inputs()
* rename helios scheduler & merge all into one step()
* add some details to doc
* move DMD step() logic from pipeline to scheduler
* change to Python 3.9+ style type hints
* fix NoneType error
* refactor DMD scheduler's set_timestep
* rename RoPE-related variables
* fix stage2 sample
* fix dmd sample
* Update src/diffusers/models/transformers/transformer_helios.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Update src/diffusers/models/transformers/transformer_helios.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* remove redundant & refactor norm_out
* Update src/diffusers/pipelines/helios/pipeline_helios.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* change "is_keep_x0" to "keep_first_frame"
* use a more intuitive name
* refactor dynamic_time_shifting
* remove use_dynamic_shifting args
* remove usage of UniPCMultistepScheduler
* separate stage2 sample to HeliosPyramidPipeline
* Update src/diffusers/models/transformers/transformer_helios.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Update src/diffusers/models/transformers/transformer_helios.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Update src/diffusers/models/transformers/transformer_helios.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Update src/diffusers/models/transformers/transformer_helios.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* fix transformer
* use a more intuitive name
* update example script
* fix requirements
* remove redundant attention mask
* fix
* optimize pipelines
* make style
* update TYPE_CHECKING
* change to use torch.split
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
* derive memory patch sizes from patch_size multiples
* remove some hardcoding
* move some checks into check_inputs
* refactor sample_block_noise
* optimize encoding chunks logic for v2v
* use num_history_latent_frames = sum(history_sizes)
* Update src/diffusers/pipelines/helios/pipeline_helios.py
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
* remove redundant optimized_scale
* Update src/diffusers/pipelines/helios/pipeline_helios_pyramid.py
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
* use more descriptive name
* optimize history_latents
* remove unused "num_inference_steps"
* remove redundant "pyramid_num_stages"
* add "is_cfg_zero_star" and "is_distilled" to HeliosPyramidPipeline
* remove redundant code
* change example script names
* change example script names
* correct docs
* update example
* update docs
* Update tests/models/transformers/test_models_transformer_helios.py
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
* Update tests/models/transformers/test_models_transformer_helios.py
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
* separate HeliosDMDScheduler
* fix numerical stability issue
* Update src/diffusers/schedulers/scheduling_helios_dmd.py
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
* Update src/diffusers/schedulers/scheduling_helios_dmd.py
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
* Update src/diffusers/schedulers/scheduling_helios_dmd.py
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
* Update src/diffusers/schedulers/scheduling_helios_dmd.py
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
* Update src/diffusers/schedulers/scheduling_helios_dmd.py
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
* remove redundant code
* small refactor
* remove use_interpolate_prompt logic
* simplified model test
* fallback to BaseModelTesterConfig
* remove _maybe_expand_t2v_lora_for_i2v
* fix HeliosLoraLoaderMixin
* update docs
* use randn_tensor for test
* fix doc typo
* optimize code
* mark torch.compile xfail
* change paper name
* Make get_dummy_inputs deterministic using self.generator
* Set less strict threshold for test_save_load_float16 test for Helios pipeline
* make style and make quality
* Preparation for merging
* add torch.Generator
* Fix HeliosPipelineOutput doc path
* Fix Helios related issues (optimize docs & remove redundant code) (#13210)
* fix docs
* remove redundant code
* remove redundant code
* fix group offload
* Removed fixes for group offload
---------
Co-authored-by: yuanshenghai <yuanshenghai@bytedance.com>
Co-authored-by: Shenghai Yuan <140951558+SHYuanBest@users.noreply.github.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: SHYuanBest <shyuan-cs@hotmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2026-03-04 21:31:43 +05:30