Shenghai Yuan
ae5881ba77
Fix Helios paper link in documentation ( #13213 )
...
* Fix Helios paper link in documentation
Updated the link to the Helios paper for accuracy.
* Fix reference link in HeliosTransformer3DModel documentation
Updated the reference link for the Helios Transformer model paper.
* Update Helios research paper link in documentation
* Update Helios research paper link in documentation
2026-03-05 18:58:13 +05:30
dg845
33f785b444
Add Helios-14B Video Generation Pipelines ( #13208 )
...
* [1/N] add helios
* fix test
* make fix-copies
* change script path
* fix cus script
* update docs
* fix documented check
* update links for docs and examples
* change default config
* small refactor
* add test
* Update src/diffusers/models/transformers/transformer_helios.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* remove register_buffer for _scale_cache
* fix non-cuda devices error
* remove "handle the case when timestep is 2D"
* refactor HeliosMultiTermMemoryPatch and process_input_hidden_states
* Update src/diffusers/pipelines/helios/pipeline_helios.py
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com >
* Update src/diffusers/models/transformers/transformer_helios.py
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com >
* Update src/diffusers/pipelines/helios/pipeline_helios.py
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com >
* fix calculate_shift
* Update src/diffusers/pipelines/helios/pipeline_helios.py
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com >
* rewrite `einops` operations in pure `torch`
* fix: pass patch_size to apply_schedule_shift instead of hardcoding
* remove the logic of 'vae_decode_type'
* move some validation into check_inputs()
* rename helios scheduler & merge all into one step()
* add some details to doc
* move dmd step() logic from pipeline to scheduler
* change to Python 3.9+ style type
* fix NoneType error
* refactor DMD scheduler's set_timestep
* change rope related vars name
* fix stage2 sample
* fix dmd sample
* Update src/diffusers/models/transformers/transformer_helios.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* Update src/diffusers/models/transformers/transformer_helios.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* remove redundant & refactor norm_out
* Update src/diffusers/pipelines/helios/pipeline_helios.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* change "is_keep_x0" to "keep_first_frame"
* use a more intuitive name
* refactor dynamic_time_shifting
* remove use_dynamic_shifting args
* remove usage of UniPCMultistepScheduler
* separate stage2 sample to HeliosPyramidPipeline
* Update src/diffusers/models/transformers/transformer_helios.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* Update src/diffusers/models/transformers/transformer_helios.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* Update src/diffusers/models/transformers/transformer_helios.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* Update src/diffusers/models/transformers/transformer_helios.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* fix transformer
* use a more intuitive name
* update example script
* fix requirements
* remove redundant attention mask
* fix
* optimize pipelines
* make style
* update TYPE_CHECKING
* change to use torch.split
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com >
* derive memory patch sizes from patch_size multiples
* remove some hardcoding
* move some checks into check_inputs
* refactor sample_block_noise
* optimize encoding chunk logic for v2v
* use num_history_latent_frames = sum(history_sizes)
* Update src/diffusers/pipelines/helios/pipeline_helios.py
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com >
* remove redundant optimized_scale
* Update src/diffusers/pipelines/helios/pipeline_helios_pyramid.py
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com >
* use more descriptive name
* optimize history_latents
* remove unused "num_inference_steps"
* removed redundant "pyramid_num_stages"
* add "is_cfg_zero_star" and "is_distilled" to HeliosPyramidPipeline
* remove redundant
* change example scripts name
* change example scripts name
* correct docs
* update example
* update docs
* Update tests/models/transformers/test_models_transformer_helios.py
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com >
* Update tests/models/transformers/test_models_transformer_helios.py
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com >
* separate HeliosDMDScheduler
* fix numerical stability issue
* Update src/diffusers/schedulers/scheduling_helios_dmd.py
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com >
* Update src/diffusers/schedulers/scheduling_helios_dmd.py
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com >
* Update src/diffusers/schedulers/scheduling_helios_dmd.py
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com >
* Update src/diffusers/schedulers/scheduling_helios_dmd.py
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com >
* Update src/diffusers/schedulers/scheduling_helios_dmd.py
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com >
* remove redundant
* small refactor
* remove use_interpolate_prompt logic
* simplified model test
* fallback to BaseModelTesterConfig
* remove _maybe_expand_t2v_lora_for_i2v
* fix HeliosLoraLoaderMixin
* update docs
* use randn_tensor for test
* fix doc typo
* optimize code
* mark torch.compile xfail
* change paper name
* Make get_dummy_inputs deterministic using self.generator
* Set less strict threshold for test_save_load_float16 test for Helios pipeline
* make style and make quality
* Preparation for merging
* add torch.Generator
* Fix HeliosPipelineOutput doc path
* Fix Helios related (optimize docs & remove redundant) (#13210 )
* fix docs
* remove redundant
* remove redundant
* fix group offload
* Removed fixes for group offload
---------
Co-authored-by: yuanshenghai <yuanshenghai@bytedance.com >
Co-authored-by: Shenghai Yuan <140951558+SHYuanBest@users.noreply.github.com >
Co-authored-by: YiYi Xu <yixu310@gmail.com >
Co-authored-by: SHYuanBest <shyuan-cs@hotmail.com >
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2026-03-04 21:31:43 +05:30
Dhruv Nair
3fd14f1acf
[AutoModel] Allow registering auto_map to model config ( #13186 )
...
* update
* update
2026-03-02 22:13:25 +05:30
Dhruv Nair
bedc67c75f
[Docs] Add guide for AutoModel with custom code ( #13099 )
...
update
2026-02-10 12:19:44 +05:30
Steven Liu
40de88af8c
[docs] AutoModel ( #12644 )
...
* automodel
* fix
2025-11-13 08:43:24 -08:00
Ali Imran
1b456bd5d5
docs: cleanup of runway model ( #12503 )
...
* cleanup of runway model
* quality fixes
2025-10-17 14:10:50 -07:00
Steven Liu
b4e6dc3037
[docs] Fix broken links ( #12487 )
...
fix broken links
2025-10-15 06:42:10 +05:30
Steven Liu
3eb40786ca
[docs] Prompting ( #12312 )
...
* init
* fix
* batch inf
* feedback
* update
2025-10-14 13:53:56 -07:00
Steven Liu
cc5b31ffc9
[docs] Migrate syntax ( #12390 )
...
* change syntax
* make style
2025-09-30 10:11:19 -07:00
Steven Liu
c07fcf780a
[docs] Model formats ( #12256 )
...
* init
* config
* lora metadata
* feedback
* fix
* cache allocator warmup for from_single_file
* feedback
* feedback
2025-09-29 11:36:14 -07:00
Steven Liu
76810eca2b
[docs] Schedulers ( #12246 )
...
* init
* toctree
* scheduler suggestions
* toctree
2025-09-23 10:29:16 -07:00
Sayak Paul
eb7ef26736
[quant] allow components_to_quantize to be a non-list for single components ( #12234 )
...
* allow non list components_to_quantize.
* up
* Apply suggestions from code review
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* [docs] components_to_quantize (#12287 )
init
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
2025-09-10 09:47:08 -10:00
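PR #12234 above lets `components_to_quantize` accept a single component name as well as a list. A minimal sketch of the normalization this implies (the helper below is illustrative, not the library's actual code; the commented `PipelineQuantizationConfig` usage assumes diffusers' documented pipeline-level quantization API):

```python
# Illustrative helper mirroring the behavior PR #12234 describes:
# accept either one component name or a list of names. Not diffusers' code.
def normalize_components(components):
    if isinstance(components, str):
        return [components]
    return list(components or [])

if __name__ == "__main__":
    # Both forms now describe the same set of components to quantize:
    print(normalize_components("transformer"))  # ['transformer']
    print(normalize_components(["transformer", "text_encoder"]))

    # With diffusers, either form would be accepted (hedged sketch):
    # from diffusers.quantizers import PipelineQuantizationConfig
    # PipelineQuantizationConfig(
    #     quant_backend="bitsandbytes_4bit",
    #     components_to_quantize="transformer",  # or ["transformer", ...]
    # )
```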
Steven Liu
fc337d5853
[docs] Models ( #12248 )
...
* init
* fix
* feedback
* feedback
2025-09-05 11:52:09 -07:00
Steven Liu
32798bf242
[docs] Inference section cleanup ( #12281 )
...
init
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2025-09-05 09:34:37 -07:00
Steven Liu
c2e5ece08b
[docs] Sharing pipelines/models ( #12280 )
...
init
2025-09-04 11:43:47 -07:00
Steven Liu
cbecc33570
[docs] Reproducibility ( #12237 )
...
* init
* dupe
* feedback
2025-08-27 11:35:31 -07:00
Steven Liu
5237a82a35
[docs] Remove Flax ( #12244 )
...
* remove flax
* toctree
* feedback
2025-08-27 11:11:07 -07:00
Manith Ratnayake
552c127c05
docs: correct typos in using-diffusers/other-formats ( #12243 )
2025-08-26 08:48:05 -07:00
Tianqi Tang
4b7fe044e3
Fix typos and inconsistencies ( #12204 )
...
Fix typos and test assertions
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2025-08-26 07:58:08 -07:00
Steven Liu
2c4ee10b77
[docs] Diffusion pipeline ( #12148 )
...
* init
* refactor
* refresh
* fix?
* fix?
* fix
* fix-copies
* feedback
* feedback
* fix
* feedback
2025-08-25 11:06:12 -07:00
Steven Liu
b60faf456b
[docs] Pipeline callbacks ( #12212 )
...
* init
* review
2025-08-22 13:01:24 -07:00
Steven Liu
3e73dc24a4
[docs] Community pipelines ( #12201 )
...
* refresh
* feedback
2025-08-22 10:42:13 -07:00
Steven Liu
421ee07e33
[docs] Parallel loading of shards ( #12135 )
...
* initial
* feedback
* Update docs/source/en/using-diffusers/loading.md
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2025-08-14 09:39:40 +05:30
Steven Liu
c6fbcf717b
[docs] Update toctree ( #11936 )
...
* update
* fix
* feedback
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2025-07-18 13:37:04 -07:00
Steven Liu
ce338d4e4a
[docs] LoRA metadata ( #11848 )
...
* draft
* hub image
* update
* fix
2025-07-08 08:29:38 -07:00
Aryan
8c938fb410
[docs] Add a note of _keep_in_fp32_modules ( #11851 )
...
* update
* Update docs/source/en/using-diffusers/schedulers.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update schedulers.md
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
2025-07-02 15:51:57 -07:00
Steven Liu
d31b8cea3e
[docs] Batch generation ( #11841 )
...
* draft
* fix
* fix
* feedback
* feedback
2025-07-01 17:00:20 -07:00
Aryan
a4df8dbc40
Update more licenses to 2025 ( #11746 )
...
update
2025-06-19 07:46:01 +05:30
Steven Liu
c934720629
[docs] Model cards ( #11112 )
...
* initial
* update
* hunyuanvideo
* ltx
* fix
* wan
* gen guide
* feedback
* feedback
* pipeline-level quant config
* feedback
* ltx
2025-06-02 16:55:14 -07:00
Steven Liu
be2fb77dc1
[docs] PyTorch 2.0 ( #11618 )
...
* combine
* Update docs/source/en/optimization/fp16.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2025-05-28 09:42:41 -07:00
Quentin Gallouédec
c8bb1ff53e
Use HF Papers ( #11567 )
...
* Use HF Papers
* Apply style fixes
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-05-19 06:22:33 -10:00
space_samurai
8270fa58e4
Doc update ( #11531 )
...
Update docs/source/en/using-diffusers/inpaint.md
2025-05-19 13:32:08 +05:30
Steven Liu
e23705e557
[docs] Adapters ( #11331 )
...
* refactor adapter docs
* ip-adapter
* ip adapter
* fix toctree
* fix toctree
* lora
* images
* controlnet
* feedback
* controlnet
* t2i
* fix typo
* feedback
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2025-05-02 08:08:33 +05:30
co63oc
86294d3c7f
Fix typos in docs and comments ( #11416 )
...
* Fix typos in docs and comments
* Apply style fixes
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-04-30 20:30:53 -10:00
Sayak Paul
cefa28f449
[docs] Promote AutoModel usage ( #11300 )
...
* docs: promote the usage of automodel.
* bitsandbytes
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
2025-04-15 09:25:40 +05:30
Sayak Paul
f685981ed0
[docs] minor updates to dtype map docs. ( #11237 )
...
minor updates to dtype map docs.
2025-04-09 08:38:17 +05:30
Benjamin Bossan
fb54499614
[LoRA] Implement hot-swapping of LoRA ( #9453 )
...
* [WIP][LoRA] Implement hot-swapping of LoRA
This PR adds the possibility to hot-swap LoRA adapters. It is WIP.
Description
As of now, users can already load multiple LoRA adapters. They can
offload existing adapters or they can unload them (i.e. delete them).
However, they cannot "hotswap" adapters yet, i.e. substitute the weights
from one LoRA adapter with the weights of another, without the need to
create a separate LoRA adapter.
Generally, hot-swapping may not appear super useful, but when the
model is compiled, it is necessary to prevent recompilation. See #9279
for more context.
Caveats
To hot-swap a LoRA adapter for another, these two adapters should target
exactly the same layers and the "hyper-parameters" of the two adapters
should be identical. For instance, the LoRA alpha has to be the same:
Given that we keep the alpha from the first adapter, the LoRA scaling
would be incorrect for the second adapter otherwise.
Theoretically, we could override the scaling dict with the alpha values
derived from the second adapter's config, but changing the dict will
trigger a guard for recompilation, defeating the main purpose of the
feature.
I also found that compilation flags can have an impact on whether this
works or not. E.g. when passing "reduce-overhead", there will be errors
of the type:
> input name: arg861_1. data pointer changed from 139647332027392 to
139647331054592
I don't know enough about compilation to determine whether this is
problematic or not.
Current state
This is obviously WIP right now to collect feedback and discuss which
direction to take this. If this PR turns out to be useful, the
hot-swapping functions will be added to PEFT itself and can be imported
here (or there is a separate copy in diffusers to avoid the need for a
min PEFT version to use this feature).
Moreover, more tests need to be added to better cover this feature,
although we don't necessarily need tests for the hot-swapping
functionality itself, since those tests will be added to PEFT.
Furthermore, as of now, this is only implemented for the unet. Other
pipeline components have yet to implement this feature.
Finally, it should be properly documented.
I would like to collect feedback on the current state of the PR before
putting more time into finalizing it.
* Reviewer feedback
* Reviewer feedback, adjust test
* Fix, doc
* Make fix
* Fix for possible g++ error
* Add test for recompilation w/o hotswapping
* Make hotswap work
Requires https://github.com/huggingface/peft/pull/2366
More changes to make hotswapping work. Together with the mentioned PEFT
PR, the tests pass for me locally.
List of changes:
- docstring for hotswap
- remove code copied from PEFT, import from PEFT now
- adjustments to PeftAdapterMixin.load_lora_adapter (unfortunately, some
state dict renaming was necessary, LMK if there is a better solution)
- adjustments to UNet2DConditionLoadersMixin._process_lora: LMK if this
is even necessary or not, I'm unsure what the overall relationship is
between this and PeftAdapterMixin.load_lora_adapter
- also in UNet2DConditionLoadersMixin._process_lora, I saw that there is
no LoRA unloading when loading the adapter fails, so I added it
there (in line with what happens in PeftAdapterMixin.load_lora_adapter)
- rewritten tests to avoid shelling out, make the test more precise by
making sure that the outputs align, parametrize it
- also checked the pipeline code mentioned in this comment:
https://github.com/huggingface/diffusers/pull/9453#issuecomment-2418508871 ;
when running this inside the with
torch._dynamo.config.patch(error_on_recompile=True) context, there is
no error, so I think hotswapping is now working with pipelines.
* Address reviewer feedback:
- Revert deprecated method
- Fix PEFT doc link to main
- Don't use private function
- Clarify magic numbers
- Add pipeline test
Moreover:
- Extend docstrings
- Extend existing test for outputs != 0
- Extend existing test for wrong adapter name
* Change order of test decorators
parameterized.expand seems to ignore skip decorators if added in last
place (i.e. innermost decorator).
* Split model and pipeline tests
Also increase test coverage by also targeting conv2d layers (support of
which was added recently on the PEFT PR).
* Reviewer feedback: Move decorator to test classes
... instead of having them on each test method.
* Apply suggestions from code review
Co-authored-by: hlky <hlky@hlky.ac >
* Reviewer feedback: version check, TODO comment
* Add enable_lora_hotswap method
* Reviewer feedback: check _lora_loadable_modules
* Revert changes in unet.py
* Add possibility to ignore enabled at wrong time
* Fix docstrings
* Log possible PEFT error, test
* Raise helpful error if hotswap not supported
I.e. for the text encoder
* Formatting
* More linter
* More ruff
* Doc-builder complaint
* Update docstring:
- mention no text encoder support yet
- make it clear that LoRA is meant
- mention that same adapter name should be passed
* Fix error in docstring
* Update more methods with hotswap argument
- SDXL
- SD3
- Flux
No changes were made to load_lora_into_transformer.
* Add hotswap argument to load_lora_into_transformer
For SD3 and Flux. Use shorter docstring for brevity.
* Extend docstrings
* Add version guards to tests
* Formatting
* Fix LoRA loading call to add prefix=None
See:
https://github.com/huggingface/diffusers/pull/10187#issuecomment-2717571064
* Run make fix-copies
* Add hot swap documentation to the docs
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
Co-authored-by: hlky <hlky@hlky.ac >
Co-authored-by: YiYi Xu <yixu310@gmail.com >
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
2025-04-08 17:05:31 +05:30
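The caveats described in PR #9453 above (both adapters must target exactly the same layers and share hyper-parameters such as rank and LoRA alpha, since the first adapter's scaling is kept and changing it would trigger a recompilation guard) can be captured in a small compatibility check. The helper below is illustrative only; the commented diffusers calls follow the `hotswap` argument and `enable_lora_hotswap` method named in the commit log:

```python
# Illustrative compatibility check for hot-swapping two LoRA adapter
# configs, based on the constraints described above. Not diffusers' code.
def can_hotswap(cfg_a, cfg_b):
    # Same target layers, same rank, same alpha: otherwise the kept
    # scaling from the first adapter would be wrong for the second.
    return (
        set(cfg_a["target_modules"]) == set(cfg_b["target_modules"])
        and cfg_a["r"] == cfg_b["r"]
        and cfg_a["lora_alpha"] == cfg_b["lora_alpha"]
    )

if __name__ == "__main__":
    a = {"target_modules": ["to_q", "to_v"], "r": 8, "lora_alpha": 8}
    b = {"target_modules": ["to_q", "to_v"], "r": 8, "lora_alpha": 16}
    print(can_hotswap(a, b))  # False: alpha differs

    # User-facing flow (names taken from the commit log; hedged sketch):
    # pipe.enable_lora_hotswap(target_rank=8)
    # pipe.load_lora_weights("some-repo/lora-1", adapter_name="default")
    # ... compile the model, generate ...
    # pipe.load_lora_weights("some-repo/lora-2", adapter_name="default",
    #                        hotswap=True)  # swaps weights, no recompile
```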
hlky
e5c6027ef8
[docs] torch_dtype map ( #11194 )
2025-04-02 12:46:28 +01:00
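The `torch_dtype` map documented in PR #11194 lets each pipeline component load in its own precision, with a "default" entry as the fallback. A hedged sketch of the resolution semantics (the lookup helper is illustrative, not diffusers' implementation; the commented call assumes the documented dict form of `torch_dtype`):

```python
# Illustrative lookup mirroring how a per-component dtype map resolves;
# not diffusers' actual implementation.
def resolve_dtype(dtype_map, component):
    # Fall back to the "default" entry for components not listed explicitly.
    return dtype_map.get(component, dtype_map.get("default"))

if __name__ == "__main__":
    dtype_map = {"transformer": "bfloat16", "default": "float16"}
    print(resolve_dtype(dtype_map, "transformer"))   # bfloat16
    print(resolve_dtype(dtype_map, "text_encoder"))  # float16

    # With diffusers (sketch; the repo id is a placeholder):
    # import torch
    # from diffusers import DiffusionPipeline
    # pipe = DiffusionPipeline.from_pretrained(
    #     "some-org/some-pipeline",
    #     torch_dtype={"transformer": torch.bfloat16, "default": torch.float16},
    # )
```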
Parag Ekbote
982f9b38d6
Add Example of IPAdapterScaleCutoffCallback to Docs ( #10934 )
...
* Add example of Ip-Adapter-Callback.
* Add image links from HF Hub.
2025-03-03 08:32:45 -08:00
Anton Obukhov
3fab6624fd
Marigold Update: v1-1 models, Intrinsic Image Decomposition pipeline, documentation ( #10884 )
...
* minor documentation fixes of the depth and normals pipelines
* update license headers
* update model checkpoints in examples
fix missing prediction_type in register_to_config in the normals pipeline
* add initial marigold intrinsics pipeline
update comments about num_inference_steps and ensemble_size
minor fixes in comments of marigold normals and depth pipelines
* update uncertainty visualization to work with intrinsics
* integrate iid
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com >
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
2025-02-25 14:13:02 -10:00
Steven Liu
3fdf173084
[docs] Update prompt weighting docs ( #10843 )
...
* sd_embed
* feedback
2025-02-24 08:46:26 -08:00
Aryan
57ac673802
Refactor OmniGen ( #10771 )
...
* OmniGen model.py
* update OmniGenTransformerModel
* omnigen pipeline
* omnigen pipeline
* update omnigen_pipeline
* test case for omnigen
* update omnigenpipeline
* update docs
* update docs
* offload_transformer
* enable_transformer_block_cpu_offload
* update docs
* reformat
* reformat
* reformat
* update docs
* update docs
* make style
* make style
* Update docs/source/en/api/models/omnigen_transformer.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/using-diffusers/omnigen.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/using-diffusers/omnigen.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* update docs
* revert changes to examples/
* update OmniGen2DModel
* make style
* update test cases
* Update docs/source/en/api/pipelines/omnigen.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/using-diffusers/omnigen.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/using-diffusers/omnigen.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/using-diffusers/omnigen.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/using-diffusers/omnigen.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/using-diffusers/omnigen.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* update docs
* typo
* Update src/diffusers/models/embeddings.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/models/attention.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/models/transformers/transformer_omnigen.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/models/transformers/transformer_omnigen.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/models/transformers/transformer_omnigen.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/pipelines/omnigen/pipeline_omnigen.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/pipelines/omnigen/pipeline_omnigen.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/pipelines/omnigen/pipeline_omnigen.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update tests/pipelines/omnigen/test_pipeline_omnigen.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update tests/pipelines/omnigen/test_pipeline_omnigen.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/pipelines/omnigen/pipeline_omnigen.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/pipelines/omnigen/pipeline_omnigen.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/pipelines/omnigen/pipeline_omnigen.py
Co-authored-by: hlky <hlky@hlky.ac >
* consistent attention processor
* update
* update
* check_inputs
* make style
* update testpipeline
* update testpipeline
* refactor omnigen
* more updates
* apply review suggestion
---------
Co-authored-by: shitao <2906698981@qq.com >
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
Co-authored-by: hlky <hlky@hlky.ac >
2025-02-12 14:06:14 +05:30
Shitao Xiao
798e17187d
Add OmniGen ( #10148 )
...
* OmniGen model.py
* update OmniGenTransformerModel
* omnigen pipeline
* omnigen pipeline
* update omnigen_pipeline
* test case for omnigen
* update omnigenpipeline
* update docs
* update docs
* offload_transformer
* enable_transformer_block_cpu_offload
* update docs
* reformat
* reformat
* reformat
* update docs
* update docs
* make style
* make style
* Update docs/source/en/api/models/omnigen_transformer.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/using-diffusers/omnigen.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/using-diffusers/omnigen.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* update docs
* revert changes to examples/
* update OmniGen2DModel
* make style
* update test cases
* Update docs/source/en/api/pipelines/omnigen.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/using-diffusers/omnigen.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/using-diffusers/omnigen.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/using-diffusers/omnigen.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/using-diffusers/omnigen.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/using-diffusers/omnigen.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* update docs
* typo
* Update src/diffusers/models/embeddings.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/models/attention.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/models/transformers/transformer_omnigen.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/models/transformers/transformer_omnigen.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/models/transformers/transformer_omnigen.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/pipelines/omnigen/pipeline_omnigen.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/pipelines/omnigen/pipeline_omnigen.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/pipelines/omnigen/pipeline_omnigen.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update tests/pipelines/omnigen/test_pipeline_omnigen.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update tests/pipelines/omnigen/test_pipeline_omnigen.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/pipelines/omnigen/pipeline_omnigen.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/pipelines/omnigen/pipeline_omnigen.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/pipelines/omnigen/pipeline_omnigen.py
Co-authored-by: hlky <hlky@hlky.ac >
* consistent attention processor
* update
* update
* check_inputs
* make style
* update testpipeline
* update testpipeline
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
Co-authored-by: hlky <hlky@hlky.ac >
Co-authored-by: Aryan <aryan@huggingface.co >
2025-02-12 02:16:38 +05:30
Parag Ekbote
3e35f56b00
Fix Documentation about Image-to-Image Pipeline ( #10704 )
...
Fix Doc Tutorial.
2025-02-03 09:54:00 -08:00
Ikpreet S Babra
537891e693
Fixed grammar in "write_own_pipeline" readme ( #10706 )
2025-02-03 09:53:30 -08:00
Shenghai Yuan
23b467c79c
[core] ConsisID ( #10140 )
...
* Update __init__.py
* add consisid
* update consisid
* update consisid
* make style
* make_style
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py
Co-authored-by: hlky <hlky@hlky.ac >
* add doc
* make style
* Rename consisid .md to consisid.md
* Update geodiff_molecule_conformation.ipynb
* Update geodiff_molecule_conformation.ipynb
* Update geodiff_molecule_conformation.ipynb
* Update demo.ipynb
* Update pipeline_consisid.py
* make fix-copies
* Update docs/source/en/using-diffusers/consisid.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/using-diffusers/consisid.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/using-diffusers/consisid.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* update doc & pipeline code
* fix typo
* make style
* update example
* Update docs/source/en/using-diffusers/consisid.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* update example
* update example
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py
Co-authored-by: hlky <hlky@hlky.ac >
* update
* add test and update
* remove some changes from docs
* refactor
* fix
* undo changes to examples
* remove save/load and fuse methods
* update
* link hf-doc-img & make test extremely small
* update
* add lora
* fix test
* update
* update
* change expected_diff_max to 0.4
* fix typo
* fix link
* fix typo
* update docs
* update
* remove consisid lora tests
---------
Co-authored-by: hlky <hlky@hlky.ac >
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
Co-authored-by: Aryan <aryan@huggingface.co >
2025-01-19 13:10:08 +05:30
Marc Sun
fbff43acc9
[FEAT] DDUF format ( #10037 )
...
* load and save dduf archive
* style
* switch to zip uncompressed
* updates
* Update src/diffusers/pipelines/pipeline_utils.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* Update src/diffusers/pipelines/pipeline_utils.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* first draft
* remove print
* switch to dduf_file for consistency
* switch to huggingface hub api
* fix log
* add a basic test
* Update src/diffusers/configuration_utils.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* Update src/diffusers/pipelines/pipeline_utils.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* Update src/diffusers/pipelines/pipeline_utils.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* fix
* fix variant
* change saving logic
* DDUF - Load transformers components manually (#10171 )
* update hfh version
* Load transformers components manually
* load encoder from_pretrained with state_dict
* working version with transformers and tokenizer !
* add generation_config case
* fix tests
* remove saving for now
* typing
* need next version from transformers
* Update src/diffusers/configuration_utils.py
Co-authored-by: Lucain <lucain@huggingface.co >
* check path correctly
* Apply suggestions from code review
Co-authored-by: Lucain <lucain@huggingface.co >
* update
* typing
* remove check for subfolder
* quality
* revert setup changes
* oops
* more readable condition
* add loading from the hub test
* add basic docs.
* Apply suggestions from code review
Co-authored-by: Lucain <lucain@huggingface.co >
* add example
* add
* make functions private
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* minor.
* fixes
* fix
* change the precedence of parameterized.
* error out when custom pipeline is passed with dduf_file.
* updates
* fix
* updates
* fixes
* updates
* fix xfail condition.
* fix xfail
* fixes
* sharded checkpoint compat
* add test for sharded checkpoint
* add suggestions
* Update src/diffusers/models/model_loading_utils.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* from suggestions
* add class attributes to flag dduf tests
* last one
* fix logic
* remove comment
* revert changes
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
Co-authored-by: Lucain <lucain@huggingface.co >
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
Co-authored-by: YiYi Xu <yixu310@gmail.com >
2025-01-14 13:21:42 +05:30
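PR #10037 above packs a whole pipeline into a single DDUF archive (ZIP-based and, per the commit log, stored uncompressed) loaded via the `dduf_file` argument. A hedged sketch (the extension check is illustrative; the repo id and file name in the commented call are placeholders):

```python
# Illustrative check: DDUF archives are single ".dduf" files (ZIP-based,
# stored uncompressed as noted in the commit log). Not diffusers' code.
def looks_like_dduf(filename):
    return filename.endswith(".dduf")

if __name__ == "__main__":
    print(looks_like_dduf("pipeline.dduf"))      # True
    print(looks_like_dduf("model.safetensors"))  # False

    # Loading a pipeline from a DDUF archive (sketch; placeholders):
    # from diffusers import DiffusionPipeline
    # pipe = DiffusionPipeline.from_pretrained(
    #     "some-org/some-pipeline-dduf", dduf_file="pipeline.dduf"
    # )
```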
Sayak Paul
74b67524b5
[Docs] Update hunyuan_video.md to rectify the checkpoint id ( #10524 )
...
* Update hunyuan_video.md to rectify the checkpoint id
* bfloat16
* more fixes
* don't update the checkpoint ids.
* update
* t -> T
* Apply suggestions from code review
* fix
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com >
2025-01-13 10:59:13 -10:00
Steven Liu
91008aabc4
[docs] Video generation update ( #10272 )
...
* update
* update
* feedback
* fix videos
* use previous checkpoint
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-12-31 12:44:57 -08:00
Steven Liu
0d11ab26c4
[docs] load_lora_adapter ( #10119 )
...
* load_lora_adapter
* save
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-12-05 08:00:03 +05:30