Sayak Paul
294a5f0d65
Merge branch 'main' into sd3-test-refactor
2026-03-27 16:12:27 +05:30
Howard Zhang
1fe2125802
remove str option for quantization config in torchao ( #13291 )
...
* remove str option for quantization config in torchao
* Apply style fixes
* minor fixes
* Added AOBaseConfig docs to torchao.md
* minor fixes for removing str option torchao
* minor change to add back int and uint check
* minor fixes
* minor fixes to tests
* Update tests/quantization/torchao/test_torchao.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* Update docs/source/en/quantization/torchao.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* Update tests/quantization/torchao/test_torchao.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* version=2 update to test_torchao.py
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2026-03-27 08:52:37 +05:30
dg845
7298f5be93
Update LTX-2 Docs to Cover LTX-2.3 Models ( #13337 )
...
* Update LTX-2 docs to cover multimodal guidance and prompt enhancement
* Apply suggestions from code review
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Apply reviewer feedback
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
2026-03-26 17:51:29 -07:00
Sayak Paul
b757035df6
fix claude workflow to include id-token with write. ( #13338 )
2026-03-26 15:39:10 +05:30
DN6
6ec4dee783
update
2026-03-26 15:25:08 +05:30
DN6
50015c966a
update
2026-03-26 15:21:29 +05:30
kaixuanliu
41e1003316
avoid hardcode device in flux-control example ( #13336 )
...
Signed-off-by: Liu, Kaixuan <kaixuan.liu@intel.com >
2026-03-26 12:40:53 +05:30
Sayak Paul
85ffcf1db2
[tests] Tests for conditional pipeline blocks ( #13247 )
...
* implement test suite for conditional blocks.
* remove
* another fix.
* Revert "another fix."
This reverts commit ab07b603ab.
2026-03-26 08:48:16 +05:30
Steven Liu
cbf4d9a3c3
[docs] kernels ( #13139 )
...
* kernels
* feedback
2026-03-25 09:31:54 -07:00
Sayak Paul
426daabad9
[ci] claude in ci. ( #13297 )
...
* claude in ci.
* review feedback.
2026-03-25 21:30:06 +05:30
Kashif Rasul
762ae059fa
[LLADA2] documentation fixes ( #13333 )
...
documentation fixes
2026-03-25 17:49:31 +05:30
Kashif Rasul
5d207e756e
[Discrete Diffusion] Add LLaDA2 pipeline ( #13226 )
...
* feat: add LLaDA2 and BlockRefinement pipelines for discrete text diffusion
Add support for LLaDA2/LLaDA2.1 discrete diffusion text generation:
- BlockRefinementPipeline: block-wise iterative refinement with confidence-based
token commitment, supporting editing threshold for LLaDA2.1 models
- LLaDA2Pipeline: convenience wrapper with LLaDA2-specific defaults
- DiscreteDiffusionPipelineMixin: shared SAR sampling utilities (top-k, top-p,
temperature) and prompt/prefix helpers
- compute_confidence_aware_loss: CAP-style training loss
- Examples: sampling scripts for LLaDA2 and block refinement, training scripts
with Qwen causal LM
- Docs and tests included
* feat: add BlockRefinementScheduler for commit-by-confidence scheduling
Extract the confidence-based token commit logic from BlockRefinementPipeline
into a dedicated BlockRefinementScheduler, following diffusers conventions.
The scheduler owns:
- Transfer schedule computation (get_num_transfer_tokens)
- Timestep management (set_timesteps)
- Step logic: confidence-based mask-filling and optional token editing
The pipeline now delegates scheduling to self.scheduler.step() and accepts
a scheduler parameter in __init__.
* test: add unit tests for BlockRefinementScheduler
12 tests covering set_timesteps, get_num_transfer_tokens, step logic
(confidence-based commits, threshold behavior, editing, prompt masking,
batched inputs, tuple output).
* docs: add toctree entries and standalone scheduler doc page
- Add BlockRefinement and LLaDA2 to docs sidebar navigation
- Add BlockRefinementScheduler to schedulers sidebar navigation
- Move scheduler autodoc to its own page under api/schedulers/
* feat: add --revision flag and fix dtype deprecation in sample_llada2.py
- Add --revision argument for loading model revisions from the Hub
- Replace deprecated torch_dtype with dtype for transformers 5.x compat
* fix: use 1/0 attention mask instead of 0/-inf for LLaDA2 compat
LLaDA2 models expect a boolean-style (1/0) attention mask, not an
additive (0/-inf) mask. The model internally converts to additive,
so passing 0/-inf caused double-masking and gibberish output.
* refactor: consolidate training scripts into single train_block_refinement.py
- Remove toy train_block_refinement_cap.py (self-contained demo with tiny model)
- Rename train_block_refinement_qwen_cap.py to train_block_refinement.py
(already works with any causal LM via AutoModelForCausalLM)
- Fix torch_dtype deprecation and update README with correct script names
* fix formatting
* docs: improve LLaDA2 and BlockRefinement documentation
- Add usage examples with real model IDs and working code
- Add recommended parameters table for LLaDA2.1 quality/speed modes
- Note that editing is LLaDA2.1-only (not for LLaDA2.0 models)
- Remove misleading config defaults section from BlockRefinement docs
* feat: set LLaDA2Pipeline defaults to recommended model parameters
- threshold: 0.95 -> 0.7 (quality mode)
- max_post_steps: 0 -> 16 (recommended for LLaDA2.1, harmless for 2.0)
- eos_early_stop: False -> True (stop at EOS token)
block_length=32, steps=32, temperature=0.0 were already correct.
editing_threshold remains None (users enable for LLaDA2.1 models).
* feat: default editing_threshold=0.5 for LLaDA2.1 quality mode
LLaDA2.1 is the current generation. Users with LLaDA2.0 models can
disable editing by passing editing_threshold=None.
* fix: align sampling utilities with official LLaDA2 implementation
- top_p filtering: add shift-right to preserve at least one token above
threshold (matches official code line 1210)
- temperature ordering: apply scaling before top-k/top-p filtering so
filtering operates on scaled logits (matches official code lines 1232-1235)
- greedy branch: return argmax directly when temperature=0 without
filtering (matches official code lines 1226-1230)
* refactor: remove duplicate prompt encoding, reuse mixin's _prepare_input_ids
LLaDA2Pipeline._prepare_prompt_ids was a near-copy of
DiscreteDiffusionPipelineMixin._prepare_input_ids. Remove the duplicate
and call the mixin method directly. Also simplify _extract_input_ids
since we always pass return_dict=True.
* formatting
* fix: replace deprecated torch_dtype with dtype in examples and docstrings
- Update EXAMPLE_DOC_STRING to use dtype= and LLaDA2.1-mini model ID
- Fix sample_block_refinement.py to use dtype=
* remove BlockRefinementPipeline
* cleanup
* fix readme
* Update src/diffusers/pipelines/llada2/pipeline_llada2.py (applied 15 review suggestions)
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* removed DiscreteDiffusionPipelineMixin
* add support for 2d masks for flash attn
* Update src/diffusers/training_utils.py
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com >
* Update src/diffusers/training_utils.py
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com >
* fix issues from review
* added tests
* formatting
* add check_eos_finished to scheduler
* Update src/diffusers/pipelines/llada2/pipeline_llada2.py (applied 7 review suggestions)
* Update src/diffusers/schedulers/scheduling_block_refinement.py (applied 2 review suggestions)
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com >
* fix renaming issues and types
* remove duplicate check
* Update docs/source/en/api/pipelines/llada2.md
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com >
* Update src/diffusers/pipelines/llada2/pipeline_llada2.py
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com >
* Update src/diffusers/pipelines/llada2/pipeline_llada2.py
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com >
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com >
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com >
2026-03-25 16:17:50 +05:30
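One bullet in this commit aligns the sampling utilities with the official LLaDA2 code, noting that top-p filtering must shift the removal mask right by one position so at least one token survives the threshold. A minimal plain-Python sketch of that shift-right trick (illustrative only; the actual diffusers code operates on logit tensors):

```python
import math

def top_p_filter(logits, top_p):
    """Nucleus filtering with the shift-right trick: the token that first
    pushes cumulative probability past ``top_p`` is still kept, so at
    least one token (the argmax) always survives."""
    order = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)
    total = sum(math.exp(logits[i]) for i in order)
    filtered = [float("-inf")] * len(logits)
    cum = 0.0
    for idx in order:
        if cum > top_p:      # threshold was crossed by a *previous* token:
            break            # mask everything from here on
        filtered[idx] = logits[idx]
        cum += math.exp(logits[idx]) / total
    return filtered
```

Without the shift, a very peaked distribution (first token alone exceeding `top_p`) would mask every token and leave nothing to sample.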
Sayak Paul
e358ddcce6
fix to device and to dtype tests. ( #13323 )
2026-03-25 11:47:02 +05:30
Sayak Paul
153fcbc5a8
fix klein lora loading. ( #13313 )
2026-03-25 07:51:35 +05:30
Beinsezii
da6718f080
ZImageTransformer2D: Only build attention mask if seqlens are not equal ( #12955 )
2026-03-24 06:06:50 -10:00
Alexey Kirillov
832676d35e
Use defaultdict for _SET_ADAPTER_SCALE_FN_MAPPING ( #13320 )
...
refactor: use defaultdict for _SET_ADAPTER_SCALE_FN_MAPPING
Co-authored-by: Alexkkir <alexkkir@gmail.com >
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2026-03-24 17:49:50 +05:30
Dhruv Nair
7bbd96da5d
[CI] Update fetching pipelines for latest HF Hub Version ( #13322 )
...
update
2026-03-24 16:42:32 +05:30
Dhruv Nair
62777fa819
Fix unguarded torchvision import in Cosmos ( #13321 )
...
update
2026-03-24 16:00:24 +05:30
Sayak Paul
f1fd515257
[tests] fix lora logging tests for models. ( #13318 )
...
* fix lora logging tests for models.
* make style
2026-03-24 15:48:03 +05:30
Cheung Ka Wai
afdda57f61
Fix the attention mask in ulysses SP for QwenImage ( #13278 )
...
* fix mask in SP
* change the modification to qwen specific
* drop xfail since qwen-image mask is fixed
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2026-03-24 02:12:50 -07:00
YangKai0616
5fc2bd2c8f
Stabilize low-precision custom autoencoder RMS normalization ( #13316 )
...
* Stabilize low-precision custom autoencoder RMS normalization
* Add fp8/4
* Apply style fixes
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com >
2026-03-24 02:00:05 -07:00
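The stabilization follows a common low-precision pattern: squares of small fp16/fp8 activations underflow to zero, so the mean-of-squares in an RMS norm must be accumulated at higher precision. A self-contained sketch that simulates the failure by rounding through IEEE half precision with `struct` (illustrative; the diffusers fix operates on torch tensors):

```python
import math
import struct

def to_fp16(v):
    """Round a Python float through IEEE half precision
    (simulates low-precision storage of an intermediate)."""
    return struct.unpack("<e", struct.pack("<e", v))[0]

def rms_norm(x, eps=1e-12, upcast=True):
    """RMS-normalize ``x``. With ``upcast`` the squares are accumulated at
    full precision; otherwise they round through fp16 and can underflow,
    which makes the computed norm explode for small inputs."""
    if upcast:
        sq = [float(v) * float(v) for v in x]
    else:
        sq = [to_fp16(to_fp16(v) * to_fp16(v)) for v in x]
    inv = 1.0 / math.sqrt(sum(sq) / len(sq) + eps)
    return [v * inv for v in x]
```

For inputs around `1e-4`, the fp16 square (`~1e-8`) falls below the smallest fp16 subnormal and rounds to zero, so only the upcast path normalizes correctly.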
Sayak Paul
6350a7690a
[chore] properly deprecate src.diffusers.utils.testing_utils. ( #13314 )
...
properly deprecate src.diffusers.utils.testing_utils.
2026-03-24 10:54:35 +05:30
Cheung Ka Wai
9d4c9dcf21
change QwenImageTransformer UT to batch inputs ( #13312 )
...
* UT expands to batch inputs
* update according to suggestion
* update according to suggestion 2
* fix CI
* update according to suggestion 3
* clean line
2026-03-24 08:56:40 +05:30
ddavidchick
ef309a1bb0
Add KVAE 1.0 ( #13033 )
...
* add kvae2d
* add kvae3d video
* add docs for kvae2d and kvae3d video
* style fixes
* fix kvae3d docs
* fix normalization
* fix kvae video for code style
* fix kvae video
* kvae minor fixes
* add gradient ckpting for kvaes
* get rid of inplace ops kvae video
* add tests for KVAEs
* kvae2d normalization style change
* kvaes fix style
* update dummy_pt_objects test for kvaes
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com >
2026-03-23 12:56:49 -10:00
Charles
b9761ce5a2
[export] Add export-safe LRU cache helper ( #13290 )
...
* [core] Add export-safe LRU cache helper
* torch version check!
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2026-03-23 18:10:07 +05:30
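The core idea of an export-safe helper is to bypass `functools.lru_cache` while a compiler or exporter is tracing, since a cache hit hides the real ops from the tracer. A sketch with an injectable predicate (in real code the predicate would be something like `torch.compiler.is_compiling`; names and structure here are illustrative, not the diffusers helper):

```python
import functools

def export_safe_lru_cache(maxsize=128, *, is_tracing=lambda: False):
    """LRU cache that calls through (uncached) whenever ``is_tracing()``
    reports that a compiler/exporter is currently tracing."""
    def decorator(fn):
        cached = functools.lru_cache(maxsize=maxsize)(fn)

        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if is_tracing():
                # Let the tracer see the real computation, not a cache hit.
                return fn(*args, **kwargs)
            return cached(*args, **kwargs)

        wrapper.cache_info = cached.cache_info
        return wrapper
    return decorator
```

Eager calls keep the usual memoization speedup, while traced calls stay fully visible to the export machinery.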
Dhruv Nair
52558b45d8
[CI] Flux2 Model Test Refactor ( #13071 )
...
* update
* update
* update
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2026-03-23 16:56:08 +05:30
Sayak Paul
c02c17c6ee
[tests] test load_components in modular ( #13245 )
...
* test load_components.
* fix
* fix
* up
2026-03-21 09:41:48 +05:30
Sayak Paul
a9855c4204
[tests] fix audioldm2 tests. ( #13293 )
...
fix audioldm2 tests.
2026-03-20 20:53:21 +05:30
Sayak Paul
0b35834351
[core] fa4 support. ( #13280 )
...
* start fa4 support.
* up
* specify minimum version
2026-03-20 17:28:09 +05:30
Sayak Paul
522b523e40
[ci] hoping to fix is_flaky with wanvace. ( #13294 )
...
* hoping to fix is_flaky with wanvace.
* revert changes in src/diffusers/utils/testing_utils.py and propagate them to tests/testing_utils.py.
* up
2026-03-20 16:02:16 +05:30
Dhruv Nair
e9b9f25f67
[CI] Update transformer version in release tests ( #13296 )
...
update
2026-03-20 11:40:06 +05:30
Dhruv Nair
32b4cfc81c
[Modular] Test for catching dtype and device issues with AutoModel type hints ( #13287 )
...
* update
* update
* update
2026-03-20 10:36:03 +05:30
YiYi Xu
a13e5cf9fc
[agents]support skills ( #13269 )
...
* support skills
* update
* Apply suggestions from code review
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* update based on new best practice
* Update .ai/skills/parity-testing/pitfalls.md
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com >
* update
---------
Co-authored-by: yiyi@huggingface.co <yiyi@ip-26-0-161-123.ec2.internal >
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
Co-authored-by: yiyi@huggingface.co <yiyi@ip-26-0-160-103.ec2.internal >
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com >
2026-03-19 18:07:41 -10:00
dg845
072d15ee42
Add Support for LTX-2.3 Models ( #13217 )
...
* Initial implementation of perturbed attn processor for LTX 2.3
* Update DiT block for LTX 2.3 + add self_attention_mask
* Add flag to control using perturbed attn processor for now
* Add support for new video upsampling blocks used by LTX-2.3
* Support LTX-2.3 Big-VGAN V2-style vocoder
* Initial implementation of LTX-2.3 vocoder with bandwidth extender
* Initial support for LTX-2.3 per-modality feature extractor
* Refactor so that text connectors own all text encoder hidden_states normalization logic
* Fix some bugs for inference
* Fix LTX-2.X DiT block forward pass
* Support prompt timestep embeds and prompt cross attn modulation
* Add LTX-2.3 configs to conversion script
* Support converting LTX-2.3 DiT checkpoints
* Support converting LTX-2.3 Video VAE checkpoints
* Support converting LTX-2.3 Vocoder with bandwidth extender
* Support converting LTX-2.3 text connectors
* Don't convert any upsamplers for now
* Support self attention mask for LTX2Pipeline
* Fix some inference bugs
* Support self attn mask and sigmas for LTX-2.3 I2V, Cond pipelines
* Support STG and modality isolation guidance for LTX-2.3
* make style and make quality
* Make audio guidance values default to the video values
* Update to LTX-2.3 style guidance rescaling
* Support cross timesteps for LTX-2.3 cross attention modulation
* Fix RMS norm bug for LTX-2.3 text connectors
* Perform guidance rescale in sample (x0) space following original code
* Support LTX-2.3 Latent Spatial Upsampler model
* Support LTX-2.3 distilled LoRA
* Support LTX-2.3 Distilled checkpoint
* Support LTX-2.3 prompt enhancement
* Make LTX-2.X processor non-required so that tests pass
* Fix test_components_function tests for LTX2 T2V and I2V
* Fix LTX-2.3 Video VAE configuration bug causing pixel jitter
* Apply suggestions from code review
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* Refactor LTX-2.X Video VAE upsampler block init logic
* Refactor LTX-2.X guidance rescaling to use rescale_noise_cfg
* Use generator initial seed to control prompt enhancement if available
* Remove self attention mask logic as it is not used in any current pipelines
* Commit fixes suggested by claude code (guidance in sample (x0) space, denormalize after timestep conditioning)
* Use constant shift following original code
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2026-03-19 14:58:29 -07:00
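One bullet above refactors the LTX-2.X guidance rescaling to reuse `rescale_noise_cfg`. The rescaling matches the standard deviation of the guided prediction to that of the text-conditioned branch, then blends by a `guidance_rescale` factor; a plain-Python sketch over flat lists (the diffusers helper of the same name operates per-batch on tensors):

```python
import math

def _std(xs):
    # Population standard deviation of a flat list.
    m = sum(xs) / len(xs)
    return math.sqrt(sum((v - m) ** 2 for v in xs) / len(xs))

def rescale_noise_cfg(noise_cfg, noise_pred_text, guidance_rescale=0.7):
    """Shrink the guided prediction's std back toward the text branch's
    std, then blend with the original by ``guidance_rescale``
    (1.0 = fully rescaled, 0.0 = unchanged)."""
    factor = _std(noise_pred_text) / _std(noise_cfg)
    rescaled = [v * factor for v in noise_cfg]
    return [guidance_rescale * r + (1.0 - guidance_rescale) * v
            for r, v in zip(rescaled, noise_cfg)]
```

The commit's change is about *where* this runs (on the predicted sample, x0 space, rather than on the noise prediction), following the original LTX-2.3 code.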
kaixuanliu
67613369bb
fix: 'PaintByExampleImageEncoder' object has no attribute 'all_tied_w… ( #13252 )
...
* fix: 'PaintByExampleImageEncoder' object has no attribute 'all_tied_weights_keys'
Signed-off-by: Liu, Kaixuan <kaixuan.liu@intel.com >
* also fix LDMBertModel
Signed-off-by: Liu, Kaixuan <kaixuan.liu@intel.com >
---------
Signed-off-by: Liu, Kaixuan <kaixuan.liu@intel.com >
Co-authored-by: YiYi Xu <yixu310@gmail.com >
2026-03-18 17:55:08 -10:00
Shenghai Yuan
0c01a4b5e2
[Helios] Remove lru_cache for better AoTI compatibility and cleaner code ( #13282 )
...
fix: drop lru_cache for better AoTI compatibility
2026-03-18 23:41:58 +05:30
kaixuanliu
8e4b5607ed
skip invalid test case for helios pipeline ( #13218 )
...
* skip invalid test case for helios pipeline
Signed-off-by: Liu, Kaixuan <kaixuan.liu@intel.com >
* update skip reason
Signed-off-by: Liu, Kaixuan <kaixuan.liu@intel.com >
---------
Signed-off-by: Liu, Kaixuan <kaixuan.liu@intel.com >
2026-03-17 20:58:35 -10:00
Junsong Chen
c6f72ad2f6
add ltx2 vae in sana-video; ( #13229 )
...
* add ltx2 vae in sana-video;
* add ltx vae in conversion script;
* Update src/diffusers/pipelines/sana_video/pipeline_sana_video.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* Update src/diffusers/pipelines/sana_video/pipeline_sana_video.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* condition `vae_scale_factor_xxx` related settings on VAE types;
* make the mean/std depends on vae class;
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com >
2026-03-17 18:09:52 -10:00
Dhruv Nair
11a3284cee
[CI] Qwen Image Model Test Refactor ( #13069 )
...
* update
* update
* update
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2026-03-17 16:44:04 +05:30
Sayak Paul
16e7067647
[tests] fix llava kwargs in the hunyuan tests ( #13275 )
...
fix llava kwargs in the hunyuan tests
2026-03-17 10:11:47 +05:30
Dhruv Nair
d1b3555c29
[Modular] Fix dtype assignment when type hint is AutoModel ( #13271 )
...
* update
* update
2026-03-17 09:47:53 +05:30
Wang, Yi
9677859ebf
fix parallelism case failure in xpu ( #13270 )
...
* fix parallelism case failure in xpu
Signed-off-by: Wang, Yi <yi.a.wang@intel.com >
* updated
Signed-off-by: Wang, Yi <yi.a.wang@intel.com >
---------
Signed-off-by: Wang, Yi <yi.a.wang@intel.com >
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2026-03-17 08:52:15 +05:30
Steven Liu
ed31974c3e
[docs] updates ( #13248 )
...
* fixes
* few more links
* update zh
* fix
2026-03-16 13:24:57 -07:00
YiYi Xu
e5aa719241
Add AGENTS.md ( #13259 )
...
* add a draft
* add
* up
* Apply suggestions from code review
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
2026-03-14 08:35:12 -10:00
teith
4bc1c59a67
fix: correct invalid type annotation for image in Flux2Pipeline.__call__ ( #13205 )
...
fix: correct invalid type annotation for image in Flux2Pipeline.__call__
2026-03-13 15:56:38 -03:00
Sayak Paul
764f7ede33
[core] Flux2 klein kv followups ( #13264 )
...
* implement Flux2Transformer2DModelOutput.
* add output class to docs.
* add Flux2KleinKV to docs.
* add pipeline tests for klein kv.
2026-03-13 10:05:11 +05:30
Sayak Paul
8d0f3e1ba8
[lora] fix z-image non-diffusers lora loading. ( #13255 )
...
fix z-image non-diffusers lora loading.
2026-03-13 06:58:53 +05:30
huemin
094caf398f
klein 9b kv ( #13262 )
...
* klein 9b kv
* Apply style fixes
* fix typo inline modulation split
* make fix-copies
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-03-12 06:53:56 -10:00
Alvaro Bartolome
81c354d879
Add PRXPipeline in AUTO_TEXT2IMAGE_PIPELINES_MAPPING ( #13257 )
2026-03-11 14:39:24 -03:00
Miguel Martin
0a2c26d0a4
Update Documentation for NVIDIA Cosmos ( #13251 )
...
* fix docs
* update main example
2026-03-11 09:14:56 -07:00