From 426daabad9cda17caa502f071a107b7dd0c48e9d Mon Sep 17 00:00:00 2001 From: Sayak Paul Date: Wed, 25 Mar 2026 21:30:06 +0530 Subject: [PATCH 1/8] [ci] claude in ci. (#13297) * claude in ci. * review feedback. --- .ai/review-rules.md | 11 +++++++++ .github/workflows/claude_review.yml | 38 +++++++++++++++++++++++++++++ 2 files changed, 49 insertions(+) create mode 100644 .ai/review-rules.md create mode 100644 .github/workflows/claude_review.yml diff --git a/.ai/review-rules.md b/.ai/review-rules.md new file mode 100644 index 0000000000..12efc94c4b --- /dev/null +++ b/.ai/review-rules.md @@ -0,0 +1,11 @@ +# PR Review Rules + +Review-specific rules for Claude. Focus on correctness — style is handled by ruff. + +Before reviewing, read and apply the guidelines in: +- [AGENTS.md](AGENTS.md) — coding style, dependencies, copied code, model conventions +- [skills/model-integration/SKILL.md](skills/model-integration/SKILL.md) — attention pattern, pipeline rules, implementation checklist, gotchas +- [skills/parity-testing/SKILL.md](skills/parity-testing/SKILL.md) — testing rules, comparison utilities +- [skills/parity-testing/pitfalls.md](skills/parity-testing/pitfalls.md) — known pitfalls (dtype mismatches, config assumptions, etc.) + +## Common mistakes (add new rules below this line) diff --git a/.github/workflows/claude_review.yml b/.github/workflows/claude_review.yml new file mode 100644 index 0000000000..e772415d63 --- /dev/null +++ b/.github/workflows/claude_review.yml @@ -0,0 +1,38 @@ +name: Claude PR Review + +on: + issue_comment: + types: [created] + pull_request_review_comment: + types: [created] + +permissions: + contents: write + pull-requests: write + issues: read + +jobs: + claude-review: + if: | + ( + github.event_name == 'issue_comment' && + github.event.issue.pull_request && + github.event.issue.state == 'open' && + contains(github.event.comment.body, '@claude') && + (github.event.comment.author_association == 'MEMBER' || + github.event.comment.author_association == 'OWNER' || + github.event.comment.author_association == 'COLLABORATOR') + ) || ( + github.event_name == 'pull_request_review_comment' && + contains(github.event.comment.body, '@claude') && + (github.event.comment.author_association == 'MEMBER' || + github.event.comment.author_association == 'OWNER' || + github.event.comment.author_association == 'COLLABORATOR') + ) + runs-on: ubuntu-latest + steps: + - uses: anthropics/claude-code-action@v1 + with: + anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }} + claude_args: | + --append-system-prompt "Review this PR against the rules in .ai/review-rules.md. Focus on correctness, not style (ruff handles style). Only review changes under src/diffusers/. Do NOT commit changes unless the comment explicitly asks you to using the phrase 'commit this'." 
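For reference, the job-level `if:` expression above encodes the gating logic sketched below. This is a hypothetical Python illustration only, not part of the patch; the event dictionaries are simplified stand-ins for the GitHub `issue_comment` and `pull_request_review_comment` webhook payloads.

```python
# Minimal sketch of the gating encoded by the workflow's job-level `if:`
# expression. The payload shapes here are hypothetical simplifications.
ALLOWED_ASSOCIATIONS = {"MEMBER", "OWNER", "COLLABORATOR"}


def should_run_review(event_name: str, event: dict) -> bool:
    comment = event["comment"]
    # Both event types require an @claude mention from a trusted author.
    if "@claude" not in comment["body"]:
        return False
    if comment["author_association"] not in ALLOWED_ASSOCIATIONS:
        return False
    if event_name == "issue_comment":
        # Issue comments only trigger when the issue is an open pull request.
        issue = event["issue"]
        return bool(issue.get("pull_request")) and issue["state"] == "open"
    return event_name == "pull_request_review_comment"


# Example: a collaborator mentioning @claude in a PR review comment triggers a run.
assert should_run_review(
    "pull_request_review_comment",
    {"comment": {"body": "@claude please review", "author_association": "COLLABORATOR"}},
)
```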
From cbf4d9a3c384ef97d6b0e40c9846dd9e0e41886a Mon Sep 17 00:00:00 2001 From: Steven Liu <59462357+stevhliu@users.noreply.github.com> Date: Wed, 25 Mar 2026 09:31:54 -0700 Subject: [PATCH 2/8] [docs] kernels (#13139) * kernels * feedback --- docs/source/en/optimization/fp16.md | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+) diff --git a/docs/source/en/optimization/fp16.md b/docs/source/en/optimization/fp16.md index 941f53604c..0e427d3a0a 100644 --- a/docs/source/en/optimization/fp16.md +++ b/docs/source/en/optimization/fp16.md @@ -248,6 +248,24 @@ Refer to the [diffusers/benchmarks](https://huggingface.co/datasets/diffusers/be The [diffusers-torchao](https://github.com/sayakpaul/diffusers-torchao#benchmarking-results) repository also contains benchmarking results for compiled versions of Flux and CogVideoX. +## Kernels + +[Kernels](https://huggingface.co/docs/kernels/index) is a library for building, distributing, and loading optimized compute kernels on the [Hub](https://huggingface.co/kernels-community). It supports [attention](./attention_backends#set_attention_backend) kernels and custom CUDA kernels for operations like RMSNorm, GEGLU, RoPE, and AdaLN. + +The [Diffusers Pipeline Integration](https://github.com/huggingface/kernels/blob/main/skills/cuda-kernels/references/diffusers-integration.md) guide shows how to integrate a kernel with the [add cuda-kernels](https://github.com/huggingface/kernels/blob/main/skills/cuda-kernels/SKILL.md) skill. This skill enables an agent, like Claude or Codex, to write custom kernels targeted towards a specific model and your hardware. + +> [!TIP] +> Install the [add cuda-kernels](https://github.com/huggingface/kernels/blob/main/skills/cuda-kernels/SKILL.md) skill to teach an agent how to write a kernel. The [Custom kernels for all from Codex and Claude](https://huggingface.co/blog/custom-cuda-kernels-agent-skills) blog post covers this in more detail. + +For example, a custom RMSNorm kernel (generated by the `add cuda-kernels` skill) with [torch.compile](#torchcompile) speeds up LTX-Video generation 1.43x on an H100. + + + ## Dynamic quantization [Dynamic quantization](https://pytorch.org/tutorials/recipes/recipes/dynamic_quantization.html) improves inference speed by reducing precision to enable faster math operations. This particular type of quantization determines how to scale the activations based on the data at runtime rather than using a fixed scaling factor. As a result, the scaling factor is more accurately aligned with the data. From 85ffcf1db23c0e981215416abd8e8a748bfd86b6 Mon Sep 17 00:00:00 2001 From: Sayak Paul Date: Thu, 26 Mar 2026 08:48:16 +0530 Subject: [PATCH 3/8] [tests] Tests for conditional pipeline blocks (#13247) * implement test suite for conditional blocks. * remove * another fix. * Revert "another fix." This reverts commit ab07b603abefcb1a0c6137c222327ff129332c2a. --- .../test_conditional_pipeline_blocks.py | 242 ++++++++++++++++++ .../test_modular_pipelines_common.py | 117 --------- .../test_modular_pipelines_custom_blocks.py | 117 ++++++++- 3 files changed, 358 insertions(+), 118 deletions(-) create mode 100644 tests/modular_pipelines/test_conditional_pipeline_blocks.py diff --git a/tests/modular_pipelines/test_conditional_pipeline_blocks.py b/tests/modular_pipelines/test_conditional_pipeline_blocks.py new file mode 100644 index 0000000000..5d9a7fe7d2 --- /dev/null +++ b/tests/modular_pipelines/test_conditional_pipeline_blocks.py @@ -0,0 +1,242 @@ +# Copyright 2025 The HuggingFace Team. All rights reserved. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + + +from diffusers.modular_pipelines import ( + AutoPipelineBlocks, + ConditionalPipelineBlocks, + InputParam, + ModularPipelineBlocks, +) + + +class TextToImageBlock(ModularPipelineBlocks): + model_name = "text2img" + + @property + def inputs(self): + return [InputParam(name="prompt")] + + @property + def intermediate_outputs(self): + return [] + + @property + def description(self): + return "text-to-image workflow" + + def __call__(self, components, state): + block_state = self.get_block_state(state) + block_state.workflow = "text2img" + self.set_block_state(state, block_state) + return components, state + + +class ImageToImageBlock(ModularPipelineBlocks): + model_name = "img2img" + + @property + def inputs(self): + return [InputParam(name="prompt"), InputParam(name="image")] + + @property + def intermediate_outputs(self): + return [] + + @property + def description(self): + return "image-to-image workflow" + + def __call__(self, components, state): + block_state = self.get_block_state(state) + block_state.workflow = "img2img" + self.set_block_state(state, block_state) + return components, state + + +class InpaintBlock(ModularPipelineBlocks): + model_name = "inpaint" + + @property + def inputs(self): + return [InputParam(name="prompt"), InputParam(name="image"), InputParam(name="mask")] + + @property + def intermediate_outputs(self): + return [] + + @property + def description(self): + return "inpaint workflow" + + def __call__(self, components, state): + block_state = self.get_block_state(state) + block_state.workflow = "inpaint" + self.set_block_state(state, block_state) + return components, state + + +class ConditionalImageBlocks(ConditionalPipelineBlocks): + block_classes = [InpaintBlock, ImageToImageBlock, TextToImageBlock] + block_names = ["inpaint", "img2img", "text2img"] + block_trigger_inputs = ["mask", "image"] + default_block_name = "text2img" + + @property + def description(self): + return "Conditional image blocks for testing" + + def select_block(self, mask=None, image=None) -> str | None: + if mask is not None: + return "inpaint" + if image is not None: + return "img2img" + return None # falls back to default_block_name + + +class OptionalConditionalBlocks(ConditionalPipelineBlocks): + block_classes = [InpaintBlock, ImageToImageBlock] + block_names = ["inpaint", "img2img"] + block_trigger_inputs = ["mask", "image"] + default_block_name = None # no default; block can be skipped + + @property + def description(self): + return "Optional conditional blocks (skippable)" + + def select_block(self, mask=None, image=None) -> str | None: + if mask is not None: + return "inpaint" + if image is not None: + return "img2img" + return None + + +class AutoImageBlocks(AutoPipelineBlocks): + block_classes = [InpaintBlock, ImageToImageBlock, TextToImageBlock] + block_names = ["inpaint", "img2img", "text2img"] + block_trigger_inputs = ["mask", "image", None] + + @property + def description(self): + return "Auto image blocks for testing" + + +class 
TestConditionalPipelineBlocksSelectBlock: + def test_select_block_with_mask(self): + blocks = ConditionalImageBlocks() + assert blocks.select_block(mask="something") == "inpaint" + + def test_select_block_with_image(self): + blocks = ConditionalImageBlocks() + assert blocks.select_block(image="something") == "img2img" + + def test_select_block_with_mask_and_image(self): + blocks = ConditionalImageBlocks() + assert blocks.select_block(mask="m", image="i") == "inpaint" + + def test_select_block_no_triggers_returns_none(self): + blocks = ConditionalImageBlocks() + assert blocks.select_block() is None + + def test_select_block_explicit_none_values(self): + blocks = ConditionalImageBlocks() + assert blocks.select_block(mask=None, image=None) is None + + +class TestConditionalPipelineBlocksWorkflowSelection: + def test_default_workflow_when_no_triggers(self): + blocks = ConditionalImageBlocks() + execution = blocks.get_execution_blocks() + assert execution is not None + assert isinstance(execution, TextToImageBlock) + + def test_mask_trigger_selects_inpaint(self): + blocks = ConditionalImageBlocks() + execution = blocks.get_execution_blocks(mask=True) + assert isinstance(execution, InpaintBlock) + + def test_image_trigger_selects_img2img(self): + blocks = ConditionalImageBlocks() + execution = blocks.get_execution_blocks(image=True) + assert isinstance(execution, ImageToImageBlock) + + def test_mask_and_image_selects_inpaint(self): + blocks = ConditionalImageBlocks() + execution = blocks.get_execution_blocks(mask=True, image=True) + assert isinstance(execution, InpaintBlock) + + def test_skippable_block_returns_none(self): + blocks = OptionalConditionalBlocks() + execution = blocks.get_execution_blocks() + assert execution is None + + def test_skippable_block_still_selects_when_triggered(self): + blocks = OptionalConditionalBlocks() + execution = blocks.get_execution_blocks(image=True) + assert isinstance(execution, ImageToImageBlock) + + +class TestAutoPipelineBlocksSelectBlock: + def test_auto_select_mask(self): + blocks = AutoImageBlocks() + assert blocks.select_block(mask="m") == "inpaint" + + def test_auto_select_image(self): + blocks = AutoImageBlocks() + assert blocks.select_block(image="i") == "img2img" + + def test_auto_select_default(self): + blocks = AutoImageBlocks() + # No trigger -> returns None -> falls back to default (text2img) + assert blocks.select_block() is None + + def test_auto_select_priority_order(self): + blocks = AutoImageBlocks() + assert blocks.select_block(mask="m", image="i") == "inpaint" + + +class TestAutoPipelineBlocksWorkflowSelection: + def test_auto_default_workflow(self): + blocks = AutoImageBlocks() + execution = blocks.get_execution_blocks() + assert isinstance(execution, TextToImageBlock) + + def test_auto_mask_workflow(self): + blocks = AutoImageBlocks() + execution = blocks.get_execution_blocks(mask=True) + assert isinstance(execution, InpaintBlock) + + def test_auto_image_workflow(self): + blocks = AutoImageBlocks() + execution = blocks.get_execution_blocks(image=True) + assert isinstance(execution, ImageToImageBlock) + + +class TestConditionalPipelineBlocksStructure: + def test_block_names_accessible(self): + blocks = ConditionalImageBlocks() + sub = dict(blocks.sub_blocks) + assert set(sub.keys()) == {"inpaint", "img2img", "text2img"} + + def test_sub_block_types(self): + blocks = ConditionalImageBlocks() + sub = dict(blocks.sub_blocks) + assert isinstance(sub["inpaint"], InpaintBlock) + assert isinstance(sub["img2img"], ImageToImageBlock) + assert 
isinstance(sub["text2img"], TextToImageBlock) + + def test_description(self): + blocks = ConditionalImageBlocks() + assert "Conditional" in blocks.description diff --git a/tests/modular_pipelines/test_modular_pipelines_common.py b/tests/modular_pipelines/test_modular_pipelines_common.py index 8a65999b20..8b212c0cbf 100644 --- a/tests/modular_pipelines/test_modular_pipelines_common.py +++ b/tests/modular_pipelines/test_modular_pipelines_common.py @@ -10,11 +10,6 @@ from huggingface_hub import hf_hub_download import diffusers from diffusers import AutoModel, ComponentsManager, ModularPipeline, ModularPipelineBlocks from diffusers.guiders import ClassifierFreeGuidance -from diffusers.modular_pipelines import ( - ConditionalPipelineBlocks, - LoopSequentialPipelineBlocks, - SequentialPipelineBlocks, -) from diffusers.modular_pipelines.modular_pipeline_utils import ( ComponentSpec, ConfigSpec, @@ -25,7 +20,6 @@ from diffusers.modular_pipelines.modular_pipeline_utils import ( from diffusers.utils import logging from ..testing_utils import ( - CaptureLogger, backend_empty_cache, numpy_cosine_similarity_distance, require_accelerator, @@ -498,117 +492,6 @@ class ModularGuiderTesterMixin: assert max_diff > expected_max_diff, "Output with CFG must be different from normal inference" -class TestCustomBlockRequirements: - def get_dummy_block_pipe(self): - class DummyBlockOne: - # keep two arbitrary deps so that we can test warnings. - _requirements = {"xyz": ">=0.8.0", "abc": ">=10.0.0"} - - class DummyBlockTwo: - # keep two dependencies that will be available during testing. - _requirements = {"transformers": ">=4.44.0", "diffusers": ">=0.2.0"} - - pipe = SequentialPipelineBlocks.from_blocks_dict( - {"dummy_block_one": DummyBlockOne, "dummy_block_two": DummyBlockTwo} - ) - return pipe - - def get_dummy_conditional_block_pipe(self): - class DummyBlockOne: - _requirements = {"xyz": ">=0.8.0", "abc": ">=10.0.0"} - - class DummyBlockTwo: - _requirements = {"transformers": ">=4.44.0", "diffusers": ">=0.2.0"} - - class DummyConditionalBlocks(ConditionalPipelineBlocks): - block_classes = [DummyBlockOne, DummyBlockTwo] - block_names = ["block_one", "block_two"] - block_trigger_inputs = [] - - def select_block(self, **kwargs): - return "block_one" - - return DummyConditionalBlocks() - - def get_dummy_loop_block_pipe(self): - class DummyBlockOne: - _requirements = {"xyz": ">=0.8.0", "abc": ">=10.0.0"} - - class DummyBlockTwo: - _requirements = {"transformers": ">=4.44.0", "diffusers": ">=0.2.0"} - - return LoopSequentialPipelineBlocks.from_blocks_dict({"block_one": DummyBlockOne, "block_two": DummyBlockTwo}) - - def test_sequential_block_requirements_save_load(self, tmp_path): - pipe = self.get_dummy_block_pipe() - pipe.save_pretrained(str(tmp_path)) - - config_path = tmp_path / "modular_config.json" - - with open(config_path, "r") as f: - config = json.load(f) - - assert "requirements" in config - requirements = config["requirements"] - - expected_requirements = { - "xyz": ">=0.8.0", - "abc": ">=10.0.0", - "transformers": ">=4.44.0", - "diffusers": ">=0.2.0", - } - assert expected_requirements == requirements - - def test_sequential_block_requirements_warnings(self, tmp_path): - pipe = self.get_dummy_block_pipe() - - logger = logging.get_logger("diffusers.modular_pipelines.modular_pipeline_utils") - logger.setLevel(30) - - with CaptureLogger(logger) as cap_logger: - pipe.save_pretrained(str(tmp_path)) - - template = "{req} was specified in the requirements but wasn't found in the current environment" - msg_xyz 
= template.format(req="xyz") - msg_abc = template.format(req="abc") - assert msg_xyz in str(cap_logger.out) - assert msg_abc in str(cap_logger.out) - - def test_conditional_block_requirements_save_load(self, tmp_path): - pipe = self.get_dummy_conditional_block_pipe() - pipe.save_pretrained(str(tmp_path)) - - config_path = tmp_path / "modular_config.json" - with open(config_path, "r") as f: - config = json.load(f) - - assert "requirements" in config - expected_requirements = { - "xyz": ">=0.8.0", - "abc": ">=10.0.0", - "transformers": ">=4.44.0", - "diffusers": ">=0.2.0", - } - assert expected_requirements == config["requirements"] - - def test_loop_block_requirements_save_load(self, tmp_path): - pipe = self.get_dummy_loop_block_pipe() - pipe.save_pretrained(str(tmp_path)) - - config_path = tmp_path / "modular_config.json" - with open(config_path, "r") as f: - config = json.load(f) - - assert "requirements" in config - expected_requirements = { - "xyz": ">=0.8.0", - "abc": ">=10.0.0", - "transformers": ">=4.44.0", - "diffusers": ">=0.2.0", - } - assert expected_requirements == config["requirements"] - - class TestModularModelCardContent: def create_mock_block(self, name="TestBlock", description="Test block description"): class MockBlock: diff --git a/tests/modular_pipelines/test_modular_pipelines_custom_blocks.py b/tests/modular_pipelines/test_modular_pipelines_custom_blocks.py index 59d6a3e75f..315e16d7b2 100644 --- a/tests/modular_pipelines/test_modular_pipelines_custom_blocks.py +++ b/tests/modular_pipelines/test_modular_pipelines_custom_blocks.py @@ -24,14 +24,18 @@ import torch from diffusers import FluxTransformer2DModel from diffusers.modular_pipelines import ( ComponentSpec, + ConditionalPipelineBlocks, InputParam, + LoopSequentialPipelineBlocks, ModularPipelineBlocks, OutputParam, PipelineState, + SequentialPipelineBlocks, WanModularPipeline, ) +from diffusers.utils import logging -from ..testing_utils import nightly, require_torch, require_torch_accelerator, slow, torch_device +from ..testing_utils import CaptureLogger, nightly, require_torch, require_torch_accelerator, slow, torch_device def _create_tiny_model_dir(model_dir): @@ -463,6 +467,117 @@ class TestModularCustomBlocks: assert output_prompt.startswith("Modular diffusers + ") +class TestCustomBlockRequirements: + def get_dummy_block_pipe(self): + class DummyBlockOne: + # keep two arbitrary deps so that we can test warnings. + _requirements = {"xyz": ">=0.8.0", "abc": ">=10.0.0"} + + class DummyBlockTwo: + # keep two dependencies that will be available during testing. 
+ _requirements = {"transformers": ">=4.44.0", "diffusers": ">=0.2.0"} + + pipe = SequentialPipelineBlocks.from_blocks_dict( + {"dummy_block_one": DummyBlockOne, "dummy_block_two": DummyBlockTwo} + ) + return pipe + + def get_dummy_conditional_block_pipe(self): + class DummyBlockOne: + _requirements = {"xyz": ">=0.8.0", "abc": ">=10.0.0"} + + class DummyBlockTwo: + _requirements = {"transformers": ">=4.44.0", "diffusers": ">=0.2.0"} + + class DummyConditionalBlocks(ConditionalPipelineBlocks): + block_classes = [DummyBlockOne, DummyBlockTwo] + block_names = ["block_one", "block_two"] + block_trigger_inputs = [] + + def select_block(self, **kwargs): + return "block_one" + + return DummyConditionalBlocks() + + def get_dummy_loop_block_pipe(self): + class DummyBlockOne: + _requirements = {"xyz": ">=0.8.0", "abc": ">=10.0.0"} + + class DummyBlockTwo: + _requirements = {"transformers": ">=4.44.0", "diffusers": ">=0.2.0"} + + return LoopSequentialPipelineBlocks.from_blocks_dict({"block_one": DummyBlockOne, "block_two": DummyBlockTwo}) + + def test_sequential_block_requirements_save_load(self, tmp_path): + pipe = self.get_dummy_block_pipe() + pipe.save_pretrained(str(tmp_path)) + + config_path = tmp_path / "modular_config.json" + + with open(config_path, "r") as f: + config = json.load(f) + + assert "requirements" in config + requirements = config["requirements"] + + expected_requirements = { + "xyz": ">=0.8.0", + "abc": ">=10.0.0", + "transformers": ">=4.44.0", + "diffusers": ">=0.2.0", + } + assert expected_requirements == requirements + + def test_sequential_block_requirements_warnings(self, tmp_path): + pipe = self.get_dummy_block_pipe() + + logger = logging.get_logger("diffusers.modular_pipelines.modular_pipeline_utils") + logger.setLevel(30) + + with CaptureLogger(logger) as cap_logger: + pipe.save_pretrained(str(tmp_path)) + + template = "{req} was specified in the requirements but wasn't found in the current environment" + msg_xyz = template.format(req="xyz") + msg_abc = template.format(req="abc") + assert msg_xyz in str(cap_logger.out) + assert msg_abc in str(cap_logger.out) + + def test_conditional_block_requirements_save_load(self, tmp_path): + pipe = self.get_dummy_conditional_block_pipe() + pipe.save_pretrained(str(tmp_path)) + + config_path = tmp_path / "modular_config.json" + with open(config_path, "r") as f: + config = json.load(f) + + assert "requirements" in config + expected_requirements = { + "xyz": ">=0.8.0", + "abc": ">=10.0.0", + "transformers": ">=4.44.0", + "diffusers": ">=0.2.0", + } + assert expected_requirements == config["requirements"] + + def test_loop_block_requirements_save_load(self, tmp_path): + pipe = self.get_dummy_loop_block_pipe() + pipe.save_pretrained(str(tmp_path)) + + config_path = tmp_path / "modular_config.json" + with open(config_path, "r") as f: + config = json.load(f) + + assert "requirements" in config + expected_requirements = { + "xyz": ">=0.8.0", + "abc": ">=10.0.0", + "transformers": ">=4.44.0", + "diffusers": ">=0.2.0", + } + assert expected_requirements == config["requirements"] + + @slow @nightly @require_torch From 41e1003316173218656998281b28f9e0f6fcbcff Mon Sep 17 00:00:00 2001 From: kaixuanliu Date: Thu, 26 Mar 2026 15:10:53 +0800 Subject: [PATCH 4/8] avoid hardcode device in flux-control example (#13336) Signed-off-by: Liu, Kaixuan --- examples/flux-control/train_control_flux.py | 2 +- examples/flux-control/train_control_lora_flux.py | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git 
a/examples/flux-control/train_control_flux.py b/examples/flux-control/train_control_flux.py index c5f93fa2e9..5c81775103 100644 --- a/examples/flux-control/train_control_flux.py +++ b/examples/flux-control/train_control_flux.py @@ -1105,7 +1105,7 @@ def main(args): # text encoding. captions = batch["captions"] - text_encoding_pipeline = text_encoding_pipeline.to("cuda") + text_encoding_pipeline = text_encoding_pipeline.to(accelerator.device) with torch.no_grad(): prompt_embeds, pooled_prompt_embeds, text_ids = text_encoding_pipeline.encode_prompt( captions, prompt_2=None diff --git a/examples/flux-control/train_control_lora_flux.py b/examples/flux-control/train_control_lora_flux.py index f5d3c822b3..f372284d7a 100644 --- a/examples/flux-control/train_control_lora_flux.py +++ b/examples/flux-control/train_control_lora_flux.py @@ -1251,7 +1251,7 @@ def main(args): # text encoding. captions = batch["captions"] - text_encoding_pipeline = text_encoding_pipeline.to("cuda") + text_encoding_pipeline = text_encoding_pipeline.to(accelerator.device) with torch.no_grad(): prompt_embeds, pooled_prompt_embeds, text_ids = text_encoding_pipeline.encode_prompt( captions, prompt_2=None From b757035df6fe080b56a672c4000e458bb442821a Mon Sep 17 00:00:00 2001 From: Sayak Paul Date: Thu, 26 Mar 2026 15:39:10 +0530 Subject: [PATCH 5/8] fix claude workflow to include id-token with write. (#13338) --- .github/workflows/claude_review.yml | 1 + 1 file changed, 1 insertion(+) diff --git a/.github/workflows/claude_review.yml b/.github/workflows/claude_review.yml index e772415d63..2df3b47eb1 100644 --- a/.github/workflows/claude_review.yml +++ b/.github/workflows/claude_review.yml @@ -10,6 +10,7 @@ permissions: contents: write pull-requests: write issues: read + id-token: write jobs: claude-review: From 7298f5be9339a81bc9ce551290213f04cd7a12ce Mon Sep 17 00:00:00 2001 From: dg845 <58458699+dg845@users.noreply.github.com> Date: Thu, 26 Mar 2026 17:51:29 -0700 Subject: [PATCH 6/8] Update LTX-2 Docs to Cover LTX-2.3 Models (#13337) * Update LTX-2 docs to cover multimodal guidance and prompt enhancement * Apply suggestions from code review Co-authored-by: Sayak Paul Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * Apply reviewer feedback --------- Co-authored-by: Sayak Paul Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> --- docs/source/en/api/pipelines/ltx2.md | 166 +++++++++++++++++++++++--- src/diffusers/pipelines/ltx2/utils.py | 149 +++++++++++++++++++++++ 2 files changed, 300 insertions(+), 15 deletions(-) diff --git a/docs/source/en/api/pipelines/ltx2.md b/docs/source/en/api/pipelines/ltx2.md index 85b0f96918..bcddd40e66 100644 --- a/docs/source/en/api/pipelines/ltx2.md +++ b/docs/source/en/api/pipelines/ltx2.md @@ -18,7 +18,7 @@ LoRA -LTX-2 is a DiT-based audio-video foundation model designed to generate synchronized video and audio within a single model. It brings together the core building blocks of modern video generation, with open weights and a focus on practical, local execution. +[LTX-2](https://hf.co/papers/2601.03233) is a DiT-based foundation model designed to generate synchronized video and audio within a single model. It brings together the core building blocks of modern video generation, with open weights and a focus on practical, local execution. You can find all the original LTX-Video checkpoints under the [Lightricks](https://huggingface.co/Lightricks) organization. 
@@ -293,6 +293,7 @@ import torch
 from diffusers import LTX2ConditionPipeline
 from diffusers.pipelines.ltx2.pipeline_ltx2_condition import LTX2VideoCondition
 from diffusers.pipelines.ltx2.export_utils import encode_video
+from diffusers.pipelines.ltx2.utils import DEFAULT_NEGATIVE_PROMPT
 from diffusers.utils import load_image, load_video
 
 device = "cuda"
@@ -315,19 +316,6 @@ prompt = (
     "landscape is characterized by rugged terrain and a river visible in the distance. The scene captures the "
     "solitude and beauty of a winter drive through a mountainous region."
 )
-negative_prompt = (
-    "blurry, out of focus, overexposed, underexposed, low contrast, washed out colors, excessive noise, "
-    "grainy texture, poor lighting, flickering, motion blur, distorted proportions, unnatural skin tones, "
-    "deformed facial features, asymmetrical face, missing facial features, extra limbs, disfigured hands, "
-    "wrong hand count, artifacts around text, inconsistent perspective, camera shake, incorrect depth of "
-    "field, background too sharp, background clutter, distracting reflections, harsh shadows, inconsistent "
-    "lighting direction, color banding, cartoonish rendering, 3D CGI look, unrealistic materials, uncanny "
-    "valley effect, incorrect ethnicity, wrong gender, exaggerated expressions, wrong gaze direction, "
-    "mismatched lip sync, silent or muted audio, distorted voice, robotic voice, echo, background noise, "
-    "off-sync audio, incorrect dialogue, added dialogue, repetitive speech, jittery movement, awkward "
-    "pauses, incorrect timing, unnatural transitions, inconsistent framing, tilted camera, flat lighting, "
-    "inconsistent tone, cinematic oversaturation, stylized filters, or AI artifacts."
-)
 
 cond_video = load_video(
     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cosmos/cosmos-video2world-input-vid.mp4"
@@ -343,7 +331,7 @@ frame_rate = 24.0
 video, audio = pipe(
     conditions=conditions,
     prompt=prompt,
-    negative_prompt=negative_prompt,
+    negative_prompt=DEFAULT_NEGATIVE_PROMPT,
     width=width,
     height=height,
     num_frames=121,
@@ -366,6 +354,154 @@ encode_video(
 
 Because the conditioning is done via latent frames, the 8 data space frames corresponding to the specified latent frame for an image condition will tend to be static.
 
+## Multimodal Guidance
+
+LTX-2.X pipelines support multimodal guidance. It is composed of three terms, all using a CFG-style update rule:
+
+1. Classifier-Free Guidance (CFG): standard [CFG](https://huggingface.co/papers/2207.12598) where the perturbed ("weaker") output is generated using the negative prompt.
+2. Spatio-Temporal Guidance (STG): [STG](https://huggingface.co/papers/2411.18664) moves away from a perturbed output created by short-circuiting the self-attention operation and substituting the attention values for its output. The idea is that this creates sharper videos and better spatiotemporal consistency.
+3. Modality Isolation Guidance: moves away from a perturbed output created by disabling cross-modality (audio-to-video and video-to-audio) cross attention. This guidance is more specific to [LTX-2.X](https://huggingface.co/papers/2601.03233) models, the idea being that it produces better consistency between the generated audio and video.
+
+These are controlled by the `guidance_scale`, `stg_scale`, and `modality_scale` arguments and can be set separately for video and audio.
Additionally, for STG, the transformer block indices where self-attention is skipped need to be specified via the `spatio_temporal_guidance_blocks` argument. The LTX-2.X pipelines also support [guidance rescaling](https://huggingface.co/papers/2305.08891) to help reduce over-exposure, which can be a problem when the guidance scales are set to high values.
+
+```py
+import torch
+from diffusers import LTX2ImageToVideoPipeline
+from diffusers.pipelines.ltx2.export_utils import encode_video
+from diffusers.pipelines.ltx2.utils import DEFAULT_NEGATIVE_PROMPT
+from diffusers.utils import load_image
+
+device = "cuda"
+width = 768
+height = 512
+random_seed = 42
+frame_rate = 24.0
+generator = torch.Generator(device).manual_seed(random_seed)
+model_path = "dg845/LTX-2.3-Diffusers"
+
+pipe = LTX2ImageToVideoPipeline.from_pretrained(model_path, torch_dtype=torch.bfloat16)
+pipe.enable_sequential_cpu_offload(device=device)
+pipe.vae.enable_tiling()
+
+prompt = (
+    "An astronaut hatches from a fragile egg on the surface of the Moon, the shell cracking and peeling apart in "
+    "gentle low-gravity motion. Fine lunar dust lifts and drifts outward with each movement, floating in slow arcs "
+    "before settling back onto the ground. The astronaut pushes free in a deliberate, weightless motion, small "
+    "fragments of the egg tumbling and spinning through the air. In the background, the deep darkness of space subtly "
+    "shifts as stars glide with the camera's movement, emphasizing vast depth and scale. The camera performs a "
+    "smooth, cinematic slow push-in, with natural parallax between the foreground dust, the astronaut, and the "
+    "distant starfield. Ultra-realistic detail, physically accurate low-gravity motion, cinematic lighting, and a "
+    "breath-taking, movie-like shot."
+)
+
+image = load_image(
+    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/astronaut.jpg",
+)
+
+video, audio = pipe(
+    image=image,
+    prompt=prompt,
+    negative_prompt=DEFAULT_NEGATIVE_PROMPT,
+    width=width,
+    height=height,
+    num_frames=121,
+    frame_rate=frame_rate,
+    num_inference_steps=30,
+    guidance_scale=3.0,  # Recommended LTX-2.3 guidance parameters
+    stg_scale=1.0,  # Note that STG is disabled at 0.0, not 1.0 (the other guidance terms are disabled at 1.0)
+    modality_scale=3.0,
+    guidance_rescale=0.7,
+    audio_guidance_scale=7.0,  # Note that a higher CFG guidance scale is recommended for audio
+    audio_stg_scale=1.0,
+    audio_modality_scale=3.0,
+    audio_guidance_rescale=0.7,
+    spatio_temporal_guidance_blocks=[28],
+    use_cross_timestep=True,
+    generator=generator,
+    output_type="np",
+    return_dict=False,
+)
+
+encode_video(
+    video[0],
+    fps=frame_rate,
+    audio=audio[0].float().cpu(),
+    audio_sample_rate=pipe.vocoder.config.output_sampling_rate,
+    output_path="ltx2_3_i2v_stage_1.mp4",
+)
+```
+
+## Prompt Enhancement
+
+The LTX-2.X models are sensitive to prompting style. Refer to the [official prompting guide](https://ltx.io/model/model-blog/prompting-guide-for-ltx-2) for recommendations on how to write a good prompt. Prompt enhancement, where the supplied prompts are enhanced by the pipeline's text encoder (by default a [Gemma 3](https://huggingface.co/google/gemma-3-12b-it-qat-q4_0-unquantized) model) given a system prompt, can also improve sample quality. The optional `processor` pipeline component needs to be present to use prompt enhancement.
Enable prompt enhancement by supplying a `system_prompt` argument: + + +```py +import torch +from transformers import Gemma3Processor +from diffusers import LTX2Pipeline +from diffusers.pipelines.ltx2.export_utils import encode_video +from diffusers.pipelines.ltx2.utils import DEFAULT_NEGATIVE_PROMPT, T2V_DEFAULT_SYSTEM_PROMPT + +device = "cuda" +width = 768 +height = 512 +random_seed = 42 +frame_rate = 24.0 +generator = torch.Generator(device).manual_seed(random_seed) +model_path = "dg845/LTX-2.3-Diffusers" + +pipe = LTX2Pipeline.from_pretrained(model_path, torch_dtype=torch.bfloat16) +pipe.enable_model_cpu_offload(device=device) +pipe.vae.enable_tiling() +if getattr(pipe, "processor", None) is None: + processor = Gemma3Processor.from_pretrained("google/gemma-3-12b-it-qat-q4_0-unquantized") + pipe.processor = processor + +prompt = ( + "An astronaut hatches from a fragile egg on the surface of the Moon, the shell cracking and peeling apart in " + "gentle low-gravity motion. Fine lunar dust lifts and drifts outward with each movement, floating in slow arcs " + "before settling back onto the ground. The astronaut pushes free in a deliberate, weightless motion, small " + "fragments of the egg tumbling and spinning through the air. In the background, the deep darkness of space subtly " + "shifts as stars glide with the camera's movement, emphasizing vast depth and scale. The camera performs a " + "smooth, cinematic slow push-in, with natural parallax between the foreground dust, the astronaut, and the " + "distant starfield. Ultra-realistic detail, physically accurate low-gravity motion, cinematic lighting, and a " + "breath-taking, movie-like shot." +) + +video, audio = pipe( + prompt=prompt, + negative_prompt=DEFAULT_NEGATIVE_PROMPT, + width=width, + height=height, + num_frames=121, + frame_rate=frame_rate, + num_inference_steps=30, + guidance_scale=3.0, + stg_scale=1.0, + modality_scale=3.0, + guidance_rescale=0.7, + audio_guidance_scale=7.0, + audio_stg_scale=1.0, + audio_modality_scale=3.0, + audio_guidance_rescale=0.7, + spatio_temporal_guidance_blocks=[28], + use_cross_timestep=True, + system_prompt=T2V_DEFAULT_SYSTEM_PROMPT, + generator=generator, + output_type="np", + return_dict=False, +) + +encode_video( + video[0], + fps=frame_rate, + audio=audio[0].float().cpu(), + audio_sample_rate=pipe.vocoder.config.output_sampling_rate, + output_path="ltx2_3_t2v_stage_1.mp4", +) +``` + ## LTX2Pipeline [[autodoc]] LTX2Pipeline diff --git a/src/diffusers/pipelines/ltx2/utils.py b/src/diffusers/pipelines/ltx2/utils.py index f80469817f..52d446c468 100644 --- a/src/diffusers/pipelines/ltx2/utils.py +++ b/src/diffusers/pipelines/ltx2/utils.py @@ -1,6 +1,155 @@ +# Copyright 2026 Lightricks and The HuggingFace Team. All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ # Pre-trained sigma values for distilled model are taken from # https://github.com/Lightricks/LTX-2/blob/main/packages/ltx-pipelines/src/ltx_pipelines/utils/constants.py DISTILLED_SIGMA_VALUES = [1.0, 0.99375, 0.9875, 0.98125, 0.975, 0.909375, 0.725, 0.421875] # Reduced schedule for super-resolution stage 2 (subset of distilled values) STAGE_2_DISTILLED_SIGMA_VALUES = [0.909375, 0.725, 0.421875] + + +# Default negative prompt from +# https://github.com/Lightricks/LTX-2/blob/ae855f8538843825f9015a419cf4ba5edaf5eec2/packages/ltx-pipelines/src/ltx_pipelines/utils/constants.py#L131-L143 +DEFAULT_NEGATIVE_PROMPT = ( + "blurry, out of focus, overexposed, underexposed, low contrast, washed out colors, excessive noise, " + "grainy texture, poor lighting, flickering, motion blur, distorted proportions, unnatural skin tones, " + "deformed facial features, asymmetrical face, missing facial features, extra limbs, disfigured hands, " + "wrong hand count, artifacts around text, inconsistent perspective, camera shake, incorrect depth of " + "field, background too sharp, background clutter, distracting reflections, harsh shadows, inconsistent " + "lighting direction, color banding, cartoonish rendering, 3D CGI look, unrealistic materials, uncanny " + "valley effect, incorrect ethnicity, wrong gender, exaggerated expressions, wrong gaze direction, " + "mismatched lip sync, silent or muted audio, distorted voice, robotic voice, echo, background noise, " + "off-sync audio, incorrect dialogue, added dialogue, repetitive speech, jittery movement, awkward " + "pauses, incorrect timing, unnatural transitions, inconsistent framing, tilted camera, flat lighting, " + "inconsistent tone, cinematic oversaturation, stylized filters, or AI artifacts." +) + + +# System prompts for prompt enhancement +# https://github.com/Lightricks/LTX-2/blob/ae855f8538843825f9015a419cf4ba5edaf5eec2/packages/ltx-core/src/ltx_core/text_encoders/gemma/encoders/prompts/gemma_t2v_system_prompt.txt#L1 +# Disable line-too-long rule in ruff to keep the prompts exactly the same (e.g. in terms of newlines) +# Supported in ruff>=0.15.0 +# ruff: disable[E501] +T2V_DEFAULT_SYSTEM_PROMPT = """ +You are a Creative Assistant. Given a user's raw input prompt describing a scene or concept, expand it into a detailed +video generation prompt with specific visuals and integrated audio to guide a text-to-video model. + +#### Guidelines +- Strictly follow all aspects of the user's raw input: include every element requested (style, visuals, motions, + actions, camera movement, audio). + - If the input is vague, invent concrete details: lighting, textures, materials, scene settings, etc. + - For characters: describe gender, clothing, hair, expressions. DO NOT invent unrequested characters. +- Use active language: present-progressive verbs ("is walking," "speaking"). If no action specified, describe natural + movements. +- Maintain chronological flow: use temporal connectors ("as," "then," "while"). +- Audio layer: Describe complete soundscape (background audio, ambient sounds, SFX, speech/music when requested). + Integrate sounds chronologically alongside actions. Be specific (e.g., "soft footsteps on tile"), not vague (e.g., + "ambient sound is present"). +- Speech (only when requested): + - For ANY speech-related input (talking, conversation, singing, etc.), ALWAYS include exact words in quotes with + voice characteristics (e.g., "The man says in an excited voice: 'You won't believe what I just saw!'"). 
+ - Specify language if not English and accent if relevant. +- Style: Include visual style at the beginning: "Style: