diff --git a/.ai/review-rules.md b/.ai/review-rules.md
new file mode 100644
index 0000000000..12efc94c4b
--- /dev/null
+++ b/.ai/review-rules.md
@@ -0,0 +1,11 @@
+# PR Review Rules
+
+Review-specific rules for Claude. Focus on correctness — style is handled by ruff.
+
+Before reviewing, read and apply the guidelines in:
+- [AGENTS.md](AGENTS.md) — coding style, dependencies, copied code, model conventions
+- [skills/model-integration/SKILL.md](skills/model-integration/SKILL.md) — attention pattern, pipeline rules, implementation checklist, gotchas
+- [skills/parity-testing/SKILL.md](skills/parity-testing/SKILL.md) — testing rules, comparison utilities
+- [skills/parity-testing/pitfalls.md](skills/parity-testing/pitfalls.md) — known pitfalls (dtype mismatches, config assumptions, etc.)
+
+## Common mistakes (add new rules below this line)
diff --git a/.github/workflows/claude_review.yml b/.github/workflows/claude_review.yml
new file mode 100644
index 0000000000..82baa7980d
--- /dev/null
+++ b/.github/workflows/claude_review.yml
@@ -0,0 +1,42 @@
+name: Claude PR Review
+
+on:
+ issue_comment:
+ types: [created]
+ pull_request_review_comment:
+ types: [created]
+
+permissions:
+ contents: write
+ pull-requests: write
+ issues: read
+ id-token: write
+
+jobs:
+ claude-review:
+ if: |
+ (
+ github.event_name == 'issue_comment' &&
+ github.event.issue.pull_request &&
+ github.event.issue.state == 'open' &&
+ contains(github.event.comment.body, '@claude') &&
+ (github.event.comment.author_association == 'MEMBER' ||
+ github.event.comment.author_association == 'OWNER' ||
+ github.event.comment.author_association == 'COLLABORATOR')
+ ) || (
+ github.event_name == 'pull_request_review_comment' &&
+ contains(github.event.comment.body, '@claude') &&
+ (github.event.comment.author_association == 'MEMBER' ||
+ github.event.comment.author_association == 'OWNER' ||
+ github.event.comment.author_association == 'COLLABORATOR')
+ )
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v4
+ with:
+ fetch-depth: 1
+ - uses: anthropics/claude-code-action@v1
+ with:
+ anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
+ claude_args: |
+ --append-system-prompt "Review this PR against the rules in .ai/review-rules.md. Focus on correctness, not style (ruff handles style). Only review changes under src/diffusers/. Do NOT commit changes unless the comment explicitly asks you to using the phrase 'commit this'."
diff --git a/docs/source/en/api/pipelines/cogvideox.md b/docs/source/en/api/pipelines/cogvideox.md
index ec673e0763..b296bbe255 100644
--- a/docs/source/en/api/pipelines/cogvideox.md
+++ b/docs/source/en/api/pipelines/cogvideox.md
@@ -41,16 +41,15 @@ The quantized CogVideoX 5B model below requires ~16GB of VRAM.
```py
import torch
-from diffusers import CogVideoXPipeline, AutoModel
+from diffusers import CogVideoXPipeline, AutoModel, TorchAoConfig
from diffusers.quantizers import PipelineQuantizationConfig
from diffusers.hooks import apply_group_offloading
from diffusers.utils import export_to_video
+from torchao.quantization import Int8WeightOnlyConfig
# quantize weights to int8 with torchao
pipeline_quant_config = PipelineQuantizationConfig(
- quant_backend="torchao",
- quant_kwargs={"quant_type": "int8wo"},
- components_to_quantize="transformer"
+ quant_mapping={"transformer": TorchAoConfig(Int8WeightOnlyConfig())}
)
# fp8 layerwise weight-casting
diff --git a/docs/source/en/api/pipelines/ltx2.md b/docs/source/en/api/pipelines/ltx2.md
index 85b0f96918..bcddd40e66 100644
--- a/docs/source/en/api/pipelines/ltx2.md
+++ b/docs/source/en/api/pipelines/ltx2.md
@@ -18,7 +18,7 @@
-LTX-2 is a DiT-based audio-video foundation model designed to generate synchronized video and audio within a single model. It brings together the core building blocks of modern video generation, with open weights and a focus on practical, local execution.
+[LTX-2](https://hf.co/papers/2601.03233) is a DiT-based foundation model designed to generate synchronized video and audio within a single model. It brings together the core building blocks of modern video generation, with open weights and a focus on practical, local execution.
You can find all the original LTX-Video checkpoints under the [Lightricks](https://huggingface.co/Lightricks) organization.
@@ -293,6 +293,7 @@ import torch
from diffusers import LTX2ConditionPipeline
from diffusers.pipelines.ltx2.pipeline_ltx2_condition import LTX2VideoCondition
from diffusers.pipelines.ltx2.export_utils import encode_video
+from diffusers.pipelines.ltx2.utils import DEFAULT_NEGATIVE_PROMPT
from diffusers.utils import load_image, load_video
device = "cuda"
@@ -315,19 +316,6 @@ prompt = (
"landscape is characterized by rugged terrain and a river visible in the distance. The scene captures the "
"solitude and beauty of a winter drive through a mountainous region."
)
-negative_prompt = (
- "blurry, out of focus, overexposed, underexposed, low contrast, washed out colors, excessive noise, "
- "grainy texture, poor lighting, flickering, motion blur, distorted proportions, unnatural skin tones, "
- "deformed facial features, asymmetrical face, missing facial features, extra limbs, disfigured hands, "
- "wrong hand count, artifacts around text, inconsistent perspective, camera shake, incorrect depth of "
- "field, background too sharp, background clutter, distracting reflections, harsh shadows, inconsistent "
- "lighting direction, color banding, cartoonish rendering, 3D CGI look, unrealistic materials, uncanny "
- "valley effect, incorrect ethnicity, wrong gender, exaggerated expressions, wrong gaze direction, "
- "mismatched lip sync, silent or muted audio, distorted voice, robotic voice, echo, background noise, "
- "off-sync audio, incorrect dialogue, added dialogue, repetitive speech, jittery movement, awkward "
- "pauses, incorrect timing, unnatural transitions, inconsistent framing, tilted camera, flat lighting, "
- "inconsistent tone, cinematic oversaturation, stylized filters, or AI artifacts."
-)
cond_video = load_video(
"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cosmos/cosmos-video2world-input-vid.mp4"
@@ -343,7 +331,7 @@ frame_rate = 24.0
video, audio = pipe(
conditions=conditions,
prompt=prompt,
- negative_prompt=negative_prompt,
+ negative_prompt=DEFAULT_NEGATIVE_PROMPT,
width=width,
height=height,
num_frames=121,
@@ -366,6 +354,154 @@ encode_video(
Because the conditioning is done via latent frames, the 8 data space frames corresponding to the specified latent frame for an image condition will tend to be static.
+## Multimodal Guidance
+
+LTX-2.X pipelines support multimodal guidance, which is composed of three terms, all using a CFG-style update rule:
+
+1. Classifier-Free Guidance (CFG): standard [CFG](https://huggingface.co/papers/2207.12598) where the perturbed ("weaker") output is generated using the negative prompt.
+2. Spatio-Temporal Guidance (STG): [STG](https://huggingface.co/papers/2411.18664) moves away from a perturbed output created by short-cutting the self-attention operation in selected transformer blocks, substituting the value projections for the attention output. The idea is that this produces sharper videos and better spatiotemporal consistency.
+3. Modality Isolation Guidance: moves away from a perturbed output created by disabling cross-modality (audio-to-video and video-to-audio) cross-attention. This term is specific to [LTX-2.X](https://huggingface.co/papers/2601.03233) models and is intended to improve consistency between the generated audio and video.
+
+These are controlled by the `guidance_scale`, `stg_scale`, and `modality_scale` arguments and can be set separately for video and audio. Additionally, for STG the transformer block indices where self-attention is skipped need to be specified via the `spatio_temporal_guidance_blocks` argument. The LTX-2.X pipelines also support [guidance rescaling](https://huggingface.co/papers/2305.08891) to help reduce over-exposure, which can be a problem when the guidance scales are set to high values.
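The interplay of the three scales can be sketched as follows. This is an illustrative NumPy combination consistent with the conventions above (the CFG and modality terms are disabled at 1.0, the STG term at 0.0), not the pipeline's exact implementation; the variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
cond = rng.normal(size=(4, 8))      # prediction from the full conditional pass
uncond = rng.normal(size=(4, 8))    # prediction with the negative prompt (CFG)
stg_pert = rng.normal(size=(4, 8))  # prediction with self-attention short-cut (STG)
mod_pert = rng.normal(size=(4, 8))  # prediction with cross-modality attention disabled

def combine(cond, uncond, stg_pert, mod_pert,
            cfg_scale, stg_scale, modality_scale, guidance_rescale=0.0):
    out = (
        cond
        + (cfg_scale - 1.0) * (cond - uncond)         # CFG term, disabled at 1.0
        + stg_scale * (cond - stg_pert)               # STG term, disabled at 0.0
        + (modality_scale - 1.0) * (cond - mod_pert)  # modality term, disabled at 1.0
    )
    if guidance_rescale > 0.0:
        # Guidance rescaling: pull the guided output's std back toward cond's std
        rescaled = out * (cond.std() / out.std())
        out = guidance_rescale * rescaled + (1.0 - guidance_rescale) * out
    return out

# With every scale at its "disabled" value, the output is just `cond`
assert np.allclose(combine(cond, uncond, stg_pert, mod_pert, 1.0, 0.0, 1.0), cond)
```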
+
+```py
+import torch
+from diffusers import LTX2ImageToVideoPipeline
+from diffusers.pipelines.ltx2.export_utils import encode_video
+from diffusers.pipelines.ltx2.utils import DEFAULT_NEGATIVE_PROMPT
+from diffusers.utils import load_image
+
+device = "cuda"
+width = 768
+height = 512
+random_seed = 42
+frame_rate = 24.0
+generator = torch.Generator(device).manual_seed(random_seed)
+model_path = "dg845/LTX-2.3-Diffusers"
+
+pipe = LTX2ImageToVideoPipeline.from_pretrained(model_path, torch_dtype=torch.bfloat16)
+pipe.enable_sequential_cpu_offload(device=device)
+pipe.vae.enable_tiling()
+
+prompt = (
+ "An astronaut hatches from a fragile egg on the surface of the Moon, the shell cracking and peeling apart in "
+ "gentle low-gravity motion. Fine lunar dust lifts and drifts outward with each movement, floating in slow arcs "
+ "before settling back onto the ground. The astronaut pushes free in a deliberate, weightless motion, small "
+ "fragments of the egg tumbling and spinning through the air. In the background, the deep darkness of space subtly "
+ "shifts as stars glide with the camera's movement, emphasizing vast depth and scale. The camera performs a "
+ "smooth, cinematic slow push-in, with natural parallax between the foreground dust, the astronaut, and the "
+ "distant starfield. Ultra-realistic detail, physically accurate low-gravity motion, cinematic lighting, and a "
+ "breath-taking, movie-like shot."
+)
+
+image = load_image(
+ "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/astronaut.jpg",
+)
+
+video, audio = pipe(
+ image=image,
+ prompt=prompt,
+ negative_prompt=DEFAULT_NEGATIVE_PROMPT,
+ width=width,
+ height=height,
+ num_frames=121,
+ frame_rate=frame_rate,
+ num_inference_steps=30,
+ guidance_scale=3.0, # Recommended LTX-2.3 guidance parameters
+ stg_scale=1.0, # Note: 0.0 (not 1.0) disables STG; the other guidance terms are disabled at 1.0
+ modality_scale=3.0,
+ guidance_rescale=0.7,
+ audio_guidance_scale=7.0, # Note that a higher CFG guidance scale is recommended for audio
+ audio_stg_scale=1.0,
+ audio_modality_scale=3.0,
+ audio_guidance_rescale=0.7,
+ spatio_temporal_guidance_blocks=[28],
+ use_cross_timestep=True,
+ generator=generator,
+ output_type="np",
+ return_dict=False,
+)
+
+encode_video(
+ video[0],
+ fps=frame_rate,
+ audio=audio[0].float().cpu(),
+ audio_sample_rate=pipe.vocoder.config.output_sampling_rate,
+ output_path="ltx2_3_i2v_stage_1.mp4",
+)
+```
+
+## Prompt Enhancement
+
+The LTX-2.X models are sensitive to prompting style. Refer to the [official prompting guide](https://ltx.io/model/model-blog/prompting-guide-for-ltx-2) for recommendations on how to write a good prompt. Prompt enhancement, which rewrites the supplied prompt with the pipeline's text encoder (by default a [Gemma 3](https://huggingface.co/google/gemma-3-12b-it-qat-q4_0-unquantized) model) according to a system prompt, can also improve sample quality. The optional `processor` pipeline component needs to be present to use prompt enhancement. Enable it by supplying a `system_prompt` argument:
+
+```py
+import torch
+from transformers import Gemma3Processor
+from diffusers import LTX2Pipeline
+from diffusers.pipelines.ltx2.export_utils import encode_video
+from diffusers.pipelines.ltx2.utils import DEFAULT_NEGATIVE_PROMPT, T2V_DEFAULT_SYSTEM_PROMPT
+
+device = "cuda"
+width = 768
+height = 512
+random_seed = 42
+frame_rate = 24.0
+generator = torch.Generator(device).manual_seed(random_seed)
+model_path = "dg845/LTX-2.3-Diffusers"
+
+pipe = LTX2Pipeline.from_pretrained(model_path, torch_dtype=torch.bfloat16)
+pipe.enable_model_cpu_offload(device=device)
+pipe.vae.enable_tiling()
+if getattr(pipe, "processor", None) is None:
+ processor = Gemma3Processor.from_pretrained("google/gemma-3-12b-it-qat-q4_0-unquantized")
+ pipe.processor = processor
+
+prompt = (
+ "An astronaut hatches from a fragile egg on the surface of the Moon, the shell cracking and peeling apart in "
+ "gentle low-gravity motion. Fine lunar dust lifts and drifts outward with each movement, floating in slow arcs "
+ "before settling back onto the ground. The astronaut pushes free in a deliberate, weightless motion, small "
+ "fragments of the egg tumbling and spinning through the air. In the background, the deep darkness of space subtly "
+ "shifts as stars glide with the camera's movement, emphasizing vast depth and scale. The camera performs a "
+ "smooth, cinematic slow push-in, with natural parallax between the foreground dust, the astronaut, and the "
+ "distant starfield. Ultra-realistic detail, physically accurate low-gravity motion, cinematic lighting, and a "
+ "breath-taking, movie-like shot."
+)
+
+video, audio = pipe(
+ prompt=prompt,
+ negative_prompt=DEFAULT_NEGATIVE_PROMPT,
+ width=width,
+ height=height,
+ num_frames=121,
+ frame_rate=frame_rate,
+ num_inference_steps=30,
+ guidance_scale=3.0,
+ stg_scale=1.0,
+ modality_scale=3.0,
+ guidance_rescale=0.7,
+ audio_guidance_scale=7.0,
+ audio_stg_scale=1.0,
+ audio_modality_scale=3.0,
+ audio_guidance_rescale=0.7,
+ spatio_temporal_guidance_blocks=[28],
+ use_cross_timestep=True,
+ system_prompt=T2V_DEFAULT_SYSTEM_PROMPT,
+ generator=generator,
+ output_type="np",
+ return_dict=False,
+)
+
+encode_video(
+ video[0],
+ fps=frame_rate,
+ audio=audio[0].float().cpu(),
+ audio_sample_rate=pipe.vocoder.config.output_sampling_rate,
+ output_path="ltx2_3_t2v_stage_1.mp4",
+)
+```
+
## LTX2Pipeline
[[autodoc]] LTX2Pipeline
diff --git a/docs/source/en/optimization/fp16.md b/docs/source/en/optimization/fp16.md
index 941f53604c..0e427d3a0a 100644
--- a/docs/source/en/optimization/fp16.md
+++ b/docs/source/en/optimization/fp16.md
@@ -248,6 +248,24 @@ Refer to the [diffusers/benchmarks](https://huggingface.co/datasets/diffusers/be
The [diffusers-torchao](https://github.com/sayakpaul/diffusers-torchao#benchmarking-results) repository also contains benchmarking results for compiled versions of Flux and CogVideoX.
+## Kernels
+
+[Kernels](https://huggingface.co/docs/kernels/index) is a library for building, distributing, and loading optimized compute kernels on the [Hub](https://huggingface.co/kernels-community). It supports [attention](./attention_backends#set_attention_backend) kernels and custom CUDA kernels for operations like RMSNorm, GEGLU, RoPE, and AdaLN.
+
+The [Diffusers Pipeline Integration](https://github.com/huggingface/kernels/blob/main/skills/cuda-kernels/references/diffusers-integration.md) guide shows how to integrate a kernel using the [add cuda-kernels](https://github.com/huggingface/kernels/blob/main/skills/cuda-kernels/SKILL.md) skill. The skill enables an agent, such as Claude or Codex, to write custom kernels targeted at a specific model and your hardware.
+
+> [!TIP]
+> Install the [add cuda-kernels](https://github.com/huggingface/kernels/blob/main/skills/cuda-kernels/SKILL.md) skill to teach an agent how to write a kernel. The [Custom kernels for all from Codex and Claude](https://huggingface.co/blog/custom-cuda-kernels-agent-skills) blog post covers this in more detail.
+
+For example, a custom RMSNorm kernel (generated by the `add cuda-kernels` skill) combined with [torch.compile](#torchcompile) speeds up LTX-Video generation by 1.43x on an H100.
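For reference, the operation such a kernel accelerates is simple to state. The sketch below shows RMSNorm in plain NumPy (illustrative only; a custom kernel fuses this into a single GPU launch instead of several elementwise ops):

```python
import numpy as np

def rms_norm(x, weight, eps=1e-6):
    # Normalize each row by its root-mean-square, then apply a learned scale
    rms = np.sqrt(np.mean(x * x, axis=-1, keepdims=True) + eps)
    return (x / rms) * weight

x = np.random.default_rng(0).normal(size=(2, 4)).astype(np.float32)
y = rms_norm(x, np.ones(4, dtype=np.float32))

# After normalization each row has (approximately) unit RMS
assert np.allclose(np.sqrt(np.mean(y * y, axis=-1)), 1.0, atol=1e-3)
```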
+
+
## Dynamic quantization
[Dynamic quantization](https://pytorch.org/tutorials/recipes/recipes/dynamic_quantization.html) improves inference speed by reducing precision to enable faster math operations. This particular type of quantization determines how to scale the activations based on the data at runtime rather than using a fixed scaling factor. As a result, the scaling factor is more accurately aligned with the data.
diff --git a/docs/source/en/quantization/torchao.md b/docs/source/en/quantization/torchao.md
index de90c3006e..1fdcb7879a 100644
--- a/docs/source/en/quantization/torchao.md
+++ b/docs/source/en/quantization/torchao.md
@@ -29,24 +29,7 @@ from diffusers import DiffusionPipeline, PipelineQuantizationConfig, TorchAoConf
from torchao.quantization import Int8WeightOnlyConfig
pipeline_quant_config = PipelineQuantizationConfig(
- quant_mapping={"transformer": TorchAoConfig(Int8WeightOnlyConfig(group_size=128)))}
-)
-pipeline = DiffusionPipeline.from_pretrained(
- "black-forest-labs/FLUX.1-dev",
- quantization_config=pipeline_quant_config,
- torch_dtype=torch.bfloat16,
- device_map="cuda"
-)
-```
-
-For simple use cases, you could also provide a string identifier in [`TorchAo`] as shown below.
-
-```py
-import torch
-from diffusers import DiffusionPipeline, PipelineQuantizationConfig, TorchAoConfig
-
-pipeline_quant_config = PipelineQuantizationConfig(
- quant_mapping={"transformer": TorchAoConfig("int8wo")}
+ quant_mapping={"transformer": TorchAoConfig(Int8WeightOnlyConfig(group_size=128, version=2))}
)
pipeline = DiffusionPipeline.from_pretrained(
"black-forest-labs/FLUX.1-dev",
@@ -91,18 +74,15 @@ Weight-only quantization stores the model weights in a specific low-bit data typ
Dynamic activation quantization stores the model weights in a low-bit dtype, while also quantizing the activations on-the-fly to save additional memory. This lowers the memory requirements from model weights, while also lowering the memory overhead from activation computations. However, this may come at a quality tradeoff at times, so it is recommended to test different models thoroughly.
-The quantization methods supported are as follows:
+Refer to the [official torchao documentation](https://docs.pytorch.org/ao/stable/index.html) for a better understanding of the available quantization methods. An exhaustive list of configuration options is available [here](https://docs.pytorch.org/ao/main/workflows/inference.html#inference-workflows).
-| **Category** | **Full Function Names** | **Shorthands** |
-|--------------|-------------------------|----------------|
-| **Integer quantization** | `int4_weight_only`, `int8_dynamic_activation_int4_weight`, `int8_weight_only`, `int8_dynamic_activation_int8_weight` | `int4wo`, `int4dq`, `int8wo`, `int8dq` |
-| **Floating point 8-bit quantization** | `float8_weight_only`, `float8_dynamic_activation_float8_weight`, `float8_static_activation_float8_weight` | `float8wo`, `float8wo_e5m2`, `float8wo_e4m3`, `float8dq`, `float8dq_e4m3`, `float8dq_e4m3_tensor`, `float8dq_e4m3_row` |
-| **Floating point X-bit quantization** | `fpx_weight_only` | `fpX_eAwB` where `X` is the number of bits (1-7), `A` is exponent bits, and `B` is mantissa bits. Constraint: `X == A + B + 1` |
-| **Unsigned Integer quantization** | `uintx_weight_only` | `uint1wo`, `uint2wo`, `uint3wo`, `uint4wo`, `uint5wo`, `uint6wo`, `uint7wo` |
+Some popular quantization configurations are as follows:
-Some quantization methods are aliases (for example, `int8wo` is the commonly used shorthand for `int8_weight_only`). This allows using the quantization methods described in the torchao docs as-is, while also making it convenient to remember their shorthand notations.
-
-Refer to the [official torchao documentation](https://docs.pytorch.org/ao/stable/index.html) for a better understanding of the available quantization methods and the exhaustive list of configuration options available.
+| **Category** | **Configuration Classes** |
+|---|---|
+| **Integer quantization** | [`Int4WeightOnlyConfig`](https://docs.pytorch.org/ao/stable/api_reference/generated/torchao.quantization.Int4WeightOnlyConfig.html), [`Int8WeightOnlyConfig`](https://docs.pytorch.org/ao/stable/api_reference/generated/torchao.quantization.Int8WeightOnlyConfig.html), [`Int8DynamicActivationInt8WeightConfig`](https://docs.pytorch.org/ao/stable/api_reference/generated/torchao.quantization.Int8DynamicActivationInt8WeightConfig.html) |
+| **Floating point 8-bit quantization** | [`Float8WeightOnlyConfig`](https://docs.pytorch.org/ao/stable/api_reference/generated/torchao.quantization.Float8WeightOnlyConfig.html), [`Float8DynamicActivationFloat8WeightConfig`](https://docs.pytorch.org/ao/stable/api_reference/generated/torchao.quantization.Float8DynamicActivationFloat8WeightConfig.html) |
+| **Unsigned integer quantization** | [`IntxWeightOnlyConfig`](https://docs.pytorch.org/ao/stable/api_reference/generated/torchao.quantization.IntxWeightOnlyConfig.html) |
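As a numeric illustration of what weight-only quantization stores, the sketch below quantizes a tensor to int8 with a single symmetric per-tensor scale. This is the general idea only, not torchao's exact algorithm (which also supports per-group scales and other schemes):

```python
import numpy as np

rng = np.random.default_rng(0)

# Weight-only quantization: store int8 weights plus one fp32 scale per tensor
w = rng.normal(size=(128,)).astype(np.float32)
w_scale = np.abs(w).max() / 127.0
w_q = np.round(w / w_scale).astype(np.int8)  # stored as int8 (4x smaller than fp32)
w_dq = w_q.astype(np.float32) * w_scale      # dequantized on the fly at matmul time

# Symmetric rounding keeps the reconstruction error within half a quantization step
assert np.abs(w_dq - w).max() <= w_scale / 2 + 1e-6
```

Dynamic activation quantization applies the same idea to the activations as well, but computes their scales at runtime from the data flowing through the model.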
## Serializing and Deserializing quantized models
@@ -111,8 +91,9 @@ To serialize a quantized model in a given dtype, first load the model with the d
```python
import torch
from diffusers import AutoModel, TorchAoConfig
+from torchao.quantization import Int8WeightOnlyConfig
-quantization_config = TorchAoConfig("int8wo")
+quantization_config = TorchAoConfig(Int8WeightOnlyConfig())
transformer = AutoModel.from_pretrained(
"black-forest-labs/Flux.1-Dev",
subfolder="transformer",
@@ -137,18 +118,19 @@ image = pipe(prompt, num_inference_steps=30, guidance_scale=7.0).images[0]
image.save("output.png")
```
-If you are using `torch<=2.6.0`, some quantization methods, such as `uint4wo`, cannot be loaded directly and may result in an `UnpicklingError` when trying to load the models, but work as expected when saving them. In order to work around this, one can load the state dict manually into the model. Note, however, that this requires using `weights_only=False` in `torch.load`, so it should be run only if the weights were obtained from a trustable source.
+If you are using `torch<=2.6.0`, some quantization methods, such as `uint4` weight-only, cannot be loaded directly and may result in an `UnpicklingError` when trying to load the models, but work as expected when saving them. To work around this, load the state dict manually into the model. Note, however, that this requires using `weights_only=False` in `torch.load`, so it should only be done if the weights were obtained from a trusted source.
```python
import torch
from accelerate import init_empty_weights
from diffusers import FluxPipeline, AutoModel, TorchAoConfig
+from torchao.quantization import IntxWeightOnlyConfig
# Serialize the model
transformer = AutoModel.from_pretrained(
"black-forest-labs/Flux.1-Dev",
subfolder="transformer",
- quantization_config=TorchAoConfig("uint4wo"),
+ quantization_config=TorchAoConfig(IntxWeightOnlyConfig(dtype=torch.uint4)),
torch_dtype=torch.bfloat16,
)
transformer.save_pretrained("/path/to/flux_uint4wo", safe_serialization=False, max_shard_size="50GB")
diff --git a/examples/flux-control/train_control_flux.py b/examples/flux-control/train_control_flux.py
index c5f93fa2e9..5c81775103 100644
--- a/examples/flux-control/train_control_flux.py
+++ b/examples/flux-control/train_control_flux.py
@@ -1105,7 +1105,7 @@ def main(args):
# text encoding.
captions = batch["captions"]
- text_encoding_pipeline = text_encoding_pipeline.to("cuda")
+ text_encoding_pipeline = text_encoding_pipeline.to(accelerator.device)
with torch.no_grad():
prompt_embeds, pooled_prompt_embeds, text_ids = text_encoding_pipeline.encode_prompt(
captions, prompt_2=None
diff --git a/examples/flux-control/train_control_lora_flux.py b/examples/flux-control/train_control_lora_flux.py
index f5d3c822b3..f372284d7a 100644
--- a/examples/flux-control/train_control_lora_flux.py
+++ b/examples/flux-control/train_control_lora_flux.py
@@ -1251,7 +1251,7 @@ def main(args):
# text encoding.
captions = batch["captions"]
- text_encoding_pipeline = text_encoding_pipeline.to("cuda")
+ text_encoding_pipeline = text_encoding_pipeline.to(accelerator.device)
with torch.no_grad():
prompt_embeds, pooled_prompt_embeds, text_ids = text_encoding_pipeline.encode_prompt(
captions, prompt_2=None
diff --git a/src/diffusers/pipelines/ltx2/utils.py b/src/diffusers/pipelines/ltx2/utils.py
index f80469817f..52d446c468 100644
--- a/src/diffusers/pipelines/ltx2/utils.py
+++ b/src/diffusers/pipelines/ltx2/utils.py
@@ -1,6 +1,155 @@
+# Copyright 2026 Lightricks and The HuggingFace Team. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
# Pre-trained sigma values for distilled model are taken from
# https://github.com/Lightricks/LTX-2/blob/main/packages/ltx-pipelines/src/ltx_pipelines/utils/constants.py
DISTILLED_SIGMA_VALUES = [1.0, 0.99375, 0.9875, 0.98125, 0.975, 0.909375, 0.725, 0.421875]
# Reduced schedule for super-resolution stage 2 (subset of distilled values)
STAGE_2_DISTILLED_SIGMA_VALUES = [0.909375, 0.725, 0.421875]
+
+
+# Default negative prompt from
+# https://github.com/Lightricks/LTX-2/blob/ae855f8538843825f9015a419cf4ba5edaf5eec2/packages/ltx-pipelines/src/ltx_pipelines/utils/constants.py#L131-L143
+DEFAULT_NEGATIVE_PROMPT = (
+ "blurry, out of focus, overexposed, underexposed, low contrast, washed out colors, excessive noise, "
+ "grainy texture, poor lighting, flickering, motion blur, distorted proportions, unnatural skin tones, "
+ "deformed facial features, asymmetrical face, missing facial features, extra limbs, disfigured hands, "
+ "wrong hand count, artifacts around text, inconsistent perspective, camera shake, incorrect depth of "
+ "field, background too sharp, background clutter, distracting reflections, harsh shadows, inconsistent "
+ "lighting direction, color banding, cartoonish rendering, 3D CGI look, unrealistic materials, uncanny "
+ "valley effect, incorrect ethnicity, wrong gender, exaggerated expressions, wrong gaze direction, "
+ "mismatched lip sync, silent or muted audio, distorted voice, robotic voice, echo, background noise, "
+ "off-sync audio, incorrect dialogue, added dialogue, repetitive speech, jittery movement, awkward "
+ "pauses, incorrect timing, unnatural transitions, inconsistent framing, tilted camera, flat lighting, "
+ "inconsistent tone, cinematic oversaturation, stylized filters, or AI artifacts."
+)
+
+
+# System prompts for prompt enhancement
+# https://github.com/Lightricks/LTX-2/blob/ae855f8538843825f9015a419cf4ba5edaf5eec2/packages/ltx-core/src/ltx_core/text_encoders/gemma/encoders/prompts/gemma_t2v_system_prompt.txt#L1
+# Disable line-too-long rule in ruff to keep the prompts exactly the same (e.g. in terms of newlines)
+# Supported in ruff>=0.15.0
+# ruff: disable[E501]
+T2V_DEFAULT_SYSTEM_PROMPT = """
+You are a Creative Assistant. Given a user's raw input prompt describing a scene or concept, expand it into a detailed
+video generation prompt with specific visuals and integrated audio to guide a text-to-video model.
+
+#### Guidelines
+- Strictly follow all aspects of the user's raw input: include every element requested (style, visuals, motions,
+ actions, camera movement, audio).
+ - If the input is vague, invent concrete details: lighting, textures, materials, scene settings, etc.
+ - For characters: describe gender, clothing, hair, expressions. DO NOT invent unrequested characters.
+- Use active language: present-progressive verbs ("is walking," "speaking"). If no action specified, describe natural
+ movements.
+- Maintain chronological flow: use temporal connectors ("as," "then," "while").
+- Audio layer: Describe complete soundscape (background audio, ambient sounds, SFX, speech/music when requested).
+ Integrate sounds chronologically alongside actions. Be specific (e.g., "soft footsteps on tile"), not vague (e.g.,
+ "ambient sound is present").
+- Speech (only when requested):
+ - For ANY speech-related input (talking, conversation, singing, etc.), ALWAYS include exact words in quotes with
+ voice characteristics (e.g., "The man says in an excited voice: 'You won't believe what I just saw!'").
+ - Specify language if not English and accent if relevant.
+- Style: Include visual style at the beginning: "Style: