Compare commits


1 Commits

Author SHA1 Message Date
DN6
a224a17733 update 2026-03-30 17:05:11 +05:30
15 changed files with 113 additions and 637 deletions

View File

@@ -10,34 +10,24 @@ Strive to write code as simple and explicit as possible.
---
## Code formatting
### Dependencies
- No new mandatory dependency without discussion (e.g. `einops`)
- Optional deps guarded with `is_X_available()` and a dummy in `utils/dummy_*.py`
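A minimal sketch of the guard pattern, using the existing `is_scipy_available` helper as a stand-in (a new optional dependency would get its own `is_X_available()` check in `utils/import_utils.py` plus dummy objects in `utils/dummy_*.py`):
```python
# Sketch only: reuses the real is_scipy_available helper to show the shape of the guard.
# A new optional dependency would follow the same pattern with its own helper and dummies.
from diffusers.utils import is_scipy_available

if is_scipy_available():
    import scipy
else:
    scipy = None  # library code would instead expose a dummy object that raises a helpful ImportError
```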
## Code formatting
- `make style` and `make fix-copies` should be run as the final step before opening a PR
### Copied Code
- Many classes are kept in sync with a source via a `# Copied from ...` header comment
- Do not edit a `# Copied from` block directly — run `make fix-copies` to propagate changes from the source
- Remove the header to intentionally break the link
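For illustration, a hedged example of what a linked block looks like (the source path here is only illustrative):
```python
# Copied from diffusers.models.transformers.transformer_flux.FluxTransformerBlock.forward
def forward(self, hidden_states):
    return hidden_states
```
Running `make fix-copies` rewrites the body of such a function from the referenced source, so hand edits inside it are overwritten unless the header is removed.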
### Models
- See [models.md](models.md) for model conventions, attention pattern, implementation rules, dependencies, and gotchas.
- See the [model-integration](./skills/model-integration/SKILL.md) skill for the full integration workflow, file structure, test setup, and other details.
### Pipelines & Schedulers
- Pipelines inherit from `DiffusionPipeline`
- Schedulers use `SchedulerMixin` with `ConfigMixin`
- Use `@torch.no_grad()` on pipeline `__call__`
- Support `output_type="latent"` for skipping VAE decode
- Support `generator` parameter for reproducibility
- Use `self.progress_bar(timesteps)` for progress tracking
- Don't subclass an existing pipeline for a variant — DO NOT use an existing pipeline class (e.g., `FluxPipeline`) to override another pipeline (e.g., `FluxImg2ImgPipeline`) that will be part of the core codebase (`src`)
- All layer calls should be visible directly in `forward` — avoid helper functions that hide `nn.Module` calls.
- Avoid graph breaks for `torch.compile` compatibility — do not insert NumPy operations, or any other patterns that break `torch.compile` with `fullgraph=True`, into forward implementations.
- See the **model-integration** skill for the attention pattern, pipeline rules, test setup instructions, and other important details.
## Skills
Task-specific guides live in `.ai/skills/` and are loaded on demand by AI agents. Available skills include:
- [model-integration](./skills/model-integration/SKILL.md) (adding/converting pipelines)
- [parity-testing](./skills/parity-testing/SKILL.md) (debugging numerical parity).
Task-specific guides live in `.ai/skills/` and are loaded on demand by AI agents.
Available skills: **model-integration** (adding/converting pipelines), **parity-testing** (debugging numerical parity).

View File

@@ -1,76 +0,0 @@
# Model conventions and rules
Shared reference for model-related conventions, patterns, and gotchas.
Linked from `AGENTS.md`, `skills/model-integration/SKILL.md`, and `review-rules.md`.
## Coding style
- All layer calls should be visible directly in `forward` — avoid helper functions that hide `nn.Module` calls.
- Avoid graph breaks for `torch.compile` compatibility — do not insert NumPy operations, or any other patterns that break `torch.compile` with `fullgraph=True`, into forward implementations.
- No new mandatory dependency without discussion (e.g. `einops`). Optional deps guarded with `is_X_available()` and a dummy in `utils/dummy_*.py`.
## Common model conventions
- Models use `ModelMixin` with `register_to_config` for config serialization
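A minimal sketch of this convention with a hypothetical model (not from the codebase); `register_to_config` records the `__init__` arguments so `save_pretrained`/`from_pretrained` can round-trip them:
```python
import torch.nn as nn

from diffusers import ModelMixin
from diffusers.configuration_utils import ConfigMixin, register_to_config


class MyTinyModel(ModelMixin, ConfigMixin):
    @register_to_config
    def __init__(self, in_channels: int = 4, hidden_dim: int = 32):
        super().__init__()
        # Both arguments are captured in config.json and restored by from_pretrained.
        self.proj = nn.Linear(in_channels, hidden_dim)

    def forward(self, hidden_states):
        return self.proj(hidden_states)
```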
## Attention pattern
Attention must follow the diffusers pattern: both the `Attention` class and its processor are defined in the model file. The processor's `__call__` handles the actual compute and must use `dispatch_attention_fn` rather than calling `F.scaled_dot_product_attention` directly. The attention class inherits `AttentionModuleMixin` and declares `_default_processor_cls` and `_available_processors`.
```python
# transformer_mymodel.py
class MyModelAttnProcessor:
    _attention_backend = None
    _parallel_config = None

    def __call__(self, attn, hidden_states, attention_mask=None, ...):
        query = attn.to_q(hidden_states)
        key = attn.to_k(hidden_states)
        value = attn.to_v(hidden_states)
        # reshape, apply rope, etc.
        hidden_states = dispatch_attention_fn(
            query, key, value,
            attn_mask=attention_mask,
            backend=self._attention_backend,
            parallel_config=self._parallel_config,
        )
        hidden_states = hidden_states.flatten(2, 3)
        return attn.to_out[0](hidden_states)


class MyModelAttention(nn.Module, AttentionModuleMixin):
    _default_processor_cls = MyModelAttnProcessor
    _available_processors = [MyModelAttnProcessor]

    def __init__(self, query_dim, heads=8, dim_head=64, ...):
        super().__init__()
        self.to_q = nn.Linear(query_dim, heads * dim_head, bias=False)
        self.to_k = nn.Linear(query_dim, heads * dim_head, bias=False)
        self.to_v = nn.Linear(query_dim, heads * dim_head, bias=False)
        self.to_out = nn.ModuleList([nn.Linear(heads * dim_head, query_dim), nn.Dropout(0.0)])
        self.set_processor(MyModelAttnProcessor())

    def forward(self, hidden_states, attention_mask=None, **kwargs):
        return self.processor(self, hidden_states, attention_mask, **kwargs)
```
Consult the implementations in `src/diffusers/models/transformers/` if you need further references.
## Gotchas
1. **Forgetting `__init__.py` lazy imports.** Every new class must be registered in the appropriate `__init__.py` with lazy imports. Missing this causes `ImportError` that only shows up when users try `from diffusers import YourNewClass`.
2. **Using `einops` or other non-PyTorch deps.** Reference implementations often use `einops.rearrange`. Always rewrite with native PyTorch (`reshape`, `permute`, `unflatten`). Don't add the dependency. If a dependency is truly unavoidable, guard its import: `if is_my_dependency_available(): import my_dependency`.
3. **Missing `make fix-copies` after `# Copied from`.** If you add `# Copied from` annotations, you must run `make fix-copies` to propagate them. CI will fail otherwise.
4. **Wrong `_supports_cache_class` / `_no_split_modules`.** These class attributes control KV cache and device placement. Copy from a similar model and verify -- wrong values cause silent correctness bugs or OOM errors.
5. **Missing `@torch.no_grad()` on pipeline `__call__`.** Forgetting this causes GPU OOM from gradient accumulation during inference.
6. **Config serialization gaps.** Every `__init__` parameter in a `ModelMixin` subclass must be captured by `register_to_config`. If you add a new param but forget to register it, `from_pretrained` will silently use the default instead of the saved value.
7. **Forgetting to update `_import_structure` and `_lazy_modules`.** The top-level `src/diffusers/__init__.py` has both -- missing either one causes partial import failures.
8. **Hardcoded dtype in model forward.** Don't hardcode `torch.float32` or `torch.bfloat16` in the model's forward pass. Use the dtype of the input tensors or `self.dtype` so the model works with any precision.
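Following up on gotchas 1 and 7, a hedged sketch of the registration step (the module path and `MyModelTransformer2DModel` are hypothetical, and the exact shape of the dicts differs per `__init__.py`; copy a neighboring entry rather than this snippet):
```python
# Sketch of an entry in src/diffusers/models/__init__.py; names are hypothetical.
from typing import TYPE_CHECKING

_import_structure = {}  # the real file already defines and populates this dict
_import_structure["transformers.transformer_mymodel"] = ["MyModelTransformer2DModel"]

if TYPE_CHECKING:
    from .transformers.transformer_mymodel import MyModelTransformer2DModel
```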

View File

@@ -3,8 +3,8 @@
Review-specific rules for Claude. Focus on correctness — style is handled by ruff.
Before reviewing, read and apply the guidelines in:
- [AGENTS.md](AGENTS.md) — coding style, copied code
- [models.md](models.md) — model conventions, attention pattern, implementation rules, dependencies, gotchas
- [AGENTS.md](AGENTS.md) — coding style, dependencies, copied code, model conventions
- [skills/model-integration/SKILL.md](skills/model-integration/SKILL.md) — attention pattern, pipeline rules, implementation checklist, gotchas
- [skills/parity-testing/SKILL.md](skills/parity-testing/SKILL.md) — testing rules, comparison utilities
- [skills/parity-testing/pitfalls.md](skills/parity-testing/pitfalls.md) — known pitfalls (dtype mismatches, config assumptions, etc.)

View File

@@ -65,19 +65,89 @@ docs/source/en/api/
- [ ] Run `make style` and `make quality`
- [ ] Test parity with reference implementation (see `parity-testing` skill)
### Model conventions, attention pattern, and implementation rules
See [../../models.md](../../models.md) for the attention pattern, implementation rules, common conventions, dependencies, and gotchas. These apply to all model work.
### Model integration specific rules
**Don't combine structural changes with behavioral changes.** Restructuring code to fit diffusers APIs (ModelMixin, ConfigMixin, etc.) is unavoidable. But don't also "improve" the algorithm, refactor computation order, or rename internal variables for aesthetics. Keep numerical logic as close to the reference as possible, even if it looks unclean. For standard → modular, this is stricter: copy loop logic verbatim and only restructure into blocks. Clean up in a separate commit after parity is confirmed.
### Attention pattern
Attention must follow the diffusers pattern: both the `Attention` class and its processor are defined in the model file. The processor's `__call__` handles the actual compute and must use `dispatch_attention_fn` rather than calling `F.scaled_dot_product_attention` directly. The attention class inherits `AttentionModuleMixin` and declares `_default_processor_cls` and `_available_processors`.
```python
# transformer_mymodel.py
class MyModelAttnProcessor:
    _attention_backend = None
    _parallel_config = None

    def __call__(self, attn, hidden_states, attention_mask=None, ...):
        query = attn.to_q(hidden_states)
        key = attn.to_k(hidden_states)
        value = attn.to_v(hidden_states)
        # reshape, apply rope, etc.
        hidden_states = dispatch_attention_fn(
            query, key, value,
            attn_mask=attention_mask,
            backend=self._attention_backend,
            parallel_config=self._parallel_config,
        )
        hidden_states = hidden_states.flatten(2, 3)
        return attn.to_out[0](hidden_states)


class MyModelAttention(nn.Module, AttentionModuleMixin):
    _default_processor_cls = MyModelAttnProcessor
    _available_processors = [MyModelAttnProcessor]

    def __init__(self, query_dim, heads=8, dim_head=64, ...):
        super().__init__()
        self.to_q = nn.Linear(query_dim, heads * dim_head, bias=False)
        self.to_k = nn.Linear(query_dim, heads * dim_head, bias=False)
        self.to_v = nn.Linear(query_dim, heads * dim_head, bias=False)
        self.to_out = nn.ModuleList([nn.Linear(heads * dim_head, query_dim), nn.Dropout(0.0)])
        self.set_processor(MyModelAttnProcessor())

    def forward(self, hidden_states, attention_mask=None, **kwargs):
        return self.processor(self, hidden_states, attention_mask, **kwargs)
```
Consult the implementations in `src/diffusers/models/transformers/` if you need further references.
### Implementation rules
1. **Don't combine structural changes with behavioral changes.** Restructuring code to fit diffusers APIs (ModelMixin, ConfigMixin, etc.) is unavoidable. But don't also "improve" the algorithm, refactor computation order, or rename internal variables for aesthetics. Keep numerical logic as close to the reference as possible, even if it looks unclean. For standard → modular, this is stricter: copy loop logic verbatim and only restructure into blocks. Clean up in a separate commit after parity is confirmed.
2. **Pipelines must inherit from `DiffusionPipeline`.** Consult implementations in `src/diffusers/pipelines` in case you need references.
3. **Don't subclass an existing pipeline for a variant.** DO NOT use an existing pipeline class (e.g., `FluxPipeline`) to override another pipeline (e.g., `FluxImg2ImgPipeline`) that will be part of the core codebase (`src`).
### Test setup
- Slow tests gated with `@slow` and `RUN_SLOW=1`
- All model-level tests must initially be written using the `BaseModelTesterConfig`, `ModelTesterMixin`, `MemoryTesterMixin`, `AttentionTesterMixin`, `LoraTesterMixin`, and `TrainingTesterMixin` classes. Any additional tests should be added after discussion with the maintainers. Use `tests/models/transformers/test_models_transformer_flux.py` as a reference.
### Common diffusers conventions
- Pipelines inherit from `DiffusionPipeline`
- Models use `ModelMixin` with `register_to_config` for config serialization
- Schedulers use `SchedulerMixin` with `ConfigMixin`
- Use `@torch.no_grad()` on pipeline `__call__`
- Support `output_type="latent"` for skipping VAE decode
- Support `generator` parameter for reproducibility
- Use `self.progress_bar(timesteps)` for progress tracking
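A hedged skeleton tying these conventions together; the components and the `__call__` body are placeholders rather than a real pipeline from `src/diffusers/pipelines`:
```python
import torch

from diffusers import DiffusionPipeline


class MyToyPipeline(DiffusionPipeline):
    def __init__(self, transformer, scheduler, vae):
        super().__init__()
        self.register_modules(transformer=transformer, scheduler=scheduler, vae=vae)

    @torch.no_grad()
    def __call__(self, prompt_embeds, num_inference_steps=30, generator=None, output_type="pil"):
        # Placeholder latent shape; a real pipeline derives it from the requested resolution.
        latents = torch.randn((1, 4, 32, 32), generator=generator)
        self.scheduler.set_timesteps(num_inference_steps)
        for t in self.progress_bar(self.scheduler.timesteps):
            # Placeholder call signature; the real transformer/scheduler interfaces vary per model.
            noise_pred = self.transformer(latents, t, prompt_embeds)
            latents = self.scheduler.step(noise_pred, t, latents).prev_sample
        if output_type == "latent":
            return latents
        return self.vae.decode(latents).sample
```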
## Gotchas
1. **Forgetting `__init__.py` lazy imports.** Every new class must be registered in the appropriate `__init__.py` with lazy imports. Missing this causes `ImportError` that only shows up when users try `from diffusers import YourNewClass`.
2. **Using `einops` or other non-PyTorch deps.** Reference implementations often use `einops.rearrange`. Always rewrite with native PyTorch (`reshape`, `permute`, `unflatten`). Don't add the dependency. If a dependency is truly unavoidable, guard its import: `if is_my_dependency_available(): import my_dependency`.
3. **Missing `make fix-copies` after `# Copied from`.** If you add `# Copied from` annotations, you must run `make fix-copies` to propagate them. CI will fail otherwise.
4. **Wrong `_supports_cache_class` / `_no_split_modules`.** These class attributes control KV cache and device placement. Copy from a similar model and verify -- wrong values cause silent correctness bugs or OOM errors.
5. **Missing `@torch.no_grad()` on pipeline `__call__`.** Forgetting this causes GPU OOM from gradient accumulation during inference.
6. **Config serialization gaps.** Every `__init__` parameter in a `ModelMixin` subclass must be captured by `register_to_config`. If you add a new param but forget to register it, `from_pretrained` will silently use the default instead of the saved value.
7. **Forgetting to update `_import_structure` and `_lazy_modules`.** The top-level `src/diffusers/__init__.py` has both -- missing either one causes partial import failures.
8. **Hardcoded dtype in model forward.** Don't hardcode `torch.float32` or `torch.bfloat16` in the model's forward pass. Use the dtype of the input tensors or `self.dtype` so the model works with any precision.
---
## Modular Pipeline Conversion

View File

@@ -161,8 +161,6 @@
- local: training/ddpo
title: Reinforcement learning training with DDPO
title: Methods
- local: training/nemo_automodel
title: NeMo Automodel
title: Training
- isExpanded: false
sections:

View File

@@ -1,378 +0,0 @@
<!--Copyright 2025 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# NeMo Automodel
[NeMo Automodel](https://github.com/NVIDIA-NeMo/Automodel) is a PyTorch DTensor-native training library from NVIDIA for fine-tuning and pretraining diffusion models at scale. It is Hugging Face native — train any Diffusers-format model from the Hub with no checkpoint conversion. The same YAML recipe and hackable training script runs on any scale from 1 GPU to hundreds of nodes, with [FSDP2](https://pytorch.org/docs/stable/fsdp.html) distributed training, multiresolution bucketed dataloading, and pre-encoded latent space training for maximum GPU utilization. It uses [flow matching](https://huggingface.co/papers/2210.02747) for training and is fully open source (Apache 2.0), NVIDIA-supported, and actively maintained.
NeMo Automodel integrates directly with Diffusers. It loads pretrained models from the Hugging Face Hub using Diffusers model classes and generates outputs with the [`DiffusionPipeline`].
The typical workflow is to install NeMo Automodel (pip or Docker), prepare your data by encoding it into `.meta` files, configure a YAML recipe, launch training with `torchrun`, and run inference with the resulting checkpoint.
## Supported models
| Model | Hugging Face ID | Task | Parameters | Use case |
|-------|----------------|------|------------|----------|
| Wan 2.1 T2V 1.3B | [Wan-AI/Wan2.1-T2V-1.3B-Diffusers](https://huggingface.co/Wan-AI/Wan2.1-T2V-1.3B-Diffusers) | Text-to-Video | 1.3B | video generation on limited hardware (fits on single 40GB A100) |
| FLUX.1-dev | [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev) | Text-to-Image | 12B | high-quality image generation |
| HunyuanVideo 1.5 | [hunyuanvideo-community/HunyuanVideo-1.5-Diffusers-720p_t2v](https://huggingface.co/hunyuanvideo-community/HunyuanVideo-1.5-Diffusers-720p_t2v) | Text-to-Video | 13B | high-quality video generation |
## Installation
### Hardware requirements
| Component | Minimum | Recommended |
|-----------|---------|-------------|
| GPU | A100 40GB | A100 80GB / H100 |
| GPUs | 4 | 8+ |
| RAM | 128 GB | 256 GB+ |
| Storage | 500 GB SSD | 2 TB NVMe |
Install NeMo Automodel with pip. For the full set of installation methods (including from source), see the [NeMo Automodel installation guide](https://docs.nvidia.com/nemo/automodel/latest/guides/installation.html).
```bash
pip3 install nemo-automodel
```
Alternatively, use the pre-built Docker container which includes all dependencies.
```bash
docker pull nvcr.io/nvidia/nemo-automodel:26.02.00
docker run --gpus all -it --rm --shm-size=8g nvcr.io/nvidia/nemo-automodel:26.02.00
```
> [!WARNING]
> Checkpoints are lost when the container exits unless you bind-mount the checkpoint directory to the host. For example, add `-v /host/path/checkpoints:/workspace/checkpoints` to the `docker run` command.
## Data preparation
NeMo Automodel trains diffusion models in latent space. Raw images or videos must be preprocessed into `.meta` files containing VAE latents and text embeddings before training. This avoids re-encoding on every training step.
Use the built-in preprocessing tool to encode your data. The tool automatically distributes work across all available GPUs.
<hfoptions id="data-prep">
<hfoption id="video preprocessing">
The video preprocessing command is the same for both Wan 2.1 and HunyuanVideo, but the flags differ. Wan 2.1 uses `--processor wan` with `--resolution_preset` and `--caption_format sidecar`, while HunyuanVideo uses `--processor hunyuan` with `--target_frames` to set the frame count and `--caption_format meta_json`.
**Wan 2.1:**
```bash
python -m tools.diffusion.preprocessing_multiprocess video \
--video_dir /data/videos \
--output_dir /cache \
--processor wan \
--resolution_preset 512p \
--caption_format sidecar
```
**HunyuanVideo:**
```bash
python -m tools.diffusion.preprocessing_multiprocess video \
--video_dir /data/videos \
--output_dir /cache \
--processor hunyuan \
--target_frames 121 \
--caption_format meta_json
```
</hfoption>
<hfoption id="image preprocessing">
```bash
python -m tools.diffusion.preprocessing_multiprocess image \
--image_dir /data/images \
--output_dir /cache \
--processor flux \
--resolution_preset 512p
```
</hfoption>
</hfoptions>
### Output format
Preprocessing produces a cache directory organized by resolution bucket. NeMo Automodel supports multi-resolution training through bucketed sampling. Samples are grouped by spatial resolution so each batch contains same-size samples, avoiding padding waste.
```
/cache/
├── 512x512/ # Resolution bucket
│ ├── <hash1>.meta # VAE latents + text embeddings
│ ├── <hash2>.meta
│ └── ...
├── 832x480/ # Another resolution bucket
│ └── ...
├── metadata.json # Global config (processor, model, total items)
└── metadata_shard_0000.json # Per-sample metadata (paths, resolutions, captions)
```
> [!TIP]
> See the [Diffusion Dataset Preparation](https://docs.nvidia.com/nemo/automodel/latest/guides/diffusion/dataset.html) guide for caption formats, input data requirements, and all available preprocessing arguments.
## Training configuration
Fine-tuning is driven by two components:
1. A recipe script ([finetune.py](https://github.com/NVIDIA-NeMo/Automodel/blob/main/examples/diffusion/finetune/finetune.py)) is a Python entry point that contains the training loop: loading the model, building the dataloader, running forward/backward passes, computing the flow matching loss, checkpointing, and logging.
2. A YAML configuration file specifies all settings the recipe uses: which model to fine-tune, where the data lives, optimizer hyperparameters, parallelism strategy, and more. You customize training by editing this file rather than modifying code, allowing you to scale from 1 to hundreds of GPUs.
Any YAML field can also be overridden from the CLI:
```bash
torchrun --nproc-per-node=8 examples/diffusion/finetune/finetune.py \
-c examples/diffusion/finetune/wan2_1_t2v_flow.yaml \
--optim.learning_rate 1e-5 \
--step_scheduler.num_epochs 50
```
Below is the annotated config for fine-tuning Wan 2.1 T2V 1.3B, with each section explained.
```yaml
seed: 42
# ── Experiment tracking (optional) ──────────────────────────────────────────
# Weights & Biases integration for logging metrics, losses, and learning rates.
# Set mode: "disabled" to turn off.
wandb:
  project: wan-t2v-flow-matching
  mode: online
  name: wan2_1_t2v_fm
# ── Model ───────────────────────────────────────────────────────────────────
# pretrained_model_name_or_path: any Hugging Face model ID or local path.
# mode: "finetune" loads pretrained weights; "pretrain" trains from scratch.
model:
  pretrained_model_name_or_path: Wan-AI/Wan2.1-T2V-1.3B-Diffusers
  mode: finetune
# ── Training schedule ───────────────────────────────────────────────────────
# global_batch_size: effective batch across all GPUs.
# Gradient accumulation is computed automatically: global / (local × num_gpus).
step_scheduler:
  global_batch_size: 8
  local_batch_size: 1
  ckpt_every_steps: 1000 # Save a checkpoint every N steps
  num_epochs: 100
  log_every: 2 # Log metrics every N steps
# ── Data ────────────────────────────────────────────────────────────────────
# _target_: the dataloader factory function.
# Use build_video_multiresolution_dataloader for video models (Wan, HunyuanVideo).
# Use build_text_to_image_multiresolution_dataloader for image models (FLUX).
# model_type: "wan" or "hunyuan" (selects the correct latent format).
# base_resolution: target resolution for multiresolution bucketing.
data:
  dataloader:
    _target_: nemo_automodel.components.datasets.diffusion.build_video_multiresolution_dataloader
    cache_dir: PATH_TO_YOUR_DATA
    model_type: wan
    base_resolution: [512, 512]
    dynamic_batch_size: false # When true, adjusts batch per bucket to maintain constant memory
    shuffle: true
    drop_last: false
    num_workers: 0
# ── Optimizer ───────────────────────────────────────────────────────────────
# learning_rate: 5e-6 is a good starting point for fine-tuning.
# Adjust weight_decay and betas for your dataset.
optim:
  learning_rate: 5e-6
  optimizer:
    weight_decay: 0.01
    betas: [0.9, 0.999]
# ── Learning rate scheduler ─────────────────────────────────────────────────
# Supports cosine, linear, and constant schedules.
lr_scheduler:
  lr_decay_style: cosine
  lr_warmup_steps: 0
  min_lr: 1e-6
# ── Flow matching ───────────────────────────────────────────────────────────
# adapter_type: model-specific adapter — must match the model:
#   "simple" for Wan 2.1, "flux" for FLUX.1-dev, "hunyuan" for HunyuanVideo.
# timestep_sampling: "uniform" for Wan, "logit_normal" for FLUX and HunyuanVideo.
# flow_shift: shifts the flow schedule (model-dependent).
# i2v_prob: probability of image-to-video conditioning during training (video models).
flow_matching:
  adapter_type: "simple"
  adapter_kwargs: {}
  timestep_sampling: "uniform"
  logit_mean: 0.0
  logit_std: 1.0
  flow_shift: 3.0
  num_train_timesteps: 1000
  i2v_prob: 0.3
  use_loss_weighting: true
# ── FSDP2 distributed training ──────────────────────────────────────────────
# dp_size: number of GPUs for data parallelism (typically = total GPUs on node).
# tp_size, cp_size, pp_size: tensor, context, and pipeline parallelism.
# For most fine-tuning, dp_size is all you need; leave others at 1.
fsdp:
  tp_size: 1
  cp_size: 1
  pp_size: 1
  dp_replicate_size: 1
  dp_size: 8
# ── Checkpointing ──────────────────────────────────────────────────────────
# checkpoint_dir: where to save checkpoints (use a persistent path with Docker).
# restore_from: path to resume training from a previous checkpoint.
checkpoint:
  enabled: true
  checkpoint_dir: PATH_TO_YOUR_CKPT_DIR
  model_save_format: torch_save
  save_consolidated: false
  restore_from: null
```
### Config field reference
The table below lists the minimal required configs. The [NeMo Automodel examples](https://github.com/NVIDIA-NeMo/Automodel/tree/main/examples/diffusion/finetune) have full example configs for all models.
| Section | Required? | What to Change |
|---------|-----------|----------------|
| `model` | Yes | Set `pretrained_model_name_or_path` to the Hugging Face model ID. Set `mode: finetune` or `mode: pretrain`. |
| `step_scheduler` | Yes | `global_batch_size` is the effective batch size across all GPUs. `ckpt_every_steps` controls checkpoint frequency. Gradient accumulation is computed automatically. |
| `data` | Yes | Set `cache_dir` to the path containing your preprocessed `.meta` files. Change `_target_` and `model_type` for different models. |
| `optim` | Yes | `learning_rate: 5e-6` is a good default for fine-tuning. Adjust for your dataset and model. |
| `lr_scheduler` | Yes | Choose `cosine`, `linear`, or `constant` for `lr_decay_style`. Set `lr_warmup_steps` for gradual warmup. |
| `flow_matching` | Yes | `adapter_type` must match the model (`simple` for Wan, `flux` for FLUX, `hunyuan` for HunyuanVideo). See model-specific configs for `adapter_kwargs`. |
| `fsdp` | Yes | Set `dp_size` to the number of GPUs. For multi-node, set to total GPUs across all nodes. |
| `checkpoint` | Recommended | Set `checkpoint_dir` to a persistent path, especially in Docker. Use `restore_from` to resume from a previous checkpoint. |
| `wandb` | Optional | Configure to enable Weights & Biases experiment tracking. Set `mode: disabled` to turn off. |
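For example, with the annotated config above (`global_batch_size: 8`, `local_batch_size: 1`, 8 GPUs), gradient accumulation works out to 8 / (1 × 8) = 1 step; on 4 GPUs the same config would accumulate over 8 / (1 × 4) = 2 steps.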
## Launch training
<hfoptions id="launch-training">
<hfoption id="single-node">
```bash
torchrun --nproc-per-node=8 \
examples/diffusion/finetune/finetune.py \
-c examples/diffusion/finetune/wan2_1_t2v_flow.yaml
```
</hfoption>
<hfoption id="multi-node">
Run the following on each node, setting `NODE_RANK` accordingly:
```bash
export MASTER_ADDR=node0.hostname
export MASTER_PORT=29500
export NODE_RANK=0 # 0 on master, 1 on second node, etc.
torchrun \
--nnodes=2 \
--nproc-per-node=8 \
--node_rank=${NODE_RANK} \
--rdzv_backend=c10d \
--rdzv_endpoint=${MASTER_ADDR}:${MASTER_PORT} \
examples/diffusion/finetune/finetune.py \
-c examples/diffusion/finetune/wan2_1_t2v_flow_multinode.yaml
```
> [!NOTE]
> For multi-node training, set `fsdp.dp_size` in the YAML to the **total** number of GPUs across all nodes (e.g., 16 for 2 nodes with 8 GPUs each).
</hfoption>
</hfoptions>
## Generation
After training, generate videos or images from text prompts using the fine-tuned checkpoint.
<hfoptions id="generation">
<hfoption id="Wan 2.1">
```bash
python examples/diffusion/generate/generate.py \
-c examples/diffusion/generate/configs/generate_wan.yaml
```
With a fine-tuned checkpoint:
```bash
python examples/diffusion/generate/generate.py \
-c examples/diffusion/generate/configs/generate_wan.yaml \
--model.checkpoint ./checkpoints/step_1000 \
--inference.prompts '["A dog running on a beach"]'
```
</hfoption>
<hfoption id="FLUX">
```bash
python examples/diffusion/generate/generate.py \
-c examples/diffusion/generate/configs/generate_flux.yaml
```
With a fine-tuned checkpoint:
```bash
python examples/diffusion/generate/generate.py \
-c examples/diffusion/generate/configs/generate_flux.yaml \
--model.checkpoint ./checkpoints/step_1000 \
--inference.prompts '["A dog running on a beach"]'
```
</hfoption>
<hfoption id="HunyuanVideo">
```bash
python examples/diffusion/generate/generate.py \
-c examples/diffusion/generate/configs/generate_hunyuan.yaml
```
With a fine-tuned checkpoint:
```bash
python examples/diffusion/generate/generate.py \
-c examples/diffusion/generate/configs/generate_hunyuan.yaml \
--model.checkpoint ./checkpoints/step_1000 \
--inference.prompts '["A dog running on a beach"]'
```
</hfoption>
</hfoptions>
## Diffusers integration
NeMo Automodel is built on top of Diffusers and uses it as the backbone for model loading and inference. It loads models directly from the Hugging Face Hub using Diffusers model classes such as [`WanTransformer3DModel`], [`FluxTransformer2DModel`], and [`HunyuanVideoTransformer3DModel`], and generates outputs via Diffusers pipelines like [`WanPipeline`] and [`FluxPipeline`].
This integration provides several benefits for Diffusers users:
- **No checkpoint conversion**: pretrained weights from the Hub work out of the box. Point `pretrained_model_name_or_path` at any Diffusers-format model ID and start training immediately.
- **Day-0 model support**: when a new diffusion model is added to Diffusers and uploaded to the Hub, it can be fine-tuned with NeMo Automodel without waiting for a dedicated training script.
- **Pipeline-compatible outputs**: fine-tuned checkpoints are saved in a format that can be loaded directly back into Diffusers pipelines for inference, sharing on the Hub, or further optimization with tools like quantization and compilation.
- **Scalable training for Diffusers models**: NeMo Automodel adds distributed training capabilities (FSDP2, multi-node, multiresolution bucketing) that go beyond what the built-in Diffusers training scripts provide, while keeping the same model and pipeline interfaces.
- **Shared ecosystem**: any model, LoRA adapter, or pipeline component from the Diffusers ecosystem remains compatible throughout the training and inference workflow.
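As a hedged sketch of the pipeline-compatible-outputs point above (the checkpoint path is hypothetical, and the exact export layout depends on the `checkpoint` settings in your recipe, e.g. `save_consolidated`):
```python
import torch

from diffusers import WanPipeline, WanTransformer3DModel

# Hypothetical path; assumes the fine-tuned transformer was exported in Diffusers format.
transformer = WanTransformer3DModel.from_pretrained(
    "./checkpoints/step_1000/transformer", torch_dtype=torch.bfloat16
)
pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers", transformer=transformer, torch_dtype=torch.bfloat16
).to("cuda")

video = pipe("A dog running on a beach", num_frames=81).frames[0]
```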
## NVIDIA Team
- Pranav Prashant Thombre, pthombre@nvidia.com
- Linnan Wang, linnanw@nvidia.com
- Alexandros Koumparoulis, akoumparouli@nvidia.com
## Resources
- [NeMo Automodel GitHub](https://github.com/NVIDIA-NeMo/Automodel)
- [Diffusion Fine-Tuning Guide](https://docs.nvidia.com/nemo/automodel/latest/guides/diffusion/finetune.html)
- [Diffusion Dataset Preparation](https://docs.nvidia.com/nemo/automodel/latest/guides/diffusion/dataset.html)
- [Diffusion Model Coverage](https://docs.nvidia.com/nemo/automodel/latest/model-coverage/diffusion.html)
- [NeMo Automodel for Transformers (LLM/VLM fine-tuning)](https://huggingface.co/docs/transformers/en/community_integrations/nemo_automodel_finetuning)

View File

@@ -347,17 +347,16 @@ When LoRA was first adapted from language models to diffusion models, it was app
More recently, SOTA text-to-image diffusion models replaced the Unet with a diffusion Transformer (DiT). With this change, we may also want to explore
applying LoRA training to different types of layers and blocks. To allow more flexibility and control over the targeted modules, we added `--lora_layers`, in which you can specify, as a comma-separated string,
the exact modules for LoRA training. Here are some examples of target modules you can provide:
- for attention only layers: `--lora_layers="attn.to_k,attn.to_q,attn.to_v,attn.to_out.0,attn.to_qkv_mlp_proj"`
- to train the same modules as in the fal trainer: `--lora_layers="attn.to_k,attn.to_q,attn.to_v,attn.to_out.0,attn.to_qkv_mlp_proj,attn.add_k_proj,attn.add_q_proj,attn.add_v_proj,attn.to_add_out,ff.linear_in,ff.linear_out,ff_context.linear_in,ff_context.linear_out"`
- to train the same modules as in ostris ai-toolkit / replicate trainer: `--lora_blocks="attn.to_k,attn.to_q,attn.to_v,attn.to_out.0,attn.to_qkv_mlp_proj,attn.add_k_proj,attn.add_q_proj,attn.add_v_proj,attn.to_add_out,ff.linear_in,ff.linear_out,ff_context.linear_in,ff_context.linear_out,norm_out.linear,norm_out.proj_out"`
- for attention only layers: `--lora_layers="attn.to_k,attn.to_q,attn.to_v,attn.to_out.0"`
- to train the same modules as in the fal trainer: `--lora_layers="attn.to_k,attn.to_q,attn.to_v,attn.to_out.0,attn.add_k_proj,attn.add_q_proj,attn.add_v_proj,attn.to_add_out,ff.net.0.proj,ff.net.2,ff_context.net.0.proj,ff_context.net.2"`
- to train the same modules as in ostris ai-toolkit / replicate trainer: `--lora_blocks="attn.to_k,attn.to_q,attn.to_v,attn.to_out.0,attn.add_k_proj,attn.add_q_proj,attn.add_v_proj,attn.to_add_out,ff.net.0.proj,ff.net.2,ff_context.net.0.proj,ff_context.net.2,norm1_context.linear, norm1.linear,norm.linear,proj_mlp,proj_out"`
> [!NOTE]
> `--lora_layers` can also be used to specify which **blocks** to apply LoRA training to. To do so, simply add a block prefix to each layer in the comma separated string:
> **single DiT blocks**: to target the ith single transformer block, add the prefix `single_transformer_blocks.i`, e.g. - `single_transformer_blocks.i.attn.to_k`
> **MMDiT blocks**: to target the ith MMDiT block, add the prefix `transformer_blocks.i`, e.g. - `transformer_blocks.i.attn.to_k`
> [!NOTE]
> keep in mind that while training more layers can improve quality and expressiveness, it also increases the size of the output LoRA weights.
> [!NOTE]
> In FLUX2, the q, k, and v projections are fused into a single linear layer named `attn.to_qkv_mlp_proj` within the single transformer block. Also, the attention output is just `attn.to_out`, not `attn.to_out.0` — it's no longer a `ModuleList` as in the transformer block.
## Training Image-to-Image

View File

@@ -1256,13 +1256,7 @@ def main(args):
    if args.lora_layers is not None:
        target_modules = [layer.strip() for layer in args.lora_layers.split(",")]
    else:
        # target_modules = ["to_k", "to_q", "to_v", "to_out.0"] # just train transformer_blocks
        # train transformer_blocks and single_transformer_blocks
        target_modules = ["to_k", "to_q", "to_v", "to_out.0"] + [
            "to_qkv_mlp_proj",
            *[f"single_transformer_blocks.{i}.attn.to_out" for i in range(48)],
        ]
        target_modules = ["to_k", "to_q", "to_v", "to_out.0"]
    # now we will add new LoRA weights the transformer layers
    transformer_lora_config = LoraConfig(

View File

@@ -1206,13 +1206,7 @@ def main(args):
    if args.lora_layers is not None:
        target_modules = [layer.strip() for layer in args.lora_layers.split(",")]
    else:
        # target_modules = ["to_k", "to_q", "to_v", "to_out.0"] # just train transformer_blocks
        # train transformer_blocks and single_transformer_blocks
        target_modules = ["to_k", "to_q", "to_v", "to_out.0"] + [
            "to_qkv_mlp_proj",
            *[f"single_transformer_blocks.{i}.attn.to_out" for i in range(48)],
        ]
        target_modules = ["to_k", "to_q", "to_v", "to_out.0"]
    # now we will add new LoRA weights the transformer layers
    transformer_lora_config = LoraConfig(

View File

@@ -1249,13 +1249,7 @@ def main(args):
    if args.lora_layers is not None:
        target_modules = [layer.strip() for layer in args.lora_layers.split(",")]
    else:
        # target_modules = ["to_k", "to_q", "to_v", "to_out.0"] # just train transformer_blocks
        # train transformer_blocks and single_transformer_blocks
        target_modules = ["to_k", "to_q", "to_v", "to_out.0"] + [
            "to_qkv_mlp_proj",
            *[f"single_transformer_blocks.{i}.attn.to_out" for i in range(24)],
        ]
        target_modules = ["to_k", "to_q", "to_v", "to_out.0"]
    # now we will add new LoRA weights the transformer layers
    transformer_lora_config = LoraConfig(

View File

@@ -1200,13 +1200,7 @@ def main(args):
    if args.lora_layers is not None:
        target_modules = [layer.strip() for layer in args.lora_layers.split(",")]
    else:
        # target_modules = ["to_k", "to_q", "to_v", "to_out.0"] # just train transformer_blocks
        # train transformer_blocks and single_transformer_blocks
        target_modules = ["to_k", "to_q", "to_v", "to_out.0"] + [
            "to_qkv_mlp_proj",
            *[f"single_transformer_blocks.{i}.attn.to_out" for i in range(24)],
        ]
        target_modules = ["to_k", "to_q", "to_v", "to_out.0"]
    # now we will add new LoRA weights the transformer layers
    transformer_lora_config = LoraConfig(

View File

@@ -862,23 +862,23 @@ def _native_attention_backward_op(
key.requires_grad_(True)
value.requires_grad_(True)
with torch.enable_grad():
query_t, key_t, value_t = (x.permute(0, 2, 1, 3) for x in (query, key, value))
out = torch.nn.functional.scaled_dot_product_attention(
query=query_t,
key=key_t,
value=value_t,
attn_mask=ctx.attn_mask,
dropout_p=ctx.dropout_p,
is_causal=ctx.is_causal,
scale=ctx.scale,
enable_gqa=ctx.enable_gqa,
)
out = out.permute(0, 2, 1, 3)
query_t, key_t, value_t = (x.permute(0, 2, 1, 3) for x in (query, key, value))
out = torch.nn.functional.scaled_dot_product_attention(
query=query_t,
key=key_t,
value=value_t,
attn_mask=ctx.attn_mask,
dropout_p=ctx.dropout_p,
is_causal=ctx.is_causal,
scale=ctx.scale,
enable_gqa=ctx.enable_gqa,
)
out = out.permute(0, 2, 1, 3)
grad_query_t, grad_key_t, grad_value_t = torch.autograd.grad(
outputs=out, inputs=[query_t, key_t, value_t], grad_outputs=grad_out, retain_graph=False
)
grad_out_t = grad_out.permute(0, 2, 1, 3)
grad_query_t, grad_key_t, grad_value_t = torch.autograd.grad(
outputs=out, inputs=[query_t, key_t, value_t], grad_outputs=grad_out_t, retain_graph=False
)
grad_query = grad_query_t.permute(0, 2, 1, 3)
grad_key = grad_key_t.permute(0, 2, 1, 3)

View File

@@ -764,7 +764,7 @@ class DiffusionPipeline(ConfigMixin, PushToHubMixin):
        token = kwargs.pop("token", None)
        revision = kwargs.pop("revision", None)
        from_flax = kwargs.pop("from_flax", False)
        torch_dtype = kwargs.pop("torch_dtype", None)
        torch_dtype = kwargs.pop("torch_dtype", torch.float32)
        custom_pipeline = kwargs.pop("custom_pipeline", None)
        custom_revision = kwargs.pop("custom_revision", None)
        provider = kwargs.pop("provider", None)

View File

@@ -44,9 +44,9 @@ class AutoencoderTesterMixin:
        if isinstance(output, dict):
            output = output.to_tuple()[0]
        assert output is not None
        self.assertIsNotNone(output)
        expected_shape = inputs_dict["sample"].shape
        assert output.shape == expected_shape, "Input and output shapes do not match"
        self.assertEqual(output.shape, expected_shape, "Input and output shapes do not match")
    def test_enable_disable_tiling(self):
        if not hasattr(self.model_class, "enable_tiling"):

View File

@@ -98,64 +98,6 @@ def _context_parallel_worker(rank, world_size, master_port, model_class, init_di
            dist.destroy_process_group()
def _context_parallel_backward_worker(
    rank, world_size, master_port, model_class, init_dict, cp_dict, inputs_dict, return_dict
):
    """Worker function for context parallel backward pass testing."""
    try:
        # Set up distributed environment
        os.environ["MASTER_ADDR"] = "localhost"
        os.environ["MASTER_PORT"] = str(master_port)
        os.environ["RANK"] = str(rank)
        os.environ["WORLD_SIZE"] = str(world_size)
        # Get device configuration
        device_config = DEVICE_CONFIG.get(torch_device, DEVICE_CONFIG["cuda"])
        backend = device_config["backend"]
        device_module = device_config["module"]
        # Initialize process group
        dist.init_process_group(backend=backend, rank=rank, world_size=world_size)
        # Set device for this process
        device_module.set_device(rank)
        device = torch.device(f"{torch_device}:{rank}")
        # Create model in training mode
        model = model_class(**init_dict)
        model.to(device)
        model.train()
        # Move inputs to device
        inputs_on_device = {k: v.to(device) if isinstance(v, torch.Tensor) else v for k, v in inputs_dict.items()}
        # Enable context parallelism
        cp_config = ContextParallelConfig(**cp_dict)
        model.enable_parallelism(config=cp_config)
        # Run forward and backward pass
        output = model(**inputs_on_device, return_dict=False)[0]
        loss = output.sum()
        loss.backward()
        # Check that backward actually produced at least one valid gradient
        grads = [p.grad for p in model.parameters() if p.requires_grad and p.grad is not None]
        has_valid_grads = len(grads) > 0 and all(torch.isfinite(g).all() for g in grads)
        # Only rank 0 reports results
        if rank == 0:
            return_dict["status"] = "success"
            return_dict["has_valid_grads"] = bool(has_valid_grads)
    except Exception as e:
        if rank == 0:
            return_dict["status"] = "error"
            return_dict["error"] = str(e)
    finally:
        if dist.is_initialized():
            dist.destroy_process_group()
def _custom_mesh_worker(
    rank,
    world_size,
@@ -262,51 +204,6 @@ class ContextParallelTesterMixin:
    def test_context_parallel_batch_inputs(self, cp_type):
        self.test_context_parallel_inference(cp_type, batch_size=2)
    @pytest.mark.parametrize("cp_type", ["ulysses_degree", "ring_degree"], ids=["ulysses", "ring"])
    def test_context_parallel_backward(self, cp_type, batch_size: int = 1):
        if not torch.distributed.is_available():
            pytest.skip("torch.distributed is not available.")
        if not hasattr(self.model_class, "_cp_plan") or self.model_class._cp_plan is None:
            pytest.skip("Model does not have a _cp_plan defined for context parallel inference.")
        if cp_type == "ring_degree":
            active_backend, _ = _AttentionBackendRegistry.get_active_backend()
            if active_backend == AttentionBackendName.NATIVE:
                pytest.skip("Ring attention is not supported with the native attention backend.")
        world_size = 2
        init_dict = self.get_init_dict()
        inputs_dict = self.get_dummy_inputs(batch_size=batch_size)
        # Move all tensors to CPU for multiprocessing
        inputs_dict = {k: v.cpu() if isinstance(v, torch.Tensor) else v for k, v in inputs_dict.items()}
        cp_dict = {cp_type: world_size}
        # Find a free port for distributed communication
        master_port = _find_free_port()
        # Use multiprocessing manager for cross-process communication
        manager = mp.Manager()
        return_dict = manager.dict()
        # Spawn worker processes
        mp.spawn(
            _context_parallel_backward_worker,
            args=(world_size, master_port, self.model_class, init_dict, cp_dict, inputs_dict, return_dict),
            nprocs=world_size,
            join=True,
        )
        assert return_dict.get("status") == "success", (
            f"Context parallel backward pass failed: {return_dict.get('error', 'Unknown error')}"
        )
        assert return_dict.get("has_valid_grads"), "Context parallel backward pass did not produce valid gradients."
    @pytest.mark.parametrize("cp_type", ["ulysses_degree", "ring_degree"], ids=["ulysses", "ring"])
    def test_context_parallel_backward_batch_inputs(self, cp_type):
        self.test_context_parallel_backward(cp_type, batch_size=2)
    @pytest.mark.parametrize(
        "cp_type,mesh_shape,mesh_dim_names",
        [