# Parallelism

Parallelism strategies help speed up diffusion transformers by distributing computation across multiple devices, enabling faster inference and training. Refer to the [Distributed inference](../training/distributed_inference) guide to learn more.

## ParallelConfig

[[autodoc]] ParallelConfig

## ContextParallelConfig

[[autodoc]] ContextParallelConfig

[[autodoc]] hooks.apply_context_parallel
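As a rough sketch of how these pieces fit together, context parallelism is typically enabled on the transformer before running the pipeline under a distributed launcher such as `torchrun`, with one process per GPU. The model id below is a placeholder, and the exact entry point and argument names (`enable_parallelism`, `ring_degree`) are assumptions; check the API references on this page for the authoritative signatures.

```python
import torch
from diffusers import DiffusionPipeline, ContextParallelConfig

# Launched with e.g. `torchrun --nproc-per-node=2 demo.py`;
# each process drives one GPU and holds one shard of the sequence.
torch.distributed.init_process_group("nccl")
rank = torch.distributed.get_rank()
torch.cuda.set_device(rank)

pipeline = DiffusionPipeline.from_pretrained(
    "some/diffusion-transformer",  # placeholder model id
    torch_dtype=torch.bfloat16,
).to(f"cuda:{rank}")

# Ring attention across 2 devices: key/value blocks rotate between
# ranks so each GPU effectively attends over the full sequence.
pipeline.transformer.enable_parallelism(
    config=ContextParallelConfig(ring_degree=2)
)

image = pipeline("a photo of an astronaut", num_inference_steps=30).images[0]
if rank == 0:
    image.save("out.png")
torch.distributed.destroy_process_group()
```

Because this configures a multi-process, multi-GPU run, it is a configuration sketch rather than something runnable on a single process.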