
# Quantization

Quantization techniques reduce memory and computational costs by representing weights and activations with lower-precision data types like 8-bit integers (int8). This makes it possible to load larger models that normally wouldn't fit into memory and to speed up inference.
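For example, a single model component can be quantized at load time by passing a quantization config to `from_pretrained`. A minimal sketch, assuming the `bitsandbytes` backend is installed; the checkpoint id and subfolder are illustrative assumptions, not part of this reference.

```py
# Minimal sketch: quantize one model component to 4-bit at load time.
# Assumes `bitsandbytes` is installed; checkpoint/subfolder are illustrative.
import torch
from diffusers import BitsAndBytesConfig, FluxTransformer2DModel

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # illustrative checkpoint
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
```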

> [!TIP]
> Learn how to quantize models in the [Quantization](../quantization/overview) guide.

## PipelineQuantizationConfig

[[autodoc]] quantizers.PipelineQuantizationConfig
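Unlike the per-model configs below, `PipelineQuantizationConfig` selects a backend and the pipeline components to quantize in a single `from_pretrained` call. A minimal sketch, again assuming `bitsandbytes` is installed; the checkpoint id is an illustrative assumption.

```py
# Minimal sketch: pipeline-level quantization of the transformer component.
# Assumes `bitsandbytes` is installed; the checkpoint is illustrative.
import torch
from diffusers import DiffusionPipeline
from diffusers.quantizers import PipelineQuantizationConfig

pipeline_quant_config = PipelineQuantizationConfig(
    quant_backend="bitsandbytes_4bit",
    quant_kwargs={"load_in_4bit": True, "bnb_4bit_compute_dtype": torch.bfloat16},
    components_to_quantize=["transformer"],
)
pipeline = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # illustrative checkpoint
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16,
)
```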

## BitsAndBytesConfig

[[autodoc]] quantizers.quantization_config.BitsAndBytesConfig

## GGUFQuantizationConfig

[[autodoc]] quantizers.quantization_config.GGUFQuantizationConfig

## QuantoConfig

[[autodoc]] quantizers.quantization_config.QuantoConfig

## TorchAoConfig

[[autodoc]] quantizers.quantization_config.TorchAoConfig

## DiffusersQuantizer

[[autodoc]] quantizers.base.DiffusersQuantizer