FromIntXQuantizationAwareTrainingConfig#

class torchao.quantization.qat.FromIntXQuantizationAwareTrainingConfig[source]#

(Deprecated) Please use QATConfig instead.

Config for converting a model with fake quantized modules, such as FakeQuantizedLinear() and FakeQuantizedEmbedding(), back to a model with the original, corresponding modules without fake quantization. This should be used with quantize_().

Example usage:

from torchao.quantization import quantize_
from torchao.quantization.qat import FromIntXQuantizationAwareTrainingConfig

quantize_(
    model_with_fake_quantized_linears,
    FromIntXQuantizationAwareTrainingConfig(),
)