FromIntXQuantizationAwareTrainingConfig
- class torchao.quantization.qat.FromIntXQuantizationAwareTrainingConfig
(Deprecated) Please use QATConfig instead.

Config for converting a model with fake quantized modules, such as FakeQuantizedLinear and FakeQuantizedEmbedding, back to a model with the original, corresponding modules without fake quantization. This should be used with quantize_().

Example usage:
from torchao.quantization import quantize_

quantize_(
    model_with_fake_quantized_linears,
    FromIntXQuantizationAwareTrainingConfig(),
)
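Since this config is deprecated in favor of QATConfig, the same convert step can be written with the replacement API. A minimal sketch, assuming QATConfig accepts a base post-training config plus a step="convert" argument, and using Int8DynamicActivationInt4WeightConfig as a stand-in base config (neither appears in the original snippet above):

from torchao.quantization import Int8DynamicActivationInt4WeightConfig, quantize_
from torchao.quantization.qat import QATConfig

# Assumed base config: the post-training scheme the QAT run was targeting.
base_config = Int8DynamicActivationInt4WeightConfig(group_size=32)

# step="convert" swaps the fake-quantized modules back to regular modules
# and then applies base_config to produce the actually quantized model.
quantize_(model_with_fake_quantized_linears, QATConfig(base_config, step="convert"))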