Int4WeightOnlyEmbeddingQATQuantizer#

class torchao.quantization.qat.Int4WeightOnlyEmbeddingQATQuantizer(group_size: int = 256, scale_precision: dtype = torch.float32, zero_point_precision: dtype = torch.int32)[source]#

Quantizer for performing QAT on a model, where the weights of embedding layers are fake quantized to int4 with per-channel, grouped scales.

convert(model: Module, *args: Any, **kwargs: Any) → Module[source]#

Swap all Int4WeightOnlyQATEmbedding modules with Int4WeightOnlyEmbedding.

prepare(model: Module, *args: Any, **kwargs: Any) → Module[source]#

Swap nn.Embedding modules with Int4WeightOnlyQATEmbedding.