
Int4WeightOnlyEmbeddingQATQuantizer

class torchao.quantization.qat.Int4WeightOnlyEmbeddingQATQuantizer(group_size: int = 256, scale_precision: dtype = torch.float32, zero_point_precision: dtype = torch.int32)[source]

Quantizer for performing QAT on a model, where embedding layer weights are fake quantized to int4 using grouped per-channel quantization.
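The fake-quantization math behind this quantizer can be illustrated in plain Python. This is a sketch, not torchao's implementation: each group of `group_size` values within an embedding row gets its own scale and zero point (matching the default `scale_precision=torch.float32`, `zero_point_precision=torch.int32`), values are quantized to the signed int4 range and immediately dequantized back to float so gradients can flow during training. The helper names below are hypothetical.

```python
# Illustrative sketch of int4 grouped per-channel fake quantization.
# NOT torchao's actual implementation; function names are made up.
QMIN, QMAX = -8, 7  # signed int4 range

def fake_quantize_group(values):
    """Asymmetric affine fake quantization of one weight group."""
    lo, hi = min(values), max(values)
    scale = max((hi - lo) / (QMAX - QMIN), 1e-9)  # fp32 scale per group
    zero_point = round(QMIN - lo / scale)         # int zero point per group
    out = []
    for v in values:
        # quantize, clamp to the int4 range, then dequantize back to float
        q = max(QMIN, min(QMAX, round(v / scale) + zero_point))
        out.append((q - zero_point) * scale)
    return out

def fake_quantize_row(row, group_size):
    """Apply group-wise fake quantization across one embedding row."""
    assert len(row) % group_size == 0, "group_size must divide the row length"
    out = []
    for i in range(0, len(row), group_size):
        out.extend(fake_quantize_group(row[i:i + group_size]))
    return out
```

Because each group spans at most 16 quantization levels, the round-trip error for any value is bounded by half the group's scale, which is why smaller `group_size` values generally preserve accuracy better at the cost of more scale/zero-point metadata.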

convert(model: Module, *args: Any, **kwargs: Any) → Module[source]

Swap all Int4WeightOnlyQATEmbedding modules with Int4WeightOnlyEmbedding.

prepare(model: Module, *args: Any, **kwargs: Any) → Module[source]

Swap nn.Embedding modules with Int4WeightOnlyQATEmbedding.
