
Int8DynamicActivationInt8WeightConfig

class torchao.quantization.Int8DynamicActivationInt8WeightConfig(layout: Optional[Layout] = PlainLayout(), act_mapping_type: Optional[MappingType] = MappingType.SYMMETRIC, weight_only_decode: bool = False, set_inductor_config: bool = True)[source]

Configuration for applying dynamic, symmetric per-token int8 activation quantization and per-channel int8 weight quantization to linear layers.

Parameters:
  • layout – Optional[Layout] = PlainLayout() - Tensor layout for the quantized weights. Controls how the quantized data is stored and accessed.

  • act_mapping_type – Optional[MappingType] = MappingType.SYMMETRIC - Mapping type for activation quantization. SYMMETRIC uses symmetric quantization around zero.

  • weight_only_decode – bool = False - If True, only the weights are quantized during the forward pass and activations are kept in their original precision during decode operations.

  • set_inductor_config – bool = True - If True, adjusts torchinductor settings to recommended values for better performance with this quantization scheme.
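
A minimal usage sketch, assuming torchao's quantize_ entry point is used to apply the config to a model in place; the model, layer sizes, and input shape below are illustrative:

    import torch
    from torch import nn

    from torchao.quantization import quantize_, Int8DynamicActivationInt8WeightConfig

    # Toy model; any nn.Linear modules are eligible for this quantization scheme.
    model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10)).eval()

    # Apply int8 dynamic per-token activation + int8 per-channel weight quantization
    # in place. Defaults: PlainLayout, symmetric activation mapping, inductor tuning on.
    quantize_(model, Int8DynamicActivationInt8WeightConfig())

    # Inference as usual: weights were quantized when the config was applied,
    # activations are quantized dynamically at runtime.
    x = torch.randn(8, 1024)
    with torch.no_grad():
        out = model(x)

Passing set_inductor_config=False leaves the torchinductor settings untouched, which can be preferable when the surrounding application manages those settings itself.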
