
FakeQuantizeConfig

class torchao.quantization.qat.FakeQuantizeConfig(dtype: Union[dtype, TorchAODType], granularity: Optional[Union[Granularity, str]] = None, mapping_type: Optional[MappingType] = None, scale_precision: dtype = torch.float32, zero_point_precision: dtype = torch.int32, zero_point_domain: ZeroPointDomain = ZeroPointDomain.INT, is_dynamic: bool = True, range_learning: bool = False, eps: Optional[float] = None, *, group_size: Optional[int] = None, is_symmetric: Optional[bool] = None)[source]

Config for how to fake quantize weights or activations.

Parameters:
  • dtype – dtype to simulate during fake quantization, e.g. torch.int8. For PyTorch versions older than 2.6, you may use TorchAODType to represent torch.int1 to torch.int7 instead, e.g. TorchAODType.INT4.

  • granularity

    granularity of scales and zero points, e.g. PerGroup(32). We also support the following strings:

    1. 'per_token': equivalent to PerToken()

    2. 'per_channel': equivalent to PerAxis(0)

    3. 'per_group': equivalent to PerGroup(group_size); must be combined
       with a separate group_size kwarg. Alternatively, just set the
       group_size kwarg and leave this field empty.

  • mapping_type – whether to use symmetric (default) or asymmetric quantization. Alternatively, set is_symmetric (bool) and leave this field empty.

  • scale_precision – scale dtype (default torch.float32)

  • zero_point_precision – zero point dtype (default torch.int32)

  • zero_point_domain – whether zero point is in integer (default) or float domain

  • is_dynamic – whether to use dynamic (default) or static scales and zero points

  • range_learning (prototype) – whether to learn the scale and zero points during training (default False); not compatible with is_dynamic=True.

kwargs (optional):

  • group_size – size of each group in per-group fake quantization; can be set instead of granularity.

  • is_symmetric – whether to use symmetric or asymmetric quantization; can be set instead of mapping_type.

Example usage:

# Imports used by the examples below; exact import paths may vary slightly across torchao versions
import torch

from torchao.quantization.granularity import PerAxis, PerGroup, PerToken
from torchao.quantization.qat import FakeQuantizeConfig
from torchao.quantization.quant_primitives import MappingType

# Per token asymmetric quantization
FakeQuantizeConfig(torch.int8, "per_token", is_symmetric=False)
FakeQuantizeConfig(torch.int8, PerToken(), MappingType.ASYMMETRIC)

# Per channel symmetric quantization
FakeQuantizeConfig(torch.int4, "per_channel")
FakeQuantizeConfig(torch.int4, "per_channel", is_symmetric=True)
FakeQuantizeConfig(torch.int4, PerAxis(0), MappingType.SYMMETRIC)

# Per group symmetric quantization
FakeQuantizeConfig(torch.int4, group_size=32)
FakeQuantizeConfig(torch.int4, group_size=32, is_symmetric=True)
FakeQuantizeConfig(torch.int4, "per_group", group_size=32, is_symmetric=True)
FakeQuantizeConfig(torch.int4, PerGroup(32), MappingType.SYMMETRIC)
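
The examples above all use dynamic scales and zero points (the default). As a rough sketch of the remaining fields documented above (not taken from the torchao docs, and whether a given combination is supported may depend on the torchao version), static scales and prototype range learning would be requested like this:

# Static scales and zero points, computed once rather than on every forward pass
FakeQuantizeConfig(torch.int8, "per_channel", is_dynamic=False)

# Prototype: learn the scale and zero points during training; uses is_dynamic=False
# because range_learning is not compatible with dynamic scales
FakeQuantizeConfig(torch.int8, "per_channel", is_dynamic=False, range_learning=True)
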
property group_size: int

If the granularity is per group, return the group size; otherwise, raise an error.

property is_symmetric: bool

Return True if mapping type is symmetric, else False (asymmetric).
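
For illustration only (a small sketch, not part of the original reference), the two properties behave like this for configs similar to the examples above:

config = FakeQuantizeConfig(torch.int4, group_size=32)
config.group_size       # 32, since the granularity is PerGroup(32)
config.is_symmetric     # True, since symmetric mapping is the default

per_token = FakeQuantizeConfig(torch.int8, "per_token", is_symmetric=False)
per_token.is_symmetric  # False
per_token.group_size    # raises an error: granularity is not per group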
