Float8FakeQuantizeConfig¶
- class torchao.quantization.qat.Float8FakeQuantizeConfig(dtype: dtype = torch.float8_e4m3fn, granularity: Union[PerTensor, PerRow] = PerRow(), hp_value_lb: Optional[float] = None, hp_value_ub: Optional[float] = None)[source]¶
Config for float8 fake quantization, targeting Float8Tensor.
- Parameters:
dtype (torch.dtype) – the dtype of the float8 Tensor
granularity (FP8Granularity) – the quantization granularity for the Tensor, currently either PerRow() or PerTensor()
hp_value_lb (Optional[float]) – the lower bound on the high precision floating point value used when calculating the scale
hp_value_ub (Optional[float]) – the upper bound on the high precision floating point value used when calculating the scale
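The sketch below illustrates how hp_value_lb and hp_value_ub can bound the observed high-precision amax before the float8 scale is derived. This is a simplified pure-Python illustration of the general amax-based float8 scaling scheme (scale = clamped amax divided by the largest finite float8_e4m3fn value, 448.0), not torchao's actual implementation; the function name float8_scale is hypothetical.

```python
# Largest finite value representable in torch.float8_e4m3fn.
FP8_E4M3_MAX = 448.0

def float8_scale(values, hp_value_lb=None, hp_value_ub=None):
    """Illustrative per-tensor scale: clamp the observed amax to
    [hp_value_lb, hp_value_ub], then divide by the float8 max value.
    (Sketch only -- not torchao's implementation.)"""
    amax = max(abs(v) for v in values)
    if hp_value_lb is not None:
        amax = max(amax, hp_value_lb)
    if hp_value_ub is not None:
        amax = min(amax, hp_value_ub)
    return amax / FP8_E4M3_MAX

# Without bounds, the scale tracks the raw amax (here 2.0).
s_raw = float8_scale([0.5, -2.0, 1.0])
# With hp_value_ub=1.0, an outlier-inflated amax is clipped before scaling.
s_clipped = float8_scale([0.5, -2.0, 1.0], hp_value_ub=1.0)
```

Clamping the amax in this way keeps a few outlier values from inflating the scale and washing out the resolution available to the rest of the tensor.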