Aliases in torch.ao
Created On: Dec 01, 2025 | Last Updated On: Dec 01, 2025
The following are aliases, exposed through nested namespaces, to their counterparts in torch.ao.
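Each aliased path resolves to the same class object as its canonical counterpart. A minimal check, assuming a recent PyTorch build where torch.ao is available:

    import torch.ao.nn.intrinsic.qat as nniqat
    import torch.ao.nn.intrinsic.qat.modules.conv_fused as conv_fused

    # The nested-module path and the canonical namespace expose the same class.
    assert conv_fused.ConvReLU2d is nniqat.ConvReLU2d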
torch.ao.nn.intrinsic.qat.modules
The following are aliases to their counterparts in torch.ao.nn.intrinsic.qat in the torch.ao.nn.intrinsic.qat.modules namespace.
torch.ao.nn.intrinsic.qat.modules.conv_fused (Aliases)
ConvReLU1d: A fused module of Conv1d and ReLU, attached with FakeQuantize modules for weight, for quantization aware training.
ConvReLU2d: A fused module of Conv2d and ReLU, attached with FakeQuantize modules for weight, for quantization aware training.
ConvReLU3d: A fused module of Conv3d and ReLU, attached with FakeQuantize modules for weight, for quantization aware training.
ConvBnReLU1d: A module fused from Conv1d, BatchNorm1d and ReLU, attached with FakeQuantize modules for weight, used in quantization aware training.
ConvBnReLU2d: A module fused from Conv2d, BatchNorm2d and ReLU, attached with FakeQuantize modules for weight, used in quantization aware training.
ConvBnReLU3d: A module fused from Conv3d, BatchNorm3d and ReLU, attached with FakeQuantize modules for weight, used in quantization aware training.
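These fused QAT modules are normally produced by the eager-mode prepare flow rather than constructed by hand. A minimal sketch, assuming the eager-mode QAT entry points in torch.ao.quantization (fuse_modules_qat, prepare_qat) and the fbgemm backend:

    import torch
    from torch import nn
    from torch.ao.quantization import get_default_qat_qconfig, fuse_modules_qat, prepare_qat

    class Net(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv = nn.Conv2d(3, 16, 3)
            self.bn = nn.BatchNorm2d(16)
            self.relu = nn.ReLU()

        def forward(self, x):
            return self.relu(self.bn(self.conv(x)))

    m = Net().train()
    m.qconfig = get_default_qat_qconfig("fbgemm")
    m = fuse_modules_qat(m, [["conv", "bn", "relu"]])
    m = prepare_qat(m)
    # After prepare_qat, m.conv is a torch.ao.nn.intrinsic.qat.ConvBnReLU2d.
    out = m(torch.randn(1, 3, 32, 32))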
torch.ao.nn.intrinsic.qat.modules.linear_fused (Aliases)
LinearBn1d: A module fused from Linear and BatchNorm1d, attached with FakeQuantize modules for weight, used in quantization aware training.
torch.ao.nn.intrinsic.qat.modules.linear_relu (Aliases)
LinearReLU: A module fused from Linear and ReLU, attached with FakeQuantize modules for weight, used in quantization aware training.
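The QAT linear variants can also be constructed directly; they require a qconfig. A minimal sketch, assuming get_default_qat_qconfig and the fbgemm backend:

    import torch
    import torch.ao.nn.intrinsic.qat as nniqat
    from torch.ao.quantization import get_default_qat_qconfig

    qconfig = get_default_qat_qconfig("fbgemm")
    # Fused Linear + ReLU whose weight passes through a FakeQuantize module.
    m = nniqat.LinearReLU(8, 4, qconfig=qconfig)
    y = m(torch.randn(2, 8))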
torch.ao.nn.intrinsic.quantized.modules
The following are aliases to their counterparts in torch.ao.nn.intrinsic.quantized in the torch.ao.nn.intrinsic.quantized.modules namespace.
torch.ao.nn.intrinsic.quantized.modules.conv_relu (Aliases)
ConvReLU1d: A fused module of Conv1d and ReLU.
ConvReLU2d: A fused module of Conv2d and ReLU.
ConvReLU3d: A fused module of Conv3d and ReLU.
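These quantized fused modules come out of converting a calibrated float model. A minimal eager-mode post-training sketch, assuming the fbgemm backend is available on this machine:

    import torch
    from torch import nn
    from torch.ao.quantization import (
        QuantStub, DeQuantStub, get_default_qconfig, fuse_modules, prepare, convert,
    )

    class Net(nn.Module):
        def __init__(self):
            super().__init__()
            self.quant = QuantStub()
            self.conv = nn.Conv2d(3, 8, 3)
            self.relu = nn.ReLU()
            self.dequant = DeQuantStub()

        def forward(self, x):
            return self.dequant(self.relu(self.conv(self.quant(x))))

    m = Net().eval()
    m.qconfig = get_default_qconfig("fbgemm")
    m = fuse_modules(m, [["conv", "relu"]])
    m = prepare(m)
    m(torch.randn(1, 3, 16, 16))  # calibration pass
    m = convert(m)
    # m.conv is now a torch.ao.nn.intrinsic.quantized.ConvReLU2d.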
torch.ao.nn.intrinsic.quantized.modules.bn_relu (Aliases)
BNReLU2d: A fused module of BatchNorm2d and ReLU.
BNReLU3d: A fused module of BatchNorm3d and ReLU.
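The fused BN + ReLU modules operate on quantized tensors directly. A minimal sketch, assuming the default output scale/zero-point and a quantized CPU backend:

    import torch
    import torch.ao.nn.intrinsic.quantized as nniq

    m = nniq.BNReLU2d(4)  # num_features=4; output scale/zero_point default to 1.0/0
    x = torch.quantize_per_tensor(
        torch.randn(1, 4, 8, 8), scale=0.1, zero_point=128, dtype=torch.quint8
    )
    y = m(x)  # fused batch norm + ReLU on a quantized tensor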
torch.ao.nn.intrinsic.quantized.modules.conv_add (Aliases)
ConvAdd2d: A fused module of Conv2d and Add.
ConvAddReLU2d: A fused module of Conv2d, Add and ReLU.
torch.ao.nn.intrinsic.quantized.modules.linear_relu (Aliases)
LinearLeakyReLU: For the onednn backend only. A module fused from Linear and LeakyReLU that adopts the same interface as torch.ao.nn.quantized.Linear.
LinearReLU: A module fused from Linear and ReLU.
LinearTanh: A module fused from Linear and Tanh.
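As with the fused convolutions above, these modules are produced by the convert step. A condensed sketch for Linear + ReLU, assuming the same eager-mode flow and the fbgemm backend:

    import torch
    from torch import nn
    from torch.ao.quantization import (
        QuantStub, DeQuantStub, get_default_qconfig, fuse_modules, prepare, convert,
    )

    class M(nn.Module):
        def __init__(self):
            super().__init__()
            self.quant, self.dequant = QuantStub(), DeQuantStub()
            self.fc = nn.Linear(8, 4)
            self.relu = nn.ReLU()

        def forward(self, x):
            return self.dequant(self.relu(self.fc(self.quant(x))))

    m = M().eval()
    m.qconfig = get_default_qconfig("fbgemm")
    m = fuse_modules(m, [["fc", "relu"]])
    m = prepare(m)
    m(torch.randn(4, 8))  # calibration pass
    m = convert(m)
    # m.fc is now a torch.ao.nn.intrinsic.quantized.LinearReLU;
    # LinearLeakyReLU is lowered only when the onednn engine is selected.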
torch.ao.nn.intrinsic.quantized.dynamic.modules
The following are aliases to their counterparts in the torch.ao.nn.intrinsic.quantized.dynamic namespace.
torch.ao.nn.intrinsic.quantized.dynamic.modules.linear_relu (Aliases)
LinearReLU: A module fused from Linear and ReLU that can be used for dynamic quantization.
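A sketch of the dynamic path, assuming nni.LinearReLU is included in this build's default dynamic module mappings:

    import torch
    from torch import nn
    import torch.ao.nn.intrinsic as nni
    from torch.ao.quantization import fuse_modules, quantize_dynamic

    class M(nn.Module):
        def __init__(self):
            super().__init__()
            self.fc = nn.Linear(8, 4)
            self.relu = nn.ReLU()

        def forward(self, x):
            return self.relu(self.fc(x))

    m = M().eval()
    m = fuse_modules(m, [["fc", "relu"]])  # nn.Linear + nn.ReLU -> nni.LinearReLU
    dq = quantize_dynamic(m, {nni.LinearReLU}, dtype=torch.qint8)
    # dq.fc is now a torch.ao.nn.intrinsic.quantized.dynamic.LinearReLU.
    y = dq(torch.randn(2, 8))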
torch.ao.nn.intrinsic.modules
The following are aliases to their counterparts in the torch.ao.nn.intrinsic namespace.
torch.ao.nn.intrinsic.modules.fused (Aliases)
ConvAdd2d: A sequential container that calls the Conv2d module with an extra Add.
ConvAddReLU2d: A sequential container that calls Conv2d, Add, and ReLU.
LinearBn1d: A sequential container that calls the Linear and BatchNorm1d modules.
LinearLeakyReLU: A sequential container that calls the Linear and LeakyReLU modules.
LinearTanh: A sequential container that calls the Linear and Tanh modules.
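These containers hold ordinary float modules and mark them for later conversion; fuse_modules normally builds them for you. A minimal sketch of constructing one by hand, with hypothetical shapes:

    import torch
    from torch import nn
    import torch.ao.nn.intrinsic as nni

    conv = nn.Conv2d(3, 3, 3, padding=1)
    m = nni.ConvAdd2d(conv, torch.add)  # container calling conv, then add
    x = torch.randn(1, 3, 8, 8)
    y = m(x, x)  # equivalent to conv(x) + x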
torch.ao.nn.qat.modules
The following are aliases to their counterparts in the torch.ao.nn.qat namespace.
torch.ao.nn.qat.modules.conv (Aliases)
Conv1d: A Conv1d module attached with FakeQuantize modules for weight, used for quantization aware training.
Conv2d: A Conv2d module attached with FakeQuantize modules for weight, used for quantization aware training.
Conv3d: A Conv3d module attached with FakeQuantize modules for weight, used for quantization aware training.
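These QAT convolutions can be built directly when a qconfig is supplied. A minimal sketch, assuming get_default_qat_qconfig and the fbgemm backend:

    import torch
    import torch.ao.nn.qat as nnqat
    from torch.ao.quantization import get_default_qat_qconfig

    qconfig = get_default_qat_qconfig("fbgemm")
    m = nnqat.Conv2d(3, 8, 3, qconfig=qconfig)
    y = m(torch.randn(1, 3, 8, 8))  # weight passes through FakeQuantize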
torch.ao.nn.qat.modules.embedding_ops (Aliases)
Embedding: An embedding module attached with FakeQuantize modules for weight, used for quantization aware training.
EmbeddingBag: An embedding bag module attached with FakeQuantize modules for weight, used for quantization aware training.
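QAT embeddings require a weight observer with float qparams. A minimal sketch, assuming default_embedding_qat_qconfig is exported by torch.ao.quantization:

    import torch
    import torch.ao.nn.qat as nnqat
    from torch.ao.quantization import default_embedding_qat_qconfig

    emb = nnqat.EmbeddingBag(num_embeddings=10, embedding_dim=4,
                             qconfig=default_embedding_qat_qconfig)
    indices = torch.tensor([1, 2, 4, 5, 4, 3])
    offsets = torch.tensor([0, 3])  # two bags: indices[0:3] and indices[3:]
    out = emb(indices, offsets)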
torch.ao.nn.qat.modules.linear (Aliases)
Linear: A linear module attached with FakeQuantize modules for weight, used for quantization aware training.
torch.ao.nn.quantizable.modules
The following are aliases to their counterparts in the torch.ao.nn.quantizable namespace.
torch.ao.nn.quantizable.modules.activation (Aliases)
MultiheadAttention: A quantizable implementation of MultiheadAttention.
torch.ao.nn.quantizable.modules.rnn (Aliases)
LSTM: A quantizable long short-term memory (LSTM).
LSTMCell: A quantizable long short-term memory (LSTM) cell.
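The quantizable LSTM is a drop-in analogue of nn.LSTM that runs in float and can later be observed and converted. A minimal float-mode sketch:

    import torch
    from torch.ao.nn.quantizable import LSTM

    lstm = LSTM(input_size=4, hidden_size=8, num_layers=1, batch_first=True)
    x = torch.randn(2, 5, 4)  # (batch, seq, feature)
    out, (h, c) = lstm(x)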
torch.ao.nn.quantized.dynamic.modules
The following are aliases to their counterparts in the torch.ao.nn.quantized.dynamic namespace.
torch.ao.nn.quantized.dynamic.modules.conv (Aliases)
Conv1d: A dynamically quantized conv module with floating point tensors as inputs and outputs.
Conv2d: A dynamically quantized conv module with floating point tensors as inputs and outputs.
Conv3d: A dynamically quantized conv module with floating point tensors as inputs and outputs.
ConvTranspose1d: A dynamically quantized transposed convolution module with floating point tensors as inputs and outputs.
ConvTranspose2d: A dynamically quantized transposed convolution module with floating point tensors as inputs and outputs.
ConvTranspose3d: A dynamically quantized transposed convolution module with floating point tensors as inputs and outputs.
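Unlike dynamic Linear and LSTM, the dynamic conv modules are typically not in the default quantize_dynamic mappings, so the sketch below constructs one directly (randomly initialized weights; a quantized CPU backend such as fbgemm is assumed):

    import torch
    import torch.ao.nn.quantized.dynamic as nnqd

    m = nnqd.Conv2d(3, 8, 3)          # weights stored quantized (qint8)
    y = m(torch.randn(1, 3, 16, 16))  # float in, float out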
torch.ao.nn.quantized.dynamic.modules.linear (Aliases)
Linear: A dynamically quantized linear module with floating point tensors as inputs and outputs.
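The standard dynamic-quantization entry point produces this module from nn.Linear. A minimal sketch:

    import torch
    from torch import nn
    from torch.ao.quantization import quantize_dynamic

    m = nn.Sequential(nn.Linear(8, 4)).eval()
    dq = quantize_dynamic(m, {nn.Linear}, dtype=torch.qint8)
    # dq[0] is now a torch.ao.nn.quantized.dynamic.Linear.
    y = dq(torch.randn(2, 8))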
torch.ao.nn.quantized.dynamic.modules.rnn (Aliases)
GRU: Applies a multi-layer gated recurrent unit (GRU) RNN to an input sequence.
GRUCell: A gated recurrent unit (GRU) cell.
LSTM: A dynamically quantized LSTM module with floating point tensors as inputs and outputs.
LSTMCell: A long short-term memory (LSTM) cell.
RNNCell: An Elman RNN cell with tanh or ReLU non-linearity.
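Likewise for the recurrent modules; a minimal sketch converting an nn.LSTM submodule:

    import torch
    from torch import nn
    from torch.ao.quantization import quantize_dynamic

    class M(nn.Module):
        def __init__(self):
            super().__init__()
            self.lstm = nn.LSTM(input_size=4, hidden_size=8)

        def forward(self, x):
            return self.lstm(x)

    m = M().eval()
    dq = quantize_dynamic(m, {nn.LSTM}, dtype=torch.qint8)
    # dq.lstm is now a torch.ao.nn.quantized.dynamic.LSTM.
    out, (h, c) = dq(torch.randn(5, 2, 4))  # (seq, batch, feature)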