PReLU

class torch.nn.modules.activation.PReLU(num_parameters=1, init=0.25, device=None, dtype=None)[source]

Applies the element-wise PReLU function.

\text{PReLU}(x) = \max(0, x) + a * \min(0, x)

or

\text{PReLU}(x) = \begin{cases} x, & \text{if } x \ge 0 \\ ax, & \text{otherwise} \end{cases}

Here a is a learnable parameter. When called without arguments, nn.PReLU() uses a single parameter a across all input channels. If called with nn.PReLU(nChannels), a separate a is used for each input channel.
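
For example, with a = 0.25, PReLU(3) = 3 and PReLU(-2) = 0.25 * (-2) = -0.5. A minimal sketch of the two legitimate configurations (the 64-channel count is an assumed example value):

>>> import torch
>>> from torch import nn
>>> m = nn.PReLU()       # a single shared `a`; m.weight has shape (1,)
>>> m = nn.PReLU(64)     # one `a` per channel, assuming 64 input channels
>>> m.weight.shape       # torch.Size([64])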

Note

For good performance, weight decay should not be used when learning a.
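
One way to follow this advice, sketched below with an assumed toy model, is to place the PReLU parameters in a separate optimizer parameter group with weight_decay=0:

>>> model = nn.Sequential(nn.Linear(8, 8), nn.PReLU())  # assumed toy model
>>> prelu_params = [p for mod in model.modules()
...                 if isinstance(mod, nn.PReLU) for p in mod.parameters()]
>>> prelu_ids = {id(p) for p in prelu_params}
>>> other_params = [p for p in model.parameters() if id(p) not in prelu_ids]
>>> optimizer = torch.optim.SGD(
...     [{"params": other_params, "weight_decay": 1e-4},
...      {"params": prelu_params, "weight_decay": 0.0}],  # no decay on `a`
...     lr=0.1)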

Note

The channel dimension is the second dimension of the input. When the input has fewer than 2 dimensions, there is no channel dimension and the number of channels is 1.
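
As a sketch, assuming a 4D input of shape (N, C, H, W) with C = 3, the channels live on dim 1:

>>> m = nn.PReLU(3)                  # one `a` per channel
>>> y = m(torch.randn(4, 3, 8, 8))   # input has 3 channels on dim 1
>>> m1 = nn.PReLU()                  # a 1D input has no channel dim,
>>> y1 = m1(torch.randn(5))          # so only num_parameters=1 applies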

Parameters
  • num_parameters (int) – number of a to learn. Although it takes an int as input, only two values are legitimate: 1, or the number of channels of the input. Default: 1

  • init (float) – the initial value of a. Default: 0.25

Shape:
  • Input: (*), where * means any number of additional dimensions.

  • Output: (*), same shape as the input.

Variables

weight (Tensor) – the learnable weights of shape (num_parameters).

[Figure: plot of the PReLU activation function]

Examples:

>>> m = nn.PReLU()
>>> input = torch.randn(2)
>>> output = m(input)
extra_repr()[source]

Return the extra representation of the module.

Return type

str
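
For instance, the extra representation appears in the module's printed form:

>>> m = nn.PReLU()
>>> print(m)
PReLU(num_parameters=1)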

forward(input)[source]

Runs the forward pass.

Return type

Tensor

reset_parameters()[source]

Resets the parameters to the initial values used in __init__.
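
A sketch of its effect, using an assumed init value of 0.1:

>>> m = nn.PReLU(init=0.1)
>>> with torch.no_grad():
...     m.weight.fill_(0.9)    # simulate training having moved `a`
>>> m.reset_parameters()       # `a` is restored to 0.1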