torch.fft.ifftn
- torch.fft.ifftn(input, s=None, dim=None, norm=None, *, out=None) → Tensor
Computes the N dimensional inverse discrete Fourier transform of `input`.

Note

Supports torch.half and torch.chalf on CUDA with GPU architecture SM53 or greater. However, it only supports powers-of-2 signal lengths in every transformed dimension.
- Parameters
input (Tensor) – the input tensor
s (Tuple[int], optional) – Signal size in the transformed dimensions. If given, each dimension `dim[i]` will either be zero-padded or trimmed to the length `s[i]` before computing the IFFT. If a length of `-1` is specified, no padding is done in that dimension. Default: `s = [input.size(d) for d in dim]`

dim (Tuple[int], optional) – Dimensions to be transformed. Default: all dimensions, or the last `len(s)` dimensions if `s` is given.

norm (str, optional) – Normalization mode. For the backward transform (`ifftn()`), these correspond to:

- `"forward"` – no normalization
- `"backward"` – normalize by `1/n`
- `"ortho"` – normalize by `1/sqrt(n)` (making the IFFT orthonormal)

Where `n = prod(s)` is the logical IFFT size. Calling the forward transform (`fftn()`) with the same normalization mode will apply an overall normalization of `1/n` between the two transforms. This is required to make `ifftn()` the exact inverse.

Default is `"backward"` (normalize by `1/n`).
- Keyword Arguments
out (Tensor, optional) – the output tensor.
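To illustrate how `s` and `dim` interact, here is a small sketch (the tensor sizes are illustrative, not from the original text) showing that each entry of `s` pads or trims the corresponding entry of `dim` before the transform:

```python
import torch

x = torch.rand(4, 6, dtype=torch.complex64)

# Transform only the last dimension, zero-padding it from 6 to 8
# before the IFFT; untransformed dimensions keep their size.
y = torch.fft.ifftn(x, s=(8,), dim=(-1,))
assert y.shape == (4, 8)

# An s[i] smaller than the input length trims that dimension instead.
z = torch.fft.ifftn(x, s=(3,), dim=(-1,))
assert z.shape == (4, 3)
```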
Example
>>> x = torch.rand(10, 10, dtype=torch.complex64)
>>> ifftn = torch.fft.ifftn(x)
The discrete Fourier transform is separable, so `ifftn()` here is equivalent to two one-dimensional `ifft()` calls:

>>> two_iffts = torch.fft.ifft(torch.fft.ifft(x, dim=0), dim=1)
>>> torch.testing.assert_close(ifftn, two_iffts, check_stride=False)
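The normalization rule described above can also be checked directly: as a small sketch (tensor sizes are illustrative), applying `fftn()` and then `ifftn()` with the same `norm` mode recovers the input, since the combined normalization between the pair is always `1/n`:

```python
import torch

x = torch.rand(8, 8, dtype=torch.complex64)

# "backward" (the default) puts the full 1/n factor on the inverse,
# so fftn followed by ifftn reproduces the input.
roundtrip = torch.fft.ifftn(torch.fft.fftn(x))
torch.testing.assert_close(roundtrip, x)

# "ortho" splits the normalization as 1/sqrt(n) on each transform;
# matching modes on both sides remain exact inverses.
ortho_roundtrip = torch.fft.ifftn(torch.fft.fftn(x, norm="ortho"), norm="ortho")
torch.testing.assert_close(ortho_roundtrip, x)
```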