torch.irfft
torch.irfft(input, signal_ndim, normalized=False, onesided=True, signal_sizes=None) → Tensor
Complex-to-real Inverse Discrete Fourier Transform.
Warning
The function torch.irfft() is deprecated and will be removed in a future PyTorch release. Use the new torch.fft module functions instead, by importing torch.fft and calling torch.fft.irfft() for one-sided input, or torch.fft.ifft() for two-sided input.
This method computes the complex-to-real inverse discrete Fourier transform. It is mathematically equivalent to ifft(), with differences only in the formats of the input and output.
The argument specifications are almost identical to those of ifft(). Similar to ifft(), if normalized is set to True, this normalizes the result by multiplying it with \sqrt{\prod_{i=1}^{K} N_i} so that the operator is unitary, where N_i is the size of signal dimension i.
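As a hedged migration sketch (not part of the original documentation; it assumes a PyTorch build where torch.fft is available), the one-sided round trip can be written with the new module recommended in the warning above. Here n plays a role similar to signal_sizes, and norm="ortho" loosely corresponds to normalized=True:

>>> import torch
>>> import torch.fft
>>> x = torch.randn(4, 5)
>>> # one-sided forward transform along the last dimension
>>> y = torch.fft.rfft(x, norm="ortho")
>>> # n=x.shape[-1] restores the original (odd) length; "ortho" keeps the pair unitary
>>> x_rec = torch.fft.irfft(y, n=x.shape[-1], norm="ortho")
>>> torch.allclose(x, x_rec, atol=1e-6)
True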
Note
Due to the conjugate symmetry, input does not need to contain the full complex frequency values. Roughly half of the values will be sufficient, as is the case when input is given by rfft() with rfft(signal, onesided=True). In such a case, set the onesided argument of this method to True. Moreover, the original signal shape information can sometimes be lost; optionally set signal_sizes to be the size of the original signal (without the batch dimensions if in batched mode) to recover it with the correct shape.
Therefore, to invert an rfft(), the normalized and onesided arguments should be set identically for irfft(), and preferably signal_sizes should be given to avoid a size mismatch. See the example below for a case of size mismatch.
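For illustration, a short hedged sketch of that ambiguity (shapes follow the deprecated API described on this page): without signal_sizes, irfft() assumes an even last signal dimension:

>>> x = torch.randn(4, 5)
>>> y = torch.rfft(x, 2, onesided=True)   # y.shape == torch.Size([4, 3, 2])
>>> # without signal_sizes the last dimension is inferred as 2 * (3 - 1) = 4
>>> torch.irfft(y, 2, onesided=True).shape
torch.Size([4, 4])
>>> # passing the original shape recovers the odd length
>>> torch.irfft(y, 2, onesided=True, signal_sizes=x.shape).shape
torch.Size([4, 5])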
See rfft() for details on conjugate symmetry.
The inverse of this function is rfft().
Warning
Generally speaking, input to this function should contain values following conjugate symmetry. Note that even if onesided is True, symmetry is often still required on some part of the input. When this requirement is not satisfied, the behavior of irfft() is undefined. Since torch.autograd.gradcheck() estimates the numerical Jacobian with point perturbations, irfft() will almost certainly fail the check.
Note
For CUDA tensors, an LRU cache is used for cuFFT plans to speed up repeatedly running FFT methods on tensors of the same geometry with the same configuration. See cuFFT plan cache for more details on how to monitor and control the cache.
Warning
Due to the limited dynamic range of the half datatype, performing this operation in half precision may cause the first element of the result to overflow for certain inputs.
Warning
For CPU tensors, this method is currently only available with MKL. Use torch.backends.mkl.is_available() to check whether MKL is installed.
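For instance, a quick check (the result depends on how PyTorch was built):

>>> import torch
>>> torch.backends.mkl.is_available()  # True only on MKL-enabled builds
True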
- Parameters
input (Tensor) – the input tensor of at least signal_ndim + 1 dimensions
signal_ndim (int) – the number of dimensions in each signal. signal_ndim can only be 1, 2 or 3
normalized (bool, optional) – controls whether to return normalized results. Default: False
onesided (bool, optional) – controls whether input was halved to avoid redundancy, e.g., by rfft(). Default: True
signal_sizes (list or torch.Size, optional) – the size of the original signal (without batch dimension). Default: None
- Returns
A tensor containing the complex-to-real inverse Fourier transform result
- Return type
Tensor
Example:
>>> x = torch.randn(4, 4)
>>> torch.rfft(x, 2, onesided=True).shape
torch.Size([4, 3, 2])
>>>
>>> # notice that with onesided=True, output size does not determine the original signal size
>>> x = torch.randn(4, 5)
>>> torch.rfft(x, 2, onesided=True).shape
torch.Size([4, 3, 2])
>>>
>>> # now we use the original shape to recover x
>>> x
tensor([[-0.8992,  0.6117, -1.6091, -0.4155, -0.8346],
        [-2.1596, -0.0853,  0.7232,  0.1941, -0.0789],
        [-2.0329,  1.1031,  0.6869, -0.5042,  0.9895],
        [-0.1884,  0.2858, -1.5831,  0.9917, -0.8356]])
>>> y = torch.rfft(x, 2, onesided=True)
>>> torch.irfft(y, 2, onesided=True, signal_sizes=x.shape)  # recover x
tensor([[-0.8992,  0.6117, -1.6091, -0.4155, -0.8346],
        [-2.1596, -0.0853,  0.7232,  0.1941, -0.0789],
        [-2.0329,  1.1031,  0.6869, -0.5042,  0.9895],
        [-0.1884,  0.2858, -1.5831,  0.9917, -0.8356]])