python - How to correctly use Numpy's FFT function in PyTorch?
I was recently introduced to PyTorch and began running through the library's documentation and tutorials. In the "Creating extensions using numpy and scipy" tutorial (http://pytorch.org/tutorials/advanced/numpy_extensions_tutorial.html), under "Parameter-less example", a sample function is created using numpy called "BadFFTFunction".
The description of the function states:

"This layer doesn’t particularly do anything useful or mathematically correct.

It is aptly named BadFFTFunction"
The function and its usage are given as:

    import torch
    from torch.autograd import Function, Variable
    from numpy.fft import rfft2, irfft2

    class BadFFTFunction(Function):

        def forward(self, input):
            numpy_input = input.numpy()
            result = abs(rfft2(numpy_input))
            return torch.FloatTensor(result)

        def backward(self, grad_output):
            numpy_go = grad_output.numpy()
            result = irfft2(numpy_go)
            return torch.FloatTensor(result)

    def incorrect_fft(input):
        return BadFFTFunction()(input)

    input = Variable(torch.randn(8, 8), requires_grad=True)
    result = incorrect_fft(input)
    print(result.data)
    result.backward(torch.randn(result.size()))
    print(input.grad)
Unfortunately, I was only recently introduced to signal processing as well, and I am unsure where the (likely obvious) error is in this function.
I was wondering, how might one go about fixing this function so that its forward and backward outputs are correct? How can BadFFTFunction be fixed so that a differentiable FFT function can be used in PyTorch?
Any help would be appreciated. Thank you.
I think the errors are: first, the function, despite having FFT in its name, only returns the amplitudes/absolute values of the FFT output, not the full complex coefficients. Also, using the inverse FFT to compute the gradient of the amplitudes doesn't seem to make sense mathematically.
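Both points can be checked numerically with NumPy alone. Here is a quick sketch (the variable names, the 4x4 test size, and the finite-difference tolerance are my own choices, not from the tutorial):

```python
import numpy as np
from numpy.fft import rfft2, irfft2

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 4))

# 1. abs() discards phase: a plain rfft2/irfft2 round trip recovers x,
#    but inserting abs() in the middle does not.
print(np.allclose(irfft2(rfft2(x), s=x.shape), x))        # True
print(np.allclose(irfft2(abs(rfft2(x)), s=x.shape), x))   # False

# 2. irfft2(grad_output) is not the gradient of abs(rfft2(x)).
#    Compare it against a central finite-difference estimate of
#    d/dx sum(g * abs(rfft2(x))) for a random upstream gradient g.
g = rng.standard_normal(rfft2(x).shape)   # real, as in the tutorial's backward
claimed = irfft2(g, s=x.shape)            # what BadFFTFunction.backward returns

eps = 1e-6
true_grad = np.zeros_like(x)
for i in range(x.shape[0]):
    for j in range(x.shape[1]):
        xp, xm = x.copy(), x.copy()
        xp[i, j] += eps
        xm[i, j] -= eps
        true_grad[i, j] = (np.sum(g * abs(rfft2(xp))) -
                           np.sum(g * abs(rfft2(xm)))) / (2 * eps)

print(np.allclose(claimed, true_grad, atol=1e-4))          # False
```

The second check fails because the true gradient of the amplitudes involves the phase factors of the forward transform, which `irfft2(grad_output)` never sees.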
There is a package called pytorch-fft that tries to make an FFT function available in PyTorch. You can see some experimental code for its autograd functionality here. Also note the discussion in this issue.