torch.autograd.gradgradcheck
- torch.autograd.gradgradcheck(func, inputs, grad_outputs=None, *, eps=1e-06, atol=1e-05, rtol=0.001, gen_non_contig_grad_outputs=False, raise_exception=True, nondet_tol=0.0, check_undefined_grad=True, check_grad_dtypes=False, check_batched_grad=False, check_fwd_over_rev=False, check_rev_over_rev=True, fast_mode=False, masked=False)
Check gradients of gradients computed via small finite differences against analytical gradients wrt tensors in inputs and grad_outputs that are of floating point or complex type and have requires_grad=True.

This function checks that backpropagating through the gradients computed to the given grad_outputs is correct.

The check between numerical and analytical gradients uses allclose().

Note

The default values are designed for inputs and grad_outputs of double precision. This check will likely fail if they are of less precision, e.g., FloatTensor.

Warning

If any checked tensor in inputs and grad_outputs has overlapping memory, i.e., different indices pointing to the same memory address (e.g., from torch.expand()), this check will likely fail because the numerical gradients computed by point perturbation at such indices will change values at all other indices that share the same memory address.

- Parameters
- func (function) – a Python function that takes Tensor inputs and returns a Tensor or a tuple of Tensors 
- inputs (tuple of Tensor or Tensor) – inputs to the function 
- grad_outputs (tuple of Tensor or Tensor, optional) – The gradients with respect to the function’s outputs. 
- eps (float, optional) – perturbation for finite differences 
- atol (float, optional) – absolute tolerance 
- rtol (float, optional) – relative tolerance 
- gen_non_contig_grad_outputs (bool, optional) – if grad_outputs is None and gen_non_contig_grad_outputs is True, the randomly generated gradient outputs are made to be noncontiguous 
- raise_exception (bool, optional) – whether to raise an exception if the check fails. The exception gives more information about the exact nature of the failure, which is helpful when debugging gradchecks. 
- nondet_tol (float, optional) – tolerance for non-determinism. When running identical inputs through the differentiation, the results must either match exactly (default, 0.0) or be within this tolerance. Note that a small amount of nondeterminism in the gradient will lead to larger inaccuracies in the second derivative. 
- check_undefined_grad (bool, optional) – if True, check if undefined output grads are supported and treated as zeros 
- check_batched_grad (bool, optional) – if True, check if we can compute batched gradients using prototype vmap support. Defaults to False. 
- fast_mode (bool, optional) – if True, run a faster implementation of gradgradcheck that does not compute the entire Jacobian. 
- masked (bool, optional) – if True, the gradients of unspecified elements of sparse tensors are ignored. Defaults to False. 
 
- Returns
- True if all differences satisfy the allclose() condition 
- Return type
- bool
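
Example

A minimal usage sketch, not part of the official documentation: the test function below is illustrative, and the inputs are double precision with requires_grad=True as the Note above recommends.

    import torch
    from torch.autograd import gradgradcheck

    # Double-precision inputs that require grad; the default tolerances
    # assume double precision (see the Note above).
    x = torch.randn(4, dtype=torch.double, requires_grad=True)
    y = torch.randn(4, dtype=torch.double, requires_grad=True)

    def func(x, y):
        # An illustrative, twice-differentiable function.
        return (x * y).sin()

    # grad_outputs is None here, so random double-precision gradient
    # outputs are generated internally. With raise_exception=True (the
    # default), a failure raises an exception with details; on success
    # the call returns True.
    assert gradgradcheck(func, (x, y))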
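A second sketch, again illustrative rather than canonical, passing grad_outputs explicitly and enabling fast_mode. To be included in the check themselves, explicitly supplied gradient outputs should be of floating point or complex type with requires_grad=True, per the description above.

    import torch
    from torch.autograd import gradgradcheck

    x = torch.randn(3, 3, dtype=torch.double, requires_grad=True)

    def func(x):
        return x.exp()

    # Gradient outputs matching the output's shape and dtype.
    grad_out = torch.randn(3, 3, dtype=torch.double, requires_grad=True)

    # fast_mode=True avoids computing the entire Jacobian.
    assert gradgradcheck(func, (x,), grad_outputs=(grad_out,), fast_mode=True)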