Function torch::autograd::grad
Defined in File autograd.h
Function Documentation
- variable_list torch::autograd::grad(const variable_list &outputs, const variable_list &inputs, const variable_list &grad_outputs = {}, std::optional<bool> retain_graph = std::nullopt, bool create_graph = false, bool allow_unused = false)
Computes and returns the sum of gradients of outputs with respect to the inputs.
grad_outputs should be a sequence of length matching outputs, containing the “vector” in the Jacobian-vector product, usually the pre-computed gradients w.r.t. each of the outputs. If an output doesn’t require_grad, then the gradient can be torch::Tensor().

Parameters

- outputs – Outputs of the differentiated function.
- inputs – Inputs w.r.t. which the gradient will be returned (and not accumulated into at::Tensor::grad).
- grad_outputs – The “vector” in the Jacobian-vector product. Usually gradients w.r.t. each output. torch::Tensor() values can be specified for scalar Tensors or ones that don’t require grad. If a torch::Tensor() value would be acceptable for all grad_tensors, then this argument is optional. Default: {}.
- retain_graph – If false, the graph used to compute the grad will be freed. Note that in nearly all cases setting this option to true is not needed and can often be worked around in a much more efficient way. Defaults to the value of create_graph.
- create_graph – If true, the graph of the derivative will be constructed, allowing higher-order derivative products to be computed. Default: false.
- allow_unused – If false, specifying inputs that were not used when computing outputs (and therefore whose grad is always zero) is an error. Defaults to false.
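For illustration, a minimal sketch of calling this function from C++, assuming a standard libtorch setup (the tensor shape and the use of a scalar loss here are arbitrary choices, not part of the API):

    #include <torch/torch.h>
    #include <iostream>

    int main() {
      // y = sum(x * x), so dy/dx = 2 * x.
      torch::Tensor x = torch::randn({3}, torch::requires_grad());
      torch::Tensor y = (x * x).sum();

      // First-order gradient. grad_outputs can stay empty because y is a
      // scalar; create_graph = true keeps the derivative differentiable.
      auto dydx = torch::autograd::grad(
          /*outputs=*/{y}, /*inputs=*/{x}, /*grad_outputs=*/{},
          /*retain_graph=*/true, /*create_graph=*/true);

      // Second-order gradient: d(sum(dy/dx))/dx, expected to be all 2s.
      auto d2ydx2 = torch::autograd::grad({dydx[0].sum()}, {x});

      // Neither call accumulated into x.grad(); the gradients are returned.
      std::cout << dydx[0] << "\n" << d2ydx2[0] << std::endl;
    }

Because grad() returns the gradients rather than writing them into at::Tensor::grad, it is convenient for higher-order derivatives and for cases where the leaves’ .grad fields should stay untouched.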