Automatic differentiation package - torch.autograd

torch.autograd provides classes and functions implementing automatic differentiation of arbitrary scalar-valued functions. It requires minimal changes to existing code - you only need to wrap all tensors in Variable objects.

torch.autograd.backward(variables, grad_variables=None, retain_graph=None, create_graph=None, retain_variables=None)[source]

Computes the sum of gradients of given variables w.r.t. graph leaves.

The graph is differentiated using the chain rule. If any of the variables are non-scalar (i.e. their data has more than one element) and require gradient, the function additionally requires specifying grad_variables. It should be a sequence of matching length that contains the gradient of the differentiated function w.r.t. the corresponding variables (None is an acceptable value for all variables that don’t need gradient tensors).

This function accumulates gradients in the leaves - you might need to zero them before calling it.

Parameters:
  • variables (sequence of Variable) – Variables of which the derivative will be computed.
  • grad_variables (sequence of (Tensor, Variable or None)) – Gradients w.r.t. each element of corresponding variables. Any tensors will be automatically converted to Variables that are volatile unless create_graph is True. None values can be specified for scalar Variables or ones that don’t require grad. If a None value would be acceptable for all grad_variables, then this argument is optional.
  • retain_graph (bool, optional) – If False, the graph used to compute the grad will be freed. Note that in nearly all cases setting this option to True is not needed and often can be worked around in a much more efficient way. Defaults to the value of create_graph.
  • create_graph (bool, optional) – If True, graph of the derivative will be constructed, allowing higher order derivative products to be computed. Defaults to False, unless grad_variables contains at least one non-volatile Variable.
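
A minimal usage sketch (the tensor shapes and names are illustrative): calling backward on a non-scalar Variable requires a matching entry in grad_variables, and the gradients land in the leaves' .grad attributes.

>>> import torch
>>> from torch.autograd import Variable
>>> x = Variable(torch.ones(2, 2), requires_grad=True)
>>> y = x * 3  # non-scalar, so a gradient of matching shape is required
>>> torch.autograd.backward([y], [torch.ones(2, 2)])
>>> x.grad.data  # gradients accumulated in the leaf: all threes
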
torch.autograd.grad(outputs, inputs, grad_outputs=None, retain_graph=None, create_graph=None, only_inputs=True)[source]

Computes and returns the sum of gradients of outputs w.r.t. the inputs.

grad_outputs should be a sequence of length matching outputs, containing the pre-computed gradients w.r.t. each of the outputs. If an output doesn’t require_grad, then the gradient can be None. Gradients can be given as Tensors when one doesn’t need the graph of the derivative, or as Variables, in which case the graph will be created.

If only_inputs is True, the function will only return a list of gradients w.r.t. the specified inputs. If it’s False, then gradient w.r.t. all remaining leaves will still be computed, and will be accumulated into their .grad attribute.

Parameters:
  • outputs (sequence of Variable) – outputs of the differentiated function.
  • inputs (sequence of Variable) – Inputs w.r.t. which the gradient will be returned (and not accumulated into .grad).
  • grad_outputs (sequence of Tensor or Variable) – Gradients w.r.t. each output. Any tensors will be automatically converted to Variables that are volatile unless create_graph is True. None values can be specified for scalar Variables or ones that don’t require grad. If a None value would be acceptable for all grad_outputs, then this argument is optional.
  • retain_graph (bool, optional) – If False, the graph used to compute the grad will be freed. Note that in nearly all cases setting this option to True is not needed and often can be worked around in a much more efficient way. Defaults to the value of create_graph.
  • create_graph (bool, optional) – If True, graph of the derivative will be constructed, allowing higher order derivative products to be computed. Defaults to False, unless grad_outputs contains at least one non-volatile Variable.
  • only_inputs (bool, optional) – If True, gradients w.r.t. leaves that are part of the graph but don’t appear in inputs won’t be computed or accumulated. Defaults to True.
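
Unlike backward(), grad() returns the gradients instead of accumulating them into .grad. A minimal sketch (names are illustrative) with a scalar output, for which grad_outputs can be omitted:

>>> import torch
>>> from torch.autograd import Variable
>>> x = Variable(torch.ones(3), requires_grad=True)
>>> y = (x ** 2).sum()  # scalar output, so grad_outputs may be omitted
>>> torch.autograd.grad([y], [x])  # returns (2 * x,) and leaves x.grad unset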

Variable

API compatibility

The Variable API is nearly the same as the regular Tensor API (with the exception of a couple of in-place methods that would overwrite inputs required for gradient computation). In most cases Tensors can be safely replaced with Variables and the code will continue to work just fine. Because of this, we’re not documenting all the operations on variables, and you should refer to the torch.Tensor docs for this purpose.

In-place operations on Variables

Supporting in-place operations in autograd is a hard matter, and we discourage their use in most cases. Autograd’s aggressive buffer freeing and reuse makes it very efficient and there are very few occasions when in-place operations actually lower memory usage by any significant amount. Unless you’re operating under heavy memory pressure, you might never need to use them.

In-place correctness checks

All Variable s keep track of in-place operations applied to them, and if the implementation detects that a variable was saved for backward in one of the functions, but was modified in-place afterwards, an error will be raised once the backward pass is started. This ensures that if you’re using in-place functions and not seeing any errors, you can be sure that the computed gradients are correct.
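
A sketch of the check in action, assuming exp() is one of the functions that save their output for the backward pass:

>>> import torch
>>> from torch.autograd import Variable
>>> x = Variable(torch.ones(3), requires_grad=True)
>>> y = x.exp()  # exp saves its output to compute the backward pass
>>> y.add_(1)    # in-place modification of a saved Variable
>>> y.backward(torch.ones(3))  # raises a RuntimeError at this point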

class torch.autograd.Variable[source]

Wraps a tensor and records the operations applied to it.

Variable is a thin wrapper around a Tensor object that also holds the gradient w.r.t. it, and a reference to the function that created it. This reference allows retracing the whole chain of operations that created the data. If the Variable has been created by the user, its grad_fn will be None and we call such objects leaf Variables.

Since autograd only supports scalar-valued function differentiation, grad size always matches the data size. Also, grad is normally only allocated for leaf variables, and will always be zero otherwise.

Variables:
  • data – Wrapped tensor of any type.
  • grad – Variable holding the gradient of type and location matching the .data. This attribute is lazily allocated and can’t be reassigned.
  • requires_grad – Boolean indicating whether the Variable has been created by a subgraph containing any Variable that requires grad. See Excluding subgraphs from backward for more details. Can be changed only on leaf Variables.
  • volatile – Boolean indicating that the Variable should be used in inference mode, i.e. don’t save the history. See Excluding subgraphs from backward for more details. Can be changed only on leaf Variables.
  • is_leaf – Boolean indicating if the Variable is a graph leaf (i.e. if it was created by the user).
  • grad_fn – Gradient function graph trace.
Parameters:
  • data (any tensor class) – Tensor to wrap.
  • requires_grad (bool) – Value of the requires_grad flag. Keyword only.
  • volatile (bool) – Value of the volatile flag. Keyword only.
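
A short sketch of constructing a leaf Variable and inspecting the attributes above:

>>> import torch
>>> from torch.autograd import Variable
>>> x = Variable(torch.randn(5), requires_grad=True)  # leaf Variable
>>> y = x * 2                                         # created by an operation
>>> x.is_leaf, y.is_leaf
(True, False)
>>> x.grad_fn is None  # leaves have no grad_fn
True
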
backward(gradient=None, retain_graph=None, create_graph=None, retain_variables=None)[source]

Computes the gradient of current variable w.r.t. graph leaves.

The graph is differentiated using the chain rule. If the variable is non-scalar (i.e. its data has more than one element) and requires gradient, the function additionally requires specifying gradient. It should be a tensor of matching type and location that contains the gradient of the differentiated function w.r.t. self.

This function accumulates gradients in the leaves - you might need to zero them before calling it.

Parameters:
  • gradient (Tensor, Variable or None) – Gradient w.r.t. the variable. If it is a tensor, it will be automatically converted to a Variable that is volatile unless create_graph is True. None values can be specified for scalar Variables or ones that don’t require grad. If a None value would be acceptable then this argument is optional.
  • retain_graph (bool, optional) – If False, the graph used to compute the grads will be freed. Note that in nearly all cases setting this option to True is not needed and often can be worked around in a much more efficient way. Defaults to the value of create_graph.
  • create_graph (bool, optional) – If True, graph of the derivative will be constructed, allowing higher order derivative products to be computed. Defaults to False, unless gradient is a volatile Variable.
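
A sketch illustrating how gradients accumulate in the leaves across calls (names are illustrative):

>>> import torch
>>> from torch.autograd import Variable
>>> x = Variable(torch.ones(2), requires_grad=True)
>>> loss = (x * x).sum()       # scalar, so no gradient argument is needed
>>> loss.backward()            # x.grad now holds 2 * x
>>> x.grad.data.zero_()        # zero the leaf gradient before reusing x
>>> y = x * 3                  # non-scalar: an explicit gradient is required
>>> y.backward(torch.ones(2))  # x.grad now holds all threes
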
detach()[source]

Returns a new Variable, detached from the current graph.

Result will never require gradient. If the input is volatile, the output will be volatile too.

Note

Returned Variable uses the same data tensor as the original one, and in-place modifications on either of them will be seen, and may trigger errors in correctness checks.
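
A short sketch:

>>> import torch
>>> from torch.autograd import Variable
>>> x = Variable(torch.ones(3), requires_grad=True)
>>> y = (x * 2).detach()  # shares data with x * 2, but has no history
>>> y.requires_grad
False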

detach_()[source]

Detaches the Variable from the graph that created it, making it a leaf.

register_hook(hook)[source]

Registers a backward hook.

The hook will be called every time a gradient with respect to the variable is computed. The hook should have the following signature:

hook(grad) -> Variable or None

The hook should not modify its argument, but it can optionally return a new gradient which will be used in place of grad.

This function returns a handle with a method handle.remove() that removes the hook from the variable.

Example

>>> v = Variable(torch.Tensor([0, 0, 0]), requires_grad=True)
>>> h = v.register_hook(lambda grad: grad * 2)  # double the gradient
>>> v.backward(torch.Tensor([1, 1, 1]))
>>> v.grad.data
 2
 2
 2
[torch.FloatTensor of size 3]
>>> h.remove()  # removes the hook
reinforce(reward)[source]

Registers a reward obtained as a result of a stochastic process.

Differentiating stochastic nodes requires providing them with a reward value. If your graph contains any stochastic operations, you should call this function on their outputs. Otherwise an error will be raised.

Parameters: reward (Tensor) – Tensor with per-element rewards. It has to match the device location and shape of the Variable’s data.
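
A minimal sketch of the stochastic-node workflow, modeled on the REINFORCE pattern (the probabilities and reward values are illustrative):

>>> import torch
>>> from torch.autograd import Variable
>>> probs = Variable(torch.Tensor([0.3, 0.7]), requires_grad=True)
>>> action = probs.multinomial()           # stochastic node: samples an index
>>> action.reinforce(torch.Tensor([1.0]))  # reward matching action's shape
>>> torch.autograd.backward([action], [None])  # estimates probs.grad
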
retain_grad()[source]

Enables .grad attribute for non-leaf Variables.
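
A short sketch; without the retain_grad() call, y.grad would remain unallocated:

>>> import torch
>>> from torch.autograd import Variable
>>> x = Variable(torch.ones(3), requires_grad=True)
>>> y = x * 2
>>> y.retain_grad()     # y is not a leaf, so .grad must be enabled explicitly
>>> y.sum().backward()
>>> y.grad.data         # all ones, available thanks to retain_grad()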

Function

class torch.autograd.Function[source]

Records operation history and defines formulas for differentiating ops.

Every operation performed on Variable s creates a new function object that performs the computation and records that it happened. The history is retained in the form of a DAG of functions, with edges denoting data dependencies (input <- output). Then, when backward is called, the graph is processed in topological order, by calling the backward() methods of each Function object and passing the returned gradients on to the next Function s.

Normally, the only way users interact with functions is by creating subclasses and defining new operations. This is the recommended way of extending torch.autograd.

Since Function logic is a hotspot in most scripts, almost all of it was moved to our C backend, to ensure that the framework overhead is minimal.

Each function is meant to be used only once (in the forward pass).

Variables:
  • saved_tensors – Tuple of Tensors that were saved in the call to forward().
  • saved_variables – Tuple of Variables that correspond to the tensors saved in the call to forward().
  • needs_input_grad – Tuple of booleans of length num_inputs, indicating whether a given input requires gradient. This can be used to optimize buffers saved for backward, and to skip gradient computation in backward().
  • num_inputs – Number of inputs given to forward().
  • num_outputs – Number of tensors returned by forward().
  • requires_grad – Boolean indicating whether the backward() will ever need to be called.
static backward(*grad_outputs)[source]

Defines a formula for differentiating the operation.

This function is to be overridden by all subclasses.

All arguments are tensors. It has to accept exactly as many arguments as forward() returned outputs, and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input.

static forward(*args, **kwargs)[source]

Performs the operation.

This function is to be overridden by all subclasses.

It can take and return an arbitrary number of tensors.
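
A minimal sketch of a custom operation, assuming the context-object (ctx) convention for the static forward() and backward() methods; Exp is an illustrative name, not a built-in:

>>> import torch
>>> from torch.autograd import Function, Variable
>>> class Exp(Function):
...     @staticmethod
...     def forward(ctx, i):
...         result = i.exp()
...         ctx.save_for_backward(result)  # d/dx exp(x) = exp(x)
...         return result
...     @staticmethod
...     def backward(ctx, grad_output):
...         result, = ctx.saved_variables  # saved tensor, wrapped as a Variable
...         return grad_output * result
...
>>> x = Variable(torch.randn(3), requires_grad=True)
>>> y = Exp.apply(x)           # records the op in the graph
>>> y.backward(torch.ones(3))  # x.grad is now exp(x)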