Adadelta
- class torch.optim.Adadelta(params, lr=1.0, rho=0.9, eps=1e-06, weight_decay=0, foreach=None, *, maximize=False)[source]
- Implements the Adadelta algorithm. For further details regarding the algorithm we refer to ADADELTA: An Adaptive Learning Rate Method. A minimal usage sketch follows the parameter list below.
- Parameters:
- params (iterable) – iterable of parameters to optimize or dicts defining parameter groups 
- rho (float, optional) – coefficient used for computing a running average of squared gradients (default: 0.9) 
- eps (float, optional) – term added to the denominator to improve numerical stability (default: 1e-6) 
- lr (float, optional) – coefficient that scales delta before it is applied to the parameters (default: 1.0) 
- weight_decay (float, optional) – weight decay (L2 penalty) (default: 0) 
- foreach (bool, optional) – whether the foreach implementation of the optimizer is used (default: None) 
- maximize (bool, optional) – maximize the params based on the objective, instead of minimizing (default: False) 
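For orientation, a minimal usage sketch; the linear model, batch shapes, and MSE loss are illustrative placeholders, not part of the API above.

```python
import torch
import torch.nn as nn

# Toy model and dummy batch; shapes are placeholders for illustration only.
model = nn.Linear(10, 2)
inputs, targets = torch.randn(8, 10), torch.randn(8, 2)

optimizer = torch.optim.Adadelta(model.parameters(), lr=1.0, rho=0.9, eps=1e-6)
criterion = nn.MSELoss()

optimizer.zero_grad()                     # clear old gradients
loss = criterion(model(inputs), targets)  # forward pass
loss.backward()                           # compute gradients
optimizer.step()                          # apply the Adadelta update
```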
 
 - add_param_group(param_group)
- Add a param group to the Optimizer's param_groups.
- This can be useful when fine-tuning a pre-trained network, as frozen layers can be made trainable and added to the Optimizer as training progresses (see the sketch below).
- Parameters:
- param_group (dict) – Specifies what Tensors should be optimized along with group specific optimization options. 
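A sketch of the fine-tuning pattern described above, assuming a hypothetical two-part model (backbone and head); the module sizes and the group-specific learning rate are chosen purely for illustration.

```python
import torch
import torch.nn as nn

# Hypothetical pre-trained backbone and a fresh head; both are placeholders.
backbone = nn.Linear(10, 10)
head = nn.Linear(10, 2)

# Freeze the backbone and optimize only the head at first.
for p in backbone.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adadelta(head.parameters(), lr=1.0)

# Later in training: unfreeze the backbone and register it as a new param group,
# here with its own (illustrative) group-specific options.
for p in backbone.parameters():
    p.requires_grad = True
optimizer.add_param_group({"params": backbone.parameters(), "lr": 0.1})
```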
 
 - load_state_dict(state_dict)
- Loads the optimizer state (see the checkpointing sketch below).
- Parameters:
- state_dict (dict) – optimizer state. Should be an object returned from a call to state_dict().
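A checkpointing sketch, as referenced above; the file name "checkpoint.pt" and the toy model are placeholders.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
optimizer = torch.optim.Adadelta(model.parameters())

# Save model and optimizer state together (file name is a placeholder).
torch.save({"model": model.state_dict(), "optim": optimizer.state_dict()}, "checkpoint.pt")

# Later: rebuild the same objects, then restore their states in place.
model = nn.Linear(10, 2)
optimizer = torch.optim.Adadelta(model.parameters())
checkpoint = torch.load("checkpoint.pt")
model.load_state_dict(checkpoint["model"])
optimizer.load_state_dict(checkpoint["optim"])
```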
 
 - state_dict()
- Returns the state of the optimizer as a dict. It contains two entries (an inspection sketch follows below):
- state - a dict holding current optimization state. Its content differs between optimizer classes.
- param_groups - a list containing all parameter groups where each parameter group is a dict.
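A small inspection sketch; the toy model is a placeholder, and the per-parameter keys named in the comments are the buffers I would expect Adadelta to store once a step has populated the state.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
optimizer = torch.optim.Adadelta(model.parameters())

# One backward/step so that per-parameter state gets populated.
model(torch.randn(4, 10)).sum().backward()
optimizer.step()

sd = optimizer.state_dict()
print(list(sd.keys()))               # ['state', 'param_groups']
print(sd["param_groups"][0]["rho"])  # group options such as lr, rho, eps, weight_decay
print(list(sd["state"][0].keys()))   # per-parameter buffers, e.g. 'square_avg', 'acc_delta'
```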
 
 
 - step(closure=None)[source]
- Performs a single optimization step (see the closure sketch below).
- Parameters:
- closure (Callable, optional) – A closure that reevaluates the model and returns the loss. 
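A sketch of the closure form; Adadelta does not require a closure, but step accepts one, and the model, loss, and data here are illustrative placeholders.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
criterion = nn.MSELoss()
optimizer = torch.optim.Adadelta(model.parameters())
inputs, targets = torch.randn(8, 10), torch.randn(8, 2)

def closure():
    # Re-evaluates the model and returns the loss, as step(closure) expects.
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    return loss

loss = optimizer.step(closure)
```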
 
 - zero_grad(set_to_none=False)
- Sets the gradients of all optimized torch.Tensor objects to zero.
- Parameters:
- set_to_none (bool) – instead of setting to zero, set the grads to None. This will in general have a lower memory footprint, and can modestly improve performance. However, it changes certain behaviors (see the sketch below). For example:
  1. When the user tries to access a gradient and perform manual ops on it, a None attribute or a Tensor full of 0s will behave differently.
  2. If the user requests zero_grad(set_to_none=True) followed by a backward pass, .grad attributes are guaranteed to be None for params that did not receive a gradient.
  3. torch.optim optimizers behave differently depending on whether the gradient is 0 or None (in one case the step is taken with a gradient of 0, and in the other the step is skipped altogether).
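A sketch contrasting the two modes; the toy model is a placeholder, and the comments describe the .grad values expected under this signature's default of set_to_none=False.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
optimizer = torch.optim.Adadelta(model.parameters())

model(torch.randn(4, 10)).sum().backward()
optimizer.zero_grad()                  # default here: grads become zero-filled tensors
print(model.weight.grad)               # tensor of zeros

model(torch.randn(4, 10)).sum().backward()
optimizer.zero_grad(set_to_none=True)  # grads are released and read back as None
print(model.weight.grad)               # None
```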