torch.optim.adagrad.adagrad
- torch.optim.adagrad.adagrad(params, grads, state_sums, state_steps, fused=None, grad_scale=None, found_inf=None, has_sparse_grad=False, foreach=None, differentiable=False, has_complex=False, *, lr, weight_decay, lr_decay, eps, maximize)
Functional API that performs the Adagrad algorithm computation.
See Adagrad for details.
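
A minimal sketch of calling this functional API directly on a single parameter; the tensor values and hyperparameters below are illustrative, not prescribed by this page. Because the update mutates the parameter in place, the call is wrapped in torch.no_grad(), mirroring what Optimizer.step() does internally.

```python
import torch
from torch.optim.adagrad import adagrad

# A toy parameter with a gradient, as it would look after loss.backward().
param = torch.randn(3, requires_grad=True)
param.grad = torch.randn(3)

# Per-parameter Adagrad state: running sum of squared gradients and a step counter.
state_sum = torch.zeros_like(param)
step = torch.tensor(0.0)

# One in-place Adagrad step over a single-element parameter list. The update
# writes into param, so it runs under no_grad, as Optimizer.step() would.
with torch.no_grad():
    adagrad(
        [param],
        [param.grad],
        [state_sum],
        [step],
        lr=0.01,
        weight_decay=0.0,
        lr_decay=0.0,
        eps=1e-10,
        maximize=False,
    )
```

In practice this API is what torch.optim.Adagrad.step() dispatches to; calling it yourself is mainly useful when you manage the optimizer state (state_sums, state_steps) externally.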