StepLR
- class torch.optim.lr_scheduler.StepLR(optimizer, step_size, gamma=0.1, last_epoch=-1, verbose=False)
- Decays the learning rate of each parameter group by gamma every step_size epochs. Notice that such decay can happen simultaneously with other changes to the learning rate from outside this scheduler. When last_epoch=-1, sets initial lr as lr.
- Parameters:
- optimizer (Optimizer) – Wrapped optimizer. 
- step_size (int) – Period of learning rate decay. 
- gamma (float) – Multiplicative factor of learning rate decay. Default: 0.1. 
- last_epoch (int) – The index of last epoch. Default: -1. 
- verbose (bool) – If True, prints a message to stdout for each update. Default: False.
 
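In closed form, the decayed rate at a given epoch is the initial rate multiplied by gamma once per completed period, i.e. initial_lr * gamma ** (epoch // step_size). The snippet below is an illustrative sketch (not part of the API) that reproduces the values in the example that follows:

>>> initial_lr, gamma, step_size = 0.05, 0.1, 30
>>> [round(initial_lr * gamma ** (epoch // step_size), 6) for epoch in (0, 29, 30, 59, 60)]
[0.05, 0.05, 0.005, 0.005, 0.0005]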
- Example

>>> # Assuming optimizer uses lr = 0.05 for all groups
>>> # lr = 0.05     if epoch < 30
>>> # lr = 0.005    if 30 <= epoch < 60
>>> # lr = 0.0005   if 60 <= epoch < 90
>>> # ...
>>> scheduler = StepLR(optimizer, step_size=30, gamma=0.1)
>>> for epoch in range(100):
>>>     train(...)
>>>     validate(...)
>>>     scheduler.step()

- get_last_lr()
- Return the last computed learning rate by the current scheduler.
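A usage sketch (the print format is illustrative): get_last_lr() returns a list with one entry per parameter group, so it can be logged inside the training loop after each step() call.

>>> scheduler = StepLR(optimizer, step_size=30, gamma=0.1)
>>> for epoch in range(100):
>>>     train(...)
>>>     scheduler.step()
>>>     print(f"epoch {epoch}: lr {scheduler.get_last_lr()}")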
- load_state_dict(state_dict)
- Loads the scheduler's state.
- Parameters:
- state_dict (dict) – scheduler state. Should be an object returned from a call to state_dict().
 
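A minimal checkpointing sketch, assuming the scheduler is saved and restored together with its optimizer (the file name is illustrative):

>>> # Save scheduler state alongside the optimizer state
>>> torch.save({"optimizer": optimizer.state_dict(),
>>>             "scheduler": scheduler.state_dict()}, "checkpoint.pt")
>>> # ... later, after recreating the optimizer and scheduler ...
>>> checkpoint = torch.load("checkpoint.pt")
>>> optimizer.load_state_dict(checkpoint["optimizer"])
>>> scheduler.load_state_dict(checkpoint["scheduler"])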
- print_lr(is_verbose, group, lr, epoch=None)
- Display the current learning rate.
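This is normally invoked internally when the scheduler was constructed with verbose=True; a direct call (with purely illustrative argument values) looks like:

>>> # Report that group 0 was adjusted to lr 0.005 at epoch 30
>>> scheduler.print_lr(True, 0, 0.005, epoch=30)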