
Training with PyTorch

Created On: Nov 30, 2021 | Last Updated: May 31, 2023 | Last Verified: Nov 05, 2024

Follow along with the video below or on YouTube.

Introduction

In past videos, we’ve discussed and demonstrated:

  • Building models with the neural network layers and functions of the torch.nn module

  • The mechanics of automated gradient computation, which is central to gradient-based model training

  • Using TensorBoard to visualize training progress and other activities

In this video, we’ll be adding some new tools to your inventory:

  • We’ll get familiar with the dataset and dataloader abstractions, and how they ease the process of feeding data to your model during a training loop

  • We’ll discuss specific loss functions and when to use them

  • We’ll look at PyTorch optimizers, which implement algorithms to adjust model weights based on the outcome of a loss function

Finally, we’ll pull all of these together and see a full PyTorch training loop in action.

Dataset and DataLoader

The Dataset and DataLoader classes encapsulate the process of pulling your data from storage and exposing it to your training loop in batches.

The Dataset is responsible for accessing and processing single instances of data.

The DataLoader pulls instances of data from the Dataset (either automatically or with a sampler that you define), collects them in batches, and returns them for consumption by your training loop. The DataLoader works with all kinds of datasets, regardless of the type of data they contain.
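
Any map-style Dataset only needs to implement __len__ and __getitem__; the DataLoader handles batching, shuffling, and iteration on top of that. Below is a minimal sketch of the protocol (the class and the random tensors are hypothetical, not part of this tutorial's code):

import torch
from torch.utils.data import Dataset

class TensorPairDataset(Dataset):
    # Hypothetical Dataset wrapping pre-loaded input and label tensors
    def __init__(self, inputs, labels):
        self.inputs = inputs
        self.labels = labels

    def __len__(self):
        # Total number of individual instances
        return len(self.inputs)

    def __getitem__(self, idx):
        # Return one (input, label) pair; batching is the DataLoader's job
        return self.inputs[idx], self.labels[idx]

ds = TensorPairDataset(torch.randn(100, 1, 28, 28), torch.randint(0, 10, (100,)))
loader = torch.utils.data.DataLoader(ds, batch_size=4, shuffle=True)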

For this tutorial, we'll be using the Fashion-MNIST dataset provided by TorchVision. We use torchvision.transforms.Normalize() to zero-center and normalize the distribution of the image tensor values, and we download both the training and validation data splits.

import torch
import torchvision
import torchvision.transforms as transforms

# PyTorch TensorBoard support
from torch.utils.tensorboard import SummaryWriter
from datetime import datetime


transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5,), (0.5,))])

# Create datasets for training & validation, download if necessary
training_set = torchvision.datasets.FashionMNIST('./data', train=True, transform=transform, download=True)
validation_set = torchvision.datasets.FashionMNIST('./data', train=False, transform=transform, download=True)

# Create data loaders for our datasets; shuffle for training, not for validation
training_loader = torch.utils.data.DataLoader(training_set, batch_size=4, shuffle=True)
validation_loader = torch.utils.data.DataLoader(validation_set, batch_size=4, shuffle=False)

# Class labels
classes = ('T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
           'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle Boot')

# Report split sizes
print('Training set has {} instances'.format(len(training_set)))
print('Validation set has {} instances'.format(len(validation_set)))
Training set has 60000 instances
Validation set has 10000 instances

As always, let’s visualize the data as a sanity check:

import matplotlib.pyplot as plt
import numpy as np

# Helper function for inline image display
def matplotlib_imshow(img, one_channel=False):
    if one_channel:
        img = img.mean(dim=0)
    img = img / 2 + 0.5     # unnormalize
    npimg = img.numpy()
    if one_channel:
        plt.imshow(npimg, cmap="Greys")
    else:
        plt.imshow(np.transpose(npimg, (1, 2, 0)))

dataiter = iter(training_loader)
images, labels = next(dataiter)

# Create a grid from the images and show them
img_grid = torchvision.utils.make_grid(images)
matplotlib_imshow(img_grid, one_channel=True)
print('  '.join(classes[labels[j]] for j in range(4)))
[Image: a grid of four sample images from the training set]
Pullover  Shirt  T-shirt/top  Pullover

The Model

The model we’ll use in this example is a variant of LeNet-5 - it should be familiar if you’ve watched the previous videos in this series.

import torch.nn as nn
import torch.nn.functional as F

# PyTorch models inherit from torch.nn.Module
class GarmentClassifier(nn.Module):
    def __init__(self):
        super(GarmentClassifier, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 4 * 4, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 4 * 4)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x


model = GarmentClassifier()
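
As a quick sanity check (not part of the original tutorial), we can push a dummy batch through the untrained model and confirm the output shape. Fashion-MNIST images are 1x28x28, and we expect one score per class:

# Dummy batch of four single-channel 28x28 images
dummy_batch = torch.rand(4, 1, 28, 28)
with torch.no_grad():
    dummy_scores = model(dummy_batch)
print(dummy_scores.shape)  # torch.Size([4, 10])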

Loss Function

For this example, we’ll be using a cross-entropy loss. For demonstration purposes, we’ll create batches of dummy output and label values, run them through the loss function, and examine the result.

loss_fn = torch.nn.CrossEntropyLoss()

# NB: Loss functions expect data in batches, so we're creating batches of 4
# Represents the model's confidence in each of the 10 classes for a given input
dummy_outputs = torch.rand(4, 10)
# Represents the correct class among the 10 being tested
dummy_labels = torch.tensor([1, 5, 3, 7])

print(dummy_outputs)
print(dummy_labels)

loss = loss_fn(dummy_outputs, dummy_labels)
print('Total loss for this batch: {}'.format(loss.item()))
tensor([[0.9591, 0.6155, 0.3836, 0.9017, 0.2801, 0.0155, 0.7058, 0.6998, 0.3721,
         0.1306],
        [0.2195, 0.6004, 0.2857, 0.0715, 0.9724, 0.4519, 0.6233, 0.2529, 0.3019,
         0.0661],
        [0.1652, 0.4119, 0.4370, 0.7732, 0.5536, 0.2109, 0.8750, 0.5931, 0.8990,
         0.7475],
        [0.0237, 0.1560, 0.7129, 0.1450, 0.6850, 0.7162, 0.0809, 0.1327, 0.7504,
         0.5032]])
tensor([1, 5, 3, 7])
Total loss for this batch: 2.31016206741333
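
Note that torch.nn.CrossEntropyLoss expects raw, unnormalized scores (logits) rather than probabilities; it applies log-softmax internally. As a verification sketch (not part of the original tutorial), we can reproduce the value by hand:

# Manual cross-entropy: log-softmax the scores, pick out the
# log-probability of the correct class for each instance, and
# average the negatives
log_probs = F.log_softmax(dummy_outputs, dim=1)
manual_loss = -log_probs[torch.arange(4), dummy_labels].mean()
print(manual_loss.item())  # should match loss.item() above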

Optimizer

For this example, we’ll be using simple stochastic gradient descent with momentum.

It can be instructive to try some variations on this optimization scheme:

  • Learning rate determines the size of the steps the optimizer takes. What does a different learning rate do to your training results, in terms of accuracy and convergence time?

  • Momentum nudges the optimizer in the direction of strongest gradient over multiple steps. What does changing this value do to your results?

  • Try some different optimization algorithms, such as averaged SGD, Adagrad, or Adam - example constructions are sketched after the code below. How do your results differ?

# Optimizers specified in the torch.optim package
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
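
For reference, drop-in constructions for the alternatives mentioned above might look like the following (a sketch; the hyperparameter values are illustrative, not tuned for this task):

# Illustrative alternatives - swap in one at a time and compare results
# optimizer = torch.optim.ASGD(model.parameters(), lr=0.001)     # averaged SGD
# optimizer = torch.optim.Adagrad(model.parameters(), lr=0.01)
# optimizer = torch.optim.Adam(model.parameters(), lr=0.001)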

The Training Loop

Below, we have a function that performs one training epoch. It enumerates data from the DataLoader, and on each pass of the loop does the following:

  • Gets a batch of training data from the DataLoader

  • Zeros the optimizer’s gradients

  • Performs an inference - that is, gets predictions from the model for an input batch

  • Calculates the loss for that set of predictions vs. the labels on the dataset

  • Calculates the backward gradients over the learning weights

  • Tells the optimizer to perform one learning step - that is, adjust the model’s learning weights based on the observed gradients for this batch, according to the optimization algorithm we chose

  • Reports the loss every 1000 batches

  • Finally, reports the average per-batch loss for the last 1000 batches, for comparison with a validation run

def train_one_epoch(epoch_index, tb_writer):
    running_loss = 0.
    last_loss = 0.

    # Here, we use enumerate(training_loader) instead of
    # iter(training_loader) so that we can track the batch
    # index and do some intra-epoch reporting
    for i, data in enumerate(training_loader):
        # Every data instance is an input + label pair
        inputs, labels = data

        # Zero your gradients for every batch!
        optimizer.zero_grad()

        # Make predictions for this batch
        outputs = model(inputs)

        # Compute the loss and its gradients
        loss = loss_fn(outputs, labels)
        loss.backward()

        # Adjust learning weights
        optimizer.step()

        # Gather data and report
        running_loss += loss.item()
        if i % 1000 == 999:
            last_loss = running_loss / 1000 # loss per batch
            print('  batch {} loss: {}'.format(i + 1, last_loss))
            tb_x = epoch_index * len(training_loader) + i + 1
            tb_writer.add_scalar('Loss/train', last_loss, tb_x)
            running_loss = 0.

    return last_loss

Per-Epoch Activity

There are a couple of things we’ll want to do once per epoch:

  • Perform validation by checking our relative loss on a set of data that was not used for training, and report this

  • Save a copy of the model

Here, we’ll do our reporting in TensorBoard. This will require going to the command line to start TensorBoard, and opening it in another browser tab.
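
If TensorBoard is installed, a typical invocation (an assumption about your environment, not shown in this tutorial) is:

tensorboard --logdir=runs

Run this from the directory containing the runs/ folder created below, then open the URL it prints (http://localhost:6006 by default) in another browser tab.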

# Initializing in a separate cell so we can easily add more epochs to the same run
timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
writer = SummaryWriter('runs/fashion_trainer_{}'.format(timestamp))
epoch_number = 0

EPOCHS = 5

best_vloss = 1_000_000.

for epoch in range(EPOCHS):
    print('EPOCH {}:'.format(epoch_number + 1))

    # Make sure gradient tracking is on, and do a pass over the data
    model.train(True)
    avg_loss = train_one_epoch(epoch_number, writer)


    running_vloss = 0.0
    # Set the model to evaluation mode, disabling dropout and using population
    # statistics for batch normalization.
    model.eval()

    # Disable gradient computation and reduce memory consumption.
    with torch.no_grad():
        for i, vdata in enumerate(validation_loader):
            vinputs, vlabels = vdata
            voutputs = model(vinputs)
            vloss = loss_fn(voutputs, vlabels)
            running_vloss += vloss

    avg_vloss = running_vloss / (i + 1)
    print('LOSS train {} valid {}'.format(avg_loss, avg_vloss))

    # Log the running loss averaged per batch
    # for both training and validation
    writer.add_scalars('Training vs. Validation Loss',
                    { 'Training' : avg_loss, 'Validation' : avg_vloss },
                    epoch_number + 1)
    writer.flush()

    # Track best performance, and save the model's state
    if avg_vloss < best_vloss:
        best_vloss = avg_vloss
        model_path = 'model_{}_{}'.format(timestamp, epoch_number)
        torch.save(model.state_dict(), model_path)

    epoch_number += 1
EPOCH 1:
  batch 1000 loss: 1.7112672429084779
  batch 2000 loss: 0.8577006395570934
  batch 3000 loss: 0.7324311459679157
  batch 4000 loss: 0.6518343883473426
  batch 5000 loss: 0.5967925738897175
  batch 6000 loss: 0.5550905751540558
  batch 7000 loss: 0.5399707095120102
  batch 8000 loss: 0.510569079022971
  batch 9000 loss: 0.5011951463112492
  batch 10000 loss: 0.4978302592553664
  batch 11000 loss: 0.4549619335355237
  batch 12000 loss: 0.4542041786725167
  batch 13000 loss: 0.4330055938346195
  batch 14000 loss: 0.38813667908031496
  batch 15000 loss: 0.42151428227126597
LOSS train 0.42151428227126597 valid 0.41239458322525024
EPOCH 2:
  batch 1000 loss: 0.3999252210833074
  batch 2000 loss: 0.4133423487952241
  batch 3000 loss: 0.39105599092853666
  batch 4000 loss: 0.3798893722899084
  batch 5000 loss: 0.37811744438915046
  batch 6000 loss: 0.36990713225270155
  batch 7000 loss: 0.3648085614600568
  batch 8000 loss: 0.3812756339304324
  batch 9000 loss: 0.36063716649048727
  batch 10000 loss: 0.375656757060613
  batch 11000 loss: 0.36718515076479524
  batch 12000 loss: 0.3901891091627767
  batch 13000 loss: 0.34442503743298586
  batch 14000 loss: 0.35611958603558014
  batch 15000 loss: 0.368399496303231
LOSS train 0.368399496303231 valid 0.3523043096065521
EPOCH 3:
  batch 1000 loss: 0.3308938933504978
  batch 2000 loss: 0.33953966970246985
  batch 3000 loss: 0.34013472435545555
  batch 4000 loss: 0.3302302423509245
  batch 5000 loss: 0.34523740530032954
  batch 6000 loss: 0.3473767702369223
  batch 7000 loss: 0.3436835055643314
  batch 8000 loss: 0.32411279603144794
  batch 9000 loss: 0.32234897140963586
  batch 10000 loss: 0.31995433341810714
  batch 11000 loss: 0.3041407881287159
  batch 12000 loss: 0.3263021752410059
  batch 13000 loss: 0.2969634303740022
  batch 14000 loss: 0.32115704255946914
  batch 15000 loss: 0.3264363438347209
LOSS train 0.3264363438347209 valid 0.3565257787704468
EPOCH 4:
  batch 1000 loss: 0.2968000005021986
  batch 2000 loss: 0.3095523064374283
  batch 3000 loss: 0.30101679946084914
  batch 4000 loss: 0.31528814292627794
  batch 5000 loss: 0.3033160864501642
  batch 6000 loss: 0.31108621507365025
  batch 7000 loss: 0.3144443661635742
  batch 8000 loss: 0.31760865413093414
  batch 9000 loss: 0.29802479403747567
  batch 10000 loss: 0.3210136414517474
  batch 11000 loss: 0.3011195660521043
  batch 12000 loss: 0.2922065930677272
  batch 13000 loss: 0.2976533522809623
  batch 14000 loss: 0.2968676603698023
  batch 15000 loss: 0.29075112840639483
LOSS train 0.29075112840639483 valid 0.3270612061023712
EPOCH 5:
  batch 1000 loss: 0.27412157164561174
  batch 2000 loss: 0.28823659704350574
  batch 3000 loss: 0.29084581970689033
  batch 4000 loss: 0.28300159823927423
  batch 5000 loss: 0.2798697927295798
  batch 6000 loss: 0.3034453427876615
  batch 7000 loss: 0.26870096008038674
  batch 8000 loss: 0.2826998904711581
  batch 9000 loss: 0.2799305796687622
  batch 10000 loss: 0.2721546982230029
  batch 11000 loss: 0.29707919335272526
  batch 12000 loss: 0.2783425407634713
  batch 13000 loss: 0.30441510546845896
  batch 14000 loss: 0.2813333863238513
  batch 15000 loss: 0.2820604321137471
LOSS train 0.2820604321137471 valid 0.32663190364837646

To load a saved version of the model:

saved_model = GarmentClassifier()
saved_model.load_state_dict(torch.load(PATH))  # PATH is wherever you saved the state_dict, e.g. model_path above

Once you’ve loaded the model, it’s ready for whatever you need it for - more training, inference, or analysis.
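
For example, to run inference with the loaded model (a sketch reusing the validation loader from above; note the switch to evaluation mode and the disabled gradient tracking, as in the validation pass earlier):

saved_model.eval()
with torch.no_grad():
    inputs, labels = next(iter(validation_loader))
    predicted_classes = saved_model(inputs).argmax(dim=1)
print(predicted_classes)  # one predicted class index per input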

Note that if your model has constructor parameters that affect model structure, you’ll need to provide them and configure the model identically to the state in which it was saved.
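
For instance (a hypothetical sketch - GarmentClassifier itself takes no constructor arguments, so this applies to a parameterized variant):

# Hypothetical model whose structure depends on a constructor argument
class ConfigurableClassifier(nn.Module):
    def __init__(self, hidden_dim):
        super().__init__()
        self.fc = nn.Linear(28 * 28, hidden_dim)
        self.out = nn.Linear(hidden_dim, 10)

    def forward(self, x):
        return self.out(F.relu(self.fc(x.flatten(1))))

# If the weights were saved from a model built with hidden_dim=128, the
# model must be rebuilt with hidden_dim=128 before load_state_dict()
# will accept them
restored = ConfigurableClassifier(hidden_dim=128)
# restored.load_state_dict(torch.load('path/to/checkpoint'))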
