torchvision.transforms

class torchvision.transforms.Compose(transforms)

Composes several transforms together.

Parameters: transforms (List[Transform]) – list of transforms to compose.

Example

>>> transforms.Compose([
...     transforms.CenterCrop(10),
...     transforms.ToTensor(),
... ])

Transforms on PIL.Image

class torchvision.transforms.Scale(size, interpolation=2)

Rescales the input PIL.Image so that its smaller edge matches the given size, preserving the aspect ratio. For example, if height > width, the image will be rescaled to (size * height / width, size). size: size of the smaller edge. interpolation: Default: PIL.Image.BILINEAR
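
Example (a minimal sketch; 'img.jpg' is a placeholder path):

>>> from PIL import Image
>>> from torchvision import transforms
>>> img = Image.open('img.jpg')          # placeholder input image
>>> out = transforms.Scale(256)(img)     # smaller edge rescaled to 256, aspect ratio preserved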

class torchvision.transforms.CenterCrop(size)

Crops the given PIL.Image at the center to produce a region of the given size. size can be a tuple (target_height, target_width) or an integer, in which case the target will be a square of shape (size, size).
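
Example (a minimal sketch; 'img.jpg' is a placeholder path):

>>> from PIL import Image
>>> from torchvision import transforms
>>> img = Image.open('img.jpg')
>>> out = transforms.CenterCrop(100)(img)   # the central 100 x 100 region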

class torchvision.transforms.RandomCrop(size, padding=0)

Crops the given PIL.Image at a random location to produce a region of the given size. size can be a tuple (target_height, target_width) or an integer, in which case the target will be a square of shape (size, size). padding: number of pixels of zero padding added to each border before cropping. Default: 0
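
Example (a minimal sketch; 'img.jpg' is a placeholder path):

>>> from PIL import Image
>>> from torchvision import transforms
>>> img = Image.open('img.jpg')
>>> out = transforms.RandomCrop(100, padding=4)(img)   # zero-pad 4 px per side, then crop 100 x 100 at random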

class torchvision.transforms.RandomHorizontalFlip

Horizontally flips the given PIL.Image with a probability of 0.5.
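
Example (a minimal sketch; 'img.jpg' is a placeholder path):

>>> from PIL import Image
>>> from torchvision import transforms
>>> img = Image.open('img.jpg')
>>> out = transforms.RandomHorizontalFlip()(img)   # mirrored with probability 0.5, unchanged otherwise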

class torchvision.transforms.RandomSizedCrop(size, interpolation=2)

Crops the given PIL.Image to a random size (0.08 to 1.0 of the original size) and a random aspect ratio (3/4 to 4/3 of the original aspect ratio), then rescales the crop to the given size. This is popularly used to train the Inception networks. size: expected output size of each edge. interpolation: Default: PIL.Image.BILINEAR
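
Example (a minimal sketch; 224 is the typical Inception-style input size, and 'img.jpg' is a placeholder path):

>>> from PIL import Image
>>> from torchvision import transforms
>>> img = Image.open('img.jpg')
>>> out = transforms.RandomSizedCrop(224)(img)   # random area/aspect crop, rescaled to 224 x 224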

class torchvision.transforms.Pad(padding, fill=0)

Pads the given PIL.Image on all sides with the given fill value. padding: number of pixels to pad on each border. fill: pixel fill value. Default: 0
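
Example (a minimal sketch; 'img.jpg' is a placeholder path):

>>> from PIL import Image
>>> from torchvision import transforms
>>> img = Image.open('img.jpg')
>>> out = transforms.Pad(10, fill=0)(img)   # adds a 10-pixel border of value 0 on every side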

Transforms on torch.*Tensor

class torchvision.transforms.Normalize(mean, std)

Given mean: (R, G, B) and std: (R, G, B), will normalize each channel of the torch.*Tensor, i.e. channel = (channel - mean) / std
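
Example (a minimal sketch of the per-channel arithmetic; the mean/std values are placeholders, not real dataset statistics):

>>> import torch
>>> from torchvision import transforms
>>> normalize = transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5))
>>> t = torch.zeros(3, 4, 4)   # a black 3-channel image tensor in [0, 1]
>>> out = normalize(t)         # every value becomes (0.0 - 0.5) / 0.5 = -1.0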

Conversion Transforms

class torchvision.transforms.ToTensor

Converts a PIL.Image or numpy.ndarray (H x W x C) in the range [0, 255] to a torch.FloatTensor of shape (C x H x W) in the range [0.0, 1.0].
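
Example (a minimal sketch using a synthetic array):

>>> import numpy as np
>>> from torchvision import transforms
>>> arr = np.zeros((32, 32, 3), dtype=np.uint8)   # H x W x C, values in [0, 255]
>>> t = transforms.ToTensor()(arr)                # FloatTensor of shape (3, 32, 32), values in [0.0, 1.0]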

class torchvision.transforms.ToPILImage

Converts a torch.*Tensor of shape C x H x W or a numpy.ndarray of shape H x W x C to a PIL.Image while preserving the value range.
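
Example (a minimal sketch using a random tensor):

>>> import torch
>>> from torchvision import transforms
>>> t = torch.rand(3, 64, 64)           # C x H x W float tensor in [0, 1]
>>> pil = transforms.ToPILImage()(t)    # an RGB PIL.Image
>>> w, h = pil.size                     # both 64; PIL reports size as (width, height)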

Generic Transforms

class torchvision.transforms.Lambda(lambd)

Applies a lambda as a transform.
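
Example (a minimal sketch; the grayscale conversion is just an arbitrary user-defined function, and 'img.jpg' is a placeholder path):

>>> from PIL import Image
>>> from torchvision import transforms
>>> img = Image.open('img.jpg')
>>> to_gray = transforms.Lambda(lambda im: im.convert('L'))   # wrap any callable as a transform
>>> out = to_gray(img)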