Cityscapes
- class torchvision.datasets.Cityscapes(root: Union[str, Path], split: str = 'train', mode: str = 'fine', target_type: Union[list[str], str] = 'instance', transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, transforms: Optional[Callable] = None)
Cityscapes Dataset.
- Parameters:
  - root (str or pathlib.Path) – Root directory of dataset where directory leftImg8bit and gtFine or gtCoarse are located.
  - split (string, optional) – The image split to use, train, test or val if mode="fine", otherwise train, train_extra or val.
  - mode (string, optional) – The quality mode to use, fine or coarse.
  - target_type (string or list, optional) – Type of target to use, instance, semantic, polygon or color. Can also be a list to output a tuple with all specified target types.
  - transform (callable, optional) – A function/transform that takes in a PIL image and returns a transformed version. E.g., transforms.RandomCrop.
  - target_transform (callable, optional) – A function/transform that takes in the target and transforms it.
  - transforms (callable, optional) – A function/transform that takes input sample and its target as entry and returns a transformed version.
Examples
Get semantic segmentation target
dataset = Cityscapes('./data/cityscapes', split='train', mode='fine', target_type='semantic')
img, smnt = dataset[0]
Get multiple targets
dataset = Cityscapes('./data/cityscapes', split='train', mode='fine', target_type=['instance', 'color', 'polygon'])
img, (inst, col, poly) = dataset[0]
Validate on the “coarse” set
dataset = Cityscapes('./data/cityscapes', split='val', mode='coarse', target_type='semantic')
img, smnt = dataset[0]