vit_l_32¶
- torchvision.models.vit_l_32(*, weights: Optional[ViT_L_32_Weights] = None, progress: bool = True, **kwargs: Any) → VisionTransformer [source]¶
- Constructs a vit_l_32 architecture from An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale.
- Parameters:
- weights (ViT_L_32_Weights, optional) – The pretrained weights to use. See ViT_L_32_Weights below for more details and possible values. By default, no pre-trained weights are used.
- progress (bool, optional) – If True, displays a progress bar of the download to stderr. Default is True. 
- **kwargs – parameters passed to the torchvision.models.vision_transformer.VisionTransformer base class. Please refer to the source code for more details about this class. A short usage sketch follows below.
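
A minimal usage sketch (not part of the original reference) showing how the builder and its weights parameter fit together; it assumes a torchvision version that provides the weights enum API (0.13 or later):

```python
from torchvision.models import vit_l_32, ViT_L_32_Weights

# Build the model with the default pretrained weights (IMAGENET1K_V1).
model = vit_l_32(weights=ViT_L_32_Weights.DEFAULT, progress=True)
model.eval()  # switch to inference mode

# Build an untrained model instead (weights=None is the default).
scratch = vit_l_32(weights=None)
```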
 
 - class torchvision.models.ViT_L_32_Weights(value)[source]¶
- The model builder above accepts the following values as the weights parameter. ViT_L_32_Weights.DEFAULT is equivalent to ViT_L_32_Weights.IMAGENET1K_V1. You can also use strings, e.g. weights='DEFAULT' or weights='IMAGENET1K_V1'.

ViT_L_32_Weights.IMAGENET1K_V1:

These weights were trained from scratch by using a modified version of DeIT's training recipe. Also available as ViT_L_32_Weights.DEFAULT.

- acc@1 (on ImageNet-1K): 76.972
- acc@5 (on ImageNet-1K): 93.07
- categories: tench, goldfish, great white shark, … (997 omitted)
- num_params: 306535400
- min_size: height=224, width=224
- recipe:
- GFLOPS: 15.38
- File size: 1169.4 MB

The inference transforms are available at ViT_L_32_Weights.IMAGENET1K_V1.transforms and perform the following preprocessing operations: Accepts PIL.Image, batched (B, C, H, W) and single (C, H, W) image torch.Tensor objects. The images are resized to resize_size=[256] using interpolation=InterpolationMode.BILINEAR, followed by a central crop of crop_size=[224]. Finally the values are first rescaled to [0.0, 1.0] and then normalized using mean=[0.485, 0.456, 0.406] and std=[0.229, 0.224, 0.225].
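
For illustration, a short end-to-end inference sketch using the bundled transforms and weight metadata. The image path "example.jpg" is a hypothetical placeholder, not something defined by the library:

```python
import torch
from PIL import Image
from torchvision.models import vit_l_32, ViT_L_32_Weights

weights = ViT_L_32_Weights.IMAGENET1K_V1
model = vit_l_32(weights=weights).eval()

# Bundled preprocessing: resize to 256 (bilinear), center-crop to 224,
# rescale to [0.0, 1.0], then normalize with the ImageNet mean/std.
preprocess = weights.transforms()

img = Image.open("example.jpg")          # hypothetical input image
batch = preprocess(img).unsqueeze(0)     # shape (1, 3, 224, 224)

with torch.no_grad():
    logits = model(batch)

probs = logits.softmax(dim=1)
class_id = probs.argmax(dim=1).item()
print(weights.meta["categories"][class_id], probs[0, class_id].item())
```

The class names printed at the end come from weights.meta["categories"], the same 1000 ImageNet-1K categories summarized in the table above.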