PyTorch documentation (0.2.0_1)

PyTorch is an optimized tensor library for deep learning using GPUs and CPUs.
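As a quick orientation before the contents below, here is a minimal sketch of the tensor-plus-autograd workflow these docs describe. It assumes the 0.2-era API shown on this page, where a Variable wraps a Tensor to record operation history (in later releases Variable was merged into Tensor); shapes and names are illustrative only.

    import torch
    from torch.autograd import Variable

    # Variables wrap Tensors and record the history needed by backward().
    x = Variable(torch.randn(3, 4), requires_grad=True)
    w = Variable(torch.randn(4, 2), requires_grad=True)

    loss = x.mm(w).sum()   # the graph is built as operations execute
    loss.backward()        # gradients land in x.grad and w.grad

    if torch.cuda.is_available():
        x = x.cuda()       # the same code moves to the GPU via .cuda()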

Notes

  • Autograd mechanics
    • Excluding subgraphs from backward
      • requires_grad
      • volatile
    • How autograd encodes the history
    • In-place operations on Variables
    • In-place correctness checks
  • Broadcasting semantics
    • General semantics
    • In-place semantics
    • Backwards compatibility
  • CUDA semantics
    • Best practices
      • Use pinned memory buffers
      • Use nn.DataParallel instead of multiprocessing
  • Extending PyTorch
    • Extending torch.autograd
    • Extending torch.nn
      • Adding a Module
    • Writing custom C extensions
  • Multiprocessing best practices
    • Sharing CUDA tensors
    • Best practices and tips
      • Avoiding and fighting deadlocks
      • Reuse buffers passed through a Queue
      • Asynchronous multiprocess training (e.g. Hogwild)
        • Hogwild
  • Serialization semantics
    • Best practices
      • Recommended approach for saving a model

Package Reference

  • torch
    • Tensors
      • Creation Ops
      • Indexing, Slicing, Joining, Mutating Ops
    • Random sampling
    • Serialization
    • Parallelism
    • Math operations
      • Pointwise Ops
      • Reduction Ops
      • Comparison Ops
      • Other Operations
      • BLAS and LAPACK Operations
  • torch.Tensor
  • torch.sparse
  • torch.Storage
  • torch.nn
    • Parameters
    • Containers
      • Module
      • Sequential
      • ModuleList
      • ParameterList
    • Convolution Layers
      • Conv1d
      • Conv2d
      • Conv3d
      • ConvTranspose1d
      • ConvTranspose2d
      • ConvTranspose3d
    • Pooling Layers
      • MaxPool1d
      • MaxPool2d
      • MaxPool3d
      • MaxUnpool1d
      • MaxUnpool2d
      • MaxUnpool3d
      • AvgPool1d
      • AvgPool2d
      • AvgPool3d
      • FractionalMaxPool2d
      • LPPool2d
      • AdaptiveMaxPool1d
      • AdaptiveMaxPool2d
      • AdaptiveAvgPool1d
      • AdaptiveAvgPool2d
    • Padding Layers
      • ReflectionPad2d
      • ReplicationPad2d
      • ReplicationPad3d
      • ZeroPad2d
      • ConstantPad2d
    • Non-linear Activations
      • ReLU
      • ReLU6
      • ELU
      • SELU
      • PReLU
      • LeakyReLU
      • Threshold
      • Hardtanh
      • Sigmoid
      • Tanh
      • LogSigmoid
      • Softplus
      • Softshrink
      • Softsign
      • Tanhshrink
      • Softmin
      • Softmax
      • LogSoftmax
    • Normalization layers
      • BatchNorm1d
      • BatchNorm2d
      • BatchNorm3d
      • InstanceNorm1d
      • InstanceNorm2d
      • InstanceNorm3d
    • Recurrent layers
      • RNN
      • LSTM
      • GRU
      • RNNCell
      • LSTMCell
      • GRUCell
    • Linear layers
      • Linear
    • Dropout layers
      • Dropout
      • Dropout2d
      • Dropout3d
      • AlphaDropout
    • Sparse layers
      • Embedding
      • EmbeddingBag
    • Distance functions
      • CosineSimilarity
      • PairwiseDistance
    • Loss functions
      • L1Loss
      • MSELoss
      • CrossEntropyLoss
      • NLLLoss
      • PoissonNLLLoss
      • NLLLoss2d
      • KLDivLoss
      • BCELoss
      • BCEWithLogitsLoss
      • MarginRankingLoss
      • HingeEmbeddingLoss
      • MultiLabelMarginLoss
      • SmoothL1Loss
      • SoftMarginLoss
      • MultiLabelSoftMarginLoss
      • CosineEmbeddingLoss
      • MultiMarginLoss
      • TripletMarginLoss
    • Vision layers
      • PixelShuffle
      • Upsample
      • UpsamplingNearest2d
      • UpsamplingBilinear2d
    • DataParallel layers (multi-GPU, distributed)
      • DataParallel
      • DistributedDataParallel
    • Utilities
      • clip_grad_norm
      • weight_norm
      • remove_weight_norm
      • PackedSequence
      • pack_padded_sequence
      • pad_packed_sequence
  • torch.nn.functional
    • Convolution functions
      • conv1d
      • conv2d
      • conv3d
      • conv_transpose1d
      • conv_transpose2d
      • conv_transpose3d
    • Pooling functions
      • avg_pool1d
      • avg_pool2d
      • avg_pool3d
      • max_pool1d
      • max_pool2d
      • max_pool3d
      • max_unpool1d
      • max_unpool2d
      • max_unpool3d
      • lp_pool2d
      • adaptive_max_pool1d
      • adaptive_max_pool2d
      • adaptive_avg_pool1d
      • adaptive_avg_pool2d
    • Non-linear activation functions
      • threshold
      • relu
      • hardtanh
      • relu6
      • elu
      • selu
      • leaky_relu
      • prelu
      • rrelu
      • logsigmoid
      • hardshrink
      • tanhshrink
      • softsign
      • softplus
      • softmin
      • softmax
      • softshrink
      • log_softmax
      • tanh
      • sigmoid
    • Normalization functions
      • batch_norm
      • normalize
    • Linear functions
      • linear
    • Dropout functions
      • dropout
      • alpha_dropout
      • dropout2d
      • dropout3d
    • Distance functions
      • pairwise_distance
      • cosine_similarity
    • Loss functions
      • binary_cross_entropy
      • poisson_nll_loss
      • cosine_embedding_loss
      • cross_entropy
      • hinge_embedding_loss
      • kl_div
      • l1_loss
      • mse_loss
      • margin_ranking_loss
      • multilabel_margin_loss
      • multilabel_soft_margin_loss
      • multi_margin_loss
      • nll_loss
      • binary_cross_entropy_with_logits
      • smooth_l1_loss
      • soft_margin_loss
      • triplet_margin_loss
    • Vision functions
      • pixel_shuffle
      • pad
      • upsample
      • upsample_nearest
      • upsample_bilinear
      • grid_sample
      • affine_grid
  • torch.nn.init
  • torch.optim
    • How to use an optimizer
      • Constructing it
      • Per-parameter options
      • Taking an optimization step
        • optimizer.step()
        • optimizer.step(closure)
    • Algorithms
    • How to adjust Learning Rate
  • torch.autograd
    • Variable
      • API compatibility
      • In-place operations on Variables
      • In-place correctness checks
    • Function
  • torch.multiprocessing
    • Strategy management
    • Sharing CUDA tensors
    • Sharing strategies
      • File descriptor - file_descriptor
      • File system - file_system
  • torch.distributed
    • Initialization
      • TCP initialization
      • Shared file-system initialization
      • Environment variable initialization
    • Groups
    • Point-to-point communication
    • Collective functions
  • torch.legacy
  • torch.cuda
    • Communication collectives
    • Streams and events
    • NVIDIA Tools Extension (NVTX)
  • torch.utils.ffi
  • torch.utils.data
  • torch.utils.model_zoo

torchvision Reference

  • torchvision
  • torchvision.datasets
    • MNIST
    • COCO
      • Captions
      • Detection
    • LSUN
    • ImageFolder
    • Imagenet-12
    • CIFAR
    • STL10
    • SVHN
    • PhotoTour
  • torchvision.models
  • torchvision.transforms
    • Transforms on PIL.Image
    • Transforms on torch.*Tensor
    • Conversion Transforms
    • Generic Transforms
  • torchvision.utils

Indices and tables

  • Index
  • Module Index
