PyTorch documentation
PyTorch is an optimized tensor library for deep learning using GPUs and CPUs.
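For orientation, here is a minimal sketch of the tensor-and-autograd workflow the library provides; the shapes and device selection below are illustrative only, not part of any specific API documented here.

```python
import torch

# Run on the GPU when one is available, otherwise fall back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

x = torch.randn(3, 4, device=device, requires_grad=True)
w = torch.randn(4, 2, device=device)

# Ordinary tensor math executes on whichever device the tensors live on.
y = (x @ w).relu().sum()

# Autograd records the operations above and computes gradients on demand.
y.backward()
print(x.grad.shape)  # torch.Size([3, 4])
```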
Features described in this documentation are classified by release status:
- Stable (API-Stable): These features will be maintained long-term; there should generally be no major performance limitations or gaps in documentation. We also expect to maintain backwards compatibility, although breaking changes can happen and notice will be given one release ahead of time.
- Unstable (API-Unstable): These features are under active development; their APIs and performance characteristics may change based on user feedback, needed performance improvements, or because operator coverage is not yet complete.
- Python API
- torch
- torch.nn
- torch.nn.functional
- torch.Tensor
- Tensor Attributes
- Tensor Views
- torch.amp
- torch.autograd
- torch.library
- torch.accelerator
- torch.cpu
- torch.cuda
- torch.cuda.memory
- torch.mps
- torch.xpu
- torch.mtia
- torch.mtia.memory
- Meta device
- torch.backends
- torch.export
- torch.distributed
- torch.distributed.tensor
- torch.distributed.algorithms.join
- torch.distributed.elastic
- torch.distributed.fsdp
- torch.distributed.fsdp.fully_shard
- torch.distributed.tensor.parallel
- torch.distributed.optim
- torch.distributed.pipelining
- torch.distributed.checkpoint
- torch.distributions
- torch.compiler
- torch.fft
- torch.func
- torch.futures
- torch.fx
- torch.fx.experimental
- torch.hub
- torch.jit
- torch.linalg
- torch.monitor
- torch.signal
- torch.special
- torch.overrides
- torch.package
- torch.profiler
- torch.nn.init
- torch.nn.attention
- torch.onnx
- torch.optim
- Complex Numbers
- DDP Communication Hooks
- Quantization
- Distributed RPC Framework
- torch.random
- torch.masked
- torch.nested
- torch.Size
- torch.sparse
- torch.Storage
- torch.testing
- torch.utils
- torch.utils.benchmark
- torch.utils.bottleneck
- torch.utils.checkpoint
- torch.utils.cpp_extension
- torch.utils.data
- torch.utils.deterministic
- torch.utils.jit
- torch.utils.dlpack
- torch.utils.mobile_optimizer
- torch.utils.model_zoo
- torch.utils.tensorboard
- torch.utils.module_tracker
- Type Info
- Named Tensors
- Named Tensors operator coverage
- torch.__config__
- torch.__future__
- torch._logging
- Torch Environment Variables
- Developer Notes
- Automatic Mixed Precision examples
- Autograd mechanics
- Broadcasting semantics
- CPU threading and TorchScript inference
- CUDA semantics
- PyTorch Custom Operators Landing Page
- Distributed Data Parallel
- Extending PyTorch
- Extending torch.func with autograd.Function
- Frequently Asked Questions
- FSDP Notes
- Getting Started on Intel GPU
- Gradcheck mechanics
- HIP (ROCm) semantics
- Features for large-scale deployments
- LibTorch Stable ABI
- Modules
- MPS backend
- Multiprocessing best practices
- Numerical accuracy
- Out Notes
- Reproducibility
- Serialization semantics
- Windows FAQ