ATen: Tensor Library#
ATen (A Tensor Library) is the foundational tensor and mathematical operation
library on which all of PyTorch is built. It provides the core Tensor class
and hundreds of mathematical operations that work on tensors.
When to use ATen directly:

- When writing low-level tensor operations or custom kernels
- When you need direct access to tensor data and metadata
- When working with PyTorch internals or extending PyTorch
Basic usage:

#include <ATen/ATen.h>
#include <ATen/cuda/CUDAContext.h>  // for at::cuda::is_available()

// Create tensors
at::Tensor a = at::ones({2, 3});
at::Tensor b = at::randn({2, 3});

// Perform operations
at::Tensor c = a + b;
at::Tensor d = at::matmul(a.t(), b);  // (3, 2) x (2, 3) -> (3, 3)

// Move to GPU
if (at::cuda::is_available()) {
  at::Tensor gpu_tensor = c.to(at::kCUDA);
}
For most applications, prefer the higher-level torch:: namespace
(see Neural Network Modules (torch::nn), Optimizers (torch::optim)), which provides a more user-friendly API.
Header Files#
The following headers are part of the ATen public API:
- ATen/ATen.h - Main ATen header
- ATen/Backend.h - Backend enumeration
- ATen/core/Tensor.h - Tensor class
- ATen/core/ivalue.h - IValue type (see C10: Core Utilities)
- ATen/core/ScalarType.h - Data type definitions
- ATen/TensorOptions.h - Tensor creation options
- ATen/Scalar.h - Scalar type
- ATen/Layout.h - Tensor layout
- ATen/DeviceGuard.h - Device context management
- ATen/native/TensorShape.h - Tensor shape operations
- ATen/cuda/CUDAContext.h - CUDA context (see CUDA Support)
- ATen/cudnn/Descriptors.h - cuDNN descriptors
- ATen/mkl/Descriptors.h - MKL descriptors
Note
The core at::Tensor class is defined in a generated header file
(TensorBody.h) that only exists after building PyTorch. The API is
therefore documented manually below.