Installation#

Precompiled Binaries#

Torch-TensorRT 2.x is centered primarily around Python. As such, precompiled releases can be found on pypi.org

Dependencies#

You need to have CUDA, PyTorch, and TensorRT (the Python package is sufficient) installed to use Torch-TensorRT
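
A quick way to sanity-check these dependencies (a minimal snippet, assuming the torch and tensorrt packages are importable from your active environment):

python -c "import torch, tensorrt; print(torch.__version__, tensorrt.__version__, torch.cuda.is_available())"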

Installing Torch-TensorRT#

You can install the python package using

python -m pip install torch torch-tensorrt tensorrt

Packages are uploaded for Linux on x86 and Windows

Installing Torch-TensorRT for a specific CUDA version#

Similar to PyTorch, Torch-TensorRT has builds compiled for different versions of CUDA. These are distributed on PyTorch’s package index

For example, for CUDA 11.8:

python -m pip install torch torch-tensorrt tensorrt --extra-index-url https://download.pytorch.org/whl/cu118

Installing Nightly Builds#

Torch-TensorRT distributes nightly builds targeting the PyTorch nightlies. These can be installed from the PyTorch nightly package index (separated by CUDA version)

python -m pip install --pre torch torch-tensorrt tensorrt --extra-index-url https://download.pytorch.org/whl/nightly/cu130

C++ Precompiled Binaries (TorchScript Only)#

Precompiled tarballs for releases are provided here: pytorch/TensorRT
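
Once downloaded, a tarball can be unpacked in the usual way (the filename below is illustrative; use the actual asset name from the release page):

tar -xzf libtorchtrt-<VERSION>-<PLATFORM>.tar.gz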

Compiling From Source#

Building on Linux#

Dependencies#

  • Torch-TensorRT is built with Bazel, so begin by installing it.

    • The easiest way is to install Bazelisk using the method of your choosing; see bazelbuild/bazelisk

    • Otherwise, you can follow the instructions at https://docs.bazel.build/versions/master/install.html to install prebuilt binaries

    • Finally, if you need to compile Bazel from source (e.g. on aarch64, until Bazel distributes binaries for that architecture), you can use the following instructions

    export BAZEL_VERSION=$(cat <PATH_TO_TORCHTRT_ROOT>/.bazelversion)
    mkdir bazel
    cd bazel
    curl -fSsL -O https://github.com/bazelbuild/bazel/releases/download/$BAZEL_VERSION/bazel-$BAZEL_VERSION-dist.zip
    unzip bazel-$BAZEL_VERSION-dist.zip
    bash ./compile.sh
    cp output/bazel /usr/local/bin/
    
  • You will also need to have CUDA installed on the system (or if running in a container, the system must have the CUDA driver installed and the container must have CUDA)

    • If you are using a CUDA version other than the one used in the branch being built, specify it here: pytorch/TensorRT

  • LibTorch — by default Bazel automatically detects the PyTorch installation from your active environment and compiles against it. This ensures the C++ headers always match the runtime libtorch_cuda.so, avoiding ABI mismatches.

    Detection order:

    1. TORCH_PATH environment variable — absolute path to the torch package directory.

    2. VIRTUAL_ENV — used when a virtualenv or uv venv is activated (source .venv/bin/activate).

    3. CONDA_PREFIX — used when a conda environment is activated (conda activate myenv).

    4. .venv/bin/python3 relative to the repository root.

    5. python3 / python on PATH — system Python fallback.

    If auto-detection fails, set TORCH_PATH explicitly:

    TORCH_PATH=$(python3 -c "import torch, os; print(os.path.dirname(torch.__file__))") \
        bazelisk build //:libtorchtrt -c opt
    

    Pinning to a specific nightly (frozen deps)

    If you prefer reproducible builds against a fixed PyTorch nightly, MODULE.bazel contains a commented-out http_archive block for libtorch. Comment out the local_torch line and uncomment the http_archive block. The pinned version must match the PyTorch installed in your Python environment: a mismatch between the compiled headers and the runtime libtorch_cuda.so can cause ABI breakage. To pin to a specific build rather than the latest, replace the URL with a dated nightly, e.g. libtorch-shared-with-deps-2.6.0.dev20250101%2Bcu130.zip
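
    As a rough sketch, the pinned block looks something like the following (URL and build_file values are illustrative; check the commented-out block in MODULE.bazel for the exact form used by your checkout):

    http_archive(
        name = "libtorch",
        build_file = "third_party/libtorch/BUILD",  # illustrative; match the repo's own block
        strip_prefix = "libtorch",
        # Dated nightly URL pins the build to a specific PyTorch snapshot
        urls = ["https://download.pytorch.org/libtorch/nightly/cu130/libtorch-shared-with-deps-2.6.0.dev20250101%2Bcu130.zip"],
    )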

  • TensorRT does not need to be installed on the system to build Torch-TensorRT; in fact, not installing it is preferable, as it ensures reproducible builds. If a version other than the default is needed, point the WORKSPACE file to the URL of the tarball, or download the tarball for TensorRT from https://developer.nvidia.com and update the paths in the WORKSPACE file here: pytorch/TensorRT

    For example:

    http_archive(
        name = "tensorrt",
        build_file = "@//third_party/tensorrt/archive:BUILD",
        sha256 = "<TENSORRT SHA256>", # Optional but recommended
        strip_prefix = "TensorRT-<TENSORRT VERSION>",
        urls = [
            "https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/<TENSORRT DOWNLOAD PATH>",
            # OR
            "file:///<ABSOLUTE PATH TO FILE>/TensorRT-<TENSORRT VERSION>.Linux.x86_64-gnu.cuda-<CUDA VERSION>.tar.gz"
        ],
    )
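
    If you want to fill in the optional sha256 field, it can be computed locally once the tarball is downloaded:

    sha256sum TensorRT-<TENSORRT VERSION>.Linux.x86_64-gnu.cuda-<CUDA VERSION>.tar.gz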
    

    Remember that at runtime these libraries must be added to your LD_LIBRARY_PATH explicitly
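
    For example (path illustrative):

    export LD_LIBRARY_PATH=<PATH TO>/TensorRT-<TENSORRT VERSION>/lib:$LD_LIBRARY_PATH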

If you have a local version of TensorRT installed, it can be used as well by commenting out the above lines and uncommenting the following lines: pytorch/TensorRT

Building the Package#

Once the WORKSPACE has been configured properly, all that is required to build torch-tensorrt is the following command

python -m pip install --pre . --extra-index-url https://download.pytorch.org/whl/nightly/cu130

If you use the uv (https://docs.astral.sh/uv/) tool to manage python and your projects, the command is slightly simpler

uv pip install -e .

To build the wheel file

python -m pip wheel --no-deps --pre . --extra-index-url https://download.pytorch.org/whl/nightly/cu130 -w dist

Additional Build Options#

Some features in the library are optional and allow builds to be lighter or more portable.

Python Only Distribution#

There are multiple features of the library that require C++ components. These include the TorchScript frontend, which accepts TorchScript modules for compilation, and the Torch-TensorRT runtime, the default executor for modules compiled with Torch-TensorRT, whether with the TorchScript or Dynamo frontend.

In the case you want a build which does not require C++, you can disable these features and avoid building these components. As a result, the only available runtime will be the Python-based one, which has implications for features like serialization.

PYTHON_ONLY=1 python -m pip install --pre . --extra-index-url https://download.pytorch.org/whl/nightly/cu130

No TorchScript Frontend#

The TorchScript frontend is a legacy feature of Torch-TensorRT which is now in maintenance mode, as TorchDynamo has become the preferred compiler technology for this project. It contains quite a bit of C++ code that is no longer necessary for most users. Therefore, you can exclude this component from your build to speed up build times. The C++ based runtime will still be available to use.

NO_TORCHSCRIPT=1 python -m pip install --pre . --extra-index-url https://download.pytorch.org/whl/nightly/cu130

Building the C++ Library Standalone (TorchScript Only)#

Release Build#

bazel build //:libtorchtrt -c opt

A tarball with the include files and library can then be found in bazel-bin

Debug Build#

To build with debug symbols use the following command

bazel build //:libtorchtrt -c dbg

A tarball with the include files and library can then be found in bazel-bin

Choosing the Right ABI#

In older versions, there were two mutually incompatible ABI options for compiling Torch-TensorRT: pre-cxx11-abi and cxx11-abi. The complexity came from the different distributions of PyTorch. Fortunately, PyTorch has switched to cxx11-abi for all distributions. Below is a table with general pairings of PyTorch distribution sources and the recommended commands:

| PyTorch Source | Recommended Python Compilation Command | Recommended C++ Compilation Command |
|---|---|---|
| PyTorch whl file from PyTorch.org | python -m pip install . | bazel build //:libtorchtrt -c opt |
| libtorch-cxx11-abi-shared-with-deps-*.zip from PyTorch.org | python setup.py bdist_wheel | bazel build //:libtorchtrt -c opt |
| PyTorch preinstalled in an NGC container | python setup.py bdist_wheel | bazel build //:libtorchtrt -c opt |
| PyTorch from the NVIDIA Forums for Jetson | python setup.py bdist_wheel | bazel build //:libtorchtrt -c opt |
| PyTorch built from Source | python setup.py bdist_wheel | bazel build //:libtorchtrt -c opt |

NOTE: For all of the above cases you must correctly declare the source of PyTorch you intend to use in your WORKSPACE file for both Python and C++ builds. See below for more information

Platform-specific Installation#

Building on Windows#

  • Microsoft VS 2022 Tools

  • Bazelisk

  • CUDA

Build steps#

  • Open the app “x64 Native Tools Command Prompt for VS 2022” - note that Admin privileges may be necessary

  • Ensure Bazelisk (the Bazel launcher) is installed on your machine and available from the command line. Package installers such as Chocolatey can be used to install Bazelisk (see the example after these steps)

  • Install the latest version of Torch (i.e. with pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu130)

  • Clone the Torch-TensorRT repository and navigate to its root directory

  • Run pip install ninja wheel setuptools

  • Run pip install --pre -r py/requirements.txt

  • Run set DISTUTILS_USE_SDK=1

  • Run python setup.py bdist_wheel

  • Run pip install dist/*.whl
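
For the Bazelisk step, if you use a package manager such as Chocolatey, installation can be as simple as the following (assuming Chocolatey itself is already installed):

choco install bazelisk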

Advanced Setup and Troubleshooting#

In the WORKSPACE file, the cuda_win, libtorch_win, and tensorrt_win are Windows-specific modules which can be customized. For instance, if you would like to build with a different version of CUDA, or your CUDA installation is in a non-standard location, update the path in the cuda_win module.
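
As a sketch of what such a customization might look like (the path and build_file values are illustrative; the exact form of the module in your WORKSPACE may differ):

new_local_repository(
    name = "cuda_win",
    build_file = "third_party/cuda/BUILD",  # illustrative; match your WORKSPACE
    path = "C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.1/",  # point at your CUDA install
)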

Similarly, if you would like to use a different version of pytorch or tensorrt, customize the urls in the libtorch_win and tensorrt_win modules, respectively.

Local versions of these packages can also be used on Windows. See toolchains\ci_workspaces\WORKSPACE.win.release.tmpl for an example of using a local version of TensorRT on Windows.

Alternative Build Systems#

Building with CMake (TorchScript Only)#

It is possible to build the API libraries (in cpp/) and the torchtrtc executable using CMake instead of Bazel. Currently, the python API and the tests cannot be built with CMake. Begin by installing CMake.

  • The latest releases of CMake and instructions on how to install it are available for different platforms on their website: https://cmake.org/download/

A few useful CMake options include:

  • CMake finders for TensorRT are provided in cmake/Modules. In order for CMake to use them, pass -DCMAKE_MODULE_PATH=cmake/Modules when configuring the project with CMake.

  • Libtorch provides its own CMake finder. In case CMake doesn’t find it, pass the path to your install of libtorch with -DTorch_DIR=<path to libtorch>/share/cmake/Torch

  • If TensorRT is not found with the provided cmake finder, specify -DTensorRT_ROOT=<path to TensorRT>

  • Finally, configure and build the project in a build directory of your choice with the following command from the root of the Torch-TensorRT project:

cmake -S. -B<build directory> \
    [-DCMAKE_MODULE_PATH=cmake/Modules] \
    [-DTorch_DIR=<path to libtorch>/share/cmake/Torch] \
    [-DTensorRT_ROOT=<path to TensorRT>] \
    [-DCMAKE_BUILD_TYPE=Debug|Release]
cmake --build <build directory>

Building Natively on aarch64 (Jetson)#

Prerequisites#

Install or compile a build of PyTorch/LibTorch for aarch64

NVIDIA hosts builds of the latest release branch for Jetson here:

Environment Setup#

To build natively on the aarch64-linux-gnu platform, configure the WORKSPACE with locally available dependencies.

  1. Replace WORKSPACE with the corresponding WORKSPACE file in //toolchains/jp_workspaces

  2. Configure the correct paths to directory roots containing local dependencies in the new_local_repository rules:

    NOTE: If you installed PyTorch using a pip package, the correct path is the path to the root of the python torch package. If you installed with sudo pip install, this will be /usr/local/lib/python3.8/dist-packages/torch. If you installed with pip install --user, this will be $HOME/.local/lib/python3.8/site-packages/torch.

new_local_repository(
    name = "libtorch",
    path = "/usr/local/lib/python3.8/dist-packages/torch",
    build_file = "third_party/libtorch/BUILD"
)
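
If you are unsure of the correct path for your environment, you can query Python directly (the same one-liner used above for TORCH_PATH):

python3 -c "import torch, os; print(os.path.dirname(torch.__file__))"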

Compile C++ Library and Compiler CLI#

NOTE: Due to shifting dependency locations between Jetpack 4.5 and 4.6, there is now a flag to inform bazel of the Jetpack version

--platforms //toolchains:jetpack_x.x

Compile Torch-TensorRT library using bazel command:

bazel build //:libtorchtrt --platforms //toolchains:jetpack_5.0

Compile Python API#

NOTE: Due to shifting dependency locations between Jetpack 4.5 and newer Jetpack versions, there is now a flag for setup.py which sets the Jetpack version (default: 5.0)

Compile the Python API using the following command from the //py directory:

python3 setup.py install

If you are building for Jetpack 4.5, add the --jetpack-version 4.5 flag
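
For example:

python3 setup.py install --jetpack-version 4.5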