
NUMA Binding Utilities#

Created On: Jul 25, 2025 | Last Updated On: Aug 12, 2025

class torch.numa.binding.AffinityMode(value)[source]#

See torch.distributed.run for a description of each affinity mode's behavior.

class torch.numa.binding.NumaOptions(affinity_mode: torch.numa.binding.AffinityMode, should_fall_back_if_binding_fails: bool = False)[source]#
affinity_mode: AffinityMode#

should_fall_back_if_binding_fails: bool = False#

If True, we will fall back to using the original command/entrypoint if we fail to compute or apply NUMA bindings.

You should avoid using this option! It is only intended as a safety mechanism for facilitating mass rollouts of NUMA binding.
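
Illustrative sketch only: AffinityMode.NODE is assumed here as a placeholder member; see torch.distributed.run for the actual affinity modes.

>>> from torch.numa.binding import AffinityMode, NumaOptions
>>> # NODE is an assumed member name; use the mode documented in torch.distributed.run.
>>> options = NumaOptions(
...     affinity_mode=AffinityMode.NODE,
...     should_fall_back_if_binding_fails=True,  # run unbound rather than fail the launch
... )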

torch.numa.binding.maybe_get_temporary_python_executable_with_numa_bindings(*, python_executable_path, gpu_index, numa_options)[source]#
Parameters

python_executable_path (str) – E.g., "/usr/local/bin/python"

Returns

Path to a temporary file. This file can be executed just like the original python executable, except it will first apply NUMA bindings.

Return type

Optional[str]
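
Illustrative sketch only: the subprocess launch, the "train.py" script, and AffinityMode.NODE are assumptions, not part of this API.

>>> import subprocess
>>> from torch.numa.binding import (
...     AffinityMode,
...     NumaOptions,
...     maybe_get_temporary_python_executable_with_numa_bindings,
... )
>>> executable = maybe_get_temporary_python_executable_with_numa_bindings(
...     python_executable_path="/usr/local/bin/python",
...     gpu_index=0,
...     numa_options=NumaOptions(affinity_mode=AffinityMode.NODE),  # NODE assumed for illustration
... )
>>> # Fall back to the original interpreter if no wrapper executable was produced.
>>> proc = subprocess.run([executable or "/usr/local/bin/python", "train.py"])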

torch.numa.binding.maybe_wrap_command_with_numa_bindings(*, command_args, gpu_index, numa_options)[source]#
Parameters
  • command_args (tuple[str, ...]) – Full shell command, e.g. ("/usr/local/bin/python", "train.py")

  • gpu_index (int) – The index of the GPU to which command_args should bind

Returns

command_args, but wrapped so that it runs with NUMA bindings corresponding to gpu_index and numa_options. E.g., ("numactl", "--cpunodebind=0", "/usr/local/bin/python", "train.py")

Return type

Optional[tuple[str, ...]]
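
Illustrative sketch only: AffinityMode.NODE is assumed as a placeholder member, and the exact wrapped command depends on the host's NUMA topology.

>>> from torch.numa.binding import (
...     AffinityMode,
...     NumaOptions,
...     maybe_wrap_command_with_numa_bindings,
... )
>>> wrapped = maybe_wrap_command_with_numa_bindings(
...     command_args=("/usr/local/bin/python", "train.py"),
...     gpu_index=0,
...     numa_options=NumaOptions(affinity_mode=AffinityMode.NODE),  # NODE assumed for illustration
... )
>>> # wrapped is something like ("numactl", "--cpunodebind=0", "/usr/local/bin/python", "train.py"),
>>> # or None when no binding could be applied.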