torch.random

Created On: Aug 07, 2019 | Last Updated On: Jun 18, 2025

torch.random.fork_rng(devices=None, enabled=True, _caller='fork_rng', _devices_kw='devices', device_type='cuda')[source]

Forks the RNG, so that when you return, the RNG is reset to the state that it was previously in.

Parameters:
  • devices (iterable of Device IDs) – devices for which to fork the RNG. CPU RNG state is always forked. By default, fork_rng() operates on all devices, but will emit a warning if your machine has a lot of devices, since this function will run very slowly in that case. If you explicitly specify devices, this warning is suppressed.

  • enabled (bool) – if False, the RNG is not forked. This is a convenience argument for easily disabling the context manager without having to delete it and unindent your Python code under it.

  • device_type (str) – device type string; defaults to "cuda". For the supported device types, see the accelerator documentation.

Return type:

Generator
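For illustration, a minimal CPU-only sketch of the context manager: draws made inside the forked block do not advance the surrounding RNG stream, so the caller's sequence continues as if the block had never run.

```python
import torch

torch.manual_seed(0)
with torch.random.fork_rng():
    torch.manual_seed(123)      # reseed only inside the fork
    inside = torch.rand(3)      # drawn from the forked state
after = torch.rand(3)           # continues the original seed-0 stream

# The draw after the block matches what seed 0 would have produced first.
torch.manual_seed(0)
expected = torch.rand(3)
print(torch.equal(after, expected))  # True
```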

torch.random.get_rng_state()[source]

Returns the random number generator state as a torch.ByteTensor.

Note

The returned state is for the default generator on CPU only.

See also: torch.random.fork_rng().

Return type:

Tensor
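As a quick sketch, the returned state is a CPU tensor of dtype torch.uint8 (a ByteTensor), which can be stashed and later passed back to torch.random.set_rng_state():

```python
import torch

state = torch.random.get_rng_state()
print(state.dtype)   # torch.uint8, i.e. a ByteTensor
print(state.device)  # cpu -- state of the default CPU generator only
```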

torch.random.initial_seed()[source]

Returns the initial seed for generating random numbers as a Python long.

Note

The returned seed is for the default generator on CPU only.

Return type:

int
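A minimal sketch: the seed most recently set via torch.manual_seed() is what initial_seed() reports back.

```python
import torch

torch.manual_seed(42)
print(torch.random.initial_seed())  # 42
```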

torch.random.manual_seed(seed)[source]

Sets the seed for generating random numbers on all devices. Returns a torch.Generator object.

Parameters:

seed (int) – The desired seed. Value must be within the inclusive range [-0x8000_0000_0000_0000, 0xffff_ffff_ffff_ffff]. Otherwise, a RuntimeError is raised. Negative inputs are remapped to positive values with the formula 0xffff_ffff_ffff_ffff + seed.

Return type:

Generator

torch.random.seed()[source]

Sets the seed for generating random numbers to a non-deterministic random number on all devices. Returns a 64-bit number used to seed the RNG.

Return type:

int
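A small sketch of the round trip, assuming the returned value is the seed installed in the default generator (so initial_seed() reports it back):

```python
import torch

s = torch.random.seed()   # nondeterministic seed, applied to all devices
print(s == torch.random.initial_seed())  # True
print(0 <= s < 2**64)                    # fits in 64 bits
```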

torch.random.set_rng_state(new_state)[source]

Sets the random number generator state.

Note

This function only works for CPU. For CUDA, please use torch.manual_seed(), which works for both CPU and CUDA.

Parameters:

new_state (torch.ByteTensor) – The desired state
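For illustration, a minimal CPU round trip with torch.random.get_rng_state(): capture the state, draw, rewind, and replay the identical draw.

```python
import torch

state = torch.random.get_rng_state()  # capture the default CPU generator
first = torch.rand(4)

torch.random.set_rng_state(state)     # rewind to the captured state
replay = torch.rand(4)
print(torch.equal(first, replay))     # True
```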

torch.random.thread_safe_generator()[source]

Returns a thread-safe random number generator for use in DataLoader workers. This function provides a convenient way for transforms and user code to use thread-safe random number generation without manually checking the worker context. When called in a DataLoader thread worker, it returns the worker's thread-local torch.Generator. When called in the main process or in process workers, it returns None, which causes PyTorch functions to fall back to the default global RNG.

Example:

>>> import torch
>>> from torch.random import thread_safe_generator
>>> generator = thread_safe_generator()
>>> torch.randint(0, 10, (5,), generator=generator)

Example with transforms:

>>> import torch
>>> from torch.random import thread_safe_generator
>>> class MyRandomTransform:
...     def __call__(self, img):
...         generator = thread_safe_generator()
...         offset = torch.randint(0, 10, (2,), generator=generator)
...         return img[..., offset[0]:, offset[1]:]
Return type:

Generator | None