torch.random

Random Number Generator

torch.random.get_rng_state()[source]

Returns the random number generator state as a torch.ByteTensor.
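A minimal sketch of inspecting the returned state (default CPU generator only; the exact size of the tensor is an implementation detail):

    import torch

    # The state of the default CPU generator, as a 1-D torch.ByteTensor
    # (dtype torch.uint8). It can be stored and later passed back to
    # torch.random.set_rng_state().
    state = torch.random.get_rng_state()
    print(state.dtype)    # torch.uint8
    print(state.numel())  # length is an implementation detail of the generator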

torch.random.set_rng_state(new_state)[source]

Sets the random number generator state.

Parameters

new_state (torch.ByteTensor) – The desired state.
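A short sketch of the usual save/restore round trip, assuming only the default CPU generator is involved:

    import torch

    # Save the CPU RNG state, draw some numbers, then restore the state
    # so the exact same numbers are drawn again.
    state = torch.random.get_rng_state()
    a = torch.rand(3)

    torch.random.set_rng_state(state)
    b = torch.rand(3)

    assert torch.equal(a, b)  # identical draws after restoring the state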

torch.random.manual_seed(seed)[source]

Sets the seed for generating random numbers. Returns a torch.Generator object.

Parameters

seed (int) – The desired seed.
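A minimal reproducibility sketch for the default CPU generator:

    import torch

    # manual_seed() seeds the default generator and returns it.
    gen = torch.random.manual_seed(42)
    x = torch.rand(2)

    torch.random.manual_seed(42)  # re-seeding replays the same stream
    y = torch.rand(2)

    assert torch.equal(x, y)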

torch.random.seed()[source]

Sets the seed for generating random numbers to a non-deterministic random number. Returns a 64-bit number used to seed the RNG.
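A sketch showing how the returned value can be kept in order to replay a non-deterministically seeded run later (assumes only the default CPU generator is used):

    import torch

    # seed() draws a fresh seed from a non-deterministic source, applies it,
    # and returns the 64-bit value that was used.
    s = torch.random.seed()
    x = torch.rand(3)

    torch.random.manual_seed(s)   # re-apply the saved seed
    assert torch.equal(x, torch.rand(3))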

torch.random.initial_seed()[source]

Returns the initial seed for generating random numbers as a Python long.
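A one-line check, assuming the seed was last set via manual_seed():

    import torch

    torch.random.manual_seed(1234)
    # initial_seed() reports the seed most recently applied to the
    # default generator, whether it came from manual_seed() or seed().
    assert torch.random.initial_seed() == 1234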

torch.random.fork_rng(devices=None, enabled=True, _caller='fork_rng', _devices_kw='devices')[source]

Forks the RNG, so that when you return, the RNG is reset to the state that it was previously in.

Parameters
  • devices (iterable of CUDA IDs) – CUDA devices for which to fork the RNG. The CPU RNG state is always forked. By default, fork_rng() operates on all devices, but emits a warning if your machine has many devices, since the function will run very slowly in that case. If you explicitly specify devices, this warning is suppressed.

  • enabled (bool) – If False, the RNG is not forked. This is a convenience argument for easily disabling the context manager without having to delete it and unindent the Python code under it.
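A sketch of the context manager in a CPU-only setting (devices=[] is passed so no CUDA state is touched); draws made inside the block do not disturb the outer stream:

    import torch

    torch.random.manual_seed(0)
    expected = torch.rand(3)

    torch.random.manual_seed(0)
    with torch.random.fork_rng(devices=[]):  # empty list: skip CUDA devices
        torch.random.manual_seed(123)
        _ = torch.rand(3)                    # draws inside the fork stay inside

    after = torch.rand(3)
    assert torch.equal(expected, after)      # outer RNG stream is unaffected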
