Actor Modules

Actor modules represent policies in RL. They map observations to actions, either deterministically or stochastically.

TensorDictModules and SafeModules

Actor(*args, **kwargs)

General class for deterministic actors in RL.
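
Example (a minimal sketch; Actor defaults to reading "observation" and writing "action"):

>>> import torch
>>> from tensordict import TensorDict
>>> from torchrl.modules import Actor
>>> td = TensorDict({"observation": torch.randn(3, 4)}, [3])
>>> # any nn.Module can be wrapped; in_keys/out_keys default to ["observation"]/["action"]
>>> actor = Actor(torch.nn.Linear(4, 2))
>>> td = actor(td)
>>> td.get("action").shape
torch.Size([3, 2])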

MultiStepActorWrapper(*args, **kwargs)

A wrapper around a multi-action actor.

SafeModule(*args, **kwargs)

tensordict.nn.TensorDictModule subclass that accepts a TensorSpec as an argument to control the output domain.
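
Example (a minimal sketch; "Bounded" stands for the bounded TensorSpec class, whose name varies across TorchRL versions, e.g. BoundedTensorSpec in older releases):

>>> import torch
>>> from tensordict import TensorDict
>>> from torchrl.data import Bounded
>>> from torchrl.modules import SafeModule
>>> td = TensorDict({"observation": torch.randn(3, 4)}, [3])
>>> spec = Bounded(low=-1, high=1, shape=(8,))
>>> module = SafeModule(
...     torch.nn.Linear(4, 8),
...     in_keys=["observation"],
...     out_keys=["hidden"],
...     spec=spec,
...     safe=True,  # with safe=True, out-of-domain outputs are projected back into the spec
... )
>>> td = module(td)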

SafeSequential(*args, **kwargs)

A safe sequence of TensorDictModules.

TanhModule(*args, **kwargs)

A Tanh module for deterministic policies with bounded action space.
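
Example (a minimal sketch, assuming TanhModule's low/high keyword arguments for the action bounds):

>>> import torch
>>> from tensordict import TensorDict
>>> from tensordict.nn import TensorDictModule, TensorDictSequential
>>> from torchrl.modules import TanhModule
>>> net = TensorDictModule(torch.nn.Linear(4, 2), in_keys=["observation"], out_keys=["action"])
>>> # squash the raw network output into the [-1, 1] action range
>>> policy = TensorDictSequential(net, TanhModule(in_keys=["action"], low=-1.0, high=1.0))
>>> td = policy(TensorDict({"observation": torch.randn(3, 4)}, [3]))
>>> bool((td["action"].abs() <= 1).all())
True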

Probabilistic actors

ProbabilisticActor(*args, **kwargs)

General class for probabilistic actors in RL.
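
Example (a minimal sketch close to the documented usage; the exact log-probability key name may differ across versions):

>>> import torch
>>> from tensordict import TensorDict
>>> from tensordict.nn import TensorDictModule, NormalParamExtractor
>>> from torchrl.modules import ProbabilisticActor, TanhNormal
>>> td = TensorDict({"observation": torch.randn(3, 4)}, [3])
>>> # the inner module produces the distribution parameters ("loc", "scale")
>>> net = torch.nn.Sequential(torch.nn.Linear(4, 8), NormalParamExtractor())
>>> module = TensorDictModule(net, in_keys=["observation"], out_keys=["loc", "scale"])
>>> actor = ProbabilisticActor(
...     module=module,
...     in_keys=["loc", "scale"],
...     distribution_class=TanhNormal,
...     return_log_prob=True,
... )
>>> td = actor(td)  # samples an "action" and writes its log-probability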

SafeProbabilisticModule(*args, **kwargs)

tensordict.nn.ProbabilisticTensorDictModule subclass that accepts a TensorSpec as an argument to control the output domain.

SafeProbabilisticTensorDictSequential(*args, ...)

tensordict.nn.ProbabilisticTensorDictSequential subclass that accepts a TensorSpec as an argument to control the output domain.

Q-Value actors

QValueActor(*args, **kwargs)

A Q-Value actor class.
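
Example (a minimal sketch for a discrete, one-hot action space):

>>> import torch
>>> from tensordict import TensorDict
>>> from torchrl.modules import QValueActor
>>> td = TensorDict({"observation": torch.randn(5, 4)}, [5])
>>> value_net = torch.nn.Linear(4, 3)  # one Q-value per action (3 actions)
>>> actor = QValueActor(module=value_net, action_space="one_hot")
>>> td = actor(td)
>>> # adds "action", "action_value" and "chosen_action_value" entries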

DistributionalQValueActor(*args, **kwargs)

A Distributional DQN actor class.
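
Example (a minimal sketch, assuming the support keyword argument for the atom values and a network returning one logit per (atom, action) pair):

>>> import torch
>>> from tensordict import TensorDict
>>> from torchrl.modules import DistributionalQValueActor, MLP
>>> td = TensorDict({"observation": torch.randn(5, 4)}, [5])
>>> n_atoms, n_actions = 3, 4
>>> net = MLP(out_features=(n_atoms, n_actions), depth=2)  # (..., atoms, actions) logits
>>> actor = DistributionalQValueActor(
...     module=net,
...     support=torch.arange(n_atoms),  # values of the return-distribution atoms
...     action_space="one_hot",
... )
>>> td = actor(td)  # writes the greedy "action" alongside the distributional "action_value"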

QValueModule(*args, **kwargs)

Q-Value TensorDictModule for Q-value policies.
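
Example (a minimal sketch; QValueModule reads the "action_value" entry written by an upstream value network):

>>> import torch
>>> from tensordict import TensorDict
>>> from tensordict.nn import TensorDictModule, TensorDictSequential
>>> from torchrl.modules import QValueModule
>>> value_net = TensorDictModule(
...     torch.nn.Linear(4, 3), in_keys=["observation"], out_keys=["action_value"]
... )
>>> policy = TensorDictSequential(value_net, QValueModule(action_space="one_hot"))
>>> td = policy(TensorDict({"observation": torch.randn(5, 4)}, [5]))
>>> # the greedy one-hot "action" and "chosen_action_value" are added to td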

DistributionalQValueModule(*args, **kwargs)

Distributional Q-Value TensorDictModule for Q-value policies.
