RewardScaling

class torchrl.envs.transforms.RewardScaling(loc: float | Tensor, scale: float | Tensor, in_keys: Sequence[NestedKey] | None = None, out_keys: Sequence[NestedKey] | None = None, standard_normal: bool = False)[source]

Affine transform of the reward.

The reward is transformed according to:

\[reward = reward * scale + loc\]
Parameters:
  • loc (number or torch.Tensor) – location of the affine transform

  • scale (number or torch.Tensor) – scale of the affine transform

  • standard_normal (bool, optional) – if True, the transform is instead

    \[reward = (reward - loc)/scale\]

    as is done for standardization. Default is False.
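
A minimal usage sketch, assuming gymnasium is installed so that GymEnv("Pendulum-v1") can be built; with the default in_keys, the transform rescales the "reward" entry written at each step (found under ("next", "reward") in recent TorchRL releases):

>>> from torchrl.envs import GymEnv, TransformedEnv
>>> from torchrl.envs.transforms import RewardScaling
>>> base_env = GymEnv("Pendulum-v1")
>>> # reward <- reward * 0.1 + 0.0
>>> env = TransformedEnv(base_env, RewardScaling(loc=0.0, scale=0.1))
>>> td = env.rand_step(env.reset())
>>> td["next", "reward"]  # the scaled reward

With standard_normal=True, the same parameters act as a mean and standard deviation: with loc=2.0 and scale=0.5, a raw reward of 1.0 maps to (1.0 - 2.0)/0.5 = -2.0 rather than 1.0 * 0.5 + 2.0 = 2.5.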

transform_reward_spec(reward_spec: TensorSpec) → TensorSpec[source]

Transforms the reward spec so that the resulting spec matches the transform's mapping.

Parameters:

  • reward_spec (TensorSpec) – the spec before the transform

Returns:

  the expected spec after the transform
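
A sketch of calling this method directly, assuming the Unbounded spec class exported by torchrl.data (named UnboundedContinuousTensorSpec in older releases):

>>> from torchrl.data import Unbounded
>>> from torchrl.envs.transforms import RewardScaling
>>> t = RewardScaling(loc=0.0, scale=0.1)
>>> t.transform_reward_spec(Unbounded(shape=(1,)))  # spec describing the scaled reward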
