LazyStackedCompositeSpec¶
- class torchrl.data.LazyStackedCompositeSpec(*args, **kwargs)[source]¶
Deprecated version of torchrl.data.StackedComposite.
- assert_is_in(value: Tensor) None¶
Asserts whether a tensor belongs to the box, and raises an exception otherwise.
- Parameters:
value (torch.Tensor) – value to be checked.
- cardinality(*args, **kwargs) Any¶
The cardinality of the spec.
This refers to the number of possible outcomes in a spec. It is assumed that the cardinality of a composite spec is the cartesian product of all possible outcomes.
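The cartesian-product rule above can be illustrated with a minimal plain-Python sketch. The `composite_cardinality` helper and the list of per-leaf cardinalities are hypothetical illustrations, not part of the TorchRL API:

```python
import math

def composite_cardinality(leaf_cardinalities):
    # Hypothetical helper: a composite spec's cardinality is the cartesian
    # product of its leaves' cardinalities, i.e. the product of the number
    # of possible outcomes of every leaf spec.
    return math.prod(leaf_cardinalities)

# e.g. two discrete leaves with 3 and 4 outcomes -> 3 * 4 = 12 joint outcomes
print(composite_cardinality([3, 4]))  # 12
```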
- clear_device_()¶
Clears the device of the Composite.
- clone() T¶
Clones the Composite spec.
Locked specs will not produce locked clones.
- contains(item: torch.Tensor | tensordict.base.TensorDictBase) bool¶
If the value val could have been generated by the TensorSpec, returns True, otherwise False.
See is_in() for more information.
- cpu()¶
Casts the TensorSpec to ‘cpu’ device.
- cuda(device=None)¶
Casts the TensorSpec to ‘cuda’ device.
- property device: Union[device, str, int]¶
The device of the spec.
Only Composite specs can have a None device. All leaves must have a non-null device.
- empty()¶
Create a spec like self, but with no entries.
- encode(val: numpy.ndarray | list | torch.Tensor | tensordict.base.TensorDictBase, *, ignore_device: bool = False) torch.Tensor | tensordict.base.TensorDictBase¶
Encodes a value given the specified spec, and returns the corresponding tensor.
This method is to be used in environments that return a value (e.g., a numpy array) that can easily be mapped to the domain required by TorchRL. If the value is already a tensor, the spec will not change its value and will return it as-is.
- Parameters:
val (np.ndarray or torch.Tensor) – value to be encoded as tensor.
- Keyword Arguments:
ignore_device (bool, optional) – if True, the spec device will be ignored. This is used to group tensor casting within a call to TensorDict(..., device="cuda"), which is faster.
- Returns:
torch.Tensor matching the required tensor specs.
- enumerate(use_mask: bool = False) TensorDictBase¶
Returns all the samples that can be obtained from the TensorSpec.
The samples will be stacked along the first dimension.
This method is only implemented for discrete specs.
- Parameters:
use_mask (bool, optional) – If True and the spec has a mask, samples that are masked are excluded. Default is False.
- erase_memoize_cache() None¶
Clears the memoized cache for cached encode execution.
See also memoize_encode().
- expand(*shape)¶
Returns a new Spec with the expanded shape.
- Parameters:
*shape (tuple or iterable of int) – the new shape of the Spec. Must be broadcastable with the current shape: its length must be at least as long as the current shape length, and its trailing values must be compliant too; i.e. they can only differ from the current shape where the current dimension is a singleton.
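The broadcastability constraint can be sketched as a small shape check. The `can_expand` helper below is a hypothetical illustration of the rule, not a TorchRL function:

```python
def can_expand(current, target):
    # Hypothetical helper illustrating the expand() shape rule:
    # the target shape must be at least as long as the current shape.
    if len(target) < len(current):
        return False
    # Compare trailing dimensions: a dimension may differ from the
    # target only if the current dimension is a singleton (size 1).
    for cur, tgt in zip(reversed(current), reversed(target)):
        if cur != tgt and cur != 1:
            return False
    return True

print(can_expand((1, 4), (3, 3, 4)))  # True: leading dim added, singleton expanded
print(can_expand((2, 4), (3, 3, 4)))  # False: 2 is not a singleton
```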
- flatten(start_dim: int, end_dim: int) T¶
Flattens a TensorSpec.
Check flatten() for more information on this method.
- get(item, default=_NoDefault.ZERO)¶
Gets an item from the Composite.
If the item is absent, a default value can be passed.
- classmethod implements_for_spec(torch_function: Callable) Callable¶
Register a torch function override for TensorSpec.
- index(index: Union[int, Tensor, ndarray, slice, list], tensor_to_index: torch.Tensor | tensordict.base.TensorDictBase) torch.Tensor | tensordict.base.TensorDictBase¶
Indexes the input tensor.
This method is to be used with specs that encode one or more categorical variables (e.g., OneHot or Categorical), such that indexing of a tensor with a sample can be done without caring about the actual representation of the index.
- Parameters:
index (int, torch.Tensor, slice or list) – index of the tensor
tensor_to_index – tensor to be indexed
- Returns:
indexed tensor
- Examples:
>>> from torchrl.data import OneHot
>>> import torch
>>>
>>> one_hot = OneHot(n=100)
>>> categ = one_hot.to_categorical_spec()
>>> idx_one_hot = torch.zeros((100,), dtype=torch.bool)
>>> idx_one_hot[50] = 1
>>> print(one_hot.index(idx_one_hot, torch.arange(100)))
tensor(50)
>>> idx_categ = one_hot.to_categorical(idx_one_hot)
>>> print(categ.index(idx_categ, torch.arange(100)))
tensor(50)
- is_empty(recurse: bool = False)¶
Whether the composite spec contains specs or not.
- Parameters:
recurse (bool) – whether to recursively assess if the spec is empty. If True, will return True if there are no leaves. If False (default), will return whether there is any spec defined at the root level.
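The two recursion modes can be sketched by modeling a composite spec as a nested dict, where sub-dicts stand for sub-specs and any other value stands for a leaf spec. This `is_empty` helper is a hypothetical illustration of the documented semantics, not the TorchRL implementation:

```python
def is_empty(spec, recurse=False):
    # Hypothetical sketch: spec is a nested dict; sub-dicts are
    # sub-specs, any other value is a leaf spec.
    if not recurse:
        # Root-level check only: empty iff no entries at the root.
        return len(spec) == 0
    # Recursive check: empty iff there is no leaf anywhere.
    return all(isinstance(v, dict) and is_empty(v, recurse=True)
               for v in spec.values())

nested = {"next": {}}                   # one root entry, but no leaves
print(is_empty(nested))                 # False: the root has an entry
print(is_empty(nested, recurse=True))   # True: there is no leaf anywhere
```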
- is_in(value) bool¶
If the value val could have been generated by the TensorSpec, returns True, otherwise False.
More precisely, the is_in method checks that the value val is within the limits defined by the space attribute (the box), and that the dtype, device, shape and potentially other metadata match those of the spec. If any of these checks fails, the is_in method will return False.
- Parameters:
val (torch.Tensor) – value to be checked.
- Returns:
boolean indicating whether the value belongs to the TensorSpec box.
- items(include_nested: bool = False, leaves_only: bool = False, *, is_leaf: collections.abc.Callable[[type], bool] | None = None, step_mdp_static_only: bool = False) _CompositeSpecItemsView¶
Items of the Composite.
- Parameters:
include_nested (bool, optional) – if False, the returned keys will not be nested. They will represent only the immediate children of the root, and not the whole nested sequence, i.e. a Composite(next=Composite(obs=None)) will lead to the keys ["next"]. Default is False, i.e. nested keys will not be returned.
leaves_only (bool, optional) – if False, the values returned will contain every level of nesting, i.e. a Composite(next=Composite(obs=None)) will lead to the keys ["next", ("next", "obs")]. Default is False.
- Keyword Arguments:
is_leaf (callable, optional) – reads a type and returns a boolean indicating if that type should be seen as a leaf. By default, all non-Composite nodes are considered as leaves.
step_mdp_static_only (bool, optional) – if True, only keys that are static under step_mdp will be returned. Default is False.
- keys(include_nested: bool = False, leaves_only: bool = False, *, is_leaf: collections.abc.Callable[[type], bool] | None = None, step_mdp_static_only: bool = False) _CompositeSpecKeysView¶
Keys of the Composite.
The keys reflect those of tensordict.TensorDict.
- Parameters:
include_nested (bool, optional) – if False, the returned keys will not be nested. They will represent only the immediate children of the root, and not the whole nested sequence, i.e. a Composite(next=Composite(obs=None)) will lead to the keys ["next"]. Default is False, i.e. nested keys will not be returned.
leaves_only (bool, optional) – if False, the values returned will contain every level of nesting, i.e. a Composite(next=Composite(obs=None)) will lead to the keys ["next", ("next", "obs")]. Default is False.
- Keyword Arguments:
is_leaf (callable, optional) – reads a type and returns a boolean indicating if that type should be seen as a leaf. By default, all non-Composite nodes are considered as leaves.
step_mdp_static_only (bool, optional) – if True, only keys that are static under step_mdp will be returned. Default is False.
- lock_(recurse: bool | None = None) None¶
Locks the Composite and prevents modification of its content.
The recurse argument controls whether the lock will be propagated to sub-specs. The current default is False, but it will be turned to True for consistency with the TensorDict API in v0.8.
Examples
>>> shape = [3, 4, 5]
>>> spec = Composite(
...     a=Composite(
...         b=Composite(shape=shape[:3], device="cpu"), shape=shape[:2]
...     ),
...     shape=shape[:1],
... )
>>> spec["a"] = spec["a"].clone()
>>> recurse = False
>>> spec.lock_(recurse=recurse)
>>> try:
...     spec["a"] = spec["a"].clone()
... except RuntimeError:
...     print("failed!")
failed!
>>> try:
...     spec["a", "b"] = spec["a", "b"].clone()
...     print("succeeded!")
... except RuntimeError:
...     print("failed!")
succeeded!
>>> recurse = True
>>> spec.lock_(recurse=recurse)
>>> try:
...     spec["a", "b"] = spec["a", "b"].clone()
...     print("succeeded!")
... except RuntimeError:
...     print("failed!")
failed!
- make_neg_dim(dim: int)¶
Converts a specific dimension to -1.
- memoize_encode(mode: bool = True) None¶
Creates a cached sequence of callables for the encode method that speeds up its execution.
This should only be used whenever the input type, shape etc. are expected to be consistent across calls for a given spec.
- Parameters:
mode (bool, optional) – Whether the cache should be used. Defaults to True.
See also
The cache can be erased via erase_memoize_cache().
- property names¶
Returns the names of the dimensions of this Composite.
- property ndim¶
Number of dimensions of the spec shape.
Shortcut for len(spec.shape).
- ndimension()¶
Number of dimensions of the spec shape.
Shortcut for len(spec.shape).
- one(shape: Size = None) TensorDictBase¶
Returns a one-filled tensor in the box.
Note
Even though there is no guarantee that 1 belongs to the spec domain, this method will not raise an exception when this condition is violated. The primary use case of one is to generate empty data buffers, not meaningful data.
- Parameters:
shape (torch.Size) – shape of the one-tensor
- Returns:
a one-filled tensor sampled in the TensorSpec box.
- ones(shape: Size = None) torch.Tensor | tensordict.base.TensorDictBase¶
Proxy to one().
- pop(key: NestedKey, default: Any = _NoDefault.ZERO) Any¶
Removes and returns the value associated with the specified key from the composite spec.
This method searches for the given key in the composite spec, removes it, and returns its associated value. If the key is not found, it returns the provided default value if specified, otherwise raises a KeyError.
- Parameters:
key (NestedKey) – The key to be removed from the composite spec. It can be a single key or a nested key.
default (Any, optional) – The value to return if the specified key is not found in the composite spec. If not provided and the key is not found, a KeyError is raised.
- Returns:
The value associated with the specified key that was removed from the composite spec.
- Return type:
Any
- Raises:
KeyError – If the specified key is not found in the composite spec and no default value is provided.
- project(val: TensorDictBase) TensorDictBase¶
If the input tensor is not in the TensorSpec box, it is mapped back onto the box using some defined heuristic.
- Parameters:
val (torch.Tensor) – tensor to be mapped to the box.
- Returns:
a torch.Tensor belonging to the TensorSpec box.
- rand(shape: Size = None) TensorDictBase¶
Returns a random tensor in the space defined by the spec.
The sampling will be done uniformly over the space, unless the box is unbounded in which case normal values will be drawn.
- Parameters:
shape (torch.Size) – shape of the random tensor
- Returns:
a random tensor sampled in the TensorSpec box.
- refine_names(*names)¶
Refines the dimension names of self according to names.
Refining is a special case of renaming that “lifts” unnamed dimensions. A None dim can be refined to have any name; a named dim can only be refined to have the same name.
Because named specs can coexist with unnamed specs, refining names gives a nice way to write named-spec-aware code that works with both named and unnamed specs.
names may contain up to one Ellipsis (…). The Ellipsis is expanded greedily; it is expanded in-place to fill names to the same length as self.ndim using names from the corresponding indices of self.names.
Returns: the same composite spec with dimensions named according to the input.
Examples
>>> spec = Composite({}, shape=[3, 4, 5, 6])
>>> spec_refined = spec.refine_names(None, None, None, "d")
>>> assert spec_refined.names == [None, None, None, "d"]
>>> spec_refined = spec.refine_names("a", None, None, "d")
>>> assert spec_refined.names == ["a", None, None, "d"]
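The refinement rule (a None dim can take any name; a named dim can only keep its name; one Ellipsis expands greedily from the current names) can be sketched in plain Python. This simplified `refine_names` helper operates on lists of names and is a hypothetical illustration, not the TorchRL implementation:

```python
def refine_names(current, names):
    # Hypothetical sketch of the refinement rule over name lists.
    names = list(names)
    # Expand a single Ellipsis greedily, filling from the current
    # names at the corresponding positions.
    if Ellipsis in names:
        i = names.index(Ellipsis)
        pad = len(current) - (len(names) - 1)
        names = names[:i] + list(current[i:i + pad]) + names[i + 1:]
    if len(names) != len(current):
        raise ValueError("refine_names expects as many names as dimensions")
    refined = []
    for cur, new in zip(current, names):
        # A None dim can be refined to any name; a named dim can only
        # be refined to the same name (None leaves it unchanged here).
        if cur is not None and new is not None and cur != new:
            raise ValueError(f"cannot refine {cur!r} to {new!r}")
        refined.append(new if new is not None else cur)
    return refined

print(refine_names([None, None, None, None], ["a", None, None, "d"]))
# ['a', None, None, 'd']
```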
- sample(shape: Size = None) torch.Tensor | tensordict.base.TensorDictBase¶
Returns a random tensor in the space defined by the spec.
See rand() for details.
- separates(*keys: NestedKey, default: Any = None) Composite¶
Splits the composite spec by extracting specified keys and their associated values into a new composite spec.
This method iterates over the provided keys, removes them from the current composite spec, and adds them to a new composite spec. If a key is not found, the specified default value is used. The new composite spec is returned.
- Parameters:
*keys (NestedKey) – One or more keys to be extracted from the composite spec. Each key can be a single key or a nested key.
default (Any, optional) – The value to use if a specified key is not found in the composite spec. Defaults to None.
- Returns:
A new composite spec containing the extracted keys and their associated values.
- Return type:
Composite
Note
If none of the specified keys are found, the method returns None.
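The splitting behavior can be sketched by modeling the spec as a flat dict. This `separates` helper is a hypothetical illustration of the documented semantics (extract keys destructively; return None when nothing was found), not the TorchRL implementation:

```python
def separates(spec, *keys, default=None):
    # Hypothetical sketch: pop each requested key out of the spec
    # dict into a new dict, falling back to `default` when absent.
    out = {}
    found = False
    for key in keys:
        if key in spec:
            out[key] = spec.pop(key)
            found = True
        else:
            out[key] = default
    # Mirror the documented behavior: None when no key was found.
    return out if found else None

spec = {"obs": "obs_spec", "action": "act_spec", "reward": "rew_spec"}
extracted = separates(spec, "action", "reward")
print(extracted)  # {'action': 'act_spec', 'reward': 'rew_spec'}
print(spec)       # {'obs': 'obs_spec'}
```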
- set(name: str, spec: TensorSpec) StackedComposite¶
Sets a spec in the Composite spec.
- squeeze(dim: int | None = None)¶
Returns a new Spec with all the dimensions of size 1 removed.
When dim is given, a squeeze operation is done only in that dimension.
- Parameters:
dim (int or None) – the dimension to apply the squeeze operation to
- to(dest: Union[dtype, device, str, int]) T¶
Casts a TensorSpec to a device or a dtype.
Returns the same spec if no change is made.
- to_numpy(val: TensorDict, safe: bool | None = None) dict¶
Returns the np.ndarray correspondent of an input tensor.
This is intended to be the inverse operation of encode().
- Parameters:
val (torch.Tensor) – tensor to be transformed into a numpy array.
safe (bool) – boolean value indicating whether a check should be performed on the value against the domain of the spec. Defaults to the value of the CHECK_SPEC_ENCODE environment variable.
- Returns:
a np.ndarray.
- type_check(value: torch.Tensor | tensordict.base.TensorDictBase, selected_keys: tensordict._nestedkey.NestedKey | collections.abc.Sequence[tensordict._nestedkey.NestedKey] | None = None)¶
Checks the input value dtype against the TensorSpec dtype and raises an exception if they don't match.
- Parameters:
value (torch.Tensor) – tensor whose dtype has to be checked.
selected_keys (NestedKey or sequence of NestedKey, optional) – if the TensorSpec has keys, the value dtype will be checked against the spec pointed to by the indicated key(s).
- unflatten(dim: int, sizes: tuple[int]) T¶
Unflattens a TensorSpec.
Check unflatten() for more information on this method.
- unlock_(recurse: bool | None = None) T¶
Unlocks the Composite and allows modification of its content.
This is only a first-level lock modification, unless specified otherwise through the recurse arg.
- unsqueeze(dim: int)¶
Returns a new Spec with one more singleton dimension (at the position indicated by dim).
- Parameters:
dim (int or None) – the dimension to apply the unsqueeze operation to.
- values(include_nested: bool = False, leaves_only: bool = False, *, is_leaf: collections.abc.Callable[[type], bool] | None = None, step_mdp_static_only: bool = False) _CompositeSpecValuesView¶
Values of the Composite.
- Parameters:
include_nested (bool, optional) – if False, the returned keys will not be nested. They will represent only the immediate children of the root, and not the whole nested sequence, i.e. a Composite(next=Composite(obs=None)) will lead to the keys ["next"]. Default is False, i.e. nested keys will not be returned.
leaves_only (bool, optional) – if False, the values returned will contain every level of nesting, i.e. a Composite(next=Composite(obs=None)) will lead to the keys ["next", ("next", "obs")]. Default is False.
- Keyword Arguments:
is_leaf (callable, optional) – reads a type and returns a boolean indicating if that type should be seen as a leaf. By default, all non-Composite nodes are considered as leaves.
step_mdp_static_only (bool, optional) – if True, only keys that are static under step_mdp will be returned. Default is False.
- zero(shape: Size = None) TensorDictBase¶
Returns a zero-filled tensor in the box.
Note
Even though there is no guarantee that 0 belongs to the spec domain, this method will not raise an exception when this condition is violated. The primary use case of zero is to generate empty data buffers, not meaningful data.
- Parameters:
shape (torch.Size) – shape of the zero-tensor
- Returns:
a zero-filled tensor sampled in the TensorSpec box.
- zeros(shape: Size = None) torch.Tensor | tensordict.base.TensorDictBase¶
Proxy to zero().