RayCollector¶
- class torchrl.collectors.distributed.RayCollector(create_env_fn: collections.abc.Callable | torchrl.envs.common.EnvBase | list[collections.abc.Callable] | list[torchrl.envs.common.EnvBase], policy: collections.abc.Callable[[tensordict.base.TensorDictBase], tensordict.base.TensorDictBase] | None = None, *, policy_factory: collections.abc.Callable[[], collections.abc.Callable] | list[collections.abc.Callable[[], collections.abc.Callable]] | None = None, trust_policy: bool | None = None, frames_per_batch: int, total_frames: int = -1, device: torch.device | list[torch.device] | None = None, storing_device: torch.device | list[torch.device] | None = None, env_device: torch.device | list[torch.device] | None = None, policy_device: torch.device | list[torch.device] | None = None, max_frames_per_traj=-1, init_random_frames=-1, reset_at_each_iter=False, postproc=None, split_trajs=False, exploration_type=InteractionType.RANDOM, collector_class: ~collections.abc.Callable[[~tensordict._td.TensorDict], ~tensordict._td.TensorDict] = <class 'torchrl.collectors._single.Collector'>, collector_kwargs: dict[str, typing.Any] | list[dict] | None = None, num_workers_per_collector: int = 1, sync: bool = False, ray_init_config: dict[str, typing.Any] | None = None, remote_configs: dict[str, typing.Any] | list[dict[str, typing.Any]] | None = None, num_collectors: int | None = None, update_after_each_batch: bool = False, max_weight_update_interval: int = -1, replay_buffer: torchrl.data.replay_buffers.replay_buffers.ReplayBuffer | None = None, weight_updater: torchrl.collectors.weight_update.WeightUpdaterBase | collections.abc.Callable[[], torchrl.collectors.weight_update.WeightUpdaterBase] | None = None, weight_sync_schemes: dict[str, torchrl.weight_update.weight_sync_schemes.WeightSyncScheme] | None = None, weight_recv_schemes: dict[str, torchrl.weight_update.weight_sync_schemes.WeightSyncScheme] | None = None, use_env_creator: bool = False, no_cuda_sync: bool | None = None)[source]¶
Distributed data collector with Ray backend.
This class is a Ray-based solution to instantiate and coordinate multiple data collectors in a distributed cluster. Like TorchRL's non-distributed collectors, this collector is an iterable that yields TensorDicts until a target number of collected frames is reached, but it handles distributed data collection under the hood.
The dictionary input parameter “ray_init_config” can be used to provide the kwargs passed to the Ray initialization method ray.init(). If “ray_init_config” is not provided, the default behavior is to autodetect an existing Ray cluster, or to start a new Ray instance locally if no existing cluster is found. Refer to the Ray documentation for advanced initialization kwargs.
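For instance, a cluster address or a dashboard flag could be forwarded as follows (a minimal sketch; the ray.init() kwargs shown are illustrative and depend on your Ray setup):
>>> from torch import nn
>>> from tensordict.nn import TensorDictModule
>>> from torchrl.envs.libs.gym import GymEnv
>>> from torchrl.collectors.distributed import RayCollector
>>> env_maker = lambda: GymEnv("Pendulum-v1", device="cpu")
>>> policy = TensorDictModule(nn.Linear(3, 1), in_keys=["observation"], out_keys=["action"])
>>> collector = RayCollector(
...     create_env_fn=[env_maker],
...     policy=policy,
...     frames_per_batch=200,
...     total_frames=10000,
...     ray_init_config={"address": "auto", "include_dashboard": False},
... )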
Similarly, the dictionary input parameter “remote_configs” can be used to specify the kwargs passed to ray.remote() when each remote collector actor is created, including the collector compute resources. The sum of all collector resources should be available in the cluster. Refer to the Ray documentation for advanced configuration of the ray.remote() method. The default kwargs are:
>>> kwargs = {
...     "num_cpus": 1,
...     "num_gpus": 0.2,
...     "memory": 2 * 1024 ** 3,
... }
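To override these defaults, a single dict (applied to all collectors) or one dict per collector can be passed. A minimal sketch, with purely illustrative resource values (env_maker and policy as defined above):
>>> collector = RayCollector(
...     create_env_fn=[env_maker],
...     policy=policy,
...     frames_per_batch=200,
...     total_frames=10000,
...     remote_configs={"num_cpus": 2, "num_gpus": 0, "memory": 4 * 1024 ** 3},
... )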
The coordination between collector instances can be specified as “synchronous” or “asynchronous”. In synchronous coordination, this class waits for all remote collectors to collect a rollout, concatenates all rollouts into a single TensorDict instance, and finally yields the concatenated data. If the coordination is carried out asynchronously, this class yields the rollouts as they become available from the individual remote collectors.
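For example, the two modes are selected through the sync flag (a sketch reusing env_maker and policy from above; batch sizes are illustrative):
>>> # Synchronous: each yielded batch aggregates rollouts from all remote collectors
>>> sync_collector = RayCollector(
...     create_env_fn=[env_maker, env_maker],
...     policy=policy,
...     frames_per_batch=400,
...     total_frames=4000,
...     sync=True,
... )
>>> # Asynchronous (default): a batch is yielded as soon as any single collector finishes
>>> async_collector = RayCollector(
...     create_env_fn=[env_maker, env_maker],
...     policy=policy,
...     frames_per_batch=400,
...     total_frames=4000,
...     sync=False,
... )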
- Parameters:
create_env_fn (Callable or List[Callable]) – a list of callables, each returning an instance of EnvBase.
policy (Callable, optional) – Policy to be executed in the environment. Must accept a tensordict.tensordict.TensorDictBase object as input. If None is provided, the policy used will be a RandomPolicy instance with the environment action_spec. Accepted policies are usually subclasses of TensorDictModuleBase. This is the recommended usage of the collector. Other callables are accepted too: if the policy is not a TensorDictModuleBase (e.g., a regular Module instance), it will be wrapped in an nn.Module first. Then, the collector will try to assess whether these modules require wrapping in a TensorDictModule. If the policy forward signature matches any of forward(self, tensordict), forward(self, td) or forward(self, <anything>: TensorDictBase) (or any typing with a single argument typed as a subclass of TensorDictBase), then the policy won’t be wrapped in a TensorDictModule. In all other cases, an attempt to wrap it will be made as such: TensorDictModule(policy, in_keys=env_obs_key, out_keys=env.action_keys).
Note
If the policy needs to be passed as a policy factory (e.g., in case it mustn’t be serialized / pickled directly), the policy_factory argument should be used instead.
- Keyword Arguments:
policy_factory (Callable[[], Callable], list of Callable[[], Callable], optional) –
a callable (or list of callables) that returns a policy instance. This is mutually exclusive with the policy argument.
Note
policy_factory comes in handy whenever the policy cannot be serialized.
trust_policy (bool, optional) – if True, a non-TensorDictModule policy will be assumed to be compatible with the collector. This defaults to True for CudaGraphModules and False otherwise.
frames_per_batch (int) – A keyword-only argument representing the total number of elements in a batch.
total_frames (int, optional) – lower bound of the total number of frames returned by the collector. The iterator will stop once the total number of frames equals or exceeds the total number of frames passed to the collector. Default value is -1, which means no target total number of frames (i.e., the collector will run indefinitely).
device (int, str or torch.device, optional) – The generic device of the collector. The device arg fills any non-specified device: if device is not None and any of storing_device, policy_device or env_device is not specified, its value will be set to device. Defaults to None (no default device). Lists of devices are supported.
storing_device (int, str or torch.device, optional) – The remote device on which the output TensorDict will be stored. If device is passed and storing_device is None, it will default to the value indicated by device. For long trajectories, it may be necessary to store the data on a different device than the one where the policy and env are executed. Defaults to None (the output tensordict isn’t on a specific device, leaf tensors sit on the device where they were created). Lists of devices are supported.
env_device (int, str or torch.device, optional) – The remote device on which the environment should be cast (or executed if that functionality is supported). If not specified and the env has a non-None device, env_device will default to that value. If device is passed and env_device=None, it will default to device. If the value of env_device thus specified differs from policy_device and one of them is not None, the data will be cast to env_device before being passed to the env (i.e., passing different devices to policy and env is supported). Defaults to None. Lists of devices are supported.
policy_device (int, str or torch.device, optional) – The remote device on which the policy should be cast. If device is passed and policy_device=None, it will default to device. If the value of policy_device thus specified differs from env_device and one of them is not None, the data will be cast to policy_device before being passed to the policy (i.e., passing different devices to policy and env is supported). Defaults to None. Lists of devices are supported.
create_env_kwargs (dict, optional) – Dictionary of kwargs for create_env_fn.
max_frames_per_traj (int, optional) – Maximum steps per trajectory. Note that a trajectory can span across multiple batches (unless reset_at_each_iter is set to True, see below). Once a trajectory reaches n_steps, the environment is reset. If the environment wraps multiple environments together, the number of steps is tracked for each environment independently. Negative values are allowed, in which case this argument is ignored. Defaults to None (i.e., no maximum number of steps).
init_random_frames (int, optional) – Number of frames for which the policy is ignored before it is called. This feature is mainly intended to be used in offline/model-based settings, where a batch of random trajectories can be used to initialize training. If provided, it will be rounded up to the closest multiple of frames_per_batch. Defaults to None (i.e., no random frames).
reset_at_each_iter (bool, optional) – Whether environments should be reset at the beginning of a batch collection. Defaults to False.
postproc (Callable, optional) – A post-processing transform, such as a Transform or a MultiStep instance. Defaults to None.
split_trajs (bool, optional) – Boolean indicating whether the resulting TensorDict should be split according to the trajectories. See split_trajectories() for more information. Defaults to False.
exploration_type (ExplorationType, optional) – interaction mode to be used when collecting data. Must be one of torchrl.envs.utils.ExplorationType.DETERMINISTIC, torchrl.envs.utils.ExplorationType.RANDOM, torchrl.envs.utils.ExplorationType.MODE or torchrl.envs.utils.ExplorationType.MEAN.
collector_class (Python class or constructor) – a collector class to be remotely instantiated. Can be Collector, MultiSyncCollector, MultiAsyncCollector or a derived class of these. Defaults to Collector.
collector_kwargs (dict or list, optional) – a dictionary of parameters to be passed to the remote data-collector. If a list is provided, each element will correspond to an individual set of keyword arguments for the dedicated collector.
num_workers_per_collector (int) – the number of copies of the env constructor to be used on the remote nodes. Defaults to 1 (a single env per collector). On a single worker node, all the sub-workers will be executing the same environment. If different environments need to be executed, they should be dispatched across worker nodes, not subnodes.
ray_init_config (dict, optional) – kwargs used to call ray.init().
remote_configs (list of dicts, optional) – ray resource specs for each remote collector. A single dict can be provided as well, and will be used in all collectors.
num_collectors (int, optional) – total number of collectors to be instantiated.
sync (bool) – if True, the resulting tensordict is a stack of all the tensordicts collected on each node. If False (default), each tensordict results from a separate node in a “first-ready, first-served” fashion.
update_after_each_batch (bool, optional) – if True, the weights will be updated after each collection. For sync=True, this means that all workers will see their weights updated. For sync=False, only the worker from which the data has been gathered will be updated. This is equivalent to max_weight_update_interval=0. Defaults to False, i.e., updates have to be executed manually through torchrl.collectors.DataCollector.update_policy_weights_().
max_weight_update_interval (int, optional) – the maximum number of batches that can be collected before the policy weights of a worker are updated. For sync collections, this parameter is overwritten by update_after_each_batch. For async collections, it may be that one worker has not seen its parameters being updated for a certain time even if update_after_each_batch is turned on. Defaults to -1 (no forced update).
replay_buffer (RayReplayBuffer, optional) – if provided, the collector will not yield tensordicts but populate the buffer instead. Defaults to None.
Note
Although it is not enforced (to allow users to implement their own replay buffer class), a RayReplayBuffer instance should be used here.
weight_updater (WeightUpdaterBase or constructor, optional) – (Deprecated) An instance of WeightUpdaterBase or its subclass, responsible for updating the policy weights on remote inference workers managed by Ray. If not provided, a RayWeightUpdater will be used by default, leveraging Ray’s distributed capabilities. Consider using a constructor if the updater needs to be serialized.
weight_sync_schemes (dict[str, WeightSyncScheme], optional) – Dictionary of weight sync schemes for SENDING weights to remote collector workers. Keys are model identifiers (e.g., “policy”) and values are WeightSyncScheme instances configured to send weights via Ray. This is the recommended way to configure weight synchronization for propagating weights from the main process to remote collectors. If not provided, defaults to {"policy": RayWeightSyncScheme()}.
Note
Weight synchronization is lazily initialized. When using policy_factory without a central policy, weight sync is deferred until the first call to update_policy_weights_() with actual weights. This allows sub-collectors to each have their own independent policies created via the factory. If you have a central policy and want to sync its weights to remote collectors, call update_policy_weights_(policy) before starting iteration.
weight_recv_schemes (dict[str, WeightSyncScheme], optional) – Dictionary of weight sync schemes for RECEIVING weights from a parent process or training loop. Keys are model identifiers (e.g., “policy”) and values are WeightSyncScheme instances configured to receive weights. This is typically used when RayCollector is itself a worker in a larger distributed setup. Defaults to None.
use_env_creator (bool, optional) – if True, the environment constructor functions will be wrapped in EnvCreator. This is useful for multiprocessed settings where shared memory needs to be managed, but Ray has its own object storage mechanism, so this is typically not needed. Defaults to False.
Examples
>>> from torch import nn
>>> from tensordict.nn import TensorDictModule
>>> from torchrl.envs.libs.gym import GymEnv
>>> from torchrl.collectors import Collector
>>> from torchrl.collectors.distributed import RayCollector
>>> env_maker = lambda: GymEnv("Pendulum-v1", device="cpu")
>>> policy = TensorDictModule(nn.Linear(3, 1), in_keys=["observation"], out_keys=["action"])
>>> distributed_collector = RayCollector(
...     create_env_fn=[env_maker],
...     policy=policy,
...     collector_class=Collector,
...     max_frames_per_traj=50,
...     init_random_frames=-1,
...     reset_at_each_iter=False,
...     collector_kwargs={
...         "device": "cpu",
...         "storing_device": "cpu",
...     },
...     num_collectors=1,
...     total_frames=10000,
...     frames_per_batch=200,
... )
>>> for i, data in enumerate(distributed_collector):
...     if i == 2:
...         print(data)
...         break
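As a complement, the following sketch illustrates the policy_factory pattern mentioned in the notes above; make_policy stands in for any constructor of a policy that cannot be serialized, and env_maker and policy are reused from the example above:
>>> def make_policy():
...     return TensorDictModule(nn.Linear(3, 1), in_keys=["observation"], out_keys=["action"])
>>> factory_collector = RayCollector(
...     create_env_fn=[env_maker],
...     policy_factory=make_policy,
...     frames_per_batch=200,
...     total_frames=2000,
... )
>>> # With a central trained policy, push its weights before iterating
>>> factory_collector.update_policy_weights_(policy)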
- add_collectors(create_env_fn, num_envs, policy, collector_kwargs, remote_configs)[source]¶
Creates and adds a number of remote collectors to the set.
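A minimal sketch, assuming the arguments mirror those of the constructor (env_maker and policy as in the class-level examples; resource values are illustrative):
>>> collector.add_collectors(
...     create_env_fn=[env_maker],
...     num_envs=1,
...     policy=policy,
...     collector_kwargs={"device": "cpu"},
...     remote_configs={"num_cpus": 1, "num_gpus": 0, "memory": 2 * 1024 ** 3},
... )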
- async async_shutdown(shutdown_ray: bool = False)[source]¶
Finishes processes started by the collector during async execution.
- Parameters:
shutdown_ray (bool) – If True, also shutdown the Ray cluster. Defaults to False. Note: Setting this to True will kill all Ray actors in the cluster, including any replay buffers or other services. Only set to True if you’re sure you want to shut down the entire Ray cluster.
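A minimal usage sketch from a synchronous context:
>>> import asyncio
>>> asyncio.run(collector.async_shutdown())  # stops collection but leaves the Ray cluster running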
- cascade_execute(attr_path: str, *args, **kwargs) → Any¶
Execute a method on a nested attribute of this collector.
This method allows remote callers to invoke methods on nested attributes of the collector without needing to know the full structure. It’s particularly useful for calling methods on weight sync schemes from the sender side.
- Parameters:
attr_path – Full path to the callable, e.g., “_receiver_schemes[‘model_id’]._set_dist_connection_info”
*args – Positional arguments to pass to the method.
**kwargs – Keyword arguments to pass to the method.
- Returns:
The return value of the method call.
Examples
>>> collector.cascade_execute(
...     "_receiver_schemes['policy']._set_dist_connection_info",
...     connection_info_ref,
...     worker_idx=0,
... )
- init_updater(*args, **kwargs)¶
Initialize the weight updater with custom arguments.
This method passes the arguments to the weight updater’s init method. If no weight updater is set, this is a no-op.
- Parameters:
*args – Positional arguments for weight updater initialization
**kwargs – Keyword arguments for weight updater initialization
- load_state_dict(state_dict: collections.OrderedDict | list[collections.OrderedDict]) → None[source]¶
Calls parent method for each remote collector.
- pause()¶
Context manager that pauses the collector if it is running free.
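A minimal sketch, assuming the collector is running free (e.g., writing to a replay buffer in the background) and its weights should be updated without racing against collection:
>>> with collector.pause():
...     collector.update_policy_weights_(policy)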
- receive_weights(policy_or_weights: tensordict.base.TensorDictBase | tensordict.nn.common.TensorDictModuleBase | torch.nn.modules.module.Module | dict | None = None, *, weights: tensordict.base.TensorDictBase | dict | None = None, policy: tensordict.nn.common.TensorDictModuleBase | torch.nn.modules.module.Module | None = None) → None¶
Receive and apply weights to the collector’s policy.
This method applies weights to the local policy. When receiver schemes are registered, it delegates to those schemes. Otherwise, it directly applies the provided weights.
The method accepts weights in multiple forms for convenience; see the parameter descriptions below.
Examples
>>> # Receive from registered schemes (distributed collectors)
>>> collector.receive_weights()
>>>
>>> # Apply weights from a policy module (positional)
>>> collector.receive_weights(trained_policy)
>>>
>>> # Apply weights from a TensorDict (positional)
>>> collector.receive_weights(weights_tensordict)
>>>
>>> # Use keyword arguments for clarity
>>> collector.receive_weights(weights=weights_td)
>>> collector.receive_weights(policy=trained_policy)
- Parameters:
policy_or_weights – The weights to apply. Can be:
nn.Module: A policy module whose weights will be extracted and applied
TensorDictModuleBase: A TensorDict module whose weights will be extracted
TensorDictBase: A TensorDict containing weights
dict: A regular dict containing weights
None: Receive from registered schemes or mirror from original policy
- Keyword Arguments:
weights – Alternative to positional argument. A TensorDict or dict containing weights to apply. Cannot be used together with policy_or_weights or policy.
policy – Alternative to positional argument. An nn.Module or TensorDictModuleBase whose weights will be extracted. Cannot be used together with policy_or_weights or weights.
- Raises:
ValueError – If conflicting parameters are provided or if arguments are passed when receiver schemes are registered.
- register_scheme_receiver(weight_recv_schemes: dict[str, torchrl.weight_update.weight_sync_schemes.WeightSyncScheme], *, synchronize_weights: bool = True)¶
Set up receiver schemes for this collector to receive weights from parent collectors.
This method initializes receiver schemes and stores them in _receiver_schemes for later use by _receive_weights_scheme() and receive_weights().
Receiver schemes enable cascading weight updates across collector hierarchies:
- Parent collector sends weights via its weight_sync_schemes (senders)
- Child collector receives weights via its weight_recv_schemes (receivers)
- If the child is also a parent (intermediate node), it can propagate weights to its own children
- Parameters:
weight_recv_schemes (dict[str, WeightSyncScheme]) – Dictionary of {model_id: WeightSyncScheme} to set up as receivers. These schemes will receive weights from parent collectors.
- Keyword Arguments:
synchronize_weights (bool, optional) – If True, synchronize weights immediately after registering the schemes. Defaults to True.
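A minimal sketch, assuming RayWeightSyncScheme is importable from the same module as WeightSyncScheme (see the weight_sync_schemes constructor argument above, where it is used as the default):
>>> from torchrl.weight_update.weight_sync_schemes import RayWeightSyncScheme  # assumed import path
>>> collector.register_scheme_receiver({"policy": RayWeightSyncScheme()})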
- property remote_collectors¶
Returns list of remote collectors.
- set_seed(seed: int, static_seed: bool = False) → list[int][source]¶
Calls parent method for each remote collector iteratively and returns final seed.
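A minimal usage sketch:
>>> out_seeds = collector.set_seed(0)  # seeds each remote collector in turn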
- shutdown(timeout: float | None = None, shutdown_ray: bool = False) → None[source]¶
Finishes processes started by the collector.
- Parameters:
timeout (float, optional) – Timeout for stopping the collection thread.
shutdown_ray (bool) – If True, also shut down the Ray cluster. Defaults to False. Note: Setting this to True will kill all Ray actors in the cluster, including any replay buffers or other services. Only set to True if you’re sure you want to shut down the entire Ray cluster.
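A minimal sketch contrasting the two modes:
>>> collector.shutdown()  # stop remote collectors, keep the Ray cluster alive
>>> collector.shutdown(shutdown_ray=True)  # also tear down the entire Ray cluster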
- state_dict() → list[collections.OrderedDict][source]¶
Calls parent method for each remote collector and returns a list of results.
- update_policy_weights_(policy_or_weights: tensordict.base.TensorDictBase | tensordict.nn.common.TensorDictModuleBase | torch.nn.modules.module.Module | dict | None = None, *, weights: tensordict.base.TensorDictBase | dict | None = None, policy: tensordict.nn.common.TensorDictModuleBase | torch.nn.modules.module.Module | None = None, worker_ids: int | list[int] | torch.device | list[torch.device] | None = None, model_id: str | None = None, weights_dict: dict[str, Any] | None = None, **kwargs) → None¶
Update policy weights for the data collector.
This method synchronizes the policy weights used by the collector with the latest trained weights. It supports both local and remote weight updates, depending on the collector configuration.
The method accepts weights in multiple forms for convenience; see the parameter descriptions below.
Examples
>>> # Pass policy module as positional argument
>>> collector.update_policy_weights_(policy_module)
>>>
>>> # Pass TensorDict weights as positional argument
>>> collector.update_policy_weights_(weights_tensordict)
>>>
>>> # Use keyword arguments for clarity
>>> collector.update_policy_weights_(weights=weights_td, model_id="actor")
>>> collector.update_policy_weights_(policy=actor_module, model_id="actor")
>>>
>>> # Update multiple models atomically
>>> collector.update_policy_weights_(weights_dict={
...     "actor": actor_weights,
...     "critic": critic_weights,
... })
- Parameters:
policy_or_weights – The weights to update with. Can be:
nn.Module: A policy module whose weights will be extracted
TensorDictModuleBase: A TensorDict module whose weights will be extracted
TensorDictBase: A TensorDict containing weights
dict: A regular dict containing weights
None: Will try to get weights from the server using _get_server_weights()
- Keyword Arguments:
weights – Alternative to positional argument. A TensorDict or dict containing weights to update. Cannot be used together with policy_or_weights or policy.
policy – Alternative to positional argument. An nn.Module or TensorDictModuleBase whose weights will be extracted. Cannot be used together with policy_or_weights or weights.
worker_ids – Identifiers for the workers to update. Relevant when the collector has multiple workers. Can be an int, a list of ints, a device, or a list of devices.
model_id – The model identifier to update (default: "policy"). Cannot be used together with weights_dict.
weights_dict – Dictionary mapping model_id to weights for updating multiple models atomically. Keys should match the model_ids registered in weight_sync_schemes. Cannot be used together with model_id, policy_or_weights, weights, or policy.
- Raises:
TypeError – If worker_ids is provided but no weight_updater is configured.
ValueError – If conflicting parameters are provided.
Note
Users should extend the WeightUpdaterBase classes to customize the weight update logic for specific use cases.
See also
LocalWeightsUpdaterBase and RemoteWeightsUpdaterBase().
- property worker_idx: int | None¶
Get the worker index for this collector.
- Returns:
The worker index (0-indexed).
- Raises:
RuntimeError – If worker_idx has not been set.