torchrl.trainers.algorithms.configs.collectors.MultiAsyncCollectorConfig

class torchrl.trainers.algorithms.configs.collectors.MultiAsyncCollectorConfig(create_env_fn: Any = '???', num_workers: int | None = None, policy: Any = None, policy_factory: Any = None, frames_per_batch: int | None = None, init_random_frames: int | None = 0, total_frames: int = -1, device: str | None = None, storing_device: str | None = None, policy_device: str | None = None, env_device: str | None = None, create_env_kwargs: dict | None = None, collector_class: Any = None, max_frames_per_traj: int | None = None, reset_at_each_iter: bool = False, postproc: ConfigBase | None = None, split_trajs: bool = False, exploration_type: str = 'RANDOM', reset_when_done: bool = True, update_at_each_batch: bool = False, preemptive_threshold: float | None = None, num_threads: int | None = None, num_sub_threads: int = 1, cat_results: Any = None, set_truncated: bool = False, use_buffers: bool = False, replay_buffer: ConfigBase | None = None, extend_buffer: bool = False, trust_policy: bool = True, compile_policy: Any = None, cudagraph_policy: Any = None, no_cuda_sync: bool = False, weight_updater: Any = None, weight_sync_schemes: Any = None, weight_recv_schemes: Any = None, track_policy_version: bool = False, worker_idx: int | None = None, trajs_per_batch: int | None = None, trajs_per_write: int | None = None, init_fn: Any = None, _target_: str = 'torchrl.collectors.MultiAsyncCollector', _partial_: bool = False)

Hydra configuration for MultiAsyncCollector.

MultiAsyncCollector shares its constructor signature with MultiSyncCollector (both forward their arguments to the same multi-worker base class), so this config exposes the same keyword arguments.
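Example (a minimal sketch, not taken from this page: the CartPole env node, worker count, and frame budgets are illustrative assumptions, and the raw `_target_` dict stands in for whichever env config node a project would normally compose in YAML):

    >>> from hydra.utils import instantiate
    >>> from torchrl.trainers.algorithms.configs.collectors import MultiAsyncCollectorConfig
    >>> env_node = {
    ...     "_target_": "torchrl.envs.GymEnv",
    ...     "env_name": "CartPole-v1",
    ...     "_partial_": True,  # instantiate() then yields an env *factory*, not an env instance
    ... }
    >>> cfg = MultiAsyncCollectorConfig(
    ...     create_env_fn=env_node,
    ...     num_workers=4,
    ...     frames_per_batch=200,
    ...     total_frames=10_000,
    ... )
    >>> # Hydra resolves _target_ ('torchrl.collectors.MultiAsyncCollector') and
    >>> # forwards the remaining fields as keyword arguments.
    >>> collector = instantiate(cfg)
    >>> for batch in collector:
    ...     pass  # each `batch` holds 200 frames gathered asynchronously across workers
    >>> collector.shutdown()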
