torchrl.trainers.algorithms.configs.collectors.SyncDataCollectorConfig

class torchrl.trainers.algorithms.configs.collectors.SyncDataCollectorConfig(create_env_fn: ConfigBase = '???', policy: Any = None, policy_factory: Any = None, frames_per_batch: int | None = None, total_frames: int = -1, init_random_frames: int | None = 0, device: str | None = None, storing_device: str | None = None, policy_device: str | None = None, env_device: str | None = None, create_env_kwargs: dict | None = None, max_frames_per_traj: int | None = None, reset_at_each_iter: bool = False, postproc: Any = None, split_trajs: bool = False, exploration_type: str = 'RANDOM', return_same_td: bool = False, interruptor: Any = None, set_truncated: bool = False, use_buffers: bool = False, replay_buffer: Any = None, extend_buffer: bool = False, trust_policy: bool = True, compile_policy: Any = None, cudagraph_policy: Any = None, no_cuda_sync: bool = False, weight_updater: Any = None, track_policy_version: bool = False, _target_: str = 'torchrl.collectors.SyncDataCollector', _partial_: bool = False)

A configuration class for building a synchronous data collector. Its `_target_` field points to torchrl.collectors.SyncDataCollector, so instantiating the config produces a SyncDataCollector constructed with the fields above as keyword arguments; setting `_partial_=True` yields a partially applied constructor instead of an instance.
