torchrl.trainers.algorithms.configs.trainers.DQNTrainerConfig

class torchrl.trainers.algorithms.configs.trainers.DQNTrainerConfig(collector: Any, total_frames: int, optim_steps_per_batch: int | None, loss_module: Any, optimizer: Any, logger: Any, save_trainer_file: Any, replay_buffer: Any, frame_skip: int = 1, clip_grad_norm: bool = True, clip_norm: float | None = None, progress_bar: bool = True, seed: int | None = None, save_trainer_interval: int = 10000, log_interval: int = 10000, create_env_fn: Any = None, value_network: Any = None, target_net_updater: Any = None, eps_init: float = 1.0, eps_end: float = 0.05, annealing_num_steps: int = 250000, async_collection: bool = False, log_timings: bool = False, auto_log_optim_steps: bool = True, enable_logging: bool = True, log_rewards: bool = True, log_observations: bool = False, hooks: list[Any] | None = None, mixing_strategy: str | None = None, done_key: Any = 'done', terminated_key: Any = 'terminated', reward_key: Any = 'reward', episode_reward_key: Any = 'reward_sum', aggregated_reward_key: Any = None, aggregated_episode_reward_key: Any = None, action_key: Any = 'action', observation_key: Any = 'observation', _target_: str = 'torchrl.trainers.algorithms.configs.trainers._make_dqn_trainer')[source]

Hydra configuration for DQNTrainer.

Every keyword argument accepted by DQNTrainer.__init__ is exposed here as a configuration field.