RPCWeightSyncScheme

class torchrl.weight_update.RPCWeightSyncScheme(strategy: Literal['state_dict', 'tensordict'] = 'state_dict')

Weight synchronization for torch.distributed.rpc.

This scheme uses RPC calls to synchronize weights across distributed workers. Each remote collector gets its own transport, following the same pattern as multiprocess collectors.
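
A minimal construction sketch. The RPCDataCollector pairing and the weight_sync_schemes keyword are assumptions about the surrounding collector API, not guarantees made by this class:

    from torchrl.weight_update import RPCWeightSyncScheme

    # "state_dict" ships plain state-dicts over RPC; "tensordict"
    # ships TensorDict-formatted weights (see `strategy` above).
    scheme = RPCWeightSyncScheme(strategy="state_dict")

    # Assumed wiring: hand the scheme to an RPC-based collector,
    # keyed by model_id, e.g.
    # collector = RPCDataCollector(..., weight_sync_schemes={"policy": scheme})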

create_receiver() → WeightReceiver

Create a receiver for this scheme (legacy).

Returns:

WeightReceiver instance configured for this scheme.

create_sender() → WeightSender

Create a sender for this scheme (legacy).

Returns:

WeightSender instance configured for this scheme.
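
A sketch of the legacy construction path; the two endpoints are built explicitly and later wired up through init_on_sender() / init_on_worker():

    scheme = RPCWeightSyncScheme()

    sender = scheme.create_sender()      # main-process endpoint (legacy)
    receiver = scheme.create_receiver()  # worker-side endpoint (legacy)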

create_transport(pipe_or_context: Any) → TransportBackend

Create RPC-based transport for a specific remote collector.

Parameters:

pipe_or_context – A tuple of (collector_info, collector_rref, collector_class) for the remote collector.

Returns:

RPCTransport configured for this specific remote collector.
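
A sketch of building one transport per remote collector, continuing with the scheme constructed above. The worker name, collector class, and constructor arguments are placeholders for whatever your RPC setup provides:

    from torch.distributed import rpc

    # Placeholders: a named RPC worker hosting a remote collector instance.
    collector_info = rpc.get_worker_info("collector0")
    collector_rref = rpc.remote(collector_info, CollectorCls, args=collector_args)

    transport = scheme.create_transport(
        (collector_info, collector_rref, CollectorCls)
    )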

get_receiver() → WeightReceiver

Get the receiver instance.

Returns:

Receiver instance for receiving weights in this worker.

Raises:

RuntimeError – If init_on_worker() hasn’t been called yet.

get_sender() → WeightSender

Get the sender instance.

Returns:

Sender instance for sending weights to workers.

Raises:

RuntimeError – If init_on_sender() hasn’t been called yet.
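
The accessors are only valid after the matching init call; a short sketch (the collector context object is an assumption):

    scheme = RPCWeightSyncScheme()

    # get_sender() raises RuntimeError until init_on_sender() has run.
    scheme.init_on_sender("policy", context=collector)
    sender = scheme.get_sender()

    # Symmetrically, on each worker:
    # scheme.init_on_worker("policy", context=inner_collector)
    # receiver = scheme.get_receiver()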

init_on_sender(model_id: str, context: Any = None, **kwargs) → None

Initialize on the main process (sender side).

This method is called once in the collector’s _run_processes() method, after workers have been started and are ready to receive messages.

Parameters:
  • model_id – Identifier for the model being synchronized

  • context – Optional context object (e.g., collector) providing:
    - .pipes: list[mp.Connection]
    - .get_model(model_id: str) -> nn.Module
    - .get_cached_weights(model_id: str) -> TensorDict | None
    - .num_workers: int

  • **kwargs – Alternative to context (pipes, num_workers, model, cached_weights, etc.)
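
Both initialization styles, sketched with placeholder objects; the keyword names follow the parameter description above:

    # Style 1: a context object exposing .pipes, .get_model(),
    # .get_cached_weights() and .num_workers (e.g., the collector).
    scheme.init_on_sender("policy", context=collector)

    # Style 2: pass the pieces explicitly as keyword arguments.
    scheme.init_on_sender("policy", model=policy, num_workers=4)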

init_on_worker(model_id: str, context: Any = None, **kwargs) → None

Initialize on worker process (receiver side).

This method is called once in each worker’s initialization.

Parameters:
  • model_id – Identifier for the model being synchronized

  • context – Optional context object (e.g., inner collector) providing:
    - .pipe: mp.Connection
    - .get_model(model_id: str) -> nn.Module

  • **kwargs – Alternative to context (pipe, model, etc.)
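
The worker-side counterpart, sketched with a placeholder local model:

    # Called once in each worker's initialization.
    scheme.init_on_worker("policy", model=local_policy)
    receiver = scheme.get_receiver()  # now available on this worker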

prepare_weights(weights: Any, model_id: str, strategy: WeightStrategy, context: Any = None) → Any

Prepare weights for sending.

This method handles weight extraction, conversion, and any scheme-specific preparation (e.g., cache lookups for SharedMemWeightSyncScheme).

Parameters:
  • weights – Raw weights input (can be None, nn.Module, TensorDict, dict, str reference, etc.)

  • model_id – The model identifier (e.g., “policy”)

  • strategy – WeightStrategy for extracting/converting weights

  • context – Optional context (e.g., collector) for model resolution

Returns:

Prepared weights ready to send via transport
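
A sketch of preparing weights ahead of a send. The strategy object is whatever WeightStrategy the scheme is configured with; here it is assumed to already be in scope:

    prepared = scheme.prepare_weights(
        weights=policy,     # raw input: nn.Module, TensorDict, dict, ...
        model_id="policy",
        strategy=strategy,  # WeightStrategy instance (assumed in scope)
        context=collector,  # optional; used to resolve string references
    )
    # `prepared` is in the scheme's wire format, ready for the transport.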
