RayModuleTransformSender

class torchrl.weight_update.RayModuleTransformSender(scheme: RayModuleTransformScheme)[source]

Specialized sender for RayModuleTransform actors.

This sender handles weight updates for models hosted within Ray actors. Unlike the base WeightSender, which uses pipes for multiprocessing, this sender communicates with Ray actors directly via their remote methods.

For Ray actors, there is typically only one shared actor instance, so we store a single transport rather than per-worker transports.

send(weights: Any = None, worker_ids: int | list[int] | None = None) None

Send weights synchronously to workers.

This method:

  1. Prepares the weights (extracting them from the model if weights=None)
  2. Sends them to the specified workers (or all workers if worker_ids=None)
  3. Waits for acknowledgments from those workers
  4. Returns once the workers have applied the weights

Parameters:
  • weights – Weights to send. Can be:
    - None: Extract from the model via context.get_model(model_id)
    - nn.Module: Extract weights from the module
    - TensorDict: Use directly
    - dict: Convert to a TensorDict

  • worker_ids – Which workers to send to:
    - None: Send to all workers (default)
    - int: Send to a single worker
    - list[int]: Send to specific workers

Note: This is a blocking call that ensures specified workers are updated before returning.
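The prepare/resolve/apply steps above can be sketched with a plain-Python mock. This is an illustrative stand-in, not the torchrl implementation: MockSender, its constructor arguments, and the dict-based worker state are all hypothetical, and the synchronous loop stands in for the real transport and acknowledgment round-trip.

```python
# Illustrative mock of the send() contract -- NOT the torchrl implementation.
# Worker state is modeled with plain dicts instead of Ray actors.
class MockSender:
    def __init__(self, num_workers, model_weights):
        self._model_weights = model_weights          # stands in for context.get_model(model_id)
        self._workers = {i: None for i in range(num_workers)}

    def send(self, weights=None, worker_ids=None):
        # 1. Prepare weights: fall back to the tracked model when weights is None.
        payload = self._model_weights if weights is None else weights
        # 2. Resolve target workers: all workers when worker_ids is None.
        if worker_ids is None:
            targets = list(self._workers)
        elif isinstance(worker_ids, int):
            targets = [worker_ids]
        else:
            targets = list(worker_ids)
        # 3./4. Apply the weights; returning only after the loop models the
        # blocking wait for per-worker acknowledgments.
        for wid in targets:
            self._workers[wid] = dict(payload)

sender = MockSender(num_workers=3, model_weights={"w": 1.0})
sender.send()                                   # all workers receive the model weights
sender.send({"w": 2.0}, worker_ids=[0, 2])      # only workers 0 and 2 are updated
```

The key property mirrored here is that send() does not return until every targeted worker holds the new weights.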

send_async(weights: Any = None, worker_ids: int | list[int] | None = None) None

Send weights asynchronously to workers (non-blocking).

This initiates the send but returns immediately without waiting for workers to acknowledge. You must call wait_async() before the next send_async() or send() call.

Parameters:
  • weights – Same as send()

  • worker_ids – Same as send()

Raises:

RuntimeError – If a previous send_async() is still pending
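The one-in-flight rule described above can be modeled with a pending flag. This MockAsyncSender is a hypothetical sketch of the contract, not the real class: the flag plays the role of the outstanding transport handle, and wait_async() stands in for collecting worker acknowledgments.

```python
# Illustrative mock of the send_async()/wait_async() pairing -- hypothetical,
# not the torchrl implementation. A pending flag enforces one send in flight.
class MockAsyncSender:
    def __init__(self):
        self._pending = False
        self._inflight = None
        self.applied = []

    def send_async(self, weights):
        if self._pending:
            raise RuntimeError("previous send_async() still pending")
        self._pending = True
        self._inflight = weights          # initiated, but not yet acknowledged

    def wait_async(self):
        if not self._pending:
            raise RuntimeError("no async send is pending")
        self.applied.append(self._inflight)   # workers "acknowledge" here
        self._pending = False

sender = MockAsyncSender()
sender.send_async({"w": 1.0})
try:
    sender.send_async({"w": 2.0})   # second async send before wait -> error
except RuntimeError as exc:
    print(exc)
sender.wait_async()                 # the first send now completes
```

Both error paths in the documentation (a still-pending send, and waiting with nothing pending) fall out of the same flag check.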

update_weights(weights: Any) None[source]

Send weights to the Ray actor.

Parameters:

weights – Weights to send.

wait_async() None

Wait for a pending async send to complete.

Blocks until all workers have acknowledged the previous send_async(). This must be called after send_async() before any subsequent sends.

Raises:

RuntimeError – If no async send is pending
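Putting the two methods together, the typical pattern is to start a transfer, overlap it with other work, and block before the next send. The sketch below uses a background thread as a stand-in for the Ray transfer; ThreadSender and the sleep-based delivery are assumptions for illustration, not the library's mechanism.

```python
import threading
import time

# Hypothetical end-to-end pattern for async weight updates. ThreadSender is
# a stand-in, not the torchrl class; a background thread models the transfer
# and the worker-side apply, and join() models waiting for acknowledgment.
class ThreadSender:
    def __init__(self):
        self._thread = None
        self.acks = 0

    def send_async(self, weights):
        if self._thread is not None:
            raise RuntimeError("previous send_async() still pending")
        self._thread = threading.Thread(target=self._deliver, args=(weights,))
        self._thread.start()

    def _deliver(self, weights):
        time.sleep(0.01)   # simulate the transfer plus the worker apply
        self.acks += 1

    def wait_async(self):
        if self._thread is None:
            raise RuntimeError("no async send is pending")
        self._thread.join()   # block until the worker has acknowledged
        self._thread = None

sender = ThreadSender()
for step in range(3):
    sender.send_async({"step": step})
    # ... training work overlaps with the in-flight transfer here ...
    sender.wait_async()
print(sender.acks)   # -> 3
```

Calling wait_async() at the end of each iteration is what keeps the next send_async() legal.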
