VLLMDoubleBufferTransport

class torchrl.weight_update.llm.VLLMDoubleBufferTransport(remote_addr: str, local_addr: str | None = None, num_threads: int = 1)[source]

Transport for vLLM using double-buffered memory-mapped storage.

This transport writes weights to a shared directory and reads them back using TensorDict’s memory-mapping capabilities.

Parameters:
  • remote_addr – Directory path where the sender writes weights.

  • local_addr – Directory path where the receiver reads weights. If None, uses the same path as remote_addr (for local testing).

  • num_threads – Number of threads to use for memmap operations.
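
For local testing, leaving local_addr as None makes the receiver read from the same directory the sender writes to. A minimal round-trip sketch under that assumption (the directory path and toy weights are illustrative):

    import torch
    from tensordict import TensorDict
    from torchrl.weight_update.llm import VLLMDoubleBufferTransport

    # Hypothetical shared directory; in production this would be storage
    # visible to both sender and receiver.
    transport = VLLMDoubleBufferTransport(
        remote_addr="/tmp/vllm_weights",
        num_threads=4,
    )

    # Toy weights standing in for a real model's parameters.
    weights = TensorDict({"layer": {"weight": torch.randn(16, 16)}}, batch_size=[])

    transport.send_weights("toy-model", weights)  # memory-maps to disk
    received = transport.receive_weights()        # reads the memmap back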

check_connection() bool[source]

Check if the transport is ready.

For the file-based transport, this always returns True.
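
Because readiness is unconditional here, a polling loop written against the generic transport API returns immediately; a hedged sketch (the path is illustrative):

    import time

    from torchrl.weight_update.llm import VLLMDoubleBufferTransport

    transport = VLLMDoubleBufferTransport(remote_addr="/tmp/vllm_weights")

    # Always True for the file-based transport, so this never waits;
    # other transports may report readiness only once a peer is up.
    while not transport.check_connection():
        time.sleep(0.1)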

receive_weights(timeout: float = 1.0) TensorDict[source]

Reads the weights from the shared directory.

Parameters:
  • timeout – Not used for the file-based transport (kept for API compatibility).

Returns:
  TensorDict with flattened keys containing the weights.
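
A hedged sketch of applying received weights to a module, assuming the flattened key names match the target module's state_dict (the nn.Linear is an illustrative stand-in):

    import torch.nn as nn

    from torchrl.weight_update.llm import VLLMDoubleBufferTransport

    transport = VLLMDoubleBufferTransport(remote_addr="/tmp/vllm_weights")

    # Flat TensorDict, e.g. keys "weight" and "bias" for a bare nn.Linear.
    flat_td = transport.receive_weights()

    model = nn.Linear(16, 16)  # placeholder target module
    model.load_state_dict(flat_td.to_dict())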

send_weights(model_id: str, weights: Any) None[source]

Writes the weights to a shared directory.

Parameters:
  • model_id – Identifier for the model (used for logging).

  • weights – TensorDict or dict of weights to write.
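
A hedged sender-side sketch: capture a module's parameters as a TensorDict (TensorDict.from_module and flatten_keys are standard tensordict APIs) and hand them to the transport; the model and model_id are illustrative:

    import torch.nn as nn
    from tensordict import TensorDict

    from torchrl.weight_update.llm import VLLMDoubleBufferTransport

    transport = VLLMDoubleBufferTransport(remote_addr="/tmp/vllm_weights")

    model = nn.Linear(16, 16)  # placeholder for the trained policy

    # Detach so the on-disk copy carries no autograd state.
    weights = TensorDict.from_module(model).detach()
    transport.send_weights("toy-model", weights.flatten_keys("."))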
