VLLMDoubleBufferTransport¶
- class torchrl.weight_update.llm.VLLMDoubleBufferTransport(remote_addr: str, local_addr: str | None = None, num_threads: int = 1)[source]¶
Transport for vLLM using double-buffered memory-mapped storage.
This transport writes weights to a shared directory and reads them back using TensorDict’s memory-mapping capabilities.
- Parameters:
remote_addr – Directory path where the sender writes weights.
local_addr – Directory path where the receiver reads weights. If None, defaults to remote_addr (useful for local testing).
num_threads – Number of threads for memmap operations.
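The double-buffer idea behind these parameters can be illustrated with a small, self-contained sketch. This is not torchrl's implementation: the real transport stores TensorDict memory-maps rather than JSON, and the two fixed buffer subdirectories plus the `ACTIVE` pointer file used below are assumptions made purely for illustration. The point it shows is that the sender always writes into the *inactive* buffer and then atomically flips a pointer, so a concurrent reader sees either the old weights or the new ones, never a half-written buffer.

```python
import json
import os


def publish_weights(weights: dict, base_dir: str) -> None:
    """Write weights into the inactive buffer, then atomically flip the pointer.

    Sketch only: stands in for a sender writing to ``remote_addr``.
    """
    os.makedirs(base_dir, exist_ok=True)
    pointer = os.path.join(base_dir, "ACTIVE")

    # Find the currently active buffer ("0" or "1"); default to "0".
    active = "0"
    if os.path.exists(pointer):
        with open(pointer) as f:
            active = f.read().strip()
    inactive = "1" if active == "0" else "0"

    # Write the new weights into the buffer readers are NOT using.
    buf = os.path.join(base_dir, inactive)
    os.makedirs(buf, exist_ok=True)
    with open(os.path.join(buf, "weights.json"), "w") as f:
        json.dump(weights, f)

    # Atomically flip the pointer via rename, so readers observe either
    # the old buffer or the new one, never a partially written state.
    tmp = pointer + ".tmp"
    with open(tmp, "w") as f:
        f.write(inactive)
    os.replace(tmp, pointer)
```

The atomicity relies on `os.replace`, which is an atomic rename on POSIX filesystems for paths on the same filesystem.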
- check_connection() bool[source]¶
Check if the transport is ready.
For file-based transport, always returns True.
- receive_weights(timeout: float | None = None, *, weights: Any = None, model: Any = None, strategy: Any = None) Any | None[source]¶
Read the weights from the shared directory.
- Parameters:
timeout – Ignored; file-based reads complete synchronously.
weights – Ignored.
model – Ignored.
strategy – Ignored.
- Returns:
TensorDict with flattened keys containing the weights.
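The receive side of the same sketch is shown below. Again, this is an illustration, not torchrl's code: the real `receive_weights` loads a memory-mapped TensorDict with flattened keys, whereas this stand-in reads a JSON payload from the buffer named by a hypothetical `ACTIVE` pointer file.

```python
import json
import os


def read_weights(base_dir: str) -> dict:
    """Read the currently active buffer and return its flattened-key dict.

    Sketch only: stands in for a receiver reading from ``local_addr``.
    """
    pointer = os.path.join(base_dir, "ACTIVE")
    with open(pointer) as f:
        active = f.read().strip()
    # Load whichever buffer the pointer currently names.
    with open(os.path.join(base_dir, active, "weights.json")) as f:
        return json.load(f)
```

Because the writer only ever touches the inactive buffer before flipping the pointer, this read needs no locking against a concurrent publish.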