VLLMDoubleBufferSyncScheme
- class torchrl.weight_update.llm.VLLMDoubleBufferSyncScheme(remote_addr: str, local_addr: str | None = None, num_threads: int = 1, strategy: Literal['tensordict', 'state_dict'] = 'tensordict')[source]
Weight synchronization scheme for vLLM using double-buffered storage.
This scheme uses memory-mapped TensorDict storage to transfer weights from a trainer to vLLM inference workers. It’s simpler than NCCL-based approaches and doesn’t require process group coordination.
- Parameters:
remote_addr – Directory path where sender writes weights.
local_addr – Directory path where the receiver reads weights. If None, uses the same path as remote_addr (for local testing).
num_threads – Number of threads for memmap operations. Defaults to 1.
strategy – Weight extraction strategy (“tensordict” or “state_dict”). Defaults to “tensordict”.
Example
>>> # Local testing (same machine)
>>> scheme = VLLMDoubleBufferSyncScheme(
...     remote_addr="/tmp/weights",
...     strategy="tensordict"
... )
>>>
>>> # Distributed setup (different machines)
>>> # On trainer node:
>>> scheme = VLLMDoubleBufferSyncScheme(
...     remote_addr="/mnt/shared/weights",  # NFS mount
...     num_threads=4
... )
>>>
>>> # On vLLM worker node:
>>> scheme = VLLMDoubleBufferSyncScheme(
...     remote_addr="/mnt/shared/weights",  # Same NFS mount
...     num_threads=4
... )
- create_receiver(vllm_engine) → VLLMDoubleBufferWeightReceiver[source]
Create a weight receiver for a vLLM worker process.
- Parameters:
vllm_engine – The vLLM engine instance (must have .llm_engine.model_executor attribute).
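A minimal sketch of the worker-side call, assuming a vLLM LLM instance (which exposes .llm_engine) has already been built and that scheme is the instance from the class-level example; the model name is illustrative only:
>>> from vllm import LLM
>>> llm = LLM(model="Qwen/Qwen2.5-3B")  # illustrative model name
>>> receiver = scheme.create_receiver(llm)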
- create_sender() → VLLMDoubleBufferWeightSender[source]
Create a weight sender for the trainer process.
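A corresponding sketch on the trainer side, assuming scheme is configured as in the class-level example; the sender writes memory-mapped weights under remote_addr:
>>> sender = scheme.create_sender()
>>> # Weights pushed through this sender are written as memory-mapped
>>> # tensors under remote_addr ("/mnt/shared/weights" above).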
- create_transport(pipe_or_context: Any = None) → VLLMDoubleBufferTransport[source]
Create transport for double-buffered storage.
- Parameters:
pipe_or_context – Not used for file-based transport (kept for API compatibility).
- Returns:
A VLLMDoubleBufferTransport instance.
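A short sketch; because the transport is file-based, the pipe_or_context argument can simply be omitted:
>>> transport = scheme.create_transport()
>>> # pipe_or_context is ignored for this scheme, so passing None is equivalent
>>> transport = scheme.create_transport(pipe_or_context=None)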
- get_receiver() → WeightReceiver
Get the receiver instance.
- Returns:
Receiver instance for receiving weights in this worker
- Raises:
RuntimeError – If init_on_worker() hasn’t been called yet
- get_sender() → WeightSender
Get the sender instance.
- Returns:
Sender instance for sending weights to workers
- Raises:
RuntimeError – If init_on_sender() hasn’t been called yet
- init_on_sender(model_id: str, context: Any = None, **kwargs) → None
Initialize on the main process (sender side).
This method is called once in the collector’s _run_processes() method, after workers have been started and are ready to receive messages.
- Parameters:
model_id – Identifier for the model being synchronized
context – Optional context object (e.g., collector) providing:
- .pipes: list[mp.Connection]
- .get_model(model_id: str) -> nn.Module
- .get_cached_weights(model_id: str) -> TensorDict | None
- .num_workers: int
**kwargs – Alternative to context (pipes, num_workers, model, cached_weights, etc.)
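A hedged sketch of sender-side initialization without a collector context, using the keyword-argument alternative instead; the module and worker count below are stand-ins for illustration:
>>> import torch.nn as nn
>>> policy = nn.Linear(8, 8)  # stand-in for the actual trained module
>>> scheme.init_on_sender(
...     model_id="policy",
...     model=policy,      # kwargs alternative to passing a collector context
...     num_workers=1,
... )
>>> sender = scheme.get_sender()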
- init_on_worker(model_id: str, context: Any = None, **kwargs) → None
Initialize on the worker process (receiver side).
This method is called once in each worker’s initialization.
- Parameters:
model_id – Identifier for the model being synchronized
context – Optional context object (e.g., inner collector) providing:
- .pipe: mp.Connection
- .get_model(model_id: str) -> nn.Module
**kwargs – Alternative to context (pipe, model, etc.)
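A matching sketch on the worker side, assuming a collector-like context object that exposes .pipe and .get_model() as described above:
>>> scheme.init_on_worker(model_id="policy", context=inner_collector)
>>> receiver = scheme.get_receiver()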
- prepare_weights(weights: Any, model_id: str, strategy: WeightStrategy, context: Any = None) → Any
Prepare weights for sending.
This method handles weight extraction, conversion, and any scheme-specific preparation (e.g., cache lookups for SharedMemWeightSyncScheme).
- Parameters:
weights – Raw weights input (can be None, nn.Module, TensorDict, dict, str reference, etc.)
model_id – The model identifier (e.g., “policy”)
strategy – WeightStrategy for extracting/converting weights
context – Optional context (e.g., collector) for model resolution
- Returns:
Prepared weights ready to send via transport
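A hedged sketch of calling prepare_weights() directly; in normal use the sender handles this internally, and the WeightStrategy instance below (weight_strategy) is assumed to have been constructed elsewhere to match the scheme's extraction strategy:
>>> import torch.nn as nn
>>> model = nn.Linear(4, 4)
>>> prepared = scheme.prepare_weights(
...     weights=model,             # raw input: a module in this case
...     model_id="policy",
...     strategy=weight_strategy,  # a WeightStrategy instance (assumed)
... )
>>> # 'prepared' now holds extracted weights ready to hand to the transport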