VLLMDoubleBufferWeightReceiver

class torchrl.weight_update.llm.VLLMDoubleBufferWeightReceiver(scheme: VLLMDoubleBufferSyncScheme, vllm_engine)[source]

Receives weights in a vLLM worker using double-buffered storage.

This receiver reads weights from a shared directory and loads them into the vLLM engine using the engine’s load_weights interface.

Example

>>> receiver = scheme.create_receiver(vllm_engine)
>>>
>>> # Poll for new weights
>>> if receiver.poll_and_apply():
...     print("Weights updated!")
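
The end-to-end flow pairs this receiver with the sender side of the same scheme. A minimal sketch follows; the constructor argument and the sender-side calls are assumptions for illustration, not verified torchrl signatures.

>>> # Sketch only: remote_addr, create_sender, and update_weights are
>>> # hypothetical names, not confirmed torchrl API.
>>> scheme = VLLMDoubleBufferSyncScheme(remote_addr="/tmp/weights")
>>> sender = scheme.create_sender()
>>> receiver = scheme.create_receiver(vllm_engine)
>>> sender.update_weights(policy_weights)  # trainer writes to the shared directory
>>> receiver.poll_and_apply()              # worker reads and loads into vLLM
True
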
apply_weights(weights: TensorDict) → None[source]

Apply weights to the vLLM engine using RPC.

This method uses RPC to tell all vLLM workers to load weights from the shared storage directory, similar to how AsyncVLLM._update_weights_with_nccl_broadcast_simple uses collective_rpc to coordinate workers.

Parameters:

weights – TensorDict with flattened keys containing weights.
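
A TensorDict with flattened keys can be produced from a source module with tensordict's standard helpers. This is a minimal sketch, assuming model is the torch.nn.Module whose weights are being published; in normal operation apply_weights is invoked by poll_and_apply rather than called directly.

>>> from tensordict import TensorDict
>>> td = TensorDict.from_module(model)  # nested keys mirroring the module tree
>>> weights = td.flatten_keys(".")      # flat keys such as "layers.0.weight"
>>> receiver.apply_weights(weights)
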

poll_and_apply(timeout: float = 180.0) → bool[source]

Poll for and apply weights from shared storage.

Parameters:

timeout – Not used for file-based transport (kept for API compatibility).

Returns:

True if weights were successfully read and applied, False otherwise.
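
Because the transport is file-based, a caller typically polls on a fixed interval. The loop below is an illustrative sketch; the sleep interval is a free choice.

>>> import time
>>> for _ in range(10):  # bounded loop for illustration
...     if receiver.poll_and_apply():
...         print("Weights updated!")
...     time.sleep(5.0)
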

receive(timeout: float = 0.001) → bool

Check for and apply new weights (non-blocking).

This method is called in the worker’s main loop to check if new weights have been sent. If weights are available, they are applied to the registered model immediately.

Parameters:

timeout – Maximum time to wait for weights (seconds). Use 0 for immediate return.

Returns:

True if weights were received and applied; False if no weights were available.

Note: For SharedMemWeightSyncScheme, this always returns False since workers automatically see updates via shared memory.
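
A worker main loop might interleave this non-blocking check with inference work, as in the sketch below; everything other than receive() is illustrative.

>>> while True:
...     if receiver.receive(timeout=0):  # returns immediately when nothing is pending
...         print("New weights applied")
...     # ... run the next inference step here ...
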
