MonarchTransport

- class torchrl.modules.inference_server.MonarchTransport(*, max_queue_size: int = 1000)
Transport using Monarch for distributed inference on GPU clusters.
Uses Monarch’s actor model and RDMA-capable channels for efficient cross-node communication. Monarch is imported lazily at instantiation time; importing the class itself does not require Monarch.
Note
This transport requires monarch to be installed. It is designed for large-scale GPU clusters where Monarch is the preferred communication layer.

- Keyword Arguments:
  max_queue_size (int) – maximum size of the request queue. Default: 1000.
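To illustrate what the max_queue_size bound means in practice, here is a minimal sketch using Python's stdlib queue. This is a hypothetical stand-in, not the real torchrl/Monarch implementation: the actual transport communicates through Monarch actors and RDMA-capable channels, but the bounded-queue behavior (rejecting requests once the queue is full) can be modeled the same way.

```python
import queue

# Hypothetical stand-in for the transport's bounded request queue.
# The real MonarchTransport uses Monarch's actor model, not queue.Queue;
# this only models the max_queue_size backpressure behavior.
class BoundedRequestQueue:
    def __init__(self, max_queue_size: int = 1000):
        self._q = queue.Queue(maxsize=max_queue_size)

    def submit(self, request):
        # Raises queue.Full once max_queue_size requests are pending.
        self._q.put_nowait(request)

    def next_request(self):
        return self._q.get_nowait()

transport = BoundedRequestQueue(max_queue_size=2)
transport.submit("req-1")
transport.submit("req-2")
try:
    transport.submit("req-3")  # exceeds max_queue_size
    overflowed = False
except queue.Full:
    overflowed = True
```

A small queue bound like this provides backpressure: producers learn immediately that the inference server is saturated instead of growing an unbounded backlog.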