RayTransport
- class torchrl.modules.inference_server.RayTransport(*, max_queue_size: int = 1000)
Transport using Ray queues for distributed inference.
Uses ray.util.queue.Queue for both request submission and response routing. Per-actor response queues ensure correct result routing without serialising Queue objects through other queues. Ray is imported lazily at instantiation time; importing the class itself does not require Ray.
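The per-actor routing pattern described above can be sketched with the standard library alone: a single shared request queue, plus one response queue per actor so each result returns to its requester. This is an illustrative sketch, not the torchrl implementation; `queue.Queue` stands in for `ray.util.queue.Queue`, and all names (`SketchTransport`, `register_actor`, `serve_one`) are hypothetical.

```python
import queue


class SketchTransport:
    """Minimal stand-in for the Ray-queue transport pattern (illustration only)."""

    def __init__(self, max_queue_size: int = 1000):
        # Shared queue that all actors submit requests to.
        self.requests = queue.Queue(maxsize=max_queue_size)
        # One response queue per actor: results are routed by actor id,
        # so queue objects never need to travel through other queues.
        self.responses = {}

    def register_actor(self, actor_id: str) -> str:
        self.responses[actor_id] = queue.Queue()
        return actor_id

    def submit(self, actor_id: str, payload):
        # Tag each request with the submitting actor's id.
        self.requests.put((actor_id, payload))

    def serve_one(self, model):
        # Server side: pop one request, run the model, route the result
        # back to the originating actor's private response queue.
        actor_id, payload = self.requests.get()
        self.responses[actor_id].put(model(payload))

    def receive(self, actor_id: str):
        return self.responses[actor_id].get()


transport = SketchTransport()
actor = transport.register_actor("actor-0")
transport.submit(actor, 3)
transport.serve_one(lambda x: x * 2)
print(transport.receive(actor))
```

In the real class the same idea applies across processes, with Ray's distributed queues replacing the in-process ones.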
- Keyword Arguments:
max_queue_size (int) – maximum size of the request queue. Default: 1000.
Example
>>> import ray
>>> ray.init()
>>> transport = RayTransport()
>>> client = transport.client()
>>> # pass *client* to a Ray actor for remote inference requests