InferenceClient¶
- class torchrl.modules.inference_server.InferenceClient(transport: InferenceTransport)[source]¶
Actor-side handle for an InferenceServer. Wraps a transport's submit() so that calling client(td) looks like a regular synchronous policy call, while the actual computation is batched on the server.
- Parameters:
transport (InferenceTransport) – the transport shared with the server.
Example
>>> client = transport.client()
>>> td_out = client(td_in)         # blocking
>>> future = client.submit(td_in)  # non-blocking
>>> td_out = future.result()
- submit(td: TensorDictBase) → Future[TensorDictBase][source]¶
Submit a request and return a Future immediately.
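The split between a blocking call and a Future-returning submit() can be sketched in plain Python, independent of TorchRL. The ToyTransport, ToyClient, and serve names below are hypothetical stand-ins, not part of the torchrl API; the sketch only illustrates the pattern where __call__ wraps submit().result() so an asynchronous, server-processed request reads like a synchronous policy call.

```python
from concurrent.futures import Future
from threading import Thread
import queue

class ToyTransport:
    """Hypothetical transport: a queue shared between client and server."""
    def __init__(self):
        self.requests = queue.Queue()

    def submit(self, payload):
        # Enqueue the request with a Future and return the Future immediately.
        fut = Future()
        self.requests.put((payload, fut))
        return fut

class ToyClient:
    """Mirrors the InferenceClient pattern: submit() is non-blocking,
    __call__ blocks on the Future so it looks synchronous."""
    def __init__(self, transport):
        self.transport = transport

    def submit(self, payload):
        return self.transport.submit(payload)

    def __call__(self, payload):
        return self.submit(payload).result()  # block until the server answers

def serve(transport):
    # Toy "server" loop: here it just doubles each payload; a real server
    # would batch requests and run the policy on them.
    while True:
        payload, fut = transport.requests.get()
        fut.set_result(payload * 2)

transport = ToyTransport()
Thread(target=serve, args=(transport,), daemon=True).start()

client = ToyClient(transport)
out = client(21)             # blocking call, returns 42
future = client.submit(10)   # non-blocking
later = future.result()      # returns 20
```

The design point is that the client never touches a server thread directly: all coordination happens through the Future attached to each queued request, which is what lets the server reorder and batch work freely.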