GreenContext#
- class torch.cuda.green_contexts.GreenContext[source]#
Wrapper around a CUDA green context.
Warning
This API is in beta and may change in future releases.
- static create(*, num_sms=None, workqueue_scope=None, workqueue_concurrency_limit=None, device_id=None)[source]#
Create a CUDA green context.
At least one of num_sms or workqueue_scope must be specified. Both can be combined to partition SMs and configure workqueues in the same green context.
- Parameters:
  - num_sms (int, optional) – The number of SMs to use in the green context. When None, SMs are not partitioned.
  - workqueue_scope (str, optional) – Workqueue sharing scope. One of "device_ctx" (shared across all contexts, the default driver behaviour) or "balanced" (non-overlapping workqueues with other balanced green contexts). When None, no workqueue configuration is applied.
  - workqueue_concurrency_limit (int, optional) – Maximum number of concurrent stream-ordered workloads for the workqueue. Requires workqueue_scope to be set.
  - device_id (int, optional) – The device index of the green context. When None, the current device is used.
- Return type:
  GreenContext
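A minimal sketch of calling create() with the documented keyword arguments. It assumes a CUDA build of PyTorch on a GPU that supports green contexts, so all calls are guarded; the API is in beta and the exact behaviour may change.

```python
# Sketch: creating green contexts with the documented create() keywords.
# Guarded so it degrades gracefully without PyTorch or a CUDA device.
try:
    import torch
    from torch.cuda.green_contexts import GreenContext
    cuda_ok = torch.cuda.is_available()
except ImportError:  # PyTorch not installed
    cuda_ok = False

sm_ctx = wq_ctx = None
if cuda_ok:
    # SM partitioning only: carve out 8 SMs on the current device.
    sm_ctx = GreenContext.create(num_sms=8)
    # Workqueue configuration only: non-overlapping "balanced" workqueues.
    wq_ctx = GreenContext.create(workqueue_scope="balanced")
    # Both keywords can be combined in a single green context.
    combined = GreenContext.create(num_sms=8, workqueue_scope="balanced")
```

Note that at least one of num_sms or workqueue_scope must be given; calling create() with neither raises an error.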
- static max_workqueue_concurrency(device_id=None)[source]#
Return the maximum workqueue concurrency limit for the device.
This queries the device for the default number of concurrent stream-ordered workloads supported by workqueue configuration resources.
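A short sketch of how the query can feed back into create(): read the device's maximum workqueue concurrency, then request a smaller cap. The halving heuristic is illustrative, not part of the API, and the calls are guarded for machines without PyTorch or a CUDA device.

```python
# Sketch: querying the device limit and using it to bound a green
# context's workqueue concurrency. Guarded for CPU-only environments.
try:
    import torch
    from torch.cuda.green_contexts import GreenContext
    cuda_ok = torch.cuda.is_available()
except ImportError:  # PyTorch not installed
    cuda_ok = False

limit = None
if cuda_ok:
    # Default number of concurrent stream-ordered workloads on device 0.
    limit = GreenContext.max_workqueue_concurrency(device_id=0)
    # Cap this context at half the device limit (illustrative choice).
    ctx = GreenContext.create(
        workqueue_scope="balanced",
        workqueue_concurrency_limit=max(1, limit // 2),
    )
```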