GPU Acceleration
Running Tasks on GPU
You can run any code on a cloud GPU by passing a `gpu` argument to your function decorator.
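For instance, a minimal sketch using the `beam` Python SDK's `function` decorator (the function name and body here are illustrative):

```python
from beam import function

# Run this function on a cloud T4 GPU.
@function(gpu="T4")
def handler():
    import torch  # assumes torch is included in the function's image
    # Confirm the GPU is visible to the container.
    return torch.cuda.is_available()
```

Calling `handler.remote()` would then execute the function on a T4 in the cloud rather than locally.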
Available GPUs
Currently available GPU options are:
- T4 (16Gi)
- A10G (24Gi)
- A100-40 (40Gi)
- RTX4090 (24Gi)
Check GPU Availability
Run `beam machine list` to check whether a machine is available.
Looking for a specific GPU that isn’t listed here? Let us know!
Prioritizing GPU Types
You can split traffic across multiple GPUs by passing a list to the `gpu` parameter.
The list is ordered by priority. You can choose which GPUs to prioritize by specifying them at the front of the list.
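A sketch of a prioritized list, assuming the `beam` Python SDK's `function` decorator (the function name and body are illustrative):

```python
from beam import function

# GPUs are tried in priority order: T4 first,
# then A10G, then A100-40 if neither is available.
@function(gpu=["T4", "A10G", "A100-40"])
def handler():
    ...
```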
In this example, the T4 is prioritized over the A10G, followed by the A100-40.
Using Multiple GPUs
You can run workloads across multiple GPUs by using the `gpu_count` parameter.
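A sketch of a multi-GPU request, again assuming the `beam` Python SDK's `function` decorator (the function name is illustrative):

```python
from beam import function

# Request two A100-40 GPUs for this function.
# Multi-GPU workloads must be enabled on your account first.
@function(gpu="A100-40", gpu_count=2)
def train():
    ...
```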
This feature is available by request only. Please send us a message in Slack, and we’ll enable it on your account.
GPU Regions
Beam runs on servers distributed around the world, with primary locations in the United States, Europe, and Asia. If you would like your workloads to run in a specific region of the globe, please reach out.