Environment
Image
Defines a custom container image that your code will run in.
An Image object encapsulates the configuration of a custom container image that will be used as the runtime environment for executing tasks.
The Python version to be used in the image. Defaults to Python 3.8.
A list of Python packages to install in the container image. Alternatively, a
string containing a path to a requirements.txt can be provided. Default is [].
A list of shell commands to run when building your container image. These
commands can be used for setting up the environment, installing dependencies,
etc. Default is [].
A custom base image to replace the default ubuntu20.04 image used in your container. This can be a public or private image from Docker Hub, Amazon ECR, Google Cloud Artifact Registry, or NVIDIA GPU Cloud Registry. The formats for these registries are, respectively, `docker.io/my-org/my-image:0.1.0`, `111111111111.dkr.ecr.us-east-1.amazonaws.com/my-image:latest`, `us-east4-docker.pkg.dev/my-project/my-repo/my-image:0.1.0`, and `nvcr.io/my-org/my-repo:0.1.0`. Default is None.
A key/value pair or key list of environment variables that contain credentials to a private registry. When provided as a dict, you must supply the correct keys and values. When provided as a list, the keys are used to look up the environment variable values for you. Default is None.
Adds environment variables to an image. These will be available when building the image and when the container is running. This can be a string, a list of strings, or a dictionary of strings. The string must be in the format `KEY=VALUE`. If a list of strings is provided, each element should be in the same format. Default is None.
Builds the image on a GPU.
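For illustration, a hedged sketch of building a custom image. The keyword names (`python_version`, `python_packages`, `commands`, `base_image`, `base_image_creds`, `env_vars`) are assumptions inferred from the parameters above, not confirmed signatures:

```python
from beam import Image

# A sketch only: keyword argument names are assumptions based on the
# parameter descriptions above.
image = Image(
    python_version="python3.10",
    python_packages=["numpy", "pandas"],  # or a path to a requirements.txt
    commands=["apt-get update && apt-get install -y ffmpeg"],
    base_image="docker.io/my-org/my-image:0.1.0",
    base_image_creds=["DOCKERHUB_USERNAME", "DOCKERHUB_PASSWORD"],  # env var keys
    env_vars=["MY_ENV=production"],  # KEY=VALUE format
)
```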
Image.from_registry()
Create an Image from a remote container registry.
The full URI of the registry image.
Credentials for private registries. Either a dict of key to value, or a list
of env var keys to read at build time.
Image.from_id()
Create an image from a filesystem snapshot.
Snapshot to use as the base.
Image.from_dockerfile()
Build the base image using a local Dockerfile.
Path to Dockerfile.
Directory to sync as build context. Defaults to the Dockerfile directory.
Image.add_python_packages()
Queue pip packages to install during the build. Accepts a list or a path to requirements.txt.
Package names or a `requirements.txt` path.
Image.add_commands()
Shell commands to run during the build in the order added.
Shell commands.
Image.with_envs()
Add environment variables available during build and at runtime.
One `KEY=VALUE` string, a list of them, or a dict.
Image.with_secrets()
Expose platform secrets to the build environment.
Secret names created via the platform.
Image.micromamba()
Switch package management to micromamba and target a micromamba Python.
Image.add_micromamba_packages()
Install micromamba packages and optional channels.
Package names or a `requirements.txt` path.
Micromamba channels.
Image.build_with_gpu()
Request the build to run on a GPU node. Useful when installers detect GPU and compile CUDA parts.
GPU type such as `T4`, `A10G`, `H100`, or `4090`.
Context
Context is a dataclass used to store various useful fields you might want to access in your entry point logic.
| Field Name | Type | Default Value | Purpose |
| --- | --- | --- | --- |
| container_id | Optional[str] | None | Unique identifier for a container |
| stub_id | Optional[str] | None | Identifier for a stub |
| stub_type | Optional[str] | None | Type of the stub (function, endpoint, task queue, etc.) |
| callback_url | Optional[str] | None | URL called when the task status changes |
| task_id | Optional[str] | None | Identifier for the specific task |
| timeout | Optional[int] | None | Maximum time allowed for the task to run (seconds) |
| on_start_value | Optional[Any] | None | Any values returned from the on_start function |
| bind_port | int | 0 | Port number to bind a service to |
| python_version | str | "" | Version of Python to be used |
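As a sketch, Beam injects the context when your handler accepts a `context` argument (as noted in the Endpoint section below), so the fields can be read like this:

```python
from beam import endpoint

@endpoint()
def handler(context, prompt: str = ""):
    # Fields default to None until populated by the platform
    print(context.task_id, context.stub_type, context.on_start_value)
    return {"task_id": context.task_id}
```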
Client
You can use this to track the state of tasks and deployments.
Authentication token for the Beam API. If not provided, will use the `BEAM_TOKEN` environment variable.
Client.upload_file()
Upload a local file to be used as input to a function or deployment.
The path to the local file to upload.
Client.get_task_by_id()
Retrieve a task by its task ID.
The task ID to retrieve.
Client.get_deployment_by_id()
Retrieve a deployment using its deployment ID.
The deployment ID to retrieve.
Client.get_deployment_by_stub_id()
Retrieve a deployment using the associated stub ID.
The stub ID associated with the deployment.
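A hedged sketch of the client methods above; the import path and the token/ID values are placeholders:

```python
from beam import Client  # import path is an assumption

client = Client(token="my-beam-token")  # falls back to BEAM_TOKEN if omitted

uploaded = client.upload_file("./inputs/sample.wav")
task = client.get_task_by_id("example-task-id")
deployment = client.get_deployment_by_id("example-deployment-id")
```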
Sandbox
A sandboxed container for running Python code or arbitrary processes. You can use this to create isolated environments where you can execute code, manage files, and run processes.
Sandbox.connect()
Connect to an existing sandbox instance by ID.
The container ID of the existing sandbox instance.
Sandbox.create()
Create a new sandbox instance.
This method creates a new containerized sandbox environment with the
specified configuration.
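A minimal lifecycle sketch; the `process` attribute name is an assumption for the SandboxProcessManager handle described below:

```python
from beam import Sandbox  # import path is an assumption

sandbox = Sandbox().create()  # provisions a new containerized environment

# Run Python code inside the sandbox (see SandboxProcessManager.run_code below);
# the result is a SandboxProcessResponse with the output and exit status
result = sandbox.process.run_code("print('hello from the sandbox')")

sandbox.terminate()
```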
Sandbox.create_from_memory_snapshot()
Create a new sandbox instance from a memory snapshot.
This method creates a new containerized sandbox environment with the
specified configuration from a memory snapshot.
Sandbox.debug()
Print the debug buffer contents to stdout.
This method outputs any debug information that has been collected
during sandbox operations.
SandboxInstance
A sandbox instance that provides access to the sandbox internals. This class represents an active sandboxed container and provides methods for process management, file system operations, preview URLs, and lifecycle management.
SandboxInstance.expose_port()
Dynamically expose a port to the internet.
This method creates a public URL that allows external access to a specific
port within the sandbox. The URL is SSL-terminated and provides secure
access to services running in the sandbox.
The port number to expose within the sandbox.
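Continuing the sandbox sketch above:

```python
# Expose port 8000 and receive a public, SSL-terminated URL for it
url = sandbox.expose_port(8000)
print(url)
```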
SandboxInstance.list_urls()
List the URLs / ports that are exposed on the sandbox.
This method returns a list of preview URLs / ports that are exposed on the sandbox.
SandboxInstance.sandbox_id()
Get the ID of the sandbox.
SandboxInstance.terminate()
Terminate the container associated with this sandbox instance.
This method stops the sandbox container and frees up associated resources.
Once terminated, the sandbox instance cannot be used for further operations.
SandboxInstance.update_ttl()
Update the keep warm setting of the sandbox.
This method allows you to change how long the sandbox will remain active
before automatically shutting down.
The number of seconds to keep the sandbox alive. Use -1 for sandboxes that
never timeout.
SandboxInstance.create_image_from_filesystem()
Create a filesystem snapshot of the current sandbox.
This method captures the filesystem state of the sandbox as an immutable artifact.
You can later restore this snapshot into a new sandbox instance.
SandboxInstance.snapshot_memory()
Create a memory snapshot of the current sandbox.
This method captures the memory state of the sandbox as an immutable artifact.
You can later restore this snapshot into a new sandbox instance.
SandboxProcess
Represents a running process within a sandbox. This class provides control and monitoring capabilities for processes running in the sandbox. It allows you to wait for completion, kill processes, check status, and access output streams.
SandboxProcess.kill()
Kill the process.
This method forcefully terminates the running process. Use this
when you need to stop a process that is not responding or when
you want to cancel a long-running operation.
SandboxProcess.status()
Get the status of the process.
This method returns the current exit code and status string of the process.
An exit code of -1 indicates the process is still running.
SandboxProcess.wait()
Wait for the process to complete.
This method blocks until the process finishes execution and returns
the exit code. It polls the process status until completion.
SandboxProcessManager
Manager for executing and controlling processes within a sandbox. This class provides a high-level interface for running commands and Python code within the sandbox environment. It supports both blocking and non-blocking execution, environment variable configuration, and working directory specification.
SandboxProcessManager.exec()
Run an arbitrary command in the sandbox.
This method executes shell commands within the sandbox environment.
The command is executed using the shell available in the sandbox.
The command and its arguments to execute.
The working directory to run the command in. Default is None.
Environment variables to set for the command. Default is None.
SandboxProcessManager.get_process()
Get a process by its PID.
The process ID to look up.
SandboxProcessManager.list_processes()
List all processes running in the sandbox.
SandboxProcessManager.run_code()
Run Python code in the sandbox.
This method executes Python code within the sandbox environment. The code
is executed using the Python interpreter available in the sandbox.
The Python code to execute.
Whether to wait for the process to complete. If True, returns
SandboxProcessResponse. If False, returns SandboxProcess.
The working directory to run the code in. Default is None.
Environment variables to set for the process. Default is None.
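A hedged sketch of both execution modes, continuing from the sandbox above. The `process` attribute, the shape of the `exec()` arguments, and the `blocking` keyword are assumptions based on the descriptions here:

```python
# Blocking execution: run a shell command, then wait for the exit code
proc = sandbox.process.exec("ls -la /tmp")
exit_code = proc.wait()

# Non-blocking execution of Python code returns a SandboxProcess handle
proc = sandbox.process.run_code("import time; time.sleep(5)", blocking=False)
print(proc.status())  # an exit code of -1 means the process is still running
proc.kill()
```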
SandboxProcessResponse
Response object containing the results of a completed process execution. This class encapsulates the output and status information from a process that has finished running in the sandbox.
SandboxProcessStream
A stream-like interface for reading process output in real-time. This class provides an iterator interface for reading stdout or stderr from a running process. It buffers output and provides both line-by-line iteration and bulk reading capabilities.
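For example, a sketch of both reading styles; the `stdout` and `stderr` attribute names on the process handle are assumptions:

```python
# Iterate stdout line by line while the process runs
proc = sandbox.process.run_code("for i in range(3): print(i)", blocking=False)
for line in proc.stdout:
    print(line, end="")

# Or read everything that remains in one call
remaining = proc.stderr.read()
```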
SandboxProcessStream.read()
Read all remaining output from the stream.
SandboxProcessError
SandboxConnectionError
SandboxFileInfo
Metadata of a file in the sandbox. This class provides detailed information about files and directories within the sandbox filesystem, including permissions, ownership, and modification times.
SandboxFileSystem
File system interface for managing files within a sandbox. This class provides a comprehensive API for file operations within the sandbox, including uploading, downloading, listing, and managing files and directories.
SandboxFileSystem.create_directory()
Create a directory in the sandbox.
Note: This method is not yet implemented.
The path where the directory should be created.
SandboxFileSystem.delete_directory()
Delete a directory in the sandbox.
Note: This method is not yet implemented.
The path of the directory to delete.
SandboxFileSystem.delete_file()
Delete a file in the sandbox.
This method removes a file from the sandbox filesystem.
The path to the file within the sandbox.
SandboxFileSystem.download_file()
Download a file from the sandbox to a local path.
This method downloads a file from the sandbox filesystem and
saves it to the specified local path.
The path to the file within the sandbox.
The destination path on the local filesystem.
SandboxFileSystem.find_in_files()
Find files matching a pattern in the sandbox.
This method searches for files within the specified directory
that match the given pattern.
The directory path to search in.
The pattern to match files against.
SandboxFileSystem.list_files()
List the files in a directory in the sandbox.
This method returns information about all files and directories
within the specified directory in the sandbox.
The path to the directory within the sandbox.
SandboxFileSystem.replace_in_files()
Replace a string in all files in a directory.
This method performs a find-and-replace operation on all files
within the specified directory, replacing occurrences of the
old string with the new string.
The directory path to search in.
The string to find and replace.
The string to replace with.
SandboxFileSystem.stat_file()
Get the metadata of a file in the sandbox.
This method retrieves detailed information about a file or directory
within the sandbox, including size, permissions, ownership, and
modification time.
The path to the file within the sandbox.
SandboxFileSystem.upload_file()
Upload a local file to the sandbox.
This method reads a file from the local filesystem and uploads
it to the specified path within the sandbox.
The path to the local file to upload.
The destination path within the sandbox.
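A sketch of common file operations, continuing from the sandbox above; the `fs` attribute name for the SandboxFileSystem handle is an assumption:

```python
sandbox.fs.upload_file("./local/config.yaml", "/app/config.yaml")

for info in sandbox.fs.list_files("/app"):
    print(info)  # SandboxFileInfo entries

sandbox.fs.replace_in_files("/app", "staging", "production")
sandbox.fs.download_file("/app/config.yaml", "./config.backup.yaml")
```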
SandboxFileSystemError
SandboxFilePosition
A position in a file.
SandboxFileSearchMatch
A match in a file.
SandboxFileSearchRange
A range in a file.
Pod
A Pod is an object that allows you to run arbitrary services in a fast, scalable, and secure remote container on Beam.
You can think of a Pod as a lightweight compute environment that you fully control—complete with a custom container, ports you can expose, environment variables, volumes, secrets, and GPUs.
- Build your container (if necessary) and sync your local files to the remote environment.
- Create a Pod container with the specified resources (2 CPU cores, 512 MiB memory).
- Run `python -m http.server 8000` inside that remote container.
- Expose the container on port 8000. You’ll get back a container ID and a URL to access it.
- Once the Pod is running, you can perform additional operations—like opening an interactive shell inside the container or deploying the Pod as a named app.
The command to run in the container. By default, nothing is specified, so you must provide an entrypoint to actually run anything. You can override or provide this entrypoint at creation time using `pod.create(entrypoint=...)`.
A list of ports to expose. If provided, the container will be accessible through an HTTP URL for each port opened. For example, if `[8000]` is specified, you’ll get `<Pod URL>:8000`.
An optional name for the pod. If you plan to deploy this Pod (i.e., treat it as a persistent app), you should specify a name. If you do not specify a name, Beam will generate a random name at deploy time, or you must specify `--name=...` in the CLI.
The amount of CPU allocated to the container. For example, `2` means 2 CPU cores, `"500m"` means half a CPU core, and `1.0` means 1 CPU core.
The amount of memory (in MiB) allocated to the container. You can also specify this as a string with units (e.g., `"512Mi"`, `"2Gi"`).
The type or name of the GPU device to be used for GPU-accelerated tasks. You can specify multiple GPUs by providing a list (in which case the scheduler prioritizes their selection based on the order in the list). If no GPU is required, leave it empty.
The number of GPUs allocated to the container. If a GPU is specified but this value is set to 0, it will automatically be updated to 1.
The container image to be used for running the Pod. Defaults to a basic Beam `Image` object, which can be customized (e.g., `base_image=`, `python_packages=`, and more).
A list of volumes to be mounted into the container. Volumes allow you to persist data or mount external storage services, such as S3-compatible buckets.
A list of secrets that are injected into the container as environment variables. Each secret must be configured in your Beam project.
A dictionary of environment variables to inject into the container. For example: `{"MY_API_KEY": "abc123"}`.
The number of seconds to keep the container alive after the last request. A value of `-1` means never scale down to zero (i.e., keep the container running indefinitely). This only applies if you deploy the Pod.
If `False`, allows the container to be accessed without an auth token. This is useful for public-facing services. If you need to secure it behind an auth token, set it to `True`.
Create
Create a new container that runs until it completes or is explicitly killed.
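A hedged sketch matching the workflow described above; the keyword names (`cpu`, `memory`, `ports`, `entrypoint`) are assumptions based on the parameters listed here, and the `url` attribute on the returned instance is hypothetical:

```python
from beam import Pod

pod = Pod(
    cpu=2,
    memory="512Mi",
    ports=[8000],
    entrypoint=["python", "-m", "http.server", "8000"],
)
instance = pod.create()
print(instance.url)  # hypothetical attribute holding the exposed URL
```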
Deploy
Deploy the Pod as a named persistent service. Pods can be deployed programmatically via Python or via the CLI.
Deploying via Python
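A hedged sketch of deploying a Pod from Python; the `deploy()` signature is an assumption:

```python
from beam import Pod

pod = Pod(
    name="http-server",  # deployed Pods should be named
    ports=[8000],
    entrypoint=["python", "-m", "http.server", "8000"],
)
pod.deploy()
```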
Function
Decorator for defining a remote function.
This method allows you to run the decorated function in a remote container.
Function
The number of CPU cores allocated to the container.
The amount of memory allocated to the container. It should be specified in
MiB, or as a string with units (e.g., “1Gi”).
The type or name of the GPU device to be used for GPU-accelerated tasks. If
not applicable or no GPU required, leave it empty. Multiple GPUs can be
specified as a list.
The container image used for task execution.
The maximum number of seconds a task can run before timing out. Set to -1 to
disable the timeout.
The maximum number of times a task will be retried if the container crashes.
An optional URL to send a callback to when a task is completed, timed out, or
cancelled.
A list of storage volumes to be associated with the function.
A list of secrets that are injected into the container as environment
variables.
An optional name for this function, used during deployment. If not specified, you must specify the name at deploy time with the `--name` argument.
The task policy for the function. This helps manage the lifecycle of an individual task. Setting values here will override timeout and retries.
A list of exceptions that will trigger a retry.
Remote
You can run any function remotely on Beam by using the `.remote()` method, then invoking your script as usual with `python example.py`:
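A minimal sketch of an `example.py` using the decorator and `.remote()`:

```python
from beam import function

@function(cpu=1, memory="512Mi")
def square(x: int) -> int:
    return x * x

if __name__ == "__main__":
    # .remote() ships the call to a remote container and returns the result
    print(square.remote(4))
```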
Map
You can scale out workloads to many containers using the `.map()` method. You might use this for parallelizing computation-heavy tasks, such as batch inference or data processing jobs.
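For example, fanning the `square` function above out across containers (whether `.map()` returns a list or a generator isn't specified here, so the sketch materializes it):

```python
# Each input runs in its own remote container
results = list(square.map([1, 2, 3, 4]))
print(results)
```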
Schedule
This method allows you to schedule the decorated function to run at specific intervals defined by a cron expression.
The cron expression or predefined schedule that determines when the task will run.
This parameter defines the interval or specific time when the task should execute.
The number of CPU cores allocated to the container.
The amount of memory allocated to the container. It should be specified in
megabytes (e.g., 128 for 128 megabytes).
The type or name of the GPU device to be used for GPU-accelerated tasks. If
not applicable or no GPU required, leave it empty.
The container image used for the task execution.
The maximum number of seconds a task can run before it times out. Default is
180. Set it to -1 to disable the timeout.
The number of concurrent tasks to handle per container. Modifying this
parameter can improve throughput for certain workloads. Workers will share the
CPU, Memory, and GPU defined. You may need to increase these values to
increase concurrency.
The maximum number of tasks that can be pending in the queue. If the number of
pending tasks exceeds this value, the task queue will stop accepting new
tasks.
An optional URL to send a callback to when a task is completed, timed out, or
cancelled.
The maximum number of times a task will be retried if the container crashes.
A list of volumes to be mounted to the container.
A list of secrets that are injected into the container as environment
variables.
An optional name for this function, used during deployment. If not specified, you must specify the name at deploy time with the `--name` argument.

| Predefined Schedule | Description | Cron Expression |
| --- | --- | --- |
| @yearly (or @annually) | Run once a year at midnight on January 1st | 0 0 1 1 * |
| @monthly | Run once a month at midnight on the first day of the month | 0 0 1 * * |
| @weekly | Run once a week at midnight on Sunday | 0 0 * * 0 |
| @daily (or @midnight) | Run once a day at midnight | 0 0 * * * |
| @hourly | Run once an hour at the beginning of the hour | 0 * * * * |
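A hedged sketch; `when` is an assumed parameter name for the cron expression or predefined schedule described above:

```python
from beam import schedule

@schedule(when="@daily", name="nightly-cleanup")
def cleanup():
    print("runs once a day at midnight")
```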
Endpoint
Decorator used for deploying a web endpoint.
The number of CPU cores allocated to the container.
The amount of memory allocated to the container. It should be specified in
megabytes (e.g., 128 for 128 megabytes).
The type or name of the GPU device to be used for GPU-accelerated tasks. If
not applicable or no GPU required, leave it empty.
The container image used for the task execution.
The maximum number of seconds a task can run before it times out. Default is
180. Set it to -1 to disable the timeout.
The number of concurrent tasks to handle per container. Modifying this
parameter can improve throughput for certain workloads. Workers will share the
CPU, Memory, and GPU defined. You may need to increase these values to
increase concurrency.
The duration in seconds to keep the task queue warm even if there are no
pending tasks. Keeping the queue warm helps to reduce the latency when new
tasks arrive. Default is 10s.
The maximum number of tasks that can be pending in the queue. If the number of
pending tasks exceeds this value, the task queue will stop accepting new
tasks.
A function that runs when the container first starts. The return values of the `on_start` function can be retrieved by passing a `context` argument to your handler function.
A list of volumes to be mounted to the container.
A list of secrets that are injected into the container as environment
variables.
An optional name for this endpoint, used during deployment. If not specified, you must specify the name at deploy time with the `--name` argument.
If false, allows the endpoint to be invoked without an auth token.
The maximum number of times a task will be retried if the container crashes.
Capture a memory snapshot of the running container after `on_start` completes, speeding up cold boot. Initial checkpoints can take up to 3 minutes to capture, and 5 minutes to distribute among our servers.
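A sketch tying the parameters together; the keyword names match the descriptions above, and the `Image(python_packages=...)` kwarg is an assumption:

```python
from beam import endpoint, Image

def load_model():
    # Runs once when the container starts; its return value is passed
    # to the handler through context.on_start_value
    return {"model": "stub"}

@endpoint(
    name="inference",
    cpu=1,
    memory="1Gi",
    image=Image(python_packages=["numpy"]),
    on_start=load_model,
)
def predict(context, prompt: str = ""):
    model = context.on_start_value
    return {"prompt": prompt, "model": model}
```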
Serve
`beam serve` monitors changes in your local file system, live-reloads the remote environment as you work, and forwards remote container logs to your local shell.
Serve is great for prototyping. You can develop in a containerized cloud environment in real-time, with adjustable CPU, memory, GPU resources.
It’s also great for testing an app before deploying it. Served functions are orchestrated identically to deployments, which means you can test your Beam workflow end-to-end before deploying.
To start an ephemeral `serve` session, you’ll use the `serve` command:
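For example, assuming the endpoint above is saved in `app.py` (the `file:function` invocation format is an assumption):

```
beam serve app.py:predict
```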
Sessions end automatically after 10 minutes of inactivity.
By default, Beam will sync all the files in your working directory to the remote container. This allows you to use the files you have locally while developing. If you want to prevent some files from getting uploaded, you can create a `.beamignore`.
Task Queue
Decorator for defining a task queue.
This method allows you to create a task queue out of the decorated function. The tasks are executed asynchronously. You can interact with the task queue either through an API (when deployed), or directly in Python through the `.put()` method.
Task Queue
The number of CPU cores allocated to the container.
The amount of memory allocated to the container. It should be specified in
megabytes (e.g., 128 for 128 megabytes).
The type or name of the GPU device to be used for GPU-accelerated tasks. If
not applicable or no GPU required, leave it empty.
The container image used for the task execution.
The maximum number of seconds a task can run before it times out. Default is
180. Set it to -1 to disable the timeout.
The number of concurrent tasks to handle per container. Modifying this
parameter can improve throughput for certain workloads. Workers will share the
CPU, Memory, and GPU defined. You may need to increase these values to
increase concurrency.
The duration in seconds to keep the task queue warm even if there are no
pending tasks. Keeping the queue warm helps to reduce the latency when new
tasks arrive. Default is 10s.
The maximum number of tasks that can be pending in the queue. If the number of
pending tasks exceeds this value, the task queue will stop accepting new
tasks.
An optional URL to send a callback to when a task is completed, timed out, or
cancelled.
The maximum number of times a task will be retried if the container crashes.
A list of volumes to be mounted to the container.
A list of secrets that are injected into the container as environment
variables.
An optional name for this endpoint, used during deployment. If not specified, you must specify the name at deploy time with the `--name` argument.
A list of exceptions that will trigger a retry.
Capture a memory snapshot of the running container after `on_start` completes, speeding up cold boot. Initial checkpoints can take up to 3 minutes to capture, and 5 minutes to distribute among our servers.
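A hedged sketch of defining a queue and enqueueing work from Python; passing the function's kwargs to `.put()` is an assumption:

```python
from beam import task_queue

@task_queue(name="image-jobs", cpu=2, memory="2Gi")
def resize(url: str):
    print(f"resizing {url}")

# Enqueue work directly from Python; tasks run asynchronously
resize.put(url="https://example.com/cat.png")
```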
Serve
`beam serve` monitors changes in your local file system, live-reloads the remote environment as you work, and forwards remote container logs to your local shell.
Serve is great for prototyping. You can develop in a containerized cloud environment in real-time, with adjustable CPU, memory, GPU resources.
It’s also great for testing an app before deploying it. Served functions are orchestrated identically to deployments, which means you can test your Beam workflow end-to-end before deploying.
To start an ephemeral `serve` session, you’ll use the `serve` command, as shown in the Endpoint section above.
Sessions end automatically after 10 minutes of inactivity.
By default, Beam will sync all the files in your working directory to the remote container. This allows you to use the files you have locally while developing. If you want to prevent some files from getting uploaded, you can create a `.beamignore`.
ASGI
Decorator used for creating and deploying an ASGI application.
The number of CPU cores allocated to the container.
The amount of memory allocated to the container. It should be specified in
MiB, or as a string with units (e.g., “1Gi”).
The type or name of the GPU device to be used for GPU-accelerated tasks. If
not applicable or no GPU required, leave it empty.
The container image used for task execution.
A list of volumes to be mounted to the container.
The maximum number of seconds a task can run before timing out. Set to -1 to
disable the timeout.
The maximum number of times a task will be retried if the container crashes.
The number of processes handling tasks per container. Workers share CPU,
memory, and GPU resources.
The maximum number of concurrent requests the ASGI application can handle.
The duration in seconds to keep the task queue warm when there are no pending
tasks.
The maximum number of tasks that can be pending in the queue.
A list of secrets injected into the container as environment variables.
An optional name for this ASGI application, used during deployment.
If false, allows the ASGI application to be invoked without an auth token.
Configure deployment autoscaling using various strategies.
An optional URL to send a callback when a task is completed, timed out, or
canceled.
The task policy for the function, overriding timeout and retries.
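A hedged sketch of the pattern: the decorated function returns an ASGI-compatible app. FastAPI is used purely for illustration and is assumed to be installed in the image:

```python
from beam import asgi

@asgi(name="web", cpu=1, memory="1Gi")
def app():
    from fastapi import FastAPI

    api = FastAPI()

    @api.get("/health")
    def health():
        return {"ok": True}

    return api
```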
Serve
`beam serve` monitors changes in your local file system, live-reloads the remote environment as you work, and forwards remote container logs to your local shell.
Serve is great for prototyping. You can develop in a containerized cloud environment in real-time, with adjustable CPU, memory, GPU resources.
It’s also great for testing an app before deploying it. Served functions are orchestrated identically to deployments, which means you can test your Beam workflow end-to-end before deploying.
To start an ephemeral `serve` session, you’ll use the `serve` command, as shown in the Endpoint section above.
Sessions end automatically after 10 minutes of inactivity.
By default, Beam will sync all the files in your working directory to the remote container. This allows you to use the files you have locally while developing. If you want to prevent some files from getting uploaded, you can create a `.beamignore`.
Realtime
Decorator for creating a real-time application built on top of ASGI/websockets. The handler function runs every time a message is received over the websocket.
The number of CPU cores allocated to the container.
The amount of memory allocated to the container. It should be specified in
MiB, or as a string with units (e.g., “1Gi”).
The type or name of the GPU device to be used for GPU-accelerated tasks. If
not applicable or no GPU is required, leave it empty.
The container image used for task execution.
A list of volumes to be mounted to the ASGI application.
The maximum number of seconds a task can run before timing out. Set to -1 to
disable the timeout.
The number of processes handling tasks per container. Workers share CPU,
memory, and GPU resources.
The maximum number of concurrent requests the ASGI application can handle.
This allows processing multiple requests concurrently.
The duration in seconds to keep the task queue warm even if there are no
pending tasks.
The maximum number of tasks that can be pending in the queue.
A list of secrets injected into the container as environment variables.
An optional name for this ASGI application, used during deployment. If not
specified, you must provide the name during deployment.
If false, allows the ASGI application to be invoked without an auth token.
Configure a deployment autoscaler to scale the function horizontally using
various autoscaling strategies.
An optional URL to send a callback to when a task is completed, timed out, or
canceled.
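A hedged sketch; the handler runs once per websocket message per the description above, but the shape of the incoming event argument is an assumption:

```python
from beam import realtime

@realtime(cpu=1, memory="1Gi")
def handler(event):
    # Echo each websocket message back to the client
    return {"echo": event}
```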
Serve
`beam serve` monitors changes in your local file system, live-reloads the remote environment as you work, and forwards remote container logs to your local shell.
Serve is great for prototyping. You can develop in a containerized cloud environment in real-time, with adjustable CPU, memory, GPU resources.
It’s also great for testing an app before deploying it. Served functions are orchestrated identically to deployments, which means you can test your Beam workflow end-to-end before deploying.
To start an ephemeral `serve` session, you’ll use the `serve` command, as shown in the Endpoint section above.
Sessions end automatically after 10 minutes of inactivity.
By default, Beam will sync all the files in your working directory to the remote container. This allows you to use the files you have locally while developing. If you want to prevent some files from getting uploaded, you can create a `.beamignore`.
Function
Decorator for defining a remote function.
This method allows you to run the decorated function in a remote container.
Function
The number of CPU cores allocated to the container.
The amount of memory allocated to the container. It should be specified in
MiB, or as a string with units (e.g., “1Gi”).
The type or name of the GPU device to be used for GPU-accelerated tasks. If
not applicable or no GPU required, leave it empty. Multiple GPUs can be
specified as a list.
The container image used for task execution.
The maximum number of seconds a task can run before timing out. Set to -1 to
disable the timeout.
The maximum number of times a task will be retried if the container crashes.
An optional URL to send a callback to when a task is completed, timed out, or
cancelled.
A list of storage volumes to be associated with the function.
A list of secrets that are injected into the container as environment
variables.
An optional name for this function, used during deployment. If not specified, you must specify the name at deploy time with the `--name` argument.
The task policy for the function. This helps manage the lifecycle of an individual task. Setting values here will override timeout and retries.
A list of exceptions that will trigger a retry.
Determines whether the function continues running in the background after the
client disconnects.
Bot
Decorator for defining a bot with multiple states and transitions.
The `bot` decorator allows you to define a bot with specific states (locations) and transitions. These bots run as distributed, stateful workflows, where each transition is executed in a remote container.
The underlying language model (e.g., `gpt-4o`) used by the bot.
The OpenAI API key used to authenticate requests to OpenAI.
A list of `BotLocation` objects defining the bot’s states. Each location corresponds to a type (e.g., `BaseModel`) that the bot operates on.
A human-readable description of the bot’s purpose.
Specifies whether the bot requires an auth token passed to invoke it.
The number of CPU cores allocated to the container.
The amount of memory allocated to the container. It should be specified in
megabytes (e.g., 128 for 128 megabytes).
The type or name of the GPU device to be used for GPU-accelerated tasks. If
not applicable or no GPU required, leave it empty.
The container image used for the task execution.
The maximum number of seconds a task can run before it times out. Default is
180. Set it to -1 to disable the timeout.
The number of concurrent tasks to handle per container. Modifying this
parameter can improve throughput for certain workloads. Workers will share the
CPU, Memory, and GPU defined. You may need to increase these values to
increase concurrency.
The duration in seconds to keep the task queue warm even if there are no
pending tasks. Keeping the queue warm helps to reduce the latency when new
tasks arrive. Default is 10s.
The maximum number of tasks that can be pending in the queue. If the number of
pending tasks exceeds this value, the task queue will stop accepting new
tasks.
A function that runs when the container first starts. The return values of the `on_start` function can be retrieved by passing a `context` argument to your handler function.
A list of volumes to be mounted to the container.
A list of secrets that are injected into the container as environment
variables.
An optional name for this endpoint, used during deployment. If not specified, you must specify the name at deploy time with the `--name` argument.
If false, allows the endpoint to be invoked without an auth token.
The maximum number of times a task will be retried if the container crashes.
Autoscaling
QueueDepthAutoscaler
Adds an autoscaler to an app.
The number of containers to keep running at baseline. The containers will
continue running until the deployment is stopped.
The max number of tasks that can be queued up to a single container. This can help manage throughput and cost of compute. When `max_tasks_per_container` is 0, a container can process any number of tasks.
The maximum number of containers that the autoscaler can create. It defines an upper limit to avoid excessive resource consumption.
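A hedged sketch of attaching the autoscaler. The docs name `max_tasks_per_container` explicitly above; `min_containers`, `max_containers`, and the `autoscaler=` keyword on the decorator are assumptions:

```python
from beam import endpoint, QueueDepthAutoscaler

@endpoint(
    autoscaler=QueueDepthAutoscaler(
        min_containers=1,
        max_containers=5,
        max_tasks_per_container=10,
    )
)
def handler():
    return {"ok": True}
```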
Data Structures
Simple Queue
Creates a Queue instance.
Use this as a concurrency-safe distributed queue, accessible both locally and within remote containers.
Serialization is done using cloudpickle, so any object supported by cloudpickle should work here. The interface is that of a standard Python queue.
Because this is backed by a distributed queue, it will persist between runs.
Simple Queue
The name of the queue (any arbitrary string).
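A sketch leaning on the standard-queue interface described above; the import name is an assumption:

```python
from beam import SimpleQueue  # import name is an assumption

q = SimpleQueue(name="jobs")
q.put({"video_id": 42})  # any cloudpickle-serializable object
item = q.get()
```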
Map
Creates a Map instance. Use this as a concurrency-safe key/value store, accessible both locally and within remote containers. Serialization is done using cloudpickle, so any object supported by cloudpickle should work here. The interface is that of a standard Python dictionary. Because this is backed by a distributed dictionary, it will persist between runs.
Map
The name of the map (any arbitrary string).
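A sketch leaning on the standard-dictionary interface described above; the import name is an assumption:

```python
from beam import Map  # import name is an assumption

m = Map(name="results")
m["job-42"] = {"status": "done"}  # standard dict interface
print(m["job-42"])
```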
Storage
Beam allows you to create highly-available storage volumes that can be used across tasks. You might use volumes for things like storing model weights or large datasets.
Volume
Creates a Volume instance.
When your container runs, your volume will be available at `./{mount_path}` and `/volumes/{name}`.
The name of the volume, a descriptive identifier for the data volume.
The path where the volume is mounted within the container environment.
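For example, a sketch of attaching a volume to a function:

```python
from beam import function, Volume

@function(volumes=[Volume(name="model-weights", mount_path="./weights")])
def load_weights():
    # Files written here persist across runs, at ./weights and
    # /volumes/model-weights
    with open("./weights/checkpoint.bin", "rb") as f:
        return len(f.read())
```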
CloudBucket
Creates a CloudBucket instance.
When your container runs, your cloud bucket will be available at `./{mount_path}` and `/volumes/{name}`.
The name of the cloud bucket, must be the same as the bucket name in the cloud
provider.
The path where the cloud bucket is mounted within the container environment.
Configuration for the cloud bucket.
CloudBucketConfig
Configuration for a cloud bucket.
Whether the volume is read-only.
The Beam secret name for the S3 access key for the external provider.
The Beam secret name for the S3 secret key for the external provider.
The S3 endpoint for the external provider.
The region for the external provider.
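A hedged sketch of mounting an external bucket; the `CloudBucketConfig` field names are assumptions based on the list above, and the secret names are placeholders:

```python
from beam import function, CloudBucket, CloudBucketConfig

bucket = CloudBucket(
    name="my-bucket",  # must match the bucket name at the provider
    mount_path="./my-bucket",
    config=CloudBucketConfig(
        read_only=True,
        access_key="MY_S3_ACCESS_KEY_SECRET",  # Beam secret names
        secret_key="MY_S3_SECRET_KEY_SECRET",
        endpoint="https://s3.us-east-1.amazonaws.com",
        region="us-east-1",
    ),
)

@function(volumes=[bucket])
def read_data():
    with open("./my-bucket/data.csv") as f:
        return f.readline()
```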
Output
A file that a task has created.
Use this to save a file that you may want to share later.
The length of time the pre-signed URL will be available for. The file will be
automatically deleted after this period.
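A hedged sketch of saving a file and generating a pre-signed URL; the `public_url()` method and its expiration keyword name are assumptions:

```python
from beam import function, Output

@function()
def save_report():
    with open("report.txt", "w") as f:
        f.write("done")

    output = Output(path="report.txt")
    output.save()
    url = output.public_url(expiration=3600)  # seconds; kwarg name assumed
    return {"url": url}
```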
Files
Saving a file and generating a public URL.
PIL Images
Saving a `PIL.Image` object.
Directories
Saving a directory.
Experimental
Signal
Creates a Signal instance. Signals can be used to notify a container to perform specific actions using a flag.
For example, signals can reload global state, send a webhook, or terminate the container.
This is a great tool for automated retraining and deployment.
The name of the signal.
A function to be called when the signal is set. If not provided, no handler
will be executed.
The number of seconds after which the signal will be automatically cleared if both `handler` and `clear_after_interval` are set.
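A hypothetical sketch of a signal that tells running containers to reload state; the module path and the method for raising the flag are assumptions:

```python
from beam import experimental  # module path is an assumption

reload_signal = experimental.Signal(
    name="reload-model",
    handler=lambda: print("reloading model..."),
    clear_after_interval=60,
)
reload_signal.set()  # assumed method for raising the flag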
Integrations
vllm
A wrapper around the vLLM library that allows you to deploy it as an ASGI app.
The number of CPU cores allocated to the container.
The amount of memory allocated to the container. It should be specified in
MiB, or as a string with units (e.g., “1Gi”).
The type or name of the GPU device to be used for GPU-accelerated tasks. If
not applicable or no GPU is required, leave it empty.
The container image used for task execution. This will include an `add_python_packages` call with `["fastapi", "vllm", "huggingface_hub"]` added to ensure vLLM can run.
The number of workers to run in the container.
The maximum number of concurrent requests the container can handle.
The number of seconds to keep the container warm after the last request.
The maximum number of pending tasks allowed in the container.
The maximum number of seconds to wait for the container to start.
Whether the endpoints require authorization.
The name of the container. If not specified, you must provide it during
deployment.
The volumes to mount into the container. Default is a single volume named
“vllm_cache” mounted to ”./vllm_cache”, used as the download directory for
vLLM models.
A list of secrets to pass to the container. To enable Hugging Face authentication for downloading models, set the `HF_TOKEN` in the secrets.
The autoscaler to use for scaling container deployments.
The arguments to configure the vLLM model.
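A hypothetical sketch only: the import path, class names, and the model ID are assumptions, shown just to illustrate how the parameters above fit together:

```python
from beam.integrations import VLLM, VLLMArgs  # names are assumptions

llm = VLLM(
    name="llama",
    gpu="A10G",
    secrets=["HF_TOKEN"],  # enables Hugging Face downloads, per the note above
    vllm_args=VLLMArgs(model="meta-llama/Llama-3.1-8B-Instruct"),  # hypothetical
)
```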
Utils
env
You can use `env.is_remote()` to only import Python packages when your app is running remotely. This is used to avoid import errors, since your Beam app might be using Python packages that aren’t installed on your local computer.
An alternative to using `env.is_remote()` is to import packages inline in your functions. For more information on this topic, visit this page.
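For example, guarding a heavy import that only exists inside the remote image (`torch` is just an illustrative placeholder):

```python
from beam import env

if env.is_remote():
    # Only import dependencies that live in the remote container image
    import torch
```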