We'll start by defining an `Image` with the Python packages required for this app.
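A minimal sketch of what that `Image` might look like — the exact package list here is an assumption for a diffusers-based app, not the app's actual requirements:

```python
from beam import Image

# Container image for the remote workers. The package list below is
# illustrative; swap in your app's real requirements.
image = Image(
    python_version="python3.10",
    python_packages=["diffusers[torch]", "transformers", "pillow"],
)
```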
Because this script will run remotely, we need to make sure our local Python interpreter doesn't try to import these packages locally, since they're only installed inside the remote container. We'll use an `if env.is_remote()` check to conditionally import the Python packages only when the script is running remotely on Beam.
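A sketch of that guard, assuming a diffusers-based setup:

```python
from beam import env

# These packages exist only in the remote container image, so we
# import them exclusively when the code is running on Beam.
if env.is_remote():
    import torch
    from diffusers import DiffusionPipeline
```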
By adding the `@endpoint()` decorator to our inference function, we can expose it as a RESTful API.
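Roughly, the decorated function might look like the sketch below. The function names, model ID, and parameter values are assumptions for illustration:

```python
from beam import Volume, endpoint

def load_models():
    # on_start handler: runs once when the container first boots.
    # Whatever it returns is exposed to the endpoint through
    # context.on_start_value.
    import torch
    from diffusers import DiffusionPipeline

    pipe = DiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",  # assumed model
        torch_dtype=torch.float16,
        cache_dir="./models",  # cached on the mounted volume
    ).to("cuda")
    return pipe

@endpoint(
    image=image,           # the Image defined above
    on_start=load_models,  # boot-time model loading
    volumes=[Volume(name="models", mount_path="./models")],
    keep_warm_seconds=60,  # keep the container alive between requests
)
def generate(context, prompt: str):
    # The pipeline returned by on_start is available on the context
    pipe = context.on_start_value
    image = pipe(prompt).images[0]
    # See below for turning `image` into a shareable URL with Output
    ...
```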
There are a few things to take note of:
- The `image` with the Python requirements we defined above
- The `on_start` function, which runs once when the container first boots. The value returned from `on_start` (in this case, our `pipe` handler) is available in the inference function through the `context` value: `pipe = context.on_start_value`
- The `volumes`, which are used to store the downloaded LoRAs and model weights on Beam
- `keep_warm_seconds`, which tells Beam how long to keep the container running between requests
- The `Output.from_pil_image(image).save()` method below, which generates a shareable URL to access the images created from the inference function.
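A minimal sketch of that call, assuming `image` holds the generated PIL image:

```python
from beam import Output

# Wrap the PIL image in an Output; save() uploads it to Beam, and
# public_url() returns a pre-signed, shareable URL for the file.
output = Output.from_pil_image(image)
output.save()
url = output.public_url()
```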
You can spin up a temporary API to test this out using the `beam serve` command:
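Assuming the script lives in `app.py` and the endpoint function is named `generate`:

```bash
beam serve app.py:generate
```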
Once the server is running, use a `curl` command in your shell to call the API.
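For example — the endpoint URL and auth token below are placeholders; use the values printed by `beam serve`:

```bash
curl -X POST 'https://app.beam.cloud/endpoint/id/[ENDPOINT-ID]' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer [AUTH-TOKEN]' \
  -d '{"prompt": "medieval rich kingpin sitting in a tavern, raw"}'
```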
The API will return a pre-signed URL with the image generated:
*Image generated from the prompt: "medieval rich kingpin sitting in a tavern, raw"*
The `beam serve` command is used for temporary APIs. When you're ready to move to production, deploy a persistent endpoint:
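Again assuming `app.py` and a `generate` function:

```bash
beam deploy app.py:generate
```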