Stable Diffusion API
Building an API to generate images with Stable Diffusion.
Introduction
Beam is a new way to quickly prototype AI projects. In this example, we’ll show how to deploy a serverless API endpoint that generates images with Stable Diffusion.
Setting up the environment
First, we’ll set up the environment to run Stable Diffusion.
We’re going to define a few things:
- An App with a unique name
- A Runtime with CPU and memory requirements, and an A10G GPU
- An Image with the Python packages required to run Stable Diffusion
- A Volume to mount a storage volume to cache the model weights
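Here’s a minimal sketch of what that setup could look like, assuming the Beam SDK’s App, Runtime, Image, and Volume classes; the specific resource values, file name, and package versions below are illustrative and may differ from the SDK release you’re using:

```python
# app.py -- illustrative sketch of the app configuration described above.
# Argument names and values are assumptions; check the Beam SDK docs for
# the exact signatures in your version.
from beam import App, Image, Runtime, Volume

app = App(
    name="stable-diffusion",  # a unique name for the app
    runtime=Runtime(
        cpu=8,
        memory="32Gi",
        gpu="A10G",  # the GPU type used in this example
        image=Image(
            python_version="python3.9",
            python_packages=[
                "diffusers[torch]",
                "transformers",
                "torch",
                "pillow",
            ],  # packages required to run Stable Diffusion
        ),
    ),
    # Storage volume used to cache the model weights between runs
    volumes=[Volume(name="models", path="./models")],
)
```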
Inference function
You’ll write a simple function that takes a prompt passed from the user and returns an image generated using Stable Diffusion.
You need an access token from Hugging Face to run this example. You can sign up for Hugging Face, copy your token from the settings page, and store it in the Beam Secrets Manager.
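Assuming secrets from the Secrets Manager are exposed to your app as environment variables, and that the secret was saved under the hypothetical name HUGGINGFACE_API_KEY, the token can be read at runtime like this:

```python
import os

# Read the Hugging Face token stored in the Beam Secrets Manager.
# HUGGINGFACE_API_KEY is a placeholder -- use the name you gave the secret.
hf_token = os.environ["HUGGINGFACE_API_KEY"]
```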
Saving image outputs
Notice the image.save() method in the function below. We’re going to save our generated images to an Output file by passing an outputs argument to our function.
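Here’s the full inference function, shown as a sketch rather than a definitive implementation: it assumes the diffusers StableDiffusionPipeline, the app object from the configuration sketch above, and an @app.rest_api(...) decorator that accepts an outputs argument, any of which may differ between SDK versions.

```python
# run.py -- sketch of the inference function. Decorator form, model ID, and
# output handling are assumptions based on the description above.
import os

import torch
from diffusers import StableDiffusionPipeline

from app import app  # the App defined in the configuration sketch
from beam import Output

CACHE_PATH = "./models"  # path of the mounted volume used to cache weights


# The outputs argument tells Beam which file(s) to collect after the task runs
@app.rest_api(outputs=[Output(path="output.png")])
def generate_image(**inputs):
    # Prompt passed from the user in the API request
    prompt = inputs["prompt"]

    # Hugging Face token stored in the Beam Secrets Manager
    hf_token = os.environ["HUGGINGFACE_API_KEY"]

    # Load Stable Diffusion, caching weights on the mounted volume
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        use_auth_token=hf_token,
        cache_dir=CACHE_PATH,
    ).to("cuda")

    with torch.inference_mode():
        image = pipe(prompt, num_inference_steps=50).images[0]

    # Save the generated image to the path declared in the Output above
    image.save("output.png")
```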
Adding callbacks
If you supply a callback_url argument, Beam will make a POST request to your server whenever a task completes.
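For instance, assuming callback_url is accepted alongside the other endpoint options in the decorator shown earlier (the URL below is a placeholder):

```python
from app import app
from beam import Output


# Sketch: attaching a callback URL to the endpoint. Beam will POST to this
# URL when the task completes. The exact argument placement is an assumption.
@app.rest_api(
    outputs=[Output(path="output.png")],
    callback_url="https://your-server.example.com/beam-callback",  # placeholder
)
def generate_image(**inputs):
    ...  # same inference code as above
```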
Deployment
In your terminal, run the deploy command.
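Assuming your configuration file is named app.py, as in the sketches above, this typically looks like:

```sh
beam deploy app.py
```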
You’ll see the deployment appear in the dashboard.
Calling the API
In the dashboard, click Call API to view the API URL.
Paste the code into your terminal to make a request.
The API returns a Task ID.
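As a sketch, here’s what such a request might look like from Python; the URL and auth credentials below are placeholders for the values shown in the dashboard, and the auth scheme may differ depending on your setup:

```python
import requests

# Placeholders -- copy the real URL and Authorization header from the
# "Call API" panel in the Beam dashboard.
API_URL = "https://apps.beam.cloud/<your-app-id>"
AUTH_HEADER = "Basic <your-api-token>"

response = requests.post(
    API_URL,
    headers={
        "Authorization": AUTH_HEADER,
        "Content-Type": "application/json",
    },
    json={"prompt": "a renaissance-style oil painting of a robot"},
)

# The response body includes the task ID used to poll for results
print(response.json())
```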
Querying the status of a job
You can use the /v1/task/{task_id}/status/ API to retrieve the status of a job. Using the task ID, here’s how you can get the output with the API.
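As a sketch of that request (only the /v1/task/{task_id}/status/ path comes from this guide; the host and auth header below are placeholders):

```python
import requests

TASK_ID = "<task-id-from-the-previous-response>"
AUTH_HEADER = "Basic <your-api-token>"  # same credentials as before

response = requests.get(
    f"https://api.beam.cloud/v1/task/{TASK_ID}/status/",
    headers={"Authorization": AUTH_HEADER},
)

# Once the task has completed, the response contains an `outputs` object
# with a `url` to the generated image
print(response.json())
```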
This returns a url to the generated image in the outputs object.