Faster Whisper
This guide will walk you through deploying and invoking a transcription API using the Faster Whisper model on Beam. The API can be invoked with either a URL to an .mp3 file or a base64-encoded audio file.
View the Code
See the code for this example on GitHub.
Initial Setup
In your Python file, add the following code to define your endpoint and handle the transcription:
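The original code listing is not shown here, but the handler can be sketched as below. The helper that resolves the request payload into a local audio file uses only the standard library; the Beam wiring in the trailing comment is a hedged sketch, and the decorator parameters, payload keys ("url", "audio_file"), and function names are assumptions you should match to your own setup.

```python
import base64
import tempfile
import urllib.request


def load_audio(inputs: dict) -> str:
    """Save the request's audio to a temporary .mp3 file and return its path.

    Accepts either {"url": "..."} or {"audio_file": "<base64 string>"}.
    The payload key names are assumptions -- match your handler's schema.
    """
    tmp = tempfile.NamedTemporaryFile(suffix=".mp3", delete=False)
    if "url" in inputs:
        # Download the remote .mp3 file.
        with urllib.request.urlopen(inputs["url"]) as resp:
            tmp.write(resp.read())
    elif "audio_file" in inputs:
        # Decode the base64-encoded audio payload.
        tmp.write(base64.b64decode(inputs["audio_file"]))
    else:
        raise ValueError("payload must contain 'url' or 'audio_file'")
    tmp.close()
    return tmp.name


# The Beam endpoint wiring would look roughly like this (hypothetical
# parameters -- check the Beam SDK docs for the exact signature):
#
# from beam import endpoint, Image
# from faster_whisper import WhisperModel
#
# @endpoint(
#     name="faster-whisper",
#     image=Image(python_packages=["faster-whisper"]),
# )
# def transcribe(**inputs):
#     model = WhisperModel("base")
#     segments, _ = model.transcribe(load_audio(inputs))
#     return {"text": " ".join(s.text for s in segments)}
```

The helper keeps payload handling separate from model inference, so the same function serves both invocation styles the API supports.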
Deployment
To deploy the app, run the following command:
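The exact command from the original listing is not shown here; a sketch of the usual Beam CLI invocation is below. The `app.py:transcribe` target is an assumption -- the function name must match the entry point defined in your Python file.

```shell
# Deploy the endpoint defined in app.py (entry-point name is hypothetical)
beam deploy app.py:transcribe
```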
If you named your file something other than app.py, make sure to update the command with the correct file name.
This command will deploy your app as a web endpoint. The endpoint URL will be printed out in the shell.
Invoking the API
Once the API is running, you can invoke it with a URL to an .mp3 file using the following cURL command:
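The original command is not reproduced here; a sketch of the request is below. The endpoint URL is a placeholder (use the one printed in your shell), and the `url` key in the JSON body is an assumption that must match your handler's schema.

```shell
# Placeholder endpoint URL -- replace with the URL printed at deploy time
curl -X POST 'https://app.beam.cloud/endpoint/faster-whisper/v1' \
  -H 'Authorization: Bearer [YOUR-AUTH-TOKEN]' \
  -H 'Content-Type: application/json' \
  -d '{"url": "https://example.com/sample.mp3"}'
```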
If you want to test with sample .mp3 files, you can find many samples on this website.
Replace the URL with the one printed in your shell, and [YOUR-AUTH-TOKEN] with your authentication token.
Summary
You’ve successfully set up a highly performant serverless API for transcribing audio files using the Faster Whisper model on Beam. The API can handle both URLs to audio files and base64-encoded audio files. With the provided setup, you can easily serve, invoke, and develop your transcription API.