Deploying Huggingface Models
A short tutorial on using pre-trained Huggingface models
Define the environment
The first thing we’ll do is define the environment that our app will run on. For this example, we’re building a Sentiment Analysis model using Huggingface.
First, you’ll define a Runtime with an Image.
We’ll define which packages to install in the runtime, and the hardware this code will run on.
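As a sketch, the Runtime and Image definitions might look like the following. The App, Runtime, and Image names mirror the ones this tutorial uses, but the exact import path and parameter names depend on your serving framework’s SDK version, so treat the values below (CPU count, memory, Python version, package list) as illustrative assumptions:

```python
# Sketch only -- App, Runtime, and Image are assumed to come from a
# Beam-style SDK; check your SDK version for exact parameter names.
from beam import App, Runtime, Image

app = App(
    name="sentiment-analysis",
    runtime=Runtime(
        cpu=1,
        memory="8Gi",
        # The Image bundles the Python packages the model needs.
        image=Image(
            python_version="python3.9",
            python_packages=["transformers", "torch"],
        ),
    ),
)
```

The package list is the important part: transformers and torch must be installed in the runtime for the inference code below to import them.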
Inference function
Now, we’ll write some code to predict the sentiment of a given text prompt. Our function takes keyword arguments, as (**inputs).
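As a minimal sketch, the inference function can use the Hugging Face pipeline API. The model checkpoint and the shape of the returned dictionary are our choices here, not prescribed by the tutorial:

```python
from transformers import pipeline

# Load the sentiment-analysis pipeline once at import time so repeated
# calls reuse the cached model weights.
model = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

def predict_sentiment(**inputs):
    # The text to classify arrives as a keyword argument, e.g.
    # predict_sentiment(text="...").
    result = model(inputs["text"], truncation=True)
    prediction = result[0]
    return {"prediction": prediction["label"], "score": prediction["score"]}
```

Calling predict_sentiment(text="I really enjoyed this!") returns a dictionary with a POSITIVE/NEGATIVE label and a confidence score between 0 and 1.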
Adding a REST API
To prepare the API for deployment, we’ll add a rest_api decorator to our inference function. Add the following decorator to your predict_sentiment function.
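As an illustration (assuming a Beam-style SDK where the App object defined earlier exposes the decorator; verify the exact name and signature against your SDK), the decorated function would look like:

```python
# Sketch -- `app` is the App object defined in the environment step.
@app.rest_api()
def predict_sentiment(**inputs):
    ...
```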
The complete app.py file will look like this:
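Putting the pieces together, a complete app.py could read as follows. As before, App, Runtime, Image, and the rest_api decorator are assumed from a Beam-style SDK, and the model checkpoint and resource sizes are illustrative:

```python
# app.py -- complete sketch; SDK names and parameters may differ in
# your framework version.
from beam import App, Runtime, Image
from transformers import pipeline

app = App(
    name="sentiment-analysis",
    runtime=Runtime(
        cpu=1,
        memory="8Gi",
        image=Image(
            python_version="python3.9",
            python_packages=["transformers", "torch"],
        ),
    ),
)

# Loaded once at import time so deployed workers reuse the model.
model = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

@app.rest_api()
def predict_sentiment(**inputs):
    result = model(inputs["text"], truncation=True)
    prediction = result[0]
    return {"prediction": prediction["label"], "score": prediction["score"]}
```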
Deploying the app
To deploy the model, open your terminal and cd into the directory you’re working in. Then, run the following:
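A deploy command in this style typically takes the entrypoint file as its argument; substitute your framework’s actual CLI command:

```shell
# Assumes a Beam-style CLI is installed and authenticated.
beam deploy app.py
```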
After running this command, you’ll see some logs in the console that show the progress of your deployment.
At the bottom of the console, you’ll see a URL for invoking your function. Here’s what a cURL request would look like:
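For example (the URL and auth token below are placeholders; use the values printed at the end of your own deployment logs):

```shell
curl -X POST \
  -H "Authorization: Bearer <YOUR_AUTH_TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{"text": "This tutorial was great!"}' \
  https://<your-deployment-url>/predict_sentiment
```

The JSON body is passed to predict_sentiment as keyword arguments, so the "text" key matches the inputs["text"] lookup in the inference function.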