How To Deploy ML Models as a Service Using Flask
4 Steps: ML Model -> API Service -> Access from Anywhere
Let’s talk about machine learning models as a service today. If you’re wondering, “What is ML as a service?”, you’re in the right place.
This will be a very simple guide, designed to help you understand the flow of operations for any ML model as a service.
Let me paint you a picture: when you use an AI tool, you have no idea what is going on in the background; all you have to do is click the “Generate” button and, ta-da, you get the result.
That is an ML model as a service: you are building a service that abstracts the ML model away from its users.
Technically, the term “ML as a service” means running an ML model as a service: the model is wrapped in a program/API so that anyone (users or other applications) can invoke the model’s functions from anywhere on the internet over HTTP.
You got the basic idea, right?
What is the main goal of this article?
In this article, I’d like to teach you how to develop an ML model as a service (API). I’ll be working with some basic examples here, but keep in mind that the basics remain the same even if you want to design something complicated with a scarier architecture.
Focus on the fundamentals, alright?
Before this, I wrote an article titled “How to Deploy an ML Model in Production,” which focused on the factors to bear in mind. You should also check it out.
Who is this for?
This is intended for anybody interested in how operations work in a data science team, especially DevOps, MLOps, and LLMOps enthusiasts.
Are you one of them? If so, please let me know in the comments how useful this article was after reading it.
What are the Prerequisites?
Here, you will learn about the operational flow, from environment setup to deployment of the service. Before proceeding, please ensure that
Python 3.6 or later is installed.
Also required is a fundamental familiarity with Python, Flask, and RESTful APIs.
Finally, you will need basic administrative capabilities to install any essential software packages.
Are you ready?
The model I will be building here is “The Classification model using Iris dataset”. As I said, I will be working with a simple example. The focus is the process.
Step 1: Install the Required Packages
So, to get started, we need to set up the environment by installing the required dependencies correctly. For this project, I will need:
The core libraries like numpy and pandas
Scikit-learn for ML modelling and
Flask to build the API
Run the commands below in a new terminal to set up the environment:
# This will download the latest version of pip
curl https://bootstrap.pypa.io/pip/3.6/get-pip.py -o get-pip.py
# Now install Python's distutils package for managing modules
sudo apt-get install python3-distutils -y
# (optional) Just to install or reinstall pip to ensure compatibility
python3 get-pip.py --force-reinstall
# Lastly, this installs the core libraries, scikit and flask
python3 -m pip install --user numpy scikit-learn flask flask-restful
Note: If you subsequently need to develop an automation pipeline, you may utilize this shell script during the configuration stage.
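Before moving on, it’s worth confirming that the installs actually succeeded. A minimal smoke test (assuming Python 3.8+, where importlib.metadata is available) is to resolve each dependency’s installed version:

```python
from importlib.metadata import version

# Smoke test: confirm each core dependency is installed and resolvable.
# A missing package raises PackageNotFoundError here, before you start coding.
for pkg in ('numpy', 'scikit-learn', 'flask'):
    print(pkg, version(pkg))
```

If any line raises an error, re-run the corresponding install command from above before proceeding.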
Step 2: Generate the ML Model
Now that the project environment is set, we will build the ML model and save it so that the API can use it later to make predictions.
Here, the structure will be:
# Files & Folder Structure:
IrisProject
|-- models
|   |-- iris_classifier_model.pk
|-- model_generator.py
|-- iris_classifier.py
First, create a new folder named IrisProject for the project, and within it, create a sub-folder named models. This sub-folder will hold all the serialized models.
mkdir IrisProject
cd IrisProject
mkdir models
Now, let’s get our hands dirty with the real work — “Building the model”
Let’s create a file named model_generator.py and start coding. Here, I load the iris dataset and use “K-Nearest Neighbors (KNN)” to classify the iris species. Lastly, the pickle library saves the trained model for later use.
# model_generator.py
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
import pickle

# Load the Iris dataset
iris = datasets.load_iris()

# Split the dataset into training and test sets
validation_size = 0.20
seed = 100
X_train, X_test, Y_train, Y_test = train_test_split(
    iris.data, iris.target, test_size=validation_size, random_state=seed
)

# Train the model using K-Nearest Neighbors
knn = KNeighborsClassifier()
knn.fit(X_train, Y_train)

# Save the trained model to the 'models' directory
with open('models/iris_classifier_model.pk', 'wb') as model_file:
    pickle.dump(knn, model_file)
Finally, to execute this file and save the model we just trained, run this command in a new terminal:
python3 model_generator.py
This command saves the serialized model file iris_classifier_model.pk to the models folder.
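If you want to be sure the serialized file will behave identically to the in-memory model, a quick sanity check is to round-trip the trained KNN through pickle and compare predictions. This sketch uses an in-memory buffer instead of the file on disk, so it runs anywhere:

```python
import io
import pickle

from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Train the same KNN model as in model_generator.py
iris = datasets.load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.20, random_state=100
)
knn = KNeighborsClassifier()
knn.fit(X_train, y_train)

# Serialize to an in-memory buffer (stand-in for the .pk file), then restore
buffer = io.BytesIO()
pickle.dump(knn, buffer)
buffer.seek(0)
restored = pickle.load(buffer)

# The restored model must classify exactly like the original
assert (restored.predict(X_test) == knn.predict(X_test)).all()
```

One caveat worth knowing: pickled scikit-learn models are only guaranteed to load under the same scikit-learn version they were saved with, so pin your dependency versions when you ship the model file.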
Step 3: Develop RESTful API with Flask
We have our ML model with us now. And what do you think is the next thing to do?
You are right! It’s time to create an API to expose the model’s functionality as a service. So, this is the way I will proceed:
First, I will create a file named iris_classifier.py and begin the coding part there.
Next, I will build a function classify that loads the saved model and returns the predicted species for a given set of measurements.
Finally, I will create a resource class for the API: it gathers the query parameters from the URL, classifies the flower, and returns the prediction in JSON format. Then I will add this resource to the API.
This is how the code goes:
# iris_classifier.py
from flask import Flask, request
from flask_restful import Resource, Api
import pickle

app = Flask(__name__)
api = Api(app)


def classify(sepal_len, sepal_wd, petal_len, petal_wd):
    species = ['Iris-Setosa', 'Iris-Versicolour', 'Iris-Virginica']
    with open('models/iris_classifier_model.pk', 'rb') as model_file:
        model = pickle.load(model_file)
    # iris.data columns are ordered: sepal length, sepal width,
    # petal length, petal width
    species_class = int(
        model.predict([[sepal_len, sepal_wd, petal_len, petal_wd]])[0]
    )
    return species[species_class]


class IrisPredict(Resource):
    def get(self):
        # Parse query parameters from the URL
        sl = float(request.args.get('sl'))
        sw = float(request.args.get('sw'))
        pl = float(request.args.get('pl'))
        pw = float(request.args.get('pw'))

        # Classify the flower
        result = classify(sl, sw, pl, pw)

        # Return the prediction as JSON
        return {
            'sepal_length': sl,
            'sepal_width': sw,
            'petal_length': pl,
            'petal_width': pw,
            'species': result
        }


# Add the resource to the API
api.add_resource(IrisPredict, '/classify/')
Hurrayyyy! We are almost done. We just need to deploy and run the model as a service.
Step 4: Run the ML Model as a Service
So, to run the API and expose it as a service, let’s proceed in this way:
First, set the environment variables.
Then, start the Flask server.
Lastly, test the API by making a simple request with curl.
This is how it goes:
# Setting Up the Environment variables:
export FLASK_APP=iris_classifier.py
export LC_ALL=C.UTF-8
export LANG=C.UTF-8
# Running the Flask Server
python3 -m flask run --host=0.0.0.0 --port=8000
# Making a request to see if the API works
curl "http://0.0.0.0:8000/classify/?sl=5.1&sw=3.5&pl=1.4&pw=0.3"
Ahhaaahhh! It worked! The output looks something like this:
{
  "sepal_length": 5.1,
  "sepal_width": 3.5,
  "petal_length": 1.4,
  "petal_width": 0.3,
  "species": "Iris-Setosa"
}
That is, for the parameters we provided:
sepal_length = 5.1
sepal_width = 3.5
petal_length = 1.4
petal_width = 0.3
the species predicted by our ML model is Iris-Setosa.
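Because the model now sits behind plain HTTP, any application can consume it, not just curl. A minimal client-side sketch using only the Python standard library (the host and port match the flask run command above; the server must be running for the commented call to succeed):

```python
from urllib.parse import urlencode

# Build the same request curl made, from a dict of measurements
base_url = 'http://0.0.0.0:8000/classify/'
params = {'sl': 5.1, 'sw': 3.5, 'pl': 1.4, 'pw': 0.3}
url = base_url + '?' + urlencode(params)
print(url)

# With the service from Step 4 running, uncomment to call it:
# import json, urllib.request
# with urllib.request.urlopen(url) as resp:
#     print(json.load(resp)['species'])
```

This is the whole point of ML as a service: the caller needs no scikit-learn, no model file, only an HTTP client.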
Yeah, that’s how it is done. Seems easy, right?
Wrapping it Up:
By following this tutorial, you have:
Set up a Python environment with the required dependencies.
Built and saved an ML model using scikit-learn.
Developed a RESTful API with Flask to expose the model.
Deployed the API as a service accessible over HTTP.
This framework can be adapted for more complex models and applications.
I encourage you to experiment with different datasets, models, and deployment scenarios to extend your skills further!
Hey, please give some ❤️ on this article, if you had a good read. (This will help me a lot. Thanks!)