Deploy Your Deep Learning Model on Kubernetes

Kubernetes October 2, 2020

As enterprises mature in their appreciation and use of AI, machine learning, and deep learning, a critical question arises: how can they scale and industrialize ML development? Many conversations about machine learning focus on the model itself, but the model is only one step on the way to a complete solution. To reach real applications and production scale, models must be developed within a repeatable process that accounts for the critical activities before and after model development, including finally getting the model into a public-facing deployment.

This post demonstrates how to deploy, scale, and manage a Deep Learning Model that serves up image recognition predictions using Kubermatic Kubernetes Platform.

Kubermatic Kubernetes Platform is a production-grade, open-source Kubernetes cluster management tool that offers the flexibility and automation to integrate with your ML/DL workflows, along with full cluster lifecycle management.

Let’s get to it!

1. Making The Model Accessible Using Flask Server

We are deploying a deep learning model for image recognition. We used the CIFAR10 dataset, which consists of 60,000 32x32 colour images in 10 classes, with the Gluon library in Apache MXNet and NVIDIA GPUs to accelerate the workload. If you would like to use a pretrained model on the CIFAR10 dataset, check out this link.
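For reference, here is a minimal sketch of the dataset facts the rest of this post relies on; the class order below is the standard CIFAR-10 label order, which the prediction server later uses to map class indices back to names:

```python
# CIFAR-10 basics assumed throughout this post: 10 classes, 32x32 RGB images.
CIFAR10_CLASSES = ['airplane', 'automobile', 'bird', 'cat', 'deer',
                   'dog', 'frog', 'horse', 'ship', 'truck']
IMAGE_SHAPE = (32, 32, 3)   # height, width, colour channels
NUM_IMAGES = 60000          # total images across train and test splits

print(len(CIFAR10_CLASSES))  # → 10
```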

We trained the model over a span of 200 epochs, for as long as the validation error kept decreasing slowly without causing the model to overfit. We can better observe the training process through this plot:

Plot showing deep learning process with CIFAR10 Dataset

One important step after training is to save the model’s parameters so that we can load them later.

file_name = "net.params"
net.save_parameters(file_name)

Once the model is ready, the next step is to wrap your prediction code in a Flask server. This allows the server to accept an image as an argument to its request and return the model’s prediction in the response.

from gluoncv.model_zoo import get_model
import matplotlib.pyplot as plt
from mxnet import gluon, nd, image
from mxnet.gluon.data.vision import transforms
from gluoncv import utils
from PIL import Image
import io
import flask 
app = flask.Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    prediction = "No image received; send the file under the 'img' field."
    if flask.request.files.get("img"):
        # Read the uploaded image from the request body
        img = Image.open(io.BytesIO(flask.request.files["img"].read()))
        # Apply the same preprocessing used during training
        transform_fn = transforms.Compose([
            transforms.Resize(32),
            transforms.CenterCrop(32),
            transforms.ToTensor(),
            transforms.Normalize([0.4914, 0.4822, 0.4465],
                                 [0.2023, 0.1994, 0.2010])])
        img = transform_fn(nd.array(img))
        # Load the network architecture and the parameters saved earlier
        net = get_model('cifar_resnet20_v1', classes=10)
        net.load_parameters('net.params')
        pred = net(img.expand_dims(axis=0))
        class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
                       'dog', 'frog', 'horse', 'ship', 'truck']
        ind = nd.argmax(pred, axis=1).astype('int')
        prediction = ('The input picture is classified as [%s], '
                      'with probability %.3f.' %
                      (class_names[ind.asscalar()],
                       nd.softmax(pred)[0][ind].asscalar()))
    return prediction

if __name__ == '__main__':
    app.run(host='0.0.0.0')
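Once the server is running, it can be exercised with a small client. The sketch below is an illustration rather than part of the original post: it posts an image file under the "img" field, exactly as the route above expects, and returns the server's text reply. The file name horse.jpg and the localhost URL are placeholder assumptions.

```python
import requests  # third-party HTTP client

def predict_image(image_path, url="http://localhost:5000/predict"):
    """POST an image to the Flask prediction server and return its reply."""
    with open(image_path, "rb") as f:
        # The server reads the upload from flask.request.files["img"]
        return requests.post(url, files={"img": f}).text

# Example usage (placeholder file name; requires the server to be running):
# print(predict_image("horse.jpg"))
```

The same client works unchanged later against the Kubernetes service, by swapping the URL for the load balancer's external IP.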

2. Dockerizing the Model

In order to deploy our model to Kubernetes, we first need to create a container image with our model. In this section, we will install Docker and create a container image of our model.

Here are the steps to follow.

First, install and start Docker:

sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install docker-ce
sudo systemctl start docker

Next, create a directory for the application and change into it:

mkdir kubermatic-dl
cd kubermatic-dl

Inside it, create a requirements.txt file listing the packages the model needs:

flask
gluoncv
matplotlib
mxnet
requests
Pillow

Finally, create a Dockerfile that Docker will use to build the container image:

FROM python:3.6
WORKDIR /app
COPY requirements.txt /app
RUN pip install -r ./requirements.txt
COPY app.py /app
CMD ["python", "app.py"]

We can break this Dockerfile down into three steps. First, Docker downloads a base image of Python 3.6. Next, Docker uses the Python package manager pip to install the packages listed in requirements.txt. Finally, Docker runs our script via python app.py.

sudo docker build -t kubermatic-dl:latest .

This instructs Docker to build a container image, tagged kubermatic-dl:latest, from the code in our current working directory.

sudo docker run -d -p 5000:5000 kubermatic-dl

This runs the container in the background and maps port 5000 on the host to port 5000 inside the container, where the Flask server is listening.

Status of Container

3. Upload the Model to Docker Hub

Before we can deploy the model on Kubernetes, we first need to make it publicly available. We will do this by adding it to DockerHub.

You will need to create a Docker Hub account if you don’t already have one.

sudo docker login
sudo docker tag <your image id> <your docker hub id>/<app name>

sudo docker push <your docker hub id>/<app name>

Uploading Deep Learning Model to Docker Hub

To check your image ID, simply run sudo docker images.

4. Deploy the Model to a Kubernetes Cluster Using Kubermatic Kubernetes Platform

First, we need to create a project on the Kubermatic Kubernetes Platform, then we create a Kubernetes cluster. You can find a quick start tutorial here.

Creating a Kubernetes Cluster with Kubermatic Kubernetes Platform

Once the cluster is created, download the kubeconfig that is used to configure access to your cluster, change into the directory where it was downloaded, and export it into your environment.
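The export step can look like the following; the download path is an assumption, so adjust it to wherever your browser saved the kubeconfig file:

```shell
# Hypothetical path: point KUBECONFIG at the downloaded kubeconfig file
export KUBECONFIG="$HOME/Downloads/kubeconfig"
```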

Exporting the kubeconfig into your environment

kubectl cluster-info

Checking the cluster information

Next, create a deployment.yaml file that describes the deployment. Note that the container port must be 5000, the port the Flask server listens on:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubermatic-dl-deployment
spec:
  selector:
    matchLabels:
      app: kubermatic-dl
  replicas: 3
  template:
    metadata:
      labels:
        app: kubermatic-dl
    spec:
      containers:
      - name: kubermatic-dl
        image: kubermatic00/kubermatic-dl:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 5000

Apply the deployment, expose it to the world through a load balancer, and check the resulting service:

kubectl apply -f deployment.yaml
kubectl expose deployment kubermatic-dl-deployment --type=LoadBalancer --port 80 --target-port 5000
kubectl get service

Check the services in order to determine the status of our deployment

Picture of a horse and a dog

Test API: Input picture is classified

It’s Aliiiiive!

Summary

In this tutorial, we created a deep learning model to be served as a REST API using Flask. We then put the application inside of a Docker container, uploaded the container to Docker Hub, and deployed it with Kubernetes. With just a few commands Kubermatic Kubernetes Platform deployed our app and exposed it to the world.

Chaimaa Zyani


Data Scientist