Until recently, most software teams were busy learning and implementing the traditional Software Development Life Cycle (SDLC), in which developers and engineers move from requirement elicitation through to product release and maintenance.
However, in the dynamic world of machine learning (ML), the journey from development to deployment and production is rarely so linear. ML projects are intricate and multifaceted, typically combining trained models, data pipelines, and cloud infrastructure for deployment.
That is where companies try to simplify AI/ML production and deployment using user-friendly web frameworks. Frameworks like Streamlit and Flask and tools like Kubernetes, Docker, and Helm streamline product deployment significantly.
This article is a quick guide on what these frameworks are. It also highlights how these frameworks and tools help in ML deployment.
What is Docker?
Docker is a software development platform that offers a rich toolset to simplify creating, testing, building, deploying, and managing apps. It runs applications in isolated environments called containers. A container is a lightweight, virtualized environment that packages an app and its dependencies (libraries, configuration files, databases, binaries, etc.) into a single unit.
Complex artificial intelligence and machine learning applications often rely on tools like Docker to bind all the project components together. Docker ensures the application runs consistently across environments, from development to production, while supporting scalability, versioned environments, and efficient resource utilization.
Building and Deploying ML projects using Docker
To build and deploy a machine learning project using Docker, start by creating a Dockerfile for your Python ML project:
```dockerfile
# Use an official Python runtime as the base image
FROM python:3.8-slim

# Define the working directory within the container
WORKDIR /usr/src/app

# Install essential build tools
RUN apt-get update && \
    apt-get install -y build-essential

# Copy requirements.txt into the Docker container
COPY requirements.txt .

# Install all Python dependencies the ML project needs
RUN pip install --no-cache-dir -r requirements.txt

# Run the program when the container starts
CMD ["python", "your_code.py"]
```
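The Dockerfile above assumes an entry-point script named your_code.py. As a hedged sketch of what that script might contain, here is a minimal stand-in in which a toy linear function replaces a real trained model (the weights and bias are illustrative, not learned from data):

```python
# your_code.py — minimal stand-in for the container's entry point.
# A real project would load a trained model (e.g. from a pickle file);
# here a toy linear "model" keeps the sketch self-contained.

def predict(features):
    """Return a score from a toy linear model: w . x + b."""
    weights = [0.4, 0.6]  # hypothetical "learned" weights
    bias = 0.1            # hypothetical "learned" bias
    return sum(w * x for w, x in zip(weights, features)) + bias

if __name__ == "__main__":
    # Print a prediction so the container produces visible output.
    print(predict([1.0, 2.0]))
```

In practice, this file would instead load your serialized model and expose it behind an API or run it as a batch job.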
Next, we have to build a Docker image. To create it, run the following commands:
```shell
# Build the image. The -t flag assigns a name (tag) to the Docker image,
# and the "." at the end is the build context containing your Dockerfile.
$ docker build -t ml-python-image .

# Once the image is built, run a Docker container from it:
$ docker run --name ml-python-container ml-python-image

# With the container running, you can share the image through Docker Hub,
# a cloud service that lets developers share container images:
$ docker login
$ docker tag ml-python-image:latest project_username/ml-python-image:latest

# Lastly, push the image using the docker push command:
$ docker push project_username/ml-python-image:latest
```
What is Kubernetes?
Kubernetes is another popular tool, used for container orchestration. It is also known as “k8s” or “kube.” It helps developers deploy complex machine learning projects through containers. This container orchestration platform can schedule and automate deployments, and it makes managing and scaling containerized workloads easy.
Kubernetes can manage a node/machine, cloud instance, or VM, and can even cluster them into a group of nodes. It organizes workloads into small deployable units called pods, and ML applications are deployed in pods. Within a pod, one can host multiple containers that work as a single unit. To set up Kubernetes, you can run it on your local machine through Minikube or configure it on a cloud platform.
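To illustrate the pod concept, here is a minimal sketch of a Pod manifest hosting two containers that work as a single unit; the pod name, image names, and sidecar role are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ml-inference-pod        # hypothetical pod name
spec:
  containers:
  - name: model-server          # container serving model predictions
    image: username/ml-model:v1
    ports:
    - containerPort: 5000
  - name: metrics-sidecar       # hypothetical sidecar exporting metrics
    image: username/metrics-agent:v1
```

In day-to-day work you rarely create bare pods directly; a Deployment (as in the steps below) manages pods for you.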
Deploying ML models on Kubernetes
Let us explore the steps used to deploy a machine-learning model on Kubernetes.
1. For deploying any ML model, you have to containerize it. Any containerization tool (like Docker) can help you do that. Build and push that Docker image.
2. Next, you must construct a deployment configuration for Kubernetes by defining a “deployment.yaml” file.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ml-model-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ml-model
  template:
    metadata:
      labels:
        app: ml-model
    spec:
      containers:
      - name: ml-model
        image: username/ml-model:v1
        ports:
        - containerPort: 5000
```
3. Then, use the “kubectl” command to apply the deployment to your Kubernetes cluster.
$ kubectl apply -f deployment.yaml
4. Now, to make your model accessible, expose it using a service. The command will be:
$ kubectl expose deployment ml-model-deployment --type=LoadBalancer --port=80 --target-port=5000
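The kubectl expose command above is shorthand for creating a Service object. A roughly equivalent manifest, sketched using the same names, labels, and ports as the deployment above, would look like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ml-model-deployment   # kubectl expose derives this from the deployment name
spec:
  type: LoadBalancer
  selector:
    app: ml-model             # matches the pod labels in deployment.yaml
  ports:
  - port: 80                  # port the service listens on
    targetPort: 5000          # container port that receives the traffic
```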
5. Then, you have to retrieve the service URL for accessing your machine-learning models.
$ kubectl get services
6. Once your Kubernetes-based deployment is ready for use and access, you can scale it through Kubernetes. The command is:
$ kubectl scale deployment ml-model-deployment --replicas=5
What is Helm?
Helm is a post-development tool that helps ML developers install, define, and upgrade apps running on Kubernetes. It automates configuring, packaging, and deploying Kubernetes applications by integrating all configuration files into a single reusable unit.
ML developers leverage Helm because it enables quick deployment, upgrade, and management of large and complex applications on Kubernetes. Helm uses packaged applications called “charts”: packages of pre-configured Kubernetes resources. So, rather than managing numerous Kubernetes manifests, developers can handle everything through a single Helm chart.
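A Helm chart is just a directory with a conventional layout. A minimal sketch might look like the following (the chart name is hypothetical; the file names follow Helm's defaults):

```text
ml-model-chart/
├── Chart.yaml          # chart name, version, and description
├── values.yaml         # default configuration values
└── templates/
    ├── deployment.yaml # templated Deployment manifest
    └── service.yaml    # templated Service manifest
```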
Steps for Deploying ML models using Helm
Let’s suppose you have an ML model for sales forecasting and prediction. Here are the steps to deploy the ML model using Helm.
- First, you must prepare the model by training it and wrapping it in an API (using Streamlit, Flask, etc.). Streamlit is an open-source application development framework designed for data science and machine learning projects. Flask is another framework that offers various libraries to develop lightweight web apps in Python.
- Next, you should dockerize your ML model by creating a Docker image.
- Then, build a Helm chart by structuring the Helm chart for your ML model. In this phase, provide all necessary dependencies and resources, such as Deployments, Services, and ConfigMaps.
- Finally, to deploy the ML model, install the chart using the command “helm install [release-name] [chart-path]” (without quotes). Kubernetes will create the resources defined in the chart.
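Inside a chart's templates, the manifests reference values from values.yaml, which is what lets one chart serve many configurations. A hedged sketch of how the container image might be templated (the value names are illustrative, though they follow a common Helm convention):

```yaml
# templates/deployment.yaml (excerpt)
containers:
- name: ml-model
  image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"

# values.yaml
image:
  repository: username/ml-model
  tag: v1
```

At install time, Helm renders the template with the values, producing the plain Kubernetes manifests that get applied to the cluster.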
Today, almost every company is leveraging the power of machine learning. So, seamlessly deploying ML models demands a holistic approach. The emergence of powerful tools and frameworks like Docker, Kubernetes, and Helm has revolutionized machine learning deployment. These technologies have collectively addressed the unique challenges of transitioning from development to production in the dynamic landscape of AI and ML.
In this landscape, VE3 stands out as a game-changing solution that complements Docker, Kubernetes, and Helm. We integrate seamlessly with these tools, adding a layer of intelligence to the deployment process. With advanced capabilities in monitoring, resource optimization, and predictive scaling, we ensure that machine learning deployments run efficiently and adapt dynamically to changing workloads. Streamlined deployment, scalability, efficient resource utilization, and intelligent adaptation make AI integration a seamless and effective reality in today’s tech landscape.