Kubernetes can be intimidating for beginners, but a structured, step-by-step deployment process makes it manageable. This guide walks through each step of deploying a Python application to a cluster.
We'll assume you already have a Python program named program.py, a Linux server (cloud or otherwise), and, besides Kubernetes itself, Docker installed. You probably already know how to use Docker, but it's worth revisiting how container images are built, which is where we'll start.
There are various ways to build a Docker container image. One of the most convenient tools is buildah. After installing it, create a directory for building the image and list your dependencies in a requirements.txt file. Here's a hypothetical example of such a file; replace the packages with whatever your program actually imports:
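# placeholder dependencies for illustration; list your own packages here
flask==2.3.3
requests==2.31.0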
Next, open and examine the Dockerfile, the configuration file that tells Docker how to build the container image. Pay attention to the following instructions (a complete example is assembled after this list):

FROM: Specifies the base image, which includes the Python interpreter. For example: FROM python:3.8.
RUN mkdir: Creates a directory inside the image, e.g., RUN mkdir /my_project.
WORKDIR: Sets the working directory path, e.g., WORKDIR /my_project.
ADD: Copies your project files into the image, e.g., ADD . /my_project/.
RUN pip install -r: Runs pip to install dependencies from the requirements file, e.g., RUN pip install -r requirements.txt.
EXPOSE: Opens a port, e.g., EXPOSE 8000.
CMD: Specifies the command to run the application, e.g., CMD ["python", "/my_project/program.py"].
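Assembled from the snippets above, the complete Dockerfile for this example would look like this (a sketch assuming program.py listens on port 8000, as in the EXPOSE line):

FROM python:3.8
RUN mkdir /my_project
WORKDIR /my_project
ADD . /my_project/
RUN pip install -r requirements.txt
EXPOSE 8000
CMD ["python", "/my_project/program.py"]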
With this setup, you'll have a my_project directory containing program.py, the dependencies file, and the Dockerfile. Now, build the image using buildah:
buildah bud -f ./Dockerfile
Copy the generated hash and use it in the following command:
buildah push <hash> docker-daemon:program:v0
Then, check the created container image:
docker image ls
And verify its functionality:
docker run --rm -d -v `pwd`:/my_project -p 8000:8000 program:v0
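To check the running container, send a request to the published port (this assumes program.py answers HTTP requests on port 8000):

curl http://localhost:8000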
If you see a "Hello" message, the image is ready. The next step is to push the container to a repository.
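The exact commands depend on your registry. For a Docker Hub-style registry, and using the repository name referenced later in deployment.yaml, the push would typically look like this (run docker login first if your registry requires it):

docker tag program:v0 <your login>/<your repository>:program
docker push <your login>/<your repository>:program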
To deploy a Kubernetes application, start by creating a deployment.yaml file, which will also be used to maintain the desired number of replicas. Here's a basic example of deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: program
  labels:
    app: program
spec:
  replicas: 3
  selector:
    matchLabels:
      app: program
  template:
    metadata:
      labels:
        app: program
    spec:
      containers:
      - name: program
        image: <your login>/<your repository>:program
        ports:
        - containerPort: 8000
          protocol: TCP
        resources:
          limits:
            memory: 840Mi
            cpu: 1
          requests:
            memory: 420Mi
            cpu: 500m
After saving the file, deploy the container with kubectl and verify its status:
kubectl create -f deployment.yaml
kubectl get pod -o wide
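You can also wait for the Deployment to finish rolling out; kubectl rollout status blocks until all replicas report ready:

kubectl rollout status deployment/program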
Pod IPs are not stable: they change whenever a pod is recreated or rescheduled onto another node. To give the application a fixed address inside the cluster, create a service.yaml file:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: program
  name: program
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8000
  selector:
    app: program
  type: ClusterIP
Create the Service so the pods are reachable at a stable address:
kubectl create -f service.yaml
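To confirm the Service picked up the pods, list it and, if you like, test it from your workstation with a port-forward (standard kubectl commands; the curl check again assumes the application answers plain HTTP):

kubectl get service program
kubectl port-forward service/program 8080:8080
curl http://localhost:8080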
Let's say we have an application written in the latest version of Python, and we'll call it newgenAI. Our task is to deploy this application in an already created Kubernetes cluster.
In the Dockerfile, we perform the following steps:
Specify the base image with the right Python version:
FROM python:3.11
Create a directory:
RUN mkdir /newgenAI
Set the path:
WORKDIR /newgenAI
Copy the project files into the image:
ADD . /newgenAI
Install dependencies:
RUN pip install -r requirements.txt
Open the port:
EXPOSE 9000
Start the application:
CMD ["python", "/newgenAI/newgenAI.py"]
Now, we have a directory named newgenAI containing newgenAI.py, the dependencies file, and the Dockerfile. Let's build the image:
buildah bud -f ./Dockerfile
buildah will output a hash in response, which you need to insert into the following command:
buildah push <insert hash here> docker-daemon:newgenAI:v0
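As in the first walkthrough, you can list the local images to confirm the push to the Docker daemon succeeded:

docker image ls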
Check the image's functionality:
docker run --rm -d -v `pwd`:/newgenAI -p 9000:9000 newgenAI:v0
After getting a "Hello" response, push the finished container image to your registry with docker push, using the repository name you will reference from deployment.yaml, for example docker push <your login>/<your repository>:newgenAI (make sure the image is tagged accordingly first, just as in the first walkthrough).
First, create and configure the deployment.yaml file based on the template provided earlier, paying particular attention to the allocated resources. Since we are working on a generative AI project, 1 CPU and 840 MiB of RAM may not be sufficient, so set values that match your workload and leave some headroom, as sketched below.
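As a purely illustrative sketch, the resources block from the earlier template could be raised to something like the following; the real values depend on your model and expected load:

resources:
  limits:
    memory: 4Gi
    cpu: 2
  requests:
    memory: 2Gi
    cpu: 1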
Now, deploy the created image using kubectl:
kubectl create -f deployment.yaml
Finally, check if it’s running:
kubectl get pod -o wide
The response should show a status of "Running."
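If a pod shows some other status instead, its events and logs usually point at the cause; kubectl describe and kubectl logs are the standard first checks (substitute a pod name from the previous command's output):

kubectl describe pod <pod name>
kubectl logs <pod name>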
Now you know how to deploy an application on Kubernetes and can tackle more complex tasks with K8s.