Deploying Applications on Civo Kubernetes
Deploying applications in Kubernetes allows you to easily manage, scale, and maintain your workloads. In this guide, we will walk through deploying a simple Nginx web server on Civo Kubernetes. We will also explore key concepts behind the deployment process and why Kubernetes makes application management easier.
Understanding Kubernetes Deployments
A Deployment in Kubernetes helps manage and maintain a set of application instances. It ensures the correct number of replicas are running and allows for easy updates or rollbacks. Kubernetes takes care of:
- Automatic Scheduling – Distributes workloads efficiently across cluster nodes.
- Self-Healing – Restarts failed pods automatically.
- Scaling – Easily scale applications up or down.
- Rolling Updates – Deploy new versions with zero downtime.
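As a sketch of what a rolling update and rollback look like in practice (assuming a Deployment named my-app already exists, as created later in this guide, and using nginx:1.27 as an illustrative new image tag):

```shell
# Trigger a rolling update by changing the container image
kubectl set image deployment/my-app my-container=nginx:1.27

# Watch the rollout replace pods one at a time
kubectl rollout status deployment/my-app

# Roll back to the previous revision if the update misbehaves
kubectl rollout undo deployment/my-app
```

Because the Deployment replaces pods gradually, some replicas keep serving traffic throughout the update.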
Prerequisites
Before you begin, ensure you have:
- A Civo account – Sign up here
- Civo CLI installed – Installation guide
- kubectl installed – Install kubectl
- A running Kubernetes cluster – If you haven't set up a cluster yet, follow this guide to create one.
Deploying a Simple Application
Let's deploy Nginx, a popular web server, on Civo Kubernetes.
1. Create a Deployment YAML File
Kubernetes uses YAML configuration files to define applications. Create a file called my-app.yaml and add the following content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container
        image: nginx:latest
        ports:
        - containerPort: 80
Explanation:
- Defines a Deployment named my-app.
- Runs 2 replicas of the application to ensure high availability.
- Uses the latest Nginx image from Docker Hub (for production, pinning a specific version tag is safer than latest).
- Opens port 80 to allow traffic to reach the application.
- Ensures automatic rescheduling of pods if any instance fails.
2. Apply the Deployment
Run the following command to create the deployment in Kubernetes:
kubectl apply -f my-app.yaml
What happens here?
- Kubernetes reads the YAML file and creates a Deployment object.
- It schedules two pods (containers) across available nodes in your cluster.
- The Deployment ensures that two replicas of Nginx remain running at all times.
3. Verify the Deployment
To check if the pods are running successfully, execute:
kubectl get pods
Expected output:
NAME                      READY   STATUS    RESTARTS   AGE
my-app-6b9dc6b87d-v8x9p   1/1     Running   0          10s
my-app-6b9dc6b87d-lm2cq   1/1     Running   0          10s
If the pods are still initializing, wait a few moments and check again.
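If a pod stays stuck in a non-Running state, a few standard kubectl commands help diagnose it (substitute <pod-name> with a name from the kubectl get pods output):

```shell
# Stream pod status changes until both replicas are Running
kubectl get pods -w

# Show scheduling events and container state for a problem pod
kubectl describe pod <pod-name>

# Tail the container logs for application-level errors
kubectl logs <pod-name>
```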
4. Expose the Application
To make the application accessible from the internet, create a LoadBalancer service:
kubectl expose deployment my-app --type=LoadBalancer --port=80
Explanation:
- Creates a Service that allows external traffic to reach the my-app deployment.
- The LoadBalancer type ensures that the application gets a publicly accessible IP address.
- Traffic is automatically balanced across all running replicas of the application.
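The same Service can also be written declaratively instead of using kubectl expose. A minimal sketch equivalent to the command above (the file name my-app-service.yaml is an assumption for this example):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app        # matches the pod labels from the Deployment
  ports:
  - port: 80           # port exposed by the load balancer
    targetPort: 80     # containerPort on the Nginx pods
```

Apply it with kubectl apply -f my-app-service.yaml; keeping the Service in a file makes it easier to version-control alongside the Deployment.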
5. Get the Service URL
Run the following command to retrieve the external IP of your application:
kubectl get svc
Expected output:
NAME     TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)        AGE
my-app   LoadBalancer   10.43.210.1   192.168.1.100   80:32567/TCP   2m
Once the EXTERNAL-IP is assigned, you can access your app in a browser:
http://<EXTERNAL-IP>
If the external IP is <pending>, wait a few minutes and try again.
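From the command line, you can wait for the address to be assigned and then confirm Nginx is serving traffic (substituting the EXTERNAL-IP shown for your own cluster):

```shell
# Re-check until EXTERNAL-IP moves from <pending> to a real address
kubectl get svc my-app -w

# Then fetch the default Nginx welcome page through the load balancer
curl http://<EXTERNAL-IP>/
```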
Scaling the Application
One of the key benefits of Kubernetes is easy scaling. You can increase or decrease the number of replicas running your application with a single command:
kubectl scale deployment my-app --replicas=5
This updates your deployment to run five instances instead of two, ensuring higher availability and better traffic handling.
To verify the change, run:
kubectl get pods
You should now see five running pods instead of two.
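Scaling can also be automated. As a sketch, the following sets up a Horizontal Pod Autoscaler for the same Deployment (the thresholds here are illustrative, not recommendations):

```shell
# Scale between 2 and 5 replicas based on average CPU utilization
kubectl autoscale deployment my-app --min=2 --max=5 --cpu-percent=80
```

Note that CPU-based autoscaling requires a metrics server to be running in the cluster.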
Why Use Kubernetes for Deployment?
Kubernetes provides several advantages for deploying and managing applications:
- Self-Healing – Automatically restarts failed containers to maintain availability.
- Scalability – Easily increase or decrease the number of running instances.
- Load Balancing – Distributes traffic evenly between application replicas.
- Rolling Updates – Deploy updates without downtime.
- Resource Efficiency – Optimizes CPU and memory usage across nodes.