Usually, I use a three-node cluster on my local machine as a testing environment before deploying workloads or making changes in production (Azure Kubernetes Service, Google Kubernetes Engine, or Amazon Elastic Kubernetes Service). For this testing environment I prefer k3s as the Kubernetes engine and nginx as the ingress controller: it is fast, lightweight, and easy to install. k3s ships with the Traefik ingress controller by default, and I will explain how to replace it with nginx.
I have three Hyper-V VMs with Ubuntu 22.04 installed, but you can start from any supported Linux OS – most of them work well with k3s, although some (RHEL, CentOS, and Raspberry Pi OS) need a couple of additional setup steps.
How to set static IP on Ubuntu?
Let’s start from scratch. First of all, I edit the network configuration and set a static IP on each Ubuntu VM by making changes in /etc/netplan/00-installer-config.yaml:
# This is the network config written by 'subiquity'
network:
  ethernets:
    eth0:
      addresses:
      - 172.28.50.1/20
      nameservers:
        addresses: [8.8.8.8, 8.8.4.4]
      routes:
      - to: default
        via: 172.28.48.1
  version: 2
And run:
sudo netplan apply
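If you are changing the address over SSH, netplan try is a safer alternative to netplan apply: it applies the configuration temporarily and rolls it back automatically unless you confirm within a timeout, so a typo in the config won’t lock you out.

```shell
# Apply the new config; revert automatically after 120 s unless confirmed
sudo netplan try --timeout 120
```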
How to create Kubernetes cluster?
Secondly, run the first cluster node with the --cluster-init and --disable traefik flags, because I plan to install the nginx ingress controller on the cluster.
curl -sfL https://get.k3s.io | K3S_TOKEN=MyOwnSuperSecret sh -s - server --cluster-init --disable traefik
The output should look like this:
[INFO] Finding release for channel stable
[INFO] Using v1.25.3+k3s1 as release
[INFO] Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.25.3+k3s1/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.25.3+k3s1/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Skipping installation of SELinux RPM
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Creating /usr/local/bin/ctr symlink to k3s
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s.service
[INFO] systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO] systemd: Starting k3s
Next, let’s run kubectl get node and check that our first node is installed and ready:
root@node1:/home/anton# kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
node1 Ready control-plane,etcd,master 4m v1.25.3+k3s1 172.28.50.1 <none> Ubuntu 22.04.1 LTS 5.15.0-52-generic containerd://1.6.8-k3s1
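Before joining the other nodes, it is handy to know where k3s keeps its credentials. By default the kubeconfig is written to /etc/rancher/k3s/k3s.yaml, and the server also stores a generated join token on disk, which can be used instead of the K3S_TOKEN value:

```shell
# Kubeconfig used by the bundled kubectl symlink
sudo cat /etc/rancher/k3s/k3s.yaml | head -n 5

# Join token generated by the server (can be passed as K3S_TOKEN on the other nodes)
sudo cat /var/lib/rancher/k3s/server/node-token
```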
How to add another node to Kubernetes cluster?
Now we are ready to join the other machines to the cluster. Let’s do it with the following command (also with the --disable traefik flag):
curl -sfL https://get.k3s.io | K3S_TOKEN=MyOwnSuperSecret sh -s - server --server https://<ip or hostname of the first server>:6443 --disable traefik
The output should look similar to the first run, but now kubectl get node shows that all three nodes are in the cluster and ready to use:
root@node1:/home/anton# kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
node1 Ready control-plane,etcd,master 14m v1.25.3+k3s1 172.28.50.1 <none> Ubuntu 22.04.1 LTS 5.15.0-52-generic containerd://1.6.8-k3s1
node2 Ready control-plane,etcd,master 5m48s v1.25.3+k3s1 172.28.50.2 <none> Ubuntu 22.04.1 LTS 5.15.0-52-generic containerd://1.6.8-k3s1
node3 Ready control-plane,etcd,master 4m44s v1.25.3+k3s1 172.28.50.3 <none> Ubuntu 22.04.1 LTS 5.15.0-52-generic containerd://1.6.8-k3s1
How to install Nginx Ingress Controller?
The cluster is ready to run apps but still has no ingress controller. Let’s apply the manifest with the latest version of the nginx ingress controller (currently 1.4.0).
# Install latest version of ingress nginx controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.4.0/deploy/static/provider/cloud/deploy.yaml
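The controller pod takes a little while to come up, so before running the version check below it is worth waiting until it reports Ready; the selector here matches the labels set by the manifest we just applied:

```shell
# Block until the ingress-nginx controller pod is Ready (up to 120 s)
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=120s
```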
And check the installed version with the following commands:
POD_NAMESPACE=ingress-nginx
POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app.kubernetes.io/name=ingress-nginx --field-selector=status.phase=Running -o name)
kubectl exec $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version
The correct output should look like this:
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: v1.4.0
Build: 50be2bf95fd1ef480420e2aa1d6c5c7c138c95ea
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.19.10
-------------------------------------------------------------------------------
Just to be sure that everything is installed correctly, let’s describe our new ingress controller service:
root@node1:/home/anton# kubectl describe service ingress-nginx-controller -n ingress-nginx
Name: ingress-nginx-controller
Namespace: ingress-nginx
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
app.kubernetes.io/version=1.4.0
Annotations: <none>
Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.43.97.189
IPs: 10.43.97.189
LoadBalancer Ingress: 172.28.50.1, 172.28.50.2, 172.28.50.3
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 31117/TCP
Endpoints: 10.42.2.4:80
Port: https 443/TCP
TargetPort: https/TCP
NodePort: https 30331/TCP
Endpoints: 10.42.2.4:443
Session Affinity: None
External Traffic Policy: Local
HealthCheck NodePort: 31772
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 2m30s service-controller Ensuring load balancer
Normal AppliedDaemonSet 2m30s service-controller Applied LoadBalancer DaemonSet kube-system/svclb-ingress-nginx-controller-2551e15b
Normal UpdatedLoadBalancer 2m22s service-controller Updated LoadBalancer with new IPs: [] -> [172.28.50.1]
Normal UpdatedLoadBalancer 2m16s service-controller Updated LoadBalancer with new IPs: [172.28.50.1] -> [172.28.50.1 172.28.50.3]
Normal UpdatedLoadBalancer 2m10s service-controller Updated LoadBalancer with new IPs: [172.28.50.1 172.28.50.3] -> [172.28.50.1 172.28.50.2 172.28.50.3]
As you can see, the IP addresses of all cluster nodes (172.28.50.1, 172.28.50.2, 172.28.50.3) were successfully added to the ingress controller.
Create an example deployment for testing
It’s time to add an example deployment and test our environment. I’ve created a simple manifest that creates a deployment, a service, and an ingress for testing.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
spec:
  selector:
    matchLabels:
      name: nginx-app
  template:
    metadata:
      labels:
        name: nginx-app
    spec:
      containers:
      - name: nginx-container
        image: nginx
        imagePullPolicy: Always
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    name: nginx-app
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: test.borisov.cloud
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 80
Run the following command to apply this manifest directly from my GitHub:
kubectl apply -f https://raw.githubusercontent.com/AntonBorisovRu/borisov.cloud/main/k3s-install-with-nginx-ingress/example-deployment.yaml
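A quick way to confirm that all three objects from the manifest were created (the names are exactly as defined above):

```shell
kubectl get deployment nginx-app
kubectl get service nginx-service
kubectl get ingress nginx-ingress
```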
Now you can add the line:
<one of your nodes IP> test.borisov.cloud
to your local machine’s hosts file (/etc/hosts on Linux or c:\windows\system32\drivers\etc\hosts on Windows) and try to open test.borisov.cloud in a browser.
If you see the “Welcome to nginx” page – everything works well.
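If you prefer not to touch the hosts file, curl can map the hostname to a node IP for a single request; here I assume 172.28.50.1 (node1) is reachable from your machine:

```shell
# --resolve pins test.borisov.cloud:80 to the given address for this request only
curl --resolve test.borisov.cloud:80:172.28.50.1 http://test.borisov.cloud/
```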
Also, you can scale up the example deployment and check that the pods are spread across the nodes of the cluster:
root@node1:/home/anton# kubectl scale --replicas=3 deployment/nginx-app
deployment.apps/nginx-app scaled
root@node1:/home/anton# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-app-545fdd5bc4-ktthr 1/1 Running 0 11m 10.42.1.7 node3 <none> <none>
nginx-app-545fdd5bc4-nrbdn 1/1 Running 0 27s 10.42.2.6 node2 <none> <none>
nginx-app-545fdd5bc4-z2qqx 1/1 Running 0 27s 10.42.0.6 node1 <none> <none>
All the mentioned code and commands can be found in my GitHub repository.
Thank you very much!