In this tutorial we will see how to set up a K3s cluster on Oracle Cloud. We will also configure a load balancer and an ingress controller, which are a bit tricky to set up on a bare-metal cluster.

Oracle gives away free ARM compute resources (4 vCPUs and 24GB RAM in total), which is enough to run a Kubernetes cluster in the cloud without spending a penny. You can launch two ARM VMs (splitting the resources between them, e.g. 12GB RAM and 2 vCPUs each) to use one node as the master and the other as a worker.

Prerequisites

  • Launch an ARM-based Ubuntu machine on Oracle Cloud with at least 2 vCPUs and 12GB RAM. Make sure to set up SSH keys for logging into the VM
  • Reserve a public IP address and attach it to the VM
  • Set up k3sup on your local machine (see the install sketch after this list). k3sup is an awesome tool for bootstrapping Kubernetes clusters with k3s
  • Allow the ports listed at https://rancher.com/docs/k3s/latest/en/installation/installation-requirements/#networking in your Oracle VCN security list
  • Allow port 6443 in iptables so the cluster can be reached from other machines. SSH into the remote machine and run
sudo iptables -I INPUT -p tcp -m tcp --dport 6443 -j ACCEPT
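
To install k3sup itself (referenced in the list above), the install script from the k3sup README is the usual route; this sketch assumes a Linux or macOS local machine:

curl -sLS https://get.k3sup.dev | sh   # downloads the k3sup binary into the current directory
sudo install k3sup /usr/local/bin/     # move it into your PATH
k3sup version                          # confirm the install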

Setup cluster

Assuming you have just spun up the VM and can SSH into it with your key, it's time to create the master node. Run the command below from your local machine:

k3sup install \
  --ip=<master-server-ip> \
  --user ubuntu \
  --sudo \
  --cluster \
  --k3s-channel=stable \
  --merge \
  --local-path $HOME/.kube/config \
  --context=oracle \
  --ssh-key ~/keys/oracle \
  --k3s-extra-args "--disable traefik --disable servicelb --kube-proxy-arg proxy-mode=ipvs --flannel-backend=none --disable-network-policy"

Change <master-server-ip> to your VM's reserved IP, and point --ssh-key at the SSH key for the server. That's it. Note that we also disabled a few bundled components (Traefik, the servicelb load balancer, and flannel) via --k3s-extra-args; we will replace them later in this tutorial. The command bootstraps the master node and, when done, merges the cluster into your local kubeconfig file and sets it as the default context.

$ kubectl config get-contexts
CURRENT   NAME                          CLUSTER          AUTHINFO           NAMESPACE
          docker-desktop                docker-desktop   docker-desktop
          kubernetes-admin@kubernetes   kubernetes       kubernetes-admin
*         oracle                        oracle           oracle             default
$ kubectl get nodes -owide
NAME        STATUS   ROLES                       AGE   VERSION        INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
k8s-master   NotReady   control-plane,etcd,master   1d    v1.22.7+k3s1   10.0.0.239    <none>        Ubuntu 20.04.4 LTS   5.13.0-1027-oracle   containerd://1.5.9-k3s1

If you look closely, you will notice the STATUS is NotReady. Don't worry: we will fix it shortly by installing the Calico network, then set up MetalLB and the Nginx ingress controller. Note the internal IP; in our case it is 10.0.0.239.
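
If you would rather extract the internal IP programmatically than read it off the table, a jsonpath query works (the node name k8s-master is taken from the output above):

kubectl get node k8s-master -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}'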

Calico CNI

We disabled flannel networking (--flannel-backend=none --disable-network-policy) while setting up the cluster because we will use Calico instead. I prefer Calico over flannel because it is packed with features such as NetworkPolicy. To install Calico:

kubectl apply -f https://projectcalico.docs.tigera.io/manifests/calico.yaml

Now we need to edit the calico-config ConfigMap and add an entry:

$ kubectl -n kube-system edit cm calico-config

Add the container_settings entry shown below inside the cni_network_config block (the surrounding ipam and policy blocks should already be there):

...
"ipam": {
    "type": "calico-ipam"
},
"container_settings": {
    "allow_ip_forwarding": true
},
"policy": {
    "type": "k8s"
},
...

Save and exit.
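
The calico-node pods render the CNI config at startup, so the edit may not take effect until they restart; a rollout restart of the DaemonSet is a safe way to force that:

kubectl -n kube-system rollout restart daemonset calico-node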

All the Calico resources should now be deployed and running:

$ kubectl get all -n kube-system | grep -i calico

pod/calico-kube-controllers-65898446b5-m8lrx   1/1     Running   0             1d
pod/calico-node-jhg4w                          1/1     Running   1 (1d ago)   1d
daemonset.apps/calico-node   1         1         1       1            1           kubernetes.io/os=linux   1d
deployment.apps/calico-kube-controllers   1/1     1            1           1d
replicaset.apps/calico-kube-controllers-65898446b5   1         1         1       1d

MetalLB

We also disabled Klipper (--disable servicelb), the load balancer solution bundled with k3s, because we will use MetalLB, a load balancer implementation for bare-metal clusters. I will use v0.12.1 and install it from the manifest:

$ kubectl create ns metallb-system
$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/dbe1a20bb820e2b99337fd658ee40a2bbb53df42/manifests/metallb.yaml
$ kubectl get pods -n metallb-system
NAME                              READY   STATUS    RESTARTS   AGE
pod/controller-66445f859d-wkscj   1/1     Running   0          1d
pod/speaker-6gqfg                 1/1     Running   0          1d

Now we need to apply the following ConfigMap to finalize the MetalLB installation:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.0.0.240/32 ## master's private IP (10.0.0.239) + 1 = (10.0.0.240)

In the addresses field, use your instance's internal IP + 1, so in my case 10.0.0.240/32. We just allocated 10.0.0.240 as our load balancer IP.
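
Save the manifest above to a file (metallb-config.yaml is just a placeholder name) and apply it:

kubectl apply -f metallb-config.yaml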

Nginx Ingress controller

Since we decided not to install the Traefik ingress controller (--disable traefik), we will set up the Nginx ingress controller instead. Why? It's just my personal preference: Traefik provides lots of extra features that I don't need; I just need ingress to work. Execute the command below to set up the ingress controller.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.0/deploy/static/provider/cloud/deploy.yaml

Make sure all ingress resources are running

$ kubectl get all -n ingress-nginx
pod/ingress-nginx-controller-5849c9f946-fhktj   1/1     Running     0          1d

Check the ingress-nginx-controller Service: its EXTERNAL-IP should be the LoadBalancer IP it received from MetalLB.
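
A quick way to verify (ingress-nginx-controller is the Service name created by the official manifest above):

kubectl get svc -n ingress-nginx ingress-nginx-controller
# EXTERNAL-IP should show 10.0.0.240, assigned by MetalLB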

Setup Iptables for forwarding traffic to MetalLB load balancer

The last step is to forward incoming traffic to the MetalLB IP (10.0.0.240) so that it can accept ARP responses. Recall that we set kube-proxy to ipvs mode (--kube-proxy-arg proxy-mode=ipvs) for handling the ARP requests.
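
If you want to confirm kube-proxy really is running in ipvs mode, you can list the IPVS virtual server table on the node; this assumes the ipvsadm package, which Ubuntu does not ship by default:

sudo apt-get install -y ipvsadm
sudo ipvsadm -Ln   # non-empty virtual service entries indicate ipvs mode is active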

By default, incoming traffic is routed to 10.0.0.239. We need to forward it to 10.0.0.240. The two addresses look like they are in the same subnet, but 10.0.0.240 is a virtual IP announced by MetalLB rather than an address attached to the VM's network interface, so traffic never reaches it on its own. We need DNAT forwarding:

{
 # DNAT traffic arriving at the node's private IP to the MetalLB virtual IP
 sudo iptables -t nat -A PREROUTING -d 10.0.0.239 -p tcp --dport 80 -j DNAT --to-destination 10.0.0.240:80
 sudo iptables -t nat -A PREROUTING -d 10.0.0.239 -p tcp --dport 443 -j DNAT --to-destination 10.0.0.240:443
 # Print the resulting ruleset for review
 sudo iptables-save
}
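
Note that iptables-save on its own only prints the current ruleset to stdout; the rules will not survive a reboot by themselves. One way to persist them on Ubuntu (my assumption, not part of the original setup) is the iptables-persistent package:

sudo apt-get install -y iptables-persistent
sudo netfilter-persistent save   # writes the rules to /etc/iptables/rules.v4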

At this stage you should have your personal Kubernetes cluster in the cloud, ready to roll. If you need to add a worker node, you can do it with k3sup in no time, as sketched below.
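
A minimal sketch of joining a second VM as a worker with k3sup; the worker IP is a placeholder and the key path assumes the same key as before:

k3sup join \
  --ip=<worker-server-ip> \
  --server-ip=<master-server-ip> \
  --user ubuntu \
  --ssh-key ~/keys/oracle \
  --k3s-channel=stable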

Testing

Let's test our configuration by creating a sample nginx Deployment and exposing it with a ClusterIP Service. Then we prepare an Ingress for accessing it via our master node's public IP.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80

Apply the above manifest, then try to visit your master node's public IP. If you see a 502 Bad Gateway error, apply the following iptables rules (a solution from Stack Overflow):

{
 # Set permissive default policies on the filter table, then flush its rules
 sudo iptables -P INPUT ACCEPT
 sudo iptables -P FORWARD ACCEPT
 sudo iptables -P OUTPUT ACCEPT
 sudo iptables -F
}

Now try visiting again. You should see the desired "Welcome to nginx!" page 😎
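
You can also check from the command line; replace the placeholder with your reserved public IP:

curl -i http://<master-public-ip>/
# expect HTTP/1.1 200 OK followed by the nginx welcome page HTML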

Uninstall

The command below will completely uninstall our k3s setup on the server:

$ sudo /usr/local/bin/k3s-uninstall.sh
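
If you joined worker nodes, k3s installs a separate uninstall script on agents; run it on each worker:

$ sudo /usr/local/bin/k3s-agent-uninstall.sh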