In this tutorial we will see how to set up a K3s cluster on Oracle Cloud. We will also configure a load balancer and an ingress controller, which are a bit tricky to set up on a bare-metal cluster.

Oracle offers free ARM compute resources (24GB RAM and 4 vCPUs), which is enough to run a Kubernetes cluster in the cloud without spending a penny. You can launch two ARM VMs (splitting the resources in two, e.g. 12+12GB RAM and 2+2 vCPUs) to use one node as the master and the other as a worker.


  • Launch an ARM-based Ubuntu machine on Oracle with at least 2 vCPUs and 12GB RAM. Make sure to set up SSH keys for logging into the VM
  • Reserve a public IP address and attach it to the VM
  • Set up k3sup on your local machine. k3sup is an awesome tool for bootstrapping Kubernetes clusters with k3s
  • Allow the required ports (at least 6443 for the Kubernetes API, plus 80 and 443 for ingress) in the Oracle VCN security list
  • Allow port 6443 in iptables so the cluster can be reached from other machines. SSH into the remote machine and run
sudo iptables -I INPUT -p tcp -m tcp --dport 6443 -j ACCEPT

Setup cluster

Assuming you have just spun up the VM and can SSH into it with your key, it's time to set up the master node. Run the command below from your local machine

k3sup install \
  --ip=<master-server-ip> \
  --user ubuntu \
  --sudo \
  --cluster \
  --k3s-channel=stable \
  --merge \
  --local-path $HOME/.kube/config \
  --context=oracle \
  --ssh-key ~/keys/oracle \
  --k3s-extra-args "--disable traefik --disable servicelb --flannel-backend=none --disable-network-policy --kube-proxy-arg proxy-mode=ipvs"

Change --ip to the reserved IP of your VM and point --ssh-key at the SSH key for the server. That's it. Note that --k3s-extra-args also disables some bundled services, which we will replace later in this tutorial. The above command bootstraps the master node and, when done, updates your local kubeconfig file and sets this cluster as the default context.

$ kubectl config get-contexts
CURRENT   NAME                          CLUSTER          AUTHINFO           NAMESPACE
          docker-desktop                docker-desktop   docker-desktop
          kubernetes-admin@kubernetes   kubernetes       kubernetes-admin
*         oracle                        oracle           oracle             default
$ kubectl get nodes -owide
k8s-master   NotReady   control-plane,etcd,master   1d   v1.22.7+k3s1   <none>   Ubuntu 20.04.4 LTS   5.13.0-1027-oracle   containerd://1.5.9-k3s1

If you look closely, you will notice the STATUS is NotReady. Don't worry, we will 'fix' it in a later step by installing the Calico network. Also note the node's internal IP, we will need it later. Now we will set up Calico, MetalLB and the Nginx ingress controller.

Calico CNI

We disabled flannel networking (--flannel-backend=none --disable-network-policy) while setting up the cluster because we will use Calico instead. I prefer Calico over flannel because it is packed with features such as NetworkPolicy. To install Calico, apply the manifest from the official Calico docs

kubectl apply -f <calico-manifest-url>

Now we need to edit the calico-config ConfigMap and add a few entries

$ kubectl -n kube-system edit cm calico-config

Add the following entries to the cni_network_config block

"ipam": {
    "type": "calico-ipam"
},
"container_settings": {
    "allow_ip_forwarding": true
},
"policy": {
    "type": "k8s"
},
Save and exit.
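The running calico-node pods do not pick up the edited ConfigMap on their own; a standard rollout restart (sketched below) re-creates them with the new CNI config:

```shell
# Re-create the calico-node pods so the updated cni_network_config takes effect
kubectl -n kube-system rollout restart daemonset/calico-node
kubectl -n kube-system rollout status daemonset/calico-node --timeout=120s
```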

We should see that all the Calico resources are deployed and running

$ kubectl get all -n kube-system | grep -i calico

pod/calico-kube-controllers-65898446b5-m8lrx   1/1     Running   0             1d
pod/calico-node-jhg4w                          1/1     Running   1 (1d ago)   1d
daemonset.apps/calico-node   1         1         1       1            1    1d
deployment.apps/calico-kube-controllers   1/1     1            1           1d
replicaset.apps/calico-kube-controllers-65898446b5   1         1         1       1d


MetalLB

We also disabled Klipper (--disable servicelb), the load balancer bundled with k3s, because we will use MetalLB, a bare-metal load balancer implementation. I will use v0.12.1 and install it by manifest

$ kubectl create ns metallb-system
$ kubectl apply -f <metallb-v0.12.1-manifest-url>
$ kubectl get pods -n metallb-system
NAME                              READY   STATUS    RESTARTS   AGE
pod/controller-66445f859d-wkscj   1/1     Running   0          1d
pod/speaker-6gqfg                 1/1     Running   0          1d

Now we need to apply the following ConfigMap to finalize the MetalLB installation

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - <master-private-ip>-<master-private-ip + 1>

In the addresses entry, use your instance's internal IP followed by the next consecutive address. This allocates two addresses for our load balancer to hand out.
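As a quick sanity check, the end of the range is just the host octet plus one. A tiny shell sketch (10.0.0.2 is a made-up example; substitute your node's private IP):

```shell
# Build a two-address MetalLB range from a private IPv4 address
# (assumes the next address in the /24 is free; 10.0.0.2 is a placeholder)
ip="10.0.0.2"
base="${ip%.*}"        # network part, e.g. 10.0.0
last="${ip##*.}"       # host octet, e.g. 2
echo "${ip}-${base}.$((last + 1))"   # prints 10.0.0.2-10.0.0.3
```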

Nginx Ingress controller

Since we decided not to install the Traefik ingress controller (--disable traefik), we will set up the Nginx ingress controller instead. Why? It's just my personal preference: Traefik provides lots of extra features, and I don't need those, I just need ingress to work. Execute the command below, using the bare-metal manifest from the ingress-nginx docs.

kubectl apply -f <ingress-nginx-bare-metal-manifest-url>

Make sure all ingress resources are running

$ kubectl get all -n ingress-nginx
pod/ingress-nginx-controller-5849c9f946-fhktj   1/1     Running     0          1d

In the EXTERNAL-IP column of the controller's service you should see that the ingress controller got a LoadBalancer IP from MetalLB.
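To check the assigned address (a sketch; the service name below matches the upstream ingress-nginx manifests and may differ in your setup):

```shell
# Show the LoadBalancer service; EXTERNAL-IP should be the MetalLB address
kubectl get svc -n ingress-nginx ingress-nginx-controller
```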

Setup iptables to forward traffic to the MetalLB load balancer

The last step is to forward incoming traffic to the MetalLB IP so that it can accept ARP responses. Recall that we set kube-proxy to ipvs mode (--kube-proxy-arg proxy-mode=ipvs) for handling ARP requests.

By default, incoming traffic is routed to the node's primary private IP. We need to forward it to the MetalLB load-balancer IP instead. The two addresses look like they are in the same subnet, but the MetalLB address is a virtual IP provided by MetalLB. Since it's virtual, it's not actually on the subnet, so we need forwarding

 sudo iptables -t nat -A PREROUTING -d <node-private-ip> -p tcp --dport 80 -j DNAT --to-destination <metallb-ip>
 sudo iptables -t nat -A PREROUTING -d <node-private-ip> -p tcp --dport 443 -j DNAT --to-destination <metallb-ip>
 sudo iptables-save

At this stage you should have your personal Kubernetes cluster in the cloud, ready to roll. If you need to add a worker node, you can do that with k3sup in no time.
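A hypothetical sketch of joining a worker with k3sup (the IPs are placeholders for your own values, and the flags mirror the install command above):

```shell
# Join a second VM to the cluster as a worker node
k3sup join \
  --ip <worker-server-ip> \
  --server-ip <master-server-ip> \
  --user ubuntu \
  --sudo \
  --ssh-key ~/keys/oracle \
  --k3s-channel stable
```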


Let’s test our configuration by creating a sample nginx deployment and exposing it via a ClusterIP service. Then we prepare an Ingress for accessing it via our master node’s public IP

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80

Apply the above manifest, then try visiting your master node's public IP. If you see a 502 Bad Gateway error, apply the following iptables rules (solution from Stack Overflow)

 sudo iptables -P INPUT ACCEPT
 sudo iptables -P FORWARD ACCEPT
 sudo iptables -P OUTPUT ACCEPT
 sudo iptables -F

Now try visiting again. You should see the desired "Welcome to nginx!" page 😎
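You can also check from a terminal on your local machine (the IP is a placeholder):

```shell
# Print only the status line; the nginx welcome page should return HTTP 200
curl -sI http://<master-public-ip>/ | head -n 1
```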


The command below, the uninstall script that k3s drops into /usr/local/bin on the server, will completely remove our k3s setup

$ sudo /usr/local/bin/