Harnessing the Power of Kubernetes on Suble.io: A Comprehensive Setup Guide


Welcome to our detailed guide on setting up Kubernetes on Suble.io. As you've decided to embark on this journey, you probably already know the immense potential Kubernetes holds for managing containerized applications at scale. But, if you're still wrapping your head around the basics, we're here to shed some light before diving into the setup process.

Kubernetes, also known as K8s, is an open-source platform designed to automate deploying, scaling, and managing containerized applications. Containers allow you to bundle your software with all its dependencies, which leads to efficient, reliable, and fast deployments. Kubernetes takes it a step further by managing a cluster of machines and orchestrating containers across them. From handling failover for your applications to providing a consistent environment for deployment, Kubernetes offers a range of advantages that help organizations streamline their operations.

Now, let's talk about our platform - Suble.io. We provide you with the necessary virtualized environment to host your applications, whether they are small-scale projects or large enterprise solutions. For this guide, we recommend setting up Kubernetes on at least three VMs under the 'Mega Package' (4GB each) to ensure you have the necessary resources for the cluster to function effectively. We also suggest assigning one floating IP address for seamless network access and flexibility.

Through this blog post, we'll walk you through the step-by-step process of configuring Kubernetes on Suble.io, with clear instructions and useful tips. Whether you're an experienced developer looking to expand your toolkit or a newcomer in the world of container orchestration, this guide aims to provide you with the know-how to get your Kubernetes cluster up and running.

Stay tuned as we explore this exciting journey of setting up Kubernetes on Suble.io, making the most of containerization and simplifying application management. So, let's get started!

Terminology and Notation

local$  <command>  # This command must be executed on your local computer
all$    <command>  # This command must be executed on all servers as root
master$ <command>  # This command must be executed on the master server as root
worker$ <command>  # This command must be executed on all worker servers as root

Step 1 - Preparing Resources

Create three VM instances; we recommend at least 4 GB of memory each (the Mega package). In this guide we will be using Ubuntu 20.04.

Order one floating IP (optional, but highly recommended); it will be used for our load balancer later on.

Step 2 - Install containerd and Kubernetes Packages

In this second step, we will focus on installing the foundational pieces of our Kubernetes cluster - containerd and Kubernetes packages. We will use kubeadm, a tool that simplifies the installation process and establishes a secure and robust cluster quickly and efficiently.

Containerd, an industry-standard container runtime, is the underlying layer that Kubernetes will utilize for executing containers. It's vital to configure containerd correctly, particularly if you are installing on a distribution that employs systemd as its init system. We will ensure containerd is set up to use the systemd cgroups, providing smoother integration and management of system processes.

On each server, we will be fetching the containerd.service unit file. This step ensures containerd is correctly recognized and managed by systemd, allowing the whole system to run more harmoniously.

By the end of this step, you'll have containerd installed and Kubernetes packages ready, setting up a strong foundation for our Kubernetes cluster. Let's get started!

all$ wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
all$ mv containerd.service /usr/lib/systemd/system/

Next, refresh the systemd unit files on every server.

all$ systemctl daemon-reload

Subsequently, on all servers, run the ensuing commands to install the necessary packages.

all$ wget https://github.com/containerd/containerd/releases/download/v1.6.2/containerd-1.6.2-linux-amd64.tar.gz
all$ tar Czxvf /usr/local containerd-1.6.2-linux-amd64.tar.gz
all$ systemctl enable --now containerd
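
Before moving on, it's worth confirming that the service came up. `systemctl` and containerd's own `ctr` CLI (shipped in the tarball you just extracted) can both be used:

```shell
all$ systemctl is-active containerd
all$ ctr version
```

If `ctr version` prints both client and server versions, containerd is running and reachable.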

Proceed to install runc.

all$ wget https://github.com/opencontainers/runc/releases/download/v1.1.6/runc.amd64
all$ install -m 755 runc.amd64 /usr/local/sbin/runc

Go ahead and set up the CNI plugins.

all$ wget https://github.com/containernetworking/plugins/releases/download/v1.2.0/cni-plugins-linux-amd64-v1.2.0.tgz
all$ mkdir -p /opt/cni/bin
all$ tar Czxvf /opt/cni/bin cni-plugins-linux-amd64-v1.2.0.tgz

Configure config.toml

all$ mkdir -p /etc/containerd/
all$ containerd config default | tee /etc/containerd/config.toml

For employing the systemd cgroup driver with runc, edit the /etc/containerd/config.toml file and set SystemdCgroup to true in the runc options section:

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
all$ systemctl restart containerd
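
If you would rather script the change than edit the file by hand, a one-line sed does the same thing. This is a sketch: the CONFIG variable is a hypothetical override (used so the snippet can be exercised against a scratch copy of the file); on the servers it falls back to the real path used in this guide.

```shell
# Flip the cgroup driver flag in containerd's config and confirm the change.
# CONFIG is a hypothetical override for testing on a copy; it defaults to
# the real path used in this guide.
CONFIG="${CONFIG:-/etc/containerd/config.toml}"
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' "$CONFIG"
grep -c 'SystemdCgroup = true' "$CONFIG"
```

Remember to restart containerd afterwards if you use this route.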

Proceed with the installation of Kubernetes.

all$ curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
all$ cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://packages.cloud.google.com/apt/ kubernetes-xenial main
EOF
all$ apt-get update
all$ apt-get install -y kubeadm kubectl kubelet

Step 3 - Adjusting the sysctl Configurations

It's crucial to ensure that your system can effectively route traffic between the nodes and pods. To accomplish this, apply the following sysctl settings on every server.

all$ cat <<EOF | tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
all$ modprobe overlay
all$ modprobe br_netfilter
all$ cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward                = 1
net.ipv6.conf.default.forwarding   = 1
EOF
all$ sysctl --system

Ensuring these settings are in place is necessary to facilitate the forwarding of IPv4 and IPv6 packets across various network interfaces. This is a vital requirement given that each container operates with its own virtual network interface. Without this configuration, the communication between containers and external networks could be hindered, which would disrupt the overall functioning and efficiency of your Kubernetes setup.
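
To double-check that the settings took effect, query them directly. The two lines below the command show the expected output (the bridge key only resolves once the br_netfilter module loaded above is present):

```shell
all$ sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
```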

Step 4 - Deploying and configuring the control plane

In this step, we'll focus on deploying and setting up the control plane. The control plane, also known as the master node in Kubernetes, is essentially the brain of your cluster. It's responsible for managing the state of the cluster, scheduling applications, and responding to cluster events.

We'll guide you through the steps necessary to get your control plane up and running, taking into account factors like security, resilience, and load balancing. You'll learn how to initiate the control plane, adjust its configuration parameters to suit your specific needs, and verify its successful deployment.

By the end of this step, you'll have a fully functional control plane ready to orchestrate your containerized applications across the worker nodes in your Kubernetes cluster. This step is critical as a properly configured control plane is key to the smooth running of your Kubernetes environment on Suble.io. Let's dive in!

It's crucial to execute these commands solely on your designated master node.

master$ kubeadm config images pull
master$ kubeadm init \
  --pod-network-cidr=10.244.0.0/16 \
  --kubernetes-version=v1.27.1 \
  --ignore-preflight-errors=NumCPU \
  --upload-certs

During the 'kubeadm init' operation, a 'kubeadm join' command will be displayed. It's advisable to keep a copy of this command for future use, although it isn't strictly necessary as a new token can always be generated when required.
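
For reference, the printed command has the following shape; the address, token, and hash here are placeholders, not real values:

```shell
worker$ kubeadm join <master-ip>:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>
```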

Once the initialization is completed, proceed with setting up the essential master components within the cluster. For convenience and ease of use, we recommend configuring the root user's kubeconfig to utilize the admin config of the Kubernetes cluster.

master$ mkdir -p /root/.kube
master$ cp -i /etc/kubernetes/admin.conf /root/.kube/config

Next, set up the cluster networking with Flannel.

master$ kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

When Kubernetes is activated with the external cloud provider flag, uninitialized nodes will be marked with a taint. Therefore, it's essential to patch the cluster-critical pods to tolerate these taints for seamless operation.

master$ kubectl -n kube-flannel patch ds kube-flannel-ds --type json -p '[{"op":"add","path":"/spec/template/spec/tolerations/-","value":{"key":"node.cloudprovider.kubernetes.io/uninitialized","value":"true","effect":"NoSchedule"}}]'
master$ kubectl -n kube-system patch deployment coredns --type json -p '[{"op":"add","path":"/spec/template/spec/tolerations/-","value":{"key":"node.cloudprovider.kubernetes.io/uninitialized","value":"true","effect":"NoSchedule"}}]'

Congratulations! Your control plane is now set up and ready to go. To use kubectl from your local computer, copy the /etc/kubernetes/admin.conf file from the master node to ${HOME}/.kube/config on your local machine. This will allow you to manage your Kubernetes cluster directly from your local machine, providing greater flexibility and ease of use as you administer your applications. Remember, effective cluster management is just a command away!
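
One way to do the transfer is with scp. This sketch assumes root SSH access to the master node; <master-ip> is a placeholder for your master node's address:

```shell
local$ mkdir -p ${HOME}/.kube
local$ scp root@<master-ip>:/etc/kubernetes/admin.conf ${HOME}/.kube/config
local$ kubectl get nodes
```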

Step 5 - Incorporating Additional Nodes into the Cluster

In this essential step, we will be expanding the capacity of your Kubernetes cluster by joining additional nodes. Adding more nodes to your cluster is a critical operation for distributing workloads and managing traffic effectively, ensuring your applications run smoothly and reliably.

Remember the 'kubeadm join' command that was generated during the initialization of the master node? We'll be using that command now to connect the remaining nodes to our cluster. Each node will be provided with a unique token and certificate hash to ensure secure communication with the master node.

By the end of this step, your additional nodes will be fully integrated into your Kubernetes cluster, ready to handle the distributed container workloads. This step is crucial in building a robust and scalable cluster that can cater to growing application demands.

So, let's dive in and expand the reach of our Kubernetes cluster by bringing in the additional nodes!

Start by executing the following command on the master node:

master$ kubeadm token create --print-join-command

Next, access each of the worker nodes and run the previously mentioned join command.

worker$ kubeadm join ...

Once the join operation is successfully completed, proceed to list all nodes in the cluster.

local$ kubectl get nodes
NAME       STATUS   ROLES           AGE   VERSION
master-1   Ready    control-plane   5m    v1.27.1
node-1     Ready    <none>          1m    v1.27.1
node-2     Ready    <none>          1m    v1.27.1

Step 6 - Establishing External Connectivity and Deploying Your First Web Service

In this significant step, we will be focusing on configuring external access to your Kubernetes cluster and setting up your inaugural web service. This process is essential in making your applications available to users and other services outside your cluster.

A crucial part of this configuration involves setting up MetalLB, a load-balancer implementation for bare metal Kubernetes clusters. This tool uses standard routing protocols to ensure your services are accessible externally using the floating IP address that you've assigned.

By guiding you through the process of installing and configuring MetalLB and a load balancer, this step will ensure that traffic to your services is efficiently managed and routed.

By the end of this step, not only will you have established external access to your Kubernetes cluster, but you'll also have your first web service ready and reachable, marking a significant milestone in your Kubernetes journey. Let's get started!

Next, return to your Suble.io dashboard and locate the "edit" button associated with your floating IP. Click on this button, navigate to "manage assignments", and assign the floating IP to your master node and the two additional nodes. Refer to the accompanying image below for guidance.

Proceed with the installation of MetalLB.

local$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.9/config/manifests/metallb-native.yaml

Proceed to confirm the successful installation of MetalLB.

local$ kubectl -n metallb-system get pods
local$ kubectl api-resources| grep metallb

Generate a local file named 'ip-pool.yml' and populate it with the following content, replacing <your-floating-ip> with the floating IP you ordered in Step 1.

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
    - <your-floating-ip>/32

Proceed to apply the configuration.

local$ kubectl -n metallb-system apply -f ip-pool.yml

Go ahead and create an l2advertisement.yml file, filling it with the following content.

apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: homelab-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - first-pool

Proceed to apply the configuration.

local$ kubectl -n metallb-system apply -f l2advertisement.yml
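
To confirm that MetalLB has accepted both resources, you can list them; the names shown should match the manifests above:

```shell
local$ kubectl -n metallb-system get ipaddresspools
local$ kubectl -n metallb-system get l2advertisements
```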

It's now time to set up the NGINX Ingress Controller using Helm.

local$ helm pull oci://ghcr.io/nginxinc/charts/nginx-ingress --untar --version 0.17.1
local$ cd nginx-ingress
local$ kubectl apply -f crds
local$ helm install nginx-ingress oci://ghcr.io/nginxinc/charts/nginx-ingress --version 0.17.1 

Proceed to confirm the successful installation of the NGINX Ingress.

local$ helm list
local$ kubectl get pods

Generate a file for our web-app ingress, naming it 'web-app-ingress.yml', and populate it with the ensuing content.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-app
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: web-app.suble.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-app
            port:
              number: 80

Proceed to create a test deployment for our recently crafted ingress file. Name this deployment file 'web-app-deployment.yml' and fill it with the following content.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  labels:
    app.kubernetes.io/name: web-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: web-app
  template:
    metadata:
      labels:
        app.kubernetes.io/name: web-app
    spec:
      containers:
      - image: nginx
        name: web-app
        command:
          - /bin/sh
          - -c
          - "echo 'welcome to my web app!' > /usr/share/nginx/html/index.html && nginx -g 'daemon off;'"
      dnsConfig:
        options:
          - name: ndots
            value: "2"
---
apiVersion: v1
kind: Service
metadata:
  name: web-app
  labels:
    app.kubernetes.io/name: web-app
spec:
  selector:
    app.kubernetes.io/name: web-app
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80

Proceed to deploy it using the following commands.

local$ kubectl -n default apply -f web-app-deployment.yml
local$ kubectl -n default apply -f web-app-ingress.yml
local$ kubectl -n default get ingress

Congratulations! You should now be able to navigate to your web browser, input your floating IP address, and witness an example nginx web application in action.
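
You can also test from the command line. Because the ingress matches on the web-app.suble.io host, pass a Host header and substitute <floating-ip> with your floating IP; if everything is wired up, the response is the greeting written by the deployment:

```shell
local$ curl -H 'Host: web-app.suble.io' http://<floating-ip>/
welcome to my web app!
```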