
Grow your Own Kubernetes - A (Somewhat) Simple Guide


There’s no getting around the fact that Kubernetes is really complicated. It’s also the buzzword that’s been thrown at every problem for years now, yet there’s still an air of mystery around how you actually set it up for yourself. Thankfully, there is a good way to do it yourself without needing too much in the way of hardware, and the result scales nicely as you add nodes. Here’s what you’ll need first:

Note: Other than Oracle Cloud, there’s no cloud free tier that’ll match up with the specs you need.
  • A minimum of 3 nodes, ideally at least 4
    • The more the merrier!
  • Each node should have at least 4 GB of RAM. The vCPU count is your choice, but I’d give each node at least 2 vCPUs.
  • Storage is again up to you: two nodes should have at least 30 GB, and the rest at least 30 GB plus enough space to hold the PersistentVolumes you’ll be using for your apps.
  • An Ubuntu 22.04 Image. You can grab that here if you’re running locally.
  • A couple of hours to kill.

I’m creating these in Proxmox, but any virtualization software (or cloud provider) will work as long as you can match the requirements listed above. This is a (very) basic topology of how things’ll look:

flowchart TD
    A[Rancher GUI Node] --> B[Master Node]
    B --> C[Worker Node]
    A --> C

Think of Rancher as the management plane, and the master as the control plane since that’s essentially what we’re working with here.

The first step after getting your Ubuntu nodes online is to install Docker on your Rancher node. Here’s the quick list of commands:

Just in case you don’t have the prerequisites for adding GPG keys:

sudo apt-get update
sudo apt-get install ca-certificates curl gnupg

Adding Docker’s GPG key:

sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

Setting up the docker repos:

echo \
  "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Updating APT:

sudo apt-get update

Now to get Docker installed:

sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
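If you’d like to sanity-check the install before moving on, Docker’s own hello-world image is a quick (and entirely optional) way to do it:

sudo docker run --rm hello-world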

Thankfully, installing Rancher is one easy step too:

sudo docker run --privileged -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher

This command’ll start up Rancher in its own Docker container on the GUI node. You shouldn’t do this on every node, just this one. When that’s up, you can go to https://YOURIP:443. You’ll get a certificate error because it’s self-signed, but that’s fine; accept it and you should see the Rancher welcome screen. You’ll be asked to create an admin user and password, and then you’re good to go!
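One thing to note: depending on your Rancher version, the first login screen may ask for a bootstrap password rather than letting you pick one straight away. If that happens, it can be pulled out of the container logs like so (the container ID below is a placeholder for whatever sudo docker ps shows for the rancher/rancher image):

sudo docker ps --filter ancestor=rancher/rancher
sudo docker logs <container-id> 2>&1 | grep "Bootstrap Password:"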

Rancher Main Menu

You should be able to see a create button at this point. Click on Custom. All you have to do is name the cluster and click confirm, and away we go! This’ll create the configuration needed to import nodes, and that’s what you should be taken to next.

Rancher Create Cluster Screen

If not, you’ll want to go to Clusters -> The cluster you just made -> Registration.

Rancher Master Node

The above is what you’ll want to set for your master node(s). Note that etcd, Control Plane, and Insecure are ticked; the Insecure option is there just because we’re using self-signed SSL certificates.

For the worker node(s), you only want Worker and Insecure ticked, nothing else. You’ll be given a set of commands in “Step 2” that you’ll want to paste into each node respectively. Go ahead and do that, and it’ll install all the prerequisites. At this point, you’ll want to go make a coffee or finish your Baldur’s Gate 3 playthrough, because this will take a decent while. Some people say it takes 10 minutes, but for me it was closer to half an hour. You’ll know everything is up when the nodes show as “Active” under Cluster -> Nodes.

Download kubeconfig

You can then look at the top right of the screen, and click the paper icon that says “Download kubeconfig”.

At this point, if you haven’t already, get kubectl installed. The official docs can explain that side better than I can.

You can then open your kubeconfig file (~/.kube/config on Mac/Linux, C:\Users\username\.kube\config on Windows) and paste in the contents of the file you just downloaded.
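Once that’s saved, a couple of quick commands will confirm kubectl can actually reach the cluster and that the nodes registered with the roles you ticked earlier:

kubectl cluster-info
kubectl get nodes -o wide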

Now, when you run kubectl get pods -A, you should hopefully see a bunch of pods you didn’t create. If that’s the case, you’ve done it! Well… now what?

There’s actually a bit more missing here than meets the eye. For example, when you use a LoadBalancer service on a managed Kubernetes provider, it’s likely spinning up a separate cloud Network or Application Load Balancer behind the scenes. We’ll need our own version of that.

Setting up Load Balancing #

With that, I can present MetalLB. I’d recommend using the Helm chart for this. Helm is somewhat similar to a package manager like apt or brew, and again it’s something you’ll install separately. A guide for that can be found here.
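If you just want the short version, the official install script from the Helm docs is one way to get it onto your machine (as always, give the script a read before running it):

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh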

After you’ve got that installed, you’ll want to run the below:

kubectl create namespace metallb
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install metallb bitnami/metallb --namespace metallb
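It’s worth making sure the MetalLB controller and speaker pods have actually come up in that namespace before carrying on:

kubectl get pods -n metallb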

Then, you’ll want to create some files to define an IP address pool and what’s called an L2 Advertisement. At the very least, you’ll want one IP in the pool for each LoadBalancer service you plan to create.

# IPAddressPool YAML

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: metal-ip
  namespace: metallb
spec:
  addresses:
  - x.x.x.y-x.x.x.z
---

# L2Advertisement YAML

apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: metal-advertisement
  namespace: metallb
spec:
  ipAddressPools:
  - metal-ip

These make sure MetalLB has a pool of IPs it’s allowed to hand out. It doesn’t lease addresses from your DHCP server, so this step is necessary. You can then apply each file with kubectl apply -f FILE.yml.
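If you want to prove MetalLB is actually handing addresses out, a throwaway LoadBalancer service is the quickest test. This is just a sketch with made-up names (and a stock nginx image), and you can delete it all straight after:

kubectl create deployment lb-test --image=nginx
kubectl expose deployment lb-test --type=LoadBalancer --port=80
kubectl get svc lb-test
kubectl delete service lb-test && kubectl delete deployment lb-test

The EXTERNAL-IP on that service should come from the metal-ip pool you defined; if it sits on <pending>, the pool or advertisement hasn’t applied properly.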

Assuming that all runs fine, we can head to the next step: storage.

Storage Setup #

If you thought the above bits were tricky, don’t fret. Thankfully, this part’s quite a bit easier despite it seeming quite daunting at first. In short, we’ll be installing a Container Storage Interface (CSI) which will expose the storage on the nodes (or any mounted directory each node has access to!) so it can be used by any Kubernetes workload.

In this instance, we’ll be using Longhorn. People do report Rook being a bit more stable, but the documentation for Rancher-specific setups seems out of date or just plain incorrect, and I could never get it working properly.

For Longhorn, all you need to do is head to your Rancher management URL (IP:443) -> the cluster you made -> Apps, and type Longhorn into the Filter box. At the top right, you should then see an Install button. I’ve already got it installed, so there’s just an Update button for me, but the Install button looks basically the same.

Install Longhorn

This might take a little while to install. When it has though, give your browser a refresh and you should see a Longhorn tab on the left side of the screen next to Policy and More Resources.

If that’s the case, it should be set up! It’ll automatically create a StorageClass for you called longhorn. Here’s an example of a PersistentVolumeClaim that’ll work with Longhorn:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-storage-example
  namespace: app-deploy
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: longhorn
Note: If you want to use ReadWriteMany, you have to use network storage such as an NFS/CIFS share.
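And for completeness, here’s a minimal (hypothetical) pod spec showing where that claim actually gets referenced; the pod name and image are just placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: longhorn-example-pod
  namespace: app-deploy
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    # The Longhorn-backed volume appears inside the container at /data
    - name: app-data
      mountPath: /data
  volumes:
  - name: app-data
    persistentVolumeClaim:
      claimName: longhorn-storage-example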

And, well, that’s it! Much easier to set up than the LB. Last up, we’ve got the ingress to sort out. When that’s implemented, we’ll have a fully up and running cluster ready for any app deployments!

NGINX Ingress Controller + Cert Manager #

NGINX Ingress is a pretty decent option for beginners starting out, as long as you understand that the NGINX Ingress Controller is not the same thing as NGINX itself: the config is quite different, and they’re used for different purposes. I’m still learning Istio and Kiali, and I’ll likely create a tutorial for setting those up when I’m ready to share that knowledge.

What I find quite good about NGINX Ingress is that you can install it however you like: directly with manifest YAMLs, with Helm, or even through the Rancher UI. In my case, I’m using Helm, just because that’s how I did it with GKE.

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace
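This is also where MetalLB earns its keep: the chart creates a LoadBalancer service for the controller (named ingress-nginx-controller with the release name above), which should pick up an address from the pool you set up earlier. You can check with:

kubectl get svc -n ingress-nginx ingress-nginx-controller

The EXTERNAL-IP column is the address you’ll point your DNS records (or hosts file entries) at.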

At this point, you’ll see a few pods get created in the ingress-nginx namespace. You won’t instantly start seeing your cluster become accessible. For that, we need to create an ingress YAML file. Here’s an example I’ve got:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-staging"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    kubernetes.io/ingress.class: "nginx"
    allow-snippet-annotations: "true"

spec:
  tls:
  - hosts:
    - test.example.xyz
    secretName: letsencrypt-secretname
  rules:
  - host: test.example.xyz
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: service-name
            port:
              number: 3000

Now there’s a few things to note with this example:

Firstly, I’ve got ssl-redirect set to false. Why? That annotation redirects port 80 traffic to 443, and while we don’t have any working certs yet, we don’t want that: Let’s Encrypt is something we’ll be setting up using cert-manager, and the HTTP-01 challenge it uses validates your domain over plain HTTP on port 80. Once your domains are validated, feel free to play around with this setting. I’ve also set the cluster-issuer to letsencrypt-staging, just in case there are any big problems with our domains for whatever reason. If we pointed it at prod without thinking, we might end up getting ourselves rate-limited and run into issues later when trying to get new certs.

Secondly, you can see that I’ve defined the domain under the tls section along with its own secret name. This is what gets cert-manager to actually go and validate that domain and issue a cert for it. Note as well that you’ll need to repeat the config from - host: downwards for each domain you’re adding, as sketched below.
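As a sketch of what that repetition looks like with a hypothetical second domain (other.example.xyz and its service are made up), the spec section would grow to something like this:

spec:
  tls:
  - hosts:
    - test.example.xyz
    secretName: letsencrypt-secretname
  # hypothetical second domain, with its own secret
  - hosts:
    - other.example.xyz
    secretName: letsencrypt-other-secretname
  rules:
  - host: test.example.xyz
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: service-name
            port:
              number: 3000
  - host: other.example.xyz
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: other-service-name
            port:
              number: 3000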

Now, don’t apply that just yet; we should set up cert-manager first. Here’s how to get that done.

Firstly, you’ll need to apply the cert-manager manifest:

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.12.0/cert-manager.yaml

This’ll create a lot of Custom Resource Definitions (CRDs) along with its own namespace and a few pods. Then, you’ll want to create two ClusterIssuer YAMLs like so:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
  namespace: default
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: CHANGEME
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: nginx
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
  namespace: cert-manager
spec:
  acme:
    # The ACME server URL
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: CHANGEME
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-staging
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: nginx

Apply both of them, and then run kubectl get clusterissuer just to double check they’ve applied (ClusterIssuers are cluster-scoped, so no namespace flag is needed). If so, apply your ingress YAML as well. After a little while, you should be able to run kubectl get secret letsencrypt-secretname in the namespace your ingress lives in to see if cert-manager has generated the cert you requested in your ingress config. Fingers crossed, you should also be able to access your application with the domain you’ve set. If it’s not a publicly accessible domain, you can curl the IP of your ingress (shown in the ADDRESS column of kubectl get ingress) and point the hostname at it, adding -k while you’re still on the staging issuer since those certs aren’t trusted:

curl -k --resolve 'test.example.xyz:443:EXT-IP-OF-INGRESS' https://test.example.xyz
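If the secret never shows up, cert-manager’s own resources are usually where the answer lives. These are standard cert-manager objects, so something like the below will show what it’s doing with your request (run the describe in whichever namespace your ingress lives; the describe output in particular will call out failed HTTP-01 challenges):

kubectl get certificate -A
kubectl describe certificate letsencrypt-secretname
kubectl get challenges -A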

And, well, here we are. We’ve now got a Kubernetes cluster that can provision storage volumes, is reachable from outside, and has HTTPS! Hopefully this all-in-one guide has been a helpful one.