Self Managed

Deploy HyperTest on a self-managed Kubernetes cluster using MicroK8s

Tech Stack Overview

MicroK8s: MicroK8s is a lightweight Kubernetes distribution that makes it easy to stand up a multi-node, highly available Kubernetes cluster.

Longhorn: Longhorn is cloud-native distributed block storage for Kubernetes that is easy to deploy and upgrade, 100 percent open source and persistent.

Prometheus: Prometheus is a free software application used for event monitoring and alerting. It records real-time metrics in a time series database built using an HTTP pull model, with flexible queries and real-time alerting.

Grafana: Grafana is a multi-platform open source analytics and interactive visualization web application. It provides charts, graphs, and alerts for the web when connected to supported data sources.

Files Content Overview

This guide uses a few scripts to bring up the cluster via MicroK8s and deploy the applications. Below is a brief overview of each script's functionality.

  1. microk8s_setup.sh - This script runs on every node and installs MicroK8s on it.

  2. ht_deployment.sh - This script runs on the master node. It deploys the HyperTest controller along with the storage and monitoring solutions and enables ingress. It also labels and taints the nodes accordingly and deploys the applications onto them.

  3. ht-ingress-storage-classes.yaml - This file creates the storage class and ingress class used by the HyperTest deployments.

Prerequisites

1. Create Ubuntu VMs (>=22.04 LTS)

We will dedicate a few nodes to Longhorn for data storage; other application deployments will not be scheduled on those nodes.

You should have root access to the Ubuntu VMs.

  • 5 Ubuntu VMs: 1 Kubernetes master node, 1 HyperTest master node, 1 HyperTest worker node, and 2 Longhorn storage nodes

Minimum Requirement:

  • 3 Ubuntu VMs: 1 Kubernetes master node, 1 HyperTest master node, 1 Longhorn storage node

  • The Kubernetes master and HyperTest nodes should each have a minimum of 4 vCPUs, 4 GiB RAM, and a 20 GiB disk

  • Each Longhorn storage node should have a minimum of 4 vCPUs, 4 GiB RAM, and a 1 TB disk
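
To quickly check whether a VM meets these minimums before installing anything, a few standard commands can be run on it (a minimal sketch; the thresholds mirror the requirements above):

# Verify vCPU count, memory, disk capacity and Ubuntu release
nproc            # should print 4 or more
free -h          # "Mem:" total should be at least 4Gi
df -h /          # 20GiB for app nodes, 1TB for Longhorn storage nodes
lsb_release -rs  # should be 22.04 or newer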

Getting Started

Cluster Setup

1. Installation of MicroK8s on the master node

  • SSH into the master node

  • Run the below script to install MicroK8s on the master node

The below script first installs nfs-common and open-iscsi, which are dependencies for Longhorn, and then installs MicroK8s on the VM.

microk8s_setup.sh
#!/bin/sh
set -x

echo "Installing nfs-common and open-iscsi"
# nfs-common and open-iscsi are required by Longhorn on every node
sudo apt update
sudo apt install nfs-common open-iscsi -y
# On Ubuntu the packaged nfs-common.service is a masked stub; remove it and
# reload systemd so the service can be started and enabled normally
sudo rm -f /lib/systemd/system/nfs-common.service
sudo systemctl daemon-reload
sudo systemctl start nfs-common
sudo systemctl enable nfs-common
sudo systemctl start iscsid
sudo systemctl enable iscsid

echo "Installing Microk8s"
sudo snap install microk8s --classic --channel=1.24/stable
# Allow forwarded traffic so pods can reach each other across nodes
sudo iptables -P FORWARD ACCEPT
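
Once the script completes, it is worth confirming that MicroK8s is running before moving on (this command blocks until all MicroK8s services report ready):

sudo microk8s status --wait-ready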

2. Installation of MicroK8s on worker nodes

  • SSH into the worker node

  • Run the same microk8s_setup.sh script shown above to install MicroK8s on the worker node

  • Repeat the above steps on every worker node you wish to join to the Kubernetes cluster

3. Join worker nodes to the master node

  • ssh into master node

  • Run the following command to generate a join command

sudo microk8s add-node | grep -m1 " --worker"
  • From the output, copy the line containing "microk8s join ... --worker" to join a node as a worker; to join it as a master node instead and create a highly available cluster, remove the --worker flag from the end

  • SSH into the worker node

  • Run the microk8s join command copied above (use sudo if required); an example is shown after this list

  • Do this on all VMs you want to join to the MicroK8s cluster
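
For reference, the copied command typically looks like the line below; the IP address and token shown here are placeholders, so use the exact line printed on your master node:

sudo microk8s join 10.0.0.4:25000/92b2db237428470dc4fcfc4ebbd9dc81/2c0cb3284b05 --worker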

If the join command reports that the token is expired or invalid, generate a fresh token by running "sudo microk8s add-node" on the master node and repeat the join steps.

  • ssh into master node

  • Check whether all the nodes have successfully joined the cluster and are in the Ready state

# Wait for every node's status to change to Ready
sudo microk8s kubectl get node -w
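
The output should eventually look something like this (node names, ages, and exact versions will differ in your cluster):

NAME      STATUS   ROLES    AGE     VERSION
master    Ready    <none>   12m     v1.24.17
worker1   Ready    <none>   6m30s   v1.24.17
worker2   Ready    <none>   5m10s   v1.24.17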

Storage and Ingress Classes

  • ssh into master node

  • Create the ht-ingress-storage-classes.yaml file with the content shown below

ht-ingress-storage-classes.yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: hypertest-ingress-class
spec:
  controller: k8s.io/ingress-nginx  #specify your controller name here. eg: k8s.io/ingress-nginx for nginx-ingress-controller https://kubernetes.github.io/ingress-nginx/
---

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hypertest-storage-class
parameters:
  dataLocality: disabled
  fromBackup: ""
  fsType: ext4
  numberOfReplicas: "2"
  staleReplicaTimeout: "30"
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
provisioner: driver.longhorn.io #specify your provisioner name here. eg: microk8s.io/hostpath

Create the Storage class and Ingress class using the following command

sudo microk8s kubectl apply -f ht-ingress-storage-classes.yaml
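
You can confirm that both classes were created before moving on (the names should match those defined in the YAML above):

sudo microk8s kubectl get ingressclass hypertest-ingress-class
sudo microk8s kubectl get storageclass hypertest-storage-class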

HyperTest Deployment

1. Deployment of HyperTest Controller-Service

  • Run the below script to deploy the HyperTest controller-service

Edit the script below: replace the node names (worker1, worker2, worker3 and worker4) with the names of the VMs you are using as the hypertest_master_node, hypertest_worker_node and longhorn_storage_node, then save and close the file. If you are using fewer VMs and do not have a dedicated HyperTest worker node (or similar), pass an empty string for that variable.

ht_deployment.sh
#!/bin/sh

echo "Setting env variable"
HYPERTEST_MASTER_NODE="worker1"
HYPERTEST_WORKER_NODE="worker2"
LONGHORN_STORAGE_NODE="worker3 worker4"
echo $HYPERTEST_MASTER_NODE
echo $HYPERTEST_WORKER_NODE
echo $LONGHORN_STORAGE_NODE

echo "======================================================================="
echo "Label and Taint Nodes"
echo "Labeling hypertest master node"
sudo microk8s kubectl label nodes $HYPERTEST_MASTER_NODE  hypertest_master_node=yes
echo "Labeling hypertest worker node"
sudo microk8s kubectl label nodes $HYPERTEST_WORKER_NODE  hypertest_worker_node=yes
echo "Taint hypertest nodes"
sudo microk8s kubectl taint nodes $HYPERTEST_MASTER_NODE $HYPERTEST_WORKER_NODE  hypertest_node=yes:NoExecute

echo "Labeling Longhorn storage nodes"
sudo microk8s kubectl label nodes $LONGHORN_STORAGE_NODE node.longhorn.io/create-default-disk=true
echo "Taint longhorn storage nodes"
sudo microk8s kubectl taint nodes $LONGHORN_STORAGE_NODE longhorn_storage_node=yes:NoExecute

echo "Enabling dns in microk8s"
sudo microk8s enable dns
echo "Enabling helm3 in microk8s"
sudo microk8s enable helm3

echo "Adding helm repo for longhorn"
sudo microk8s helm3 repo add longhorn https://charts.longhorn.io
sudo microk8s helm3 repo update

echo "Installing Longhorn"
echo "Installation of Longhorn is done with flag defaultSettings.createDefaultDiskLabeledNodes=true hence nodes having this label will only be used for storage"
# Download the default Longhorn chart values so tolerations can be patched in
curl -Lo values.yaml https://raw.githubusercontent.com/longhorn/charts/master/charts/longhorn/values.yaml

# Make Longhorn's components tolerate the taints applied above
sed -i 's/taintToleration: ~/taintToleration: "longhorn_storage_node=yes:NoExecute; hypertest_node=yes:NoExecute"/g' values.yaml

# Rewrite the empty tolerations list into explicit entries for both taints
sed -i 's/tolerations: \[\]/tolerations: /g' values.yaml
sed -i '/  tolerations: /a \  - key: "longhorn_storage_node"' values.yaml
sed -i '/  - key: "longhorn_storage_node"/a \  - key: "hypertest_node"' values.yaml
sed -i '/  - key: "longhorn_storage_node"/a \    operator: "Equal"' values.yaml
sed -i '/  - key: "hypertest_node"/a \    operator: "Equal"' values.yaml
sed -i '/    operator: "Equal"/a \    value: "yes"' values.yaml
sed -i '/    value: "yes"/a \    effect: "NoExecute"' values.yaml

# Install Longhorn; createDefaultDiskLabeledNodes=true restricts storage to labeled nodes
sudo microk8s helm3 install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace --values values.yaml --set csi.kubeletRootDir="/var/snap/microk8s/common/var/lib/kubelet" --set service.ui.type="NodePort"  --set service.ui.nodePort="32005"  --set defaultSettings.createDefaultDiskLabeledNodes=true --set defaultSettings.nodeDownPodDeletionPolicy="delete-both-statefulset-and-deployment-pod"


echo "HyperTest Installation"
echo "======================================================================="

echo "Enabling ingress in microk8s"
sudo microk8s enable ingress

echo "Deploy hypertest controller-service"
sudo microk8s kubectl apply -f https://hypertest-binaries-1.s3.ap-south-1.amazonaws.com/deployments/latest.json

echo "Enabling prometheus in microk8s"
sudo microk8s enable prometheus

echo "Exposing prometheus and grafana as a NodePort service"
sudo microk8s kubectl patch svc prometheus-k8s -n monitoring --type='json' -p '[{"op":"replace","path":"/spec/type","value":"NodePort"},{"op":"replace","path":"/spec/ports/0/nodePort","value":32006}]'
sudo microk8s kubectl patch svc grafana -n monitoring --type='json' -p '[{"op":"replace","path":"/spec/type","value":"NodePort"},{"op":"replace","path":"/spec/ports/0/nodePort","value":32007}]'
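
Once the node names are edited in, run the script on the master node (assuming it was saved as ht_deployment.sh in the current directory):

sh ht_deployment.sh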
  • The above script labels and taints the 4 types of nodes in the cluster as below

Kubernetes master node

Taints: none

Labels: none

Workload: general-purpose pods such as the ingress controller

HyperTest master node

Taints: hypertest_node=yes:NoExecute

Labels: hypertest_master_node=yes

Workload: HyperTest controller-service pods

HyperTest worker node

Taints: hypertest_node=yes:NoExecute

Labels: hypertest_worker_node=yes

Workload: HyperTest worker pods

Longhorn storage node

Taints: longhorn_storage_node=yes:NoExecute

Labels: node.longhorn.io/create-default-disk=true

Workload: Longhorn storage pods
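
To verify that the labels and taints were applied as described above:

# Show the labels and taints on every node
sudo microk8s kubectl get nodes --show-labels
sudo microk8s kubectl describe nodes | grep -A 2 Taints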

2. Access services on respective ports

  • List the services for Longhorn, Prometheus, Grafana, etc. and note the nodePort each service is exposed on

sudo microk8s kubectl get svc -A

Get the master IP of the cluster

  • If the VMs were created using Multipass or similar software: MasterIP is the IP of the master node

sudo microk8s kubectl get no -o wide
  • If the VMs were created using an EC2 instance or similar: MasterIP is the master node instance's public IP

Access the HyperTest dashboard at https://<MasterIP>

Note the nodePort of the longhorn-frontend, prometheus-k8s and grafana services and access their UIs at http://<MasterIP>:<nodePort>

The currently used nodePorts are listed below; they might change in the future, so verify them by listing the services.

Longhorn UI: 32005

Prometheus UI: 32006

Grafana UI: 32007 (default username and password is admin/admin)
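
Since these ports may drift from the defaults above, you can confirm the current nodePorts with a filtered service listing:

sudo microk8s kubectl get svc -A | grep -E 'longhorn-frontend|prometheus-k8s|grafana'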

After deploying HyperTest, set up the DNS records by following the guide below.
