HyperTest Installation

Installation of HyperTest on a cluster

An up-and-running Kubernetes cluster with the following already configured:

  • Kubernetes Cluster

  • Storage Class

  • Ingress Controller

  • Ingress Class

You can refer to our Cluster Setup docs if you don't already have a Kubernetes cluster running.

All of these prerequisites are completed while setting up the cluster from the Cluster Setup docs; they are listed below for an overview. If you missed any prerequisite, complete it using the steps below.

Kubernetes Cluster Prerequisites

Minimum Permissions

You need to have permissions for the following:

  1. Create a new namespace

  2. Create service accounts in that namespace

  3. Add/edit node labels

  4. Add/edit ingress and storage classes

  5. Deploy ingress controller in a new namespace

kubectl get svc
kubectl get nodes
kubectl get ns
# TODO: bash script for health checkup
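A minimal sketch of the health-check script the TODO above mentions (it assumes kubectl is installed and pointed at your cluster; the function name is ours):

```shell
#!/usr/bin/env bash
# ht_preflight: hypothetical helper; verifies kubectl is present and that
# services, nodes, and namespaces can be listed on the target cluster.
ht_preflight() {
  command -v kubectl >/dev/null 2>&1 || { echo "FAIL: kubectl not in PATH"; return 1; }
  for resource in svc nodes ns; do
    kubectl get "$resource" >/dev/null 2>&1 || { echo "FAIL: cannot list $resource"; return 1; }
  done
  echo "OK: cluster reachable"
}
```

Run `ht_preflight` before continuing; any FAIL line points at a missing permission or a connectivity issue.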

Storage Class

HyperTest relies on dynamic volumes to run its stateful workloads. Kubernetes storage classes handle the volume creation and mounting.

You should be able to see your available storage classes by running

kubectl get storageclasses -A

If you did not create a storage class named "hypertest-storage-class" while setting up your cluster, create it using the steps below.

Note the provisioner from your existing storage class; we'll create a new storage class using the same provisioner.
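To read the provisioner off an existing class non-interactively, a jsonpath query works (a sketch; the helper name is ours):

```shell
# provisioner_of: hypothetical helper; prints the provisioner of the given
# storage class, e.g. "microk8s.io/hostpath".
provisioner_of() {
  kubectl get storageclass "$1" -o jsonpath='{.provisioner}'
}
```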

ht-storage-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hypertest-storage-class
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
provisioner: <your_provisioner_name> # specify your provisioner name here, e.g. microk8s.io/hostpath
allowVolumeExpansion: true # set to true if your provisioner supports volume expansion; otherwise remove this line

Create the storage class using the following command

kubectl apply -f ht-storage-class.yaml

Ingress and Ingress Class

Ingress is used to manage external access to services in your cluster. It exposes HTTP and HTTPS routes from outside the cluster to services within the cluster.

If you did not create an ingress class named "hypertest-ingress-class" while setting up your cluster, create it using the steps below.

Ingress-nginx Controller

  • If you are using ingress-nginx and have not installed it already, deploy the ingress-nginx controller from here

  • Once deployed, you should have a default ingress class in the ingress-nginx namespace

If you are using any other ingress controller, use its ingress class for the further steps.

kubectl get ingressclass

Note the controller from the output; we'll create a new ingress class using the same controller.
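As with the storage class, the controller can be read off an existing ingress class with a jsonpath query (a sketch; the helper name is ours):

```shell
# controller_of: hypothetical helper; prints the controller of the given
# ingress class, e.g. "k8s.io/ingress-nginx".
controller_of() {
  kubectl get ingressclass "$1" -o jsonpath='{.spec.controller}'
}
```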

ht-ingress-class.yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: hypertest-ingress-class
spec:
  controller: k8s.io/ingress-nginx  #specify your controller name here. eg: k8s.io/ingress-nginx for nginx-ingress-controller https://kubernetes.github.io/ingress-nginx/

Create the ingress class using the following command

kubectl apply -f ht-ingress-class.yaml

Node Labels and Taints

Make sure the nodes assigned to run HyperTest workloads have the following labels and taints

Taints: none

Labels: none

Workload: general-purpose pods such as nginx controllers, etc.

### Show labels for Kubernetes nodes
kubectl get nodes --show-labels

### Show taints for Kubernetes nodes
### Describe nodes and check for taints in the description
kubectl describe nodes

If the nodes in your cluster are not yet labelled and tainted, follow the steps below to do so

### for hypertest master node
kubectl label nodes <node-name> hypertest_master_node=yes
### for hypertest worker nodes
kubectl label nodes <node-name> hypertest_worker_node=yes
### taints for both hypertest node types
kubectl taint nodes <node-name> hypertest_node=yes:NoExecute
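The commands above can be batched over several nodes; a sketch (the helper name is ours; `--overwrite` makes repeated runs idempotent):

```shell
# label_and_taint: hypothetical helper; applies the given hypertest label and
# the shared HyperTest taint to each node name passed after the label.
label_and_taint() {
  label="$1"; shift
  for node in "$@"; do
    kubectl label nodes "$node" "$label=yes" --overwrite
    kubectl taint nodes "$node" hypertest_node=yes:NoExecute --overwrite
  done
}

# Usage (requires a live cluster):
# label_and_taint hypertest_worker_node worker-1 worker-2
# label_and_taint hypertest_master_node master-1
```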

Self Managed Cluster

If you are using a self-managed cluster: we use Longhorn as the storage solution and dedicate a few nodes to storage, to be used by the persistent volumes of stateful sets. To achieve this, we label and taint those nodes with the following configuration:

Taints: longhorn_storage_node=yes:NoExecute

Labels: node.longhorn.io/create-default-disk=true

Workload: Longhorn Storage Pods and Volumes

If the nodes in your cluster are not yet labelled and tainted, follow the steps below to do so

### label for longhorn storage node
kubectl label nodes <node-name> node.longhorn.io/create-default-disk=true
### taints for longhorn storage node
kubectl taint nodes <node-name> longhorn_storage_node=yes:NoExecute

Taints make sure other workloads are not scheduled on HyperTest nodes. If nodes are tainted, you'll need at least one more node to run other Kubernetes workloads; we keep the Kubernetes master node for this purpose.

Learn more about taints here

Deploy HyperTest Controller-Service

kubectl apply -f https://hypertest-binaries-1.s3.ap-south-1.amazonaws.com/deployments/latest.json

Once deployed, run the following checks

# Check if the namespace "hypertest-ns" has been created
kubectl get ns

# Check pods in the hypertest-ns namespace (-w watches for updates; exit with Ctrl+C)
kubectl get pods -n hypertest-ns -w

# Check if the persistent volume claim is bound
kubectl get pvc -n hypertest-ns

# Check if the persistent volume is present and bound
kubectl get pv
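Instead of watching the pod list by hand, you can block until everything reports Ready (a sketch; the helper name is ours):

```shell
# wait_for_hypertest: hypothetical helper; waits up to 5 minutes for every
# pod in hypertest-ns to report the Ready condition.
wait_for_hypertest() {
  kubectl wait --for=condition=Ready pods --all -n hypertest-ns --timeout=300s
}
```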

Setup Wildcard DNS

Get the external address from the ingress controller

kubectl get svc -n ingress-nginx

The command above shows the external IP (or hostname) at which the dashboard will be accessed.

Copy the external-ip (this guide assumes the address to be my-address.elb.ap-south-1.amazonaws.com)

Create a DNS record:

  • a CNAME record if the external address is a domain name

  • an A record if it is an IPv4 address

  • an AAAA record if it is an IPv6 address

Point *.hypertest.[your-domain] -> external-ip in your DNS provider's console (Route 53, GoDaddy, etc.)

eg: CNAME for *.hypertest.test-env.company.co.in -> my-address.elb.ap-south-1.amazonaws.com
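Once the record propagates, any name under the wildcard should resolve. A quick check using the system resolver (a sketch; the helper name is ours; pass your base domain):

```shell
# check_wildcard_dns: hypothetical helper; resolves one name under the
# wildcard record via the system resolver and reports the result.
check_wildcard_dns() {
  if getent hosts "central.hypertest.$1" >/dev/null 2>&1; then
    echo "resolves"
  else
    echo "does not resolve"
  fi
}
```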

Dashboard Access

HyperTest central dashboard should be accessible at http://central.hypertest.[your-domain]

  1. Set and verify the base DNS from the dashboard

  2. Deploy a new HyperTest instance from the central dashboard

  3. After the new HyperTest instance is deployed, go to the service dashboard and configure the instance dashboard for your service

Logger Endpoints

  • K8s Internal Logger Endpoint: ht-<name of your service>-ht-logger.hypertest-ns.svc.cluster.local:3001

  • External Logger Endpoint: ht-<name of your service>.logger.<base_dns>

Add Annotations to Ingress

Add annotations to your dashboard and logger ingress resources using the configmap

kubectl edit configmap hypertest-config-map -n hypertest-ns

# Add the following keys to the configmap; their values are applied as annotations, in JSON format
* DASHBOARD_INGRESS_ANNOTATIONS: '{"Key1": "Value1", "Key2": "Value2"}'
* LOGGER_INGRESS_ANNOTATIONS: '{"Key3": "Value3"}'

Save and close the file

  • DASHBOARD_INGRESS_ANNOTATIONS will add annotations to dashboard ingress resources

  • LOGGER_INGRESS_ANNOTATIONS will add annotations to logger ingress resources
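If you prefer a non-interactive edit, the same two keys can be merged with `kubectl patch` (a sketch using the example values above; the helper name is ours; note the escaped quotes, since each value is itself a JSON string):

```shell
# set_ingress_annotations: hypothetical helper; merges both annotation keys
# into the configmap without opening an editor.
set_ingress_annotations() {
  kubectl patch configmap hypertest-config-map -n hypertest-ns --type merge -p '{
    "data": {
      "DASHBOARD_INGRESS_ANNOTATIONS": "{\"Key1\": \"Value1\", \"Key2\": \"Value2\"}",
      "LOGGER_INGRESS_ANNOTATIONS": "{\"Key3\": \"Value3\"}"
    }
  }'
}
```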

Restart the ht-central-backend-deployment pod to pick up the latest changes from the configmap.

After the pod is again up and running, verify the changes by describing the ingress resources and check for the newly added annotations.

kubectl get po -n hypertest-ns | grep ht-central-backend
# Copy the full name of pod from output
kubectl delete po <pod name> -n hypertest-ns

# List all ingress in hypertest-ns namespace
kubectl get ingress -n hypertest-ns

# Describe ingress and verify annotations addition
kubectl describe ingress <name> -n hypertest-ns
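To check just the annotations instead of reading the full describe output, a jsonpath query works (a sketch; the helper name is ours):

```shell
# ingress_annotations: hypothetical helper; prints the annotations map of an
# ingress in hypertest-ns.
ingress_annotations() {
  kubectl get ingress "$1" -n hypertest-ns -o jsonpath='{.metadata.annotations}'
}
```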

Mirror Traffic to HyperTest

After installing HyperTest, follow the guide below to mirror traffic to HyperTest

Mirror Traffic

To learn how the HyperTest dashboard works, follow the guide below

Dashboard Tour
