
EKS

EKS Cluster Setup for HyperTest Deployment



If you want to use an existing EKS cluster, just add new node groups for HyperTest workloads and carry on from step 7 (you can skip to the 9:12 mark in the accompanying video), although we strongly recommend adding a new CIDR range to your VPC, creating subnets from it, and adding those subnets to your EKS cluster; you might need to recreate the cluster for this.

Brief Steps:

  1. Install Prereqs

  2. If you want to create a new EKS cluster for HyperTest or add more IPs to an existing cluster:

    • Create a new VPC or add a new CIDR to an existing VPC

    • Create new subnets

    • Edit public and private route tables for the NAT and internet gateways

    • Deploy or recreate the EKS cluster

  3. Maximize pods on nodes

  4. Create two nodegroups for HyperTest

  5. Install Load Balancer Controller (if not already installed)

  6. Deploy ingress-nginx controller

  7. Create ingress class and storage class for HyperTest

  8. Deploy HyperTest

  9. Create wildcard DNS for HyperTest

  10. Complete RBAC signup

  11. Add DNS in HyperTest

Tech Stack Overview

  • AWS CLI: The AWS Command Line Interface (AWS CLI) is a unified tool to manage your AWS services.

  • kubectl: The Kubernetes command-line tool, kubectl, allows you to run commands against Kubernetes clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs.

  • Helm: Helm is a Kubernetes deployment tool for automating the creation, packaging, configuration, and deployment of applications and services to Kubernetes clusters.

  • Amazon EKS: Amazon Elastic Kubernetes Service (Amazon EKS) is a managed container service to run and scale Kubernetes applications in the cloud or on-premises.

  • eksctl: eksctl is a simple CLI tool for creating and managing clusters on EKS, Amazon's managed Kubernetes service for EC2.

  • EBS CSI Driver: The Amazon Elastic Block Store (Amazon EBS) Container Storage Interface (CSI) driver allows Amazon EKS clusters to manage the lifecycle of Amazon EBS volumes for persistent volumes.

  • AWS Load Balancer Controller: A controller that helps manage Elastic Load Balancers for a Kubernetes cluster.

  • Ingress-NGINX: The Ingress is a Kubernetes resource that lets you configure an HTTP load balancer for applications running on Kubernetes, represented by one or more Services. The NGINX Ingress Controller is an implementation of a Kubernetes Ingress controller for NGINX and NGINX Plus.

Prerequisites

If you are setting up from scratch, follow the steps below.

To access AWS services with the AWS CLI, you need an AWS account, IAM credentials, and an IAM access key pair. When running AWS CLI commands, the AWS CLI needs to have access to those AWS credentials.

You should also have permissions to create a VPC, subnets, a NAT gateway, and an internet gateway, as well as to create a cluster and edit resources within it.

1. Install AWS CLI

sudo apt-get update
sudo apt install unzip -y
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

2. Configure AWS CLI

Once you have installed the AWS CLI, run aws configure to set the options the CLI uses to interact with AWS: your security credentials, the default output format, and the default AWS Region.

aws configure

AWS Access Key ID: <your_keyid>

AWS Secret Access Key: <your_secretkey>

Default region name: <region_name>

Default output format: json

3. Get Caller Identity

This returns details about the IAM user or role whose credentials are being used to call the operation.

aws sts get-caller-identity
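If your credentials are configured correctly, the command returns a JSON document along these lines (the IDs below are placeholders):

{
    "UserId": "AIDASAMPLEUSERID",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/your-user-name"
}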

4. Install kubectl

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl

5. Install Helm

curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh

6. Install eksctl

curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
eksctl version
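As a quick sanity check, you can confirm that every prerequisite is installed and on your PATH by printing each tool's version:

aws --version
kubectl version --client
helm version
eksctl version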

Getting Started

EKS Cluster Setup

We will effectively run the EKS cluster in a single AZ. Since AWS requires EKS clusters to span a minimum of two availability zones (AZs), we will create subnets in two different AZs but use only the subnets from a single AZ in our node groups to create nodes.

A single AZ is a requirement because EC2 nodes cannot mount EBS volumes from other AZs. Also, there are no charges for bandwidth within a single AZ.

1. VPC

Create a new VPC or add the below IPv4 CIDR to your existing VPC

IPv4 CIDR: 10.3.0.0/16
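If you prefer the AWS CLI over the console, a minimal sketch looks like this (vpc-xxxxxxxx is a placeholder for your existing VPC ID):

# Create a new VPC with the recommended CIDR
aws ec2 create-vpc --cidr-block 10.3.0.0/16

# OR add the CIDR to an existing VPC
aws ec2 associate-vpc-cidr-block --vpc-id vpc-xxxxxxxx --cidr-block 10.3.0.0/16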

2. Create Subnet

This guide assumes ap-south-1b as the primary AZ.

  • Create 6 subnets in the VPC with the CIDR ranges below: 3 private and 3 public

The 3 subnets in each category (public and private) must span 2 different AZs, since EKS requires at least 2 subnets in 2 different AZs for both private and public subnets.

Subnet Name                  AZ            IPv4 CIDR Block
ht-az-1b-private-1           ap-south-1b   10.3.0.0/18
ht-az-1b-private-2           ap-south-1b   10.3.64.0/18
ht-az-1b-public-1            ap-south-1b   10.3.128.0/19
ht-az-1b-public-2            ap-south-1b   10.3.160.0/19
ht-az-1a-public-3-waste      ap-south-1a   10.3.192.0/19
ht-az-1a-private-3-waste     ap-south-1a   10.3.224.0/19

  • Edit the public subnets and enable auto-assign IPv4 and IPv6 addresses in the subnet settings

  • For load balancing, tag private subnets with

    • Key – kubernetes.io/role/internal-elb

    • Value – 1

  • For load balancing, tag public subnets with

    • Key – kubernetes.io/role/elb

    • Value – 1
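The same subnet creation and tagging can also be scripted with the AWS CLI; the sketch below covers one subnet of each kind (vpc-xxxxxxxx and subnet-xxxxxxxx are placeholders):

# Create one of the private subnets (repeat for each row in the table above)
aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 10.3.0.0/18 --availability-zone ap-south-1b

# Enable auto-assign public IPv4 on a public subnet
aws ec2 modify-subnet-attribute --subnet-id subnet-xxxxxxxx --map-public-ip-on-launch

# Tag a private subnet for internal load balancing
aws ec2 create-tags --resources subnet-xxxxxxxx --tags Key=kubernetes.io/role/internal-elb,Value=1

# Tag a public subnet for internet-facing load balancing
aws ec2 create-tags --resources subnet-xxxxxxxx --tags Key=kubernetes.io/role/elb,Value=1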

3. Gateways and Route Tables

Make sure you have the following gateways created

Internet Gateway:

Connects the VPC to the internet. If you created a new internet gateway manually, attach it to the VPC; we will connect it to the public subnets via route tables.

NAT Gateway:

Allows instances in the private subnets to send requests to the internet. We will connect the NAT gateway to the private subnets via route tables.

Route Tables

  • 2 different route tables are required, one each for the private and public subnets

  • Either edit your existing route tables or create new ones, and make sure they are configured to use the internet and NAT gateways as shown below

Public Route Table

Local routing lets instances in the VPC communicate with each other over IPv4 and IPv6; all other subnet traffic goes to the internet over the internet gateway.

Destination                                             Target
IPv4 CIDR of your VPC (e.g. 10.3.0.0/16)                local (add other CIDR ranges from the VPC accordingly)
IPv6 CIDR of your VPC (e.g. 2406:da1a:9c0:8800::/56)    local
0.0.0.0/0                                               <your internet gateway>

Private Route Table

Local routing lets instances in the VPC communicate with each other over IPv4 and IPv6.

Destination                                             Target
IPv4 CIDR of your VPC (e.g. 10.3.0.0/16)                local (add other CIDR ranges from the VPC accordingly)
IPv6 CIDR of your VPC (e.g. 2406:da1a:9c0:8800::/56)    local
0.0.0.0/0                                               <your NAT gateway>
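If you are creating these routes with the AWS CLI instead of the console, the default routes look roughly like this (the rtb-, igw-, nat-, and subnet- IDs are placeholders):

# Public route table: default route via the internet gateway
aws ec2 create-route --route-table-id rtb-xxxxxxxx --destination-cidr-block 0.0.0.0/0 --gateway-id igw-xxxxxxxx

# Private route table: default route via the NAT gateway
aws ec2 create-route --route-table-id rtb-yyyyyyyy --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-xxxxxxxx

# Associate each subnet with the appropriate route table
aws ec2 associate-route-table --route-table-id rtb-xxxxxxxx --subnet-id subnet-xxxxxxxx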

4. Deploy a new cluster

We will use subnets from our primary AZ to create node groups for the cluster, so the cluster's nodes run in a single AZ.

A single AZ is a requirement because EC2 nodes cannot mount EBS volumes from other AZs. Also, there are no charges for bandwidth within a single AZ.

This guide assumes ap-south-1b as the chosen AZ.

Create a file ht-eks-cluster.yaml as shown below

ht-eks-cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: <cluster-name>
  region: <region-name example ap-south-1>
  version: "1.23"
  # check the latest eks supported kubernetes version
  # https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html#kubernetes-release-calendar

vpc:
  nat:
    gateway: Disable
  subnets:
    public:
      public-one: # this key can be anything random
        # id of first public subnet in primary AZ
        id: <subnetID>        
      public-two:
        # id of second public subnet in primary AZ
        id: <subnetID>
      public-three:
        # id of third public subnet in secondary AZ
        id: <subnetID>
    private:
      private-one:
        # id of first private subnet in primary AZ
        id: <subnetID>
      private-two:
        # id of second private subnet in primary AZ
        id: <subnetID> 
      private-three:
        # id of third private subnet in secondary AZ
        id: <subnetID>    
    
iam:
  withOIDC: true
  serviceAccounts:
  - metadata:
      name: aws-load-balancer-controller
      namespace: kube-system
    wellKnownPolicies:
      awsLoadBalancerController: true

addons:
  - name: vpc-cni
    version: latest
  - name: aws-ebs-csi-driver
    version: latest
    attachPolicyARNs:
    - arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy

Create the cluster using the following command

eksctl create cluster -f ht-eks-cluster.yaml

Once the cluster is created, you can see it under Clusters in the Amazon EKS console with status Active.
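You can also check from the CLI; once provisioning finishes, the command below prints ACTIVE:

aws eks describe-cluster --name <cluster-name> --region <region-name example ap-south-1> --query cluster.status --output text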

5. Export kubeconfig to use kubectl with new eks cluster

These commands create a new kubeconfig file for the cluster we just created and point kubectl at it for the current shell session.

eksctl utils write-kubeconfig --cluster=<cluster-name> --region=<region-name example ap-south-1> --kubeconfig=./ht-cluster-kube-config
export KUBECONFIG=$PWD/ht-cluster-kube-config

### To test
kubectl get ns

If you get this error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"

then find this apiVersion line in your kubeconfig and change it to "client.authentication.k8s.io/v1beta1"
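For example, this one-liner patches the kubeconfig created above in place:

sed -i 's#client.authentication.k8s.io/v1alpha1#client.authentication.k8s.io/v1beta1#' ht-cluster-kube-config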

6. Maximise pods on nodes

EKS has a low default limit on pods per node, which will prevent more pods from being scheduled onto nodes.

Run these commands before creating node groups to maximise the number of pods per node on EKS nodes.

If you have already created a node group, you will have to delete that node group first.

kubectl set env daemonset aws-node -n kube-system ENABLE_PREFIX_DELEGATION=true
kubectl describe daemonset -n kube-system aws-node | grep ENABLE_PREFIX_DELEGATION
kubectl set env ds aws-node -n kube-system WARM_PREFIX_TARGET=1
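AWS also publishes a max-pods calculator script you can use to check how many pods a given instance type can run once prefix delegation is enabled (the URL and flags below are as documented at the time of writing; the instance type and CNI version are examples):

curl -O https://raw.githubusercontent.com/awslabs/amazon-eks-ami/master/files/max-pods-calculator.sh
chmod +x max-pods-calculator.sh
./max-pods-calculator.sh --instance-type t3a.2xlarge --cni-version 1.11.2 --cni-prefix-delegation-enabled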

7. Creating nodes using node-group

Please make sure you have maximized the pods that can be scheduled on nodes before creating a node group.

It is recommended to use nodes with at least 4 cores and 8 GB RAM for better performance.

To minimize logs in our node groups and save disk space, we update some configuration via preBootstrapCommands:

For kubelet, in /etc/kubernetes/kubelet/kubelet-config.json:

imageGCHighThresholdPercent: 50 (previously 85)

imageGCLowThresholdPercent: 45 (previously 80)

For Docker as the container runtime, in /etc/docker/daemon.json:

max-file: 2 (previously 10)

For containerd as the container runtime, in /etc/kubernetes/kubelet/kubelet-config.json:

containerLogMaxFiles: 2 (previously 5)

While creating node groups, keep the vpc and subnets sections from the original cluster file as well.

Create a file ht-eks-nodegroups.yaml as shown below

ht-eks-nodegroups.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: <cluster-name>
  region: <region-name example ap-south-1>
  version: "1.23"
  
vpc:
  nat:
    gateway: Disable
  subnets:
    public:
      public-one: # this key can be anything random
        # id of first public subnet in primary AZ
        id: <subnetID>        
      public-two:
        # id of second public subnet in primary AZ
        id: <subnetID>
      public-three:
        # id of third public subnet in secondary AZ
        id: <subnetID>
    private:
      private-one:
        # id of first private subnet in primary AZ
        id: <subnetID>
      private-two:
        # id of second private subnet in primary AZ
        id: <subnetID> 
      private-three:
        # id of third private subnet in secondary AZ
        id: <subnetID>    

managedNodeGroups:
  - name: ng-general
    spot: false
    instanceTypes: 
      - t3a.medium
      - t3.medium
    desiredCapacity: 1
    minSize: 1
    maxSize: 2
    volumeSize: 20
    subnets:
      - public-one
      - public-two
    preBootstrapCommands:
      - sudo jq '( ."log-opts"."max-file") = "2"' /etc/docker/daemon.json > /home/ec2-user/tmp-docker-daemon.json && sudo mv /home/ec2-user/tmp-docker-daemon.json /etc/docker/daemon.json
      - sudo jq '( .containerLogMaxFiles) = 2' /etc/kubernetes/kubelet/kubelet-config.json > /home/ec2-user/tmp.json && sudo mv /home/ec2-user/tmp.json /etc/kubernetes/kubelet/kubelet-config.json
      - sudo jq '( .imageGCHighThresholdPercent) = 50' /etc/kubernetes/kubelet/kubelet-config.json > /home/ec2-user/tmp.json && sudo mv /home/ec2-user/tmp.json /etc/kubernetes/kubelet/kubelet-config.json  
      - sudo jq '( .imageGCLowThresholdPercent) = 45' /etc/kubernetes/kubelet/kubelet-config.json > /home/ec2-user/tmp.json && sudo mv /home/ec2-user/tmp.json /etc/kubernetes/kubelet/kubelet-config.json
  - name: ng-hypertest-master
    spot: false
    labels:
      hypertest_master_node: 'yes'
    taints:
      - key: hypertest_node
        value: 'yes'
        effect: NoExecute
    instanceTypes:
      - t3a.medium
      - t3.medium
    desiredCapacity: 1
    minSize: 1
    maxSize: 2
    volumeSize: 20
    privateNetworking: true
    subnets:
      - private-one
      - private-two
    preBootstrapCommands:
      - sudo jq '( ."log-opts"."max-file") = "2"' /etc/docker/daemon.json > /home/ec2-user/tmp-docker-daemon.json && sudo mv /home/ec2-user/tmp-docker-daemon.json /etc/docker/daemon.json
      - sudo jq '( .containerLogMaxFiles) = 2' /etc/kubernetes/kubelet/kubelet-config.json > /home/ec2-user/tmp.json && sudo mv /home/ec2-user/tmp.json /etc/kubernetes/kubelet/kubelet-config.json
      - sudo jq '( .imageGCHighThresholdPercent) = 50' /etc/kubernetes/kubelet/kubelet-config.json > /home/ec2-user/tmp.json && sudo mv /home/ec2-user/tmp.json /etc/kubernetes/kubelet/kubelet-config.json  
      - sudo jq '( .imageGCLowThresholdPercent) = 45' /etc/kubernetes/kubelet/kubelet-config.json > /home/ec2-user/tmp.json && sudo mv /home/ec2-user/tmp.json /etc/kubernetes/kubelet/kubelet-config.json
  - name: ng-hypertest-worker
    spot: true
    labels:
      hypertest_worker_node: 'yes'
    taints:
      - key: hypertest_node
        value: 'yes'
        effect: NoExecute
    instanceTypes:
      - t3a.2xlarge
      - t3.2xlarge
    desiredCapacity: 4
    minSize: 1
    maxSize: 4
    volumeSize: 20
    privateNetworking: true
    subnets:
      - private-one
      - private-two
    preBootstrapCommands:
      - sudo jq '( ."log-opts"."max-file") = "2"' /etc/docker/daemon.json > /home/ec2-user/tmp-docker-daemon.json && sudo mv /home/ec2-user/tmp-docker-daemon.json /etc/docker/daemon.json
      - sudo jq '( .containerLogMaxFiles) = 2' /etc/kubernetes/kubelet/kubelet-config.json > /home/ec2-user/tmp.json && sudo mv /home/ec2-user/tmp.json /etc/kubernetes/kubelet/kubelet-config.json
      - sudo jq '( .imageGCHighThresholdPercent) = 50' /etc/kubernetes/kubelet/kubelet-config.json > /home/ec2-user/tmp.json && sudo mv /home/ec2-user/tmp.json /etc/kubernetes/kubelet/kubelet-config.json  
      - sudo jq '( .imageGCLowThresholdPercent) = 45' /etc/kubernetes/kubelet/kubelet-config.json > /home/ec2-user/tmp.json && sudo mv /home/ec2-user/tmp.json /etc/kubernetes/kubelet/kubelet-config.json

Create the node groups using the following command

eksctl create nodegroup -f ht-eks-nodegroups.yaml

The above file will create 3 node groups in the cluster:

ng-general

Spot: false

Taints: none

Labels: none

Workload: general-purpose pods such as nginx controllers

ng-hypertest-master

Spot: false

Taints: hypertest_node=yes:NoExecute

Labels: hypertest_master_node=yes

Workload: HyperTest controller-service deployments

ng-hypertest-worker

Spot: true

Taints: hypertest_node=yes:NoExecute

Labels: hypertest_worker_node=yes

Workload: HyperTest service deployments

Once the node groups are created, you can check them in the EKS console under Clusters -> Overview; each node group in the cluster should show status Ready.
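You can also verify from the command line that the node groups exist and that the HyperTest labels were applied:

eksctl get nodegroup --cluster=<cluster-name> --region=<region-name example ap-south-1>
kubectl get nodes -L hypertest_master_node,hypertest_worker_node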

Load Balancing on Amazon EKS

AWS supports several load balancer types for Amazon EKS. We will first install the AWS Load Balancer Controller add-on in our cluster, which helps manage Elastic Load Balancers on AWS.

1. Installing the AWS Load Balancer Controller add-on

helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=<cluster-name> \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller \
  --set image.repository=<registry>.dkr.ecr.<region_code>.amazonaws.com/amazon/aws-load-balancer-controller

Important

The deployed chart doesn't receive security updates automatically. You need to manually upgrade to a newer chart when one becomes available. When upgrading, change install to upgrade in the previous command, and first run the following command to install the TargetGroupBinding custom resource definitions:

kubectl apply -k "github.com/aws/eks-charts/stable/aws-load-balancer-controller/crds?ref=master"

Verify that the controller is installed.

kubectl get deployment -n kube-system aws-load-balancer-controller

Now, moving on to the load balancer itself.

If you already have an existing Application Load Balancer and want to use it for HyperTest, please follow the Existing Application Load Balancer guide; otherwise, continue with the Network Load Balancer setup below.

Network Load Balancer

1. Deploy ingress-nginx-controller

We'll use ingress-nginx as the ingress controller in the cluster. If you are already using ingress-nginx, applying this manifest will only update the existing installation in place. This guide assumes ingress-nginx-controller version 1.3.0.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.0/deploy/static/provider/aws/deploy.yaml

Once deployed, check that the "ingress-nginx" namespace was created by running kubectl get ns
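You can additionally confirm that the controller pods are running and that a Network Load Balancer was provisioned; the EXTERNAL-IP column of the LoadBalancer service shows the NLB's DNS name once it is ready:

kubectl get pods -n ingress-nginx
kubectl get svc ingress-nginx-controller -n ingress-nginx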

2. Creating storage and ingress classes

Create a file ht-ingress-storage-classes.yaml as shown below

ht-ingress-storage-classes.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hypertest-storage-class
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
provisioner: ebs.csi.aws.com #specify your provisioner name here. eg: microk8s.io/hostpath
parameters: # specify parameters for your storage class here
  type: gp3
allowVolumeExpansion: true # set this to true if your provisioner supports expanding volume, otherwise remove this
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: hypertest-ingress-class
spec:
  controller: k8s.io/ingress-nginx  #specify your controller name here. eg: k8s.io/ingress-nginx for nginx-ingress-controller https://kubernetes.github.io/ingress-nginx/

Create the ingress and storage class using the following command

kubectl apply -f ht-ingress-storage-classes.yaml
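To verify that both classes were created:

kubectl get storageclass hypertest-storage-class
kubectl get ingressclass hypertest-ingress-class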

After creating the ingress and storage classes, we can deploy the HyperTest controller. Please refer to the HyperTest Installation guide for the next steps.
