# Deploying Hypertest on AWS EKS

# Prerequisites

If you are setting up from scratch, follow the steps below.

You will need AWS credentials on hand to configure the AWS CLI on your machine.

## 1. Install awscli

Download the latest version of the AWS CLI from the official AWS documentation.

Steps for Debian/Ubuntu:

```bash
sudo apt-get update
sudo apt install unzip -y
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
```
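To confirm the install succeeded:

```bash
aws --version
```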

## 2. Configure awscli

Once installed, run `aws configure` and fill in the details below:

```bash
aws configure
```

```
AWS Access Key ID [None]: <your_keyid>
AWS Secret Access Key [None]: <your_secretkey>
Default region name [None]: <region_name>
Default output format [None]: json
```
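A quick way to verify the credentials are working:

```bash
# Prints the account and IAM identity the CLI is authenticated as
aws sts get-caller-identity
```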

## 3. Install kubectl

Download the latest version of kubectl from the official Kubernetes documentation.

Steps for Debian/Ubuntu:

```bash
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl
```
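To confirm kubectl is installed:

```bash
kubectl version --client
```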

## 4. Install Helm

Download the latest version of Helm from the official Helm documentation.

Steps for Linux:

```bash
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh
```
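To confirm Helm is installed:

```bash
helm version
```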

## 5. Install eksctl

Download the latest version of eksctl from the eksctl releases page.

Steps for Linux:

```bash
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
eksctl version
```

# EKS Cluster Setup

## 1. Deploy a new cluster

We'll deploy a new cluster in a single Availability Zone.

A single AZ is a requirement because EC2 nodes can't mount EBS volumes from other AZs. There are also no bandwidth charges for traffic within a single AZ.

This guide assumes ap-south-1b as the chosen AZ.

ht-eks.yaml

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: <cluster-name>
  region: <region-name> # e.g. ap-south-1
  version: "1.22"

availabilityZones: ["ap-south-1a", "ap-south-1b"] # replace with your AZs
# At least 2 AZs must be specified here so the cluster VPC spans two subnets.
# Make sure one of them is your chosen AZ.

vpc:
  nat:
    gateway: Disable

iam:
  withOIDC: true

addons:
  - name: vpc-cni
    version: latest
```

Create ht-eks.yaml as shown above, then run:

```bash
eksctl create cluster -f ht-eks.yaml
```

Once the cluster is created, it will show up under Clusters in the Amazon EKS console with status Active.
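You can also check the status from the CLI:

```bash
# Should print "ACTIVE" once the cluster is ready
aws eks describe-cluster --name <cluster-name> --query "cluster.status" --output text
```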

## 1a. Export kubeconfig to use kubectl with the new EKS cluster

These commands create a new kubeconfig file for the cluster you just created and set kubectl to use it for the current shell session:

```bash
eksctl utils write-kubeconfig --cluster=<cluster-name> --kubeconfig=./ht-cluster-kube-config
export KUBECONFIG=$PWD/ht-cluster-kube-config
```

### To test

```bash
kubectl get ns
```

If you get this error: `exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"`

then find this line in your kubeconfig and change it to `client.authentication.k8s.io/v1beta1`.
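For reference, the line lives under the `user.exec` section of the kubeconfig. The exact command and args depend on how your file was generated, so treat the snippet below as an illustrative sketch only:

```yaml
users:
  - name: <cluster-user>
    user:
      exec:
        # change v1alpha1 to v1beta1 here
        apiVersion: client.authentication.k8s.io/v1beta1
        command: aws
        args: ["eks", "get-token", "--cluster-name", "<cluster-name>"]
```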

## 2. Maximise pods on nodes

EKS has a low default pods-per-node limit, which will prevent additional pods from scheduling onto nodes.

Run these commands before creating node groups to maximise the number of pods per node:

```bash
kubectl set env daemonset aws-node -n kube-system ENABLE_PREFIX_DELEGATION=true
kubectl describe daemonset -n kube-system aws-node | grep ENABLE_PREFIX_DELEGATION
kubectl set env ds aws-node -n kube-system WARM_PREFIX_TARGET=1
```

If you have already created a node group, you'll need to delete that node group and recreate it after running these commands.

## 3. Installing the EBS CSI driver (for the gp3 storage class)

Note: EKS doesn't support gp3 volumes out of the box, so we'll use the EBS CSI driver, which enables gp3-type volumes for better performance.

### Download the IAM policy

```bash
curl -o example-iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-ebs-csi-driver/master/docs/example-iam-policy.json
```

### Creating the IAM policy

```bash
aws iam create-policy --policy-name AmazonEKS_EBS_CSI_Driver_Policy --policy-document file://example-iam-policy.json
```

The IAM policy name can be changed or kept as per your requirements.

### Creating an IAM role and attaching the IAM policy to it

```bash
eksctl create iamserviceaccount \
  --name ebs-csi-controller-sa \
  --namespace kube-system \
  --cluster <cluster_name> \
  --attach-policy-arn arn:aws:iam::<aws_account_id>:policy/AmazonEKS_EBS_CSI_Driver_Policy \
  --approve \
  --override-existing-serviceaccounts
```
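To confirm the service account was created with the role attached:

```bash
eksctl get iamserviceaccount --cluster <cluster_name> --namespace kube-system
```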

### Adding the EBS CSI repo using Helm

```bash
helm repo add aws-ebs-csi-driver https://kubernetes-sigs.github.io/aws-ebs-csi-driver
helm repo update
helm upgrade --install aws-ebs-csi-driver aws-ebs-csi-driver/aws-ebs-csi-driver \
  --namespace kube-system \
  --set image.repository=<registry>.dkr.ecr.<region_code>.amazonaws.com/eks/aws-ebs-csi-driver \
  --set controller.serviceAccount.create=false \
  --set controller.serviceAccount.name=ebs-csi-controller-sa
```

The Amazon container image registry addresses for each region can be found in the AWS documentation.
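To check that the driver pods came up (assuming the chart's standard `app.kubernetes.io/name` label):

```bash
kubectl get pods -n kube-system -l "app.kubernetes.io/name=aws-ebs-csi-driver"
```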

## 4. Creating nodes using node groups

Please make sure you have maximised pods (step 2) before creating a node group.

It is recommended to use nodes with at least 4 cores and 8 GB RAM for better performance.

ht-eks-nodegroups.yaml

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: <cluster-name>
  region: <region-name> # e.g. ap-south-1
  version: "1.22"
  # Check the latest EKS-supported Kubernetes version:
  # https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html#kubernetes-release-calendar

managedNodeGroups:
  - name: ng-general
    spot: false
    instanceTypes:
      - t3a.medium
      - t3.medium
    availabilityZones:
      - ap-south-1b # replace this with your chosen AZ
    desiredCapacity: 1
    minSize: 1
    maxSize: 2
    volumeSize: 20
  - name: ng-hypertest-master
    spot: false
    labels:
      hypertest_master_node: 'yes'
    taints:
      - key: hypertest_node
        value: 'yes'
        effect: NoExecute
    instanceTypes:
      - t3a.medium
      - t3.medium
    availabilityZones:
      - ap-south-1b # replace this with your chosen AZ
    desiredCapacity: 1
    minSize: 1
    maxSize: 2
    volumeSize: 20
  - name: ng-hypertest-worker
    spot: true
    labels:
      hypertest_worker_node: 'yes'
    taints:
      - key: hypertest_node
        value: 'yes'
        effect: NoExecute
    instanceTypes:
      - t3a.2xlarge
      - t3.2xlarge
    availabilityZones:
      - ap-south-1b # replace this with your chosen AZ
    desiredCapacity: 4
    minSize: 1
    maxSize: 4
    volumeSize: 20
```

Create ht-eks-nodegroups.yaml as shown above.

This adds two tainted node groups dedicated to Hypertest (these nodes will only run Hypertest workloads), plus one general-purpose node group.

Run the following to create the three node groups:

```bash
eksctl create nodegroup -f ht-eks-nodegroups.yaml
```

Once the node groups are created, you can check them in the EKS console under Clusters -> Overview; each node group should show status Ready.
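You can also verify from the CLI that the nodes have joined with the expected labels (label names taken from the config above):

```bash
eksctl get nodegroup --cluster <cluster-name>
kubectl get nodes --label-columns=hypertest_master_node,hypertest_worker_node
```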

# Storage and Ingress Deployment

## 1. Deploy nginx-ingress-controller

We'll use ingress-nginx as the ingress controller in the cluster. If you are already using ingress-nginx, this will only update the existing installation without making any other changes.

The latest version of nginx-ingress-controller can be found on the ingress-nginx releases page. This guide assumes version 1.2.0.

```bash
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.2.0/deploy/static/provider/aws/deploy.yaml
```

Once deployed, check that the `ingress-nginx` namespace has been created by running `kubectl get ns`.
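To see whether the controller is up and which load balancer AWS created for it (resource names below are the defaults from the ingress-nginx AWS manifest):

```bash
kubectl get pods -n ingress-nginx
# EXTERNAL-IP shows the hostname of the load balancer fronting the controller
kubectl get svc -n ingress-nginx ingress-nginx-controller
```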

## 2. Creating storage and ingress classes for hypertest

ht-ingress-storage-classes.yaml

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hypertest-storage-class
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
provisioner: ebs.csi.aws.com # specify your provisioner name here, e.g. microk8s.io/hostpath
parameters: # specify parameters for your storage class here
  type: gp3
allowVolumeExpansion: true # set this to true if your provisioner supports volume expansion, otherwise remove it
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: hypertest-ingress-class
spec:
  controller: k8s.io/ingress-nginx # specify your controller name here, e.g. k8s.io/ingress-nginx for nginx-ingress-controller (https://kubernetes.github.io/ingress-nginx/)
```

Apply both with:

```bash
kubectl apply -f ht-ingress-storage-classes.yaml
```
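A quick sanity check that both classes exist:

```bash
kubectl get storageclass hypertest-storage-class
kubectl get ingressclass hypertest-ingress-class
```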