EKS Cluster Setup for HyperTest Deployment
If you want to use an existing EKS cluster, just add new nodegroups for the HyperTest workloads and carry on from step 7; you can skip to 9:12 min in this video (although we strongly recommend adding a new CIDR range to your VPC, creating subnets from it, and adding the subnets to your EKS cluster - you might need to recreate the cluster for this).
* Install prerequisites
* If you want to create a new EKS cluster for HyperTest / add more IPs in an existing cluster:
  * Create a new VPC / add a new CIDR to an existing VPC
  * Create new subnets
  * Edit the public and private route tables for the NAT and internet gateways
  * Deploy/recreate the EKS cluster
* Maximize pods on nodes
* Create two nodegroups for HyperTest
* Install the Load Balancer Controller (if not already installed)
* Deploy the ingress-nginx controller
* Create an ingress class and storage class for HyperTest
* Deploy HyperTest
* Create a wildcard DNS record for HyperTest
* Complete RBAC signup
* Add the DNS in HyperTest
AWS CLI: The AWS Command Line Interface (AWS CLI) is a unified tool to manage your AWS services.
Kubectl: The Kubernetes command-line tool, kubectl, allows you to run commands against Kubernetes clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs.
Helm: Helm is a Kubernetes deployment tool for automating creation, packaging, configuration, and deployment of applications and services to Kubernetes clusters.
EKS: Amazon Elastic Kubernetes Service (Amazon EKS) is a managed container service to run and scale Kubernetes applications in the cloud or on-premises.
Eksctl: eksctl is a simple CLI tool for creating and managing clusters on EKS - Amazon's managed Kubernetes service for EC2.
EBS-CSI Driver for storage: The Amazon Elastic Block Store (Amazon EBS) Container Storage Interface (CSI) driver allows Amazon Elastic Kubernetes Service (Amazon EKS) clusters to manage the lifecycle of Amazon EBS volumes for persistent volumes.
Load Balancer Controller: AWS Load Balancer Controller is a controller to help manage Elastic Load Balancers for a Kubernetes cluster.
Ingress-Nginx: The Ingress is a Kubernetes resource that lets you configure an HTTP load balancer for applications running on Kubernetes, represented by one or more Services. The NGINX Ingress Controller is an implementation of a Kubernetes Ingress Controller for NGINX and NGINX Plus.
If you are setting up from scratch, follow the steps below.
To access AWS services with the AWS CLI, you need an AWS account, IAM credentials, and an IAM access key pair. When running AWS CLI commands, the AWS CLI needs to have access to those AWS credentials.
You should also have permissions to create a VPC, subnets, a NAT gateway, and an internet gateway, as well as to create a cluster and edit resources in that cluster.
Download the latest version of aws-cli from here.
Once you have installed awscli, run aws configure and configure the settings that awscli uses to interact with AWS. These include your security credentials, the default output format, and the default AWS Region.
```
aws_access_key_id = <your_keyid>
aws_secret_access_key = <your_secretkey>
Default region name: <region_name>
Default output format: json
```
Returns details about the IAM user or role whose credentials are used to call the operation
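The description above corresponds to the standard AWS CLI call:

```sh
# Confirm the CLI is configured with the intended identity
aws sts get-caller-identity
```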
Download the latest version of kubectl from here.
Download the latest version of helm from here.
Download the latest version of eksctl from here.
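The download links are not preserved here; for reference, these are the commonly documented Linux x86_64 install commands (check each project's docs for the current method):

```sh
# kubectl: fetch the latest stable release and install it
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

# helm: official installer script for Helm 3
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# eksctl: latest release binary
curl -sL "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
```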
We will be creating the EKS cluster in a single AZ. But since AWS requires EKS clusters to span a minimum of two availability zones (AZs), we will create subnets in two different AZs and use subnets from a single AZ in our nodegroup to create nodes.
A single AZ is a requirement because EC2 nodes cannot mount EBS volumes from other AZs. Also, there are no charges for bandwidth within a single AZ.
Create a new VPC or add the below IPv4 CIDR to your existing VPC
IPv4 CIDR: 10.3.0.0/16
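For example, via the AWS CLI (the VPC ID below is a placeholder):

```sh
# Add the secondary CIDR block to an existing VPC
aws ec2 associate-vpc-cidr-block --vpc-id <your-vpc-id> --cidr-block 10.3.0.0/16

# Or create a new VPC with this CIDR
aws ec2 create-vpc --cidr-block 10.3.0.0/16
```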
This guide assumes ap-south-1b as the primary AZ.
Create 6 subnets in the VPC with the below CIDR ranges: 3 private and 3 public.
For each category (i.e. public and private), at least 2 of the 3 subnets must be spread across 2 different AZs, since EKS requires at least 2 subnets in 2 different AZs for either private or public subnets.
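The original subnet table is not preserved here; purely as an illustration, a layout like the following satisfies these constraints (names and CIDRs are placeholders you can choose freely):

| Subnet Name | AZ | IPv4 CIDR Block |
|---|---|---|
| ht-private-1b-1 | ap-south-1b | 10.3.0.0/20 |
| ht-private-1b-2 | ap-south-1b | 10.3.16.0/20 |
| ht-private-1a | ap-south-1a | 10.3.32.0/20 |
| ht-public-1b-1 | ap-south-1b | 10.3.48.0/20 |
| ht-public-1b-2 | ap-south-1b | 10.3.64.0/20 |
| ht-public-1a | ap-south-1a | 10.3.80.0/20 |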
Edit the public subnets and enable auto-assign for IPv4 and IPv6 addresses in the subnet settings.
For load balancing, tag private subnets with
Key – kubernetes.io/role/internal-elb
Value – 1
For load balancing, tag public subnets with
Key – kubernetes.io/role/elb
Value – 1
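These tags can be applied in the console or via the CLI; the subnet IDs below are placeholders:

```sh
# Tag private subnets for internal load balancers
aws ec2 create-tags --resources <private-subnet-id> --tags Key=kubernetes.io/role/internal-elb,Value=1

# Tag public subnets for internet-facing load balancers
aws ec2 create-tags --resources <public-subnet-id> --tags Key=kubernetes.io/role/elb,Value=1
```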
Make sure you have the following gateways created
Internet gateway: connects the VPC to the internet. If you created a new internet gateway manually, attach it to your VPC; we will attach the gateway to the public subnets via route tables.
NAT gateway: allows instances in the private subnets to send requests to the internet. We will attach the gateway to the private subnets via route tables.
Two different route tables are required, for the private and public subnets respectively.
Either edit your existing route tables or create new ones, and make sure they are configured to use the internet and NAT gateways as shown below.
Public route table:

| Destination | Target |
|---|---|
| 10.3.0.0/16 | local (instances in the VPC communicate with each other over IPv4 and IPv6) |
| 0.0.0.0/0 | internet gateway (all other subnet traffic to the internet) |

Private route table:

| Destination | Target |
|---|---|
| 10.3.0.0/16 | local (instances in the VPC communicate with each other over IPv4 and IPv6) |
| 0.0.0.0/0 | NAT gateway (all other subnet traffic to the internet) |
We will be using subnets from our primary AZ to create nodegroups for the cluster, so that the cluster's nodes run in a single AZ.
A single AZ is a requirement because EC2 nodes cannot mount EBS volumes from other AZs. Also, there are no charges for bandwidth within a single AZ.
This guide assumes ap-south-1b as the chosen AZ.
Create a file ht-eks-cluster.yaml as shown below
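The original file contents are not preserved here; below is a minimal sketch of what an eksctl ClusterConfig for this setup looks like. The cluster name, Kubernetes version, and all VPC/subnet IDs are placeholders you must replace with your own:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: ht-eks-cluster        # placeholder cluster name
  region: ap-south-1
  version: "1.23"             # pick a currently supported Kubernetes version

vpc:
  id: "<your-vpc-id>"         # VPC carrying the 10.3.0.0/16 CIDR
  subnets:
    private:
      ap-south-1b: { id: "<private-subnet-1b-id>" }
      ap-south-1a: { id: "<private-subnet-1a-id>" }
    public:
      ap-south-1b: { id: "<public-subnet-1b-id>" }
      ap-south-1a: { id: "<public-subnet-1a-id>" }
```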
Create the cluster using the following command
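With a config file like the one above, the standard eksctl invocation is:

```sh
eksctl create cluster -f ht-eks-cluster.yaml
```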
Once the cluster is created, you can check it under Clusters in the Amazon EKS console; the status should show as Active.
These commands create a new kubeconfig file for the cluster just created and set kubectl to use it for the current shell session.
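The original commands are not shown; assuming the cluster name and region used in this guide, the standard way is:

```sh
# Write a dedicated kubeconfig file for the new cluster
aws eks update-kubeconfig --region ap-south-1 --name ht-eks-cluster --kubeconfig ~/ht-eks-kubeconfig

# Point kubectl at it for the current shell session
export KUBECONFIG=~/ht-eks-kubeconfig
```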
If you get this error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1", then find this line in your kubeconfig and change it to "client.authentication.k8s.io/v1beta1".
EKS has a low default limit on pods per node, which can prevent additional pods from being scheduled onto nodes.
Run these commands before creating nodegroups to maximize the number of pods per node for EKS nodes.
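The specific commands are not preserved here; one common way to raise the per-node pod limit on EKS (with the VPC CNI on Nitro-based instances) is to enable prefix delegation before the nodegroups are created - a sketch:

```sh
# Enable IP prefix delegation on the VPC CNI (aws-node) daemonset
kubectl set env daemonset aws-node -n kube-system ENABLE_PREFIX_DELEGATION=true
kubectl set env daemonset aws-node -n kube-system WARM_PREFIX_TARGET=1
```

With prefix delegation enabled, eksctl can set a higher per-node pod limit via maxPodsPerNode in the nodegroup config (see the nodegroup file below).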
If you have already created a nodegroup, you will have to delete that nodegroup.
Please make sure you have maximized the pods that can be scheduled on nodes before creating a nodegroup.
It is recommended to use at least 4-core / 8 GB RAM nodes for better performance.
To minimize logs in our nodegroups and save disk space, we update some configuration via preBootstrapCommands:

For kubelet (in /etc/kubernetes/kubelet/kubelet-config.json):
* imageGCHighThresholdPercent: 50 (previously 85)
* imageGCLowThresholdPercent: 45 (previously 80)

For Docker as the container runtime (in /etc/docker/daemon.json):
* max-file: 2 (previously 10)

For containerd as the container runtime (in /etc/kubernetes/kubelet/kubelet-config.json):
* containerLogMaxFiles: 2 (previously 5)
While creating nodegroups, keep the vpc and subnet sections from the original cluster file too.
Create a file ht-eks-nodegroups.yaml as shown below
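The original file contents are not preserved here; below is a minimal sketch of one nodegroup entry under the assumptions stated above (single-AZ ap-south-1b, non-spot, no taints or labels). Names, sizes, instance types, and the preBootstrapCommands bodies are placeholders:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: ht-eks-cluster        # must match the existing cluster
  region: ap-south-1

# keep the same vpc/subnets section as in ht-eks-cluster.yaml here

managedNodeGroups:
  - name: ht-general
    instanceType: m5.xlarge   # >= 4 cores / 8 GB RAM recommended
    desiredCapacity: 2
    minSize: 1
    maxSize: 4
    availabilityZones: ["ap-south-1b"]   # single AZ, as discussed above
    privateNetworking: true
    spot: false
    maxPodsPerNode: 110       # works together with prefix delegation
    preBootstrapCommands:
      # placeholder for the log/imageGC settings described above,
      # e.g. edits to /etc/kubernetes/kubelet/kubelet-config.json
      - "echo 'apply kubelet/containerd log settings here'"
```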
Create the nodegroups using the following command
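Assuming the file name above:

```sh
eksctl create nodegroup -f ht-eks-nodegroups.yaml
```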
The above file will create 3 node-groups in the cluster:
* Spot: false
* Taints: none
* Labels: none
* Workload: general-purpose pods such as nginx controllers etc.
Once the nodegroups are created, you can check them on the EKS dashboard under Clusters -> Overview; the nodegroups in the cluster should show status Ready.
Load balancer types for Amazon EKS
We will first install the AWS Load Balancer Controller add-on in our cluster, which helps manage Elastic Load Balancers on AWS.
Amazon container image registries can be found here
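The install command is not preserved here; the standard Helm installation (per the AWS docs) looks like the following. Note that the IAM role / service account (IRSA) setup is a prerequisite covered by the AWS guide, and the cluster name is a placeholder:

```sh
helm repo add eks https://aws.github.io/eks-charts
helm repo update

helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=ht-eks-cluster \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller
```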
Important
The deployed chart doesn't receive security updates automatically. You need to manually upgrade to a newer chart when it becomes available. When upgrading, change install to upgrade in the previous command, but run the following command to install the TargetGroupBinding custom resource definitions before running the previous command.
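The command referenced is the documented CRD install for the chart:

```sh
kubectl apply -k "github.com/aws/eks-charts/stable/aws-load-balancer-controller//crds?ref=master"
```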
Verify that the controller is installed.
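Per the AWS docs, the check is:

```sh
kubectl get deployment -n kube-system aws-load-balancer-controller
```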
Now, moving on to the load balancer.
If you already have an existing application load balancer and want to use it for HyperTest, please follow the below guide; otherwise, continue with a network load balancer.
We'll use ingress-nginx as the ingress in the cluster. If you are already using ingress-nginx, this would only update the existing installation without making any changes.
The latest version of the ingress-nginx controller can be found here. This guide assumes version 1.3.0.
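The deploy command is not preserved here; for version 1.3.0, the upstream AWS (NLB) manifest is applied like this:

```sh
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.0/deploy/static/provider/aws/deploy.yaml
```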
Once deployed, check that the "ingress-nginx" namespace has been created by running kubectl get ns.
Create a file ht-ingress-storage-classes.yaml as shown below
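The original file contents are not preserved here; below is a minimal sketch of the two objects it creates. The class names and storage parameters are placeholders and must match what your HyperTest deployment expects:

```yaml
# Ingress class handled by the ingress-nginx controller deployed above
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: hypertest
spec:
  controller: k8s.io/ingress-nginx
---
# EBS-backed storage class using the EBS CSI driver
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hypertest
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer   # bind volumes in the pod's AZ
parameters:
  type: gp3
```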
Create the ingress and storage class using the following command
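Assuming the file name above:

```sh
kubectl apply -f ht-ingress-storage-classes.yaml
```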
After creating the ingress and storage classes, we will now deploy the HyperTest controller. Please refer to the below guide for the same.