Deploying a Kubernetes Cluster on AWS using kOps

I assume that, since you are here, you already know what Kubernetes is. But for the sake of completeness: Kubernetes is an open-source cluster orchestration tool that makes it easy to deploy, scale, and manage containerized applications. Kubernetes was written by software engineers at Google, with many years of experience in this field.

What is kOps?

kOps is one of the tools that will help you get a Kubernetes cluster up and running quickly on a few supported cloud providers, such as Google Cloud, AWS, or DigitalOcean.

If you are looking for ways to introduce Kubernetes to your production systems, consider the managed Kubernetes services from those cloud providers instead, as they might help you reduce the load on your operations team. kOps, on the other hand, might give you more control over your Kubernetes cluster, and lets you stay up to date with the most recent Kubernetes releases.

Prerequisites

To start creating your own Kubernetes cluster on AWS, you will need to have these ready first:

  1. Make sure you have an AWS account. Obviously!

  2. Create a user, let's call it kops, that has at least these permissions in the region where you want to create the cluster. In this tutorial, I will create the cluster in eu-central-1, which is in Frankfurt, Germany.

AmazonEC2FullAccess
AmazonRoute53FullAccess
AmazonS3FullAccess
IAMFullAccess
AmazonVPCFullAccess
AmazonSQSFullAccess
AmazonEventBridgeFullAccess

Then create the security credentials for this user, and add them to the file ~/.aws/credentials in this format:

[kops]
aws_access_key_id=XXXXXXXXXXXXXX
aws_secret_access_key=YYYYYYYYYYYYYYYYYY
region=eu-central-1
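If you manage several profiles, a quick sanity check helps. Here is a small sketch (the path and profile name match the ones used above):

```shell
# Sketch: check that an AWS credentials file contains a given
# profile section, e.g. [kops].
has_aws_profile() {
    # $1: path to the credentials file, $2: profile name
    grep -q "^\[$2\]" "$1"
}

# Typical usage:
# has_aws_profile "$HOME/.aws/credentials" kops && echo "kops profile found"
```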

This AWS documentation page can walk you through more of the instructions, if you have further questions.

  3. Make sure you have the AWS CLI tool installed on your machine. I am using Linux in this tutorial; macOS will have similar instructions. Download it from here: AWS CLI

  4. Download the kubectl client: https://kubernetes.io/docs/tasks/tools/

  5. Optional: download the helm client: https://github.com/helm/helm/releases. Helm is a tool that helps you manage applications, and their releases, on Kubernetes. We may go through some examples, in this post or another one, on how to use it to install applications on Kubernetes.

  6. Have a domain name registered. This tutorial assumes that your domain is registered with a third party, not AWS. But if your domain is registered on AWS, then this reference will walk you through the steps to have it correctly configured.

In my case, I have the root domain qunsul.com registered with name.com, and the NS records for aws.qunsul.com point to the hosted zone that I have on AWS. More on this later.

Preparing for the cluster creation

Creating the S3 Bucket for the KOPS Store

  • Make sure you are using the right AWS profile, with sufficient permissions. Assuming that you are using the AWS user suggested before, you need to run this command:

export AWS_PROFILE=kops

You can run this command to validate that you are using the right user:

aws sts get-caller-identity

You should get an output that looks like this

{
    "UserId": "XXXXXXXXXXXXXX",
    "Account": "XXXXXXXXXXXXXX",
    "Arn": "arn:aws:iam::XXXXXXXXXXXXXX:user/kops"
}
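You can automate that check: the ARN of the right identity ends with :user/kops, so a script can fail fast on anything else. A sketch (the sample ARN below is made up):

```shell
# Sketch: verify that a caller-identity ARN belongs to the kops IAM user.
check_identity() {
    case "$1" in
        *:user/kops) echo "using kops user" ;;
        *) echo "unexpected identity: $1" >&2; return 1 ;;
    esac
}

# Typical usage against a real account (requires the AWS CLI):
# check_identity "$(aws sts get-caller-identity --query Arn --output text)"
check_identity "arn:aws:iam::123456789012:user/kops"
```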

  • kops will need an S3 bucket to store the state of all the clusters it manages; in my case, I have created a bucket called omar-kops for this. You can use a command like this to create the bucket using the AWS CLI:

export AWS_REGION=eu-central-1
export BUCKET_NAME=omar-kops
aws s3api create-bucket --bucket ${BUCKET_NAME} --region ${AWS_REGION} --create-bucket-configuration LocationConstraint=${AWS_REGION}

  • Let's assume that you have the domain example.com (by the way, you are lucky if you really own this exact domain). We will need to create a hosted zone on AWS for a domain/subdomain under which the cluster(s) you create will live. So let's create a hosted zone called aws.example.com. In my case, I have one called aws.qunsul.com.

The subdomain aws.example.com will be the subdomain that contains ALL of the kops clusters that you create.

You can create the hosted zone for this domain using the AWS Console, or the AWS CLI. Here is what the CLI command looks like:

aws route53 create-hosted-zone --name subdomain.example.com --caller-reference $(date +%s)

(The --caller-reference value just needs to be unique per request; the current timestamp works.)

That means that if you want to create different kubernetes clusters cluster1.aws.example.com and cluster2.aws.example.com, only 1 hosted zone is needed.

Disclaimer: I created my hosted zone from the AWS Console, so if something is wrong with this command, please let me know; see my contact info at the end of this post.

  • In the DNS management panel of the service where you purchased the domain name, add NS records that make the subdomain (or domain) point to the hosted zone you created on AWS. The exact steps depend on the control panel of the provider where you purchased your domain name.
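To verify the delegation afterwards, you can query the NS records for the subdomain directly. A sketch, with aws.example.com standing in for your own subdomain:

```shell
# The answer should list the Route 53 name servers from your hosted zone
# (they typically look like ns-123.awsdns-45.org).
dig +short NS aws.example.com
```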

  • Create a file for the Environment Variables needed

We will need to have some environment variables set whenever we are dealing with this cluster, so let's put them in a file. Let's call it env.sh:

export AWS_REGION=eu-central-1
export AWS_PROFILE=kops
export NAME=k8s.aws.qunsul.com
export KOPS_STATE_STORE=s3://omar-kops

Comments about those environment variables:

  • Make sure you set the region to the one that is suitable for you. If you live in the US, maybe you can choose us-east-1, for lower latency on the API calls; it's also cheaper ;)

  • The NAME is set to the subdomain that will be used to call the API of your Kubernetes cluster. It is composed of some PART + the previous subdomain aws.example.com. In my case it's k8s + aws.qunsul.com.

If I create another cluster later, I can choose another.aws.qunsul.com as well.

  • The variable KOPS_STATE_STORE should contain the S3 bucket that you created in the beginning. This bucket should be the same for all the Kubernetes clusters that you are managing.
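A forgotten `source env.sh` is an easy mistake, so it can be worth failing fast. A small sketch using the variable names defined above:

```shell
# Sketch: fail early if any of the variables from env.sh is missing.
check_env() {
    for var in AWS_REGION AWS_PROFILE NAME KOPS_STATE_STORE; do
        eval "val=\${$var:-}"
        if [ -z "$val" ]; then
            echo "error: $var is not set; did you source env.sh?" >&2
            return 1
        fi
    done
    echo "all cluster variables are set"
}

# Typical usage:
# source env.sh && check_env
```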

Creating the Kubernetes Cluster

I will be using kops version 1.23.0-beta.1. You can check the version that you have using the command kops version:

$ kops version 
Version 1.23.0-beta.1 (git-473018f64ffd2e368d31d608479355d15cddb0d3)

Make sure that the environment variables are set

source env.sh
echo $KOPS_STATE_STORE

Then run the command kops create with these parameters

kops create cluster --networking flannel --dns-zone aws.qunsul.com --zones=eu-central-1a --master-count 1 \
--master-size t3.small --node-count 1 --node-size t3.small ${NAME}

Feel free to adjust the node sizes, but I recommend not using anything smaller than t3.small, because some components might fail to run.

Make sure you set --dns-zone to the subdomain, which is in your case aws.example.com.

This command won't do anything except create the configuration for the cluster you want, and save that configuration in the S3 state store bucket.
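Before applying it, you might want to review the generated spec. A sketch (assumes the NAME variable from env.sh is set):

```shell
# Print the cluster spec that kops stored in the S3 state store.
kops get cluster ${NAME} -o yaml
```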

If you run kops get cluster to list your clusters, you should get an output like this:

kops get cluster
NAME            CLOUD   ZONES
k8s.aws.qunsul.com  aws eu-central-1a

That means that if you use the same state store on some other computer, you will see the same cluster there as well. This allows different teammates to manage the same clusters from different computers.

Now to create the actual cluster using the configuration that we have, let's go ahead and apply this config. Run this command

kops update cluster --name ${NAME} --yes --admin=87600h

This command will take a while, because it is going to create all the resources on AWS. From this point on, you will be billed for all the resources you are creating.

After the command is done, run this command to list all the EC2 instances that you have, to validate:

aws ec2 describe-instances | jq '.Reservations[]|.Instances[].InstanceId'

This command assumes that you have the tool jq installed. I highly recommend having it installed. Consider reading another tutorial I wrote about using jq. Here is the tutorial: JQ Tutorial
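If you want to see what that jq filter does without touching AWS, here is a self-contained sketch with a canned payload in the same shape as the describe-instances response (the instance IDs are made up):

```shell
# Run the same jq filter against a canned payload shaped like the
# `aws ec2 describe-instances` response.
cat <<'EOF' | jq -r '.Reservations[]|.Instances[].InstanceId'
{"Reservations": [{"Instances": [{"InstanceId": "i-0aaa111"}, {"InstanceId": "i-0bbb222"}]}]}
EOF
```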

It's worth noting that the command kops update might return early, while your Kubernetes cluster is still being deployed.
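Rather than polling describe-instances by hand, you can let kops block until the cluster becomes healthy. A minimal sketch (the 10m timeout is an arbitrary choice):

```shell
# Block until every node and system pod passes validation, or time out.
kops validate cluster --name ${NAME} --wait 10m
```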

Now you can also see that kops has automatically updated your ~/.kube/config file to include the admin credentials for the new Kubernetes cluster. You can run the command kubectl config get-clusters to see the list of the Kubernetes clusters you have in this file.

By this time, the cluster should be up and ready, so let's run some commands to validate this.

kubectl get nodes

The output is something like this:
NAME                                            STATUS   ROLES                  AGE     VERSION
ip-172-20-38-91.eu-central-1.compute.internal   Ready    node                   8m10s   v1.22.4
ip-172-20-47-81.eu-central-1.compute.internal   Ready    control-plane,master   10m     v1.22.4

That's cool. We have a Kubernetes cluster up and running.

You can see all the currently running pods on your cluster using the command

kubectl get pods --all-namespaces

I will follow up on this with more posts to see how we can deploy some applications on Kubernetes.

Deleting the Cluster

Make sure to delete the cluster if you don't need it anymore. Otherwise you will be paying money for nothing.

kops delete cluster --name=${NAME} --yes

Please be careful when running this command if you have multiple clusters. If you take your production cluster down, it's not my fault!

Contact me regarding any mistakes/comments

On Twitter: @OmarQunsul. Email: omar.qunsul ( AT ) gmail.com

About Me

My name is Omar Qunsul. I write these articles mainly as a future reference for me. So I dedicate some time to make them look shiny, and share them with the public.

You can find me on Twitter @OmarQunsul, and on LinkedIn.
