K8S on EC2 vs EKS

In an interview last week, I learned that the company uses Ansible heavily to provision and configure their k8s cluster on AWS EC2, aiming for maximum control and flexibility in case they ever need to move to another cloud provider.

Here I will use both Terraform and Ansible to automate the creation and configuration of a K8S cluster on a few EC2 instances.

The tools and workflow

  • The major tools are Terraform and Ansible, glued together with a shell script.
  1. Use Terraform with an S3 bucket as the backend for the state file, then create the security groups and 4 EC2 instances (1 bastion, 1 master, 2 workers); see the backend sketch after this list.

  2. Since the instances are created dynamically in the AWS environment, Ansible's aws_ec2 dynamic inventory is the best way to manage them (a sample aws_ec2.yaml follows the backend sketch below).

  3. Use a Terraform connection block to SSH to the bastion host after creation, and the file provisioner to upload a shell script that bootstraps the bastion as the Ansible control node and configures the aws_ec2 dynamic inventory to fetch the other 3 instances for K8S.

  4. Continue with the Terraform file and remote-exec provisioners to upload the playbooks, then use inline commands to run them: init the k8s master node and join the worker nodes.

  5. Finally, run one more playbook to configure the bastion to run kubectl.
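
For step 1, the remote state backend is a small block of HCL. A minimal sketch, assuming the bucket (the name below is hypothetical) was created beforehand, since Terraform cannot store its state in a bucket it is about to create:

# backend.tf

terraform {
  backend "s3" {
    bucket = "my-tf-state-bucket"           # hypothetical; must already exist
    key    = "k8s-on-ec2/terraform.tfstate"
    region = "us-east-1"
  }
}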

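For step 2, the aws_ec2.yaml that later gets uploaded to the bastion could look like this sketch; the region, filter, and group naming are assumptions to adjust against your own Terraform tags:

# aws_ec2.yaml

plugin: amazon.aws.aws_ec2
regions:
  - us-east-1                    # assumption: match the Terraform region
filters:
  instance-state-name: running
keyed_groups:
  # builds groups such as tag_Name_master and tag_Name_worker1
  - key: tags
    prefix: tag
hostnames:
  - private-ip-address           # the bastion reaches nodes over private IPs
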
Create SGs and EC2s, with bastion bootstrap, via Terraform

# main.tf

resource "aws_instance" "bolg" {
  ami             = var.ami_id # Replace with your AMI ID
  instance_type   = var.instance_type_free
  security_groups = [aws_security_group.blog_sg.name]
  #  vpc_security_group_ids = ["sg-089842a753c9309bb"]
  key_name = var.key_pair
  user_data = <<-EOF
    #!/bin/bash
    sudo apt update
    sudo apt install -y software-properties-common
    sudo add-apt-repository --yes --update ppa:ansible/ansible
    sudo apt install ansible -y
  EOF
  tags = {
    Name = "blog"
  }
  lifecycle {
    prevent_destroy = true
  }
}
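
# The blog_sg referenced above is not shown in the original snippet; here is a
# minimal sketch of what it might contain (assumed rules: SSH open to the
# world for a short-lived lab, all egress allowed; tighten the CIDR in practice)
resource "aws_security_group" "blog_sg" {
  name        = "blog_sg"
  description = "Lab SG for the bastion host"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}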
resource "null_resource" "upload_playbook" {
  triggers = {
    # Add a dummy trigger to force a refresh
    timestamp = "${timestamp()}"
  }
  depends_on = [null_resource.wait_for_bastion]
  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = file("terraform-new-key1.pem")
    host        = aws_instance.blog.public_ip # the bastion instance created above
  }
  provisioner "file" {
    source      = "pb1.yaml"
    destination = "/home/ubuntu/pb1.yaml"
  }
  provisioner "file" {
    source      = "aws_ec2.yaml"
    destination = "/home/ubuntu/aws_ec2.yaml"
  }
  provisioner "file" {
    source      = "ansible.sh"
    destination = "/home/ubuntu/ansible.sh"
  }
  provisioner "remote-exec" {
    inline = [
      "cd /home/ubuntu/",
      "sudo chmod 600 terraform-new-key1.pem",
      "sudo chmod +x pb1.yaml",
      "sudo ansible-playbook pb1.yaml",
      "sudo chmod +x ansible.sh",
      "sudo ./ansible.sh",
      "sudo ansible-inventory -i aws_ec2.yaml --list",
      "sudo ansible-inventory --graph"
    ]
  }
}
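
The ansible.sh uploaded above is what turns the bastion into a working Ansible control node. A sketch of what it needs to do, assuming the bastion has an IAM instance profile (or exported credentials) that allows ec2:DescribeInstances; the paths are assumptions:

# ansible.sh

#!/bin/bash
set -e
# boto3/botocore are required by the amazon.aws.aws_ec2 inventory plugin
sudo apt install -y python3-pip
pip3 install boto3 botocore
# the aws_ec2 inventory plugin ships in the amazon.aws collection
ansible-galaxy collection install amazon.aws
# enable the plugin and make the dynamic inventory the default
cat <<'CFG' | sudo tee /etc/ansible/ansible.cfg
[defaults]
inventory = /home/ubuntu/aws_ec2.yaml
remote_user = ubuntu
private_key_file = /home/ubuntu/terraform-new-key1.pem
host_key_checking = False
[inventory]
enable_plugins = amazon.aws.aws_ec2
CFG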

SSH to the bastion host and run playbooks to init and join k8s nodes

# k8s-deploy.yaml

# Each imported playbook targets its hosts via EC2 tags (through the dynamic inventory).
# import_playbook is a top-level directive, so the imports sit at the play
# level rather than under tasks.
---
- name: test EC2 dynamic hosts connections
  import_playbook: 2-test-connection.yaml

- name: Pre-task for all k8s hosts
  import_playbook: 3-k8s-nodes-preparation.yaml

- name: init Master node
  import_playbook: 4-master-init.yaml

- name: Join worker nodes
  import_playbook: 5-work-join.yaml

- name: configure bastion to run kubectl
  import_playbook: 8-local-kubeconf-admin.yaml
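
As a reference for the master init step, 4-master-init.yaml could look roughly like the sketch below. This is an assumption about the playbook's contents, not the author's actual file; the tag_Name_master group name comes from the keyed_groups config sketched earlier:

# 4-master-init.yaml

---
- name: init Master node
  hosts: tag_Name_master          # hypothetical group from the aws_ec2 inventory
  become: true
  tasks:
    - name: Initialize the control plane with kubeadm
      ansible.builtin.command: kubeadm init --pod-network-cidr=10.244.0.0/16
      args:
        creates: /etc/kubernetes/admin.conf   # makes re-runs idempotent

    - name: Generate a join command for the workers
      ansible.builtin.command: kubeadm token create --print-join-command
      register: join_cmd

    - name: Save the join command on the bastion for 5-work-join.yaml
      ansible.builtin.copy:
        content: "{{ join_cmd.stdout }}"
        dest: /home/ubuntu/kubeadm-join.sh
        mode: "0700"
      become: false
      delegate_to: localhost      # localhost here is the bastion control node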

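5-work-join.yaml would then push that saved command to each worker, again a sketch under the same assumptions:

# 5-work-join.yaml

---
- name: Join worker nodes
  hosts: tag_Name_worker1:tag_Name_worker2   # hypothetical worker group names
  become: true
  tasks:
    - name: Copy the join command saved on the bastion
      ansible.builtin.copy:
        src: /home/ubuntu/kubeadm-join.sh
        dest: /tmp/kubeadm-join.sh
        mode: "0700"

    - name: Join the cluster
      ansible.builtin.command: /tmp/kubeadm-join.sh
      args:
        creates: /etc/kubernetes/kubelet.conf   # skip if already joined
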
Conclusion

Now we are able to spin up a 3-node K8S cluster on EC2 instances in about 20 minutes.

For lab purposes, running K8S on EC2 instances in the default VPC can be a cheaper and more flexible option than EKS: the AWS-managed EKS control plane is billed per hour, requires a VPC with subnets in at least two Availability Zones, and a managed node group's instance type cannot be changed without deleting and recreating the node group.