
Configuring a KinD Cluster with NGINX Ingress Using Terraform and Helm


We'll go over setting up a local Kubernetes cluster that will let you access your services over localhost using the nginx ingress.


Running a multi-node Kubernetes cluster is pretty painless with KinD (Kubernetes in Docker). It takes about a minute to spin up a cluster. You could use the kind CLI tool directly, but in my opinion, if you plan to use Terraform in production you should use it in development too. It saves you from running multiple commands manually or creating a wrapper shell script.
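For comparison, here's roughly what the manual kind CLI workflow looks like. This is a sketch; kind-config.yaml is a hypothetical file that would hold the same Cluster spec we'll embed in the Terraform resource below:

```shell
# Manual alternative: create the cluster with the kind CLI instead of Terraform.
# (kind-config.yaml is a placeholder for the Cluster spec shown later.)
kind create cluster --name demo-local --config kind-config.yaml

# ...and tear it down later:
kind delete cluster --name demo-local
```

With Terraform, both directions collapse into `terraform apply` and `terraform destroy`, and the ingress controller gets installed in the same step.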

In production you’d likely use a managed Kubernetes service from your cloud hosting provider of choice, but for local development this will get you going with a local cluster very quickly. We’ll also go over how to hook up an NGINX Ingress Controller using Helm so you can access your services over localhost.

Demo Video

Config Files

Here are all of the Terraform config files covered in the video:

# versions.tf

terraform {
  required_providers {
    kind = {
      source  = "kyma-incubator/kind"
      version = "0.0.9"
    }

    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.5.0"
    }

    helm = {
      source  = "hashicorp/helm"
      version = "2.3.0"
    }

    null = {
      source  = "hashicorp/null"
      version = "3.1.0"
    }
  }

  required_version = ">= 1.0.0"
}
# variables.tf

variable "kind_cluster_name" {
  type        = string
  description = "The name of the cluster."
  default     = "demo-local"
}

variable "kind_cluster_config_path" {
  type        = string
  description = "The location where this cluster's kubeconfig will be saved to."
  default     = "~/.kube/config"
}

variable "ingress_nginx_helm_version" {
  type        = string
  description = "The Helm version for the nginx ingress controller."
  default     = "4.0.6"
}

variable "ingress_nginx_namespace" {
  type        = string
  description = "The nginx ingress namespace (it will be created if needed)."
  default     = "ingress-nginx"
}
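All of these defaults can be overridden without editing the files. For example, a hypothetical terraform.tfvars could look like this (the file name is conventional; the values shown are just examples):

```hcl
# terraform.tfvars (example overrides; values are hypothetical)
kind_cluster_name          = "my-local"
ingress_nginx_helm_version = "4.0.6"
```

You could also pass individual values on the command line with `terraform apply -var="kind_cluster_name=my-local"`.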
# kind_cluster.tf

provider "kind" {
}

provider "kubernetes" {
  config_path = pathexpand(var.kind_cluster_config_path)
}

resource "kind_cluster" "default" {
  name            = var.kind_cluster_name
  kubeconfig_path = pathexpand(var.kind_cluster_config_path)
  wait_for_ready  = true

  kind_config {
    kind        = "Cluster"
    api_version = "kind.x-k8s.io/v1alpha4"

    node {
      role = "control-plane"

      kubeadm_config_patches = [
        "kind: InitConfiguration\nnodeRegistration:\n  kubeletExtraArgs:\n    node-labels: \"ingress-ready=true\"\n"
      ]
      extra_port_mappings {
        container_port = 80
        host_port      = 80
      }
      extra_port_mappings {
        container_port = 443
        host_port      = 443
      }
    }

    node {
      role = "worker"
    }
  }
}
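Since it's a multi-node cluster, adding capacity is just another node block inside kind_config. For example, a second worker (a sketch, not something covered in the video):

```hcl
# Add inside the kind_config block above to get a third node:
node {
  role = "worker"
}
```

Only the control-plane node needs the extra_port_mappings and the ingress-ready label, since that's where the ingress controller will be scheduled.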
# nginx_ingress.tf

provider "helm" {
  kubernetes {
    config_path = pathexpand(var.kind_cluster_config_path)
  }
}

resource "helm_release" "ingress_nginx" {
  name       = "ingress-nginx"
  repository = "https://kubernetes.github.io/ingress-nginx"
  chart      = "ingress-nginx"
  version    = var.ingress_nginx_helm_version

  namespace        = var.ingress_nginx_namespace
  create_namespace = true

  values = [file("nginx_ingress_values.yaml")]

  depends_on = [kind_cluster.default]
}

resource "null_resource" "wait_for_ingress_nginx" {
  triggers = {
    key = uuid()
  }

  provisioner "local-exec" {
    command = <<EOF
      printf "\nWaiting for the nginx ingress controller...\n"
      kubectl wait --namespace ${helm_release.ingress_nginx.namespace} \
        --for=condition=ready pod \
        --selector=app.kubernetes.io/component=controller \
        --timeout=90s
    EOF
  }

  depends_on = [helm_release.ingress_nginx]
}
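Once the apply finishes, you can double check the controller yourself with standard kubectl commands:

```shell
# Inspect the ingress controller pod and its NodePort service.
kubectl get pods,svc --namespace ingress-nginx
```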
---
# nginx_ingress_values.yaml

controller:
  updateStrategy:
    type: "RollingUpdate"
    rollingUpdate:
      maxUnavailable: 1
  hostPort:
    enabled: true
  terminationGracePeriodSeconds: 0
  service:
    type: "NodePort"
  watchIngressWithoutClass: true
  nodeSelector:
    ingress-ready: "true"
  tolerations:
    - key: "node-role.kubernetes.io/master"
      operator: "Equal"
      effect: "NoSchedule"
  publishService:
    enabled: false
  extraArgs:
    publish-status-address: "localhost"
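The values above only override a handful of the chart's settings. To see every default the chart ships with, you can ask Helm directly:

```shell
# Add the repo (if you haven't already) and dump the chart's default values.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm show values ingress-nginx/ingress-nginx --version 4.0.6
```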

Here’s the ./demo.sh file which you can run to create the cluster:

#!/usr/bin/env sh

set -e

terraform apply -auto-approve

printf "\nWaiting for the echo web server service... \n"
kubectl apply -f https://kind.sigs.k8s.io/examples/ingress/usage.yaml
sleep 10

printf "\nYou should see 'foo' as a response below (if you do the ingress is working):\n"
curl http://localhost/foo
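The usage.yaml manifest from the kind docs wires up the foo and bar echo services. For your own app, a minimal Ingress routing a path to a service would look something like this ("myapp" and port 80 are placeholders, not something from the video):

```yaml
# minimal-ingress.yaml (sketch; "myapp" is a hypothetical service)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
spec:
  rules:
    - http:
        paths:
          - path: /myapp
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
```

Since the values file sets watchIngressWithoutClass, the controller picks this up even without an ingressClassName.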

Commands Run

A few commands to interact with the cluster and destroy it:

./demo.sh

kubectl get all -A

terraform destroy -auto-approve

Timestamps

  • 0:45 – Initializing Terraform and spinning up the cluster
  • 2:30 – Going over a few tools we’ll be using
  • 4:44 – Our local cluster is ready to go
  • 5:58 – Going over a few required providers
  • 7:21 – A couple of variables that we can configure
  • 8:48 – Updating the nginx ingress controller to a newer version
  • 10:53 – Going over the kind cluster Terraform resource
  • 14:41 – Configuring the nginx ingress controller with Terraform
  • 19:48 – Creating a null resource with kubectl to wait for the ingress controller

Which local Kubernetes cluster tool do you use? Let me know below.
