Configuring a KinD Cluster with NGINX Ingress Using Terraform and Helm


We'll go over setting up a local Kubernetes cluster that lets you access your services over localhost using the NGINX ingress controller.


Running a multi-node Kubernetes cluster is pretty painless with KinD (Kubernetes in Docker). It takes about a minute to spin up a cluster. You could use the kind CLI tool directly, but in my opinion, if you plan to use Terraform in production you should use it in development too. It saves you from running multiple commands manually or from creating a wrapper shell script.
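To make that concrete, here's roughly what the manual workflow looks like without Terraform (a sketch; the kind config file name is assumed, and the chart version matches the one used later in this post):

```sh
# Rough manual equivalent of what the Terraform config automates.
kind create cluster --name demo-local --config kind-config.yaml

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --version "4.7.1" \
  -f nginx_ingress_values.yaml
```

With Terraform, all of that collapses into a single `terraform apply`, and the state of the cluster and Helm release is tracked for you.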

In production you’d likely use a managed Kubernetes service from your cloud hosting provider of choice, but for local development this will get you going with a local cluster very quickly. We’ll also go over how to hook up an NGINX Ingress Controller using Helm so you can access your services over localhost.
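For example, once everything is up, an Ingress resource like this is all it takes to reach a service at http://localhost/foo (the Service name and port here are hypothetical):

```yaml
# Hypothetical Ingress routing localhost/foo to an existing Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
spec:
  rules:
  - http:
      paths:
      - path: /foo
        pathType: Prefix
        backend:
          service:
            name: foo-service   # assumed Service name
            port:
              number: 8080      # assumed Service port
```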

Demo Video

Config Files

Here are all of the Terraform config files covered in the video. The video uses a different kind provider; over the years it stopped being maintained, so I changed this post to use a more up to date version, and I also updated all of the provider versions.

# versions.tf

terraform {
  required_providers {
    kind = {
      source  = "tehcyx/kind"
      version = "0.2.0"
    }

    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.22.0"
    }

    helm = {
      source  = "hashicorp/helm"
      version = "2.10.1"
    }

    null = {
      source  = "hashicorp/null"
      version = "3.2.1"
    }
  }

  required_version = ">= 1.0.0"
}

# variables.tf

variable "kind_cluster_name" {
  type        = string
  description = "The name of the cluster."
  default     = "demo-local"
}

variable "kind_cluster_config_path" {
  type        = string
  description = "The location where this cluster's kubeconfig will be saved to."
  default     = "~/.kube/config"
}

variable "ingress_nginx_helm_version" {
  type        = string
  description = "The Helm version for the nginx ingress controller."
  default     = "4.7.1"
}

variable "ingress_nginx_namespace" {
  type        = string
  description = "The nginx ingress namespace (it will be created if needed)."
  default     = "ingress-nginx"
}

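These defaults can be overridden at apply time without editing the file, for example:

```sh
# Override a variable on the command line:
terraform apply -var "kind_cluster_name=my-cluster"

# Or through the environment:
TF_VAR_ingress_nginx_helm_version="4.7.1" terraform apply
```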
# kind_cluster.tf

provider "kind" {
}

provider "kubernetes" {
  config_path = pathexpand(var.kind_cluster_config_path)
}

resource "kind_cluster" "default" {
  name            = var.kind_cluster_name
  kubeconfig_path = pathexpand(var.kind_cluster_config_path)
  wait_for_ready  = true

  kind_config {
    kind        = "Cluster"
    api_version = "kind.x-k8s.io/v1alpha4"

    node {
      role = "control-plane"

      kubeadm_config_patches = [
        "kind: InitConfiguration\nnodeRegistration:\n  kubeletExtraArgs:\n    node-labels: \"ingress-ready=true\"\n"
      ]

      extra_port_mappings {
        container_port = 80
        host_port      = 80
      }

      extra_port_mappings {
        container_port = 443
        host_port      = 443
      }
    }

    node {
      role = "worker"
    }
  }
}

# nginx_ingress.tf

provider "helm" {
  kubernetes {
    config_path = pathexpand(var.kind_cluster_config_path)
  }
}

resource "helm_release" "ingress_nginx" {
  name       = "ingress-nginx"
  repository = "https://kubernetes.github.io/ingress-nginx"
  chart      = "ingress-nginx"
  version    = var.ingress_nginx_helm_version

  namespace        = var.ingress_nginx_namespace
  create_namespace = true

  values = [file("nginx_ingress_values.yaml")]

  depends_on = [kind_cluster.default]
}

resource "null_resource" "wait_for_ingress_nginx" {
  triggers = {
    key = uuid()
  }

  provisioner "local-exec" {
    command = <<EOF
      printf "\nWaiting for the nginx ingress controller...\n"
      kubectl wait --namespace ${helm_release.ingress_nginx.namespace} \
        --for=condition=ready pod \
        --selector=app.kubernetes.io/component=controller \
        --timeout=90s
    EOF
  }

  depends_on = [helm_release.ingress_nginx]
}

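The kubectl wait above blocks until the controller pod reports ready, which is why the null resource is a reliable gate before deploying anything that needs an ingress. If you're curious what that wait-and-retry pattern looks like in plain shell, here's a generic sketch (the `wait_until` helper is hypothetical, not part of this post's config):

```shell
#!/bin/sh
# wait_until: retry a command up to N times, sleeping 1 second between attempts.
# Usage: wait_until <attempts> <command...>
wait_until() {
  attempts=$1
  shift
  i=0
  until "$@"; do
    i=$((i + 1))
    if [ "$i" -ge "$attempts" ]; then
      return 1
    fi
    sleep 1
  done
}
```

For example, `wait_until 90 kubectl get pods -n ingress-nginx` would behave similarly to the `kubectl wait` call with a 90 second timeout.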
# nginx_ingress_values.yaml

controller:
  hostPort:
    enabled: true
  terminationGracePeriodSeconds: 0
  service:
    type: "NodePort"
  watchIngressWithoutClass: true
  nodeSelector:
    ingress-ready: "true"
  tolerations:
  - effect: "NoSchedule"
    key: "node-role.kubernetes.io/master"
    operator: "Equal"
  - effect: "NoSchedule"
    key: "node-role.kubernetes.io/control-plane"
    operator: "Equal"
  admissionWebhooks:
    enabled: false
  extraArgs:
    publish-status-address: "localhost"

Here’s ./demo.sh, which you can run to create the cluster after running terraform init:

#!/usr/bin/env sh

set -e

terraform apply -auto-approve

printf "\nWaiting for the echo web server service... \n"
kubectl apply -f https://kind.sigs.k8s.io/examples/ingress/usage.yaml
sleep 10

printf "\nYou should see a timestamp as a response below (if you do the ingress is working):\n"
curl http://localhost/foo

Commands Run

A few commands to interact with the cluster and destroy it:


kubectl get all -A

terraform destroy -auto-approve

Timestamps


  • 0:45 – Initializing Terraform and spinning up the cluster
  • 2:30 – Going over a few tools we’ll be using
  • 4:44 – Our local cluster is ready to go
  • 5:58 – Going over a few required providers
  • 7:21 – A couple of variables that we can configure
  • 8:48 – Updating the nginx ingress controller to a newer version
  • 10:53 – Going over the kind cluster Terraform resource
  • 14:41 – Configuring the nginx ingress controller with Terraform
  • 19:48 – Creating a null resource with kubectl to wait for the ingress controller

Which local Kubernetes cluster tool do you use? Let me know below.
