Replace Kustomize commonLabels, patchesJson6902 and patchesStrategicMerge
All of them have been deprecated by Kustomize and you can replace them with the newer `labels` and `patches` functionality.
Prefer video? Here it is on YouTube.
If you’re using a reasonably modern version of Kustomize such as v5.3+ you may have seen deprecation warnings for `commonLabels`, `patchesJson6902` and `patchesStrategicMerge`.
In a project I worked on for the last couple of years I ended up using all 3; I originally started with an older version of Kustomize where they weren’t deprecated. Recently I wanted to start using Argo CD ApplicationSets, which support Kustomize `patches` but not the deprecated patching methods, so I needed to refactor a few things to make it all work. Even if you don’t have to stop using them right now, it’s a good idea to drop them because that functionality may disappear in later versions of Kustomize.
# commonLabels
# Warning: 'commonLabels' is deprecated. Please use 'labels' instead.
Using `commonLabels` is a quick way to add labels and selectors to the resources being generated with Kustomize.
# kustomization.yaml
commonLabels:
  app.kubernetes.io/name: "hello-app"
The above will add that label to all resources and even add selectors to any deployments and services you may have. That’s pretty sweet for 2 lines of code in 1 spot.
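For example, here’s roughly what gets generated for a deployment and service (a trimmed sketch that only shows the fields `commonLabels` touches):
# Trimmed sketch of the generated output.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: hello-app     # added to the metadata
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: hello-app   # added to the selector
  template:
    metadata:
      labels:
        app.kubernetes.io/name: hello-app # added to the pod template
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: hello-app
spec:
  selector:
    app.kubernetes.io/name: hello-app     # added to the service's selector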
Use `labels` instead
You can use `labels` to replace `commonLabels`.
# kustomization.yaml
labels:
- includeSelectors: true
  includeTemplates: true
  pairs:
    app.kubernetes.io/name: "hello-app"
The above produces the same output as `commonLabels`. Yes, it’s more verbose, but you get the benefit of having more control and it may even remove the need for `patchesJson6902`; we’ll cover that use case next. In practice you may end up setting `includeSelectors: false` too, and we’ll see why shortly.
# patchesJson6902
# Warning: 'patchesJson6902' is deprecated. Please use 'patches' instead.
One “benefit” of using this over `patches` is that it gets applied at a different point in the build than `patches`, which can help you solve certain types of problems. This is easier to explain with a concrete example, but you can apply the same idea to other use cases.
Imagine having two deployments and one service. The deployments are for your web application and background worker, and the service is associated with your web app.
If you use `commonLabels` you will end up with a `hello-app` label on both your web and worker deployments. If your service has a selector for `hello-app` too, your load balancer might try to health check your background worker, and that check will never pass because the worker isn’t a web app listening on a port. That’s what happened to me with EKS and the AWS Load Balancer Controller.
To get around that you can patch your background worker’s label and selector. However, if you try to use `patches` you’ll get a `missing value` error.
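For reference, here’s the failing variant. It uses the same patch file as the working example below, presumably failing because at the point `patches` runs, the `commonLabels` label hasn’t been added yet, so there’s nothing to replace:
# kustomization.yaml (fails with a "missing value" error)
patches:
- path: "patch-deployment-worker-common-labels.yaml"
  target:
    kind: "Deployment"
    name: "worker-app"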
But you can use `patchesJson6902`, which does allow you to patch these labels and selectors. Here’s an example of that:
# kustomization.yaml
patchesJson6902:
- path: "patch-deployment-worker-common-labels.yaml"
  target:
    kind: "Deployment"
    name: "worker-app"
# patch-deployment-worker-common-labels.yaml
---
- op: "replace"
path: "/metadata/labels/app.kubernetes.io~1name"
value: "hello-worker-app"
- op: "replace"
path: "/spec/selector/matchLabels/app.kubernetes.io~1name"
value: "hello-worker-app"
- op: "replace"
path: "/spec/template/metadata/labels/app.kubernetes.io~1name"
value: "hello-worker-app"
If you build this it will work and your label / selector problem is solved, but you’re still stuck using `patchesJson6902`.
Instead, you can prevent your labels from adding selectors:
# kustomization.yaml
labels:
- includeSelectors: false
  includeTemplates: true
  pairs:
    app.kubernetes.io/name: "hello-app"
This is good. Now all of your resources have the `hello-app` label, which in my opinion is even better than the previous solution because the background worker is still part of `hello-app`; it shouldn’t have a different label.
The next piece of the puzzle is adding a different label and selector to your web app deployment and referencing that label in your service’s selector.
# kustomization.yaml
patches:
- path: "patch-deployment-web.yaml"
  target:
    kind: "Deployment"
    name: "app"
- path: "patch-service.yaml"
  target:
    kind: "Service"
    name: "app"
- path: "patch-deployment-worker.yaml"
  target:
    kind: "Deployment"
    name: "worker-app"
# patch-deployment-web.yaml
---
- op: "add"
path: "/metadata/labels/app.kubernetes.io~1tier"
value: "hello-web-app"
- op: "add"
path: "/spec/selector/matchLabels/app.kubernetes.io~1tier"
value: "hello-web-app"
- op: "add"
path: "/spec/template/metadata/labels/app.kubernetes.io~1tier"
value: "hello-web-app"
# patch-service.yaml
---
- op: "add"
path: "/spec/selector/app.kubernetes.io~1tier"
value: "hello-web-app"
There we go. Since we’re adding labels and selectors rather than replacing them, we can use `patches`. We’re linking up the web app deployment and service with a new `tier` label. This is a much cleaner and more future-proof solution.
# patch-deployment-worker.yaml
---
- op: "add"
path: "/spec/selector/matchLabels/app.kubernetes.io~1name"
value: "hello-app"
We also need to add the above selector to the background worker because, remember, the `labels` call omits selectors and the deployment needs one that matches the label that was added.
# patchesStrategicMerge
# Warning: 'patchesStrategicMerge' is deprecated. Please use 'patches' instead.
This behaves similarly to `patches` in terms of when it gets applied. In my case I used it initially because the syntax was less verbose than `patches`, and it wasn’t deprecated when I set things up a few years ago.
For example, let’s say you want to delete a resource. You can do:
# kustomization.yaml
patchesStrategicMerge:
- "delete-job-broadcast.yaml"
# delete-job-broadcast.yaml
---
apiVersion: "batch/v1"
kind: "Job"
metadata:
  name: "app-broadcast"
$patch: "delete"
You can do the same thing with `patches`:
# kustomization.yaml
patches:
- path: "delete-job-broadcast.yaml"
target:
kind: "Job"
name: "app-broadcast"
The patch file ends up being the same too, which is nice. This was an easy refactor.
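As a side note, `patches` also accepts the patch inline through `patch` instead of `path`, which saves a file for tiny patches like this one. Here’s a sketch of that variant; I’d run it through `kustomize build` on your version before committing to it:
# kustomization.yaml
patches:
- patch: |-
    apiVersion: "batch/v1"
    kind: "Job"
    metadata:
      name: "app-broadcast"
    $patch: "delete"
  target:
    kind: "Job"
    name: "app-broadcast"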
# Testing Everything
One nice benefit is that most of the above is testable client side. You can run `kustomize build` before and after your changes and then `diff` the results. If you refactor incrementally you can expect no changes in some cases, and in the cases where you do expect changes you can review the diff to make sure it all looks good.
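Here’s a sketch of that workflow, assuming your overlay lives in `overlay/`:
# Capture the rendered output before touching anything.
kustomize build overlay/ > /tmp/before.yaml

# ...refactor kustomization.yaml and your patch files...

# Render it again and compare; no output from diff means a no-op refactor.
kustomize build overlay/ > /tmp/after.yaml
diff /tmp/before.yaml /tmp/after.yaml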
Here’s a minimal output of building the “hello” app with the new setup. It only focuses on the names, labels and selectors; I’ve omitted everything else:
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: hello-app
  name: hello-app
spec:
  ports:
  - name: http
    port: 80
    targetPort: http
  selector:
    app.kubernetes.io/tier: hello-web-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: hello-app
    app.kubernetes.io/tier: hello-web-app
  name: hello-app
spec:
  selector:
    matchLabels:
      app.kubernetes.io/tier: hello-web-app
  template:
    metadata:
      labels:
        app.kubernetes.io/name: hello-app
        app.kubernetes.io/tier: hello-web-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: hello-app
  name: hello-worker-app
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: hello-app
  template:
    metadata:
      labels:
        app.kubernetes.io/name: hello-app
Here’s the diff between the old and new versions:
--- old 2024-09-25 14:45:56.961568003 -0400
+++ new 2024-09-25 14:46:02.341567998 -0400
@@ -10,34 +10,36 @@
     port: 80
     targetPort: http
   selector:
-    app.kubernetes.io/name: hello-app
+    app.kubernetes.io/tier: hello-web-app
 ---
 apiVersion: apps/v1
 kind: Deployment
 metadata:
   labels:
     app.kubernetes.io/name: hello-app
+    app.kubernetes.io/tier: hello-web-app
   name: hello-app
 spec:
   selector:
     matchLabels:
-      app.kubernetes.io/name: hello-app
+      app.kubernetes.io/tier: hello-web-app
   template:
     metadata:
       labels:
         app.kubernetes.io/name: hello-app
+        app.kubernetes.io/tier: hello-web-app
 ---
 apiVersion: apps/v1
 kind: Deployment
 metadata:
   labels:
-    app.kubernetes.io/name: hello-worker-app
+    app.kubernetes.io/name: hello-app
   name: hello-worker-app
 spec:
   selector:
     matchLabels:
-      app.kubernetes.io/name: hello-worker-app
+      app.kubernetes.io/name: hello-app
   template:
     metadata:
       labels:
-        app.kubernetes.io/name: hello-worker-app
+        app.kubernetes.io/name: hello-app
Here’s the new version of the `kustomization.yaml` file:
---
apiVersion: "kustomize.config.k8s.io/v1beta1"
kind: "Kustomization"
namePrefix: "hello-"
labels:
- includeSelectors: false
includeTemplates: true
pairs:
app.kubernetes.io/name: "hello-app"
resources:
- "../base"
patches:
- path: "patch-deployment-web.yaml"
target:
kind: "Deployment"
name: "app"
- path: "patch-deployment-worker.yaml"
target:
kind: "Deployment"
name: "worker-app"
- path: "patch-service.yaml"
target:
kind: "Service"
name: "app"
- path: "delete-job-broadcast.yaml"
target:
kind: "Job"
name: "app-broadcast"
And here’s the diff of the old vs new `kustomization.yaml` file:
--- old/overlay/kustomization.yaml 2024-09-25 14:45:27.441567978 -0400
+++ new/overlay/kustomization.yaml 2024-09-25 14:10:53.591545309 -0400
@@ -4,23 +4,29 @@
 namePrefix: "hello-"
 
-commonLabels:
-  app.kubernetes.io/name: "hello-app"
+labels:
+- includeSelectors: false
+  includeTemplates: true
+  pairs:
+    app.kubernetes.io/name: "hello-app"
 
 resources:
 - "../base"
 
-patchesStrategicMerge:
-- "delete-job-broadcast.yaml"
-
 patches:
-- path: "patch-deployment-worker.yaml"
+- path: "patch-deployment-web.yaml"
   target:
     kind: "Deployment"
-    name: "worker-app"
-
-patchesJson6902:
-- path: "patch-deployment-worker-common-labels.yaml"
+    name: "app"
+- path: "patch-deployment-worker.yaml"
   target:
     kind: "Deployment"
     name: "worker-app"
+- path: "patch-service.yaml"
+  target:
+    kind: "Service"
+    name: "app"
+- path: "delete-job-broadcast.yaml"
+  target:
+    kind: "Job"
+    name: "app-broadcast"
I applied the complete version of the above against a production system without any issues or surprises, but I did test it on a throwaway cluster first.
You need to be careful about changing selectors on an existing deployment / pod because they are immutable. You’ll likely end up needing to delete your old deployment so a new one gets created with the new selector, which in turn will create new pods.
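For example, here’s a sketch of that replacement using this post’s worker deployment (expect a brief outage for the affected pods):
# Applying a changed selector in place fails with a "field is immutable" error,
# so delete the old deployment and recreate it with the new selector.
kubectl delete deployment hello-worker-app
kustomize build overlay/ | kubectl apply -f -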
The video below goes over an end-to-end example and runs `kustomize build` so we can see the before and after.
# Demo Video
Timestamps
- 1:12 – The old base configs
- 2:37 – The old overlay configs
- 3:05 – Refactoring commonLabels
- 5:20 – Testing things locally with the diff tool
- 6:37 – Refactoring patchesStrategicMerge
- 8:43 – Looking at patchesJson6902
- 9:21 – patchesJson6902 use case
- 12:43 – Refactoring patchesJson6902
- 16:00 – Careful, selectors are immutable
What was it like for you to perform this refactor? Let me know below.