k3d and Custom Load Balancer
I find k3s to be more reliable than k3d. However, there are some local development and home lab cases where I find k3d to be ideal. Some long-term issues with k3d prevent me from using it for any non-trivial scenarios:
- https://github.com/k3d-io/k3d/issues/926
- https://github.com/k3d-io/k3d/issues/1221
- https://github.com/k3d-io/k3d/discussions/1382
That said, this is the formula I use to allocate a k3d (k3s on Docker) cluster locally.
k3d \
  cluster create onprem-demo-1 \
  --volume /mnt/my-apps:/mnt/my-apps \
  --api-port 6443 \
  --port "80:80@loadbalancer" \
  --port "443:443@loadbalancer" \
  --k3s-arg="--disable=traefik@server:0" \
  --registry-create registry:0.0.0.0:5000
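Once the create command finishes, a quick sanity check might look like the following. This is a sketch assuming the cluster name above; k3d merges the new context into your kubeconfig automatically.

```shell
# Confirm the cluster and its k3d-managed registry container are up.
k3d cluster list
docker ps --filter "name=registry"

# Verify the node is ready via the freshly merged kubeconfig context.
kubectl get nodes
```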
Previously, I ran my own container registry on the same host machine, and pointed k3d at the registry by specifying an alias with --registry-config.
...
  --registry-config <(printf '
mirrors:
  "registry:5000":
    endpoint:
      - http://host.k3d.internal:5000
')
Unfortunately, due to the GitHub issues listed above, I found using a separate host-managed registry to be unreliable across reboots. I would need to stop and then start the cluster, and starting would hang with the message

Injecting records for hostAliases (incl. host.k3d.internal) and for 3 network members into CoreDNS configmap...

and in some cases failed to set the DNS host aliases.
I switched to using a k3d-managed registry. I deploy images to the k3d-managed registry from other machines by marking it as an insecure registry in each machine's /etc/docker/daemon.json file, like so.
{
  "insecure-registries": ["my-server.local:5000"]
}
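With that in place, pushing from another machine is an ordinary docker push against the registry. A sketch, where my-app is a placeholder image name and my-server.local matches the daemon.json entry above:

```shell
# Restart the Docker daemon so it picks up the insecure-registries change.
sudo systemctl restart docker

# Tag a locally built image for the k3d-managed registry and push it.
docker tag my-app:latest my-server.local:5000/my-app:latest
docker push my-server.local:5000/my-app:latest
```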
I disable Traefik as a load balancer with my k3d cluster create ... command above. In its place I run a simple Caddy deployment that handles the same behavior I need from Traefik, including automatic TLS certificate provisioning for HTTPS via Let's Encrypt.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: caddy
  labels:
    name: caddy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: caddy
  template:
    metadata:
      labels:
        app: caddy
    spec:
      containers:
        - name: app
          image: registry:5000/caddy:latest
          ports:
            - containerPort: 80
            - containerPort: 443
          volumeMounts:
            - name: data-files-volume
              mountPath: /data
      volumes:
        - name: data-files-volume
          hostPath:
            path: /mnt/my-apps/caddy
---
apiVersion: v1
kind: Service
metadata:
  name: load-balancer
spec:
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
  type: LoadBalancer
  selector:
    app: caddy
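A sketch of rolling this out, assuming the manifests above are saved as caddy.yaml:

```shell
kubectl apply -f caddy.yaml

# The k3d load balancer forwards host ports 80/443 (from the --port flags
# in the create command) to this Service, so the sites answer on the host.
kubectl get svc load-balancer
```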
My Caddyfile config looks something like this.
my-site.my-domain.com:443 {
    # This routes traffic to an app running directly on the host.
    reverse_proxy host.k3d.internal.:8000
    tls my-email@willhaley.com
}

my-other-site.my-domain.com:443 {
    # This routes traffic to an app running within k3d.
    reverse_proxy some-app-service.my-namespace.svc.cluster.local.
    tls my-email@willhaley.com
}
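Note that the Deployment above only mounts /data (Caddy's certificate storage), so the Caddyfile itself has to reach the container some other way. It could be baked into the custom registry:5000/caddy:latest image, or delivered as a ConfigMap mounted at /etc/caddy. The latter is a sketch, not necessarily the exact setup:

```shell
# Create a ConfigMap from the Caddyfile in the current directory;
# it can then be mounted at /etc/caddy in the Deployment.
kubectl create configmap caddy-config --from-file=Caddyfile
```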
Whenever I want to destroy my cluster and clean up all traces of it, I run the following command.
k3d cluster delete onprem-demo-1
Caveats:
- I found myself deleting and re-creating my cluster quite a bit while figuring things out.
- The API to edit a cluster lacks the ability to add additional host volumes later, but this can be worked around with bind mounts on the host and restarting the k3d cluster. YMMV.
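The bind-mount workaround from that last caveat can be sketched like this, where /srv/new-data is a placeholder for the directory to expose. It rides in through /mnt/my-apps, which the create command already mounts into the cluster:

```shell
# Bind the new directory under the path already volume-mounted into k3d.
sudo mkdir -p /mnt/my-apps/new-data
sudo mount --bind /srv/new-data /mnt/my-apps/new-data

# Restart the cluster so the nodes see the new contents.
k3d cluster stop onprem-demo-1
k3d cluster start onprem-demo-1
```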