Hello, HomeLab
Welcome to The HomeLab Chronicles. This is the space where I document everything I learn while building, breaking, and rebuilding my home infrastructure - from bare metal Kubernetes clusters to GitOps pipelines and zero-trust networking.
Why This Blog?
I've found that the best way to learn is to build, and the best way to retain what you've learned is to write it down. My lab is focused on cloud-native technologies, Kubernetes, and security - the same tools and patterns used in production environments, just running in my home.
Every post here is a record of:
- Problems I've encountered and how I solved them
- Infrastructure decisions and the reasoning behind them
- Tools and technologies I'm experimenting with
- Mistakes I've made (so you don't have to)
What to Expect
You'll find posts covering the technologies that power this lab:
- Kubernetes - kubeadm clusters, Talos OS, multi-cluster management
- Networking - Cilium CNI, eBPF, Gateway API, Cloudflare Tunnels
- GitOps - ArgoCD with Autopilot, declarative configuration, automated sync and self-heal
- Security - Immutable infrastructure with Talos Linux, zero-trust access, secret management with Doppler and External Secrets Operator
- Infrastructure - VMware ESXi virtualization, bare metal clusters on Mac minis
The Setup
Here's what I'm working with:
HP Z620 Workstation - The Hypervisor
The backbone of the lab. An HP Z620 running VMware ESXi with 16 CPU cores (32 threads) and 128 GB of RAM. It hosts three major workloads:
- 3-node kubeadm cluster - Production services including ArgoCD and External Secrets Operator
- 2-node Talos OS cluster - Development and testing environment for validating changes before production
- Pi-hole VM - Network-wide DNS-based ad blocking
Gallifrey - The Bare Metal Cluster
It's bigger on the inside
Two used 2018 Intel Mac minis running Talos Linux, an immutable, API-driven operating system with no SSH access:
- Citadel (Control Plane) - Intel i3 · 16GB RAM · 128GB SSD
- Arcadia (Worker Node) - Intel i3 · 16GB RAM · 128GB SSD
This cluster runs all production workloads, exposed via Cloudflare Tunnels for external access.
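Because Talos has no SSH and no shell, every node is described by a declarative machine config applied over its API. Here's a trimmed sketch of what a control-plane config looks like; the hostname, endpoint, and disk are placeholders rather than my real values, and a real config (generated by talosctl gen config) also carries cluster secrets that are omitted here:

```yaml
# Trimmed Talos machine config sketch (v1alpha1) for a control-plane node.
# Hostname, install disk, and endpoint are illustrative placeholders.
version: v1alpha1
machine:
  type: controlplane
  network:
    hostname: citadel                    # placeholder node hostname
  install:
    disk: /dev/sda                       # disk that receives the immutable Talos image
cluster:
  clusterName: gallifrey
  controlPlane:
    endpoint: https://10.0.0.10:6443     # placeholder cluster API endpoint
  network:
    cni:
      name: none                         # built-in CNI disabled so Cilium can be installed separately
```

The point of the format is that a node's entire state lives in a file you can version and re-apply, which fits the GitOps workflow described below.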
The Networking Stack
Every cluster runs Cilium as the CNI, replacing kube-proxy with eBPF-based networking. Ingress is handled through the Kubernetes Gateway API with HTTPRoute resources. External access uses Cloudflare Tunnels - zero-trust, no inbound port forwarding required.
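To make the Gateway API piece concrete, here's a hedged sketch of the kind of Gateway/HTTPRoute pair Cilium consumes. The resource names, namespace, and hostname are placeholders, not my actual manifests:

```yaml
# Sketch of a Cilium-backed Gateway plus an HTTPRoute for one service.
# Names, namespace, and hostname are illustrative placeholders.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: homelab-gateway
  namespace: default
spec:
  gatewayClassName: cilium           # Cilium provides a GatewayClass when its Gateway API support is enabled
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      allowedRoutes:
        namespaces:
          from: All                  # let routes in any namespace attach to this listener
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: blog
  namespace: default
spec:
  parentRefs:
    - name: homelab-gateway          # attach this route to the Gateway above
  hostnames:
    - blog.example.com               # placeholder hostname
  rules:
    - backendRefs:
        - name: blog                 # Service backing the route
          port: 80
```

Since Cilium implements the Gateway API natively, there's no separate ingress controller to run alongside the CNI.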
GitOps with ArgoCD
Everything is managed through ArgoCD with Git as the single source of truth. A central management cluster controls deployments across all three clusters with a 3-minute sync interval, automated pruning, and self-healing enabled.
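On the declarative side, each workload is an ArgoCD Application whose sync policy turns on pruning and self-heal. A sketch of that shape, with the repo URL, path, and destination as placeholders; the 3-minute cadence is ArgoCD's default reconciliation interval rather than something set per Application:

```yaml
# Sketch of an ArgoCD Application with automated sync, pruning, and self-heal.
# Repo URL, path, and destination namespace are illustrative placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: pihole
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/homelab-gitops   # placeholder Git repo
    targetRevision: main
    path: apps/pihole
  destination:
    server: https://kubernetes.default.svc               # in-cluster; remote clusters use their API URL
    namespace: pihole
  syncPolicy:
    automated:
      prune: true        # delete resources that were removed from Git
      selfHeal: true     # revert drift made outside of Git
    syncOptions:
      - CreateNamespace=true
```

With prune and selfHeal on, anything changed by hand gets reverted on the next sync, which is exactly the "if it's not in Git, it doesn't exist" rule below.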
Lab Philosophy
This isn't just a playground - it's built to simulate real-world constraints:
- Production patterns - Resource limits, multi-cluster management, and proper secret handling mirror what you'd see at work
- GitOps first - No imperative changes. If it's not in Git, it doesn't exist
- Security by design - Immutable OS, zero-trust networking, scoped RBAC, and automated secret rotation (a sketch of that secret pattern follows this list)
- Intentional friction - Limited storage (128GB SSDs) and compute force careful resource management and build appreciation for HA design
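The secret-handling pattern referenced above is an ExternalSecret that pulls values from Doppler through a secret store and keeps the resulting Kubernetes Secret refreshed on an interval. A sketch under assumed names; the ClusterSecretStore ("doppler") and the key names are placeholders:

```yaml
# Sketch of an ExternalSecret syncing a value from Doppler via External Secrets Operator.
# Store name, namespace, and key names are illustrative placeholders.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: blog-db-credentials
  namespace: blog
spec:
  refreshInterval: 1h                # re-sync from the provider every hour
  secretStoreRef:
    name: doppler                    # ClusterSecretStore configured with a Doppler service token
    kind: ClusterSecretStore
  target:
    name: blog-db-credentials        # Kubernetes Secret created and kept up to date
    creationPolicy: Owner
  data:
    - secretKey: DATABASE_URL
      remoteRef:
        key: DATABASE_URL            # key as it exists in the Doppler config
```

Rotating a value in Doppler then propagates to the cluster on the next refresh without anyone touching kubectl.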
Key Lessons So Far
Running this lab has taught me things no tutorial could:
- Old state causes new failures - always clean up before redeploying
- Labels matter more than names in Kubernetes
- Networking issues often masquerade as application bugs
- Fewer operators means fewer ghosts in the system
- Resource constraints mirror real production environments better than unlimited cloud credits ever will
Stay Tuned
I'll be documenting the build-out of each component in detail - from bootstrapping Talos clusters to wiring up the Cilium Gateway API to managing secrets across environments.
Do Not Quit On Yourself - Progress beats perfection.