My Homelab: TyilNET
Personal
In my last post, I noted that I was pretty happy with the current state of my homelab setup. In this post, I want to expand on what I have, and how I have it configured.
The main part of the setup is a 15U server rack, with a 4U machine for simple storage, a 4U machine as a dedicated database server, and 1U holding 3 NUCs for compute. Lastly, two Raspberry Pi machines act as ingress machines. Apart from the server rack, I also use a VPS at a hosting company to act as an external load balancer. While I currently have a static IPv4 address at home, there is no guarantee that will stay true in the future. The VPS can act as the entrypoint for external connections with a static IP address.
The configuration of all these machines is done with Ansible, and the files for it are available through git. For new machines, I would still pick an OS and install it manually. Afterwards, I can add the new machine to the Ansible inventory, assign it to some groups, and let Ansible handle the rest.
One of the main components Ansible installs on non-workstation machines is k3s, joining the machine into my existing k3s cluster, possibly with the desired taints and labels. Once it has joined the cluster, all the applications hosted in k3s are managed through Terraform, or more accurately, OpenTofu.
The Terraform codebase is split up into 4 modules: crds, base, auth, and apps. This split is done for logical reasons. While Terraform does do dependency resolution between resources, I've found it doesn't always work perfectly unless I manually specify depends_on, for instance for resources requiring a CRD to be installed, or when I want to ensure Kyverno policies are definitely applied first.
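As a sketch of what that looks like in practice (the resource names here are purely illustrative, not taken from my actual codebase), an explicit depends_on on a Helm release might look like this:

```hcl
# Hypothetical application chart that must not be applied before its CRDs
# exist and before a Kyverno policy is in place.
resource "helm_release" "some_app" {
  name       = "some-app"
  repository = "https://charts.example.com"
  chart      = "some-app"
  namespace  = "apps"

  depends_on = [
    helm_release.some_app_crds,         # CRDs must exist first
    kubernetes_manifest.kyverno_policy, # policy must be enforced first
  ]
}
```

Writing these out by hand works, but it gets tedious fast, which is part of why the module split exists at all.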
As such, the first module to get applied is the crds module. This way I don't have to worry about whether the CRDs are available for the resources I want to apply. Additionally, by splitting out the CRDs I am less likely to have Terraform accidentally remove or replace a Helm chart and destroy custom resources as a side-effect.
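One way such a crds module can apply CRDs without any Helm chart owning them is to feed the manifests straight into the Kubernetes provider. This is only a sketch; the file path and resource name are made up:

```hcl
# Apply a project's CRD manifest directly, so deleting or replacing a Helm
# chart later cannot take the custom resources down with it.
resource "kubernetes_manifest" "example_crd" {
  manifest = yamldecode(file("${path.module}/crds/example.crd.yaml"))
}
```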
Next, the base module gets applied. This module is responsible for all basic
services a Kubernetes cluster should provide. Some examples for this are
monitoring, PVC provisioning, and ingress controllers. Without this, a cluster
is either completely unusable or simply very hard to troubleshoot if something
is up.
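To give an idea of the kind of thing the base module contains, here is what deploying an ingress controller from its upstream chart could look like; the version pinning and values are placeholders:

```hcl
# One example of a "base" service: the ingress-nginx controller from its
# upstream Helm repository.
resource "helm_release" "ingress_nginx" {
  name             = "ingress-nginx"
  repository       = "https://kubernetes.github.io/ingress-nginx"
  chart            = "ingress-nginx"
  namespace        = "ingress-nginx"
  create_namespace = true
}
```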
Once the entire basic cluster is usable for real-world deployments, the auth module gets a turn. This specifically handles deployments used for centralized authentication. I prefer authentication to be centralized, and once you've had a taste, you too will understand how nice it is. Almost all applications I host for personal, friend, or family usage require authentication, so by splitting it up like this I avoid a lot of boilerplate depends_on definitions.
Finally, the apps module gets deployed. This contains the actual user-facing applications. It is a long list of applications that constantly gets new entries added and removed as I try out new things, or phase out old deployments that I didn't like or use as much as originally planned.
To make deploying applications easier, I’ve also created a few Terraform modules that I re-use. They are used throughout my Terraform codebase, and designed specifically to greatly reduce the amount of boilerplate code I need to write to deploy a single container in Kubernetes.
The main module I make use of is
kubernetes-container,
and as the name implies, it is used to run a container on Kubernetes. It has a
few variables to customize some aspects, but most of them come with defaults
that are sensible for my setup. It was inspired by many simple applications not
shipping their own Helm chart, which I can't fault them for. I also did not
want to make Helm charts for every little container I want to try out, as
(basic) Helm charts are the definition of boilerplate. HCL is nicer to write,
and easier to re-use, than YAML, so it seemed like the cleanest solution.
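An invocation of the module could look something like the following. The exact variable names depend on the module's interface, so treat every one of them as an assumption for illustration:

```hcl
# Hypothetical use of the kubernetes-container module to deploy a single
# container with mostly default settings.
module "whoami" {
  source = "../modules/kubernetes-container"

  name      = "whoami"
  image     = "traefik/whoami:latest"
  namespace = "apps"
  port      = 80
}
```

Compared to a full Helm chart with templates, values, and metadata, a handful of module arguments is all the boilerplate that remains.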
Another module I make good use of is
kubernetes-hostpath-claim.
For some applications, having the data locally is simply the easiest way to get
it performant. I’d like to have more/faster network storage, but at this point
in time it is excessively expensive. Luckily, a hostpath works Fine(tm) for some
of these use-cases. The Terraform module is quite simple as well: it creates a
PersistentVolume and an associated PersistentVolumeClaim resource, and outputs
the name of the claim to be used by a Helm deployment, or the
kubernetes-container module.
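Wiring the two modules together might look like this sketch. The variable and output names (claim_name, volumes, and so on) are assumptions about the modules' interfaces, not their documented API:

```hcl
# Hypothetical hostpath-backed claim for local data.
module "media_storage" {
  source = "../modules/kubernetes-hostpath-claim"

  name      = "media"
  namespace = "apps"
  path      = "/mnt/media"
  size      = "500Gi"
}

# Hypothetical container that mounts the claim by its generated name.
module "media_app" {
  source = "../modules/kubernetes-container"

  name      = "media-app"
  image     = "example/media-app:latest"
  namespace = "apps"

  volumes = {
    "/data" = module.media_storage.claim_name
  }
}
```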
There’s a similar module that creates a PersistentVolume and
PersistentVolumeClaim for a CIFS share, named
kubernetes-cifs-claim.
It will obviously take some different variables, but it outputs the name of the
PersistentVolumeClaim to be used elsewhere just the same.
With all these components, my homelab is easily expandable, and I am sure I’ll be spending a lot more time tinkering on it, and making more components to make my life easier!