
I’m one of those odd nuts that love kubernetes. I like it so much that it’s currently powering a lot of my personal stuff and hobby projects. So instead of the usual “k8s is bloated”, “k8s is overkill” or “why you don’t need k8s” posts, today let’s talk about why k8s is actually great for personal stuff, and why you should maybe also consider using it? :)

1. Why I love kubernetes for personal stuff

Let’s start with the why. Why would I use something as bloated and heavy as kubernetes for personal stuff? Isn’t it huge overkill? I could go on and on about why it’s great, but let’s take a look at the big reasons.

I also want to preface this by saying that small managed clusters can be really cheap. My main managed “cluster” (putting it in quotes because it only has 1 node) on digitalocean costs $12/month - competitive with a basic VPS at most other hosting providers.

Infra as code 👨‍💻

k8s, like terraform, allows me to specify my infrastructure as code. I write manifest files that specify how stuff is going to get run. If I want to make a change to my infra, all I need to do is update those manifests, apply them, and I’m done. Look at this snippet from a manifest that tells k8s that I want to run a webdav container:

      containers:
        - name: webdav
          image: bytemark/webdav
          ports:
            - containerPort: 80
          env:
            - name: "USERNAME"
            	value: "myuser"
            - name: "PASSWORD"
              valueFrom:
                secretKeyRef:
                  name: webdav-credentials
                  key: password

Because I have everything in code, I can set up my entire cluster from scratch on any provider I want. If I no longer like digitalocean, all I have to do is create a cluster somewhere else and do a kubectl apply. The new setup will be identical (or almost identical) to my previous one - it’s almost provider-agnostic (there are some provider-specific things like persistent volumes, but we’ll get to those later).

Applying manifests is also idempotent: I can re-apply the same files a dozen times without worrying about breaking anything. Resources that already exist are left alone, and if nothing has changed, the apply is simply a no-op.
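For example (manifests/ being whatever directory the yaml files live in):

  # Re-applying is safe - kubectl reports unchanged resources and leaves them alone
  kubectl apply -f manifests/
  # -> deployment.apps/webdav unchanged
  # -> persistentvolumeclaim/some-name unchanged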

Extremely easy to add storage or IP addresses 🔄

Managed kubernetes clusters (like the digitalocean one) are integrated with the provider’s entire cloud platform, so it’s very easy to do things like adding block storage or load balancers. How easy? Check this out:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: some-name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: do-block-storage

When applied, this manifest tells DigitalOcean (through the kubernetes controller) to provision a 1Gi block storage volume. If that 1Gi fills up at some point, I change it to 10Gi, re-apply, and DigitalOcean will automatically resize the volume for me - neat!
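The resize itself is nothing more than a re-apply (pvc.yaml being whatever file the claim above lives in):

  # Bump storage: 1Gi -> 10Gi in the file, then:
  kubectl apply -f pvc.yaml
  # Watch the new capacity show up once the resize is done
  kubectl get pvc some-name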

Even cooler, I can now use this storage volume in any of my containers that are running inside my cluster. All I need to do is add the volume and a mount point to the manifest, and suddenly ephemeral containers have a way to write persistent data:

      volumes:
        - name: some-name-volume
          persistentVolumeClaim:
            claimName: some-name
            readOnly: false
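The volumes entry makes the claim available to the pod; to actually see it from a container, there is also a matching volumeMounts entry on the container itself (the mountPath here is just an example path):

          volumeMounts:
            - name: some-name-volume
              mountPath: /data # where the volume shows up inside the container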

If I want to get rid of the volume, I do kubectl delete pvc some-name and it gets wiped. (There is a Retain reclaim policy if you don’t want the volume’s contents deleted along with the claim.)

This isn’t just storage though - the same thing happens with stuff like LoadBalancers and static IPs. Creating the right resource on kubernetes causes DigitalOcean to get a static IP + LoadBalancer ready for us, wait until it’s good to go, then assign it to the cluster.
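As a sketch, a Service of type LoadBalancer is all it takes to trigger that (the name and selector are made up to match the webdav example):

apiVersion: v1
kind: Service
metadata:
  name: webdav-lb
spec:
  type: LoadBalancer
  selector:
    app: webdav
  ports:
    - port: 80
      targetPort: 80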

It makes running stuff and keeping stuff running a breeze 🍃

K8s is very battle-tested. A lot of companies run on kubernetes, so it’s grown to be robust and fault-tolerant. And if something does end up not working or acting up, I can be sure that someone else has hit the same issue and I will find the answer in no time.

Besides running containers, k8s does things like keeping those containers running (if you want that). Let’s take a look at the webdav example from above, but conjure up some more yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webdav
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webdav
  template:
    metadata:
      labels:
        app: webdav
    spec:
      containers:
        - name: webdav
          image: bytemark/webdav
          ports:
            - containerPort: 80
          env:
            - name: "USERNAME"
            	value: "myuser"
            - name: "PASSWORD"
              valueFrom:
                secretKeyRef:
                  name: webdav-credentials
                  key: password

The webdav container is now in a Deployment with 1 replica. This means k8s will make sure there is always 1 webdav container running. You can even kubectl delete pod <webdav-xxxx> to destroy the webdav container, and kubernetes will just respawn a new one.

My server can crash, the container can crash - doesn’t matter, don’t care. In the end, kubernetes will jump in and just spin up a new container again.

And if I want more containers? All I have to do is change replicas: 1 to replicas: 2, and now I will always have 2 webdav containers. This makes scaling things up and down as easy as changing a yaml file and applying it.
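There is also a kubectl shortcut for one-off scaling, though the yaml file stays the source of truth:

  kubectl scale deployment webdav --replicas=2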

But there are other cool things that just work, like CronJobs! I can specify my CronJob yaml manifests, tell kubernetes when it should run those, and be sure they are getting run. No messing with crontab or some cloud scheduling thing - I can do it right from within k8s, together with the rest of my infra.
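A minimal sketch of what one looks like (the schedule and image are placeholders):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-job
spec:
  schedule: "0 3 * * *" # every night at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: job
              image: my-job-image # hypothetical image that does the actual work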

Easy scaling when it becomes necessary 📈

So we saw in the previous section that we can scale containers however we please. It gets even cooler if we deal with multiple nodes on the cluster. Say my hobby project is getting more popular - instead of a simple web server, I now have to add a database and maybe a queue server. Resources are getting tight, and stuff doesn’t run as nicely anymore.

What I can do is open up the DigitalOcean kubernetes admin, go to my node settings, and either:

  • Increase the node count of the cluster to 2
  • Destroy the current 1-node setup and replace it with a stronger node

Because k8s is managing the cluster, even if we destroy the node and add a new, stronger node in its place, it will still do exactly what it did before: Make sure those containers are running in the configuration we specified as soon as the new machine is up and running.

If we increase the cluster size from 1 to 2, kubernetes will look at available resources on the cluster, see that a new node is available, and balance those containers for us, by maybe moving the queue server to the new machine, or having one of those 2 webdav replicas there instead. (Of course, if we want less magic we can also tell it specifically what it should do and how it should utilize those nodes, like always having one replica on each node)
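“Always having one replica on each node” is, for example, a pod anti-affinity rule. A sketch for the webdav deployment from earlier (reusing its app: webdav label), placed in the pod template spec:

      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: webdav
              topologyKey: kubernetes.io/hostname # never two webdav pods on one node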

It goes the other way as well: we’re getting tight on money, so we decide to cut down on resources. Removing a node from the cluster will make k8s reshuffle our containers and consolidate them onto the remaining nodes, and THEN shut the node down, without downtime.
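That consolidation step is essentially what kubectl drain does, and you can also run it by hand before removing a node:

  # Gracefully evict everything from the node first
  kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data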

2. How I use kubernetes for my personal stuff

Now that you’ve heard the main reasons why I like kubernetes, let’s take a look at how I specifically am using it on a daily basis.

Let’s start with the provider - I mentioned that DigitalOcean kubernetes is my cloud of choice. I like them because the cost is predictable and the managed cluster is cheap. I always know what my bill will look like and there are no surprises. At $12/node for kubernetes I really have no complaints here. It’s great! This price doesn’t include a LoadBalancer/static IP, but for most stuff I don’t really need those.

k8s can do a lot, but the parts that I actually use are:

  • Deployments: Fancy way of saying “keep this many replicas running at all times”
  • Services: Fancy way of saying “I need a cluster-internal/external IP for these containers”
  • Ingress: A fancy router, like “route domain david.coffee to this pod”
  • CronJobs: Fancy way of saying “run this container on a schedule”

Exposing stuff for personal use only with tailscale

Most of my stuff doesn’t need to be public. It’s either a cronjob that does something and doesn’t need inbound connections, or it’s stuff that only needs to be reachable by me.

To make things reachable by me only, I am using tailscale as my overlay network of choice (other great options are ZeroTier and Slack’s Nebula). Tailscale utilizes wireguard tunnels to build a Layer 3 p2p network that most of the time Just Works™️.

[Image: mesh network diagram]

The cool thing is, I can embed a tailscale sidecar container into any kubernetes pod I want to access, and the pod becomes available on my tailscale network. All I have to do is add some yaml to a pod:

      containers:
        - name: webdav
          image: bytemark/webdav
          ports:
            - containerPort: 80
          env:
            - name: "USERNAME"
            	value: "myuser"
            - name: "PASSWORD"
              valueFrom:
                secretKeyRef:
                  name: webdav-credentials
                  key: password
        - name: ts-sidecar
          imagePullPolicy: Always
          image: "ghcr.io/tailscale/tailscale:latest"
          env:
            - name: TS_KUBE_SECRET
              value: "tailscale-webdav"
            - name: TS_USERSPACE
              value: "false"
            - name: TS_AUTH_KEY
              valueFrom:
                secretKeyRef:
                  name: tailscale-auth
                  key: TS_AUTH_KEY
          securityContext:
            capabilities:
              add:
                - NET_ADMIN

… and when my connection is established, I can just hit http://webdav-tailscale:80 and I have a fully private connection to my webdav container. No ingress or load balancer needed ✌️

A non-tailscale option would be to use port-forwarding into the pod directly:

  # Listen on port 8888 locally, forwarding to 80 in the pod
  kubectl port-forward pod/mypod 8888:80

Exposing public stuff without static IP + LoadBalancer with Nginx (the frugal option)

Because my cluster has only 1 node, I almost never need a proper load balancer or static IP.

My ingress controller of choice is nginx, and we can tweak it to run off a ClusterIP service instead of a full-blown load balancer:

helm install ingress-nginx ingress-nginx/ingress-nginx -f nginx-ingress.yml

…with nginx-ingress.yml containing:

controller:
  kind: DaemonSet
  daemonset:
    useHostPort: true
  dnsPolicy: ClusterFirstWithHostNet
  hostNetwork: true
  service:
    type: ClusterIP
  resources:
    requests:
      cpu: 10m
rbac:
  create: true

A DaemonSet is something we haven’t talked about yet: it’s similar to a Deployment, with the exception that it makes sure the thing is running on all nodes of the cluster. So if the cluster has 1 node, there’ll be one of those. If there are 3 nodes, we’ll have 3 nginx controllers.

hostNetwork: true runs the controller off the host network directly, so we can point traffic at <machineip>:80 and let nginx handle the routing.

Then, once we have that controller running, we can add actual Ingress resources that route based on domain name, like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
spec:
  ingressClassName: nginx
  defaultBackend:
    service:
      name: ingress-nginx-controller
      port:
        number: 80
  rules:
    - host: somethingsomething.david.coffee
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: somethingsomething-service
                port:
                  number: 80

To make sure the DNS records point at the node’s current IP, we can hook up a cronjob that updates the cloudflare DNS records automatically, for maximum money saving.
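A sketch of what that cronjob could look like - the schedule, zone/record IDs, secret name and domain are all placeholders for whatever your cloudflare setup uses:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: cloudflare-dns-updater
spec:
  schedule: "*/15 * * * *" # every 15 minutes
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: update-dns
              image: curlimages/curl
              envFrom:
                - secretRef:
                    name: cloudflare-credentials # provides CF_TOKEN, ZONE_ID, RECORD_ID
              command:
                - /bin/sh
                - -c
                - |
                  IP=$(curl -s https://ifconfig.me)
                  curl -s -X PUT "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
                    -H "Authorization: Bearer $CF_TOKEN" \
                    -H "Content-Type: application/json" \
                    --data "{\"type\":\"A\",\"name\":\"somethingsomething.david.coffee\",\"content\":\"$IP\"}"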

Of course if you have the cash and want to scale up, just get a proper static IP with LoadBalancer. It’s only $10 extra.

Happy whale-ing

As you can tell by this long post, I am a fan of kubernetes and use it for my personal stuff a lot. If you’ve read this far, congratulations! You made it. Let me know on twitter if you think this kind of post is useful, or if there are parts you’d like to know more about.

Happy whale-ing! 🐳
