This Week in Self-Hosted (18 April 2025)
The latest from @homeassistant, software updates and launches, a spotlight on #Papra -- a new document management platform, and more in this week's self-hosted recap!

At first I thought it was a late April Fools, but now I very much want one.
A rack-mountable beer fridge from 45drives!
I will have a 40U rack in the near future and not much to fill it with, so maaaaybe?
What do you guys do with your left-over (public) IPv4 addresses? And no, I mean that seriously :)
Looks like I might need new HDDs
```
Apr 16 22:22:00 cust00 smartd[1017]: Device: /dev/sda [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 69 to 70
Apr 16 22:22:00 cust00 smartd[1017]: Device: /dev/sdb [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 166 to 171
Apr 16 22:22:00 cust00 smartd[1017]: Device: /dev/sdc [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 176 to 181
```
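Readings of 171 and 181 "degrees" are almost certainly not real Celsius values; some drives encode extra data in attribute 194's reported value, so the numbers are worth sanity-checking before panicking. If you want to watch these lines programmatically, here is a small sketch that assumes the exact smartd line format shown above (`parse_temp_change` is a hypothetical helper, not part of smartmontools):

```python
import re

# Matches smartd temperature-change lines like the ones above
# (adjust the pattern if your syslog prefix differs).
PATTERN = re.compile(
    r"Device: (?P<dev>\S+) \[SAT\], SMART Usage Attribute: "
    r"194 Temperature_Celsius changed from (?P<old>\d+) to (?P<new>\d+)"
)

def parse_temp_change(line):
    """Return (device, old_value, new_value), or None if the line doesn't match."""
    m = PATTERN.search(line)
    if not m:
        return None
    return m.group("dev"), int(m.group("old")), int(m.group("new"))

line = ("Apr 16 22:22:00 cust00 smartd[1017]: Device: /dev/sdb [SAT], "
        "SMART Usage Attribute: 194 Temperature_Celsius changed from 166 to 171")
print(parse_temp_change(line))  # ('/dev/sdb', 166, 171)
```

From there you could flag any device whose parsed value exceeds a plausible temperature range and inspect it with `smartctl` directly.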
New blog post: https://blog.mei-home.net/posts/k8s-migration-25-controller-migration/
I like to think that many of my blog posts are mildly educational, perhaps even helping someone in a similar situation.
This blog post is the exception. It is a cautionary tale from start to finish. I also imagine it might be the kind of post someone finds on page 14 of Google at 3 am and names their firstborn after me.
Finally! Wireguard from laptop and smartphone back home to my NAS! I can now use navidrome and my music collection on the go. Turning off all the lights I forgot while leaving is a bonus! #homelab #wireguard
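For anyone wanting a similar setup, a minimal WireGuard client config for the laptop or phone side might look like this (a sketch only: all keys, addresses, the hostname, and the port are placeholders, not the poster's actual setup):

```ini
[Interface]
# Placeholder key; generate your own with `wg genkey`
PrivateKey = <client-private-key>
Address = 10.0.0.2/32
DNS = 10.0.0.1

[Peer]
# The NAS/home endpoint's public key (placeholder)
PublicKey = <nas-public-key>
# Route only the home subnets through the tunnel
AllowedIPs = 192.168.1.0/24, 10.0.0.0/24
Endpoint = home.example.com:51820
# Helps keep NAT mappings alive for roaming clients
PersistentKeepalive = 25
```

With `AllowedIPs` limited to the home subnets, only traffic for Navidrome, Home Assistant, etc. goes through the tunnel while everything else uses the local connection.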
Just spun up an instance of obsidian live sync. The setup was and is pretty terrible, but once it works, it works.
Also spun up an instance of Standard Notes. I was excited to try it, but not only are there barely any docs on self-hosting the web app, some features are _still_ locked behind a $40/yr paywall on a self-hosted instance. I get it, the devs gotta make money. But putting Markdown behind a subscription is wild.
After I get the media suite and #HomeAssistant migrated I'll be able to pave hypnotoad and have two beefy bare metal worker nodes. Hypnotoad has an i7-7700T (4c/8t) and 64 GB memory and the iGPU is no slouch, plus an aftermarket 2.5Gb NIC. I have another 128 GB SSD and a 2 TB NVMe waiting. And of course nibbler has an i7-12700 (12c/20t) with 128 GB memory and an Nvidia P4 GPU.
Today's #Kubernetes update:
It's my birthday so I took the day off and got a bunch of stuff ticked off the list!
- Put the 128 GB SSD into nibbler and installed #Talos
- Installed Longhorn
- Got Jellyfin working on nibbler with the iGPU and a Longhorn volume for config and NFS mount for media
- Accidentally deleted the worker VM on hypnotoad and witnessed things not breaking
- Spun up a control plane VM on morbo (TrueNAS SCALE)
- Intentionally deleted the control plane VM on hypnotoad
I'm super stoked that things just didn't break when I accidentally deleted a worker. Kind of validates the whole project.
Next steps are to migrate more stuff. The *arrs suite is the next candidate, followed by Home Assistant. Also I need to figure out Longhorn backups.
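For the Longhorn backup step, one possible starting point is a `RecurringJob` resource. This is a sketch against the `longhorn.io/v1beta2` API; the schedule, group, and retention counts are placeholder values, and it assumes a backup target (e.g. an NFS or S3 URL) is already configured in Longhorn's settings:

```yaml
apiVersion: longhorn.io/v1beta2
kind: RecurringJob
metadata:
  name: nightly-backup
  namespace: longhorn-system
spec:
  cron: "0 3 * * *"   # every night at 03:00
  task: backup         # "snapshot" is the other common task type
  groups:
    - default          # applies to volumes in the default group
  retain: 7            # keep the last 7 backups
  concurrency: 2       # how many volumes to back up in parallel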
Nope. Annotations disappear too.
All it takes is an operator restart
Guess I’ll have to see if the labels can be added from the helm chart
I have a large pile of old 2.5" laptop HDDs (NOT SSDs) in varying sizes. I think I might do something mad with them and build a 20-drive (or more) RAID NAS.
The only thing I really need is a 20-port (or more) SATA PCIe card... Does such a thing exist?
Fortunately there looks to be an easy option: use annotations instead of labels. I'm not using the label selector functionality, so it doesn't really make a difference.
But first I need to spin up a test cluster and recreate the issue. If it is the operator, a simple restart would trigger removing the labels…
Well, looking at a rather large fix and CVE list with the new version of #Portainer 2.29.0, guess it's time to skip #PatchSaturday and get to it asap :D.
Phew. Okay. That was an expensive evening. But now I've got 3x Raspberry Pi 5 8 GB as a replacement for my control plane Pi 4 on the way, plus the NVMe SSDs to hopefully get past my I/O issues.
In addition, I also got a 16 GB Pi 5, also with an SSD, for some future plans.
One of the reasons I didn't want to wait much longer is the possibility of supply chain issues coming our way. And last time around, the Pis were unobtainium.
Well, after a lot of fiddling, the possibility of a busted #RaspberryPi and my #orchestration software deciding to be an absolute hole, I’ve finally got my #homelab back to a workable state…
Still a bit of work to do but at least I have access to my daily driver services again!
One observation from today’s test that I need to figure out:
The rook operator removed custom labels from the ceph-exporter and csi-provisioner deployments when it was restarted. The annotations were untouched. Need to work out if this is by design or not…
Would it matter if these #rook #ceph deployments are not scaled down?
Success!!! Cluster scaled down including rook and then back up again
I’m still amazed that the release yesterday worked the first time. It wouldn’t have been possible without #nock. My end-to-end scale down/up tests have nearly 100 Kubernetes API endpoint calls mocked for testing