#homelab


Looks like I might need new HDDs:

```
Apr 16 22:22:00 cust00 smartd[1017]: Device: /dev/sda [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 69 to 70
Apr 16 22:22:00 cust00 smartd[1017]: Device: /dev/sdb [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 166 to 171
Apr 16 22:22:00 cust00 smartd[1017]: Device: /dev/sdc [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 176 to 181
```
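A rough sketch of how lines like these could be scraped automatically, for anyone tempted to script it. The regex and the "flag a jump of more than a few points" threshold are my own assumptions, not anything smartd itself provides:

```typescript
// Sketch: scan smartd "SMART Usage Attribute" lines (e.g. piped in from
// `journalctl -u smartd -o cat`) and flag drives whose attribute 194 value
// jumped by more than a few points between reports.
const LINE =
  /Device: (\S+) \[SAT\], SMART Usage Attribute: (\d+) (\S+) changed from (\d+) to (\d+)/;

function flagJumps(log: string, maxDelta = 3): string[] {
  const warnings: string[] = [];
  for (const line of log.split("\n")) {
    const m = LINE.exec(line);
    if (!m) continue;
    const [, device, attrId, attrName, from, to] = m;
    if (Number(attrId) === 194 && Number(to) - Number(from) > maxDelta) {
      warnings.push(`${device}: ${attrName} jumped from ${from} to ${to}`);
    }
  }
  return warnings;
}

// One of the lines from above:
console.log(
  flagJumps(
    "Apr 16 22:22:00 cust00 smartd[1017]: Device: /dev/sdb [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 166 to 171",
  ),
);
// -> [ '/dev/sdb: Temperature_Celsius jumped from 166 to 171' ]
```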

New blog post: blog.mei-home.net/posts/k8s-mi

I like to think that many of my blog posts are mildly educational, perhaps even helping someone in a similar situation.

This blog post is the exception. It is a cautionary tale from start to finish. I also imagine that it might be the kind of post someone finds on page 14 of Google at 3 am and names their firstborn after me.

ln --help · Nomad to k8s, Part 25: Control Plane Migration. Migrating my control plane to my Pi 4 hosts.

Just spun up an instance of Obsidian LiveSync. The setup was and is pretty terrible, but once it works, it works.

Also spun up an instance of Standard Notes. I was excited to try it, but not only are there barely any docs on self-hosting the web app, some features are _still_ locked behind a $40/yr paywall on a self-hosted instance. I get it, the devs gotta make money. But putting Markdown behind a subscription is wild.


After I get the media suite and #HomeAssistant migrated, I'll be able to pave hypnotoad and have two beefy bare-metal worker nodes. Hypnotoad has an i7-7700T (4c8t) and 64 GB of memory, the iGPU is no slouch, and it has an aftermarket 2.5 Gbit NIC. I have another 128 GB SSD and a 2 TB NVMe waiting. And of course nibbler has an i7-12700 (12c20t) with 128 GB of memory and an Nvidia P4 GPU.

Today's #Kubernetes update:

It's my birthday so I took the day off and got a bunch of stuff ticked off the list!

- Put the 128 GB SSD into nibbler and installed #Talos

- Installed Longhorn

- Got Jellyfin working on nibbler with the iGPU, a Longhorn volume for config, and an NFS mount for media (rough volume layout sketched at the end of this post)

- Accidentally deleted the worker VM on hypnotoad and witnessed things not breaking

- Spun up a control plane VM on morbo (TrueNAS SCALE)

- Intentionally deleted the control plane VM on hypnotoad

I'm super stoked that things just didn't break when I accidentally deleted a worker. Kind of validates the whole project.

Next steps are to migrate more stuff. The *arrs suite is the next candidate, followed by Home Assistant. Also I need to figure out Longhorn backups.
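For the Jellyfin item above, this is roughly the volume split I mean, written out with the model types from @kubernetes/client-node. The claim name, NFS server, and paths here are placeholders, not the real values:

```typescript
// Rough shape of the Jellyfin volumes: config on a Longhorn-backed PVC,
// media read-only over NFS. All names and paths are placeholders.
import { V1Volume, V1VolumeMount } from "@kubernetes/client-node";

const volumes: V1Volume[] = [
  { name: "config", persistentVolumeClaim: { claimName: "jellyfin-config" } },
  {
    name: "media",
    nfs: { server: "nas.example.lan", path: "/mnt/tank/media", readOnly: true },
  },
];

const volumeMounts: V1VolumeMount[] = [
  { name: "config", mountPath: "/config" },
  { name: "media", mountPath: "/media", readOnly: true },
];

console.log(JSON.stringify({ volumes, volumeMounts }, null, 2));
```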

I have a large pile of old laptop 2.5" HDDs (not SSDs) in varying GB sizes. I think I might do something mad with them and build a 20-drive (or more) RAID NAS.

The only thing I really need is a 20-port (or more) SATA PCIe card... Does such a thing exist?


Fortunately there looks to be an easy option: use annotations instead of labels. I’m not using the label selector functionality, so it doesn’t really make a difference.

But first I need to spin up a test cluster and recreate the issue. If it is the operator, a simple restart should trigger removing the labels...
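The change itself should be tiny; something like this merge patch (the key and value are placeholders, not the real ones), which could be applied with `kubectl patch --type merge`:

```typescript
// Sketch of swapping a custom label for a custom annotation on a deployment.
// Merge-patch semantics: an explicit null removes the existing label key.
const mergePatch = {
  metadata: {
    labels: { "scaledown.example.io/managed": null },
    annotations: { "scaledown.example.io/managed": "true" },
  },
};

// e.g. kubectl -n rook-ceph patch deploy <name> --type merge -p "$(cat patch.json)"
console.log(JSON.stringify(mergePatch));
```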

Phew. Okay. That was an expensive evening. But now I've got 3x Raspberry Pi 5 8 GB on the way as a replacement for my control plane Pi 4s, plus the NVMe SSDs to hopefully get past my I/O issues.

In addition, I got a 16 GB Pi 5, also with an SSD, for some future plans.

One of the reasons I didn't want to wait much longer is the possible supply chain issues coming our way. The last time that happened, the Pis were unobtainium.


One observation from today’s test that I need to figure out:

The Rook operator removed custom labels from the ceph-exporter and csi-provisioner deployments when it was restarted. The annotations were untouched. I need to work out whether this is by design or not...

Would it matter if these #rook #ceph deployments are not scaled down?
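A quick way to check after bouncing the operator is to just read the deployments back and see which custom keys survived. A sketch with @kubernetes/client-node, assuming the older positional call style and with placeholder names:

```typescript
// Read a deployment back and report which custom labels/annotations (by key
// prefix) are still present after an operator restart. Names are placeholders.
import * as k8s from "@kubernetes/client-node";

const kc = new k8s.KubeConfig();
kc.loadFromDefault();
const apps = kc.makeApiClient(k8s.AppsV1Api);

async function checkCustomMetadata(name: string, namespace: string, prefix: string) {
  const { body } = await apps.readNamespacedDeployment(name, namespace);
  const labels = Object.keys(body.metadata?.labels ?? {}).filter((k) => k.startsWith(prefix));
  const annotations = Object.keys(body.metadata?.annotations ?? {}).filter((k) => k.startsWith(prefix));
  console.log(`${name}: custom labels: ${labels.length ? labels.join(", ") : "(gone)"}`);
  console.log(`${name}: custom annotations: ${annotations.length ? annotations.join(", ") : "(gone)"}`);
}

checkCustomMetadata("rook-ceph-exporter", "rook-ceph", "scaledown.example.io/").catch(console.error);
```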


Success!!! Cluster scaled down, including Rook, and then back up again 😎

I’m still amazed that the release yesterday worked the first time. It wouldn’t have been possible without #nock. My end-to-end scale-down/up tests have nearly 100 Kubernetes API endpoint calls mocked up for testing.
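For anyone wondering what one of those mocks looks like, here is an illustrative nock interceptor for a single deployment's scale subresource. The host, namespace, and names are made up, and the real suite mocks close to a hundred of these:

```typescript
// Mock a GET of a deployment's scale, then a PATCH that sets replicas to 0.
// Host, namespace, and deployment name are placeholders.
import nock from "nock";

const api = nock("https://kube-api.example.lan:6443")
  .get("/apis/apps/v1/namespaces/media/deployments/jellyfin/scale")
  .reply(200, {
    kind: "Scale",
    apiVersion: "autoscaling/v1",
    metadata: { name: "jellyfin", namespace: "media" },
    spec: { replicas: 1 },
    status: { replicas: 1 },
  })
  // The scale-down should PATCH replicas to 0; match on the request body.
  .patch(
    "/apis/apps/v1/namespaces/media/deployments/jellyfin/scale",
    (body: any) => body?.spec?.replicas === 0,
  )
  .reply(200, {
    kind: "Scale",
    apiVersion: "autoscaling/v1",
    metadata: { name: "jellyfin", namespace: "media" },
    spec: { replicas: 0 },
    status: { replicas: 1 },
  });

// After the code under test runs, assert every mocked call was consumed:
// expect(api.isDone()).toBe(true);
```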