Work tonight was skunked for the most humiliating reason: the HP ProDesk 600 G2 Mini machines I meant to use for NVMe OSDs in my Ceph cluster didn't come with M.2 screws, and they use a non-standard size, so none in my stash will fit.

I had to buy a tiny screw assortment for the first time since ... this century, I think.

[Image: Grafana dashboard screen capture (apologies, this image is too dense to describe well)]

I've been aiming more and more of my Swarm containers at my little Ceph cluster. The almost complete lack of strain defies my expectations. Yes, big bulk file moves can increase dropped-packet rates. But the only shortcoming I'm finding right now is that I went too cheap on the SSD size.
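For anyone wanting to watch for the same thing: the dropped-packet counters live in sysfs on Linux. A minimal sketch; `eth0` is a placeholder for whatever your NIC is called (`ip -br link` lists them):

```shell
# Hypothetical interface name -- substitute your own NIC.
IFACE=eth0
# Per-interface drop counters maintained by the kernel (Linux-only paths).
echo "rx_dropped: $(cat /sys/class/net/$IFACE/statistics/rx_dropped)"
echo "tx_dropped: $(cat /sys/class/net/$IFACE/statistics/tx_dropped)"
```

Run it before and after a bulk file move and compare the numbers.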

I've been cannibalizing my k8s cluster nodes to build the Swarm. For my homelab, Swarm > K8s.

I'm thinking I might start cannibalizing the control plane to add some NVMe OSDs to my cluster.


I spent a lot of hours in the homelab over the weekend.

What went well:

  • Migrated my Docker stack from a single-instance Swarm on an old Intel NUC to a 3-node Swarm cluster with shared storage.
  • Got my old Netgate SG-2440 firewall appliance running Debian Buster. Its long-term purpose in life is still waiting for me to figure out.

What didn't go well:

  • Stuff. Many hours were lost. But I don't think I want to host this in its current state.
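For reference, the single-instance-to-3-node Swarm move boils down to a few commands. A minimal sketch, with a made-up manager address and node names:

```shell
# On the first node (hypothetical address 192.168.1.10):
docker swarm init --advertise-addr 192.168.1.10
# This prints a `docker swarm join --token ...` command; run that on the
# other two nodes to add them as workers.

# Back on the manager: promote the others so all three hold raft quorum,
# then confirm the cluster membership.
docker node promote node2 node3
docker node ls
```

Existing services redeploy onto the new nodes as soon as their volumes live on the shared storage rather than a single host.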

I'm already using Pi 4s for the SSD storage class and it works fine. I might do the same for HDD, since the performance needs in that class are even lower.

I think what I'm particularly noodling on is the NVMe storage class, and the need for 10GbE to make good use of it.
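Ceph already tags each OSD with a device class (hdd/ssd/nvme), so an NVMe storage class is mostly a matter of a CRUSH rule. A sketch, with rule and pool names of my own choosing:

```shell
# Ceph auto-detects a device class per OSD; the CLASS column shows it.
ceph osd df tree

# A replicated CRUSH rule restricted to nvme-class OSDs,
# with host as the failure domain ("nvme-only" is an arbitrary name):
ceph osd crush rule create-replicated nvme-only default host nvme

# Point an existing pool (hypothetical name "fastpool") at that rule:
ceph osd pool set fastpool crush_rule nvme-only
```

Data in the pool then rebalances onto the NVMe OSDs only, which is exactly when the 10GbE question starts to matter.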


For anyone running Ceph in their homelab, I'm curious to hear about your setup if you have it blogged anywhere. I'm looking to scale mine out, and I'm considering small/cheap node options for repurposing the NVMe and HDD drives I have lying around.

One of the takeaways is that the Raspberry Pi definitely has a place in the homelab. Learning that it could be an effective storage server was surprising. But also that it can boot from an iSCSI LUN! Mind blown! I'm going to rack up and cluster more of them with nothing but a PoE HAT attached.
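For the curious, attaching an iSCSI LUN from Linux with open-iscsi looks roughly like this. The portal address and IQN below are made-up placeholders; actually *booting* a Pi from iSCSI additionally needs the initiator wired into the firmware/initramfs, which is beyond this sketch:

```shell
# Discover targets advertised by a hypothetical portal:
sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.20

# Log in to one of the discovered targets (IQN copied from the output above):
sudo iscsiadm -m node -T iqn.2004-04.com.example:storage.pi-boot \
    -p 192.168.1.20 --login

# The LUN now appears as a regular block device (e.g. /dev/sdb):
lsblk
```

From there it partitions, formats, and mounts like any local disk.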


The learnings lately have been great. They've opened up a lot of options for me: things I was curious about but never seriously tried before (Ceph), and things I used to do professionally that can help me out again (iSCSI).

Now I need to make some of this stuff "production ready" and build out more solutions.

OK, my tiny Raspberry Pi-based Ceph cluster is up and running. No clients yet. Right now there are 4x 480GB Crucial SSDs running as OSDs (one per Pi). I'll have 4-6 more SSD-class OSDs when the next batch of Pi 4s comes in (likely in June).

Ironically, I've got 4x 1TB NVMe drives left over from another project. More OSDs, maybe?
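If those NVMe drives do become OSDs, turning a blank device into one is a single ceph-volume call. A sketch, with a hypothetical device path:

```shell
# WARNING: this wipes the device. Hypothetical path -- check lsblk first.
sudo ceph-volume lvm create --data /dev/nvme0n1

# Confirm the new OSD joined, and watch data start to rebalance:
ceph osd tree
ceph -s
```

One call per drive per node, and the CRUSH map picks up the new capacity automatically.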

This is a YouTube channel I really enjoy (not my own channel). The new video published today is going to be one of my projects for game servers. I plan to modify the setup to run on my own cluster.

I was doing some work tonight, trying to get Ceph going (my first time) on Raspberry Pi OS. That... really didn't go well.

I thought I'd give Server a try, but a stern warning says: "The Raspberry Pi 4 is not supported."

Putting this aside for the night. Ceph will happen eventually here.

Apparently I should have used #introduction and not #introductions in my toot (see parent). But I'm into it, and I'm a proud member here. I'm pretty tech savvy and curious to maybe host some of the fediverse on my home Kubernetes cluster.

