Still Alive
So, it's been a while. My website is finally online again after retooling all my servers. A bit of a nightmare, but it'll make things so much easier going forward. Hopefully the new system is more resilient and easier to maintain.
A bit of backstory: I run my own servers as a homelab. I prefer that to cloud providers as it's more affordable long term and offers a ton of flexibility. My homelab consisted of an HPE ML350 Gen9, an HPE DL380 Gen9, and a DL360 Gen9.
Before anyone says anything, I am aware of the issues that plague HP rackmount servers, but these were bought second hand for incredibly cheap, and with the power of a modded iLO4, the fans can be quieted down significantly.
As for the OS, I use Unraid. The flexibility it offers when it comes to disk arrays is really nice, and the stability of Slackware is excellent. Unfortunately, the Docker management system is a pain: it heavily pushes a more "user-friendly" approach that amounts to an app store.
Finally, for networking, everything was running through Cloudflare Tunnels. This is a very useful service that lets you expose a web app through a reverse proxy without opening any ports on your network. Unfortunately, it has one limitation that is crippling for me: the 100 MB upload limit.
So we come to a few months ago, when I decided that enough was enough: I needed to redesign the whole system. The first step was to decommission the ML350. It's a great machine that supports tons of PCIe devices, but it's handicapped by only supporting 8 SFF drives, and the cages to extend capacity are outrageously expensive on the second-hand market.
The second step was to upgrade the DL380. Mine supports 12 LFF drives, which leaves a ton of room for upgrades, so I ordered two refurbished 12 TB drives. For the time being, this server runs all the services that require big storage, such as media servers, backup systems, etc.
Now we get to the DL360. This server has become the gateway and runs many of the smaller services that don't require terabytes of storage. It runs a copy of Caddy that handles all proxying to the various other services, but this introduced a hiccup: communication between the two machines. To address it, both servers were set up as a Docker Swarm. This gives access to overlay networks, which solve the communication problem: instead of having to bind ports or use a macvlan to allow inter-container communication, I just put everything on an overlay network.
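For anyone wanting to replicate the "Swarm for networking only" trick, it can be sketched roughly as below. The IP address, token placeholder, and network name `mesh` are illustrative, not taken from my actual setup.

```shell
# On the gateway node (DL360): initialize the swarm as a manager.
# (Address is an example; use the node's LAN IP.)
docker swarm init --advertise-addr 192.168.1.10

# On the storage node (DL380): join with the token that `swarm init` printed.
docker swarm join --token <worker-token> 192.168.1.10:2377

# Back on the manager: create the overlay network. The --attachable flag is
# the key detail: it lets ordinary `docker run` / compose containers join
# the network, even though no services are scheduled through Swarm itself.
docker network create --driver overlay --attachable mesh
```

Once the network exists, containers on either host can reach each other by container name, no published ports or macvlan required.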
With that, the final issue was Docker management. Unraid's Dockerman doesn't support compose at all, and I considered Portainer, but I never liked the UI. Enter Arcane. Arcane is a Docker management system written in Go, and it met my needs perfectly: no abstraction, just a compose file and an environment file. Arcane doesn't support Docker Swarm yet, but that isn't an issue, as I'm not running any containers via Swarm; Swarm is only being used for the network.
It took a while, but all of this was set up. I have no doubt that there are better ways to do it, but this works for my use case. It's easy to work with. Deploying new services is as simple as writing a compose file. Updating containers is as easy as clicking a button.
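As a sketch of what deploying a new service looks like here (the service name, image, and variable are hypothetical, and `mesh` stands in for whatever the overlay network is actually called): a compose file that attaches the container to the existing overlay network, plus a `.env` file beside it supplying the variables.

```yaml
# compose.yaml -- a hypothetical service joining the shared overlay network
services:
  whoami:
    image: traefik/whoami:latest
    environment:
      - WHOAMI_NAME=${INSTANCE_NAME}  # interpolated from the .env file
    networks:
      - mesh

networks:
  mesh:
    external: true  # created once with `docker network create`; not managed here
```

Because the new container sits on the same overlay network as the gateway, Caddy can proxy to it by container name (something like `reverse_proxy whoami:80` in the Caddyfile) without the service publishing any ports.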
In the future I'd like to build a large SAN and have a cluster of energy-efficient servers doing the heavy lifting, but with the current cost of storage and memory being what it is, that's very much out of the budget. Once prices do come down, who knows.