fogbound.net




Wed, 28 Jan 2026

Network?

— SjG @ 4:43 pm

I clicked on a link to a web page I host on a Rocky Linux 9.7 VPS, and my browser stalled. That seemed strange, so I ssh-ed in to see if the web server was running correctly. ssh timed out.

So I used the VPS provider’s web-based console to log in, and found the networking was all dead. That was unexpected. After trying to remember the service name in this flavor/version, I executed systemctl restart NetworkManager.service and it was back … until I did a dnf update. Then the machine locked up hard, and for a while I couldn’t even log in through the web-based console.

Tech support helped me track down the problem.

The VPS “only” had 2G of RAM, and that’s not enough on a modern Linux system to run a web server, database, and package manager. The dnf package manager subsystem was consuming all available memory when computing dependencies, and being killed by the oom-killer, leaving the system in weird states.
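
If you want to chase the same symptom before paying for more RAM, the kernel log makes the diagnosis pretty clear, and a temporary swap file can get a small VPS through a dnf transaction. Roughly like this (illustrative, not what I actually did):

# confirm it really was the oom-killer eating dnf
free -h
journalctl -k | grep -i -E 'killed process|out of memory'

# stopgap: a 2G swap file so dnf can finish a transaction
dd if=/dev/zero of=/swapfile bs=1M count=2048
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile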

I ended up resizing the VPS to a mighty 4G RAM. It’s a few bucks more a month.

Makes me think of my first Linux box, which ran Debian Hamm in what I seem to recall was 6M of memory.


Sat, 24 Jan 2026

Firewall

— SjG @ 5:13 pm

We were in the middle of watching a streaming TV show / SAG Award nominee, when the spinny wheel started up and the “TV” computer popped up a “network disconnected” message. I figured our new Frontier Fiber had disconnected, but the lights all looked good. Next to it, though, the Netgate firewall appliance had an angry blinking red LED. I plugged in a serial cable. Not good!

Uncorrectable I/O failure? Hm. This turns out to be an actual physical hardware error.

I’d bought this appliance before the start of the pandemic, so it was about six and a half years old. Shortly after I bought it, there was all sorts of drama with the company behind pfSense (not linking, but you can search) which made me wonder how trustworthy the whole thing was. And just a few months ago, when I ditched the cable modem for a fiber connection, the device had corrupted its entire configuration during a routine software update, and stupid-stupid-stupid … I hadn’t kept a recent backup of the config.xml. So I’d just been through the whole reconfigure process, which requires a hidden trick for VLANs on that specific device that’s not required on other hardware configurations, and which kept me befuddled for many hours more than I’m willing to admit.
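
The other lesson, obviously, is to keep copies of config.xml somewhere off the appliance. Something like this in a cron job would have saved me hours (assuming ssh access to the firewall is enabled; the address here is a placeholder, and /cf/conf/config.xml is where pfSense keeps its running config, if memory serves):

scp admin@192.168.1.1:/cf/conf/config.xml \
  ~/backups/pfsense-config-$(date +%F).xml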

The storage on the SG-1100 is soldered on, and it’s beyond my ambition to try to replace it. Instead, I ordered a new piece of hardware.

On this new device, I figured I’d try the OPNsense fork of the firewall software, on the assumption that it would be mostly familiar to me. Well, that’s partly true. The web-based configuration is reorganized. Some things require fewer steps, but they weren’t immediately obvious to me. Dammit Jim, I’m a code monkey, not a network engineer.

Anyway, I spent today restoring my network. It’s not all that complicated. I have a LAN, an isolated guest network, an isolated “internet of things” network, and VPN access from outside, all with special sets of rules to allow or prevent access, depending.

It was pretty smooth. A few gotchas tripped me up, though, whether reasonably or not:

  • ddclient. I use this for pushing my dynamic IP address to the primary DNS provider for a domain, so I can have name-based access to the home network. It kept failing. Eventually, I had to log in to the firewall and manually edit the configuration file to put quotes around my password (see the sketch after this list). There’s probably some vulnerability in there if it’s passing unescaped strings to the command line — although you already need credentials to get to the interface that would allow it.
  • Firewall rules in OPNsense have a “direction,” which I don’t remember from pfSense. So when I want to firewall off connections from the Guest network to the LAN, for example, I have to put an “inbound” rule on the Guest network blocking the traffic. “Inbound” means the traffic is blocked as it arrives at the firewall (I had foolishly thought it should be “outbound” toward the LAN network). Since rules are attached to an interface and can specify both source and destination networks, I’m not sure why a direction is needed at all.
  • I created and deleted a VPN instance early in the process. There’s still a tab in the firewall rules area, and automatically generated firewall rules for it, even though the instance has been deleted and has no interfaces. I don’t think it’s a big deal, but it’s confusing.
  • So. many. DHCP. options. There’s the default “Dnsmasq DNS & DHCP”, there’s also “ISC DHCPv4” and “ISC DHCPv6”, and “Kea DHCP.” For my little network, I could do everything in the default, just creating separate DHCP ranges for each interface.
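
For the record, the ddclient fix amounted to one quoted line in its generated config file. Roughly this, with placeholder values (the file should live somewhere under /usr/local/etc on the firewall; check your install):

# excerpt from ddclient.conf; the quotes around the password were the fix
protocol=dyndns2
use=web
login=myuser
password='p@ss,word:with!punctuation'
home.example.com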

Setup wasn’t so bad. I’ll doubtless gripe here if things don’t work the way I want them to.


Mon, 5 Jan 2026

Making a Static Copy of a Blogspot/Blogger.com Site

— SjG @ 7:59 pm

Fifteen years ago, I worked on a blog that was hosted via Blogger.com (aka blogspot.com aka Google). We had a custom domain name for the blog and everything. It was pretty cool.

Now, many years later, the domain name is finally set to expire. We haven’t touched that blog in eleven years, but it still seems a shame for the content to just vanish. So I thought about making a static copy to host somewhere.

Google makes cloning one of these blogs difficult. They do, however, give you a backup/download capability. I went through re-activating the Google account that was tied to the blog, providing all sorts of identifying information and getting verification emails and texts. That done, I initiated the process to back up the blog, and shortly thereafter received an email that my download was ready. However, now Google is absolutely certain I’m not who I say I am (even with the verification emails and texts), and their security locked me out of the account. Also, from what I read on the subject, even if I could download it, the backup is an XML bundle that only works for reimporting into their blog system anyway.

So I thought I’d use the good old standby wget to build a static copy. I tried:

wget --mirror -w 2 -p --html-extension --convert-links --restrict-file-names=windows http://www.myurl.com

Yes, this site was so old that we didn’t use SSL… Still, Google stores the assets off in a bunch of other subdomains, and I was unable to come up with the correct syntax to allow wget to follow those. I’d get the pages, but everything still linked to the Google servers for the assets. That wasn’t going to work.
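
In hindsight, wget can probably be coaxed into fetching the off-site assets with --span-hosts plus a --domains whitelist, much like the httrack filter set that eventually worked below. Something along these lines might have done it (untested against this site):

wget --mirror -w 2 -p --html-extension --convert-links \
  --restrict-file-names=windows \
  --span-hosts --domains=myurl.com,bp.blogspot.com,googleusercontent.com \
  http://www.myurl.com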

So next I used the old, powerful F/OSS friend, httrack. My first attempt was as follows:

httrack "http://www.myurl.com/" \
  -O "myurl-offline" \
  -%v \
  --robots=0 \
  "+www.myurl.com/*" \
  "+*.blogspot.com/*" \
  "+*.bp.blogspot.com/*" \
  "+*.googleusercontent.com/*" \
  "+*.jpg +*.jpeg +*.png +*.gif +*.webp" \
  "+*.css +*.js" \
  "+*.mp4 +*.webm" \
  "-*/search?updated-max=*"

This worked — but a little too well. This blog was part of a community of sites, many of which were hosted elsewhere on blogspot. The cloning was slow. Then I noticed it had used up 2G of disk space, whereupon I discovered that I was happily making static copies of twelve other blogs from that community, and possibly more to come! I interrupted the process and tried again, removing the blanket include for all blogspot sites:

httrack "http://www.myurlcom/" \
    -O "myurl-offline" \
    -%v \
    --robots=0 \
    "+www.myurl.com/*" \
    "+*.bp.blogspot.com/*" \
    "+*.googleusercontent.com/*" \
    "+*.jpg +*.jpeg +*.png +*.gif +*.webp" \
    "+*.css +*.js" \
    "+*.mp4 +*.webm" \
    "-*/search?updated-max=*"

This was successful!

I now have a static version of the site. It’s not perfect; some references, like the user profile links, still point at blogspot. But if I want to post the static site somewhere, I can, and it will be usable enough that people can still read the postings and articles.
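
If the leftover blogspot references ever bother me, a pass of sed over the mirrored HTML should quiet them down. Something like this, where the profile-URL pattern and the replacement are purely illustrative:

find myurl-offline -name '*.html' -exec \
  sed -i 's|https\?://www\.blogger\.com/profile/[0-9]*|#|g' {} +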


Sun, 7 Dec 2025

WordPress Gallery

— SjG @ 2:35 pm

Ugh, so the WordPress built-in gallery content type seems broken again. I’m not sure it’s worth bothering about. If I fix it locally, it’ll just break again on some future update.


Holiday memories

— SjG @ 2:31 pm

We used to have a lot of physical devices on our network*. Servers, firewalls, file-shares, staging servers, development machines… all sitting on the network with their hard drives endlessly spinning, spinning, spinning!

System administrators are fond of referring to platter-based hard drives as “spinning rust,” partly as a reference to the ferromagnetic iron oxide that stores the actual data, but also to remind us that it’s always decaying and corroding. Over time, drives start generating errors or becoming unreliable. When we had physical devices that exhibited issues, we’d yank the hard drive and replace it. Over the years, we’d accumulated a pile of a dozen or more drives that were unreliable or bad but still contained data.

The data is not especially sensitive, but there could be material that belongs to other parties or could be abused: meeting notes, source code, sample data files, maybe cached passwords or other credentials. It’s not worth just hoping it would be OK to release to the world. So the chore is to render this data unreadable.

Pulling apart spinning platter hard disks is humbling. These are incredible little devices, with incredibly precise machining and elegant engineering. Going through a pile that spans a decade, you can actually see the improvements in technology: new vibration damping systems, different head-parking strategies, traps for dust, and more. I see these parts, and am inspired by the craftsmanship that goes into them.

So in the spirit of admiration, I offer these (hopefully unreadable) holiday memories.

* Now, of course, we have few physical devices but all those same services are implemented on “the cloud.” This means that someone else has physical devices somewhere, with their hard drives (or SSDs) endlessly spinning, spinning, spinning (or trimming, trimming, trimming).