Facebook is Censoring Stories about Climate Change and Illegal Raid in Marion, Kansas

It is, sadly, not entirely surprising that Facebook is censoring articles critical of Meta.

The Kansas Reflector published an article about Meta censoring environmental articles about climate change — deeming them “too controversial”.

Facebook then censored the article about Facebook censorship, and then after an independent site published a copy of the climate change article, Facebook censored it too.

The CNN story says Facebook apologized, called it a mistake, and said it was fixing it.

Color me skeptical, because today I saw this:

Yes, that’s right: today, April 6, I got a notification that they removed a post from August 12. The notification was dated April 4, but only showed up for me today.

I wonder why my post from August 12 was fine for nearly 8 months, and then all of a sudden, when the same website runs an article critical of Facebook, my 8-month-old post is a problem. Hmm.

Riiiiiight. Cybersecurity.

This isn’t even the first time they’ve done this to me.

On September 11, 2021, they removed my post about the social network Mastodon (click that link for screenshot). A post that, incidentally, had been made 10 months prior to being removed.

While they ultimately reversed themselves, I subsequently wrote Facebook’s Blocking Decisions Are Deliberate — Including Their Censorship of Mastodon.

That this same pattern has played out a second time — again with something that is a very slight challenge to Facebook — seems to validate my conclusion. Facebook lets all sorts of hateful garbage infest their site, but anything about climate change — or their own censorship — gets removed, and this pattern persists for years.

There’s a reason I prefer Mastodon these days. You can find me there as @jgoerzen@floss.social.

So. I’ve written this blog post. And then I’m going to post it to Facebook. Let’s see if they try to censor me for a third time. Bring it, Facebook.

The xz Issue Isn’t About Open Source

You’ve probably heard of the recent backdoor in xz. There have been a lot of takes on this, most of them boiling down to some version of:

The problem here is with Open Source Software.

I want to say that not only is that view so myopic it verges on incorrect, but it also blinds us to more serious problems.

Now, I don’t pretend that there are no problems in the FLOSS community. There have been various pieces written about what this issue says about the FLOSS community (usually without actionable solutions). I’m not here to say those pieces are wrong. Just that there’s a bigger picture.

So with this xz issue, it may well be a state actor (aka “spy”) that added this malicious code to xz. We also know that proprietary software and systems can be vulnerable. For instance, a Twitter whistleblower revealed that Twitter employed Indian and Chinese spies, some knowingly. A recent report pointed to “preventable” security lapses at Microsoft. According to the Wikipedia article on the SolarWinds attack, it was facilitated by various kinds of carelessness, including passwords being posted to Github and weak default passwords. SolarWinds directly distributed malware-infested updates, encouraged customers to disable anti-malware tools when installing SolarWinds products, and so forth.

It would be naive indeed to assume that there aren’t black hat actors among the legions of programmers employed by companies that outsource work to low-cost countries — some of which have challenges with bribery.

So, given all this, we can’t really say the problem is Open Source. Maybe it’s broader:

The problem here is with software.

Maybe that inches us closer, but is it really accurate? We have all heard of Boeing’s recent issues, which seem to have root causes in corporate carelessness, cost-cutting, and outsourcing. That sounds rather similar to the SolarWinds issue, doesn’t it?

Well then, the problem is capitalism.

Maybe it has a role to play, but isn’t it a little too easy to just say “capitalism” and throw up our hands helplessly, just as some do with FLOSS at the start of this article? After all, capitalism has also brought us plenty of very high-quality products over the years. When we can point to successful, non-careless products — and I own some of them (for instance, my Framework laptop) — we clearly haven’t reached the root cause yet.

And besides, what would you replace it with? All the major alternatives that have been tried have even stronger downsides. Maybe you replace it with “better regulated capitalism”, but that’s still capitalism.

Then the problem must be with consumers.

As this argument would go, it’s consumers’ buying patterns that drive problems. Buyers — individual and corporate — seek flashy features and low cost, prizing those over quality and security.

No doubt this is true in a lot of cases. Maybe greed or status-conscious societies foster it: Temu promises people they can “shop like a billionaire”, then unloads cheap junk on them, which “all but guarantees that shipments from Temu containing products made with forced labor are entering the United States on a regular basis”.

But consumers are also people, and some fraction of them are quite capable of writing fantastic software, and in fact, do so.

So what we need is some way to seize control. Some way to do what is right, despite the pressures of consumers or corporations.

Ah yes, dear reader, you have been slogging through all these paragraphs and now realize I have been leading you to this:

Then the solution is Open Source.

Indeed. Faults and all, FLOSS is the most successful movement I know where people are bringing us back to the commons: working and volunteering for the common good, unleashing a thousand creative variants on a theme, iterating in every direction imaginable. FLOSS is a vital part of everything from $30 Raspberry Pis to space missions. It is bringing education and communication to impoverished parts of the world. It lets everyone write and release software. And, unlike the SolarWinds and Twitter issues, it exposes both clever solutions and security flaws to the world.

If an authentication process in Windows got slower, we would all shrug and mutter “Microsoft” under our breath. Because, really, what else can we do? We have no agency with Windows.

If an authentication process in Linux gets slower, anybody that’s interested — anybody at all — can dive in and ask “why” and trace it down to root causes.

Some look at this and say “FLOSS is responsible for this mess.” I look at it and say, “this would be so much worse if it wasn’t FLOSS” — and experience backs me up on this.

FLOSS doesn’t prevent security issues itself.

What it does do is give capabilities to us all. The ability to investigate. The ability to fix. Yes, even the ability to break — and its cousin, the power to learn.

And, most rewarding, the ability to contribute.

Live Migrating from Raspberry Pi OS bullseye to Debian bookworm

I’ve been getting annoyed with Raspberry Pi OS (Raspbian) for years now. It’s a fork of Debian, but manages to omit some of the most useful things. So I’ve decided to migrate all of my Pis to run pure Debian. These are my reasons:

  1. Raspberry Pi OS has, for years now, specified that there is no upgrade path. That is, to get to a newer major release, it’s a reinstall. While I have sometimes worked around this, an upgrade path is even more important than usual for a device that is frequently installed in hard-to-reach locations. It’s common for me to upgrade machines for a decade or more across Debian releases, and there’s no reason it should be so much more difficult with Raspbian.
  2. As I noted in Consider Security First, the security situation for Raspberry Pi OS isn’t as good as it is with Debian.
  3. Raspbian lags behind Debian – often by 6 months or more for major releases, and days or weeks for bug fixes and security patches.
  4. Raspbian has no direct backports support, though Raspberry Pi 3 and above can use Debian’s backports (per my instructions in Installing Debian Backports on Raspberry Pi).
  5. Raspbian uses a custom kernel without initramfs support.

It turns out it is actually possible to do an in-place migration from Raspberry Pi OS bullseye to Debian bookworm. Here I will describe how. Even if you don’t have a Raspberry Pi, this might still be instructive about how Raspbian and Debian packages work.

WARNINGS

Before continuing, back up your system. This process isn’t for the neophyte and it is entirely possible to mess up your boot device to the point that you have to do a fresh install to get your Pi to boot. This isn’t a supported process at all.

Architecture Confusion

Debian has three ARM-based architectures:

  • armel, for the lowest-end 32-bit ARM devices without hardware floating point support
  • armhf, for the higher-end 32-bit ARM devices with hardware float (hence “hf”)
  • arm64, for 64-bit ARM devices (which all have hardware float)

Although the Raspberry Pi 0 and 1 do support hardware float, they lack support for other CPU features that Debian’s armhf architecture assumes. Therefore, the Raspberry Pi 0 and 1 could only run Debian’s armel architecture.

Raspberry Pi 3 and above are capable of running 64-bit, and can run both armhf and arm64.

Prior to the release of the Raspberry Pi 5 / Raspbian bookworm, Raspbian only shipped the armhf architecture. Well, it was an architecture they called armhf, but it was different from Debian’s armhf in that everything was recompiled to work with the more limited set of features on the earlier Raspberry Pi boards. It was really somewhere between Debian’s armel and armhf archs. You could run Debian armel on those, but it would run more slowly, due to doing floating point calculations without hardware support. Debian’s raspi FAQ goes into this a bit.

What I am going to describe here is going from Raspbian armhf to Debian armhf with a 64-bit kernel. Therefore, it will only work with Raspberry Pi 3 and above. It may theoretically be possible to take a Raspberry Pi 2 to Debian armhf with a 32-bit kernel, but I haven’t tried this and it may be more difficult. I have seen conflicting information on whether armhf really works on a Pi 2. (If you do try it on a Pi 2, ignore everything about arm64 and 64-bit kernels below, and just go with the linux-image-armmp-lpae kernel per the ARMMP page)

There is another wrinkle: Debian doesn’t support running 32-bit ARM kernels on 64-bit ARM CPUs, though it does support running a 32-bit userland on them. So we will wind up with a system with kernel packages from arm64 and everything else from armhf. This is a perfectly valid configuration: arm64 CPUs – like x86_64 ones – can natively execute both the 32-bit and 64-bit instructions.
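Once the migration below is complete, you can sanity-check this split with standard tools (the comments note what I would expect to see on such a system):

dpkg --print-architecture          # armhf, the primary userland architecture
dpkg --print-foreign-architectures # arm64, once it is added below
uname -m                           # aarch64, under the 64-bit kernel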

(It is theoretically possible to crossgrade a system from 32-bit to 64-bit userland, but that felt like a rather heavy lift for dubious benefit on a Pi; nevertheless, if you want to make this process even more complicated, refer to the CrossGrading page.)

Prerequisites and Limitations

In addition to the need for a Raspberry Pi 3 or above in order for this to work, there are a few other things to mention.

If you are using the GPIO features of the Pi, I don’t know if those work with Debian.

I think Raspberry Pi OS modified the desktop environment more than other components. All of my Pis are headless, so I don’t know if this process will work if you use a desktop environment.

I am assuming you are booting from a MicroSD card as is typical in the Raspberry Pi world. The Pi’s firmware looks for a FAT partition (MBR type 0x0c) and looks within it for boot information. Depending on how long ago you first installed an OS on your Pi, your /boot may be too small for Debian. Use df -h /boot to see how big it is. I recommend 200MB at minimum. If your /boot is smaller than that, stop now (or use some other system to shrink your root filesystem and rearrange your partitions; I’ve done this, but it’s outside the scope of this article.)

You need to have stable power. Once you begin this process, your Pi will mostly be left in a non-bootable state until you finish. (You… did make a backup, right?)

Basic idea

The basic idea here is that since bookworm has almost entirely newer packages than bullseye, we can “just” switch over to it and let the Debian packages replace the Raspbian ones as they are upgraded. Well, it’s not quite that easy, but that’s the main idea.

Preparation

First, make a backup. Even an image of your MicroSD card might be nice. OK, I think I’ve said that enough now.

It would be a good idea to have an HDMI cable (with the appropriate size of connector for your particular Pi board) and an HDMI display handy so you can troubleshoot any bootup issues with a console.

Preparation: access

The Raspberry Pi OS by default sets up a user named pi that can use sudo to gain root without a password. I think this is an insecure practice, but assuming you haven’t changed it, you will need to ensure it still works once you move to Debian. Raspberry Pi OS had a patch in their sudo package to enable it, and that will be removed when Debian’s sudo package is installed. So, put this in /etc/sudoers.d/010_picompat:

pi ALL=(ALL) NOPASSWD: ALL

Also, there may be no password set for the root account. It would be a good idea to set one; it makes it easier to log in at the console. Use the passwd command as root to do so.

Preparation: bluetooth

Debian doesn’t correctly identify the Bluetooth hardware address. You can save it off to a file by running hcitool dev > /root/bluetooth-from-raspbian.txt. I don’t use Bluetooth, but this should let you develop a script to bring it up properly.

Preparation: Debian archive keyring

You will next need to install Debian’s archive keyring so that apt can authenticate packages from Debian. Go to the bookworm download page for debian-archive-keyring and copy the URL for one of the files, then download it on the Pi. For instance:

wget http://http.us.debian.org/debian/pool/main/d/debian-archive-keyring/debian-archive-keyring_2023.3+deb12u1_all.deb

Use sha256sum to verify the checksum of the downloaded file, comparing it to the package page on the Debian site.
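For instance, with the file downloaded above:

sha256sum debian-archive-keyring_2023.3+deb12u1_all.deb

The hash it prints should exactly match the SHA256 shown on the Debian package page.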

Now, you’ll install it with:

dpkg -i debian-archive-keyring_2023.3+deb12u1_all.deb

Package first steps

From here on, we are making modifications to the system that can leave it in a non-bootable state.

Examine /etc/apt/sources.list and all the files in /etc/apt/sources.list.d. Most likely you will want to delete or comment out all lines in all files there. Replace them with something like:

deb http://deb.debian.org/debian/ bookworm main non-free-firmware contrib non-free
deb http://security.debian.org/debian-security bookworm-security main non-free-firmware contrib non-free
deb https://deb.debian.org/debian bookworm-backports main non-free-firmware contrib non-free

(you might leave off contrib and non-free depending on your needs)

Now, we’re going to tell it that we’ll support arm64 packages:

dpkg --add-architecture arm64

And finally, download the bookworm package lists:

apt-get update

If there are any errors from that command, fix them and don’t proceed until you have a clean run of apt-get update.

Moving /boot to /boot/firmware

The boot FAT partition I mentioned above is mounted at /boot by Raspberry Pi OS, but Debian’s scripts assume it will be at /boot/firmware. We need to fix this. First:

umount /boot
mkdir /boot/firmware

Now, edit /etc/fstab and change the reference to /boot to /boot/firmware. Then:

mount -v /boot/firmware
cd /boot/firmware
mv -vi * ..

This mounts the filesystem at the new location, and moves all its contents back to where apt believes it should be. Debian’s packages will populate /boot/firmware later.
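For reference, after the edit, the fstab line that used to mount /boot should look something like this (the PARTUUID here is hypothetical; keep whatever device or PARTUUID reference yours already has):

PARTUUID=abcdef00-01  /boot/firmware  vfat  defaults  0  2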

Installing the first packages

Now we start by installing the first of the needed packages. Eventually we will wind up with roughly the same set Debian uses.

apt-get install linux-image-arm64
apt-get install firmware-brcm80211=20230210-5
apt-get install raspi-firmware

If you get errors relating to firmware-brcm80211 from any commands, run that install firmware-brcm80211 command and then proceed. There are a few packages that Raspbian marked as newer than the version in bookworm (whether or not they really are), and that’s one of them.
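You can see this version skew for yourself; apt-cache policy shows the installed version alongside what each configured repository offers:

apt-cache policy firmware-brcm80211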

Configuring the bootloader

We need to configure a few things in /etc/default/raspi-firmware before proceeding. Edit that file.

First, uncomment (or add) a line like this:

KERNEL_ARCH="arm64"

Next, in /boot/cmdline.txt you can find your old Raspbian boot command line. It will say something like:

root=PARTUUID=...

Save off the bit starting with PARTUUID. Back in /etc/default/raspi-firmware, set a line like this:

ROOTPART=PARTUUID=abcdef00

(substituting your real value for abcdef00).

This is necessary because the microSD card device name often changes from /dev/mmcblk0 to /dev/mmcblk1 when switching to Debian’s kernel. raspi-firmware will encode the current device name in /boot/firmware/cmdline.txt by default, which will be wrong once you boot into Debian’s kernel. The PARTUUID approach lets it work regardless of the device name.
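If you’d rather read the PARTUUID from the partition itself instead of from cmdline.txt, blkid will print it (this assumes your root filesystem is on the usual second MicroSD partition):

blkid /dev/mmcblk0p2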

Purging the Raspbian kernel

Run:

dpkg --purge raspberrypi-kernel

Upgrading the system

At this point, we are going to run the procedure beginning at section 4.4.3 of the Debian release notes. Generally, you will do:

apt-get -u upgrade
apt full-upgrade

Fix any errors at each step before proceeding to the next. Now, to remove some cruft, run:

apt-get --purge autoremove

Inspect the list to make sure nothing important is going to be removed.

Removing Raspbian cruft

You can list some of the cruft with:

apt list '~o'

And remove it with:

apt purge '~o'

I also don’t run Bluetooth, and it seemed to sometimes hang on boot because I didn’t bother to fix it, so I did:

apt-get --purge remove bluez

Installing some packages

This makes sure some basic Debian infrastructure is available:

apt-get install wpasupplicant parted dosfstools wireless-tools iw alsa-tools
apt-get --purge autoremove

Installing firmware

Now run:

apt-get install firmware-linux

Resolving firmware package version issues

If it gives an error about the installed version of a package, you may need to force it to the bookworm version. For me, this often happened with firmware-atheros, firmware-libertas, and firmware-realtek.

Here’s how to resolve it, with firmware-realtek as an example:

  1. Go to https://packages.debian.org/PACKAGENAME – for instance, https://packages.debian.org/firmware-realtek. Note the version number in bookworm – in this case, 20230210-5.

  2. Now, you will force the installation of that package at that version:

    apt-get install firmware-realtek=20230210-5
    
  3. Repeat with every conflicting package until done.

  4. Rerun apt-get install firmware-linux and make sure it runs cleanly.

Also, in the end you should be able to:

apt-get install firmware-atheros firmware-libertas firmware-realtek firmware-linux

Dealing with other Raspbian packages

The Debian release notes discuss removing non-Debian packages. There will still be a few of those. Run:

apt list '?narrow(?installed, ?not(?origin(Debian)))'

Deal with them; mostly you will need to force the installation of a bookworm version using the procedure in the section Resolving firmware package version issues above (even if it’s not for a firmware package). For non-firmware packages, you might possibly want to add --mark-auto to your apt-get install command line to allow the package to be autoremoved later if the things depending on it go away.

If you aren’t going to use Bluetooth, I recommend apt-get --purge remove bluez as well. Sometimes it can hang at boot if you don’t fix it up as described above.

Set up networking

We’ll be switching to the Debian method of networking, so we’ll create some files in /etc/network/interfaces.d. First, eth0 should look like this:

allow-hotplug eth0
iface eth0 inet dhcp
iface eth0 inet6 auto

And wlan0 should look like this:

allow-hotplug wlan0
iface wlan0 inet dhcp
    wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf

Raspbian is inconsistent about whether it uses eth0/wlan0 or renamed interfaces. Run ifconfig or ip addr. If you see a long-named interface such as enx<something> or wlp<something>, copy the eth0 file to one named after the enx interface (or the wlan0 file to one named after the wlp interface), and edit the internal references to eth0/wlan0 in the new file to use the long interface name.
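For example, assuming a hypothetical interface name of enxb827eb123456:

cp /etc/network/interfaces.d/eth0 /etc/network/interfaces.d/enxb827eb123456
sed -i 's/eth0/enxb827eb123456/g' /etc/network/interfaces.d/enxb827eb123456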

If using wifi, verify that your SSIDs and passwords are in /etc/wpa_supplicant/wpa_supplicant.conf. It should have lines like:

network={
   ssid="NetworkName"
   psk="passwordHere"
}

(This is where Raspberry Pi OS put them).

Deal with DHCP

Raspberry Pi OS used dhcpcd, whereas bookworm normally uses isc-dhcp-client. Verify the system is in the correct state:

apt-get install isc-dhcp-client
apt-get --purge remove dhcpcd dhcpcd-base dhcpcd5 dhcpcd-dbus

Set up LEDs

To set up the LEDs to trigger on MicroSD activity as they did with Raspbian, follow the Debian instructions. Run apt-get install sysfsutils. Then put this in a file at /etc/sysfs.d/local-raspi-leds.conf:

class/leds/ACT/brightness = 1
class/leds/ACT/trigger = mmc1

Prepare for boot

To make sure all the /boot/firmware files are updated, run update-initramfs -u. Verify that root in /boot/firmware/cmdline.txt references the PARTUUID as appropriate. Verify that /boot/firmware/config.txt contains the lines arm_64bit=1 and upstream_kernel=1. If not, go back to the section on modifying /etc/default/raspi-firmware and fix it up.
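One quick way to perform those checks (simple greps; inspect the full files if anything looks off):

grep root= /boot/firmware/cmdline.txt
grep -E 'arm_64bit|upstream_kernel' /boot/firmware/config.txt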

The moment arrives

Cross your fingers and try rebooting into your Debian system:

reboot

I found that the first boot into Debian seems to hang for 30-60 seconds during bootstrap. I’m not sure why; don’t panic if that happens. It may be necessary to power cycle the Pi for this boot.

Troubleshooting

If things don’t work out, hook up the Pi to a HDMI display and see what’s up. If I anticipated a particular problem, I would have documented it here (a lot of the things I documented here are because I ran into them!) So I can’t give specific advice other than to watch boot messages on the console. If you don’t even get kernel messages going, then there is some problem with your partition table or /boot/firmware FAT partition. Otherwise, you’ve at least got the kernel going and can troubleshoot like usual from there.

Consider Security First

I write this in the context of my decision to ditch Raspberry Pi OS and move everything I possibly can, including my Raspberry Pi devices, to Debian. I will write about that later.

But for now, I wanted to comment on something I think is often overlooked and misunderstood by people considering distributions or operating systems: the huge importance of getting security updates in an automated and easy way.

Background

Let’s assume that these statements are true, which I think are well-supported by available evidence:

  1. Every computer system (OS plus applications) that can do useful modern work has security vulnerabilities, some of which are unknown at any given point in time;

  2. During the lifetime of that computer system, some of these vulnerabilities will be discovered. For a (hopefully large) subset of those vulnerabilities, timely patches will become available.

Now then, it follows that applying those timely patches is a critical part of having a system that is as secure as possible. Of course, you have to do other things as well – good passwords, secure practices, etc – but, fundamentally, if your system lacks patches for known vulnerabilities, you’ve already lost at the security ballgame.

How to stay patched

There is something of a continuum of how you might patch your system. It runs roughly like this, from best to worst:

  1. All components are kept up-to-date automatically, with no intervention from the user/operator

  2. The operator is automatically alerted to necessary patches, and they can be easily installed with minimal intervention

  3. The operator is automatically alerted to necessary patches, but they require significant effort to apply

  4. The operator has no way to detect vulnerabilities or necessary patches

It should be obvious that the first situation is ideal. Every other situation relies on the timeliness of human action to keep up-to-date with security patches. This is a fallible situation; humans are busy, take trips, dismiss alerts, miss alerts, etc. That said, it is rare to find any system living truly all the way in that scenario, as you’ll see.

What is “your system”?

A critical point here is: what is “your system”? It includes:

  • Your kernel
  • Your base operating system
  • Your applications
  • All the libraries needed to run all of the above

Some OSs, such as Debian, make little or no distinction between the base OS and the applications. Others, such as many BSDs, have a distinction there. And in some cases, people will compile or install applications outside of any OS mechanism. (It must be stressed that by doing so, you are taking the responsibility of patching them on your own shoulders.)

How do common systems stack up?

  • Debian, with its support for unattended-upgrades, needrestart, debian-security-support, and such, is largely category 1. It can automatically apply security patches, in most cases can restart the necessary services for the patch to take effect, and will alert you when some processes or the system must be manually restarted (for instance, after a kernel update). Those cases requiring manual intervention are category 2. The debian-security-support package will even warn you of gaps in the system. You can also use debsecan to scan for known vulnerabilities on a given installation. (See the example after this list.)

  • FreeBSD has no way to automatically install security patches for things in the packages collection. As with many rolling-release systems, you can’t safely automate the installation of package updates, because an update may bring along more than just security patches: it may represent a major upgrade that introduces incompatibilities. Unlike Debian’s practice of backporting fixes and thus producing narrowly-tailored patches, forcing upgrades to newer versions precludes a “minimal intervention” install. Therefore, rolling-release systems are category 3.

  • Things such as Snap, Flatpak, AppImage, Docker containers, Electron apps, and third-party binaries often contain embedded libraries and such for which you have no easy visibility into their status. For instance, if there was a bug in libpng, would you know how many of your containers had a vulnerability? These systems are category 4 – you don’t even know if you’re vulnerable. It’s for this reason that my Debian-based Docker containers apply security patches before starting processes, and also run unattended-upgrades and friends.
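To make the Debian scenario above concrete, here is roughly what it takes to get most of the way to category 1 on a Debian system (a sketch; the packages beyond unattended-upgrades are optional extras):

apt-get install unattended-upgrades needrestart debian-security-support
dpkg-reconfigure -plow unattended-upgrades

The dpkg-reconfigure step is what enables the periodic automatic runs.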

The pernicious library problem

As mentioned in my last category above, hidden vulnerabilities can be a big problem. I’ve been writing about this for years. Back in 2017, I wrote an article focused on Docker containers, but which applies to other systems like Snap and so forth. I cited a study back then finding that “Over 80% of the :latest versions of official images contained at least one high severity vulnerability.” The situation is no better now. In December 2023, it was reported that, two years after the critical Log4Shell vulnerability, 25% of apps were still vulnerable to it. Also, only 21% of developers ever update third-party libraries after introducing them into their projects.

Clearly, you can’t rely on these images with embedded libraries to be secure. And since they are black box, they are difficult to audit.

Debian’s policy of always splitting libraries out from packages is hugely beneficial; it allows fine-grained analysis of not just vulnerabilities, but also the dependency graph. If there’s a vulnerability in libpng, you have one place to patch it and you also know exactly what components of your system use it.
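For instance, you can ask apt exactly which installed packages depend on a given library (the package name here assumes bookworm’s libpng16-16):

apt-cache --installed rdepends libpng16-16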

If you use snaps, or AppImages, you can’t know if they contain a deeply embedded vulnerability, nor could you patch it yourself if you even knew. You are at the mercy of upstream detecting and remedying the problem – a dicey situation at best.

Who makes the patches?

Fundamentally, humans produce security patches. Often, but not always, patches originate with the authors of a program and then are integrated into distribution packages. It should be noted that every security team has finite resources; there will always be some CVEs that aren’t patched in a given system for various reasons; perhaps they are not exploitable, or are too low-impact, or have better mitigations than patches.

Debian has an excellent security team; they manage the process of integrating patches into Debian, produce Debian Security Advisories, maintain the Debian Security Tracker (which maintains cross-references with the CVE database), etc.

Some distributions don’t have this infrastructure. For instance, I was unable to find this kind of tracker for Devuan or Raspberry Pi OS. In contrast, Ubuntu and Arch Linux both seem to have active security teams with trackers and advisories.

Implications for Raspberry Pi OS and others

As I mentioned above, I’m transitioning my Pi devices off Raspberry Pi OS (Raspbian). Security is one reason. Although Raspbian is a fork of Debian, and you can install packages like unattended-upgrades on it, they don’t work right because they rely on Debian’s infrastructure, and Raspbian hasn’t modified them to use its own. I don’t see any Raspberry Pi OS security advisories, trackers, etc. In short, Raspbian lacks the infrastructure to support those Debian tools anyhow.

Not only that, but Raspbian lags behind Debian in both new releases and new security patches, sometimes by days or weeks.

Live Migrating from Raspberry Pi OS bullseye to Debian bookworm contains instructions for migrating Raspberry Pis to Debian.

The Grumpy Cricket (And Other Enormous Creatures)

This Christmas, one of my gifts to my kids was a text adventure (interactive fiction) game for them. Now that they’ve enjoyed it, I’m releasing it under the GPL v3.

As interactive fiction, it’s like an e-book, but the reader is also the player, guiding the exploration of the world.

The Grumpy Cricket is designed to be friendly for a first-time player of interactive fiction. There is no way to lose the game or to die. There is an in-game hint system providing context-sensitive hints anytime the reader types HINT. There are splashes of humor throughout that got all three of my kids laughing.

I wrote it in 2023 for my kids, who range in age from 6 to 17. That’s quite a wide range, but they all enjoyed it.

You can download it, get the source, or play it online in a web browser at https://www.complete.org/the-grumpy-cricket/

It’s More Important To Recognize What Direction People Are Moving Than Where They Are

I recently read a post on social media that went something like this (paraphrased):

“If you buy an EV, you’re part of the problem. You’re advancing car culture and are actively hurting the planet. The only ethical thing to do is ditch your cars and put all your effort into supporting transit. Anything else is worthless.”

There is some truth there; supporting transit in areas it makes sense is better than having more cars, even EVs. But of course the key here is in areas it makes sense.

My road isn’t even paved. I live miles from the nearest town. And get into the remote regions of the western USA and you’ll find people who live 40 miles from the nearest neighbor. There’s no realistic way that mass transit is ever going to be a thing in these areas. And even if it were somehow usable, sending buses over miles where nobody lives just to reach the few people who are there would be worse than private EVs. And because I can hear this argument coming a mile away: no, it doesn’t make sense to tell these people to just not live in the country because the planet won’t support that anymore; those people are literally the ones who feed the people living in the cities.

The funny thing is: the person that wrote that shares my concerns and my goals. We both care deeply about climate change. We both want positive change. And I, ahem, recently bought an EV.

I have seen this play out in so many ways over the last few years. Drive a car? Get yelled at. Support the wrong politician? Get a shunning. Not speak up loudly enough about the right politician? That’s a yellin’ too.

The problem is, this doesn’t make friends. In fact, it hurts the cause. It doesn’t recognize this truth:

It is more important to recognize what direction people are moving than where they are.

I support trains and transit. I’ve donated money and written letters to politicians. But, realistically, there will never be transit here. People in my county can’t realistically switch entirely to transit. But what can we do? Plenty. We bought an EV. I’ve been writing letters to the board of our local electrical co-op advocating for relaxation of rules around residential solar installations, and am planning one myself. It may well be that our solar-powered transportation winds up having a lower carbon footprint than the poster’s transit use.

Pick your favorite cause. Whatever it is, consider your strategy: What do you do with someone that is very far away from you, but has taken the first step to move an inch in your direction? Do you yell at them for not being there instantly? Or do you celebrate that they have changed and are moving?

How Gapped is Your Air?

Sometimes we want better-than-firewall security for things. For instance:

  1. An industrial control system for a municipal water-treatment plant should never have data come in or out
  2. Or, a variant of the industrial control system: it should only permit telemetry and monitoring data out, and nothing else in or out
  3. A system dedicated to keeping your GPG private keys secure should only have material to sign (or decrypt) come in, and signatures (or decrypted data) go out
  4. A system keeping your tax records should normally only have new records go in, but may on occasion have data go out (eg, to print a copy of an old record)

In this article, I’ll talk about the “high side” (the high-security or high-sensitivity systems) and the “low side” (the lower-sensitivity or general-purpose systems). For the sake of simplicity, I’ll assume the high side is a single machine, but it could as well be a whole network.

Let’s focus on examples 3 and 4 to make things simpler. Let’s consider the primary concern to be data exfiltration (someone stealing your data), with a secondary concern of data integrity (somebody modifying or destroying your data).

You might think the safest possible approach is airgapped – that is, literally no physical network connection to the machine at all. This helps! But then the problem becomes: how do we deal with the inevitable need to legitimately get things on or off of the system? As I wrote in Dead USB Drives Are Fine: Building a Reliable Sneakernet, by using tools such as NNCP, you can certainly create a “sneakernet”: using USB drives as transport.

While this is a very secure setup, as with most things in security, it’s less than perfect. The Wikipedia airgap article discusses some ways airgapped machines can still be exploited. It mentions that security holes relating to removable media have been exploited in the past. There are also other ways to get data out; for instance, Debian ships with gensio and minimodem, both of which can transfer data acoustically.

But let’s back up and think about why we think of airgapped machines as so much more secure, and what the failure modes of other approaches might be.

What about firewalls?

You could very easily set up a high-side machine that is on a network, but is restricted to only one outbound TCP port. There could be a local firewall, and perhaps also a special port on an external firewall that implements the same restrictions. A variant on this approach would be two computers connected directly by a crossover cable, though this doesn’t necessarily imply being more secure.
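As a minimal sketch of the local-firewall half of that, in nftables terms (the port number is a stand-in for whatever your one permitted service uses):

nft add table inet highside
nft 'add chain inet highside output { type filter hook output priority 0; policy drop; }'
nft add rule inet highside output oifname "lo" accept
nft add rule inet highside output tcp dport 5400 accept

A real deployment would mirror this on the external firewall, as described above.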

Of course, the concern about a local firewall is that it could potentially be compromised. An external firewall might too; for instance, if your credentials to it were on a machine that got compromised. This kind of dual compromise may be unlikely, but it is possible.

We can also think about the complexity in a network stack and firewall configuration, and think that there may be various opportunities to have things misconfigured or buggy in a system of that complexity. Another consideration is that data could be sent at any time, potentially making it harder to detect. On the other hand, network monitoring tools are commonplace.

Still, this approach is convenient and cheap.

I use a system along those lines to do my backups. Data is sent, gpg-encrypted and then encrypted again at the NNCP layer, to the backup server. The NNCP process on the backup server runs as an untrusted user, and dumps the gpg-encrypted files to a secure location that is then processed by a cron job using Filespooler. The backup server is on a dedicated firewall port, with a dedicated subnet. The only ports allowed out are for NNCP and NTP, and offsite backups. There is no default gateway. Not even DNS is permitted out (the firewall does the appropriate redirection). There is one pinhole allowed out, where a subset of the backup data is sent offsite.

I initially used USB drives as transport, and the system had no network connection at all. But there were disadvantages to doing this for backups – particularly that I’d have no backups for as long as I forgot to move the drives. The backup system also would have clock drift, and the offsite backup picture was more challenging. (The clock drift was a problem because I use 2FA on the system: a password, plus a TOTP generated by a Yubikey.)

This is “pretty good” security, I’d think.

What are the weak spots? Well, if there were somehow a bug in the NNCP client, and the remote NNCP were compromised, that could lead to a compromise of the NNCP account. But this itself would accomplish little; some other vulnerability would have to be exploited on the backup server, because the NNCP account can’t see plaintext data at all. I use borgbackup to send a subset of backup data offsite over ssh. borgbackup has to run as root to be able to access all the files, but the ssh it calls runs as a separate user. An ssh vulnerability is therefore unlikely to cause much damage. If, somehow, the remote offsite system were compromised and it was able to exploit a security issue in the local borgbackup, that would be a problem. But that sounds like a remote possibility.

borgbackup itself can’t even be used over a sneakernet since it is not asynchronous. A more secure solution would probably be using something like dar over NNCP. This would eliminate the ssh installation entirely, and allow a complete isolation between the data-access and the communication stacks, and notably not require bidirectional communication. Logic separation matters too. My Roundup of Data Backup and Archiving Tools may be helpful here.

Other attack vectors could be a vulnerability in the kernel’s networking stack, local root exploits that could be combined with exploiting NNCP or borgbackup to gain root, or local misconfiguration that makes the sandboxes around NNCP and borgbackup less secure.

Because this system is in my basement in a utility closet with no chairs and no good place for a console, I normally manage it via a serial console. While it’s a dedicated line between the system and another machine, if the other machine is compromised or an adversary gets access to the physical line, credentials (and perhaps even data) could leak, albeit slowly.

But we can do much better with serial lines. Let’s take a look.

Serial lines

Some of us remember RS-232 serial lines and their once-ubiquitous DB-9 connectors. Traditionally, their speed maxed out at 115.2Kbps.

Serial lines have the benefit that they can be a direct application-to-application link. In my backup example above, a serial line could directly link the NNCP daemon on one system with the NNCP caller on another, with no firewall or anything else necessary. It is simply up to those programs to open the serial device appropriately.

This isn’t perfect, however. Unlike TCP over Ethernet, a serial line has no inherent error checking. Modern programs such as NNCP and ssh assume that a lower layer is making the link completely clean and error-free for them, and will interpret any corruption as an attempt to tamper and sever the connection. However, there is a solution to that: gensio. In my page Using gensio and ser2net, I discuss how to run NNCP and ssh over gensio. gensio is a generic framework that can add framing, error checking, and retransmit to an unreliable link such as a serial port. It can also add encryption and authentication using TLS, which could be particularly useful for applications that aren’t already doing that themselves.

More traditional solutions for serial communications have their own built-in error correction. For instance, UUCP and Kermit both were designed in an era of noisy serial lines and might be an excellent fit for some use cases. The ZModem protocol also might be, though it offers somewhat less flexibility and automation than Kermit.

I have found that certain USB-to-serial adapters by Gearmo will actually run at up to 2Mbps on a serial line! Look for the ones on their spec pages with an FTDI chipset rated at 920Kbps. It turns out they can successfully be driven faster, especially if gensio’s relpkt is used. I’ve personally verified 2Mbps operation (Linux port speed 2000000) on Gearmo’s USA-FTDI2X and the USA-FTDI4X. (I haven’t seen any single-port options from Gearmo with the 920Kbps chipset, but they may exist).
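Setting such a nonstandard speed is a one-liner on a reasonably modern Linux (assuming the adapter shows up as /dev/ttyUSB0):

stty -F /dev/ttyUSB0 2000000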

Still, even at 2Mbps, speed may well be a limiting factor with some applications. If what you need is a console and some textual or batch data, it’s probably fine. If you are sending 500GB backup files, you might look for something else. In theory, this USB to RS-422 adapter should work at 10Mbps, but I haven’t tried it.

But if the speed works, running a dedicated application over a serial link could be a nice and fairly secure option.

One of the benefits of the airgapped approach is that data never leaves unless you are physically aware of transporting a USB stick. Of course, you may not be physically aware of what is ON that stick in the event of a compromise. With a serial approach, you could get a similar benefit without that worry by, say, only plugging in the cable when you have data to transfer.

Data diodes

A traditional diode lets electrical current flow in only one direction. A data diode is the same concept, but for data: a hardware device that allows data to flow in only one direction.

This could be useful, for instance, in the tax records system that should only receive data, or the industrial system that should only send it.

Wikipedia claims that the simplest kind of data diode is a fiber link with transceivers connected in only one direction. I think you could go one simpler: a serial cable with only ground and TX connected at one end, wired to ground and RX at the other. (I haven’t tried this.)

This approach does have some challenges:

  • Many existing protocols assume a bidirectional link and won’t be usable

  • There is a challenge of confirming data was successfully received. For a situation like telemetry, maybe it doesn’t matter; another observation will come along in a minute. But for sending important documents, one wants to make sure they were properly received.

In some cases, the solution might be simple. For instance, with telemetry, just writing out data down the serial port in a simple format may be enough. For sending files, various mitigations, such as sending them multiple times, etc., might help. You might also look into FEC-supporting infrastructure such as blkar and flute, but these don’t provide an absolute guarantee. There is no perfect solution to knowing when a file has been successfully received if the data communication is entirely one-way.
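As a sketch of how simple the sending side can be for telemetry (this assumes a serial device at /dev/ttyUSB0 and a CPU temperature readable from sysfs; adjust for your hardware):

stty -F /dev/ttyUSB0 9600 raw
# one timestamped observation per line, once a minute
while true; do
    echo "$(date +%s) $(cat /sys/class/thermal/thermal_zone0/temp)"
    sleep 60
done > /dev/ttyUSB0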

Audio transport

I hinted above that minimodem and gensio can both function as software audio modems. That is, you could literally use speakers and microphones, or alternatively audio cables, as a means of getting data into or out of these systems. This is pretty limited; it is 1200bps, often half-duplex, and could literally be disrupted by barking dogs in some setups. But hey, it’s an option.
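minimodem makes this easy to try (1200 selects the 1200bps mode; filenames are arbitrary):

# low side: send a file as 1200bps audio tones
minimodem --tx 1200 < message.txt

# high side: listen on the default capture device and decode
minimodem --rx 1200 > received.txt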

Airgapped with USB transport

This is the scenario I began with, and named some of the possible pitfalls above as well. In addition to those, note also that USB drives aren’t necessarily known for their error-free longevity. Be prepared for failure.

Concluding thoughts

I wanted to lay out a few things in this post. First, that simply being airgapped is generally a step forward in security, but is not perfect. Secondly, that both physical and logical separation matter. And finally, that while tools like NNCP can make airgapped-with-USB-drive-transport a doable reality, there are also alternatives worth considering – especially serial ports, firewalled hard-wired Ethernet, data diodes, and so forth. I think serial links, in particular, have been largely forgotten these days.

Note: This article also appears on my website, where it may be periodically updated.

A Maze of Twisty Little Pixels, All Tiny

Two years ago, I wrote Managing an External Display on Linux Shouldn’t Be This Hard. Happily, since I wrote that post, most of those issues have been resolved.

But then you throw HiDPI into the mix and it all goes wonky.

If you’re running X11, basically the story is that you can change the scale factor, but it only takes effect on newly-launched applications (which means a logout/in, because some of your applications can’t really be re-launched). That is a problem if, like me, you sometimes connect an external display that is HiDPI, sometimes not, or your internal display is HiDPI but others aren’t. Wayland is far better, supporting on-the-fly resizes quite nicely.

I’ve had two devices with HiDPI displays: a Surface Go 2, and a work-issued Thinkpad. The Surface Go 2 is my ultraportable Linux tablet. I use it sparingly at home, and rarely with an external display. I just put Gnome on it, in part because Gnome had better on-screen keyboard support at the time, and left it at that.

On the work-issued Thinkpad, I really wanted to run KDE thanks to its tiling support (I wound up using bismuth with it). KDE was buggy with Wayland at the time, so I just stuck with X11 and ran my HiDPI displays at lower resolutions and lived with the fuzziness.

But now that I have a Framework laptop with a HiDPI screen, I wanted to get this right.

I tried both Gnome and KDE. Here are my observations with both:

Gnome

I used PaperWM with Gnome. PaperWM is a tiling manager with a unique horizontal ribbon approach. It grew on me; I think I would be equally at home with it as with my usual xmonad-style approach, or maybe even prefer it. Editing the active window border color required editing ~/.local/share/gnome-shell/extensions/paperwm@hedning:matrix.org/stylesheet.css and inserting background-color and border-color items in the paperwm-selection section.
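What I inserted was something like this (the .paperwm-selection selector is my reconstruction from the section name mentioned above; the colors are arbitrary):

.paperwm-selection {
    background-color: rgba(30, 144, 255, 0.3);
    border-color: rgb(30, 144, 255);
}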

Gnome continues to have an absolutely terrible picture for configuring things. It has no less than four places to make changes (Settings, Tweaks, Extensions, and dconf-editor). In many cases, configuration for a given thing is split between Settings and Tweaks, and sometimes even with Extensions, and then there are sometimes options that are only visible in dconf. That is, where the Gnome people have even allowed something to be configurable.

Gnome installs a power manager by default. It offers three options: performance, balanced, and saver. There is no explanation of the difference between them. None. What is it setting when I change the pref? A maximum frequency? A scaling governor? A balance between performance and efficiency cores? Not only that, but there’s no way to tell it to just use performance when plugged in and balanced or saver when on battery. In an issue about adding that, a Gnome dev wrote “We’re not going to add a preference just because you want one”. KDE, on the other hand, aside from not mucking with your system’s power settings in this way, has a nice panel with “on AC” and “on battery” and you can very easily tweak various settings accordingly. The hostile attitude from the Gnome developers in that thread was a real turnoff.

While Gnome has excellent support for Wayland, it doesn’t (directly) support fractional scaling. That is, you can set it to 100%, 200%, and so forth, but no 150%. Well, unless you manage to discover that you can run gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']" first. (Oh wait, does that make a FIFTH settings tool? Why yes it does.) Despite its name, that allows you to select fractional scaling under Wayland. For X11 apps, they will be blurry, a problem that is optional under KDE (more on that below).

Gnome won’t show the battery life time remaining on the task bar. Yikes. An extension might work in some cases. Not only that, but the Gnome battery icon frequently failed to indicate AC charging when AC was connected, a problem that didn’t exist on KDE.

Both Gnome and KDE support “night light” (warmer color temperatures at night), but Gnome’s often didn’t change when it should have, or changed on one display but not the other.

The appindicator extension is pretty much required, as otherwise a number of applications (eg, Nextcloud) don’t have their icon display anywhere. It does, however, generate a significant amount of log spam. There may be a fix for this.

Unlike KDE, which has a nice inobtrusive popup asking what to do, Gnome silently automounts USB sticks when inserted. This is often wrong; for instance, if I’m about to dd a Debian installer to it, I definitely don’t want it mounted. I learned this the hard way. It is particularly annoying because in a GUI, there is no reason to mount a drive before the user tries to access it anyhow. It looks like there is a dconf setting, but then to actually mount a drive you have to open up Files (because OF COURSE Gnome doesn’t have a nice removable-drives icon like KDE does) and it’s a bunch of annoying clicks, and I didn’t want to use the GUI file manager anyway. Same for unmounting; two clicks in KDE thanks to the task bar icon, but in Gnome you have to open up the file manager, unmount the drive, close the file manager again, etc.
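For what it’s worth, the setting in question appears to live in the org.gnome.desktop.media-handling schema (I haven’t re-verified this against current Gnome):

gsettings set org.gnome.desktop.media-handling automount false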

The ssh agent on Gnome doesn’t start up for a Wayland session, though this is easily enough worked around.

The reason I completely soured on Gnome is that after using it for awhile, I noticed my laptop fans spinning up. One core would be constantly busy. It was busy with a kworker events task, something to do with sound events. Logging out would resolve it. I believe it to be a Gnome shell issue. I could find no resolution to this, and am unwilling to tolerate the decreased battery life this implies.

The Gnome summary: it looks nice out of the box, but you quickly realize that this is something of a paper-thin illusion when you try to actually use it regularly.

KDE

The KDE experience on Wayland was a little bit opposite of Gnome. While with Gnome, things start out looking great but you realize there are some serious issues (especially battery-eating), with KDE things start out looking a tad rough but you realize you can trivially fix them and wind up with a very solid system.

Compared to Gnome, KDE never had a battery-draining problem. It will show me estimated battery time remaining if I want it to. It will do whatever I want it to when I insert a USB drive. It doesn’t muck with my CPU power settings, and lets me easily define “on AC” vs “on battery” settings for things like suspend when idle.

KDE supports fractional scaling, to any arbitrary setting (even with the gsettings thing above, Gnome still only supports it in 25% increments). Then the question is what to do with X11-only applications. KDE offers two choices. The first is “Scaled by the system”, which is also the only option for Gnome. With that setting, the X11 apps effectively run natively at 100% and then are scaled up within Wayland, giving them a blurry appearance on HiDPI displays. The advantage is that the scaling happens within Wayland, so the size of the app will always be correct even when the Wayland scaling factor changes. The other option is “Apply scaling themselves”, which uses native X11 scaling. This lets most X11 apps display crisp and sharp, but then if the system scaling changes, due to limitations of X11, you’ll have to restart the X apps to get them to be the correct size. I appreciate the choice, and use “Apply scaling themselves” because only a few of my apps aren’t Wayland-aware.

I did encounter a few bugs in KDE under Wayland:

sddm, the display manager, would be slow to stop and cause a long delay on shutdown or reboot. This seems to be a known issue with sddm and Wayland, and is easily worked around by adding a systemd TimeoutStopSec.
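The workaround is a standard systemd drop-in; the specific timeout value here is my arbitrary choice:

mkdir -p /etc/systemd/system/sddm.service.d
printf '[Service]\nTimeoutStopSec=10\n' > /etc/systemd/system/sddm.service.d/timeout.conf
systemctl daemon-reload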

Konsole, the KDE terminal emulator, has weird display artifacts when using fractional scaling under Wayland. I applied some patches and rebuilt Konsole and then all was fine.

The Bismuth tiling extension has some pretty weird behavior under Wayland, but a 1-character patch fixes it.

On Debian, KDE mysteriously installed Pulseaudio instead of Debian’s new default Pipewire, but that was easily fixed as well (and Pulseaudio also works fine).

Conclusions

I’m sticking with KDE. Given that I couldn’t figure out how to stop Gnome from deciding to eat enough battery to make my fan come on, the decision wasn’t hard. But even if it weren’t for that, I’d have gone with KDE. Once a couple of things were patched, the experience is solid, fast, and flawless. Emacs (my main X11-only application) looks great with the self-scaling in KDE. Gimp, which I use occasionally, was terrible with the blurry scaling in Gnome.

Update: Corrected the gsettings command

For the First Time In Years, I’m Excited By My Computer Purchase

Some decades back, when I’d buy a new PC, it would unlock new capabilities. Maybe AGP video, or a PCMCIA slot, or, heck, sound.

Nowadays, mostly new hardware means things get a bit faster or less crashy, or I have some more space for files. It’s good and useful, but sorta… meh.

Not this purchase.

Cory Doctorow wrote about the Framework laptop in 2021:

There’s no tape. There’s no glue. Every part has a QR code that you can shoot with your phone to go to a service manual that has simple-to-follow instructions for installing, removing and replacing it. Every part is labeled in English, too!

The screen is replaceable. The keyboard is replaceable. The touchpad is replaceable. Removing the battery and replacing it takes less than five minutes. The computer actually ships with a screwdriver.

Framework had been on my radar for a while. But for various reasons, when I was ready to purchase, I didn’t; either the waitlist was long, or they didn’t have the specs I wanted.

Lately my aging laptop with 8GB RAM started OOMing (running out of RAM). My desktop had developed a tendency to hard hang about once a month, and I researched replacing it, but the cost was too high to justify.

But when I looked into the Framework, I thought: this thing could replace both. It is a real shift in perspective to have a laptop that is nearly as upgradable as a desktop, and can be specced out to exactly what I wanted: 2TB storage and 64GB RAM. And still cheaper than a Macbook or Thinkpad with far lower specs, because the Framework uses off-the-shelf components as much as possible.

Cory Doctorow wrote, in The Framework is the most exciting laptop I’ve ever broken:

The Framework works beautifully, but it fails even better… Framework has designed a small, powerful, lightweight machine – it works well. But they’ve also designed a computer that, when you drop it, you can fix yourself. That attention to graceful failure saved my ass.

I like small laptops, so I ordered the Framework 13. I loaded it up with the 64GB RAM and 2TB SSD I wanted. Frameworks have four configurable ports, which are also hot-swappable. I ordered two USB-C, one USB-A, and one HDMI. I put them in my preferred spots (one USB-C on each side for easy docking and charging). I put Debian on it, and it all Just Worked. Perfectly.

Now, I ordered the DIY version. I hesitated about this — I HATE working with laptops because they’re all so hard, even though I KNEW this one was different — but went for it, because my preferred specs weren’t available in a pre-assembled model.

I’m glad I did that, because assembly was actually FUN.

I got my box. I opened it. There was the bottom shell with the motherboard and CPU installed. Here are the RAM sticks. There’s the SSD. A minute or two with each has them installed. Put the bezel on the screen, attach the keyboard — it has magnets to guide it into place — and boom, ready to go. Less than 30 minutes to assemble a laptop nearly from scratch. It was easier than assembling most desktops.

So now, for the first time, my main computing device is a laptop. Rather than having a desktop and a laptop, I just have a laptop. I’ll be able to upgrade parts of it later if I want to. I can rearrange the ports. And I can take all my most important files with me. I’m quite pleased!

Try the Last Internet Kermit Server

$ grep kermit /etc/services
kermit          1649/tcp

What is this mysterious protocol? Who uses it and what is its story?

This story is a winding one, beginning in 1981. Kermit is, to the best of my knowledge, the oldest actively-maintained software package with an original developer still participating. It is also a scripting language, an Internet server, a (scriptable!) SSH client, and a file transfer protocol.

And my first use of it was talking to my HP-48GX calculator over a 9600bps serial link. Yes, that calculator had a Kermit server built in.

But let’s back up and talk about serial ports and modems.

Serial Ports and Modems

In my piece The PC & Internet Revolution in Rural America, I recently talked about getting a modem – what an excitement it was to get one! I realize that many people today have never used a serial line or a modem, so let’s briefly discuss.

Before Ethernet and Wifi took off in a big way, in the 1990s-2000s, two computers would talk to each other over a serial line and a modem. By modern standards, these were slow; 300bps was a common early speed. They also (at least in the beginning) had no kind of error checking. Characters could be dropped or changed. Sometimes even those speeds were faster than the receiving device could handle. Some serial links were 7-bit, and wouldn’t even pass all 7-bit characters; for instance, sending a Ctrl-S could lock up a remote until you sent Ctrl-Q.

And computers back in the 1970s and 1980s weren’t as uniform as they are now. They used different character sets, different line endings, and even had different notions of what a file is. Today’s notion of a file as whatever set of binary bytes an application wants it to be was by no means universal; some systems treated a file as a set of fixed-length records, for instance.

So there were a lot of challenges in reliably moving files between systems. Kermit was introduced to reliably move files between systems using serial lines, automatically working around the varieties of serial lines, detecting errors and retransmitting, managing transmit speeds, and adapting between architectures as appropriate. Quite a task! And perhaps this explains why it was supported on a calculator with a primitive CPU by today’s standards.

Serial communication, by the way, is still commonplace, though now it isn’t prominent in everyone’s home PC setup. It’s used a lot in industrial equipment, avionics, embedded systems, and so forth.

The key point about serial lines is that they aren’t inherently multiplexed or packetized. Whereas an Ethernet network is designed to let many dozens of applications use it at once, a serial line typically runs only one (unless it is something like PPP, which is designed to do multiplexing over the serial line).

So it became useful to be able to both log in to a machine and transfer files with it. That is, incidentally, still useful today.

Kermit and XModem/ZModem

I wondered: why did we end up with two diverging sets of protocols, created at about the same time? The Kermit website has the answer: essentially, BBSs could assume 8-bit clean connections, so XModem and ZModem had much less complexity to worry about. Kermit, on the other hand, was highly flexible. Although ZModem came out a few years before Kermit had its performance optimizations, by about 1993 Kermit was on par with or faster than ZModem.

Beyond serial ports

As LANs and the Internet came to be popular, people started to use telnet (and later ssh) to connect to remote systems, rather than serial lines and modems. FTP was an early way to transfer files across the Internet, but it had its challenges. Kermit added telnet support, as well as later support for ssh (as a wrapper around the ssh command you already know). Now you could easily log in to a machine and exchange files with it without missing a beat.

And so it was that the Internet Kermit Service Daemon (IKSD) came into existence. It allows a person to set up a Kermit server, which can authenticate against local accounts or present anonymous access akin to FTP.

And so I established the quux.org Kermit Server, which runs the Unix IKSD (part of the Debian ckermit package).

Trying Out the quux.org Kermit Server

There are more instructions on the quux.org Kermit Server page! You can connect to it using either telnet or the kermit program. I won’t duplicate all of the information here, but here’s what it looks like to connect:

$ kermit
C-Kermit 10.0 Beta.08, 15 Dec 2022, for Linux+SSL (64-bit)
 Copyright (C) 1985, 2022,
  Trustees of Columbia University in the City of New York.
  Open Source 3-clause BSD license since 2011.
Type ? or HELP for help.
(/tmp/t/) C-Kermit>iksd /user:anonymous kermit.quux.org
 DNS Lookup...  Trying 135.148.101.37...  Reverse DNS Lookup... (OK)
Connecting to host glockenspiel.complete.org:1649
 Escape character: Ctrl-\ (ASCII 28, FS): enabled
Type the escape character followed by C to get back,
or followed by ? to see other options.
----------------------------------------------------

 >>> Welcome to the Internet Kermit Service at kermit.quux.org <<<

To log in, use 'anonymous' as the username, and any non-empty password

Internet Kermit Service ready at Fri Aug  4 22:32:17 2023
C-Kermit 10.0 Beta.08, 15 Dec 2022
kermit

Enter e-mail address as Password: [redacted]

Anonymous login.

You are now connected to the quux kermit server.

Try commands like HELP, cd gopher, dir, and the like.  Use INTRO
for a nice introduction.

(~/) IKSD>

You can even recursively download the entire Kermit mirror: over 1GB of files!

Conclusions

So, have fun. Enjoy this experience from the 1980s.

And note that Kermit also makes a better ssh client than ssh in a lot of ways; see ideas on my Kermit page.

This page also has a permanent home on my website, where it may be periodically updated.