I last reviewed email services in 2019. That review focused a lot of attention on privacy. At the time, I selected mailbox.org as my provider and have been using them for the five years since. However, both their service and their support have gone significantly downhill, so it is time for me to look at other options.
Here I am focusing strongly on email. Some of the providers mentioned here provide other services (IM, video calls, groupware, etc.), and to the extent they do, I am ignoring them.
What Matters in 2024
I want to start off by acknowledging that what you need in email probably depends on your circumstances and the country in which you live. For me, the starting point is that the largest threat most of us face isn’t from state actors but from criminals: hackers, ransomware gangs, etc. It is important to take as many steps as possible to secure one’s account against that. Privacy and security are both part of the mix. I still value privacy, but I acknowledge, as Migadu does, that “Email as we know it and encryption are incompatible.” Although some of these services strongly protect parts of the conversation, the reality is that most people will be emailing people using plain old email services which don’t. For stronger security, something like Signal would be needed. (I wrote about Signal in 2021 also.)
Interestingly, by this point OpenPGP support seems to be something of a standard feature among the providers I reviewed. All or almost all of them provide integration with browser-based encryption, as well as server-side encryption if you prefer that.
Although mailbox.org can automatically PGP-encrypt every message that arrives in plaintext, for general use, this is unwieldy; there isn’t good tooling for searching mailboxes where every message is encrypted, etc. So I never enabled that feature at Mailbox. I still value security and privacy, but a pragmatic approach addresses the most pressing threats first.
My criteria
The basic requirements for an email service include:
- Ability to use my own domains
- Strong privacy policy
- Ability for me to use my own IMAP and SMTP clients on both desktop and mobile
- It must be extremely reliable
- It must not be free
- It must have excellent support for those rare occasions when it is needed
- Support for basic aliases
Why do I say it must not be free? Because if someone is providing a service of the quality I’m talking about here and not charging for it, something is fishy: either they are unscrupulous, they are financially unstable, or the real product is something else entirely, such as ads. I am not aware of any provider that matches the other criteria with a free account anyhow. These providers range from about $30 to $90 per year, so cheaper than a Netflix subscription.
Immediately, this rules out several options:
- Proton doesn’t let me use my own clients on mobile (their bridge is desktop-only)
- Tuta also doesn’t let me use my own clients
- Posteo doesn’t let me use my own domain
- mxroute.com lacks a strong privacy policy, and its policy has numerous causes for concern (for instance, “If you repeatedly send email to invalid/unroutable recipients, they may be published on our GitHub”)
I will have a bit more to say about a couple of these providers below.
There are some additional criteria that are strongly desired but not absolutely required:
- Ability to set individual access passwords for every device/app
- Support for two-factor authentication (2FA/TFA/TOTP) for web-based access
- Support for basics in filtering: ability to filter on envelope recipient (so if I get BCC’d, I can still filter), and ability to execute more than one action on filter match (eg, deliver to two folders, or deliver to a folder and forward to someone else)
IMAP and SMTP don’t really support 2FA, so by setting individual passwords for every device, you can at least limit the blast radius and cut off a specific device if something is (or might be) compromised.
The candidates
I considered these providers: Startmail, Mailfence, Runbox, Fastmail, Kolab, Mailbox.org, and Migadu. I’ll review each, and highlight the pricing of the plan I would most likely use. Each provider offers multiple plans; some may be more expensive and some may be cheaper than the one I reviewed. I included a link to each provider’s full pricing information so you can compare for your needs.
I set up trials with each of these (except Mailbox.org, with which I already had a paid account). It so happened that I had actual questions for support at each one, which gave me an opportunity to see how support responded. I did not fabricate questions, and would not have contacted support if I didn’t have real ones. (This means that I asked different questions of each provider, because they were the REAL questions I had.) I’ll jump to the spoiler right now: I eventually chose Migadu, with Fastmail and Mailfence as close seconds.
I looked for providers myself, and also solicited recommendations in a Mastodon thread.
Mailbox.org
I begin with Mailbox, as it was my top choice in 2019 and the incumbent.
Until this year, I had been quite happy with it. I had cause to reach their support less than once a year on average, and each time they replied the same day or next day. Now, however, they are failing on reliability and on support.
Their spam filter has become overly aggressive and has blocked quite a bit of legitimate mail. When I contacted their support about a prior issue earlier this year, they initially took 4 days to reply, and then 6 days for the followup. Ouch. They had me disable some spam settings.
It didn’t really help. I continue to lose mail. I don’t know how much, because they block a lot of it before it even hits the spam folder. One of my friends texted to say mail was dropping. I raised a new ticket with Mailbox, which took them 5 days to reply to. Their reply was unhelpful: “As the Internet is not a static system, unforeseen events can always occur.” Well yes, that’s true, and I get it: false positives exist with email. But this was from an ISP’s mail system with an address that had been established for years, and it was part of a larger pattern of rejecting quite a bit of legit mail. And no recent interaction with them has resulted in them actually doing anything to resolve the problem; each reply is just a paragraph or two that does nothing and helps nothing.
When I complained that it took 5 days to reply, they said “We have not been able to reply sooner as we are currently experiencing a high volume of customer enquiries.” Even though their SLA for my account is a not-great “48 business hour” turnaround, they still missed it and their reason is “we’re busy.” I finally asked what RBL had caught the blocked email, since when I checked, the sender wasn’t on any RBL. Mailbox’s reply: they only keep their logs for 7 days, so next time I should contact them within 7 days. Which, of course, I DID; it was them that kept delaying. Ugh! It’s like they’ve become a cable company.
Even worse is how they have been blocking mail from GrapheneOS’s discussion forum. See their thread about it. In short, Graphene’s mail server has a clean reputation and Mailbox has no problem with it. But because one of Graphene’s webservers has an IPv6 allocation of a size Mailbox doesn’t like, they drop mail. It’s ridiculous, and Mailbox was dismissive of this well-known and well-regarded Open Source project. So if the likes of GrapheneOS can’t get a good-faith effort to deliver their mail, what chance does an individual like me have?
I’m sorry, but I’m literally paying you to deliver email for me and provide good support. If you can’t do either of those, you don’t get to push that problem down onto me. Hire appropriate staff.
On the technical side, they support aliases and my own clients, and have a reasonable privacy policy. They offer 2FA for the web interface (though, oddly, not for the support site), but its implementation is somewhat unusual. They do not support app passwords.
A somewhat unique feature is the @secure.mailbox.org domain. If you try to receive mail at that address, mailbox.org will block it unless it uses TLS. Same for sending. This isn’t E2EE, but it does at least require that things not be in plaintext for the last hop to Mailbox.
Verdict: not recommended due to poor reliability and support.
Mailbox.org summary:
- Website: https://mailbox.org/en/
- Reliability: iffy due to over-aggressive spam filtering
- Support: Poor; takes 4-6 days for a reply and replies are unhelpful
- Individual access passwords: No
- 2FA: Yes, but with a PIN instead of a password as the other factor
- Filtering: Full SIEVE feature set and GUI editor
- Spam settings: greylisting on/off, reject some/all spam, etc. But they’re insufficient to address Mailbox’s overzealousness, which support says I cannot work around within the interface.
- Server storage location: Germany
- Plan as reviewed: standard [pricing link]
- Cost per year: EUR 30 (about $33)
- Mail storage included: 10GB
- Limits on send/receive volume: none
- Aliases: 50 on your domain name, 25 on mailbox.org
- Additional mailboxes: Available; each one at the same fee as the primary mailbox
Startmail
I really wanted to like Startmail. Its “vault” is an interesting idea and should contribute to the security and privacy of an account. They clearly care about privacy.
It falls down in filtering. They have no way to filter on envelope recipient (BCC or similar). Their support confirmed this to me and that’s a showstopper.
Startmail support was also as slow as Mailbox, taking 5 days to respond to me.
Two showstoppers right there.
Verdict: Not recommended due to slow support responsiveness and weak filtering.
Startmail summary:
- Website: https://www.startmail.com/
- Reliability: Seems to be fine
- Support: Mediocre; took 5 days for a reply, but the reply was helpful
- Individual app access passwords: Yes
- 2FA: Yes
- Filtering: Poor; cannot filter on envelope recipient, and can’t build filters with multiple actions
- Spam settings: None
- Server storage location: The Netherlands
- Plan as reviewed: Custom domain (trial was Personal), [pricing link]
- Cost per year: $70
- Mail storage included: 20GB
- Limits on send/receive volume: none
- Aliases: unlimited, with lots of features: can set expiration, etc.
- Additional mailboxes: not available
Kolab
Kolab Now is mainly positioned as a full groupware service, but they do have an email-only option, which I investigated. There isn’t much documentation about it compared to other providers, and also not much in the way of settings. You can turn greylisting on or off. And… that’s it.
It has a full suite of filtering options. They set an X-Envelope-To header which you can use with the arbitrary header match to do the right thing even for BCC situations. Filters can have multiple conditions and multiple actions. It is SIEVE-based and you can download your SIEVE definitions.
If you enable 2FA, you disable IMAP and SMTP; not great.
Verdict: Not an impressive enough email featureset to justify going with it.
Kolab Now summary:
- Website: https://kolabnow.com/
- Reliability: Seems to be fine
- Support: Fine responsiveness (next day)
- Individual app passwords: no
- 2FA: Yes, but if you enable it, they disable IMAP and SMTP
- Filtering: Excellent
- Spam settings: Only greylisting on/off
- Server storage location: Switzerland; they have lots of details on their setup
- Plan as reviewed: “Just email” [pricing link]
- Cost per year: CHF 60, about $66
- Mail storage included: 5GB
- Limitations on send/receive volume: None
- Aliases: Yes. Not sure if there are limits.
- Additional mailboxes: Yes if you set up a group account. “Flexible pricing based on user count” is not documented anywhere I could find.
Mailfence
Mailfence is another option, somewhat similar to Startmail but without the unique vault. I had some questions about filters, and support was quite responsive, responding in a couple of hours.
Some of their copy on their website is a bit misleading, but support clarified when I asked them. They do not offer encryption at rest (like most of the entries here).
Mailfence’s filtering system is the kind I’d like to see. It allows multiple conditions and multiple actions for each rule, and has some unique actions as well (notify by SMS or XMPP). Support says that “Recipients” matches envelope recipients. However, one omission is that I can’t match on arbitrary headers; only on the canned list of headers they provide.
They have only two spam settings:
- spam filter on/off
- whitelist
Given some recent complaints about their spam filter being overly aggressive, I find this lack of control somewhat concerning. (However, I discount complaints from people begging for more features in free accounts; free won’t provide the kind of service I’m looking for from any provider.) Beyond that, there are generally very few email settings at all.
Verdict: Responsive and helpful support; the filtering has the right structure but lacks arbitrary header match. Could be a good option.
Mailfence summary:
- Website: https://mailfence.com/
- Reliability: Seems to be fine
- Support: Excellent responsiveness and helpful replies (after some initial confusion about my question of greylisting)
- Individual app access passwords: No. You can set a per-service password (eg, an IMAP password), but those will be shared with all devices speaking that protocol.
- 2FA: Yes
- Filtering: Good; only misses the ability to filter on arbitrary headers
- Spam settings: Very few
- Server storage location: Belgium
- Plan as reviewed: Entry [pricing link]
- Cost per year: $42
- Mail storage included: 10GB, with a maximum of 50,000 messages
- Limits on send/receive volume: none
- Aliases: 50. Aliases can’t be deleted once created (there may be an exception to this for aliases on your own domain rather than mailfence.com)
- Additional mailboxes: Their page on this is a bit confusing, and the pricing page lacks the information promised. It looks like you can pay the same $42/year for additional mailboxes, with a limit of up to 2 additional paid mailboxes and 2 additional free mailboxes tied to the account.
Runbox
This one came recommended in a Mastodon thread. I had some questions about it, and support response was fantastic – I heard from two people who are co-founders of the company! Even within hours, on a weekend. Incredible! This kind of response was surpassed only by Migadu.
I initially wrote to Runbox with questions about the incoming and outgoing message limits, which I hadn’t seen elsewhere, as well as the bandwidth limit. They said the bandwidth limit is no longer enforced on paid accounts. The incoming and outgoing limits are enforced, and all email (even spam) counts towards the limit. Notably, the outgoing limit counts recipients, so if you send 10 messages to your 50-recipient family group, that’s 500 recipients and the entire daily limit. However, they also indicated a willingness to reset the limit if something happens. Unfortunately, hitting the limit results in a hard bounce (SMTP 5xx) rather than a temporary failure (SMTP 4xx), so it can result in lost mail. This means I’d be worried about some attack or other weirdness causing me to lose mail.
Their filter is a pain point. Here are the challenges:
- You can’t directly match on a BCC recipient. Support advised to use a “headers” match, which will search for something anywhere in the headers. This works and is probably “good enough” since this data is in the Received: headers, but it is a little more imprecise.
- They only have a “contains”, not an “equals” operator. So, for instance, a pattern searching for “test@example.com” would also match “newtest@example.com”. Support advised to put the email address in angle brackets to avoid this. That will work… mostly. Angle brackets aren’t always required in headers.
- There is no way to have multiple actions on the filter (there is just no way to file an incoming message into two folders). This was the ultimate showstopper for me.
Support advised they are planning to upgrade the filter system in the future, but these are the limitations today.
Verdict: A good option if you don’t need much from the filtering system. Lots of privacy emphasis.
Runbox summary:
- Website: https://runbox.com/
- Reliability: Seems to be fine, except returning 5xx codes if per-day limits are exceeded
- Support: Excellent responsiveness and replies from founders
- Individual app passwords: Yes
- 2FA: Yes
- Filtering: Poor
- Spam settings: Very few
- Server storage location: Norway
- Plan as reviewed: Mini [pricing link]
- Cost per year: $35
- Mail storage included: 10GB
- Limits on send/receive volume: Receive 5000 messages/day, send 500 recipients/day
- Aliases: 100 on runbox.com; unlimited on your own domain
- Additional mailboxes: $15/yr each, also with 10GB non-shared storage per mailbox
Fastmail
Fastmail came recommended to me by a friend I’ve known for decades.
Here’s the thing about Fastmail, compared to all the services listed above: It all just works. Everything. Filtering, spam prevention, it is all there, all feature-complete, and all just does the right thing as you’d hope. Their filtering system has a canned dropdown for “To/Cc/Bcc”, it supports multiple conditions and multiple actions, and just does the right thing. (Delivering to multiple folders is a little cumbersome but possible.) It has a particularly strong feature set around administering multiple accounts, including things like whether users can prevent admins from reading their mail.
The not-so-great part of the picture is privacy. Fastmail is based in Australia, where the government has extensive powers around spying on data, even to the point of forcing companies to add wiretap capabilities. Fastmail’s privacy policy states user data may be held in Australia, the USA, India, and the Netherlands. By default, they share data with unidentified “spam companies”, though you can disable this in settings. On the other hand, they do make a good effort towards privacy.
I contacted support with some questions and got back a helpful response in three hours. However, one of the questions was about in which countries my particular data would be stored, and the support response said they would have to get back to me on that. It’s been several days and no word back.
Verdict: A featureful option that “just works”, with a lot of features for managing family accounts and the like, but lacking in the privacy area.
Fastmail summary:
- Website: https://www.fastmail.com/
- Reliability: Seems to be fine
- Support: Good response time on most questions; dropped the ball on one that required research
- Individual app access passwords: Yes
- 2FA: Yes
- Filtering: Excellent
- Spam settings: Can set filter aggressiveness, decide whether to share spam data with “spam-fighting companies”, configure how to handle backscatter spam, and evaluate the personal learning filter.
- Server storage locations: Australia, USA, India, and The Netherlands. Legal jurisdiction is Australia.
- Plan as reviewed: Individual [pricing link]
- Cost per year: $60
- Mail storage included: 50GB
- Limits on send/receive volume: 300/hour
- Aliases: Unlimited from what I can see
- Additional mailboxes: No; requires a different plan for that
Migadu
Migadu was a service I’d never heard of, but came recommended to me on Mastodon.
I listed Migadu last because it is in a class of its own compared to all the other options. Every other service is basically a webmail interface with a few extra settings tacked on.
Migadu has a full-featured email admin console in addition. By that I mean you can:
- View usage graphs (incoming, outgoing, storage) over time
- Manage DNS (if you want Migadu to run your nameservers)
- Manage multiple domains, and cross-domain relationships with mailboxes
- View a limited set of logs
- Configure accounts, reset their passwords if needed/authorized, etc.
- Configure email address rewrite rules with wildcards and so forth
Basically, if you were the sort of person that ran your own mail servers back in the day, here is Migadu giving you most of that functionality. Effectively you have a web interface to do all the useful stuff, and they handle the boring and annoying bits. This is a really attractive model.
Migadu support has been fantastic. They are quick to respond, and went above and beyond. I pointed out that their X-Envelope-To header, which is needed for filtering by BCC, wasn’t being added on emails I sent myself. They replied 5 hours later indicating they had added the feature to add X-Envelope-To even for internal mails! Wow! I am impressed.
With Migadu, you buy a pool of resources: storage space and incoming/outgoing traffic. What you do within that pool is up to you. You can set up users (“mailboxes”), aliases, domains, whatever you like. It all just shares the pool. You can restrict users further so that an individual user has access to only a subset of the pool resources.
I was initially concerned about Migadu’s daily send/receive message count limits, but in visiting with support and reading the documentation, what really comes out is that Migadu is a service with a personal touch. Hitting the incoming traffic limit will cause an SMTP temporary failure (4xx) response, so you won’t lose legit mail – and support will work with you if it’s a problem for legit uses. In other words, the restrictions are “soft” and they are interpreted reasonably.
One interesting thing about Migadu is that they do not offer accounts under their domain. That is, you MUST bring your own domain. That’s pretty easy and cheap, of course. It also puts you in a position of power, because it is easy to migrate email from one provider to another if you own the domain.
Filtering is done via SIEVE. There is a GUI editor which lets you accomplish most things, though it has an odd blind spot where you can’t file a message into multiple folders. However, you can edit a SIEVE ruleset directly and you get the full SIEVE featureset, which is extensive (and does support filing a message into multiple folders). I note that the SIEVE :envelope match doesn’t work, but Migadu adds an X-Envelope-To header which is just as good.
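To illustrate, here is a minimal SIEVE sketch of the multi-folder filing the GUI can’t express. The address and folder names are hypothetical; it matches on the X-Envelope-To header described above:

require ["fileinto", "copy"];

# File anything whose envelope recipient is this alias into two folders.
if header :is "X-Envelope-To" "family@example.com" {
    fileinto :copy "Family";   # :copy files a copy without canceling further actions
    fileinto "Archive";
}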
I particularly love a company that tells you all the reasons you might not want to use them. Migadu’s pro/con list is an honest drawbacks list (of course, their homepage highlights all the features!).
Verdict: Fantastically powerful, excellent support, and good privacy. I chose this one.
Migadu summary:
- Website: https://migadu.com/
- Reliability: Excellent
- Support: Fantastic. Good response times and they added a feature (or fixed a bug?) a few hours after I requested it.
- Individual access passwords: Yes. Create “identities” to support them.
- 2FA: Yes, on both the admin interface and the webmail interface
- Filtering: Excellent, based on SIEVE. GUI editor doesn’t support multiple actions when filing into a folder, but full SIEVE functionality is exposed.
- Spam settings:
- On the domain level, filter aggressiveness, Greylisting on/off, black and white lists
- On the mailbox level, filter aggressiveness, black and whitelists, action to take with spam; compatible with filters.
- Server storage location: France; legal jurisdiction Switzerland
- Plan as reviewed: mini [pricing link]
- Cost per year: $90
- Mail storage included: 30GB (“soft” quota)
- Limits on send/receive volume: 1000 messages in/day, 100 messages out/day (“soft” quotas)
- Aliases: Unlimited on an unlimited number of domains
- Additional mailboxes: Unlimited and free; uses pooled quotas, but individual quotas can be set
Others
Here are a few others that I didn’t think worth a trial:
- mxroute was recommended by several. Lots of concerning things in their policy, such as:
- if you repeatedly send mail to unroutable recipients, they may publish the addresses on Github
- they will terminate your account if they think you are “rude” or want to contest a charge
- they reserve the right to cancel your service at any time for any (or no) reason.
- Proton keeps coming up, and I will not consider it so long as I am locked into their client on mobile.
- Skiff comes up sometimes, but they were acquired by Notion.
- Disroot comes up; this discussion highlights a number of reasons why I avoid them. Their Terms of Service (ToS) are inconsistent with a general-purpose email account (I guess for targeting nonprofits and activists, that could make sense). Particularly laughable is that they claim to be friends of Open Source, but would take down your account if you upload “copyrighted” material. News flash: for an Open Source license to be meaningful, the underlying work must be copyrighted. It is perfectly legal to upload copyrighted material when you wrote it or have a license to do so!
Conclusions
There are a lot of good options for email hosting today, and in particular I appreciate the excellent personal support from companies like Migadu and Runbox. Support small businesses!
Photographic comparison: Is the Kobo Libra Colour display worse than the Kobo Libra 2?
I’ve been using E Ink-based ereaders for quite a number of years now. I’ve had my Kobo Libra 2 for a few years, and was looking forward to the Kobo Libra Colour — the first color E Ink display in a mainstream ereader line.
I found the display to be a mixed bag; contrast seemed a lot worse on B&W images, and the device “backlight” (it’s not technically a “back” light) seemed to cause a particular contrast reduction in dark mode. I went searching for information on this. I found a lot of videos on “Kobo Libra 2 vs Libra Colour” and so forth, but they were all pretty much useless. These were the mistakes they made:
- Being videos. Photos would show the differences in better detail.
- Shooting videos with cameras with automatic light levels. Since the thing we’re trying to evaluate here is how much darker the Kobo Libra Colour screen is than the Kobo Libra screen, having a camera that automatically adjusts for brighter or darker images defeats the purpose. Cell phone cameras (still and video) all do this by default and I saw evidence of it in all the videos.
- Placing the two devices side-by-side instead of in identical locations for subsequent shots. This led to different shadows on each device (because OF COURSE the people shooting videos had to have their phone and head between the light source and the device), again preventing a good comparison.
So I dug out my Canon DSLR and tripod and set up shots. Every shot here is at ISO 100. Every shot in the same setting has the same exposure settings, which I document. The one thing I forgot to shut off was automatic white balance; you can notice it is active if you look closely at the backgrounds, but WB isn’t really relevant to this comparison anyhow.
Because there has also been a lot of concern about how well fine B&W details will show up on the Kobo Libra Colour screen, I shot all photos using a PDF test image from the open source hplip package (testpage.ps.gz converted to PDF). This also rules out font differences between the devices. I ensured a full screen refresh before each shot.
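If you want to reproduce this, the conversion is straightforward. A sketch, assuming Ghostscript is installed and that your hplip version ships the file at this path (it may vary):

zcat /usr/share/hplip/data/ps/testpage.ps.gz > testpage.ps
ps2pdf testpage.ps testpage.pdf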
This is all because color E Ink is effectively a filter called Kaleido over the B&W layer. This causes dimming and some other visual effects.
You can click on any image here to see a full-resolution view. The full-size images are the exact JPEG coming from the camera, with only two modifications: 1) metadata has been redacted for privacy reasons, and 2) some images were losslessly rotated after the shoot.
OK, onwards!
Outdoors, bright sun, shot from directly overhead
Bright sun is ideal lighting for an E Ink display. They need no lighting at all in this scenario, and in fact, if you turn on their internal display light, it will probably not be very noticeable. Of course, this is in contrast to phone LCD screens, for which bright sunlight is the worst.
Scene: Morning sunlight reaching the ereaders at an angle. The angle was sufficient so that no shadows were cast by the camera or tripod.
Device light: Off on both
Exposure: 1/160, f16, ISO 100
You can see how much darker the Libra Colour is here. Though in these bright conditions, it is still plenty bright. There may actually be situations in which the Libra 2 is too bright in direct sunlight, requiring a person to squint or whatnot.
Looking at the radial lines, it is a bit difficult to tell because of the difference in brightness, but I don’t see a hugely obvious reduction in quality compared to the Libra 2. Later I have a shot where I try to match brightness, and we’ll check it out again there.
Outdoors, shade, shot from directly overhead
For the next shot, I set the ereaders in shade, but still well-lit with the diffuse sunlight from all around.
The first two have both device lights off. For the third, I set the device light on the Kobo Colour to 100%, at its coolest (least warm) setting, to try to see how close I could get it to the Libra 2 brightness. (Sorry, it looks like I forgot to close the toolbar on the Colour for this set, but it doesn’t affect the important bits of the underlying image.)
Device light: Initially off on both
Exposure: 1/60, f6.4, ISO 100
Here you can see the light on the Libra Colour was nearly able to match the brightness on the Libra 2.
Indoors, room lit with overhead and window light, device light off
We continue to move into dimmer light with this next shot.
Device light: Off on both
Exposure: 1/4, f5, ISO 100
Indoors, room lit with overhead and window light, device light on
Now we have the first head-to-head with the device light on. I set the Libra 2 to my favorite warmth setting, found a brightness that looked good, and then tried my best to match those settings on the Libra Colour. My camera’s light meter aided in matching brightness.
Device light: On (Libra 2 at 40%, Libra Colour at 59%)
Exposure: 1/8, f5, ISO 100
(Apparently I am terrible at remembering to dismiss menus, sigh.)
Indoors, dark room, dark mode, at an angle
The Kobo Libra Colour surprised me with its dark mode. When viewed at an oblique angle, the screen gets pretty washed out. I maintained the same brightness settings here as I did above. It is much more noticeable when the brightness is set down to my preferred nighttime level (4%), or with a more significant angle.
Since you can’t see my tags, the order of the photos here will be: Libra 2 (standard orientation), Colour (standard orientation), Colour (turned around).
Device light: On (as above)
Exposure: 1/4, f5.6, ISO 100
Notice how I said I maintained the same brightness settings as before, and yet the Libra Colour looks brighter than the Libra 2 here, whereas it looked the same in the prior non-dark mode photos. Here’s why. I set the exposure of each set of shots based on camera metering. As we have seen from the light-off photos, the brightness of a white pixel is a lot less on a Libra Colour than on the Libra 2. However, it is likely that the brightness of a black pixel is about the same. Therefore, contrast on the Libra Colour is lower than on the Libra 2. The traditional shot is majority white pixels, so to make the Libra Colour brightness match that of the Libra 2, I had to crank up the brightness on the Libra Colour to compensate for the darker “white” background. With me so far?
Now with the inverted image, you can see what that does. It doesn’t just raise the brightness of the white pixels, but it also raises the brightness of the black pixels. This is expected because we didn’t raise contrast, only brightness.
Also, in the last image, you can see it is brighter to the right. Again, other conditions that are more difficult to photograph make that much more pronounced. Viewing the Libra Colour from one side (but not the other), in dark mode, with the light on, produces noticeably worse contrast on one side.
Conclusions
This isn’t a slam dunk. Let’s walk through this:
I don’t think there is any noticeable loss of detail on the Libra Colour. The radial lines appeared as well defined on it as on the Libra 2. Oddly, with the backlight, some striations were apparent in the gray gradient test, but I wouldn’t be using an E Ink device for clear photographic reproduction anyhow.
If you read mostly black and white: If you had been using a Kobo Libra Colour and were handed a Libra 2, you would go, “Wow! What an upgrade! The screen is so much brighter!” There’s little reason to get a Libra Colour. The Libra 2 might be hard to find these days, but the new Clara BW (with a 6″ instead of the 7″ screen on the Libra series) might be just the thing for you. The Libra 2 is at home in any lighting, from direct sun to pitch black, and has all the usual E Ink benefits (eg, battery life measured in weeks) and drawbacks (slower refresh rate) that we’re all used to.
If you are interested in photographic color reproduction mostly indoors: Consider a small tablet. The Libra Colour’s 4096 colors are going to appear washed out compared to what you’re used to on a LCD screen.
If you are interested in color content indoors and out: The Libra Colour might be a good fit. It could work well for things where superb color rendition isn’t essential — for instance, news stories (the Pocket integration or Calibre’s news feature could be nice there), comics, etc.
In a moderately-lit indoor room, it looks like the Libra Colour’s light can lead it to results that approach Libra 2 quality. So if most of your reading is in those conditions, perhaps the Libra Colour is right for you.
As a final aside: I wrote in this article about the Kobo devices because I switched from Kindles to Kobos a couple of years ago, due to the greater openness of the Kobo devices (you can add things like NickelMenu and KOReader to them, and they have built-in support for more useful formats), their featureset, and their cost. The top-of-the-line Kindle devices have a screen very similar if not identical to the Libra 2’s, so you can easily consider this a comparison between the Kindle Oasis and the Libra Colour as well.
Facebook is Censoring Stories about Climate Change and Illegal Raid in Marion, Kansas
It is, sadly, not entirely surprising that Facebook is censoring articles critical of Meta.
The Kansas Reflector published an article about Meta censoring environmental articles about climate change — deeming them “too controversial”.
Facebook then censored the article about Facebook censorship, and then after an independent site published a copy of the climate change article, Facebook censored it too.
The CNN story says Facebook apologized and said it was a mistake and was fixing it.
Color me skeptical, because today I saw this:
Yes, that’s right: today, April 6, I get a notification that they removed a post from August 12. The notification was dated April 4, but only showed up for me today.
I wonder why my post from August 12 was fine for nearly 8 months, and then all of a sudden, when the same website runs an article critical of Facebook, my 8-month-old post is a problem. Hmm.
Riiiiiight. Cybersecurity.
This isn’t even the first time they’ve done this to me.
On September 11, 2021, they removed my post about the social network Mastodon (click that link for screenshot). A post that, incidentally, had been made 10 months prior to being removed.
While they ultimately reversed themselves, I subsequently wrote Facebook’s Blocking Decisions Are Deliberate — Including Their Censorship of Mastodon.
That this same pattern has played out a second time — again with something that poses a very slight challenge to Facebook — seems to validate my conclusion. Facebook lets all sorts of hateful garbage infest their site, but anything about climate change — or their own censorship — gets removed, and this pattern persists for years.
There’s a reason I prefer Mastodon these days. You can find me there as @jgoerzen@floss.social.
So. I’ve written this blog post. And then I’m going to post it to Facebook. Let’s see if they try to censor me for a third time. Bring it, Facebook.
The xz Issue Isn’t About Open Source
You’ve probably heard of the recent backdoor in xz. There have been a lot of takes on this, most of them boiling down to some version of:
The problem here is with Open Source Software.
I want to say that not only is this view so myopic it verges on incorrect, but it also blinds us to more serious problems.
Now, I don’t pretend that there are no problems in the FLOSS community. There have been various pieces written about what this issue says about the FLOSS community (usually without actionable solutions). I’m not here to say those pieces are wrong. Just that there’s a bigger picture.
So with this xz issue, it may well be a state actor (aka “spy”) that added the malicious code to xz. We also know that proprietary software and systems can be vulnerable. For instance, a Twitter whistleblower revealed that Twitter employed Indian and Chinese spies, some knowingly. A recent report pointed to security lapses at Microsoft, including “preventable” ones. According to the Wikipedia article on the SolarWinds attack, it was facilitated by various kinds of carelessness, including passwords being posted to Github and weak default passwords. SolarWinds directly distributed malware-infested updates, encouraged customers to disable anti-malware tools when installing their products, and so forth.
It would be naive indeed to assume that there aren’t black hat actors among the legions of programmers employed by companies that outsource work to low-cost countries — some of which have challenges with bribery.
So, given all this, we can’t really say the problem is Open Source. Maybe it’s more broad:
The problem here is with software.
Maybe that inches us closer, but is it really accurate? We have all heard of Boeing’s recent issues, which seem to have some element of root causes in corporate carelessness, cost-cutting, and outsourcing. That sounds rather similar to the SolarWinds issue, doesn’t it?
Well then, the problem is capitalism.
Maybe it has a role to play, but isn’t it a little too easy to just say “capitalism” and throw up our hands helplessly, just as some do with FLOSS at the start of this article? After all, capitalism also brought us plenty of products of very high quality over the years. We can point to successful, non-careless products — I own some of them (for instance, my Framework laptop). We clearly haven’t reached the root cause yet.
And besides, what would you replace it with? All the major alternatives that have been tried have even stronger downsides. Maybe you replace it with “better regulated capitalism”, but that’s still capitalism.
Then the problem must be with consumers.
As this argument would go, it’s consumers’ buying patterns that drive problems. Buyers — individual and corporate — seek flashy features and low cost, prizing those over quality and security.
No doubt this is true in a lot of cases. Maybe greed or status-conscious societies foster it: Temu promises people they can “shop like a billionaire”, and unloads cheap junk on them, which “all but guarantees that shipments from Temu containing products made with forced labor are entering the United States on a regular basis“.
But consumers are also people, and some fraction of them are quite capable of writing fantastic software, and in fact, do so.
So what we need is some way to seize control. Some way to do what is right, despite the pressures of consumers or corporations.
Ah yes, dear reader, you have been slogging through all these paragraphs and now realize I have been leading you to this:
Then the solution is Open Source.
Indeed. Faults and all, FLOSS is the most successful movement I know where people are bringing us back to the commons: working and volunteering for the common good, unleashing a thousand creative variants on a theme, iterating in every direction imaginable. FLOSS is a vital part of everything from $30 Raspberry Pis to space missions. It is bringing education and communication to impoverished parts of the world. It lets everyone write and release software. And, unlike the SolarWinds and Twitter issues, it exposes both clever solutions and security flaws to the world.
If an authentication process in Windows got slower, we would all shrug and mutter “Microsoft” under our breath. Because, really, what else can we do? We have no agency with Windows.
If an authentication process in Linux gets slower, anybody that’s interested — anybody at all — can dive in and ask “why” and trace it down to root causes.
Some look at this and say “FLOSS is responsible for this mess.” I look at it and say, “this would be so much worse if it wasn’t FLOSS” — and experience backs me up on this.
FLOSS doesn’t prevent security issues itself.
What it does do is give capabilities to us all. The ability to investigate. Ability to fix. Yes, even the ability to break — and its cousin, the power to learn.
And, most rewarding, the ability to contribute.
Live Migrating from Raspberry Pi OS bullseye to Debian bookworm
I’ve been getting annoyed with Raspberry Pi OS (Raspbian) for years now. It’s a fork of Debian, but manages to omit some of the most useful things. So I’ve decided to migrate all of my Pis to run pure Debian. These are my reasons:
- Raspberry Pi OS has, for years now, specified that there is no upgrade path. That is, to get to a newer major release, it’s a reinstall. While I have sometimes worked around this, for a device that is frequently installed in hard-to-reach locations, an upgrade path is even more important than usual. It’s common for me to upgrade machines for a decade or more across Debian releases, and there’s no reason it should be so much more difficult with Raspbian.
- As I noted in Consider Security First, the security situation for Raspberry Pi OS isn’t as good as it is with Debian.
- Raspbian lags behind Debian – often by 6 months or more for major releases, and days or weeks for bug fixes and security patches
- Raspbian has no direct backports support, though Raspberry Pi 3 and above can use Debian’s backports (per my instructions at Installing Debian Backports on Raspberry Pi)
- Raspbian uses a custom kernel without initramfs support
It turns out it is actually possible to do an in-place migration from Raspberry Pi OS bullseye to Debian bookworm. Here I will describe how. Even if you don’t have a Raspberry Pi, this might still be instructive on how Raspbian and Debian packages work.
WARNINGS
Before continuing, back up your system. This process isn’t for the neophyte and it is entirely possible to mess up your boot device to the point that you have to do a fresh install to get your Pi to boot. This isn’t a supported process at all.
Architecture Confusion
Debian has three ARM-based architectures:
- armel, for the lowest-end 32-bit ARM devices without hardware floating point support
- armhf, for the higher-end 32-bit ARM devices with hardware float (hence “hf”)
- arm64, for 64-bit ARM devices (which all have hardware float)
Although the Raspberry Pi 0 and 1 do support hardware float, they lack support for other CPU features that Debian’s armhf architecture assumes. Therefore, the Raspberry Pi 0 and 1 can only run Debian’s armel architecture.
Raspberry Pi 3 and above are capable of running 64-bit, and can run both armhf and arm64.
Prior to the release of the Raspberry Pi 5 / Raspbian bookworm, Raspbian only shipped the armhf architecture. Well, it was an architecture they called armhf, but it was different from Debian’s armhf in that everything was recompiled to work with the more limited set of features on the earlier Raspberry Pi boards. It was really somewhere between Debian’s armel and armhf archs. You could run Debian armel on those, but it would run more slowly, due to doing floating point calculations without hardware support. Debian’s raspi FAQ goes into this a bit.
What I am going to describe here is going from Raspbian armhf to Debian armhf with a 64-bit kernel. Therefore, it will only work with Raspberry Pi 3 and above. It may theoretically be possible to take a Raspberry Pi 2 to Debian armhf with a 32-bit kernel, but I haven’t tried this and it may be more difficult. I have seen conflicting information on whether armhf really works on a Pi 2. (If you do try it on a Pi 2, ignore everything about arm64 and 64-bit kernels below, and just go with the linux-image-armmp-lpae kernel per the ARMMP page)
There is another wrinkle: Debian doesn’t support running 32-bit ARM kernels on 64-bit ARM CPUs, though it does support running a 32-bit userland on them. So we will wind up with a system with kernel packages from arm64 and everything else from armhf. This is a perfectly valid configuration, as arm64 CPUs – like x86_64 ones – can natively execute both the 32-bit and 64-bit instructions.
(It is theoretically possible to crossgrade a system from 32-bit to 64-bit userland, but that felt like a rather heavy lift for dubious benefit on a Pi; nevertheless, if you want to make this process even more complicated, refer to the CrossGrading page.)
Prerequisites and Limitations
In addition to the need for a Raspberry Pi 3 or above in order for this to work, there are a few other things to mention.
If you are using the GPIO features of the Pi, I don’t know if those work with Debian.
I think Raspberry Pi OS modified the desktop environment more than other components. All of my Pis are headless, so I don’t know if this process will work if you use a desktop environment.
I am assuming you are booting from a MicroSD card as is typical in the Raspberry Pi world. The Pi’s firmware looks for a FAT partition (MBR type 0x0c) and looks within it for boot information. Depending on how long ago you first installed an OS on your Pi, your /boot may be too small for Debian. Use df -h /boot to see how big it is. I recommend 200MB at minimum. If your /boot is smaller than that, stop now (or use some other system to shrink your root filesystem and rearrange your partitions; I’ve done this, but it’s outside the scope of this article.)
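For instance, output like this (the numbers are hypothetical) would be fine, since the Size column is above 200MB:

df -h /boot
# Filesystem      Size  Used Avail Use% Mounted on
# /dev/mmcblk0p1  255M   31M  225M  13% /boot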
You need to have stable power. Once you begin this process, your pi will mostly be left in a non-bootable state until you finish. (You… did make a backup, right?)
Basic idea
The basic idea here is that since bookworm has almost entirely newer packages than bullseye, we can “just” switch over to it and let the Debian packages replace the Raspbian ones as they are upgraded. Well, it’s not quite that easy, but that’s the main idea.
Preparation
First, make a backup. Even an image of your MicroSD card might be nice. OK, I think I’ve said that enough now.
It would be a good idea to have a HDMI cable (with the appropriate size of connector for your particular Pi board) and a HDMI display handy so you can troubleshoot any bootup issues with a console.
Preparation: access
The Raspberry Pi OS by default sets up a user named pi that can use sudo to gain root without a password. I think this is an insecure practice, but assuming you haven’t changed it, you will need to ensure it still works once you move to Debian. Raspberry Pi OS had a patch in their sudo package to enable it, and that will be removed when Debian’s sudo package is installed. So, put this in /etc/sudoers.d/010_picompat:
pi ALL=(ALL) NOPASSWD: ALL
Also, there may be no password set for the root account. It would be a good idea to set one; it makes it easier to log in at the console. Use the passwd command as root to do so.
Preparation: bluetooth
Debian doesn’t correctly identify the Bluetooth hardware address. You can save it off to a file by running hcitool dev > /root/bluetooth-from-raspbian.txt. I don’t use Bluetooth, but this should let you develop a script to bring it up properly.
Preparation: Debian archive keyring
You will next need to install Debian’s archive keyring so that apt can authenticate packages from Debian. Go to the bookworm download page for debian-archive-keyring and copy the URL for one of the files, then download it on the pi. For instance:
wget http://http.us.debian.org/debian/pool/main/d/debian-archive-keyring/debian-archive-keyring_2023.3+deb12u1_all.deb
Use sha256sum to verify the checksum of the downloaded file, comparing it to the package page on the Debian site.
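For instance (compare the output against the SHA256 listed on that package page):

sha256sum debian-archive-keyring_2023.3+deb12u1_all.deb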
Now, you’ll install it with:
dpkg -i debian-archive-keyring_2023.3+deb12u1_all.deb
Package first steps
From here on, we are making modifications to the system that can leave it in a non-bootable state.
Examine /etc/apt/sources.list and all the files in /etc/apt/sources.list.d. Most likely you will want to delete or comment out all lines in all files there. Replace them with something like:
deb http://deb.debian.org/debian/ bookworm main non-free-firmware contrib non-free
deb http://security.debian.org/debian-security bookworm-security main non-free-firmware contrib non-free
deb https://deb.debian.org/debian bookworm-backports main non-free-firmware contrib non-free
(you might leave off contrib and non-free depending on your needs)
Now, we’re going to tell it that we’ll support arm64 packages:
dpkg --add-architecture arm64
And finally, download the bookworm package lists:
apt-get update
If there are any errors from that command, fix them and don’t proceed until you have a clean run of apt-get update.
Moving /boot to /boot/firmware
The boot FAT partition I mentioned above is mounted at /boot by Raspberry Pi OS, but Debian’s scripts assume it will be at /boot/firmware. We need to fix this. First:
umount /boot
mkdir /boot/firmware
Now, edit fstab and change the reference to /boot to be to /boot/firmware.
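For illustration, a hypothetical before-and-after (your entry will likely use a PARTUUID or device name rather than this made-up label, and your options may differ):

# before (Raspbian):
LABEL=bootfs  /boot           vfat  defaults  0  2
# after:
LABEL=bootfs  /boot/firmware  vfat  defaults  0  2

With the fstab entry updated, run: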
mount -v /boot/firmware
cd /boot/firmware
mv -vi * ..
This mounts the filesystem at the new location, and moves all its contents back to where apt believes it should be. Debian’s packages will populate /boot/firmware later.
Installing the first packages
Now we start by installing the first of the needed packages. Eventually we will wind up with roughly the same set Debian uses.
apt-get install linux-image-arm64
apt-get install firmware-brcm80211=20230210-5
apt-get install raspi-firmware
If you get errors relating to firmware-brcm80211 from any commands, run that install firmware-brcm80211 command and then proceed. There are a few packages that Raspbian marked as newer than the version in bookworm (whether or not they really are), and that’s one of them.
Configuring the bootloader
We need to configure a few things in /etc/default/raspi-firmware before proceeding. Edit that file.
First, uncomment (or add) a line like this:
KERNEL_ARCH="arm64"
Next, in /boot/cmdline.txt you can find your old Raspbian boot command line. It will say something like:
root=PARTUUID=...
Save off the bit starting with PARTUUID. Back in /etc/default/raspi-firmware, set a line like this:
ROOTPART=PARTUUID=abcdef00
(substituting your real value for abcdef00).
This is necessary because the microSD card device name often changes from /dev/mmcblk0 to /dev/mmcblk1 when switching to Debian’s kernel. raspi-firmware will encode the current device name in /boot/firmware/cmdline.txt by default, which will be wrong once you boot into Debian’s kernel. The PARTUUID approach lets it work regardless of the device name.
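If you’d rather not copy the value by hand, one way to extract it (using the cmdline.txt location after the earlier move):

grep -o 'PARTUUID=[^ ]*' /boot/cmdline.txt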
Purging the Raspbian kernel
Run:
dpkg --purge raspberrypi-kernel
Upgrading the system
At this point, we are going to run the procedure beginning at section 4.4.3 of the Debian release notes. Generally, you will do:
apt-get -u upgrade
apt full-upgrade
Fix any errors at each step before proceeding to the next. Now, to remove some cruft, run:
apt-get --purge autoremove
Inspect the list to make sure nothing important is going to be removed.
Removing Raspbian cruft
You can list some of the cruft (the '~o' pattern matches obsolete packages: installed packages no longer available from any configured repository) with:
apt list '~o'
And remove it with:
apt purge '~o'
I also don’t run Bluetooth, and it seemed to sometimes hang on boot because I didn’t bother to fix it, so I did:
apt-get --purge remove bluez
Installing some packages
This makes sure some basic Debian infrastructure is available:
apt-get install wpasupplicant parted dosfstools wireless-tools iw alsa-tools
apt-get --purge autoremove
Installing firmware
Now run:
apt-get install firmware-linux
Resolving firmware package version issues
If it gives an error about the installed version of a package, you may need to force it to the bookworm version. For me, this often happened with firmware-atheros, firmware-libertas, and firmware-realtek.
Here’s how to resolve it, with firmware-realtek as an example:
- Go to https://packages.debian.org/PACKAGENAME – for instance, https://packages.debian.org/firmware-realtek. Note the version number in bookworm – in this case, 20230210-5.
- Now, you will force the installation of that package at that version:
apt-get install firmware-realtek=20230210-5
- Repeat with every conflicting package until done.
- Rerun apt-get install firmware-linux and make sure it runs cleanly.
Also, in the end you should be able to:
apt-get install firmware-atheros firmware-libertas firmware-realtek firmware-linux
Dealing with other Raspbian packages
The Debian release notes discuss removing non-Debian packages. There will still be a few of those. Run:
apt list '?narrow(?installed, ?not(?origin(Debian)))'
Deal with them; mostly you will need to force the installation of a bookworm version using the procedure in the section Resolving firmware package version issues above (even if it’s not for a firmware package). For non-firmware packages, you might possibly want to add --mark-auto to your apt-get install command line to allow the package to be autoremoved later if the things depending on it go away.
If you aren’t going to use Bluetooth, I recommend apt-get --purge remove bluez as well. Sometimes it can hang at boot if you don’t fix it up as described above.
Set up networking
We’ll be switching to the Debian method of networking, so we’ll create some files in /etc/network/interfaces.d. First, eth0 should look like this:
allow-hotplug eth0
iface eth0 inet dhcp
iface eth0 inet6 auto
And wlan0 should look like this:
allow-hotplug wlan0
iface wlan0 inet dhcp
wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
Raspbian is inconsistent about using eth0/wlan0 or renamed interfaces. Run ifconfig or ip addr. If you see a long-named interface such as enx<something> or wlp<something>, copy the eth0 file to one named after the enx interface (or the wlan0 file to one named after the wlp interface), and edit the internal references to eth0/wlan0 in the new file to use the long interface name.
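For example, with a hypothetical wired interface named enx0123456789ab:

cp /etc/network/interfaces.d/eth0 /etc/network/interfaces.d/enx0123456789ab
sed -i 's/eth0/enx0123456789ab/g' /etc/network/interfaces.d/enx0123456789ab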
If using wifi, verify that your SSIDs and passwords are in /etc/wpa_supplicant/wpa_supplicant.conf. It should have lines like:
network={
ssid="NetworkName"
psk="passwordHere"
}
(This is where Raspberry Pi OS put them).
Deal with DHCP
Raspberry Pi OS used dhcpcd, whereas bookworm normally uses isc-dhcp-client. Verify the system is in the correct state:
apt-get install isc-dhcp-client
apt-get --purge remove dhcpcd dhcpcd-base dhcpcd5 dhcpcd-dbus
Set up LEDs
To set up the LEDs to trigger on MicroSD activity as they did with Raspbian, follow the Debian instructions. Run apt-get install sysfsutils. Then put this in a file at /etc/sysfs.d/local-raspi-leds.conf:
class/leds/ACT/brightness = 1
class/leds/ACT/trigger = mmc1
Prepare for boot
To make sure all the /boot/firmware files are updated, run update-initramfs -u. Verify that root in /boot/firmware/cmdline.txt references the PARTUUID as appropriate. Verify that /boot/firmware/config.txt contains the lines arm_64bit=1 and upstream_kernel=1. If not, go back to the section on modifying /etc/default/raspi-firmware and fix it up.
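A quick way to eyeball both (a sketch; adjust the paths if yours differ):

grep -o 'root=PARTUUID=[^ ]*' /boot/firmware/cmdline.txt
grep -E 'arm_64bit|upstream_kernel' /boot/firmware/config.txt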
The moment arrives
Cross your fingers and try rebooting into your Debian system:
reboot
For some reason, I found that the first boot into Debian seems to hang for 30-60 seconds during bootstrap. I’m not sure why; don’t panic if that happens. It may be necessary to power cycle the Pi for this boot.
Troubleshooting
If things don’t work out, hook up the Pi to a HDMI display and see what’s up. If I had anticipated a particular problem, I would have documented it here (a lot of the things documented here are because I ran into them!). So I can’t give specific advice other than to watch boot messages on the console. If you don’t even get kernel messages, then there is some problem with your partition table or /boot/firmware FAT partition. Otherwise, you’ve at least got the kernel going and can troubleshoot like usual from there.
Consider Security First
I write this in the context of my decision to ditch Raspberry Pi OS and move everything I possibly can, including my Raspberry Pi devices, to Debian. I will write about that later.
But for now, I wanted to comment on something I think is often overlooked and misunderstood by people considering distributions or operating systems: the huge importance of getting security updates in an automated and easy way.
Background
Let’s assume that these statements are true, which I think are well-supported by available evidence:
- Every computer system (OS plus applications) that can do useful modern work has security vulnerabilities, some of which are unknown at any given point in time;
- During the lifetime of that computer system, some of these vulnerabilities will be discovered. For a (hopefully large) subset of those vulnerabilities, timely patches will become available.
Now then, it follows that applying those timely patches is a critical part of having a system that is as secure as possible. Of course, you have to do other things as well – good passwords, secure practices, etc. – but, fundamentally, if your system lacks patches for known vulnerabilities, you've already lost at the security ballgame.
How to stay patched
There is something of a continuum of how you might patch your system. It runs roughly like this, from best to worst:
1. All components are kept up-to-date automatically, with no intervention from the user/operator
2. The operator is automatically alerted to necessary patches, and they can be easily installed with minimal intervention
3. The operator is automatically alerted to necessary patches, but they require significant effort to apply
4. The operator has no way to detect vulnerabilities or necessary patches
It should be obvious that the first situation is ideal. Every other situation relies on the timeliness of human action to keep up-to-date with security patches, and humans are fallible: they are busy, take trips, dismiss alerts, miss alerts, etc. That said, it is rare to find any system that truly lives all the way in that first category, as you'll see.
What is “your system”?
A critical point here is: what is “your system”? It includes:
- Your kernel
- Your base operating system
- Your applications
- All the libraries needed to run all of the above
Some OSs, such as Debian, make little or no distinction between the base OS and the applications. Others, such as many BSDs, have a distinction there. And in some cases, people will compile or install applications outside of any OS mechanism. (It must be stressed that by doing so, you are taking the responsibility of patching them on your own shoulders.)
How do common systems stack up?
- Debian, with its support for unattended-upgrades, needrestart, debian-security-support, and such, is largely category 1. It can automatically apply security patches, in most cases can restart the necessary services for the patch to take effect, and will alert you when some processes or the system must be manually restarted for a patch to take effect (for instance, a kernel update). Those cases requiring manual intervention are category 2. The debian-security-support package will even warn you of gaps in the system, and you can use debsecan to scan for known vulnerabilities on a given installation (see the sketch after this list).
- FreeBSD has no way to automatically install security patches for things in the packages collection. As with many rolling-release systems, you can't safely automate those updates, because blindly updating packages may bring along more than just security patches: an update may represent a major upgrade that introduces incompatibilities. Unlike Debian's practice of backporting fixes and thus producing narrowly-tailored patches, forcing upgrades to newer versions precludes a "minimal intervention" install. Therefore, rolling-release systems are category 3.
- Things such as Snap, Flatpak, AppImage, Docker containers, Electron apps, and third-party binaries often contain embedded libraries whose patch status you have no easy visibility into. For instance, if there were a bug in libpng, would you know how many of your containers had a vulnerability? These systems are category 4 – you don't even know if you're vulnerable. It's for this reason that my Debian-based Docker containers apply security patches before starting processes, and also run unattended-upgrades and friends.
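On Debian, reaching category 1 is mostly a matter of installing a few packages (a minimal sketch; these are real Debian packages and commands, but check the documentation for your release):
# Install automatic patching, restart detection, and support-status warnings
apt-get install unattended-upgrades needrestart debian-security-support
# Enable the periodic unattended-upgrades job
dpkg-reconfigure -plow unattended-upgrades
# Scan the installed system for known CVEs
debsecan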
The pernicious library problem
As mentioned in my last category above, hidden vulnerabilities can be a big problem. I've been writing about this for years. Back in 2017, I wrote an article focused on Docker containers, though it applies to the other systems like Snap and so forth as well. I cited a study back then finding that "Over 80% of the :latest versions of official images contained at least one high severity vulnerability." The situation is no better now: in December 2023, it was reported that, two years after the critical Log4Shell vulnerability, 25% of apps were still vulnerable to it, and that only 21% of developers ever update third-party libraries after introducing them into their projects.
Clearly, you can't rely on these images with embedded libraries to be secure. And since they are black boxes, they are difficult to audit.
Debian's policy of always splitting libraries out into their own packages is hugely beneficial; it allows fine-grained analysis of not just vulnerabilities, but also the dependency graph. If there's a vulnerability in libpng, you have one place to patch it, and you also know exactly which components of your system use it.
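In fact, Debian can tell you exactly which installed packages depend on a given library (a sketch; libpng16-16 is the package name on bookworm and may differ on other releases):
# List installed packages depending on the libpng shared library
apt-cache rdepends --installed libpng16-16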
If you use snaps, or AppImages, you can’t know if they contain a deeply embedded vulnerability, nor could you patch it yourself if you even knew. You are at the mercy of upstream detecting and remedying the problem – a dicey situation at best.
Who makes the patches?
Fundamentally, humans produce security patches. Often, but not always, patches originate with the authors of a program and then are integrated into distribution packages. It should be noted that every security team has finite resources; there will always be some CVEs that aren't patched in a given system for various reasons: perhaps they are not exploitable, or are too low-impact, or have better mitigations than patches.
Debian has an excellent security team; they manage the process of integrating patches into Debian, produce Debian Security Advisories, maintain the Debian Security Tracker (which maintains cross-references with the CVE database), etc.
Some distributions don’t have this infrastructure. For instance, I was unable to find this kind of tracker for Devuan or Raspberry Pi OS. In contrast, Ubuntu and Arch Linux both seem to have active security teams with trackers and advisories.
Implications for Raspberry Pi OS and others
As I mentioned above, I'm transitioning my Pi devices off Raspberry Pi OS (Raspbian). Security is one reason. Although Raspbian is a fork of Debian, and you can install packages like unattended-upgrades on it, they don't work right, because they rely on the Debian security infrastructure and Raspbian hasn't modified them to use its own. For that matter, I don't see any Raspberry Pi OS security advisories, trackers, etc.; in short, Raspbian lacks the infrastructure to support those Debian tools anyhow.
Not only that, but Raspbian lags behind Debian in both new releases and new security patches, sometimes by days or weeks.
Live Migrating from Raspberry Pi OS bullseye to Debian bookworm contains instructions for migrating Raspberry Pis to Debian.
The Grumpy Cricket (And Other Enormous Creatures)
This Christmas, one of my gifts to my kids was a text adventure (interactive fiction) game for them. Now that they’ve enjoyed it, I’m releasing it under the GPL v3.
As interactive fiction, it’s like an e-book, but the reader is also the player, guiding the exploration of the world.
The Grumpy Cricket is designed to be friendly for a first-time player of interactive fiction. There is no way to lose the game or to die. There is an in-game hint system providing context-sensitive hints anytime the reader types HINT. There are splashes of humor throughout that got all three of my kids laughing.
I wrote it in 2023 for my kids, who range in age from 6 to 17. That's quite a wide range, but they all enjoyed it.
You can download it, get the source, or play it online in a web browser at https://www.complete.org/the-grumpy-cricket/
It’s More Important To Recognize What Direction People Are Moving Than Where They Are
I recently read a post on social media that went something like this (paraphrased):
“If you buy an EV, you’re part of the problem. You’re advancing car culture and are actively hurting the planet. The only ethical thing to do is ditch your cars and put all your effort into supporting transit. Anything else is worthless.”
There is some truth there; supporting transit in areas it makes sense is better than having more cars, even EVs. But of course the key here is in areas it makes sense.
My road isn't even paved. I live miles from the nearest town. And get into the remote regions of the western USA and you'll find people who live 40 miles from the nearest neighbor. There's no realistic way that mass transit will ever be a thing in these areas. And even if it were somehow usable, sending buses over miles where nobody lives just to reach the few who are there would be worse than private EVs. And because I can hear this argument coming a mile away: no, it doesn't make sense to tell these people to just not live in the country because "the planet won't support that anymore," since those people are literally the ones who feed the ones living in the cities.
The funny thing is: the person that wrote that shares my concerns and my goals. We both care deeply about climate change. We both want positive change. And I, ahem, recently bought an EV.
I have seen this play out in so many ways over the last few years. Drive a car? Get yelled at. Support the wrong politician? Get a shunning. Not speak up loudly enough about the right politician? That’s a yellin’ too.
The problem is, this doesn’t make friends. In fact, it hurts the cause. It doesn’t recognize this truth:
It is more important to recognize what direction people are moving than where they are.
I support trains and transit. I’ve donated money and written letters to politicians. But, realistically, there will never be transit here. People in my county are unable to move all the way to transit. But what can we do? Plenty. We bought an EV. I’ve been writing letters to the board of our local electrical co-op advocating for relaxation of rules around residential solar installations, and am planning one myself. It may well be that our solar-powered transportation winds up having a lower carbon footprint than the poster’s transit use.
Pick your favorite cause. Whatever it is, consider your strategy: What do you do with someone that is very far away from you, but has taken the first step to move an inch in your direction? Do you yell at them for not being there instantly? Or do you celebrate that they have changed and are moving?
How Gapped is Your Air?
Sometimes we want better-than-firewall security for things. For instance:
- An industrial control system for a municipal water-treatment plant should never have data come in or out
- Or, a variant of the industrial control system: it should only permit telemetry and monitoring data out, and nothing else in or out
- A system dedicated to keeping your GPG private keys secure should only have material to sign (or decrypt) come in, and signatures (or decrypted data) go out
- A system keeping your tax records should normally only have new records go in, but may on occasion have data go out (eg, to print a copy of an old record)
In this article, I’ll talk about the “high side” (the high-security or high-sensitivity systems) and the “low side” (the lower-sensitivity or general-purpose systems). For the sake of simplicity, I’ll assume the high side is a single machine, but it could as well be a whole network.
Let’s focus on examples 3 and 4 to make things simpler. Let’s consider the primary concern to be data exfiltration (someone stealing your data), with a secondary concern of data integrity (somebody modifying or destroying your data).
You might think the safest possible approach is airgapped – that is, literally no physical network connection to the machine at all. That helps! But then the problem becomes: how do we deal with the inevitable need to legitimately get things on or off of the system? As I wrote in Dead USB Drives Are Fine: Building a Reliable Sneakernet, by using tools such as NNCP, you can certainly create a "sneakernet", using USB drives as transport.
While this is a very secure setup, as with most things in security, it’s less than perfect. The Wikipedia airgap article discusses some ways airgapped machines can still be exploited. It mentions that security holes relating to removable media have been exploited in the past. There are also other ways to get data out; for instance, Debian ships with gensio and minimodem, both of which can transfer data acoustically.
But let’s back up and think about why we think of airgapped machines as so much more secure, and what the failure modes of other approaches might be.
What about firewalls?
You could very easily set up a high-side machine that is on a network, but is restricted to only one outbound TCP port. There could be a local firewall, and perhaps also a special port on an external firewall that implements the same restrictions. A variant on this approach would be two computers connected directly by a crossover cable, though this doesn't necessarily imply being more secure.
Of course, the concern about a local firewall is that it could potentially be compromised. An external firewall might too; for instance, if your credentials to it were on a machine that got compromised. This kind of dual compromise may be unlikely, but it is possible.
We can also think about the complexity of a network stack and firewall configuration: in a system of that complexity, there are various opportunities for things to be misconfigured or buggy. Another consideration is that data could be sent at any time, potentially making exfiltration harder to detect, though network monitoring tools are commonplace. On the other hand, this approach is convenient and cheap.
I use a system along those lines to do my backups. Data is sent, gpg-encrypted and then encrypted again at the NNCP layer, to the backup server. The NNCP process on the backup server runs as an untrusted user, and dumps the gpg-encrypted files to a secure location that is then processed by a cron job using Filespooler. The backup server is on a dedicated firewall port, with a dedicated subnet. The only ports allowed out are for NNCP and NTP, and offsite backups. There is no default gateway. Not even DNS is permitted out (the firewall does the appropriate redirection). There is one pinhole allowed out, where a subset of the backup data is sent offsite.
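To make that concrete, an outbound-only policy along those lines might look roughly like this in nftables. This is a hypothetical sketch, not my actual configuration; the NNCP port number and the other details are illustrative assumptions:
# Hypothetical nftables policy: nothing in, only NNCP and NTP out
table inet highside {
  chain output {
    type filter hook output priority 0; policy drop;
    oifname "lo" accept
    tcp dport 5400 accept comment "NNCP to the backup hub (example port)"
    udp dport 123 accept comment "NTP"
  }
  chain input {
    type filter hook input priority 0; policy drop;
    iifname "lo" accept
    ct state established,related accept
  }
}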
I initially used USB drives as transport, and the system had no network connection at all. But there were disadvantages to doing this for backups – particularly that I'd have no fresh backups for as long as I forgot to move the drives. The backup system also suffered from clock drift, and the offsite backup picture was more challenging. (The clock drift was a problem because I use 2FA on the system: a password, plus a TOTP generated by a Yubikey.)
This is “pretty good” security, I’d think.
What are the weak spots? Well, if there were somehow a bug in the NNCP client, and the remote NNCP were compromised, that could lead to a compromise of the NNCP account. But this by itself would accomplish little; some other vulnerability would have to be exploited on the backup server, because the NNCP account can't see plaintext data at all. I use borgbackup to send a subset of backup data offsite over ssh. borgbackup has to run as root to be able to access all the files, but the ssh it calls runs as a separate user. An ssh vulnerability is therefore unlikely to cause much damage. If, somehow, the remote offsite system were compromised and able to exploit a security issue in the local borgbackup, that would be a problem. But that sounds like a remote possibility.
borgbackup itself can't even be used over a sneakernet, since it is not asynchronous. A more secure solution would probably be using something like dar over NNCP. This would eliminate the ssh installation entirely, allow complete isolation between the data-access and communication stacks, and notably not require bidirectional communication. Logical separation matters too. My Roundup of Data Backup and Archiving Tools may be helpful here.
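The dar-over-NNCP idea could be as simple as this (a minimal sketch, assuming a remote NNCP node named offsite and glossing over dar's many options; check dar(1) and the nncp-file documentation):
# Stream a dar archive of /home straight into the NNCP queue for node "offsite"
dar -c - -R /home | nncp-file - offsite:home-backup.dar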
Other attack vectors could be a vulnerability in the kernel’s networking stack, local root exploits that could be combined with exploiting NNCP or borgbackup to gain root, or local misconfiguration that makes the sandboxes around NNCP and borgbackup less secure.
Because this system is in my basement in a utility closet with no chairs and no good place for a console, I normally manage it via a serial console. While it’s a dedicated line between the system and another machine, if the other machine is compromised or an adversary gets access to the physical line, credentials (and perhaps even data) could leak, albeit slowly.
But we can do much better with serial lines. Let’s take a look.
Serial lines
Some of us remember RS-232 serial lines and their once-ubiquitous DB-9 connectors. Traditionally, their speed maxed out at 115.2Kbps.
Serial lines have the benefit that they can be a direct application-to-application link. In my backup example above, a serial line could directly link the NNCP daemon on one system with the NNCP caller on another, with no firewall or anything else necessary. It is simply up to those programs to open the serial device appropriately.
This isn’t perfect, however. Unlike TCP over Ethernet, a serial line has no inherent error checking. Modern programs such as NNCP and ssh assume that a lower layer is making the link completely clean and error-free for them, and will interpret any corruption as an attempt to tamper and sever the connection. However, there is a solution to that: gensio. In my page Using gensio and ser2net, I discuss how to run NNCP and ssh over gensio. gensio is a generic framework that can add framing, error checking, and retransmit to an unreliable link such as a serial port. It can also add encryption and authentication using TLS, which could be particularly useful for applications that aren’t already doing that themselves.
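To give a feel for the shape of this, gensio stacks filters such as relpkt (reliable, retransmitting packets) and msgdelim (message framing) on top of a serial device. The invocation below is an untested assumption from memory, not a recipe; consult gensiot(1) and gensio(5) for the actual syntax on your version:
# Connect stdio to a framed, error-checked link over a serial port (hypothetical)
gensiot 'relpkt,msgdelim,serialdev,/dev/ttyUSB0,115200n81'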
More traditional solutions for serial communications have their own built-in error correction. For instance, UUCP and Kermit both were designed in an era of noisy serial lines and might be an excellent fit for some use cases. The ZModem protocol also might be, though it offers somewhat less flexibility and automation than Kermit.
I have found that certain USB-to-serial adapters by Gearmo will actually run at up to 2Mbps on a serial line! Look for the ones on their spec pages with an FTDI chipset rated at 920Kbps. It turns out they can successfully be driven faster, especially if gensio's relpkt is used. I've personally verified 2Mbps operation (Linux port speed 2000000) on Gearmo's USA-FTDI2X and USA-FTDI4X. (I haven't seen any single-port options from Gearmo with the 920Kbps chipset, but they may exist.)
Still, even at 2Mbps, speed may well be a limiting factor with some applications. If what you need is a console and some textual or batch data, it’s probably fine. If you are sending 500GB backup files, you might look for something else. In theory, this USB to RS-422 adapter should work at 10Mbps, but I haven’t tried it.
But if the speed works, running a dedicated application over a serial link could be a nice and fairly secure option.
One of the benefits of the airgapped approach is that data never leaves unless you are physically aware of transporting a USB stick. Of course, you may not be physically aware of what is ON that stick in the event of a compromise. This could easily be solved with a serial approach by, say, only plugging in the cable when you have data to transfer.
Data diodes
A traditional diode lets electrical current flow in only one direction. A data diode is the same concept, but for data: a hardware device that allows data to flow in only one direction.
This could be useful, for instance, in the tax records system that should only receive data, or the industrial system that should only send it.
Wikipedia claims that the simplest kind of data diode is a fiber link with transceivers connected in only one direction. I think you could go one simpler: a serial cable with only ground and TX connected at one end, wired to ground and RX at the other. (I haven’t tried this.)
This approach does have some challenges:
- Many existing protocols assume a bidirectional link and won't be usable
- There is a challenge of confirming data was successfully received. For a situation like telemetry, maybe it doesn't matter; another observation will come along in a minute. But for sending important documents, one wants to make sure they were properly received.
In some cases, the solution might be simple. For instance, with telemetry, just writing out data down the serial port in a simple format may be enough. For sending files, various mitigations, such as sending them multiple times, etc., might help. You might also look into FEC-supporting infrastructure such as blkar and flute, but these don’t provide an absolute guarantee. There is no perfect solution to knowing when a file has been successfully received if the data communication is entirely one-way.
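For the telemetry case, the one-way sender really can be that simple. A minimal sketch, assuming /dev/ttyUSB0 on both ends and the CPU temperature as the example reading:
# Sender (high side): raw serial, then stream timestamped readings one-way
stty -F /dev/ttyUSB0 raw 115200
while true; do
    echo "$(date -Is) $(cat /sys/class/thermal/thermal_zone0/temp)"
    sleep 60
done > /dev/ttyUSB0

# Receiver (low side): just log whatever arrives
stty -F /dev/ttyUSB0 raw 115200
cat /dev/ttyUSB0 >> telemetry.log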
Audio transport
I hinted above that minimodem and gensio both are software audio modems. That is, you could use speakers and microphones, or alternatively audio cables, as a means of getting data into or out of these systems. This is pretty limited; it is 1200bps, often half-duplex, and could literally be disrupted by barking dogs in some setups. But hey, it's an option.
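For example, with minimodem (these flags are real; the audio routing is up to your hardware):
# Sender: encode stdin as 1200 bps audio tones on the default sound device
echo "hello across the air gap" | minimodem --tx 1200
# Receiver, on the other machine:
minimodem --rx 1200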
Airgapped with USB transport
This is the scenario I began with, and named some of the possible pitfalls above as well. In addition to those, note also that USB drives aren’t necessarily known for their error-free longevity. Be prepared for failure.
Concluding thoughts
I wanted to lay out a few things in this post. First, that simply being airgapped is generally a step forward in security, but is not perfect. Secondly, that both physical and logical separation matter. And finally, that while tools like NNCP can make airgapped-with-USB-drive-transport a doable reality, there are also alternatives worth considering – especially serial ports, firewalled hard-wired Ethernet, data diodes, and so forth. I think serial links, in particular, have been largely forgotten these days.
Note: This article also appears on my website, where it may be periodically updated.
A Maze of Twisty Little Pixels, All Tiny
Two years ago, I wrote Managing an External Display on Linux Shouldn’t Be This Hard. Happily, since I wrote that post, most of those issues have been resolved.
But then you throw HiDPI into the mix and it all goes wonky.
If you're running X11, basically the story is that you can change the scale factor, but it only takes effect on newly-launched applications, which in practice means logging out and back in, because some applications can't really be re-launched. That is a problem if, like me, you sometimes connect an external display that is HiDPI and sometimes not, or your internal display is HiDPI but others aren't. Wayland is far better, supporting on-the-fly resizes quite nicely.
I’ve had two devices with HiDPI displays: a Surface Go 2, and a work-issued Thinkpad. The Surface Go 2 is my ultraportable Linux tablet. I use it sparingly at home, and rarely with an external display. I just put Gnome on it, in part because Gnome had better on-screen keyboard support at the time, and left it at that.
On the work-issued Thinkpad, I really wanted to run KDE thanks to its tiling support (I wound up using bismuth with it). KDE was buggy with Wayland at the time, so I just stuck with X11 and ran my HiDPI displays at lower resolutions and lived with the fuzziness.
But now that I have a Framework laptop with a HiDPI screen, I wanted to get this right.
I tried both Gnome and KDE. Here are my observations with both:
Gnome
I used PaperWM with Gnome. PaperWM is a tiling manager with a unique horizontal ribbon approach. It grew on me; I think I would be equally at home with it as with my usual xmonad-style approach, and might even prefer it. Editing the active window border color required editing ~/.local/share/gnome-shell/extensions/paperwm@hedning:matrix.org/stylesheet.css and inserting background-color and border-color items in the paperwm-selection section.
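The edit was along these lines (assuming the relevant block is selected by .paperwm-selection; the color values here are arbitrary placeholders, not what I actually chose):
.paperwm-selection {
    /* hypothetical values; pick your own colors */
    background-color: rgba(30, 144, 255, 0.3);
    border-color: rgb(30, 144, 255);
}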
Gnome continues to have an absolutely terrible configuration story. It has no less than four places to make changes (Settings, Tweaks, Extensions, and dconf-editor). In many cases, configuration for a given thing is split between Settings and Tweaks, and sometimes even Extensions, and then there are options that are only visible in dconf. That is, where the Gnome people have even allowed something to be configurable.
Gnome installs a power manager by default. It offers three options: performance, balanced, and saver. There is no explanation of the difference between them. None. What is it setting when I change the pref? A maximum frequency? A scaling governor? A balance between performance and efficiency cores? Not only that, but there’s no way to tell it to just use performance when plugged in and balanced or saver when on battery. In an issue about adding that, a Gnome dev wrote “We’re not going to add a preference just because you want one”. KDE, on the other hand, aside from not mucking with your system’s power settings in this way, has a nice panel with “on AC” and “on battery” and you can very easily tweak various settings accordingly. The hostile attitude from the Gnome developers in that thread was a real turnoff.
While Gnome has excellent support for Wayland, it doesn't (directly) support fractional scaling. That is, you can set it to 100%, 200%, and so forth, but not 150%. Well, unless you manage to discover that you can first run gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']". (Oh wait, does that make a FIFTH settings tool? Why yes it does.) Despite its name, that setting allows you to select fractional scaling under Wayland. X11 apps, however, will be blurry, a problem that is optional under KDE (more on that below).
Gnome won't show the battery time remaining on the task bar. Yikes. An extension might work in some cases. Not only that, but the Gnome battery icon frequently failed to indicate AC charging when AC was connected, a problem that didn't exist on KDE.
Both Gnome and KDE support “night light” (warmer color temperatures at night), but Gnome’s often didn’t change when it should have, or changed on one display but not the other.
The appindicator extension is pretty much required, as otherwise a number of applications (eg, Nextcloud) don’t have their icon display anywhere. It does, however, generate a significant amount of log spam. There may be a fix for this.
Unlike KDE, which has a nice unobtrusive popup asking what to do, Gnome silently automounts USB sticks when inserted. This is often wrong; for instance, if I'm about to dd a Debian installer to one, I definitely don't want it mounted. I learned this the hard way. It is particularly annoying because, in a GUI, there is no reason to mount a drive before the user tries to access it anyhow. It looks like there is a dconf setting, but then to actually mount a drive you have to open up Files (because OF COURSE Gnome doesn't have a nice removable-drives icon like KDE does), and it's a bunch of annoying clicks; I didn't want to use the GUI file manager anyway. Same for unmounting: two clicks in KDE thanks to the task bar icon, but in Gnome you have to open up the file manager, unmount the drive, close the file manager again, etc.
The ssh agent on Gnome doesn’t start up for a Wayland session, though this is easily enough worked around.
The reason I completely soured on Gnome is that after using it for a while, I noticed my laptop fans spinning up. One core was constantly busy with a kworker events task, something to do with sound events. Logging out would resolve it. I believe it to be a Gnome shell issue. I could find no resolution to this, and am unwilling to tolerate the decreased battery life it implies.
The Gnome summary: it looks nice out of the box, but you quickly realize that this is something of a paper-thin illusion when you try to actually use it regularly.
KDE
The KDE experience on Wayland was somewhat the opposite of Gnome's. While with Gnome things start out looking great but you discover serious issues (especially the battery-eating), with KDE things start out looking a tad rough but you can trivially fix them and wind up with a very solid system.
Compared to Gnome, KDE never had a battery-draining problem. It will show me estimated battery time remaining if I want it to. It will do whatever I want it to when I insert a USB drive. It doesn’t muck with my CPU power settings, and lets me easily define “on AC” vs “on battery” settings for things like suspend when idle.
KDE supports fractional scaling, to any arbitrary setting (even with the gsettings trick above, Gnome still only supports it in 25% increments). Then the question is what to do with X11-only applications. KDE offers two choices. The first is "Scaled by the system", which is also the only option for Gnome. With that setting, the X11 apps effectively run natively at 100% and then are scaled up within Wayland, giving them a blurry appearance on HiDPI displays. The advantage is that the scaling happens within Wayland, so the size of the app will always be correct even when the Wayland scaling factor changes. The other option is "Apply scaling themselves", which uses native X11 scaling. This lets most X11 apps display crisp and sharp, but then if the system scaling changes, due to limitations of X11, you'll have to restart the X apps to get them to the correct size. I appreciate the choice, and use "Apply scaling themselves" because only a few of my apps aren't Wayland-aware.
I did encounter a few bugs in KDE under Wayland:
sddm, the display manager, would be slow to stop, causing a long delay on shutdown or reboot. This seems to be a known issue with sddm and Wayland, and is easily worked around by adding a systemd TimeoutStopSec (see the sketch after this list).
Konsole, the KDE terminal emulator, has weird display artifacts when using fractional scaling under Wayland. I applied some patches and rebuilt Konsole and then all was fine.
The Bismuth tiling extension has some pretty weird behavior under Wayland, but a 1-character patch fixes it.
On Debian, KDE mysteriously installed Pulseaudio instead of Debian’s new default Pipewire, but that was easily fixed as well (and Pulseaudio also works fine).
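The sddm workaround amounts to a systemd drop-in, something like this (a sketch; the 10-second value is my assumption, so tune to taste):
# Create an override that caps how long systemd waits for sddm to stop
mkdir -p /etc/systemd/system/sddm.service.d
printf '[Service]\nTimeoutStopSec=10\n' > /etc/systemd/system/sddm.service.d/override.conf
systemctl daemon-reload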
Conclusions
I'm sticking with KDE. Given that I couldn't figure out how to stop Gnome from eating enough battery to make my fan come on, the decision wasn't hard. But even without that, I'd have gone with KDE. Once a couple of things were patched, the experience was solid, fast, and flawless. Emacs (my main X11-only application) looks great with the self-scaling in KDE. Gimp, which I use occasionally, was terrible with the blurry scaling in Gnome.
Update: Corrected the gsettings command