Category Archives: Debian

Tips for Upgrading to, And Securing, Debian Buster

Wow.  Once again, a Debian release impresses me — a guy that’s been using Debian for more than 20 years.  For the first time I can ever recall, buster not only supported suspend-to-disk out of the box on my laptop, but it did so on an encrypted volume atop LVM.  Very impressive!

For those upgrading from previous releases, I have a few tips to enhance the experience with buster.

AppArmor

AppArmor is a new line of defense against malicious software.  The release notes indicate it’s now enabled by default in buster.  For desktops, I recommend installing apparmor-profiles-extra apparmor-notify.  The latter will provide an immediate GUI indication when something is blocked by AppArmor, so you can diagnose strange behavior.  You may also need to add yourself to the adm group with adduser username adm.
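
For reference, a minimal sketch of that setup, run as root (username is a placeholder for your own login):

apt-get install apparmor-profiles-extra apparmor-notify
adduser username adm     # lets aa-notify read the AppArmor log entries
aa-status                # confirm AppArmor is enabled and see which profiles are loaded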

Security

I recommend installing these packages and taking note of these items, some of which are different in buster:

  • unattended-upgrades will automatically install security updates for you.  New in buster, the default config file will also apply stable updates in addition to security updates.
  • needrestart will detect what processes need a restart after a library update and, optionally, restart them. Beginning in buster, it will not automatically restart them when in noninteractive (unattended-upgrades) mode. This can be changed by editing /etc/needrestart/needrestart.conf (or, better, putting a .conf file in /etc/needrestart/conf.d) and setting $nrconf{restart} = 'a' (a minimal sketch follows this list). Edit: If you have an Intel CPU, installing iucode-tool intel-microcode will let needrestart also check on your CPU microcode.
  • debian-security-support will warn you of gaps in security support for packages you are installing or running.
  • package-update-indicator is useful for desktops that won’t be running unattended-upgrades. I believe Gnome 3 has this built in, but for other desktops, this adds an icon when updates are available.
  • You can harden apt with seccomp.
  • You can enable UEFI secure boot.
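
As noted in the needrestart item above, here is a minimal sketch of the automatic-restart tweak together with the package installs (the filename autorestart.conf is just an example):

apt-get install unattended-upgrades needrestart debian-security-support
cat > /etc/needrestart/conf.d/autorestart.conf <<'EOF'
# restart affected services automatically, even in noninteractive mode
$nrconf{restart} = 'a';
EOF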

Tuning

If you hadn’t noticed, many of these items are links into the buster release notes. It’s a good document to read over, even for a new buster install.

Goodbye to a 15-year-old Debian server

It was October of 2003 that the server I’ve called “glockenspiel” was born. It was the early days of Linux-based VM hosting, using a VPS provider called memset, running under, of all things, User Mode Linux. Over the years, it has been migrated around, sometimes running on the metal and sometimes in a VM. The operating system has been upgraded in-place using standard Debian upgrades over the years, and is now happily current on stretch (albeit with a 32-bit userland). But it has never been reinstalled. When I’d migrate hosting providers, I’d use tar or rsync to stream glockenspiel across the Internet to its new home.

A lot of people reinstall an OS when a new version comes out. I’ve been doing in-place Debian upgrades with apt for ages instead, and this server is a case in point. It lingers.

Root’s .profile was last modified in November 2004, and its .bashrc was last modified in December 2004. My own home directory still has a .pinerc, .gopherrc, and .arch-params file. I last edited my .vimrc in 2003 and my .emacs dates back to 2002 (having been copied over from a pre-glockenspiel FreeBSD server).

drwxr-xr-x  3 jgoerzen jgoerzen      4096 Dec  3  2003 irclogs
-rw-r--r--  1 jgoerzen jgoerzen       373 Dec  3  2003 .vimrc
-rw-r--r--  1 jgoerzen jgoerzen       651 Nov 27  2003 .reportbugrc
drwx------  3 jgoerzen jgoerzen      4096 Sep  2  2003 .arch-params
-rw-r--r--  1 jgoerzen jgoerzen      1115 Aug 23  2003 .gopherrc
drwxr-xr-x  3 jgoerzen jgoerzen      4096 Jul 18  2003 .subversion
-rw-r--r--  1 jgoerzen jgoerzen     15317 Jun 21  2003 .pinerc

Poking around /etc on glockenspiel is like a trip back in time. Various apache sites still have configuration files around, but have long since been disabled. Over the years, glockenspiel has hosted source code repositories using Subversion, arch, tla, darcs, mercurial and git. It’s hosted websites using Drupal, WordPress, Serendipity, and so forth. It’s hosted gopher sites, websites or mailing lists for various Free Software projects (such as Freeciv), and any number of local charitable organizations. Remnants of an FTP configuration still exist from the days when people used web design software on their PCs to build websites for those organizations and then upload them to glockenspiel.

-rw-r--r--   1 root  root                      268 Dec 25  2005 libnet.cfg
-rw-r-----   1 root  root                     1305 Nov 11  2004 mrtg.cfg
-rw-r--r--   1 root  root                      552 Jul 31  2004 pam.conf

All this has been replaced by a set of Docker containers running my docker-debian-base software. They’re all in git, I can rebuild one of the containers in a few seconds or a few minutes by typing “make”, and there is no cruft from 2002. There are a lot of benefits to this.

And yet, there is a part of me that feels it’s all so… cold. Servers having “personalities” was always a distinctly dubious thing, but these days as we work through more and more layers of virtualization and indirection and become more distant from the hardware, we lose an appreciation for what we have and the many shoulders of giants upon which we stand.

And so, with that, the final farewell to this server that’s been running since 2003:

glockenspiel:/etc# shutdown -P now
Shared connection to glockenspiel.complete.org closed.

A (Partial) Defense of Debian

I was sad to read on his blog that Michael Stapelberg is winding down his Debian involvement. In his post, he outlined some critiques of Debian. In this post, I want to acknowledge that he is on point with some of them, but also push back on others. Some of this is also a response to some of the comments on Hacker News.

I’d first like to discuss some of the assumptions I believe his post rests on: namely that “we’ve always done it this way” isn’t a good reason to keep doing something. I completely agree. However, I would also say that “this thing is newer, so it’s better and we should use it” is also poor reasoning. Newer is not always better. Sometimes it is, sometimes it’s not, but deeper thought is generally required.

Also, when thinking about why things are a certain way or why people prefer certain approaches, we must often ask “why does that make sense to them?” So let’s dive in.

Debian’s Perspective: Stability

Stability, of course, can mean software that tends not to crash. That’s important, but there’s another aspect of it that is also important: software continuing to act the same over time. For instance, if you wrote a C program in 1985, will that program still compile and run today? Granted, that’s a bit of an extreme example, but the point is: to what extent can you count on software you need continuing to operate without forced change?

People that have been sysadmins for a long period of time will instantly recognize the value of this kind of stability. Change is expensive and difficult, and often causes outages and incidents as bugs are discovered when software is adapted to a new environment. Being able to keep up-to-date with security patches while also expecting little or no breaking changes is a huge win. Maintaining backwards compatibility for old software is also important.

Even from a developer’s perspective, lack of this kind of stability is why I have handed over maintainership of most of my Haskell software to others. Some of my Haskell projects were basically “done”, and every so often I’d get bug reports that they no longer compiled due to some change in the base library. Occasionally those bug reports came with patches, but the patches would inevitably break compatibility with older versions (even though the language has perfectly good support for something akin to a better version of #ifdefs to easily deal with this). The culture of stability was not there.

This is not to say that this kind of stability is always good or always bad. In the Haskell case, there is value to be had in fixing designs that are realized to be poor and removing cruft. Some would say that strcpy() should be removed from libc for this reason. People that want the latest versions of gimp or whatever are probably not going to be running Debian stable. People that want to install a machine and not really be burdened by it for a couple of years are.

Debian has, for pretty much its entire life, had a large proportion of veteran sysadmins and programmers as part of the organization. Many of us have learned the value of this kind of stability from the school of hard knocks – over and over again. We recognize the value of something that just works, that is so stable that things like unattended-upgrades are safe and reliable. With many other distros, something like this isn’t even possible; when your answer to a security bug is to “just upgrade to the latest version”, just trusting a cron job to do it isn’t going to work because of the higher risk.

Recognizing Personal Preference

Writing about Debian’s bug-tracking tool, Michael says “It is great to have a paper-trail and artifacts of the process in the form of a bug report, but the primary interface should be more convenient (e.g. a web form).” This is representative of a personal preference. A web form might be more convenient for Michael — I have no reason to doubt this — but is it more convenient for everyone? I’d say no.

In his linked post, Michael also writes: “Recently, I was wondering why I was pushing off accepting contributions in Debian for longer than in other projects. It occurred to me that the effort to accept a contribution in Debian is way higher than in other FOSS projects. My remaining FOSS projects are on GitHub, where I can just click the “Merge” button after deciding a contribution looks good. In Debian, merging is actually a lot of work: I need to clone the repository, configure it, merge the patch, update the changelog, build and upload. “

I think that’s fair for someone that wants a web-based workflow. Allow me to present the opposite: for me, I tend to push off contributions that only come through Github, and the reason is that, for me, they’re less convenient. It’s also harder for me to contribute to Github projects than Debian ones. Let’s look at this – say I want to send in a small patch for something. If it’s Github, it’s going to look like this:

  1. Go to the website for the thing, click fork
  2. Now clone that fork or add it to my .git/config, hack, and commit
  3. Push the commit, go back to the website, and submit a PR
  4. Github’s email integration is so poor that I basically have to go back to the website for most parts of the conversation. I can do little from the comfort of mu4e.
  5. Remember to clean up my fork after the patch is accepted or rejected.

Compare that to how I’d contribute with Debian:

  1. Hack (and commit if I feel like it)
  2. Type “reportbug foo”, attach my patch
  3. Followup conversation happens directly in email where it’s convenient to reply

How about as the developer? Github constantly forces me to their website. I can’t very well work on bug reports, etc. without a strong Internet connection. And it’s designed to push people into using their tools and their interface, which is inferior in a lot of ways to a local interface – but then the process to pull down someone else’s set of patches involves a lot of typing and clicking, much more than would be involved with a simple git format-patch. In short, I don’t have my shortcut keys, my environment, etc. for reviewing things – the roadblocks are there to make me use theirs.

If I get a contribution from someone in debbugs, it’s so much easier. It’s usually just git apply or patch -p1 and boom, I can see exactly what’s changed and review it. A review comment is just a reply to an email. I don’t have to ever fire up a web browser. So much more convenient.
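
To make that concrete, here is roughly what the email-driven flow looks like on both ends (the patch filename is illustrative):

# contributor: generate a patch and attach it to a bug report
git format-patch -1                       # writes e.g. 0001-fix-the-bug.patch
reportbug foo                             # attach the patch when prompted

# maintainer: review and apply straight from the mailbox
git apply --stat 0001-fix-the-bug.patch   # quick summary of what changed
git am 0001-fix-the-bug.patch             # or: patch -p1 < 0001-fix-the-bug.patch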

I don’t write this to say Michael is wrong about what’s more convenient for him. I write it to say he’s wrong about what’s more convenient for me (or others). It may well be the case that debbugs is so inconvenient that it pushes him to leave while github is so inconvenient for others that it pushes them to avoid it.

I will note before leaving this conversation that there are some command-line tools available for Github and a web interface to debbugs, but it is still clear that debbugs is a lot easier to work with from within my own mail reader and tooling, and Github is a lot easier to work with from within a web browser.

The case for reportbug

I remember the days before we had reportbug. Over and over and over again, I would get bug reports from users that wouldn’t have the basic information needed to investigate. reportbug gathers information from the system: package versions, configurations, versions of dependencies, etc. A simple web form can’t do this because it doesn’t have a local agent. From a developer’s perspective, trying to educate users on how to do this over and over is an unending, frustrating, and counter-productive task. Even if it’s clearly documented, the battle will be fought over and over. From a user’s perspective, having your bug report ignored or being told you’re doing it wrong is frustrating too.

So I think reportbug is much nicer than having some github-esque web-based submission form. Could it be better? Sure. I think a mode to submit the reportbug report via HTTPS instead of email would make sense, since a lot of machines no longer have local email configured.

Where Debian Should Improve

I agree that there are areas where Debian should improve.

Michael rightly identifies the “strong maintainer” concept as a source of trouble. I agree. Though we’ve been making slow progress over time with things like low-threshold NMU and maintainer teams, the core assumption that a maintainer has a lot of power over particular packages is one that needs to be thrown out.

Michael, and commentators on HN, both identify things that boil down to documentation problems. I have heard so many times that it’s terribly hard to package something up for Debian. That’s really not the case for most things; dh_make and similar tools will do the right thing for many packages, and all you have to do is add some package descriptions and such. I wrote a “concise guide” to packaging for my workplace that ran to only about 2 pages. But it is true that the documentation on debian.org doesn’t clearly offer this simple path, so people are put off and not aware of it. Then there were the comments about how hard it is to become a Debian developer, and how easy it is to submit a patch to NixOS or some such. The fact is, these are different things; one does not need to be a Debian Developer to contribute to Debian. A DD is effectively the same as a patch approver elsewhere; these are the people that can ultimately approve software for insertion into the OS, and you DO want an element of trust there. Debian could do more to offer concise guides for drive-by contributions and the building of packages that follow standard language community patterns, both of which can be done without much knowledge of packaging tools and inner workings of the project.

Finally, I have distanced myself from conversations in Debian for some time, due to lack of time to participate in what I would call excessive bikeshedding. This is hardly unique to Debian, but I am glad to see the project putting more effort into expecting good behavior from conversations of late.


How are you handling building local Debian/Ubuntu packages?

I’m in the middle of some conversations about Debian/Ubuntu repositories, and I’m curious how others are handling this.

How are people maintaining repos for an organization? Are you integrating them with a git/CI (github/gitlab, jenkins/travis, etc) workflow? How do packages propagate into repos? How do you separate prod from testing? Is anyone running buildd locally, or integrating with more common CI tools?

I’m also interested in how people handle local modifications of packages — anything from newer versions of C libraries to newer interpreters. Do you just use the regular Debian toolchain, packaging them up for (potentially) the multiple distros/versions that you have in production? Pin them in apt?

Just curious what’s out there.

Some Googling has so far turned up just one relevant hit: Michael Prokop’s DebConf15 slides, “Continuous Delivery of Debian packages”. Looks very interesting, and discusses jenkins-debian-glue.

Some tangentially-related but interesting items:

Edit 2018-02-02: I should have also mentioned BuildStream

Switching to xmonad + Gnome – and ditching a Mac

I have been using XFCE with xmonad for years now. I’m not sure exactly how many, but at least 6 years, if not closer to 10. Today I threw in the towel and switched to Gnome.

More recently, at a new job, I was given a Macbook Pro. I wasn’t entirely sure what to think of this, but I thought I’d give it a try. I found MacOS to be extremely frustrating and confining. It had no real support for a tiling window manager, and although projects like amethyst tried to approximate what xmonad can do on Linux, they were just too limited by the platform and were clunky. Moreover, the entire UI was surprisingly sluggish; maybe that was an induced effect from animations, but I don’t think that explains it. A Debian stretch install, even on inferior hardware, was snappy in a way that MacOS never was. So I have requested to swap for a laptop that will run Debian. The strange use of Command instead of Control for things, combined with the overall lack of configurability of keybindings, meant that I was always going to be fighting muscle memory moving from one platform to another. Not only that, but being back in the world of a Free Software OS means a lot.

Now then, back to the xmonad and XFCE situation. XFCE once worked very well with xmonad. Over the years, this got more challenging. Around the time of jessie (XFCE 4.10), I had to be very careful about when I would let it save my session, because it would easily break. With stretch, I had to write custom scripts because the panel wouldn’t show up properly, and even some application icons would be invisible, if things were started in a certain order. This took much trial and error and was still cumbersome.

Gnome 3, with its tightly-coupled Gnome Shell, has never been compatible with other window managers — at least not directly. A person could have always used MATE with xmonad — but a lot of people that run XFCE tend to have some Gnome 3 apps (for instance, evince) anyhow. Cinnamon also wouldn’t work with xmonad, because it is simply another tightly-coupled shell instead of Gnome Shell. And then today I discovered gnome-flashback. gnome-flashback is a Gnome 3 environment that uses the traditional X approach with a separate window manager (metacity of yore by default). Sweet.

It turns out that Debian’s xmonad has built-in support for it. If you know the secret: apt-get install gnome-session-flashback (OK, it’s not so secret; it’s even in xmonad’s README.Debian these days). Install that, plus gnome and gdm3, and things are nice. Configure xmonad with GNOME support and poof – goodness right out of the box, selectable from the gdm sessions list.
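
In other words, something like this (the exact session name in the gdm list varies a bit between versions):

apt-get install gnome-session-flashback gnome gdm3 xmonad
# rebuild/restart xmonad with its GNOME support enabled (see xmonad's README.Debian),
# then pick the xmonad/GNOME Flashback entry from the gdm3 session list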

I still have some gripes about Gnome’s configurability (or lack thereof). But I’ve got to say: This environment is the first one I’ve ever used that got external display switching very nearly right without any configuration, and I include MacOS in that. Plug in an external display, and poof – it’s configured and set up. You can hit a toggle key (Windows+P by default) to change the configurations, or use the Display section in gnome-control-center. Unplug it, and it instantly reconfigures itself to put everything back on the laptop screen. Yessss! I used to have scripts to do this in the wheezy/jessie days. XFCE in stretch had numerous annoying failures in this area which rendered the internal display completely dark until the next reboot – very frustrating. With Gnome, it just works. And, even if you have “suspend on lid closed” turned on, if the system is powered up and hooked up to an external display, it will keep running even if the lid is closed, figuring you must be using it on the external screen. Another thing the Mac wouldn’t do well.

All in all, some pretty good stuff here. I continue to be impressed by stretch. It is darn impressive to put this OS on generic hardware and have it outshine the closed-ecosystem Mac!

Silent Data Corruption Is Real

Here’s something you never want to see:

ZFS has detected a checksum error:

   eid: 138
 class: checksum
  host: alexandria
  time: 2017-01-29 18:08:10-0600
 vtype: disk

This means there was a data error on the drive. But it’s worse than a typical data error — this is an error that was not detected by the hardware. Unlike most filesystems, ZFS and btrfs write a checksum with every block of data (both data and metadata) written to the drive, and the checksum is verified at read time. Most filesystems don’t do this, because theoretically the hardware should detect all errors. But in practice, it doesn’t always, which can lead to silent data corruption. That’s why I use ZFS wherever I possibly can.

As I looked into this issue, I saw that ZFS repaired about 400KB of data. I thought, “well, that was unlucky” and just ignored it.

Then a week later, it happened again. Pretty soon, I noticed it happened every Sunday, and always to the same drive in my pool. It so happens that the highest I/O load on the machine happens on Sundays, because I have a cron job that runs zpool scrub on Sundays. This operation forces ZFS to read and verify the checksums on every block of data on the drive, and is a nice way to guard against unreadable sectors in rarely-used data.
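
For reference, that weekly scrub amounts to a one-line cron entry, and zpool status shows both the per-device checksum (CKSUM) counters and how much data the last scrub repaired. The pool name tank and the schedule here are placeholders:

# /etc/cron.d/zfs-scrub  (illustrative)
0 3 * * Sun  root  /sbin/zpool scrub tank

# afterward, inspect the results
zpool status -v tank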

I finally swapped out the drive, but to my frustration, the new drive now exhibited the same issue. The SATA protocol does include a CRC32 checksum, so it seemed (to me, at least) that the problem was unlikely to be a cable or chassis issue. I suspected motherboard.

It so happened I had a 9211-8i SAS card. I had purchased it off eBay awhile back when I built the server, but could never get it to see the drives. I wound up not filling it up with as many drives as planned, so the on-board SATA did the trick. Until now.

As I poked at the 9211-8i, noticing that even its configuration utility didn’t see any devices, I finally started wondering if the SAS/SATA breakout cables were a problem. And sure enough – I realized I had a “reverse” cable and needed a “forward” one. $14 later, I had the correct cable and things are working properly now.

One other note: RAM errors can sometimes cause issues like this, but this system uses ECC DRAM and the errors would be unlikely to always manifest themselves on a particular drive.

So over the course of this, had I not been using ZFS, I would have had several megabytes of reads with undetected errors. Thanks to using ZFS, I know my data integrity is still good.

I’m switching from git-annex to Syncthing

I wrote recently about using git-annex for encrypted sync, but due to a number of issues with it, I’ve opted to switch to Syncthing.

I’d been using git-annex with real but noncritical data. Among the first issues I noticed was occasional but persistent high CPU usage spikes, which once started, would persist apparently forever. I had an issue where git-annex tried to replace files I’d removed from its repo with broken symlinks, but the real final straw was a number of issues with the gcrypt remote repos. git-remote-gcrypt appears to have a number of issues with possible race conditions on the remote, and at least one of them somehow caused encrypted data to appear in a packfile on a remote repo. Why there was data in a packfile there, I don’t know, since git-annex is supposed to keep the data out of packfiles.

Anyhow, git-annex is still an awesome tool with a lot of use cases, but I’m concluding that live sync to an encrypted git remote isn’t quite mature enough for me yet.

So I looked for alternatives. My main criteria were supporting live sync (via inotify or similar) and not requiring the files to be stored unencrypted on a remote system (my local systems all use LUKS). I found Syncthing met these requirements.

Syncthing is pretty interesting in that, like git-annex, it doesn’t require a centralized server at all. Rather, it forms basically a mesh between your devices. Its concept is somewhat similar to the proprietary Bittorrent Sync — basically, all the nodes communicate about what files and chunks of files they have, and the changes that are made, and immediately propagate as much as possible. Unlike, say, Dropbox or Owncloud, Syncthing can actually support simultaneous downloads from multiple remotes for optimum performance when there are many changes.

Combined with syncthing-inotify or syncthing-gtk, it has immediate detection of changes and therefore very quick propagation of them.

Syncthing is particularly adept at figuring out ways for the nodes to communicate with each other. It begins by broadcasting on the local network, so known nearby nodes can be found directly. The Syncthing folks also run a discovery server (though you can use your own if you prefer) that lets nodes find each other on the Internet. Syncthing will attempt to use UPnP to configure firewalls to let it out, but if that fails, the last resort is a traffic relay server — again, a number of volunteers host these online, but you can run your own if you prefer.

Each node in Syncthing has an RSA keypair, and what amounts to part of the public key is used as a globally unique node ID. The initial link between nodes is accomplished by pasting the globally unique ID from one node into the “add node” screen on the other; the user of the first node then must accept the request, and from that point on, syncing can proceed. The data is all transmitted encrypted, of course, so interception will not cause data to be revealed.

Really my only complaint about Syncthing so far is that, although it binds to localhost, the web GUI does not require authentication by default.

There is an ITP open for Syncthing in Debian, but until then, their apt repo works fine. For syncthing-gtk, the trusty version of the webupd8 PPA works in Jessie (though be sure to pin it to a low priority if you don’t want it replacing some unrelated Debian packages).

Count me as a systemd convert

Back in 2014, I wrote about some negative first impressions of systemd. I also had a plea to debian-project to end all the flaming, pointing out that “jessie will still boot”, noting that my preference was for sysvinit but things are what they are and it wasn’t that big of a deal.

Although I still have serious misgivings about the systemd upstream’s attitude, I’ve got to say I find the system rather refreshing and useful in practice.

Here’s an example. I was debugging the boot on a server recently. It mounts a bunch of NFS filesystems and runs a third-party daemon that is started from an old-style /etc/init.d script.

We had a situation where the NFS filesystems the daemon required didn’t mount on boot. The daemon then started anyway, and unfortunately it basically does a mkdir -p on startup. So it created empty directories where the NFS mounts should have been and started processing requests against them, with bad results.

So there were two questions: why did the NFS filesystems fail to start, and how could we make sure the daemon wouldn’t start without them mounted? For the first, journalctl -xb was immensely helpful. It logged the status of each individual mount, and it turned out that it looked like a modprobe or kernel race condition when a bunch of NFS mounts were kicked off in parallel and all tried to load the nfsv4 module at the same time. That was easy enough to work around by adding nfsv4 to /etc/modules. Now for the other question: refusing to start the daemon if the filesystems weren’t there.

With systemd, this was actually trivial. I created /etc/systemd/system/mydaemon.service.requires (I’ll call the service “mydaemon” here), and in it I created a symlink to /lib/systemd/system/remote-fs.target. Then systemctl daemon-reload, and boom, done. systemctl list-dependencies mydaemon will even show the dependency tree, color-coded status of each item on it, and will actually show every single filesystem that remote-fs requires and the status of it in one command. Super handy.
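
Spelled out, the entire fix (with mydaemon as the stand-in name used above) was:

mkdir -p /etc/systemd/system/mydaemon.service.requires
ln -s /lib/systemd/system/remote-fs.target /etc/systemd/system/mydaemon.service.requires/
systemctl daemon-reload
systemctl list-dependencies mydaemon   # shows remote-fs.target and every mount it pulls in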

In a non-systemd environment, I’d probably be modifying the init script and doing a bunch of manual scripting to check the filesystems. Here, one symlink and one command did it, and I get tools to inspect the status of the mydaemon prerequisites for free.

I’ve got to say, as someone that has occasionally had to troubleshoot boot ordering and update-rc.d symlink hell, troubleshooting this stuff in systemd is considerably easier and the toolset is more powerful. Yes, it has its set of poorly-documented complexity, but then so did sysvinit.

I never thought the “world is falling” folks were right, but by now I can be counted among those that feel systemd has matured to the point where it truly is superior to sysvinit. Yes, in 2014 it had some bugs, but by here in 2016 it looks pretty darn good and I feel like Debian’s decision has been validated through my actual experience with it.

Hiking a mountain with Ian Murdock

“Would you like to hike a mountain?” That question caught me by surprise. It was early in 2000, and I had flown to Tucson for a job interview. Ian Murdock was starting a new company, Progeny, and I was being interviewed for their first hire.

“Well,” I thought, “hiking will be fun.” So we rode a bus or something to the top of the mountain and then hiked down. Our hike was full of — well, everything. Ian talked about Tucson and the mountains, about his time as the Debian project leader, about his college days. I asked about the plants and such we were walking past. We talked about the plans for Progeny, my background, how I might fit in. It was part interview, part hike, part two geeks chatting. Ian had no HR telling him “you can’t go hiking down a mountain with a job candidate,” as I’m sure HR would have. And I am glad of it, because even 16 years later, that is still by far the best time I ever had at a job interview, despite the fact that it ruined the only pair of shoes I had brought along — I had foolishly brought dress shoes for a, well, job interview.

I guess it worked, too, because I was hired. Ian wanted to start up the company in Indianapolis, so over the next little while there was the busy work of moving myself and setting up an office. I remember those early days – Ian and I went computer shopping at a local shop more than once to get the first workstations and servers for the company. Somehow he had found a deal on some office space in a high-rent office building. I still remember the puzzlement on the faces of accountants and lawyers in suits riding the elevators next to us in our shorts and sandals, or tie-dye.

Progeny’s story was to be a complicated one. We set out to rock the world. We didn’t. We didn’t set out to make lasting friendships, but we often did. We set out to accomplish great things, and we did some of that, too.

We experienced a full range of emotions there — elation when we got hardware auto-detection working well or when our downloads looked very popular, despair when our funding didn’t come through as we had hoped, being lost when our strategy had to change multiple times. And, as is the case everywhere, none of us were perfect.

I still remember the excitement after we published our first release on the Internet. Our little server that could got pegged at 100Mbps of outbound bandwidth (that was something for a small company in those days.) The moment must have meant something, because I still have the mrtg chart from that day on my computer, 15 years later.

Progeny's Bandwidth Chart

We made a good Linux distribution, an excellent Debian derivative, but commercial success did not flow from it. In the succeeding months, Ian and the company tried hard to find a strategy that would stick and make our big break. But that never happened. We had several rounds of layoffs when hoped-for funding never materialized. Ian eventually lost control of the company, and despite a few years of Itanium contract work after I left, it closed for good.

Looking back, Progeny was life — compressed. During the good times, we had joy, a sense of accomplishment, a sense of purpose at doing something well that was worth doing. I had what was my dream job back then: working on Debian as I loved to do, making the world a better place through Free Software, and getting paid to do it. And during the bad times, different people at Progeny experienced anger, cynicism, apathy, sorrow for the loss of our friends or plans, or simply a resolve to soldier on. All of the emotions, good or bad, were warranted in their own way.

Bruce Byfield, one of my co-workers at Progeny, recently wrote a wonderful memoriam of Ian. He wrote, “More than anything, he wanted to repeat his accomplishment with Debian, and, naturally he wondered if he could live up to his own expectations of himself. That, I think, was Ian’s personal tragedy — that he had succeeded early in life, and nothing else he did with his life could quite measure up to his expectations and memories.”

Ian was not the only one to have some guilt over Progeny. I, for years, wondered if I should have done more for the company, could have saved things by doing something more, or different. But I always came back to the conclusion I had at the time: that there was nothing I could do — a terribly sad realization.

In the years since, I watched Ubuntu take the mantle of easy-to-install Debian derivative. I saw them reprise some of the ideas we had, and even some of our mistakes. But by that time, Progeny was so thoroughly forgotten that I doubt they even realized they were doing it.

I had long looked at our work at Progeny as a failure. Our main goal was never accomplished, our big product never sold many copies, our company eventually shuttered, our rock-the-world plan crumpled and forgotten. And by those traditional measurements, you could say it was a failure.

But I have come to learn in the years since that success is a lot more than those things. Success is also about finding meaning and purpose through our work. As a programmer, success is nailing that algorithm that lets the application scale 10x more than before, or solving that difficult problem. As a manager, success is helping team members thrive, watching pieces come together on projects that no one person could ever do themselves. And as a person, success comes from learning from our experiences, and especially our mistakes. As J. Michael Straczynski wrote in a Babylon 5 episode, loosely paraphrased: “Maybe this experience will be a good lesson. Too bad it was so painful, but there ain’t no other kind.”

The thing about Progeny is this – Ian built a group of people that wanted to change the world for the better. We gave it our all. And there’s nothing wrong with that.

Progeny did change the world. As we Progeny alumni have scattered around the country, we benefit from the lessons we learned there. And many of us were “different”, sort of out of place before Progeny, and there we found others that loved C compilers, bootloaders, and GPL licenses just as much as we did. We belonged, not just online but in life, and we went on to pull confidence and skill out of our experience at Progeny and use them in all sorts of ways over the years.

And so did Ian. Who could have imagined the founder of Debian and Progeny would one day lead the cause of an old-guard Unix turning Open Source? I run ZFS on my Debian system today, and Ian is partly responsible for that — and his time at Progeny is too.

So I can remember Ian, and Progeny, as a success. And I leave you with a photo of my best memento from the time there: an original unopened boxed copy of Progeny Linux.


Detailed Smart Card Cryptographic Token Security Guide

After my first post about smartcards under Linux, I thought I would share some information I’ve been gathering.

This post is already huge, so I am not going to dive into — much — specific commands, but I am linking to many sources with detailed instructions.

I’ve reviewed several types of cards. For this review, I will focus on the OpenPGP card and the Yubikey NEO, since the Cardomatic Smartcard-HSM is not supported by the gpg version in Jessie.

Both cards are produced by people with strong support for the Free Software ecosystem and have strong cross-platform support with source code.

OpenPGP card: Basics with GnuPG

The OpenPGP card is well-known as one of the first smart cards to work well on Linux. It is a single-application card focused on use with GPG. Generally speaking, by the way, you want GPG2 for use with smartcards.

Basically, this card contains three slots: decryption, signing, and authentication. The concept is that the private key portions of the keys used for these items are stored only on the card, can never be extracted from the card, and the cryptographic operations are performed on the card. There is more information in my original post. In a fairly rare move for smartcards, this card supports 4096-bit RSA keys; most are restricted to 2048-bit keys.

The FSF Europe hands these out to people and has a lot of good information about them online, including some HOWTOs. The official GnuPG smart card howto is 10 years old, and although it has some good background, I’d suggest using the FSFE instructions instead.

As you’ll see in a bit, most of this information also pertains to the OpenPGP mode of the Yubikey Neo.

OpenPGP card: Other uses

Of course, this is already pretty great to enhance your GPG security, but there’s a lot more that you can do with this card to add two-factor authentication (2FA) to a lot of other areas. Here are some pointers:

OpenPGP card: remote authentication with ssh

You can store the private part of your ssh key on the card. Traditionally, this was only done by using the ssh agent emulation mode of gnupg-agent. This is still possible, of course.

Now, however, the OpenSC project supports the OpenPGP card as a PKCS#11 and PKCS#15 card, which means it works natively with ssh-agent as well. Try just ssh-add -s /usr/lib/x86_64-linux-gnu/pkcs11/opensc-pkcs11.so if you’ve put a key in the auth slot with GPG. ssh-add -L will list the public key for insertion into authorized_keys. Very simple!
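
A minimal sketch, assuming opensc-pkcs11 is installed and a key is already in the auth slot:

ssh-add -s /usr/lib/x86_64-linux-gnu/pkcs11/opensc-pkcs11.so   # prompts for the card PIN
ssh-add -L   # prints the public key, ready to paste into authorized_keys on the server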

As an aside: Comments that you need scute for PKCS#11 support are now outdated. I do not recommend scute. It is quite buggy.

OpenPGP card: local authentication with PAM

You can authenticate logins to a local machine by using the card with libpam-poldi — here are some instructions.

Between the use with ssh and the use with PAM, we have now covered 2FA for both local and remote use in Unix environments.

OpenPGP card: use on Windows

Let’s move on to Windows environments. The standard suggestion here seems to be the mysmartlogon OpenPGP mini-driver. It works with some sort of Windows CA system, or the local accounts using EIDAuthenticate. I have not yet tried this.

OpenPGP card: Use with X.509 or Windows Active Directory

You can use the card in X.509 mode via these gpgsm instructions, which apparently also work with Windows Active Directory in some fashion.

You can also use it with web browsers to present a client certificate for authentication. For example, here are OpenSC instructions for Firefox.

OpenPGP card: Use with OpenVPN

Via the PKCS#11 mode, this card should be usable to authenticate a client to OpenVPN. See the official OpenVPN HOWTO or these other instructions for more.

OpenPGP card: a note on PKCS#11 and PKCS#15 support

You’ll want to install the opensc-pkcs11 package, and then give the path /usr/lib/x86_64-linux-gnu/pkcs11/opensc-pkcs11.so whenever something needs the PKCS#11 library. There seem to be some locking/contention issues between GPG2 and OpenSC, however. Usually killing pcscd and scdaemon will resolve this.
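
When that contention bites, one blunt but effective approach (this is just the generic way to kill them; both daemons come back on demand) is:

pkill scdaemon    # gpg-agent restarts it the next time it is needed
sudo pkill pcscd  # typically socket-activated, so it also comes back when something needs it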

I would recommend doing manipulation operations (setting PINs, generating or uploading keys, etc.) via GPG2 only. Use the PKCS#11 tools only for accessing the keys, not for managing them.

OpenPGP card: further reading

Kernel Concepts also has some nice readers; you can get this card in a small USB form-factor by getting the mini-card and the Gemalto reader.

Yubikey Neo Introduction

The Yubikey Neo is a fascinating device. It is a small USB and NFC device, a little smaller than your average USB drive. It is a multi-application device that actually has six distinct modes:

  • OpenPGP JavaCard Applet (pc/sc-compatible)
  • Personal Identity Verification [PIV] (pc/sc-compatible, PKCS#11-compatible in Windows and OpenSC)
  • Yubico HOTP, via your own auth server or Yubico’s
  • OATH, with its two sub-modes:
    • OATH TOTP, with a mobile or desktop helper app (drop-in for Google Authenticator)
    • OATH HOTP
  • Challenge-response mode
  • U2F (Universal 2nd Factor) with Chrome

There is a ton to digest with this device.

Yubikey Neo Basics

By default, the Yubikey Neo is locked to only a subset of its features. Using the yubikey-personalization tool (you’ll need the version in stretch; jessie is too old), you can use ykpersonalize -m86 to unlock the full possibilities of the card. Run that command, then unplug and replug the device.

It will present itself as a USB keyboard as well as a PC/SC-compatible card reader. It has a capacitive button, which is used to have it generate keystrokes to input validation information for HOTP or HMAC validation. It has two “slots” that can be configured with HMAC and HOTP; a short button press selects the default slot #1 and a long press selects slot #2.

But before we get into that, let’s step back and cover some basics.

opensc-tool --list-algorithms claims this card supports RSA with 1024-, 2048-, and 3072-bit key sizes, and EC with 256- and 384-bit sizes. I haven’t personally verified anything other than RSA-2048, though.

Yubikey Neo: OpenPGP support

In this mode, the card is mostly compatible with the physical OpenPGP card. I say “mostly” because there are a few protocol differences I’ll get into later. It is also limited to 2048-bit keys.

Support for this is built into GnuPG and the GnuPG features described above all work fine.

In this mode, it uses firmware from the Yubico fork of the JavaCard OpenPGP Card applet. There are Yubico-specific tutorials available, but again, most of the general GPG stuff applies.

You can use gnupg-agent to use the card with SSH as before. However, due to some incompatibilities, the OpenPGP applet on this card cannot be used as a PKCS#11 card with either scute or OpenSC. That is not exactly a huge problem, however, as the card has another applet (PIV) that is compatible with OpenSC and so this still provides an avenue for SSH, OpenVPN, Mozilla, etc.

It should be noted that the OpenPGP applet on this card can also be used with NFC on Android with the OpenKeychain app. Together with pass (or its Windows, Mac, or phone ports), this makes a nicely secure system for storing passwords.

Yubikey Neo: PKCS#11 with the PIV applet

There is also support for the PIV standard on the Yubikey Neo. This is supported by default on Linux (via OpenSC) and Windows and provides a PKCS#11-compatible store. It should, therefore, be compatible with ssh-agent, OpenVPN, Active Directory, and all the other OpenPGP card features described above. The only difference is that it uses storage separate from the OpenPGP applet.

You will need one of the Yubico PIV tools to configure the key for it; in Debian, the yubico-piv-tool from stretch does this.

Here are some instructions on using the Yubikey Neo in PIV mode:

A final note: for security, it’s important to change the management key and PINs before deploying the PIV mode.

I couldn’t get this to work with Firefox, but it worked pretty much everywhere else.

Yubikey Neo: HOTP authentication

This is the default mode for your Yubikey; all other modes require enabling with ykpersonalize. In this mode, a 128-bit AES key stored on the Yubikey is used to generate one-time passwords (OTP). (This key was shared in advance with the authentication server.) A typical pattern would be for three prompts: username, password, and Yubikey HOTP. The user clicks in the Yubikey HOTP field, touches the Yubikey, and their one-time token is pasted in.

In the background, the service being authenticated to contacts an authentication server. This authentication server can be either your own (there are several open source implementations in Debian) or the free Yubicloud.

Either way, the server decrypts the encrypted part of the OTP, performs validity checks (making sure that the counter is larger than any counter it’s seen before, etc) and returns success or failure back to the service demanding authentication.

The first few characters of the submitted OTP contain the unencrypted key ID, and thus it can also be used to supply the username if desired.

Yubico has provided quite a few integrations and libraries for this mode. A few highlights:

You can also find some details on the OTP mode. Here’s another writeup.

This mode is simple to implement, but it has a few downsides. One is that it is specific to the Yubico line of products, and thus has a vendor lock-in factor. Another is the dependence on the authentication server; this creates a potential single point of failure and can be undesirable in some circumstances.

Yubikey Neo: OATH and HOTP and TOTP

First, a quick note: OATH and OAuth are not the same. OATH is an authentication protocol, and OAuth is an authorization protocol. Now then…

Like Yubikey HOTP, OATH (both HOTP and TOTP) modes rely on a pre-shared key. (See details in the Yubico article.) Let’s talk about TOTP first. With TOTP, there is a pre-shared secret with each service. Each time you authenticate to that service, your TOTP generator combines the timestamp with the shared secret using an HMAC algorithm and produces an OTP that changes every 30 seconds. Google Authenticator is a common example of this protocol, and this is a drop-in replacement for it. Gandi has a nice description of it that includes links to software-only solutions on various platforms as well.
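
A quick way to watch the same TOTP math happen in software is oathtool, from the same OATH toolkit mentioned below (the base32 secret here is a made-up example):

oathtool --totp -b JBSWY3DPEHPK3PXP   # HMAC(secret, current 30-second time step), truncated to a 6-digit code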

With the Yubikey, the shared secrets are stored on the card and processed within it. You cannot extract the shared secret from the Yubikey. Of course, if someone obtains physical access to your Yubikey they could use the shared secret stored on it, but there is no way they can steal the shared secret via software, even by compromising your PC or phone.

Since the Yubikey does not have a built-in clock, TOTP operations cannot be completed solely on the card. You can use a PC-based app or the Android application (Play store link) with NFC to store secrets on the device and generate your TOTP codes. Command-line users can also use the yubikey-totp tool in the python-yubico package.

OATH can also use HOTP. With HOTP, an authentication counter is used instead of a clock. This means that HOTP passwords can be generated entirely within the Yubikey. You can use ykpersonalize to configure either slot 1 or 2 for this mode, but one downside is that it can really only be used with one service per slot.

OATH support is all over the place; for instance, there’s libpam-oath from the OATH toolkit for Linux platforms. (Some more instructions on this exist.)

Note: There is another tool from Yubico (not in Debian) that can apparently store multiple TOTP and HOTP codes in the Yubikey, something ykpersonalize and the other documentation don’t cover. It is therefore unclear to me whether, and how, multiple HOTP codes are supported.

Yubikey Neo: Challenge-Response Mode

This can be useful for doing offline authentication, and is similar to OATH-HOTP in a sense. There is a shared secret to start with, and the service trying to authenticate sends a challenge to the token, which must supply an appropriate response. This makes it only suitable for local authentication, but means it can be done fairly automatically and optionally does not even require a button press.

To muddy the waters a bit, it supports both “Yubikey OTP” and HMAC-SHA1 challenge-response modes. I do not really know the difference. However, it is worth noting that libpam-yubico works with HMAC-SHA1 mode. This makes it suitable, for instance, for logon passwords.

Yubikey Neo: U2F

U2F is a new protocol for web-based apps. Yubico has some information, but since it is only supported in Chrome, it is not of interest to me right now.

Yubikey Neo: Further resources

Yubico has a lot of documentation, and in particular a technical manual that is actually fairly detailed.

Closing comments

Do not think a hardware security token is a panacea. It is best used as part of a multi-factor authentication system; you don’t want a lost token itself to lead to a breach, just as you don’t want a compromised password due to a keylogger to lead to a breach.

These things won’t prevent someone that has compromised your PC from abusing your existing ssh session (or even from establishing new ssh sessions from your PC, once you’ve unlocked the token with the passphrase). What it will do is prevent them from stealing your ssh private key and using it on a different PC. It won’t prevent someone from obtaining a copy of things you decrypt on a PC using the Yubikey, but it will prevent them from decrypting other things that used that private key. Hopefully that makes sense.

One also has to consider the security of the hardware. On that point, I am pretty well satisfied with the Yubikey; large parts of it are open source, and they have put a lot of effort into hardening the hardware. It seems pretty much impervious to non-government actors, which is about the best guarantee a person can get about anything these days.

I hope this guide has been helpful.