Category Archives: Linux

Emacs #4: Automated emails to org-mode and org-mode syncing

This is the fourth in a series on Emacs and org-mode.

Hopefully by now you’ve started to see how powerful and useful org-mode is. If you’re like me, you’re thinking:

“I’d really like to have this in sync across all my devices.”

and, perhaps:

“Can I forward emails into org-mode?”

This being Emacs, the answers, of course, are “Yes.”

Syncing

Since org-mode just uses text files, syncing is pretty easily accomplished using any number of tools. I use git with git-remote-gcrypt. Due to some limitations of git-remote-gcrypt, each machine tends to push to its own branch, and to master on command. Each machine merges from all the other branches and pushes the result to master after a merge. A cron job causes pushes to the machine’s branch to happen, and a bit of elisp coordinates it all — making sure to save buffers before a sync, refresh them from disk after, etc.

The code for this post is somewhat more extended, so I will be linking to it on github rather than posting inline.

I have a directory $HOME/org where all my org-stuff lives. In ~/org lives a Makefile that handles the syncing. It defines these targets:

  • push: adds, commits, and pushes to a branch named after the machine’s hostname
  • fetch: does a simple git fetch
  • sync: adds, commits, pulls remote changes, merges, and (assuming the merge was successful) pushes to the branch named after the machine’s hostname plus master
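
In shell terms, the targets boil down to roughly this (a sketch only, using the hostname-based branch naming described above; the real Makefile in the linked repository handles errors and edge cases):

# push: commit everything and push to a branch named after this machine
git add -A && git commit -m "auto-commit"
git push origin "HEAD:$(hostname)"

# fetch: just bring down remote changes
git fetch --all

# sync: merge the other machines' branches, then push the result
for branch in $(git branch -r | grep -v HEAD); do git merge "$branch"; done
git push origin HEAD:master "HEAD:$(hostname)"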

Now, in my user’s crontab, I have this:

*/15   *   *  *   *      make -C $HOME/org push fetch 2>&1 | logger --tag 'orgsync'

The accompanying elisp code defines a shortcut (C-c s) to cause a sync to occur. Thanks to the cronjob, as long as files were saved — even if I didn’t explicitly sync on the other boxen — they’ll be pulled in.

I have found this setup to work really well.

Emailing to org-mode

Before going down this path, one should ask the question: do you really need it? I use org-mode with mu4e, and the integration is excellent; any org task can link to an email by message-id, and this is ideal — it lets a person do things like make a reminder to reply to a message in a week.

However, org is not just about reminders. It’s also a knowledge base, authoring system, etc. And, not all of my mail clients use mu4e. (Note: things like MobileOrg exist for mobile devices). I don’t actually use this as much as I thought I would, but it has its uses and I thought I’d document it here too.

Now I didn’t want to just be able to accept plain text email. I wanted to be able to handle attachments, HTML mail, etc. This quickly starts to sound problematic — but with tools like ripmime and pandoc, it’s not too bad.

The first step is to set up some way to get mail into a specific folder. A plus-extension, special user, whatever. I then use a fetchmail configuration to pull it down and run my insorgmail script.
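
For illustration, a one-shot fetchmail invocation that hands each message straight to the script might look like this (the server, user, and script path are placeholders, not my actual setup):

fetchmail -p IMAP --ssl -u org-intake --mda "$HOME/bin/insorgmail" imap.example.com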

This script is where all the interesting bits happen. It starts with ripmime to process the message. HTML bits are converted to org format using pandoc. An org hierarchy is made to represent the structure of the email as closely as possible. Emails can get pretty complicated, with HTML and the rest, but I have found this does an acceptable job with my use cases.
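
The core idea can be sketched in a few lines of shell (a simplification with hypothetical paths; the real script builds a proper org hierarchy and deals with attachments too):

#!/bin/sh
# Explode the MIME message (read from stdin) into its parts, convert
# any HTML parts to org markup, and append the result to an inbox file.
WORK=$(mktemp -d)
ripmime -i - -d "$WORK"
for part in "$WORK"/*.html; do
    [ -e "$part" ] || continue
    pandoc -f html -t org "$part" >> "$HOME/org/inbox.org"
done
rm -r "$WORK"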

Up next…

My last post on org-mode will talk about using it to write documents and prepare slides — a use for which I found myself surprisingly pleased with it, but which needed a bit of tweaking.

An old DOS BBS in a Docker container

A while back, I wrote about my Debian Docker base images. I decided to extend this concept a bit further: to running DOS applications in Docker.

But first, a screenshot:

It turns out this is possible, but difficult. I went through all three major DOS emulators available (dosbox, qemu, and dosemu). I got them all running inside the Docker container, but had a number of, er, fun issues to resolve.

The general thing one has to do here is present a fake modem to the DOS environment. This needs to be exposed outside the container as a TCP port. That much is possible in various ways — I wound up using tcpser. dosbox had a TCP modem interface, but it turned out to be too buggy for this purpose.

The challenge comes in where you want to be able to accept more than one incoming telnet (or TCP) connection at a time. DOS was not a multitasking operating system, so there were any number of hackish solutions back then. One might have had multiple physical computers, one for each incoming phone line. Or they might have run multiple pseudo-DOS instances under a multitasking layer like DESQview, OS/2, or even Windows 3.1.

(Side note: I just learned of DESQview/X, which integrated DESQview with X11R5 and replaced the Windows 3 drivers to allow running Windows as an X application).

For various reasons, I didn’t want to try running one of those systems inside Docker. That left me with emulating the original multiple physical node setup. In theory, pretty easy — spin up a bunch of DOS boxes, each using at most 1MB of emulated RAM, and go to town. But here came the challenge.

In a multiple-physical-node setup, you need some sort of file sharing, because your nodes have to access the shared message and file store. There were a myriad of clunky ways to do this in the old DOS days – Netware, LAN manager, even some PC NFS clients. I didn’t have access to Netware. I tried the Microsoft LM client in DOS, talking to a Samba server running inside the Docker container. This I got working, but the LM client used so much RAM that, even with various high memory tricks, BBS software wasn’t going to run. I couldn’t just mount an underlying filesystem in multiple dosbox instances either, because dosbox did caching that wasn’t going to be compatible.

This is why I wound up using dosemu. Besides being a more complete emulator than dosbox, it had a way of sharing the host’s filesystems that was going to work.

So, all of this wound up with this: jgoerzen/docker-bbs-renegade.
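
If you want to try it, something along these lines should bring it up (the port mapping here is an assumption; check the image's README for the actual telnet port and any volumes it expects):

docker run -d --name renegade -p 2023:23 jgoerzen/docker-bbs-renegade
telnet localhost 2023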

I also prepared building blocks for others that want to do something similar: docker-dos-bbs and the lower-level docker-dosemu.

As a side bonus, I also attempted running this under Joyent’s Triton (SmartOS, Solaris-based). I was pleasantly impressed that I got it all almost working there. So yes, a Renegade DOS BBS running under a Linux-based DOS emulator in a container on a Solaris machine.

The Yellow House Phone Company (Featuring Asterisk and an 11-year-old)

“Well Jacob, do you think we should set up our own pretend phone company in the house?”

“We can DO THAT?”

“Yes!”

“Then… yes. Yes! YES YES YESYESYESYES YES! Let’s do it, dad!”

Not long ago, my parents had dug up the old phone I used back in the day. We still have a landline, and Jacob was having fun discovering how an analog phone works. I told him about the special number he could call to get the time and temperature read out to him. He discovered what happens if you call your own number and hang up. He figured out how to play “Mary Had a Little Lamb” using touchtone keys (after a slightly concerned lecture from me setting out some rules to make sure his “musical dialing” wouldn’t result in any, well, dialing.)

He was hooked. So I thought that taking it to the next level would be a good thing for a rainy day. I have run Asterisk before, though I had unfortunately gotten rid of most of my equipment some time back. But I found a great deal on a Cisco 186 ATA (Analog Telephone Adapter). It has two FXS lines (FXS ports simulate the phone company, and provide dialtone and ring voltage to a connected phone), and of course hooks up to the LAN.

We plugged that in, and Jacob was amazed to see its web interface come up. I had to figure out how to configure it (unfortunately, it uses SCCP rather than SIP, and figuring out Asterisk’s chan_skinny took some doing, but we got there.)

I set up voicemail. He loved it. He promptly figured out how to record his own greetings. We set up a second phone on the other line, so he could call between them. The cordless phones in our house support SIP, so I configured one of them as a third line. He spent a long time leaving himself messages.

Pretty soon we both started having ideas. I set up extension 777, where he could call for the time. Then he wanted a way to get the weather forecast. Well, weather-util generates a text-based report. With it, a little sed and grep tweaking, the espeak TTS engine, and a little help from sox, I had a shell script worked up that would read back a forecast whenever he called a certain extension. He was super excited! “That’s great, dad! Can it also read weather alerts too?” Sure! weather-util has a nice option just for that. Both boys cackled as the system tried to read out the NWS header (their timestamps like 201711031258 started with “two hundred one billion…”)
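
Here is a sketch of the sort of script involved (the station ID, paths, and exact flags are assumptions; the idea is simply text report, cleanup, text-to-speech, then telephony-friendly audio):

#!/bin/sh
# Grab the text forecast, drop the long NWS timestamps that espeak
# reads as enormous numbers, speak it to a WAV file, and resample to
# 8 kHz mono so Asterisk can play it back.
weather --forecast KICT | grep -v '^[0-9]\{12\}' > /tmp/forecast.txt
espeak -f /tmp/forecast.txt -w /tmp/forecast-full.wav
sox /tmp/forecast-full.wav -r 8000 -c 1 /var/lib/asterisk/sounds/custom/forecast.wav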

Then I found an online source for streaming NOAA Weather Radio feeds – Jacob enjoys listening to weather radio – and I set up another extension he could call to listen to that. More delight!

But it really took off when I asked him, “Would you like to record your own menu?” “You mean those things where it says press 1 or 2 for this or that?” “Yes.” “WE CAN DO THAT?” “Oh yes!” “YES, LET’S DO IT RIGHT NOW!”

So he recorded a menu, then came and hovered by me while I hacked up extensions.conf, then eagerly went back to the phone to try it. Oh the excitement of hearing his own voice, and finding that it worked! Pretty soon he was designing sub-menus (“OK Dad, so we’ll set it up so people can press 2 for the weather, and then choose if they want weather radio or the weather report. I’m recording that now. Got it?”)

He has informed me that next Saturday we will build an intercom system “like we have at school.” I’m going to have to have some ideas on how to tie Squeezebox in with Asterisk to make that happen, I think. Maybe this will do.

Fixing the Problems with Docker Images

I recently wrote about the challenges in securing Docker container contents, and in particular with keeping up-to-date with security patches from all over the Internet.

Today I want to fix that.

Besides security, there is a second problem: the common way of running things in Docker pretends to provide a traditional POSIX API and environment, but really doesn’t. This is a big deal.

Before diving into that, I want to explain something: I have often heard it said that Docker provides single-process containers. This is unambiguously false in almost every case. Any time you have a shell script inside Docker that calls cp or even ls, you are running a second process. Web servers from Apache to whatever else use processes or threads of various types to service multiple connections at once. Many Docker containers are single-application, but a process is a core part of the POSIX API, and very little software would work if it were limited to a single process. So this is my little plea for more precise language. OK, soapbox mode off.

Now then, in a traditional Linux environment, besides your application, there are other key components of the system. These are usually missing in Docker containers.

So today, I will fix this also.

In my docker-debian-base images, I have prepared a system that still has only 11MB RAM overhead, makes minimal changes on top of Debian, and yet provides a very complete environment and API. Here’s what you get:

  • A real init system, capable of running standard startup scripts without modification, and solving the nasty Docker zombie reaping problem.
  • Working syslog, which can either export all logs to Docker’s logging infrastructure, or keep them within the container, depending on your preferences.
  • Working real schedulers (cron, anacron, and at), plus at least the standard logrotate utility to help prevent log files inside the container from becoming huge.

The above goes into my “minimal” image. Additional images add layers on top of it, and here are some of the features they add:

  • A real SMTP agent (exim4-daemon-light) so that cron and friends can actually send you mail
  • SSH client and server (optionally exposed to the Internet)
  • Automatic security patching via unattended-upgrades and needrestart

All of the above, including the optional features, has an 11MB overhead on start. Not bad for so much, right?

From here, you can layer on top all your usual Dockery things. You can still run one application per container. But you can now make sure your disk doesn’t fill up from logs, run your database vacuuming commands at will, have your blog download its RSS feeds every few minutes, etc — all from within the container, as it should be. Furthermore, you don’t have to reinvent the wheel, because Debian already ships with things to take care of a lot of this out of the box — and now those tools will just work.
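
Getting started is an ordinary docker run; something like this (the image name here is an assumption; check the repository for the current names):

docker run -td --name=mybox jgoerzen/debian-base-standard
docker exec -it mybox /bin/bash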

There is some popular work done in this area already by phusion’s baseimage-docker. However, I made my own for these reasons:

  • I wanted something based on Debian rather than Ubuntu
  • By using sysvinit rather than runit, the OS default init scripts can be used unmodified, reducing the administrative burden on container builders
  • Phusion’s system is, for some reason, not auto-built on the Docker Hub. Mine is, so it will be automatically revised whenever the underlying Debian system, or the GitHub repository, is.

Finally a word on the choice to use sysvinit. It would have been simpler to use systemd here, since it is the default in Debian these days. Unfortunately, systemd requires you to poke some holes in the Docker security model, as well as mount a cgroups filesystem from the host. I didn’t consider this acceptable, and sysvinit ran without these workarounds, so I went with it.

With all this, Docker becomes a viable replacement for KVM for various services on my internal networks. I’ll be writing about that later.

Easily Improving Linux Security with Two-Factor Authentication

2-Factor Authentication (2FA) is a simple way to help improve the security of your systems. It restricts the scope of damage if a machine is compromised. If, for instance, you have a security token or authenticator app on your phone that is required for ssh to a remote machine, then even if every laptop you use to connect to the remote is totally owned, an attacker cannot establish a new ssh session on their own.

There are a lot of tutorials out there on the Internet that get you about halfway there, so here is some more detail.

Background

In this article, I will be focusing on authentication in the style of Google Authenticator, which is a special case of OATH HOTP or TOTP. You can use the Google Authenticator app, FreeOTP, or a hardware token like Yubikey to generate tokens with this. They are all 100% compatible with Google Authenticator and libpam-google-authenticator.

The basic idea is that there is a pre-shared secret key. At each login, a different and unique token is required, which is generated based on the pre-shared secret key and some other information. With TOTP, the “other information” is the current time, implying that both machines must be reasonably well in sync time-wise. With HOTP, the “other information” is a count of the number of times the pre-shared key has been used. Both typically have a “window” on the server side, so tokens generated within a certain number of seconds, or a certain number of login accesses, still work.

The beauty of this system is that after the initial setup, no Internet access is required on either end to validate the key (though TOTP requires both ends to be reasonably in sync time-wise).

The basics: user account setup and ssh authentication

You can start with the basics by reading one of these articles: one, two, three. Debian/Ubuntu users will find both the pam module and the user account setup binary in libpam-google-authenticator.
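
On Debian or Ubuntu, the basic setup amounts to this (run the second command as the user who will be logging in, and follow its prompts to generate and enroll the secret):

sudo apt-get install libpam-google-authenticator
google-authenticator    # interactive; writes ~/.google_authenticator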

For many, you can stop there. You’re done. But if you want to kick it up a notch, read on:

Enhancement 1: Requiring 2FA even when ssh public key auth is used

Let’s consider a scenario in which your system is completely compromised. Unless your ssh keys are also stored in something like a Yubikey Neo, they could wind up being compromised as well – if someone can read your files and sniff your keyboard, your ssh private keys are at risk.

So we can configure ssh and PAM so that an OTP token is required even for this scenario.

First off, in /etc/ssh/sshd_config, we want to change or add these lines:

UsePAM yes
ChallengeResponseAuthentication yes
AuthenticationMethods publickey,keyboard-interactive

This forces all authentication to pass two verification methods in ssh: publickey and keyboard-interactive. All users will have to supply a public key and then also pass keyboard-interactive auth. Normally keyboard-interactive auth prompts for a password, but we can change that behavior in /etc/pam.d/sshd. I added this line at the very top of /etc/pam.d/sshd:

auth [success=done new_authtok_reqd=done ignore=ignore default=bad] pam_google_authenticator.so

This basically makes Google Authenticator both necessary and sufficient for keyboard-interactive in ssh. That is, whenever the system wants to use keyboard-interactive, rather than prompt for a password, it instead prompts for a token. Note that any user that has not set up google-authenticator already will be completely unable to ssh into their account.

Enhancement 1, variant 2: Allowing automated processes to root

On many of my systems, I have ~root/.ssh/authorized_keys set up to permit certain systems to run locked-down commands for things like backups. These are automated commands, and the above configuration will break them because I’m not going to be typing in codes at 3AM.

If you are very restrictive about what you put in root’s authorized_keys, you can exempt the root user from the 2FA requirement in ssh by adding this to sshd_config:

Match User root
  AuthenticationMethods publickey

This says that the only way to access the root account via ssh is to use the authorized_keys file, and no 2FA will be required in this scenario.

Enhancement 1, variant 3: Allowing non-pubkey auth

On some multiuser systems, some users may still want to use password auth rather than publickey auth. There are a few ways we can support that:

  1. Users without public keys will have to supply an OTP and a password, while users with public keys will have to supply public key, OTP, and a password
  2. Users without public keys will have to supply OTP or a password, while users with public keys will have to supply public key, OTP, or a password
  3. Users without public keys will have to supply OTP and a password, while users with public keys only need to supply the public key

The third option is covered in any number of third-party tutorials. To enable options 1 or 2, you’ll need to put this in sshd_config:

AuthenticationMethods publickey,keyboard-interactive keyboard-interactive

This means that to authenticate, you need to pass either publickey and then keyboard-interactive auth, or just keyboard-interactive auth.

Then in /etc/pam.d/sshd, you put this:

auth required pam_google_authenticator.so

As a sub-variant for option 1, you can add nullok here to permit auth from people who do not have a Google Authenticator configuration.

Or for option 2, change “required” to “sufficient”. You should not add nullok in combination with sufficient, because that could let people without a Google Authenticator config authenticate completely without a password at all.

Enhancement 2: Configuring su

A lot of other tutorials stop with ssh (and maybe gdm) but forget about the other ways we authenticate or change users on a system. su and sudo are the two most important ones. If your root password is compromised, you don’t want anybody to be able to su to that account without having to supply a token. So you can set up google-authenticator for root.

Then, edit /etc/pam.d/su and insert this line after the pam_rootok.so line:

auth       required     pam_google_authenticator.so nullok

The reason you put this after pam_rootok.so is because you want to be able to su from root to any account without having to input a token. We add nullok to the end of this, because you may want to su to accounts that don’t have tokens. Just make sure to configure tokens for the root account first.

Enhancement 3: Configuring sudo

This one is similar to su, but a little different. This lets you, say, secure the root password for sudo.

Normally, you might sudo from your user account to root (if so configured). You might have sudo configured to require you to enter in your own password (rather than root’s), or to just permit you to do whatever you want as root without a password.

Our first step, as always, is to configure PAM. What we do here depends on your desired behavior: do you want to require someone to supply both a password and a token, just a token, or either one? If you want to require just a token, put this at the top of /etc/pam.d/sudo:

auth [success=done new_authtok_reqd=done ignore=ignore default=bad] pam_google_authenticator.so

If you want to require a token and a password, change the bracketed string to “required”, and if you want a token or a password, change it to “sufficient”. As before, if you want to permit people without a configured token to proceed, add “nullok”, but do not use that with “sufficient” or the bracketed example here.

Now here comes the fun part. By default, if a user is required to supply a password to sudo, they are required to supply their own password. That does not help us here, because a user logged in to the system can read the ~/.google_authenticator file and then easily supply tokens for themselves. What you want to do is require them to supply root’s password. Here’s how I set that up in sudoers:

Defaults:jgoerzen rootpw
jgoerzen ALL=(ALL) ALL

So now, with the combination of this and the PAM configuration above, I can sudo to the root user without knowing its password — but only if I can supply root’s token. Pretty slick, eh?

Further reading

In addition to the basic tutorials referenced above, consider:

Edit: additional comments

Here are a few other things to try:

First, the libpam-google-authenticator module supports putting the Google Authenticator files in different locations and having them owned by a certain user. You could use this to, for instance, lock down all secret keys to be readable only by the root user. This would prevent users from adding, changing, or removing their own auth tokens, but would also let you do things such as reusing your personal token for the root account without a problem.
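
For example, a PAM line along these lines (the directory path is an assumption; see the module’s documentation for the exact substitution syntax) keeps every secret under a root-owned directory:

auth required pam_google_authenticator.so secret=/etc/google-auth/${USER} user=root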

Also, the pam-oath module does much of the same things as the libpam-google-authenticator module, but without some of the help for setup. It uses a single monolithic root-owned password file for all accounts.

There is an oathtool that can be used to generate authentication codes from the command line.
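
For instance, given a base32-encoded shared secret (the one below is a dummy), you can generate the current TOTP code, or an HOTP code for a given counter value, like this:

oathtool --totp -b JBSWY3DPEHPK3PXP
oathtool --hotp -b -c 42 JBSWY3DPEHPK3PXP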

I’m switching from git-annex to Syncthing

I wrote recently about using git-annex for encrypted sync, but due to a number of issues with it, I’ve opted to switch to Syncthing.

I’d been using git-annex with real but noncritical data. Among the first issues I noticed was occasional but persistent high CPU usage spikes, which, once started, would persist apparently forever. I had an issue where git-annex tried to replace files I’d removed from its repo with broken symlinks, but the real final straw was a number of issues with the gcrypt remote repos. git-remote-gcrypt appears to have a number of issues with possible race conditions on the remote, and at least one of them somehow caused encrypted data to appear in a packfile on a remote repo. Why there was data in a packfile there, I don’t know, since git-annex is supposed to keep the data out of packfiles.

Anyhow, git-annex is still an awesome tool with a lot of use cases, but I’m concluding that live sync to an encrypted git remote isn’t quite there yet for me.

So I looked for alternatives. My main criteria were supporting live sync (via inotify or similar) and not requiring the files to be stored unencrypted on a remote system (my local systems all use LUKS). I found Syncthing met these requirements.

Syncthing is pretty interesting in that, like git-annex, it doesn’t require a centralized server at all. Rather, it forms basically a mesh between your devices. Its concept is somewhat similar to the proprietary Bittorrent Sync — basically, all the nodes communicate about what files and chunks of files they have, and the changes that are made, and immediately propagate as much as possible. Unlike, say, Dropbox or Owncloud, Syncthing can actually support simultaneous downloads from multiple remotes for optimum performance when there are many changes.

Combined with syncthing-inotify or syncthing-gtk, it has immediate detection of changes and therefore very quick propagation of them.

Syncthing is particularly adept at figuring out ways for the nodes to communicate with each other. It begins by broadcasting on the local network, so known nearby nodes can be found directly. The Syncthing folks also run a discovery server (though you can use your own if you prefer) that lets nodes find each other on the Internet. Syncthing will attempt to use UPnP to configure firewalls to let it out, but if that fails, the last resort is a traffic relay server — again, a number of volunteers host these online, but you can run your own if you prefer.

Each node in Syncthing has an RSA keypair, and what amounts to part of the public key is used as a globally unique node ID. The initial link between nodes is accomplished by pasting the globally unique ID from one node into the “add node” screen on the other; the user of the first node then must accept the request, and from that point on, syncing can proceed. The data is all transmitted encrypted, of course, so interception will not cause data to be revealed.

Really my only complaint about Syncthing so far is that, although it binds to localhost, the web GUI does not require authentication by default.

There is an ITP open for Syncthing in Debian, but until then, their apt repo works fine. For syncthing-gtk, the trusty version of the webupd8 PPA works in Jessie (though be sure to pin it to a low priority if you don’t want it replacing some unrelated Debian packages).
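
Adding their repo looks roughly like this (the suite name and key location change occasionally, so check the Syncthing site for the current instructions):

curl -s https://syncthing.net/release-key.txt | sudo apt-key add -
echo "deb https://apt.syncthing.net/ syncthing stable" | sudo tee /etc/apt/sources.list.d/syncthing.list
sudo apt-get update && sudo apt-get install syncthing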

Detailed Smart Card Cryptographic Token Security Guide

After my first post about smartcards under Linux, I thought I would share some information I’ve been gathering.

This post is already huge, so I am not going to dive into — much — specific commands, but I am linking to many sources with detailed instructions.

I’ve reviewed several types of cards. For this review, I will focus on the OpenPGP card and the Yubikey NEO, since the Cardomatic Smartcard-HSM is not supported by the gpg version in Jessie.

Both cards are produced by people with strong support for the Free Software ecosystem and have strong cross-platform support with source code.

OpenPGP card: Basics with GnuPG

The OpenPGP card is well-known as one of the first smart cards to work well on Linux. It is a single-application card focused on use with GPG. Generally speaking, by the way, you want GPG2 for use with smartcards.

Basically, this card contains three slots: decryption, signing, and authentication slots. The concept is that the private key portions of the keys used for these items are stored only on the card, can never be extracted from the card, and the cryptographic operations are performed on the card. There is more information in my original post. In a fairly rare move for smartcards, this card supports 4096-bit RSA keys; most are restricted to 2048-bit keys.
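
The day-to-day interaction all goes through GnuPG; for example (using gpg2, per the note above):

gpg2 --card-status    # show what is on the card
gpg2 --card-edit      # then "admin" followed by "generate" to create keys on-card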

The FSF Europe hands these out to people and has a lot of good information about them online, including some HOWTOs. The official GnuPG smart card howto is 10 years old, and although it has some good background, I’d suggest using the FSFE instructions instead.

As you’ll see in a bit, most of this information also pertains to the OpenPGP mode of the Yubikey Neo.

OpenPGP card: Other uses

Of course, this is already pretty great to enhance your GPG security, but there’s a lot more that you can do with this card to add two-factor authentication (2FA) to a lot of other areas. Here are some pointers:

OpenPGP card: remote authentication with ssh

You can store the private part of your ssh key on the card. Traditionally, this was only done by using the ssh agent emulation mode of gnupg-agent. This is still possible, of course.

However, the OpenSC project now supports the OpenPGP card as a PKCS#11 and PKCS#15 card, which means it works natively with ssh-agent as well. Try just ssh-add -s /usr/lib/x86_64-linux-gnu/pkcs11/opensc-pkcs11.so if you’ve put a key in the auth slot with GPG. ssh-add -L will list its fingerprint for insertion into authorized_keys. Very simple!

As an aside: Comments that you need scute for PKCS#11 support are now outdated. I do not recommend scute. It is quite buggy.

OpenPGP card: local authentication with PAM

You can authenticate logins to a local machine by using the card with libpam-poldi — here are some instructions.

Between the use with ssh and the use with PAM, we have now covered 2FA for both local and remote use in Unix environments.

OpenPGP card: use on Windows

Let’s move on to Windows environments. The standard suggestion here seems to be the mysmartlogon OpenPGP mini-driver. It works with some sort of Windows CA system, or with local accounts using EIDAuthenticate. I have not yet tried this.

OpenPGP card: Use with X.509 or Windows Active Directory

You can use the card in X.509 mode via these gpgsm instructions, which apparently also work with Windows Active Directory in some fashion.

You can also use it with web browsers to present a certificate from a client for client authentication. For example, here are OpenSC instructions for Firefox.

OpenPGP card: Use with OpenVPN

Via the PKCS#11 mode, this card should be usable to authenticate a client to OpenVPN. See the official OpenVPN HOWTO or these other instructions for more.

OpenPGP card: a note on PKCS#11 and PKCS#15 support

You’ll want to install the opensc-pkcs11 package, and then give the path /usr/lib/x86_64-linux-gnu/pkcs11/opensc-pkcs11.so whenever something needs the PKCS#11 library. There seem to be some locking/contention issues between GPG2 and OpenSC, however. Usually killing pcscd and scdaemon will resolve this.
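
Concretely, something like:

sudo killall pcscd
killall scdaemon    # or, with newer GnuPG: gpgconf --kill scdaemon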

I would recommend doing manipulation operations (setting PINs, generating or uploading keys, etc.) via GPG2 only. Use the PKCS#11 tools only to access.

OpenPGP card: further reading

Kernel Concepts also has some nice readers; you can get this card in a small USB form-factor by getting the mini-card and the Gemalto reader.

Yubikey Neo Introduction

The Yubikey Neo is a fascinating device. It is a small USB and NFC device, a little smaller than your average USB drive. It is a multi-application device that actually has six distinct modes:

  • OpenPGP JavaCard Applet (pc/sc-compatible)
  • Personal Identity Verification [PIV] (pc/sc-compatible, PKCS#11-compatible in Windows and OpenSC)
  • Yubico HOTP, via your own auth server or Yubico’s
  • OATH, with its two sub-modes:
    • OATH TOTP, with a mobile or desktop helper app (a drop-in for Google Authenticator)
    • OATH HOTP
  • Challenge-response mode
  • U2F (Universal 2nd Factor) with Chrome

There is a ton to digest with this device.

Yubikey Neo Basics

By default, the Yubikey Neo is locked to only a subset of its features. Using the yubikey-personalization tool (you’ll need the version in stretch; jessie is too old), you can use ykpersonalize -m86 to unlock the full possibilities of the card. Run that command, then unplug and replug the device.

It will present itself as a USB keyboard as well as a PC/SC-compatible card reader. It has a capacitive button, which is used to have it generate keystrokes to input validation information for HOTP or HMAC validation. It has two “slots” that can be configured with HMAC and HOTP; a short button press selects the default slot #1 and a long press selects slot #2.

But before we get into that, let’s step back and cover some basics.

opensc-tool --list-algorithms claims this card supports RSA with 1024-, 2048-, and 3072-bit sizes, and EC with 256- and 384-bit sizes. I haven’t personally verified anything other than RSA-2048 though.

Yubikey Neo: OpenPGP support

In this mode, the card is mostly compatible with the physical OpenPGP card. I say “mostly” because there are a few protocol differences I’ll get into later. It is also limited to 2048-bit keys.

Support for this is built into GnuPG and the GnuPG features described above all work fine.

In this mode, it uses firmware from the Yubico fork of the JavaCard OpenPGP Card applet. There are Yubico-specific tutorials available, but again, most of the general GPG stuff applies.

You can use gnupg-agent to use the card with SSH as before. However, due to some incompatibilities, the OpenPGP applet on this card cannot be used as a PKCS#11 card with either scute or OpenSC. That is not exactly a huge problem, however, as the card has another applet (PIV) that is compatible with OpenSC and so this still provides an avenue for SSH, OpenVPN, Mozilla, etc.

It should be noted that the OpenPGP applet on this card can also be used with NFC on Android with the OpenKeychain app. Together with pass (or its Windows, Mac, or phone ports), this makes a nicely secure system for storing passwords.

Yubikey Neo: PKCS#11 with the PIV applet

There is also support for the PIV standard on the Yubikey Neo. This is supported by default on Linux (via OpenSC) and Windows and provides a PKCS#11-compatible store. It should, therefore, be compatible with ssh-agent, OpenVPN, Active Directory, and all the other OpenPGP card features described above. The only difference is that it uses storage separate from the OpenPGP applet.

You will need one of the Yubico PIV tools to configure the key for it; in Debian, the yubico-piv-tool from stretch does this.

Here are some instructions on using the Yubikey Neo in PIV mode:

A final note: for security, it’s important to change the management key and PINs before deploying the PIV mode.

I couldn’t get this to work with Firefox, but it worked pretty much everywhere else.

Yubikey Neo: HOTP authentication

This is the default mode for your Yubikey; all other modes require enabling with ykpersonalize. In this mode, a 128-bit AES key stored on the Yubikey is used to generate one-time passwords (OTP). (This key was shared in advance with the authentication server.) A typical pattern would be for three prompts: username, password, and Yubikey HOTP. The user clicks in the Yubikey HOTP field, touches the Yubikey, and their one-time token is pasted in.

In the background, the service being authenticated to contacts an authentication server. This authentication server can be either your own (there are several open source implementations in Debian) or the free Yubicloud.

Either way, the server decrypts the encrypted part of the OTP, performs validity checks (making sure that the counter is larger than any counter it’s seen before, etc) and returns success or failure back to the service demanding authentication.

The first few characters of the posted OTP contain the unencrypted key ID, and thus it can also be used to provide the username if desired.

Yubico has provided quite a few integrations and libraries for this mode. A few highlights:

You can also find some details on the OTP mode. Here’s another writeup.

This mode is simple to implement, but it has a few downsides. One is that it is specific to the Yubico line of products, and thus has a vendor lock-in factor. Another is the dependence on the authentication server; this creates a potential single point of failure and can be undesirable in some circumstances.

Yubikey Neo: OATH and HOTP and TOTP

First, a quick note: OATH and OAuth are not the same. OATH is an authentication protocol, and OAuth is an authorization protocol. Now then…

Like Yubikey HOTP, OATH (both HOTP and TOTP) modes rely on a pre-shared key. (See details in the Yubico article.) Let’s talk about TOTP first. With TOTP, there is a pre-shared secret with each service. Each time you authenticate to that service, your TOTP generator combines the timestamp with the shared secret using an HMAC algorithm and produces an OTP that changes every 30 seconds. Google Authenticator is a common example of this protocol, and this is a drop-in replacement for it. Gandi has a nice description of it that includes links to software-only solutions on various platforms as well.

With the Yubikey, the shared secrets are stored on the card and processed within it. You cannot extract the shared secret from the Yubikey. Of course, if someone obtains physical access to your Yubikey they could use the shared secret stored on it, but there is no way they can steal the shared secret via software, even by compromising your PC or phone.

Since the Yubikey does not have a built-in clock, TOTP operations cannot be completed solely on the card. You can use a PC-based app or the Android application (Play store link) with NFC to store secrets on the device and generate your TOTP codes. Command-line users can also use the yubikey-totp tool in the python-yubico package.

OATH can also use HOTP. With HOTP, an authentication counter is used instead of a clock. This means that HOTP passwords can be generated entirely within the Yubikey. You can use ykpersonalize to configure either slot 1 or 2 for this mode, but one downside is that it can really only be used with one service per slot.
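
For example, putting slot 2 into OATH-HOTP mode looks something like this (the hex key is a dummy; the option names come from the ykpersonalize documentation):

ykpersonalize -2 -o oath-hotp -o append-cr -a 303132333435363738393a3b3c3d3e3f40414243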

OATH support is all over the place; for instance, there’s libpam-oath from the OATH toolkit for Linux platforms. (Some more instructions on this exist.)

Note: There is another tool from Yubico (not in Debian) that can apparently store multiple TOTP and HOTP codes in the Yubikey, although ykpersonalize cannot, and the other documentation doesn’t cover it. It is therefore unclear to me whether multiple HOTP codes are supported, and how.

Yubikey Neo: Challenge-Response Mode

This can be useful for doing offline authentication, and is similar to OATH-HOTP in a sense. There is a shared secret to start with, and the service trying to authenticate sends a challenge to the token, which must supply an appropriate response. This makes it only suitable for local authentication, but means it can be done fairly automatically and optionally does not even require a button press.

To muddy the waters a bit, it supports both “Yubikey OTP” and HMAC-SHA1 challenge-response modes. I do not really know the difference. However, it is worth noting that libpam-yubico works with HMAC-SHA1 mode. This makes it suitable, for instance, for logon passwords.
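
Setting up slot 2 for HMAC-SHA1 challenge-response (the mode libpam-yubico’s offline support uses) looks roughly like this:

ykpersonalize -2 -o chal-resp -o chal-hmac -o hmac-lt64 -o serial-api-visible
ykpamcfg -2    # store an initial challenge/response for the current user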

Yubikey Neo: U2F

U2F is a new protocol for web-based apps. Yubico has some information, but since it is only supported in Chrome, it is not of interest to me right now.

Yubikey Neo: Further resources

Yubico has a lot of documentation, and in particular a technical manual that is actually fairly detailed.

Closing comments

Do not think a hardware security token is a panacea. It is best used as part of a multi-factor authentication system; you don’t want a lost token itself to lead to a breach, just as you don’t want a compromised password due to a keylogger to lead to a breach.

These things won’t prevent someone that has compromised your PC from abusing your existing ssh session (or even from establishing new ssh sessions from your PC, once you’ve unlocked the token with the passphrase). What it will do is prevent them from stealing your ssh private key and using it on a different PC. It won’t prevent someone from obtaining a copy of things you decrypt on a PC using the Yubikey, but it will prevent them from decrypting other things that used that private key. Hopefully that makes sense.

One also has to consider the security of the hardware. On that point, I am pretty well satisfied with the Yubikey; large parts of it are open source, and they have put a lot of effort into hardening the hardware. It seems pretty much impervious to non-government actors, which is about the best guarantee a person can get about anything these days.

I hope this guide has been helpful.

First steps with smartcards under Linux and Android — hard, but it works

Well this has been an interesting project.

It all started with a need to get better password storage at work. We wound up looking heavily at a GPG-based solution. This prompted the question: how can we make it even more secure?

Well, perhaps, smartcards. The theory is this: a smartcard holds your private keys in a highly-secure piece of hardware. The PC can never actually access the private keys. Signing and decrypting operations are done directly on the card to prevent the need to export the private key material to the PC. There are lots of “standards” to choose from (PKCS#11, PKCS#15, and OpenPGP card specs) that are relevant here. And there are ways to use SSH and OpenVPN with some of these keys too. Access to the card is protected by a passphrase (called a “PIN” in smartcard lingo, even though it need not be numeric). These smartcards might be USB sticks, or cards you pop into a reader. In any case, you can pop them out when not needed, pop them in to use them, and… well, pretty nice, eh?

So that’s the theory. Let’s talk a bit of reality.

First of all, it is hard for a person like me to evaluate how secure my data is in hardware. There was a high-profile bug in the OpenPGP JavaCard applet used by Yubico that caused the potential to use keys without a PIN, for instance. And how well protected is the key in the physical hardware? Granted, in most of these cards you’re talking serious hardware skill to compromise them, but still, this is unknown in absolute terms.

Here’s the bigger problem: compatibility. There are all sorts of card readers, but compatibility with pcsc-tools and pcscd on Linux seems pretty good. But the cards themselves — oh my. PKCS#11 defines an interface API, but each vendor would provide their own .so or .dll file to interface. Some cards (for instance, the ACOS5-64 mentioned on the Debian wiki!) are made by vendors that charge $50 for the privilege of getting the drivers needed to make them work… and they’re closed-source proprietary drivers at that.

Some attempts

I ordered several cards to evaluate: the OpenPGP card, specifically designed to support GPG; the ACOS5-64 card; the JavaCOS A22; the Yubikey Neo; and a simple reader listed on the GPG smartcard howto.

The OpenPGP card and ACOS5-64 are the only ones in the list that support 4096-bit RSA keys, due to the computational demands of those keys. The others all support 2048-bit RSA keys.

The JavaCOS requires the user to install a JavaCard applet to the card to make it usable. The Yubico OpenPGP applet works here, along with GlobalPlatform to install it. I am not sure just how solid it is. The Yubikey Neo has yet to arrive; it integrates some interesting OATH and TOTP capabilities as well.

I found that Debian’s wiki page for smartcards lists a bunch of them that are not really usable using the tools in main. The ACOS5-64 was such a dud. But I got the JavaCOS A22 working quite nicely. It’s also NFC-enabled and works perfectly with OpenKeychain on Android (looking like a “Yubikey Neo” to it, once the OpenPGP applet is installed). I’m impressed! Here’s a way to be secure with my smartphone without revealing everything all the time.

Really, the largest amount of time goes into figuring out how all this stuff fits together. I’m getting there, but I’ve got a ways to go yet.

Update: Corrected to read “signing and decrypting” rather than “signing and encrypting” operations are being done on the card. Thanks to Benoît Allard for catching this error.

Roundup of remote encrypted deduplicated backups in Linux

Since I wrote last about Linux backup tools, back in a 2008 article about BackupPC and similar tools and a 2011 article about deduplicating filesystems, I’ve revisited my personal backup strategy a bit.

I still use ZFS, with my tool “simplesnap” that I wrote about in 2014 to perform local backups to USB drives, which get rotated offsite periodically. This has the advantage of being very fast and very secure, but I also wanted offsite backups over the Internet. I began compiling criteria, which ran like this:

  • Remote end must not need any special software installed. Storage across rsync, sftp, S3, WebDAV, etc. should all be good candidates. The remote end should not need to support hard links or symlinks, etc.
  • Cross-host deduplication, at least at the file level, is required, so if I move a 4GB video file from one machine to another, my puny DSL wouldn’t have to re-upload it.
  • All data that is stored remotely must be 100% encrypted 100% of the time. I must not need to have any trust at all in the remote end.
  • Each backup after the first must send only an incremental’s worth of data across the line. No periodic re-uploading of the entire data set can be done.
  • The tools must preserve POSIX attributes like uid/gid, permission bits, symbolic links, hard links, etc. Support for xattrs is also desirable but not required.
  • The repository format must be well-documented and stable.

So, how did things stack up?

Didn’t meet criteria

A lot of popular tools didn’t meet the criteria. Here are some that I considered:

  • BackupPC requires software on the remote end and does not do encryption.
  • None of the rsync hardlink tree-based tools are suitable here.
  • rdiff-backup requires software on the remote end and does not do encryption or dedup.
  • duplicity requires a periodic re-upload of a full backup, or incremental chains become quite long and storage-inefficient. It also does not support dedup, although it does have an impressive list of “dumb” storage backends.
  • ZFS, if used to do backups the efficient way, would require software to be installed on the remote end. If simple “zfs send” images are used, the same limitations as with duplicity apply.
  • bup and zbackup are both interesting deduplicators, but do not yet have support for removing old data, so are impractical for this purpose.
  • burp requires software on the server side.

Obnam and Attic/Borg Backup

Obnam and Attic (and its fork Borg Backup) are both programs that have a similar concept at their heart, which is roughly this: the backup repository stores small chunks of data, indexed by a checksum. Directory trees are composed of files that are assembled out of lists of chunks, so if any given file matches another file already in the repository somewhere, the added cost is just a small amount of metadata.

Obnam was eventually my tool of choice. It has built-in support for sftp, but its reliance on local filesystem semantics is very conservative and it works fine atop davfs2 (and, I’d imagine, other S3-backed FUSE filesystems). Obnam’s repository format is carefully documented and it is very conservatively designed through and through — clearly optimized for integrity above all else, including speed. Just what a backup program should be. It has a lot of configurable options, including chunk size, caching information (dedup tables can be RAM-hungry), etc. These default to fairly conservative values, and the performance of Obnam can be significantly improved with a few simple config tweaks.
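
For instance, a backup run with a larger cache and upload queue looks something like this (the repository URL is a placeholder and the values are only illustrative; tune them for your RAM and link):

obnam backup --repository sftp://user@backuphost/~/backups --lru-size=1024 --upload-queue-size=512 $HOME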

Attic was also a leading contender. It has a few advantages over Obnam, actually. One is that it uses an rsync-like rolling checksum method. This means that if you add 1 byte at the beginning of a 100MB file, Attic will upload a 1-byte chunk and then reference the other chunks after that, while Obnam will have to re-upload the entire file, since its chunks start at the beginning of the file in fixed sizes. (The only time Obnam has chunks smaller than its configured chunk size is with very small files or the last chunk in a file.) Another nice feature of Attic is its use of “packs”, where it groups chunks together into larger pack files. This can have significant performance advantages when backing up small files, especially over high-latency protocols and links.

On the downside, Attic has a fairly small hardcoded chunk size that gives it a heavy metadata load. It is not at all as configurable as Obnam, and unlike Obnam, there is nothing you can do about this. The biggest reason I avoided it though was that it uses a single monolithic index file that would have to be uploaded from scratch after each backup. I calculated that this would be many GB in size, if not even tens of GB, for my intended use, and this is just not practical over the Internet. Attic assumes that if you are going remote, you run Attic on the remote so that the rewrite of this file doesn’t have to send all the data across the network. Although it does work atop davfs2, this support seemed like an afterthought and is clearly not very practical.

Attic did perform much better than Obnam in some ways, largely thanks to its pack support, but the monolithic index file was going to make it simply impractical to use.

There is a new fork of Attic called Borg that may, in the future, address some of these issues.

Brief honorable mentions: bup, zbackup, syncany

There are a few other backup tools that people are talking about which do dedup. bup is frequently mentioned, but one big problem with it is that it has no way to delete old data! In other words, it is more of an archive than a backup tool. zbackup is a really neat idea — it dedups anything you feed it, such as a tar stream or “zfs send” stream, and can encrypt, too. But it doesn’t (yet) support removing old data either.

syncany is fundamentally a syncing tool, but can also be used from the command line to do periodic syncs to a remote. It supports encryption, sftp, WebDAV, etc. natively, and runs on quite a number of platforms easily. However, it doesn’t store a number of POSIX attributes, such as hard links, uid/gid owner, ACL, xattr, etc. This makes it impractical for use for even backing up my home directory; I make fairly frequent use of ln, both with and without -s. If there were some tool to create/restore archives of metadata, that might work out better.

First impressions and review of OwnCloud

In my recent post (I give up on Google), a lot of people suggested using OwnCloud as a replacement for several Google services. I’ve been playing around with it for a few days, and it is something of a mix of awesome and disappointing, in my opinion.

Files

OwnCloud started as a file-sync tool, somewhat akin to Google Drive and Dropbox. It has clients for every platform, and it is also a client for every platform: you can have subfolders of your OwnCloud installation stored on WebDAV, FTP, Google Drive, Dropbox, you name it. It is a pretty nice integrator of other storage services, and provides the only way to use some of them on Linux (*cough* Google Drive *cough*).

One particularly interesting feature is the live editing in the browser of ODT, DOCX, and TXT files. This is somewhat similar to Google Docs and the only such thing I’ve seen in Open Source software. It writes changes directly back to the documents and, in my limited testing, seems to work well. A very nice feature!

I’ve tested the syncing only on Linux so far, but it looks solid.

There are two surprising issues, however: there is no deduplication and no delta-uploads. Add 10 bytes to the end of a 1GB file, and you re-upload the 1GB file. Thankfully the OwnCloud GUI client is smart enough to use inotify to notice an mv, but my guess is — and I haven’t tested this, but apparently OwnCloud doesn’t use hashes at all — that the CLI client would require a reupload after any mv, because it doesn’t run continuously.

In some situations, Syncany may be a useful work-around for this, as it does chunk-based dedup and client-side encryption. However, you would lose a lot of the sharing features inside OwnCloud by doing this, and the integration with the OwnCloud “apps” for photos, videos, and music.

The Android/mobile apps support all the usual auto-upload options.

Calendar

A lot of people report using OwnCloud as a calendar server, and it does indeed use CalDAV. With a program like DAVDroid or Mozilla Lightning, this makes, in theory, a fully functioning calendar syncing tool. There is, of course, also a web interface to the calendar. It, sadly, is limited. Or shall we say, VERY limited. Even something like sending an invite is missing — and in fact, the GUI for sharing an event is baffling. You can share it with someone, they get no say in whether or not it shows up, and it shows up on their calendar on the web only (not on synced copies) and they have no way to remove it!

Sharing calendars is similar; you can hide the display of any one of your calendars on the web interface, but not of any calendars shared with you. Baffling.

Address Book

I haven’t tested this yet, but there’s not much to test, I suspect. It can be shared with others, which I could see as a nice feature.

Bookmarks

An interesting bookmarks manager, though mysteriously without Firefox sync support. There is Chrome sync support, and separate Mozilla Sync support, but it doesn’t provide cross-browser syncing, apparently.

Music

It is designed to present an interface to music that is stored in Files. It provides an Ampache-compatible API, so there are a lot of clients that can stream music. It has very few options, not even for transcoding, so I don’t see how it would be useful for my FLAC collection.

Pictures

Sort of a gallery view of photos synced up with Files. Very basic. Has a sharing button to share a link to an entire folder, but no option to embed photos in blog posts at a lower resolution, and no shortcut for sharing individual photos.

Notes, Tasks, etc.

I haven’t had the chance to look at this much. Some of them sync to various clients. The Notes are saved as HTML files that get synced down.

Clients overall

There is a very helpful page that lists all the sync clients for OwnCloud — not just for files, but also for calendars, contacts, etc. The list is extensive!

Other options

The two other Open Source options mentioned on my blog post were Kolab and SOGo, and there is also Zimbra, which has a community edition. The Debian Groupware page lists a number of other groupware options as well. Citadel caught my eye (wow, it’s still around!). SOGo has ActiveSync support, which might make phone integration a lot easier. It is not dead-simple to set up like OwnCloud is, though, so I haven’t tried it out, but I will probably be looking at it and Citadel next.