Objects On Earth Are Closer Than They Appear

“We all live beneath the great Big Dipper.”

So goes a line in a song I once heard the great Tony Brown sing. As I near the completion of my private pilot’s training, I’ve had more and more opportunities to literally see the wisdom in those words. Here’s a story of one of them.

Night

“A shining beacon in space — all alone in the night.”

– Babylon 5

A night cross-country flight, my first, taking off from a country airport. The plane lifts into the dark sky. The bright white lights of the runway get smaller, and disappear as I pass the edge of the airport. Directly below me, it looks like a dark sky; pitch black except for little pinpoints of light at farmhouses and the occasional car. But seconds later, an expanse of light unfolds, from a city it takes nearly an hour to reach by car. Already it is in sight, and as I look off to other directions, other cities even farther away are visible, too. The ground shows a square grid, the streets of the city visible for miles.

There are no highway signs in the sky. There are no wheels to keep my plane pointed straight. Even if I point the plane due south, if there is an east wind, I will actually be flying southwest. I use my eyes, enhanced by technology like a compass, GPS, and VHF radio beacons, to find my way. Before ever getting into the airplane, I have carefully planned my route, selecting both visual and technological waypoints along the way to give me many ways to confirm I am on course and to keep me from getting lost.

Soon I see a flash repeating every few seconds in the distance — an airport beacon. Then another, and another. Little pinpoints of light nestled in the square orange grid. Wichita has many airports, each with its beacon, and one of them will be my first visual checkpoint of the night. I make a few clicks in the cockpit, and soon the radio-controlled lights at one of the airports spring to life, illuminating my first checkpoint. More than a mile of white lights there to welcome any plane that lands, and to show a point on the path of any plane that passes.

I continue my flight, sometimes turning on lights at airports, other times pointing my plane at lights from antenna towers thousands of feet below me, sometimes keeping a tiny needle on my panel centered on a radio beacon. I land at a tiny, deserted airport, and then a few minutes later at a large commercial airport.

On my way back home, I fly solely by reference to the ground — directly over a freeway. I have other tools at my disposal, but don’t need them; the steady stream of red and white lights beneath me are all I need.

From my plane, there is just red and white. One after another, passing beneath me as I fly over them at 115 MPH. There is no citizen or undocumented immigrant, no rich or poor, no atheist or Christian or Muslim, no Democrat or Republican, no American or Mexican, no adult or child, no Porsche or Kia. Just red and white points of light, each one the same as the one before and the one after, stretching as far as I can see into the distance. All alike in the night.

You only need to get a hundred feet off the ground before you realize how little state lines, national borders, and the machinery of politics and exclusion really mean. From the sky, the difference between a field of corn and a field of wheat is far more significant than the difference between Kansas and Missouri.

This should be a comforting reminder to us. We are all unique, and beautiful in our uniqueness, but we are all human, each as valuable as the next.

Up in the sky, even with my instructor beside me, during quiet times it is easy to feel all alone in the night. But I know that is not the case. Only a few thousand feet separate my plane from those cars. My plane, too, has red and white lights.

How often at night, when the heavens were bright,
With the light of the twinkling stars
Have I stood here amazed, and asked as I gazed,
If their glory exceed that of ours.

– John A. Lomax

There’s still a chance to save WiFi

You may not know it, but Wifi is under assault in the USA due to proposed FCC regulations about modifications to devices with modular radios. In short, the proposal would make it illegal for vendors to sell devices with firmware that users can replace. This is of concern to everyone, because Wifi routers are notoriously buggy and insecure. It is also of special concern to amateur radio hobbyists, due to the use of these devices in the Amateur Radio Service (FCC Part 97).

I submitted a comment to the FCC about this, which provides background and a summary of the issues for those who are interested. Here it is:

My comment has two parts: one, the impact on the Amateur Radio service; and two, the impact on security. Both pertain primarily to the 802.11 (“Wifi”) services typically operating under Part 15.

The Amateur Radio Service (FCC part 97) has long been recognized by the FCC and Congress as important to the nation. Through it, amateurs contribute to scientific knowledge, learn skills that bolster the technological competitiveness of the United States, and save lives through their extensive involvement in disaster response.

Certain segments of the 2.4GHz and 5GHz Wifi bands authorized under FCC Part 15 also fall under the frequencies available to licensed amateurs under FCC Part 97 [1].

By scrupulously following the Part 97 regulations, many amateur radio operators are taking devices originally designed for Part 15 use and modifying them for legal use under the Part 97 Amateur Radio Service. Although the uses are varied, much effort is being devoted to fault-tolerant mesh networks that provide high-speed multimedia communications in response to a disaster, even without the presence of any traditional infrastructure or Internet backbone. One such effort [2] has users replace the firmware on off-the-shelf Part 15 Wifi devices, reconfiguring them for proper authorized use under Part 97. This project has many vital properties, particularly the self-discovery of routes between nodes and the self-healing nature of the mesh network. These features are not typically available in the firmware of normal Part 15 devices.

It should also be noted that there is presently no vendor of Wifi devices that operate under Part 97 out of the box. The only route available to amateurs is to take Part 15 devices and modify them for Part 97 use.

Amateur radio users of these services have been working for years to make sure they do not cause interference to Part 15 users [3]. One such effort takes advantage of the modular radio features of consumer Wifi gear to enable communication on frequencies that are within the Part 97 allocation, but outside of (though adjacent to) the Part 15 allocation. For instance, the chart at [1] identifies frequencies such as 2.397GHz or 5.660GHz that will never cause interference to Part 15 users because they lie entirely outside the Part 15 Wifi allocation.

If the FCC prevents the ability of consumers to modify the firmware of these devices, the following negative consequences will necessarily follow:

1) The use of high-speed multimedia or mesh networks in the Amateur Radio service will be sharply curtailed, relegated to only outdated hardware.

2) Interference between the Amateur Radio service — which may use higher power or antennas with higher gain — and Part 15 users will increase, because Amateur Radio service users will no longer be able to intentionally select frequencies that avoid Part 15.

3) The culture of inventiveness surrounding wireless communication will be curtailed in an important way.

Besides the impact on the Part 97 Amateur Radio Service, I also wish to comment on the impact to end-user security. There has been a terrible slew of high-profile situations where very popular consumer Wifi devices have had incredible security holes. Vendors have often been unwilling to address these issues [4].

Michael Horowitz maintains a website tracking security bugs in consumer Wifi routers [5]. Sadly these bugs are both severe and commonplace. Within just the last month, various popular routers have been found vulnerable to remote hacking [6] and used as platforms for launching Distributed Denial-of-Service (DDoS) attacks [7]. These issues impacted multiple models from multiple vendors. To make matters worse, most of them should never have happened in the first place, and were largely the result of carelessness or cluelessness on the part of manufacturers.

Consumers should not be at the mercy of vendors to fix their own networks, nor should they be required to trust unauditable systems. There are many well-regarded efforts to provide better firmware for Wifi devices, which still keep them operating under Part 15 restrictions. One is OpenWRT [8], which supports a wide variety of devices with a system built upon a solid Linux base.

Please keep control of our devices in the hands of consumers and amateurs, for the benefit of all.

References:

[1] https://en.wikipedia.org/wiki/High-speed_multimedia_radio

[2] http://www.broadband-hamnet.org/just-starting-read-this.html

[3] http://www.qsl.net/kb9mwr/projects/wireless/modify.html

[4] http://www.tomsitpro.com/articles/netgear-routers-security-hole,1-2461.html

[5] http://routersecurity.org/bugs.php

[6] https://www.kb.cert.org/vuls/id/950576

[7] https://www.stateoftheinternet.com/resources-cloud-security-2015-q2-web-security-report.html

[8] https://openwrt.org/

First steps: Debian on an Asus t100, and some negative experience with Gnome

The Asus t100 tablet is this amazing and odd little thing: it sells for under $200, yet has a full-featured Atom 64-bit CPU, 2GB RAM, 32 or 64GB SSD, etc. By default, it ships with Windows 8.1. It has a detachable keyboard, so it can be used as a tablet or a very small 10″ laptop.

I have never been a fan of Windows on it. It does the trick for web browsing and email, but I’d like to ssh into my machines sometimes, and I just can’t bring myself to type sensitive passwords into Windows.

I decided to try installing Debian on it. After a lot of abortive starts due to the UEFI-only firmware, I got jessie installed. (The installer was fine; it was Debian Live that wouldn’t boot.) I got wifi and battery status working via an upgrade to the 4.1 kernel. A little $10 Edimax USB wifi adapter was handy during the install, sparing a bunch of copying via USB disks.

I have been using XFCE with XMonad for so many years that I am somewhat a stranger to other desktop environments. XMonad isn’t really suitable for a tablet, however, so I thought I’d try Gnome, especially after a fairly glowing review about its use on a tablet.

I am already disappointed after just a few minutes. There is no suspend button on the menu. Some Googling showed that holding Alt while hovering over the power off button will change it to a suspend button. And indeed it does. But… uh, what? That need is so common, and the method so non-obvious. And pushing the power button does… nothing. That’s right, nothing. Apparently the way to enable some action when you push the power button is to type a settings command in a terminal. There’s no setting for it in the settings panel.
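
For the record, here is the sort of command that did it for me — a hedged sketch, since the exact schema and key names vary across Gnome versions:

    # assumption: Gnome 3.14-era key names; check with `gsettings list-keys`
    gsettings set org.gnome.settings-daemon.plugins.power button-power 'suspend'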

I initially ditched Gnome some years ago due to its penchant for removing features. I had hoped that this much time later, it would have passed that stage, but I’m already disappointed. I was hoping for some really nice integration with the system. But my XFCE setup has a very clear “When power button is pressed” setting. I have no idea why Gnome doesn’t.

Also, the touch screen works fine and it registers my touches, but whenever I touch anywhere, the cursor disappears. Weird, eh?

There are some things to fix yet on the tablet (sound, brightness adjustment, and making suspend reliable) but others have solved these in Ubuntu so I don’t think it’ll be too hard.

In the meantime, any suggestions regarding Gnome? Is it just going to annoy me? Maybe I should try KDE also. I’ve heard good things about Plasma Active, but I don’t see it in Debian.

Detailed Smart Card Cryptographic Token Security Guide

After my first post about smartcards under Linux, I thought I would share some information I’ve been gathering.

This post is already huge, so I am not going to dive into — much — specific commands, but I am linking to many sources with detailed instructions.

I’ve reviewed several types of cards. For this review, I will focus on the OpenPGP card and the Yubikey NEO, since the Cardomatic Smartcard-HSM is not supported by the gpg version in Jessie.

Both cards are produced by people with strong support for the Free Software ecosystem and have strong cross-platform support with source code.

OpenPGP card: Basics with GnuPG

The OpenPGP card is well-known as one of the first smart cards to work well on Linux. It is a single-application card focused on use with GPG. Generally speaking, by the way, you want GPG2 for use with smartcards.

Basically, this card contains three key slots: decryption, signing, and authentication. The concept is that the private key portions of the keys used for these purposes are stored only on the card, can never be extracted from the card, and the cryptographic operations are performed on the card. There is more information in my original post. In a fairly rare move for smartcards, this card supports 4096-bit RSA keys; most are restricted to 2048-bit keys.
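
For orientation, the basic provisioning flow with GPG2 looks roughly like this (a sketch; the FSFE instructions mentioned below walk through it properly):

    gpg2 --card-status    # show what the card contains
    gpg2 --card-edit      # then, at the gpg/card> prompt:
    #   admin             #   enable admin commands
    #   passwd            #   change the PINs from their well-known defaults
    #   generate          #   generate the three keys on-card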

The FSF Europe hands these out to people and has a lot of good information about them online, including some HOWTOs. The official GnuPG smart card howto is 10 years old, and although it has some good background, I’d suggest using the FSFE instructions instead.

As you’ll see in a bit, most of this information also pertains to the OpenPGP mode of the Yubikey Neo.

OpenPGP card: Other uses

Of course, this is already pretty great to enhance your GPG security, but there’s a lot more that you can do with this card to add two-factor authentication (2FA) to a lot of other areas. Here are some pointers:

OpenPGP card: remote authentication with ssh

You can store the private part of your ssh key on the card. Traditionally, this was only done by using the ssh agent emulation mode of gnupg-agent. This is still possible, of course.
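
That route looks roughly like this — a sketch, as the details vary with your gnupg-agent version and session setup:

    # ~/.gnupg/gpg-agent.conf
    enable-ssh-support

    # then point your session at gpg-agent's ssh socket, e.g.:
    eval $(gpg-agent --daemon --enable-ssh-support)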

Now, however, the OpenSC project supports the OpenPGP card as a PKCS#11 and PKCS#15 card, which means it works natively with ssh-agent as well. Try just ssh-add -s /usr/lib/x86_64-linux-gnu/pkcs11/opensc-pkcs11.so if you’ve put a key in the auth slot with GPG. ssh-add -L will list its fingerprint for insertion into authorized_keys. Very simple!
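
A sample session might look like this (the PKCS#11 passphrase prompt is asking for the card’s PIN):

    $ ssh-add -s /usr/lib/x86_64-linux-gnu/pkcs11/opensc-pkcs11.so
    Enter passphrase for PKCS#11:
    $ ssh-add -L    # copy the output into ~/.ssh/authorized_keys on the remote host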

As an aside: Comments that you need scute for PKCS#11 support are now outdated. I do not recommend scute. It is quite buggy.

OpenPGP card: local authentication with PAM

You can authenticate logins to a local machine by using the card with libpam-poldi — here are some instructions.
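
Once the card is registered per those instructions, the heart of it is a single PAM line. A hypothetical sketch of what might go in /etc/pam.d/common-auth:

    # sketch -- see the linked poldi instructions for the full setup
    auth sufficient pam_poldi.so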

Between the use with ssh and the use with PAM, we have now covered 2FA for both local and remote use in Unix environments.

OpenPGP card: use on Windows

Let’s move on to Windows environments. The standard suggestion here seems to be the mysmartlogon OpenPGP mini-driver. It works with some sort of Windows CA system, or with local accounts using EIDAuthenticate. I have not yet tried this.

OpenPGP card: Use with X.509 or Windows Active Directory

You can use the card in X.509 mode via these gpgsm instructions, which apparently also work with Windows Active Directory in some fashion.

You can also use it with web browsers to present a client certificate for authentication. For example, here are OpenSC instructions for Firefox.

OpenPGP card: Use with OpenVPN

Via the PKCS#11 mode, this card should be usable to authenticate a client to OpenVPN. See the official OpenVPN HOWTO or these other instructions for more.
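
The relevant pieces of an OpenVPN client config look roughly like this (a sketch; pkcs11-providers and pkcs11-id are standard OpenVPN options, and the id string comes from OpenVPN’s own discovery command):

    # find the serialized id of the certificate on the card:
    openvpn --show-pkcs11-ids /usr/lib/x86_64-linux-gnu/pkcs11/opensc-pkcs11.so

    # then, in the client config:
    pkcs11-providers /usr/lib/x86_64-linux-gnu/pkcs11/opensc-pkcs11.so
    pkcs11-id 'SERIALIZED-ID-FROM-ABOVE'    # placeholder; paste the real string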

OpenPGP card: a note on PKCS#11 and PKCS#15 support

You’ll want to install the opensc-pkcs11 package, and then give the path /usr/lib/x86_64-linux-gnu/pkcs11/opensc-pkcs11.so whenever something needs the PKCS#11 library. There seem to be some locking/contention issues between GPG2 and OpenSC, however. Usually killing pcscd and scdaemon will resolve this.
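
When things wedge, something like this usually recovers it (then retry whichever tool you were using):

    sudo killall pcscd
    killall scdaemon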

I would recommend doing manipulation operations (setting PINs, generating or uploading keys, etc.) via GPG2 only, and using the PKCS#11 tools only for access.

OpenPGP card: further reading

Kernel Concepts also has some nice readers; you can get this card in a small USB form-factor by getting the mini-card and the Gemalto reader.

Yubikey Neo Introduction

The Yubikey Neo is a fascinating device. It is a small USB and NFC device, a little smaller than your average USB drive. It is a multi-application device that actually has six distinct modes:

  • OpenPGP JavaCard Applet (PC/SC-compatible)
  • Personal Identity Verification [PIV] (PC/SC-compatible, PKCS#11-compatible in Windows and OpenSC)
  • Yubico HOTP, via your own auth server or Yubico’s
  • OATH, with its two sub-modes:
    • OATH TOTP, with a mobile or desktop helper app (a drop-in replacement for Google Authenticator)
    • OATH HOTP
  • Challenge-response mode
  • U2F (Universal 2nd Factor) with Chrome

There is a ton to digest with this device.

Yubikey Neo Basics

By default, the Yubikey Neo is locked to only a subset of its features. Using the yubikey-personalization tool (you’ll need the version in stretch; jessie is too old), you can use ykpersonalize -m86 to unlock the full possibilities of the card. Run that command, then unplug and replug the device.

It will present itself as a USB keyboard as well as a PC/SC-compatible card reader. It has a capacitive button, which is used to have it type out validation information as keystrokes for HOTP or HMAC validation. It has two “slots” that can be configured with HMAC and HOTP; a short button press selects the default slot #1 and a long press selects slot #2.

But before we get into that, let’s step back to some basics.

opensc-tool --list-algorithms claims this card supports RSA with 1024-, 2048-, and 3072-bit sizes, and EC with 256- and 384-bit sizes. I haven’t personally verified anything other than RSA-2048, though.

Yubikey Neo: OpenPGP support

In this mode, the card is mostly compatible with the physical OpenPGP card. I say “mostly” because there are a few protocol differences I’ll get into later. It is also limited to 2048-bit keys.

Support for this is built into GnuPG and the GnuPG features described above all work fine.

In this mode, it uses firmware from the Yubico fork of the JavaCard OpenPGP Card applet. There are Yubico-specific tutorials available, but again, most of the general GPG stuff applies.

You can use gnupg-agent to use the card with SSH as before. However, due to some incompatibilities, the OpenPGP applet on this card cannot be used as a PKCS#11 card with either scute or OpenSC. That is not exactly a huge problem, however, as the card has another applet (PIV) that is compatible with OpenSC and so this still provides an avenue for SSH, OpenVPN, Mozilla, etc.

It should be noted that the OpenPGP applet on this card can also be used with NFC on Android with the OpenKeychain app. Together with pass (or its Windows, Mac, or phone ports), this makes a nicely secure system for storing passwords.

Yubikey Neo: PKCS#11 with the PIV applet

There is also support for the PIV standard on the Yubikey Neo. This is supported by default on Linux (via OpenSC) and Windows and provides a PKCS#11-compatible store. It should, therefore, be compatible with ssh-agent, OpenVPN, Active Directory, and all the other OpenPGP card uses described above. The only difference is that it uses storage separate from the OpenPGP applet.

You will need one of the Yubico PIV tools to configure the key for it; in Debian, the yubico-piv-tool from stretch does this.

Here are some instructions on using the Yubikey Neo in PIV mode:

A final note: for security, it’s important to change the management key and PINs before deploying the PIV mode.

I couldn’t get this to work with Firefox, but it worked pretty much everywhere else.

Yubikey Neo: HOTP authentication

This is the default mode for your Yubikey; all other modes require enabling with ykpersonalize. In this mode, a 128-bit AES key stored on the Yubikey is used to generate one-time passwords (OTPs). (This key was shared in advance with the authentication server.) A typical pattern would be three prompts: username, password, and Yubikey HOTP. The user clicks in the Yubikey HOTP field, touches the Yubikey, and the one-time token is typed in by the Yubikey itself.

In the background, the service being authenticated to contacts an authentication server. This authentication server can be either your own (there are several open source implementations in Debian) or the free Yubicloud.

Either way, the server decrypts the encrypted part of the OTP, performs validity checks (making sure that the counter is larger than any counter it’s seen before, etc.), and returns success or failure back to the service demanding authentication.

The first few characters of the posted OTP contain the unencrypted key ID, so it can also be used to supply the username if desired.

Yubico has provided quite a few integrations and libraries for this mode. A few highlights:

You can also find some details on the OTP mode. Here’s another writeup.

This mode is simple to implement, but it has a few downsides. One is that it is specific to the Yubico line of products, and thus has a vendor lock-in factor. Another is the dependence on the authentication server; this creates a potential single point of failure and can be undesirable in some circumstances.

Yubikey Neo: OATH and HOTP and TOTP

First, a quick note: OATH and OAuth are not the same. OATH is an authentication protocol, and OAuth is an authorization protocol. Now then…

Like Yubikey HOTP, the OATH (both HOTP and TOTP) modes rely on a pre-shared key. (See details in the Yubico article.) Let’s talk about TOTP first. With TOTP, there is a pre-shared secret with each service. Each time you authenticate to that service, your TOTP generator combines the timestamp with the shared secret using an HMAC algorithm and produces an OTP that changes every 30 seconds. Google Authenticator is a common implementation of this protocol, and the Yubikey is a drop-in replacement for it. Gandi has a nice description of it that includes links to software-only solutions on various platforms as well.
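
To get a feel for the mechanics, you can compute a TOTP code entirely in software using oathtool from the OATH Toolkit (the base32 secret here is a made-up example):

    # prints the current 6-digit code for this shared secret
    oathtool --totp -b JBSWY3DPEHPK3PXP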

With the Yubikey, the shared secrets are stored on the card and processed within it. You cannot extract the shared secret from the Yubikey. Of course, if someone obtains physical access to your Yubikey they could use the shared secret stored on it, but there is no way they can steal the shared secret via software, even by compromising your PC or phone.

Since the Yubikey does not have a built-in clock, TOTP operations cannot be completed solely on the card. You can use a PC-based app or the Android application (Play store link) with NFC to store secrets on the device and generate your TOTP codes. Command-line users can also use the yubikey-totp tool in the python-yubico package.

OATH can also use HOTP. With HOTP, an authentication counter is used instead of a clock. This means that HOTP passwords can be generated entirely within the Yubikey. You can use ykpersonalize to configure either slot 1 or 2 for this mode, but one downside is that it can really only be used with one service per slot.
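
Programming a slot for OATH-HOTP looks something like this — a sketch based on Yubico’s documentation, with a hypothetical seed; double-check the flags against the ykpersonalize manpage:

    # program slot 2 for OATH-HOTP with a 20-byte (40 hex digit) seed
    ykpersonalize -2 -ooath-hotp -ooath-imf=0 \
      -a303132333435363738393a3b3c3d3e3f40414243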

OATH support is all over the place; for instance, there’s libpam-oath from the OATH toolkit for Linux platforms. (Some more instructions on this exist.)
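
The PAM side is essentially a one-liner plus a secrets file. A rough sketch, with a hypothetical user and seed (the seed must match what the token was programmed with):

    # in /etc/pam.d/<service>:
    auth requisite pam_oath.so usersfile=/etc/users.oath window=10

    # /etc/users.oath -- type, user, PIN field (- for none), hex seed
    HOTP exampleuser - 303132333435363738393a3b3c3d3e3f40414243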

Note: There is another tool from Yubico (not in Debian) that can apparently store multiple TOTP and HOTP codes in the Yubikey, although ykpersonalize and the other documented tools cannot. It is therefore unclear to me whether, and how, multiple HOTP codes are supported.

Yubikey Neo: Challenge-Response Mode

This can be useful for doing offline authentication, and is similar to OATH-HOTP in a sense. There is a shared secret to start with, and the service trying to authenticate sends a challenge to the token, which must supply an appropriate response. This makes it only suitable for local authentication, but means it can be done fairly automatically and optionally does not even require a button press.

To muddy the waters a bit, it supports both “Yubikey OTP” and HMAC-SHA1 challenge-response modes. I do not really know the difference. However, it is worth noting that libpam-yubico works with HMAC-SHA1 mode. This makes it suitable, for instance, for logon passwords.
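
For reference, programming and testing HMAC-SHA1 challenge-response looks roughly like this (flags taken from the yubikey-personalization documentation; treat them as an assumption to verify):

    # program slot 2 for HMAC-SHA1 challenge-response
    ykpersonalize -2 -ochal-resp -ochal-hmac -ohmac-lt64 -oserial-api-visible

    # send a test challenge to slot 2 and print the response
    ykchalresp -2 'some challenge'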

Yubikey Neo: U2F

U2F is a new protocol for web-based apps. Yubico has some information, but since it is only supported in Chrome, it is not of interest to me right now.

Yubikey Neo: Further resources

Yubico has a lot of documentation, and in particular a technical manual that is actually fairly detailed.

Closing comments

Do not think a hardware security token is a panacea. It is best used as part of a multi-factor authentication system; you don’t want a lost token itself to lead to a breach, just as you don’t want a compromised password due to a keylogger to lead to a breach.

These things won’t prevent someone who has compromised your PC from abusing your existing ssh session (or even from establishing new ssh sessions from your PC, once you’ve unlocked the token with the passphrase). What they will do is prevent them from stealing your ssh private key and using it on a different PC. They won’t prevent someone from obtaining a copy of things you decrypt on a PC using the Yubikey, but they will prevent them from decrypting other things that used that private key. Hopefully that makes sense.

One also has to consider the security of the hardware. On that point, I am pretty well satisfied with the Yubikey; large parts of it are open source, and they have put a lot of effort into hardening the hardware. It seems pretty much impervious to non-government actors, which is about the best guarantee a person can get about anything these days.

I hope this guide has been helpful.

The Time Machine of Durango

“The airplane may be the closest thing we have to a time machine.”

– Brian J. Terwilliger


There is something about that moment. Hiking in the mountains near Durango, Colorado, with Laura and the boys, we found a beautiful spot with a view of the valley. We paused to admire, and then –

The sound of a steam locomotive whistle from down below, sounding loud all the way up there, then echoing back and forth through the valley. Then the quieter, seemingly more distant sound of the steam engine heading across the valley, chugging and clacking as it goes. More whistles, the sight of smoke and then of the train full of people, looking like a beautiful model train from our vantage point.


I’ve heard that sound on a few rare recordings, but never experienced it. I’ve been on steam trains a few times, but never spent time in a town where they still run all day, every day. It is a different sort of feeling to spend a week in a place where Jacob and Oliver would jump up several times a day and rush to the nearest window in an attempt to catch sight of the train.


Airplanes really can be a time machine in a sense — what a wondrous time to be alive, when things so ancient are within the reach of so many. I have been transported to Lübeck and felt the uneven 700-year-old stones of the Marienkirche underneath my feet, feeling a connection to the people that walked those floors for centuries. I felt the same in Prague, in St. George’s Basilica, built in 1142, and at the Acropolis of Lindos, with its ancient Greek temple ruins. In Kansas, I feel that when in the middle of the Flint Hills — rolling green hills underneath the pure blue sky with billowing white clouds, the sounds of crickets, frogs, and cicadas in my ears; the sights and sounds are pretty much as they’ve been for tens of thousands of years. And, of course, in Durango, arriving on a plane but seeing the steam train a few minutes later.


It was fitting that we were in Durango with Laura’s parents to celebrate their 50th anniversary. As we looked forward to riding the train, we heard their stories of visits to Durango years ago, of their memories of days when steam trains were common. We enjoyed thinking about what our lives would be like should we live long enough to celebrate 50 years of marriage. Perhaps we would still be in good enough health to be able to ride a steam train in Durango, telling about that time when we rode the train, which by then will have been pretty much the same for 183 years. Or perhaps we would take them to our creek, enjoying a meal at the campfire like I’ve done since I was a child.

Each time has its unique character. I am grateful for the cameras and airplanes and air conditioning we have today. But I am also thankful for those things that connect us with each other through time, those rocks that are the same every year, those places that remind us how close we really are to those that came before.

True Things About Learning to Fly

I’ve been pretty quiet for the last few months because I’m learning to fly. I want to start with a few quotes about aviation. I have heard things like these from many people and can vouch for their accuracy:

Anyone can learn to fly.

Learning to fly is one of the hardest things you’ll ever do.

It is totally worth it. Being a pilot will give you a new outlook on life.

You’ll be amazed at what radios do at 3000ft. Have you ever had a 3000-foot antenna tower?

The world is glorious at 1000ft up.

Share your enthusiasm with those around you. You have a perspective very few ever see, except for a few seconds on the way to 35,000ft.

Earlier this month, I flew solo for the first time — the biggest milestone on the way to getting the pilot’s license. Here’s a photo my flight instructor took as I was coming in to land that day.

[photo: landing]

Today I took my first flight to another airport. It wasn’t far — about 20 miles away — but it was still a thrill. I flew about 1500ft above the ground, roughly above a freeway that happened to be my route. From that height, things still look three-dimensional. The grain elevator marking one small town, the manufacturing plant at another, the college at the third. Bales of hay dotting the fields, the occasional tractor creeping along a road, churches sticking up above the trees. These are places I’ve known for decades, and now, suddenly, they are all new.

What a time to be alive! I am glad that our world is still so full of wonder and beauty.

First steps with smartcards under Linux and Android — hard, but it works

Well this has been an interesting project.

It all started with a need to get better password storage at work. We wound up looking heavily at a GPG-based solution. This prompted the question: how can we make it even more secure?

Well, perhaps, smartcards. The theory is this: a smartcard holds your private keys in a highly-secure piece of hardware. The PC can never actually access the private keys. Signing and decrypting operations are done directly on the card to prevent the need to export the private key material to the PC. There are lots of “standards” to choose from (PKCS#11, PKCS#15, and OpenPGP card specs) that are relevant here. And there are ways to use SSH and OpenVPN with some of these keys too. Access to the card is protected by a passphrase (called a “PIN” in smartcard lingo, even though it need not be numeric). These smartcards might be USB sticks, or cards you pop into a reader. In any case, you can pop them out when not needed, pop them in to use them, and… well, pretty nice, eh?

So that’s the theory. Let’s talk a bit of reality.

First of all, it is hard for a person like me to evaluate how secure my data is in hardware. There was a high-profile bug in the OpenPGP JavaCard applet used by Yubico that made it possible to use keys without a PIN, for instance. And how well protected is the key in the physical hardware? Granted, with most of these cards you’re talking serious hardware skill to compromise them, but still, this is unknown in absolute terms.

Here’s the bigger problem: compatibility. There are all sorts of card readers, and compatibility with pcsc-tools and pcscd on Linux seems pretty good. But the cards themselves — oh my. PKCS#11 defines an interface API, but each vendor provides its own .so or .dll file to interface with its cards. Some cards (for instance, the ACOS5-64 mentioned on the Debian wiki!) are made by vendors that charge $50 for the privilege of getting the drivers needed to make them work… and they’re closed-source proprietary drivers at that.

Some attempts

I ordered several cards to evaluate: the OpenPGP card (specifically designed to support GPG), the ACOS5-64 card, the JavaCOS A22, the Yubikey Neo, and a simple reader listed on the GPG smartcard howto.

The OpenPGP card and the ACOS5-64 are the only ones in the list that support 4096-bit RSA keys, which are computationally demanding; the others all support 2048-bit RSA keys.

The JavaCOS requires the user to install a JavaCard applet on the card to make it usable. The Yubico OpenPGP applet works here, along with GlobalPlatform tools to install it. I am not sure just how solid it is. The Yubikey Neo has yet to arrive; it integrates some interesting OATH and TOTP capabilities as well.

I found that Debian’s wiki page for smartcards lists a bunch of them that are not really usable with the tools in main. The ACOS5-64 was such a dud. But I got the JavaCOS A22 working quite nicely. It’s also NFC-enabled and works perfectly with OpenKeychain on Android (looking like a “Yubikey Neo” to it, once the OpenPGP applet is installed). I’m impressed! Here’s a way to be secure with my smartphone without revealing everything all the time.

Really, the largest amount of time goes into figuring out how all this stuff fits together. I’m getting there, but I’ve got a ways to go yet.

Update: Corrected to read “signing and decrypting” rather than “signing and encrypting” operations are being done on the card. Thanks to Benoît Allard for catching this error.

Roundup of remote encrypted deduplicated backups in Linux

Since I wrote last about Linux backup tools, back in a 2008 article about BackupPC and similar tools and a 2011 article about deduplicating filesystems, I’ve revisited my personal backup strategy a bit.

I still use ZFS, with my tool “simplesnap” that I wrote about in 2014 to perform local backups to USB drives, which get rotated offsite periodically. This has the advantage of being very fast and very secure, but I also wanted offsite backups over the Internet. I began compiling criteria, which ran like this:

  • Remote end must not need any special software installed. Storage across rsync, sftp, S3, WebDAV, etc. should all be good candidates. The remote end should not need to support hard links or symlinks, etc.
  • Cross-host deduplication at least at the file level is required, so that if I move a 4GB video file from one machine to another, my puny DSL doesn’t have to re-upload it.
  • All data that is stored remotely must be 100% encrypted 100% of the time. I must not need to have any trust at all in the remote end.
  • Each backup after the first must send only an incremental’s worth of data across the line. Periodic re-uploading of the entire data set is not acceptable.
  • The repository format must be well-documented and stable.
  • The tools must preserve POSIX attributes like uid/gid, permission bits, symbolic links, hard links, etc. Support for xattrs is also desirable but not required.

So, how did things stack up?

Didn’t meet criteria

A lot of popular tools didn’t meet the criteria. Here are some that I considered:

  • BackupPC requires software on the remote end and does not do encryption.
  • None of the rsync hardlink tree-based tools are suitable here.
  • rdiff-backup requires software on the remote end and does not do encryption or dedup.
  • duplicity requires a periodic re-upload of a full backup, or incremental chains become quite long and storage-inefficient. It also does not support dedup, although it does have an impressive list of “dumb” storage backends.
  • ZFS, if used to do backups the efficient way, would require software to be installed on the remote end. If simple “zfs send” images are used, the same limitations as with duplicity apply.
  • bup and zbackup are both interesting deduplicators, but do not yet have support for removing old data, so are impractical for this purpose.
  • burp requires software on the server side.

Obnam and Attic/Borg Backup

Obnam and Attic (and its fork Borg Backup) are both programs that have a similar concept at their heart, which is roughly this: the backup repository stores small chunks of data, indexed by a checksum. Directory trees are composed of files that are assembled out of lists of chunks, so if any given file matches another file already in the repository somewhere, the added cost is just a small amount of metadata.

Obnam was eventually my tool of choice. It has built-in support for sftp, but its reliance on local filesystem semantics is very conservative and it works fine atop davfs2 (and, I’d imagine, other S3-backed FUSE filesystems). Obnam’s repository format is carefully documented and it is very conservatively designed through and through — clearly optimized for integrity above all else, including speed. Just what a backup program should be. It has a lot of configurable options, including chunk size, caching information (dedup tables can be RAM-hungry), etc. These default to fairly conservative values, and the performance of Obnam can be significantly improved with a few simple config tweaks.
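
For reference, here is the sort of thing I mean — a sketch of an INI-style ~/.obnam.conf with illustrative values (the option names are from Obnam’s documentation; the repository URL and key ID are hypothetical):

    [config]
    repository = sftp://user@backup.example.com/~/obnam-repo
    # GPG key ID to encrypt with (hypothetical)
    encrypt-with = DEADBEEF
    # bigger dedup-table cache and upload batching; the defaults are much smaller
    lru-size = 1024
    upload-queue-size = 512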

Attic was also a leading contender. It has a few advantages over Obnam, actually. One is that it uses an rsync-like rolling checksum method. This means that if you add 1 byte at the beginning of a 100MB file, Attic will upload a 1-byte chunk and then reference the other chunks after that, while Obnam will have to re-upload the entire file, since its chunks start at the beginning of the file in fixed sizes. (The only time Obnam has chunks smaller than its configured chunk size is with very small files or the last chunk in a file.) Another nice feature of Attic is its use of “packs”, where it groups chunks together into larger pack files. This can have significant performance advantages when backing up small files, especially over high-latency protocols and links.

On the downside, Attic has a hardcoded, fairly small chunk size that gives it a heavy metadata load. It is not nearly as configurable as Obnam, and there is nothing you can do about that. The biggest reason I avoided it, though, was that it uses a single monolithic index file that would have to be uploaded from scratch after each backup. I calculated that this would be many GB in size, if not tens of GB, for my intended use, and that is just not practical over the Internet. Attic assumes that if you are going remote, you run Attic on the remote end so that the rewrite of this file doesn’t have to cross the network. Although it does work atop davfs2, this support seems like an afterthought and is clearly not very practical.

Attic did perform much better than Obnam in some ways, largely thanks to its pack support, but the monolithic index file was going to make it simply impractical to use.

There is a new fork of Attic called Borg that may, in the future, address some of these issues.

Brief honorable mentions: bup, zbackup, syncany

There are a few other backup tools that people are talking about which do dedup. bup is frequently mentioned, but one big problem with it is that it has no way to delete old data! In other words, it is more of an archive than a backup tool. zbackup is a really neat idea — it dedups anything you feed it, such as a tar stream or “zfs send” stream, and can encrypt, too. But it doesn’t (yet) support removing old data either.

syncany is fundamentally a syncing tool, but can also be used from the command line to do periodic syncs to a remote. It natively supports encryption, sftp, WebDAV, etc., and runs on quite a number of platforms easily. However, it doesn’t store a number of POSIX attributes, such as hard links, uid/gid ownership, ACLs, xattrs, etc. This makes it impractical even for backing up my home directory; I make fairly frequent use of ln, both with and without -s. If there were some tool to create/restore archives of metadata, that might work out better.

First impressions and review of OwnCloud

In my recent post (I give up on Google), a lot of people suggested using OwnCloud as a replacement for several Google services. I’ve been playing around with it for a few days, and it is something of a mix of awesome and disappointing, in my opinion.

Files

OwnCloud started as a file-sync tool, somewhat akin to Google Drive and Dropbox. It has clients for every platform, and it is also a client for other services: you can have subfolders of your OwnCloud installation stored on WebDAV, FTP, Google Drive, Dropbox, you name it. It is a pretty nice integrator of other storage services, and provides the only way to use some of them on Linux (*cough* Google Drive *cough*).

One particularly interesting feature is the live editing in the browser of ODT, DOCX, and TXT files. This is somewhat similar to Google Docs and the only such thing I’ve seen in Open Source software. It writes changes directly back to the documents and, in my limited testing, seems to work well. A very nice feature!

I’ve tested the syncing only on Linux so far, but it looks solid.

There are two surprising omissions, however: there is no deduplication and there are no delta-uploads. Add 10 bytes to the end of a 1GB file, and you re-upload the 1GB file. Thankfully the OwnCloud GUI client is smart enough to use inotify to notice an mv, but my guess — I haven’t tested this, but apparently OwnCloud doesn’t use hashes at all — is that the CLI client would require a re-upload after any mv, because it doesn’t run continuously.

In some situations, Syncany may be a useful work-around for this, as it does chunk-based dedup and client-side encryption. However, you would lose a lot of the sharing features inside OwnCloud by doing this, and the integration with the OwnCloud “apps” for photos, videos, and music.

The Android/mobile apps support all the usual auto-upload options.

Calendar

A lot of people report using OwnCloud as a calendar server, and it does indeed speak CalDAV. With a program like DAVdroid or Mozilla Lightning, this makes, in theory, a fully-functioning calendar syncing tool. There is, of course, also a web interface to the calendar. It, sadly, is limited. Or shall we say, VERY limited. Even something like sending an invite is missing — and in fact, the GUI for sharing an event is baffling. You can share an event with someone, but they get no say in whether or not it shows up, it shows up on their calendar on the web only (not on synced copies), and they have no way to remove it!

Sharing calendars is similar; you can hide the display of any one of your calendars on the web interface, but not of any calendars shared with you. Baffling.

Address Book

I haven’t tested this yet, but there’s not much to test, I suspect. It can be shared with others, which I could see as a nice feature.

Bookmarks

An interesting bookmarks manager, though mysteriously without Firefox Sync support. There is Chrome sync support and separate Mozilla Sync support, but it doesn’t provide cross-browser syncing, apparently.

Music

It is designed to present an interface to music that is stored in Files. It provides an Ampache-compatible API, so there are a lot of clients that can stream music from it. It has very few options, not even for transcoding, so I don’t see how it would be useful for my FLAC collection.

Pictures

Sort of a gallery view of photos synced up with Files. Very basic. It has a sharing button to share a link to an entire folder, but no option to embed photos in blog posts at a lower resolution, and no shortcut for sharing individual photos.

Notes, Tasks, etc.

I haven’t had the chance to look at this much. Some of them sync to various clients. The Notes are saved as HTML files that get synced down.

Clients overall

There is a very helpful page that lists all the sync clients for OwnCloud — not just for files, but also for calendars, contacts, etc. The list is extensive!

Other options

The two other Open Source options mentioned on my blog post were Kolab and SOGo, and there is also Zimbra, which has a community edition. The Debian Groupware page lists a number of other groupware options as well. Citadel caught my eye (wow, it’s still around!). SOGo has ActiveSync support, which might make phone integration a lot easier. It is not dead-simple to set up like OwnCloud is, so I haven’t tried it out, but I will probably be looking at it and Citadel next.

I Give Up on Google: Free is Too Expensive

I am really tired of things Google has done lately.

The most recent example is the retirement of Classic Maps. That’s a problem, because the current Maps mysteriously doesn’t show most of my saved (“starred”) places. Google has known about this since at least 2013. There are posts all over their forums about it, going back to when what is now “regular” Google Maps was in beta. Google employees even knew about it and did nothing. For someone who made heavy use of it, this was quite annoying.

But there have been plenty of others:

  • Removing My Places and My Maps from Maps for Android. Those features were used to, for instance, plan trips, highlight routes, add campground possibilities, etc. (They eventually brought this feature back months/years later, in limited form.)
  • Removing the 7-day and month views from Calendar for Android, claiming this was “better” for users, then finally re-adding those views a few months later after many complaints. I even participated in a survey process with them where they were clearly struggling to understand why anybody wanted to see 7 days at once, when that feature had been there for years…
  • Removing the XMPP capabilities in Google Talk/Hangouts.
  • Pretty much shutting down Picasaweb, with very strong redirects to Google+ Photos — which still to this day doesn’t have a handy feature for embedding in a blog post or anything that’s not, well, Google+.
  • General creeping crapification of everything they touch. It’s almost like Microsoft in the 90s all over again. All of a sudden my starred places stop showing up in Google Maps, but show up in Google Drive — shared with the whole world. What? I never wanted them in Google Drive to start with.
  • All the products that are all but dead — Google Groups and the sad state of the Deja News archives. Maybe Google+ itself goes on this list soon?
  • Apparently trying to kill off Google Voice and merge it into Hangouts, even though I can’t send a text from the web with Hangouts.
  • And this massive list of discontinued services and products. Yeowch. Remember when Google Code was hot, and then they didn’t touch it at all for years?
  • And the fact that they still haven’t fixed some really basic things, such as letting people change their email address when they get married.
  • Dropping SIP from Grand Central, ActiveSync from Apps, etc.

I even used to use Flickr, then moved to Picasa when Yahoo stopped investing in Flickr. Now I’m back to Flickr, because Google stopped investing in Picasa.

The takeaway is that you can’t really rely on Google for anything. Counting on something being there for an upcoming trip and then having it suddenly yanked away produces a level of frustration that just makes the service not so useful. Never knowing when obvious things (like the 7-day calendar view) will be removed means you just can’t depend on it.

So, are there good alternatives? Things I’m thinking of include:

  • Alternative calendar applications. Ideally it would support shared calendars for multiple people in a family, an Android app that lets you easily view some or all calendars, etc. I wonder if outlook.com is really the only competitor here? Last I looked — a few years ago — none of the Open Source options really worked well.
  • Alternative mapping applications. Must-haves include directions, navigation in the car, saving points of interest, and offline storage on Android. Nice-to-haves would include restaurant review integration, etc. Looks like Nokia (HERE.com) and Mapquest, plus a few OSM spinoffs, are the leading contenders here.
  • Email is easily enough found elsewhere, and I’ve never used Gmail much anyhow.

Anybody else moving off Google?