
Silent Data Corruption Is Real

Here’s something you never want to see:

ZFS has detected a checksum error:

   eid: 138
 class: checksum
  host: alexandria
  time: 2017-01-29 18:08:10-0600
 vtype: disk

This means there was a data error on the drive. But it’s worse than a typical data error — this is an error that the hardware did not detect. Unlike most filesystems, ZFS and btrfs write a checksum with every block of data (both data and metadata) written to the drive, and verify that checksum at read time. Most filesystems skip this because, in theory, the hardware should detect all errors. But in practice it doesn’t always, which can lead to silent data corruption. That’s why I use ZFS wherever I possibly can.

As I looked into this issue, I saw that ZFS repaired about 400KB of data. I thought, “well, that was unlucky” and just ignored it.

Then a week later, it happened again. Pretty soon, I noticed it happened every Sunday, and always to the same drive in my pool. Sunday also happens to be when the machine sees its highest I/O load, because that’s when a cron job runs zpool scrub. This operation forces ZFS to read and verify the checksums on every block of data on the drive, and is a nice way to guard against unreadable sectors in rarely-used data.
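For reference, the whole scrub setup is tiny. Here’s a sketch of what such a cron job and the follow-up check can look like; the pool name “tank” and the schedule are placeholders, not my actual configuration:

# /etc/cron.d/zfs-scrub (example): scrub the pool early every Sunday morning
0 2 * * 0  root  /sbin/zpool scrub tank

# Afterwards, zpool status reports what the scrub found and repaired
zpool status -v tank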

I finally swapped out the drive, but to my frustration, the new drive now exhibited the same issue. The SATA protocol does include a CRC32 checksum, so it seemed (to me, at least) that the problem was unlikely to be a cable or chassis issue. I suspected the motherboard.

It so happened I had a 9211-8i SAS card. I had purchased it off eBay a while back when I built the server, but could never get it to see the drives. I wound up not filling the server with as many drives as planned, so the on-board SATA did the trick. Until now.

As I poked at the 9211-8i, noticing that even its configuration utility didn’t see any devices, I finally started wondering if the SAS/SATA breakout cables were a problem. And sure enough – I realized I had a “reverse” cable and needed a “forward” one. $14 later, I had the correct cable and things are working properly now.

One other note: RAM errors can sometimes cause issues like this, but this system uses ECC DRAM and the errors would be unlikely to always manifest themselves on a particular drive.

So over the course of this, had I not been using ZFS, I would have had several megabytes of reads with undetected errors. Thanks to using ZFS, I know my data integrity is still good.

Two Boys, An Airplane, Plus Hundreds of Old Computers

“Was there anything you didn’t like about our trip?”

Jacob’s answer: “That we had to leave so soon!”

That’s always a good sign.

When I first heard about the Vintage Computer Festival Midwest, I almost immediately got the notion that I wanted to go. Besides the TRS-80 CoCo II up in my attic, I also have fond memories of an old IBM PC with CGA monitor, a 25MHz 486, an Alpha also in my attic, and a lot of other computers along the way. I didn’t really think my boys would be interested.

But I mentioned it to them, and they just lit up. They remembered the YouTube videos I’d shown them of old line printers and punch card readers, and thought it would be great fun. I thought it could be a great educational experience for them too — and it was.

It also turned into a trip that combined being a proud dad with so many of my other interests. Quite a fun time.

(Photo: Jacob modeling his new t-shirt)

Captain Jacob

Chicago being not all that close to Kansas, I planned to fly us there. If you’re flying yourself, solid flight planning is always important. I had already planned out my flight using electronic tools, but I always carry paper maps with me in the cockpit for backup. I got them out and the boys and I planned out the flight the old-fashioned way.

Here’s Oliver using a scale ruler (with markings for miles corresponding to the scale of the map) and Jacob doing the calculating for us. We measured the entire route and came to within one mile of the computer’s calculation for each segment — those boys are precise!

(photo)

We figured out how much fuel we’d use, where we’d make fuel stops, etc.

The day of our flight, we made it as far as Davenport, Iowa when a chance of bad weather en route to Chicago convinced me to land there and drive the rest of the way. The boys saw that as part of the exciting adventure!

Jacob is always interested in maps, and had kept wanting to use my map whenever we flew. So I dug an old Android tablet out of the attic, put Avare on it (which has aviation maps), and let him use that. He was always checking it while flying, sometimes saying this over his headset: “DING. Attention all passengers, this is Captain Jacob speaking. We are now 45 miles from St. Joseph. Our altitude is 6514 feet. Our speed is 115 knots. We will be on the ground shortly. Thank you. DING”

Here he is at the Davenport airport, still busy looking at his maps:

(photo)

Every little airport we stopped at featured adults smiling at the boys. People enjoyed watching a dad and his kids flying somewhere together.

Oliver kept busy too. He loves to help me on my pre-flight inspections. He will report every little thing to me – a scratch, a fleck of paint missing on a wheel cover, etc. He takes it seriously. Both boys love to help get the plane ready or put it away.

The Computers

Jacob quickly gravitated towards a few interesting things. He sat for about half an hour watching this old Commodore plotter do its thing (click for video):

(video)

His other favorite thing was the phones. Several people had brought complete analog PBXs with them. They used them to demonstrate various old phone-related hardware; one had several BBSs running with actual modems, another had old answering machines and home-security devices. Jacob learned a lot about phones, including how to operate a rotary-dial phone, which he’d never used before!

(photo)

Oliver was drawn more to the old computers. He was fascinated by the IBM PC XT, which I explained was just about like a model I used to get to use sometimes. They learned about floppy disks and how computers store information.

(photo)

He hadn’t used joysticks much, and found Pong (“this is a soccer game!”) interesting. Somebody had also replaced the guts of a TRS-80 with a Raspberry Pi running a SNES emulator, which thoroughly confused me for a little while and excited Oliver.

Jacob enjoyed an old TRS-80, which, through a modern Ethernet interface and a little computing help in AWS, provided an interface to Wikipedia. Jacob figured out the text-mode interface quickly. Here he is reading up on trains.

(photo)

I had no idea that Commodore made a lot of adding machines and calculators before they got into the home computer business. There was a vast table with that older Commodore hardware, too much to get on a single photo. But some of the adding machines had their covers off, so the boys got to see all the little gears and wheels and learn how an adding machine can do its printing.

(photo)

And then we get to my favorite: the big iron. Here is a VAX — a working VAX. When you have a computer that huge, it’s easier for the kids to understand just what something is.

(photo)

When we encountered the table from the Glenside Color Computer Club, featuring the good old CoCo IIs like what I used as a kid (and have up in my attic), I pointed out to the boys that “we have a computer just like this that can do these things” — and they responded “wow!” I think they are eager to try out floppy disks and disk BASIC now.

Some of my favorites were the old Unix systems, which are a direct ancestor to what I’ve been working with for decades now. Here’s AT&T System V release 3 running on its original hardware:

(photo)

And there were a couple of Sun workstations there, making me nostalgic for my college days. If memory serves, this one is actually running on m68k, from the pre-SPARC days:

(photo)

Returning home

After all the excitement of the weekend, both boys zonked out for a while on the flight back home. Here’s Jacob, sleeping with his maps still up.

(photo)

As we were nearly home, we hit a pocket of turbulence, the kind that feels as if the plane is dropping a bit (it’s perfectly normal and safe; you’ve probably felt that on commercial flights too). I was a bit concerned about Oliver; he is known to get motion sick in cars (and even planes sometimes). But what did I hear from Oliver?

“Whee! That was fun! It felt like a roller coaster! Do it again, dad!”

Easily Improving Linux Security with Two-Factor Authentication

2-Factor Authentication (2FA) is a simple way to help improve the security of your systems. It restricts the scope of damage if a machine is compromised. If, for instance, you have a security token or authenticator app on your phone that is required for ssh to a remote machine, then even if every laptop you use to connect to the remote is totally owned, an attacker cannot establish a new ssh session on their own.

There are a lot of tutorials out there on the Internet that get you about halfway there, so here is some more detail.

Background

In this article, I will be focusing on authentication in the style of Google Authenticator, which is a special case of OATH HOTP or TOTP. You can use the Google Authenticator app, FreeOTP, or a hardware token like a Yubikey to generate these tokens. They are all 100% compatible with Google Authenticator and libpam-google-authenticator.

The basic idea is that there is a pre-shared secret key. At each login, a different and unique token is required, which is generated from the pre-shared secret key and some other information. With TOTP, the “other information” is the current time, implying that both machines must be reasonably well in sync time-wise. With HOTP, the “other information” is a count of the number of times the pre-shared key has been used. Both typically allow a “window” on the server side, so that tokens within a certain number of seconds, or a certain number of login attempts, still work.

The beauty of this system is that after the initial setup, no Internet access is required on either end to validate the key (though TOTP requires both ends to be reasonably in sync time-wise).

The basics: user account setup and ssh authentication

You can start with the basics by reading one of these articles: one, two, three. Debian/Ubuntu users will find both the PAM module and the user account setup binary in libpam-google-authenticator.
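For the Debian case, the per-user setup is roughly the following; the flags shown are just common choices, so treat this as a sketch rather than a prescription:

# Install the PAM module and the setup tool (Debian/Ubuntu)
apt-get install libpam-google-authenticator

# Run as the user who will be logging in; this writes ~/.google_authenticator
# -t: TOTP   -d: disallow token reuse   -w 3: accept a window of 3 codes
# -r 3 -R 30: rate-limit to 3 attempts per 30 seconds
google-authenticator -t -d -w 3 -r 3 -R 30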

For many, you can stop there. You’re done. But if you want to kick it up a notch, read on:

Enhancement 1: Requiring 2FA even when ssh public key auth is used

Let’s consider a scenario in which your system is completely compromised. Unless your ssh keys are also stored in something like a Yubikey Neo, they could wind up being compromised as well – if someone can read your files and sniff your keyboard, your ssh private keys are at risk.

So we can configure ssh and PAM so that an OTP token is required even for this scenario.

First off, in /etc/ssh/sshd_config, we want to change or add these lines:

UsePAM yes
ChallengeResponseAuthentication yes
AuthenticationMethods publickey,keyboard-interactive

This forces all authentication to pass two verification methods in ssh: publickey and keyboard-interactive. All users will have to supply a public key and then also pass keyboard-interactive auth. Normally keyboard-interactive auth prompts for a password, but we can change that in /etc/pam.d/sshd. I added this line at the very top of /etc/pam.d/sshd:

auth [success=done new_authtok_reqd=done ignore=ignore default=bad] pam_google_authenticator.so

This basically makes Google Authenticator both necessary and sufficient for keyboard-interactive in ssh. That is, whenever the system wants to use keyboard-interactive, rather than prompt for a password, it instead prompts for a token. Note that any user that has not set up google-authenticator already will be completely unable to ssh into their account.

Enhancement 1, variant 1: Allowing automated processes to root

On many of my systems, I have ~root/.ssh/authorized_keys set up to permit certain systems to run locked-down commands for things like backups. These are automated commands, and the above configuration will break them because I’m not going to be typing in codes at 3AM.

If you are very restrictive about what you put in root’s authorized_keys, you can exempt the root user from the 2FA requirement in ssh by adding this to sshd_config:

Match User root
  AuthenticationMethods publickey

This says that the only way to access the root account via ssh is to use the authorized_keys file, and no 2FA will be required in this scenario.

Enhancement 1, variant 2: Allowing non-pubkey auth

On some multiuser systems, some users may still want to use password auth rather than publickey auth. There are a few ways we can support that:

  1. Users without public keys will have to supply an OTP and a password, while users with public keys will have to supply a public key, an OTP, and a password
  2. Users without public keys will have to supply an OTP or a password, while users with public keys will have to supply a public key plus either an OTP or a password
  3. Users without public keys will have to supply an OTP and a password, while users with public keys only need to supply the public key

The third option is covered in any number of third-party tutorials. To enable options 1 or 2, you’ll need to put this in sshd_config:

AuthenticationMethods publickey,keyboard-interactive keyboard-interactive

This means that to authenticate, you need to pass either publickey and then keyboard-interactive auth, or just keyboard-interactive auth.

Then in /etc/pam.d/sshd, you put this:

auth required pam_google_authenticator.so

As a sub-variant for option 1, you can add nullok here to permit auth from people who do not have a Google Authenticator configuration.

Or for option 2, change “required” to “sufficient”. You should not add nullok in combination with sufficient, because that could let people without a Google Authenticator config authenticate completely without a password at all.

Enhancement 2: Configuring su

A lot of other tutorials stop with ssh (and maybe gdm) but forget about the other ways we authenticate or change users on a system. su and sudo are the two most important ones. If your root password is compromised, you don’t want anybody to be able to su to that account without having to supply a token. So you can set up google-authenticator for root.

Then, edit /etc/pam.d/su and insert this line after the pam_rootok.so line:

auth       required     pam_google_authenticator.so nullok

The reason you put this after pam_rootok.so is because you want to be able to su from root to any account without having to input a token. We add nullok to the end of this, because you may want to su to accounts that don’t have tokens. Just make sure to configure tokens for the root account first.

Enhancement 3: Configuring sudo

This one is similar to su, but a little different. This lets you, say, secure the root password for sudo.

Normally, you might sudo from your user account to root (if so configured). You might have sudo configured to require you to enter in your own password (rather than root’s), or to just permit you to do whatever you want as root without a password.

Our first step, as always, is to configure PAM. What we do here depends on your desired behavior: do you want to require someone to supply a password and a token, a password or a token, or just a token? If you want to require just a token, put this at the top of /etc/pam.d/sudo:

auth [success=done new_authtok_reqd=done ignore=ignore default=bad] pam_google_authenticator.so

If you want to require a token and a password, change the bracketed string to “required”, and if you want a token or a password, change it to “sufficient”. As before, if you want to permit people without a configured token to proceed, add “nullok”, but do not use that with “sufficient” or the bracketed example here.

Now here comes the fun part. By default, if a user is required to supply a password to sudo, they are required to supply their own password. That does not help us here, because a user logged in to the system can read the ~/.google_authenticator file and then easily supply tokens for themselves. What you want to do is require them to supply root’s password. Here’s how I set that up in sudoers:

Defaults:jgoerzen rootpw
jgoerzen ALL=(ALL) ALL

So now, with the combination of this and the PAM configuration above, I can sudo to the root user without knowing its password — but only if I can supply root’s token. Pretty slick, eh?

Further reading

In addition to the basic tutorials referenced above, consider:

Edit: additional comments

Here are a few other things to try:

First, the libpam-google-authenticator module supports putting the Google Authenticator files in different locations and having them owned by a certain user. You could use this to, for instance, lock down all secret keys to be readable only by the root user. This would prevent users from adding, changing, or removing their own auth tokens, but would also let you do things such as reusing your personal token for the root account without a problem.
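I won’t show a full setup here because the details depend on your layout, but the relevant knob is the module’s secret= option (plus its ownership-related options, which I’ll leave to its documentation). A sketch, with a made-up directory:

# Hypothetical: read each user's secret from a central, root-owned location
# instead of ~/.google_authenticator; ${USER} is expanded by the module.
auth [success=done new_authtok_reqd=done ignore=ignore default=bad] pam_google_authenticator.so secret=/var/lib/otp/${USER}/.google_authenticator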

Also, the pam-oath module does much the same thing as the libpam-google-authenticator module, but without some of the setup help. It uses a single monolithic root-owned password file for all accounts.

There is an oathtool that can be used to generate authentication codes from the command line.
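oathtool comes from the OATH Toolkit project; a quick sketch of using it, with a made-up base32 secret:

# HOTP (counter-based): generate the code for counter value 5
oathtool -b -c 5 GEZDGNBVGY3TQOJQ

# TOTP (time-based): generate the code for the current 30-second window
oathtool -b --totp GEZDGNBVGY3TQOJQ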

Building a home firewall: review of pfsense

For some time now, I’ve been running OpenWRT on an RT-N66U device. I initially set that up because I had previously been using my Debian-based file/VM server as a firewall, and this had some downsides: every time I wanted to reboot that server, Internet for the whole house was down; shorewall took a fair bit of care and feeding; etc.

I’ve been having indications that all is not well with OpenWRT or the N66U in the last few days, and some long-term annoyances prompted me to search out a different solution. I figured I could buy an embedded x86 device, slap Debian on it, and be set.

The device I wound up purchasing happened to have pfsense preinstalled, so I thought I’d give it a try.

As expected, with hardware like that to work with, it was a lot more capable than OpenWRT and had more features. However, I encountered a number of surprising issues.

The biggest annoyance was that the system wouldn’t allow me to set up a static DHCP entry with the same IP for multiple MAC addresses. This is a very simple configuration in the underlying DHCP server, and OpenWRT permitted it without issue. It is quite useful so my laptop has the same IP whether connected by wifi or Ethernet, and I have used it for years with no issue. Googling it a bit turned up some rather arrogant pfsense people saying that this is “broken” and poor design, and that your wired and wireless networks should be on different VLANs anyhow. They also said “just give it the same hostname for the different IPs” — but it rejects this too. Sigh. I discovered, however, that downloading the pfsense backup XML file, editing the IP within, and re-uploading it gets me what I want with no ill effects!

So then I went to set up DNS. I tried to enable the “DNS Forwarder”, but it wouldn’t let me do that while the “DNS Resolver” was still active. Digging in just a bit, it appears that the DNS Forwarder and DNS Resolver both provide forwarding and resolution features; they just have different underlying implementations. This is not clear at all in the interface.

Next stop: traffic shaping. Since I use VOIP for work, this is vitally important for me. I dove in, and found a list of XML filenames for wizards: one for “Dedicated Links” and another for “Multiple Lan/Wan”. Hmmm. Some Googling again turned up that everyone suggests using the “Multiple Lan/Wan” wizard. Fine. I set it up, and notice that when I start an upload, my download performance absolutely tanks. Some investigation shows that outbound ACKs aren’t being handled properly. The wizard had created a qACK queue, but neglected to create a packet match rule for it, so ACKs were not being dealt with appropriately. Fixed that with a rule of my own design, and now downloads are working better again. I also needed to boost the bandwidth allocated to qACK (setting it to 25% seemed to do the trick).

Then there were the firewall rules. The “interface” section is first-match-wins, whereas the “floating” section is last-match-wins. This is rather non-obvious.

Getting past all the interface glitches, however, the system looks powerful, solid, and well-engineered under the hood, and fairly easy to manage.

I’m switching from git-annex to Syncthing

I wrote recently about using git-annex for encrypted sync, but due to a number of issues with it, I’ve opted to switch to Syncthing.

I’d been using git-annex with real but noncritical data. Among the first issues I noticed was occasional but persistent high CPU usage spikes, which once started, would persist apparently forever. I had an issue where git-annex tried to replace files I’d removed from its repo with broken symlinks, but the real final straw was a number of issues with the gcrypt remote repos. git-remote-gcrypt appears to have a number of issues with possible race conditions on the remote, and at least one of them somehow caused encrypted data to appear in a packfile on a remote repo. Why there was data in a packfile there, I don’t know, since git-annex is supposed to keep the data out of packfiles.

Anyhow, git-annex is still an awesome tool with a lot of use cases, but I’m concluding that live sync to an encrypted git remote isn’t quite there yet for me.

So I looked for alternatives. My main criteria were supporting live sync (via inotify or similar) and not requiring the files to be stored unencrypted on a remote system (my local systems all use LUKS). I found Syncthing met these requirements.

Syncthing is pretty interesting in that, like git-annex, it doesn’t require a centralized server at all. Rather, it basically forms a mesh between your devices. Its concept is somewhat similar to the proprietary BitTorrent Sync — basically, all the nodes communicate about what files and chunks of files they have, and about the changes that are made, and propagate them as quickly as possible. Unlike, say, Dropbox or Owncloud, Syncthing can actually support simultaneous downloads from multiple remotes for optimum performance when there are many changes.

Combined with syncthing-inotify or syncthing-gtk, it has immediate detection of changes and therefore very quick propagation of them.

Syncthing is particularly adept at figuring out ways for the nodes to communicate with each other. It begins by broadcasting on the local network, so known nearby nodes can be found directly. The Syncthing folks also run a discovery server (though you can use your own if you prefer) that lets nodes find each other on the Internet. Syncthing will attempt to use UPnP to configure firewalls to let it out, but if that fails, the last resort is a traffic relay server — again, a number of volunteers host these online, but you can run your own if you prefer.

Each node in Syncthing has an RSA keypair, and what amounts to part of the public key is used as a globally unique node ID. The initial link between nodes is accomplished by pasting the globally unique ID from one node into the “add node” screen on the other; the user of the first node then must accept the request, and from that point on, syncing can proceed. The data is all transmitted encrypted, of course, so interception will not cause data to be revealed.

Really my only complaint about Syncthing so far is that, although it binds to localhost, the web GUI does not require authentication by default.

There is an ITP open for Syncthing in Debian, but until then, their apt repo works fine. For syncthing-gtk, the trusty version of the webupd8 PPA works in Jessie (though be sure to pin it to a low priority if you don’t want it replacing some unrelated Debian packages).
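If you haven’t done apt pinning before, a minimal sketch of that low-priority pin might look like this; the filename is arbitrary, and I’m assuming the PPA is served from ppa.launchpad.net:

# /etc/apt/preferences.d/webupd8-ppa (example)
# Priority 100 keeps the PPA from replacing or upgrading anything already
# installed from Debian, while still letting you install packages (such as
# syncthing-gtk) that exist only in the PPA.
Package: *
Pin: origin "ppa.launchpad.net"
Pin-Priority: 100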

Mud, Airplanes, Arduino, and Fun

The last few weeks have been pretty hectic in their way, but I’ve also had the chance to take some time off work to spend with family, which has been nice.

Memorial Day: breakfast and mud

For Memorial Day, I decided it would be nice to have a cookout for breakfast rather than for dinner. So we all went out to the fire ring. Jacob and Oliver helped gather kindling for the fire, while Laura chopped up some vegetables. Once we got a good fire going, I cooked some scrambled eggs in a cast iron skillet, mixed with meat and veggies. Mmm, that was tasty.

Then we all just lingered outside. Jacob and Oliver enjoyed playing with the cats, and the swingset, and then…. water. They put the hose over the slide and made a “water slide” (more mud slide maybe).

(photo)

Then we got out the water balloon fillers they had gotten recently, and they loved filling up water balloons. All in all, we all just enjoyed the outdoors for hours.

(video)

Flying to Petit Jean, Arkansas

Somehow, neither Laura nor I have ever really been to Arkansas. We figured it was about time. I had heard wonderful things about Petit Jean State Park from other pilots: it’s rather unique in that it has a small airport right in the park, a feature left over from when Winthrop Rockefeller owned much of the mountain.

And what a beautiful place it was! Dense forests with wonderful hiking trails, dotted with small streams, bubbling springs, and waterfalls all over; a nice lake, and a beautiful lodge to boot. Here was our view down into the valley at breakfast in the lodge one morning:

(photo)

And here’s a view of one of the trails:

(photo)

The sunset views were pretty nice, too:

(photo)

And finally, the plane we flew out in, parked all by itself on the ramp:

(photo)

It was truly a relaxing, peaceful, re-invigorating place.

Flying to Atchison

Last weekend, Laura and I decided to fly to Atchison, KS. Atchison is one of the oldest cities in Kansas, and has quite a bit of history to show off. It was fun landing at the Amelia Earhart Memorial Airport in a little Cessna, and then going to three museums and finding lunch too.

Of course, there is the Amelia Earhart Birthplace Museum, which is a beautifully-maintained old house along the banks of the Missouri River.

(photo)

I was amused to find this hanging in the county historical society museum:

(photo)

One fascinating find is a Regina Music Box, popular in the late 1800s and early 1900s. It operates on the same principles as the cylindrical music boxes you might have seen, but uses a flat disc instead. I am particularly impressed with the effort that would have gone into developing these discs in the pre-computer era, as of course the holes at the outer edge of the disc move faster than the inner ones. It would certainly take a lot of careful calculation to produce one of these. I found this one in the Cray House Museum:

(video)

An Arduino Project with Jacob

One day, Jacob and I got going with an Arduino project. He wanted flashing blue lights for his “police station”, so we disassembled our previous Arduino project, put a few things on the breadboard, I wrote some code, and there we go. Then he noticed an LCD in my Arduino kit. I hadn’t ever gotten around to using it yet, and of course he wanted it immediately. So I looked up how to connect it, found an API reference, and dusted off my C skills (that was fun!) to program a scrolling message on it. Here is Jacob showing it off:

(video)

Count me as a systemd convert

Back in 2014, I wrote about some negative first impressions of systemd. I also had a plea to debian-project to end all the flaming, pointing out that “jessie will still boot”, noting that my preference was for sysvinit but things are what they are and it wasn’t that big of a deal.

Although I still have serious misgivings about the systemd upstream’s attitude, I’ve got to say I find the system rather refreshing and useful in practice.

Here’s an example. I was debugging the boot on a server recently. It mounts a bunch of NFS filesystems and runs a third-party daemon that is started from an old-style /etc/init.d script.

We had a situation where the NFS filesystems the daemon required didn’t mount on boot. The daemon was then started anyway, and unfortunately it basically does a mkdir -p on startup, so it created the missing mount points as plain directories and started processing requests against them, with bad results.

So there were two questions: why did the NFS filesystems fail to start, and how could we make sure the daemon wouldn’t start without them mounted? For the first, journalctl -xb was immensely helpful. It logged the status of each individual mount, and it turned out that it looked like a modprobe or kernel race condition when a bunch of NFS mounts were kicked off in parallel and all tried to load the nfsv4 module at the same time. That was easy enough to work around by adding nfsv4 to /etc/modules. Now for the other question: refusing to start the daemon if the filesystems weren’t there.

With systemd, this was actually trivial. I created /etc/systemd/system/mydaemon.service.requires (I’ll call the service “mydaemon” here), and in it I created a symlink to /lib/systemd/system/remote-fs.target. Then systemctl daemon-reload, and boom, done. systemctl list-dependencies mydaemon will even show the dependency tree, with color-coded status for each item on it, and will actually show every single filesystem that remote-fs requires and its status, all in one command. Super handy.
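Spelled out as shell commands, that fix amounts to this (with “mydaemon” standing in for the real service name, as above):

# Make remote-fs.target (i.e. all remote filesystems) a requirement of mydaemon
mkdir -p /etc/systemd/system/mydaemon.service.requires
ln -s /lib/systemd/system/remote-fs.target /etc/systemd/system/mydaemon.service.requires/

# Pick up the change and inspect the resulting dependency tree
systemctl daemon-reload
systemctl list-dependencies mydaemon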

In a non-systemd environment, I’d probably be modifying the init script and doing a bunch of manual scripting to check the filesystems. Here, one symlink and one command did it, and I get tools to inspect the status of the mydaemon prerequisites for free.

I’ve got to say, as someone that has occasionally had to troubleshoot boot ordering and update-rc.d symlink hell, troubleshooting this stuff in systemd is considerably easier and the toolset is more powerful. Yes, it has its set of poorly-documented complexity, but then so did sysvinit.

I never thought the “world is falling” folks were right, but by now I can be counted among those who feel that systemd has matured to the point where it truly is superior to sysvinit. Yes, in 2014 it had some bugs, but here in 2016 it looks pretty darn good and I feel like Debian’s decision has been validated through my actual experience with it.

Bach, Dot Matrix Printers, and Dinner

Dinner last night started out all normal. Then Jacob and Oliver started asking me about printers. First they wanted to know how an ink jet printer works. Then they wanted to know how a laser printer works. Then they wanted to know what would happen if you’d put ink in a laser printer or toner in an ink jet. They were fascinated as I described the various kinds of clogging and ruining that would inevitably occur.

Then these words: “What other kinds of printers are there?”

So our dinner conversation started to revolve around printers. I talked about daisy wheel printers, line printers, dot matrix printers. I explained the type chain of line printers, the pins of dot matrix. “More printers!” I had to dig deeper into my memory: wax transfer printers, thermal printers, dye sublimation, always describing a bit about how each one worked — except for dye sublimation, which I couldn’t remember many details about. “More printers!” So we went onwards towards the printing press, offset printing, screen printing, mimeograph, and photocopiers. Although I could give them plenty of details about most of the printers, I also failed under their barrage of questions about offset printing. So I finally capitulated, and said “should I go get my phone and look it up while you finish eating?” “YEAH!”

So I looked up the misty details of dye sublimation and offset printing and described how they worked. That seemed to satisfy them. Then they asked me what my favorite kind of printer was. I said “dot matrix, because it makes the best sound.” That had their attention. They stopped eating to ask the vitally important question: “Dad, what sound does it make?” At this point, I did my best dot matrix impression at the dinner table, to much laughter and delight.

Before long, they wanted to see videos of dot matrix printers. They were fascinated by them. And then I found this gem of a dot matrix printer playing a famous Bach tune, which fascinated me also:

I guess it must have all sunk in, because this morning before school Jacob all of a sudden begged to see the fuser in my laser printer. So we turned it around, opened up the back panel — to his obvious excitement — and then I pointed to the fuser, with its “hot” label. I even heard a breathy “wow” from him.

Hiking a mountain with Ian Murdock

“Would you like to hike a mountain?” That question caught me by surprise. It was early in 2000, and I had flown to Tucson for a job interview. Ian Murdock was starting a new company, Progeny, and I was being interviewed for their first hire.

“Well,” I thought, “hiking will be fun.” So we rode a bus or something to the top of the mountain and then hiked down. Our hike was full of — well, everything. Ian talked about Tucson and the mountains, about his time as the Debian project leader, about his college days. I asked about the plants and such we were walking past. We talked about the plans for Progeny, my background, how I might fit in. It was part interview, part hike, part two geeks chatting. Ian had no HR telling him “you can’t go hiking down a mountain with a job candidate,” as I’m sure HR would have. And I am glad of it, because even 16 years later, that is still by far the best time I ever had at a job interview, despite the fact that it ruined the only pair of shoes I had brought along — I had foolishly brought dress shoes for a, well, job interview.

I guess it worked, too, because I was hired. Ian wanted to start up the company in Indianapolis, so over the next little while there was the busy work of moving myself and setting up an office. I remember those early days – Ian and I went computer shopping at a local shop more than once to get the first workstations and servers for the company. Somehow he had found a deal on some office space in a high-rent office building. I still remember the puzzlement on the faces of the accountants and lawyers dressed up in suits, riding the elevators next to us in our shorts and sandals, or tie-dye.

Progeny’s story was to be a complicated one. We set out to rock the world. We didn’t. We didn’t set out to make lasting friendships, but we often did. We set out to accomplish great things, and we did some of that, too.

We experienced a full range of emotions there — elation when we got hardware auto-detection working well or when our downloads looked very popular, despair when our funding didn’t come through as we had hoped, being lost when our strategy had to change multiple times. And, as is the case everywhere, none of us were perfect.

I still remember the excitement after we published our first release on the Internet. Our little server that could got pegged at 100Mbps of outbound bandwidth (that was something for a small company in those days). The moment must have meant something, because I still have the mrtg chart from that day on my computer, 15 years later.

Progeny's Bandwidth Chart

We made a good Linux distribution, an excellent Debian derivative, but commercial success did not flow from it. In the succeeding months, Ian and the company tried hard to find a strategy that would stick and make our big break. But that never happened. We had several rounds of layoffs when hoped-for funding never materialized. Ian eventually lost control of the company, and despite a few years of Itanium contract work after I left, it closed for good.

Looking back, Progeny was life — compressed. During the good times, we had joy, sense of accomplishment, a sense of purpose at doing something well that was worth doing. I had what was my dream job back then: working on Debian as I loved to do, making the world a better place through Free Software, and getting paid to do it. And during the bad times, different people at Progeny experienced anger, cynicism, apathy, sorrow for the loss of our friends or plans, or simply a feeling to soldier on. All of the emotions, good or bad, were warranted in their own way.

Bruce Byfield, one of my co-workers at Progeny, recently wrote a wonderful memoriam of Ian. He wrote, “More than anything, he wanted to repeat his accomplishment with Debian, and, naturally he wondered if he could live up to his own expectations of himself. That, I think, was Ian’s personal tragedy — that he had succeeded early in life, and nothing else he did with his life could quite measure up to his expectations and memories.”

Ian was not the only one to have some guilt over Progeny. I, for years, wondered if I should have done more for the company, could have saved things by doing something more, or different. But I always came back to the conclusion I had at the time: that there was nothing I could do — a terribly sad realization.

In the years since, I watched Ubuntu take the mantle of easy-to-install Debian derivative. I saw them reprise some of the ideas we had, and even some of our mistakes. But by that time, Progeny was so thoroughly forgotten that I doubt they even realized they were doing it.

I had long looked at our work at Progeny as a failure. Our main goal was never accomplished, our big product never sold many copies, our company eventually shuttered, our rock-the-world plan crumpled and forgotten. And by those traditional measurements, you could say it was a failure.

But I have come to learn in the years since that success is a lot more than those things. Success is also about finding meaning and purpose through our work. As a programmer, success is nailing that algorithm that lets the application scale 10x more than before, or solving that difficult problem. As a manager, success is helping team members thrive, watching pieces come together on projects that no one person could ever do themselves. And as a person, success comes from learning from our experiences, and especially our mistakes. As J. Michael Straczynski wrote in a Babylon 5 episode, loosely paraphrased: “Maybe this experience will be a good lesson. Too bad it was so painful, but there ain’t no other kind.”

The thing about Progeny is this – Ian built a group of people that wanted to change the world for the better. We gave it our all. And there’s nothing wrong with that.

Progeny did change the world. As we Progeny alumni have scattered around the country, we benefit from the lessons we learned there. And many of us were “different”, sort of out of place before Progeny, and there we found others that loved C compilers, bootloaders, and GPL licenses just as much as we did. We belonged, not just online but in life, and we went on to pull confidence and skill out of our experience at Progeny and use them in all sorts of ways over the years.

And so did Ian. Who could have imagined the founder of Debian and Progeny would one day lead the cause of an old-guard Unix turning Open Source? I run ZFS on my Debian system today, and Ian is partly responsible for that — and his time at Progeny is too.

So I can remember Ian, and Progeny, as a success. And I leave you with a photo of my best memento from the time there: an original unopened boxed copy of Progeny Linux.

(photo)

Memories of a printer

I have a friend who hates printers. I’ll call him “Mark”, because that, incidentally, is his name. His hatred for printers is partly my fault, but that is, ahem, a story for another time that involves him returning from a battle with a printer with a combination of weld dust, toner, and a deep scowl on his face.

I also tend to hate printers. Driver issues, crinkled paper, toner spilling all over the place…. everybody hates printers.

But there is exactly one printer that I have never hated. It’s almost 20 years old, and has some stories to tell.

Nearly 20 years ago, I was about to move out of my parents’ house, and I needed a printer. I bought an HP LaserJet 6MP, a PostScript laser printer. This printer ought to have been made by Nokia. It’s still running fine, 18 years later. It turned out to be one of the best investments in computing equipment I’ve ever made. Its operating costs, by now, are cheaper than just about any printer you can buy today — less than one cent per page. It has been supported by every major operating system for years.

PostScript was important, because back then running Ghostscript to convert to PCL was both slow and a little error-prone. PostScript meant I didn’t need a finicky lpr/lprng driver on my Linux workstation to print. It just… printed. (Hat tip to anyone else that remembers the trial and error of constructing an /etc/printcap that would print both ASCII and PostScript files correctly!)

Out of this printer have come plane and train tickets, taking me across the country to visit family and across the world to visit friends. It’s printed resumes and recipes, music and university papers. I even printed wedding invitations and envelopes on it two years ago, painstakingly typeset in LaTeX and TeXmacs. I remember standing at the printer in the basement one evening, feeding envelope after envelope into the manual feed slot. (OK, so it did choke on a couple of envelopes, but overall it all worked great.)

The problem, though, is that it needs a parallel port. I haven’t had a PC with one of those in a long while. A few years ago, in a moment of foresight, I bought a little converter box that has an Ethernet port and a parallel port, with the idea that it would pay for itself by letting me not maintain some old PC just to print. Well, it did, but now the converter box is dying! And they don’t make them anymore. So I finally threw in the towel and bought a new LaserJet.

It cost a third of what the 6MP did, has a copier, scanner, prints in color, does duplexing, has wifi… and, yes, still supports PostScript — strangely enough, a deciding factor in going with HP over Brother once again. (The other was image quality.)

We shall see if I am still using it when I’m 50.