Category Archives: Technology

The Yellow House Phone Company (Featuring Asterisk and an 11-year-old)

“Well Jacob, do you think we should set up our own pretend phone company in the house?”

“We can DO THAT?”

“Yes!”

“Then… yes. Yes! YES YES YESYESYESYES YES! Let’s do it, dad!”

Not long ago, my parents dug up the old phone I used back in the day. We still have a landline, and Jacob was having fun discovering how an analog phone works. I told him about the special number he could call to get the time and temperature read out to him. He discovered what happens if you call your own number and hang up. He figured out how to play “Mary Had a Little Lamb” using touchtone keys (after a slightly concerned lecture from me setting out some rules to make sure his “musical dialing” wouldn’t result in any, well, dialing).

He was hooked. So I thought that taking it to the next level would be a good thing for a rainy day. I have run Asterisk before, though I had unfortunately gotten rid of most of my equipment some time back. But I found a great deal on a Cisco 186 ATA (Analog Telephone Adapter). It has two FXS lines (FXS ports simulate the phone company, and provide dialtone and ring voltage to a connected phone), and of course hooks up to the LAN.

We plugged that in, and Jacob was amazed to see its web interface come up. I had to figure out how to configure it (unfortunately, it uses SCCP rather than SIP, and figuring out Asterisk’s chan_skinny took some doing, but we got there.)

I set up voicemail. He loved it. He promptly figured out how to record his own greetings. We set up a second phone on the other line, so he could call between them. The cordless phones in our house support SIP, so I configured one of them as a third line. He spent a long time leaving himself messages.


Pretty soon we both started having ideas. I set up extension 777, where he could call for the time. Then he wanted a way to get the weather forecast. Well, weather-util generates a text-based report. With it, a little sed and grep tweaking, the espeak TTS engine, and a little help from sox, I had a shell script worked up that would read back a forecast whenever he called a certain extension. He was super excited! “That’s great, dad! Can it also read weather alerts too?” Sure! weather-util has a nice option just for that. Both boys cackled as the system tried to read out the NWS header (their timestamps like 201711031258 started with “two hundred one billion…”)
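For the curious, the whole thing was only a few lines of shell. Here’s a simplified sketch of the pipeline; the station code, text munging, and output path are illustrative rather than the exact script:

#!/bin/sh
# Fetch the forecast, clean it up, synthesize speech with espeak, and
# convert it to a GSM file that Asterisk can play back.
weather --forecast KICT \
  | grep -v '^ *$' \
  | sed -e 's/\.\.\./, /g' \
  | espeak --stdout \
  | sox -t wav - -r 8000 -c 1 /var/lib/asterisk/sounds/custom/forecast.gsm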

Then I found an online source for streaming NOAA Weather Radio feeds – Jacob enjoys listening to weather radio – and I set up another extension he could call to listen to that. More delight!

But it really took off when I asked him, “Would you like to record your own menu?” “You mean those things where it says press 1 or 2 for this or that?” “Yes.” “WE CAN DO THAT?” “Oh yes!” “YES, LET’S DO IT RIGHT NOW!”

So he recorded a menu, then came and hovered by me while I hacked up extensions.conf, then eagerly went back to the phone to try it. Oh, the excitement of hearing his own voice, and finding that it worked! Pretty soon he was designing sub-menus (“OK Dad, so we’ll set it up so people can press 2 for the weather, and then choose if they want weather radio or the weather report. I’m recording that now. Got it?”)
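To give a flavor of it, a menu like his in extensions.conf looks something like this (a simplified sketch; the context names, extension numbers, and recording names here are made up, and ours differed):

[mainmenu]
exten => s,1,Answer()
 same => n,Background(custom/jacob-main-menu)  ; play his recorded greeting, listen for a digit
 same => n,WaitExten(10)
exten => 2,1,Goto(weathermenu,s,1)             ; “press 2 for the weather”

[weathermenu]
exten => s,1,Background(custom/jacob-weather-menu)
 same => n,WaitExten(10)
exten => 1,1,Playback(custom/forecast)         ; the generated forecast recording
exten => 2,1,Goto(weatherradio,s,1)            ; the NOAA weather radio stream, defined elsewhere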

He has informed me that next Saturday we will build an intercom system “like we have at school.” I’m going to have to have some ideas on how to tie Squeezebox in with Asterisk to make that happen, I think. Maybe this will do.

Switching to xmonad + Gnome – and ditching a Mac

I have been using XFCE with xmonad for years now. I’m not sure exactly how many, but at least 6 years, if not closer to 10. Today I threw in the towel and switched to Gnome.

More recently, at a new job, I was given a MacBook Pro. I wasn’t entirely sure what to think of this, but I thought I’d give it a try. I found MacOS to be extremely frustrating and confining. It had no real support for a tiling window manager, and although projects like amethyst tried to approximate what xmonad can do on Linux, they were just too limited by the platform and were clunky. Moreover, the entire UI was surprisingly sluggish; maybe that was an induced effect from animations, but I don’t think that explains it. A Debian stretch install, even on inferior hardware, was snappy in a way that MacOS never was. The strange use of Command instead of Control for things, combined with the overall lack of configurability of keybindings, meant that I was always going to be fighting muscle memory moving from one platform to another. Not only that, but being back in the world of a Free Software OS means a lot. So I have requested to swap for a laptop that will run Debian.

Now then, back to the xmonad and XFCE situation. XFCE once worked very well with xmonad. Over the years, this got more challenging. Around the jessie (XFCE 4.10) timeframe, I had to be very careful about when I would let it save my session, because it would easily break. With stretch, I had to write custom scripts because the panel wouldn’t show up properly, and even some application icons would be invisible, if things were started in a certain order. This took much trial and error and was still cumbersome.

Gnome 3, with its tightly-coupled Gnome Shell, has never been compatible with other window managers — at least not directly. One could always use MATE with xmonad — but a lot of people that run XFCE tend to have some Gnome 3 apps (for instance, evince) anyhow. Cinnamon wouldn’t work with xmonad either, because it too is a tightly-coupled shell, just like Gnome Shell. And then today I discovered gnome-flashback. gnome-flashback is a Gnome 3 environment that uses the traditional X approach with a separate window manager (metacity of yore, by default). Sweet.

It turns out that Debian’s xmonad has built-in support for it. If you know the secret: apt-get install gnome-session-flashback. (OK, it’s not so secret; it’s even in xmonad’s README.Debian these days.) Install that, plus gnome and gdm3, and things are nice. Configure xmonad with GNOME support and poof – goodness right out of the box, selectable from the gdm sessions list.
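For reference, the xmonad side of this is tiny. Something like the following in ~/.xmonad/xmonad.hs does it, assuming the xmonad-contrib library (libghc-xmonad-contrib-dev in Debian) is installed; your own bindings and layouts would layer on top of this:

import XMonad
import XMonad.Config.Gnome

-- gnomeConfig takes care of the EWMH hints and session registration
-- that a Gnome session expects from its window manager.
main :: IO ()
main = xmonad gnomeConfig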

I still have some gripes about Gnome’s configurability (or lack thereof). But I’ve got to say: This environment is the first one I’ve ever used that got external display switching very nearly right without any configuration, and I include MacOS in that. Plug in an external display, and poof – it’s configured and set up. You can hit a toggle key (Windows+P by default) to change the configurations, or use the Display section in gnome-control-center. Unplug it, and it instantly reconfigures itself to put everything back on the laptop screen. Yessss! I used to have scripts to do this in the wheezy/jessie days. XFCE in stretch had numerous annoying failures in this area which rendered the internal display completely dark until the next reboot – very frustrating. With Gnome, it just works. And, even if you have “suspend on lid closed” turned on, if the system is powered up and hooked up to an external display, it will keep running even if the lid is closed, figuring you must be using it on the external screen. Another thing the Mac wouldn’t do well.

All in all, some pretty good stuff here. I continue to be impressed by stretch. It is darn impressive to put this OS on generic hardware and have it outshine the closed-ecosystem Mac!

The Joy of Exploring: Old Phone Systems, Pizza, and Discovery

This story involves boys pretending to be pizza deliverymen using a working automated Strowger telephone exchange demonstrator on display in a museum; the demonstrator is very old and is, to my knowledge, the only such working exhibit in the world. (Yes, I have video.) But first, a thought on exploration.

There are those who would say that there is nothing left to explore anymore – that the whole earth is mapped, photographed by satellites, and, well, known.

I prefer to look at it a different way: the earth is full of places that billions of people will never see, and probably don’t even know about. Those places may be quiet country creeks, peaceful neighborhoods one block away from major tourist attractions, an MTA museum in Brooklyn, a state park in Arkansas, or a beautiful church in Germany.

Martha is not yet two months old, and last week she and I spent a surprisingly long time just gazing at tree branches — she was mesmerized, and why not, because to her, everything is new.

As I was exploring in Portland two weeks ago, I happened to pick up a nearly-forgotten book by a nearly-forgotten person, Beryl Markham, a woman who was a pilot in Africa about 80 years ago. The passage that I happened to randomly flip to in the bookstore, which really grabbed my attention, was this:

The available aviation maps of Africa in use at that time all bore the cartographer’s scale mark, ‘1/2,000,000’ — one over two million. An inch on the map was about thirty-two miles in the air, as compared to the flying maps of Europe on which one inch represented no more than four air miles.

Moreover, it seemed that the printers of the African maps had a slightly malicious habit of including, in large letters, the names of towns, junctions, and villages which, while most of them did exist in fact, as a group of thatched huts may exist or a water hold, they were usually so inconsequential as completely to escape discovery from the cockpit.

Beyond this, it was even more disconcerting to examine your charts before a proposed flight only to find that in many cases the bulk of the terrain over which you had to fly was bluntly marked: ‘UNSURVEYED’.

It was as if the mapmakers had said, “We are aware that between this spot and that one, there are several hundred thousands of acres, but until you make a forced landing there, we won’t know whether it is mud, desert, or jungle — and the chances are we won’t know then!”

— Beryl Markham, West With the Night

My aviation maps today have no such markings. The continent is covered with radio beacons, the world with GPS, the maps with precise elevations of the ground and everything from skyscrapers to antenna towers.

And yet, despite all we know, the world is still a breathtaking adventure.

Yesterday, the boys and I were going to fly to Abilene, KS, to see a museum (Seelye Mansion). Circumstances were such that we neither flew nor saw that museum. But we still went to Abilene, and wound up at the Museum of Independent Telephony, a wondrous place for anyone interested in the history of technology. As it is one of those off-the-beaten-path sorts of places, the boys got 2.5 hours to use the hands-on exhibits of real old phones, switchboards, and also the schoolhouse out back. They decided — why not? — to use this historic equipment to pretend to order pizzas.

Jacob and Oliver proceeded to invent all sorts of things to use the phones for: ordering pizza, calling the cops to chase the pizza delivery guys, etc. They were so interested that by 2PM we still hadn’t had lunch and they claimed “we’re not hungry” despite the fact that we were going to get pizza for lunch. And I certainly enjoyed the exhibits on the evolution of telephones, switching (from manual plugboards to automated switchboards), and such.

This place was known – it even has a website, I had been there before, and in fact so had the boys (my parents took them there a couple of years ago). But yesterday, we discovered the Strowger switch had been repaired since the last visit, and that it, in fact, is great for conversations about pizza.

Whether it’s seeing an eclipse, discovering a fascination with tree branches, or historic telephones, a spirit of curiosity and exploration lets a person find fun adventures almost anywhere.

Fixing the Problems with Docker Images

I recently wrote about the challenges in securing Docker container contents, and in particular with keeping up-to-date with security patches from all over the Internet.

Today I want to fix that.

Besides security, there is a second problem: the common way of running things in Docker pretends to provide a traditional POSIX API and environment, but really doesn’t. This is a big deal.

Before diving into that, I want to explain something: I have often heard it said that Docker provides single-process containers. This is unambiguously false in almost every case. Any time you have a shell script inside Docker that calls cp or even ls, you are running a second process. Web servers from Apache to whatever else use processes or threads of various types to service multiple connections at once. Many Docker containers are single-application, but a process is a core part of the POSIX API, and very little software would work if it were limited to a single process. So this is my little plea for more precise language. OK, soapbox mode off.

Now then, in a traditional Linux environment, besides your application, there are other key components of the system. These are usually missing in Docker containers.

So today, I will fix this also.

In my docker-debian-base images, I have prepared a system that still has only 11MB RAM overhead, makes minimal changes on top of Debian, and yet provides a very complete environment and API. Here’s what you get:

  • A real init system, capable of running standard startup scripts without modification, and solving the nasty Docker zombie reaping problem.
  • Working syslog, which can either export all logs to Docker’s logging infrastructure, or keep them within the container, depending on your preferences.
  • Working real schedulers (cron, anacron, and at), plus the standard logrotate utility to help prevent log files inside the container from becoming huge.

The above goes into my “minimal” image. Additional images add layers on top of it, and here are some of the features they add:

  • A real SMTP agent (exim4-daemon-light) so that cron and friends can actually send you mail
  • SSH client and server (optionally exposed to the Internet)
  • Automatic security patching via unattended-upgrades and needrestart

All of the above, including the optional features, has an 11MB overhead on start. Not bad for so much, right?

From here, you can layer on top all your usual Dockery things. You can still run one application per container. But you can now make sure your disk doesn’t fill up from logs, run your database vacuuming commands at will, have your blog download its RSS feeds every few minutes, etc — all from within the container, as it should be. Furthermore, you don’t have to reinvent the wheel, because Debian already ships with things to take care of a lot of this out of the box — and now those tools will just work.
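As a sketch of what layering on top looks like, here is a hypothetical Dockerfile. The FROM line and boot command are assumptions based on the project’s naming, so check the repository README for the exact image name and CMD:

FROM jgoerzen/debian-base-standard:stretch

# Install your app the normal Debian way; its init script, cron jobs,
# and logrotate config are picked up by the base image's init system.
RUN apt-get update && apt-get -y install my-app && apt-get clean

# Boot the init system rather than the application directly.
CMD ["/sbin/init"]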

There is some popular work done in this area already by phusion’s baseimage-docker. However, I made my own for these reasons:

  • I wanted something based on Debian rather than Ubuntu
  • By using sysvinit rather than runit, the OS default init scripts can be used unmodified, reducing the administrative burden on container builders
  • Phusion’s system is, for some reason, not auto-built on the Docker hub. Mine is, so it will be automatically revised whenever the underlying Debian system, or the Github repository, is.

Finally a word on the choice to use sysvinit. It would have been simpler to use systemd here, since it is the default in Debian these days. Unfortunately, systemd requires you to poke some holes in the Docker security model, as well as mount a cgroups filesystem from the host. I didn’t consider this acceptable, and sysvinit ran without these workarounds, so I went with it.
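To make that concrete: the usual recipe for booting systemd inside Docker looks something like the below (the image name is a placeholder), while the sysvinit-based image runs with a plain docker run and no extra privileges:

docker run -d --cap-add SYS_ADMIN \
    -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
    --tmpfs /run --tmpfs /run/lock \
    some-systemd-image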

With all this, Docker becomes a viable replacement for KVM for various services on my internal networks. I’ll be writing about that later.

Silent Data Corruption Is Real

Here’s something you never want to see:

ZFS has detected a checksum error:

   eid: 138
 class: checksum
  host: alexandria
  time: 2017-01-29 18:08:10-0600
 vtype: disk

This means there was a data error on the drive. But it’s worse than a typical data error — this is an error that was not detected by the hardware. ZFS and btrfs write a checksum with every block of data (both data and metadata) written to the drive, and verify that checksum at read time. Most filesystems don’t do this, because theoretically the hardware should detect all errors. But in practice, it doesn’t always, which can lead to silent data corruption. That’s why I use ZFS wherever I possibly can.

As I looked into this issue, I saw that ZFS repaired about 400KB of data. I thought, “well, that was unlucky” and just ignored it.

Then a week later, it happened again. Pretty soon, I noticed it happened every Sunday, and always to the same drive in my pool. It so happens that the highest I/O load on the machine happens on Sundays, because I have a cron job that runs zpool scrub on Sundays. This operation forces ZFS to read and verify the checksums on every block of data on the drive, and is a nice way to guard against unreadable sectors in rarely-used data.
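Such a job is a one-liner in cron; something like this (the pool name here is illustrative):

# /etc/cron.d/zfs-scrub: scrub the pool every Sunday at 2AM
0 2 * * 0 root /sbin/zpool scrub tank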

I finally swapped out the drive, but to my frustration, the new drive now exhibited the same issue. The SATA protocol does include a CRC32 checksum, so it seemed (to me, at least) that the problem was unlikely to be a cable or chassis issue. I suspected motherboard.

It so happened I had a 9211-8i SAS card. I had purchased it off eBay awhile back when I built the server, but could never get it to see the drives. I wound up not filling it up with as many drives as planned, so the on-board SATA did the trick. Until now.

As I poked at the 9211-8i, noticing that even its configuration utility didn’t see any devices, I finally started wondering if the SAS/SATA breakout cables were a problem. And sure enough – I realized I had a “reverse” cable and needed a “forward” one. $14 later, I had the correct cable and things are working properly now.

One other note: RAM errors can sometimes cause issues like this, but this system uses ECC DRAM and the errors would be unlikely to always manifest themselves on a particular drive.

So over the course of this, had I not been using ZFS, I would have had several megabytes of reads with undetected errors. Thanks to using ZFS, I know my data integrity is still good.

Two Boys, An Airplane, Plus Hundreds of Old Computers

“Was there anything you didn’t like about our trip?”

Jacob’s answer: “That we had to leave so soon!”

That’s always a good sign.

When I first heard about the Vintage Computer Festival Midwest, I almost immediately got the notion that I wanted to go. Besides the TRS-80 CoCo II up in my attic, I also have fond memories of an old IBM PC with CGA monitor, a 25MHz 486, an Alpha also in my attic, and a lot of other computers along the way. I didn’t really think my boys would be interested.

But I mentioned it to them, and they just lit up. They remembered the Youtube videos I’d shown them of old line printers and punch card readers, and thought it would be great fun. I thought it could be a great educational experience for them too — and it was.

It also turned into a trip that combined being a proud dad with so many of my other interests. Quite a fun time.

[Photo: Jacob modeling his new t-shirt]

Captain Jacob

Chicago being not all that close to Kansas, I planned to fly us there. If you’re flying yourself, solid flight planning is always important. I had already planned out my flight using electronic tools, but I always carry paper maps with me in the cockpit for backup. I got them out and the boys and I planned out the flight the old-fashioned way.

Here’s Oliver using a scale ruler (with markings for miles corresponding to the scale of the map) and Jacob doing the calculating for us. We measured the entire route and came to within one mile of the computer’s calculation for each segment — those boys are precise!

[photo]

We figured out how much fuel we’d use, where we’d make fuel stops, etc.

The day of our flight, we made it as far as Davenport, Iowa, when a chance of bad weather en route to Chicago convinced me to land there and drive the rest of the way. The boys saw that as part of the exciting adventure!

Jacob is always interested in maps, and had kept wanting to use my map whenever we flew. So I dug an old Android tablet out of the attic, put Avare on it (which has aviation maps), and let him use that. He was always checking it while flying, sometimes saying this over his headset: “DING. Attention all passengers, this is Captain Jacob speaking. We are now 45 miles from St. Joseph. Our altitude is 6514 feet. Our speed is 115 knots. We will be on the ground shortly. Thank you. DING”

Here he is at the Davenport airport, still busy looking at his maps:

[photo]

Every little airport we stopped at featured adults smiling at the boys. People enjoyed watching a dad and his kids flying somewhere together.

Oliver kept busy too. He loves to help me on my pre-flight inspections. He will report every little thing to me – a scratch, a fleck of paint missing on a wheel cover, etc. He takes it seriously. Both boys love to help get the plane ready or put it away.

The Computers

Jacob quickly gravitated towards a few interesting things. He sat for about half an hour watching this old Commodore plotter do its thing (click for video):

[video]

His other favorite thing was the phones. Several people had brought complete analog PBXs with them. They used them to demonstrate various old phone-related hardware; one had several BBSs running with actual modems, another had old answering machines and home-security devices. Jacob learned a lot about phones, including how to operate a rotary-dial phone, which he’d never used before!

[photo]

Oliver was drawn more to the old computers. He was fascinated by the IBM PC XT, which I explained was just about like a model I used to get to use sometimes. They learned about floppy disks and how computers store information.

[photo]

He hadn’t used joysticks much, and found Pong (“this is a soccer game!”) interesting. Somebody had also replaced the guts of a TRS-80 with a Raspberry Pi running a SNES emulator. This thoroughly confused me for a little while, and excited Oliver.

Jacob enjoyed an old TRS-80, which, through a modern Ethernet interface and a little computation help in AWS, provided an interface to Wikipedia. Jacob figured out the text-mode interface quickly. Here he is reading up on trains.

[photo]

I had no idea that Commodore made a lot of adding machines and calculators before they got into the home computer business. There was a vast table with that older Commodore hardware, too much to get on a single photo. But some of the adding machines had their covers off, so the boys got to see all the little gears and wheels and learn how an adding machine can do its printing.

[photo]

And then we get to my favorite: the big iron. Here is a VAX — a working VAX. When you have a computer that huge, it’s easier for the kids to understand just what something is.

[photo]

When we encountered the table from the Glenside Color Computer Club, featuring the good old CoCo IIs like what I used as a kid (and have up in my attic), I pointed out to the boys that “we have a computer just like this that can do these things” — and they responded “wow!” I think they are eager to try out floppy disks and disk BASIC now.

Some of my favorites were the old Unix systems, which are a direct ancestor to what I’ve been working with for decades now. Here’s AT&T System V release 3 running on its original hardware:

[photo]

And there were a couple of Sun workstations there, making me nostalgic for my college days. If memory serves, this one actually runs on m68k hardware, from the pre-Sparc days:

[photo]

Returning home

After all the excitement of the weekend, both boys zonked out for awhile on the flight back home. Here’s Jacob, sleeping with his maps still up.

[photo]

As we were nearly home, we hit a pocket of turbulence, the kind that feels as if the plane is dropping a bit (it’s perfectly normal and safe; you’ve probably felt that on commercial flights too). I was a bit concerned about Oliver; he is known to get motion sick in cars (and even planes sometimes). But what did I hear from Oliver?

“Whee! That was fun! It felt like a roller coaster! Do it again, dad!”

Easily Improving Linux Security with Two-Factor Authentication

2-Factor Authentication (2FA) is a simple way to help improve the security of your systems. It restricts the scope of damage if a machine is compromised. If, for instance, you have a security token or authenticator app on your phone that is required for ssh to a remote machine, then even if every laptop you use to connect to the remote is totally owned, an attacker cannot establish a new ssh session on their own.

There are a lot of tutorials out there on the Internet that get you about halfway there, so here is some more detail.

Background

In this article, I will be focusing on authentication in the style of Google Authenticator, which is a special case of OATH HOTP or TOTP. You can use the Google Authenticator app, FreeOTP, or a hardware token like Yubikey to generate tokens with this. They are all 100% compatible with Google Authenticator and libpam-google-authenticator.

The basic idea is that there is a pre-shared secret key. At each login, a different and unique token is required, which is generated based on the pre-shared secret key and some other information. With TOTP, the “other information” is the current time, implying that both machines must be reasonably well in sync time-wise. With HOTP, the “other information” is a count of the number of times the pre-shared key has been used. Both typically have a “window” on the server side that will accept tokens within a certain number of seconds, or a certain number of counter increments, of the expected value.

The beauty of this system is that after the initial setup, no Internet access is required on either end to validate the key (though TOTP requires both ends to be reasonably in sync time-wise).
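If you want to see the token math in action, the oathtool utility (from the package of the same name) can generate codes for both schemes from the command line. The secret below is a made-up base32 example:

# TOTP: token derived from the secret and the current 30-second time step
oathtool --totp -b JBSWY3DPEHPK3PXP

# HOTP: token derived from the secret and an explicit counter value
oathtool -b -c 42 JBSWY3DPEHPK3PXP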

The basics: user account setup and ssh authentication

You can start with the basics by reading one of these articles: one, two, three. Debian/Ubuntu users will find both the pam module and the user account setup binary in libpam-google-authenticator.
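For reference, the setup binary is run once as the user who will be logging in; it writes ~/.google_authenticator and prints the secret, a QR code, and emergency scratch codes. One reasonable invocation (the flag choices are a matter of taste; see the man page):

google-authenticator -t -d -f -r 3 -R 30 -w 3

Here -t selects TOTP, -d disallows token reuse, -f writes the file without prompting, -r 3 -R 30 rate-limits logins to three per 30 seconds, and -w 3 sets the window size.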

For many, you can stop there. You’re done. But if you want to kick it up a notch, read on:

Enhancement 1: Requiring 2FA even when ssh public key auth is used

Let’s consider a scenario in which your system is completely compromised. Unless your ssh keys are also stored in something like a Yubikey Neo, they could wind up being compromised as well – if someone can read your files and sniff your keyboard, your ssh private keys are at risk.

So we can configure ssh and PAM so that an OTP token is required even in this scenario.

First off, in /etc/ssh/sshd_config, we want to change or add these lines:

UsePAM yes
ChallengeResponseAuthentication yes
AuthenticationMethods publickey,keyboard-interactive

This forces all authentication to pass two verification methods in ssh: publickey and keyboard-interactive. All users will have to supply a public key and then also pass keyboard-interactive auth. Normally keyboard-interactive auth prompts for a password, but we can change that in /etc/pam.d/sshd. I added this line at the very top of /etc/pam.d/sshd:

auth [success=done new_authtok_reqd=done ignore=ignore default=bad] pam_google_authenticator.so

This basically makes Google Authenticator both necessary and sufficient for keyboard-interactive in ssh. That is, whenever the system wants to use keyboard-interactive, rather than prompt for a password, it instead prompts for a token. Note that any user that has not set up google-authenticator already will be completely unable to ssh into their account.

Enhancement 1, variant 2: Allowing automated processes to root

On many of my systems, I have ~root/.ssh/authorized_keys set up to permit certain systems to run locked-down commands for things like backups. These are automated commands, and the above configuration will break them because I’m not going to be typing in codes at 3AM.

If you are very restrictive about what you put in root’s authorized_keys, you can exempt the root user from the 2FA requirement in ssh by adding this to sshd_config:

Match User root
  AuthenticationMethods publickey

This says that the only way to access the root account via ssh is to use the authorized_keys file, and no 2FA will be required in this scenario.

Enhancement 1, variant 3: Allowing non-pubkey auth

On some multiuser systems, some users may still want to use password auth rather than publickey auth. There are a few ways we can support that:

  1. Users without public keys will have to supply a OTP and a password, while users with public keys will have to supply public key, OTP, and a password
  2. Users without public keys will have to supply OTP or a password, while users with public keys will have to supply public key, OTP, or a password
  3. Users without public keys will have to supply OTP and a password, while users with public keys only need to supply the public key

The third option is covered in any number of third-party tutorials. To enable options 1 or 2, you’ll need to put this in sshd_config:

AuthenticationMethods publickey,keyboard-interactive keyboard-interactive

This means that to authenticate, you need to pass either publickey and then keyboard-interactive auth, or just keyboard-interactive auth.

Then in /etc/pam.d/sshd, you put this:

auth required pam_google_authenticator.so

As a sub-variant for option 1, you can add nullok here to permit auth from people who do not have a Google Authenticator configuration.

Or for option 2, change “required” to “sufficient”. You should not add nullok in combination with sufficient, because that could let people without a Google Authenticator config authenticate completely without a password at all.

Enhancement 2: Configuring su

A lot of other tutorials stop with ssh (and maybe gdm) but forget about the other ways we authenticate or change users on a system. su and sudo are the two most important ones. If your root password is compromised, you don’t want anybody to be able to su to that account without having to supply a token. So you can set up google-authenticator for root.

Then, edit /etc/pam.d/su and insert this line after the pam_rootok.so line:

auth       required     pam_google_authenticator.so nullok

The reason you put this after pam_rootok.so is because you want to be able to su from root to any account without having to input a token. We add nullok to the end of this, because you may want to su to accounts that don’t have tokens. Just make sure to configure tokens for the root account first.

Enhancement 3: Configuring sudo

This one is similar to su, but a little different. This lets you, say, secure the root password for sudo.

Normally, you might sudo from your user account to root (if so configured). You might have sudo configured to require you to enter in your own password (rather than root’s), or to just permit you to do whatever you want as root without a password.

Our first step, as always, is to configure PAM. What we do here depends on your desired behavior: do you want to require someone to supply both a password and a token, just a token, or either a password or a token? If you want to require only a token, put this at the top of /etc/pam.d/sudo:

auth [success=done new_authtok_reqd=done ignore=ignore default=bad] pam_google_authenticator.so

If you want to require a token and a password, change the bracketed string to “required”, and if you want a token or a password, change it to “sufficient”. As before, if you want to permit people without a configured token to proceed, add “nullok”, but do not use that with “sufficient” or the bracketed example here.

Now here comes the fun part. By default, if a user is required to supply a password to sudo, they are required to supply their own password. That does not help us here, because a user logged in to the system can read the ~/.google_authenticator file and easily then supply tokens for themselves. What you want to do is require them to supply root’s password. Here’s how I set that up in sudoers:

Defaults:jgoerzen rootpw
jgoerzen ALL=(ALL) ALL

So now, with the combination of this and the PAM configuration above, I can sudo to the root user without knowing its password — but only if I can supply root’s token. Pretty slick, eh?

Further reading

In addition to the basic tutorials referenced above, consider:

Edit: additional comments

Here are a few other things to try:

First, the libpam-google-authenticator module supports putting the Google Authenticator files in different locations and having them owned by a certain user. You could use this to, for instance, lock down all secret keys to be readable only by the root user. This would prevent users from adding, changing, or removing their own auth tokens, but would also let you do things such as reusing your personal token for the root account without a problem.

Also, the pam-oath module does much the same thing as the libpam-google-authenticator module, but without some of the setup help. It uses a single monolithic root-owned password file for all accounts.

There is also an oathtool utility that can be used to generate authentication codes from the command line.

Building a home firewall: review of pfsense

For some time now, I’ve been running OpenWRT on an RT-N66U device. I initially set that up because I had previously been using my Debian-based file/VM server as a firewall, and this had some downsides: every time I wanted to reboot that server, Internet for the whole house was down; shorewall took a fair bit of care and feeding; etc.

I’ve been having indications that all is not well with OpenWRT or the N66U in the last few days, and some long-term annoyances prompted me to search out a different solution. I figured I could buy an embedded x86 device, slap Debian on it, and be set.

The device I wound up purchasing happened to have pfsense preinstalled, so I thought I’d give it a try.

As expected, with hardware like that to work with, it was a lot more capable than OpenWRT and had more features. However, I encountered a number of surprising issues.

The biggest annoyance was that the system wouldn’t allow me to set up a static DHCP entry with the same IP for multiple MAC addresses. This is a very simple configuration in the underlying DHCP server, and OpenWRT permitted it without issue. It is quite useful so my laptop has the same IP whether connected by wifi or Ethernet, and I have used it for years with no issue. Googling it a bit turned up some rather arrogant pfsense people saying that this is “broken” and poor design, and that your wired and wireless networks should be on different VLANs anyhow. They also said “just give it the same hostname for the different IPs” — but it rejects this too. Sigh. I discovered, however, that downloading the pfsense backup XML file, editing the IP within, and re-uploading it gets me what I want with no ill effects!
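For anyone wanting to do the same, the edit is to the static DHCP mappings in the backup XML, which look roughly like this (a sketch; exact tag names may differ, so verify against your own backup before editing):

<staticmap>
  <mac>aa:bb:cc:dd:ee:01</mac>
  <ipaddr>192.168.1.50</ipaddr>
  <hostname>mylaptop</hostname>
</staticmap>
<staticmap>
  <mac>aa:bb:cc:dd:ee:02</mac>
  <ipaddr>192.168.1.50</ipaddr>
  <hostname>mylaptop</hostname>
</staticmap>

The GUI refuses the duplicate ipaddr; the restore path does not.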

So then I went to set up DNS. I tried to enable the “DNS Forwarder”, but it wouldn’t let me do that while the “DNS Resolver” was still active. Digging in just a bit, it appears that the DNS Forwarder and DNS Resolver both provide forwarding and resolution features; they just have different underlying implementations. This is not clear at all in the interface.

Next stop: traffic shaping. Since I use VOIP for work, this is vitally important for me. I dove in, and found a list of XML filenames for wizards: one for “Dedicated Links” and another for “Multiple Lan/Wan”. Hmmm. Some Googling again turned up that everyone suggests using the “Multiple Lan/Wan” wizard. Fine. I set it up, and noticed that when I started an upload, my download performance absolutely tanked. Some investigation showed that outbound ACKs weren’t being handled properly: the wizard had created a qACK queue, but neglected to create a packet match rule for it, so ACKs were not being dealt with appropriately. I fixed that with a rule of my own design, and downloads began working better again. I also needed to boost the bandwidth allocated to qACK (setting it to 25% seemed to do the trick).

Then there were the firewall rules. The “interface” section is first-match-wins, whereas the “floating” section is last-match-wins. This is rather non-obvious.

Getting past all the interface glitches, however, the system looks powerful, solid, and well-engineered under the hood, and fairly easy to manage.

I’m switching from git-annex to Syncthing

I wrote recently about using git-annex for encrypted sync, but due to a number of issues with it, I’ve opted to switch to Syncthing.

I’d been using git-annex with real but noncritical data. Among the first issues I noticed was occasional high CPU usage spikes which, once started, would apparently persist forever. I had an issue where git-annex tried to replace files I’d removed from its repo with broken symlinks, but the real final straw was a number of issues with the gcrypt remote repos. git-remote-gcrypt appears to have a number of issues with possible race conditions on the remote, and at least one of them somehow caused encrypted data to appear in a packfile on a remote repo. Why there was data in a packfile there, I don’t know, since git-annex is supposed to keep the data out of packfiles.

Anyhow, git-annex is still an awesome tool with a lot of use cases, but I’m concluding that live sync to an encrypted git remote isn’t quite there yet for me.

So I looked for alternatives. My main criteria were supporting live sync (via inotify or similar) and not requiring the files to be stored unencrypted on a remote system (my local systems all use LUKS). I found Syncthing met these requirements.

Syncthing is pretty interesting in that, like git-annex, it doesn’t require a centralized server at all. Rather, it forms basically a mesh between your devices. Its concept is somewhat similar to the proprietary Bittorrent Sync — basically, all the nodes communicate about what files and chunks of files they have, and the changes that are made, and immediately propagate as much as possible. Unlike, say, Dropbox or Owncloud, Syncthing can actually support simultaneous downloads from multiple remotes for optimum performance when there are many changes.

Combined with syncthing-inotify or syncthing-gtk, it has immediate detection of changes and therefore very quick propagation of them.

Syncthing is particularly adept at figuring out ways for the nodes to communicate with each other. It begins by broadcasting on the local network, so known nearby nodes can be found directly. The Syncthing folks also run a discovery server (though you can use your own if you prefer) that lets nodes find each other on the Internet. Syncthing will attempt to use UPnP to configure firewalls to let it out, but if that fails, the last resort is a traffic relay server — again, a number of volunteers host these online, but you can run your own if you prefer.

Each node in Syncthing has an RSA keypair, and what amounts to part of the public key is used as a globally unique node ID. The initial link between nodes is accomplished by pasting the globally unique ID from one node into the “add node” screen on the other; the user of the first node then must accept the request, and from that point on, syncing can proceed. The data is all transmitted encrypted, of course, so interception will not cause data to be revealed.

Really my only complaint about Syncthing so far is that, although it binds to localhost, the web GUI does not require authentication by default.

There is an ITP open for Syncthing in Debian, but until then, their apt repo works fine. For syncthing-gtk, the trusty version of the webupd8 PPA works in Jessie (though be sure to pin it to a low priority if you don’t want it replacing some unrelated Debian packages).
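Setting up their repo amounts to something like this (commands as of this writing; see https://apt.syncthing.net/ for the authoritative steps):

curl -s https://syncthing.net/release-key.txt | sudo apt-key add -
echo "deb https://apt.syncthing.net/ syncthing stable" | sudo tee /etc/apt/sources.list.d/syncthing.list
sudo apt-get update && sudo apt-get install syncthing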

Mud, Airplanes, Arduino, and Fun

The last few weeks have been pretty hectic in their way, but I’ve also had the chance to take some time off work to spend with family, which has been nice.

Memorial Day: breakfast and mud

For Memorial Day, I decided it would be nice to have a cookout for breakfast rather than for dinner. So we all went out to the fire ring. Jacob and Oliver helped gather kindling for the fire, while Laura chopped up some vegetables. Once we got a good fire going, I cooked some scrambled eggs in a cast iron skillet, mixed with meat and veggies. Mmm, that was tasty.

Then we all just lingered outside. Jacob and Oliver enjoyed playing with the cats, and the swingset, and then… water. They put the hose over the slide and made a “water slide” (more of a mud slide, maybe).

[photo]

Then we got out the water balloon fillers they had gotten recently, and they loved filling up water balloons. All in all, we all just enjoyed the outdoors for hours.

[video]

Flying to Petit Jean, Arkansas

Somehow, neither Laura nor I have ever really been to Arkansas. We figured it was about time. I had heard wonderful things about Petit Jean State Park from other pilots: it’s rather unique in that it has a small airport right in the park, a feature left over from when Winthrop Rockefeller owned much of the mountain.

And what a beautiful place it was! Dense forests with wonderful hiking trails, dotted with small streams, bubbling springs, and waterfalls all over; a nice lake, and a beautiful lodge to boot. Here was our view down into the valley at breakfast in the lodge one morning:

[photo]

And here’s a view of one of the trails:

[photo]

The sunset views were pretty nice, too:

[photo]

And finally, the plane we flew out in, parked all by itself on the ramp:

[photo]

It was truly a relaxing, peaceful, re-invigorating place.

Flying to Atchison

Last weekend, Laura and I decided to fly to Atchison, KS. Atchison is one of the oldest cities in Kansas, and has quite a bit of history to show off. It was fun landing at the Amelia Earhart Memorial Airport in a little Cessna, and then going to three museums and finding lunch too.

Of course, there is the Amelia Earhart Birthplace Museum, which is a beautifully-maintained old house along the banks of the Missouri River.

[photo]

I was amused to find this hanging in the county historical society museum:

[photo]

One fascinating find was a Regina Music Box, popular in the late 1800s and early 1900s. It operates on the same principles as the cylindrical music boxes you may have seen. But I am particularly impressed with the effort that would have gone into developing these discs in the pre-computer era, since of course the holes at the outer edge of the disc move faster than the inner ones. It would certainly take a lot of careful calculation to produce one of these. I found this one in the Cray House Museum:

[video]

An Arduino Project with Jacob

One day, Jacob and I got going with an Arduino project. He wanted flashing blue lights for his “police station”, so we disassembled our previous Arduino project, put a few things on the breadboard, I wrote some code, and there we went. Then he noticed an LCD in my Arduino kit. I hadn’t ever gotten around to using it yet, and of course he wanted it immediately. So I looked up how to connect it, found an API reference, and dusted off my C skills (that was fun!) to program a scrolling message on it. Here is Jacob showing it off:

[video]
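For anyone who wants to try something similar, here is a minimal sketch in the same spirit (the pin assignments and message are illustrative; this is not our exact code):

#include <LiquidCrystal.h>

// rs, en, d4, d5, d6, d7 -- match these to your wiring
LiquidCrystal lcd(12, 11, 5, 4, 3, 2);
const int blue1 = 8;
const int blue2 = 9;

void setup() {
  pinMode(blue1, OUTPUT);
  pinMode(blue2, OUTPUT);
  lcd.begin(16, 2);                  // 16x2 character LCD
  lcd.print("JACOB'S POLICE STATION");
}

void loop() {
  digitalWrite(blue1, HIGH);         // alternate the two blue "police" LEDs
  digitalWrite(blue2, LOW);
  delay(150);
  digitalWrite(blue1, LOW);
  digitalWrite(blue2, HIGH);
  delay(150);
  lcd.scrollDisplayLeft();           // nudge the message along each pass
}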