Category Archives: Debian

Introducing the Command Line at 3 years

Jacob is very interested in how things work. He’s 3.5 years old, and into everything. He loves to look at propane tanks, watch the pressure meter, and open the lids on top to see the vent underneath. Last night, I showed him our electric meter and the spinning disc inside it.

And, more importantly, last night I introduced him to the Linux command line interface, which I called the “black screen.” Now, Jacob can’t read yet, though he does know his letters. He had a lot of fun sort of exploring the system.

I ran “cat”, which simply lets him bash on the keyboard and, whenever he presses Enter, echoes what he typed back at him. I taught him how to hold Shift and press a number key to get a fun symbol. His favorite is the “hat” above the 6.

Then I ran tr a-z A-Z for him, and he got to watch the computer convert every lowercase letter into an uppercase letter.
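Both of those are bone-stock Unix tools, so anyone can replicate the fun. A session looks roughly like this (Ctrl-D exits either one):

  $ cat
  jjjfff666^^^
  jjjfff666^^^      <- cat echoes the line back after Enter
  $ tr a-z A-Z
  hello
  HELLO             <- every lowercase letter converted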

Despite the fact that Jacob enjoys watching YouTube videos of trains, and even a bit of Railroad Tycoon 3 with me, this was the kind of pure exploration that he loves. Sometimes he’d say, “Dad, what will this key do?” Sometimes I didn’t know; some media keys did nothing, and some other keys caused weird things to appear. My keyboard has back and forward buttons designed for use with a web browser. He almost squealed with delight when he pressed the forward button and noticed it printed lots of ^@^@^@ characters on the screen when he held it down. “DAD! It makes LOTS of little hats! And what is that other thing?” (The at-sign).

I’ve decided it’s time to build a computer for Jacob. I have an old Sempron motherboard lying around, and an old 9″ black-and-white VGA CRT that’s pretty much indestructible, plus an old case or two. So it will cost nothing. This evening, Jacob will help me find the parts, and then he can help me assemble them all. (This should be interesting.)

Then I’ll install Debian while he sleeps, and by tomorrow he should be able to run cat all by himself. I think that, within a few days, he can probably remember how to log himself in and fire up a program or two without help.

I’m looking for suggestions for text-mode games appropriate to a 3-year-old. So far, I’ve found worm from bsdgames that looks good. It doesn’t require him to have quick reflexes or to read anything, and I think he’ll pick up using the arrow keys to move it just fine. I think that tetris is probably still a bit much, but maybe after he’s had enough of worm he would enjoy trying it.
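For anyone who wants to try it, worm lives in Debian’s bsdgames package, so the setup is just:

  apt-get install bsdgames   # also ships tetris-bsd and many others
  worm                       # arrow keys steer; nothing to read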

I was asked on Twitter why I’ll be using the command line for him. There are a few reasons. One is that it will actually be usable on the 9″ screen, but another one is that it will expose the computer at a different level than a GUI would. He will inevitably learn about GUIs, but learning about a CLI isn’t inevitable. He won’t have to master coordination with a mouse right away, and there’s pretty much no way he can screw it up. (No, I won’t be giving him root yet!) Finally, it’s new and different to him, so he’s interested in it right now.

My first computer was a TRS-80 Color Computer (CoCo) II. Its primary interface was a BASIC interpreter, which I guess counts as a command-line interface. I remember learning how to use that, and later DOS on a PC. Some of the games and software back then had no documentation and crashed often. Part of the fun, the challenge, and sometimes the frustration, was figuring out just what a program was supposed to do and how to use it. It will be fun to see what Jacob figures out.

Server upgraded to Debian lenny

This afternoon, I finally decided to upgrade my main server from Debian etch to lenny. Lenny is still testing, but is nearing release. This server is colocated with Core Networks, and I have no physical or console access to it. (Well, I can request the IP KVM if needed.) It also hadn’t been rebooted in over 200 days.

The actual upgrade itself was incredibly smooth. For those of you that don’t use Debian, you might be interested to know that you can upgrade a running system in-place. A reboot is not even strictly necessary, though you won’t get kernel updates without it.
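For those unfamiliar with the process, the whole in-place upgrade boils down to a sources.list edit plus apt; a rough sketch (the lenny release notes spell out the full recommended sequence):

  # point apt at lenny instead of etch, then upgrade the running system
  sed -i 's/etch/lenny/g' /etc/apt/sources.list
  apt-get update
  apt-get dist-upgrade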

There was a bit of config file tweaking for Exim and Apache, a small bit for PHP, and that was it for the entire thing.

EXCEPT for the two things that always really bug me: Horde/IMP and Ruby/Rails. Horde has the most annoying upgrade process of any web app I’ve used lately. You first go through reading two different upgrade docs. To upgrade IMP, you pipe some SQL commands into a PostgreSQL psql process (though they only document the MySQL command line). Ditto for Horde. But for Turba, the address book, you have to run a PHP program from the command line. Only it doesn’t work from any place in your PATH, so you have to divine a location to copy it to, run it from there, hack up the database stuff in it, then remember to delete it.
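For the record, the PostgreSQL version of their documented MySQL step is just a different pipe; the script name below is a stand-in for whichever file the Horde docs actually point at:

  # docs say: mysql horde < some_upgrade.sql
  psql horde < some_upgrade.sql    # the undocumented PostgreSQL equivalent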

And that concludes the documented upgrade process. Only — surprise — it’s not done. At this point, you’ll get weird PHP warnings all over your screen. Then you google them, and find you have to log in to the web app as an administrator, and run three different upgrade procedures from within it, each of which requires you to copy and paste a config file to disk.

A far cry from the WordPress single-click upgrade. And this is easier than Horde/IMP upgrades I’ve done in the past.

The other annoying thing is Ruby on Rails. I run one Rails app, Redmine, and it’s always annoying. You’ve got to get all sorts of gems just right. Today they decided they didn’t like my new PostgreSQL driver for Ruby, but they weren’t exactly obvious about it. Try upgrading the gems, and — surprise — AFTER they are upgraded, they say that I need a newer rubygems than what’s in Debian. Oooookaaayy…. Restore the gems from backup, google some more, find a patch, apply it, hack for awhile, and finally it works. But I have no idea why.

So, overall, kudos to all the Debian developers for a smooth upgrade process. I hope I can say that about Horde and Rails in the future.

Oh, and by the way, I did reboot the server. It came right up with the new kernel and OS, no problem.

Administering Dozens of Debian Servers

At work, we have quite a few Debian servers. We have a few physical machines, then a number of virtual machines running under Xen. These servers are split up mainly along task-oriented lines: DNS server, LDAP server, file server, print server, mail server, several web app servers, ERP system, and the like.

In the past, we had fewer virtual instances and combined more services into a single OS install. This led to some difficulties, especially with upgrades. If we wanted to upgrade the OS for, say, the file server, we’d have to upgrade the web apps and test them along with it. This was not a terribly sustainable approach, hence the heavier reliance on smaller virtual environments.

All these virtual environments have led to their own issues. One of them is getting security patches installed. At present, that’s a mainly manual task. In the past, I used cron-apt a bit, but it seemed to be rather fragile. I’m wondering what people are using to get security updates onto servers in an automated fashion these days.
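For reference, the cron-apt approach amounts to tweaking one of its action files; a sketch from memory (cron-apt’s default is download-only, and removing the -d flag is what makes it actually install):

  # /etc/cron-apt/action.d/3-download, modified to install updates
  autoclean -y
  dist-upgrade -y -o APT::Get::Show-Upgraded=true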

The other issue is managing the configuration of these things. We have some bits of configuration that are pretty similar between servers — the mail system setup, for instance. Most of them are just simple SMTP clients that need to be able to send out cron reports and the like. We had tried using cfengine2 for this, but it didn’t work out well. I don’t know if it was our approach or not, but we found that hacking cfengine2 after making changes on systems was too time-consuming, and so that task slipped and eventually cfengine2 wasn’t doing what it should anymore. And that was even with taking advantage of its ability to do things like put the local hostname in the right places.

I’ve thought a bit about perhaps providing some locally-built packages that establish these config files, or load them up with our defaults. That approach has worked out well for me before, though it also means that pushing out changes isn’t a simple hack of a config file somewhere anymore.

It seems like a lot of the cfengine2/bcfg2 tools are designed for environments where servers are more homogeneous than ours. bcfg2, in particular, goes down that road; it makes it difficult to log on to a web server, apt-get install a few PHP modules that we need for a random app, and just proceed.

Any suggestions?

Crazy Cursor Conspiracy Finally Fully Fixed

So lately I had the bad fortune to type in apt-get install gnome-control-center on my workstation. It pulled in probably a hundred dependencies, but I confirmed installing it, never really looking at that list.

The next day, I had a reason to reboot. When I logged back in, I noticed that my beloved standard X11 cursors had been replaced by some ugly antialiased white cursor theme. I felt as if XP had inched closer to taking over my machine.

I grepped all over $HOME for some indication of what happened. I played with the cursor settings in gnome-control-center’s appearance thing, which didn’t appear to have any effect. When I logged out, I noticed that the cursor was messed up in kdm of all things, and no amount of restarting it could fix it.

After some grepping in /etc, I realized that I could fix it with this command:

update-alternatives --config x-cursor-theme

And I set it back to /etc/X11/cursors/core.theme. Ahh, happiness restored.

I guess that’ll teach me to install bits of gnome on my box. Maybe.

Thoughtfulness on the OpenSSL bug

By now, I’m sure you all have read about the OpenSSL bug discovered in Debian.

There’s a lot being written about it. There’s a lot of misinformation floating about, too. First thing to do is read this post, which should clear up some of that.

Now then, I’d like to think a little about a few things people have been saying.

“People shouldn’t try to fix bugs they don’t understand.”

At first, that sounds like a fine guideline. But when I thought about it a bit, I think it’s actually more along the lines of useless.

First of all, there is this problem: how do you know whether or not you understand it? Obviously, sometimes you know you don’t understand code well. But there are times when you think you do, but don’t. Especially when we’re talking about C and its associated manual memory management and manual error handling. I’d say that, for a C program of any significant size, very few people really understand it. Especially since you may be dealing with functions that call other functions 5 deep, and one of those functions modifies what you thought was an input-only parameter in certain rare cases. Maybe it’s documented to do that, maybe not, but of course documentation cannot always be trusted either.

I’d say it’s more useful to say that people should get peer review of code whenever possible. Which, by the way, did occur here.

“The Debian maintainer of this package {is an idiot, should be fired, should be banned}”

I happen to know that the Debian programmer that made this patch is a very sharp individual. I have worked with him on several occasions and I would say that kicking him out of maintaining OpenSSL would be a quite stupid thing to do.

He is, like the rest of us, human. We might find that other people are considerably less perfect than he.

“Nobody that isn’t running Debian or Ubuntu has any need to worry. This is all Debian’s fault.”

I guess you missed the part of the advisory that mentioned that it also fixed an OpenSSL upstream bug (that *everyone* is vulnerable to) that permitted arbitrary code execution in a certain little-used protocol? OpenSSL has a history of security bugs over the years.

Of course, the big keygen bug is a Debian-specific thing.

“Debian should send patches upstream”

This is general practice in Debian. It happens so often, in fact, that the Debian bug-tracking system has had — for probably more than a decade — a feature that lets a Debian developer record that a bug reported to Debian has been forwarded to an upstream developer or bug-tracking system.

It is routine to send both bug reports and patches upstream. Some Debian developers are more closely aligned with upstream than others. In some cases, Debian developers are part of the upstream team. In others, upstream may be friendly and responsive enough that Debian developers run any potential patches to upstream code by them before committing them to Debian. (I tend to do this for Bacula). In some cases, upstream is busy and doesn’t respond fast or reliably or helpfully enough to permit Debian to make security updates or other important fixes in a timely manner. And sometimes, upstream is plain AWOL.

Of course, it benefits Debian developers to send patches upstream, because then they have a smaller diff to maintain when each new version comes out.

In this particular case, communication with upstream happened, but the end result just fell through the cracks.

“Debian shouldn’t patch security-related stuff itself, ever”

Well, that’s not a very realistic viewpoint. Every Linux distribution does this, for several reasons. First, a given stable release of a distribution may ship software older than the current upstream version; some upstreams are not interested in patching old versions, while the new upstream versions introduce changes too significant to go into a security update. Secondly, some upstreams do not respond in a timely manner, and Debian wants to protect its users ASAP. Finally, some upstreams are simply bad at security, and having smart folks from Debian — and other distributions — write security patches is a benefit to the community.

LinuxCertified Laptop LC2100S

As you might know from reading my blog, at my workplace, we have largely standardized on Linux on the desktop and laptop.

We use systemimager to maintain a standard desktop image and a separate standard laptop image. These images differ because there are different assumptions. The desktop machines mount /home over NFS, authenticate to LDAP, etc. This doesn’t work on laptops. Moreover, desktops don’t use network-manager or wifi, but laptops do.

Our desktop image uses Debian’s hardware autodetection — plus a little hacking in /etc/init.d/gdm — to automatically adjust to a wide range of hardware. So far this has worked well.

Laptops are much more picky. Our standard laptop model had been the HP nc4400 — a small and light 12″ model that people here loved. HP discontinued that model. Their replacement was the 2510p. We ordered one in here for evaluation. Try as we might, we couldn’t get it to suspend and resume properly in Linux.

So I went out scouring the field of Linux laptops. Companies such as Emperor Linux buy retail laptops from people like Lenovo, test them for Linux, and sell them — at a premium. These were too expensive to justify in the quantities we need.

Then I stumbled across Linux Certified. I’d never heard of them before. I called them up and asked a few questions. They don’t buy retail laptops, but instead have OEMs in Taiwan build laptops to their spec. They happen to use the same OEM that Fujitsu does, I believe. (No big company builds laptops in the USA these days). I asked them about wifi chipsets, video chipsets, whether they use stock kernels. I got clueful answers to all of these.

So we ordered one of their LC2100s models. They didn’t offer Debian preinstalled, but did offer Ubuntu, so I selected that. The laptop arrived a couple of days (!!) later, configured with the particular CPU, etc. that I selected.

I was surprised at the thrill I felt at taking a brand new laptop out of its box, turning it on, and watching Grub appear before my eyes. Ubuntu proceeded to boot. I then of course installed our regular Debian image on the thing to check it out.

It needed a kernel and xserver-xorg-video-intel from lenny, as well as the ipw3945 driver for wifi, but otherwise worked with the exact same software as our HP nc4400 image. (In fact, it wasn’t hard to support both laptops with that image, since both use a lot of Intel hardware.) The one trick was making hibernate call /etc/init.d/ipw3945d stop so that the ipw3945 module could be unloaded before suspend. (Why this particular chipset needs a daemon is beyond me, but oh well.)
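The hook itself is tiny; in the hibernate package’s config it comes out to something like this (the numbers control ordering, and the exact syntax here is from memory, so treat it as a sketch):

  # /etc/hibernate/common.conf
  OnSuspend 10 /etc/init.d/ipw3945d stop     # so the module can unload
  OnResume  90 /etc/init.d/ipw3945d start    # bring the daemon back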

The hardware is great. As far as I know, the ipw3945 was the only component that wasn’t directly and automatically supported by DFSG-free software in lenny main. The screen is sharp and high-contrast (it’s glossy, which I personally don’t like, but I bet our users will). The device itself feels sturdy. It’s small and dense. I haven’t opened it up, but it looks like all you need is a screwdriver to do so.

The only downside is that they don’t sell docking stations for it. Their standard answer on that is to buy a USB docking station. That’s a partial answer, but can’t handle power or video like a standard docking station will.

Also, the LC2100s is much cheaper than the HP laptop, even when configured with nicer specs in every way. That is no doubt partially due to the lack of the Windows tax.

I’m sending off an order for 4 more today, I believe.

Linux Hardware Support Better Than Windows

Something I often hear from people that talk about Linux on the desktop is this: people want to be able to go to the store, buy hardware, and be confident that it will Just Work.

I would like to point out that things are rarely this simple on Windows. And, in fact, things are often simpler on Linux these days.

Here’s the example that prompted this post.

I have a computer that’s about 4 years old. It’s my main desktop machine at home. It was still fast enough for me, but has been developing all sorts of weird behaviors. Certain USB ports stopped working altogether a few months ago. Then it started hanging during POST whenever I’d try to reboot — but would still boot OK about 80% of the time after a power cycle. Then it started randomly losing contact with my USB mouse until a reboot. And the last straw was when the display started randomly going out. I’ve told everyone that my machine has cancer and is slowly dying.

The case is a pretty nice full tower — solid and sturdy. I have a 160GB IDE drive in it. So I figured I’d upgrade the motherboard, CPU, and RAM, and add a 500GB SATA drive since they’re so cheap these days and I’m running out of space. I’d also have to buy a new video card, since my old one was AGP and the new motherboard only has PCI Express for video. So about $700 later from Newegg (I got a Core 2 Duo E6750), the parts arrived.

I spent some time installing it all. The motherboard had only one IDE channel, and I didn’t have any IDE cable long enough to connect both the IDE hard disk and the optical drive, so I popped in an old Maxtor/Promise PCI Ultra133 controller I had sitting around to use with the DVD burner.

Now, to recap, the hardware that the OS would see as new/different is: CPU, RAM, IDE controller, SATA controller, Promise IDE controller, integrated NIC, sound, video.

Then the magic smoke test.

I turned on the machine. Grub appeared. Linux started booting.

Even though I had switched from the default Debian “supports everything” kernel to a K7 kernel, it still booted.

And every single piece of hardware was supported immediately. There was no “add new hardware” wizard that popped up, no “I’ve found new hardware” boxes. It just worked, silently, with no need to tell me anything or have me click on anything.

Only one piece required configuration: the NIC, thanks to some udev design flaws (it got renamed from eth0 to eth1 by udev). That took 20 seconds. Debian saw the IDE HDD, the SATA drive, the Promise controller, the DVD burner, the video card, the sound, and it all worked automatically. And Debian is not even a distro that occurs to a lot of people when they think of great hardware support.
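The 20-second fix, for anyone who hits the same thing: udev keys NIC names to MAC addresses in its persistent rules file (the etch-era file name is shown here), and you just prune or rename the entry:

  # /etc/udev/rules.d/z25_persistent-net.rules
  # delete the stale line for the old NIC, or change the new card's
  # NAME="eth1" back to NAME="eth0", then reboot or reload udev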

Now let’s turn to Windows.

The Windows Nightmare

I have a legal copy of Windows XP Home that was preinstalled on the machine when I got it. I resized its partition down to about 20GB so that I could use 140GB for Linux. I use it rarely, primarily for gaming, and I’ve bought about 3 games in the last 4 years. I usually disconnect the network when I boot to Windows, though I do keep it current with updates.

I did some research on what Windows was going to do when I replaced the hardware. The general consensus from people on the ’net is that you can’t just replace a motherboard and expect everything to be happy. There were generally three different approaches suggested: 1) don’t even try, just reinstall; 2) do a rescue install after you move over; and 3) use sysprep. The rescue install has to be done by booting from an XP install CD, then picking a rescue install option somewhere. It will overwrite your installed Windows with the version from the CD. That means that I’d have to re-apply SP2, though bits of it that didn’t get overwritten would still be on the hard disk, and who knows what would happen to the registry.

Option #3 was to download sysprep (must have the Genuine Disadvantage ActiveX to get the free download from MS). Sysprep is designed to be used just prior to taking an image with ghost for replication. It removes the hardware-specific config (but not the drivers), as well as the product key, from the machine, but otherwise leaves it untouched. On the next boot, you get the “Welcome to XP” wizard.

One other strike against it is that Compaq “helpfully” didn’t ship any install CDs with the machine. Under Windows, they did have a “create rescue CD” tool, which burned 7 CDs for me. But they are full Compaq-specific CDs, not one of them an XP CD, *AND* they check on boot to see if you’re using the same Compaq motherboard, and exit if not. Highly useless.

So I went with sysprep. Before my new hardware even arrived, I downloaded the Windows drivers for all of it. I burned them to a CD, and installed as many as I could on the system in advance. About half of them refused to install since the new hardware wasn’t there yet. I then took a raw image of the partition with dd, just in case. Finally, right before I swapped the hardware, I ran sysprep and let it shut down the machine.
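The raw image is a single command; the device name below is an example, and the destination is wherever you have space:

  dd if=/dev/hda1 of=/mnt/backup/winxp.img bs=1M   # raw copy of the XP partition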

So after the new hardware was installed came the adventure.

Windows booted to the “welcome to XP” thingy. The video, keyboard, mouse, and IDE HDD worked. That’s about it.

I went through the “welcome to XP” wizard. But the network didn’t work yet, so I couldn’t activate it. So I popped my handy driver CD in the drive. But what’s this? Windows doesn’t recognize the DVD drive, because it doesn’t have drivers for this Promise controller that came out in, what, 2001? Sigh. Downloaded the drivers with the iMac, copied them to a CF card, plugged the USB CF reader into Windows.

While I was doing that, about 6 “found new hardware” dialogs got queued up. Not one of them could actually find a driver for my hardware, but that didn’t prevent Windows from making me click through them all.

So, install Promise driver from CF card, reboot. Click through new hardware dialogs again. Install network driver, reboot, click through dialogs. Install sound driver. Install Intel “chipset” driver, click through dialogs. Reboot. Install SATA driver. Reboot.

So the hardware appears to all be working by this point, though I have a Creative volume control (from the old hardware) and a Realtek one in the tray. Minor annoyance to deal with later.

Now I have to re-activate XP. I dutifully key in the magic string from the sticker on my case. Surprise surprise, the Internet-based activation fails because my hardware is different. So I have to call the 800 number. I have to read in 7 blocks of 6 digits, one block at a time. Then I answer some questions: have I activated Windows before, have I changed hardware, was the old hardware defective (yes, yes, and yes). Then I get 7 blocks of 6 digits read to me. Finally Windows is activated. PHEW! Why they couldn’t ask those questions with the online tool is beyond me.

Anyhow. Linux took me 20 seconds to get working. Windows, about 2 hours, plus another 2 hours for prep and research.

I did zero prep for Linux. I made one config change (GUI users could have just configured their machine to use eth1).

Other cool Linux HW features

Say you buy a new printer and want to get it set up. On Windows, you insert the CD, let it install 200MB of print drivers plus ads plus crap plus add something to your taskbar plus who knows what else. Probably reboot. Then the printer might actually print.

On Debian, you plug in the printer to the USB port. You type printconf. 5 seconds later, your printer works.

I have been unpleasantly surprised lately by just how difficult hardware support in Windows really is, especially since everyone keeps saying how good it is. It’s not good. Debian’s is better, in my opinion.

Debian Developers 7 Years Ago

Today while looking for something else, I stumbled across a DVD with the “last archive” of my old personal website. On it were a number of photos from the 2000 Annual Linux Conference in Atlanta, and the Debian developers that were there. These were posted in public for several years.

I’ve now posted all of them on flickr, preserving the original captions.

Here’s the obligatory sample:

[Photo: 20001018-01-06.jpg]

That’s Joey Hess, using what I think was his Vaio. Most acrobatic keyboardist ever. Probably the only person that could write Perl with one hand comfortably.

What else can you see? The best of show award that Debian won that is now in my basement due to a complicated series of events, the Debian machines that were being shown off at the show, Sean Perry and Manoj, the photo with long-term corrupted caption, and of course, numerous shots of Branden.

I know the size stinks. It was scanned at what passed for web resolution in 2000. I do still have the negatives somewhere and will post the rest of them, in higher res, when I find them.

Click here to view the full set.

And we’re off!

Yesterday afternoon, we started our information meetings with employees about our Linux on the desktop project. We’re underway on our migration.

But before I talk about that, I need to back up and describe what the project is.

We are converting approximately 80% of our 150 or so PC users to Linux desktops. They’re running Debian etch (4.0) with Gnome, Firefox (Iceweasel), Evolution, NFSv4, and SystemImager. Over the coming days and weeks, I’ll be writing about why we’re doing this, how we’re making it happen, things we’ve run into along the way, and the technology behind it.

Today I’d like to start with a high-level overview of the reasons we started investigating this option.

It became apparent that Vista was going to be a problem for us. Most of our desktop PCs are not very old, but Vista meant a significant degradation in performance from the Windows XP Pro that most people were running. The dip was so large, in fact, that it would have had a real negative impact on employee productivity.

We tend to buy PCs with Windows licenses from the vendor (Windows preinstalled). As such, we knew it wouldn’t be long before XP-based machines would be hard to find. If we stuck with Windows, we’d be running a mixed-OS network — which we knew from experience we did NOT want to do. The other option would be to replace all those old PCs. The direct costs of doing that, with the associated Vista and Office licenses, would have been more than $200,000.

So we started to look at other options — changing the way we license Windows, sticking with XP for awhile, or switching away from Windows. This last option sounded the most promising.

I took a spare desktop-class machine, representative of the hardware most end users would have, and installed etch (then testing) on it. I spent a bit of time tweaking the desktop settings, making things as transparent to the user as possible. We liked what we saw and started pursuing it a bit more. We knew we had some Windows apps we couldn’t discard, so we tested running them off a Windows terminal server with the Linux rdesktop client. That worked well — and the appropriate Server 2003 licenses plus CALs would still be far cheaper than a mass migration to Vista.
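The client side of that is a one-liner; the user, domain, and server names below are made up:

  rdesktop -f -u someuser -d OURDOMAIN termserv.example.com   # -f = full screen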

To make a long story short, we are getting quite a few benefits out of all this. One of the most important is a single unified system image. Excepting a few files like /etc/fstab, every system gets a bit-for-bit identical installation from the server, updated using rsync. /home is mounted from the network using NFS (v4). So our users can sit down at any PC, log in, and have all their programs, settings, email, etc. available. A side benefit is that hardware problems become minor annoyances rather than major inconveniences; if your hard disk dies, we can just bring up a different PC. We had tried numerous times to make roaming profiles work in Windows, but never really achieved a reliable setup there — perhaps because it seemed virtually impossible to assure that each Windows PC had the exact same set of software, in the exact same versions, installed.
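Conceptually, updating a client from the golden image is little more than an rsync with the machine-specific files excluded; a simplified sketch (SystemImager wraps this in its own tooling, and the host and module names here are invented):

  # pull the master image onto this client, leaving /etc/fstab alone
  rsync -aHx --delete --exclude=/etc/fstab imageserver::desktop-image/ /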

More to come.

Disk encryption support in Etch

Well, I got my new MacBook Pro 15″ in yesterday. I’ll write something about that shortly. The main OS for this machine is not Mac OS X, though, but Debian.

I decided that, being a laptop, I would like to run dm-crypt on here. Much to my delight, the etch installers support dm-crypt out of the box.

Not only that, but they supported this setup out of the box, too:

  • Two partitions for Debian — one for /boot, everything else on the second one
  • The second partition is completely encrypted
  • Inside the encrypted container is an LVM physical volume
  • Inside the LVM physical volume are logical volumes for /, /home, /usr, /var, and swap
  • XFS is used for each filesystem

Not only that, but it set up the proper boot sequence for all of this out of the box, too.
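For a sense of what the installer automated, here is roughly the same layout built by hand (device names and sizes are illustrative):

  cryptsetup luksFormat /dev/sda2        # encrypt everything except /boot
  cryptsetup luksOpen /dev/sda2 crypt0
  pvcreate /dev/mapper/crypt0            # LVM inside the encrypted container
  vgcreate vg0 /dev/mapper/crypt0
  lvcreate -n root -L 8G vg0             # repeat for home, usr, var, swap
  mkfs.xfs /dev/vg0/root                 # XFS on each logical volume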

So I turn on the unit, enter the password for the encrypted partition, and then the system continues booting.

Nice. Very nice.

Kudos to the debian-installer and initramfs teams.