Category Archives: Technology

Real World Haskell Update

Times are exciting. Our book, Real World Haskell, is now available in a number of venues. But before I get to that, I’ve got to talk about what a thrill this project has been.

I created our internal Darcs repository in May 2007. Since then, the three of us have made 1324 commits, and that doesn’t count work done by copyeditors and others at O’Reilly.

We made early drafts of the book available online for commenting, which served as our tech review process. By the time we finished writing, about 800 people had submitted over 7,500 comments. I’ve never seen anything like it, and I really appreciate everyone who commented.

As for availability, RWH is available:

  • For immediate purchase with electronic delivery, from O’Reilly’s page
  • For immediate viewing on Safari Books Online, at its book page
  • Timing for the paper edition is still tentative, but we’re estimating arrival in bookstores the week of December 8.

People are talking about it on blogs, twitter, etc. We’re excited!

Web Design Companies That Understand Technology

There are a lot of companies out there that do web design work that looks fabulous.

Unfortunately, a lot of these sites look fabulous only when viewed in IE6 build xxxx, on a 75dpi monitor, with fonts set to the expected size, running on Windows XP SP2 with JavaScript enabled. Try the site in Safari or Firefox, or with larger-than-expected fonts, and things break down: text boxes overlap each other, buttons that should work don’t, and the page becomes a mess.

So, if your employer wanted a web design company that has a good grasp of Web standards and the appropriate use of them, where would you look? A company that can write good HTML, CSS, and JavaScript, and still make the site look appealing? A company that has heard of Apache and gets the appropriate nausea when someone mentions ColdFusion or FrontPage?

So far, I’ve seen these places mentioned by others:

WebDevStudios.com
Happy Cog
Crowd Favorite

Converted to WordPress

I have been using Serendipity on my blog for some time now. Overall, I’ve been pleased with it, but the conversion was a pain.

Serendipity is a simple blog engine, and has a wonderful built-in plugin system. It can detect what plugins need upgrading, and install those upgrades, all from directly within the management interface. There’s no unzipping stuff in install directories as with WordPress.

Review: Silicon Mechanics

After some hilariously frightening reactions from Dell support to simple problems, and HP becoming aggressively competitive on price, we’ve been using HP servers for a few years now. The hardware is good, and the support, while reasonable, always… pauses… when I mention that we’re running Debian. I try not to let it slip if I don’t have to.

We put in some HP blades a couple of years ago, and I was annoyed to discover that they have discontinued that enclosure and all the blades in it. I decided this was a good time to look at their newer options, as well as at other companies.

Back in July, I had noticed a Silicon Mechanics booth at OSCon. I noticed their slogan “experts included.” That sounds great; we’ve got software experts here, but not hardware experts, and I’d enjoy dealing with a company that knows more about their hardware than I do. I went up to their booth and asked what they’d say about us running Debian on their hardware. “That would be just fine.” “So you’d fully support it when I’m running Debian?” “Sure.” “What about management software – do you have any of that which I’d find annoying to port to Debian?” “Our servers don’t need any management software other than what comes with your kernel.” Good answers.

So, when it came time for us to decide what to do about getting a new server in here, I figured I’d call up Silicon Mechanics and see what they’d recommend. They put me on a conference call with a sales rep and an IT engineer, and wound up recommending a 1U server for us to start with, and an iSCSI storage device to address some of the storage needs we have (both for that server and others). I had heard of iSCSI only vaguely, and asked how it worked, and what the performance would be like compared to our 2Gb FC SAN. I got back intelligent (and correct) answers.

They probably spent 2 hours with me on the phone before we placed an order. I was incredibly happy with their service, level of expertise, and helpfulness. They even did a webinar to demo the management interface on the storage unit for me.

Today, the 1U server arrived. I unboxed it and set it on my desk to configure. First item: set an IP address for the IPMI card. That’s the device that lets me connect to the machine with a web browser and interact with the console, power cycle it, etc., as if I were there. I set an IP, but somehow couldn’t figure out the username and password for the web interface.

So I called Silicon Mechanics support at the number that was included on the fridge magnet (!) that came with the shipment. Phone rang once. Then a live, capable American answered. No menus, no fuss. I asked my question. He apologized, saying, “I should know that, but I’ll have to look it up… hold on just a bit.” I had my answer about 90 seconds later. He offered to send me the full docs for the IPMI card if I wanted as well.

So I’ve been very impressed with them so far. From what I’ve heard, their iSCSI enclosure ought to be quite something as well. They even helped us spec out a switch that supports trunking for use with it.

I’ll give them a “highly recommended”.

Looking back at WordPress

I’ve hosted this blog on three different platforms: Drupal, WordPress, and at present, Serendipity.

Back in 2006, I rejected WordPress, noting that most of its plugins were incompatible with the current version, its main anti-spam software wasn’t Free, and there was no central plugin directory. And, while WordPress supported PostgreSQL, many plugins didn’t.

Serendipity, at the time, had none of those problems.

However, I’ve been having other problems with Serendipity since then. People have repeatedly had trouble with captchas. The RSS feeds have long had subtle incompatibilities with certain aggregators, leading to duplicate posts.

I’m looking back at WordPress now. It looks like it is a lot more mature than it was 2.5 years ago. Perhaps it’s time to switch back.

I hope it will support PostgreSQL better now, but I note that its website seems to list MySQL only these days. Ah well, can’t have it all, I guess.

Search for Backup Tools

Since the last time I went looking for backup software, I’ve still been using rdiff-backup.

It’s nice, except for one thing: it always keeps an uncompressed copy of your current state on the disk. This is becoming increasingly annoying.

I did some tests with dar and BackupPC, and both saved considerable disk space over rdiff-backup. The problem with dar, or compressed full/incrementals with tar, is that eventually you have to make a new full backup. You have to do that, *then* delete all your old fulls and incrementals, so there will be times when you have to store a full backup twice.

The hardlinking approach sounds good. It’s got a few problems, too. One is that it can lose metadata about, ironically enough, hard links. Another is that few of the hard linking programs offer a compressed on-disk format. Here’s what I’ve been looking at:

BackupPC

Nice on the surface. I’m a bit annoyed that it’s web-driven rather than command-line-driven, but I can look past that. I can also look past that it won’t let me clamp down on ssh access as much as I’d like.

BackupPC writes metadata to disk alongside files, so it can restore hard links, symlinks, device entries, and the like. It also has the nice feature of being able to hard link identical files across machines, so if you’re backing up /usr on a bunch of machines and have the same files installed, you save space. Nice.

BackupPC also can compress the files on your disk. It uses pre-compression md5sums for identifying files to hard link, which is nice.

Here’s where I get nervous.

BackupPC doesn’t just use regular compression from, say, gzip or bzip2. It uses its own low-level scheme built around the Perl deflate library, and it does so in a nonstandard way owing to a supposed memory issue with zlib. Why they don’t just pipe the data through gzip or equivalent is beyond me.

This means that, first off, it’s using a nonstandard compression format, which makes me nervous to begin with. If that weren’t annoying enough, you have to install Perl plus a bunch of modules to extract the thing. This makes me nervous too.
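To make the comparison concrete, here is roughly what “just pipe it through gzip” would look like: the stored file is a standard gzip stream that any system can read back with zcat alone, no Perl stack required. This is an illustrative sketch with made-up function names, not BackupPC’s actual code:

```shell
#!/bin/sh
# Standard-format pool compression sketch (hypothetical; not BackupPC's code).
# The result is a plain gzip stream, recoverable anywhere with zcat/gunzip.

compress_into_pool() {
    # $1 = source file, $2 = compressed pool destination
    gzip -9c "$1" > "$2"
}

restore_from_pool() {
    # $1 = compressed pool file, $2 = restore destination
    zcat "$1" > "$2"
}
```

Restoring such a pool on a rescue system needs nothing beyond the gzip that ships with essentially every Unix.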

Dirvish

Doesn’t support compression.

faubackup

Doesn’t support compression.

rdup

Supports compression and encryption. Does not preserve ownership unless the destination filesystem does (meaning you must run as root to store your backups).

Killer lack of feature: it does not preserve knowledge about what was hardlinked on the source system, so when you restore your backup, all hardlinks are lost. Epic fail.

rsnapshot

Doesn’t support compression.

StoreBackup

Does support compression, and appears to restore metadata in a sane way. Supports backing up to a different machine on the LAN, but only if you set up NFS, which makes it look inappropriate for backups over a VPN. The manual is comprehensive but confusing, and the design overall looks like an oddball.

So, any suggestions?

Crazy Cursor Conspiracy Finally Fully Fixed

So lately I had the bad fortune to type in apt-get install gnome-control-center on my workstation. It pulled in probably a hundred dependencies, but I confirmed installing it, never really looking at that list.

The next day, I had a reason to reboot. When I logged back in, I noticed that my beloved standard X11 cursors had been replaced by some ugly antialiased white cursor theme. I felt as if XP had inched closer to taking over my machine.

I grepped all over $HOME for some indication of what happened. I played with the cursor settings in gnome-control-center’s appearance thing, which didn’t appear to have any effect. When I logged out, I noticed that the cursor was messed up in kdm of all things, and no amount of restarting it could fix it.

After some grepping in /etc, I realized that I could fix it with this command:

update-alternatives --config x-cursor-theme

And I set it back to /etc/X11/cursors/core.theme. Ahh, happiness restored.

I guess that’ll teach me to install bits of gnome on my box. Maybe.

New version of datapacker

I wrote before about datapacker, but I didn’t really describe what it is or how it’s different from other similar programs.

So, here’s the basic problem I faced the other day. I have a bunch of photos spanning nearly 20 years stored on my disk, and I wanted to burn almost all of them to DVDs. I can craft rules with find(1) to select the photos I want, but then I need to split them up into individual DVDs. There are a number of tools that could do that, but none quite powerful enough for what I wanted.

When you think about splitting things up like this, there are a lot of ways you can split things. Do you want to absolutely minimize the number of DVDs? Or do you keep things in a sorted order, and just start a new DVD when the first one fills up? Maybe you are adding an index to the first DVD, and need a different size for it.
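The keep-in-order strategy, for instance, boils down to a few lines of shell: walk the sorted file list, accumulate sizes, and start a new bin whenever the next file would push the current one over the limit. A sketch (the function name and interface are made up; datapacker’s real algorithms are more capable):

```shell
#!/bin/sh
# Order-preserving binning sketch. Reads filenames on stdin (one per line)
# and prints "binnumber<TAB>filename", starting a new bin whenever adding
# the next file would exceed $1 bytes. Illustrative only.
order_bins() {
    limit=$1
    bin=1
    used=0
    while IFS= read -r f; do
        size=$(stat -c %s "$f")
        # An oversized file still gets placed when its bin is empty.
        if [ "$used" -gt 0 ] && [ $((used + size)) -gt "$limit" ]; then
            bin=$((bin + 1))
            used=0
        fi
        used=$((used + size))
        printf '%d\t%s\n' "$bin" "$f"
    done
}
```

Minimizing the total number of bins, by contrast, is a bin-packing problem and needs a cleverer algorithm than this greedy walk.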

Well, datapacker 1.0.1 can solve all of these problems. As its manpage states, “datapacker is a tool in the traditional Unix style; it can be used in pipes and call other tools.” datapacker accepts lists of files to work on as command-line parameters, piped in from find, or piped in null-separated from find -print0. It can also output its results in various parser-friendly formats, call other programs directly in a manner similar to find -exec, or create hardlink or symlink forests for ease of burning to DVD (or whatever you’ll be doing with the results).

So, what I did was this:


find Pictures -type f -and -not -iwholename "Pictures/2001/*.tif" -and \
-not -wholename "Pictures/Tabor/*" -print0 | \
datapacker -0Dp -s 4g --sort -a hardlink -b ~/bins/%03d -

So I generate a list of photos to process with find. Then datapacker is told to read the list of files to process in a null-separated way (-0), generate bins that mimic the source directory structure (-D), organize into bins preserving order (-p), use a 4GB size per bin (-s 4g), sort the input prior to processing (--sort), create hardlinks for the files (-a hardlink), name the bins with a 3-digit number under ~/bins (-b ~/bins/%03d), and finally, read the list of files from stdin (-). By using --sort and -p, the output will be sorted by year (Pictures/2000, Pictures/2001, etc.), so that photos from all years aren’t mixed together on the discs.

This generates 13 DVD-sized bins in a couple of seconds. A simple for loop can then use mkisofs or growisofs to burn them.
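That loop is nothing fancy; something along these lines, with growisofs given the usual data-DVD options (-r for Rock Ridge, -J for Joliet). The function name and the optional dry-run command argument are my own additions for illustration:

```shell
#!/bin/sh
# Burn each bin directory produced by datapacker as its own data DVD.
# burn_bins DIR [CMD...]: run CMD once per bin; with no CMD, call growisofs.
burn_bins() {
    dir=$1
    shift
    for bin in "$dir"/*; do
        if [ $# -gt 0 ]; then
            # Substitute command, handy for a dry run: burn_bins ~/bins echo
            "$@" "$bin"
        else
            growisofs -Z /dev/dvd -r -J "$bin"
        fi
    done
}
```

In practice you would swap discs between iterations; a read prompt inside the loop takes care of that.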

The datapacker manpage also contains an example for calling mkisofs directly for each bin, generating ISOs without even an intermediate hardlink forest.

So, when I wrote about datapacker last time, people asked how it differed from other tools. Many of them have different purposes in mind, so I’m not trying to say one tool or another is better, just highlighting the differences. Most of these appear not to have anything like datapacker --deep-links.

gaffiter: No xargs-convenient output, and no option to pass results directly to shell commands. Far more complex source (1671 lines vs. 228 lines).

dirsplit: Part of the mkisofs package. Uses a random iterative approach and has few options.

packcd: Similar packing algorithm options, but few input/output options. No ability to read a large file list from stdin, so it could have issues with command-line length.

Switched from KDE to xmonad

Within the last couple of days, I’ve started using xmonad, a tiling window manager, instead of KDE. Tiling window managers automatically position most windows on your screen, freeing you from having to move, rearrange, and resize them all the time. It sounds scary at first, but it turns out to be incredibly nice and efficient. There are some nice videos and testimonials at the xmonad homepage.

I’ve switched all the devices I use frequently to xmonad. That includes everything from my 9.1″ Eee (1024×600) to my 24″ workstation at work (1920×1200). I’ve only been using it for 2 days, but already I feel more productive. Also my wrist feels happier because I have almost completely eliminated the need to use a mouse.

xmonad simultaneously feels shiny and modern, and old school. It is perfectly usable as your main interface. Mod-p brings up a dmenu-based quick program launcher, keyboard-oriented of course. No more opening up terminals to launch programs, or worse, having to use the mouse to navigate a menu for them.

There’s a lot of documentation available for xmonad, including an “about xmonad” document, a guided tour, and a step-by-step guide to configuring xmonad that I wrote up.

I’ve been using KDE for at least 8 years now, and WindowMaker, fvwm2, fvwm, etc. before that. This is my first step with tiling window managers, and I like it. You can, of course, use xmonad with KDE. Or you can go “old school” and set up a status bar and tray yourself, as I’ve done. KDE seems quite superfluous when xmonad is in use. I think I’ve replaced a gig of KDE software with a 2MB window manager. Whee!

Take a look at xmonad. If you like the keyboard or the shell, you’ll be hooked.

Command-Line RSS Reader

So, does anybody know of a command-line RSS reader? I want something that will save state and output recent entries in a nicely parsable manner, maybe URL\tTitle\tDescription\n or somesuch.

So here’s why I’m asking.

I got to thinking that it might be nice to automatically post to Twitter when I’ve got a new blog post up, rather than having to manually say “just wrote a blog post.”

Then I also got to thinking that when I see an interesting URL that I’m going to Twitter about, I’m also almost always adding it as a bookmark to Delicious. Why not make a fortwitter tag on Delicious, and automatically post my comments about them to Twitter, saving me having to do it twice?

So I’ve got Twidge, which can nicely be used in a shell script to do this stuff. I’m hoping to avoid having to write a shell-script-friendly RSS aggregator, but I’m liable to do it myself if nobody else has already; I’d really rather not reinvent the wheel.
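If I do end up writing it, the extraction half might start as something like this crude sketch of the tab-separated format described above. It assumes each tag sits on its own line in the feed; a real version should use a proper XML parser and keep a state file of already-seen URLs (the function name is made up):

```shell
#!/bin/sh
# Crude RSS-to-TSV sketch: read a feed on stdin, print
# "URL<TAB>Title<TAB>Description" for each <item>. Assumes one tag per
# line; illustrative only, not robust XML handling.
rss_lines() {
    awk '
        # Strip everything outside <tag>...</tag> from a line.
        function strip(s, tag) {
            sub(".*<" tag ">", "", s)
            sub("</" tag ">.*", "", s)
            return s
        }
        /<item>/   { initem = 1 }
        /<\/item>/ { printf "%s\t%s\t%s\n", link, title, desc
                     initem = 0; link = title = desc = "" }
        initem && /<title>/       { title = strip($0, "title") }
        initem && /<link>/        { link  = strip($0, "link") }
        initem && /<description>/ { desc  = strip($0, "description") }
    '
}
```

Piping that through a seen-URL filter and into twidge would close the loop.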