Category Archives: Technology

Time to learn a new language

I have something of an informal goal of learning a new programming language every few years. It’s not so much a goal as it is something of a discomfort. There are so many programming languages out there, with so many niches and approaches to problems, that after a while I get uncomfortable with my lack of knowledge of some of them. This tends to happen every few years.

The last major language I learned was Haskell, which I started working with in 2004. I still enjoy Haskell and don’t see anything displacing it as my primary day-to-day workhorse.

Yet there are some languages that I’d like to learn. I have an interest in cross-platform languages; one of my few annoyances with Haskell is that it can’t (at least with production quality) be compiled to something architecture-independent, such as Java bytecode. I have long had a soft spot for functional languages. I haven’t had such a soft spot for static type checking, but Haskell’s type inference changed that for me. I also have an interest in writing Android apps, which means some sort of Java tie-in would be needed.

Here are my current candidates:

  • JavaScript. I have never learned the language but dislike it intensely based on everything I have learned about it (especially the diverging standards of implementation). Nevertheless, there are certain obvious reasons to try it — the fact that most computers and mobile phones can run it out of the box is an important one.
  • Scheme. Of somewhat less interest since I learned Common Lisp quite a while back. I’m probably pretty rusty at it, but I’m not sure Scheme would offer me anything novel that I can’t find in Haskell — except for the ability to compile to JVM bytecode.
  • Lua — it sounds interesting, but I’m not sure if it’s general-purpose enough to merit long-term interest.
  • Scala sounds interesting — an OOP and FP language that compiles to JVM bytecode.
  • Smalltalk. Seems sad I’ve never learned this one.
  • There are some amazing webapps written using Cappuccino. The Github issue interface is where I heard about this one.
  • Eclipse. I guess it’s mostly not a programming language but an IDE, but then there are some libraries (RCP?) or something with it — so to be honest, I don’t know what it is. Some people seem very excited about it. I tried it once, couldn’t figure out how to just open a file and start editing already. Made me feel like I was working for Initech and wouldn’t get to actually compile something until my TPS coversheets were in order. Dunno, maybe it’s not that bad, but I never really understood the appeal of something other than emacs/vi+make.
  • A Haskell web infrastructure such as HSP or hApps. Not a new language, but might as well be…

Of some particular interest to me is that Haskell has interpreters for Scheme, Lua, and JavaScript as well as code generators for some of these languages (though not generic Haskell-to-foo compilers).

Languages not in the running because I already know them include: OCaml, POSIX shell, Python, Perl, Java, C, C++, Pascal, BASIC, Common Lisp, Prolog, SQL. Languages I have no interest in learning right now include Ruby (not different enough from what I already know, plus bad experiences with it), any assembly, anything steeped in the Microsoft monoculture (C#, VB, etc.), or anything that is hard to work with outside of an Emacs or vim environment. (If your language requires or strongly encourages me to use your IDE or proprietary compiler, I’m not interested — that means you, Flash.)

Brief Reviews of Languages I Have Used

To give you a bit of an idea of where I’m coming from:

  • C: Not much to say there. I think its pros and cons are well-known. I consider it to be too unwieldy for general-purpose use these days, though find myself writing code in it every few years for some reason or other.
  • Perl: The first major interpreted language I learned. Stopped using it entirely after learning Python (didn’t see any advantage of Perl over Python, and plenty of disadvantages.)
  • Python: Used to really like it. Both the language and I have changed. It is no longer the “clean” language it was in the 1.5 days. Too many lists-that-aren’t-lists, __underscorethings__, etc. Still cleaner than Perl. It didn’t scale up to large projects well, and the interpreted dynamic nature left me not all that happy to use it for some tasks. I haven’t looked at Python 3, though it probably isn’t ready for prime time yet anyhow.
  • Java: Better than C++ for me. Cross-platform bytecode features. That’s about all that I have to say about it that’s kind. The world’s biggest source of carpal tunnel referrals in my book. Mysteriously manages to have web apps that require 2GB of RAM just to load. Dunno where that came from; Apache with PHP and mod_python takes less than 100M.
  • Haskell: A pretty stellar mix of a lot of nice features, type inference being one of the very nice ones in my book. My language of choice for almost all tasks. Laziness can be hard for even experienced Haskellers to understand at times, and the libraries are sometimes in a bit of flux (which should calm down soon). Still a very interesting language and, in fact, a decent candidate for my time, as there are some parts of it I’ve never picked up, including some modules and web toolkits.
  • OCaml: Tried it, eventually discarded it. An I/O library that makes you go through all sorts of contortions to be able to open a file read/write isn’t cool in my book.

Download A Piece of Internet History

Back in the early 1990s, before there was a World Wide Web, there was the Internet Gopher. It was a distributed information system in the same sense as the web, but didn’t use hypertext and was text-based. Gopher was popular back then, as it made it easy to hop from one server to the next in a way that FTP didn’t.

Gopher has hung on over the years, and is still clinging to life in a way. Back in 2007, I was disturbed at the number of old famous Gopher servers that had disappeared off the Internet without a trace. Some of these used to be known by most users of the Internet in the early 90s. To my knowledge, no archive of this data existed. No one, not even archive.org, had ever attempted to save Gopherspace.

So I decided I would. I wrote Gopherbot, a spidering archiver for Gopherspace. I ran it in June 2007, and saved off all the documents and sites it could find. That saved 40GB of data, or about 780,000 documents. Since that time, more servers have died. To my knowledge, this is the only comprehensive archive there is of what Gopherspace was like. (Another person is working on a new 2010 archive run, which I’m guessing will find some new documents but turn up fewer overall than 2007 did.)

When this was done, I compressed the archive with tar and bzip2 and split it out to 4 DVDs and mailed copies to a few people in the Gopher community.
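The packaging step is simple to sketch in the shell. The file names and sizes below are illustrative, not the ones actually used; the real run used DVD-sized chunks:

```shell
# Build a tiny stand-in for the archive tree so the sketch is runnable:
mkdir -p gopherspace && printf 'sample document\n' > gopherspace/doc.txt

# Compress the whole tree with tar + bzip2:
tar -cjf gopherspace.tar.bz2 gopherspace

# Split into fixed-size pieces (use something like -b 4480m for
# single-layer DVDs; 1k here so the demo produces output):
split -b 1k gopherspace.tar.bz2 gopherspace.tar.bz2.part-

ls gopherspace.tar.bz2.part-*
```

The recipient recombines the pieces with cat gopherspace.tar.bz2.part-* > gopherspace.tar.bz2 before extracting.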

Recently, we’ve noted that hard disk failures have hobbled a few actively maintained Gopher sites, so I read this archive back in and posted it on BitTorrent. If you’d like to own a piece of Internet history, download the torrent file and go to town (and please stick around to seed if you can). This is 15GB compressed, and also includes a rare video interview with two of the founders of Gopher.

There are some plans to potentially host this archive publicly in the manner of archive.org; we’ll have to wait and see if anything comes of it.

Finally, I have tried to find a place willing to be a permanent host of this data, and to date have struck out. If anybody knows of such a place, please get in touch. I regret that so many Gopher sites disappeared before 2007, but life is what it is. This is the best snapshot of the old Gopherspace that I’m aware of, and I would like to make sure that this piece of history is preserved.

Update: The torrents are now permaseeded at ibiblio.org. See the 2007 archive and the 2006 mirror collection.

Update: The ibiblio mirror is now down, but you can find them on archive.org. See the 2007 archive and the 2006 mirror collection.

Jacob has a new computer — and a favorite shell

Earlier today, I wrote about building a computer with Jacob, our 3.5-year-old, and setting him up with a Linux shell.

We did that this evening, and wow — he loves it. While the Debian Installer was running, he kept begging to type, so I taught him how to hit Alt-F2 and fired up cat for him. That was a lot of fun. But even more fun was had once the system was set up. I installed bsdgames and taught him how to use worm. worm is a simple snake-like game where you use the arrow keys to “eat” the numbers. That was a big hit, as Jacob likes numbers right now. He watched me play it a time or two, then tried it himself. Of course he crashed into the wall pretty quickly, which exits the game.

I taught him how to type “worm” at the computer, then press Enter to start it again. Suffice it to say he now knows how to spell worm very well. Yes, that’s right: Jacob’s first ever Unix command was… worm.

He’d play the game, and cackle if he managed to eat a number. If he crashed into a wall, he’d laugh much harder and run over to the other side of the room.

Much as worm was a hit, the Linux shell was even more fun. He sometimes has a problem with the keyboard repeat, and one time typed “worrrrrrrrrrrrrrrrrrm”. I tried to pronounce that for him, which he thought was hilarious. He was about to backspace to fix it, when I asked, “Jacob, what will happen if you press Enter without fixing it?” He looked at me with this look of wonder and excitement, as if to say, “Hey, I never thought of that. Let’s see!” And a second later, he pressed Enter.

The result, of course, was:

-bash: worrrrrrrrrrrrrrrrrrm: command not found

“Dad, what did it do?”

I read the text back, and told him it means that the computer doesn’t know what worrrrrrrrrrrrrrrrrrm means. Much laughter. At that point, it became a game. He’d bang at random letters, and finally press Enter. I’d read what it said. Pretty soon he was recognizing the word “bash”, and I heard one time, “Dad, it said BASH again!!!” Sometimes if he’d get semicolons at the right place, he’d get two or three “bashes”. That was always an exciting surprise. He had more fun at the command line than he did with worm, and I think at least half of it was because the shell was called bash.

He took somewhat of an interest in the hardware part earlier in the evening, though not quite as much. He was interested in opening up other computers to take parts out of them, but got bored quickly. The fact that Terah was cooking supper probably had something to do with that. He really enjoyed the motherboard (and learned that word), and especially the CPU fan. He loved to spin it with his finger. He thought it interesting that there would be a fan inside his computer.

When it came time to assign a hostname, I told Jacob he could name his computer. Initially he was confused. Terah suggested he could name it “kitty”, but he didn’t go for it. After a minute’s thought, he said, “I will name it ‘Grandma Marla.’” Confusion from us — did he really understand what he was saying? “You want to name your computer ‘Grandma Marla’?” “Yep. That will be silly!” “Sure you don’t want to name it Thomas?” “That would be silly! No. I will name my computer ‘Grandma Marla.’” OK then. My DNS now has an entry for grandma-marla. I had wondered what he would come up with. You never know with a 3-year-old!

It was a lot of fun to see that sense of wonder and experimentation at work. I remember it from the TRS-80 and DOS machine, when I would just try random things to see what they would do. It is lots of fun to watch it in Jacob too, and hear the laughter as he discovers something amusing.

We let Jacob stay up 2 hours past his bedtime to enjoy all the excitement. Tomorrow the computer moves to his room. Should be loads of excitement then too.

Introducing the Command Line at 3 years

Jacob is very interested in how things work. He’s 3.5 years old, and into everything. He loves to look at propane tanks, checking the pressure meter and opening the lids on top to see the vent underneath. Last night, I showed him our electric meter and the spinning disc inside it.

And, more importantly, last night I introduced him to the Linux command line interface, which I called the “black screen.” Now, Jacob can’t read yet, though he does know his letters. He had a lot of fun sort of exploring the system.

I ran “cat”, which will simply let him bash on the keyboard, and whenever he presses Enter, will echo what he typed back at him. I taught him how to hold Shift and press a number key to get a fun symbol. His favorite is the “hat” above the 6.

Then I ran tr a-z A-Z for him, and he got to watch the computer convert every lowercase letter into an uppercase letter.
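Both of those are one-liners anyone can try; piping text in stands in for typing at the keyboard:

```shell
# cat with no arguments echoes each line back once Enter is pressed:
printf 'hello\n' | cat

# tr a-z A-Z maps every lowercase letter to its uppercase counterpart:
printf 'worm\n' | tr a-z A-Z   # prints WORM
```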

Despite the fact that Jacob enjoys watching YouTube videos of trains and even a bit of Railroad Tycoon 3 with me, this was pure exploration, which he loves. Sometimes he’d say, “Dad, what will this key do?” Sometimes I didn’t know; some media keys did nothing, and some other keys caused weird things to appear. My keyboard has back and forward buttons designed to use with a web browser. He almost squealed with delight when he pressed the forward button and noticed it printed lots of ^@^@^@ characters on the screen when he held it down. “DAD! It makes LOTS of little hats! And what is that other thing?” (The at-sign).

I’ve decided it’s time to build a computer for Jacob. I have an old Sempron motherboard lying around, and an old 9″ black-and-white VGA CRT that’s pretty much indestructible, plus an old case or two. So it will cost nothing. This evening, Jacob will help me find the parts, and then he can help me assemble them all. (This should be interesting.)

Then I’ll install Debian while he sleeps, and by tomorrow he should be able to run cat all by himself. I think that, within a few days, he can probably remember how to log himself in and fire up a program or two without help.

I’m looking for suggestions for text-mode games appropriate to a 3-year-old. So far, I’ve found worm from bsdgames that looks good. It doesn’t require him to have quick reflexes or to read anything, and I think he’ll pick up using the arrow keys to move it just fine. I think that tetris is probably still a bit much, but maybe after he’s had enough of worm he would enjoy trying it.

I was asked on Twitter why I’ll be using the command line for him. There are a few reasons. One is that it will actually be usable on the 9″ screen, but another one is that it will expose the computer at a different level than a GUI would. He will inevitably learn about GUIs, but learning about a CLI isn’t inevitable. He won’t have to master coordination with a mouse right away, and there’s pretty much no way he can screw it up. (No, I won’t be giving him root yet!) Finally, it’s new and different to him, so he’s interested in it right now.

My first computer was a TRS-80 Color Computer (CoCo) II. Its primary interface was a BASIC interpreter, which I guess counts as a command-line interface. I remember learning how to use that, and later DOS on a PC. Some of the games and software back then had no documentation and crashed often. Part of the fun, the challenge, and sometimes the frustration, was figuring out just what a program was supposed to do and how to use it. It will be fun to see what Jacob figures out.

Review: Linux IM Software

I’ve been looking at instant messaging and chat software lately. Briefly stated, I connect to Jabber and IRC networks from at least three different computers. I don’t like having to sign in and out on different machines. One of the nice features about Jabber (XMPP) is that I can have clients signing in from all over the place and it will automatically route messages to the active one. If the clients are smart enough, that is.

Gajim

I have been using Gajim as my primary chat client for some time now. It has a good feature set, but has had a history of being a bit buggy for me. It used to have issues when starting up: sometimes it would try to fire up two copies of itself. It still has a bug when fired up from a terminal: if you run “gajim &” and then exit, it will simply die; you have to wait a few seconds before closing the terminal you launched it from. It has also had issues with failing to reconnect properly after a dropped network connection and generating spurious “resource already in use” errors. Upgrades sometimes fix bugs, and sometimes introduce them.
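A shell-level workaround for the terminal problem (not a fix for the bug itself) is to detach the process from the launching terminal before exiting; in this runnable sketch, sleep stands in for gajim:

```shell
# Detach a program from the launching terminal so that closing the
# terminal right away doesn't kill it. "sleep 2" stands in for gajim.
nohup sleep 2 >/dev/null 2>&1 &
disown

# After disown, the job is no longer tracked by this shell, so it
# survives the shell exiting:
jobs
```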

The latest one I’ve been dealing with is its auto-idle support. Sometimes it will fail to recognize that I am back at the machine. Even weirder, sometimes it will set one of my accounts to available status, but not the other.

So much for my complaints about Gajim; it also has some good sides. It has excellent multi-account support. You can have it present your multiple accounts as separate sections in the roster, or you can have them merged. Then, say, all your contacts in a group called Friends will be listed together, regardless of which account you use to contact them.

The Jabber protocol (XMPP) permits you to connect from multiple clients. Each client specifies a numeric priority for its connection. When someone sends you a message, it will be sent to the connection with the highest priority. The obvious feature, then, is to lower your priority when you are away (or auto-away due to being idle), so that you always get IMs at the device you are actively using. Gajim supports this via letting you specify timeouts that get you into different away states, and using the advanced configuration editor, you can also set the priority that each state goes to. So, if Gajim actually recognized your idleness correctly, this would be great.
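Under the hood, this is just a presence stanza: the priority element is part of standard XMPP presence. A client going auto-away might send something like the following (the numeric values are illustrative):

```xml
<!-- Active at this machine: the highest priority wins incoming messages -->
<presence>
  <priority>50</priority>
</presence>

<!-- Idle: announce away and drop priority so another client takes over -->
<presence>
  <show>away</show>
  <priority>0</priority>
</presence>
```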

I do also have AIM and MSN accounts which I use rarely. I run Jabber gateways to each of these on my server, so there is no need for me to use a multiprotocol client. That also is nice because then I can use a simple Jabber client on my phone, laptop, whatever and see all my contacts.

Gajim does not support voice or video calls.

Due to an apparent bug in Facebook, the latest Gajim release won’t connect to Facebook servers, but there is a patch that claims to fix it.

Psi

Psi is another single-protocol Jabber client, and like Gajim, it runs on Linux, Windows, and MacOS. Psi has a nicer GUI than Gajim, and is more stable. It is not quite as featureful, and one huge omission is that it doesn’t support dropping priority on auto-away (though it, weirdly, does support a dropped priority when you manually set yourself away).

Psi doesn’t support account merging, so it always shows my contacts from one account separately from those from another. I like having the option in Gajim.

There is a fork of Psi known variously as psi-dev or psi-plus or Psi+. It adds that missing priority feature and some others. Unfortunately, I’ve had it crash on me several times. Not only that, but the documentation, wiki, bug tracker, everything is available only in Russian. That is not very helpful to me, unfortunately. Psi+ still doesn’t support account merging.

Both branches of Psi support media calling.

Kopete

Kopete is a KDE multiprotocol instant messenger client. I gave it only about 10 minutes of time because it is far from meeting my needs. It doesn’t support adjustable priorities, as far as I can tell. It also doesn’t support XMPP service discovery, which is used to do things like establish links to other chat networks using a Jabber gateway. It also has no way to access ejabberd’s “send message to all online users” feature (which can be accessed via service discovery), which I need in emergencies at work. It does offer multimedia calls, but that’s about it.

Update: A comment pointed out that Kopete can do service discovery, though it is in a very non-obvious place. However, it still can’t adjust priority when auto-away, so I still can’t use it.

Pidgin

Pidgin is a multiprotocol chat client. I have been avoiding it for years, with the legitimate fear that it was “jack of all trades, master of none.” Last I looked at it, it had the same limitations that Kopete does.

But these days, it is more capable. It supports all those XMPP features. It supports priority dropping by default, and with a plugin, you can even configure all the priority levels just like with Gajim. It also has decent, though not excellent, IRC protocol support.

Pidgin supports account merging — and in fact, it doesn’t support any other mode. You can, for instance, tell it that a given person on IRC is the same as a given Jabber ID. That works, but it’s annoying because you have to manually do it on every machine you’re running Pidgin on. Worse, they used to support a view without merged accounts, but don’t anymore, and they think that’s a feature.

Pidgin does still miss some nifty features that Gajim and Psi both have. Both of those clients will not only tell you that someone is away, but if you hover over their name, tell you how long someone has been away. (Gajim says “away since”, while Psi shows “last status at”. Same data either way.) Pidgin has the data to show this, but doesn’t. You can manually find it in the system log if you like, but unhelpfully, it’s not in the log for an individual person.

Also, the Jabber protocol supports notifications while in a chat: that the contact is typing, is paying attention to the conversation, or has closed the chat window. Psi and Gajim have configurable support for these; you can send whatever notifications your privacy preferences allow. Pidgin, alas, removed that option, and again they see this as a feature.

Pidgin, as a result, makes me rather nervous. They keep removing useful features. What will they remove next?

It is difficult to change colors in Pidgin. It follows the Gtk theme, and there is a special plugin that will override some, but not all, Gtk options.

Empathy

Empathy supports neither priority dropping when away nor service discovery, so it’s not usable for me. Its feature set appears sparse in general, although it has a unique desktop sharing option.

Update: this section added in response to a comment.

On IRC

I also use IRC, and have been using Xchat for that for quite some time now. I tried IRC in Pidgin. It has OK IRC support, but not great. It can automatically identify to nickserv, but it is under-documented and doesn’t support multiple IRC servers for a given network.

I’ve started using xchat with the bip IRC proxy, which makes connecting from multiple machines easier.

Netbook or thin & light notebook?

I am a big fan of thin and light notebooks. I’ve been using the 9″ EeePC 901 for a while now, almost since it first came out. I initially loved it. The keyboard, while an obstacle, wasn’t as much of one as I feared. The thing got insanely long battery life (5-6 hours or more typical), and was so small that my laptop bag is a “DVD player bag”.

Now for the downsides. I am spending more time on the laptop these days, and I’m starting to be far more acutely aware of them. #1 is performance. Let’s face it: the 901 is just a slow machine all-around. The video performance isn’t great, and I can watch the Thunderbird interface being drawn as it loads. But the real killer is the SSD storage. It is exceptionally slow, and gets in the way of multitasking in a serious way. (Syncing mail? Kiss performance in Firefox goodbye.)

#2 is the 1024×600 screen. This is becoming a serious inconvenience: when you combine the need to do a lot of scrolling with slow scrolling performance, the result is unpleasant. Coding is pretty difficult at that size.

#3 is the keyboard size. It is OK for light-duty work, but it is getting in the way for more serious work.

So I’m looking for a replacement. I’m thinking something in the 10″ to 12″ range, thin and light, would be ideal. My main criteria are size, weight, performance, and compatibility with Debian. Durability is an added plus as it will be riding in my bicycle bag on a regular basis.

This puts me in something of a grey area: there are netbooks in that range, and then there are machines like the Macbook Air (which may actually be a bit bigger than I’d like).

I have recently been hearing good things about the EeePC 1005PE (Atom N450) and the 1201N (N270 with Nvidia Ion). The 1201N seems to beat the 1005PE in terms of performance (especially video-related), but with far less battery life. I think I could live with the slower CPU, but the 1005PE still has only 1024×600 resolution, which would be a big problem for me.

Lenovo has an IdeaPad U150 with an 11.6″ 1366×768 screen, and in the 12″ size, they’ve got various options, both Atom and Core 2 Duo. The X200 and X200s look interesting, but lack a touchpad. (The X200s appears to be their long-lasting-battery version.) The HP EliteBook 2530p also looks interesting; it has a touchpad, but is heavier and appears to have inferior battery life to the X200s.

What ideas do people have?

Review: Free Software Project Hosting

I asked for suggestions a few days ago. I got several good ones, and investigated them. You can find my original criteria at the link above. Here’s what I came up with:

Google Code

Its very simple interface appeals to me. It has an issue tracker, a wiki, a download area. But zero integration with git. That’s not necessarily a big problem; I can always keep on hosting git repos at git.complete.org. It is a bit annoying, though, since I wouldn’t get to nicely link commit messages to automatic issue closing.
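The kind of integration I mean works by having the hosting site scan commit messages for a token that references an issue; as I understand Google Code’s convention, a phrase like “fixes issue N” updates the issue, though the exact token syntax is something you should check against the host’s docs. A runnable sketch of such a commit (repo name and message are hypothetical):

```shell
# Create a throwaway repo and make a commit whose message carries an
# issue-closing token the hosting site could pick up.
git init -q demo && cd demo
git -c user.email=demo@example.org -c user.name=demo \
    commit --allow-empty -q -m "Handle empty selectors (fixes issue 42)"
git log --oneline -1
```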

A big requirement of mine is being able to upload tarballs or ZIP files from the command line in an automated fashion. I haven’t yet checked to see if Google Code exports an API for this. Google Code also has a lifetime limit of 25 project creations, though rumor has it they may lift the limit if you figure out where to ask and ask nicely.

URL: googlecode.com

Gitorious

Gitorious is one of the two Git-based sites that put a strong emphasis on community. Like Github, Gitorious tries to make it easy for developers to fork projects, submit pull requests to maintainers, and work together. This aspect of it does hold some appeal to me, though I have never worked with one of these sites, so I am somewhat unsure of how I would use it.

The downside of Gitorious or Github is that they tie me to Git. While I’m happy with Git and have no plans to change now, I’ve changed VCSs many times over the years when better tools show up; I’ve used, in approximately this order, CVS, Subversion, Arch/tla, baz, darcs, Mercurial, and Git, with a brief use of Perforce at a job that required it. I may use Git for another 3 years, but after 5 years will Git still be the best VCS out there? I don’t know.

Gitorious fails several of my requirements, though. It has no issue tracker and no downloads area.

It can generate a tar.gz file on the fly from the head of any branch, but not a zip file. It is possible to provide a download of a specific revision, but this is not very intuitive for the end user.

Potential workarounds include using Lighthouse for bug tracking (they do support git integration for changelog messages) and my own server to host tarballs and ZIP files — which I could trivially upload via scp.

URL: gitorious.org

Github

At first glance, this is a more-powerful version of Gitorious. It has similar community features, has a wiki, but adds an issue tracker, download area, home page capability, and a bunch of other features. It has about a dozen pre-built commit hooks that do everything from integrating with Lighthouse to popping commit notices into Jabber.

But there are surprising drawbacks, limitations, and even outright bugs all throughout. And it all starts with the user interface.

On the main project page, the user gets both a download button and a download tab. But they don’t do the same thing. Talk about confusing!

The download button will make a ZIP or tarball out of any tag in the repo. The download tab will also do the same, though presented in a different way; but the tab can also offer downloads for files that the maintainer has manually uploaded. Neither one lets you limit the set of tags presented, so if you have an old project with lots of checkpoints, the poor end user has to sift through hundreds of tags to find the desired version. It is possible to make a tarball out of a given branch, so a link to the latest revision could be easy, but still.

Even worse, there’s a long-standing issue where several of the tabs get hidden under other on-screen elements. The wiki tab, project administration tab, and sometimes even the download tab are impacted. It’s been open since February with no apparent fix.

And on top of that, uploading arbitrary tarballs requires — yes — Flash. Despite requests to make it scriptable, they reply that there is no option but Flash and they may make some other option sometime.

The issue tracker is nice and simple. But it doesn’t support attachments. So users can’t attach screenshots, debug logs, or diffs.

I really wanted to like Github. It has so many features for developers. But all these surprising limitations make it a pain both for developers (I keep having to “view source” to find the link to the wiki or the project admin page) and for users (confusing download options, lack of issue attachments). In the end, I think the display bug is a showstopper for me. I could work around some of the others by having a wiki page with links to downloads and revisions and giving that out as the home page perhaps. But that’s a lot of manual maintenance that I would rather avoid.

URL: github.com

Launchpad

Launchpad is the project management service operated by Canonical, the company behind Ubuntu. While Launchpad can optionally integrate well with Ubuntu, that isn’t required, so non-Ubuntu developers like me can work with it fine.

Launchpad does offer issue tracking, but no wiki. It has a forum of sorts though (the “Answers” section). It has some other features, such as blueprints, that would likely only be useful for projects larger than the ones I would plan to use it for.

It does have a downloads area, and they say they have a Python API. I haven’t checked it out, but if it supports scriptable uploads, that would work for me.

Besides the lack of a wiki, Launchpad is also tied to the bzr VCS. bzr was one of the early players in DVCS, written as a better-designed successor to tla/Arch and baz, but has no compelling features over Git or Mercurial for me today. I have no intention of switching to or using it any time soon.

Launchpad does let you “import” branches from another VCS such as Git or svn. I set up an “import” branch for a test project yesterday. 12 hours later, it still hasn’t imported anything; it’s just sitting at “pending review.” I have no idea if it ever will, or why setting up a bzr branch requires no review but a git branch requires review. So I am unable to test the integration between it and the changesets, which is really annoying.

So, some possibilities here, but the bzr-only thing really bugs me. And having to have my git trees reviewed really goes against the “quick and simple” project setup that I would have preferred to see.

URL: launchpad.net

Indefero

Indefero is explicitly a Google Code clone, but aims to be a better Google Code than Google Code. The interface is similar to Google’s — very simple and clean. Unlike Google Code, Indefero does support Git. It supports a wiki, downloads area, and issue tracker. You can download the PHP-based code and run it yourself, or you can get hosting from the Indefero site.

I initially was favorably impressed by Indefero, but as I looked into it more, I am not very impressed right now. Although it does integrate with Git, and you can refer to an issue number in a Git commit, a Git commit can’t close an issue. Git developers use git over ssh to interact with it, but it supports only one ssh key per user — so this makes it very annoying if I wish to push changes from all three of the machines I regularly do development with. Despite the fact that this is a “high priority” issue, it hasn’t been touched by the maintainer in almost a month, even though patches have been offered.

Indefero can generate downloadable archives of any revision in Git, or of the latest commit on any branch, but only in ZIP format (no tar.gz).

Although the program looks very nice and the developer seems clueful, Indefero has only one main active developer, and he is a consultant who also works on other projects. That makes me nervous about putting too many eggs into the Indefero basket.

URL: indefero.net

Trac

Trac is perhaps the gold standard of lightweight project management apps. It has a wiki, downloads, issue tracking, and VCS integration (SVN only in the base version, quite a few others with 3rd-party plugins). I ran Trac myself for a while.

It also has quite a few failings. Chief among them is that you must run a completely separate Trac instance for each project. So there is no possible way to go to some dashboard and see all bugs assigned to you from all projects, for instance. That is what drove me away from it initially. That and the serious performance problems that most of its VCS backends have.

URL: trac.edgewall.org

Redmine

Redmine is designed to be a better Trac than Trac. It uses the same lightweight philosophy in general, has a wiki, issue tracker, VCS integration, downloads area, etc. But it supports multiple projects in a sane and nice way. It’s what I currently use over on software.complete.org.

Redmine has no API to speak of, though I have managed to cobble together an automatic uploader using curl. It was unpleasant to write and sometimes breaks on new releases, but it generally gets the job done.

I have two big problems with Redmine. One is performance. It’s slow. And when web spiders hit it, it sometimes has been so slow that it takes down my entire server. Because of the way it structures its URLs, it is not possible to craft a robots.txt that does the right thing — and there is no plan to completely fix it. There is, however, a 3rd-party plugin that may help.
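To illustrate the robots.txt problem, here is a sketch of what a crawler block would have to look like. The project names are hypothetical, and the exact paths depend on the Redmine version; the point is that the expensive pages (repository browsing, diffs) share no common URL prefix across projects, and wildcard path matching is a nonstandard extension that only some crawlers honor:

```
# Hypothetical robots.txt for a multi-project Redmine site.
# Each project's expensive paths must be listed individually --
# and a new project gets crawled freely until someone remembers
# to add it here.
User-agent: *
Disallow: /projects/someproject/repository
Disallow: /projects/someproject/activity
Disallow: /projects/otherproject/repository
Disallow: /projects/otherproject/activity
```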

The bigger problem relates to maintaining and upgrading Redmine. This is the first Ruby on Rails app I have ever used, and let me say it has made me want to run away screaming from Ruby on Rails. I’ve had such incredible annoyances installing and upgrading this thing that I can’t even describe what was wrong. All sorts of undocumented requirements for newer software, gems that are supposed to work with it but don’t, having to manually patch things so they actually work, conflicts with what’s on the system, and nobody in the Redmine, Rails, or Ruby communities being able to help. I upgrade rarely because it is such a hassle and breaks in such spectacular ways. I don’t think this is even Redmine’s fault; I think it’s a Rails and Ruby issue, but nevertheless, I am stuck with it. My last upgrade was a real mess: the newer PostgreSQL driver — the one required by the newer gem, which the newer Redmine required — had bugs that sent invalid SQL to the database. I finally patched it myself, and that was AFTER the whole pain that is installing gems in Ruby.

I’d take a CGI script written in Bash over Ruby on Rails after this.

That said, Redmine has the most complete set of the features I want of all the programs I’ve mentioned on this page.

URL: redmine.org

Savannah

Savannah is operated by the Free Software Foundation, and runs a fork of the SourceForge software. Its fork does support Git, but lacks a wiki. It has the standard *forge issue tracker, download area, home page support, integrated mailing lists, etc. It also has the standard *forge over-complexity.

There is a command-line SourceForge uploader in Debian that could potentially be hacked to work with Savannah, but I haven’t checked.

URL: savannah.nongnu.org

berlios.de

Appears to be another *forge clone. Similar to Savannah, but with a wiki, ugly page layout, and intrusive ads.

URL: berlios.de

SourceForge

Used to be the gold-standard of project hosting. Now looks more like a back alley in a trashy neighborhood. Ads all over the place, and intrusive and ugly ones at that. The ads make it hard to use the interface and difficult to navigate, especially for newbies. No thanks.

Conclusions

The four options that look most interesting to me are: Indefero, Github, Gitorious, and staying with Redmine. The community features of Github, Gitorious, and Launchpad all sound interesting, but I don’t have the experience to evaluate how well they work in practice — and how well they encourage “drive-by commits” for small projects.

The combination of Gitorious, Lighthouse, and my own download server merits more attention. Indefero still makes me nervous due to its single main developer and the level of development activity. Github has a lot of promise, but an interface that is too confusing and buggy for me to throw at end users. That leaves me with Redmine, despite all the Rails aggravations. Adding the bot-blocking plugin may just get me what I want right now, and is certainly the path of least resistance.

I am trying to find ways to build communities around these projects. If I had more experience with Github or Gitorious, and thought their community features could make a difference for small projects, I would try them.

My Week

It’s been quite the week.

Stomach Flu

Last Friday, my stomach was just starting to feel a little odd. I didn’t think much of it — a little food that didn’t go over well, or stress, I thought.

Saturday I got out of bed and almost immediately felt like throwing up. Ugh. I probably caught some sort of stomach flu. I was nauseous all day and had some terrible diarrhea to boot. I spent parts of Saturday, Saturday night, Sunday, and Sunday night “supervising some emergency downloads,” as the BOFH would say. By Sunday afternoon, I thought I was doing well enough to attend a practice of the Kansas Mennonite Men’s Choir. I made it through, but I wasn’t quite as up to it as I thought.

Monday morning I woke up and thought the worst was behind me, so I went to work. By evening, the worst clearly was not behind me. I was extremely cold, and then got very hot a few hours later. Tuesday I left work a little early because I still wasn’t feeling well.

Servers

Wednesday a colleague called me at home before I left to say that the ERP database had a major hiccup. That’s never good. The database is this creaky old dinosaur thing that has a habit of inventing novel ways to fail (favorite pastime: exceeding some arbitrary limit to the size of files that no OS has cared about for 5 years, then hanging without telling anybody why). My coworkers had been working on it since 5.

I went into the office and did what I could to help out, though they had mostly taken care of it. Then we went to reboot the server. It didn’t come back. I/O error on sda just after init started, and it hung. Puzzled, as it just used that disk to boot from. Try rebooting again.

This time, I/O error as the fibre channel controller driver loads. Again, puzzled as it just used that controller to load grub. Power cycle this time.

And now the server doesn’t see the fibre channel link at all. Eep. Check our fiber optic cables, and power cycle again.

And THIS time, the server doesn’t power back up. Fans whir for about a second, then an ominous red light I never knew was there shows up. Eeep!

So I call HP. They want me to remove one CPU. Yes, remove one CPU. I tried, and long story short, they dispatch a local guy with a replacement motherboard. “Can you send along a FC controller, in case it’s dead too?” “Nope, not until we diagnose a problem with it.”

Local guy comes out. He’s a sharp guy and I really like him. But the motherboard wasn’t in stock at the local HP warehouse, so he had to have it driven in from Oklahoma City. He gets here with it by about 4:30. At this point the single most important server to the company’s business has been down almost 12 hours.

He replaces the motherboard. The server now powers up — yay! And it POSTs, and it…. doesn’t see the disks. !#$!#$

He orders the FC controller, which is so very much not in stock that they can’t get it to us until 8:30AM the next morning (keep in mind this thing is on a 4-hour 24/7 contract).

Next morning rolls around. Outage now more than 24 hours. He pops the FC controller in, we tweak the SAN settings appropriately, we power up the machine, and….

still doesn’t see any disks, and the SAN switch still doesn’t see any link. EEP!

Even the BIOS firmware tool built into the controller doesn’t see a link, so we KNOW it’s not a software issue. We try plugging and unplugging cables, trying different ports, everything. Nothing makes a difference.

At this point, he ponders what else he can replace while we start migrating the server to a different blade. We get ERP back up on its temporary home an hour later, and he basically orders us every part he can think of now that we’ve bought him some room.

Several additional trips later, he’s replaced just about everything at least once, some things 2 or 3 times, and still no FC link. Meanwhile, I’ve asked my colleague to submit a new ticket to HP’s SAN team so we can check whether the switch has an issue. They take their sweet time answering until he informs them this morning that it’s been *48 HOURS* since we first reported the outage. All of a sudden half a dozen people at HP take a keen interest in our case. As if they could smell this blog post coming…

So they advise us to upgrade the firmware in the SAN switch, but they also say “we really should send this to the blade group; the problem can’t be with the SAN” — and of course the blade people are saying “the problem’s GOT to be with the SAN”. We try to plan the firmware upgrade. In theory, we can lose a switch and nobody ever notices due to multipathing redundancy. In practice, we haven’t tested that in 2 years. None of this equipment had even been rebooted in 390 days.

While investigating this, we discovered that one of the blade servers could only see one path to its disks, not two. Strange. Fortunately, THAT blade wasn’t mission-critical on a Friday, so I power cycled it.

And it powered back up. And it promptly lost connection to its disks entirely, causing the SAN switches to display the same mysterious error they did with the first blade — the one that nobody at HP had heard of, could find in their documentation, or even on Google. Yes, that’s right. Apparently power cycling a server means it loses access to its disks.

Faced with the prospect of our network coming to a halt if anything else rebooted (or worse, if the problem started happening without a reboot), we decided we’d power cycle one switch now and see what would happen. If it worked out, our problems would be fixed. If not, at least things would go down in our and HP’s presence.

And that… worked? What? Yes. Power cycling the switch fixed every problem over the course of about 2 minutes, without us having to do anything.

Meanwhile, HP calls back to say, “Uhm, that firmware upgrade we told you to do? DON’T DO IT!” We power cycle the other switch, and have a normal SAN life again.

I let out a “WOOHOO!” My colleague, however, had the opposite reaction. “Now we’ll never be able to reproduce this problem to get it fixed!” Fair point, I suppose.

Then began the fairly quick job of migrating ERP back to its rightful home — it’s all on Xen already, designed to be nimble for just these circumstances. Full speed restored 4:55PM today.

So, to cap it all off, within the space of four hours, we had fail:

  • One ERP database
  • ERP server’s motherboard
  • Two fiber optic switches — but only regarding their ability to talk to machines recently rebooted
  • And possibly one FC controller

Murphy, I hate you.

The one fun moment out of this was this conversation:

Me to HP guy: “So yeah, that machine you’ve got open wasn’t rebooted in 392 days until today.”

HP guy: “WOW! That’s INCRED — oh wait, are you running Linux on it?”

Me: “Yep.”

HP: “Figures. No WAY you’d get that kind of uptime from Windows.”

And here he was going to be all impressed.

How To Think About Compression, Part 2

Yesterday, I posted part 1 of how to think about compression. If you haven’t read it already, take a look now, so this post makes sense.

Introduction

In the part 1 test, I compressed a 6GB tar file with various tools. This is a good test if you are writing an entire tar file to disk, or if you are writing to tape.

For part 2, I will be compressing each individual file contained in that tarball individually. This is a good test if you back up to hard disk and want quick access to your files. Quite a few tools take this approach — rdiff-backup, rdup, and backuppc are among them.

We can expect performance to be worse both in terms of size and speed for this test. The compressor tool will be executed once per file, instead of once for the entire group of files. This will magnify any startup costs in the tool. It will also reduce compression ratios, because the tools won’t have as large a data set to draw on to look for redundancy.
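The difference between the two approaches can be sketched in shell. This is only a sketch with gzip standing in for the compressor and a tiny generated tree standing in for real data; the per-file loop is roughly what the backup tools above end up doing:

```shell
#!/bin/sh
set -e
DIR=sample
mkdir -p "$DIR"
printf 'some compressible text\n%.0s' $(seq 1 200) > "$DIR/a.txt"
cp "$DIR/a.txt" "$DIR/b.txt"

# Whole-archive: one compressor process sees all the data, so startup
# cost is paid once and cross-file redundancy can be exploited.
tar -cf - "$DIR" | gzip -c > whole.tar.gz

# Per-file: one gzip invocation per file, each stream compressed in
# isolation -- startup costs multiply, and each stream is on its own.
find "$DIR" -type f ! -name '*.gz' | while read -r f; do
  gzip -c "$f" > "$f.gz"
done
```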

To add to that, we have the block size of the filesystem — 4K on most Linux systems. Any file’s actual disk consumption is always rounded up to the next multiple of 4K. So a 5-byte file takes up the same amount of space as a 3000-byte file. (This behavior is not unique to Linux.) If a compressor can’t squeeze enough space out of a file to cross at least one 4K boundary, it effectively doesn’t save any disk space. On the other hand, in certain situations, saving one byte of data could free 4K of disk space.
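The rounding is simple arithmetic, sketched here assuming a 4096-byte block size (check your filesystem’s actual value before relying on this):

```shell
#!/bin/sh
# Round a file's size up to the next multiple of the block size to get
# its actual on-disk footprint (4096 bytes assumed here).
disk_usage() {
  awk -v size="$1" -v bs=4096 'BEGIN { print int((size + bs - 1) / bs) * bs }'
}

disk_usage 5      # a 5-byte file still occupies 4096 bytes on disk
disk_usage 3000   # same footprint as the 5-byte file: 4096
disk_usage 4097   # one byte over the boundary costs a whole extra block: 8192
```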

So, for the results below, I use du to calculate disk usage, which reflects the actual amount of space consumed by files on disk.

The Tools

Based on comments in part 1, I added tests for lzop and xz to this iteration. I attempted to test pbzip2, but it would have taken 3 days to complete, so it is not included here — more on that issue below.

The Numbers

Let’s start with the table, using the same metrics as with part 1:

Tool       MB saved   Space vs. gzip   Time vs. gzip   Cost
gzip           3081          100.00%         100.00%   0.41
gzip -1        2908          104.84%          82.34%   0.36
gzip -9        3091           99.72%         141.60%   0.58
bzip2          3173           97.44%         201.87%   0.81
bzip2 -1       3126           98.75%         182.22%   0.74
lzma -1        3280           94.44%         163.31%   0.63
lzma -2        3320           93.33%         217.94%   0.83
xz -1          3270           94.73%         176.52%   0.68
xz -2          3309           93.63%         200.05%   0.76
lzop -1        2508          116.01%          77.49%   0.39
lzop -2        2498          116.30%          76.59%   0.39

As before, in the “MB saved” column, higher numbers are better; in all other columns, lower numbers are better. I’m using clock seconds here on a dual-core machine. The cost column is clock seconds per MB saved.
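Since the cost metric is easy to misread, here is the arithmetic spelled out. The seconds figure below is back-derived from the table (0.41 × 3081 MB), not a fresh measurement:

```shell
#!/bin/sh
# Cost = wall-clock seconds divided by megabytes saved.
cost() {
  awk -v secs="$1" -v mb="$2" 'BEGIN { printf "%.2f\n", secs / mb }'
}

cost 1263 3081   # roughly the gzip row: prints 0.41
```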

Let’s draw some initial conclusions:

  • lzma -1 continues to be both faster and smaller than bzip2. lzma -2 is still smaller than bzip2, but unlike the test in part 1, is now a bit slower.
  • As you’ll see below, lzop ran as fast as cat. Strangely, lzop -2 produced larger output than lzop -1.
  • gzip -9 is probably not worth it — it saved less than 1% more space and took 42% longer.
  • xz -1 is not as good as lzma -1 on either measure, though xz -2 is faster than lzma -2 at the cost of some storage space.
  • Compared with part 1, the differences in both space and time among the tools were smaller. Across all tools, the difference in time is still far more significant than the difference in space.

The Pretty Charts

Now, let’s look at an illustration of this. As before, the sweet spot is the lower left, and the worst spot is the upper right. First, let’s look at the compression tools themselves:

compress2-zoomed

At the extremely fast, but not as good compression, end is lzop. gzip is still the balanced performer, bzip2 still looks really bad, and lzma -1 is still the best high-compression performer.

Now, let’s throw cat into the mix:

compress2-big

Here’s something notable that this graph makes crystal clear: lzop was just as fast as cat. In other words, lzop was likely faster than the disk, and using lzop compression would be essentially free in terms of time consumed.

And finally, look at the cost:

compress2-efficiency

What happened to pbzip2?

I tried the parallel bzip2 implementation just like last time, but it ran extremely slowly. Interestingly, pbzip2 < notes.txt > notes.txt.bz2 took 1.002 wall seconds, but pbzip2 notes.txt finished almost instantaneously. That 1-second startup cost when reading from stdin was a killer: at roughly a second per file, the test would have taken more than 3 days to complete. I killed it early and omitted it from my results. Hopefully this bug can be fixed. I expected pbzip2 to help little in this test — perhaps even to show a slight degradation — but not like THAT.

Conclusions

As before, the difference in time was far more significant than the difference in space. By compressing files individually, we lost about 400MB (about 7%) of space compared to making a tar file and then compressing it. My test set contained 270,101 files.

gzip continues to be a strong all-purpose contender, posting fast compression time and respectable compression ratios. lzop is a very interesting tool, running as fast as cat and yet turning in reasonable compression — though 25% worse than gzip on its default settings. gzip -1 was almost as fast, though, and compressed better. If gzip weren’t fast enough with -6, I’d be likely to try gzip -1 before using lzop, since the gzip format is far more widely supported, and that’s important to me for backups.

These results still look troubling for bzip2. lzma -1 continued to turn in far better times and compression ratios than bzip2. Even bzip2 -1 couldn’t match the speed of lzma -1, and compressed barely better than gzip. I think bzip2 would be hard-pressed to find a comfortable niche anywhere by now.

As before, you can download my spreadsheet with all the numbers behind these charts and the table.

How To Think About Compression

… and the case against bzip2

Compression is with us all the time. I want to talk about general-purpose lossless compression here.

There is a lot of agonizing over compression ratios: the size of output for various sizes of input. For some situations, this is of course the single most important factor. For instance, if you’re Linus Torvalds putting your code out there for millions of people to download, the benefit of saving even a few percent of file size is well worth the cost of perhaps 50% worse compression performance. He compresses a source tarball once a month maybe, and we are all downloading it thousands of times a day.

On the other hand, when you’re doing backups, the calculation is different. Your storage media costs money, but so does your CPU. If you have a large photo collection or edit digital video, you may create 50GB of new data in a day. If you use a compression algorithm that’s too slow, your backup for one day may not complete before your backup for the next day starts. This is even more significant a problem when you consider enterprises backing up terabytes of data each day.

So I want to think of compression both in terms of resulting size and performance. Onward…

Starting Point

I started by looking at the practical compression test, which has some very useful charts. He has charted savings vs. runtime for a number of different compressors, and with the range of different settings for each.

If you look at his first chart, you’ll notice several interesting things:

  • gzip performance flattens at about -5 or -6, right where the manpage tells us it will, and in line with its defaults.
  • 7za -2 (the LZMA algorithm used in 7-Zip and p7zip) is both faster and smaller than any possible bzip2 combination. 7za -3 gets much slower.
  • bzip2’s performance is more tightly clustered than the others, both in terms of speed and space. bzip2 -3 is about the same speed as -1, but gains some space.

All this was very interesting, but had one limitation: it applied only to the gimp source tree, which is something of a best-case scenario for compression tools.

A 6GB Test

I wanted to try something a bit more interesting. I made an uncompressed tar file of /usr on my workstation, which comes to 6GB of data. My /usr is a large, real-world mix of data: highly compressible header files and source code, ELF binaries and libraries, already-compressed documentation files, small icons, and the like.

In fact, every compression comparison I saw was using data sets less than 1GB in size — hardly representative of backup workloads.

Let’s start with the numbers:

Tool       MB saved   Space vs. gzip   Time vs. gzip   Cost
gzip           3398          100.00%         100.00%   0.15
bzip2          3590           92.91%         333.05%   0.48
pbzip2         3587           92.99%         183.77%   0.26
lzma -1        3641           91.01%         195.58%   0.28
lzma -2        3783           85.76%         273.83%   0.37

In the “MB saved” column, higher numbers are better; in all other columns, lower numbers are better. I’m using clock seconds here on a dual-core machine. The cost column is clock seconds per MB saved.

What does this tell us?

  • bzip2 can do roughly 7% better than gzip, at a cost of a compression time more than 3 times as long.
  • lzma -1 compresses better than bzip2 -9 in less than twice the time of gzip. That is, it is significantly faster and marginally smaller than bzip2.
  • lzma -2 is significantly smaller and still somewhat faster than bzip2.
  • pbzip2 achieves better wall clock performance, though not better CPU time performance, than bzip2 — though even then, it is only marginally better than lzma -1 on a dual-core machine.

Some Pretty Charts

First, let’s see how the time vs. size numbers look:

compress-zoomed

Like the other charts, the best area is the lower left, and worst is upper right. It’s clear we have two outliers: gzip and bzip2. And a cluster of pretty similar performers.

This view somewhat magnifies the differences, though. Let’s add cat to the mix:

compress-big

And finally, look at the cost:

compress-efficiency

Conclusions

First off, the difference in time is far larger than the difference in space. We’re talking a difference of 15% at the most in terms of space, but more than a factor of three for time.

I think this pretty definitively is a death knell for bzip2. lzma -1 can achieve better compression in significantly less time, and lzma -2 can achieve significantly better compression in a little less time.

pbzip2 can help even that out in terms of clock time on multicore machines, but 7za already has a parallel LZMA implementation, and it seems only a matter of time before /usr/bin/lzma gets it too. Also, if I were to chart CPU time, the numbers would be even less kind to pbzip2 than to bzip2.

bzip2 does have some interesting properties, such as resetting everything every 900K, which could provide marginally better safety than any other compressor here — though I don’t know if lzma provides similar properties, or could.

I think a strong argument remains that gzip is most suitable for backups in the general case. lzma -1 makes a good contender when space is at more of a premium. bzip2 doesn’t seem to make a good contender at all now that we have lzma.

I have also made my spreadsheet (OpenOffice format) containing the raw numbers and charts available for those interested.

Update

Part 2 of this story is now available, which considers more compression tools, and looks at performance compressing files individually rather than the large tar file.