Category Archives: Software

Web Design Companies That Understand Technology

There are a lot of companies out there that do web design work that looks fabulous.

Unfortunately, a lot of these sites look fabulous only when viewed in IE6 build xxxx, with a 75dpi monitor, fonts set to the expected size, running on Windows XP SP2, with JavaScript enabled. Try looking at the site in Safari or Firefox, or with larger-than-expected fonts, and things break down: text boxes overlap each other, buttons that should work don’t, and it becomes a mess.

So, if your employer wanted a web design company that has a good grasp of Web standards and the appropriate use of them, where would you look? A company that can write good HTML, CSS, and JavaScript, and still make the site look appealing? A company that has heard of Apache and gets the appropriate nausea when someone mentions ColdFusion or Frontpage?

So far, I’ve seen these places mentioned by others:

WebDevStudios.com
Happy Cog
Crowd Favorite

Converted to WordPress

I have been using Serendipity on my blog for some time now. Overall, I’ve been pleased with it, but the conversion was a pain.

Serendipity is a simple blog engine, and has a wonderful built-in plugin system. It can detect what plugins need upgrading, and install those upgrades, all from directly within the management interface. There’s no unzipping stuff in install directories as with WordPress.

Looking back at WordPress

I’ve hosted this blog on three different platforms: Drupal, WordPress, and at present, Serendipity.

Back in 2006, I rejected WordPress, noting that most of its plugins were incompatible with the then-current version, its main anti-spam software wasn’t Free, and there was no central plugin directory. And, while WordPress supported PostgreSQL, many plugins didn’t.

Serendipity, at the time, had none of those problems.

However, I’ve been having other problems with Serendipity since then. People have repeatedly had trouble with captchas. The RSS feeds have long had subtle incompatibilities with certain aggregators, leading to duplicate posts.

I’m looking back at WordPress now. It looks like it is a lot more mature than it was 2.5 years ago. Perhaps it’s time to switch back.

I had hoped it would support PostgreSQL better by now, but its website seems to list only MySQL these days. Ah well, can’t have it all, I guess.

Search for Backup Tools

Since the last time I went looking for backup software, I’ve still been using rdiff-backup.

It’s nice, except for one thing: it always keeps an uncompressed copy of your current state on the disk. This is becoming increasingly annoying.
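For reference, I’m using it in the standard way: the destination holds a plain uncompressed mirror of the source, with compressed reverse diffs tucked under rdiff-backup-data/. Something like this (paths invented for illustration):

# Mirror /home into /backup/home; increments go into
# /backup/home/rdiff-backup-data/ as compressed reverse diffs.
rdiff-backup /home /backup/home

# Expire increments older than a month; the uncompressed mirror
# itself always stays.
rdiff-backup --remove-older-than 1M /backup/home

# Restore a file as it was 3 days ago.
rdiff-backup -r 3D /backup/home/somefile /tmp/somefile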

I did some tests with dar and BackupPC, and both saved considerable disk space over rdiff-backup. The problem with dar, or with compressed full/incremental backups with tar, is that eventually you have to make a new full backup. You have to make the new full first, *then* delete all your old fulls and incrementals, so there will be times when you have to store a full backup twice.
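Here’s the rotation problem sketched with GNU tar’s --listed-incremental mechanism (paths invented):

# Full backup; home.snar records file state for later incrementals.
tar -czf /backup/full.tar.gz --listed-incremental=/backup/home.snar /home

# Subsequent incrementals record only what changed since the snar
# was last updated.
tar -czf /backup/incr1.tar.gz --listed-incremental=/backup/home.snar /home

# To start a fresh cycle, you must write a *new* full backup before
# you can safely delete full.tar.gz and incr*.tar.gz, so two full
# backups briefly coexist on disk.
tar -czf /backup/full-new.tar.gz --listed-incremental=/backup/home-new.snar /home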

The hardlinking approach sounds good. It’s got a few problems, too. One is that it can lose metadata about, ironically enough, hard links. Another is that few of the hard linking programs offer a compressed on-disk format. Here’s what I’ve been looking at:

BackupPC

Nice on the surface. I’m a bit annoyed that it’s web-driven rather than commandline-driven, but I can look past that. I can also look past the fact that it won’t let me clamp down on ssh access as much as I’d like.

BackupPC writes metadata to disk alongside files, so it can restore hard links, symlinks, device entries, and the like. It also has the nice feature of being able to hard link identical files across machines, so if you’re backing up /usr on a bunch of machines and have the same files installed, you save space. Nice.

BackupPC also can compress the files on your disk. It uses pre-compression md5sums for identifying files to hard link, which is nice.
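The pooling concept, very roughly, looks like this; just a sketch of the idea in shell, not BackupPC’s actual implementation:

# Hash every backed-up file; hard link files whose content is
# already in the pool, and add new content to the pool.
mkdir -p pool
find backups -type f | while read -r f; do
    h=$(md5sum "$f" | cut -d' ' -f1)
    if [ -e "pool/$h" ]; then
        ln -f "pool/$h" "$f"   # duplicate content: share one inode
    else
        ln "$f" "pool/$h"      # new content: add it to the pool
    fi
done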

Here’s where I get nervous.

BackupPC doesn’t just use regular compression from, say, gzip or bzip2. It uses its own low-level algorithm built around Perl’s deflate library, and it does so in a nonstandard way owing to a supposed memory issue with zlib. Why they don’t just pipe the data through gzip or equivalent is beyond me.

This means, first off, that it’s using a nonstandard compression format, which makes me nervous to begin with. If that weren’t annoying enough, you have to install Perl plus a bunch of modules just to extract your data. This makes me nervous too.

Dirvish

Doesn’t support compression.

faubackup

Doesn’t support compression.

rdup

Supports compression and encryption. Does not preserve ownership of things unless the destination filesystem does (meaning you must run as root to store your backups.)

Killer missing feature: it does not preserve knowledge of what was hardlinked on the source system, so when you restore your backup, all hardlinks are lost. Epic fail.

rsnapshot

Doesn’t support compression.
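For what it’s worth, the hardlink rotation that rsnapshot and dirvish rely on boils down to rsync’s --link-dest option. A minimal sketch, with invented paths:

# Yesterday's snapshot becomes the reference; unchanged files are
# hard linked against it rather than copied, so every snapshot
# looks like a full backup but costs only the changed bytes.
mv snapshots/daily.0 snapshots/daily.1
rsync -a --link-dest=../daily.1 /home/ snapshots/daily.0/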

StoreBackup

Does support compression, and appears to restore metadata in a sane way. Supports backing up to a different machine on the LAN, but only if you set up NFS, so it looks inappropriate for doing backups over a VPN. An oddball design with a comprehensive, though confusing, manual.

So, any suggestions?

Crazy Cursor Conspiracy Finally Fully Fixed

So lately I had the bad fortune to type apt-get install gnome-control-center on my workstation. It pulled in probably a hundred dependencies, but I confirmed the install without really looking at that list.

The next day, I had a reason to reboot. When I logged back in, I noticed that my beloved standard X11 cursors had been replaced by some ugly antialiased white cursor theme. I felt as if XP had inched closer to taking over my machine.

I grepped all over $HOME for some indication of what happened. I played with the cursor settings in gnome-control-center’s appearance thing, which didn’t appear to have any effect. When I logged out, I noticed that the cursor was messed up in kdm of all things, and no amount of restarting it could fix it.

After some grepping in /etc, I realized that I could fix it with this command:

update-alternatives --config x-cursor-theme

And I set it back to /etc/X11/cursors/core.theme. Ahh, happiness restored.

I guess that’ll teach me to install bits of gnome on my box. Maybe.

New version of datapacker

I wrote before about datapacker, but I didn’t really describe what it is or how it’s different from other similar programs.

So, here’s the problem I had the other day. I have a bunch of photos spanning nearly 20 years stored on my disk. I wanted to burn almost all of them to DVDs. I can craft rules with find(1) to select the photos I want, and then I need to split them up into individual DVDs. There are a number of tools that do that, but none quite powerful enough for what I wanted.

When you think about splitting things up like this, there are a lot of ways you can split things. Do you want to absolutely minimize the number of DVDs? Or do you keep things in a sorted order, and just start a new DVD when the first one fills up? Maybe you are adding an index to the first DVD, and need a different size for it.

Well, datapacker 1.0.1 can solve all of these problems. As its manpage states, “datapacker is a tool in the traditional Unix style; it can be used in pipes and call other tools.” datapacker accepts lists of files to work on as command-line parameters, piped in from find, or piped in null-separated from find -print0. It can also output its results in various parser-friendly formats, call other programs directly in a manner similar to find -exec, or create hardlink or symlink forests for ease of burning to DVD (or whatever you’ll be doing with it).

So, what I did was this:


find Pictures -type f -and -not -iwholename "Pictures/2001/*.tif" -and \
-not -wholename "Pictures/Tabor/*" -print0 | \
datapacker -0Dp -s 4g --sort -a hardlink -b ~/bins/%03d -

So I generate a list of photos to process with find. Then datapacker is told to read the list of files in a null-separated way (-0), generate bins that mimic the source directory structure (-D), organize into bins preserving order (-p), use a 4GB size per bin (-s 4g), sort the input prior to processing (--sort), create hardlinks for the files (-a hardlink), name the bins with a 3-digit number under ~/bins (-b ~/bins/%03d), and finally, read the list of files from stdin (-). By using --sort and -p, the output will be sorted by year (Pictures/2000, Pictures/2001, etc.), so that photos from all years aren’t all mixed together on the discs.

This generates 13 DVD-sized bins in a couple of seconds. A simple for loop then can use mkisofs or growisofs to burn them.
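Assuming the burner is at /dev/dvd, that loop is about as simple as it sounds:

# Burn each bin to its own disc; growisofs hands -r -J
# (Rock Ridge and Joliet) through to mkisofs.
for bin in ~/bins/*; do
    growisofs -Z /dev/dvd -r -J "$bin"
    echo "Insert the next disc and press Enter"; read -r junk
done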

The datapacker manpage also contains an example for calling mkisofs directly for each bin, generating ISOs without even an intermediate hardlink forest.

So, when I wrote about datapacker last time, people asked how it differed from other tools. Many of them have different purposes in mind, so I’m not trying to say one tool or the other is better, just highlighting differences. Most of these appear not to have anything like datapacker --deep-links.

gaffitter: No xargs-convenient output, and no option to pass results directly to shell commands. Far more complex source (1671 lines vs. 228 lines).

dirsplit: Part of the mkisofs package. Uses a random iterative approach, and offers few options.

packcd: Similar packing-algorithm options, but few input/output options. No ability to read a large file list from stdin, so it could have issues with command-line length.

Switched from KDE to xmonad

Within the last couple of days, I’ve started using xmonad, a tiling window manager, instead of KDE. Tiling window managers automatically position most windows on your screen, freeing you from having to move, rearrange, and resize them all the time. It sounds scary at first, but it turns out to be incredibly nice and efficient. There are some nice videos and testimonials at the xmonad homepage.

I’ve switched all the devices I use frequently to xmonad. That includes everything from my 9.1″ Eee (1024×600) to my 24″ workstation at work (1920×1200). I’ve only been using it for 2 days, but already I feel more productive. Also my wrist feels happier because I have almost completely eliminated the need to use a mouse.

xmonad simultaneously feels shiny and modern, and old school. It is perfectly usable as your main interface. Mod-p brings up a dmenu-based quick program launcher, keyboard-oriented of course. No more opening up terminals to launch programs, or worse, having to use the mouse to navigate a menu for them.

There’s a lot of documentation available for xmonad, including an “about xmonad” document, a guided tour, and a step-by-step guide to configuring xmonad that I wrote up.

I’ve been using KDE for at least 8 years now, and WindowMaker, fvwm2, fvwm, etc. before that. This is my first step into tiling window managers, and I like it. You can, of course, use xmonad with KDE. Or you can go “old school” and set up a status bar and tray yourself, as I’ve done. KDE seems quite superfluous when xmonad is in use. I think I’ve replaced a gig of KDE software with a 2MB window manager. Whee!

Take a look at xmonad. If you like the keyboard or the shell, you’ll be hooked.

Command-Line RSS Reader

So, does anybody know of a command-line RSS reader? I want something that will save state and output recent entries in a nicely parsable manner. Maybe URL\tTitle\tDescription\n or somesuch.

So here’s why I’m asking.

I got to thinking that it might be nice to automatically post to Twitter when I’ve got a new blog post up, rather than manually have to say “just wrote a blog post.”

Then I also got to thinking that when I see an interesting URL that I’m going to Twitter about, I’m also almost always adding it as a bookmark to Delicious. Why not make a fortwitter tag on Delicious, and automatically post my comments about them to Twitter, saving me having to do it twice?

So I’ve got Twidge, which can be used nicely in a shell script to do this stuff. I’m hoping to avoid having to write a shell script-friendly RSS aggregator myself, since I’d really like to avoid reinventing the wheel, but I’m prone to do it if nobody else has already.
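What I have in mind is a pipeline along these lines, where rsscat stands in for the hypothetical state-keeping reader (the feed URL is a placeholder):

# rsscat is imagined: it would remember what it has seen and print
# one URL<TAB>Title<TAB>Description line per new entry.
rsscat http://example.com/feed.xml | while IFS="$(printf '\t')" read -r url title desc; do
    twidge update "New blog post: $title $url"
done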

New Twitter Client: Twidge

I’ve lately been thinking about Twitter. I wanted some way to quickly post tweets from the command line. But I also wanted to be able to receive tweets in a non-intrusive way on all my machines. And I wanted it to work with both Twitter and Identi.ca.

Nothing quite existed to do that, so I wrote Twidge.

Twidge is a command-line Twitter client. It can be run quite nicely interactively, with output piped through less. Or you can run it as a unidirectional or bidirectional mail gateway. Or you can use its parser-friendly output options to integrate it with shell scripts or other programs.
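A quick taste of the interactive side (from memory; the manual has the full command list):

twidge setup                # one-time interactive configuration
twidge lsrecent | less      # page through recent tweets from friends
twidge update "Posting this from the command line"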

It’s got an 11-page manual (which is also its manpage). User-friendly in the best tradition of command line programs.

And it’s released today. The source packages include the debian/ directory for you to use in building them, but until it gets out of NEW, I’ve also posted an i386 binary on my webpage that runs on etch and sid.

See the homepage for more info.

Oh, it’s written in Haskell, by the way.

I Want Something eBay Doesn’t Have

I always have time to think while mowing the lawn. And today while mowing the lawn, I got the notion that it would be great fun to play Colossal Cave and other early text adventure games on a teletype. And, of course, since Linux has teletype support in its genes, if I played my cards right, I could probably get a login prompt to my workstation with the teletype, too.

Now, at this point, I am compelled to take a small diversion and explain just what a teletype is — for those of you, like me, who are too young to remember them. (I will graciously omit comment on those of you too old to remember them!) Teletypes have been around since about the 1930s, but the ones I have in mind are the ones that were used to interact with computers in the 1960s and 1970s. Instead of a keyboard and monitor, you’d have a keyboard and printer. Believe it or not, surplus teletypes were the interface of choice for computer hobbyists even in the later years because they were so much cheaper than video terminals.

So anyhow, back to the plot. Teletypes operated at speeds ranging from about 40bps to 110bps, but it seems that the most common protocol was a Baudot-coded 50bps 5N2 serial format — that is, 5 data bits, no parity, 2 stop bits. Amazingly, the serial UART in modern PCs is still capable of communicating with these devices (though it may take some circuitry to tweak the voltages), and at least one person has made it work with Linux.

So I zip on over to eBay to look for teletypes. What do I find? NOT A ONE! A few manuals, and apparently there is a GPS named the teletype. And some company that has something they think *might* be compatible with a teletype, but they don’t know.

eBay has sorely let me down. An antique geeky item should be right up their alley, and zilch. They can sell everything from cars to advertisements on some guy’s bald head, but not a teletype? C’mon!

So anyhow, I am afraid I will have to improvise. Perhaps I can find a dot-matrix printer with a serial port (or, I guess, a parallel port would do too) and an unbuffered printing mode. Then the trick would be getting keyboard input. Perhaps I could rig up a pty to do this: input from /dev/console, output to /dev/ttyS0. It would still be old, but not quite the real deal.
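If real hardware ever turns up, the serial setup might look roughly like this. Untested, the device name is an assumption, and it ignores the Baudot-to-ASCII translation problem entirely:

# 50 baud, 5 data bits, no parity, 2 stop bits on the first
# serial port, then put a login prompt on it.
stty -F /dev/ttyS0 50 cs5 -parenb cstopb
/sbin/agetty -L ttyS0 50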

So if any of you have a working teletype you’d like to get rid of, do please let me know. I’ll send you a photo of the printout of me getting lost in Colossal Cave.

Oh, and for those keeping track at home, I guess you can add this to the list of old technologies I’m interested in: Gopher, typewriters, teletypes… they’re all alike, right?