Category Archives: Linux

Unix Password and Authority Management

One of the things that everyone seems to do differently is managing passwords. We haven’t looked at ours in quite some time, despite growth in both the company and the IT department.

As I look to us moving some things to the cloud, and shifting offsite backups from carrying tapes to a bank to backups via the Internet, I’m aware that the potential for mischief — whether intentional or not — is magnified. With cloud hosting, a person could, with the press of a button, wipe out the equivalent of racks of machines in a few seconds. With disk-based local and Internet-based offsite backups, the potential for malicious behavior may be magnified; someone could pretty quickly wipe out local and remote backups.

Add to that the mysterious fact that many enterprise-targeted services allow only a single username/password per account, and make no provision for ACLs to delegate permissions to others. Even Rackspace Cloud has this problem, as does their JungleDisk backup product, along with many, many other offsite backup products. Amazon AWS seems to be the only real exception to this rule, and its ACL support is more than a little complicated.

So one of the questions we will have to address is the balance of who holds these passwords. Give them to too many people and the probability of trouble, intentional or not, rises. Give them to too few and productivity suffers — and potentially the ability to restore as well. (If only one person has the password, and that person is unavailable, company data may be too.) The company does have some storage locations, including locked vaults and safe deposit boxes, that no IT people have access to. Putting a record of the passwords in those locations may be a good first step; placing them in the hands of people who cannot use them seems a reasonable precaution.

But we’ve been thinking of this as it pertains to our local systems as well. We have, for a number of years now, assigned a unique root password to every server. These passwords are then stored in a password-management tool, encrypted with a master password, and stored on a shared filesystem. Everyone in the department therefore can access every password.

Many places where I worked used this scheme, or some variant of it. The general idea was that if root on one machine was compromised and the attacker got root’s password, it would prevent the person from being able to just try that password on the other servers on the network and achieve a greater level of intrusion.

However, the drawback is that we now have more servers than anyone can really remember the passwords for. So many people are just leaving the password tool running. Moreover, while the attack described above is still possible, these days I worry more about automated intrusion attempts that most likely won’t try that attack vector.

A couple of directions we could go: use a single root password everywhere, or a small set of root passwords. Another option is to not log in to root accounts at all — possibly even disabling their passwords — and to require user accounts plus sudo. This hasn’t been practical to date. We don’t want a bunch of machines to depend on LDAP just to be able to get root, and we haven’t been using a tool such as puppet or cfengine to manage this sort of thing. Such a tool is on our roadmap and would let us manage that approach more easily. But this approach has risks too. One is that if user accounts can get to root on many machines, we’re not really more secure than with a standard root password. Another is that it makes it harder to detect and enforce password expiration and systematic password changes.
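For what it’s worth, here is a minimal sketch of the sudo-based approach on a single Debian-ish machine. The group name and the placeholder account are my own inventions, and it presumes sudo is configured to include /etc/sudoers.d (the Debian default):

    # Create a local admin group and grant it full sudo rights.
    groupadd itadmin
    usermod -aG itadmin alice                       # "alice" is a placeholder account

    echo '%itadmin ALL=(ALL) ALL' > /etc/sudoers.d/itadmin
    chmod 0440 /etc/sudoers.d/itadmin
    visudo -cf /etc/sudoers.d/itadmin               # sanity-check the syntax

    # Then lock the root password so direct root logins stop working.
    passwd -l root

Managed by hand this doesn’t scale past a handful of machines, which is exactly why puppet or cfengine would come into the picture.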

I’m curious what approaches other people are taking on this.

Research on deduplicating disk-based and cloud backups

Yesterday, I wrote about backing up to the cloud. I specifically was looking at cloud backup services. I’ve been looking into various options there, but also various options for disk-based backups. I’d like to have both onsite and offsite backups, so both types of backup are needed. Also, it is useful to think about how the two types of backups can be combined with minimal overhead.

For the onsite backups, I’d want to see:

  1. Preservation of ownership, permissions, etc.
  2. Preservation of symlinks and hardlinks
  3. Space-efficient representation of changes — ideally binary deltas or block-level deduplication
  4. Ease of restoring
  5. Support for backing up Linux and Windows machines

Deduplicating Filesystems for Local Storage

Although I initially thought of block-level deduplicating file systems as something to use for offsite backups, they could also make an excellent choice for onsite disk-based backups.

rsync-based dedup backups

One way to use them would be to simply rsync data to them each night. Since copies are essentially free, we could do (or use some optimized version of) cp -r current snapshot/2011-01-20 or some such to save off historic backups. Moreover, we’d get dedup both across and within machines. And, many of these can use filesystem-level compression.
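As a rough sketch — the paths and the dedup mountpoint here are made up for illustration — the nightly run could look something like this:

    # Pull tonight's data into the "current" tree on the dedup filesystem.
    rsync -aHx --numeric-ids --delete server1:/ /backups/server1/current/

    # "Snapshot" it by copying; on a dedup filesystem the blocks are shared,
    # so this costs little beyond metadata.
    cp -a /backups/server1/current /backups/server1/snapshots/2011-01-20

On ZFS, the cp could be replaced with an actual zfs snapshot, which is cheaper still.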

The real upshot of this is that the entire history of the backups can be browsed as a mounted filesystem. It would be fast and easy to find files, especially when users call about that file they deleted at some point in the past but don’t remember when, exactly what it was called, or exactly where it was stored. We can do a lot more with find and grep to locate these things than we could with the restore console in Bacula (or any other backup program). Since it is a real mounted filesystem, we could also do fun things like make tarballs of it at will, zip parts up, scp them back to the file server, whatever. We could potentially even give users direct access to their files to restore things they need for themselves.
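That kind of search is just ordinary shell work. For instance (the paths are assumptions, continuing the layout sketched above):

    # Which snapshots contain a spreadsheet whose name we half-remember?
    find /backups/server1/snapshots -iname '*budget*.xls' -ls

    # Which copies of a config file mention a particular hostname?
    grep -l mail.example.com /backups/server1/snapshots/*/etc/postfix/main.cf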

The downside of this approach is that rsync can’t preserve all the ownership and permissions unless it’s running as root on the system. Wrappers such as rdup around rsync could help with that. Another downside is that there isn’t a central scheduling/statistics service; we wouldn’t want the backup system to be hammered by 20 servers trying to send it data at once. So there’d be an element of rolling our own scripts, though nothing too bad. I’d have preferred not to give a backup server root-level access to dozens of machines, but that may be inescapable in this instance.
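The scheduling piece can be crude and still work: serialize the transfers with a lock. A hedged sketch, with the host list and paths invented for illustration:

    #!/bin/sh
    # Run nightly from cron on the backup server.  The loop serializes hosts;
    # flock prevents two cron invocations from overlapping.
    (
      flock -n 9 || exit 1
      for host in server1 server2 server3; do
          rsync -aHx --numeric-ids --delete "$host":/ "/backups/$host/current/"
      done
    ) 9>/var/lock/backup-pull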

Bacula and dedup

The other alternative I thought of is a system such as Bacula with disk-based “volumes”. A Bacula volume is normally a tape, but Bacula can just as well write them to disk files. This lets us use the powerful Bacula scheduling engine, logging service, pre-backup and post-backup jobs, etc. Normally this would be an egregious waste of disk space: Bacula, like most tape-heritage programs, will write out an entire new copy of a file if even one byte changes. I had thought that I could let block-level dedupe shrink the storage size of Bacula volumes, but after looking at the Bacula block format spec, this won’t work, since each block carries timestamps and the like.

The good things about this setup revolve around using the central Bacula director. We need only install bacula-fd on each server to be backed up, and it has a fairly limited set of things it can do. Bacula already has built-in support for defining simple or complicated retention policies. Its director will email us if there is a problem with anything. And its logs and catalog are already extensive and enable us to easily find out things such as how long backups take, how much space they consume, etc. And it backs up Windows machines intelligently and comprehensively in addition to POSIX ones.

The downsides are, of course, that we don’t have all the features we’d get from having the entire history on the filesystem all at once, and far less efficient use of space. Not only that, but recovering from a disaster would require a more extensive bootstrapping process.

A hybrid option may be possible: automatically unpacking Bacula backups onto the local filesystem after they’ve run. Dedupe should ensure this doesn’t take additional space — if the Bacula blocksize aligns with the filesystem blocksize, which is by no means a given. It may also make sense to use Bacula for Windows and rsync/rdup for Linux systems.

That hybrid, however, seems rather wasteful for little gain.

Evaluation of deduplicating filesystems

I set up and tested three deduplicating filesystems available for Linux: S3QL, SDFS, and zfs-fuse. I did not examine lessfs. I ran a similar set of tests for each (a scripted sketch of the procedure follows the list):

  1. Copy /usr/bin into the fs with tar -cpf - /usr/bin | tar -xvpf - -C /mnt/testfs
  2. Run commands to sync/flush the disk cache. Evaluate time and disk used at this point.
  3. Rerun the tar command, putting the contents into a slightly different path in the test filesystem. This should consume very little additional space since the files will have already been there. This will validate that dedupe works as expected, and provide a hint about its efficiency.
  4. Make a tarball of both directories from the dedup filesystem, writing it to /dev/zero (to test read performance)
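Roughly, each run amounted to something like the following. The mountpoint, the exact sync incantation, and using df for the space measurement are stand-ins; how space was actually measured differed per filesystem.

    mkdir -p /mnt/testfs/copy1 /mnt/testfs/copy2

    # 1. First copy into the filesystem under test.
    time sh -c 'tar -cpf - /usr/bin | tar -xpf - -C /mnt/testfs/copy1'

    # 2. Flush write caches; note elapsed time and space consumed so far.
    time sync
    df -h /mnt/testfs

    # 3. Second copy into a different path; dedup should make this nearly free.
    time sh -c 'tar -cpf - /usr/bin | tar -xpf - -C /mnt/testfs/copy2'
    time sync
    df -h /mnt/testfs

    # 4. Read everything back, discarding the output, to gauge read speed.
    time sh -c 'tar -cpf - -C /mnt/testfs copy1 copy2 > /dev/zero'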

I did not attempt to flush read caches during this, but I did flush write caches. The test system has 8GB RAM, 5GB of which was free or in use by a cache. The CPU is a Core2 6420 at 2.13GHz. The filesystems which created files atop an existing filesystem had ext4 mounted noatime beneath them. ZFS was mounted on an LVM LV. I also benchmarked native performance on ext4 as a baseline. The data set consists of 3232 files and 516MB. It contains hardlinks and symlinks.

Here are my results. Please note the comments below as SDFS could not accurately complete the test.

Test                        ext4     S3QL     SDFS     zfs-fuse
First copy                  1.59s    6m20s    2m2s     0m25s
Sync/Flush                  8.0s     1m1s     0s       0s
Second copy+sync            N/A      0m48s    1m48s    0m24s
Disk usage after 1st copy   516MB    156MB    791MB    201MB
Disk usage after 2nd copy   N/A      157MB    823MB    208MB
Make tarball                0.2s     1m1s     2m22s    0m54s
Max RAM usage               N/A      150MB    350MB    153MB
Compression                 none     lzma     none     gzip-2

It should be mentioned that these tests pretty much ruled out SDFS. SDFS doesn’t appear to support local compression, and it severely bloated the data store, which was much larger than the original data. Moreover, it permitted any user to create and modify files, even if the permissions bits said that the user couldn’t. tar gave many errors unpacking symlinks onto the SDFS filesystem, and du -s on the result threw up errors as well. Besides that, I noted that find found 10 fewer files than in my source data. Between the huge memory consumption, the data integrity concerns, and inefficient disk storage, SDFS is out of the running for this project.

S3QL is optimized for storage to S3, though it can also store its files locally or on an sftp server — a nice touch. I suspect part of its performance problem stems from being designed for network backends, and using slow compression algorithms. S3QL worked fine, however, and produced no problems. Creating a checkpoint using s3qlcp (faster than cp since it doesn’t have to read the data from the store) took 16s.
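For reference, the local-backend setup I used looks roughly like this; the storage path and mountpoint are made up, and option names may differ between S3QL versions:

    mkfs.s3ql local:///srv/s3ql-store             # prompts for an encryption passphrase
    mount.s3ql local:///srv/s3ql-store /mnt/testfs
    # ... run the tests against /mnt/testfs ...
    s3qlcp /mnt/testfs/copy1 /mnt/testfs/checkpoint   # copy-on-write copy; ~16s here
    umount.s3ql /mnt/testfs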

zfs-fuse appears to be the most-used ZFS implementation on Linux at the moment. I set up a 2GB ZFS pool for this test, and set dedup=on and compression=gzip-2. When I evaluated compression in the past, I hadn’t looked at lzjb. I found a blog post comparing lzjb to the gzip options supported by ZFS and wound up using gzip-2 for this test.
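The setup amounts to a few commands; the pool name and the LVM device path are my own placeholders:

    zpool create testpool /dev/vg0/zfstest        # the 2GB LV set aside for the test
    zfs set dedup=on testpool
    zfs set compression=gzip-2 testpool
    zfs create testpool/backups                   # mounted at /testpool/backups by default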

ZFS really shone here. Compared to S3QL, it took 25s instead of over 6 minutes to copy the data over — and took only 28% more space. I suspect that with gzip-9 compression it would have been closer to S3QL in both time and space. And creating a ZFS snapshot was nearly instantaneous. Although zfs-fuse probably doesn’t have as many users as ZFS on Solaris, it is available in Debian and has solid backing behind it. I feel safer using it than I do using S3QL. So I think ZFS wins this comparison.

I spent quite some time testing ZFS snapshots, which are instantaneous. (Incidentally, ZFS-fuse can’t mount them directly as documented, so you create a clone of the snapshot and mount that.) They worked out as well as could be hoped. Due to dedupe, even deleting and recreating the entire content of the original filesystem resulted in less than 1MB additional storage used. I also tested creating multiple filesystems in the zpool, and confirmed that dedupe even works between filesystems.
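The snapshot dance, including the clone workaround mentioned above, looks roughly like this (the dataset and snapshot names are placeholders):

    zfs snapshot testpool/backups@2011-01-20

    # zfs-fuse couldn't mount the snapshot directly, so clone it and browse the clone.
    zfs clone testpool/backups@2011-01-20 testpool/restore-2011-01-20
    ls /testpool/restore-2011-01-20

    # Clean up the clone when done; the snapshot itself stays.
    zfs destroy testpool/restore-2011-01-20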

Incidentally — wow, ZFS has a ton of awesome features. I see now why you OpenSolaris people kept looking at us Linux folks with a sneer. Then again, our project hasn’t been killed by a new corporate overlord, so I guess that maybe didn’t work out so well for you… <grin>.

The Cloud Tie-In

That leaves another discussion: what to do about offsite backups? Assuming for the moment that I want to send them over the Internet to some sort of cloud storage facility, there are roughly three options:

  1. Get an Amazon EC2 instance with EBS storage and rsync files to it — perhaps running ZFS on that thing (sketched below)
  2. Use a filesystem that can efficiently store data in S3 or Cloud Files (S3QL is the only contender here)
  3. Use a third-party backup product (JungleDisk appears to be the leading option)
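Option 1 is the do-it-yourself route. A hedged sketch, assuming an invented EC2 hostname and the same ZFS layout as in the tests above on the remote end:

    # Push the local backup tree to an EBS-backed ZFS pool on an EC2 instance.
    # --bwlimit (KB/s) keeps the office uplink usable during business hours.
    rsync -aHx --numeric-ids --delete --bwlimit=2000 \
        -e ssh /backups/ root@ec2-offsite.example.com:/testpool/backups/

    # Snapshot on the remote side so history accumulates there too.
    ssh root@ec2-offsite.example.com zfs snapshot testpool/backups@2011-01-20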

There is also something to be said for using a different tool for offsite backups than for onsite ones: if a bug in the primary tool corrupts or misses data, an independent offsite tool is less likely to share the same failure.

One of the nice things about JungleDisk is that bandwidth is free, and disk is the same $0.15/GB-mo that RackSpace normally charges. JungleDisk also does block-level dedup, and has a central management interface. This all spells “nice” for us.

The only remaining question would be whether to just use JungleDisk to back up the backup server, or to put it on each individual machine as well. If it just backs up the backup server, then administrative burdens are lower; we can back everything there up by default and just not worry about it. On the other hand, if there is a problem with our main backups, we could be really stuck. So I’d say I’m leaning towards ZFS plus some sort of rsync solution and JungleDisk for offsite.

I had two people suggest CrashPlan Pro on my blog. It looks interesting, but it is a very closed product, which makes me nervous. I like using standard tools and formats — that gives me more peace of mind, control, and recovery options. CrashPlan Pro supports multiple destinations, and the company says it does cloud hosting, but doesn’t list pricing anywhere. So I’ll probably not mess with it.

I’m still very interested in what comments people may have on all this. Let me know!

Jacob has a new computer — and a favorite shell

Earlier today, I wrote about building a computer with Jacob, our 3.5-year-old, and setting him up with a Linux shell.

We did that this evening, and wow — he loves it. While the Debian Installer was running, he kept begging to type, so I taught him how to hit Alt-F2 and fired up cat for him. That was a lot of fun. But even more fun was had once the system was set up. I installed bsdgames and taught him how to use worm. worm is a simple snake-like game where you use the arrow keys to “eat” the numbers. That was a big hit, as Jacob likes numbers right now. He watched me play it a time or two, then tried it himself. Of course he crashed into the wall pretty quickly, which exits the game.

I taught him how to type “worm” at the computer, then press Enter to start it again. Suffice it to say he now knows how to spell worm very well. Yes, that’s right: Jacob’s first ever Unix command was…. worm.

He’d play the game, and cackle if he managed to eat a number. If he crashed into a wall, he’d laugh much harder and run over to the other side of the room.

Much as worm was a hit, the Linux shell was even more fun. He sometimes has a problem with the keyboard repeat, and one time typed “worrrrrrrrrrrrrrrrrrm”. I tried to pronounce that for him, which he thought was hilarious. He was about to backspace to fix it, when I asked, “Jacob, what will happen if you press Enter without fixing it?” He looked at me with this look of wonder and excitement, as if to say, “Hey, I never thought of that. Let’s see!” And a second later, he pressed Enter.

The result, of course, was:

-bash: worrrrrrrrrrrrrrrrrrm: command not found

“Dad, what did it do?”

I read the text back, and told him it means that the computer doesn’t know what worrrrrrrrrrrrrrrrrrm means. Much laughter. At that point, it became a game. He’d bang at random letters, and finally press Enter. I’d read what it said. Pretty soon he was recognizing the word “bash”, and I heard one time, “Dad, it said BASH again!!!” Sometimes if he’d get semicolons at the right place, he’d get two or three “bashes”. That was always an exciting surprise. He had more fun at the command line than he did with worm, and I think at least half of it was because the shell was called bash.

He took somewhat of an interest in the hardware part earlier in the evening, though not quite as much. He was interested in opening up other computers to take parts out of them, but bored quickly. The fact that Terah was cooking supper probably had something to do with that. He really enjoyed the motherboard (and learned that word), and especially the CPU fan. He loved to spin it with his finger. He thought it interesting that there would be a fan inside his computer.

When it came time to assign a hostname, I told Jacob he could name his computer. Initially he was confused. Terah suggested he could name it “kitty”, but he didn’t go for it. After a minute’s thought, he said, “I will name it ‘Grandma Marla.’” Confusion from us — did he really understand what he was saying? “You want to name your computer ‘Grandma Marla’?” “Yep. That will be silly!” “Sure you don’t want to name it Thomas?” “That would be silly! No. I will name my computer ‘Grandma Marla.’” OK then. My DNS now has an entry for grandma-marla. I had wondered what he would come up with. You never know with a 3-year-old!

It was a lot of fun to see that sense of wonder and experimentation at work. I remember it from my TRS-80 and DOS days, when I would just try random things to see what they would do. It is lots of fun to watch it in Jacob too, and to hear the laughter as he discovers something amusing.

We let Jacob stay up 2 hours past his bedtime to enjoy all the excitement. Tomorrow the computer moves to his room. Should be loads of excitement then too.

Introducing the Command Line at 3 years

Jacob is very interested in how things work. He’s 3.5 years old, and into everything. He loves to look at propane tanks — at the pressure meter, and to open the lids on top to see the vent underneath. Last night, I showed him our electric meter and the spinning disc inside it.

And, more importantly, last night I introduced him to the Linux command line interface, which I called the “black screen.” Now, Jacob can’t read yet, though he does know his letters. He had a lot of fun sort of exploring the system.

I ran “cat”, which simply lets him bash on the keyboard and, whenever he presses Enter, echoes what he typed back at him. I taught him how to hold Shift and press a number key to get a fun symbol. His favorite is the “hat” above the 6.

Then I ran tr a-z A-Z for him, and he got to watch the computer convert every lowercase letter into an uppercase letter.

Despite the fact that Jacob enjoys watching YouTube videos of trains, and even a bit of Railroad Tycoon 3 with me, this was pure exploration, and he loved it. Sometimes he’d say, “Dad, what will this key do?” Sometimes I didn’t know; some media keys did nothing, and some other keys caused weird things to appear. My keyboard has back and forward buttons designed for use with a web browser. He almost squealed with delight when he pressed the forward button and noticed it printed lots of ^@^@^@ characters on the screen when he held it down. “DAD! It makes LOTS of little hats! And what is that other thing?” (The at-sign.)

I’ve decided it’s time to build a computer for Jacob. I have an old Sempron motherboard lying around, and an old 9″ black-and-white VGA CRT that’s pretty much indestructible, plus an old case or two. So it will cost nothing. This evening, Jacob will help me find the parts, and then he can help me assemble them all. (This should be interesting.)

Then I’ll install Debian while he sleeps, and by tomorrow he should be able to run cat all by himself. I think that, within a few days, he can probably remember how to log himself in and fire up a program or two without help.

I’m looking for suggestions for text-mode games appropriate to a 3-year-old. So far, I’ve found worm from bsdgames that looks good. It doesn’t require him to have quick reflexes or to read anything, and I think he’ll pick up using the arrow keys to move it just fine. I think that tetris is probably still a bit much, but maybe after he’s had enough of worm he would enjoy trying it.

I was asked on Twitter why I’ll be using the command line for him. There are a few reasons. One is that it will actually be usable on the 9″ screen, but another one is that it will expose the computer at a different level than a GUI would. He will inevitably learn about GUIs, but learning about a CLI isn’t inevitable. He won’t have to master coordination with a mouse right away, and there’s pretty much no way he can screw it up. (No, I won’t be giving him root yet!) Finally, it’s new and different to him, so he’s interested in it right now.

My first computer was a TRS-80 Color Computer (CoCo) II. Its primary interface, a BASIC interpreter, counts, I suppose, as a command-line interface. I remember learning how to use that, and later DOS on a PC. Some of the games and software back then had no documentation and crashed often. Part of the fun, the challenge, and sometimes the frustration, was figuring out just what a program was supposed to do and how to use it. It will be fun to see what Jacob figures out.

Review: Linux IM Software

I’ve been looking at instant messaging and chat software lately. Briefly stated, I connect to Jabber and IRC networks from at least three different computers. I don’t like having to sign in and out on different machines. One of the nice features about Jabber (XMPP) is that I can have clients signing in from all over the place and it will automatically route messages to the active one. If the clients are smart enough, that is.

Gajim

I have been using Gajim as my primary chat client for some time now. It has a good feature set, but has had a history of being a bit buggy for me. It used to have issues when starting up: sometimes it would try to fire up two copies of itself. It still has a bug when launched from a terminal: if you run gajim & and then exit the shell right away, it will simply die; you have to wait a few seconds before closing the terminal you launched it from. It has also had issues with failing to reconnect properly after a dropped network connection, and with generating spurious “resource already in use” errors. Upgrades sometimes fix bugs, and sometimes introduce them.
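A possible workaround for the launch-from-a-terminal problem — my own guess, not something the Gajim documentation prescribes — is to detach it from the terminal entirely:

    # Start Gajim in its own session so closing the terminal doesn't kill it.
    setsid gajim >/dev/null 2>&1 &
    # or: nohup gajim >/dev/null 2>&1 &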

The latest one I’ve been dealing with is its auto-idle support. Sometimes it will fail to recognize that I am back at the machine. Even weirder, sometimes it will set one of my accounts to available status, but not the other.

So much for my complaints about Gajim; it also has some good sides. It has excellent multi-account support. You can have it present your multiple accounts as separate sections in the roster, or you can have them merged. Then, say, all your contacts in a group called Friends will be listed together, regardless of which account you use to contact them.

The Jabber protocol (XMPP) permits you to connect from multiple clients. Each client specifies a numeric priority for its connection. When someone sends you a message, it is delivered to the connection with the highest priority. The obvious feature, then, is to lower your priority when you are away (or auto-away due to being idle), so that you always get IMs at the device you are actively using. Gajim supports this by letting you specify timeouts that move you into different away states, and with the advanced configuration editor you can also set the priority each state maps to. So, if Gajim actually recognized your idleness correctly, this would be great.

I do also have AIM and MSN accounts which I use rarely. I run Jabber gateways to each of these on my server, so there is no need for me to use a multiprotocol client. That also is nice because then I can use a simple Jabber client on my phone, laptop, whatever and see all my contacts.

Gajim does not support voice or video calls.

Due to an apparent bug in Facebook, the latest Gajim release won’t connect to Facebook servers, but there is a patch that claims to fix it.

Psi

Psi is another single-protocol Jabber client, and like Gajim, it runs on Linux, Windows, and MacOS. Psi has a nicer GUI than Gajim, and is more stable. It is not quite as featureful, though, and one huge omission is that it doesn’t support dropping priority on auto-away (though, weirdly, it does support a dropped priority when you manually set yourself away).

Psi doesn’t support account merging, so it always shows my contacts from one account separately from those from another. I like having the option in Gajim.

There is a fork of Psi known variously as psi-dev or psi-plus or Psi+. It adds that missing priority feature and some others. Unfortunately, I’ve had it crash on me several times. Not only that, but the documentation, wiki, bug tracker, everything is available only in Russian. That is not very helpful to me, unfortunately. Psi+ still doesn’t support account merging.

Both branches of Psi support media calling.

Kopete

Kopete is a KDE multiprotocol instant messenger client. I gave it only about 10 minutes of my time, because it is far from meeting my needs. It doesn’t support adjustable priorities, as far as I can tell. It also doesn’t support XMPP service discovery, which is used to do things like establish links to other chat networks via a Jabber gateway. And it has no way to access ejabberd’s “send message to all online users” feature (which can be reached via service discovery), which I need in emergencies at work. It does offer multimedia calls, but that’s about it.

Update: A comment pointed out that Kopete can do service discovery, though it is in a very non-obvious place. However, it still can’t adjust priority when auto-away, so I still can’t use it.

Pidgin

Pidgin is a multiprotocol chat client. I have been avoiding it for years, with the legitimate fear that it was “jack of all trades, master of none.” Last I looked at it, it had the same limitations that Kopete does.

But these days, it is more capable. It supports all those XMPP features. It supports priority dropping by default, and with a plugin, you can even configure all the priority levels just like with Gajim. It also has decent, though not excellent, IRC protocol support.

Pidgin supports account merging — and in fact, it doesn’t support any other mode. You can, for instance, tell it that a given person on IRC is the same as a given Jabber ID. That works, but it’s annoying because you have to manually do it on every machine you’re running Pidgin on. Worse, they used to support a view without merged accounts, but don’t anymore, and they think that’s a feature.

Pidgin does still miss some nifty features that Gajim and Psi both have. Both of those clients will not only tell you that someone is away, but, if you hover over their name, how long they have been away. (Gajim says “away since”, while Psi shows “last status at”. Same data either way.) Pidgin has the data to show this, but doesn’t. You can dig it out of the system log if you like, but unhelpfully, it’s not in the log for an individual person.

Also, the Jabber protocol supports notifications while in a chat: that the contact is typing, is paying attention to the conversation, or has closed the chat window. Psi and Gajim have configurable support for these; you can send whatever notifications your privacy preferences allow. Pidgin, alas, removed that option, and again they see this as a feature.

Pidgin, as a result, makes me rather nervous. They keep removing useful features. What will they remove next?

It is difficult to change colors in Pidgin. It follows the Gtk theme, and there is a special plugin that will override some, but not all, Gtk options.

Empathy

Empathy supports neither priority dropping when away nor service discovery, so it’s not usable for me. Its feature set appears sparse in general, although it has a unique desktop sharing option.

Update: this section added in response to a comment.

On IRC

I also use IRC, and have been using Xchat for that for quite some time now. I tried IRC in Pidgin. Its IRC support is OK, but not great. It can automatically identify to nickserv, but that feature is under-documented, and it doesn’t support multiple IRC servers for a given network.

I’ve started using xchat with the bip IRC proxy, which makes connecting from multiple machines easier.

Switched from KDE to xmonad

Within the last couple of days, I’ve started using xmonad, a tiling window manager, instead of KDE. Tiling window managers automatically position most windows on your screen, freeing you from having to move, rearrange, and resize them all the time. It sounds scary at first, but it turns out to be incredibly nice and efficient. There are some nice videos and testimonials at the xmonad homepage.

I’ve switched all the devices I use frequently to xmonad. That includes everything from my 9.1″ Eee (1024×600) to my 24″ workstation at work (1920×1200). I’ve only been using it for 2 days, but already I feel more productive. Also my wrist feels happier because I have almost completely eliminated the need to use a mouse.

xmonad simultaneously feels shiny and modern, and old school. It is perfectly usable as your main interface. Mod-p brings up a dmenu-based quick program launcher, keyboard-oriented of course. No more opening up terminals to launch programs, or worse, having to use the mouse to navigate a menu for them.

There’s a lot of documentation available for xmonad, including an “about xmonad” document, a guided tour, and a step-by-step guide to configuring xmonad that I wrote up.

I’ve been using KDE for at least 8 years now, if not more. WindowMaker, fvwm2, fvwm, etc. before that. This is my first step with tiling window managers, and I like it. You can, of course, use xmonad with KDE. Or you can go “old school” and set up a status bar and tray yourself, as I’ve done. KDE seems quite superfluous when xmonad is in use. I think I’ve replaced a gig of KDE software with a 2MB window manager. Whee!
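For the “old school” route, the setup is small; the Debian package names here are from memory, so treat them as assumptions:

    apt-get install xmonad dmenu xmobar trayer    # window manager, launcher, status bar, tray
    echo 'exec xmonad' > ~/.xsession              # have X sessions start xmonad

Per-user configuration lives in ~/.xmonad/xmonad.hs; the step-by-step guide mentioned above walks through it.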

Take a look at xmonad. If you like the keyboard or the shell, you’ll be hooked.

Asus violating GPL again?

There was a small firestorm last year when people realized that Asus was not releasing source code to GPL’d components of the EeePC. At the time, they eventually did post source code on their website.

Recently I bought an Eee 901. Asus has modified the kernel’s ACPI driver. They released the source code to that on an 8G surf model, but the 901 has additional hardware features in the ACPI space (bluetooth radio power toggle, for instance) that are not in the source they released back then. There are no sources released at all under the 901 section of their website.

Anyone know whom to contact at Asus about this?

Kernel interrupt weirdness?

I’ve had a problem with recent kernels. (I think it’s the kernel that’s doing this.) When my workstation is doing heavy I/O, it repeats keystrokes. For instance, while I was typing this paragraph, audacity was writing audio to disk, and I got this word:

heavvvvvvvvvvvy

It seems as if it thinks I haven’t let up on the keys.

I’ve seen this on two different machines and it seems to have started with 2.6.24 or 2.6.25.

Has anyone else seen this? Any ideas where I’d go to fix it? Incidentally, I’m in X when this happens. I don’t use the console much when there would be a chance for it to happen.

This is such a weird problem I’ve struck out googling, and I’m not even sure which mailing list to take it to.

Linux on the Desktop

Later this month, I will be giving a talk at OSCon about Linux on the corporate desktop — something we have done where I work. I’ve been allotted a 45-minute timeslot. I will, of course, be posting my slides online, and I think OSCon also posts videos of these things.

I’m wondering if readers of my blog would like to leave me some comments on what you’d like to see. What would you like to know about Linux on the corporate desktop? Is there anything that you’d like to make sure I discuss?

datapacker

Every so often, I come across some utility that I need. I think it must have been written before, but I can’t find it.

Today I needed a tool to take a set of files and split them up into directories in a size that will fit on DVDs. I wanted a tool that could either produce the minimum number of DVDs, or keep the files in order. I couldn’t find one. So I wrote datapacker.

datapacker is a tool to group files by size. It is perhaps most often used to fit a set of files onto the minimum number of CDs or DVDs.

datapacker is designed to group files such that they fill fixed-size containers (called “bins”) using the minimum number of containers. This is useful, for instance, if you want to archive a number of files to CD or DVD, and want to organize them such that you use the minimum possible number of CDs or DVDs.

In many cases, datapacker executes almost instantaneously. Of particular note, the hardlink action can be used to effectively copy data into bins without having to actually copy the data at all.

datapacker is a tool in the traditional Unix style; it can be used in pipes and call other tools.

I have, of course, uploaded it to sid. But while it sits in NEW, you can download the source tarball (with debian/ directory) from the project homepage at http://software.complete.org/datapacker. I’ve also got an HTML version of the manpage online, so you can see all the cool features of datapacker. It works nicely with find, xargs, mkisofs, and any other Unixy pipe-friendly program.

Those of you that know me will not be surprised that I wrote datapacker in Haskell. For this project, I added a bin-packing module and support for parsing inputs like 1.5g to MissingH. So everyone else that needs to do that sort of thing can now use library functions for it.

Update… I should have mentioned the really cool thing about this. After datapacker compiled and ran, I had only one mistake that was not caught by the Haskell compiler: I said < where I should have said <= in one place. This is one of the very nice things about Haskell: the language lends itself to compilers that can catch so much. It’s not that I’m a perfect programmer; it’s just that my compiler is pretty crafty.