Category Archives: Software

hpodder to be multithreaded… done right.

I’ll be hacking on my hpodder program this weekend. hpodder is a full-featured podcast aggregator that runs on the command line, and it has many features that other command-line podcatchers like bashpodder, and even GUI tools like iPodder, lack.

I originally envisioned hpodder as something that I’d cron up and run in the background. But I have tended to run it in the foreground more than in the background. Some others have too, and the most-requested hpodder feature is parallel downloads.

So I am working on that. I already have code working, in fact, that will parallelize both the hpodder update (downloading the feeds) and the hpodder download (downloading the actual episodes) commands.

Unlike iPodder, my code will make sure that no more than one thread is ever downloading from a given server at a given time. iPodder had the terribly annoying habit of pointing all of its threads at a single server, pounding it while providing little benefit to anyone with a pipe fatter than the server’s.
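For the curious, here is a minimal sketch of one way to enforce that invariant (illustrative only, not hpodder’s actual code): keep a per-host lock in an MVar-protected Map, and take the host’s lock around each download.

import Control.Concurrent.MVar
import qualified Data.Map as Map

type HostLocks = MVar (Map.Map String (MVar ()))

-- Run an action while holding the lock for the given host,
-- creating the lock on first use. At most one action per host
-- is ever in flight at a time.
withHostLock :: HostLocks -> String -> IO a -> IO a
withHostLock locks host action = do
    lock <- modifyMVar locks $ \m ->
        case Map.lookup host m of
          Just l  -> return (m, l)
          Nothing -> do l <- newMVar ()
                        return (Map.insert host l m, l)
    withMVar lock (const action)

Each downloader thread wraps its fetch in withHostLock, so threads pointed at different servers proceed in parallel while threads sharing a server queue up behind one another.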

Before all this multithreaded stuff could be written, I needed to write my own status bar code instead of just letting curl display its own status bar. (That wouldn’t work with 5 curls running at once.)

I decided that I would write some generic status bar code, rather than something specific to hpodder. I took the apt-get status bar as an example, and whipped one up in Haskell and added it to my MissingH Haskell library.

But a status bar just begged for another feature: a generalized progress tracker, something that can keep track of where a task (and its sub-tasks) stands and calculate ETA, estimated time remaining, speed, etc. So I wrote that and made the status bar use it.
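The essential arithmetic behind such a tracker is small; here is a sketch (the real tracker in MissingH does more, such as handling sub-tasks):

data Progress = Progress
    { completed :: Integer   -- units of work finished so far
    , total     :: Integer   -- total units of work
    , startTime :: Integer   -- POSIX seconds when the task began
    }

-- Estimated seconds remaining, given the current POSIX time.
-- Nothing until at least one unit is done (no rate to extrapolate from).
remainingSecs :: Progress -> Integer -> Maybe Integer
remainingSecs p now
    | completed p == 0 = Nothing
    | otherwise        = Just ((total p - completed p) * elapsed `div` completed p)
  where elapsed = now - startTime p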

AND, a status bar begged for a generalized numeric formatter: something that could render 512 as 512, 2048 as 2K, 1048576 as 1M, etc. So I wrote that, and it’s general enough that it can render into both SI and “binary” units by default (and others that users may want).

Finally, I wrote a function to take a number of seconds and render it in something friendly like 23m5s like apt uses, and shoved that in MissingH as well.
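Neither helper is complicated. Here are sketches of both ideas (simplified, and not the actual MissingH implementations):

-- Pick the largest unit that keeps the value at least 1, then render it.
-- Assumes a non-empty unit list.
renderNum :: Double -> [String] -> Double -> String
renderNum base (u:us) n
    | n < base || null us = show (round n :: Integer) ++ u
    | otherwise           = renderNum base us (n / base)

binary, si :: Double -> String
binary = renderNum 1024 ["", "K", "M", "G", "T"]
si     = renderNum 1000 ["", "k", "M", "G", "T"]

-- Render a duration apt-style: renderSecs 1385 == "23m5s".
renderSecs :: Integer -> String
renderSecs s
    | s < 60    = show s ++ "s"
    | s < 3600  = show (s `div` 60) ++ "m" ++ show (s `mod` 60) ++ "s"
    | otherwise = show (s `div` 3600) ++ "h" ++ renderSecs (s `mod` 3600)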

So now hpodder will have a status bar, and any other Haskell program can use the same status bar code in minutes because it’s all generic. Or if someone just needs to render a number in megabytes, they can do that.

I really enjoy it when a program needs a solution that is generic enough to put in a larger library. I try to put as much of my Haskell code in MissingH as I can, so as to make it useful to others (and my other programs).

Desktop Linux: NFS or something else?

Recently, I asked for opinions on desktop Linux. Thanks very much to those that replied. I’ve set up an old laptop as an experiment. I’m using Debian, Gnome, and SystemImager. It’s been an interesting project (especially getting SystemImager and a splash screen program to do what I want).

I’d like for my desktop machines to mount /home over the network. I could use NFS, but of course that has all the well-known security risks. Is there a better network filesystem that is easy to use, fast, and more secure than NFS?

Desktop Linux Opinions?

I’m brainstorming about ways of setting up Linux desktop machines for people used to Windows on a LAN. It could be any size of LAN.

I’d like people to be able to sit down at any Linux machine on the LAN and log in — probably using an LDAP directory for that, and NFS-mounted home directories. I wouldn’t want to NFS-mount the entire filesystem, for performance reasons.

So, some of the things I’m thinking about are:

  • Desktop environment: KDE or Gnome? Which would give Windows users all the tools they’d want? Which would they feel most at home with? I’m thinking it’s KDE, but Gnome has a more polished “feel” to it.
  • Image management. How could the desktops be updated? Just rsync everything except fstab over? Can we actually have a single system image? Is XOrg powerful enough to just recognize hardware at boot and Do The Right Thing? Can we build a unified initrd somehow?
  • Distribution. Debian, Ubuntu, Kubuntu? Do the Ubuntus bring anything to the table, if we take as a given that an experienced Debian admin is managing all this?
  • Laptops. What do we do about the home directories there? Some sort of automated rsync thingy?
  • Installation. FAI? Or some homegrown thing that just boots up, partitions, and runs rsync?

Managing Software

Recently I mentioned that I hate releasing software. It’s true, and I’ve decided that the first part of fixing it is to tackle the presentation of software to the world.

My current scheme of darcs.complete.org for repositories plus bare directories on my gopher (yes, that gopher) site leaves a lot to be desired. There is no bug tracker, there are few screenshots, there is no consistency. It is also not easy to empower others to work on them directly.

At the same time, I am the sole or primary contributor to most of them. These are not huge kernel-sized projects. These are smaller, bite-size projects. So I don’t want or need a lot of overhead. I’ve been thinking about my options.

  • I could just use sourceforge.net. I poked around there today, and all the advertising there is a real eyesore. Plus I figure that if anyone is getting paid for all my hard work, why should it be some random people that no longer write free software? On the other hand, it would be an easy way for my projects to gain visibility. Or I could use alioth, and give up both the advertisements and the larger visibility. But I don’t like giving up control over my site’s appearance, or being beholden to others for backups, uptime, etc.
  • I could use trac. It’s nice, and is the only option that supports darcs, and has a very cool wiki integrated into everything (even parsing out keywords from changelogs). On the other hand, downloads are — at best — attachments to wiki pages. There is no download manager. And you have to set up a separate trac instance for each project. That is a non-starter for me. If I can’t see all my bug reports at one place, the bug tracker will be too annoying to use.
  • I could use gforge or savane. These are the sourceforge forks. Neither seems to be as resource-hungry as I expected, and debs are available for both. I could just install them locally and use them for my projects, though that seems like overkill. Plus, like SourceForge and Alioth, they have a crappy web-only bug tracker. I’d rather use something like RT that works by email. (Though RT is too resource-intensive to run on my server). However, web-only is better than nothing so I could hold my nose and use it.
  • I could write my own. But I’d rather not, if there’s already something workable out there.

Is anyone else thinking about this? What are your thoughts?

I use more than one computer

I use more than one computer, and I use them quite a bit: three regularly, and two or three more on occasion.

But this seems to be a surprise to many programs.

I want to carry certain things with me from machine to machine, access them from anywhere, and have changes propagate across.

Things such as:

  • Bookmarks
  • newsrc files (to mark which Usenet articles are read)
  • mail (solved with my OfflineIMAP program)
  • A small set of files
  • Contacts
  • Calendar/scheduler (appointments)

Now, MacOS X seems to do some of this with their for-pay mac.com service. But I wonder why so few other apps do this out of the box?

The newsrc question is a particularly difficult one to crack, it seems. There are various schemes for synchronizing bookmarks, but none seem to work reliably.

Sigh.

I Hate Releasing Software

I’ve written a bunch of software. I like coding, I like debugging. I like getting e-mail from people that have used my software and are happy.

I don’t like actually having to make a release.

To do a good and proper release of a program, I’d be doing approximately these tasks:

  • Upload to Debian
  • Push to my darcs repo
  • Upload a tar.gz to my server
  • Update a webpage with the latest tar.gz
  • Announce the release to freshmeat
  • Announce the release to a mailing list
  • Update/post screenshots, if things have changed

So I have two wishes. First, I want a tool that maintains a website with software listings. Each program should have its own page, with a description, links to mailing lists, download links, links to the darcs repo, screenshots, etc. It should be simple but I’m too lazy to write it.

Secondly, there should be a tool that will do all of the above tasks (except the screenshots) for me. It should infer the name of the project and the version from the data in my working directory, and it should be able to automate this whole process without me having to lift a finger.
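In my head, the core of such a tool looks something like this sketch (every command, host, and path here is a made-up placeholder):

import System.Process (callCommand)
import System.Directory (getCurrentDirectory)
import System.FilePath (takeFileName)

-- Hypothetical release driver: infer the project name from the
-- working directory, then run each release step in turn.
main :: IO ()
main = do
    project <- fmap takeFileName getCurrentDirectory
    let version = "1.0.0"   -- really: parsed from the changelog or .cabal file
        tarball = project ++ "-" ++ version ++ ".tar.gz"
    callCommand "darcs push"
    callCommand ("darcs dist -d " ++ project ++ "-" ++ version)
    callCommand ("scp " ++ tarball ++ " example.org:/srv/software/")

The remaining steps (the Debian upload, the freshmeat and mailing list announcements, the web page update) would hang off the same skeleton.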

Sadly, no such thing seems to exist.

And, to date, I’ve been too lazy to write one. Does anyone know of such a thing?

Disk encryption support in Etch

Well, I got my new MacBook Pro 15″ in yesterday. I’ll write something about that shortly. The main OS for this machine is not Mac OS X, though, but Debian.

I decided that, being a laptop, I would like to run dm-crypt on here. Much to my delight, the etch installers support dm-crypt out of the box.

Not only that, but they supported this setup out of the box, too:

  • Two partitions for Debian — one for /boot, everything else on the second one
  • The second partition is completely encrypted
  • Inside the encrypted container is an LVM physical volume
  • Inside the LVM physical volume are logical volumes for /, /home, /usr, /var, and swap
  • XFS is used for each filesystem

Not only that, but it set up the proper boot sequence for all of this out of the box, too.
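Under the hood, the generated configuration boils down to something like this (the device and volume group names here are illustrative, not necessarily what the installer picks):

# /etc/crypttab: unlock the encrypted partition at boot
sda2_crypt  /dev/sda2  none  luks

# /etc/fstab: /boot plus the filesystems inside the volume group
/dev/sda1            /boot  ext3  defaults  0  2
/dev/mapper/vg-root  /      xfs   defaults  0  1
/dev/mapper/vg-home  /home  xfs   defaults  0  2
/dev/mapper/vg-usr   /usr   xfs   defaults  0  2
/dev/mapper/vg-var   /var   xfs   defaults  0  2
/dev/mapper/vg-swap  none   swap  sw        0  0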

So I turn on the unit, enter the password for the encrypted partition, and then the system continues booting.

Nice. Very nice.

Kudos to the debian-installer and initramfs teams.

Another Haskell Solution to Lars’ Problem

Yesterday, I posted an 18-line solution to Lars’ language problem. One problem with it was that it was not very memory-efficient (or time-efficient, for that matter). In other words, it was optimized for elegance.

Here is a 22-line solution that is much more memory-efficient and works well with his “huge” test case. Note to Planet readers: Planet seems to corrupt code examples at times; click on the original story to see the correct code.

import System.Environment
import Data.List
import Data.Char
import qualified Data.Map as Map

custwords = filter (/= "") . lines . map (conv . toLower)
    where iswordchar x = isAlphaNum x && isAscii x
          conv x = if iswordchar x then x else '\n'

wordfreq inp = Map.toList $ foldl' updmap (Map.empty::Map.Map String Int) inp
    where updmap nm word = case Map.lookup word nm of
                             Nothing -> Map.insert word 1 nm
                             Just x -> (Map.insert word $! x + 1) nm

freqsort (w1, c1) (w2, c2) = if c1 == c2
                                 then compare w1 w2
                                 else compare c2 c1

showit (word, count) = show count ++ " " ++ word
main = do args <- getArgs
          interact $ unlines . map showit . take (read . head $ args) .
                     sortBy freqsort . wordfreq . custwords

The main change from the previous example to this one is using a Map to keep track of the frequency of each word.
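As an aside, the updmap helper can be written more compactly with Data.Map’s insertWith. A sketch, using the strict flavor of the module so the counts don’t accumulate as thunks:

import Data.List (foldl')
import qualified Data.Map.Strict as Map

wordfreq :: [String] -> [(String, Int)]
wordfreq = Map.toList . foldl' bump Map.empty
  where bump m word = Map.insertWith (+) word 1 m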

A Haskell solution to Lars’ Problem

Thanks to a little glitch in Planet, one of Lars’ posts from 2004 came to my attention. In it, he proposes a test for language benchmarking:

Read text from the standard input and count the number of times each word occurs. Convert letters to lower case. Order the words according to frequency, words with the same frequency should be ordered in ascending lexicographic order according to character code. Print out the top N words, where N is a decimal number given on the command line. Each output line must contain the count, a space, and the word (in lower case), and end in an ASCII LINE FEED character. Output must contain exactly N such output lines and no other output lines.

A word contains only ASCII letters A through Z and a through z (convert upper case to lower case) and ASCII digits 0 through 9 and is not empty. All other characters separate words and are ignored except to notice word boundaries. Word boundaries only occur at the beginning and end of the file and at non-word characters. You may not assume a maximum length for the word, line, or input file.

He provides a tarball with sample implementations in C, Python, and Shell.

His C code is 183 lines long, Python 57, and Shell 11. The specs for this test seem particularly suited for shell.

I wrote a version in Haskell, commented and formatted approximately the same as his Python version, but using an algorithm more like the shell version. It comes in at 18 lines. Here it is:

import System.Environment
import Data.List
import Data.Char

custwords = filter (/= "") . lines . map (conv . toLower)
    where iswordchar x = isAlphaNum x && isAscii x
          conv x = if iswordchar x then x else '\n'

wordfreq = map (\x -> (head x, length x)) . group . sort

freqsort (w1, c1) (w2, c2) = if c1 == c2
                                 then compare w1 w2
                                 else compare c2 c1

showit (word, count) = show count ++ " " ++ word
main = do args <- getArgs
          interact $ unlines . map showit . take (read . head $ args) .
                     sortBy freqsort . wordfreq . custwords

Taking a look at this, one thing that might strike you is the function composition in main. This takes the output from one function and feeds it into the next -- and the Haskell syntactic sugar for this makes it look a lot like the pipes in the shell version. The interact call takes, as a parameter, a function that takes a string and returns a string. interact supplies stdin as the input and prints the output to stdout. Note that, since Haskell is lazy, this does not mean buffering up the entire input or output -- it is read and written on demand.
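If interact is new to you, here is about the smallest meaningful example of it (unrelated to Lars’ problem): it numbers the lines of standard input, and thanks to laziness the output starts appearing before the input has been fully read.

-- Number each line of stdin; output is produced incrementally.
main :: IO ()
main = interact (unlines . zipWith number [1 ..] . lines)
  where number :: Integer -> String -> String
        number n line = show n ++ " " ++ line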

The rest of the functions are also standard in Haskell, and you can find them in the index to the library reference if you want to learn more.

I understand and agree that short code doesn't necessarily mean good code, but I think that Haskell provides a very elegant and expressive solution to many problems -- one that also happens to be remarkably concise.

Updated 9/4: Changed isLower to isAlphaNum to fix a bug, and removed unnecessary Data.Map import

Lazy big-O and Haskell Answers

First, Evan has a host of interesting articles about Haskell, and I found his lazy big-O article particularly interesting.

Next, Eric Warmenhoven has recently taken up Haskell and posted some Haskell questions on his blog. Eric, here are some answers for you.

First, regarding shared libraries. While Haskell can be compiled to machine code, and GHC is a popular way to do that, a standard C way of representing information about a library (.h and .so files) is not really rich enough for Haskell. Consider, for instance, that functions may accept arguments of a wide range of types (or even things such as lists of any type). Haskell also performs type checking, and thus must know the type of arguments a function expects, as well as its return type, at compile time. So you do not generally compile Haskell code directly to .so files, but rather use the compiler’s module or package support to do that. See Cabal for more information on packages. Through the FFI (Foreign Function Interface), it is possible to both call into C and be called from C with Haskell code, if that’s where you want to go. It is actually easier in Haskell than in any other high-level language I’ve dealt with before.
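To give a flavor of it, here is the classic minimal FFI example, calling C’s sin() from Haskell (standard libraries only):

{-# LANGUAGE ForeignFunctionInterface #-}
import Foreign.C.Types (CDouble)

-- Import C's sin() as an ordinary (pure) Haskell function.
foreign import ccall "math.h sin"
    c_sin :: CDouble -> CDouble

main :: IO ()
main = print (c_sin 0.5)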

Regarding circular module deps — I’ve never used them and can’t really comment. I can say, though, that the .boot files are internal files created by GHC.

Regarding practical stuff in tutorials — I share your complaint there. I have found a few that are better than the others: Yet Another Haskell Tutorial, and Haskell: The Craft of Functional Programming, 2nd ed., by Simon Thompson. Several of us are working intermittently on a project called Haskell V8 — take a look and darcs send me patches! I would say that Haskell’s I/O system is the most powerful I’ve seen in many ways — especially with regard to laziness — and in the upcoming GHC 6.6 release, it will be both lazy *and* blazingly fast. Very nice.

There isn’t much Debian-specific documentation, but there is a draft policy and a mailing list (link to it is in the policy doc).

Hope this helps!