Category Archives: Software

Migrated from Hetzner to OVH hosting

Since August 2011, my sites such as complete.org have been running on a Xen-backed virtual private server (VPS) at Hetzner Online, based in Germany. I had what they called their VQ19 package, which included 2GB RAM, 80GB HDD, 100Mb NIC and 4TB transfer.

Unlike many other VPS hosts, I never had performance problems. However, I did sometimes have hardware problems with the host, and they could take hours to resolve. Their tech support works only during business hours, German time, which was also a problem.

Meanwhile, OVH, a large European hosting company, recently opened a datacenter in Canada. Although they no longer offer their value-line Kimsufi dedicated servers there (which started at $11.50/mo), they do offer their midrange SoYouStart servers there. $50/mo gets a person a 4-core 3.2GHz Xeon server with 32GB RAM, 2x2TB SATA HDD, and 200Mbps bandwidth. Not bad at all! The Kimsufi line remains a good option for lower-end needs where it is available.

I signed up for one of the SoYouStart servers. I’ve been pleased with my choice to migrate, and with the possibilities that having hardware like that at my disposal opens up, but it is not without its downsides.

The primary downside is the lack of any kind of KVM console. If the server doesn’t boot, I can’t see the Grub error message (or whatever) behind it. They do provide hardware support and automatic technician dispatching when the server isn’t pingable, but they state they have no KVM access at all. They support many OS flavors and have premade images for them, but there is no way to install from a custom ISO; if you want ZFS on Linux, for instance, you can’t very easily build it into the root filesystem.

My server was promised within 72 hours, but was delivered much quicker: within about an hour. Twice within the first day they told me they had to replace a motherboard; once they did it in 30 minutes, and the other time it took them 2.5 hours for some reason. They do have phone support, which answers almost immediately, but the people there are not the people actually in the datacenter. It was frustrating to have a server down for hours with nobody really commenting on what was going on.

The server performs quite well, and after the initial issues, I’ve been happy.

I was initially planning an all-ZFS installation. SoYouStart does offer a rescue environment, but it doesn’t support ZFS, so I figured I had better stick with an ext4 root at least. The default Debian install uses RAID1 on md-raid, with a 20GB root partition and the rest of the 2TB drive in /home, plus a swap partition on each drive (mysteriously NOT in the RAID!). So I broke the mirror on /home and converted its two halves into the legs of a mirrored vdev for a zpool.
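
For anyone attempting something similar, the conversion went roughly like this sketch; the partition names, md device, and pool name here are illustrative, not necessarily what the SoYouStart image uses:

umount /home
mdadm --stop /dev/md2                        # stop the md array that backed /home
mdadm --zero-superblock /dev/sda3 /dev/sdb3  # wipe md metadata from both former legs
zpool create -o ashift=12 tank mirror /dev/sda3 /dev/sdb3
zfs create -o mountpoint=/home tank/home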

I run all of the real work inside KVM VMs, so that should minimize the number of times I have to do anything to the root filesystem that could cause trouble.

SoYouStart includes 100GB of space on a separate FTP server for backup purposes. I have scripts that upload nightly tarballs of the root filesystem, plus full “zfs send” streams of everything else. Every hour, it uploads an incremental “zfs send” stream as well. This all works quite nicely; even if the machine is a complete loss, I’d never lose more than an hour’s work, and could restore it completely from a rescue environment. Very nice!
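
The hourly script is nothing fancy; stripped down, it amounts to something like this sketch (the dataset, snapshot naming scheme, and FTP URL are placeholders, not my exact code):

#!/bin/bash
# Hourly incremental "zfs send" pushed to the backup FTP space.
set -e
DS=tank/home
PREV=$(zfs list -H -t snapshot -o name -s creation -d 1 "$DS" | tail -n 1)
NOW="$DS@hourly-$(date +%Y%m%d%H%M)"
zfs snapshot "$NOW"
zfs send -i "$PREV" "$NOW" | gzip | \
    curl -sS -T - "ftp://user:password@ftpback.example.net/home-$(date +%Y%m%d%H%M).zsend.gz"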

I’ll write more in a few days about the ZFS setup I’m using, and some KVM discoveries as well.

VirtFS isn’t quite ready

Despite claims to the contrary [PDF], VirtFS — the 9P-based virtio KVM/QEMU layer designed to pass through a host’s filesystem to the guest — is quite slow. I have yet to get it to perform at even 1/10 the speed of the virtual block device (VBD). That’s unfortunate, because in theory it should be significantly faster. At this rate, I suspect even NFS will be significantly faster.
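
For context, the setup under test looked roughly like this; the path and mount tag are examples. The host exports a directory with a qemu option such as:

-virtfs local,path=/srv/export,mount_tag=hostshare,security_model=passthrough

and the guest mounts it with:

mount -t 9p -o trans=virtio,version=9p2000.L hostshare /mnt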

Beyond that, it seems impossible to use VirtFS as the root filesystem in a VM, at least with Debian; initramfs-tools doesn’t know how to build an initrd in that situation, and the support is just not there.

It would make a great combination with btrfs or zfs, but unfortunately looks to be just not ready yet.

How to fix “fstrim: Operation not supported” under KVM?

Maybe someone out there will have some ideas.

I have a KVM host running wheezy, with wheezy-backports versions of libvirt and qemu. I have defined a guest, properly set discard=unmap in the domain XML file for it, verified that’s being passed to the guest, but TRIM/DISCARD is just not working.
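
For reference, the relevant portion of the domain XML looks like this (the disk path and target are examples); the discard attribute goes on the driver element:

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' discard='unmap'/>
  <source file='/var/lib/libvirt/images/guest.qcow2'/>
  <target dev='sda' bus='scsi'/>
</disk>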

Mounting the ext4 filesystem with discard has no effect, and fstrim / always reports:

fstrim: /: FITRIM ioctl failed: Operation not supported

Every single time.

I’ve tried with the virtio, IDE, and SCSI (both default and virtio-scsi) backend drivers. The guest is also running wheezy (i386 version; the host is amd64) and I’ve tried the latest 3.12 backported kernel for it. No dice.

If I shut down the VM and mount the filesystem on the host, fstrim works fine.

Everything says this should work. But it doesn’t.

Any ideas?

Why and how to run ZFS on Linux

I’m writing a bit about ZFS these days, and I thought I’d write a bit about why I am using it, why it might or might not be interesting for you, and what you might do about it.

ZFS Features and Background

ZFS is not just a filesystem in the traditional sense, though you can use it that way. It is an integrated storage stack, which can completely replace the need for LVM, md-raid, and even hardware RAID controllers. This permits quite a bit of flexibility and optimization not possible when building a stack from those separate components. For instance, if a drive in a RAID fails, ZFS need only rebuild the parts that actually contain data.

Let’s look at some of the features of ZFS:

  • Full checksumming of all data and metadata, providing protection against silent data corruption. The only other Linux filesystem to offer this is btrfs.
  • ZFS is a transactional filesystem that ensures consistent data and metadata.
  • ZFS is copy-on-write, with snapshots that are cheap to create and impose virtually undetectable performance hits. Compare to LVM snapshots, which make writes notoriously slow and require an fsck and mount to get to a readable point.
  • ZFS supports easy rollback to previous snapshots.
  • ZFS send/receive can perform incremental backups much faster than rsync, particularly on systems with many unmodified files. Since it works from snapshots, it guarantees a consistent point-in-time image as well.
  • Snapshots can be turned into writeable “clones”, which simply use copy-on-write semantics. It’s like a cp -r that completes almost instantly and takes no space until you change it. (Example commands for snapshots, clones, and send/receive follow this list.)
  • The datasets (“filesystems” or “logical volumes”, in LVM terms) in a zpool (“volume group”, in LVM terms) can shrink or grow dynamically. They can have individual maximum and minimum sizes set, but unlike LVM, a ZFS dataset can use any space available in the pool; if, say, /usr gets bigger than you thought, there is no need to manually allocate more space to it.
  • ZFS is designed to run well on big iron, and scales to massive amounts of storage. It supports SSDs as L2 cache (L2ARC) and ZIL (intent log) devices.
  • ZFS has some built-in compression methods that are quite CPU-efficient and can yield not just space but performance benefits in almost all cases involving compressible data.
  • ZFS pools can host zvols, block devices under /dev that store their data in the zpool. zvols support TRIM/DISCARD, so they are ideal for storing VM images, as they can instantly release space freed by the guest OS. They can also be snapshotted and backed up like the rest of ZFS.
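
To make the snapshot, rollback, clone, and send/receive points above concrete, here are example invocations; the pool, dataset, and host names are made up:

zfs snapshot tank/home@before-upgrade            # instantaneous; initially consumes no space
zfs rollback tank/home@before-upgrade            # undo everything written since the snapshot
zfs clone tank/home@before-upgrade tank/scratch  # writeable CoW copy of the snapshot
zfs send -i tank/home@monday tank/home@tuesday | \
    ssh backuphost zfs receive pool/home         # incremental replication
zfs create -V 20G tank/vm-disk0                  # a zvol; appears as /dev/zvol/tank/vm-disk0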

Although it is often considered a server filesystem, ZFS has been used in plenty of other situations for some time now, with ports to FreeBSD, Linux, and MacOS. I find it particularly useful:

  • To have faith that my photos, backups, and paperwork archives are intact. zpool scrub at any time will read the entire dataset and verify the integrity of every bit.
  • I can create snapshots of my system before running apt-get dist-upgrade, making it easy to track down issues or roll back to a known-good configuration. Ideal for people tracking sid or testing. One can even boot from a previous snapshot.
  • Many scripts exist that make frequent snapshots and retain them for a period of time, as a way of protecting work in progress against an accidental rm. There is no reason not to snapshot /home every 5 minutes, for instance; a minimal sketch of such a script follows this list. It’s almost as good as storing / in git.
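
A minimal sketch of such a rotation script, run from cron every 5 minutes; the dataset name and retention count are arbitrary:

#!/bin/bash
# Snapshot tank/home, then prune all but the newest 24 auto snapshots.
set -e
DATASET=tank/home
zfs snapshot "$DATASET@auto-$(date +%Y%m%d-%H%M)"
zfs list -H -t snapshot -o name -s creation -d 1 "$DATASET" | \
    grep '@auto-' | head -n -24 | xargs -r -n 1 zfs destroy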

The added level of security in having cheap snapshots available is almost worth it by itself.

ZFS drawbacks

Compared to other Linux filesystems, there are a few drawbacks of ZFS:

  • The CDDL license will prevent it from ever being part of the mainline Linux kernel tree
  • It is more RAM-hungry than most, although with tuning it can even run on the Raspberry Pi.
  • A 64-bit kernel is strongly preferred, even in low-memory situations.
  • Performance on many small files may be less than ext4
  • The ZFS cache does not shrink and expand in response to changing RAM usage conditions on the system as well as the normal Linux cache does.
  • Compared to btrfs, ZFS lacks some features, such as the ability to shrink an existing pool or easily change storage allocation on the fly. On the other hand, ZFS’s features have never caused me a kernel panic, while half the things I liked about btrfs have.
  • ZFS is already quite stable on Linux. However, the GRUB, init, and initramfs code supporting booting from a ZFS root and /boot is less stable. If you want to go 100% ZFS, be prepared to tweak your system to get it to boot properly. Once done, however, it is quite stable.

Converting to ZFS

I have written up an extensive HOWTO on converting an existing system to use ZFS. It covers workarounds for all the boot-time bugs I have encountered as well as documenting all steps needed to make it happen. It works quite well.

Additional Hints

If setting up zvols to be used by VirtualBox or some such system, you might be interested in managing zvol ownership and permissions with udev.
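
Here is the sort of udev rule I mean, as a sketch; the rule filename, user, and group are hypothetical. zvols show up as /dev/zd* block devices:

# /etc/udev/rules.d/90-zvol-perms.rules (hypothetical example)
KERNEL=="zd*", SUBSYSTEM=="block", OWNER="jdoe", GROUP="vboxusers", MODE="0660"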

Debian-Live Rescue image with ZFS On Linux; Ditched btrfs

I’m a geek. I enjoy playing with different filesystems, version control systems, and, well, for that matter, radios.

I have lately started to worry about the risks of silent data corruption, and as such, looked to switch my personal systems to either ZFS or btrfs, both of which offer built-in checksumming of all data and metadata. I initially opted for btrfs, because of its tighter integration into the Linux kernel and ability to shrink an existing btrfs filesystem.

However, as I wrote last month, that experiment was not a success. I had too many serious performance regressions and one kernel panic too many, and decided it wasn’t worth it. I also concluded that the SuSE people got it wrong, deeply wrong, when they declared btrfs ready for production. To its credit, I never lost any data. But it simply reduces uptime too much.

That left ZFS. Before I build a system, I always want to make sure I can repair it. So I started with the Debian Live rescue image, and added the zfsonlinux.org repository to it, along with some key packages to enable the ZFS kernel modules, GRUB support, and initramfs support. The resulting image is described, and can be downloaded from, my ZFS Rescue Disc wiki page, which also has a link to my source tree on github.

In future blog posts in the series, I will describe the process of converting existing Debian installations to use ZFS, of getting them to boot from ZFS, some bugs I encountered along the way, and some surprising performance regressions in ZFS compared to ext4 and btrfs.

Results with btrfs and zfs

The recent news that openSUSE considers btrfs safe for users prompted me to consider using it. And indeed I did. I was already familiar with zfs, so considered this a good opportunity to experiment with btrfs.

btrfs makes an intriguing filesystem for all sorts of workloads. The benefits of btrfs and zfs are well-documented elsewhere. There are a number of features btrfs has that zfs lacks. For instance:

  • The ability to shrink a device that’s a member of a filesystem/pool
  • The ability to remove a device from a filesystem/pool entirely, assuming enough free space exists elsewhere for its data to be moved over.
  • Asynchronous deduplication that imposes neither a synchronous performance hit nor a heavy RAM burden
  • Copy-on-write copies down to the individual file level with cp --reflink (example invocations of several of these operations follow this list)
  • Live conversion of data between different profiles (single, dup, RAID0, RAID1, etc)
  • Live conversion between on-the-fly compression methods, including none at all
  • Numerous SSD optimizations, including alignment and both synchronous and asynchronous TRIM options
  • Proper integration with the VM subsystem
  • Proper support across the many Linux architectures, including 32-bit ones (zfs is currently only flagged stable on amd64)
  • Does not require excessive amounts of RAM
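
Example invocations for several of these; the device names and mount point are illustrative:

btrfs filesystem resize -50g /mnt                         # shrink a member device by 50GB
btrfs device delete /dev/sdb /mnt                         # remove a device, migrating its data off
cp --reflink=always big.img big-copy.img                  # instant CoW copy of a single file
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt  # live conversion between profiles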

The feature set of ZFS that btrfs lacks is well-documented elsewhere, but there are a few odd btrfs missteps:

  • There is no way to see how much space a subvolume/filesystem is using without turning on quotas. Even then, it is cumbersome and not reported with df like it should be.
  • When a maximum size for a subvolume is set via a quota, it is not reported via df; applications have no idea when they are about to hit the maximum size of a filesystem.

btrfs would be fine if it worked reliably. I should say at the outset that I have never lost any data due to it, but it has caused enough kernel panics that I’ve lost count. Several times I had a file that produced a panic when I tried to delete it; several times it took more than 12 hours to unmount a btrfs filesystem; hardlink-heavy workloads took days longer to complete than on zfs or ext4; and that’s just what I wrote about. I tried to use btrfs balance to change the metadata allocation on the filesystem, and never did get it to complete; it seemed to go into an endless I/O pattern after the first 1GB of metadata and never got past that. I didn’t bother trying the live migration of data from one disk to another on this filesystem.

I wanted btrfs to work. I really, really did. But I just can’t see it working. I tried it on my laptop, but had to turn off CoW on my virtual machine’s disk because of the rm bug. I tried it on my backup devices, but it was unusable there due to being so slow. (Also, the hardlink behavior is broken by default and requires btrfstune -r. Yipe.)

At this point, I don’t think it is really all that worth bothering with. I think the SuSE decision is misguided and ill-informed. btrfs will be an awesome filesystem. I am quite sure it will, and will in time probably displace zfs as the most advanced filesystem out there. But that time is not yet here.

In the meantime, I’m going to build a Debian Live Rescue CD with zfsonlinux on it. Because I don’t ever set up a system I can’t repair.

Why are we still backing up to hardlink farms?

A person can find all sorts of implementations of backups using hardlink trees to save space for incrementals. Some of them are fairly rudimentary, using rsync --link-dest. Others, like BackupPC, are more sophisticated, doing file-level dedup to a storage pool indexed by a hash.

While these are fairly space-efficient, they are really inefficient in other ways, because they create tons of directory entries. It would not be surprising to find millions of directory entries consumed very quickly. And while any given backup set can be deleted without impact on the others, the act of doing so can be very time-intensive, since often a full directory tree is populated with every day’s backup.

Much better is possible on modern filesystems. ZFS has been around for quite awhile now, and is stable on Solaris, FreeBSD and derivatives, and Linux. btrfs is also being used for real workloads and is considered stable on Linux.

Both have cheap copy-on-write snapshot operations that would work well with a simple rsync --inplace to achieve the same effect as hardlink farms, but without all the performance penalties. When creating and destroying snapshots is a virtually instantaneous operation, when the snapshots work at the file block level instead of the whole-file level, and when they preserve changing permissions and such as well (which rsync --link-dest can have issues with), why are we not using it more?
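
The core of such a scheme fits in a few lines. Here is a sketch, assuming a zfs dataset tank/backup/host1 and passwordless ssh to the client; every name in it is a placeholder:

#!/bin/bash
# Pull a backup of host1 with rsync, then freeze it as a dated snapshot.
set -e
rsync -aHx --inplace --delete --exclude-from=/etc/backup-excludes \
    root@host1:/ /tank/backup/host1/
zfs snapshot tank/backup/host1@$(date +%Y%m%d)
# Expiring an old backup is just: zfs destroy tank/backup/host1@YYYYMMDD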

BackupPC has a very nice scheduler and a helpful web interface, but its backend has no mode to take advantage of these more modern filesystems. The only tool I see like this is dirvish, for which someone made patches for btrfs snapshots three years ago that never, as far as I can tell, got integrated.

A lot of folks are rolling a homegrown solution involving rsync and snapshots. Some are using zfs send / btrfs send, but those mechanisms require the same kind of FS on the machine being backed up as on the destination, and do not permit excluding files from the backup set.

Is this an area that needs work, or am I overlooking something?

Incidentally, hats off to liw’s obnam. It doesn’t exactly do this, but sort of implements its own filesystem with CoW semantics.

Voice Keying with bash, sox, and aplay

There are plenty of times where it is nice to have Linux transmit things out a radio. One obvious example is the digital communication modes, where software acts as a sort of modem. A prominent example of this in Debian is fldigi.

Sometimes, it is nice to transmit voice instead of a digital signal. This is called voice keying. When operating a contest, for instance, a person might call CQ over and over, with just some brief gaps.

Most people that interface a radio with a computer use a sound card interface of some sort. The more modern of these have a simple USB cable that connects to the computer and acts as a USB sound card. So, at a certain level, all that you have to do is play sound out a specific device.

But it’s not quite so easy, because there is one other wrinkle: you have to engage the radio’s transmitter. This is obviously not something that is part of typical sound card APIs. There are all sorts of ways to do it, ranging from dedicated serial or parallel port circuits involving asserting voltage on certain pins, to voice-activated (VOX) circuits.

I have used two of these interfaces: the basic Signalink USB and the more powerful RigExpert TI-5. The Signalink USB integrates a VOX circuit and provides cabling to engage the transmitter when VOX is tripped. The TI-5, on the other hand, emulates three USB serial ports, and if you raise RTS on one of them, it will keep the transmitter engaged as long as RTS is high. This is a more accurate and precise approach.

VOX-based voice keying with the Signalink USB

But let’s first look at the Signalink USB case. The problem here is that its VOX circuit is really tuned for digital transmissions, which tend to be either really loud or completely silent. Human speech rises and falls in volume, and it tends to rapidly assert and drop PTT (Push-To-Talk, the name for the control that engages the radio’s transmitter) when used with VOX.

The solution I hit on was to add a constant, loud tone to the transmitted audio, but one which is outside the range of frequencies that the radio will transmit (which is usually no higher than 3kHz). This can be done using sox and aplay, the ALSA player. Here’s my script to call cq with Signalink USB:

#!/bin/bash
# NOTE: use alsamixer and set playback gain to 99
set -e

playcmd () {
        # Merge the voice file in $1 with a 20kHz sine tone -- above the
        # radio's ~3kHz passband, so it is never transmitted -- to hold the
        # VOX open, then play the mix out the USB codec with aplay.
        sox -V0 -m "$1" \
           "| sox -V0 -r 44100 $1 -t wav -c 1 -   synth sine 20000 gain -1" \
            -t wav - | \
           aplay -q  -D default:CARD=CODEC
}

DELAY=${1:-1.5}

echo -n "Started at: "
date

STARTTIME=`date +%s`
while true; do
        printf "\r"
        echo -n $(( (`date +%s`-$STARTTIME) / 60))
        printf "m/${DELAY}s: TRANSMIT"
        playcmd ~/audio/cq/cq.wav
        printf "\r"
        echo -n $(( (`date +%s`-$STARTTIME) / 60))
        printf "m/${DELAY}s: off         "
        sleep $DELAY
done

Run this, and it will continuously play your message, with a 1.5s gap in between during which the transmitter is not keyed.

The screen will look like this:

Started at: Fri Aug 24 21:17:47 CDT 2012
2m/1.5s: off

The 2m is how long it’s been going this time, and the 1.5s shows the configured gap.

The sox commands are really two nested ones. The -m causes sox to merge the .wav file in $1 with the 20kHz sine wave being generated, and the entire thing is piped to the ALSA player.

Tweaks for RigExpert TI-5

This is actually a much simpler case. We just replace playcmd as follows:

playcmd () {
        ~/bin/raiserts /dev/ttyUSB1 'aplay -q -D default:CARD=CODEC' < "$1"
}

Where raiserts is a program that simply keeps RTS asserted on the serial port while the given command executes. Here's its source, which I modified a bit from a program I found online:

/* modified from
 * https://www.linuxquestions.org/questions/programming-9/manually-controlling-rts-cts-326590/
 * */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <termios.h>
#include <sys/ioctl.h>


static struct termios oldterminfo;


void closeserial(int fd)
{
    tcsetattr(fd, TCSANOW, &oldterminfo);
    if (close(fd) < 0)
        perror("closeserial()");
}


int openserial(char *devicename)
{
    int fd;
    struct termios attr;

    if ((fd = open(devicename, O_RDWR)) == -1) {
        perror("openserial(): open()");
        return 0;
    }
    if (tcgetattr(fd, &oldterminfo) == -1) {
        perror("openserial(): tcgetattr()");
        return 0;
    }
    attr = oldterminfo;
    attr.c_cflag |= CRTSCTS | CLOCAL;
    attr.c_oflag = 0;
    if (tcflush(fd, TCIOFLUSH) == -1) {
        perror("openserial(): tcflush()");
        return 0;
    }
    if (tcsetattr(fd, TCSANOW, &attr) == -1) {
        perror("initserial(): tcsetattr()");
        return 0;
    }
    return fd;
}


int setRTS(int fd, int level)
{
    int status;

    if (ioctl(fd, TIOCMGET, &status) == -1) {
        perror("setRTS(): TIOCMGET");
        return 0;
    }
    status &= ~TIOCM_DTR;   /* ALWAYS clear DTR */
    if (level)
        status |= TIOCM_RTS;
    else
        status &= ~TIOCM_RTS;
    if (ioctl(fd, TIOCMSET, &status) == -1) {
        perror("setRTS(): TIOCMSET");
        return 0;
    }
    return 1;
}


int main(int argc, char *argv[])
{
    int fd, retval;
    char *serialdev;

    if (argc < 3) {
        printf("Syntax: raiserts /dev/ttyname 'command to run while RTS held'\n");
        return 5;
    }
    serialdev = argv[1];
    fd = openserial(serialdev);
    if (!fd) {
        fprintf(stderr, "Error while initializing %s.\n", serialdev);
        return 1;
    }

    setRTS(fd, 1);
    retval = system(argv[2]);
    setRTS(fd, 0);

    closeserial(fd);
    return retval;
}

This compiles to an executable less than 10K in size. I love it when that happens.
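
For reference, the build is just something like this (the exact flags don't matter much; -Os and strip help keep it small):

gcc -Os -o raiserts raiserts.c
strip raiserts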

So these examples support voice keying both with VOX circuits and with serial-controlled PTT. raiserts.c could be trivially modified to control other serial pins as well, should you have an interface which uses different ones.

How to get started programming?

I have been asked for advice from several people recently on how to get started programming, or how to further develop a nascent interest in coding or software engineering. The people asking the questions range in age from about 10 years old to older than me. These are people that, for various reasons, are not very easily able to take computer science courses right now.

One would think that, since I’ve been doing this for somewhere around a quarter century (oh, I do feel old now), I’d be ready to offer up some great advice. And I do have some suggestions to offer. But I’m not convinced they’re good ones.

I have two main tensions. The first is that I, like many in the communities I tend to hang out in such as Debian’s, have a personality that leads me to take a deep dive into details of anything that holds my interest. Whether it’s Linux, Haskell, or amateur radio, I want to do more than skim the surface if I’m having fun with it. Many people are not like that. They may have a lot of fun programming in Visual Basic, not really caring that other languages are out there. Or some people are not like this yet. I feel unqualified to provide good advice to people that are different from me in that way. To put it a different way: most people don’t want to wait 4 years to be useful, and want to start out right away and get better over time (and I was the same way too.)

The second is related. I learned programming at a time when, other than BASIC, interpreted languages were not really available to me. (Yes, they were available, but not to me.) I cut my teeth on BASIC, Pascal, and C. Although I rarely use C anymore, I can still drop into it at a moment’s notice and be perfectly comfortable. I feel it was a fundamentally valuable experience, and that it would be very hard to become a great programmer without ever having lived and breathed something like C, where memory and pointers must be managed manually. Having said that, it is probably possible to become a good coder without ever having touched C.

Here, then, is an edited version of some rambly advice I sent to someone recently, where learning OOP was particularly mentioned. I would welcome your comments and suggestions. I may point people that ask to this post in the future.

For simply learning how to write code, Dive Into Python has long been a decent resource, though it may assume more experience than some have. I haven’t read them myself, but I’ve also heard good things about the How to Think Like a Computer Scientist series from Green Tea Press. They’re all available as free PDF downloads, too!

Eric S. Raymond’s The Art of Unix Programming is another work I’ve heard good things about, despite having never read it myself. A quick glance at the table of contents makes me think that even if people don’t wind up working on Unix, the lessons and philosophy should be informative.

It seems that many Computer Science programs are using Java for the core of their instruction, or even almost exclusively. Whether that is good or bad, I’m not completely sure. It certainly gets people into OOP more deeply, but I’m a “right tool for the job” kind of person. Despite the hype, OO — like everything else — isn’t the right tool for every job.

It is fine for people to dive straight into OO and become good programmers/engineers. However, I think it would be difficult to become a great programmer/engineer without ever having a solid understanding of a more low-level language, such as C in particular. I did my CS work when it was mostly based in C, and am glad for it. If someone never has to manage memory or pointers, I suspect they will be at a disadvantage in the long run for not being able to understand or work with the system at a more fundamental level. If a person knows C, plus some concepts of OO and Functional Programming (FP), it should be easy to pick up just about any other language out there.

I used to think Python was a great first language, but during the 2.x series they added so much fluff and so many special cases that I’m less enthusiastic now, though I don’t know how much of that got cleaned up in 3.x. I am not too keen on Java as a first language, because too many things that should be simple aren’t. I have a fondness for Haskell, and its close relationship to mathematics could make it a great first language — or maybe a poor one, depending on your perspective.

One other thing – I think it’s important for good programmers to have experience with all three major models of programming (procedural, OO, functional.) Even if a person winds up working mostly in one universe, knowledge of and experience with the others is important and informative and, in my experience, leads to better algorithms and architecture all around.

  • Procedural languages: Obviously C, but also Unix shell
  • OO languages: Python, Java, plenty of other fine choices
  • Functional: Lisp, Scheme, Haskell (also the only lazy and pure language on this list)
Having said all that, more important than a choice of book or language is experience. I have heard people suggest that it takes 10,000 hours of practice to become a superstar at something, whatever that “something” is, and I wouldn’t doubt it. Seth Godin discusses that a bit, with some criticism of the idea too.

So that leads to the most important piece of advice: dive in to whatever your interest is. Experiment, write code, put theory into practice in a way that holds interest and excitement. People that try to do things they don’t enjoy don’t seem to stick with them as long or execute as well, and thus will never become great.

Decisions on Listening to Music

A couple of days ago, I commented about my thoughts on finding a better way to listen to music, and asked for suggestions. I’ve checked out the avenues suggested, and here are my thoughts so far.

Streaming service: Spotify

Of the streaming service offerings, Spotify seems the most compelling. It has both Linux and Android clients. The Linux client is unofficial and a bit buggy, but serviceable. The service is fairly nice, and the serendipity of listening to a “radio” of tracks similar to any track or artist I pick is an incredible feature. It will mean a different mindset; rather than carefully choosing what I buy, I have a vast catalog at my fingertips. I’d have to give up some control, but might be better off for it.

Spotify can play local files, but sadly lacks support for the FLAC format that most of my collection is in. A little work with the shell, and I had a quick multi-core FLAC to MP3 conversion done. It can also identify local files and match them up with its catalog, but it does a somewhat poor job of that, despite the fact that my files are tagged by Musicbrainz. There are also a fair number of albums in my collection (from Magnatune and from local artists) that aren’t in Spotify’s catalog and probably never will be. I may wind up using git-annex or something like it to sync them between PCs.
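
For the curious, the conversion can be a one-liner along these lines (not my exact invocation; it assumes flac, lame, and GNU parallel are installed):

find ~/music -name '*.flac' -print0 | \
    parallel -0 'flac -dcs {} | lame -V2 --quiet - {.}.mp3'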

Surprisingly, it doesn’t have any kind of web-based player available. It has a lot of features, but many of them are under-documented.

Subsonic / Supersonic

The idea of being able to stream music anywhere certainly has appeal, and the idea of streaming that music from my own machine — my own collection — is quite the appealing one. Subsonic seems to be the standard that people recommend for this. It has a web-based player that works well, as well as mobile clients for various platforms. The Android mobile client is particularly well-regarded and is probably the best one here, aside from Spotify. The playlists work well, as expected, and generally it has the appearance of a well-rounded system. There is a lot of flexibility in bitrates; maximum bitrates can be set per-user and per-client. It, of course, transcodes on the fly.

My chief gripe about Subsonic is that it has no true album/artist/genre browser. Its main browser is mostly a filesystem browser. I have used clients that show album/artist/genre panes or sortable lists, based on tags, for so long that my filesystem organization is rather different – centered around bitrate of rip, origin, etc. It is not very easy to navigate my collection in that way. The search feature can search artist and album, and this can partially make up for it. But there is simply no way to see a list of all artists in the collection.

The server is written in Java and takes up 362MB of RAM at startup. That is, I guess, decent for a Java server, but it doesn’t make me pleased, as I’d be running it on lower-end hardware.

Supersonic is an actively-synced fork of Subsonic maintained in git, which removes the license cost (the source is GPLv3 anyhow, so that wasn’t a big deal).

Ampache

Often mentioned in the same breath as Subsonic, this is yet another frequent suggestion for streaming your own collection. Its primary player is web-based, like Subsonic. It has full artist/genre browsers, and adds tag cloud and various other options as well – by default, it loaded the tag cloud with my genre tags, which was a nice touch. The search feature is a little less robust, as it simply searches anywhere and returns a list of all matching tracks, rather than listing matching albums and artists before going into the list of all tracks. But that is somewhat of a minor nit.

Ampache’s web-based player, like Subsonic’s, uses Flash, but the implementation is really quite awkward. You can’t simply click a play button and have playback start. Rather, you must add things to the “playlist” (queue), a frame on the web interface. Then you must click play in THAT frame, which launches a new window for the player and transfers its contents to it. You thus have two separate running playlists. And if you think you can just add to the playlist of the player window, think again; you can only replace it. It is simply not possible to add a track to the end of that list without losing your existing place. This makes the usefulness of the web-based player rather, well, low.

Playlist functionality in Ampache is rather odd. You can have playlists, but if you want to add a track to a saved playlist, first you have to clear your current playlist, then add the track to it, then save it to the existing playlist.

Frontends for Ampache exist in banshee and in amarok. Banshee’s is broken; it simply fails silently, unless you look in the console, where you’ll see a .NET backtrace. Banshee has had these mysterious errors for me for years. The Amarok plugin is so basic that you can browse by artist but not by album, and is almost unusable. There is also a rhythmbox plugin in Debian, but it is apparently for an old version of Rhythmbox and no longer works.

The best frontend is a dedicated Ampache frontend called Viridian. It has a nice interface, provides features such as taskbar integration, and generally works pretty well. The only thing it’s bad at is playlists, and here it’s even worse than the web interface. It doesn’t support modifying the server-side playlists at all. And accessing one is an annoying “load playlist” operation, that replaces your current play queue (also called the “playlist”) with its contents by default.

Several Android clients exist for Ampache, though it seems only one has seen any updates since 2010.

mpd

This is an interesting program, as it is designed more to provide multiple points of control for a single player than it is to stream a single collection to multiple devices. Nevertheless, I tested its HTTP and icecast stream modules. (It can also stream with PulseAudio, but PulseAudio causes devious breakage on my workstation, so I refuse to bother.) This could be made to work reasonably well on my LAN using the gmpc client. There are several Android clients as well, though none of them integrate as well as Subsonic or Spotify. Most seem focused on controlling a remote player. The ones that do stream do so with some hiccups, and don’t integrate controls with the lock screen.

gmpc is a reasonable control program and, combined with mplayer, makes a decent player. It can browse the database every way I’d want to. It doesn’t let me right-click on tracks to add them to playlists, but I can use copy/paste to do so, or add them to the play queue and from there to a playlist. It’s annoying, but not completely unworkable.

But here’s the real concern. mpd has no provision whatsoever for any sort of encryption such as SSL. If I want to be listening to my music collection from on the road, I don’t really wish to open up a streaming music service for the entire Internet. Nor do I really want to mess with SSH tunnels and the like, which are cumbersome on Android (and can be in general). This is pretty much a fatal flaw to me.
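
For completeness, the tunnel approach I’d rather avoid would look something like this, assuming mpd’s built-in HTTP output on port 8000; the hostname is a placeholder:

ssh -N -L 8000:localhost:8000 user@homeserver
# then point a local player at http://localhost:8000/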

Google Music

An interesting service, but ultimately one I don’t see the point for in my situation – where I already have my music on a PC that is always on. It would probably accommodate my entire collection free, but how many weeks would it take me to get it there? And how would it stay in sync? It simply doesn’t offer me any compelling advantage over streaming direct from my PC, so I don’t see much need to bother with it.

Others

I briefly considered Streeme (poor mobile support), Audiogalaxy (no Linux support), Jinzora (appears dead), and Sockso (no mobile support). I also briefly considered other streaming services such as Pandora, Slacker, rdio, etc., but based on reviews didn’t think that they were worth bothering with.

Conclusions

I’m left mainly considering Spotify and Subsonic. For right now, I’m going to try Spotify and see where that leads me, but may wind up getting back to Subsonic if I can’t find tracks I want with Spotify.