Category Archives: Technology

VirtFS isn’t quite ready

Despite claims to the contrary [PDF], VirtFS — the 9P-based virtio KVM/QEMU layer designed to pass through a host’s filesystem to the guest — is quite slow. I have yet to get it to perform at even 1/10 the speed of the virtual block device (VBD). That’s unfortunate, because in theory it should be significantly faster. At this rate, I suspect even NFS will be significantly faster.
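
For reference, this is the sort of setup I am talking about; the export path and mount tag are arbitrary examples, and the exact flags vary a bit by QEMU version:

# Host: export a directory to the guest over virtio-9p
qemu-system-x86_64 ... \
    -fsdev local,id=fs0,path=/srv/export,security_model=mapped \
    -device virtio-9p-pci,fsdev=fs0,mount_tag=hostshare

# Guest: mount the share
mount -t 9p -o trans=virtio,version=9p2000.L hostshare /mnt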

Beyond that, it seems impossible to use VirtFS as the root filesystem in a VM, at least with Debian; initramfs-tools doesn’t know how to build an initrd in that situation, and the support is just not there.

It would make a great combination with btrfs or zfs, but unfortunately it looks to be just not ready yet.

How to fix “fstrim: Operation not supported” under KVM?

Maybe someone out there will have some ideas.

I have a KVM host running wheezy, with wheezy-backports versions of libvirt and qemu. I have defined a guest, properly set discard=unmap in its domain XML file, and verified that the setting is being passed through to the guest, but TRIM/DISCARD is just not working.
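
For the record, here is roughly how I am checking things; the guest name and device are examples:

# Host: confirm discard=unmap made it into the live domain definition
virsh dumpxml myguest | grep discard
#   <driver name='qemu' type='raw' discard='unmap'/>

# Guest: see whether the kernel believes the device supports discard at all
cat /sys/block/vda/queue/discard_max_bytes    # 0 means no discard support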

Mounting the ext4 filesystem with discard has no effect, and fstrim / always reports:

fstrim: /: FITRIM ioctl failed: Operation not supported

Every single time.

I’ve tried with the virtio, IDE, and SCSI (both default and virtio-scsi) backend drivers. The guest is also running wheezy (i386 version; the host is amd64) and I’ve tried the latest 3.12 backported kernel for it. No dice.

If I shut down the VM and mount the filesystem on the host, fstrim works fine.

Everything says this should work. But it doesn’t.

Any ideas?

Why and how to run ZFS on Linux

I’m writing about ZFS these days, and I thought I’d explain why I am using it, why it might or might not be interesting for you, and what you might do about it.

ZFS Features and Background

ZFS is not just a filesystem in the traditional sense, though you can use it that way. It is an integrated storage stack, which can completely replace the need for LVM, md-raid, and even hardware RAID controllers. This permits quite a bit of flexibility and optimization not present when building a stack involving those components. For instance, if a drive in a RAID fails, ZFS need only rebuild the parts that actually have data stored on them.

Let’s look at some of the features of ZFS:

  • Full checksumming of all data and metadata, providing protection against silent data corruption. The only other Linux filesystem to offer this is btrfs.
  • ZFS is a transactional filesystem that ensures consistent data and metadata.
  • ZFS is copy-on-write, with snapshots that are cheap to create and impose virtually undetectable performance hits. Compare to LVM snapshots, which make writes notoriously slow and require an fsck and mount to get to a readable point.
  • ZFS supports easy rollback to previous snapshots.
  • ZFS send/receive can perform incremental backups much faster than rsync, particularly on systems with many unmodified files. Since it works from snapshots, it guarantees a consistent point-in-time image as well.
  • Snapshots can be turned into writeable “clones”, which simply use copy-on-write semantics. It’s like a cp -r that completes almost instantly and takes no space until you change it.
  • The datasets (“filesystems” or “logical volumes” in LVM terms) in a zpool (“volume group”, to use LVM terms) can shrink or grow dynamically. They can have individual maximum and minimum sizes set, but unlike with LVM, where you have to manually allocate more space to /usr if it gets bigger than you thought, ZFS datasets can use any space available in the pool.
  • ZFS is designed to run well on big iron, and scales to massive amounts of storage. It supports SSDs as L2 cache (L2ARC) and ZIL (intent log) devices.
  • ZFS has some built-in compression methods that are quite CPU-efficient and can yield not just space but performance benefits in almost all cases involving compressible data.
  • ZFS pools can host zvols, block devices under /dev that store their data in the zpool. zvols support TRIM/DISCARD, so they are ideal for storing VM images, as they can immediately reclaim space freed by the guest OS. They can also be snapshotted and backed up like the rest of ZFS (see the command sketch after this list).
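
To make some of these concrete, here is a quick sketch of the corresponding commands; the pool and dataset names (tank, tank/home, tank/vm1) are arbitrary examples:

zpool create tank mirror /dev/sda /dev/sdb     # integrated RAID: a mirrored pool
zfs create tank/home                           # a dataset; no fixed size required
zfs set compression=lz4 tank/home              # built-in, CPU-efficient compression
zfs snapshot tank/home@before-upgrade          # cheap, nearly instant snapshot
zfs rollback tank/home@before-upgrade          # easy rollback
zfs clone tank/home@before-upgrade tank/test   # writeable clone of a snapshot
zfs create -V 10G tank/vm1                     # a zvol, exposed as /dev/zvol/tank/vm1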

Although it is often considered a server filesystem, ZFS has been used in plenty of other situations for some time now, with ports to FreeBSD, Linux, and MacOS. I find it particularly useful:

  • To have faith that my photos, backups, and paperwork archives are intact. zpool scrub at any time will read the entire dataset and verify the integrity of every bit.
  • I can create snapshots of my system before running apt-get dist-upgrade, making it easy to track down issues or roll back to a known-good configuration. Ideal for people tracking sid or testing. One can also simply boot from a previous snapshot.
  • Many scripts exist that make frequent snapshots and retain them for a period of time as a way of protecting work in progress against an accidental rm; a minimal sketch follows this list. There is no reason not to snapshot /home every 5 minutes, for instance. It’s almost as good as storing / in git.
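
For instance, a single crontab entry gets you the every-5-minutes snapshot; the dataset name is an example, and a companion job would prune old snapshots:

# % must be escaped in crontab entries
*/5 * * * * /sbin/zfs snapshot tank/home@auto-$(date +\%Y\%m\%d-\%H\%M)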

The added level of security in having cheap snapshots available is almost worth it by itself.

ZFS drawbacks

Compared to other Linux filesystems, there are a few drawbacks of ZFS:

  • Its CDDL license will prevent it from ever being part of the mainline Linux kernel tree.
  • It is more RAM-hungry than most, although with tuning it can even run on the Raspberry Pi.
  • A 64-bit kernel is strongly preferred, even in low-memory situations.
  • Performance on many small files may be worse than ext4’s.
  • The ZFS cache (ARC) does not shrink and expand in response to changing memory pressure on the system as well as the normal Linux page cache does.
  • ZFS lacks some btrfs features, such as the ability to shrink an existing pool or easily change storage allocation on the fly. On the other hand, the features ZFS does offer have never caused me a kernel panic, while half the things I liked about btrfs seem to have.
  • ZFS is already quite stable on Linux. However, the GRUB, init, and initramfs code supporting booting from a ZFS root and /boot is less stable. If you want to go 100% ZFS, be prepared to tweak your system to get it to boot properly. Once done, however, it is quite stable.

Converting to ZFS

I have written up an extensive HOWTO on converting an existing system to use ZFS. It covers workarounds for all the boot-time bugs I have encountered as well as documenting all steps needed to make it happen. It works quite well.

Additional Hints

If setting up zvols to be used by VirtualBox or some such system, you might be interested in managing zvol ownership and permissions with udev.
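
The core of that approach is a single udev rule; zvols show up as /dev/zd* block devices, and the group name here is just a VirtualBox-flavored example:

# /etc/udev/rules.d/99-zvol-permissions.rules (illustrative)
KERNEL=="zd*", SUBSYSTEM=="block", GROUP="vboxusers", MODE="0660"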

Debian-Live Rescue image with ZFS On Linux; Ditched btrfs

I’m a geek. I enjoy playing with different filesystems, version control systems, and, well, for that matter, radios.

I have lately started to worry about the risks of silent data corruption, and as such, looked to switch my personal systems to either ZFS or btrfs, both of which offer built-in checksumming of all data and metadata. I initially opted for btrfs, because of its tighter integration into the Linux kernel and ability to shrink an existing btrfs filesystem.

However, as I wrote last month, that experiment was not a success. I had too many serious performance regressions and one too many kernel panics, and decided it wasn’t worth it, and that the SuSE people got it wrong, deeply wrong, when they declared btrfs ready for production. To its credit, I never lost any data. But it simply reduces uptime too much.

That left ZFS. Before I build a system, I always want to make sure I can repair it. So I started with the Debian Live rescue image, and added the zfsonlinux.org repository to it, along with some key packages to enable the ZFS kernel modules, GRUB support, and initramfs support. The resulting image is described, and can be downloaded from, my ZFS Rescue Disc wiki page, which also has a link to my source tree on github.
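
Roughly speaking, the additions amounted to something like the following; the package names come from the zfsonlinux.org Debian packaging of that era (plus its archive signing key), so treat this as a sketch rather than a recipe:

echo "deb http://archive.zfsonlinux.org/debian wheezy main" \
    > /etc/apt/sources.list.d/zfsonlinux.list
apt-get update
apt-get install zfs-dkms zfs-initramfs zfsutils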

In future blog posts in the series, I will describe the process of converting existing Debian installations to use ZFS, of getting them to boot from ZFS, some bugs I encountered along the way, and some surprising performance regressions in ZFS compared to ext4 and btrfs.

Results with btrfs and zfs

The recent news that openSUSE considers btrfs safe for users prompted me to consider using it. And indeed I did. I was already familiar with zfs, so considered this a good opportunity to experiment with btrfs.

btrfs makes an intriguing filesystem for all sorts of workloads. The benefits of btrfs and zfs are well-documented elsewhere. There are a number of features btrfs has that zfs lacks. For instance:

  • The ability to shrink a device that’s a member of a filesystem/pool
  • The ability to remove a device from a filesystem/pool entirely, assuming enough free space exists elsewhere for its data to be moved over.
  • Asynchronous deduplication that imposes neither a synchronous performance hit nor a heavy RAM burden
  • Copy-on-write copies down to the individual file level with cp --reflink
  • Live conversion of data between different profiles (single, dup, RAID0, RAID1, etc)
  • Live conversion between on-the-fly compression methods, including none at all
  • Numerous SSD optimizations, including alignment and both synchronous and asynchronous TRIM options
  • Proper integration with the VM subsystem
  • Proper support across the many Linux architectures, including 32-bit ones (zfs is currently only flagged stable on amd64)
  • Does not require excessive amounts of RAM
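
For illustration, here is what a few of those look like in practice; the devices and mountpoint are examples:

btrfs filesystem resize -10g /mnt             # shrink a mounted filesystem
btrfs device delete /dev/sdc /mnt             # remove a device, migrating its data
cp --reflink=always disk.img disk-copy.img    # instant CoW copy of one file
btrfs balance start -dconvert=raid1 /mnt      # live conversion between profiles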

The feature set of ZFS that btrfs lacks is well-documented elsewhere, but there are a few odd btrfs missteps:

  • There is no way to see how much space a subvolume/filesystem is using without turning on quotas. Even then, it is cumbersome and is not reported via df as it should be.
  • When a maximum size for a subvolume is set via a quota, it is not reported via df; applications have no idea when they are about to hit the maximum size of a filesystem.

btrfs would be fine if it worked reliably. I should say at the outset that I have never lost any data due to it, but it has caused enough kernel panics that I’ve lost count. Several times I had a file that produced a panic when I tried to delete it; several times it took more than 12 hours to unmount a btrfs filesystem; and hardlink-heavy workloads took days longer to complete than on zfs or ext4. And those are just the problems I wrote about. I tried to use btrfs balance to change the metadata allocation on the filesystem, and never did get it to complete; it seemed to go into an endless I/O pattern after the first 1GB of metadata and never got past that. I didn’t bother trying the live migration of data from one disk to another on this filesystem.

I wanted btrfs to work. I really, really did. But I just can’t see it working. I tried it on my laptop, but had to turn off CoW on my virtual machine’s disk because of the rm bug. I tried it on my backup devices, but it was unusable there due to being so slow. (Also, the hardlink behavior is broken by default and requires btrfstune -r. Yipe.)

At this point, I don’t think it is really worth bothering with. I think the SuSE decision is misguided and ill-informed. btrfs will be an awesome filesystem. I am quite sure it will, and it will in time probably displace zfs as the most advanced filesystem out there. But that time is not yet here.

In the meantime, I’m going to build a Debian Live Rescue CD with zfsonlinux on it. Because I don’t ever set up a system I can’t repair.

Why are we still backing up to hardlink farms?

A person can find all sorts of implementations of backups using hardlink trees to save space for incrementals. Some of them are fairly rudimentary, using rsync --link-dest. Others, like BackupPC, are more sophisticated, doing file-level dedup to a storage pool indexed by a hash.

While these are fairly space-efficient, they are really inefficient in other ways, because they create tons of directory entries. It would not be surprising to find millions of directory entries consumed very quickly. And while any given backup set can be deleted without impact on the others, the act of doing so can be very time-intensive, since often a full directory tree is populated with every day’s backup.

Much better is possible on modern filesystems. ZFS has been around for quite a while now, and is stable on Solaris, FreeBSD and derivatives, and Linux. btrfs is also being used for real workloads and is considered stable on Linux.

Both have cheap copy-on-write snapshot operations that would work well with a simple rsync --inplace to achieve the same effect as hardlink farms, but without all the performance penalties. When creating and destroying snapshots is a virtually instantaneous operation, when snapshots work at the file-block level instead of the whole-file level, and when they also preserve changing permissions and the like (something rsync --link-dest can have issues with), why are we not using them more?
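
As a minimal sketch of what I mean, assuming the backup destination is a ZFS dataset named backups/client-home (all names here are examples):

# Overwrite files in place so successive snapshots share unchanged blocks
rsync -a --inplace --delete client:/home/ /backups/client-home/
zfs snapshot backups/client-home@$(date +%Y-%m-%d)
# Expiring an old backup is a single cheap operation:
#   zfs destroy backups/client-home@2014-01-01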

BackupPC has a very nice scheduler and a helpful web interface, but its backend has no mode that takes advantage of these more modern filesystems. The only tool I see like this is dirvish, for which someone made btrfs snapshot patches three years ago that never, as far as I can tell, got integrated.

A lot of folks are rolling a homegrown solution involving rsync and snapshots. Some are using zfs send / btrfs send, but those mechanisms require the same kind of FS on the machine being backed up as on the destination, and do not permit excluding files from the backup set.

Is this an area that needs work, or am I overlooking something?

Incidentally, hats off to liw’s obnam. It doesn’t exactly do this, but sort of implements its own filesystem with CoW semantics.

Kindergarten Computer Class and Password Security

Jacob started Kindergarten last week. More on that in another post.

He’s been loving it, until yesterday. At least part of his disgruntlement was because it was his first visit to computer class. Putting together a few conversations, we learned this:

Jacob: Something was different about Kindergarten today.

Us: Oh? What was it?

Jacob: I had computer class today.

Us: What did you do?

Jacob, super frustrated: Nothing. NOTHING, NOTHING, NOOOO THIIING. Nothing.

Us: You didn’t get to use a computer?

Jacob: All we did was log on and log off the whole time. Log on and log off. My username is Jacob and my password is Jacob. (annoyed and confused voice) Why are they the same??

I guess his teachers weren’t used to children that had been logging on to computers for two years before Kindergarten. And they probably also weren’t expecting any of them to take some sort of offense at their password policy. He probably couldn’t appreciate how reasonable it was to teach Kindergarteners how to log in to a computer on the first day of computer class…

Voice Keying with bash, sox, and aplay

There are plenty of times where it is nice to have Linux transmit things out a radio. One obvious example is the digital communication modes, where software acts as a sort of modem. A prominent example of this in Debian is fldigi.

Sometimes, it is nice to transmit voice instead of a digital signal. This is called voice keying. When operating a contest, for instance, a person might call CQ over and over, with just some brief gaps.

Most people that interface a radio with a computer use a sound card interface of some sort. The more modern of these have a simple USB cable that connects to the computer and acts as a USB sound card. So, at a certain level, all that you have to do is play sound out a specific device.
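
With such an interface attached, playing audio out the radio is just a matter of pointing ALSA at the right card; the CARD=CODEC name matches my interface and will vary:

aplay -q -D default:CARD=CODEC message.wav    # message.wav: any prepared audio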

But it’s not quite so easy, because there is one other wrinkle: you have to engage the radio’s transmitter. This is obviously not something that is part of typical sound card APIs. There are all sorts of ways to do it, ranging from dedicated serial or parallel port circuits involving asserting voltage on certain pins, to voice-activated (VOX) circuits.

I have used two of these interfaces: the basic Signalink USB and the more powerful RigExpert TI-5. The Signalink USB integrates a VOX circuit and provides cabling to engage the transmitter when VOX is tripped. The TI-5, on the other hand, emulates three USB serial ports, and if you raise RTS on one of them, it will keep the transmitter engaged as long as RTS is high. This is a more accurate and precise approach.

VOX-based voice keying with the Signalink USB

But let’s first look at the Signalink USB case. The problem here is that its VOX circuit is really tuned for digital transmissions, which tend to be either really loud or completely silent. Human speech rises and falls in volume, and it tends to rapidly assert and drop PTT (Push-To-Talk, the name for the control that engages the radio’s transmitter) when used with VOX.

The solution I hit on was to add a constant, loud tone to the transmitted audio, but one which is outside the range of frequencies that the radio will transmit (which is usually no higher than 3kHz). This can be done using sox and aplay, the ALSA player. Here’s my script to call cq with Signalink USB:

#!/bin/bash
# NOTE: use alsamixer and set playback gain to 99
set -e

playcmd () {
        # Mix the voice file in $1 with a 20kHz sine tone (above the radio's
        # audio passband) so the VOX circuit stays keyed for the whole message,
        # then pipe the result to the ALSA player.
        sox -V0 -m "$1" \
           "| sox -V0 -r 44100 $1 -t wav -c 1 -   synth sine 20000 gain -1" \
            -t wav - | \
           aplay -q  -D default:CARD=CODEC
}

DELAY=${1:-1.5}

echo -n "Started at: "
date

STARTTIME=`date +%s`
while true; do
        printf "\r"
        echo -n $(( (`date +%s`-$STARTTIME) / 60))
        printf "m/${DELAY}s: TRANSMIT"
        playcmd ~/audio/cq/cq.wav
        printf "\r"
        echo -n $(( (`date +%s`-$STARTTIME) / 60))
        printf "m/${DELAY}s: off         "
        sleep $DELAY
done

Run this, and it will continuously play your message, with a 1.5s gap in between during which the transmitter is not keyed.

The screen will look like this:

Started at: Fri Aug 24 21:17:47 CDT 2012
2m/1.5s: off

The 2m is how long it’s been going this time, and the 1.5s shows the configured gap.

The sox commands are really two nested ones. The -m causes sox to merge the .wav file in $1 with the 20kHz sine wave being generated, and the entire thing is piped to the ALSA player.

Tweaks for RigExpert TI-5

This is actually a much simpler case. We just replace playcmd as follows:

playcmd () {
        ~/bin/raiserts /dev/ttyUSB1 'aplay -q -D default:CARD=CODEC' < "$1"
}

Where raiserts is a program that simply keeps RTS asserted on the serial port while the given command executes. Here's its source, which I modified a bit from a program I found online:

/* modified from
 * https://www.linuxquestions.org/questions/programming-9/manually-controlling-rts-cts-326590/
 * */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <termios.h>
#include <sys/ioctl.h>
#include <sys/types.h>
#include <sys/stat.h>


static struct termios oldterminfo;


void closeserial(int fd)
{
    tcsetattr(fd, TCSANOW, &oldterminfo);
    if (close(fd) < 0)
        perror("closeserial()");
}


int openserial(char *devicename)
{
    int fd;
    struct termios attr;

    if ((fd = open(devicename, O_RDWR)) == -1) {
        perror("openserial(): open()");
        return 0;
    }
    if (tcgetattr(fd, &oldterminfo) == -1) {
        perror("openserial(): tcgetattr()");
        return 0;
    }
    attr = oldterminfo;
    attr.c_cflag |= CRTSCTS | CLOCAL;
    attr.c_oflag = 0;
    if (tcflush(fd, TCIOFLUSH) == -1) {
        perror("openserial(): tcflush()");
        return 0;
    }
    if (tcsetattr(fd, TCSANOW, &attr) == -1) {
        perror("initserial(): tcsetattr()");
        return 0;
    }
    return fd;
}


int setRTS(int fd, int level)
{
    int status;

    if (ioctl(fd, TIOCMGET, &status) == -1) {
        perror("setRTS(): TIOCMGET");
        return 0;
    }
    status &= ~TIOCM_DTR;   /* ALWAYS clear DTR */
    if (level)
        status |= TIOCM_RTS;
    else
        status &= ~TIOCM_RTS;
    if (ioctl(fd, TIOCMSET, &status) == -1) {
        perror("setRTS(): TIOCMSET");
        return 0;
    }
    return 1;
}


int main(int argc, char *argv[])
{
    int fd, retval;
    char *serialdev;

    if (argc < 3) {
        printf("Syntax: raiserts /dev/ttyname 'command to run while RTS held'\n");
        return 5;
    }
    serialdev = argv[1];
    fd = openserial(serialdev);
    if (!fd) {
        fprintf(stderr, "Error while initializing %s.\n", serialdev);
        return 1;
    }

    setRTS(fd, 1);
    retval = system(argv[2]);
    setRTS(fd, 0);

    closeserial(fd);
    return retval;
}

This compiles to an executable less than 10K in size. I love it when that happens.

So these examples support voice keying both with VOX circuits and with serial-controlled PTT. raiserts.c could be trivially modified to control other serial pins as well, should you have an interface which uses different ones.

A Verbose, Hands-On Nexus 7 Review

Some of you may have noticed that I am not a concise author. Perhaps that has something to do with the fact that I am not a concise reader. I like facts, details, and lots of them. So as a recent Nexus 7 purchaser, here you go.

Genesis

I’ve long used Android devices, and last year had a company-issued Motorola Xoom, which was the first Google Experience tablet with Honeycomb. That tablet had specs roughly similar to the iPads: its 10.1″ screen was slightly larger, its 1280×800 resolution was better than that of the iPad available at the time, and its 730g weight was identical to the early iPads’ (though 10% higher than the current iPad’s). I lost access to it when I changed jobs, and had been without a tablet until recently.

Other devices I own are the Galaxy Nexus, sporting a 4.65″ screen; and what’s now called the Kindle Keyboard, with an eInk screen.

I had been somewhat interested in the Kindle Fire, but the closed nature and limited capability of the system kept me away.

The Nexus 7 reviews, however, were stunning, as was the price. $200 for a great tablet. I wound up buying the $250 16GB model. But not until after I spent a great deal of time thinking about size.

Physical Size

My main concern was that the Nexus 7 would be too small to be useful. I had never been particularly pleased with my input speed on the Xoom. I tried to touch type on it, but was just never fast enough to surpass “frustratingly slow.” I have long been a fast and accurate typist on a keyboard, at well over 100 words per minute, and it is frustrating when my fingers can’t maintain that speed.

I figured the situation would be even worse on the Nexus 7, given its smaller size.

I also found the Xoom to sometimes feel a little small with the 10″ screen, and was concerned about that as well.

And finally, 7″ doesn’t sound all that much larger than 4.65″.

However, having actually had the Nexus 7 for a little while now, I’m very pleased with the size, and may even prefer it. The 10″ tablets are just too big and heavy to comfortably hold in one hand, and I’ve realized that part of my Xoom frustration was the fact that I had to set it down and prop it up for anything beyond very brief use. At 340g, the Nexus 7 is less than half the weight of the Xoom or iPad, and it makes a huge difference. While I’m still nowhere near where I’d be with a keyboard, two-thumb typing in portrait mode, or even something approaching touch typing in landscape mode, is possible on the Nexus 7.

The screen size hasn’t been a bother, at all. This may be due to the fact that it’s higher resolution (it’s 1280×800 like the Xoom, but those pixels are crammed into only 7″). I think it’s also partly due to the fact that the browser in Jelly Bean is significantly better than the one in Honeycomb, and perhaps that websites are better at tablet-friendliness, too.

Overall, the Nexus 7 feels a lot farther from the size of a laptop than did the Xoom, and as such is more prone to come with me in lots of situations, I think.

It works reasonably well with foldable Bluetooth keyboards, so when thinking about a laptop replacement or alternative, that might be the way I go. A Bluetooth mouse also works with it, though I found it didn’t provide near the utility that a BT keyboard does.

Display

The display is both amazing and disappointing. Browse some photos and some of them will show up in eye-popping clarity. Websites display fine. But the screen can also take on a washed-out appearance at times. I am notoriously picky about my displays, and this bothered me enough that I researched it. Analysis has shown that poor firmware calibration has led to the compression of highlights, which mirrors what I was seeing. I am mostly used to it by now, but it’s a disappointment.

Most of the time, though, the screen is excellent. In comparison to my eInk Kindle, however, I don’t think any tablet will ever be as good for book reading. The eInk screen truly is easier on the eyes, and the reflection of overhead lights on the Nexus 7 display can be distracting at first.

I have had occasional issues with it not registering touches properly. This is always cleared up by touching the power button to put the unit to sleep, then waking it back up.

Other Hardware

There are three hardware buttons: power and volume up/down. Physically, the device fits my hand well, though I might wish it was a little lighter like my Kindle. Charging is accomplished via high-power 2A micro-USB, and there is, of course, a headphone port. There is no alert LED like my Galaxy Nexus has, and no vibration feature. The speaker is on the back, and the microphones along the left side – a position which, it appears, many Nexus 7 cases are blocking.

Battery Life

I am astonished at how good this device is battery-wise, especially compared to the battery disaster that is the Galaxy Nexus. Google claims the Nexus 7 can survive 8 hours of solid screen-on use, and I don’t doubt it. Mine’s never gotten low enough to get a solid measurement.

Wifi

The wifi works well, as far as it goes. It doesn’t support 802.11n on the 5GHz band, which, although somewhat common for devices like this, is a bit of a disappointment.

Software

The big story about the Nexus 7 is Jelly Bean. I had used Honeycomb on the Xoom, and Ice Cream Sandwich on my Galaxy Nexus, so I’m familiar with its predecessors. Let’s take a look.

Project Butter

Much has been made of Project Butter, Google’s attempt to optimize Android to improve its responsiveness and the smoothness of things like scrolling. I can say they have done quite well. This device is so smooth you don’t notice how smooth it is. It wasn’t until I had been using it for a bit that I really noticed. That’s a job well-done.

Chrome

The browser in Jelly Bean is now called Chrome. I am not sure if this is just marketing or not. It doesn’t really feel all that different from previous versions of the Android browser, and the changes have been along the lines of incremental changes Google has introduced before.

One of the very best new features happens when you touch a link that is close to other links on a page. Rather than getting a pretty much random page, Chrome pops up a partial-screen zoom box showing the part of the page near where your finger touched. With everything showing up huge, it is now easy to touch the precise link you want. Do so, and the box goes away, and your page loads. I am amazed at how much improvement this one change brings. Compared to ICS Browser, bookmarks can be brought up quicker, and the tab interface is nicer.

All is not perfect in the land of Chrome, however. It contains several regressions from the Ice Cream Sandwich browser.

I have two complaints about bookmarks. One is that previous versions of the browser would show thumbnails of sites in the bookmark viewer. This was a nice navigation aid. Chrome shows only favicons, if one is available, or a generic icon if not. Also, the bookmarks synced with other Android devices are now called, confusingly enough, “Desktop Bookmarks”, and require an extra tap to access.

I have had occasional trouble with Chrome not wanting to prompt for credentials for servers on my LAN that use HTTP auth.

Chrome has also removed the ICS browser’s ability to save a page, including all its elements, for offline viewing, which was good for things like an airline check-in screen. I have no idea why Chrome removed this. I installed the Firefox Beta for Android, which also doesn’t have the offline save feature, but it does have a save-to-PDF feature.

Soft Keyboard

The on-screen soft keyboard in Jelly Bean is a significant regression from previous versions of Android. My biggest complaint is the lack of visual feedback for keypresses. On earlier versions of Android, when you push a key, you’ll see an image of it pop up on the screen, offset a little from the location of the key itself. In JB, all that happens is that the key itself changes colors. Not very helpful, because it is under your finger at the time. This small thing frustrates me to no end.

The keyboard in ICS introduced some nice features as well, mainly long-presses as shortcuts to other features. For instance, you can long-press a key on the top row of letters to get numerals without having to switch to the number mode. Similarly, long press the period and you get other common punctuation. The JB keyboard removed both of those features.

Thankfully, there is an app in the Market called Ice Cream Sandwich Keyboard. It appears geared towards people running earlier versions of Android. Sadly, it is also a step up from what we have in JB.

Google Now and Voice Recognition

The other main headline feature in Jelly Bean is Google Now. The somewhat-competitor to Apple’s Siri, Google Now takes a bit of a different approach than Apple. It is said that Siri is better than Google Now at responding to queries, but Google Now is better at predicting what you want to know before you ever ask. I haven’t ever used Siri, but I would buy that explanation.

Google Now is available with a swipe up from the bottom of the screen, or with a single touch from any Home screen. Bring it up and it shows you current information about what it thinks you need to know. Examples include weather and forecast information, time to get to home or work from your current location, alerts that you need to leave soon to get to a certain place on time, flight schedules, sports scores, etc.

Google Now has been mostly a gimmick to me, but that may be because I fall outside its target demographic in significant ways. I live nowhere near a public transportation system, work at home for the most part, haven’t flown since I’ve had the Nexus 7, don’t follow sports, and already know how long it takes to get places (and when it varies, it’s because of muddy roads or harvest, neither of which traffic services know about.)

The weather widget always seems to show the temperature from a couple of hours ago. It does show the weather in your current location. Well, mostly. I was in Newton, KS one day. I tapped on the icon for more detail. That simply took me to a Google search for “weather Newton”. Which showed me the weather for Newton, Massachusetts — 1600 miles away. Fail.

Speech recognition in JB is definitely improved. It is somewhat useful with Google Now. I like being able to simply say “set alarm for 30 minutes.” And it does it a lot quicker than I could in the interface. It’s supposed to be able to let me bring up my contacts in the same way, but it is much more likely to turn such an attempt into a Google search than an actual display of a contact. It’s picky about the precise language used for setting an alarm, too; say it slightly differently, and it’s another Google search.

JB also supports limited offline speech recognition. I say limited because it’s a bit strange. I have, for instance, a Remember the Milk widget on my home screen. It has a microphone icon to use to speak a new reminder. Tap it, and you can’t use it offline. It also has a button that brings up the on-screen keyboard. Do that, then touch the microphone on the keyboard, and you can use offline recognition. I have no idea how to explain this difference, since both are clearly using Google’s engine.

The speech recognition is indeed better, and might make it suitable for use instead of a keyboard for composing short texts and such. But it rarely produces even a sentence that I don’t have to correct in some way, even now.

Conclusion

Despite some of its shortcomings, I am very fond of the Nexus 7. It is an excellent device. And at $200-$250, it is an AMAZING device. I am truly impressed with it, and don’t regret my purchase at all.

How to get started programming?

I have been asked for advice from several people recently on how to get started programming, or how to further develop a nascent interest in coding or software engineering. The people asking the questions range in age from about 10 years old to older than me. These are people that, for various reasons, are not very easily able to take computer science courses right now.

One would think that, since I’ve been doing this for somewhere around a quarter century (oh, I do feel old now), I’d be ready to offer up some great advice. And I do have some suggestions to offer. But I’m not convinced they’re good ones.

I have two main tensions. The first is that I, like many in the communities I tend to hang out in such as Debian’s, have a personality that leads me to take a deep dive into details of anything that holds my interest. Whether it’s Linux, Haskell, or amateur radio, I want to do more than skim the surface if I’m having fun with it. Many people are not like that. They may have a lot of fun programming in Visual Basic, not really caring that other languages are out there. Or some people are not like this yet. I feel unqualified to provide good advice to people that are different from me in that way. To put it a different way: most people don’t want to wait 4 years to be useful, and want to start out right away and get better over time (and I was the same way too.)

The second is related. I learned programming at a time when, other than BASIC, interpreted languages were not really available to me. (Yes, they were available, but not to me.) I cut my teeth on BASIC, Pascal, and C. Although I rarely use C anymore, I can still drop into it at a moment’s notice and be perfectly comfortable. I feel it was a fundamentally valuable experience, and that it would be very hard to become a great programmer without ever having lived and breathed something like C, where memory and pointers must be managed manually. Having said that, it is probably possible to become a good coder without ever having touched C.

Here, then, is an edited version of some rambly advice I sent to someone recently, where learning OOP was particularly mentioned. I would welcome your comments and suggestions. I may point people that ask to this post in the future.

For simply learning how to write code, Dive Into Python has long been a decent resource, though it may assume more experience than some have. I haven’t read them myself, but I’ve also heard good things about the How to Think Like a Computer Scientist series from Green Tea Press. They’re all available as free PDF downloads, too!

Eric S. Raymond’s The Art of Unix Programming is another work I’ve heard good things about, despite having never read it myself. A quick glance at the table of contents makes me think that even if people don’t wind up working on Unix, the lessons and philosophy should be informative.

It seems that many Computer Science programs are using Java for the core of their instruction, or even almost exclusively. Whether that is good or bad, I’m not completely sure. It certainly gets people into OOP more deeply, but I’m a “right tool for the job” kind of person. Despite the hype, OO — like everything else — isn’t the right tool for every job.

It is fine for people to dive straight into OO and become good programmers/engineers. However, I think it would be difficult to become a great programmer/engineer without ever having a solid understanding of a more low-level language, such as C in particular. I did my CS work when it was mostly based in C, and am glad for it. If someone never has to manage memory or pointers, I suspect they will be at a disadvantage in the long run for not being able to understand or work with the system at a more fundamental level. If a person knows C, plus some concepts of OO and Functional Programming (FP), it should be easy to pick up just about any other language out there.

I used to think Python was a great first language, but during the 2.x series they added so much fluff and so many special cases that I’m less enthusiastic now, though I don’t know how much of that got cleaned up in 3.x. I am not too keen on Java as a first language, because too many things that should be simple aren’t. I have a fondness for Haskell, and its close relationship to mathematics could make it a great first language — or maybe a poor one, depending on your perspective.

One other thing – I think it’s important for good programmers to have experience with all three major models of programming (procedural, OO, functional.) Even if a person winds up working mostly in one universe, knowledge of and experience with the others is important and informative and, in my experience, leads to better algorithms and architecture all around.

  • Procedural languages: Obviously C, but also Unix shell
  • OO languages: Python, Java, plenty of other fine choices
  • Functional: Lisp, Scheme, Haskell (also the only lazy and pure language on this list)
Having said all that, more important than a choice of book or language is experience. I have heard people suggest that it takes 10,000 hours of practice to become a superstar at something, whatever that “something” is, and I wouldn’t doubt it. Seth Godin discusses that a bit, with some criticism of the idea too.

So that leads to the most important piece of advice: dive in to whatever your interest is. Experiment, write code, put theory into practice in a way that holds interest and excitement. People that try to do things they don’t enjoy don’t seem to stick with them as long or execute as well, and thus will never become great.