Review: Those new-fangled paper books

Everyone seems to be familiar with ebooks these days. I own a Kindle 2, and of course we’ve spent weeks hearing how great the Nook will be, then weeks hearing how terrible it turned out to be. But nobody seems to be casting an eye back towards paper, so I thought I’d rectify that here, especially since paper books have some serious stability issues that are often overlooked!

Before I begin, I feel it wise to offer this hint to the reader: this review should not be taken too literally. If you have an uncontrollable urge to heave a volume of the Oxford English Dictionary at me as if I am some European prime minister, please plant your tongue more firmly in your cheek and begin again.

Today I picked up a paper book to read just for fun — The Happiest Days of Our Lives by Wil Wheaton. Long-time (since this spring!) Kindle user that I am, I immediately noticed the dashing use of color on its front cover, but when I opened it, I was disappointed that I couldn’t scale the font size down from the default. It seems that paper books have only one font option — what are all these Kindle forum posters complaining about with its six sizes of a single font?

On the very first page, I encountered a word I wasn’t familiar with (Namaste). I thought I knew what it meant from the context clues, and even had the thought that on the Kindle, I could just highlight it and confirm my guess. But my paper dictionary was in the basement, so I didn’t bother looking it up until I wrote this post. (My hunch was reasonably correct.)

Interface-wise, the paper book is solid, and crashes, lockups, or other malfunctions are rare. I have, however, noted severe stability problems when attempting to read outdoors, especially when it’s windy (which, since I live in Kansas, is pretty much always). Pages start turning themselves, even without me making the “turn page” gesture. Sometimes the book will even lose its memory of my last page read. This is rather annoying, and might even involve a lengthy search for a suitable temporary replacement bookmark. Also, I haven’t tried it, but I suspect that the trick of putting a Kindle in a ziplock bag to read at the beach or in the tub without risk of getting it wet would be impractical with a paper book.

Paper does have its advantages. For one, it’s faster to flip rapidly through pages on paper than on an ebook reader. If you know roughly where in the book something was written, but not the precise wording, searching can be faster on paper. On the other hand, if you are looking for a particular word or phrase, the ebook reader may win hands-down, especially if the paper book has no index.

Paper is so stable that some would argue that the extreme impracticality of making good backups isn’t really a problem at all. But on the other hand, paper books degrade slightly each time they are used, and this condition can be aggravated by placement in bags for transport. Eventually, they will wear out. If my Kindle wears out, I can always restore David Copperfield from my backup copy to a new one. If my printed edition (all two volumes) of it wears out, then I have to hope that the used bookstore will still sell me another one for $10. Otherwise I’d have to either drive 45 miles to find one for sale, attempt to deal with the DRM for paper books at a library, or wait a couple of days for Amazon to get it to my door. A fire or flood could be devastating.

Paper books also have some advantages for showing photos; no ebook reader comes close to the size and resolution of glossy paper photo books at a reasonable price.

The contrast on most paper books is better than that of my Kindle, but some older ones are actually worse, smell dusty, and suffer from occasional display corruption as bits of them actually break off of the book device.

As to cost, it is a mixed bag. Out-of-copyright classics are free as ebooks from Project Gutenberg and the like, while still costing money on paper. I have found that the accuracy of some of these paper editions can be rather questionable — people have sometimes manually removed important bits of the story to save on printing costs, rather than let Google Books OCR mangle it for them automatically. On the other hand, used paper editions of more current works can be found for a fraction of the cost of the new ebook edition — though you are often limited in selection of these bargains. But you can usually browse a paper book for a few minutes before buying it, which is rarely available for an ebook.

What’s more, libraries might let you borrow paper books for free. But you often have to expel greenhouse gases to get to them, and then they enforce DRM on you — you only get to read it a certain amount of time before they start adding fees. You could easily wind up paying $2 if you keep it a week longer than you should have. With ebooks, of course, there is no free borrowing (and the Nook feature is too limited to count). And you of course know that most libraries are run by the government, so they have your address. Trying to circumvent a library’s DRM could wind up involving the police, so you had best comply.

Making copies of a paper book is expensive and requires specialized equipment, even if you just want a copy for backup.

Compatibility problems with paper books are rare, and are usually found among readers with poor eyesight. A few works can be found in “large font editions,” but most can’t, so those readers are left needing expensive specialty magnifiers.

All in all, I prefer reading books on my Kindle, but still read on paper when that’s how I have a book.

Review: A Christmas Carol

I guess you can say that A Christmas Carol by Charles Dickens has been a success. It was published in 1843 and has never been out of print since then. It’s spawned all manner of plays, films, adaptations, and spoofs. It’s been adapted at least twice by Disney, once featuring Mickey Mouse and another time featuring Jim Carrey. We’re almost inundated with the story — I’m not sure how many ways I’ve seen it. Yet I had never read the original story by Dickens until just now.

And I must say, what a treat it was. Despite knowing the plot in advance, it was a very good read. The 19th century London setting was done well. It wasn’t some idealized London as is often portrayed in film adaptations. It had depth, as did the characters. Dickens’ Scrooge had a troubled childhood, the son of poor and apparently abusive parents. He turned to business, with which he was successful. Along the way, he lost sight of family, and really of his humanity in general, striving to be a richer and more successful businessman at the cost of all else.

How apropos this story is for us in the 21st century. Our large banks define success in terms of profits made for their shareholders, while adding more gotchas to the terms of the credit cards held by their customers. Our governments play geopolitical games over weapons, oil, and gas, while unwilling to sacrifice anything to prevent a climate disaster. Our politicians, even in the season of Christmas, turn a blind eye and a cold heart to the suffering of those that can’t afford health care for naught but political reasons, rather than trying their hardest to make a plan that will help them a reality as soon as possible.

And what of us, the citizens of the 21st century? We consume ever flashier cars, houses, computers, and cellphones with data plans, while poverty intensifies across the globe in this economic downturn.

Well, count me among those many inspired and reminded by Dickens to be a more empathetic person, to remember how good even many of the poor in the West have it compared to other parts of the world, and to try to do more for others.

And that, perhaps, is part of the genius of Dickens. He inspired a complete change of how people looked at Christmas in his time. And his work is no less relevant today; perhaps it hits even closer to home these days. He invites us to carefully consider the question: what does it mean to achieve success in life? And he deftly illustrates that “wealth” is the wrong answer. Here’s hoping that many others will also learn a small bit about life from Dickens.

How to find it:

A Christmas Carol is available for free from Project Gutenberg for reading online, printing, or reading on an ebook reader such as the Kindle.

Be careful when buying printed editions. Many have been abridged or “improved for a modern audience”, and thus lose a lot of the quality of the original. I found at least one edition that looks true to the original; I’m sure there are others.

[This review also posted to Goodreads]

Apache Update

Thanks everyone for the helpful comments yesterday on Apache vs. lighttpd. I’ve taken the first few steps towards improving things. I’ve eliminated mod_php5, switched all PHP to FastCGI, and switched from the prefork to the event MPM.
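
For the curious, the Debian side of that boils down to something like this (a sketch with lenny-era package names; mod_fcgid here stands in for whichever FastCGI module you prefer):

apt-get install apache2-mpm-event        # replaces apache2-mpm-prefork
apt-get install php5-cgi libapache2-mod-fcgid
a2dismod php5                            # drop mod_php5 entirely
a2enmod fcgid
# The Apache config still needs a handler mapping .php requests to php5-cgi;
# that part depends on which FastCGI module you use, so it's omitted here.
/etc/init.d/apache2 restart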

The event MPM docs say that it’s incompatible with mod_ssl, but it worked, and googling for that turned up mailing list posts from 2006 saying it was fixed.

The only real glitch was that egroupware inexplicably depends on libapache2-mod-php5. It works fine without it, but I had to create a fake libapache2-mod-php5 package that provides libapache2-mod-php5 to convince the system not to remove egroupware when I switched to the event MPM.
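
If you’ve never built a fake package before, Debian’s equivs tool makes it fairly painless. Roughly (a sketch; I’m not reproducing my exact control file):

apt-get install equivs
equivs-control fake-libapache2-mod-php5
# Edit the generated control file so it contains at least:
#   Package: fake-libapache2-mod-php5
#   Provides: libapache2-mod-php5
equivs-build fake-libapache2-mod-php5
dpkg -i fake-libapache2-mod-php5_*_all.deb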

I went from 28 Apache processes to 4 Apache processes plus an average of around 2 php5-cgi processes — which, despite the name, actually do grok FastCGI. Three of my Apache processes now have an RSS of about 20M, and the other of about 30M. The php5-cgi processes are at around 40M. My old Apache processes ranged from 18M to over 40M. So there is some memory savings here, though not drastic.

My Redmine processes, two of them, each use over 50M. Ruby on Rails still is leaving me with a bad taste. At least it’s not a Java app; it seems a lot of those allocate 2GB before they ever start accepting connections.

I’ll see how this goes for a while, but for now I doubt that moving to lighttpd (or similar) will yield enough benefit to make it worth the effort. But there may be some benefit in inserting a caching proxy in front of Apache, so I may yet do that.

Apache vs. lighttpd

I’ve had a somewhat recurring problem that Apache on my server is a memory hog. The machine has a full GB of RAM now, but even so, heavy activity from spidering bots or reddit can bring it to its knees.

I’ve been trying to figure out what to do about it. I run Apache on there, and here’s an idea of what it hosts:

  • This blog, and two others like it, using WordPress
  • Several sites that are nothing but static files
  • gitweb
  • A redmine (ruby on rails) site
  • Several sites running MoinMoin
  • A set of tens of thousands of redirects from mailing list archives I used to host over to gmane (this had negligible impact on resource use when I added it)
  • A few other smaller PHP apps

Due to PHP limitations, mod_php will only work with the Apache prefork or itk MPMs. That constrains the entire rest of the system to those particular MPMs. Other than PHP, most of the rest of these sites are running using FastCGI — that includes redmine and MoinMoin, although gitweb is a plain cgi. Many of these sites, such as this blog, have changed underlying software over the years, and I use mod_rewrite to issue permanent redirects from old URLs to their corresponding new ones as much as possible.

I do have two IPs allocated to the server, and so I can run multiple webservers if desired.

lighttpd is a lightweight webserver. I’ve browsed their documentation some, and I’m having a lot of trouble figuring out whether it would help me any. The way I see it, I have four options:

  1. Keep everything as it is now
  2. Stick with Apache, discard mod_php and use FastCGI for PHP, and pick some better MPM (whatever that might be)
  3. Keep Apache for PHP and move the rest to lighttpd
  4. Move everything to lighttpd, with FastCGI for PHP

I know Apache pretty well, and lighttpd not at all, so if all else is equal, I’d want to stick with Apache. But I’m certainly not above trying something else.

One other wrinkle is that right now everything runs as www-data — PHP, MoinMoin, static sites, everything. That makes me nervous, and I’d like to run some sites under different users for security. I’m not sure if Apache or lighttpd is better for that.

If anybody has thoughts, I’m all ears.

Ahh, updates…

It’s been a while since I’ve made a blog post, and it will probably soon be evident why.

For myself, I am taking two college classes — philosophy and gerontology — this semester, and still working full time. That’s busy right there. I’m really enjoying them, and in particular enjoying the philosophy class. I will, at long last, graduate this December.

Oliver is about 3 months old now. He’s starting to smile at us more, make his cute little baby noises more, and is very interested in taking in all of his surroundings. He has his opinions about things, but isn’t expressing them too loudly just yet.

Jacob, on the other hand, is sometimes.

Our person from Parents As Teachers was talking to him for his 3-year-old evaluation. She said, “Jacob, are you a boy or a girl?” “I a kitty.” “Are you a boy kitty or a girl kitty?” “I just a PLAIN kitty. Meow.”

He seems to delight in catching someone saying or doing something in a manner he considers wrong. “No, not like THAT!” is heard a lot in our house these days. Or perhaps, “No, not RAILROAD tracks. They TRAIN tracks, dad!”

Jacob’s imagination is very active. Sometimes if it is time to go use the potty, he will insist that we all stop because a freight train is going through the kitchen, the crossing guard lights are flashing, so we have to STOP. He pretends to be kitties, runaway bunnies (he has a book called Runaway Bunny), and occasionally other things.

MoinMoin as a Personal Wiki, Zen To Done, And A Bit of Ikiwiki

Since I last evaluated and complained about wikis last year, I’ve been using MoinMoin for two sites: a public one and a personal one.

The personal site has notes on various projects, and my task lists. I’ve been starting out with the Zen To Done (ebook, PDF, paper) idea. It sounds great, by the way; a nice improvement on the better-known GTD.

My To Do Page

Anyhow, in MoinMoin, I have a ToDos page. At the top are links to pages with different tasks: personal, work, yard, etc. Below that are the three “big rocks” (as ZTD calls them): the three main goals for the day. I edit that section every day.

The Calendar

And below that, I use MoinMoin’s excellent MonthCalendar macro. I have three calendars in a row: this month, next month, and last month. Each day on the calendar is a link to a wiki page; for instance, ToDos/Calendar/2009-10-01. The day has a red background if the wiki page exists, and white otherwise. So when I need to do something on or by a specific day, I click on the link, click my TaskTemplate, and create a simple wiki page. When I complete all the tasks for that day, I delete that day’s wiki page (and can note what I did as the log message if I like). Very slick.

The Task Lists

My task pages are similar. They look like this:


= Personal =

<<NewPage(TaskTemplate,Create new task,@SELF)>>

<<Navigation(children,1)>>
<<BR>>

So, my personal task page has a heading, then it has an input form with a text box and a button that says “Create new task.” Type something in and that becomes the name for a wiki page, and takes you to the editor to describe it. Below the button is a list of all the sub-pages under the Personal page, which represent the tasks. When a task is done, I delete the page and off the list it goes. I can move items from one list to another by renaming the page. It works very, very nicely.

Collecting

Part of both ZTD and GTD is that it must be very easy to get your thoughts down. The idea is that if you have to think, “I’ve got to remember this,” then you’ll be stressed and worried about the things you might be forgetting. I have a “Collecting” page, like the Personal or Work pages, that new items appear on when I’m not editing my wiki. They get there by email.

MoinMoin has a nice email system. I’ve set up a secret email address. Mail sent there goes directly into MoinMoin. It does some checks on it, then looks at a combination of the From and Subject lines to decide what to do with it. If I name an existing page, it will append my message to the end. If it’s a new page, it’ll create it. I have it set up so that it takes the subject line as a page name to create/append to under ToDos/Collecting/$subject (by putting that as the “name” on the To line).

So, on my computers, I have a “newtodo” script that invokes mail(1), asks for a subject, and optionally lets me supply a body. Quick and painless.
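
Something along these lines is all it takes (a sketch with a made-up address; the real script differs only in details):

#!/bin/bash
# Send a quick to-do item to the wiki's secret collection address.
# The address below is, of course, not the real one.
MOINADDR="todo-collect@example.com"

read -p "Subject: " SUBJECT
echo "Body (end with Ctrl-D, or hit Ctrl-D right away for no body):"
mail -s "$SUBJECT" "$MOINADDR"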

Also, I’ve added the address to my mobile phone’s address book. That way I don’t have to carry around pen and paper. Need to get down some thought? No problem. Hit send email, pull the last address sent to, give it a subject and maybe a body. Very slick.

Wiki Software

As a way of updating my posts from last year: I’ve been very happy with MoinMoin overall. It has some oddities, and the biggest one that concerns me is its attachment support. It doesn’t let you specify a maximum upload size, and doesn’t give you a good way to restrict attachment rights to only certain people. But the biggest problem is that it doesn’t track history on attachments. If a vandal deletes the attachment on a page, it’s GONE. They expect to have that fixed in 2.0, coming out in approximately November, 2010.

I also looked at Ikiwiki carefully over the past few days. Several things impressed me. First, everything can be in git. This makes for a very nice offline mode, better than Moin’s offline sync. The comment module is nicer than anything in Moin, and the tagging system is as well. Ikiwiki truly could make a nice blog, and Moin just couldn’t. It also puts backlinks at the bottom of each page automatically, a nice feature. And it’s by Joey Hess, who writes very solid software.

There are also some drawbacks. Chief on that list is that ikiwiki has no built-in page history feature. Click History and it expects to take you to gitweb or ViewVC or some such tool. That means that reverting a page requires either git access or cutting and pasting. That’s fine for me, but suddenly throwing newbies at gitweb might not be the easiest introduction. Since ikiwiki is a (very smart) wiki compiler, its permission system is a lot less powerful than Moin’s, and notably can’t control read access to pages at all. If you need to do that, you’d have to do it at the webserver level. It does have a calendar, but not one that works like Moin’s does, though I could probably write one easily enough based on what’s there.

A few other minor nits: the email receiving feature is not as versatile as Moin’s, you can’t subscribe to get email notifications on certain pages (RSS feeds only, which would have to be manually tweaked later), and you can’t easily modify the links at the top of each page or create personal bookmarks.

Ikiwiki looks like an excellent tool, but just not quite the right fit for my needs right at the moment. I’ve also started to look at DokuWiki a bit. I was initially scared off by all the plugins I’d have to use, but it does look like a nice piece of software.

I also re-visited MediaWiki, and once again concluded that it is way too complicated for its own good. There are something like a dozen calendar plugins for it, some of which even are thought to work. The one that looked like the one I’d use had a 7-step (2-page) installation process that involved manually running SQL commands and cutting and pasting some obscure HTML code with macros in it. No thanks.

How To Record High-Definition MythTV Files to DVD or Blu-Ray

I’ve long had a problem. Back on January 20, I took the day off work to watch the inauguration of Barack Obama. I saved the HD video recordings MythTV made of the day (off the ATSC broadcast), intending to eventually archive them somehow. I hadn’t quite figured out how until recently, so there they sat: 9 hours of video taking up about 60GB of space on my disk.

MythTV includes a program called mytharchive that will helpfully transcode your files and burn a DVD from them. But it helpfully will transcode the beautiful 1920×1080i down to DVD resolution of — on a good day — 720×480. I couldn’t have that.

Background

My playback devices might be a PC, the MythTV, or the PlayStation 3. Of these, I figured the PS3 was going to be hardest to accommodate.

ATSC (HD) broadcasts in the United States are an MPEG Transport Stream (TS). Things are a bit complicated, because there may be errors in the TS due to reception problems, and the resolution and aspect ratio may change multiple times (for instance, down to SD for certain commercials). And, I learned that some ATSC broadcasts are actually 1920×1088 because the vertical resolution has to be a multiple of 16, and those bottom 8 pixels shouldn’t be displayed.

Adding to the complexity, one file was 7 hours of video and about 50GB itself. I was going to have to do quite some splitting to get it onto 4.7GB DVD+Rs. I also didn’t want to re-encode any video, both for quality and for time reasons.

Attempts

So, I set out to try and figure out how to do this. My first approach was the sledgehammer one: split(1). split takes a large file and splits it on byte or line boundaries. It has no knowledge of MPEG files, so it may split them in the middle of frames. I figured that, worst case, I could always use cat to reassemble the file later.
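
In other words, something like this (hypothetical file names; 4400 MiB keeps each piece comfortably under DVD+R capacity):

# Chop the recording into DVD-sized pieces, ignoring MPEG structure entirely.
split -d -b 4400m 0120-inauguration.mpg 0120-inauguration.part.

# Reassembly later is just concatenation, since split preserved every byte.
cat 0120-inauguration.part.* > 0120-inauguration-rejoined.mpg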

Surprisingly, both mplayer and xine could play back any of these files, but the PS3 would only play back the first part. I remembered this as an option if all else failed.

Next, I tried avidemux. Quite the capable program — and I thought I could use it to cut my file into DVD-sized bits. But I couldn’t get it to let me copy the valid MPEG TS into another MPEG TS — it kept complaining of incompatible formats, but wouldn’t tell me in what way they were incompatible. I could get it to transcode to MPEG4, and produce a result that worked on the PS3, but that wasn’t really what I was after.

Then, I tried mpgsplit. It didn’t recognize the MPEG TS as a valid file, and even when I used a different tool to convert to MPEG PS, it acted all suspicious, as if it had bought the MPEG from a shady character on a grungy street corner.

dvbcut

I eventually wound up using dvbcut to split up the ATSC (DVB) recordings. It understood the files natively and did exactly what I wanted. Well, *almost* exactly what I wanted. It has no command-line interface and didn’t have a way to split by filesize, but I calculated that about 35 minutes of the NBC broadcast and 56 minutes of the PBS broadcast could fit on a single DVD+R.

It worked very, very nicely. The resulting files tested out well on both the PS3 and the Linux box.

So after that, I wrote up an index.txt file to add to each disc, and after a little shell scripting I had a directory for each disc. I started burning them with growisofs. Discs 1 and 2 burned well, but then I got an error like this:


File disc06/VIDEO/0930-inaug-ksnw-06.mpg is larger than 4GiB-1.
-allow-limited-size was not specified. There is no way do represent this file size. Aborting.

Eeeeepp. So apparently the ISO 9660 filesystem can’t represent files bigger than 4GB. My files on disc 1 had represented multiple different programs, and stayed under that limit; and disc 2’s file was surprisingly just a few KB short. But the rest of them weren’t. I didn’t want to have to go back and re-split the data to be under 4GB. I also didn’t want to waste 700MB per disc, or to have to make someone change video files every 15 minutes.

So I decided to investigate UDF, the filesystem behind Blu-Ray discs. mkisofs couldn’t make a pure UDF filesystem, only a hybrid 9660/UDF disc that risked compatibility issues with big files. There is a mkudffs, but it can’t populate the filesystem from a source directory on its own. So I wrote a script to do it. Note that this script may fail with dotfiles or files with spaces in them:

#!/bin/bash

set -e

if [ ! -d "$1" -o -e "$2" -o -z "$2" ]; then
   echo "Syntax: $0 srcdir destimage [volid]"
   echo "destimage must not exist"
   exit 5
fi

if [ "`id -u`" != "0" ]; then
   echo "This program must run as root."
fi

EXTRAARGS=""
if [ ! -z "$3" ]; then
   EXTRAARGS="--vid=$3"
fi

# Get capacities at http://en.wikipedia.org/wiki/DVD+R as of 9/27/2009

SECSIZE=2048

# I'm going to set it a few lower than the capacity of 2295104, just in case.
# Must be at least one lower than the actual size for dd to do its thing.
# SECTORS=2295103
SECTORS=2295000

echo "Allocating image..."

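# Seek past the end and write a single block: this creates a sparse file of
# the full image size almost instantly, instead of spending ages writing out
# gigabytes of NULLs.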
dd if=/dev/zero "of=$2" bs="$SECSIZE" "seek=$SECTORS" count=1

echo "Creating filesystem..."

mkudffs --blocksize="$SECSIZE" $EXTRAARGS "$2"
mkdir "$2.mnt"

echo "Populating..."
mount -o rw,loop -t udf "$2" "$2.mnt"
cp -rvi "$1/"* "$2.mnt/"
echo "Unounting..."
umount "$2.mnt"
rmdir "$2.mnt"

echo "Done."

That was loosely based on a script I found on the Arch Linux site. But I didn’t like the original script, because it tried to do too much, wasted tons of time writing out 4.7GB of NULLs when it could have created a sparse file in an instant, and was interactive.
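
Using the script ends up looking something like this (the script name and disc names here are hypothetical):

# Build a UDF image from one disc's directory...
sudo ./mkudf.sh disc03/ disc03.udf INAUG_DISC3

# ...and burn the premastered image with growisofs.
growisofs -dvd-compat -Z /dev/dvd=disc03.udf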

So there you have it: an HD broadcast made playable on the PS3, losslessly. Note that this info will also work equally well if you have a Blu-Ray drive.

Town Hall Questions

Sen. Brownback (R-KS) will be at my employer Monday, and will have a short town hall session. I’m debating whether to go or not, and whether to say anything or not. I don’t agree with him on much, and highly doubt that I’d change his mind on anything. Should I go? If so, what should I say?

Here are some random facts I’m considering mentioning:

  • When Jacob was born 3 years ago, it cost us $250. When Oliver was born this summer, it cost us around $3000, and Oliver’s wasn’t a more complicated pregnancy.
  • Most of the past 8 years, my insurance premiums have gone up and my benefits have gone down.
  • If I lost my coverage, I couldn’t afford to buy it on my own, even if I still had my job. Insuring our family on the individual market would cost more each month than our mortgage.
  • Nearly as much of my paycheck goes to health care as goes to federal taxes (about 80% as much). I’d gladly take a tax increase if it slowed growth in health care costs.
  • Brownback says it’s too expensive to do comprehensive reform right now. The estimated cost of reform is $900 billion over 10 years, and it will be paid for.
  • Brownback voted for the Bush tax cuts, which cost $2.5 trillion over 10 years; yes on attacking Iraq ($700 billion so far); and yes on Medicare prescription drug coverage ($400 billion). He didn’t vote for a way to pay for any of these.

Suggestions?

One final note: I will not be doing anything disruptive or disrespectful.

Late Summer

It’s that time of the year again. Everything is changing, and maybe for the better.

The days are getting shorter. When I left for work on my bicycle yesterday morning, it was still dark outside, and a little nippy. There’s nothing quite like riding a bicycle down a deserted country road at night, a cool breeze at your back, and having the sun come up as you ride.

And with the cooler weather, we can open our windows at night instead of running the air conditioner. It’s also nice to have a pleasant cool breeze flowing through the house, and hear the frogs, crickets, owls, coyotes, and other wildlife at night. Out here, we certainly don’t hear sounds of traffic, or loud car radios, though on a really clear night we might hear the rumble and whistle of a train from a few miles away.

This is also sunflower season in Kansas. The wild sunflowers, with their smaller-than-most-people-think flowers, grow everywhere. Some ditches turn into a sea of person-height yellow. Sunflowers are on the sides of bridges, around people’s mailboxes — just about anywhere that isn’t mowed or farmed. Then you also pass the sunflower fields, with their larger flowers, even more sea-like.

The beans are getting tall in the fields this time of year, and it won’t be long before the milo starts to turn its deep, dark reddish brown.

But, you know, we’re Kansans. We can’t really let ourselves enjoy it all that much. Just today, I heard a conversation — apparently the Farmer’s Almanac is predicting a brutally frigid winter this year. Gotta keep our sense of pessimism about weather alive now…

Google Groups Fail

Last month, I wrote that I was looking for mailing list hosting. It looks like some of the lists I host will move to Alioth, and some to Google Groups.

Google Groups has a nice newbie-friendly interface with the ability to read a group as a forum or as a mailing list. Also, they don’t have criteria about the subject matter of a group, so the Linux user’s group list I host could be there but not at Alioth.

So I set up a Google Group for the LUG. I grabbed the subscriber list from Ecartis, and went to “Directly Add” the members. This was roughly August 12.

After doing so, I got a message saying that Google needs to review these requests to make sure they’re legit, and will get back to me in 1-2 days. OK, that’s reasonable.

Problem is, nobody appears to be reviewing them. It’s now three weeks later, and there has been no action.

So I decided I would ask someone at Google about it. The only way they give to do that is to post in the Google Groups Help Forum. So I did. Guess what? They ignore that, too.

Let me say: relying on this sort of service for something important really makes me think twice. It makes me nervous about using Google Voice (what if my Google Voice number goes down?). It certainly makes me think twice about ever relying on Gmail or the other Apps for anything important.

My own mail server may not have the features that theirs does, but if it breaks, I don’t have to worry about whether anybody even cares to fix it.