Monthly Archives: December 2006

Memories of Christmas, part 1

Merry Christmas to everyone today!

Today, and for the next few days, I’m going to write about some of my Christmas memories from earlier years. Then I’ll finish up with some photos from this year.

This post is about Christmas at home growing up.

One of the first signs of Christmas happened when my dad put up the lights on the outside of our house.

My brothers and I had (and still have) stockings that got one piece of candy per day during December, then got stuffed full when we opened our presents. The tree went up over Thanksgiving weekend, and it was always a lot of fun to help with that.

On Christmas Eve, we’d always go to the program at church: the kids’ Christmas play. Church would be packed. On the way out, everyone got a gift sack with fruit (apples or oranges), some peanuts, and maybe some candy. Then we’d go home. Mom or Dad would read the Christmas story from the Bible, and after a prayer, it was time to open presents. In addition to the regular presents, Dad would always give us a large paper gift sack. It would usually have a 12-pack of pop and some sort of nut (peanuts, pistachios, or maybe peanut brittle).

Sometimes we’d all stand still long enough for a photo.

We’d usually stay up late into the night Christmas Eve and sleep in on Christmas day.

Renovation: Week 24

Wow! What a week. (Click here for all of this week’s photos) On Sunday afternoon, we walked into the house and saw this:

Yes, over the weekend, the cabinet makers had installed the cabinets earlier than expected! I never expected that I would get excited about kitchen cabinets, but I sure was. They look great.

My grandma’s china cabinet is back up in the dining room. They’ve removed the tile from the dining room, exposing the wood floor underneath. The floor restoration people have spent some time doing some patching already.

Doors and doorjambs are starting to go in. Trim is arriving. And the wood in the wall alongside one of the staircases has been seamlessly extended.

It’s exciting. (photos here)

And check this out:

That hunk of ice is the exhaust from our furnace. No joke! The furnace is so efficient that, when it’s running, all that comes out of the exhaust pipe is a moist, slightly warm breeze. In winter, the moisture condenses and drips down below, forming a hunk of ice.

Saving Power with CPU Frequency Scaling

Yesterday I wrote about the climate crisis. Today, let’s start doing something about it.

Electricity, especially in the United States and China, turns out to be a pretty dirty energy source. Most of our electricity is generated from coal, which, despite promises of “clean coal” to come, burns dirty. Not only does it contribute to global warming, it has also been shown to have an adverse impact on health.

So let’s start simple: reduce the amount of electricity our computers consume. Even for an individual, this can add up to significant energy (and money) savings in a year. Multiply that across companies, server rooms, and the like, and it adds up fast. This works on desktops, servers, laptops, whatever.

The easiest way to save power is with CPU frequency scaling, a technology that lets you adjust a CPU’s clock speed while it’s running. When CPUs run at slower speeds, they consume less power. Most CPUs are set to their maximum speed all the time, even when the system isn’t using them. Linux has support for dropping the CPU to a lower speed whenever it is idle. By turning on this feature, we can save power at virtually no cost to performance. The Linux feature that handles CPU frequency scaling is called cpufreq.

Setting Up Modules

Let’s start by checking to see whether cpufreq support is already enabled in your kernel. These commands will need to be run as root.

# cd /sys/devices/system/cpu/cpu0
# ls -l

If you see an entry called cpufreq, you are good and can skip to the governor selection below.

If not, you’ll need to load cpufreq support into your kernel. Let’s get a list of available drivers:

# ls /lib/modules/`uname -r`/kernel/arch/*/kernel/cpu/cpufreq

Now it’s guess time. It doesn’t really hurt if you guess wrong; you’ll just get a harmless error message. One hint, though: try acpi-cpufreq last; it’s the option of last resort.

On my system, I see:

acpi-cpufreq.ko     longrun.ko      powernow-k8.ko         speedstep-smi.ko
cpufreq-nforce2.ko  p4-clockmod.ko  speedstep-centrino.ko
gx-suspmod.ko       powernow-k6.ko  speedstep-ich.ko
longhaul.ko         powernow-k7.ko  speedstep-lib.ko

For each guess, you’ll run modprobe with the driver name. I have an Athlon64, which is a K8 machine, so I run:

# modprobe powernow-k8

Note that you leave off the “.ko” bit. If you don’t get any error message, it worked.
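If you’d rather automate the guessing, a short loop run as root can try each driver for you, leaving acpi-cpufreq for last as suggested above. This is just a sketch, assuming your modules live in the directory we listed earlier:

# A sketch: try each cpufreq driver until one loads, acpi-cpufreq last.
drivers=$(ls /lib/modules/$(uname -r)/kernel/arch/*/kernel/cpu/cpufreq/*.ko)
for m in $(echo "$drivers" | grep -v acpi-cpufreq) $(echo "$drivers" | grep acpi-cpufreq); do
    name=$(basename $m .ko)            # modprobe wants the name without ".ko"
    modprobe $name 2>/dev/null && { echo "Loaded: $name"; break; }
done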

Once you find a working module, edit /etc/modules and add the module name there (again without the “.ko”) so it will be loaded for you on boot.
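For example, since powernow-k8 is what worked for me, appending it is a one-liner:

# echo powernow-k8 >> /etc/modules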

Governor Selection

Next, we need to load the module that tells the kernel which governor to use. The governor is the piece that monitors the system and adjusts the speed accordingly.

I’m going to suggest the ondemand governor. This governor keeps the system’s speed at maximum unless it is pretty sure that the system is idle, so it’s the one that will let you save power with the least performance impact.

Let’s load the module now:

# modprobe cpufreq_ondemand

You should also edit /etc/modules and add a line that says simply cpufreq_ondemand to the end of the file so that the ondemand governor loads at next boot.

Turning It On

Now, back under /sys/devices/system/cpu/cpu0, you should see a cpufreq directory. cd into it.

To turn on the ondemand governor, run this:

# echo ondemand > scaling_governor
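You can double-check that it took effect:

# cat scaling_governor
ondemand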

That’s it, your governor is enabled. You can see what it’s doing like this:

# cat cpuinfo_min_freq
800000
# cat cpuinfo_max_freq
2200000
# cat cpuinfo_cur_freq
800000

That shows that my CPU can go as low as 800MHz, as high as 2.2GHz, and that it’s currently running at 800MHz.

Now, check your scaling governor settings:

# cat scaling_min_freq
800000
# cat scaling_max_freq
800000

This shows that the system is constraining the governor to a range of 800MHz to 800MHz, so it will never actually scale. That’s not what I want; I want it to scale over the entire range of the CPU. Since my cpuinfo_max_freq was 2200000, I want to write that value to scaling_max_freq as well:

# echo 2200000 > scaling_max_freq
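Alternatively, you can copy the value across without typing the number; this has the same effect, assuming you’re still in the cpufreq directory:

# cat cpuinfo_max_freq > scaling_max_freq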

Making This The Default

The last step is to make this happen on each boot. Open up your /etc/sysfs.conf file. If you don’t have one, install it with a command such as apt-get install sysfsutils (or the appropriate one for your distribution).

Add lines like these:

devices/system/cpu/cpu0/cpufreq/scaling_governor = ondemand
devices/system/cpu/cpu0/cpufreq/scaling_max_freq = 2200000

Remember to replace the 2200000 with your own cpuinfo_max_freq value.

IMPORTANT NOTE: If you have a dual-core CPU, or more than one CPU, you’ll need to add a line for each CPU. For instance:

devices/system/cpu/cpu1/cpufreq/scaling_governor = ondemand
devices/system/cpu/cpu1/cpufreq/scaling_max_freq = 2200000

You can list all of your CPU devices with ls /sys/devices/system/cpu.
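As an aside, if you want to apply the settings to every CPU right now at runtime, rather than waiting for the next boot, a small loop run as root will do it. A sketch, assuming each cpuN directory has a cpufreq subdirectory:

# A sketch: enable ondemand and the full frequency range on every CPU.
for c in /sys/devices/system/cpu/cpu[0-9]*/cpufreq; do
    echo ondemand > $c/scaling_governor
    cat $c/cpuinfo_max_freq > $c/scaling_max_freq
done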

Now, save this file, and you’ll have CPU frequency scaling saving you money, and helping the environment, every time you boot. And with the ondemand governor, chances are you’ll never notice any performance loss.

This article showed you how to save power using CPU frequency scaling on Linux. I don’t know whether the same is possible on Windows, Mac OS, or the various BSDs, but if it is, it would be great if someone would leave comments with links to resources for doing it.

Updated: added scaling_max_freq info

The Climate Crisis

Wow.

We just watched An Inconvenient Truth. Not much in there was new to me, but to see it all presented at once is amazing.

There are vast, undisputed scientific facts out there. For instance, the CO2 content of the atmosphere is higher than it has ever been in the 650,000 years we can go back. The linkage between that and temperatures is inescapable.

Gore makes a good point: shouldn’t we be worried about more than terrorism?

Does the thought of parts of Manhattan, San Francisco, and large parts of Florida going underwater suggest a problem exists?

This really is critical and urgent.

We’ve already been thinking about it lately as we renovate our house. We’re paying a little more now for things like airtight insulation, low-energy lighting, and efficient heating. Not only will it save us money in the long run, it will help improve our lives, and Jacob’s life, down the road.

The movie’s website is over at climatecrisis.net.

Insurance

I was visiting with our homeowners insurance company the other day (very good folks). Because we weren’t living in our house during renovation, we had to get a more expensive type of policy. The agent who sold it to us is no longer with the company, so the new person handling our account asked me, “Why did he want you to get this policy?”

“Well, he said it was because we weren’t living there.” Then I suddenly remembered the rest of what he said, and couldn’t help cracking up. “Plus, he said that if something like a fire happened, people might not notice.”

What can I say. Psychic insurance salesman.

On that note, I was having a discussion with someone about large windows on a house. That person was saying that “everyone” always puts their large windows on the front (or back, I forget which) of their house so that potential burglars are easier to spot. He was concerned about where we were putting windows on ours.

This had never occurred to me.

But I pointed out that our yard was on fire and nobody noticed for hours. What difference would it make where our windows are?

“Ah, good point,” he said.

On that note, you ought to check out rush hour in the country. Truly this is a lot of traffic for roads around here.

Review of VPSLink

Back in June, I wrote about my switch to VPSLink for my virtual private server (VPS) host.

Now, six months later, my initial contract is up, and it’s time to consider whether to renew it.

Overall, VPSLink has worked out reasonably well. I have their Link-4 plan, which provides 512MB RAM, 20GB of disk storage, and 500GB of bandwidth for $40/mo (or down to $33/mo if you pay for 12 months in advance).

Reliability and Uptime

VPSLink has been reasonably reliable. I wouldn’t say that they have been exceptionally reliable, though.

Some outages:

  • Back in July, the server was down for more than 8 hours. VPSLink support blamed it on filesystem corruption; they rebooted, the system came up, the filesystem went into read-only mode, and they took it back down for fsck. I don’t know what filesystem they use.
  • In October, there was an outage of about 30-60 minutes, apparently due to load problems on the host. The control panel was also broken during this time. Apparently there is a reboot queue: the system can only reboot one customer’s VPS at a time, and everybody wanted to reboot. The UI wasn’t designed for this situation and presented very confusing status messages.
  • In November, a kernel panic caused an outage of about 60 minutes.
  • Various other outages that were resolved fast enough that I never emailed them about it.

They have generally responded to support requests in a reasonable amount of time, and support has been approximately as helpful as I’d expect.

Overall, there’ve been a few issues but, aside from that 8-hour outage in July, nothing especially remarkable.

Automated Tools

VPSLink has a “control panel” where you can reboot, start, or stop your virtual machine. You can see how much bandwidth you’ve used in a month, your IPs, and billing information.

You can also create support requests and view their status there, see all the correspondence on a given ticket, and add new correspondence to it. That’s a nice touch, and it’s useful if your email is down because your VPS is down.

The reboot/start/stop facility didn’t work during one outage, though.

Resources

The 20GB of disk space is nice to have, though I never used it all. No complaints there.

But the memory setup is rather strange. This may be because VPSLink uses OpenVZ rather than its more popular (and more featureful) commercial counterpart Virtuozzo, or something such as Xen.

Those who have used OpenVZ or Virtuozzo know there is a /proc/user_beancounters file in the virtual environment that reports the limits on the resources allocated to your virtual server. VZ lets the server admins regulate the amount of resident RAM, swap, inodes, pending IP connections, virtual RAM, and so on: about three dozen items in total.

Unlike some companies, such as JohnCompanies, VPSLink was not willing to make any adjustments to this file for me. For instance, my mail server may need more simultaneous open files than the default allows, but they wouldn’t work with me to make reasonable adjustments.

But the larger problem, and a particularly annoying one, was RAM. VPSLink does not permit any access to swap, so the 512MB is all you get. Meanwhile, tools like top show the memory allocation of the entire host and are mostly useless for tracking down your own usage.

You can look at the privvmpages entry in /proc/user_beancounters to get an idea of your current usage, but that’s about it. There is a script on their wiki that turns this into a more useful number, but even that isn’t the most helpful.
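For the curious, here’s a rough one-liner of my own (not the wiki script) that converts the held privvmpages count to megabytes, assuming 4KB pages:

# awk '/privvmpages/ { printf "%d MB held, %s allocation failures\n", $2 * 4 / 1024, $NF }' /proc/user_beancounters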

The minute you try to allocate anything past 512MB, your processes start getting killed. Even though my system normally hovers around 70% of that 512MB, /proc/user_beancounters shows that, since the last reboot, processes have been refused memory allocation requests 290,567 times. Such refusals usually cause a process to crash.

Other Virtuozzo companies, like JohnCompanies, will give you “burstable RAM”: if others aren’t using their full RAM allocation at the moment your machine goes over its regular limit, you can exceed it to a certain extent. That was very effective at preventing crashes during times of higher-than-usual memory usage, and I never had trouble with this at JohnCompanies.

I’ve had a lot of trouble with VPSLink because of these hard, no-exceptions limits. Getting hit by the Google bot at the same time as the MSN bot could raise memory usage enough that processes would start getting killed. Depending on which process got killed, that could take down cron, apache, the mail server, whatever. And there is no log of which processes died because of this.

In normal circumstances, this wouldn’t have even a noticeable impact on a server; inactive bits could be swapped out, and so on.

I am also suspicious of exactly how OpenVZ calculates memory usage. Since moving to a physical Linux box, my memory usage appears to be much lower than privvmpages indicated, even though I’m running the exact same code.

So I had some outages on my server caused by OpenVZ memory limits; those aren’t included in the outage list above.

This problem is the reason I’m leaving VPSLink.

Performance

Overall performance has been acceptable. The CPU speed appears to be reasonable and about as expected.

The disk performance has been more problematic, however. It tends to be rather slow. There have been times when I typed “ls” in a directory with about 10 files and it took 5-10 seconds to respond. And I wasn’t even using “-l”.

Now that sort of thing is certainly the exception. But it makes databases — and websites that rely on databases — very slow.

The performance measurements over at RealMetrics also show VPSLink as being at the bottom of the pack for disk performance.

Overall

VPSLink is a reasonable value, and if monthly cost is important to you, probably a good choice. Don’t expect it to be spotless, though. I would call the overall experience pretty average: nothing spectacular in either direction.

The Haskell Blog Tutorial

The first installment of Mark C. Chu-Carroll’s Haskell tutorial series went up last week.

It begins this way:

Before diving in and starting to explain Haskell, I thought it would be good to take a moment and answer the most important question before we start:

Why should you want to learn Haskell?

It’s always surprised me how many people don’t ask questions like that.

Farther down:

So what makes Haskell so wonderful? Or, to ask the question in a slightly better way: what is so great about the pure functional programming model as exemplified by Haskell?

The answer is simple: glue.

Languages like Haskell have absolutely amazing support for modular development.

An interesting and thought-provoking article, even for someone who’s been using Haskell for more than 2 years now. (Yikes, I had no idea it had been that long.)

You can also see all his posts on Haskell, which include a couple more installments.