
Sierra Wireless 595U / Sprint on Linux

Here’s how you use a Sierra Wireless 595U USB modem to connect to wireless Internet service with Sprint:

Insert the modem into a USB port. lsusb should show:

Bus 001 Device 005: ID 1199:0120 Sierra Wireless, Inc.

If the usbserial driver is already loaded, remove it so it can be reloaded with the right parameters:

rmmod usbserial

Then:

modprobe usbserial vendor=0x1199 product=0x0120

You should see /dev/ttyUSB0, ttyUSB1, and ttyUSB2 appear. See also instructions for automating this with a similar card (modify vendor and product to above settings).
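One way to make this automatic on Debian is to set those parameters in the module configuration. As a sketch (the file name under /etc/modprobe.d/ is arbitrary):

options usbserial vendor=0x1199 product=0x0120

Put that line in a file such as /etc/modprobe.d/sierra, then add usbserial to /etc/modules so the driver loads with those options at boot.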

Now you will need to configure PPP for this. On Debian, run pppconfig. Your settings will be:

Phone number: #777
Username: 1234567890@sprintpcs.com (replace 1234567890 with your data card’s “phone number”, no dashes)
Password: your sprint password
Speed (BPS): 921600 (use lower numbers such as 115200 if you have trouble with this)
Port: /dev/ttyUSB0
Init string: ATZ
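For reference, pppconfig writes these settings out to a peers file; the result should look roughly like this (a sketch only, assuming you named the provider “sprint”; the generated file may differ in its details):

# /etc/ppp/peers/sprint
/dev/ttyUSB0
921600
defaultroute
noipdefault
user "1234567890@sprintpcs.com"
connect "/usr/sbin/chat -v -f /etc/chatscripts/sprint"

The password itself goes in /etc/ppp/pap-secrets, keyed by that username.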

Here are some other helpful pages:

Verizon EVDO
Sprint and Linux
Cingular AT&T UMTS
Sierra’s Linux page

Conferences Suggestions

At work, we use Linux (and Debian in particular) for a lot of different things: everything from our phone system (running on Asterisk) to file serving and running some proprietary applications. I’m one of the people who finds, sets up, and maintains these systems, and I write code for our in-house use as well. I like to learn from others, and to get to know others who may have things in common with me and with our environment. So going to conferences is a useful thing to do.

I’m hoping that some reader out there will have a good suggestion for a conference I ought to attend. Here are some that I’ve looked into and my thoughts on them:

  • Usenix Annual Tech Conference: I went last year. There were some very good talks, but the audience size was not all that large. I got to meet some peers there, but I didn’t get to talk in-person to anybody I’d worked with online before. (LISA being in fall/winter means it’s too early to consider it just yet)
  • LinuxWorld Conference & Expo: My general impression of LWCE in the past has been that its technical talks aren’t very technical; that is, they either cover things I already know or things I don’t care about. They are starting to publish their program this year, though, and I see a few interesting things. There have traditionally been a lot of Debian folks at the .org pavilion.
  • Debconf: It seems to be focused almost exclusively on developing the Debian operating system, rather than on using it. While I am a Debian developer and have been for quite a while, I would find new uses of Debian more interesting than new ways to hack on Debian. Plus, the insanely early registration requirement means that it’s too late to go this year anyhow. (And my brother is getting married right in the middle of it.)
  • OSCon: There look to be some interesting talks in the database area, and some about Xen and virtualization, and Simon PJ (one of the ghc hackers) will be there. So this would be interesting, though somewhat light on the admin side of things.
  • OLS: Seems very focused on the kernel, and not much else. That is of interest, of course, but is one piece of many. Though there was a talk about Linux deployment at Nortel that sounded interesting.

My leading candidates are probably Usenix and OSCon. I’m interested to hear what people think, especially those that have attended some of these conferences.

And we’re off!

Yesterday afternoon, we started our information meetings with employees about our Linux on the desktop project. We’re underway on our migration.

But before I talk about that, I need to back up and describe what the project is.

We are converting approximately 80% of our 150 or so PC users to Linux desktops. The machines run Debian etch (4.0) with Gnome, Firefox (Iceweasel), and Evolution, with /home on NFSv4 and deployment handled by SystemImager. Over the coming days and weeks, I’ll be writing about why we’re doing this, how we’re making it happen, things we’ve run into along the way, and the technology behind it.

Today I’d like to start with a high-level overview of the reasons we started investigating this option.

It became apparent that Vista was going to be a problem for us. Most of our desktop PCs are not very old, but Vista meant a significant degradation in performance from the Windows XP Pro that most people were running. A dip so large, in fact, that it would have had a real negative impact on employee productivity.

We tend to buy PCs with Windows licenses from the vendor (Windows preinstalled). As such, we knew it wouldn’t be long before XP-based machines would be hard to find. If we stuck with Windows, we’d be running a mixed-OS network, which we knew from experience we did NOT want to do. The other option would be to replace all those PCs. The direct costs of doing that, with the associated Vista and Office licenses, would have been more than $200,000.

So we started to look at other options — changing the way we license Windows, sticking with XP for awhile, or switching away from Windows. This last option sounded the most promising.

I took a spare desktop-class machine, representative of the hardware most end users would have, and installed etch (then testing) on it. I spent a bit of time tweaking the desktop settings, making things as transparent to the user as possible. We liked what we saw and started pursuing it a bit more. We knew we had some Windows apps we couldn’t discard, so we tested running them off a Windows terminal server with the Linux rdesktop client. That worked well — and the appropriate Server 2003 licenses plus CALs would still be far cheaper than a mass migration to Vista.

To make a long story short, we are getting quite a few benefits out of all this. One of the most important is a single unified system image. Excepting a few files like /etc/fstab, every system gets a bit-for-bit identical installation from the server, updated using rsync. /home is mounted from the network using NFS (v4). So our users can sit down at any PC, log in, and have all their programs, settings, email, etc. available. A side benefit is that hardware problems become minor annoyances rather than major inconveniences; if your hard disk dies, we can just bring up a different PC. We had tried numerous times to make roaming profiles work in Windows, but never really achieved a reliable setup there — perhaps because it seemed virtually impossible to assure that each Windows PC had the exact same set of software, in the exact same versions, installed.
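To illustrate the idea, the update each client performs amounts to something like this (a simplified sketch with a hypothetical server and module name; SystemImager wraps this up and handles the real details):

rsync -aHx --delete --exclude=/etc/fstab --exclude=/home imageserver::desktop_image/ /

Since /home comes over NFS and /etc/fstab is per-machine, both are excluded; the exclusions also protect those paths from --delete on the client.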

More to come.

Saving Power with CPU Frequency Scaling

Yesterday I wrote about the climate crisis. Today, let’s start doing something about it.

Electricity, especially in the United States and China, turns out to be a pretty dirty energy source. Most of our electricity is generated using coal, which despite promises of “clean coal” to come, burns dirty. Not only does it contribute to global warming, but it also has been shown to have an adverse impact on health.

So let’s start simple: reduce the amount of electricity our computers consume. Even for an individual person, this can add up to quite a bit of energy (and money) savings in a year. When you think about multiplying this over companies, server rooms, etc., it adds up fast. This works on desktops, servers, laptops, whatever.

The easiest way to save power is with CPU frequency scaling. This is a technology that lets you adjust how fast a CPU runs while it’s running. When CPUs run at slower speeds, they consume less power. Most CPUs are set to their maximum speed all the time, even when the system isn’t using them. Linux can instead keep the CPU at a low speed when the system is idle, raising it to maximum only when needed. By turning on this feature, we can save power at virtually no cost to performance. The Linux feature that handles CPU frequency scaling is called cpufreq.

Set up modules

Let’s start by checking to see whether cpufreq support is already enabled in your kernel. These commands will need to be run as root.

# cd /sys/devices/system/cpu/cpu0
# ls -l

If you see an entry called cpufreq, you are good and can skip to the governor selection below.

If not, you’ll need to load cpufreq support into your kernel. Let’s get a list of available drivers:

# ls /lib/modules/`uname -r`/kernel/arch/*/kernel/cpu/cpufreq

Now it’s guess time. It doesn’t really hurt if you guess wrong; you’ll just get a harmless error message. One hint, though: try acpi-cpufreq last; it’s the option of last resort.

On my system, I see:

acpi-cpufreq.ko     longrun.ko      powernow-k8.ko         speedstep-smi.ko
cpufreq-nforce2.ko  p4-clockmod.ko  speedstep-centrino.ko
gx-suspmod.ko       powernow-k6.ko  speedstep-ich.ko
longhaul.ko         powernow-k7.ko  speedstep-lib.ko

For each guess, you’ll run modprobe with the driver name. I have an Athlon64, which is a K8 machine, so I run:

# modprobe powernow-k8

Note that you leave off the “.ko” bit. If you don’t get any error message, it worked.
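If you’d rather not guess by hand, a short shell loop can try each driver for you. This is just a sketch of the guessing procedure described above; it skips acpi-cpufreq so you can try that last resort manually afterward:

for m in /lib/modules/`uname -r`/kernel/arch/*/kernel/cpu/cpufreq/*.ko; do
    m=`basename $m .ko`
    [ "$m" = "acpi-cpufreq" ] && continue    # save the last resort for a manual try
    modprobe $m 2>/dev/null && echo "$m loaded" && break
done

If the loop finishes without printing anything, try modprobe acpi-cpufreq.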

Once you find a working module, edit /etc/modules and add the module name there (again without the “.ko”) so it will be loaded for you on boot.

Governor Selection

Next, we need to load the module that provides the governor. The governor is the piece that monitors the system and adjusts the speed accordingly.

I’m going to suggest the ondemand governor. This governor runs the CPU at a low speed when the system is idle, and jumps it back to maximum as soon as there is work to do. So this will be the one that lets you save power with the least performance impact.

Let’s load the module now:

# modprobe cpufreq_ondemand

You should also edit /etc/modules and add a line that says simply cpufreq_ondemand to the end of the file so that the ondemand governor loads at next boot.
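At this point, the end of my /etc/modules looks like this (powernow-k8 is what worked for my hardware; substitute whichever driver worked for you):

powernow-k8
cpufreq_ondemand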

Turning It On

Now, back under /sys/devices/system/cpu/cpu0, you should see a cpufreq directory. cd into it.

To turn on the ondemand governor, run this:

# echo ondemand > scaling_governor

That’s it, your governor is enabled. You can see what it’s doing like this:

# cat cpuinfo_min_freq
800000
# cat cpuinfo_max_freq
2200000
# cat cpuinfo_cur_freq
800000

That shows that my CPU can go as low as 800MHz, as high as 2.2GHz, and that it’s running at 800MHz at the moment.

Now, check your scaling governor settings:

# cat scaling_min_freq
800000
# cat scaling_max_freq
800000

This is showing that the system is constraining the governor to only ever operate on an 800MHz to 800MHz range. That’s not what I want; I want it to scale over the entire range of the CPU. Since my cpuinfo_max_freq was 2200000, I want to write that out to scaling_max_freq as well:

# echo 2200000 > scaling_max_freq

Making This The Default

The last step is to make this happen on each boot. Open up your /etc/sysfs.conf file. If you don’t have one, you will want to run a command such as apt-get install sysfsutils (or the appropriate one for your distribution).

Add lines like these:

devices/system/cpu/cpu0/cpufreq/scaling_governor = ondemand
devices/system/cpu/cpu0/cpufreq/scaling_max_freq = 2200000

Remember to replace the 2200000 with your own cpuinfo_max_freq value.

IMPORTANT NOTE: If you have a dual-core CPU, or more than one CPU, you’ll need to add these lines for each CPU. For instance:

devices/system/cpu/cpu1/cpufreq/scaling_governor = ondemand
devices/system/cpu/cpu1/cpufreq/scaling_max_freq = 2200000

You can see which CPU devices you have with ls /sys/devices/system/cpu.
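If you have several CPUs, a small loop can generate those lines for you. This is a sketch; inspect its output before appending to /etc/sysfs.conf, and substitute your own maximum frequency for 2200000:

for c in /sys/devices/system/cpu/cpu[0-9]*; do
    echo "${c#/sys/}/cpufreq/scaling_governor = ondemand"
    echo "${c#/sys/}/cpufreq/scaling_max_freq = 2200000"
done >> /etc/sysfs.conf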

Now, save this file, and you’ll have CPU frequency scaling saving you money, and helping the environment, every time you boot. And with the ondemand governor, chances are you’ll never notice any performance loss.

This article showed you how to save power using CPU frequency scaling on Linux. I have no idea if it’s possible to do the same on Windows, Mac, or the various BSDs, but it would be great if someone would leave comments with links to resources for doing that if so.

Updated: added scaling_max_freq info

Desktop Linux: NFS or something else?

Recently, I asked for opinions on desktop Linux. Thanks very much to those that replied. I’ve set up an old laptop as an experiment. I’m using Debian, Gnome, and Systemimager. It’s been an interesting project (especially getting SystemImager and a splash screen program to do what I want).

I’d like for my desktop machines to mount /home over the network. I could use NFS, but of course that has all the well-known security risks. Is there a better network filesystem that is easy to use, fast, and more secure than NFS?

Desktop Linux Opinions?

I’m brainstorming about ways of setting up Linux desktop machines for people used to Windows on a LAN. It could be any size of LAN.

I’d like people to be able to sit down at any Linux machine on the LAN and log in, probably using an LDAP directory for authentication and NFS-mounted home directories. I wouldn’t want to NFS-mount the entire filesystem, for performance reasons.

So, some of the things I’m thinking about are:

  • Desktop environment: KDE or Gnome? Which would give Windows users all the tools they’d want? Which would they feel most at home with? I’m thinking it’s KDE, but Gnome has a more polished “feel” to it.
  • Image management. How could the desktops be updated? Just rsync everything except fstab over? Can we actually have a single system image? Is XOrg powerful enough to just recognize hardware at boot and Do The Right Thing? Can we build a unified initrd somehow?
  • Distribution. Debian, Ubuntu, Kubuntu? Do the Ubuntus bring anything to the table, if we take as a given that an experienced Debian admin is managing all this?
  • Laptops. What do we do about the home directories there? Some sort of automated rsync thingy?
  • Installation. FAI? Or some homegrown thing that just boots up, partitions, and runs rsync?

RedHat Gripes

Lately we are looking at groupware options, and have been looking at Scalix and Zimbra. We may need the features in the proprietary versions of these products, unfortunately.

So I downloaded an evaluation copy of Scalix.

They say they support RedHat and SuSE. Fine, I think, I’ll just alien the RPMs to debs and be happy.

Not so fast. They have a whole proprietary install system. They check for /etc/redhat-release or /etc/SuSE-release (or something like that) and do different things depending on what is there. Ugh. Why can’t these proprietary vendors just target LSB? The differences seem mostly related to init anyway.

So I touch /etc/SuSE-release into existence and run the installer again. It complains that DISPLAY is not set. UGH. I log in with ssh forwarding, as root (sigh), and run it again.

Now it complains that the SuSE-release file doesn’t contain a valid release. I google a bit, but the file format doesn’t seem to be documented anywhere. I extract it from an RPM somewhere, but no luck.

So, I figure at this point, let’s try an actual RPM distro. I’m running this in a Xen domain anyway, so it should be no big deal, right?

I think CentOS will be a good choice. It’s RHEL with the non-free stuff stripped out, and Scalix supports RHEL without needing any of that non-free stuff. I google, and find instructions for installing via rpmstrap for Xen use.

Let me say, rpmstrap is not nearly the nice tool that cdebootstrap is. rpmstrap totally hosed the networking on the Xen host machine, requiring me to reboot to get it back to a proper state. The resulting install wouldn’t boot, either; I later found out that, even though I listed explicit devices in /etc/fstab like usual, it requires labels on all my partitions to boot. Ugh. There are a host of other problems with the rpmstrap-installed chroot, and it’s broken beyond my ability to repair due to problems with the rpm database.

So then I downloaded the “Server” CD for CentOS, which is supposed to have just the stuff a person would need for a server, and leave off all the graphical tools, multimedia, etc. I fired up VMware and did an install. Then I booted Debian From Scratch in VMware and used tar and netcat to copy the installed image over to Xen.

I got it booting fairly easily. But now I start to remember why I had this instinctive gag reflex last time I used RHEL.

First off, the network configuration, by default, is tied to the MAC address of your ethernet card. So if you replace your Ethernet card, your network is broken by default.

Then, there’s the way the network is brought up. It uses arping as part of its procedure to bring up a NIC. If it sees a reply anywhere on the network with the IP you’re trying to assign, it leaves the NIC half-up — it’s been ifconfig’d up, but without an IP. So that’s right, if somebody happens to have a rogue device plugged in at the moment your server boots, your server will come up without a network configured. This is *Enterprise* Linux and it’s pulling this sort of thing. Terrible design.

Next, there’s the way the network is *configured*. There are commands such as system-config-network-tui, -gui, -cmd, -druid, etc. I go for -tui to start with. It’s a dialog-like interface, and asks the basics like IP address, etc. It doesn’t have any way to configure more than one Ethernet card that I can tell. And some of the settings — like nameserver — apparently require you to press F12 to reach them. But the program doesn’t recognize F12 as sent by an xterm, so it doesn’t work.

All the other options require X. So, I reluctantly ssh -X into it as root and run system-config-network-gui. It doesn’t work — complains it can’t find DISPLAY. Strange, I think; DISPLAY is set properly to localhost:whatever. It turns out that /etc/hosts is empty by default, so the thing can’t resolve localhost! Argh. I add a line to /etc/hosts and it fires up.

This tool works decently. I save, uncheck the “tie to a MAC address” box, and exit. I then think it might be good to fire it up again and see what it did. I try running it again, and get the same error about DISPLAY. The stupid tool blew away /etc/hosts and replaced it with an empty file! This is NOT what I would expect from an Enterprise Linux. You don’t blow away a config file the administrator touched without asking, EVER.

Next, I figure, let’s try installing the XFS tools so I can switch the root filesystem to xfs. I start with “yum update”, which doesn’t quite do what I expect. (It is more like apt-get update && apt-get -u dist-upgrade) So I hit Ctrl-C, but — surprise — IT DOESN’T WORK. I press it a few more times, and it seems to just make the downloader cycle through mirrors because of a “download error”. So I hit Ctrl-Z and kill %1. I have my prompt, but it’s STILL DOWNLOADING STUFF and spewing all over my console. Ugh.

I finally use ps and kill -9 and eventually get it killed off. Stupid thing.

I don’t understand why anybody would want to use RedHat Enterprise Linux in an enterprise. It seems more suited to a hobbyist system at home. From reading some forums, it seems there are quite a few people out there using Debian for enterprise systems for similar reasons.

So now, maybe I’ll have the chance to actually try Scalix.

(BTW, our intern got Zimbra installed on Debian just fine, so that’s a plus for it.)

Multipath is working

Yesterday, we got multipath working with our HP MSA1500cs SAN. We have a fully redundant setup with redundant controllers, fibre channel switches, and two FC controllers per host.

We had been having a lot of trouble getting things to work right with active/passive controllers. We could get failover to work in some cases, but getting everything to communicate correctly in the event of a failure was difficult, since every machine would have to flip over to the passive controller simultaneously.

With a firmware upgrade, the MSA 1500cs can support active/active controllers. With the dual-active setup, both controllers are active simultaneously and both are valid paths.

Despite HP support’s indications to the contrary, HP does have information on using the Linux built-in multipathd instead of their proprietary multipath solution. It’s document c00635587, part AA-RW8RA-TE.

We’ve configured /etc/multipath.conf like this:

      path_grouping_policy   multibus
      path_checker           tur
      failback               immediate
      no_path_retry          60
      path_selector          "round-robin 0"

Just put that in your defaults block and it should work.
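For context, a minimal /etc/multipath.conf built around those settings would look something like this (a sketch; verify the surrounding syntax against the multipath-tools documentation for your version):

defaults {
      path_grouping_policy   multibus
      path_checker           tur
      failback               immediate
      no_path_retry          60
      path_selector          "round-robin 0"
}

After editing the file, restart multipathd and check the paths with multipath -ll.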

Hello, ext3. Goodbye, reiser4.

So I’ve been trying out various filesystems over the past few months, by converting a few machines to them and using them on a daily basis.

I’ve found that reiser3, JFS, and XFS are all risky and actually corrupt data on crashes. JFS also has a few weird bugs that make the kernel oops, and sometimes cause filesystem corruption. All of the above also have starvation issues, where one IO-intensive process can dramatically slow down everything on the system (by a factor of 100 or more).

Reiser4 has proven better — only one small issue that I can recall. But it’s got a huge problem: no ability to resize a Reiser4 partition. That is rather ridiculous these days, and really reduces the utility of LVM. (Hans says he’ll make it resizable when someone pays.)

So I’ve tried out ext3 again, for the first time in a few years. I’m using data=ordered,commit=300 (or 600 on some machines), which still makes it safer than the other journaled filesystems.
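As an example, the relevant /etc/fstab entry on one of these machines would look something like this (the device name is purely illustrative):

/dev/vg0/root  /  ext3  defaults,data=ordered,commit=300  0  1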

And I must say that it is impressive. The old bottlenecks that I was used to were gone. The thing is reliable and fast, and scales well. I’m going to move everything back to ext3.

So why do Hans’s benchmarks show reiser4 being better? For one thing, most benchmarks measure throughput, not response time, so things like starvation don’t show up as black marks in them. Most of them don’t even use multiple processes to simulate real-world activity anyway. Plus, ext3’s default mount options (commit=5, for instance) are much more conservative than other filesystems’. To get a fair test, one should increase that commit= number on ext3.

Here’s another discussion about ext3.

Linux, Bluetooth and Mobile Phones

I got my first Bluetooth-enabled mobile phone this week, a Motorola v551. I’ve been playing with the Linux utilities for working with mobile phones and have assembled some links. Most of the pages out there seem focused on SMS features of a mobile, or using a mobile phone for Internet access for a Linux box. I’m interested in neither, and care more about phone book syncing and transferring files back and forth between the phone itself and a PC.

There seems to be quite a community built around hacking Motorola phones as well. The Hofo Guide is the authoritative resource.

HowardForums.Com is also a great site.