Category Archives: Linux

Desktop Linux: Gnome

I had been intending to write an entire series of posts about our corporate switch to Linux on the Desktop. To date, I wrote only one introducing the project and our reasons for switching from Windows. That was back in April.

Today I’d like to start talking about it all some more.

We have standardized on Gnome for our desktops. Given the Windows background of our user base, it was pretty much a given that we would have to use either Gnome or KDE. Something like fvwm or a non-integrated environment just wouldn’t be a good option.

We evaluated both Gnome and KDE. The very “clean” appearance of Gnome was a nice thing for us. KDE seemed to be too “chatty”, talked about entering audiocd:/ when it shouldn’t have needed to, and generally violated KISS and the principle of least surprise too often. That said, I continue to run KDE for my personal desktop because Gnome just doesn’t have the flexibility that KDE does. It is too bad that Gnome has gone on this remove-functionality kick, and KDE hasn’t gotten the KISS religion yet.

Anyway, Gnome worked well for the most part. We have set some defaults in gconf for things like panel icons. We also set a few mandatory defaults. I fixed a couple of bugs in the vfs system related to nfs4 support, which manifested themselves as icons for files newly saved to the desktop never showing up.
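For the mandatory settings, gconftool-2 can write straight into the system-wide mandatory source. Here’s a minimal sketch; the key and value are illustrative, not our actual settings:

gconftool-2 --direct \
    --config-source xml:readwrite:/etc/gconf/gconf.xml.mandatory \
    --type bool --set /apps/nautilus/preferences/enable_delete false

Non-mandatory defaults that users may still override go into /etc/gconf/gconf.xml.defaults instead.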

We wanted to present a customized menu to people based on what their job function is. That is, we are using a single system image, so all apps will be installed on all machines. But we didn’t want people to have to see a ton of software that they don’t use. That was easily enough accomplished for custom apps by creating desktop files with mode 0640 and setting the group to the set of people that should see the program on their menu. We removed a few stock programs (such as the terminal) from the menu as well, using dpkg-statoverride. That was also quite easily done. However, I will say that the entire Gnome XDG menu thing is woefully under-documented.
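As an illustration of the approach (the group and file names here are made up, not our real ones):

# a custom app's menu entry, visible only to members of the accounting group
chgrp accounting /usr/share/applications/custom-report.desktop
chmod 0640 /usr/share/applications/custom-report.desktop

# hide a stock menu entry in a way that survives package upgrades
dpkg-statoverride --update --add root root 0600 /usr/share/applications/gnome-terminal.desktop

The same dpkg-statoverride trick, with mode 0000, works on binaries too, which comes up with the printer applet below.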

We use Firefox for the standard web browser. It is integrated well enough with Gnome and we have no problems there, aside from sites that are IE-only. We solve that with a Windows terminal server, which I’ll discuss later.

Our network printing was already based on Cups. The individual machines are set up as Cups clients only, which works fine. We did find, however, that gnome-cups-manager automatically installs a tray monitor for cups. This monitor puts little printer icons on the tray when printers are in use. Unfortunately, it figures out which printers are in use by polling the server, and it is turned on by default out of the box, with no good way to disable it short of dpkg-statoverriding it to 0000. You can imagine that hundreds of users times dozens of printers times numerous polls per minute created quite the load on the server. This was a really braindead design and the people that wrote it should have known better. It is also quite useless to have icons coming on for all the printers on the network, which on some networks could be thousands, and not even on the same continent as the user.

Printing is generally a bit iffy in Gnome. They seem to be transitioning between about 3 different printing toolkits, all of which have different print dialog boxes with different supported features and different ways of selecting printers. One chief annoyance is that the print box in evince (the document/PDF viewer) does not let people access printer-specific features such as hole punching and stapling. So we installed gtklp and xpdf for people. The people that print heavy PDFs are huge fans of gtklp these days; it’s a nicer solution than we had in Windows. Nobody really likes evince. We also have had some trouble with evince generating PostScript output that some printers can’t grok. It sounds like all this should be much better in newer versions of Gnome, which if true, would be welcome news.

The Gnome screenshot tool makes it easy to save off a screenshot to a file, or to drag it into an email, but it is difficult to print it (you have to save it first). That was a common complaint around here, so I wrote a little wrapper around xwd and gtklp for printing screenshots. People really like that because gtklp gives them lots of options about orientation and size of the image if they want it, or a simple “Print” button to click if they don’t care. We set a gconf default to bind this to Ctrl-PrintScr and it works well. KDE’s screenshot tool is much more capable, and if we were using KDE, we wouldn’t have had any problem with screenshots.
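The wrapper itself is tiny. I won’t reproduce ours exactly, but a minimal sketch looks something like this (it assumes ImageMagick’s convert is available to turn the xwd dump into something gtklp can print):

#!/bin/sh
# Grab the whole screen with xwd, convert it to PNG, and hand it to gtklp,
# which gives the user the orientation/size options or a one-click Print.
TMP=/tmp/screenshot-$$.png
xwd -root | convert xwd:- "$TMP"
gtklp "$TMP"
rm -f "$TMP"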

The bottom line on Gnome is that we, and our users, are happy with it after we’ve made these customizations. But we have had to do more customization than we should have. I still think that Gnome has been better for our users than KDE, but I do wonder how long we’ll be able to survive with our “no KDE libraries” policy, as people want ksnapshot, kolourpaint, etc.

Linux Hardware Support Better Than Windows

Something I often hear from people that talk about Linux on the desktop is this: people want to be able to go to the store, buy hardware, and be confident that it will Just Work.

I would like to point out that things are rarely this simple on Windows. And, in fact, things are often simpler on Linux these days.

Here’s the example that prompted this post.

I have a computer that’s about 4 years old. It’s my main desktop machine at home. It was still fast enough for me, but has been developing all sorts of weird behaviors. Certain USB ports stopped working altogether a few months ago. Then it started hanging during POST whenever I’d try to reboot — but would still boot OK about 80% of the time after a power cycle. Then it started randomly losing contact with my USB mouse until a reboot. And the last straw was when the display started randomly going out. I’ve told everyone that my machine has cancer and is slowly dying.

The case is a pretty nice full tower — solid and sturdy. I have a 160GB IDE drive in it. So I figured I’d upgrade the motherboard, CPU, and RAM, and add a 500GB SATA drive, since they’re so cheap these days and I’m running out of space. I’d also have to buy a new video card, since my old one was AGP and the new motherboard only has PCI Express for video. So about $700 later from Newegg (I got a Core 2 Duo E6750), the parts arrived.

I spent some time installing it all. The motherboard had only one IDE channel, and I didn’t have any IDE cable long enough to connect both the IDE hard disk and the optical drive, so I popped in an old Maxtor/Promise PCI Ultra133 controller I had sitting around to use with the DVD burner.

Now, to recap, the hardware that the OS would see as new/different is: CPU, RAM, IDE controller, SATA controller, Promise IDE controller, integrated NIC, sound, video.

Then the magic smoke test.

I turned on the machine. Grub appeared. Linux started booting.

Even though I had switched from the default Debian “supports everything” kernel to a K7 (Athlon-optimized) kernel, it still booted on the new Intel CPU.

And every single piece of hardware was supported immediately. There was no “add new hardware” wizard that popped up, no “I’ve found new hardware” boxes. It just worked, silently, with no need to tell me anything or have me click on anything.

Only one piece required configuration: the NIC, thanks to some udev design flaws (it got renamed from eth0 to eth1 by udev). That took 20 seconds. Debian saw the IDE HDD, the SATA drive, the Promise controller, the DVD burner, the video card, the sound, and it all worked automatically. And Debian is not even a distro that occurs to a lot of people when they think of great hardware support.
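For reference, the fix was just a matter of editing udev’s persistent network naming rules. The exact filename and rule syntax vary between udev versions, and the MAC address below is made up, but the idea looks like this:

# /etc/udev/rules.d/z25_persistent-net.rules (filename varies by udev version)
# Remove the stale line that had claimed eth0 for the old NIC, then point
# the new card's MAC address at the name you want:
SUBSYSTEM=="net", ATTR{address}=="00:16:17:aa:bb:cc", NAME="eth0"

After that, the card comes up under its old name.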

Now let’s turn to Windows.

The Windows Nightmare

I have a legal copy of Windows XP Home that was preinstalled on the machine when I got it. I resized its partition down to about 20GB so that I could use 140GB for Linux. I use it rarely, primarily for gaming, and I’ve bought about 3 games in the last 4 years. I usually disconnect the network when I boot to Windows, though I do keep it current with updates.

I did some research on what Windows was going to do when I replaced the hardware. The general consensus from people on the ‘net is that you can’t just replace a motherboard and expect everything to be happy. There were generally three different approaches suggested: 1) don’t even try, just reinstall; 2) do a rescue install after you move over; and 3) use sysprep. The rescue install has to be done by booting from an XP install CD, then picking a rescue install option somewhere. It will overwrite your installed Windows with the version from the CD. That means that I’d have to re-apply SP2, though bits of it that didn’t get overwritten would still be on the hard disk, and who knows what would happen to the registry.

Option #3 was to download sysprep (must have the Genuine Disadvantage ActiveX to get the free download from MS). Sysprep is designed to be used just prior to taking an image with ghost for replication. It removes the hardware-specific config (but not the drivers), as well as the product key, from the machine, but otherwise leaves it untouched. On the next boot, you get the “Welcome to XP” wizard.

One other strike against it is that Compaq “helpfully” didn’t ship any install CDs with the machine. Under Windows, they did have a “create rescue CD” tool, which burned 7 CDs for me. But they are full Compaq-specific CDs, not one of them an XP CD, *AND* they check on boot to see if you’re using the same Compaq motherboard, and exit if not. Highly useless.

So I went with sysprep. Before my new hardware even arrived, I downloaded the Windows drivers for all of it. I burned them to a CD, and installed as many as I could on the system in advance. About half of them refused to install since the new hardware wasn’t there yet. I then took a raw image of the partition with dd, just in case. Finally, right before I swapped the hardware, I ran sysprep and let it shut down the machine.
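The dd step was nothing exotic; something along these lines, though the device and destination here are illustrative rather than my actual ones:

dd if=/dev/hda1 of=/mnt/backup/winxp-before-swap.img bs=4M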

So after the new hardware was installed came the adventure.

Windows booted to the “welcome to XP” thingy. The video, keyboard, mouse, and IDE HDD worked. That’s about it.

I went through the “welcome to XP” wizard. But the network didn’t work yet, so I couldn’t activate it. So I popped my handy driver CD in the drive. But what’s this? Windows doesn’t recognize the DVD drive, because it doesn’t have drivers for this Promise controller that came out in, what, 2001? Sigh. I downloaded the drivers with the iMac, copied them to a CF card, and plugged the USB CF reader into Windows.

While I was doing that, about 6 “found new hardware” dialogs got queued up. Not one of them could actually find a driver for my hardware, but that didn’t prevent Windows from making me click through them all.

So, install Promise driver from CF card, reboot. Click through new hardware dialogs again. Install network driver, reboot, click through dialogs. Install sound driver. Install Intel “chipset” driver, click through dialogs. Reboot. Install SATA driver. Reboot.

So the hardware appears to all be working by this point, though I have a Creative volume control (from the old hardware) and a Realtek one in the tray. Minor annoyance to deal with later.

Now I have to re-activate XP. I dutifully key in the magic string from the sticker on my case. Surprise surprise, the Internet-based activation fails because my hardware is different. So I have to call the 800 number. I have to read in 7 blocks of 6 digits, one block at a time. Then I answer some questions: have I activated Windows before, have I changed hardware, was the old hardware defective (yes, yes, and yes). Then I get 7 blocks of 6 digits read to me. Finally Windows is activated. PHEW! Why they couldn’t ask those questions with the online tool is beyond me.

Anyhow. Linux took me 20 seconds to get working. Windows, about 2 hours, plus another 2 hours for prep and research.

I did zero prep for Linux. I made one config change (GUI users could have just configured their machine to use eth1).

Other cool Linux HW features

Say you buy a new printer and want to get it set up. On Windows, you insert the CD, let it install 200MB of print drivers plus ads plus crap plus add something to your taskbar plus who knows what else. Probably reboot. Then the printer might actually print.

On Debian, you plug in the printer to the USB port. You type printconf. 5 seconds later, your printer works.

I have been unpleasantly surprised lately by just how difficult hardware support in Windows really is, especially since everyone keeps saying how good it is. It’s not good. Debian’s is better, in my opinion.

OSCon Thursday Part 2: Linux on laptops

Matthew Garrett

A lot of background on the state of laptop support in Linux. It worked reasonably well in the 90s, but with the migration to ACPI it has become much more complicated and less reliable in general, especially with suspend/resume and video.

EmperorLinux and System76 produce Linux-supported laptops.

I wanted some more in-depth technical information and found Matthew later on at the Intel booth. Here’s what I learned:

s2ram is little more than a wrapper around standard ACPI sleep that has options to do video mode save/restore

I asked him about all the many, many different userland laptop management tools. He recommends simple acpi-utils with the ondemand governor. laptop-mode-tools tries to do way too much, and there is little point to using a userland governor anymore.

I’ve been having a problem with my MacBook Pro (Core Duo) hanging on suspend about 10% of the time. I explained the symptoms and asked him how to go about debugging it. He suggested disabling console suspend and enabling PM debug in the kernel. I will give that a try and see what I get.

Sierra Wireless 595U / Sprint on Linux

Here’s how you use a Sierra Wireless 595U USB modem to connect to wireless Internet service with Sprint:

Insert the modem into the USB slot. lsusb should show:

Bus 001 Device 005: ID 1199:0120 Sierra Wireless, Inc.

Unload usbserial if it is already loaded:

rmmod usbserial

Then reload it with the vendor and product IDs for this modem:

modprobe usbserial vendor=0x1199 product=0x0120

You should see /dev/ttyUSB0, ttyUSB1, and ttyUSB2 appear. See also instructions for automating this with a similar card (modify vendor and product to above settings).
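If you want this to happen without manual modprobe commands, one approach is to give usbserial those IDs as default options in a modprobe configuration file; the filename is up to you, and this is just a sketch:

# /etc/modprobe.d/sierra-595u
options usbserial vendor=0x1199 product=0x0120

With that in place, usbserial picks up the right IDs whenever it loads.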

Now you will need to configure PPP for this. On Debian, run pppconfig. Your settings will be:

Phone number: #777
Username: 1234567890@sprintpcs.com (replace 1234567890 with your data card’s “phone number”, no dashes)
Password: your sprint password
Speed (BPS): 921600 (use lower numbers such as 115200 if you have trouble with this)
Port: /dev/ttyUSB0
Init string: ATZ
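pppconfig writes your answers out as a peers file and a chat script. Roughly, the peers file ends up looking like this; the peer name and details are illustrative, and your generated file will differ somewhat:

# /etc/ppp/peers/sprint (roughly what pppconfig generates)
hide-password
noauth
connect "/usr/sbin/chat -v -f /etc/chatscripts/sprint"
/dev/ttyUSB0
921600
defaultroute
noipdefault
usepeerdns
user "1234567890@sprintpcs.com"

Once it’s configured, bring the link up with pon sprint, take it down with poff sprint, and watch progress with plog.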

Here are some other helpful pages:

Verizon EVDO
Sprint and Linux
Cingular AT&T UMTS
Sierra’s Linux page

Conference Suggestions

At work, we use Linux (and Debian specifically) for a lot of different things: everything from our phone system (running on Asterisk) to file serving and running some proprietary applications. I’m one of the people that finds, sets up, and maintains these systems, and I write code for our in-house use as well. I like to learn from others, and to get to know others that may have things in common with me and with our environment. So going to conferences is a useful thing to do.

I’m hoping that some reader out there will have a good suggestion for a conference I ought to attend. Here are some that I’ve looked into and my thoughts on them:

  • Usenix Annual Tech Conference: I went last year. There were some very good talks, but the audience size was not all that large. I got to meet some peers there, but I didn’t get to talk in-person to anybody I’d worked with online before. (LISA being in fall/winter means it’s too early to consider it just yet)
  • LinuxWorld Conference & Expo: My general impression of LWCE in the past has been that its technical talks aren’t very technical; that is, they either cover things I already know or don’t care about. They are starting to publish their program this year, though, and I see a few interesting things. There have traditionally been a lot of Debian folks at the .org pavilion.
  • Debconf: It seems to be focused almost exclusively on developing the Debian operating system, rather than on using it. While I am a Debian developer and have been for quite a while, I would find new uses of Debian to be more interesting than new ways to hack on Debian. Plus, the insanely early registration requirements mean that it’s too late to go this year anyhow. (And my brother is getting married right in the middle of it.)
  • OSCon: There look to be some interesting talks in the database area, and some about Xen and virtualization, and Simon PJ (one of the ghc hackers) will be there. So this would be interesting, though somewhat light on the admin side of things.
  • OLS: Seems very focused on the kernel, and not much else. That is of interest, of course, but is one piece of many. Though there was a talk about Linux deployment at Nortel that sounded interesting.

My leading candidates are probably Usenix and OSCon. I’m interested to hear what people think, especially those that have attended some of these conferences.

And we’re off!

Yesterday afternoon, we started our information meetings with employees about our Linux on the desktop project. We’re underway on our migration.

But before I talk about that, I need to back up and describe what the project is.

We are converting approximately 80% of our 150 or so PC users to Linux desktops. The new desktops run Debian etch (4.0) with Gnome, Firefox (Iceweasel), Evolution, NFSv4, and SystemImager. Over the coming days and weeks, I’ll be writing about why we’re doing this, how we’re making it happen, things we’ve run into along the way, and the technology behind it.

Today I’d like to start with a high-level overview of the reasons we started investigating this option.

It became apparent that Vista was going to be a problem for us. Most of our desktop PCs are not very old, but Vista would have meant a significant degradation in performance compared to the Windows XP Pro that most people were running. The dip was large enough, in fact, that it would have had a real negative impact on employee productivity.

We tend to buy PCs with Windows licenses from the vendor (Windows preinstalled). As such, we knew it wouldn’t be long before XP-based machines would be hard to find. If we stuck with Windows, we’d be running a mixed-OS network — which we knew from experience we did NOT want to do. The other option would be to replace all those old PCs. The direct costs of doing that, with the associated Vista and Office licenses, would have been more than $200,000.

So we started to look at other options — changing the way we license Windows, sticking with XP for awhile, or switching away from Windows. This last option sounded the most promising.

I took a spare desktop-class machine, representative of the hardware most end users would have, and installed etch (then testing) on it. I spent a bit of time tweaking the desktop settings, making things as transparent to the user as possible. We liked what we saw and started pursuing it a bit more. We knew we had some Windows apps we couldn’t discard, so we tested running them off a Windows terminal server with the Linux rdesktop client. That worked well — and the appropriate Server 2003 licenses plus CALs would still be far cheaper than a mass migration to Vista.
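For the curious, the rdesktop side of that is a one-liner; the server name, domain, and options here are just an example:

rdesktop -u someuser -d OURDOMAIN -g 1024x768 -a 16 termserv1

Wrapped in a small script or .desktop entry, it looks to the user like just another menu item.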

To make a long story short, we are getting quite a few benefits out of all this. One of the most important is a single unified system image. Excepting a few files like /etc/fstab, every system gets a bit-for-bit identical installation from the server, updated using rsync. /home is mounted from the network using NFS (v4). So our users can sit down at any PC, log in, and have all their programs, settings, email, etc. available. A side benefit is that hardware problems become minor annoyances rather than major inconveniences; if your hard disk dies, we can just bring up a different PC. We had tried numerous times to make roaming profiles work in Windows, but never really achieved a reliable setup there — perhaps because it seemed virtually impossible to assure that each Windows PC had the exact same set of software, in the exact same versions, installed.
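SystemImager handles the details, but under the hood an update is basically one big rsync from the golden image. Here is a minimal sketch of the idea, with the server, module name, and exclude list being illustrative rather than our exact configuration:

rsync -aH --numeric-ids --delete \
    --exclude=/etc/fstab --exclude=/proc --exclude=/sys --exclude=/home \
    imageserver::golden_etch_image/ /

In practice, SystemImager’s client tools wrap this sort of invocation and manage the exclude list for you.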

More to come.

Saving Power with CPU Frequency Scaling

Yesterday I wrote about the climate crisis. Today, let’s start doing something about it.

Electricity, especially in the United States and China, turns out to be a pretty dirty energy source. Most of our electricity is generated using coal, which despite promises of “clean coal” to come, burns dirty. Not only does it contribute to global warming, but it also has been shown to have an adverse impact on health.

So let’s start simple: reduce the amount of electricity our computers consume. Even for an individual person, this can add up to quite a bit of energy (and money) savings in a year. When you think about multiplying this over companies, server rooms, etc., it adds up fast. This works on desktops, servers, laptops, whatever.

The easiest way to save power is with CPU frequency scaling. This is a technology that lets the system adjust how fast the CPU runs, on the fly. When CPUs run at slower speeds, they consume less power. Most CPUs are set to their maximum speed all the time, even when the system isn’t using them. Linux can instead drop the CPU to a lower speed when it’s idle and bring it back up the moment there’s work to do. By turning on this feature, we can save power at virtually no cost to performance. The Linux feature that handles CPU frequency scaling is called cpufreq.

Set up modules

Let’s start by checking to see whether cpufreq support is already enabled in your kernel. These commands will need to be run as root.

# cd /sys/devices/system/cpu/cpu0
# ls -l

If you see an entry called cpufreq, you are good and can skip to the governor selection below.

If not, you’ll need to load cpufreq support into your kernel. Let’s get a list of available drivers:

# ls /lib/modules/`uname -r`/kernel/arch/*/kernel/cpu/cpufreq

Now it’s guess time. It doesn’t really hurt if you guess wrong; you’ll just get a harmless error message. One hint, though: try acpi-cpufreq last; it’s the option of last resort.

On my system, I see:

acpi-cpufreq.ko     longrun.ko      powernow-k8.ko         speedstep-smi.ko
cpufreq-nforce2.ko  p4-clockmod.ko  speedstep-centrino.ko
gx-suspmod.ko       powernow-k6.ko  speedstep-ich.ko
longhaul.ko         powernow-k7.ko  speedstep-lib.ko

For each guess, you’ll run modprobe with the driver name. I have an Athlon64, which is a K8 machine, so I run:

# modprobe powernow-k8

Note that you leave off the “.ko” bit. If you don’t get any error message, it worked.

Once you find a working module, edit /etc/modules and add the module name there (again without the “.ko”) so it will be loaded for you on boot.

Governor Selection

Next, we need to load the governor module. The governor is the piece that monitors the system and adjusts the CPU speed accordingly.

I’m going to suggest the ondemand governor. This governor ramps the CPU up to full speed when there is work to do and drops it back down when the system is idle. So this will be the one that will let you save power with the least performance impact.

Let’s load the module now:

# modprobe cpufreq_ondemand

You should also edit /etc/modules and add a line that says simply cpufreq_ondemand to the end of the file so that the ondemand governor loads at next boot.

Turning It On

Now, back under /sys/devices/system/cpu/cpu0, you should see a cpufreq directory. cd into it.

To turn on the ondemand governor, run this:

# echo ondemand > scaling_governor

That’s it, your governor is enabled. You can see what it’s doing like this:

# cat cpuinfo_min_freq
800000
# cat cpuinfo_max_freq
2200000
# cat cpuinfo_cur_freq
800000

That shows that my CPU can go as low as 800MHz, as high as 2.2GHz, and that at the moment it’s running at 800MHz.

Now, check your scaling governor settings:

# cat scaling_min_freq
800000
# cat scaling_max_freq
800000

This is showing that the system is constraining the governor to only ever operate on an 800MHz to 800MHz range. That’s not what I want; I want it to scale over the entire range of the CPU. Since my cpuinfo_max_freq was 2200000, I want to write that out to scaling_max_freq as well:

# echo 2200000 > scaling_max_freq

Making This The Default

The last step is to make this happen on each boot. Open up your /etc/sysfs.conf file. If you don’t have one, you will want to run a command such as apt-get install sysfsutils (or the appropriate one for your distribution).

Add lines like these:

devices/system/cpu/cpu0/cpufreq/scaling_governor = ondemand
devices/system/cpu/cpu0/cpufreq/scaling_max_freq = 2200000

Remember to replace the 2200000 with your own cpuinfo_max_freq value.

IMPORTANT NOTE: If you have a dual-core CPU, or more than one CPU, you’ll need to add a line for each CPU. For instance:

devices/system/cpu/cpu1/cpufreq/scaling_governor = ondemand
devices/system/cpu/cpu1/cpufreq/scaling_max_freq = 2200000

You can see what all CPU devices you have with ls /sys/devices/system/cpu.
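If you’d rather apply the settings to every CPU right away from a shell, rather than waiting for the next boot, a loop like this works:

for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    [ -d "$cpu/cpufreq" ] || continue   # skip CPUs without cpufreq support
    echo ondemand > "$cpu/cpufreq/scaling_governor"
    cat "$cpu/cpufreq/cpuinfo_max_freq" > "$cpu/cpufreq/scaling_max_freq"
done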

Now, save this file, and you’ll have CPU frequency scaling saving you money, and helping the environment, every time you boot. And with the ondemand governor, chances are you’ll never notice any performance loss.

This article showed you how to save power using CPU frequency scaling on Linux. I have no idea if it’s possible to do the same on Windows, Mac, or the various BSDs, but it would be great if someone would leave comments with links to resources for doing that if so.

Updated: added scaling_max_freq info

Desktop Linux: NFS or something else?

Recently, I asked for opinions on desktop Linux. Thanks very much to those that replied. I’ve set up an old laptop as an experiment. I’m using Debian, Gnome, and Systemimager. It’s been an interesting project (especially getting SystemImager and a splash screen program to do what I want).

I’d like for my desktop machines to mount /home over the network. I could use NFS, but of course that has all the well-known security risks. Is there a better network filesystem that is easy to use, fast, and more secure than NFS?

Desktop Linux Opinions?

I’m brainstorming about ways of setting up Linux desktop machines for users accustomed to Windows on a LAN. It could be any size of LAN.

I’d like people to be able to sit down at any Linux machine on the LAN and log in — probably using an LDAP directory for that, and NFS-mounted home directories. I wouldn’t want to NFS-mount the entire thing for performance reasons.
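The rough shape of what I have in mind, just as a sketch with made-up server names:

# /etc/nsswitch.conf -- accounts come from LDAP as well as local files
passwd:         files ldap
group:          files ldap
shadow:         files ldap

# /etc/fstab -- home directories mounted from the file server
fileserver:/export/home  /home  nfs  rw,hard,intr  0  0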

So, some of the things I’m thinking about are:

  • Desktop environment: KDE or Gnome? Which would give Windows users all the tools they’d want? Which would they feel most at home with? I’m thinking it’s KDE, but Gnome has a more polished “feel” to it.
  • Image management. How could the desktops be updated? Just rsync everything except fstab over? Can we actually have a single system image? Is XOrg powerful enough to just recognize hardware at boot and Do The Right Thing? Can we build a unified initrd somehow?
  • Distribution. Debian, Ubuntu, Kubuntu? Do the Ubuntus bring anything to the table, if we take as a given that an experienced Debian admin is managing all this?
  • Laptops. What do we do about the home directories there? Some sort of automated rsync thingy?
  • Installation. FAI? Or some homegrown thing that just boots up, partitions, and runs rsync?

RedHat Gripes

Lately we are looking at groupware options, and have been looking at Scalix and Zimbra. We may need the features in the proprietary versions of these products, unfortunately.

So I downloaded an evaluation copy of Scalix.

They say they support RedHat and SuSE. Fine, I think, I’ll just alien the RPMs to debs and be happy.

Not so fast. They have a whole proprietary install system. They check for /etc/redhat-release or /etc/SuSE-release (or something like that) and do different things depending on what is there. Ugh. Why can’t these proprietary vendors just target LSB? The differences seem mostly related to init anyway.

So I touch /etc/SuSE-release into existence and run the installer again. It complains that DISPLAY is not set. UGH. I log in as root (sigh) with ssh X forwarding, and run it again.

Now it complains that the SuSE-release file doesn’t contain a valid release. I google a bit, but the file format doesn’t seem to be documented anywhere. I extract it from an RPM somewhere, but no luck.

So, I figure at this point, let’s try an actual RPM distro. I’m running this in a Xen domain anyway, so it should be no big deal, right?

I think CentOS will be a good choice. It’s RHEL with the non-free stuff stripped out. Scalix supports RHEL and doesn’t need any of the non-free stuff. I google, and find instructions for installing via rpmstrap for Xen use.

Let me say, rpmstrap is not nearly the nice tool that cdebootstrap is. rpmstrap totally hosed the networking on the Xen host machine, requiring me to reboot to get it back to a proper state. The resulting install wouldn’t boot, either — I later found out that, even though I listed explicit devices in /etc/fstab as usual, it requires labels on all my partitions to boot. Ugh. There are a host of other problems with the rpmstrap-installed chroot, and it’s broken beyond my ability to repair due to problems with the rpm database.

So then I downloaded the “Server” CD for CentOS, which is supposed to have just the stuff a person would need for a server, and leave off all the graphical tools, multimedia, etc. I fired up VMware and did an install. Then I booted Debian From Scratch in VMware and used tar and netcat to copy the installed image over to Xen.

I got it booting fairly easily. But now I start to remember why I had this instinctive gag reflex last time I used RHEL.

First off, the network configuration, by default, is tied to the MAC address of your ethernet card. So if you replace your Ethernet card, your network is broken by default.

Then, there’s the way the network is brought up. It uses arping as part of its procedure to bring up a NIC. If it sees a reply anywhere on the network with the IP you’re trying to assign, it leaves the NIC half-up — it’s been ifconfig’d up, but without an IP. So that’s right, if somebody happens to have a rogue device plugged in at the moment your server boots, your server will come up without a network configured. This is *Enterprise* Linux and it’s pulling this sort of thing. Terrible design.

Next, there’s the way the network is *configured*. There are commands such as system-config-network-tui, -gui, -cmd, -druid, etc. I go for -tui to start with. It’s a dialog-like interface, and asks the basics like IP address, etc. It doesn’t have any way that I can tell to configure more than one Ethernet card. And some of the settings — like nameserver — apparently require you to press F12 to visit. But the program doesn’t recognize F12 as sent by an xterm, so it doesn’t work.

All the other options require X. So, I reluctantly ssh -X into it as root and run system-config-network-gui. It doesn’t work — complains it can’t find DISPLAY. Strange, I think; DISPLAY is set properly to localhost:whatever. It turns out that /etc/hosts is empty by default, so the thing can’t resolve localhost! Argh. I add a line to /etc/hosts and it fires up.

This tool works decently. I save, uncheck the box that ties the configuration to a MAC address, and exit. I then think it might be good to fire it up again and see what it did. I try running it again, and get the same error about DISPLAY. The stupid tool blew away /etc/hosts and replaced it with an empty file! This is NOT what I would expect from an Enterprise Linux. You don’t blow away a config file the administrator touched without asking, EVER.

Next, I figure, let’s try installing the XFS tools so I can switch the root filesystem to xfs. I start with “yum update”, which doesn’t quite do what I expect. (It is more like apt-get update && apt-get -u dist-upgrade) So I hit Ctrl-C, but — surprise — IT DOESN’T WORK. I press it a few more times, and it seems to just make the downloader cycle through mirrors because of a “download error”. So I hit Ctrl-Z and kill %1. I have my prompt, but it’s STILL DOWNLOADING STUFF and spewing all over my console. Ugh.

I finally use ps and kill -9 and eventually get it killed off. Stupid thing.

I don’t understand why anybody would want to use RedHat Enterprise Linux in an enterprise. It seems more suited to a hobbyist system at home. From reading some forums, it seems there are quite a few people out there using Debian for enterprise systems for similar reasons.

So now, maybe I’ll have the chance to actually try Scalix.

(BTW, our intern got Zimbra installed on Debian just fine, so that’s a plus for it.)