
I’m switching from git-annex to Syncthing

I wrote recently about using git-annex for encrypted sync, but due to a number of issues with it, I’ve opted to switch to Syncthing.

I’d been using git-annex with real but noncritical data. Among the first issues I noticed were occasional spikes of high CPU usage which, once started, would apparently persist forever. I also had an issue where git-annex tried to replace files I’d removed from its repo with broken symlinks, but the final straw was a series of problems with the gcrypt remote repos. git-remote-gcrypt appears to have a number of possible race conditions on the remote, and at least one of them somehow caused encrypted data to appear in a packfile on a remote repo. Why there was data in a packfile at all, I don’t know; git-annex is supposed to keep file data out of packfiles.

Anyhow, git-annex is still an awesome tool with a lot of use cases, but I’m concluding that live sync to an encrypted git remote isn’t quite mature enough for me yet.

So I looked for alternatives. My main criteria were supporting live sync (via inotify or similar) and not requiring the files to be stored unencrypted on a remote system (my local systems all use LUKS). I found Syncthing met these requirements.

Syncthing is pretty interesting in that, like git-annex, it doesn’t require a centralized server at all. Rather, it forms basically a mesh between your devices. Its concept is somewhat similar to the proprietary BitTorrent Sync — basically, all the nodes communicate about what files and chunks of files they have, and about the changes that are made, and propagate them as quickly as possible. Unlike, say, Dropbox or ownCloud, Syncthing can actually download from multiple remotes simultaneously for optimum performance when there are many changes.

Combined with syncthing-inotify or syncthing-gtk, it has immediate detection of changes and therefore very quick propagation of them.

Syncthing is particularly adept at figuring out ways for the nodes to communicate with each other. It begins by broadcasting on the local network, so known nearby nodes can be found directly. The Syncthing folks also run a discovery server (though you can use your own if you prefer) that lets nodes find each other on the Internet. Syncthing will attempt to use UPnP to configure firewalls to let it out, but if that fails, the last resort is a traffic relay server — again, a number of volunteers host these online, but you can run your own if you prefer.

Each node in Syncthing has an RSA keypair, and what amounts to part of the public key is used as a globally unique node ID. The initial link between nodes is accomplished by pasting the globally unique ID from one node into the “add node” screen on the other; the user of the first node then must accept the request, and from that point on, syncing can proceed. The data is all transmitted encrypted, of course, so interception will not cause data to be revealed.
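If you need a node’s ID from a script or a headless box, syncthing can also print it directly. A quick sketch (the flag name is from memory, so verify it against syncthing -help):

    # print the local device ID -- the long string you paste into the other node
    syncthing -device-id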

Really my only complaint about Syncthing so far is that, although it binds to localhost, the web GUI does not require authentication by default.
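You can set a GUI username and password either in the web GUI’s settings screen or directly in the config file. As a rough sketch, the relevant section of ~/.config/syncthing/config.xml looks something like this (path and element names from memory, so treat them as assumptions; the password is stored as a bcrypt hash):

    <gui enabled="true" tls="false">
        <address>127.0.0.1:8384</address>
        <user>alice</user>
        <password>$2a$10$(bcrypt hash of the password)</password>
    </gui>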

There is an ITP open for Syncthing in Debian, but until that lands, their apt repo works fine. For syncthing-gtk, the trusty version of the webupd8 PPA works in Jessie (though be sure to pin it to a low priority if you don’t want it replacing some unrelated Debian packages).
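The pinning amounts to a stanza like this in /etc/apt/preferences (the origin string here is a guess; check the real value with apt-cache policy):

    Package: *
    Pin: release o=LP-PPA-webupd8team
    Pin-Priority: 100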

Switched from KDE to xmonad

Within the last couple of days, I’ve started using xmonad, a tiling window manager, instead of KDE. Tiling window managers automatically position most windows on your screen, freeing you from having to move, rearrange, and resize them all the time. It sounds scary at first, but it turns out to be incredibly nice and efficient. There are some nice videos and testimonials at the xmonad homepage.

I’ve switched all the devices I use frequently to xmonad. That includes everything from my 9.1″ Eee (1024×600) to my 24″ workstation at work (1920×1200). I’ve only been using it for 2 days, but already I feel more productive. Also my wrist feels happier because I have almost completely eliminated the need to use a mouse.

xmonad simultaneously feels shiny and modern, and old school. It is perfectly usable as your main interface. Mod-p brings up a dmenu-based quick program launcher, keyboard-oriented of course. No more opening up terminals to launch programs, or worse, having to use the mouse to navigate a menu for them.

There’s a lot of documentation available for xmonad, including an “about xmonad” document, a guided tour, and a step-by-step guide to configuring xmonad that I wrote up.
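To give a flavor of what configuring it involves: xmonad’s config file is just a small Haskell program. A minimal ~/.xmonad/xmonad.hs might look something like this (a sketch, not my actual config):

    import XMonad

    main :: IO ()
    main = xmonad $ defaultConfig
        { modMask  = mod4Mask  -- use the Windows/Super key as the Mod key
        , terminal = "xterm"   -- what Mod-Shift-Enter launches
        }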

I’ve been using KDE for at least 8 years now, and WindowMaker, fvwm2, fvwm, etc. before that. This is my first step with tiling window managers, and I like it. You can, of course, use xmonad with KDE. Or you can go “old school” and set up a status bar and tray yourself, as I’ve done. KDE seems quite superfluous when xmonad is in use. I think I’ve replaced a gig of KDE software with a 2MB window manager. Whee!

Take a look at xmonad. If you like the keyboard or the shell, you’ll be hooked.

Linux on the Desktop

Later this month, I will be giving a talk at OSCon about Linux on the corporate desktop — something we have done where I work. I’ve been allotted a 45-minute timeslot. I will, of course, be posting my slides online, and I think OSCon also posts videos of these things.

I’m wondering if readers of my blog would like to leave me some comments on what you’d like to see. What would you like to know about Linux on the corporate desktop? Is there anything that you’d like to make sure I discuss?

Desktop Linux: Gnome

I had been intending to write an entire series of posts about our corporate switch to Linux on the Desktop. To date, I’ve written only one, introducing the project and our reasons for switching from Windows. That was back in April.

Today I’d like to start talking about it all some more.

We have standardized on Gnome for our desktops. Given the Windows background of our user base, it was pretty much a given that we would have to use either Gnome or KDE. Something like fvwm or a non-integrated environment just wouldn’t be a good option.

We evaluated both Gnome and KDE. The very “clean” appearance of Gnome was a nice thing for us. KDE seemed too “chatty”, asking users to enter things like audiocd:/ when it shouldn’t have needed to, and it generally violated KISS and the principle of least surprise too often. That said, I continue to run KDE for my personal desktop because Gnome just doesn’t have the flexibility that KDE does. It is too bad that Gnome has gone on this remove-functionality kick, and KDE hasn’t gotten the KISS religion yet.

Anyway, Gnome worked well for the most part. We have set some defaults in gconf for things like panel icons. We also set a few mandatory defaults. I fixed a couple of bugs in the vfs system related to nfs4 support, which manifested themselves as icons for files newly saved to the desktop never showing up.
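For reference, site-wide gconf values get written to the defaults (or mandatory) configuration source. A sketch, with a purely illustrative key:

    # set a site-wide default; users can still override it in their own settings
    gconftool-2 --direct \
      --config-source xml:readwrite:/etc/gconf/gconf.xml.defaults \
      --type bool --set /apps/panel/global/enable_animations false

Pointing the same command at /etc/gconf/gconf.xml.mandatory instead makes the value unchangeable by users.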

We wanted to present a customized menu to people based on what their job function is. That is, we are using a single system image, so all apps will be installed on all machines. But we didn’t want people to have to see a ton of software that they don’t use. That was easily enough accomplished for custom apps by creating desktop files with mode 0640 and setting the group to the set of people that should see the program on their menu. We removed a few stock programs (such as the terminal) from the menu as well, using dpkg-statoverride. That was also quite easily done. However, I will say that the entire Gnome XDG menu thing is woefully under-documented.
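Concretely, the trick looks like this (the group and file names are made up for illustration):

    # show this custom app only to members of the "accounting" group
    chgrp accounting /usr/share/applications/custom-ledger.desktop
    chmod 0640 /usr/share/applications/custom-ledger.desktop

    # hide a stock menu entry from everyone, in a way dpkg won't revert on upgrade
    dpkg-statoverride --update --add root root 0600 \
      /usr/share/applications/gnome-terminal.desktop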

We use Firefox for the standard web browser. It is integrated well enough with Gnome and we have no problems there, aside from sites that are IE-only. We solve that with a Windows terminal server, which I’ll discuss later.

Our network printing was already based on Cups. The individual machines are set up as Cups clients only, which works fine. We did find, however, that gnome-cups-manager automatically installs a tray monitor for cups. This monitor puts little printer icons on the tray when printers are in use. Unfortunately, it figures out which printers are in use by polling the server, and it is turned on by default out of the box, with no good way to disable it short of dpkg-statoverriding it to 0000. You can imagine that hundreds of users times dozens of printers times numerous polls per minute created quite the load on the server. This was a really braindead design and the people that wrote it should have known better. It is also quite useless to have icons coming on for all the printers on the network, which on some networks could be thousands, and not even on the same continent as the user.
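For the record, the workaround amounts to a single statoverride (the binary name here is from memory; verify with dpkg -L gnome-cups-manager):

    # make the tray monitor unreadable and unexecutable so it can't start at login
    dpkg-statoverride --update --add root root 0000 /usr/bin/gnome-cups-icon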

Printing is generally a bit iffy in Gnome. They seem to be transitioning between about 3 different printing toolkits, all of which have different print dialog boxes with different supported features and different ways of selecting printers. One chief annoyance is that the print box in evince (the document/PDF viewer) does not let people access printer-specific features such as hole punching and stapling. So we installed gtklp and xpdf for people. The people that print heavy PDFs are huge fans of gtklp these days; it’s a nicer solution than we had in Windows. Nobody really likes evince. We also have had some trouble with evince generating PostScript output that some printers can’t grok. It sounds like all this should be much better in newer versions of Gnome, which if true, would be welcome news.

The Gnome screenshot tool makes it easy to save off a screenshot to a file, or to drag it into an email, but it is difficult to print it (you have to save it first). That was a common complaint around here, so I wrote a little wrapper around xwd and gtklp for printing screenshots. People really like that because gtklp gives them lots of options about orientation and size of the image if they want it, or a simple “Print” button to click if they don’t care. We set a gconf default to bind this to Ctrl-PrintScr and it works well. KDE’s screenshot tool is much more capable, and if we were using KDE, we wouldn’t have had any problem with screenshots.
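The wrapper is only a few lines. A sketch of the idea, assuming ImageMagick’s convert is available to turn the xwd dump into something gtklp can print:

    #!/bin/sh
    # grab the full screen with xwd, convert it to PNG, and hand it to gtklp
    TMP=$(mktemp) || exit 1
    xwd -root | convert xwd:- png:"$TMP"
    gtklp "$TMP"
    rm -f "$TMP"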

The bottom line on Gnome is that we, and our users, are happy with it after we’ve made these customizations. But we have had to do more customization than we should have. I still think that Gnome has been better for our users than KDE, but I do wonder how long we’ll be able to survive with our “no KDE libraries” policy, as people want ksnapshot, kolourpaint, etc.

Linux Hardware Support Better Than Windows

Something I often hear from people that talk about Linux on the desktop is this: people want to be able to go to the store, buy hardware, and be confident that it will Just Work.

I would like to point out that things are rarely this simple on Windows. And, in fact, things are often simpler on Linux these days.

Here’s the example that prompted this post.

I have a computer that’s about 4 years old. It’s my main desktop machine at home. It was still fast enough for me, but it had been developing all sorts of weird behaviors. Certain USB ports stopped working altogether a few months ago. Then it started hanging during POST whenever I’d try to reboot — but it would still boot OK about 80% of the time after a power cycle. Then it started randomly losing contact with my USB mouse until a reboot. And the last straw was when the display started randomly going out. I’ve told everyone that my machine has cancer and is slowly dying.

The case is a pretty nice full tower — solid and sturdy. I have a 160GB IDE drive in it. So I figured I’d upgrade the motherboard, CPU, and RAM, and add a 500GB SATA drive, since they’re so cheap these days and I’m running out of space. I’d also have to buy a new video card, since my old one was AGP and the new motherboard only has PCI Express for video. So about $700 later from Newegg (I got a Core 2 Duo E6750), the parts arrived.

I spent some time installing it all. The motherboard had only one IDE channel, and I didn’t have any IDE cable long enough to connect both the IDE hard disk and the optical drive, so I popped in an old Maxtor/Promise PCI Ultra133 controller I had sitting around to use with the DVD burner.

Now, to recap, the hardware that the OS would see as new/different is: CPU, RAM, IDE controller, SATA controller, Promise IDE controller, integrated NIC, sound, video.

Then the magic smoke test.

I turned on the machine. Grub appeared. Linux started booting.

Even though I had switched from the default Debian “supports everything” kernel to a K7 kernel, it still booted.

And every single piece of hardware was supported immediately. There was no “add new hardware” wizard that popped up, no “I’ve found new hardware” boxes. It just worked, silently, with no need to tell me anything or have me click on anything.

Only one piece required configuration: the NIC, thanks to some udev design flaws (udev renamed it from eth0 to eth1). That took 20 seconds. Debian saw the IDE HDD, the SATA drive, the Promise controller, the DVD burner, the video card, and the sound, and it all worked automatically. And Debian is not even a distro that occurs to a lot of people when they think of great hardware support.
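The fix is trivial: udev records each NIC’s MAC address in a persistent rules file the first time it sees it, so the old motherboard’s entry was still squatting on eth0. On Debian of this era the file looks roughly like this (the file name and match keys vary a bit between udev versions, and the MAC is made up):

    # /etc/udev/rules.d/z25_persistent-net.rules
    # delete the stale line for the old NIC, or change NAME= so the new one is eth0
    SUBSYSTEM=="net", DRIVERS=="?*", ATTR{address}=="00:16:17:aa:bb:cc", NAME="eth0"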

Now let’s turn to Windows.

The Windows Nightmare

I have a legal copy of Windows XP Home that was preinstalled on the machine when I got it. I resized its partition down to about 20GB so that I could use 140GB for Linux. I use it rarely, primarily for gaming, and I’ve bought about 3 games in the last 4 years. I usually disconnect the network when I boot to Windows, though I do keep it current with updates.

I did some research on what Windows was going to do when I replaced the hardware. The general consensus from people on the ’net is that you can’t just replace a motherboard and expect everything to be happy. There were generally three different approaches suggested: 1) don’t even try, just reinstall; 2) do a rescue install after you move over; and 3) use sysprep. The rescue install has to be done by booting from an XP install CD, then picking a rescue install option somewhere. It will overwrite your installed Windows with the version from the CD. That means that I’d have to re-apply SP2, though bits of it that didn’t get overwritten would still be on the hard disk, and who knows what would happen to the registry.

Option #3 was to download sysprep (you must have the Genuine Disadvantage ActiveX control to get the free download from MS). Sysprep is designed to be used just prior to taking an image with Ghost for replication. It removes the hardware-specific config (but not the drivers), as well as the product key, from the machine, but otherwise leaves it untouched. On the next boot, you get the “Welcome to XP” wizard.

One other strike against #2 is that Compaq “helpfully” didn’t ship any install CDs with the machine. Under Windows, they did have a “create rescue CD” tool, which burned 7 CDs for me. But they are full Compaq-specific CDs, not one of them an XP CD, *AND* they check on boot to see if you’re using the same Compaq motherboard, and exit if not. Highly useless.

So I went with sysprep. Before my new hardware even arrived, I downloaded the Windows drivers for all of it. I burned them to a CD, and installed as many as I could on the system in advance. About half of them refused to install since the new hardware wasn’t there yet. I then took a raw image of the partition with dd, just in case. Finally, right before I swapped the hardware, I ran sysprep and let it shut down the machine.
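The raw image step is nothing fancy (device and output names here are illustrative; I did it booted into Linux):

    # image the Windows partition to a file on another disk
    dd if=/dev/hda1 of=/mnt/spare/winxp-partition.img bs=1M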

So after the new hardware was installed came the adventure.

Windows booted to the “welcome to XP” thingy. The video, keyboard, mouse, and IDE HDD worked. That’s about it.

I went through the “welcome to XP” wizard. But the network didn’t work yet, so I couldn’t activate it. So I popped my handy driver CD in the drive. But what’s this? Windows doesn’t recognize the DVD drive, because it doesn’t have drivers for this Promise controller that came out in, what, 2001? Sigh. I downloaded the drivers with the iMac, copied them to a CF card, and plugged the USB CF reader into Windows.

While I was doing that, about 6 “found new hardware” dialogs got queued up. Not one of them could actually find a driver for my hardware, but that didn’t prevent Windows from making me click through them all.

So, install Promise driver from CF card, reboot. Click through new hardware dialogs again. Install network driver, reboot, click through dialogs. Install sound driver. Install Intel “chipset” driver, click through dialogs. Reboot. Install SATA driver. Reboot.

So the hardware appears to all be working by this point, though I have a Creative volume control (from the old hardware) and a Realtek one in the tray. Minor annoyance to deal with later.

Now I have to re-activate XP. I dutifully key in the magic string from the sticker on my case. Surprise surprise, the Internet-based activation fails because my hardware is different. So I have to call the 800 number. I have to read in 7 blocks of 6 digits, one block at a time. Then I answer some questions: have I activated Windows before, have I changed hardware, was the old hardware defective (yes, yes, and yes). Then I get 7 blocks of 6 digits read to me. Finally Windows is activated. PHEW! Why they couldn’t ask those questions with the online tool is beyond me.

Anyhow. Linux took me 20 seconds to get working. Windows, about 2 hours, plus another 2 hours for prep and research.

I did zero prep for Linux. I made one config change (GUI users could have just configured their machine to use eth1).

Other cool Linux HW features

Say you buy a new printer and want to get it set up. On Windows, you insert the CD, let it install 200MB of print drivers plus ads plus crap plus add something to your taskbar plus who knows what else. Probably reboot. Then the printer might actually print.

On Debian, you plug in the printer to the USB port. You type printconf. 5 seconds later, your printer works.

I have been unpleasantly surprised lately by just how difficult hardware support in Windows really is, especially since everyone keeps saying how good it is. It’s not good. Debian’s is better, in my opinion.

And we’re off!

Yesterday afternoon, we started our information meetings with employees about our Linux on the desktop project. We’re underway on our migration.

But before I talk about that, I need to back up and describe what the project is.

We are converting approximately 80% of our 150 or so PC users to Linux desktops. They’re running Debian etch (4.0) with Gnome, Firefox (Iceweasel), Evolution, NFSv4, and SystemImager. Over the coming days and weeks, I’ll be writing about why we’re doing this, how we’re making it happen, things we’ve run into along the way, and the technology behind it.

Today I’d like to start with a high-level overview of the reasons we started investigating this option.

It became apparent that Vista was going to be a problem for us. Most of our desktop PCs are not very old, but Vista would have meant a serious degradation in performance compared to the Windows XP Pro that most people were running. A performance dip so large, in fact, that it would have had a significant negative impact on employee productivity.

We tend to buy PCs with Windows licenses from the vendor (Windows preinstalled). As such, we knew it wouldn’t be long before XP-based machines would be hard to find. If we stuck with Windows, we’d be running a mixed-OS network — which we knew from experience we did NOT want to do. The other option would be to replace all those old PCs. The direct costs of doing that, with the associated Vista and Office licenses, would have been more than $200,000.

So we started to look at other options — changing the way we license Windows, sticking with XP for a while, or switching away from Windows. This last option sounded the most promising.

I took a spare desktop-class machine, representative of the hardware most end users would have, and installed etch (then testing) on it. I spent a bit of time tweaking the desktop settings, making things as transparent to the user as possible. We liked what we saw and started pursuing it a bit more. We knew we had some Windows apps we couldn’t discard, so we tested running them off a Windows terminal server with the Linux rdesktop client. That worked well — and the appropriate Server 2003 licenses plus CALs would still be far cheaper than a mass migration to Vista.
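The rdesktop side of that is simple. A sketch of the sort of invocation involved (the server and domain names are placeholders):

    # open a session on the Windows terminal server for the IE-only apps
    rdesktop -u "$USER" -d EXAMPLEDOM -g 1024x768 -a 16 termserver.example.com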

To make a long story short, we are getting quite a few benefits out of all this. One of the most important is a single unified system image. Excepting a few files like /etc/fstab, every system gets a bit-for-bit identical installation from the server, updated using rsync. /home is mounted from the network using NFS (v4). So our users can sit down at any PC, log in, and have all their programs, settings, email, etc. available. A side benefit is that hardware problems become minor annoyances rather than major inconveniences; if your hard disk dies, we can just bring up a different PC. We had tried numerous times to make roaming profiles work in Windows, but never really achieved a reliable setup there — perhaps because it seemed virtually impossible to assure that each Windows PC had the exact same set of software, in the exact same versions, installed.
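Mechanically, it boils down to rsync plus an NFS mount; SystemImager wraps the rsync part in its own client tooling. A sketch (hostnames and paths are illustrative):

    # pull the golden image onto a client, keeping the few host-specific files
    rsync -aHx --delete --exclude-from=/etc/image-excludes imageserver:/image/ /

    # /etc/fstab entry mounting home over NFSv4
    fileserver:/home  /home  nfs4  rw,hard,intr  0  0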

More to come.