Category Archives: Software

A little more on Vim and Emacs file handling

Yesterday’s post about switching back to Emacs saw quite a few comments from people (most of them useful, even). I learned a few things.

My biggest gripe about Vim was that for the file types I worked with most, its indentation and syntax highlighting were inferior to Emacs’s. I’d like to illustrate that with an example.

Let’s consider one of those file types: XML containing DocBook markup.

Vim has a DocBook mode. It doesn’t autodetect DocBook files, so I have this at the top of each one:

<!-- vim: set filetype=docbkxml shiftwidth=2 autoindent expandtab tw=77 : -->

Now, why should Vim need a separate DocBook mode? DocBook is just XML or SGML, and these things have a well-formed nature. Well, part of the reason is that /usr/share/vim/vim71/syntax/docbk.vim has a ton of lines like this:

syn keyword docbkKeyword chapter citation citerefentry citetitle city contained

Yes, they are hard-coding all the DocBook element names into the editing mode. It’s probably used for completion, highlighting, maybe indentation. I’m not sure, really. I remember that editing these files without the DocBook mode was much more painful anyway, but that was 8 months ago and I can’t quite remember why.

Now, what about Emacs? I don’t know if Emacs even has a DocBook mode, mainly because I don’t have to care. The Emacs psgml mode actually parses the DTD for your XML or SGML files. It knows exactly what the valid tags are from doing so. This means it has full functionality not just for DocBook, but for any XML or SGML file with a DTD.

Not only that, but it knows more about the files than Vim does. For instance, both Emacs and Vim can do completion of various things. In Vim, typing </ followed by C-x C-o (ooo, sounds like Emacs!) completes my closing tag. But it can’t autocomplete my opening tags, and it certainly isn’t aware of the DTD.

Not only can Emacs autocomplete opening and closing tags, but it knows exactly what tags are valid at a given place in the document (thanks to the DTD) and will only consider those tags for completion. Moreover, depending on how you have configured it, it could also insert spots for you to add any required attributes. So, for instance, if you’re editing XHTML and autocompletion gives you an <img> tag, it would add src="" in it for you, as a reminder that src is required.

There are a host of other smart things that Emacs can do with XML or SGML documents. For instance, you can get a list of all tags valid at the current point with C-c C-t or Shift-RightClick — useful if you’ve forgotten the name of a tag for a moment.

The difference isn’t as great with everything. But it sure is noticeable as I work with XML and Haskell files.

So long, Vim. I’m returning to Emacs

I’d been using Emacs for quite a while, and about 8 months ago I decided I would try using Vim. I’d only used vi for system emergency work, but knew a number of people who swore by it for regular work. So I decided I would learn Vim and use it for my regular work. I figure that with things like this, I don’t get a real feel for how well they work unless I use them for all my work. So I haven’t really opened Emacs at all in the past 8 months.

Yesterday I finally decided that Vim was not living up to my expectations and I’m in the process of switching back to Emacs. I thought I ought to write down why I’m doing that, for my own future reference… and since nobody has ever written about Emacs vs. Vim, I might as well post it where everyone can see it.

So here we are.

Original Reasons for Using Vim

It would lead to more comfortable typing. Lots of Vim users mention that you don’t have to hold down keys while hitting other keys as much in Vim as in Emacs, and that the movement keys are all on the home row. That’s true, but I didn’t find it to be that big of an improvement, since Esc is a farther reach than anything in Emacs, and let me tell you, you’re hitting Esc all the time in Vim. I found that removing the armrests from my chair made my hands happier than Vim ever did, and swapping Ctrl and CapsLock in Emacs will probably help there too.

It starts faster. I’m not sure that was really true even when I switched, but it certainly isn’t true on any of my machines today. Both Vim and Emacs have had major version upgrades (v7 and v22, respectively) since I started using Vim. People seem to say that Emacs 22 feels faster, though I don’t know if that’s true. Any difference in startup time between the two is imperceptible.

Vim would use less RAM. Frankly, these days, both Emacs and Vim are way down on the list of things that use up RAM. Heck, kmail has 141MB resident, and each of its two IMAP processes is using more than 30MB. Emacs in X right after start has 16MB resident, 10MB of which is shared, and 25MB VSS. gvim right after start has 8MB resident, 5MB of which is shared, and a 43MB VSS. Emacs tends to use fewer processes for things than Vim. So they’re not all that different, and Emacs could come out smaller in certain situations. But the difference is irrelevant on today’s machines, and modern Gnome and KDE apps are many times larger than both of them.

It will make me more comfortable in rescue environments where I have only traditional vi available. Actually, the vi on AIX is so different from modern Vim that this didn’t really help.

It would make me more productive. Some editing commands did, but as you’ll see below, that was more than balanced out by other problems.

Things I Liked about Vim

The commands dt, dT, df, dF. Wonderful little things those. Emacs now has M-z (Zap), which is similar to df but can actually go to other lines (a nice addition). And there are easy ways to bind keys to the others as well, though that doesn’t make it a pervasive convention like it is in Vim.

Antialiased fonts. It’s crazy that Emacs doesn’t have this yet. But not a showstopper; I still like good ole 10×20 just fine.

Regexp search-and-replace. Emacs actually has this now, and maybe it had it back then too: M-C-%. Apparently in Emacs 22 the replacement expression can also have Lisp code in it, which sounds really slick, but I can’t see myself using it regularly.

Annoying Things in Vim

Syntax highlighting. The syntax highlighting for most languages in Vim felt like it was about as smart as it was in Emacs about 10 years ago. Strings like "Hello!\"" (in languages where \" inserts a literal ") often confused it. Sometimes quotes within comments confused it. Sometimes it would be confused permanently. Other times, just until I scrolled around in the file or reloaded it.

Indentation. This is much more annoying than the syntax highlighting, really. In many languages — and especially the two modes I’ve used most recently, XML and Haskell — it really, really stinks. The indentation there isn’t syntax-aware, or not very much. Sometimes it is smart enough to know that a line starting with </ should shift left, and that a line after an opening tag should shift right. But it’s not smart enough to do this reliably. Not only that, but indentation is not handled with consistent configuration between languages. And even though Vim ships with a ton of language modes, the central docs only cover indentation for C.

I’ve asked Vim experts about this, tried all sorts of tweaks, read through Vim’s indentation mode source files, etc. There is just no way to get it anywhere near the intelligence of Emacs for most languages, short of writing my own mode, it appears. This is made even worse by the backspace key in insert mode: for a while it deletes individual spaces, and then all of a sudden it deletes a big chunk of whitespace back to the beginning of the line. (And no, the insertion of Tab characters is disabled.) Indentation is my biggest complaint about Vim, and something that shows no sign of being fixed any time soon.

And forget about anything like Emacs M-x indent-region. That is a syntax-aware indenter: you can write out an entire source file with no indentation whatsoever, and it will indent the entire thing according to the indentation rules you’ve defined and the syntax of the language you’re using. The best I’ve seen in Vim are commands that add or remove space at the beginning of every line in a region.

In short, Emacs seems to “understand” the file format on a much deeper level than Vim, and can automate things to a much better extent because of it.

Too many things disrupt the paste buffer. I can use Y or y to yank some text in Vim, and it’s really, really easy to overwrite that buffer with other things. Yes, I know that I can yank it into a named buffer, but that’s inconvenient and I don’t usually know in advance that I’ll have that need. In Emacs, only C-k and other “large area” commands disrupt it.

Vim doesn’t like you having lots of files open at once. It’s surprisingly convoluted to do this. If you use the basic documented command to edit another file, :e, it closes the file you’re working on. The normal way to open multiple files at once is to use split windows. Well, I don’t like split windows all that well, and often just want to make a quick change in one file — in full screen — and then go back to another. Even though I use set hidden in my ~/.vimrc, it still is annoying and more convoluted than it should be.

Vim can’t create new top-level X windows. In Emacs, I can press C-x 5 2, and poof, I have a second Emacs window in X, and it’s tied to the same editing session and Emacs process. Not a new process, with a different set of files, its own buffers, etc. The same process, same set of files. Just like a split window, but with a new top-level X window instead. gvim simply has no way to do that. This is also a large annoyance.

gqap stinks. This has burned me more than once. I’ll be editing an XML document, and insert some text in the middle of a paragraph. Now I have a really wide line. So I type gqap to reformat the paragraph. My cursor is near the bottom of the screen, so I don’t really see much past the current line. I then save the document and exit. Later I discover that Vim considered the entire rest of the document part of that single paragraph, and removed all the different indentation levels at </para> and the like, so it’s completely messed up. Emacs is smart enough to know what a paragraph is in XML mode, and M-q does the right thing. Oh, and Emacs indent-region can fix the Vim gqap-induced mess.

Desktop Linux: Gnome

I had been intending to write an entire series of posts about our corporate switch to Linux on the Desktop. To date, I wrote only one introducing the project and our reasons for switching from Windows. That was back in April.

Today I’d like to start talking about it all some more.

We have standardized on Gnome for our desktops. Given the Windows background of our user base, it was pretty much a given that we would have to use either Gnome or KDE. Something like fvwm or a non-integrated environment just wouldn’t be a good option.

We evaluated both Gnome and KDE. The very “clean” appearance of Gnome was a nice thing for us. KDE seemed to be too “chatty”: it talked about entering audiocd:/ when it shouldn’t have needed to, and generally violated KISS and the principle of least surprise too often. That said, I continue to run KDE for my personal desktop because Gnome just doesn’t have the flexibility that KDE does. It is too bad that Gnome has gone on this remove-functionality kick, and KDE hasn’t gotten the KISS religion yet.

Anyway, Gnome worked well for the most part. We have set some defaults in gconf for things like panel icons. We also set a few mandatory defaults. I fixed a couple of bugs in the vfs system related to nfs4 support, which manifested themselves as icons for files newly saved to the desktop never showing up.

We wanted to present a customized menu to people based on what their job function is. That is, we are using a single system image, so all apps will be installed on all machines. But we didn’t want people to have to see a ton of software that they don’t use. That was easily enough accomplished for custom apps by creating desktop files with mode 0640 and setting the group to the set of people that should see the program on their menu. We removed a few stock programs (such as the terminal) from the menu as well, using dpkg-statoverride. That was also quite easily done. However, I will say that the entire Gnome XDG menu thing is woefully under-documented.

We use Firefox for the standard web browser. It is integrated well enough with Gnome and we have no problems there, aside from sites that are IE-only. We solve that with a Windows terminal server, which I’ll discuss later.

Our network printing was already based on Cups. The individual machines are set up as Cups clients only, which works fine. We did find, however, that gnome-cups-manager automatically installs a tray monitor for cups. This monitor puts little printer icons on the tray when printers are in use. Unfortunately, it figures out which printers are in use by polling the server, and it is turned on by default out of the box, with no good way to disable it short of dpkg-statoverriding it to 0000. You can imagine that hundreds of users times dozens of printers times numerous polls per minute created quite the load on the server. This was a really braindead design and the people that wrote it should have known better. It is also quite useless to have icons coming on for all the printers on the network, which on some networks could be thousands, and not even on the same continent as the user.

Printing is generally a bit iffy in Gnome. They seem to be transitioning between about 3 different printing toolkits, all of which have different print dialog boxes with different supported features and different ways of selecting printers. One chief annoyance is that the print box in evince (the document/PDF viewer) does not let people access printer-specific features such as hole punching and stapling. So we installed gtklp and xpdf for people. The people that print heavy PDFs are huge fans of gtklp these days; it’s a nicer solution than we had in Windows. Nobody really likes evince. We also have had some trouble with evince generating PostScript output that some printers can’t grok. It sounds like all this should be much better in newer versions of Gnome, which if true, would be welcome news.

The Gnome screenshot tool makes it easy to save off a screenshot to a file, or to drag it into an email, but it is difficult to print it (you have to save it first). That was a common complaint around here, so I wrote a little wrapper around xwd and gtklp for printing screenshots. People really like that because gtklp gives them lots of options about orientation and size of the image if they want it, or a simple “Print” button to click if they don’t care. We set a gconf default to bind this to Ctrl-PrintScr and it works well. KDE’s screenshot tool is much more capable, and if we were using KDE, we wouldn’t have had any problem with screenshots.

The bottom line on Gnome is that we, and our users, are happy with it after we’ve made these customizations. But we have had to do more customization than we should have. I still think that Gnome has been better for our users than KDE, but I do wonder how long we’ll be able to survive with our “no KDE libraries” policy, as people want ksnapshot, kolour, etc.

Linux Hardware Support Better Than Windows

Something I often hear from people that talk about Linux on the desktop is this: people want to be able to go to the store, buy hardware, and be confident that it will Just Work.

I would like to point out that things are rarely this simple on Windows. And, in fact, things are often simpler on Linux these days.

Here’s the example that prompted this post.

I have a computer that’s about 4 years old. It’s my main desktop machine at home. It was still fast enough for me, but has been developing all sorts of weird behaviors. Certain USB ports stopped working altogether a few months ago. Then it started hanging during POST whenever I’d try to reboot — but would still boot OK about 80% of the time after a power cycle. Then it started randomly losing contact with my USB mouse until a reboot. And the last straw was when the display started randomly going out. I’ve told everyone that my machine has cancer and is slowly dying.

The case is a pretty nice full tower — solid and sturdy. I have a 160GB IDE drive in it. So I figured I’d upgrade the motherboard, CPU, and RAM, and add a 500GB SATA drive since they’re so cheap these days and I’m running out of space. I’d also have to buy a new video card, since my old one was AGP and the new motherboard only has PCI Express for video. So about $700 later from Newegg (I got a Core 2 Duo E6750), the parts arrived.

I spent some time installing it all. The motherboard had only one IDE channel, and I didn’t have any IDE cable long enough to connect both the IDE hard disk and the optical drive, so I popped in an old Maxtor/Promise PCI Ultra133 controller I had sitting around to use with the DVD burner.

Now, to recap, the hardware that the OS would see as new/different is: CPU, RAM, IDE controller, SATA controller, Promise IDE controller, integrated NIC, sound, video.

Then the magic smoke test.

I turned on the machine. Grub appeared. Linux started booting.

Even though I had switched from the default Debian “supports everything” kernel to a K7 kernel, it still booted.

And every single piece of hardware was supported immediately. There was no “add new hardware” wizard that popped up, no “I’ve found new hardware” boxes. It just worked, silently, with no need to tell me anything or have me click on anything.

Only one piece required configuration: the NIC, thanks to some udev design flaws (it got renamed from eth0 to eth1 by udev). That took 20 seconds. Debian saw the IDE HDD, the SATA drive, the Promise controller, the DVD burner, the video card, the sound, and it all worked automatically. And Debian is not even a distro that occurs to a lot of people when they think of great hardware support.

Now let’s turn to Windows.

The Windows Nightmare

I have a legal copy of Windows XP Home that was preinstalled on the machine when I got it. I resized its partition down to about 20GB so that I could use 140GB for Linux. I use it rarely, primarily for gaming, and I’ve bought about 3 games in the last 4 years. I usually disconnect the network when I boot to Windows, though I do keep it current with updates.

I did some research on what Windows was going to do when I replaced the hardware. The general consensus from people on the ’net is that you can’t just replace a motherboard and expect everything to be happy. Three different approaches were generally suggested: 1) don’t even try, just reinstall; 2) do a rescue install after you move over; and 3) use sysprep. The rescue install has to be done by booting from an XP install CD, then picking a rescue install option somewhere. It will overwrite your installed Windows with the version from the CD. That means I’d have to re-apply SP2, though bits of it that didn’t get overwritten would still be on the hard disk, and who knows what would happen to the registry.

Option #3 was to download sysprep (must have the Genuine Disadvantage ActiveX to get the free download from MS). Sysprep is designed to be used just prior to taking an image with ghost for replication. It removes the hardware-specific config (but not the drivers), as well as the product key, from the machine, but otherwise leaves it untouched. On the next boot, you get the “Welcome to XP” wizard.

One other strike against it is that Compaq “helpfully” didn’t ship any install CDs with the machine. Under Windows, they did have a “create rescue CD” tool, which burned 7 CDs for me. But they are full Compaq-specific CDs, not one of them an XP CD, *AND* they check on boot to see if you’re using the same Compaq motherboard, and exit if not. Highly useless.

So I went with sysprep. Before my new hardware even arrived, I downloaded the Windows drivers for all of it. I burned them to a CD, and installed as many as I could on the system in advance. About half of them refused to install since the new hardware wasn’t there yet. I then took a raw image of the partition with dd, just in case. Finally, right before I swapped the hardware, I ran sysprep and let it shut down the machine.

So after the new hardware was installed came the adventure.

Windows booted to the “welcome to XP” thingy. The video, keyboard, mouse, and IDE HDD worked. That’s about it.

I went through the “welcome to XP” wizard. But the network didn’t work yet, so I couldn’t activate it. So I popped my handy driver CD in the drive. But what’s this? Windows doesn’t recognize the DVD drive, because it doesn’t have drivers for this Promise controller that came out in, what, 2001? Sigh. So I downloaded the drivers with the iMac, copied them to a CF card, and plugged the USB CF reader into Windows.

While I was doing that, about 6 “found new hardware” dialogs got queued up. Not one of them could actually find a driver for my hardware, but that didn’t prevent Windows from making me click through them all.

So, install Promise driver from CF card, reboot. Click through new hardware dialogs again. Install network driver, reboot, click through dialogs. Install sound driver. Install Intel “chipset” driver, click through dialogs. Reboot. Install SATA driver. Reboot.

So the hardware appears to all be working by this point, though I have a Creative volume control (from the old hardware) and a Realtek one in the tray. Minor annoyance to deal with later.

Now I have to re-activate XP. I dutifully key in the magic string from the sticker on my case. Surprise surprise, the Internet-based activation fails because my hardware is different. So I have to call the 800 number. I have to read in 7 blocks of 6 digits, one block at a time. Then I answer some questions: have I activated Windows before, have I changed hardware, was the old hardware defective (yes, yes, and yes). Then I get 7 blocks of 6 digits read to me. Finally Windows is activated. PHEW! Why they couldn’t ask those questions with the online tool is beyond me.

Anyhow. Linux took me 20 seconds to get working. Windows, about 2 hours, plus another 2 hours for prep and research.

I did zero prep for Linux. I made one config change (GUI users could have just configured their machine to use eth1).

Other cool Linux HW features

Say you buy a new printer and want to get it set up. On Windows, you insert the CD, let it install 200MB of print drivers plus ads plus crap plus add something to your taskbar plus who knows what else. Probably reboot. Then the printer might actually print.

On Debian, you plug in the printer to the USB port. You type printconf. 5 seconds later, your printer works.

I have been unpleasantly surprised lately by just how difficult hardware support in Windows really is, especially since everyone keeps saying how good it is. It’s not good. Debian’s is better, in my opinion.

Disasters happen

It’s been one of those days.

It started with me forgetting my laptop bag at home. Then I had to work with an attorney on some things. He’s a nice guy, and does a good job, but you know, it’s not what I’d really like to spend my day doing.

Then in the middle of the afternoon, I hung up the phone and checked my email. About 50 messages from Samba telling me that processes with various PIDs had crashed unexpectedly. Uh-oh. I think we still have some 80ish people using that.

About a minute later, a coworker says, “John, I’ve been bad.” “Is that why I just got 50 emails from Samba?” “No. Well, yes. Well, maybe. I don’t know.”

It turned out he was working on a restore from tape that, out of necessity, grabbed more data than he needed. He meant to type rm -r ./var but typed rm -r /var instead. Oops. He hit Ctrl-C halfway through, so /var was still there enough to send email but not enough for Samba (or, apparently, NFS) to work.

As he dashed off to pull yesterday’s tapes from offsite storage, I prepared the restore and made a plan. We hadn’t installed any software since yesterday, so I restored /var to a temporary location, took the server down into single-user mode, overwrote the /var that still existed, and rebooted the Xen instance in question. Everything back to normal. Except, that is, for the potentially dozens of users that would require assistance running SCANPST.EXE because their Outlook PST, being the fragile heap of garbage that it is, will have somehow been corrupted by this little incident.

So, what did we learn from this?

  • Deleting /var was probably the least annoying outage I’ve had to deal with yet. Certainly less nerve-wracking than the time I was working on a live, powered-up server and my wedding ring shorted out something on a circuit board. I didn’t know if that thing would just reboot or if we’d be down for hours waiting for parts…
  • It was really nice knowing what was going on, rather than having to figure that out first.
  • One coworker commented, “if he had to delete part of an operating system, at least it wasn’t Windows. We wouldn’t have recovered in 15 minutes if it was.” True.
  • Bacula is great.
  • Backups are great, even if you don’t use Bacula to make them.
  • I dislike programs that take server load from 0.3 to 9.5 just telling you that there’s something wrong with the server.

Haskell Fun

Bryan O’Sullivan noted over at the Real World Haskell blog that Haskell made quite the impact at OSCon. And I can attest to Simon Peyton-Jones having trouble leaving the building because of all the people that wanted to talk to him about Haskell. It was interesting to think about “why now” for Haskell’s popularity. Bryan’s post has links to the video of Simon’s talks, which are great. (Sample quote: “Oh look, a whiteboard has appeared as if by magic! What joy!”)

I followed Bryan’s link to the Haskell/OSCon-related blog posts at Technorati. Here’s an interesting one by chromatic, who gave some Perl talks at OSCon. Favorite quote:

I sat next to Nat Torkington at the tutorial. He kept rubbing his temples. At one point I leaned over and said, “The interesting thing about Haskell is that its functions only take one argument.” He turned green.

In all seriousness, well-factored Haskell code resembles well-factored Smalltalk code: if you have functions (or methods) longer than a handful of lines, you’re probably doing too much. Lower level languages such as C rarely give the opportunity for composition and abstraction that you can get out of functional languages. The presence of pure functions–functions which never change global state and which return the same output for the same input–is also immensely important.

It’s actually the combination of the two features which give these languages such power. When Haskell forces you to mark impure functions explicitly, it gives you tools to isolate behavior which can change global state in the smallest possible scopes, and prevents you from composing impure and pure code together accidentally. When Haskell lets you compose functions into larger functions, not only does it help you write code more concisely, but it provides well-defined units of behavior which work along well-defined and isolated boundaries.
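
To make the pure/impure distinction concrete, here is a minimal sketch of my own (not from chromatic’s post): two functions whose types alone tell you whether they can touch the outside world.

double :: Int -> Int          -- pure: same output for the same input, no side effects
double x = x * 2

greet :: String -> IO ()      -- impure: the IO in the type marks the effect
greet name = putStrLn ("Hello, " ++ name)

The compiler won’t let you quietly use greet where pure code is expected; the IO in its type has to be accounted for, which is exactly the isolation chromatic is talking about.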

So what is this business about Haskell functions taking only one argument? Let’s look at a quick example. Say I wanted to write a function to multiply two numbers. I’d write:

mul a b = a * b

I could give the type of this function like this:

mul :: Int -> Int -> Int

You could read that as “mul takes two Ints and returns an Int”. And you can think of it this way. But you could also write the type this way:

mul :: Int -> (Int -> Int)

It means the exact same thing and is valid to Haskell. To read it, you’d say “mul takes an Int and returns a function that takes an Int and returns an Int.” And truly this is what Haskell functions that take multiple parameters are doing. Now, I can say:

fifteen = mul 3 5
mulByThree = mul 3
fifteen' = mulByThree 5
fifteen'' = (mul 3) 5

mulByThree has type Int -> Int; it’s a function that we got by simply not applying “mul” to all its arguments. fifteen'' illustrates what is going on internally when you write “mul 3 5”. It turns out that being able to call a function with multiple arguments is just some syntactic sugar to Haskell.
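
Another way to see what’s happening (a sketch of my own, not from the original post): a multi-argument definition is just shorthand for nested one-argument lambdas.

mul' :: Int -> (Int -> Int)
mul' = \a -> (\b -> a * b)   -- behaves identically to mul a b = a * b

Applying mul' to 3 runs the outer lambda and hands back the inner one, which is exactly the mulByThree function above.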

This is a tremendously useful feature. For instance:

filter (>= 10) [1..20]

I’ve applied >= to only one argument here. That’s fine; it returns a function that’s exactly what filter wants.
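
Partial application makes it cheap to build small, reusable functions on the fly. A few more illustrative one-liners (my examples, reusing mul from above):

tripled = map (mul 3) [1..5]     -- [3,6,9,12,15]
small   = filter (< 10) [1..20]  -- keeps 1 through 9
addFive = (+ 5)                  -- an operator section; addFive 10 is 15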

I’ve been watching the video of Simon’s Taste of Haskell talk. I could *hear* when the audience grasped the utility of this because there was a collective “Oooooo!” from them.

By the way, Haskell has type inference, so I didn’t have to give types at all. Oh, and you can also refer to the function itself by not giving any arguments.

mul = (*)
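
(And since I mentioned type inference: leave the signature off entirely and GHC infers the most general type on its own. Asking GHCi shows something like the following; exact formatting varies by version.)

*Main> :type mul
mul :: (Num a) => a -> a -> a

So mul actually works for any numeric type, not just Int.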

This is one small reason I like to say “Haskell manipulates functions with the same ease that Perl manipulates strings.” Mind-bending, isn’t it?

OSCon Friday

Fudge Update

Yesterday I blogged about the guy handing out fudge. He saw my post and explained why he was doing it. Today he was around handing out fudge, and I thanked him for his comment. He gave me two pieces of fudge for blogging about it.

Pineapple

At most of the breaks, they have this truly wonderful real (non-canned) pineapple. I think I have eaten more pineapple this week than any other week, ever. Very insidious, O’Reilly.

Conference materials

Slides of talks are available. Apparently keynotes are being posted on Youtube and Google Video, though they haven’t provided a specific link AFAICT.

Philip Rosedale, Founder/CEO of Linden Lab

Second Life and the XPrize are two examples of escapism: you can escape to virtual earth, or escape the planet entirely.

In order for Second Life to grow, it has to become profoundly open. It has to be a standardized protocol. They are working on code to let people run their own servers. They are trying to make the server trusted and make it able to be open sourced as well. They see openness as the key way for it to grow.

If you are writing a web app that depends on the network effect — such as Facebook — you should open source everything right from the beginning. Not because it’s best for humanity, but because you will win.

Jimmy Wales

He’s trying to do open source search. Nice idea, could have been summarized in 45 seconds, I think.

Simon Wardley

Was going to talk about Zimki, which was going to be open sourced, but they decided not to.

Funny and interesting talk, I sorta forgot to take notes while listening…

Nat Torkington

OSCon program chair.

Funny talk about Linux, an adolescent at 16 years old. Thinking about different languages and how they’re doing. How the Linux community is organized: are we fighting battles, and do we need to?

James Larsson

This guy takes old hardware and makes insane devices out of it. 15,000 volts running over a coat hanger used as a “game controller”, for instance.

In all, awesome keynotes today.

OSCon Thursday Part 2: Linux on laptops

Matthew Garrett

A lot of background on the state of laptop support in Linux. It worked reasonably well in the 90s, but with the migration to ACPI, has become much more complicated and less reliable in general, especially with suspend/resume and video.

EmperorLinux and System76 produce Linux-supported laptops.

I wanted some more in-depth technical information and found Matthew later on at the Intel booth. Here’s what I learned:

s2ram is little more than a wrapper around standard ACPI sleep that has options to do video mode save/restore.

I asked him about all the many, many different userland laptop management tools. He recommends simple acpi-utils with the ondemand governor. laptop-mode-tools tries to do way too much, and there is little point to using a userland governor anymore.

I’ve been having a problem with my MacBook Pro (Core Duo) hanging on suspend about 10% of the time. I explained the symptoms and asked him how to go about debugging it. He suggested disabling console suspend and enabling PM debug in the kernel. I will give that a try and see what I get.

OSCon Thursday

First off, this guy is walking around handing out free fudge. Neat idea, but I can’t quite figure out why!

Bill Hilf from Microsoft spoke today. He started by saying “I guarantee you won’t agree with a lot of things I say today.”

Nat introduced him as “our man on the inside.”

Microsoft is submitting their Shared Source license for OSI approval. I don’t know if this means that MS will have to actually make any modifications.

New website today: http://www.microsoft.com/opensource.

Bill invited feedback from anyone at billhilf at microsoft dot com.

He got some fairly pointed questions from Nat about software patents. Bill said two things about that: 1) Microsoft flubbed the communication with the press and that their intent isn’t to sue, and 2) Microsoft is learning and he’s working to help them learn.

Rick Falkvinge from the Pirate Party

Funny and informative talk about why they’ve started a Pirate Party in Sweden.

Originally copyright only impacted public places, but now it impacts people’s private lives as well.

Copyright is protecting the entertainment industry — funded by luxury purchases — at the expense of privacy and individual freedoms.

“The Norwegian Liberal Party forked our platform into Norwegian”

www.piratpartiet.se/donate

“Political donations in Sweden are not regulated”, so you all can contribute.

Steve Yegge (Google)

“Google will probably fire me for this talk”

Neither Steve’s mic nor his laptop appear to work… after his line about being fired, someone shouted “Google is very powerful!”

“A brand, in geek terms, is a pointer.”

GTE was, for a time, the most reviled brand in the US. Then they invested in improvements and ran commercials basically saying “we don’t suck anymore”, but nobody believed it.

“The single biggest branding problem in Open Source is the name ‘Open Source’.”

OSCon Wednesday

Nat Torkington, program chair, started off the day. He commented that one of the most interesting trends these days is the expansion of the Open Source ideals beyond software.

Tim O’Reilly commented about the FSF’s four freedoms, and asked how we maintain them. We have to think about preserving freedoms — questions such as Free Software that relies on proprietary services, data, or business processes. It’s important to remember to pay attention to freedom and not just to the success of businesses. But businesses matter and have enormous power and will always be related.

Tim really pushed expanding the boundaries of Open Source and thinking ahead: wikipedia, OpenID, etc. He also asked: does Congress need a version control system?

He suggested there are four open source success factors: frictionless software distribution, collaborative development, freedom to build/adapt/extend, freedom to fork.

Hadoop is an interesting FLOSS project to build some infrastructure like Google has. Apparently Yahoo is very interested.

Back to Nat… hardware is cheap and everyone keeps buying more of it.

James Reinders from Intel talking about multi-core parallelism. Saying that parallelism is going to be more and more important. Intel released Threading Building Blocks, a series of templates for C++, as GPL’d software at the conference this week. I’m not all that excited about a C++ project, though, since I think languages like Haskell have more promise here anyway.

The other Intel guy mentioned Intel’s open source involvement: intellinuxgraphics.org, intellinuxwireless.org, linuxpowertop.org, kernel.org, moblin.org. Linux laptops have the longest battery runtimes compared to other laptops.

“It’s amazing how many people you can make paranoid by showing up with a tie and a suit to do a keynote at OSCON.” — James Reinders

Simon Peyton-Jones is up now, and Nat says he will “stretch your brain until only tiny bits are left.”

State of the art in parallelism is really 30 years old with locks and condition variables — like building a skyscraper out of bananas.

Locks are difficult to do right and have “diabolical error recovery”.

Let’s do transactions against memory instead of against a database. Implementation can even be similar to databases. The idea is transactional memory, and it sounds very, very slick.
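
The canonical demo is transferring money between two accounts: the whole transaction either commits or retries, with no explicit locks anywhere. Here is a minimal sketch using GHC’s STM library (my illustration, not from Simon’s slides):

import Control.Concurrent.STM

transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  available <- readTVar from
  check (available >= amount)        -- retry the transaction until funds suffice
  writeTVar from (available - amount)
  toBalance <- readTVar to
  writeTVar to (toBalance + amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 40)       -- all-or-nothing, even with other threads running
  mapM_ (\v -> atomically (readTVar v) >>= print) [a, b]   -- prints 60, then 40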

Mark Shuttleworth and Tim for an interview…

Mark was fine, but I wish Tim had more interesting questions for him.

I went up to the front a few minutes after the event to talk to Simon PJ. He was talking to someone, who saw my nametag, and said, “Hi John, nice to meet you.” He looked familiar but I couldn’t quite place him, so I asked who he was. “Mark Shuttleworth.” Yep, I was sitting just far enough back from the stage that I wasn’t behind one of the large TV screens and couldn’t make out faces real well, and I didn’t recognize him. Erg..