Thoughts on Redmine

A few days ago, I discussed Trac and Redmine. Redmine is a project management tool, similar to Trac, with built-in download tools, bug tracking, etc.

Redmine has a lot of nice features. Chief among them is better integration between multiple projects, so I don’t have to go to 17 separate pages to see the open bugs on my projects.

But I’m worried about the Redmine community. It appears to live in an insular Ruby world, without much participation outside. I wrote about some of those concerns in their forums. I’ve also submitted bugs to Redmine, some with patches.

It’s also concerning that, although Redmine includes a very nice forum module, the Redmine forums are still on RubyForge. And there are many bugs in the Redmine BTS that have patches but few, if any, comments from the Redmine people who have commit access.

It could just be that Redmine is a fairly new project and needs some time to get on its feet. It’s been around since July 2006, which isn’t all that long on the one hand… or quite a while, depending on how you look at it.

The git support patch for Redmine looks very nice. However, after a month, it still hasn’t received a reply, and there’s no indication why. That is also troubling.

So I think I’ll sit with Trac for a little while until I get a better feel of how Redmine is progressing.

hg.complete.org is no more

As of today, hg.complete.org is no more. I have removed mercurial and hgwebdir from my server, removed hg from my DNS zone, and converted everything that was in Mercurial over to Git (except for hg-buildpackage, which I have orphaned). So there is now stuff at git.complete.org.

I still have a ton of Darcs repos to convert, which will take more time.

Also I have heard a lot of people say that the GitPlugin for Trac is not very good. I have two Trac instances running it: one for commithooks and another for ListLike. Both seem OK so far, but I haven’t pushed them very much yet.

Trac & Git

For quite some time now, I’ve been running Trac over at software.complete.org. Most of my free software projects — well, the ones where I actually go to the effort to make formal releases — have a Trac instance. This Trac instance provides a wiki, bug tracker, downloads area, timeline (with RSS feeds), and VCS integration.

Trac is a nice program, but one thing has bugged me about it all this time:

Every Trac instance is its own island.

I have 17 Trac instances out there for my projects. To see what bugs are open on my own server, I have to check 17 websites (or 17 RSS feeds or whatnot). Publishing a new program is not a lightweight process.

So today I started poking around looking for something better. I really like Trac’s way of integrating the wiki with the BTS and the commits; wiki markup can refer to a bug or a changeset, and bugs can use wiki markup too.

I looked at Redmine, Mantis, and Roundup, and I also have experience with RT.

Of these, Redmine looks the most interesting. Support for multiple projects, per-project wikis and forums, even Gantt charting, and support for SVN, CVS, Mercurial, Bazaar, and Darcs, with Git support already available as patches to their development tree. Oh, and I saw references to a Trac importer as well. One thing that makes me nervous, though, is that they have no links to sites that use Redmine (except one in the news section), and Google isn’t turning up users either. Does nobody use this thing?

What else should I be looking at?

Over on the Git side, I’m still liking Git. I have now migrated several Mercurial projects over to Git (see git.complete.org). I am also playing with Darcs-to-Git migration using darcs2git, which is also going well. Sometimes gitk shows a nicer representation of a repo converted from Darcs than I was able to get from Darcs itself.

Experimenting with Git

I’ve been writing about Git a bit lately.

I’ve decided to switch some of my Debian work over to it to start with, as well as some of my other projects.

Although I was thoroughly frustrated with Git a year ago, now I am quite pleased with it. What’s different? The documentation is a LOT better. So far I have found only one manpage (git-show) that omits lots of its options. The system is friendlier, keystroke-happier, and more powerful.

Compared to Mercurial, I’ve found some nice things:

In-directory branching. I didn’t expect to care about this, since both git and hg permit lightweight clones. But it turns out to be so easy to use that it is great, especially since I don’t have to set up multiple branch repos on the server. I really like this. Note that “hg branch” is not the same as a git branch; see the discussion on the hg lists about renaming it before 1.0.0 for why.
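For example (branch name made up), everything happens inside a single working directory:

git branch fix-parser      # create a branch at the current HEAD
git checkout fix-parser    # switch the working tree to it, in place
# ...hack, commit...
git checkout master        # and back; no second repository ever appears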

Flexibility in getting things around. Plain HTTP works fine (no static-http:// hack). ssh. git daemon. rsync. Very slick.
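For instance (host and paths made up), all of these fetch the same repository:

git clone http://git.example.org/project.git    # plain HTTP; the server just
                                                # needs git update-server-info
                                                # run from its post-update hook
git clone git://git.example.org/project.git     # git daemon
git clone ssh://user@git.example.org/srv/git/project.git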

Performance. Surprisingly, git actually feels faster than Mercurial, especially when pushing or pulling. I didn’t expect that.

Tags. They seem smarter in git. No more merging of .hgtags all the time. Also I like that I can attach a message to a tag and sign it.
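For example:

git tag -a v1.0 -m "Release 1.0"    # annotated tag carrying a message
git tag -s v1.0 -m "Release 1.0"    # or GPG-sign it instead
git tag -v v1.0                     # verify a signed tag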

All that power. There is a *lot* that Git can do. I should have been taking notes about it all.

My main complaint is still that Git doesn’t have something as nice as “darcs send”. Mercurial doesn’t either, but it comes a bit closer. Git has moved in the right direction, but still has room to improve there.

So I have set up git.complete.org and am starting to publish my Debian stuff on Debian’s alioth server as well.

Also, hg-fast-export in the fast-export project is *awesome*. Branch-aware and everything. It made a perfect Git version of my Mercurial work.
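The whole conversion is only a few commands. Roughly (paths hypothetical):

mkdir project.git && cd project.git
git init
hg-fast-export.sh -r ~/src/project-hg    # import all Mercurial branches
git checkout HEAD                        # populate the working tree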

Git looks really nice, until….

So I have been learning about Git this weekend. It has some really nice-looking features for sure — some things Mercurial doesn’t have.

I was getting interested in switching, until I found what I consider a big problem.

Many projects that use git require you to submit changes using git-format-patch instead of having them pull from you. They don’t want your merge history.

git-format-patch, though, doesn’t preserve SHA1s, nor does it preserve merges.

Now, say we started from a common base where line 10 of file X said “hi”, I locally changed it to “foo”, upstream changed it to “bar”, and at merge time I decide that we were both wrong and change it to “baz”. I don’t want to lose the fact that I once had it at “foo”, in case it turns out later that really was the right decision.

When we track upstream changes and submit with git format-patch, the canonical way to merge upstream appears to be:

git fetch; git rebase origin/master

Now, the problem with that is that it loses your original pre-conflict code in a case like this.
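To make the difference concrete, here is a sketch of the two workflows (origin/master standing in for upstream, with the “foo”/“bar”/“baz” scenario above):

git merge origin/master     # conflict; I resolve to "baz", and my original
                            # "foo" commit survives in the merged history

git fetch
git rebase origin/master    # conflict; I resolve to "baz", and my commit is
                            # rewritten to say "baz" -- the "foo" version is
                            # gone from history entirely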

There appears to be no clean way around that whatsoever. I tried a separate “submission” branch that rebases a local development-with-merge branch, but it requires a ton of git rebase --skip invocations during the rebase process.

Thoughts?

Revisiting Git and Mercurial

Exactly one year ago today, I wrote about Git, Mercurial, and Bzr. I have long been interested in version control, and looked at the three main DVCSs back then.

A Quick Review

Mercurial was, and for the moment remains, my main VCS. Bzr remains really uninteresting; I don’t see it offering anything compelling that Mercurial or Git can’t do. My Git gripes mainly revolved around its interface and documentation. Also, I do have Windows people using my software, and I need a plausible solution for them, even though I personally do no development on that platform.

Ted Ts’o wrote his own article in reply to mine, noting that the Git community had identified many of the same things I had and was working on them.

I followed up to Ted with:

… So if Ted’s right, and a year from now git is easier to use, better documented, more featureful, and runs well on Windows, it won’t be that hard to switch over and preserve history. Ted’s the sort of person that usually is right, so maybe I should start looking at hg2git right now.

So I guess that means it’s time to start looking at Git again.

This is rather rambly, I know. It’s late and I want to get these thoughts down before going to sleep…

Looking at Git

I started at the Git Wikipedia page for an overview of the software. It linked to two Google Tech Talks about Git: one by Linus Torvalds and another by Randal Schwartz. Of the two, I found Linus’ more entertaining and Randal’s more informative. Linus’ point that CVS is fundamentally broken, and that SVN trying to be “a better CVS” (an early goal of svn, at least) means it too is fundamentally broken, strikes me as quite sound.

One other interesting tidbit I picked up is that git can show you where functions have moved from one file to another, thanks to its rename-detection heuristic. That sounds really sweet, and is the best reason I’ve yet heard for Git’s stubborn refusal to track renames.
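A few of the relevant options (file name hypothetical):

git diff -M HEAD~1 HEAD      # -M: detect renamed files when diffing
git blame -C parser.c        # -C: trace lines moved or copied in from
                             #     other files back to their origin
git log --follow parser.c    # follow a single file across renames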

The Landscape

I’ve been following Mercurial and Darcs somewhat, and not paying much attention to Git. Mercurial has been adding small features and is nearing version 1.0. Darcs has completed a major overhaul of both its repository format and its internal algorithms, is nearing version 2.0, and appears to have finally killed the doppelganger (aka conflict spinlock) bug for good.

Git, meanwhile, seems to have made strides in usability and documentation in its 1.5.x versions.

One thing particularly interesting to me is which projects are using the different VCSs. High-profile projects now using Mercurial include OpenSolaris, OpenJDK (Java 7), and Mozilla’s projects. Git has, of course, the Linux kernel. It also has just about everything associated with freedesktop.org, including X. Also a ton of Unixy stuff.

Both the Mercurial and Git communities are working on TortoiseHg/TortoiseGit-style GUIs for Windows users. Git appears to have a sane Windows port now as well, putting it on pretty much even footing with Mercurial and Darcs there. However, I didn’t spot anything with obvious Windows ties in the Git “what projects use git” pages.

The greater speed of Mercurial and Git — even for pushing and pulling small patches — likely will keep me away from Darcs for the moment.

Onwards…

As time allows (I do have other things keeping me busy), I plan to install git and work through some tutorials and try to use it in practice as much as possible, to get a good feel for it.

Future

It is beneficial to be using a VCS that is popular, though that is certainly not a major criterion for me. I refuse to use SVN because its lack of distributed functionality makes it too unproductive to be useful. But it looks like Git is gaining a lot of traction these days, especially in Debian circles, which also makes it more interesting.

I notice that Ted did convert e2fsprogs over to git as he said he might, incidentally.

Two new bashisms

I learned about two bash features I hadn’t known about today.

From a colleague, GLOBIGNORE. A colon-separated list of patterns to ignore when expanding globs. Helpful when set to “*~” and used with grep.
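A quick sketch of that use:

GLOBIGNORE='*~'    # globs now silently drop editor backup files
grep needle *      # searches everything except the *~ copies
                   # (note: a non-null GLOBIGNORE also enables dotglob)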

From the Git FAQ, in a section explaining that it breaks the Git build process, CDPATH. A colon-separated search path to use when you type cd. Possibly useful to refer to subdirs of ~ or other common areas. Seems like it’s prone to break a ton of scripts if exported though.
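For example (directory layout made up):

export CDPATH=.:~/src    # keep "." first so local directories still win
cd myproject             # from anywhere, lands in ~/src/myproject; bash
                         # prints the resolved path when CDPATH matches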

A Cloud Filesystem

A Slashdot question today about putting to use all the unused disk space on corporate desktops got me thinking. Now, before I start: comments there raised valid points about performance, reliability, etc.

But let’s say that we have a “cloud filesystem”. This filesystem would, at its core, have one configurable parameter: how many copies of each block of data must exist in the cloud. Now, we add servers with disk space to the cloud. As we add servers, the amount of available space on the cloud increases, subject to having enough space for replication according to our parameters.

Then, say we want a minimum of 3 copies of each block replicated. Each write to the filesystem will then cause a write to at least 3 different servers. Now, what if one server goes down? If the cloud filesystem is short on space, we may be down to only 2 copies of some blocks until that server comes back up. Otherwise, space permitting, it can rebuild that third copy on other servers.

Now, has this been done before? As far as I can tell, no. Wouldn’t it be sweet?

But there are some projects that are close. Most notably, GlusterFS. GlusterFS does all of the above, except the automated bits. You can have this 3-copy redundancy, but you have to manually tell it where each copy goes, manually reconfigure if a server goes offline, etc. Other options such as NBD, OpenAFS, GFS, DRBD, Lustre, etc. aren’t really well-suited for this scenario, for various reasons.

So, what does everyone think? Can this work? Has it been done outside of Google?

LinuxCertified Laptop LC2100S

As you might know from reading my blog, at my workplace, we have largely standardized on Linux on the desktop and laptop.

We use systemimager to maintain a standard desktop image and a separate standard laptop image. These images differ because there are different assumptions. The desktop machines mount /home over NFS, authenticate to LDAP, etc. This doesn’t work on laptops. Moreover, desktops don’t use network-manager or wifi, but laptops do.

Our desktop image uses Debian’s hardware autodetection — plus a little hacking in /etc/init.d/gdm — to automatically adjust to a wide range of hardware. So far this has worked well.

Laptops are much more picky. Our standard laptop model had been the HP nc4400 — a small and light 12″ model that people here loved. HP discontinued that model. Their replacement was the 2510p. We ordered one in here for evaluation. Try as we might, we couldn’t get it to suspend and resume properly in Linux.

So I went out scouring the field of Linux laptops. Companies such as Emperor Linux buy retail laptops from people like Lenovo, test them for Linux, and sell them — at a premium. These were too expensive to justify at the quantities we need.

Then I stumbled across Linux Certified. I’d never heard of them before. I called them up and asked a few questions. They don’t buy retail laptops, but instead have OEMs in Taiwan build laptops to their spec. They happen to use the same OEM that Fujitsu does, I believe. (No big company builds laptops in the USA these days). I asked them about wifi chipsets, video chipsets, whether they use stock kernels. I got clueful answers to all of these.

So we ordered one of their LC2100s models. They didn’t offer Debian preinstalled, but did offer Ubuntu, so I selected that. The laptop arrived a couple of days (!!) later, configured with the particular CPU, etc. that I selected.

I was surprised at the thrill I felt at taking a brand new laptop out of its box, turning it on, and watching Grub appear before my eyes. Ubuntu proceeded to boot. I then of course installed our regular Debian image on the thing to check it out.

It needed a kernel and xserver-xorg-video-intel from lenny, as well as the ipw3945 driver for wifi, but otherwise worked with the exact same software as our HP nc4400 image. (In fact, it wasn’t hard to support both laptops with that image, since both use a lot of Intel hardware.) The one trick was making hibernate call /etc/init.d/ipw3945d stop so that the ipw3945 module could be unloaded before suspend. (Why this particular chipset needs a daemon is beyond me, but oh well.)
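For the record, the hook amounted to a couple of lines in the hibernate configuration, something like this (a sketch from memory; check hibernate.conf(5) for the exact directive names):

# /etc/hibernate/common.conf
OnSuspend 10 /etc/init.d/ipw3945d stop     # stop the daemon first...
UnloadModules ipw3945                      # ...so the module can be unloaded
OnResume 90 /etc/init.d/ipw3945d start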

The hardware is great. As far as I know, the ipw3945 was the only component that wasn’t directly and automatically supported by DFSG-free software in lenny main. The screen is sharp and high-contrast (it’s glossy, which I personally don’t like, but I bet our users will). The device itself feels sturdy. It’s small and dense. I haven’t opened it up, but it looks like all you need is a screwdriver to do so.

The only downside is that they don’t sell docking stations for it. Their standard answer on that is to buy a USB docking station. That’s a partial answer, but can’t handle power or video like a standard docking station will.

Also, the LC2100s is much cheaper than the HP laptop, even when configured with nicer specs in every way. That is no doubt partially due to the lack of the Windows tax.

I’m sending off an order for 4 more today, I believe.

Viper

Well, now this is quite the experience.

I’ve been trying Viper for the past few days. Viper, for those who don’t know, is usually described as a set of Vi bindings for Emacs.

After reading the nearly 100 pages of documentation and trying it a bit, I have realized that this is not really an accurate description. Viper is a port of vi to Elisp.

But that doesn’t really do it justice. Viper seems to have pretty much everything going for it that Vim does, and then some. It is extensible with Elisp, and works with all the Emacs major modes (indentation and so forth). It is also a very authentic Vi implementation, yet more customizable than Vim. And, in my opinion, more capable than Vim too.

On the one hand, this is a really neat combination: the power of the vi editing commands with the power of Emacs and Elisp for indentation, customization, etc.

On the other hand, it makes my head hurt. While Viper and Vim both are supersets of the vi command set, they don’t always implement extensions (such as multiple windows) the same way or with the same keys. Of course, you could remap them in both, but it’s a bit jarring to run Viper in expert mode, press C-w to start creating a new window, and have it run the Emacs cut command. (You can run Viper in a more limited mode, where it does not recognize any regular Emacs keys, if you don’t want that.)

It’s just weird. It mostly looks like Emacs. It is modal like Vim, and responds to all Vi and most Vim commands. It has an additional mode: the Emacs mode. Also, if configured to run in the expert configuration, Emacs commands are accepted in most places. Yes, you can move with h, j, k, l and C-n, C-p, C-f, C-b all at the same time.

The main drawback I can see is that Viper mode doesn’t work well with Info mode, which has its own keyboard shortcuts… so all of a sudden, hjkl don’t work in Info mode.

I don’t know yet if I’ll use Viper much, but it is a slick program.