Thursday, October 27, 2005

Why Free Software usability tends to suck

Matt Thomas, a Mozilla contributor, has some interesting observations about design on free software projects. If you're a fan of evolutionary design by accretion and multitudes, you might want to check out his concerns in Why Free Software usability tends to suck. (It was picked up by a number of people including Joel back in the day.)

One of his points will be controversial to some people, I think: Every contributor to the project tries to take part in the interface design, regardless of how little they know about the subject. And once you have more than one designer, you get inconsistency, both in vision and in detail. The quality of an interface design is inversely proportional to the number of designers.

I don't think this is necessarily true outside of open source; to be more concrete, it need not hold in a software environment where people aren't argumentative prima donnas, where they communicate regularly, and where they reach consensus before implementing the crucial features. But when there's frequent handoff of work, a tendency toward grandstanding or power plays in the design phase, or poor communication, it will be true.

Updated to add: He has a sequel article based on comments he got on the original, at Why Free Software usability tends to suck even more. His points continue to be good, including the inspiring last comment, which I think is true in any organization: As with previous critiques of Free Software, each of these weaknesses will become less of an issue proportionally to the number of contributors who read about them, and learn to recognize and combat them.

In software companies, this is known as "risk management." Doing it well in a design process requires recognizing the failure modes, worrying about them, and making your process immune to them.


Angus McIntyre said...

The "inspiring last comment" you point to also hints at another issue. The comment suggests that growing awareness of usability issues will improve usability. I wonder if it might make sense to speak in terms of a "culture of usability" associated with a particular platform.

Historically, the Mac user interface (with some glaring exceptions) tended to be better looking and more 'thought-out' from the point of view of usability than the Windows interface. Apple even had its vaunted "Human Interface Guidelines", which encouraged some good practices from a usability standpoint, as well as a consistent look and feel. Something similar must have existed for Windows, but I don't believe that it was as widely-known.

Generalizing recklessly from my own limited experience (hey, I'm a linguist by training: it's what we do), I would say that Macintosh applications - including free software - were more likely to have a consistent appearance/behavior and to avoid some basic usability mistakes than their counterparts in the Windows world. The big developers were bludgeoned (with varying degrees of success) into following the HIG; the little developers absorbed its precepts implicitly by trying to make their programs "look and behave like a Macintosh app". On Windows, however, where expectations about appearance and behavior were less well-defined, small or informal developers didn't have a 'body of work' to guide them, leading to an anarchic variety of different solutions.

(Note that I'm using the past tense here, as I'm not convinced this is still the case.)

I would speculate that when a code of approved practice such as Apple's HIG exists, developers tend to follow it. Some do so consciously, others absorb or express the principles simply by imitating the perceived 'standard'.

A lot of free software might be said to be Unix- or Linux-oriented: many of the more prominent projects include a Un*x implementation, sometimes as the 'flagship' or reference implementation. And Un*x has never to my knowledge had a publicly-recognized 'code of practice' equivalent to Apple's HIG or even high-profile examples of good practice to imitate. (Things might have been different if Eazel, which employed ex-Appleites Andy Hertzfeld and Arlo Rose among others, hadn't folded).

Making individual developers aware of usability issues may be one route towards better usability in free software. It's also possible, however, that if there were a clear-cut 'standard' that incorporated good practice and was well-represented by some prominent applications, then some kind of improvement might come about simply as a result of developers' tendency to imitate what they perceive to be 'the way things should be'.

Doug Orleans said...

This is somewhat of a nitpick, but I think much of that article is not about free vs. non-free software but the bazaar vs. cathedral styles of free software development. In a cathedral-style free software project, there might actually only be one interface designer, who can serve as the gatekeeper when approving contributions to make sure they are consistent with the rest of the interface design.