Sunday, March 29, 2009

So You Need to Do a Usability Test... But the Product So Obviously Sucks.

Most of us in interaction design jobs have been asked to do a usability test on something that we don't think is ready for prime time. You think it needs some design work before it even gets to users. The usual scenario is that your company or client is unwilling to listen to another stakeholder with design opinions (yours, or your team's), but will believe the same feedback coming from users. Either that, or they don't know what design is yet, but do know that usability testing is a good idea.

Here are some strategies for turning this to your advantage, if your ultimate goal is being involved in the design, not just the evaluation:

  • Make it a test that's just Pass/Fail. Don't allow room for wiggling on the results, or you've wasted your time getting them data they don't want to hear. Agree up front on what this thing is supposed to support and the scenarios they think it has to handle, and be firm on delivering the news afterwards. Otherwise, it's too easy to get wishy-washy about informal usability testing and leave room for argument.
  • Create alternate designs for use in the usability test. One of the best ways to educate the organization about the value of design is to DO IT. Turning the test into an evaluation of multiple designs will help build your credibility, and it turns post-hoc evaluation into formative, and more useful, usability testing. Throw in some mockups at the start or the end, and ask for comments on those in comparison to the product that so clearly has problems, from your perspective.
  • Write the report before the test. No, it's not as unethical as it sounds - you won't deliver it, but you're checking your ability to predict what's going to be hard to use. So, you got all worked up about how this thing is hard to use, for obvious reasons; now check to see if you were right! Better yet, have more than one person on your team write their own report, privately, before the test, and seal them up. Afterwards, see who was best at predicting the disaster. Chances are, the users will fail in ways never imagined (thanks to Kevin Berni for this line), and you'll be a little less cocky next time this situation comes up. But if you get it all right, you're building your case and your confidence for pushing back on this kind of request next time around.
  • Ask others in the org to predict what might cause the users problems in the current design before the test. I used this tactic once when a QA organization felt the way I did about the usability of the product we were testing. After the analysis, I gave an award to the person with the most accurate prediction of user difficulty. You want everyone in the company to know who has good insights into design and usability, right? People with good instincts need to be making the judgment calls in the future. And you need to be able to illustrate that not everyone's opinion is equally valuable when it comes to making design decisions. Design insight requires talent and skill. This contest before the usability test helps identify talent; skill development can come later.
Again, the only way you'll get involved in the earlier stages of design is if you show you can do that part, and what it entails. Make alternatives, and show why they are better. Next time, you'll be at the table for that part of the job instead.

Sunday, March 22, 2009

R, for Open Source Data Goodness

After I left The MathWorks, I stopped being able to afford MATLAB for stats, and switched to the open source R. Recently, there have been a few articles on how R is being used at companies like Google and Facebook (here and here). I thought I'd post some names of books and sites to help out those new to R.

Some books:

  • Data Manipulation with R, by Phil Spector. One of the new Springer R series, and note that these books aren't cheap. This could be twice as long as it is, and I found a lot of the meat is towards the end. But still a good book to have. The power of R is that it's a programmable environment, allowing you to do data transformations on the fly, as well as automating your tests/displays/operations. You have to know how to move stuff around.
  • Lattice: Multivariate Data Visualization with R, by Deepayan Sarkar. Lattice is a very powerful visualization library, highly recommended. This is The Book, another Springer one.
  • Interactive and Dynamic Graphics for Data Analysis With R and GGobi, by Dianne Cook and Deborah Swayne. You can check out the GGobi site for flavor. I will have to admit that while my initial forays into using it have been enticing, the UI has some learning curve, and I've backed off a bit and not gone in very far yet. YMMV.
  • Statistics: An Introduction Using R, by Michael Crawley. This is an excellent book; it has intro stuff and is very deep on the stats, in terms of application and big picture. Obviously all examples use R, so you can replicate anything you want. The almost artistic side of statistical modeling really comes through here. Don't be fooled by "introduction," though - it's not light and easy reading.
  • Introductory Statistics with R, by Peter Dalgaard. I think this is more basic than the one by Crawley. Or just not as deep. I don't look at it as much; it's mainly useful for contrast.
  • A Handbook of Statistical Analyses Using R, by Brian Everitt and Torsten Hothorn. A heavy-duty guide with applications of different methods for different types of questions and data sets, including (e.g.) survival analysis, recursive partitioning, multidimensional scaling, longitudinal analysis. A good purchase.
  • R Graphics, by Paul Murrell. This is not my favorite book. You can customize anything in an R graph, but it's nasty and difficult to do so. The book doesn't make it easy, and neither did the course I took online from statistics.com from the author. Still, this could be kept around for extreme need. Some of the chapters and source code live here on his site. I think you're better off getting Lattice, which is slightly higher level; and if you must get very pretty output, get the numbers or basic shapes out and use another tool like Excel or Illustrator. Except this guy Steven Murdoch had some real success in his Tufte experiment with R, shown below:
Steven Murdoch's Graph in R
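If you want a quick taste of why Lattice gets recommended above, here's a minimal sketch using R's built-in mtcars data set (an assumption on my part; it's just a convenient demo, not from any of the books listed):

```r
# Paneled ("trellis") scatterplot: mpg vs. weight,
# with one panel per cylinder count.
library(lattice)  # ships with R

p <- xyplot(mpg ~ wt | factor(cyl), data = mtcars,
            xlab = "Weight (1000 lbs)",
            ylab = "Miles per gallon")
print(p)  # lattice plots only render when printed
```

That one formula-style call gives you conditioning, shared axes, and sensible defaults, which is most of what makes Lattice feel "higher level" than base graphics.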

Some tools and helpful sites:

  • Quick-R, a great site for getting started, and getting readable nice overviews of different techniques. Lots of pointers and basic help for stuff like graph customization.
  • Togaware's Rattle GUI - a timesaver for a bunch of basic and advanced descriptive stats.
  • I use SciViews as my R UI, because it has a variable browser and a script area. Some people like Tinn-R. I haven't checked out the Emacs client for R (ESS) yet, but intend to.
  • There is an R Python interface. Haven't used it, but it makes me happy. (Other languages are supported too.)
  • Since R is open source, most of the useful help lies in mailing list threads. There are a few R-specific search engines, including Dan Goldstein's and others listed here on Jonathan Baron's page.
  • A nice list of R "tips" lives here.
Well, this could go on for quite a bit... R is infinitely powerful, but has a learning curve. That flexibility really pays off, I find! Excel just doesn't scale for big data problems.
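To make the "programmable environment" point above concrete, here's a small sketch of the kind of scripted data manipulation base R gives you; again this uses the built-in mtcars data as a stand-in for your own data, and the derived column is just an illustrative example:

```r
# Summarize: average mpg by cylinder count, in one line.
avg <- aggregate(mpg ~ cyl, data = mtcars, FUN = mean)

# Filter rows, then derive a new column on the fly.
heavy <- subset(mtcars, wt > 3)          # cars over 3000 lbs
heavy$kpl <- heavy$mpg * 0.425           # rough mpg -> km per liter
```

Because it's all a script, rerunning the whole pipeline on next month's data is a single command; that's exactly what spreadsheet workflows make painful.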