An initial note: CHI as a conference has a huge percentage of academic and research attendees. How to make it "relevant" to the "practitioner" audience is a regular concern of the conference committee. Why research isn't necessarily relevant to practice is, I think, one of the motivations for their paper. (And for things I've spoken and written about in the past, too.)
The main argument was...
...We too often perceive … an unquestioning adoption of the doctrine of usability evaluation by interface researchers and practitioners. Usability evaluation is not a universal panacea. It does not guarantee user-centered design. It will not always validate a research interface. It does not always lead to a scientific outcome.

Their supporting arguments were these:
- CHI reviewers require evaluation, and usually quantitative (lab study) testing results, as a part of a submitted paper (reflected in the submissions guidelines)
- Quantitative usability studies are often the wrong type of study for certain kinds of design work, such as inventions at the prototype stage; other types of user study may be more appropriate for these.
- In an argument familiar from Buxton's book Sketching User Experiences, focusing on usability evaluation too early in a development cycle produces poorer final results than experimenting with more design concepts (or "sketches") would.
- Early-stage technical innovations that are disruptive or paradigm changing may produce poor or ambiguous user testing results, which may prematurely kill them off as research topics -- when long-term these ideas might find audiences and produce large-scale social or practice change after adoption.
Greenberg and Buxton argue that CHI has too great a focus on scientific results (and poor ones at that), rather than on supporting good design and invention.
“Science has one methodology, art and design have another. Are we surprised that art and design are remarkable for their creativity and innovation? While we pride our rigorous stance, we also bemoan the lack of design and innovation. Could there be a correlation between methodology and results?”
Comments ran the gamut from polite disagreement about the counts of types of papers accepted at the conference to observations that publication treadmills don't allow time for disruptive, risky innovation that can be studied longitudinally, especially for students in grad school. Saul asked the CHI audience to review papers differently -- after all, that audience determines what gets in and what's considered good work. What's worthy of acceptance is in the hands of the reviewers in the room! Finally, it was noted that different, "riskier" work of a design nature, or featuring ethnographic evaluation instead of user testing, is regularly accepted at other conferences in the same ACM family: DIS, DUX, CSCW, even Ubicomp and UIST.
Most difficult, for me, is the idea that the CHI reviewing audience has the credibility and experience to review riskier design work that doesn't come with (the right kind of) user study. With mostly academics and researchers on the reviewer list, I question whether this audience has the depth of practical design experience and the credentials required to recognize and talk about "good design" with credibility. What do I require for credibility? Having done a lot of real-world design, and having evaluated a lot of products from a customer-centric perspective. When I say "real world," I don't mean academic design - where it's notoriously easy to go wild and crazy - but design in the context of a business or large organization, where the compromises designers face are what separate the really good from the mediocre.
I would like to repeat that human-computer interaction is not fully represented at CHI. The conference is just one forum. While it's true that CHI publication counts more than most other venues for researchers in this field, it doesn't necessarily represent the full range of activity and professional expertise in the broader field of interaction design.