The old-time reader who knows me will automatically connect this to one of my favorite findings of recent years, one that I seem to repost every six months: Incompetent People Really Have No Clue that they're incompetent (that was the popular-press coverage; here's the scientific article).
Back to Explanatory Depth:
The idea behind the illusion of explanatory depth (and it may be a dangerous one) is simply that there are many cases in which we think we know what's going on, but we don't. There are many great examples in cognitive psychology (e.g., psychological essentialism, in which we believe that our concepts have definitions, but when pressed, learn that either they do not have definitions or we don't have conscious access to them), but you don't have to look to scientific research to find them. If you ask 100 people on the street whether they know how a toilet's flushing mechanism works, many, if not most, will tell you "Of course I do!" But if you then ask them to explain it, you will quickly find that they really have no idea. This is the illusion of explanatory depth. They know that when they push down on the flusher, the water leaves the bowl and then fills back up, but they don't know how this happens; they only think they do.
In experiments, participants' ratings of their own knowledge of a subject decreased over time as they were asked to explain how a device functioned, had trouble doing so, and then received explanations that resolved their confusion. Essentially, they realized what they didn't know by being forced to explore it and being "corrected."
Chris recaps three factors that influence the phenomenon:
- Confusing environmental support with representation: People may rely on visible parts to build their (shallow) theories about how things work. For me this relates, very tangentially, to the notion of "affordances" in UI theory, where, roughly speaking, analogies to physical-world behavior can sometimes be leveraged to indicate the functionality of controls. I have to think about the implications a bit more.
- Levels of analysis confusion: Multiple causation means you can stop at any level you want, and we usually stop early. For me this raises the question of when a level is sufficient, and whether it always matters to go further. In UI design, we want to work with and understand users' existing mental models of application behavior, but also educate them with feedback in the UI about necessary differences between their "naive" expectations and how things really work. We don't need to explain object models and for-loops in our code (hopefully), but we sometimes need to indicate relationships that aren't obvious to our users from their current world knowledge.
- Indeterminate end state: People have a hard time knowing when they know enough, partly because of the above point. Stories about how things work help clarify this, because of their determinate beginnings and endings -- assuming they're well-structured and not ultra-postmodern and intended to confuse! (This reminds me, again tangentially, of Harvey Sacks' proto-story told by a small child: "The baby cried. The mommy picked it up." Causation is represented; there's a problem and a solution, a beginning and an end. But it's also a story at the simplest analytic level possible!)
Chris says: To sum up, then, the [Illusion of Explanatory Depth] exists for explanations that involve multiple relations between parts, particularly causal relations, but not for more surface knowledge (e.g., facts, stories, and simple procedures), and it shows up fairly early in childhood. The concern it raises for doing science is that increasing specialization -- depth in branches of science -- means shallower understanding on the part of practitioners of mechanisms outside their immediate field.
Now see the article noting that geniuses built their work on the work of other geniuses. Although not all the examples there are cross-disciplinary, it's clear that cross-disciplinary work is both a huge opportunity and an increasing challenge.