In another AT&T Labs connection, the Netflix prize FAQ cites an excellent overview paper on the evaluation of recommender systems (pdf), co-authored by a colleague of mine, Loren Terveen, from the HCI department I was in at Bell Labs. Loren worked closely with Will Hill, one of the Bellcore researchers who (contemporaneously with Pattie Maes at MIT and Paul Resnick) kicked off the work on recommender and rating systems that you now find implemented all over the Internet. Recommender systems as a broad theme include all user ratings on products or comment postings (such as Amazon book ratings, or the ratings built into almost all forum software now); they're intended to help others find good-quality content by aggregating ratings from other users rather than relying on editorial oversight, which is costly and therefore scales poorly to large amounts of content. There are important tweaks you can apply to your system or your filtering mechanism, of course, such as using "ratings of people like me" rather than ratings from everyone. (Netflix has some version of predicted "ratings for YOU," specifically, which I haven't investigated in any detail.)
I recommend glancing through Loren et al.'s paper for a refreshingly meta perspective on a piece of technology that now defines a lot of the assumptions behind what is called "web 2.0." On a more personal note, I wander among mostly non-research types these days, and the hot topics du jour (like "social networking") tend to get dropped into web system design discussions all the time, with a kind of naive "of course we need it" mentality. I can only sigh at how old I feel sometimes. Critical evaluation and careful implementation do matter, even for all the stuff that made it out of research projects into profit-making companies and community-platform toolkits.
As another personal note, I'm generally pleased by the level of researchy savvy I detect in the Netflix prize FAQ. Hey, if you're hiring at a software company, consider investing in some serious research-minded folks for competitive advantage!