I was too busy yesterday enjoying beautiful spring weather, a beautiful baby granddaughter and exciting NCAA basketball to join a lively Twitter discussion of anonymous comments.
One of the primary discussants (it wasn’t combat, but it was pretty vigorous) was Mathew Ingram of GigaOm, who blogged about the topic (and has a link to a search string that pulls much of the discussion together). Steve Yelvington also blogged on the topic, noting that an ounce of leadership is worth a pound of management.
They both summarize the issue well and in detail, so I will summarize more broadly (and, admittedly, oversimplify) here:
One side (led on Twitter yesterday by Howard Owens) argues that anonymous comments inevitably become ugly and you have a more civil, responsible online discussion if you require people to participate by their real, verified names, as newspapers have always done in letters to the editor.
The other side (led by Ingram) embraces the freewheeling discussion of the anonymous comments, noting that responsible moderation of and engagement with the conversation can rein in (or remove) the ugliest exchanges, while keeping debate lively and honest. Without anonymity, whistleblowers are less likely to join the discussion, they rightly note (and the other side will rightly note that the anonymous bigots way outnumber the anonymous whistleblowers in story and blog comments). And besides, don’t we sometimes want to know how ugly people can be?
This is one of the most pressing issues I face as I work on community engagement plans for the metro Washington local news site for Allbritton Communications (we’ll have a name soon, so I can stop using that mouthful, or at least use it as an explainer following the name). And in Washington, we have lots of government workers, or workers for government contractors or nonprofit associations, who might actually be barred or strongly inhibited from commenting publicly on some issues.
I wonder if we can have it both ways. How would it work to offer an incentive for people to submit to some form of verified identity or to register through Facebook Connect (not verified, but Facebook is a place where most people identify themselves accurately)?
What if those comments appear on the same page as the story or post, and you have to click to another page to read or join the anonymous comments? Or could you put them all on one page, with anonymous comments at the bottom and comments from verified users at the top? Either of these approaches would disrupt the flow of conversation (for instance, an anonymous response to a verified-ID comment would appear on a different page or far below it). On the other hand, the real flow of comments is often pretty uneven, with responses appearing several comments apart from the original comment, and with some threads running in chronological order and others showing the most recent comments on top.
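The one-page variant comes down to a simple sort rule: verified-ID comments rise to the top, anonymous comments sink below, and each group stays in chronological order. A minimal sketch in Python; the `Comment` fields and `sort_comments` helper are my own illustration, not any existing comment system’s API:

```python
from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    text: str
    timestamp: int   # e.g., seconds since commenting opened (illustrative)
    verified: bool   # True if the commenter registered under a verified identity

def sort_comments(comments):
    """Verified-ID comments first, anonymous comments after.

    Within each group, comments stay in chronological order, preserving
    as much conversational flow as possible.
    """
    return sorted(comments, key=lambda c: (not c.verified, c.timestamp))
```

The trade-off the paragraph above notes shows up right in the sort key: an anonymous reply to a verified comment will land below every verified comment, however quickly it was posted.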
Would some other incentives work to encourage verified ID? Some gifts or site benefits? Would it be sufficient to require user profiles but allow unverified screen names? So I might reveal something about myself in the user profile, and at the least, you could click to my profile and see all my comments in one place and decide whether you think I’m a sage or a jerk.
Would some user-rating system help sort comments, favoring verified users and trusted anonymous users? That would let the community identify the trolls and banish them to some sewer where people who wanted to read the ugly stuff could find it, but the rest of us wouldn’t happen across it.
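Such a rating system might work in two passes: first banish comments the community has voted down past some threshold, then rank what remains by verification and score. A hedged sketch in Python; the `RatedComment` type, the `rank_comments` helper and the `-5` cutoff are hypothetical illustrations, not a description of any real system:

```python
from dataclasses import dataclass

@dataclass
class RatedComment:
    author: str
    score: int       # net community rating (upvotes minus downvotes)
    verified: bool

def rank_comments(comments, hide_below=-5):
    """Split comments into a visible list and a hidden 'sewer'.

    Comments rated at or below hide_below are tucked away where readers
    must go looking for them; the rest are sorted with verified users
    first and higher-rated comments ahead of lower-rated ones.
    """
    visible = [c for c in comments if c.score > hide_below]
    hidden = [c for c in comments if c.score <= hide_below]
    visible.sort(key=lambda c: (c.verified, c.score), reverse=True)
    return visible, hidden
```

The threshold is the editorial judgment call: set it too high and unpopular-but-honest whistleblowers get swept into the sewer along with the bigots.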
When I was at Gazette Communications, I was pleased to work with Rich Gordon’s class at the Medill School at Northwestern University on development of News Mixer (alas, never implemented at the Gazette). We may explore using News Mixer or something like it.
I would appreciate your suggestions. And I’d especially appreciate your name. But for now, I’m not insisting or verifying. (By the way, I’m Steve Buttry.)