One of our most important challenges in changing Digital First newsrooms will be measuring success. As I explained last month, Project Unbolt involves changing the culture and workflow of our company’s newsrooms.
But how do we measure our progress? How do we know when we’re succeeding? I’ve asked the editors of our pilot newsrooms to consider these questions as they assess their newsrooms against the characteristics of an unbolted newsroom that I’ve described.
In some cases, we will be able to chart our progress using detailed metrics that are already available to us. In other cases, we might need to measure ourselves in some way (and decide whether the time and effort of measuring are worth the insight we gain). In some respects, numerical measurement will be difficult, but we can describe how we operate now and how we’ve changed at some point down the road. Project Unbolt will probably require all of these ways of measuring and more.
I started the measurement process by asking the pilot editors to rate their newsrooms on a scale of 1 to 5 in relation to the 40-plus characteristics of an unbolted newsroom. Of course, that’s a subjective rating, but it puts all the characteristics on the same scale.
The scores aren’t useful for comparing one newsroom to another, because we can’t tell whether a low score reflects a newsroom with further to go or a newsroom whose editor is more demanding. But the scores are useful in identifying areas where that newsroom needs to focus its unbolting work. And the scores will provide some measure of improvement if the same editor, however demanding or lenient, makes the same ratings in another month or six months or year.
As I compiled those scores for the editors, I quickly saw some potential pitfalls even in using simple ratings like that. I first compiled average scores in each of six areas: news coverage and storytelling, processes, engagement, planning and management, mobile and standards. Then I compiled an overall average.
But which is the better way to compile the overall average: the average of the six area averages, or an overall average of all the scores of the individual characteristics? News coverage and storytelling included nine characteristics; processes included six. So if news coverage and storytelling is 50 percent more important than processes, it’s probably better to take the average of all the characteristics. But if each major area is equally important, an average of the six area averages would be the best way to get an overall score.
Either of those averages assumes some equality in the importance of the characteristics, either overall or within a category. And they aren’t equally important. (We’re deciding now which are most important for each of the pilot newsrooms.)
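The difference between those two averages is easy to see with a quick sketch. This uses invented 1-to-5 ratings for just two areas (the real assessment covers six areas and 40-plus characteristics), but it shows how an area with more characteristics pulls the all-characteristics average toward itself:

```python
# Hypothetical 1-5 ratings (made-up numbers, not real newsroom scores).
areas = {
    "news coverage and storytelling": [4, 3, 5, 2, 4, 3, 4, 5, 3],  # nine characteristics
    "processes": [2, 3, 2, 4, 3, 2],                                # six characteristics
}

# Method 1: average of the area averages -- each area counts equally.
area_means = [sum(scores) / len(scores) for scores in areas.values()]
avg_of_areas = sum(area_means) / len(area_means)

# Method 2: average of every individual characteristic -- bigger areas count more.
all_scores = [s for scores in areas.values() for s in scores]
overall_avg = sum(all_scores) / len(all_scores)

print(round(avg_of_areas, 2))   # prints 3.17
print(round(overall_avg, 2))    # prints 3.27
```

With these made-up numbers the gap is only a tenth of a point, but the direction matters: the all-characteristics average leans toward the larger, higher-scoring area. Which method is “right” depends entirely on whether you believe the areas, or the individual characteristics, deserve equal weight.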
We’ll also need to measure success in pursuing each of the characteristics. The quality and type of metrics to measure success will vary widely. For instance, under live coverage, one of the characteristics of news coverage and storytelling, we could measure several ways:
Descriptive. We could describe a newsroom today as using live coverage in big breaking news stories and big events, but not using it routinely. And after changing how we cover, we could describe a newsroom as routinely using live coverage (a mix of livetweeting, liveblogs and video livestreaming) for all breaking news and for all events (unless we have a good reason not to, such as a judge not allowing cell phones or computers in court or lousy connectivity in a small-town gym), with frequent live chats about community events and issues. The difference is dramatic, but not actually measured, just described.
Percentage. We could count the community events covered per week and note which of those were covered live by livetweeting, liveblogging, livestreaming, live chats or some combination. The percentage before and after would measure the culture change. These aren’t figures that are readily available from a service such as Omniture, ScribbleLive, or Chartbeat, but you could set up daily news budgets to note events and whether they are covered live and track those figures. You could review a week’s or month’s work to establish a baseline.
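Tracking that percentage from a daily news budget doesn’t require any analytics service. A minimal sketch, using an invented week of events (the event names and live/not-live flags are hypothetical, not from any real budget):

```python
# Hypothetical entries from a week's news budgets: (event, covered_live?).
events = [
    ("city council meeting", True),
    ("high school basketball game", True),
    ("school board meeting", False),
    ("ribbon cutting", False),
    ("court hearing", True),
]

live_count = sum(1 for _, covered_live in events if covered_live)
live_share = live_count / len(events)
print(f"{live_share:.0%} of events covered live")  # prints "60% of events covered live"
```

Run the same tally before and after the unbolting work and the change in that percentage is your measure of the culture shift.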
Metric services. Services such as Omniture, ScribbleLive and Google Analytics could provide various measures of your live coverage: number of events, page views, unique visitors, engagement minutes. Each figure shows something different. For instance, your number of live events will climb simply from deciding to cover more events live. But then you analyze the other metrics to determine impact. Some types of events may draw a small audience but high engagement, so you decide they are successful. But if something isn’t drawing an audience or engaging readers, you might tweak your live-coverage approach or decide not to continue covering that type of event live. Or not to cover it at all.
Not all of the characteristics of the unbolted newsroom lend themselves to numerical measurement. For instance, it’s important for newsroom meetings to have a strong digital focus. But how do you measure that? The number of meetings is a meaningless metric. More meetings aren’t necessarily better, especially if they are heavily focused on print planning.
A descriptive measurement is more appropriate here: Our morning meeting used to focus on the stories we were working on for the morning paper, with little or no discussion of digital plans such as video, interactive or live coverage or when stories would go online. Now our morning meeting reviews what is doing well on the site; what we’re covering during the day and when we can expect initial posts and updates; engagement plans on social media; and any live events we’re covering. The difference is important, even if we’re not putting numbers on it.
Some goals will have only descriptive ways to measure progress. Others will have multiple measures. You need to decide what you want to measure and find the best way to measure it.
For instance, if you want to become more effective at using Twitter, you might track your number of Twitter followers. We use a service called AdEverywhere that provides weekly reports of followers and other basic metrics on Twitter and Facebook for all DFM branded accounts.
Twitter followers are a useful measurement, but also a limited one. That number almost always goes up, because people who aren’t interested in your tweets will often scroll past them rather than make the effort of unfollowing you. And you care more about how people are interacting with your tweets than about how many people might see them.
As I’ve noted before, the chain reaction of people sharing your tweets, or the content you tweet, can generate more traffic than you get simply from your followers.
Other services such as TweetReach and Twitter analytics provide other tools for measuring Twitter performance. Each tool is probably helpful if you use it correctly and can be harmful if you use it wrong or don’t understand its limitations.
Klout attempts to measure your influence (not just on Twitter, but on various social media), but its numbers are confusing and it occasionally changes its algorithm, making them more confusing. It is now inviting users to boost their influence by using Klout to post to other social tools. I haven’t used the new tool yet, and I don’t fault Klout for experimenting with its business model, but I wonder about the accuracy of a tool that is going to raise your score for using the tool.
Measurements need to align with goals. When I was editor of the Minot Daily News in the early 1990s, the publisher and I shared the goal of improving our enterprise coverage. It was an important part of my pitch when I interviewed and of the expectations she voiced when hiring me.
Later, when the publisher was interested in tracking reporters’ byline counts, I said that was counter to the enterprise goal. Enterprise stories take more time than routine stories, and if reporters know they are being judged according to byline counts, they will crank out more routine stories, inflating some briefs into bylined stories instead of working on more enterprise.
On the other hand, if a reporter wasn’t being very productive and I gave a goal of producing more stories, a byline count would be helpful in tracking the reporter’s performance.
Here’s how you measure success:
- Identify a goal. What is the change you want?
- Identify a way (or a combination of ways) to measure the change.
- Assess how well the measurements reflect the change.
- Assess whether the effectiveness of the measurement is worth the time or cost of measuring.
- Repeat multiple times for multiple goals.
What do you measure in trying to track success in your newsroom? Or in your personal work? What tools or techniques provide that measurement?
Two companion posts over the next few days will examine how metrics can be misleading, using sports statistics as an example, and what my blog analytics tell me about my blog’s success and challenges.
Update: Matt Waite suggested this Brian Abelson post, Pageviews above replacement. It’s a detailed, sophisticated look at digital metrics, despite its misguided appreciation of the baseball stat wins above replacement.