You might find some valid research in the Navigating News Online study published Monday by the Project for Excellence in Journalism, a project of the Pew Research Center.
But the study needed lots of context that an organization committed to excellence in journalism should provide. For instance:
- The PEJ report acknowledges that the Nielsen Co., the source of all the data studied, relies “mainly on home-based traffic rather than work-based,” without adding that most use of news sites comes during the workday. So the data is at least suspect and is relevant mostly to a minority of traffic to news sites. If the data is mainly from home-based traffic, it also ignores or undercounts the huge and growing mobile use of news sites. (The study also excluded use of tablets; more on that later.)
- The study uses strongly dismissive language about Twitter’s contribution to traffic to news sites. But it never notes that many – probably most – visits by Twitter users come through TweetDeck, HootSuite, mobile apps or other clients rather than Twitter.com. Twitter “barely registers as a referring source,” the report concludes, ignoring or ignorant of the fact that the data counted only traffic from Twitter.com and thus missed most visits from Twitter users. The study also largely ignores Twitter in its discussion of how important social sharing is. Numbers the study cites for the New York Times and CNN websites show that Twitter sharing of news content runs one-third to one-half of Facebook’s level (46 percent for the Times and 36 percent for CNN). That is significant, and it suggests that traffic from Twitter might be nearly half as much as traffic from Facebook, which would make Twitter an important referral source. I noted more than a year ago that ignorance of Twitter, or bias against it, was one of the reasons an earlier (and often-cited) PEJ study was misleading and invalid. While this study focused on traffic to and from news sites, I should note that, however valid its findings about Twitter, promoting traffic is only one of many reasons journalists and news organizations should use Twitter. So even if the PEJ findings are accurate, they say nothing about Twitter’s value in gathering news.
- The study’s authors reflect significant bias by treating the same percentage as trivial in one context and significant in another. “Power users” (people who visit sites 10 or more times per month) represent 7 percent of site visitors, “a potential audience of core, loyal users who value the brand and come often.” The report implies that this valuable core might pay for subscriptions (the New York Times “metered” approach), though nothing in the report describes the behavior of users actually asked to pay for content. Still, the report says “subscriptions will work” for some sites. However, the report provides no data on tablet users, dismissing them as unimportant because “only between 7-10% of the population currently owns a tablet or e-reader.” (Presuming they described that statistic correctly, the percentage would be significantly higher if you counted only the adult population, or only adults reading news online.) Facebook, by the way, “has become a critical player in news,” with no top-25 site getting more than 8 percent (Huffington Post) of its traffic from Facebook. I’d like an explanation of why 7 percent is significant when it’s power users, 8 percent (at most) is critical when it’s Facebook-referred traffic, but 7-10 percent of the population owning tablets merits an “only” and isn’t worth studying.
- Links from blogs are dismissed as irrelevant. In fact, sites that provided fewer than five referrals in the Nielsen sample aren’t counted at all in the total of referring traffic. Those links are not tallied together as any sort of long-tail total and don’t count in the “traffic from links” total (35-40 percent); instead they are lumped in with “direct traffic,” such as typing a news site’s URL directly into your browser or coming to it as your home page. If long-tail links were 20 percent of the total, you could conclude that blogs and other individual links together were nearly as important as Google. If they were 1 percent, I would join PEJ in dismissing them as trivial (assuming the total sample were valid, but see #1 above). If they were 7 percent, PEJ’s presentation of that figure might give us another indication of bias. But not counting them at all, and lumping them in with direct traffic, distorts the data for both direct traffic and link traffic.
- Whatever validity this study has is heavily skewed toward national news, because PEJ studied only the top 25 news sites, based on unique visitors for the first nine months of 2010. Of the 25 sites studied, at most six could be described as local news sites: the sites of the Los Angeles Times, New York Daily News, New York Post, Boston Globe, San Francisco Chronicle and Chicago Tribune. And some, if not all, of those have significant national audiences, at least among fans of the sports franchises they cover. With that heavily national a sample, the study is nearly worthless for local news sites.
I’m glad someone is studying how people navigate the news. This study probably has some helpful data. But it has too many huge holes and indications of bias to have much value.
(I emailed Tom Rosenstiel, director of PEJ, asking him for a response. I will update if he answers my questions.)
Thanks for this, Steve – it’s a great look at some of the things that were bugging me about the study. I’m particularly surprised by the lack of long tail results and the lack of insight into Twitter traffic.
I was also surprised to see no mention in the study of how search and aggregation affect news navigation – although I suppose that’s to be expected because Nielsen doesn’t separate the two. Google News vs Google Search would give a very interesting insight into the news ecosystem.
Great information, thanks for sharing. Very good points about Twitter – most people don’t use or go to Twitter.com directly, which really skews the data.
I was really bugged about that study’s analysis of Twitter’s contribution to news traffic, as every other study I’ve seen up to this point seemed to reach an opposite conclusion. Glad to see you did a little digging and helped me understand why their conclusion really bothered me.
Great blog post.
Steve — Very insightful. I think you are right on about the failure to include smaller, local news sites. What is the skew, for example, between power users and casual users on these? Would be important to know if it is the same or different than the big national and metro sites. My hypothesis is that there would be slightly more power users at smaller-market local news sites. If so, the answers for revenue streams might be somewhat different, just as big metros rely more on national advertising in print.
It might have been a bigger mistake to use data emphasizing home computers for all the reasons you cited. Most news sites get spikes when people first get to work, at lunchtime and right before the end of the work day.
Thanks for mentioning the lack of attention to local news sites. I’d be really eager to see a similar study of those.
[…] 5 big problems with ‘Navigating News Online’ study « The Buttry Diary Seems the methodology on the report into how people navigate news might be a little bit more off the pace than people thought (tags: twitter news Pew journalism) May 12, 2011 | Filed Under Things I've found […]
Thanks, all, for your thoughtful responses. No response from Rosenstiel yet.
[…] Twitter undercounting was one of several problems that TBD’s Steve Buttry had about the study, including inconsistent language to characterize […]
[…] article is creating a lot of cross talk, such as the study’s shortcomings and its reliance on incomplete Nielsen data. One interesting part of the study is how […]
[…] UPDATE: Some people have pointed out problems with the Pew study, among them Steve Buttry. Steve lists five problems, but each of the five is a lengthy complaint. They fall generally under […]
[…] a recent report from the Pew Research Center’s Project for Excellence in Journalism, although flawed in many ways, offers a different take on these numbers. # NNO identified an important distinction […]
[…] as people had thought. But there were problems with the study’s methodology, as many people, including Steve Buttry […]
[…] Other topics I addressed about how journalism is changing: whether news judgment still has value, flaws in the Navigating News Online study, Warren Buffett buying the Omaha World-Herald and lame media coverage of Sam Brownback’s […]