But the study needed lots of context that an organization committed to excellence in journalism should provide. For instance:
- The PEJ report acknowledges that the Nielsen Co., the source of all the data studied, relies “mainly on home-based traffic rather than work-based,” without adding that most use of news sites comes during the workday. So the data is at least suspect, relevant mostly to a minority of traffic to news sites. And if the data is mainly from home-based traffic, it also ignores or undercounts the huge and growing mobile use of news sites. (The study also excluded use of tablets; more on that later.)
- The study uses strongly dismissive language about Twitter’s contribution to traffic to news sites. But it never notes that many – probably most – visits from Twitter users come via TweetDeck, HootSuite, mobile apps or some source other than Twitter.com. Twitter “barely registers as a referring source,” the report concludes, ignoring or ignorant of the fact that the data counted only traffic from Twitter.com and missed most visits from Twitter users. The study also largely ignores Twitter in its discussion of how important social sharing is. Numbers the study cites for the New York Times and CNN websites show that Twitter sharing of news content runs one-third to one-half the level of Facebook sharing (46 percent for the Times and 36 percent for CNN). That is significant and an indication that traffic from Twitter might be nearly half as much as from Facebook, which would make it an important referral source. I noted more than a year ago that ignorance of Twitter, or bias against it, was one of the reasons an earlier (and often-cited) PEJ study was misleading and invalid. While this study focused on traffic to and from news sites, I should note that, however valid its findings about Twitter, promoting traffic is only one of many reasons journalists and news organizations should use Twitter. So even if the PEJ findings are accurate, they say nothing about Twitter’s value in gathering news.
- The study’s authors reflect significant bias by treating essentially the same percentage as trivial or significant depending on context. “Power users” (people who visit sites 10 or more times per month) represent 7 percent of site visitors, “a potential audience of core, loyal users who value the brand and come often.” The report implies that this valuable core might pay for subscriptions (the New York Times’ “metered” approach), though nothing in the report describes the behavior of users asked to pay for content. Still, the report says “subscriptions will work” for some sites. However, the report provides no data on tablet users, dismissing them as unimportant because “only between 7-10% of the population currently owns a tablet or e-reader.” (Presuming they described that statistic correctly, the percentage would be significantly higher if you counted only the adult population, or only adults reading news online.) Facebook, by the way, “has become a critical player in news,” with no top-25 site getting more than 8 percent (Huffington Post) of its traffic from Facebook. I’d like an explanation of why 7 percent is significant as power users, 8 percent (tops) is critical as Facebook-referred traffic, and 7-10 percent of the population owning tablets merits an “only” and isn’t worth studying.
- Links from blogs are dismissed as irrelevant. In fact, sites that provided fewer than 5 referrals in the Nielsen sample are not even counted in the total of referring traffic. Those links are not tallied together as any sort of long-tail total and don’t count in the “traffic from links” total (35-40 percent); instead they are lumped in with “direct traffic,” such as typing a news site’s URL directly into your browser or coming to it as your home page. If long-tail links were 20 percent of the total, you could conclude that blogs and other individual links together were nearly as important as Google. If they were 1 percent, I would join PEJ in dismissing them as trivial (if the total sample were valid, but see #1 above). If they were 7 percent, PEJ’s presentation of that figure might give us another indication of bias. But not counting them at all and lumping them in with direct traffic distorts the data for both direct traffic and link traffic.
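The distortion described above is easy to see with a small worked example. The numbers below are purely hypothetical, invented for illustration; they are not from the PEJ or Nielsen data:

```python
# Hypothetical illustration: how folding long-tail link referrals into
# "direct traffic" distorts both totals. All numbers are invented.

def traffic_shares(direct, major_links, long_tail):
    """Return (direct %, link %) two ways: counting the long tail as
    link traffic vs. lumping it in with direct traffic."""
    total = direct + major_links + long_tail
    counted = (100 * direct / total, 100 * (major_links + long_tail) / total)
    lumped = (100 * (direct + long_tail) / total, 100 * major_links / total)
    return counted, lumped

# Suppose true direct traffic is 30 units, major referrers (Google,
# Facebook, etc.) 40 units, and small long-tail referrers 30 units.
counted, lumped = traffic_shares(direct=30, major_links=40, long_tail=30)
print(counted)  # (30.0, 70.0): links would actually be 70% of traffic
print(lumped)   # (60.0, 40.0): the method inflates "direct" and shrinks "links"
```

Under those made-up numbers, the lumping method doubles the apparent share of direct traffic and cuts the apparent share of link traffic nearly in half, which is the distortion at issue.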
- Whatever validity this study has is heavily skewed toward national news, because PEJ studied only the top 25 news sites, based on unique visitors for the first nine months of 2010. Of the 25 sites studied, at most six could be described as local news sites: those of the Los Angeles Times, New York Daily News, New York Post, Boston Globe, San Francisco Chronicle and Chicago Tribune. And some, if not all, of those have significant national audiences, at least among readers following one of the sports franchises they cover. With that heavy a national sample, the study is nearly worthless for local news sites.
I’m glad someone is studying how people navigate the news. This study probably has some helpful data. But it has too many huge holes and indications of bias to have much value.
(I emailed Tom Rosenstiel, director of PEJ, asking him for response. I will update if he answers my questions.)