I don’t believe in one metric to rule them all. “If not pageviews, then what is a good metric for the media industry?” is something I get asked a lot since I wrote Cargo Cult Analytics. To everyone’s annoyance my answer is and always will be “a good metric to do what?” This is not the non-answer you think it might be. My goal with this guide is to show you how answering a couple of quick questions can lead to better metrics.
Now, any fool can come up with a metric. The challenge is coming up with metrics that serve a purpose. Numbers that you can learn from and that are actionable. Today’s news industry metrics are invariably too vague and generic for any purpose other than putting them up on a wall and staring at them.
These four questions will start you down a better path:
- What part of the user experience are you trying to improve? Reading, browsing, sharing, buying?
- What is the product? Is it individual stories, the editorial strategy or the apps and website?
- Why do you want to know? To explore new opportunities, to guide action or to keep people accountable?
- What’s the bottom line? More readers, more money, more prestige, more what?
Your answers to these questions determine
- what you should measure
- whether it’s reporters or editors or designers or developers who should care
- whether it’s important to track these numbers live or whether you can get away with just a monthly report
- what to prioritize
… and many other things besides.
That’s the basic formula. The details will take another 4,000 words.
The activity: what part of the experience are we trying to improve?
People visit news websites because they want to read stuff – or to listen to or watch stuff. But there are different ways you can look at that reading experience. I’ll list four pretty distinct user flows. You can’t improve all four at the same time, so pick what you want to work on right now:
- the loop: find an article through social media, read it and then possibly share it with others, who might then click through and read it, share it and so on.
- the browse: enjoy yourself, kill some time, read at least a couple of articles. A great homepage and good recommendations for what to read next keep you on the site.
- the habit: becoming a part of your morning routine. Even if you enjoy a website, you won’t necessarily keep coming back to it.
- the sell: donate, buy a subscription, an ebook, an event ticket and so on. Every visit is an opportunity.
The loop is all about drive-by users.
Realistically, users visiting from social media are going to read a single thing and then bounce: in their minds, they’re hanging out on Twitter or Facebook or Flipboard, they’re not really on your website. You can always try to convince them to read one other article before moving on – a higher percentage of non-bounce visits to article pages that originate from social media – and kudos if you can manage that. You can also try to improve how many people read beyond the headline: time on page greater than 30 seconds. But these are uphill battles.
The really interesting metric in the loop is how many people share what they’ve just read: the percentage of visitors from social media that use your share buttons.
Sharing is what moves the needle, because with enough sharing you create a virtuous circle of sharing and clicking and reading and more sharing and more clicking and more reading that can keep going for quite a while, even for stories that are not quite viral.
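To make the loop metrics concrete, here is a minimal Python sketch. The visit records and field names (`referrer_type`, `time_on_page`, `shared`) are hypothetical stand-ins for whatever your analytics events actually contain:

```python
# Minimal sketch: loop metrics for visits arriving from social media.
# The visit records and field names below are hypothetical placeholders.
visits = [
    {"referrer_type": "social", "time_on_page": 45, "shared": True},
    {"referrer_type": "social", "time_on_page": 12, "shared": False},
    {"referrer_type": "search", "time_on_page": 80, "shared": False},
    {"referrer_type": "social", "time_on_page": 95, "shared": True},
]

social = [v for v in visits if v["referrer_type"] == "social"]

# Share rate: what fraction of social visitors used a share button?
share_rate = sum(v["shared"] for v in social) / len(social)

# Read rate: what fraction stayed beyond the headline (30+ seconds)?
read_rate = sum(v["time_on_page"] > 30 for v in social) / len(social)

print(f"share rate: {share_rate:.0%}, read rate: {read_rate:.0%}")
```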
The browse focuses on different parts of your website.
You never ever want people to visit your homepage and then not click on anything else: bounce rates for visits that start on the homepage. This is where headline testing might come in. You might also want to keep track of load times: perhaps a smaller and less ornate homepage that loads faster is better at keeping users on your site than a big and shiny homepage.
The key pain point of the browse is what happens after a user has read an article, or after they’re bored with it. That reader shouldn’t leave, they should read something else. Track the percentage of visits that make use of your read-this-next widgets and the percentage of visits that circle back to the homepage and section pages.
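A similar sketch for the browse, again with made-up visit records and field names:

```python
# Sketch: two browse metrics, computed from hypothetical visit records.
visits = [
    {"landing": "homepage", "pageviews": 1, "used_read_next": False},
    {"landing": "homepage", "pageviews": 4, "used_read_next": True},
    {"landing": "article",  "pageviews": 2, "used_read_next": True},
    {"landing": "homepage", "pageviews": 3, "used_read_next": False},
]

homepage_visits = [v for v in visits if v["landing"] == "homepage"]

# Bounce rate for visits that start on the homepage.
homepage_bounce = sum(v["pageviews"] == 1 for v in homepage_visits) / len(homepage_visits)

# Share of all visits that used the read-this-next widget.
read_next_rate = sum(v["used_read_next"] for v in visits) / len(visits)

print(f"homepage bounce rate: {homepage_bounce:.0%}")
print(f"read-next usage: {read_next_rate:.0%}")
```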
The habit is all about long-term strategy. Who cares if you can get users to visit 3.7 pages on average instead of 3.6, if none of those users come back? The ratio of daily active users to monthly active users tells you how addictive or sticky your website is.
(Alistair Croll and Ben Yoskovitz note that many useful metrics come in pairs or work best as ratios. I’ve found the same to be true.)
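A minimal sketch of the stickiness ratio, assuming you can pull per-day sets of visitor IDs out of your analytics (the data below is invented):

```python
from datetime import date

# Hypothetical per-day sets of visitor IDs for one month.
daily_visitors = {
    date(2015, 3, 1): {"a", "b", "c"},
    date(2015, 3, 2): {"a", "c", "d"},
    date(2015, 3, 3): {"a", "e"},
    # ... one entry per day of the month
}

monthly_active = set().union(*daily_visitors.values())
average_daily_active = sum(len(ids) for ids in daily_visitors.values()) / len(daily_visitors)

# DAU/MAU: the closer to 1, the stickier the site.
stickiness = average_daily_active / len(monthly_active)
print(f"DAU/MAU stickiness: {stickiness:.2f}")
```

Read the ratio as the fraction of days in the month that the average monthly reader actually shows up.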
Getting into a habit is hard, and even though a habit is supposed to be a stable behavioral pattern, people get out of the habit of doing things all the time. Thus, as a little nudge to make habits stick, we need ways to remind our readers that we exist as often as we can without annoying them.
Readers who follow you on Facebook or Twitter will get these kinds of reminders as they see your stories in their feeds. The ratio of followers to unique visitors needs to be as high as you can get it. These are people who have given you permission to talk to them. Facebook posts and tweets need to run at the right times and with the right messages so there’s a high ratio of views and clicks to followers.
If a user subscribes to one of your email newsletters, that’s even better than having them as a Facebook fan.
For newsletters, track not just the ratio of newsletter subscribers to unique visitors but also the newsletter subscription form conversion rate: how many of the people who learn about your newsletter actually fill in their email address to receive it, how many ignore it and how many stumble halfway through the process – say, because you’re asking them to create a full-blown account first just to get your emails. (Not a very smart idea.)
If you want users to keep coming back, something that makes you unique doesn’t hurt either, trite as it may sound. Why would people consistently choose your site over another unless it provides wider or more serious coverage, a unique style, different writers or better apps? Analytics won’t help you figure out what makes you special, but surveys and a little bit of soul searching might.
The sell. News websites sell subscriptions (digital and print), books and e-books, and merchandising; they organize events and they sell advertising. The news industry is often short-changed by analytics software, which tends to be tailored to the needs of merchants and anyone else with something to sell. But even ad-supported sites sell things, and anything you sell is amenable to funnel analysis: lay out all the steps a user needs to go through in order to finally buy your product, then look at how many people quit before completing the purchase and at which step they dropped out. Cutting the attrition at each step of the funnel might be easier than you think.
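To make the funnel concrete, here’s a sketch of a drop-off report. The step names and counts are invented; in practice each count is how many visitors reached that step’s page or event:

```python
# Sketch: funnel drop-off report. Step names and counts are hypothetical.
funnel = [
    ("saw subscribe button",   10_000),
    ("clicked subscribe",       1_200),
    ("chose a plan",              700),
    ("entered payment details",   320),
    ("completed purchase",        260),
]

for (step, count), (_, next_count) in zip(funnel, funnel[1:]):
    drop = 1 - next_count / count
    print(f"{step:28s} -> {next_count:6d} continue ({drop:.0%} drop off)")

overall = funnel[-1][1] / funnel[0][1]
print(f"overall conversion: {overall:.1%}")
```

The step with the biggest drop-off is where you start experimenting.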
Many newspapers have at some point taken a stab at selling e-books and most of them have been disappointed with the sales. But unless you’ve put real effort into how you promote the things you sell and unless the checkout is as frictionless as possible, there’s no way for you to know whether nobody likes the product, or whether you’re just doing a lousy job selling it.
Similarly, if you sell advertising online through a self-serve website, or even if you just have some brochureware up to point people to your sales staff, you’re leaving money on the table if you don’t keep track of how and how many people get stuck before buying an ad or before calling a salesperson.
For subscriptions, churn is your warning light: it tells you what percentage of your users cancel or fail to renew their subscriptions every month or year. But it’s just a warning light. It’s up to you to figure out, through user segmentation, surveys and inspired guesswork, which users are leaving and what prompted them to leave.
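The churn calculation itself is simple. A sketch, assuming you can list who was subscribed at the start of the month and who was still subscribed at the end (the subscriber IDs are made up):

```python
# Sketch: monthly churn rate from hypothetical subscriber ID sets.
subscribers_start_of_month = {"u1", "u2", "u3", "u4", "u5"}
subscribers_end_of_month   = {"u1", "u2", "u4", "u6"}  # u6 is a new subscriber

lost = subscribers_start_of_month - subscribers_end_of_month
churn = len(lost) / len(subscribers_start_of_month)

print(f"churned this month: {sorted(lost)} ({churn:.0%})")
# The segmentation work -- figuring out *which* users leave and why -- starts from `lost`.
```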
Now for something completely different: it’s important to track the consumer side of advertising too, not just the sales side.
You’ll want to track ad revenue, how many ads you’re showing, the percentage of ads through networks rather than through direct sales, or, if you don’t use networks, track how often you’re not showing ads at all. Look at the clickthrough rate to see how effective the advertising is and at percentage of ads hidden by ad blockers, average ad load time (as well as the 95th percentile) and bounce rate for interstitial ads and popovers to see whether your ads are still bearable.
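Here’s a rough sketch of how a few of these could be computed from an impression log. The log format and field names are assumptions, not something your ad server necessarily gives you out of the box:

```python
import statistics

# Sketch: ad-side metrics from a hypothetical impression log.
impressions = [
    {"clicked": False, "blocked": False, "load_ms": 220},
    {"clicked": True,  "blocked": False, "load_ms": 480},
    {"clicked": False, "blocked": True,  "load_ms": None},  # blocked: never loaded
    {"clicked": False, "blocked": False, "load_ms": 1450},
]

served = [i for i in impressions if not i["blocked"]]
ctr = sum(i["clicked"] for i in served) / len(served)
blocked_rate = sum(i["blocked"] for i in impressions) / len(impressions)

load_times = sorted(i["load_ms"] for i in served)
average_load = statistics.mean(load_times)
p95_load = statistics.quantiles(load_times, n=20)[-1]  # 95th percentile (Python 3.8+)

print(f"CTR {ctr:.1%}, blocked {blocked_rate:.0%}, "
      f"load avg {average_load:.0f}ms, p95 {p95_load:.0f}ms")
```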
Ad metrics won’t magically make you more money and you can get away with looking at them only once every month or so. But they do establish a baseline that is useful in discussions and when you experiment with different ad networks, different ad formats, ad placement or even entirely different kinds of advertising like sponsored links, native (gasp!) advertising and sponsorships. News organizations should know exactly what they are getting in return for showing their users things those users would really rather skip.
(Making advertising more useful to users and advertisers alike isn’t easy, but I think it’s possible.)
Interlude: why are we doing this again?
The lens through which you evaluate your website determines which pages of your site to tackle (the homepage, the article page, the subscription page), which parts of those pages (the headlines, social media buttons, the what to read next widget) and what kind of user to worry about (loyal ones, first-time visitors or social media drive-by users.)
This is why unsegmented, global metrics hardly ever make any sense. For visits from loyal users that start on the homepage, it’s a great idea to try and improve daily time on site. For first-time visitors from social media, though, we mentioned before that browsing around is not what they came to your site to do and you can’t force them to stay. Instead all effort should go towards maximizing the chances of them sharing what they just read because that aligns with what they’re already doing.
A final aside before we move on to the next question.
You could also think of the loop, the browse, the habit and the sell not as activities or user flows but instead as four levels of engagement. New and occasional visitors probably find you through social media and typically bounce after reading a single article. As they become more familiar with your brand, they might browse around more instead of just relying on other people’s recommendations. Ultimately, your website becomes part of a daily routine. At that point, you can probably interest them in a subscription or a premium service.
(Even visitors that only ever visit you through social media can be persuaded to buy things and to share articles with friends, so the four levels of engagement are ultimately not so clear-cut, but you might still find it a useful perspective.)
The product: are we talking about the application, the editorial strategy or about individual stories?
Online news, if you think about it, consists of a couple of different products all bundled into one:
- The application through which you read articles, view video and explore interactives. This can be a website, a mobile app, a feed, a newsletter and many other things.
- The editorial strategy that determines what you write, what style you write it in, what formats and genres you use, whether you write short pieces or long ones and whether you write more about politics or more about entertainment.
- The individual story, each of which is a product unto itself.
Are you looking for a metric by which to gauge and improve a) an application, b) the editorial strategy or c) individual stories?
The product determines who is going to use a metric, and thus what that metric should be.
To a developer or designer, a list of yesterday’s top five stories is about as useful as white noise. You can’t design better interfaces from a single data point like that.
For an individual author, seeing what people are saying about their article is useful, and getting gentle or not-so-gentle reminders from the system if they haven’t done any promotion on social media is useful too. But exactly how many pageviews that author’s article got is more useful for editors, who can use that information in aggregate to figure out which kinds of articles people like to read, what sorts of promotion work best and where they still need to find the right recipe for success.
For a reporter, an article’s pageviews can only be correctly interpreted when compared against articles of similar genre and topic, published at similar times (morning versus evening, weekday versus weekend.) There is little point in telling a journalist that their theater review got fewer views than the Super Bowl live blog. They’ll start to resent analytics, and they will have every reason to.
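One way to make that comparison fair is to benchmark every article against the median of its peers – same genre, same publication slot. A sketch with made-up articles and numbers:

```python
from statistics import median
from collections import defaultdict

# Sketch: compare an article's pageviews against its peers (same genre,
# same publication slot) rather than against the whole site. Data is invented.
articles = [
    {"title": "Theater review: Hamlet",  "genre": "review",   "slot": "weekend", "pageviews": 1_800},
    {"title": "Theater review: Macbeth", "genre": "review",   "slot": "weekend", "pageviews": 2_400},
    {"title": "Opera review",            "genre": "review",   "slot": "weekend", "pageviews": 1_500},
    {"title": "Super Bowl live blog",    "genre": "liveblog", "slot": "weekend", "pageviews": 90_000},
]

peers = defaultdict(list)
for a in articles:
    peers[(a["genre"], a["slot"])].append(a["pageviews"])

for a in articles:
    benchmark = median(peers[(a["genre"], a["slot"])])
    print(f"{a['title']:28s} {a['pageviews'] / benchmark:.1f}x the peer median")
```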
Editors and reporters together sometimes worry about when to publish a non-breaking story: is it too heavy to be weekend literature, or too long for a weekday? They wonder whether a story deserves a shorter or longer treatment, whether or not to split the story up into a series. Knowing which types of content tend to get more or fewer pageviews, and when, comes in handy during those discussions.
(Pageviews are usually not a terribly interesting metric, but as some of these examples show, pageviews are a great building block for more complex metrics and analyses.)
The lens through which you view your product – application, editorial strategy or individual story – doesn’t just determine the metrics that’ll be useful to you. Your product lens also determines the kind of analytics software you will need. Writers need to respond to what’s going on right now and that means live analytics. Editors and designers on the other hand need to evaluate longer-term trends before betting the house and that means slower analytics with extensive reporting features.
(Most of the metrics and analyses I talk about, you can get from trusty old Google Analytics.)
The purpose: why do you want to know?
“Why do you want to know?” is a terribly important question when you’re shopping for metrics.
We care about numbers because we hope to learn from them. That learning process has three phases to it:
- Accountability: we can use numbers to tell us how we’re doing
- Exploration: we can also use numbers to give us ideas about what we can do differently
- Action: when exploration has shown areas for improvement and we’re starting to make changes, numbers can tell us whether those changes make a difference
Accountability, exploration and action are all three of them legitimate purposes for measuring things. Unfortunately many news organizations have a hard time striking a balance and focus exclusively on accountability at the expense of exploration and action. This imbalance is the origin of all these nonsense discussions about the one true metric.
If you’re an Airbus exec then I guess what you care about is how many customers you have and how many planes each of those customers buys at what margins. But if you’re an engineer at Airbus that’s not really going to help you design a better fuselage, and if you’re a program manager at Airbus then sales reports are not going to tell you what people expect out of a modern plane.
There’s nothing wrong with using numbers to keep people accountable. If you’re a CEO then you’re going to want to keep track of how many pageviews and how many total unique visitors (or how many regulars) you have.
Pageviews and visitors are closely related to revenue. It’s perfectly fair to ask whether one’s employees are bringing in the views needed to show enough ads in order to actually pay those employees.
The big caveat is that just as airplane sales numbers don’t help an engineer design better airplanes, pageviews or engaged minutes don’t actually tell the people you’re trying to goad into doing better anything about how they can do better. No metrics are sillier than those you can’t do anything about.
That’s a higher rung on the analytics ladder: using analytics not just as a performance gauge but as a way to learn and, from there, improve.
A tool like Google Analytics can tell you:
- how people browse through the site
- what they click, what they ignore, when they give up
- who those people are, where they come from
- what kind of content is popular
- when traffic tends to spike
This kind of insight into your users is useless if all you’re looking for is accountability, but vital if you want to figure out what you’re doing right, what you’re doing wrong and which user behaviors represent untapped potential.
Example. Are we doing enough food reporting? Well, how much do we do right now and how many pageviews and shares is it getting and does that compare favorably to other sections?
Example. What does a great headline look like? Let’s take a look at last year’s most-clicked headlines on the homepage, normalized by total clicks that day. Then we’ll know.
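A sketch of that normalization, with invented numbers – the point being that a headline’s share of that day’s clicks matters more than its raw click count:

```python
# Sketch: rank homepage headlines by their share of that day's clicks,
# so a slow news day doesn't hide a great headline. Numbers are made up.
daily_total_clicks = {"2014-06-01": 40_000, "2014-06-02": 250_000}

headlines = [
    {"headline": "Quiet budget vote, explained", "day": "2014-06-01", "clicks": 6_000},
    {"headline": "World cup opening night",      "day": "2014-06-02", "clicks": 20_000},
]

for h in headlines:
    h["share_of_day"] = h["clicks"] / daily_total_clicks[h["day"]]

for h in sorted(headlines, key=lambda h: h["share_of_day"], reverse=True):
    print(f"{h['share_of_day']:.1%}  {h['headline']}")
```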
And then once you’re done exploring and have found some areas for improvement, you need to actually start making those improvements. The metrics for keeping track of a project’s success will be similar to those used for accountability purposes, but not exactly the same: they will need to be much more precise, looking not at the global picture but at whether this exact thing you’re changing is having that exact effect.
Are we there yet?
Firstly you looked at the user experience and decided whether to focus on habit generation or social sharing, loyal users or new ones, more eyeballs or more sales. (Not in general, just for figuring out which metrics you need right now.)
Secondly you figured out you want metrics related to the individual story, or perhaps the editorial strategy, or instead metrics related to the applications that readers use to read and view your content.
Thirdly you clarified that you need numbers to support an ongoing project, or perhaps that it’s too early for action and really you need to look around for areas for improvement. Conversely, you might know exactly what your mission is and have this year’s projects figured out, and so accountability is where it’s at.
Those three questions are sufficient to point you to many useful metrics. Maybe you’re already thinking about changes you could make that might help you move those metrics.
Example. I’d like to make our site more sticky, is there anything our most loyal users have in common and any way we could use that information to make our other users more like that? Exploration to improve habit generation, either by improving the application or the editorial strategy.
Example. I’m afraid we’re not going to hit our revenue targets this year; as our dev team has already put a lot of effort into reducing churn among our most loyal users, let’s focus instead on virality as a pageview driver. I want reporters to be more active on Twitter and I’d like at least 10% of all social traffic to originate from their Twitter accounts. That’s creating a stronger viral loop, while keeping journalists accountable.
(And by the way, I’m just providing the method. It’s up to you to not be a dick or a slave driver.)
We’re really most of the way there. But we do need one final reality check.
The engine of growth: what’s the bottom line?
We need one final filter, a bullshit detector to distinguish metrics that are actually useful from those that just look like they are.
You’ve told me you’re interested in tracking this kind of thing for that kind of reason, with this kind of readership in mind and that kind of internal audience. Pat yourself on the back, because you’re already being way smarter about this than 99% of your colleagues. But why do you care about this metric at all? What’s the bottom line?
An engine of growth describes how you expect changes in the numbers to reflect changes in your organization. If you know the engine of growth associated with a metric, then you have some assurance that you’re not measuring stuff for the sake of it, but instead you’re measuring to get better.
I’m blatantly stealing my five engines of growth from the old Coca-Cola marketing whiz Sergio Zyman: you need to sell more stuff, to more people, more often, for more money, more efficiently.
(If you’re a non-profit, replace “selling” and “money” with whatever your goals are.)
- More stuff gives you more tickets in the social/viral media lottery but might not help much with loyal readers, as there’s only so much they can read anyway. You can think about hiring more writers, you can also think about just writing more but smaller pieces, or you can think about writing more but less-researched pieces. All of these have obvious trade-offs. More stuff doesn’t have to mean more articles, though. It can also mean ancillary products like events, repurposing content for different platforms, creating new verticals and the like.
- More people: a lot of brands are experimenting with big, bold projects (like interactives) to expand brand awareness and reach. Don’t forget to spend some time on better conversion from new to loyal users too, making you less dependent on a constant influx of new readers. Writers can do more to promote their work, editorial strategy needs to focus on things people actually want to read and designers can tweak the article page to encourage sharing.
- More often: getting people to visit more often is all about habit generation. Editorial can plan content that creates expectations like article series, daily or weekly features. They can also push content through newsletters and social media. Designers and developers need to pitch in by making sure people find and use these habit-forming features and e.g. improve the subscription funnels.
- More money: increase conversion from free to paid users. Creating a more compelling offering or more beautiful apps can lure paid users, but it only pays if you can do it without a commensurate increase in expenses; otherwise you’re just treading water. (It’s fun to spend a lot of money on ambitious projects, so many news organizations make this mistake.)
- More efficiently: editorial staff should publish what matters when it matters, app devs can speed up repetitive work by automating it; editorial and developers can work together to make existing content more accessible e.g. through search, topic pages (like Vox Card Stacks) or collections (like Quartz obsessions).
What now?
Here’s our heuristic for finding useful metrics one more time:
- What part of the user experience are you working on? (The loop, the browse, the habit, the sell.)
- Who is working on what? (Product people or sales working on the website and apps, editors and execs working on the editorial strategy, reporters and social media folk working on a story and its post-publication lifecycle.)
- Why do you need to know? (Keep people accountable, explore new opportunities or drive projects.)
- How is it going to help? (You need to sell more stuff, to more people, more often, for more money, more efficiently.)
Now, none of these questions guarantee that whatever metrics you end up with will strike a balance between journalism’s business goals and its social goals. That’s your job.
And none of these questions directly, unambiguously lead to a particular metric.
But these questions bring focus to your search for metrics, and focused metrics are good metrics. Each question you answer will lead to a more robust metric.
Example. We need to be more efficient about what we write and when we publish it. Let’s start by comparing average pageviews by genre and by time of day to see what works and what doesn’t.
Example. Our developers should work on getting us more new visitors. Sharing is not the issue, so maybe they should work on SEO and that means the ratio of search traffic to total traffic should go up.
Example. We have more ad inventory than we can show. We could increase the writing budget, or we could create a new app for tablets. Let’s run the numbers.
At this point maybe you’re thinking, “whoa, metrics overload!” It is and it isn’t.
It makes sense for organizations to focus on only a couple of key performance indicators. You want to be very clear about what success looks like. You want to align efforts and make sure everybody knows that this quarter, this is what we’re doing and this is the metric that needs to move.
But as we’ve seen, analytics have a much larger role to play than just indicating performance. For analytics to really be worth all the time and money we spend on them, they have to be able to guide action and find you new opportunities, and that means different people will need to look at different numbers and these metrics will change as your data-driven organization changes and grows.
Editors should keep track of things readers love and things that somehow can’t seem to find an audience. Designers should keep users clicking from article to article. Writers should know what people are saying about their work and use it to inform what they write next. Developers should cut features that people don’t use and refine those that people love. We all have different jobs, so we all need different metrics.
Getting serious about analytics means asking yourself “what are the numbers my team needs to do a better job?” instead of making do with whatever happens to be on the default dashboard of your analytics software or whatever gets sent around as a daily analytics email digest.
Musicians sometimes say that every instrument, no matter how awful it sounds, has a song in it. Metrics are like that too. There are no good metrics or bad metrics. The goodness of a metric is determined by how you use it – and whether you actually use it or just look at it! – who uses it and why they use it.
Ask the right questions, get the right metrics.
Now that my fellowship at the Tow Center for Digital Journalism has come to an end, I’m looking for new gigs. If you’re doing analytics at a news organization and could use some help, get in touch. I’m stijn.
My previous writing on analytics in chronological order: Your Metrics Suck, Metrics are for doing, not for staring, In Defense of Pageviews and Cargo Cult Analytics.
Stijn Debrouwere writes about statistics, computer code and the future of journalism. Used to work at the Guardian, Fusion and the Tow Center for Digital Journalism, now a data scientist for hire. Stijn is @stdbrouw on Twitter.