Measuring the Value and Effectiveness of Content
Published: July 10, 2017
Author: Feliks Malts
Marketers know that not all users are created equal. Content consumption and purchase readiness vary from user to user, and some users need more influence than others before they convert.
So how do we determine whether the content a brand produces helps or hurts in driving users toward the desired outcome?
The answer is cohort analysis, which answers a simple question: are the users who consume content (blog pages, videos, etc.) converting at a higher rate than the users who are not?
The approach to measuring this is straightforward: map out the cohort definitions (the possible scenarios), isolate users into cohorts based on the behaviors they’ve exhibited (viewed articles or didn’t; viewed videos or didn’t; viewed any content or didn’t; etc.), and look for positive correlations between the sets of cohorts.
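The cohort split described above can be sketched in a few lines. This is a minimal illustration with hypothetical data; the field names (`viewed_article`, `viewed_video`, `converted`) and the sample records are assumptions, not part of any real dataset:

```python
from collections import defaultdict

# Hypothetical user records: each flag marks a behavior the user exhibited.
users = [
    {"id": 1, "viewed_article": True,  "viewed_video": False, "converted": True},
    {"id": 2, "viewed_article": False, "viewed_video": False, "converted": False},
    {"id": 3, "viewed_article": True,  "viewed_video": True,  "converted": True},
    {"id": 4, "viewed_article": False, "viewed_video": True,  "converted": False},
    {"id": 5, "viewed_article": False, "viewed_video": False, "converted": True},
]

def conversion_rate_by_cohort(users, behavior):
    """Split users into exposed / not-exposed cohorts on one behavior
    and return each cohort's conversion rate."""
    counts = defaultdict(lambda: {"users": 0, "conversions": 0})
    for u in users:
        cohort = "exposed" if u[behavior] else "not_exposed"
        counts[cohort]["users"] += 1
        counts[cohort]["conversions"] += int(u["converted"])
    return {c: v["conversions"] / v["users"] for c, v in counts.items()}

print(conversion_rate_by_cohort(users, "viewed_article"))
```

The same function can be run per behavior (articles, videos, any content) to produce the side-by-side cohort comparisons the analysis calls for.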
The core indicators that we would typically focus on in this type of analysis are:
- Sales (obviously): Is content influencing and contributing to a greater share of sales when we compare users who were exposed to it with users who were not?
- Conversion Rate (User): Are users who engage with content converting at a greater rate than those who do not?
- Avg. Basket Size: Is content on the site getting users to spend more, on average, because they’re building more trust and comfort with the brand through its content?
- User Retention/LTV: Are users who consume content more likely to come back and purchase again, leading to a higher brand affinity and user value? Is our content keeping the brand top-of-mind for users who are exposed, more so than for users who are not?
- User Satisfaction: Are users who are exposed to content, particularly informational, before purchasing more satisfied with their experience and product? Did support/installation/setup content partially influence their ownership experience?
A key consideration in this type of analysis is to make sure the analysis period covers all, or at least the majority, of a typical user purchase window (latency), so that a cohort isn’t unfairly penalized simply because its users weren’t given enough time to “convert.”
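One way to enforce that window is to include only users whose full purchase latency has elapsed within the analysis period. This is a sketch under assumed values (a 30-day typical latency and a fixed analysis end date are illustrative, not prescriptive):

```python
from datetime import date, timedelta

# Assumed for illustration: a 30-day typical purchase latency and a
# fixed end date for the analysis period.
LATENCY = timedelta(days=30)
ANALYSIS_END = date(2017, 6, 30)

def eligible(first_seen: date) -> bool:
    """Include a user only if their full purchase window has elapsed,
    so slow converters aren't miscounted as non-converters."""
    return first_seen + LATENCY <= ANALYSIS_END

print(eligible(date(2017, 5, 1)))   # full window has elapsed -> include
print(eligible(date(2017, 6, 15)))  # window still open -> exclude
```

Users filtered out here would simply roll into the next analysis period once their window closes.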
Beyond comparing and isolating correlations, we know that users may require multiple sessions for research, price comparison, feature evaluation, etc., so we often need to go beyond sales as a performance indicator. For brands to accommodate these types of “multi-session” purchase cycles, we recommend focusing on the following KPIs with supporting reasoning for each:
- Bounce Rate: Do users who view articles and other types of content (blog, video, support, etc.) have a lower likelihood of having single-page sessions (bounces)?
- Return Visitors: Are users who consume non-product specific content more likely to come back and ultimately purchase?
- Time on Site: Is the site content getting users to spend more time on the site, and as a result doing a better job of connecting the brand to the user?
- Page Depth: Are users incrementally engaged and going farther down the funnel because of their exposure to the available content on the site? Is this having a positive impact and driving additional page views and engagement?
- Page Scroll Depth: Are users who are consuming non-product content more or less engaged with other product-specific content on the site? Are they seeing more of the content, which gives your brand more opportunity to educate and create preference?
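These engagement KPIs can be computed per cohort the same way as conversion rate. A minimal sketch with hypothetical session records (the `viewed_content` and `pages` fields are illustrative assumptions), showing bounce rate and average page depth for exposed vs. non-exposed sessions:

```python
# Hypothetical session-level records; a "bounce" is a single-page session.
sessions = [
    {"user": "a", "viewed_content": True,  "pages": 5},
    {"user": "a", "viewed_content": True,  "pages": 1},
    {"user": "b", "viewed_content": False, "pages": 1},
    {"user": "c", "viewed_content": False, "pages": 3},
    {"user": "c", "viewed_content": False, "pages": 1},
]

def engagement_kpis(sessions, exposed):
    """Bounce rate and average page depth for one cohort of sessions."""
    subset = [s for s in sessions if s["viewed_content"] == exposed]
    bounces = sum(1 for s in subset if s["pages"] == 1)
    return {
        "bounce_rate": bounces / len(subset),
        "avg_page_depth": sum(s["pages"] for s in subset) / len(subset),
    }

print(engagement_kpis(sessions, exposed=True))
print(engagement_kpis(sessions, exposed=False))
```

Time on site and return-visitor rate would follow the same pattern: filter sessions by cohort, then aggregate.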
Lastly, digital is always changing, and our users are not, and never have been, one-size-fits-all. In a space where attention is limited and not everything can be delivered on one page, you have to understand the audience by their needs. Some user variations include:
- Audience: Not meant to be discriminatory in any way, but we often see that users of varying gender, age, HHI, marital status, parental status, etc., tend to behave somewhat, or very, differently from one another. Understanding and defining the ideal path and content for each of those personas can significantly boost the positive impact that content has on driving users to commit and purchase.
- Channel: Knowing that some channels drive users with higher intent and others are meant to drive awareness, we should leverage these insights to understand whether or not content does better with specific channels and not others. If users don’t convert in the same sessions, should we be immediately showing a different message and sequencing communication after initial interest?
- Medium: Traffic from varying mediums certainly complicates our ability to determine whether or not content is impactful. For example, users driven by branded search terms are more likely to be educated on your products and ready to buy than users who were exposed to an image/social post where the product is new but appealing.
- Savviness: How familiar are users with the category that your product and service are in? Very? Then they should be sent into a flow that gets them to “buy” in as few clicks as possible. Not savvy? Then they’ll likely need to be driven into a flow that educates them on the category and why your product is better, in as few clicks as possible.
- Lifestyle: Are there attributes about a user’s lifestyle or life stage that may define their readiness to buy? Some users are at an early stage and just starting their research (e.g. looking for a new home). Other users may have already done their research and are more ready to buy (e.g. just purchased a home). In this case, showing education content to the former and product to the latter would potentially drive more conversions, perhaps even more efficiently than average.
- Status: Are users who are more affluent/established more or less likely to be influenced by content?
This is all great, right?! So how do we go about executing this type of analysis? Fortunately, there’s not much setup work: just make sure your analytics platform is on all of your pages and that your tracking is clean!
Since visitors are unique and tend to act differently, we can expect them to do one or the other: go straight to purchase, or consume content before buying. This natural behavior allows us to create segments that group these cohorts of users, which we can then apply to our analysis. In layman’s terms, with this structured approach we’d create segments representing the various permutations of these cohorts, analyze the effect of exposure vs. non-exposure, and, if exposure is driving incremental value, implement data-driven experience changes that encourage more of these behaviors.
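Enumerating those cohort permutations can be sketched as follows. Again, the behavior flags and sample users are hypothetical stand-ins for whatever segments your analytics platform tracks:

```python
from itertools import product

# Illustrative behavior flags; a real analysis would use the segments
# defined in your analytics platform.
behaviors = ["viewed_article", "viewed_video"]

users = [
    {"viewed_article": True,  "viewed_video": False, "converted": True},
    {"viewed_article": True,  "viewed_video": True,  "converted": True},
    {"viewed_article": False, "viewed_video": True,  "converted": False},
    {"viewed_article": False, "viewed_video": False, "converted": False},
    {"viewed_article": False, "viewed_video": False, "converted": True},
]

def segment_conversion(users, behaviors):
    """Conversion rate for every permutation of behavior flags."""
    results = {}
    for combo in product([True, False], repeat=len(behaviors)):
        seg = [u for u in users
               if all(u[b] == flag for b, flag in zip(behaviors, combo))]
        if seg:  # skip empty permutations
            key = tuple(zip(behaviors, combo))
            results[key] = sum(u["converted"] for u in seg) / len(seg)
    return results

for key, rate in segment_conversion(users, behaviors).items():
    print(key, round(rate, 2))
```

Comparing rates across these permutations is the correlation evidence; proving a true causal lift would still require a controlled test (e.g. holding content back from a random sample of users).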
Learn more about how the 3Q Decision Sciences team measures campaign value and effectiveness – contact us today!