Note: this blog post was originally posted on ClickZ.
You read user-generated content, and so do your customers. They trust it more than anything a marketer can say, and it’s this trust that leads them to a confident buying decision. So when you put user-generated content on your site, how do you measure the impact? How can we tell if having user-generated content on the site is really helping the business?
Here are two ways we recommend organizations measure real success.
Do before-and-after comparisons. It’s possible to compare key metrics — such as average order value, sales conversion and traffic to the product page, among others — before and after user-generated content is added to the site. The key is to look at the same (or very similar) products at specific points in time over a broad period and avoid any external factors (such as promotions).
Consider a before-and-after test of reviews on a classic kitchen mixer that sells predictably throughout the year. Key metrics would be captured for a full year before reviews launch. Once reviews are live, the metrics would be compared quarterly, with a full comparative analysis done one year after launch.
As a best practice, compare metrics using percent change rather than a simple difference, so you can compare the performance of different products to one another and make additional observations. For example, using percent change, you may see a correlation between the number of reviews and overall conversion. Every marketer should know the percent-change formula, but for the sake of convenience, here it is: % change = [(new value – old value)/old value]*100.
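The formula above is easy to wrap in a small helper if you're pulling metrics into a script. This is an illustrative sketch, with made-up numbers; the function name and example values are not from the original post.

```python
def percent_change(old_value: float, new_value: float) -> float:
    """Return the percent change from old_value to new_value,
    per the formula: ((new - old) / old) * 100."""
    if old_value == 0:
        raise ValueError("old_value must be nonzero")
    return (new_value - old_value) / old_value * 100

# Hypothetical example: conversion rate rises from 2.0% to 2.5%
# after reviews launch.
lift = percent_change(2.0, 2.5)
print(f"{lift:.1f}% change")  # prints "25.0% change"
```

Because the result is a relative number, a 25% lift on the mixer can be compared directly with a 25% lift on a much higher-volume product.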
To check the accuracy of this method, compare the mixer's results over the same period with a similar mixer that doesn't have reviews, then with other products in the same category, with products in similar categories, and with the site overall. With a few comparison points, and assuming the mixer shows a large lift, you can support the results with confidence.
There’s always the question of external factors in a before-and-after analysis, but if the results are substantial, this method can work well on a product-by-product basis for established products that have sold consistently over time. It’s important to take seasonality and other factors into account, too. For example, recent nationwide economic factors played a role in many sales downturns.
Run an A/B test. While an A/B test can take time and effort to plan, it’s one of the most accurate ways to measure success, when done correctly. A/B testing takes two otherwise identical groups and makes one change to one group. That group, version “A,” becomes the test group, and version “B,” the group without the change, becomes the control group. You then measure the performance of these two groups, and determine what impact the variable made on the results.
For example, if you wanted to test the success of an email campaign, you could do so with A/B testing. You would first split your recipient list into two groups. The control group (Group B) receives a standard email, and the test group (Group A) receives the same email, but with the addition of review content. You then measure key performance metrics, which, in this case, might include response rate, click-through rate, and conversion. We have seen lifts in revenue per email of as much as 50% with this method. We’ve also seen online retailers run A/B split tests on their websites and show lifts from 10% to 50%.
Since the only difference between Group A and B is the inclusion of user-generated content, any difference in the performance metrics can be attributed to the presence of that content.
A/B testing is powerful when used correctly, but done incorrectly it can produce misleading results. To run a successful A/B test, take time to plan ahead, choose a sample size and time frame large enough to gather data that shows clear, realistic results, and test just one variable at a time, so you can confirm that results are driven by that one variable.
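The readout of a test like this can be sketched in a few lines of code. The numbers below are hypothetical, not results from the post; the two-proportion z-test shown is one common way to check that a lift is more than noise, roughly, a z-score above 1.96 suggests significance at the 95% level.

```python
import math

# Hypothetical email A/B test results after the test window.
# Group A (test) received the email with review content;
# Group B (control) received the standard email.
a_recipients, a_conversions = 10_000, 280  # test group
b_recipients, b_conversions = 10_000, 220  # control group

p_a = a_conversions / a_recipients  # test conversion rate
p_b = b_conversions / b_recipients  # control conversion rate

# Relative lift of the test group over the control group.
lift = (p_a - p_b) / p_b * 100

# Two-proportion z-test: pool the rates, compute the standard
# error of the difference, and standardize it.
p_pool = (a_conversions + b_conversions) / (a_recipients + b_recipients)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / a_recipients + 1 / b_recipients))
z = (p_a - p_b) / se

print(f"lift: {lift:.1f}%, z-score: {z:.2f}")  # prints "lift: 27.3%, z-score: 2.72"
```

With these made-up numbers, the z-score clears 1.96, so the lift would be unlikely to be chance alone; with smaller samples the same 27% lift might not be.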
Other key points to remember.
Look for at least a three-month test window to help make sure your results are sound. While some specific campaigns may take less time to gauge results, three months is usually a good rule of thumb.
It’s also important to make sure you have the right web analytics tags and data capture methods in place. For a holistic view of how user-generated content is working for you, tag all interactions your site visitors have with this content — including online and offline encounters. And align your tagging methods with your business’ overall success metrics.
Final parting thought: If a data point doesn’t matter to the bottom line (or impress the CFO), think about why you’re really measuring it.
120+ CMOs shared their biggest challenges, plans, and expectations for social marketing in this survey by The CMO Club and Bazaarvoice. Get the free results.