How do you go about choosing a photo to promote your product? Is it your favorite photo? An image that shows your product in action? A detail shot that shows off your craftsmanship? Research shows that a product’s photo quality is the biggest driver of its sales, even more than its tags, price, or customer reviews. If there were an empirical way to learn which photo will drive more traffic — and therefore conversions — to your shop, would you use it? (Spoiler: there is.)
Take a Guess
Take a look at the two photos below. They depict the same pair of sandals: one shows them as they might be used after purchase, the other as they might appear in a store. Which image will generate more traffic? Go ahead and guess before scrolling on.
In this case, the seller had chosen the image with feet as the primary image. When we tested both, the image of the sandals alone generated 30-50% more traffic.
Now, try the same exercise with these two photos of a necklace:
Applying the same logic as the test above, perhaps the photo that shows this necklace without a person modeling it will attract more clicks? We find exactly the opposite: 20% more traffic for the photo on the right.
As humans, we like to look for rules and systems to guide our decisions. Some rules work better than others; for example, photos that feature products in real-world contexts are more successful on average. (In the future, we’ll be sharing blog posts that detail what we’ve learned to help you take better photos.) But even with excellent product shots, we’ve seen again and again that the best rules have exceptions. We’ve learned that the only way to pick the best photo is to try a few and see what works — and we have the data to prove it!
We have tested tens of thousands of photos and have one overwhelming takeaway: it is really, really hard to predict which photos are best without running a test!
Stop Guessing and Start (Automated) Testing
Making decisions backed by data is highly valued in business and professional circles. The statistical technique we use, called the “A/B test”, is the same one used by businesses like Google, Amazon, Facebook, and Netflix, by researchers around the world, and by regulatory agencies like the FDA to scientifically figure out what works. Consultants charge tens of thousands of dollars to run and analyze randomized experiments for large businesses. Whatify makes these professional techniques available to everyone. We are experts at running and analyzing A/B tests, and we want to share our expertise with you so that you can increase sales conversions and build a more successful business.
Okay, so let’s unpack what’s under the hood of A/B testing and why it’s becoming a required part of the toolkit of most businesses.
You can think of A/B testing as a systematic way of “trying different stuff,” structured so that the results can be trusted. A/B tests address specific problems that might otherwise distort an experiment’s findings:
- Confounding Variables: If you try photo A on Wednesdays and photo B on Saturdays, you might find that photo A is better. But is this really because of the photo, or do you just get more traffic on Wednesdays than Saturdays?
- Noise: If you test two photos and photo A gets 13 views while photo B gets 16 views, does that really mean that photo B is better or was this due to chance? If you thought that photo A was better before the test, is 16 vs. 13 enough evidence to overturn your original judgement? Can we be sure that photo B is not performing better due to luck?
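To put a number on that second worry, here is a quick back-of-the-envelope calculation using only Python’s standard library. The 16-vs.-13 figures come from the hypothetical example above; the question is how often a split at least that lopsided happens by pure luck when the two photos are actually equally good.

```python
from math import comb

def prob_at_least(k, n):
    """P(X >= k) when each of n views independently lands on photo B
    with probability 1/2 (i.e., the photos are equally good)."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

# 29 total views, photo B got 16 of them.
p = prob_at_least(16, 29)
print(f"Chance of photo B getting 16+ of 29 views by luck alone: {p:.0%}")  # ~36%
```

Roughly a one-in-three chance of seeing that split even when the photos are identical — which is exactly why 16 vs. 13 is not enough evidence on its own.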
We’ll look at confounding variables first. In a perfect world, everything would be exactly the same when you test photo A and photo B. You could try testing both photos on Wednesdays, but you also want the time to be constant. You could try testing both photos on Wednesdays at 8:30 pm, but now you have to test the photos on separate weeks (you can’t simultaneously make both photos primary!), and calendar date might matter too. Trying to hold everything constant seems impossible.
The way that A/B testing solves this problem is by essentially “flipping a coin” to decide which photo will be primary at any given time. This might seem counterintuitive — how does randomizing the test hold things constant? The trick is that if you flip a coin every few hours to decide which image is primary, then on average all the other variables cancel each other out. You won’t end up testing the two photos on exactly the same days or exactly the same times, but you can be certain there are no systematic biases.
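As a toy illustration of that coin flip, here is a minimal sketch in Python. The four-hour slot length and the photo labels are assumptions made up for the example, not a description of Whatify’s actual scheduler:

```python
import random

def assign_primary(photos, rng=random):
    """Randomly pick which photo is primary for the next time slot.
    Over many slots, day-of-week, time-of-day, and every other
    confounder gets split roughly evenly between the photos."""
    return rng.choice(photos)

# Flip the coin once per (hypothetical) 4-hour slot for a week:
slots = 7 * 24 // 4
schedule = [assign_primary(["A", "B"]) for _ in range(slots)]
```

No single schedule is perfectly balanced, but the randomness guarantees neither photo is systematically favored.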
With confounding variables out of the way, we arrive at the issue of noise. Once we know there are no systematic biases, we use statistical methods to figure out whether one photo might be performing better due to random chance and incorporate that into our analysis — AKA we make sure that one photo isn’t just getting lucky. We also combat the chance of luck creating false results by running the test for an extended period of time. If a photo performs well over a long period, it is less likely that its results are a product of luck. We combine all of these factors to determine whether the data as a whole suggests that photo A or photo B will more successfully convert sales.
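For readers who want to see what such a statistical check can look like, below is a sketch of a standard two-proportion z-test using only Python’s standard library. The click and view counts are invented for illustration, and this textbook approximation is not Whatify’s exact analysis:

```python
from math import erf, sqrt

def two_proportion_pvalue(clicks_a, views_a, clicks_b, views_b):
    """Two-sided z-test for 'do these photos have the same click rate?'
    A small p-value means the observed gap is unlikely to be luck.
    Uses the normal approximation, which is reasonable once each
    photo has a few hundred views."""
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    z = (clicks_a / views_a - clicks_b / views_b) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical week of data: photo B's higher click rate over many
# views yields a p-value well below 0.05, so luck is an unlikely story.
p = two_proportion_pvalue(120, 2000, 170, 2000)
```

Notice that the same 25% relative gap over only a few dozen views would produce a large p-value — which is why running the test longer matters.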
The results will increase your sales — we double-checked.
In order to determine how much our recommendations increase sales for your shop, we generate our recommendations using only half of the available data. We use the other half as a “control” to learn how many more views our recommendations acquire in comparison (for the statistical gurus out there, this is called “cross-validation”). If the “winners” we pick on the first half just “got lucky,” they won’t outperform other photos in the second half of the data. On the other hand, if those winners really are better, we should see that they continue to generate more traffic in the “unused” half of the data. If that’s the case, we can be confident the estimated improvements we generate are not due to false positives.
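The cross-check described above can be sketched in a few lines. Everything here is illustrative — the record format, the random split, and the simple views-based “lead” — and is a simplified stand-in for Whatify’s actual pipeline:

```python
import random

def validated_winner(traffic_log, seed=0):
    """Split per-slot traffic records in half at random, pick the
    'winner' on the first half, then measure its lead on the held-out
    half. A winner that merely got lucky in the first half should
    show no lead on the unseen half."""
    rng = random.Random(seed)
    records = list(traffic_log)          # (photo, views) pairs
    rng.shuffle(records)
    half = len(records) // 2
    train, holdout = records[:half], records[half:]

    def totals(rows):
        out = {}
        for photo, views in rows:
            out[photo] = out.get(photo, 0) + views
        return out

    train_totals = totals(train)
    winner = max(train_totals, key=train_totals.get)
    held = totals(holdout)
    rivals = [v for p, v in held.items() if p != winner]
    return winner, held.get(winner, 0) - max(rivals, default=0)
```

If the returned lead is positive on the held-out half, the winner’s advantage generalized to data it was never picked on — the hallmark of a real effect rather than a false positive.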
Want to know more?
Head over to Whatify.com and download our FAQ/checklist/whatever to learn more about what we do.
If you’re interested in learning more about how A/B testing is used in the business world, check out this article. If you love the mathematical aspect and want to dig into the details, try this textbook.
And if you’re just interested in using A/B testing to increase traffic and conversions to your Etsy shop by 5-25%, then try signing up for Whatify. (It’s free.)