At the end of January, I shared some pay-per-click advertising best practices along with a new PPC testing strategy we’re trying out here at Formstack. Overall, the test proved very useful for identifying winning ads and helped us lift ad click-through rates (CTR) by double digits for multiple keywords. And since we were taught to share growing up, we’ll even let you in on some of the results!
The idea for this test originated from a post by Dan Thies discussing split testing AdWords. The test consisted of one ad with performance history (the original ad), three exact copies of that ad (the controls), and one new ad (the test ad). Creating the duplicate ads puts the test ad on even testing ground and also prevents a potentially low-performing test ad from receiving 50% of the impressions, as it would in a normal A/B test.
We set up the test to run among 25 different keywords within our Google AdWords account. We’re still running tests for some of the keywords due to a low number of impressions, so I’ve pulled data from a few of our more popular terms. We’ll examine three specific keywords where we were able to crown a champion between the old ad (the ad with performance history) and the new test ad.
For two out of the three keywords, the test ad was the clear winner against the control ads and the old ad. For keyword 1, the test ad achieved a 46% higher CTR than the three control ads, a 24% higher CTR than the old ad and a slightly higher page ranking than all of the other ads. For keyword 2, the test ad outperformed the control ads by 36%, the old ad by 12.2% and again, was ranked higher in page position than all other ads.
The real shocker came in the results for keyword 3. The control ads crushed the test ad: the CTR was 85% higher for the controls and 87% higher for the old ad. The position ranking was also roughly 150% better for the other ads than for the test ad. Looking at these results, it’s clear which ads to pause and which to keep active.
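The lift percentages above are just the relative difference between two click-through rates. Here’s a minimal sketch of that arithmetic; the click and impression counts are made up for illustration and are not our actual campaign data:

```python
def ctr(clicks, impressions):
    """Click-through rate as a fraction."""
    return clicks / impressions

def lift(test_ctr, baseline_ctr):
    """Relative CTR lift of a test ad over a baseline, in percent."""
    return (test_ctr - baseline_ctr) / baseline_ctr * 100

# Hypothetical numbers -- not our real campaign figures.
test_ad = ctr(73, 1000)      # 7.3% CTR
controls = ctr(50, 1000)     # 5.0% CTR
print(f"Lift: {lift(test_ad, controls):.0f}%")  # Lift: 46%
```

Run the same calculation against the old ad’s CTR to get the second lift figure for each keyword.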
What Did We Learn?
First off, we learned not to make assumptions. Prior to running this test, we assumed that the old ad would receive the majority of the impressions for a period of time until the control ads and test ads proved themselves. This was simply not the case. For the three keywords we highlighted above, the percent served was pretty close among all the ads during the entire course of the test. For example, in ad group 3 during day 1 of the test, the old ad received 21.2% of the impressions, the test ad received 21.7%, and the three control ads received 19.8%, 18%, and 19.3%. The percentages varied only slightly after several weeks: the lowest-performing ads began to see a drop in percent served, but only a 1–2% difference.
Now imagine the three control ads were removed from the group whose test ad had an 87% lower CTR. That underperforming ad would have received around 50% of the total impressions. We would have missed out on quite a few clicks and possible conversions had we set up our typical “Ad A versus Ad B” testing model.
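To put a rough number on that missed opportunity, here’s a back-of-the-envelope sketch comparing a 50/50 A/B split against the five-ad rotation. All figures are hypothetical (chosen so the weak ad’s CTR is about 87% below the strong ad’s, as in keyword 3), not Formstack campaign data:

```python
# Hypothetical figures -- illustrative only, not our real campaign data.
total_impressions = 20_000
good_ctr = 0.050    # CTR of the established ad
bad_ctr = 0.0065    # underperforming test ad, ~87% lower CTR

# Classic A/B test: the weak ad soaks up about half the impressions.
ab_clicks = (total_impressions * 0.5 * good_ctr
             + total_impressions * 0.5 * bad_ctr)

# Five-ad rotation: the weak ad gets only ~20% of impressions, while
# the original ad and three identical controls split the rest.
rotation_clicks = (total_impressions * 0.8 * good_ctr
                   + total_impressions * 0.2 * bad_ctr)

print(f"A/B split clicks:   {ab_clicks:.0f}")       # 565
print(f"5-ad split clicks:  {rotation_clicks:.0f}") # 826
print(f"Clicks preserved:   {rotation_clicks - ab_clicks:.0f}")
```

The exact savings depend on the real CTR gap, but the shape of the result is the point: the duplicate controls cap how much traffic a bad test ad can burn.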
We’re continually learning and finding new ways to make small tweaks to our ads in order to improve performance. We’ll monitor our current tests until they receive enough impressions, pause the losers, then set up new tests to try out.
The Take-Home Message
1. PPC advertising provides immediate, qualified traffic to your website so don’t neglect it.
2. A/B testing PPC ads is not as simple as “Ad A versus Ad B.”
3. Making assumptions on PPC advertising can lead to missed opportunities.
4. As with a recipe, keep testing until you find the right combination of ingredients.