Landing Page Testing & Hierarchy of Tests

I'm constantly running tests through Google Optimize on my landing pages, advertorials, etc.

My problem is I am just beginning to feel like I'm throwing shit at the wall more often than not. I'd love to have more of a standardized framework to test within.

Kinda like this:

Angle > Titles > Texts > CTAs > Images > Colors

In my mind, the above hierarchy is true for the overall effectiveness of paid ad campaigns. Figuring out the right angle is vastly more important than some button color like you always see in the marketing agency case studies (lol). Nonetheless everything deserves to get tested over time, but it makes the most sense to do it all in the right ORDER.

Does anyone have a rubric or testing roadmap they use as their "hierarchy" for performing split tests with the largest impact?
 
If you're talking about order, there is nothing more important than the offer.

A hot offer does 90% of the work for you.

You can have an awesome angle, text, CTAs, colors, hero shot, etc... but if the offer is a dud it won't convert.
 
@ORMAgencyDude

I forgot offer. Yes, 100% offer beats all.

But after that part, what processes and tests do you churn through to maximize your traffic potential?
 
@Zach, you can see 75 Ideas on the homepage here: https://goodui.org

A lot of these are about removing friction throughout the whole funnel so you get less drop off.

It's a paid thing now, but they let you dig into successful patterns by type of page, offer "leaks" from big tests on big sites, and even offer templates you can copy.

I know this doesn't answer your question of the order to perform the testing, but it might give you more stuff to test.
 
Do you have a stable source of traffic (a good ads campaign) and stable conversion rates? If not, fine-detail 'testing' is practically impossible.
I personally won't do fine-grained landing page split testing until I have at least 20 conversion events a day, ideally a whole lot more, and there isn't something more valuable I could better spend my time on (growth).
If that isn't possible with the end-game conversion (big-ticket, low-volume stuff), I'll create a higher-level conversion event (clicks a button, scrolls to the end of the page, etc.) to work off, or create a higher-level campaign (lead gen instead of direct sales) and then try to talk / live chat to as many prospects as possible.
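To put rough numbers on why volume matters so much, here's a quick back-of-envelope sample size calc in Python (standard two-proportion formula; the 3% baseline and 10% relative lift are just example figures, not anyone's real data):

```python
# Rough sample size per variant needed to detect a given lift.
# Standard two-proportion formula; all numbers here are illustrative.
from statistics import NormalDist

def visitors_per_variant(base_rate, relative_lift, alpha=0.05, power=0.8):
    p1 = base_rate
    p2 = base_rate * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_b = NormalDist().inv_cdf(power)          # desired power
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return num / (p2 - p1) ** 2

# 3% baseline conversion, hoping to detect a 10% relative lift (3.0% -> 3.3%)
print(f"{visitors_per_variant(0.03, 0.10):,.0f} visitors per variant")  # ~53,000
```

At 20 conversions a day on a 3% page that's roughly 667 visitors a day split two ways, so a clean test for a 10% lift would take months. That's why I don't bother below that volume and go after bigger, cruder wins instead.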

Once that is working though, I think your general order of importance is spot on.
Copy (headlines, body, CTAs) has the biggest impact. Angle (the most important) and images are usually dictated by the ad campaigns, so they're not part of LP split testing, and I have never seen colors have much of an impact.

Most of the time, I never really end up doing a whole lot of fine-detail testing, because once you have hit a decent profit level with the broad optimisations, it makes a lot more sense to focus on growth.

For example, my current focus is a product with a 2k sale price that requires physical face-to-face closing.
The first week we brainstormed angles, created a website and about 8 different landing pages focused on the different angles, created ads for all LPs (a few different versions for each, some more than others - nothing scientific, just gut feeling, what-ifs, etc.), and spent 2k on ads, focused purely on volume of, and cost per, landing page view. We somewhat ruthlessly ditched any angles that were not getting decent CTR, and ended up with 2 promising ones by the end of the week, with 6 warm leads (£333/lead). We spoke to the leads on the phone, noted all the questions they had (to improve LPs & ads), and set appointments.
The 2nd week we sent out the sales rep, and took feedback from the rep on how the angle affected likelihood to close. We started up ads for the 2 promising campaigns, added live chat to the LPs, and spent 1k on ads, resulting in 21 warm leads (£47/lead).
By the end of the week, it became obvious that the most promising campaign from a CPL perspective had very low buyer intent and wouldn't close (4/6 of the original leads, all non-closers), but the other had promise (one close - 1.8k revenue, one maybe). Unfortunately, only 7 of the 21 new leads came from that campaign, but thankfully we had learned that very quickly.
Week 3 we set appointments for the 7 most promising leads (and emailed the other 14 an info pack - we only have 1 rep, so have to prioritize), turned off the 'better performing' (but not closing) campaign, and spent 1k on the other, with tweaks to targeting and creatives. By the end of the week, we had generated 29 new warm leads (£34/lead), closed 2 of the previous week's 7 leads, and closed 2 out of 4 appointments for the new leads.
Week 4 we didn't touch the website or ads, just kept running them at 1k a week and kept an eye on closing rates. CPL stayed about the same (28 leads, £36/lead), and close rate at 1/3 of appointments.
Week 5 (this week) we took stock, decided the CPL and close rates looked pretty stable, halved the ads budget (because we can't keep up with appointments), and did some maths.
Revenue per sale is 2k
CPL = 35
Close rate = 1/3, so advertising cost per sale = 3 × 35 = 105
Sales tax (VAT) is 20%, so 400
Sales rep commission is 25% of post-tax, so 400
Product cost is 700
giving a profit of (2000 - 105 - 400 - 400 - 700) = 395 per sale
And we can generate around 9 sales a week, giving a weekly profit of around £3,500
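For anyone who wants to sanity-check the maths, here it is as a quick Python snippet (same figures as above, nothing new, and they're rounded/illustrative anyway):

```python
# Unit economics from the figures above (GBP). Illustrative, rounded numbers.
sale_price = 2000
cpl = 35                                   # cost per lead
close_rate = 1 / 3                         # appointments closed
ad_cost_per_sale = cpl / close_rate        # 105
vat = sale_price * 0.20                    # 400
commission = (sale_price - vat) * 0.25     # 25% of post-tax = 400
product_cost = 700

profit_per_sale = sale_price - ad_cost_per_sale - vat - commission - product_cost
print(profit_per_sale)        # 395
print(profit_per_sale * 9)    # ~3,555 a week at 9 sales
```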

All looking very promising. We are at the limit of what we can effectively close with 1 rep, so we can either look at streamlining the ads and LPs, streamlining the closing process, or growing the sales force (we can't do all of them at the same time, because we don't have the bandwidth).

Looking at the first option, perhaps we can make our ads 30% more effective, cutting our cost per sale by 31.5 and adding 283.5 a week to our profit (providing we get it right first time and don't end up increasing our CPL instead, which is likely).
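Same maths as above for that scenario (purely illustrative):

```python
# What a 30% improvement in ad efficiency would be worth per week (illustrative).
ad_cost_per_sale = 105
saving_per_sale = ad_cost_per_sale * 0.30   # 31.5
print(saving_per_sale * 9)                  # 283.5 extra profit a week at 9 sales
```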

Option 2 is out for now at least - a 1/3 close rate is already excellent.

Option 3 however has the potential to almost double our profits for each new rep we bring on, so that's what we are focusing on this month. We expect we could run 5 reps before we hit serious scaling problems (ads and admin), and might need a month per rep to get them up to speed (and weed out the bad ones), so I won't be testing button colors before the summer is out.

This is the main reason we will focus on growth over further optimisation, but there are other factors. This is not in any way a new or innovative product, but our angle and marketing are. There is every chance one of the huge players in this field will spot us sooner rather than later, and if we have grown by that point, we could be an attractive buyout rather than just another bug to be crushed!

Also, if this works out (no guarantees), I will focus on getting other ad platforms running and profitable before I optimise the current ones further, because I would be particularly pissed off if we managed to build a 1mil a year company and have it taken away by an AI moderator having a hissy fit!
 
@Steve

Yes, I wouldn't have made the post if it wasn't for stable, solid campaigns. We bring in hundreds of conversions/day across a variety of traffic sources (all online, all pixel-based, so it's all hard numbers and fully attributed).

I really need some AI to dynamically test for me lol. Too many variables. And yes, when I feel I want "super quick" tests I will optimize for upstream points like LP CTR instead of sales.

One thing I have found truly remarkable is just HOW MUCH data you really need to make optimizations on finer tests (images/subheads/text).

Time and time again I create split tests, and after a couple of days and hundreds/thousands of conversions ALL the A/B test tools (including Google Optimize) will tell me variation B won with 95-99% confidence. Then I just let it run for a few more thousand conversions and the data swings another direction altogether without anything else changing. Or after 5,000 conversions to each of A/B it's a wash and the test didn't matter at all. It's a frustrating cycle; I feel at times I'm not testing the right fundamentals. Wasted tests. Sometimes I'll find big, solidified wins, but most of the time not.

Testing small-difference conversion optimizations is truly a big-data matter, and 95% of the affiliates out there call a test a win far, far too soon. I've taken IDENTICAL landers and put them into an A/B test for fun, and hundreds of conversions later the tools were convinced one was better than the other.
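To show how easily you get a false "winner" when you keep peeking: here's a quick simulation of an A/A test - two identical pages with the same true conversion rate, checking a naive z-test after every batch of visitors and stopping at the first "significant" result. The 3% rate and batch sizes are made up, and this isn't how any particular tool works under the hood, but the peeking effect is the same.

```python
# Simulate an A/A test: identical variants, repeatedly checked for significance.
# Stopping at the first "significant" result inflates false wins well above 5%.
# All numbers are made up and purely illustrative.
import random
from statistics import NormalDist

def false_win(p=0.03, batch=200, max_visitors=10000, alpha=0.05):
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    conv_a = conv_b = n = 0
    while n < max_visitors:
        conv_a += sum(random.random() < p for _ in range(batch))
        conv_b += sum(random.random() < p for _ in range(batch))
        n += batch
        pooled = (conv_a + conv_b) / (2 * n)
        se = (2 * pooled * (1 - pooled) / n) ** 0.5
        if se > 0 and abs(conv_a - conv_b) / n / se > z_crit:
            return True   # a tool peeked at here would declare a "winner"
    return False

trials = 300
wins = sum(false_win() for _ in range(trials))
print(f"{wins / trials:.0%} of identical A/A tests produced a 'winner'")
# well above the 5% you'd expect if you only looked once at the end
```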

I think psychologically it's so eassssssyyyyy just to call it, bank the win, and go "wow, I just made 20% more on every click from here on, this rocks!" but the data typically isn't mature enough. Most times you just had a nice swing in your data, and then you redo a lot of your funnel based on your test findings, but the findings weren't as solid as the tools led you to believe.

Not that having a degree in economics makes me an authority here, but I remember hearing my teachers mention all the time that "oh yea a WHOLE STUDY could just lean one way on data randomly. Happens all the time. Can't get repeated. Even with all the statistical controls and standard deviations, sometimes the data just edges hard one way and we can't do much about it. What's really the trend is when you have multiple, dozens of studies all pointing the same way, not just 1." Seems to be consistent with my findings in A/Bs online.
 