Mouseflow blew my mind.

animalstyle

So yesterday I installed Mouseflow on my client's site at the request of the dev team we were working with. I'd never heard of the tool, so I didn't realize what the hell I was in for. This tool is magic.
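For anyone who hasn't seen it, the install is just a tiny script in the <head>. This is roughly what the embed looks like - the project ID below is a placeholder, and you should grab the real snippet from your own Mouseflow dashboard since the exact script can differ:

```ts
// Goes inside a <script> tag in the <head> of every page you want recorded.
// The project ID is a placeholder - copy the real snippet from your
// Mouseflow dashboard, the exact script may differ from this sketch.
(window as any)._mfq = (window as any)._mfq || [];
(function () {
  const mf = document.createElement("script");
  mf.type = "text/javascript";
  mf.defer = true;
  mf.src = "//cdn.mouseflow.com/projects/YOUR-PROJECT-ID.js";
  document.getElementsByTagName("head")[0].appendChild(mf);
})();
```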

The tool has all kinds of features, but the main one is recording sessions for site visitors. You watch where they click or tap on their screen, where they scroll, where they leave, etc. It's as close as you can get to sitting behind them watching them browse your site - except their desires are genuine. Watching this on my client's site opened my eyes like never before. I got to watch people succeed or fail, using things in ways I didn't expect or fighting issues I didn't know were problems.

Blown away, I fired up the free version on my authority site. You get 100 free visitors, so I let a few trickle in. Watching 30 visitor sessions was enough for me to identify a major pattern showing where visitors were browsing and where they were leaving. Knowing what I needed to show them and where became so obvious. Today was a whirlwind day of adjusting my content, adding calls to action in better places, etc.

The insight from this little tool blew my mind into the clouds. I finally understand the relationship between design and content and what matters where. If you haven't experienced this, I can't recommend it enough. I'll be upping that free plan and optimizing my cashflow if you need me....
 
Looking at my analytics this morning I've bumped my conv rate for clicks up from 17.5% to 18.5% by adding an additional CTA. Less leakage.

The content is much more effective at answering the base questions consumers have. If they're clicking through more, with more information in their heads, I'd guess they're also converting better once they pass through my affiliate link.

All signs pointing to more dollars.
 
How are you perceiving this versus split testing positioning? I feel like maybe it's a shortcut to getting towards the optimal positions for elements, but definitely no replacement. Sounds killer for long-form landers.
 
@Ryuzaki really glad you asked this. I agree that Mouseflow and tools like it are not a replacement for split testing. They're a complement.

I've been A/B split testing extensively for the last two months or so. Being immersed in that and then getting exposed to the data Mouseflow provides, I feel like I've found the missing half of the puzzle.

Mouseflow helped me identify and fix structural and functionality issues, whereas A/B split testing is perfect for testing design and content choices. Both should be used to optimize your funnel.

Examples Of Mouseflow Wins:
From Mouseflow's heatmaps of collected visits: a link in a secondary position is getting primary attention. Now I know my page structure needs attention. I can also learn a lot about what my visitors want by looking at the content behind that link - maybe some of it deserves to sit on the page itself.

I watched a user struggle helplessly trying to re-order from saved orders. The functionality was such that you have to select your previously ordered items before you can re-order, and the default is that nothing is checked. Almost two minutes of agony watching this visitor click 'Add Selected to Cart' without selecting anything. Finally they just bailed. Default the line items to checked and this would be a non-issue, and that abandoned session would have turned into an order.
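To be clear about what I mean by defaulting to checked, here's a hypothetical sketch - the selector and page structure are made up, use whatever your order template actually renders:

```ts
// Hypothetical sketch: pre-check every line item on the re-order page so
// "Add Selected to Cart" does the obvious thing with zero extra clicks.
// The ".saved-order" selector is made up for illustration.
document.addEventListener("DOMContentLoaded", () => {
  document
    .querySelectorAll<HTMLInputElement>(".saved-order input[type='checkbox']")
    .forEach((box) => {
      box.checked = true;
    });
});
```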

Watching numerous visits to similar pages on different device types, I was able to identify the essential information people needed that I was lacking: users scrolled to the pricing section, hesitated, and then left. I added a call to action - "More questions, Buy Now?" with a button - and bam, clicks immediately. I learned exactly what the users wanted and gave them an action at that point. I also revamped the content around that section and the pricing info. I know the main draw is the pricing, so I put the other essentials that sell the call to action ABOVE the pricing - that way they have to scroll past it on the way to the pricing, since I know that's what they're really looking for.

Where Split Testing Steps In:
For the last Mouseflow example, I now take it a step further by A/B testing the actual pricing content. Does showing hard numbers upfront work better than soft-selling the pricing?

Which button color is better for the CTA?

Can I use other trust signals to boost this CTA? Test the inclusion of a photo testimonial or video right near the CTA.

etc...

A/B test and let the data decide.
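If you're rolling your own split rather than using a testing tool, the mechanics are nothing fancy: assign a variant once per visitor, remember it, and report conversions against it. A bare-bones sketch - the test name and the trackEvent() call are placeholders for whatever stack you're on:

```ts
// Bare-bones client-side split: pick a variant once per visitor, persist it,
// and tag conversions with it so results can be segmented later.
// "pricing_copy" and trackEvent() are placeholders, not a real API.
function getVariant(testName: string, variants: string[]): string {
  const key = `ab_${testName}`;
  let variant = localStorage.getItem(key);
  if (!variant) {
    variant = variants[Math.floor(Math.random() * variants.length)];
    localStorage.setItem(key, variant);
  }
  return variant;
}

const variant = getVariant("pricing_copy", ["hard_numbers", "soft_sell"]);
document.body.classList.add(`variant-${variant}`); // swap content/CSS off this class

// When the CTA converts, report the variant alongside the event, e.g.:
// trackEvent("cta_click", { variant });
```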

While on the topic of split testing:
One of the biggest things I've learned from testing is that all three outcomes of a split test are important:
  • Winner - Good job, you've found your new control. Implement your new change, enjoy the fruits of your labor and test again.
  • Loser - It's clear that what you did doesn't work. Test again or leave it alone. You can usually uncover a logical answer for this, but sometimes you will just be plum surprised.
  • Neutral - Easy to think "Eh, whatever" and walk away. What's important here is that you've found a spot where your preference can shine. Take these opportunities to reinforce your brand, or choose a version because it looks cleaner, or just because it's your site and you like it better. These are the perfect places where you have wiggle room. Knowing these neutral results can be just as important as your winners or losers.
At the end of the day, I think the key is to have both tools in your arsenal so that when you're looking at your site you can apply the right tool for the job. Re-organizing a page? Mouseflow it to see how users react. Not sure if the CTA button should be on the left or right? Split test and let the data decide (or tell you it's neutral and then you decide!)

I can tell you I am quickly becoming addicted to site optimization, but I am MUCH more of a student than a grand-master. Hope this was helpful.
 
How are you perceiving this versus split testing positioning? I feel like maybe it's a shortcut to getting towards the optimal positions for elements, but definitely no replacement. Sounds killer for long-form landers.

I'm no CRO expert, but I've taken to watching a few conference recordings over the last couple of years just to get a feel for what the pros are up to, for projects that are too small for clients (or myself...) to justify bringing in a highly paid CRO agency.

There are two ways this kind of data is cool:

1. The most important thing (according to all these experts anyway) is coming up with good hypotheses to test. You don't want to just randomly move things about, change button colours and test everything initially, as you'll miss out on some big, easy wins that you could have hit straight away with some good theories about what's going wrong. Seeing where people are clicking the wrong things, looking in the wrong places and so on can be a good shortcut to those quick wins.

2. A/B tests have a horrific tendency to not actually be 'won' at the point they have mathematically been won. In the last set of conference vids I watched, the guy showed dozens of examples that hit a 'win' at the point where most tools tell you to stop the test, but after running 3x as long the opposite result had won or there was no winner. So basically, for many of us with smaller money sites, we're just not going to have enough traffic to ever know for sure from the A/B test numbers alone, BUT by looking at user behaviour we can see whether they're no longer clicking about in the wrong places AND the test appears to be winning, which gives us more confidence of a 'likely win' even if we can't quite get enough traffic data. Sometimes 'enough' is a ridiculous amount, judging by what all these presenters were showing in the way of failed tests that looked like they 'won'.
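For what it's worth, the 'mathematically won' call in most tools boils down to something like a two-proportion z-test. A rough sketch, not any specific tool's implementation, just to show how easy it is for a small sample to clear the bar and then flip later:

```ts
// Sketch of the two-proportion z-test behind a typical "95% significant" call.
// Clearing 1.96 early doesn't mean the result will hold - that's exactly the
// peeking problem described above.
function zScore(conv1: number, n1: number, conv2: number, n2: number): number {
  const p1 = conv1 / n1;
  const p2 = conv2 / n2;
  const pooled = (conv1 + conv2) / (n1 + n2);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2));
  return (p1 - p2) / se;
}

// Example: 95 vs 70 conversions out of 500 visitors per side.
console.log(Math.abs(zScore(95, 500, 70, 500)) > 1.96); // true - a "win" on paper
```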
 
@Steve Brownlie this is solid advice.

Adding on to the point about forming a hypothesis: it's easy to get attached to a version ahead of time and assume it will win, letting that bias skew your assessment. Clearly define your performance indicators ahead of time (e.g. revenue and conversions), let the data tell the story, and rely on volume with statistical significance. I also work with a partner or two when running these tests, and I find that when we have opposing preferences it's usually a good sign.

When running a test, I make sure to observe the trend graphs over time AND make sure my volume is there. Sometimes tests start off with a wide spread and the data shows a winner, but if you look at the trend graph you can see things are coming back together. I keep those running. Tests with multiple thousands of visitors per side are generally needed for reliable data.
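To put rough numbers on that, the textbook sample size formula for comparing two conversion rates makes the point. A sketch using my own 17.5% to 18.5% bump at 95% confidence and 80% power - it works out to roughly 23,000 visitors per side for a lift that small:

```ts
// Rough sample size per variant to detect a lift between two conversion rates
// (normal approximation, 95% confidence, 80% power). A sketch of the textbook
// formula, not any particular testing tool's math.
function sampleSizePerVariant(p1: number, p2: number): number {
  const zAlpha = 1.96; // two-sided 95% confidence
  const zBeta = 0.84;  // 80% power
  const pBar = (p1 + p2) / 2;
  const a = zAlpha * Math.sqrt(2 * pBar * (1 - pBar));
  const b = zBeta * Math.sqrt(p1 * (1 - p1) + p2 * (1 - p2));
  return Math.ceil((a + b) ** 2 / (p1 - p2) ** 2);
}

// My 17.5% -> 18.5% bump: roughly 23,000 visitors per side to call it properly.
console.log(sampleSizePerVariant(0.175, 0.185));
```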

Your point on smaller volume and using a tool like Mouseflow is great. I've got it tagged and can turn it on/off and get info in a matter of minutes. It's a great way to check your work when you make changes or create a new page/template.
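If anyone wants that same on/off convenience without a tag manager, gating the embed behind a flag gets you most of the way. The flag here is made up for illustration - a tag manager rule does the same job:

```ts
// Only inject the recording snippet when a flag is on, so sessions can be
// captured for a quick spot-check and switched off again later.
// RECORD_SESSIONS and the project ID are placeholders.
const RECORD_SESSIONS = true; // flip in config, or drive it from your tag manager

if (RECORD_SESSIONS) {
  (window as any)._mfq = (window as any)._mfq || [];
  const mf = document.createElement("script");
  mf.defer = true;
  mf.src = "//cdn.mouseflow.com/projects/YOUR-PROJECT-ID.js";
  document.head.appendChild(mf);
}
```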
 