Friday, December 18, 2015

Figuring Out What to Test: 9 Experts Share their Methodologies


Oftentimes the hardest part of testing is deciding what needs optimizing and where to get started. As a matter of fact, 63% of marketers optimize websites based on intuition (MarketingSherpa). There are many ways and techniques for locating the leaks in a funnel, finding the pitfalls, and then testing them. Fortunately, we have a few experts who are happy to share their insights and A/B testing tips for deciding what to test and where to start.


To get us started, Jeffrey Eisenberg dives into a 3-step process that will help you locate your customers’ pain points and prioritize tests according to impact and resources.

 
JEFFREY EISENBERG

The CEO of BuyerLegends, he co-authored “Waiting For Your Cat to Bark?” and “Call To Action,” both Wall Street Journal and New York Times bestsellers. He has trained and advised companies like HP, Google, and NBC Universal on implementing digital marketing strategies that emphasize optimizing revenue conversion rates for engagements, leads, subscriptions, and sales.
“We have a 3-step process we use as part of our Buyer Legends process.
  1. Pre-mortem
  2. Eisenberg’s Hierarchy of Optimization
  3. Scoring Priorities
Of course, you cannot start the 3-step process without first creating actionable personas based on qualitative and quantitative data. Buyer Legends employ storytelling to optimize customer experiences.
Why do we focus on customer experience? We wrote in 2001 that conversion rates are a measure of your ability to persuade visitors to take the action you want them to take. They’re a reflection of your effectiveness at satisfying customers. For you to achieve your goals, visitors must first achieve theirs.
“For you to achieve your goals, visitors must first achieve theirs.”
Pre-mortem
The reality is that most companies lose more sales every day than they make. If you are converting less than 15%, you need to evaluate what is broken in your customer experience.
Get to the bottom of what is going wrong, and plan to get it right. That is why, hands down, the pre-mortem is the most impactful step of our Buyer Legends process. In fact, this exercise rarely fails to produce at least one a-ha moment for our clients. When you imagine the sale is already dead, it frees up all the mental energy you were spending trying to win the sale and points it at all the potential pitfalls and problems in your experience.
Eisenberg’s Hierarchy of Optimization
After you perform your pre-mortem you will likely end up with a long list of potential proofs of Murphy’s law, but not everything on your list is equal. Some things are worth your effort; some are not. In my work with clients we often use Eisenberg’s Hierarchy of Optimization to separate the more pressing issues from the minor ones.
First, sort the list of problems into the following categories:
  • Functional. Does this product/service do what the prospect needs? How easy is it for a prospect to determine this?
  • Accessible. Can she access it? What are the barriers to her ability to realize the need? Is it affordable, reasonable, and findable?
  • Usable. Is it user-friendly? Are there obstacles?
  • Intuitive. Does the sales process/Web site feel intuitive and natural based on her buying preferences? Is she forced to endure unnatural buying modalities to realize her need?
  • Persuasive. Does she want it? Does she truly understand if it fills her need or solves her problem? Is her expectation reasonable? Will she be delighted?
Once they are sorted, simply work your way up the pyramid. Again, remember that not every problem needs a solution: focus on the problems that are likely to impact the most customers and that you can actually fix. Be practical; don’t get caught up in the problems you can’t fix.
Scoring Priorities
Let’s consider another simple system to enable your organization to prioritize more effectively when planning tests. The system is based on scoring all your planned efforts on three factors from 1 to 5, with 5 being the best and 1 being the worst:
  1. Time – How long will it take to execute a project (a change, a test, or a full-scale roll-out) to completion? This includes the staff hours/days to execute and the number of calendar days until the project’s impact would be recognized. A score of 5 would be given to a project that takes the minimal amount of time to execute and to realize the impact.
  2. Impact – The amount of revenue potential (or reduced costs) from the execution of your project. Will the project impact all of your customers or only certain segments? Will it increase conversion rates by 1 percent or by 20 percent? A score of 5 is for projects that have the greatest lift or cost reduction potential.
  3. Resources – The associated costs (people, tools, space, etc.) needed to execute a project. Keep in mind: No matter how good a project is, it will not succeed if you do not have resources to execute an initiative. A score of 5 is given when resources needed are few and are available for the project.
Next, multiply the three factors for each project (don’t add them, because these factors are orthogonal). The best possible score is 125 (5x5x5). Tackle and complete the highest-ranking projects first. Meet weekly with a cross-functional group to evaluate the status of each project. Be prepared to re-prioritize regularly, once a month or at least once a quarter.”
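To make the arithmetic concrete, here is a minimal sketch of that Time x Impact x Resources scoring in Python. The project names and scores are hypothetical; only the 1-to-5 scales, the multiplication, and the 125 maximum come from Jeffrey’s description above.

```python
# A minimal sketch of the Time x Impact x Resources scoring described
# above. Each factor is scored 1-5 (5 is best); the factors are
# multiplied, not added, so the best possible score is 125 (5x5x5).

def priority_score(time: int, impact: int, resources: int) -> int:
    for factor in (time, impact, resources):
        if not 1 <= factor <= 5:
            raise ValueError("each factor must be scored from 1 to 5")
    return time * impact * resources

# Hypothetical projects -- the names and scores are illustrative only.
projects = {
    "Rewrite checkout copy": (5, 3, 5),   # fast, modest lift, cheap
    "Redesign pricing page": (2, 5, 2),   # slow, big lift, resource-heavy
    "Fix mobile form bug":   (4, 4, 4),
}

# Rank projects from highest to lowest score and print the queue.
for name, scores in sorted(projects.items(),
                           key=lambda kv: priority_score(*kv[1]),
                           reverse=True):
    print(f"{priority_score(*scores):>3}  {name}")
```

Because the factors multiply rather than add, a single 1 on any factor caps an otherwise perfect project at 25, which is exactly what the orthogonality argument implies: a great idea you cannot resource should not rank highly.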
Tiffany daSilva dives deeper into buyer personalities and how she uses them to find the important leaks in the funnel and address them:

TIFFANY DASILVA


is a full-stack marketer who has spent the last 10 years working on over 350 websites. Her past includes Geosign, Achievers, and Shopify, and she is currently the Director of Strategy at Powered by Search. Tiffany was named one of the Techwomen Canada of 2012 and can be seen speaking about marketing at events such as Unbounce’s CTA Conference, PPC Hero London, and the 500 Startups growth conference.
“I like to first look at data to understand what’s happening behind the scenes. The first metrics I look at are:
  1. Time on site
  2. Frequency
  3. Mobile devices
  4. The usual paths users follow.
From there I go to user testing where I try to find people with different buyer personalities:
  • Competitive: Logical and Fast Thinkers who tend to move through the site quickly because they know what they want.
  • Methodical: Logical and Slow Thinkers who tend to read all the fine print (even privacy policies) to ensure that they know all the details before making a choice.
  • Humanistic: Emotional & Slow Thinkers who take the time to read reviews, testimonials and ask their friends. They want to know what others think about the product before they make their decision.
  • Spontaneous: Emotional & Fast Thinkers who think with their gut but also love to play with things, so finding things that might “distract” them in the right direction, like check marks, lists, site tours, and quizzes, will help them convert more easily.
These personalities help me see how they move around the site, what areas they focus on more and what they think is missing from the site.
When it’s time to start my test, I make sure that I have a hypothesis. Knowing what you’re going to test is very different from hypothesizing what your outcome will be. By creating your hypothesis you’re essentially saying,
“I believe version X will beat version Y by __%.”
A few things happen when you do this:
  1. You learn what works and what doesn’t because you’re invested in the outcome.
  2. You’ve created your requirements. If you decide that version X will beat version Y by 15% and it only beats it by 2%, then it’s not enough of a difference, and you can go back and redo the test. Deciding what your goals are beforehand will keep you honest and make your tests bigger and better over time.
  3. By setting out what you want your test to look like, you can ensure that your testing tool or analytics is set up in such a way that you will be able to measure the test. For example: if you say that version X will beat version Y’s conversion rate by 15%, you can check your testing software, AdWords account, or Analytics to ensure that this information is readily available to you.”
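To illustrate point 2 above, here is a minimal sketch, with made-up conversion counts, of checking an observed lift against the lift you hypothesized before the test. A real analysis would also check statistical significance; this only captures the “is the difference big enough?” comparison Tiffany describes.

```python
# Sketch: compare the observed lift of version X over version Y against
# the lift hypothesized before the test ("X will beat Y by 15%").
# All conversion counts below are made up for illustration.

def observed_lift(rate_x: float, rate_y: float) -> float:
    """Relative lift of X over Y; 0.15 means +15%."""
    return (rate_x - rate_y) / rate_y

hypothesized_lift = 0.15  # "I believe version X will beat version Y by 15%."

rate_y = 200 / 10_000  # control converts at 2.00%
rate_x = 204 / 10_000  # variant converts at 2.04%

lift = observed_lift(rate_x, rate_y)
print(f"observed lift: {lift:+.1%}")  # observed lift: +2.0%

if lift >= hypothesized_lift:
    print("Hypothesis met: roll out version X.")
else:
    # The 15%-vs-2% case described above: not enough of a difference,
    # so go back, rework the variation, and test again.
    print("Hypothesis not met: revisit the test.")
```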

JOANNA WIEBE



is a conversion copywriter and the co-creator of Copy Hackers and Airstory. She is also one of my favorite writers, and I strongly recommend you take her copywriting course.
“I work with a lot of tech businesses, and they tend to follow Dave McClure’s pirate metrics (AARRR) for growth:
  1. Acquisition (sign-up)
  2. Activation (use product)
  3. Retention (keep using product)
  4. Referral (tell others to use)
  5. Revenue (pay for it)
We start by identifying the metric that is most off-target or most in need of optimization. Perhaps we see in our reporting that Acquisition, Activation, and Retention are on track against goals right now, but Referral and Revenue are flat or trending down. We work with the business to identify which of the two – Referral or Revenue – makes the most sense to focus on optimizing, based largely on their business strategy and goals.
Let’s say they select Referral. Okay, now we need to find out what might be broken, not yet built, poorly executed, etc., that’s potentially suppressing referrals. So we:
  • Identify and document the tactics in play to generate referrals
  • Analyze those tactics to find lower-hanging fruit; this is where analytics for web, email and the app come in especially handy
  • Brainstorm overlooked referral opportunities (e.g., new and proven ways other businesses are getting referrals)
  • Interview existing users to learn what’s keeping them from referring
Out of this exercise, we get not just split-testing ideas and food for hypotheses but also other growth hacks to boost referrals – like contests to run, businesses to partner with, content to create and repurpose, and apps and widgets to build.
To prioritize the lower-hanging fruit, we create a roadmap divided by channel: test these things in this order for user emails, test these things in this order for in-app prompts, test these things in this order in remarketing ads. Importantly, the roadmap has checkpoints at the end of each experiment so we can learn and apply what we learn to hypotheses for the next test on the map.”
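As a back-of-the-envelope illustration of Joanna’s first step, picking the pirate metric that is most off-target against its goal, a sketch might look like the following; all the goal and actual values are invented for illustration.

```python
# Sketch: find the pirate metric (AARRR) that is furthest below its goal.
# The goal and actual values are invented monthly rates.

goals   = {"acquisition": 0.040, "activation": 0.60, "retention": 0.45,
           "referral": 0.10, "revenue": 0.08}
actuals = {"acquisition": 0.041, "activation": 0.62, "retention": 0.44,
           "referral": 0.06, "revenue": 0.07}

def shortfall(metric: str) -> float:
    """Relative gap below goal; positive means under-performing."""
    return (goals[metric] - actuals[metric]) / goals[metric]

worst = max(goals, key=shortfall)
print(f"most off-target: {worst} ({shortfall(worst):.0%} below goal)")
# -> most off-target: referral (40% below goal)
```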

PEEP LAJA



is the founder of ConversionXL and one of the leading conversion optimization experts in the world. He helps companies grow via his conversion optimization agency, courses, and coaching program. Take his free CRO course here.
“Figuring out what to A/B test is the most difficult part of CRO – it’s what separates the pros from the wannabes. If the site is terrible, it’s not very hard to come up with fixes, but if it’s decent or already well optimized, you have to have a data-driven process. I use the ResearchXL framework to gather 6 types of data input to identify the problems the website has:
  1. Technical analysis
  2. Heuristic analysis
  3. Digital analytics
  4. Qualitative surveys
  5. User testing
  6. Mouse tracking
Where is it leaking money the most? What is the problem? What’s the discrepancy between what users want and what the website delivers? Once you know what the problems are, and are able to prioritize the issues (from major issues that affect a large number of users down to minor usability issues), coming up with test hypotheses to address them becomes so much easier.
Next step: I figure out how many variations I can test against the control (it depends on the transaction volume) and come up with as many different treatment alternatives as I can. Depending on the scope of changes, I might do a wireframe of the treatment, or just add annotations onto a screenshot of the control, and then send it off to my front-end developer. If the changes are more radical, we might need a designer to be involved before coding up the test. One thing is for sure: speed of implementation (getting the test ready to go) is super important. We cannot waste a day.”
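Peep’s note that the number of variations depends on transaction volume can be made concrete with a standard sample-size calculation. The sketch below uses the common two-proportion normal approximation, which is a generic statistics result rather than part of the ResearchXL framework; the baseline rate, detectable lift, and traffic figures are hypothetical.

```python
# Sketch: how many variations can the traffic support against the control?
# Uses the standard normal-approximation sample size for comparing two
# proportions (two-sided alpha = 0.05, power = 0.8). Baseline rate,
# detectable lift, and traffic are hypothetical.
from scipy.stats import norm

def sample_size_per_arm(p_control: float, relative_lift: float,
                        alpha: float = 0.05, power: float = 0.8) -> int:
    p_variant = p_control * (1 + relative_lift)
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    variance = p_control * (1 - p_control) + p_variant * (1 - p_variant)
    n = (z_alpha + z_beta) ** 2 * variance / (p_variant - p_control) ** 2
    return int(n) + 1

per_arm = sample_size_per_arm(p_control=0.02, relative_lift=0.15)
traffic = 150_000  # visitors available for the test
arms = traffic // per_arm  # arms = control + variations
print(f"{per_arm:,} visitors per arm -> room for {arms - 1} variations "
      "against the control")
```

With a 2% baseline and a 15% relative lift to detect, each arm needs roughly 37,000 visitors, so 150,000 visitors leave room for only three variations plus the control, which is why low-traffic sites cannot test many variations at once.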
As Peep points out, the most difficult part of CRO is figuring out what to test, which is why a huge part of our work at Conversioner is spent on understanding the data and the customer’s emotional needs, and on finding the point at which we can have the most impact.

TALIA WOLF



As Founder and CEO of Conversioner, I help businesses build and execute their conversion optimization strategies, using emotional targeting, consumer psychology, and behavioral data to generate more revenue, leads, engagement, and sales.
For us, figuring out what to test is a two-step process. First, we run an in-depth analysis of the funnel to understand key customer behavior, using heatmaps, Google Analytics, surveys, and any other analytical tool that can help discover the crucial pain points within the funnel, the points that, if optimized, can have a large impact on bottom-line results. Once we’ve found those pain points, it’s time to identify what the problem is and come up with hypotheses for how to fix it.
This is where the second step comes in: emotional targeting research. We dive into customer profiling, building personas, and mapping out the customer journey and the emotional needs of potential customers. Understanding the state of mind a customer is in throughout each step of the funnel, running competitor research, SWOT analysis, and market and trend research all help us identify the customer’s emotional triggers and their story, and come up with a few hypotheses to test.
Once we have built these hypotheses, the next step is presenting them to the team, getting everyone on board to ensure we’re all on the same page, creating the test (usually a more strategic one), and launching as soon as possible. For us, the first test is just part of the initial research, giving us an indication of whether we’re on the right track and testing the right things.
Using Peep Laja’s process, Tommy’s ultimate goal is to build a plan that starts with low-investment, high-reward tests and to prioritize from there.

TOMMY WALKER



is the Editor-in-Chief of the Shopify Plus blog. It is his goal to provide high-volume ecommerce stores with deeply researched, honest advice for growing their customer base, revenues and profits.
“For example, if a page with high traffic and relatively high conversions has a slower load time, why do a radical redesign of the page when you could look for ways to increase the page’s load speed or time to interaction? Once you’ve taken the time to put together a list of these insights, the question of “what to test first” is really a question of “what can I get away with without having to ask permission?”
Within most organizations, people like the _idea_ of being data-driven about running tests (as long as they’re not the ones collecting the data), but when it actually comes time for test execution, everyone suddenly has an opinion, whether they’ve paid attention to the data or not. Most tests get shot down because of internal politics, not because of their impact potential, so be aware.
So, “what to test first” really comes down to gradually building up to the pages that have more stakeholders and require more resources. Essentially, you’re trying to build a case that says, “My research identified X, Y, and Z problems on these pages; we had these hypotheses, which turned out to be correct and made this much more revenue; now I’d like to test these hypotheses on these higher-priority pages, based on the X, Y, and Z things I’ve found here.”
CRO is as much (if not more) a game of politics as it is a game of statistics and insights. More tests die because someone internal shut them down than because they lacked the ability to drive revenue or change. So do what you can early on to build trust in the process and in your ability to drive change.”
Sujan Patel has his own methods and tools for formulating testing ideas and executing them.

SUJAN PATEL


has 12 years of experience in digital marketing and co-founded Content Marketer and Narrow.io, tools that help marketers scale their content marketing and social media efforts. He is also an avid blogger and writes for Forbes, Inc., WSJ, and Entrepreneur.
“1. I always start by formulating a hypothesis around the weaknesses of the site. I validate the hypothesis by surveying visitors using tools like Qualaroo and talking to potential and existing customers. This process usually gives me 5-10 different tests to run, typically around messaging and copy.
2. I prioritize the tests based on impact or urgency and start A/B testing. I typically start with a top-of-the-funnel A/B test and move down the funnel, but it’s different for every company and depends on where their biggest weaknesses are.”
Angie also uses a framework to determine what to test and to prioritize:

ANGIE SCHOTTMULLER



helps organizations exceed goals and better understand customers using data-driven technology and persuasive marketing psychology.
“To decide what to test, I map out and inventory the issues and ideas we have for optimization, then score them using the PIER framework. The PIE comes from WiderFunnel’s model:
  1. Potential – How much improvement can be made on the pages?
  2. Importance – How valuable is the traffic to the pages?
  3. Ease – How complicated will the test be to implement on the page or template?
  4. Reusability – I added the “R” to score the reusability of the insight learned.
Once I’ve mapped out the highest-scoring testing plan, the next step for me is getting the team aligned on what to test. You can work extremely hard on your end, but you have to have your team members on the same page or you’ll waste a lot of time and energy.
  1. Draft a test plan
  2. Perform discovery for controlling the test environment — PPC ad management constraints, outside variables, potential pollutants, etc.
  3. Regroup as a larger team to commit to the plan.
  4. Execute”
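A minimal sketch of PIER scoring might look like the following. The four criteria come from Angie’s description above; the 1-to-10 scale, the averaging of the four scores, and the test ideas themselves are assumptions for illustration.

```python
# Sketch of PIER scoring. The four criteria come from the framework above;
# the 1-10 scale, the averaging, and the test ideas are illustrative
# assumptions.
from statistics import mean

ideas = {
    # idea: (potential, importance, ease, reusability)
    "Shorten the signup form":   (8, 9, 7, 6),
    "New homepage hero message": (7, 8, 5, 9),
    "Add checkout trust badges": (5, 9, 9, 4),
}

# Rank ideas by their average PIER score, highest first.
for idea, scores in sorted(ideas.items(), key=lambda kv: mean(kv[1]),
                           reverse=True):
    print(f"{mean(scores):4.1f}  {idea}")
```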
Before looking at the numbers and finding the leak, Nichole first puts her clients to work answering some tough questions, and she dedicates most of her time to defining the language a company should be using:

NICHOLE ELIZABETH DEMERÉ



is a SaaS Consultant & Customer Success Evangelist. Chief Strategy Officer at @Inturact. Moderator at @ProductHunt & @GrowthHackers. Co-Founder at @GetTheCraft & @SaaSCommunity. Previously: Growth at @Inboundorg. INFJ.
“Before starting out, I have the website owner fill out the Ideal Customer Profile Framework and the Value Proposition Canvas; then I look at the data. It’s a lot of work filling out both forms; however, language is the most important aspect of converting and acquiring the right customers.
Aside from analyzing the usual ‘is there one main call to action?’ types of questions, I look at what the website is trying to accomplish in terms of setting the tone and expectations, and whether those expectations match what the customer can actually accomplish both inside and outside of the product: i.e., does the language help the customer be successful with their desired outcomes?
As Lincoln Murphy says, “There is often a gap between the functional completion of your product and the customer’s Desired Outcome.” Updating and testing the language on the site is of the utmost importance.”
These frameworks and A/B testing tips the CRO experts shared are simply our best practices and tested methodologies. Now it’s up to you to take these processes and try them out for yourself to figure out where to start testing on your site, and what to test in order to increase your conversions.
Which expert’s methodology best fits your style? Do you have a different way of finding what to test and where to start?
