About Analytics, Conversions and On-Site Surveys

Recently, I have given a lot of thought to conversions. A static website may attract a lot of traffic, but unless there is a clear call-to-action (CTA), that traffic may be of no use at all.

According to a well-known study by Bain & Company, around 80% of companies believe they deliver a superior customer experience, yet only 8% of their customers agree.

When it comes to online shopping, this gap is evident.

You already have my contact information, and you know exactly what I bought – so why not use that information to ask for my opinion on-site: some feedback about the online store, my purchase experience, or at the very least the product you just sold me?

As we all know, around 90% of all online experiences begin with a search engine. Proper SEO and SEM are, naturally, highly important when it comes to increasing the conversion rate.

To be more precise, conversions only take place when targeted traffic meets a relevant offer. It all starts with knowing who your target audience is – and what they need or want.

It is highly recommended to begin by asking the right questions.

  • Who are the target customers? And what is their ongoing life situation?
  • What do they want? And what is the biggest pain point related to that?
  • What are the exact needs of the customers that aren’t being met right now?

Surveys can be used to significantly increase conversions by directing visitors to the right pages on the site. It starts simply by asking questions about the customers and their specific needs, or by asking for feedback on whatever is displayed on the page currently being browsed.

Qualitative research can offer more insight than anything else when it comes to driving conversions. Whereas quantitative figures tell you “what”, “where” and “how much”, qualitative information tells you “why”.

The primary goal of qualitative research on-site is to gather an in-depth understanding of a website user’s behaviour, and the main reasons for that behaviour.

It makes sense to first ask about the user’s intent – to dig into what exact problem they were solving by visiting the site in question. From there, the next relevant questions might be, for example, what mattered to them when choosing the product or service, what kind of comparisons they made prior to purchase, or how many and which other sites they looked at. It might also make sense to ask about friction – the fears, doubts and hesitations the users experienced before making the purchase.

Basic Google Analytics tools help you define your questions and decide where to place the surveys on the site.

With Google Analytics, it is easy to spot the following (a small sketch of this kind of analysis follows the list):

  • Best performing content (Which pieces of content work best? Try and get a clear view on this one!)
  • Best converting keywords (Which keywords rank? Aim to rank better for these and similar words.)
  • Best converting landing pages (Where is the incoming traffic landing – and does it convert?)
  • Best converting traffic sources (Where exactly does the traffic that converts come from?)
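
As a rough illustration of the kind of analysis this enables, here is a minimal Python sketch that computes conversion rates per traffic source and per landing page. The session rows and column layout below are made up for the example; a real Google Analytics export would need to be mapped into this shape.

    # Hypothetical session-level data: (traffic_source, landing_page, converted)
    sessions = [
        ("organic", "/pricing", True),
        ("organic", "/blog/analytics-tips", False),
        ("newsletter", "/pricing", True),
        ("paid-search", "/landing/spring-sale", False),
        ("paid-search", "/landing/spring-sale", True),
        ("organic", "/pricing", False),
    ]

    def conversion_rate_by(column, rows):
        """Group sessions by the chosen column index and return conversion rates."""
        totals, converted = {}, {}
        for row in rows:
            key = row[column]
            totals[key] = totals.get(key, 0) + 1
            if row[2]:
                converted[key] = converted.get(key, 0) + 1
        return {key: converted.get(key, 0) / count for key, count in totals.items()}

    print("By traffic source:", conversion_rate_by(0, sessions))
    print("By landing page:  ", conversion_rate_by(1, sessions))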

To avoid increasing bounce and churn rates, I suggest giving the conversion surveys, as well as their placement on your site, a lot of thought. Less is more.

Exit surveys, annoying pop-ups and prompts to subscribe to yet another newsletter have proliferated. With Google Analytics tools, you can easily target the relevant customers with your on-site surveys – whether or not they have converted yet.

Geckos, Data and Metrics

Imagine a world where everyone in your company or team has all the information needed to be successful. A world where everyone uses relevant KPIs to quantify and measure their team’s performance, so everyone knows exactly where their projects stand, and they can easily deliver progress reports based on goals.

How to make this happen? Which metrics matter most? And which tools should be utilized in the process?

Let’s start with the basics.

First of all, your team needs to identify the KPIs that are aligned with the company’s current strategic goals. Only then can the team start pulling in the relevant data and crunching the numbers that matter most. Creating a data-driven environment starts with setting up the KPIs and a dashboard for accessing them.

The oft-repeated qualities of the best metrics are, of course, “the three A’s”: good metrics are Actionable, easily Auditable, and Accessible.

In “Lean Analytics”, Alistair Croll and Ben Yoskovitz introduce a framework for deciding whether the metrics you’re tracking are good metrics or bad metrics. This distinction is crucial.

According to Croll and Yoskovitz, a good metric is:

  • Understandable — A good KPI should, of course, be understood by everybody who has access to it.
  • Comparative — KPIs should ideally be able to be compared over periods of time or against industry benchmarks.
  • A ratio or a rate – Absolute KPIs can be useful; rates and ratios, however, generally provide more context (see the sketch after this list).
  • Behavior-changing — Can someone in the team take action based on how the KPI changes? If not, then this metric may simply be noise or a vanity metric.
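
To illustrate why a ratio often beats an absolute figure, here is a tiny sketch with made-up monthly numbers: the raw signup count keeps growing, yet the signup rate (signups divided by visitors) is actually declining, which is exactly the kind of shift an absolute KPI hides.

    # Made-up monthly figures: the absolute count grows while the ratio declines.
    months = [
        {"month": "Jan", "visitors": 10_000, "signups": 300},
        {"month": "Feb", "visitors": 16_000, "signups": 400},
        {"month": "Mar", "visitors": 25_000, "signups": 500},
    ]

    for m in months:
        rate = m["signups"] / m["visitors"]  # the ratio metric
        print(f"{m['month']}: {m['signups']} signups, signup rate {rate:.1%}")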

KPIs can also be evaluated using the IPA rule, which stands for Important (is this KPI important – does it matter?), Potential for improvement (does this KPI have room to improve?), and Authority (do you have the authority or the means to improve this KPI?).

Several dashboard tools can be of assistance in communicating the KPIs and making the data accessible to everyone in the team. I highly recommend Geckoboard, which makes it easy to create beautiful metrics dashboards with just a few clicks, and has pre-built integrations with software tools like Google Analytics, MailChimp, Salesforce and Zendesk to save time. Other nice, similar options are DearLucy and Leftronic.

Effective dashboards are powerful. They are, essentially, tools that make statements as to what a company or a team considers to be valuable.

One of the most useful things a dashboard tool makes possible is simply tracking leads and conversions. To track these, the team only needs to agree on which specific actions define each stage of the customer journey, and then build those stages into the dashboard and the reporting system.
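
As a minimal sketch of that idea (the event names and stage definitions below are invented for illustration), the team’s agreed mapping can literally be written down as data and used to count how many users have reached each stage:

    # Hypothetical mapping agreed by the team: which tracked action defines which stage.
    STAGE_OF_ACTION = {
        "visited_landing_page": "lead",
        "requested_demo": "qualified lead",
        "started_trial": "opportunity",
        "paid_invoice": "customer",
    }

    # Hypothetical event log: (user_id, action)
    events = [
        ("u1", "visited_landing_page"),
        ("u1", "requested_demo"),
        ("u2", "visited_landing_page"),
        ("u1", "started_trial"),
        ("u3", "visited_landing_page"),
        ("u1", "paid_invoice"),
    ]

    users_per_stage = {}
    for user, action in events:
        stage = STAGE_OF_ACTION.get(action)
        if stage:
            users_per_stage.setdefault(stage, set()).add(user)

    for stage in ("lead", "qualified lead", "opportunity", "customer"):
        print(f"{stage}: {len(users_per_stage.get(stage, set()))} users")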

The Net Promoter Score (NPS) — that is, the share of customers who would recommend your company’s products or services (promoters) minus the share who would not (detractors) — seems to me like one of the most relevant metrics.
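
For reference, NPS is calculated from the answers to the 0–10 “how likely are you to recommend us” question: the share of promoters (scores 9–10) minus the share of detractors (scores 0–6). A tiny sketch with made-up responses:

    # Made-up survey responses on the 0-10 "likelihood to recommend" scale.
    scores = [10, 9, 9, 8, 7, 6, 10, 4, 9, 2, 8, 10]

    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    nps = 100 * (promoters - detractors) / len(scores)  # ranges from -100 to +100

    print(f"NPS: {nps:+.0f} ({promoters} promoters, {detractors} detractors, {len(scores)} responses)")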

NPS is not the only relevant measure of success, however, nor should it be.

Relevant KPIs should be set for each stage of the customer journey. It is important to focus on a single overall macro KPI, but also to monitor the whole customer lifecycle.

Dave McClure, for example, suggests the following macro metrics for the different phases of the customer journey (a small funnel sketch follows the list):

  • Acquisition metrics – How do the customers find you?
  • Activation metrics – Do the customers have a great first experience?
  • Retention metrics – Do the customers come back?
  • Revenue metrics – Does your company make a profit?
  • Referral metrics – Do the customers tell others about you?
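
Laid out as a funnel, these stages also make it easy to see where users drop off. A quick sketch with invented per-stage user counts, computing how large a share of each stage makes it to the next one:

    # Invented user counts per pirate-metrics stage, in funnel order.
    funnel = [
        ("Acquisition", 50_000),
        ("Activation", 12_000),
        ("Retention", 4_000),
        ("Revenue", 900),
        ("Referral", 300),
    ]

    print(f"{funnel[0][0]}: {funnel[0][1]} users")
    for (stage, count), (_, previous) in zip(funnel[1:], funnel[:-1]):
        print(f"{stage}: {count} users ({count / previous:.1%} of the previous stage)")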

What I think is best about KPIs and metrics is that the overall process of sourcing and communicating data can spark intense discussions that at times strike right at the core of the business and its purpose.

Dashboards and metrics may not be something that everyone is passionate about. But collecting and reporting data can have a huge impact on accelerating the company’s and the team’s performance.

Visit the Geckoboard website and get your free trial now:

https://www.geckoboard.com/

Check out Meltwater’s Growth Hacker Brendon Ritz’s brief guide on analytics and data:

http://www.meltwater.com/insight/keys-kingdom-making-marketing-team-data-centric/


On the Lean Methodology and Metrics

I think setting up consistent metrics makes all the difference in relation to lean startup thinking. Building Minimum Viable Products (MVPs) is becoming a prevalent way to create new products and services. But in order to keep improving our MVPs, it is essential to figure out the metrics that are relevant to the customer’s overall happiness and satisfaction with the product or service.

So let’s take a brief look at the lean methodology in relation to metrics. In a recent post entitled “Flow and Seductive Interactions” (https://iiriskblog.wordpress.com/2016/03/14/flow-and-seductive-interactions/), I emphasized the need to create products and services that cater to the customer’s personal improvement while offering a true “flow experience” of micro-moments during relatively complex tasks. This is especially essential in multichannel digital service design.

But how to measure all of this?

Eric Ries, the author of “The Lean Startup”, says that while we certainly need figures, the customers are individuals.

In his book, Ries states that “Numbers tell a compelling story, but I always remind entrepreneurs that metrics are people, too. No matter how many intermediaries lie between a company and its customers, at the end of the day, customers are breathing, thinking, buying individuals. Their behavior is measurable and changeable.”

I agree with Ries. So essentially, we need to figure out what works, and also understand why it works. Focusing on these questions, especially the “why” part, helps us choose the correct metrics.

In “The Lean Startup”, Ries states that in order to support the Build-Measure-Learn feedback loop, the metrics need to be “Actionable, Accessible and Auditable”.

First of all, let’s take a look at “Actionable” metrics.

Your company may attract 1 000 000 unique visitors to its website annually. However, this figure might not be as relevant as many people think. As Ries explains, “For a report to be considered actionable, it must demonstrate clear cause and effect. Otherwise, it is a vanity metric.” So the next question we need to ask is: where are the visitors coming from, and why? We should then follow up by setting metrics for that.

Ries also provocatively states that “All too many reports are not understood by the employees and managers who are supposed to use them to guide their decision making.” Furthermore, he says that “Unfortunately, most managers do not respond to this complexity by working hand in hand with the data warehousing team to simplify the reports so that they can understand them better.” I think this is true.

Based on my own experience, this might be one of the most important issues to solve in relation to metrics. I think setting up an accessible dashboard of the most relevant metrics should be a top priority for the analytics team. There is currently a plethora of excellent analytics dashboard software available. I personally prefer the kind that is accessible to any employee at any time, modular and visual.

Finally, Ries states that all analytics and metrics must be “Auditable”. An easy way to test hypotheses based on analytics is to interview the people who are using the product or service. Another feasible way to audit and validate the hypotheses is A/B testing. Experimenting with various landing pages, for example, usually pays off (a quick sketch of checking an A/B test result follows below). Checking the heatmaps of existing pages also helps. Yet another practical way to test metric-based hypotheses is to generate traffic by adjusting search engine optimization parameters. I think regular auditing paves the way for regular improvements.
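
To make the A/B testing point concrete, here is a small sketch (with invented visitor and conversion counts) of a two-proportion z-test, one common way to check whether the difference in conversion rate between two landing-page variants is likely to be real rather than noise:

    from math import sqrt, erfc

    # Invented results for two landing-page variants.
    visitors_a, conversions_a = 4_800, 240   # variant A: 5.0% conversion
    visitors_b, conversions_b = 5_100, 311   # variant B: ~6.1% conversion

    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    p_pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)

    # Two-proportion z-test: is the difference bigger than chance alone would explain?
    standard_error = sqrt(p_pooled * (1 - p_pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / standard_error
    p_value = erfc(abs(z) / sqrt(2))  # two-tailed p-value

    print(f"Variant A: {p_a:.2%}, variant B: {p_b:.2%}, z = {z:.2f}, p = {p_value:.3f}")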

So the KPIs, as well as other metrics, should have a clear relation to the overall customer experience and to the strategic goals of the company. Some of the most important KPIs remain social media audience size, reach, engagement rate, website traffic, and the number of leads and conversions. But understanding why visitors end up on our social media or landing pages, and why they convert into customers, is essential for creating the next MVP as well as for improving the multichannel experience of existing ones.

Setting up the most relevant metrics for these processes should be a top priority for the analytics team as well as for the C-suite.