Three Barriers to Digital Analytics Success (and How to Get Around Them)

By Gary Angel – CEO at Digital Mortar

In this world, things rarely run smoothly. That doesn’t mean things never go well, but most everything happens in fits and starts. When you first try something, it’s usually hard. Think about that first jog in a new fitness program. Or the early days of your digital analytics program. Painful! But if learning new stuff hurts, it’s also the time when progress feels substantial. Then, sometimes, you hit a wall and progress slows dramatically, or even halts altogether.

In two decades of driving digital analytics programs, I’ve seen all sorts of walls: programs that got hung up, ground to a halt or failed outright. Below are the key transition points you need to get right to take your program to the next level.

Hung-Over on Best-Practice Testing

The early days of most testing programs are intoxicating. If you’re a consultant, chances are you have a set of best practices in UI and conversion that you’ve learned the hard way. Using these, you’re nearly guaranteed successful outcomes: fixing CTAs, resolving obvious friction points and reorganizing forms. This stuff is easy to consume and it works.

But even the most experienced tester will eventually hit a wall with this type of best-practice testing. It often happens towards the end of the first year, when you run out of best-practice ideas. All your successful experience doesn’t prepare you for the next stage. There aren’t enough best practices to drive continuous improvement. So, what to do?

Breaking the Best-Practice Barrier

Long-term success in a testing program means transitioning from best-practice testing to analytics-driven testing. Best-practice testing isn’t really the right methodology to drive a testing program. It’s too general and too dependent on unlikely similarities between your business and someone else’s.

Instead, drive testing with a continuing process of analytics investigation into your customers. This investigation should include behavioral data (where, when and how they succeed or fail) and supporting attitudinal data (who they are and why they made the choices they did). Analytics-driven testers are constantly evaluating every stage in the funnel across both behavior and attitude, and measuring how successful each funnel stage is and why customers succeed or fail. Tests are then targeted to areas where the business will benefit most. Not only is this a better paradigm for driving a testing program, it’s a way to help your test builders design creative based on a better understanding of customer pain points.
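For illustration only, here’s a minimal sketch of the kind of prioritization this implies. The stage names, traffic numbers and order value are invented; the point is simply that stages get ranked by how much the business stands to gain, not by which best practice comes to mind first.

```python
# Hypothetical funnel data: (stage, visitors entering, visitors completing).
# All names and numbers are invented for illustration.
funnel = [
    ("product_page", 100_000, 42_000),
    ("cart",          42_000, 21_000),
    ("checkout",      21_000, 14_700),
    ("payment",       14_700, 13_200),
]

AVG_ORDER_VALUE = 80.0  # assumed average order value in dollars

def opportunity(entering, completing):
    """Rough value of the visitors lost at this stage (a prioritization
    heuristic, not a forecast of what a test will actually recover)."""
    return (entering - completing) * AVG_ORDER_VALUE

# Rank stages by lost value so testing effort goes where the payoff is largest.
for stage, entering, completing in sorted(
        funnel, key=lambda s: opportunity(s[1], s[2]), reverse=True):
    print(f"{stage:<13} conversion {completing / entering:6.1%}  "
          f"at-risk value ${opportunity(entering, completing):,.0f}")
```

The behavioral data tells you where the at-risk value sits; the attitudinal data tells you why customers stall there, which is what your test builders actually need to design creative against.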

Getting it Right for Everyone (and No One)

So, you’re driving a testing program with analytics, not just best practices. You’re analyzing each funnel stage and even doing VoC (voice-of-customer) research to identify pain points and drive creative solutions. This will drive success. For a while.

But then you’ll start to notice that tests increasingly generate neutral results. You try a strategy, then its opposite, and the results aren’t much different. What’s happening? You’ve hit the wall of monolithic maximization. The problem? You almost certainly have lots of different types of customers with different decision points, different pain points and different needs. As long as you’re treating tests as global entities, you’ll quickly reach a very sub-optimum optimum. You’ve created a site that is as good as possible for everyone in the aggregate, but there are lots of individuals (maybe everyone) who are poorly served. Changing something to benefit one customer segment usually damages another, making it almost impossible to improve your overall results. This, of course, can be frustrating.
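A quick arithmetic sketch (with made-up segment shares and conversion rates) shows how this happens: a variant that clearly helps one group and clearly hurts another can wash out to a dead-neutral aggregate result.

```python
# Invented example: two segments of equal size react in opposite ways
# to the same test variant.
segments = [
    # (segment, share of traffic, control conversion, variant conversion)
    ("researchers",  0.5, 0.040, 0.050),  # variant helps this group
    ("quick_buyers", 0.5, 0.060, 0.050),  # variant hurts this group
]

control = sum(share * c for _, share, c, _ in segments)
variant = sum(share * v for _, share, _, v in segments)

print(f"control: {control:.1%}   variant: {variant:.1%}")
# control: 5.0%   variant: 5.0%  ->  the aggregate test reads as neutral
```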

Breaking through Sub-Optimum Maxima

Segmentation and personalization are the key to breaking out of this trap. Digital is most effectively driven by a two-tiered segmentation—a combination of who somebody is (tier one) and what they are trying to accomplish (tier two). In an ideal world, every test should be targeted to a specific population at the intersection of a customer type and a goal. When your tests are targeted this way, you get around the wall imposed by monolithic maximization in a highly individualized world. There’s a methodology (SPEED) that encapsulates this approach to segmentation followed by targeted testing to drive continuous improvement. It’s the best way to bring both analytics discipline and local maximizations to your digital analytics program.
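In code terms, the idea is simply that tests are keyed to segment-goal cells rather than to the site as a whole. Here’s a hypothetical sketch; the segment names, goals and test names are all placeholders.

```python
# Hypothetical two-tier targeting: tier one is who the visitor is,
# tier two is what they're trying to accomplish on this visit.
tests_by_cell = {
    ("new_visitor",     "research"): "comparison_content_test",
    ("new_visitor",     "purchase"): "guest_checkout_test",
    ("repeat_customer", "purchase"): "one_click_reorder_test",
}

def pick_test(visitor_type, visit_goal):
    """Return the test targeted at this customer-type / goal cell, if any."""
    return tests_by_cell.get((visitor_type, visit_goal))

print(pick_test("new_visitor", "purchase"))      # guest_checkout_test
print(pick_test("repeat_customer", "research"))  # None: no test targets this cell yet
```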

It Takes Two to Tango (But Only Digital Is a Good Dancer)

These transitions from best-practice to analytics-driven testing and from global to segmented optimization don’t just play out across site design. The same process takes place with digital campaigns, too. But there’s a larger problem that increasingly plagues mature digital programs, and that’s local optimization at the digital level. With a mature digital program, you’re driving highly segmented continuous improvement both online and in campaigns. However, as soon as your customer leaves the digital world, all of that improvement and testing discipline vanishes. For any true omni-channel company, that’s a disaster.

Taking Digital Methods Beyond Digital

The new frontier of digital analytics isn’t really digital at all. It’s extending the methodologies that drive digital improvement into the rest of the customer experience. There’s a new set of technologies for measuring in-store and in-world customer experiences in a way that makes them much more amenable to the type of continuous improvement we’ve gotten used to in digital.

If you’re feeling pretty good about your maturity level in digital, but frustrated with the rest of the customer experience, looking into in-store customer journey tracking technologies can help. For the enterprise, it’s a way to drive competitive advantage out of bricks-and-mortar. For an agency, it’s a whole new way to leverage your hard-won expertise in testing, analytics and continuous improvement in a largely greenfield space.
