Kevin Guenther

Validating Digital Experiences

Using an admittedly strange metaphor to discuss my approach to creating an MVP, setting up metrics and course correcting.


The illustration above is based on an analogy I use to explain what it feels like to plan and execute a project that hits all the right success metrics.


Depending on the effort you put into picking the right stone, aiming it, throwing it, and choosing which lily pad to target, you'll get a variety of results, ranging from a disappointing plop to a satisfying splash (happy users and profitability)!

Those same feelings of preparation, practice, and eventual success fit really well with digital product design.

Put more succinctly, Tim Brown of IDEO described how any product or service, whether you're designing something new or adding a feature to an existing product, should meet the trifecta of goals:

  • Feasibility

  • Viability

  • Desirability

But discovering and measuring whether you have met these goals requires a lot of qualitative research, a lean design/build process, and actionable quantitative and qualitative data.

What follows is what I would define as the ideal mindset and the high-level methods your product team should have in place before taking on the development of a new digital product or feature.


Part 1. Selecting and Launching the Stone

MVP Effort = A balance between resources and requirements


There is already a lot out there about building Minimum Viable Products (MVPs), so I'm not going to belabour the subject here. The most valuable thing about building MVPs, as it relates to the infographic above, is understanding how to use them for a quick turnaround to market and immediate feedback. This is arguably the most important step in breaking free of traditional waterfall project management.

The other thing to focus on, as it relates to 'Effort' in the infographic, is that the words Minimum and Viable are meant to be used together. Too many business owners and Product Managers focus on the Minimum part, while too many designers and developers focus on the Viable part. Minimum and Viable are not meant to be mutually exclusive concepts; they are about finding balance.

Minimum and Viable are not meant to be mutually exclusive concepts; they are about finding balance.

Your MVP should follow these three points every single time you build one:

  1. Only build the functions that will help test whether your product / feature is viable (refer to the milestones in the illustration). Ignore any temptation to add non-critical features.

  2. You absolutely need to build in an analytics platform to measure usability, churn, retention, and conversions. This may seem obvious, but it's always surprising to me how many Product Managers forget to plan for it (a minimal sketch of what that instrumentation can look like follows this list).

  3. Building surprise and delight into the fabric of your products will create emotional connections that will push you above competitors if you manage to hit all of the milestones above — and soften the blow if you don’t.
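To make point 2 concrete, here's a minimal sketch of what that instrumentation can look like. The event names, the in-memory store, and the track() helper are hypothetical stand-ins rather than any particular analytics vendor's API; in practice these events would be forwarded to whichever platform you choose.

```python
# A minimal sketch of instrumenting an MVP with event tracking so that churn,
# retention, and conversions can be measured later. The event names and the
# in-memory store are hypothetical -- in practice you would forward these
# events to your analytics platform of choice.
from datetime import datetime, timezone

EVENTS = []  # stand-in for your analytics backend


def track(user_id: str, event: str, **properties) -> None:
    """Record a single product event with a timestamp and arbitrary properties."""
    EVENTS.append({
        "user_id": user_id,
        "event": event,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "properties": properties,
    })


# Instrument the few moments that matter for validating the MVP:
track("user-123", "signed_up", channel="email_campaign")
track("user-123", "completed_onboarding", steps_skipped=0)
track("user-123", "converted_to_paid", plan="monthly")
track("user-456", "signed_up", channel="paid_ads")
track("user-456", "cancelled_account", reason="missing_feature")
```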


Proposed Solution

How well you understand your users and the problem they need solved.


Closely related to the effort you put into developing your MVP is how you go about choosing the features to include in it, or rather, the problem you think you're solving for your users.


When it comes to these decisions, there is a big difference between launching a feature within a live product vs. launching an entirely new product. Assuming the live product is gathering decent metrics and regularly surveying users (or even just logging support tickets), you're starting with user insights that a new product likely won't have. Regardless, in a user-centred design process, it's absolutely critical that you start with sound market analysis and user data before you "select your stone" (decide what your MVP looks like). That means ensuring your organizational goals and resources are properly aligned to a solution that users actually need.

A few tools and techniques that product teams I've worked with have employed in the past are TAM, SAM, and SOM reports and Innovation Engineering Yellow Cards. These tools are great for business-side decision making, like determining whether or not you can even afford to make the product based on its potential to return a profit.

Ensure your organizational goals and resources are properly aligned to a solution that users actually need.

To receive more holistic, user-centred feedback, I like to employ user surveys, ethnographic research, and ideally Design Sprints before jumping into high-fidelity design and coding.


Ultimately all of these methods and tools are intended to ensure the features you are building into your MVP are attempting to solve a problem that real users have acknowledged they actually need solved. Of course, that’s not to say that your instincts about your product and users are completely wrong. You have to start somewhere.

In fact, going with your gut will probably lead you somewhere in the ballpark of 50% to 90% of the way down the right path. But without a process in place to properly validate your assumptions, there is often no way of knowing where you really are on that spectrum until it's too late.


Aim

Your ability to identify and target users


So now that you’ve built the essential features, you’re ready to launch your stone at the lily pad… but wait… which lily pad was it again?

Getting feedback from a handful of survey respondents or waiting list sign-ups from a few hundred potential users is one thing. Having a profitable number of users discover your product and choose you over a competitor in the real world is a much bigger challenge.


That’s why before you send that stone hurtling through the air, you’ll want to make sure you’ve properly considered your communication and marketing strategy and how you plan to gather feedback and respond to the results.


Regardless of how you'll spread the message (social media, email marketing, paid advertising, etc.), knowing where your users are coming from and which channels are the most effective is just as important as knowing whether or not they are able to use your product, convert, and remain loyal.

Some good tools for tracking the flow of users at the top of the funnel (and for those with a more robust marketing budget) are HubSpot or Marketo. They’re excellent soup-to-nuts products for almost all of your inbound marketing needs and CRM tracking.

For those on a tighter budget, check out Adjust for measuring inbound marketing on mobile-based platforms, and MailChimp or SendGrid for email marketing and SaaS notification tools. Of course, Google Analytics is another affordable option for web-based products and AdWords campaigns, and Facebook and Twitter for Business have their own ad management and analytics dashboards as well.
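Whatever tools you use, the underlying question is simple: which channel actually converts? Here's a rough sketch of that comparison, assuming each signup and conversion event has been tagged with its acquisition channel (for example via UTM parameters); the channel names and counts are invented for illustration.

```python
# A rough sketch of comparing acquisition channels, assuming each signup and
# conversion has been tagged with the channel it came from. Invented data.
from collections import Counter

signups = Counter({"email_campaign": 420, "paid_ads": 310, "organic_search": 180})
conversions = Counter({"email_campaign": 63, "paid_ads": 25, "organic_search": 36})

for channel, signed_up in signups.items():
    rate = conversions[channel] / signed_up
    print(f"{channel:15s} signups={signed_up:4d} conversion rate={rate:.1%}")
```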

With all of the issues above adequately addressed, you should finally be ready to launch that stone. The preparation may seem like a lot of work, but in the end these techniques produce results much faster and usually much more successfully than the old waterfall method of setting up requirements, detailed creative briefs and strict budgets.


Part 2. Splash Down

Impact — How did we do?


Just like throwing the stone, once you deploy your product live, there's always a brief period of waiting for the results: the anticipation of your social media and direct email campaign analytics letting you know whether your aim was accurate or not.

But the real validation comes once you start to see all the sweet ripples of quantitative data rolling onto shore.


If your initial user research and your marketing messages were well considered, you'll have already validated the need and potential market size for your product. Evidence of the quality of those efforts will start rolling in first. You'll know you're onto something when your metrics show that users:

  1. Have been waiting for a solution to their real-world problem or pain-point

  2. Have been able to find you easily enough (inbound marketing metrics)

  3. Show that it's a problem they care about by downloading, signing up for, and/or installing your product to evaluate it (downloads/conversions)

Bear in mind, however, that if the ripples flatten out after that, you need to take a closer look at your usability metrics. If (4) users are somehow annoyed or confused by the usability of your product, or find you're not fulfilling the promises you've made in your marketing materials, they (5) likely won't return to your product or recommend you to others. That's unfortunate, because viral growth through recommendations is definitely a sign you're onto something big.
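One simple way to see where the ripples flatten out is to look at step-to-step drop-off through the funnel rather than only the top-line numbers. Here's a minimal sketch, with invented stage names and counts:

```python
# A minimal sketch of spotting where users drop off, using invented counts for
# each funnel stage. The stage names are hypothetical; the point is to compare
# step-to-step conversion rather than only the top-line totals.
funnel = [
    ("visited_landing_page", 10_000),
    ("signed_up", 1_200),
    ("completed_onboarding", 700),
    ("returned_within_7_days", 180),
]

for (prev_stage, prev_count), (stage, count) in zip(funnel, funnel[1:]):
    print(f"{prev_stage} -> {stage}: {count / prev_count:.1%}")

# Unusually sharp drops between adjacent steps point to where usability metrics
# and qualitative feedback are worth a closer look.
```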


Of course, the ultimate goal you want to achieve is (6) long-term retention, whatever that means for your product. This is represented by your Customer Lifetime Value (CLV, or LTV for short): essentially, the average effective length of time you can expect to retain active users until they cancel their accounts or simply stop using the product. You can use metrics like average usage cadence for online products, or, in traditional applications, open rates (the average number of times each user opens your product daily/weekly/monthly). If need be, you can even use how consistently they maintain the product by downloading updates as soon as they're available, as opposed to months after they've been released.
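For a rough sense of how CLV can be estimated, here's a minimal sketch of one common approximation, assuming a subscription product with a steady monthly churn rate; both numbers are purely illustrative, and every product will measure "lifetime" differently, as noted above.

```python
# A minimal sketch of one common CLV approximation for a subscription product,
# assuming a steady monthly churn rate. Both inputs are purely illustrative.
average_revenue_per_user_per_month = 12.00  # ARPU, in your currency
monthly_churn_rate = 0.05                   # 5% of active users leave each month

expected_lifetime_months = 1 / monthly_churn_rate           # ~20 months
clv = average_revenue_per_user_per_month * expected_lifetime_months

print(f"Expected lifetime: {expected_lifetime_months:.0f} months, CLV = {clv:.2f}")
```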


But every product is different and the goal for usage / conversion will be different as well. Based on your product, you’ll likely know what metrics are best to calculate CLV/LTV… just ensure you do!


But what about situations in which you’re not designing a product for profit, but as an internal tool for employees or certified partners?

While retention isn't something you'll have to worry about for a highly specialized, mandatory digital product, measures like efficiency, user error, and employee happiness certainly are. You'll want to gather data for those instead.


Just like throwing a stone at a lily pad in a pond, your first throw is probably not going to be your best, and it usually takes a few shots before you get it right.

With all of this juicy quantitative data, you're also going to want to get as much qualitative data as you can. That means getting back out into the field and watching how your users interact with your product: their mood while using it, the common distractions they have to deal with, and, most importantly, how they may be hacking your product or its environment to achieve their goals.


Now you’re ready to have another go at improving your product. This is what we call the experimentation phase in product design, when your product team begins to plan an experimentation road-map. The road-map will include devising and testing various hypothesis about why something did or didn’t work, and how those issues can be fixed. Just like throwing a stone at a lily-pad in a pond, you’re first throw is probably not going to be your best and it usually takes a few shots before you get it right.


Clearly there is a lot more detail I could go into regarding tools and methods, but each one of those would justify a post unto itself. A decent follow-up read, if you're interested, is an article I published a few years ago explaining the on-boarding metrics and experimentation process we implemented while I was working for a startup called Kindoma (now Hoot Reading).


Conclusion


Just like launching (and re-launching) a new digital product or feature, the “pre-throw” is really the only part you can control, tweak, and get better at. As with everything else in life—the more you do it, the more adept and accurate you become. And thankfully, keeping your process lean will ensure you’re not limited to just one stone (one shot at a successful launch).



