Product/Market Research Case Study:

Putting ‘I Would Use This’ to the Test with an MVP

Need to validate a product idea? Don’t ask users to predict behavior. Get proof with MVP testing. Here’s a 7-step concept testing process we used on a recent project.

Person holding red box with “Act Now” sticky note

Photo by Jo Szczepanska on Unsplash
 

From Prediction to Product Failure

Moderator: Imagine this scheduling feature existed today. How likely would you be to use it?

The software product owner looked up from his notes and took a deep breath.

He and his team were huddled in a conference room, watching a video of a concept testing session. For about 20 minutes, the research participant interacted with a rough prototype of a new scheduling feature. Now, the moment of truth had arrived.

Participant: Yeah, I would definitely use this.

By the end of the day, having watched other sessions with similar exchanges, the team agreed that the scheduling feature was ready to design, build and launch. They pitched senior management the next week and received the green light for funding.

When all was said and done, the enterprise business unit spent over $2 million on the new feature.

Within a year, it was gone.

No matter where or how much the team promoted the feature, software users ignored it. Usability testing led to design improvements, but UX design wasn’t the problem. The team had built the wrong feature — a solution without a real user need.

Instead of asking all the questions I would go out and start selling. I’d learn a lot you know … Even before I have the machine. Even before I started production. So my market research would actually be hands on selling.

— Anonymous entrepreneur

The failed scheduler story plays out in companies every day, wasting billions of dollars and countless hours. User research can help turn product failures into wins, but not if we rely on questions like “would you buy this?”

Most researchers know that users are bad at predicting future behavior. But when it comes to product/market validation, we feel handcuffed by traditional methods — so we ask for predictions anyway. To better lead product teams, we need to add more creative methods to our toolkit.

Here’s the story of a recent project where we incorporated a lean startup method: testing an MVP (minimum viable product).

The setting: a 5,000-person company with the largest provider network in its industry. The company’s most successful digital product is a provider marketplace, similar to Zocdoc, Realtor.com, or Care.com. This marketplace is a top source of new customers for providers and a key revenue driver for the company.

With the help of ongoing user research and iterative design, the Minneapolis-based team has been able to increase marketplace leads by 1,000% over the last 5 years. A few months back, they were seeking another lift.

While the details were unique, the high-level approach was one the team had used many times before.

Spot a problem

Recurring qualitative usability testing of the marketplace showed a clear problem: many users felt overwhelmed by the thousands of providers, and struggled to narrow down and select one that seemed like the right fit.

Validate the problem

Quantitative analysis confirmed the problem: session replays showed endless scrolling through search results; web analytics showed “pogo-sticking” between search results and provider profiles; and Google Ads data showed low conversion rates for highly-qualified segments.

Generate a solution idea

Inspired by user comments in qualitative research, and guided by a collaborative ideation session, the team came up with a solution concept:

Let’s add an option where visitors can skip the marketplace selection process, and instead just tell us what they want in a provider. Then we’ll use an algorithm to match them, like an automated matchmaker.

Test the solution concept

Now it was time to test the concept. This wouldn’t be an easy solution to build, so we needed a way to fake the product, to try “selling” it, and to measure the results.

After a lot of trial and error on past MVP testing projects, we’ve arrived at this process:

1. Write a hypothesis

State your solution assumption in 1 or 2 sentences. Be specific. Include something the key decision-maker cares about. We landed on this:

By adding a matchmaker option that allows overwhelmed users to skip marketplace provider selection, we will increase total lead volume by 5%.

2. Create an MVP

It’s tough to get MVP testing buy-in at big companies. There’s more to lose and more aversion to risk. To get buy-in here, we needed to:

  • Hack a solution without developer effort
  • Avoid disrupting the existing UX and metrics
  • Satisfy users who interacted with the MVP

The answer was a Wizard of Oz MVP. While the real solution would be an automated, algorithm-driven matchmaker, here we’d fake it by having a human act as the machine.

But first, we needed a way to sell the matchmaker to visitors. We found a third-party tool to add a popup ad and lead form to the site.

Exit intent popup used for MVP testing

3. Design the experiment

Next, we mapped out how the experiment would work:

  • Who sees the MVP? We decided to show the popup only to traffic from 3 states and only to visitors who showed intent to exit the site, and to limit it to 1% of the site’s traffic over a 2-week period (see the sketch after this list).
  • What happens after visitors convert? We assigned a member of our team to be the wizard and pointed lead notifications to her. We outlined a 5-minute process for her to match each lead to the right provider based on location and other inputs. Since each provider already had their own lead form as part of the marketplace, we decided to fill out those forms as if we were the user, pasting in their MVP inputs. With that, the user joined the existing lead flow. Not a perfect UX, but good enough!
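
In practice, the targeting and exit-intent trigger were configured in the third-party popup tool and deployed through Google Tag Manager rather than hand-coded. Still, a minimal sketch of the gating logic shows how little machinery the MVP needed; the state list, sample rate, and helper functions below are hypothetical stand-ins, not the project’s actual setup.

```typescript
// Hypothetical sketch of the MVP gating logic: show the matchmaker popup
// only in 3 target states, only to ~1% of traffic, and only on exit intent.
// In the real project this was handled by a third-party popup tool deployed
// through Google Tag Manager, not custom code.

const TARGET_STATES = ["MN", "WI", "IA"]; // hypothetical 3-state list
const SAMPLE_RATE = 0.01;                 // 1% of site traffic

// Stand-ins for pieces the real tools provided: geo lookup and the popup itself.
function getVisitorState(): string {
  return "MN"; // in practice, geo targeting came from the popup tool
}

function showMatchmakerPopup(): void {
  console.log("Render the matchmaker lead form"); // the third-party tool did this
}

const inExperiment =
  TARGET_STATES.includes(getVisitorState()) && Math.random() < SAMPLE_RATE;

if (inExperiment) {
  // Simple exit-intent heuristic: fire when the cursor leaves through the top
  // of the viewport (headed for the URL bar, tabs, or back button).
  document.addEventListener(
    "mouseout",
    (event: MouseEvent) => {
      if (!event.relatedTarget && event.clientY <= 0) {
        showMatchmakerPopup();
      }
    },
    { once: true }
  );
}
```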

4. Align on success

We determined a success metric: conversion rate, the percentage of MVP viewers who filled out the lead form. We then picked a target conversion rate based on calculations of what would let us increase leads by 5% at full throttle. We walked through the experiment plan and goal with product stakeholders. After some tweaks, we were ready to go.
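
The company’s real traffic and lead figures aren’t shared here, but the back-of-the-envelope math behind the target looked roughly like this; every number in the sketch is hypothetical.

```typescript
// Hypothetical math for deriving the target conversion rate from the
// hypothesis ("increase total lead volume by 5%"). None of these figures
// are the project's real numbers.

const monthlyLeads = 10_000;     // current marketplace leads per month
const desiredLift = 0.05;        // the 5% increase from the hypothesis
const monthlyVisitors = 500_000; // total site visitors per month
const eligibleShare = 0.3;       // share of visitors who'd see the matchmaker at full rollout

const extraLeadsNeeded = monthlyLeads * desiredLift;        // 500 extra leads
const mvpViewers = monthlyVisitors * eligibleShare;         // 150,000 viewers
const targetConversionRate = extraLeadsNeeded / mvpViewers; // ≈ 0.0033, i.e. ~0.33%

console.log(`Target conversion rate: ${(targetConversionRate * 100).toFixed(2)}%`);
```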

5. Launch and monitor

We added the popup script to the site with Google Tag Manager, allowing us to control the experiment without bothering developers. After a quick pilot, we launched the MVP. As leads arrived, our wizard teammate was notified by email, and she went to work matching leads to providers.

6. Analyze and iterate

During the experiment, we looked for patterns among the converters. An open-ended comments field in the MVP lead form gave us qualitative data to analyze alongside the quant data. Based on what we saw, we implemented a few messaging and targeting improvements. Our wizard teammate also tracked what made it hard or easy to match leads to providers.
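
On the quant side, the mid-experiment check was essentially a conversion-rate roll-up by targeting segment, compared against the target from step 4. Here’s a simplified sketch with made-up numbers and a hypothetical per-state breakdown.

```typescript
// Hypothetical sketch of the mid-experiment quant check: conversion rate
// by targeting segment versus the planning target. All numbers are made up.

interface SegmentStats {
  segment: string; // e.g., the visitor's state
  viewers: number; // visitors who saw the popup
  leads: number;   // visitors who submitted the matchmaker form
}

const segments: SegmentStats[] = [
  { segment: "State A", viewers: 1200, leads: 7 },
  { segment: "State B", viewers: 900, leads: 2 },
  { segment: "State C", viewers: 1100, leads: 5 },
];

const targetConversionRate = 0.0033; // carried over from the planning math

for (const s of segments) {
  const rate = s.leads / s.viewers;
  const verdict = rate >= targetConversionRate ? "on pace" : "below target";
  console.log(`${s.segment}: ${(rate * 100).toFixed(2)}% conversion (${verdict})`);
}
```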

7. Share results and plan

We regrouped with the larger team and shared the results: the MVP narrowly beat our conversion target, exceeding it by 6% and validating our solution hypothesis. As a next step, we agreed to pilot an automated provider-matching process, informed by our wizard’s learnings.

Success metrics results from MVP testing

While this MVP narrowly succeeded, most fail. That, it turns out, is the most useful thing about MVP testing: spotting bad ideas before you build them.

Think back to the failed scheduler story. Instead of asking lab users to predict their behavior, what if the researchers had spun up an MVP, dropped it into the software, and watched what users actually did?

It would have taken creativity, guts, and resilience. The payoff? The product team would have learned that their solution idea was a bad one before wasting $2 million on it — and moved on to another solution or problem.

That is a huge impact for a user researcher to have: not just improving the UX of existing products, but helping teams pick the right products to build — while saving companies from million-dollar mistakes. To achieve this, we must expand our toolkit for early-stage product research. MVP testing is a great place to start.

About the Project

  • Platform: Web marketplace
  • Audience type: Consumers
  • Methods: MVP/concept testing
  • Length: 2 months
  • Stakeholders: Marketing and IT teams
  • Company size: 5,000 employees
  • Company HQ: Minneapolis, MN

More Case Studies

 

Concept Testing of a Weather Alerts Feature for GEICO’s Mobile App

A GEICO product team needed to decide whether to build a new mobile app feature. We led qualitative concept testing and delivered actionable findings related to concept value and design.

Product/Market Fit Research for Verizon’s Innovation Group

A Verizon team needed to test key assumptions to move closer to product/market fit for a new enterprise software concept. Our rapid research cycle delivered new user insights that helped validate/invalidate hypotheses.