How to Market Test a New Idea

“So,” the executive sponsor of the new growth effort said. “What do we do now?”

It was the end of a meeting reviewing progress on a promising initiative to bring a new health service to apartment dwellers in crowded, emerging-market cities. A significant portion of customers who had been shown a brochure describing the service had expressed interest in it. But would they actually buy it? To find out, the company decided to test-market the service in three roughly comparable apartment complexes over a 90-day period.

Before the test began, team members working on the idea had built a detailed financial model showing that it could be profitable if they could get 3% of customers in apartment complexes to buy it. In the market test, they decided to offer a one-month free trial, after which people would have the chance to sign up for a full year of the service. They guessed that 30% of customers in each complex would accept the free trial and that 10% of that group would convert to full-year subscribers.
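Those two guesses were not arbitrary: multiplied together, a 30% trial rate and a 10% conversion rate yield exactly the 3% penetration the financial model required. A minimal sketch of that arithmetic, in Python, with illustrative names (the team's actual model is not shown in this piece):

```python
# Break-even arithmetic behind the test design (names are illustrative).
BREAKEVEN_PENETRATION = 0.03  # the model's threshold: 3% of residents subscribe

def expected_penetration(trial_rate: float, conversion_rate: float) -> float:
    """Share of all residents who end up as full-year subscribers."""
    return trial_rate * conversion_rate

predicted = expected_penetration(trial_rate=0.30, conversion_rate=0.10)
print(f"Predicted penetration: {predicted:.1%}")               # 3.0%
print("Meets threshold:", predicted >= BREAKEVEN_PENETRATION)  # True, with no margin
```

Note that the prediction lands exactly on the break-even line, so any shortfall in either rate pushes the whole test below the threshold.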

They ran the test and, as always, learned a tremendous amount about the intricacies of positioning a new service and the complexities of actually delivering it. They ended the three months much more confident that they could successfully execute their idea, with modifications, of course.

But then they started studying the data, which roughly looked as follows:

[Chart: “Buy Your Offering” results, showing trial, conversion, and penetration rates for the three apartment complexes]

Overall trial levels were lower than expected (except in Complex 2); conversion of trials to full-year subscribers was a smidge above expectations (and significantly higher in Complex 3); and average penetration fell below the magic 3% threshold.

What were the data saying? On the one hand, trial uptake fell short of its overall targets. That might suggest stopping the project or, perhaps, making significant changes to it. On the other hand, it fell only five customers short of those targets. So maybe the test just needed to be run again. Or maybe the data even suggested the team should move forward more rapidly. After all, if you could combine the high trial rate of Complex 2 with the high conversion rate of Complex 3…

It’s very rare that innovation decisions are black and white. Sometimes the drug doesn’t work or the regulator simply says no, and there’s obviously no point in moving forward. Occasionally results are so overwhelmingly positive that it doesn’t take much thought to say full steam ahead. But most of the time, you can make a convincing argument for any number of next steps: keep moving forward, make adjustments based on the data, or stop because the results weren’t what you expected.

The executive sponsor felt a frustration common at companies accustomed to the certainty of operational decisions, where historical experience has produced robust decision rules that remove almost all need for debate and discussion.

Still, that doesn’t mean executives have to make these decisions blind. Start, as this team did, by properly designing experiments. Formulate a hypothesis to be tested. Determine specific objectives for the test. Make a prediction, even if it is just a wild guess, about what should happen. Then execute in a way that lets you accurately measure results against your prediction.
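One hypothetical way to make that discipline concrete is to force every test into the same written structure before it runs. The sketch below assumes nothing about this team's actual process; the class and field names are invented for illustration:

```python
# Hypothetical scaffold for a well-designed experiment: the hypothesis,
# objective, and prediction are written down before the test runs, and
# the measurement is judged against the prediction afterward.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Experiment:
    hypothesis: str                 # what you believe to be true
    objective: str                  # what the test must establish
    predicted_value: float          # your guess, even a wild one
    measured_value: Optional[float] = None  # filled in after the test

    def prediction_met(self) -> bool:
        if self.measured_value is None:
            raise ValueError("Run the experiment before judging it.")
        return self.measured_value >= self.predicted_value

test = Experiment(
    hypothesis="Apartment residents will pay for the health service",
    objective="Reach 3% penetration across three comparable complexes",
    predicted_value=0.03,
)
test.measured_value = 0.028  # hypothetical reading, just under threshold
print("Prediction met:", test.prediction_met())  # False
```

Writing the prediction down first is the point: it anchors the post-test debate to what you said would happen, not to what you wish had happened.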

Then involve a dispassionate outsider in the process, ideally one who has learned through experience how to handle decisions made with imperfect information. So-called devil’s advocates have a bad reputation among innovators because they seem to say no just to say no. But someone who helps you honestly see weak spots to which you might otherwise be blind plays a very helpful role in making good decisions.

Avoid considering an idea in isolation. In the absence of choice, you will almost always be able to develop a compelling argument for proceeding with an innovation project. So instead of asking whether you should invest in a specific project, ask whether you are more excited about investing in Project X than in the other alternatives in your innovation portfolio.

And finally, ensure there is some kind of constraint forcing a decision. My favorite constraint is time. If you force decisions in what seems like an artificially short time period, you will imbue your team with a strong bias toward action, which is valuable because the best learning comes from getting as close to the market as possible. Remember, one of your options is to run another round of experiments (informed, of course, by what you’ve learned to date), so a calendar constraint on each experiment doesn’t force you to rush to judgment prematurely.

That’s in fact what the sponsor did in this case — decided to run another experiment, after first considering redirecting resources to other ideas the company was working on. The team conducted another three-month market test, with modifications based on what was learned in the first run. The numbers moved up, so the company decided to take the next step toward aggressive commercialization.

This is hard stuff, but it is a vital discipline to develop; otherwise your innovation pipeline will get bogged down with initiatives stuck in a holding pattern. If you don’t make firm decisions at some point, you have made the decision to fail by default.

