How to Navigate Product Enhancements Through Data


You’ve probably heard it countless times: we’re in the “consumer age”—the power of choice is in the consumer’s hands. This realization has forced organizations in every industry to take a hard look at the experience they are delivering.

Many companies have realized that with intuitive products and innovative features, they can significantly improve consumer experiences and, by extension, financial outcomes. But how do you identify which features are truly impactful? Finding product enhancements that move the needle in meaningful ways requires a deep understanding of what users are doing.

Let’s look at how to do this. 

Hypothesis creation

Prior to any experiment, you need a hypothesis: a verifiable, falsifiable belief about how or why users take particular actions. Every experiment should begin with data analysis to pinpoint the problem you want to solve and to rank potential solutions to that problem by expected impact, reach and effort.
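
To make that prioritization step concrete, here is a minimal sketch in Python of scoring candidate hypotheses by expected impact, reach and effort. The candidate names, numbers and scoring formula are hypothetical illustrations, not a prescribed framework.

```python
# A minimal sketch of ranking candidate hypotheses by expected impact,
# reach and effort. The candidates and numbers are hypothetical.
from dataclasses import dataclass


@dataclass
class Candidate:
    name: str
    impact: float   # expected relative lift if the change works
    reach: int      # users affected per month
    effort: float   # person-weeks to design, build and test

    def score(self) -> float:
        # Higher impact and reach raise the score; higher effort lowers it.
        return (self.impact * self.reach) / self.effort


candidates = [
    Candidate("Simplify registration form", impact=0.05, reach=40_000, effort=3),
    Candidate("Add payment-plan banner", impact=0.02, reach=120_000, effort=2),
    Candidate("Redesign homepage hero", impact=0.08, reach=60_000, effort=8),
]

# Review the highest-scoring candidates first when writing hypotheses.
for c in sorted(candidates, key=lambda c: c.score(), reverse=True):
    print(f"{c.name}: score = {c.score():,.0f}")
```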

That being said, a hypothesis is specific to a particular time and place; it's concrete. Shocks to the economy (for example, our most recent inflation spikes), social changes, technological changes and more can shift user engagement. These shifts can introduce lurking variables that invalidate a hypothesis, so one must remain vigilant in evaluating the long- and short-term variables that affect user behaviors (and, consequently, hypotheses).

Given the complexity of isolating a variable and separating the signal from the noise, one must undertake an ongoing cycle of data analysis, brainstorming, design and experimentation.

Data analysis and modeling help identify the key factors driving consumer behavior. But to understand why consumers act, a great data team also needs to grasp the relevant emotions and subconscious behaviors by talking to consumers directly through user research and learning how they perceive their own experiences.
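
As one illustration of what that analysis step can look like, the sketch below fits a simple logistic regression on synthetic engagement data to surface which factors are most associated with a behavior. The column names and data are hypothetical stand-ins, not any particular company's model.

```python
# A minimal sketch of using a model to surface which factors are most
# associated with a behavior (here, whether a user completes a workflow).
# The columns and synthetic data are hypothetical stand-ins.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "sessions_last_30d": rng.poisson(5, 1_000),
    "mobile_user": rng.integers(0, 2, 1_000),
    "received_reminder": rng.integers(0, 2, 1_000),
})

# Synthetic outcome that depends mostly on reminders and session count
logits = 0.4 * df["received_reminder"] + 0.1 * df["sessions_last_30d"] - 1
completed = rng.random(1_000) < 1 / (1 + np.exp(-logits))

model = LogisticRegression().fit(df, completed)
for name, coef in zip(df.columns, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")

# Large-magnitude coefficients flag candidate drivers to investigate
# further with user research before committing to a hypothesis.
```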

By correctly identifying the major drivers of consumer behavior and why they influence action, you're armed with the information to build a thoughtful hypothesis and deliver experiences that can (and must!) be continuously improved through experimentation.

Experimentation, validation and falsification

Given the cross-functional collaboration inherent in experimentation (i.e., data science, design, user research, engineering and product must work in lockstep when making changes), teams must constantly optimize processes to make sure the experiments with the highest expected impact are prioritized. And since the cost of experimentation can be high, you also want to make sure that everything is done thoughtfully, with every step carefully planned and reviewed.

When you do finally test, know that sometimes a wrench is thrown in the works and you have to retest, occasionally even going back to the drawing board to try different product changes and personalizations until one clearly confirms or falsifies a hypothesis. Fortunately, if the problem is caught quickly, you can learn a lot from the experience. For example, one experiment we conducted led to a 30% drop in patient login rates to our platform. Our hypothesis wasn't falsified; we had simply pushed the call-to-action too far down our homepage (which actually showed how critical the login workflow was).

Additionally, results must sometimes be taken with a hefty grain of salt depending on sample size. For example, you may have a sample of 10,000 people but be making decisions for 10 million, while also trying to generalize to future consumers (i.e., the entire population of the United States).
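
To see why that caution matters, here is a rough back-of-the-envelope sketch: a 95% confidence interval for a conversion rate measured on 10,000 users, using made-up numbers. Even a narrow interval widens into a large absolute uncertainty once extrapolated to an audience of millions.

```python
# A rough sketch of the uncertainty in a rate measured on 10,000 users.
# The observed conversion count is a made-up number for illustration.
import math

n = 10_000          # users in the experiment
conversions = 420   # hypothetical number who converted
p_hat = conversions / n

# Normal-approximation standard error and 95% interval (z ≈ 1.96)
se = math.sqrt(p_hat * (1 - p_hat) / n)
low, high = p_hat - 1.96 * se, p_hat + 1.96 * se

print(f"observed rate: {p_hat:.2%}")
print(f"95% CI: [{low:.2%}, {high:.2%}]")

# An interval of roughly ±0.4 percentage points corresponds to tens of
# thousands of users when projected onto a population of 10 million.
```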

It's important that the members of a data science team hold each other accountable by sharing updates broadly with the whole company, which helps ensure objectivity by soliciting outside opinions when you must prioritize or make tough decisions. Be fearless when it comes to cutting changes and features that aren't improving metrics.

For example, when you find something isn't working, running a classic A/B test with two separate workflows can help you determine which option yields greater success, whether that's click-throughs, registrations or sales.
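
A two-proportion z-test is one common way to read such an A/B test. The sketch below assumes users were randomly split between the two workflows; the counts are hypothetical.

```python
# A minimal sketch of a two-proportion z-test for a classic A/B test,
# assuming users were randomly split between workflow A and workflow B.
# The counts below are hypothetical.
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return the z statistic and two-sided p-value for the difference
    in conversion rates between two independent groups."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided
    return z, p_value

# Workflow A (control) vs. workflow B (variant), e.g., registrations
z, p = two_proportion_z(conv_a=480, n_a=5_000, conv_b=545, n_b=5_000)
print(f"z = {z:.2f}, p = {p:.3f}")

# A small p-value (commonly < 0.05) suggests the difference is unlikely
# to be noise; otherwise, keep iterating or gather more data.
```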

I'd recommend methodically testing one aspect at a time before making any big changes, so that you can be as sure as possible there aren't any lurking variables.

Making it count

Developing a culture of continuous experimentation isn't easy, as the process can be expensive given the extensive resource coordination described above. And getting results takes time, so make each hypothesis as airtight as possible and each experiment meticulously planned.

But by collaborating on a regular basis, the entire team can become greater than the sum of its parts, helping each other identify and fill collective blind spots while contributing to each other's work. Through this partnership and constant evolution, organizations can be sure they are delivering on the consumer expectations necessary to succeed.

By employing some of the pointers above, we hope you'll be well equipped to start taking a data-driven approach to product enhancements. Good luck!

About the Author

Yohann Smadja is the VP of Data Science at Cedar. With 12 years of experience in the field, Yohann leads the Data Science team, which is responsible for making sense of the vast amount of data available to help achieve the company's vision.
