We often hear stories about how "data driven" Netflix is. Netflix maintains a whole blog about its tech methodologies, including how they test covers, titles, and design features. Before Netflix, A/B testing was seen as a fine-tuning design tool: very tactical, very low-ranked in the organization. After Netflix, A/B testing has become an integral part of a complex architecture of KPI setting and user testing.
Once, while discussing the "My List" feature on Reddit's r/Netflix, I came across this great comment:
I recently heard a podcast featuring Steve Johnson, VP of Design, and Rochelle King, VP of Creative Production, both at Netflix, in which they describe the early process further: Where is the impact of this design change? How many users will it affect? While, ultimately, it all comes down to a positive impact on the service, it starts with scoping and understanding how likely the change is to create an impact. The episode is available here.
From my experience, a lot of companies would start failing right there. Why? Because companies rarely have one single business objective. You need to figure out which revenue stream your design supports in the organization, and go for design changes that affect that metric.
At Netflix, according to what our mystery commenter from Reddit states, the design requirement for UI changes is always to move a metric by 1%, which ultimately converts to retention. This is consistent with what growth-hacking guru Sean Ellis calls the North Star Metric: the one metric that measures value to users and is connected to business goals.
The process of attaching design metrics to business goals is not a simple one, but it's also not that complicated. It requires the people involved in the design process to stop and think thoroughly about why they are designing, and what role that interface or product feature plays in the big picture.
What about qualitative data?
Yes, yes, qualitative data. There is often an unnecessary conflict between the two datasets, qualitative and quantitative. There are also lots of oversimplifications out there, such as "big data tells you what, qualitative data tells you why". I wouldn't create that dichotomy. If a statement is really necessary, one closer to the truth would be that qualitative data tells you which ideas should be tested, and quantitative data tells you how much those ideas are worth.
Rochelle King states that "qualitative data is equally important in creating this 360 view of the customer". King also mentions surveys and the qualitative team as an integral part of insight generation. "Data science is what we learn at scale; it's one of many inputs".
Notice that the two datasets live at separate points in time: before and after implementation. Win!
Mixed methods, grey areas
Even then, there are grey areas. For example, a usability test with n = 6 could be considered qualitative data, yet it validates the usability of implemented ideas with high confidence, against metrics Nielsen established decades ago.
A well-conducted usability test can give you a strong hypothesis on how much you will gain at scale. For example:
"We have tested the new interface with 6 users and noticed they gain 3 minutes in speed to complete the forms when compared to the old user interface. Across the whole organization, this means a potential gain of 5000 hours of work per year. Considering the impact on white-collar workers, this means about €120,000 per year in salaries."
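To make the back-of-the-envelope maths explicit, here is a quick sketch of that calculation. Only the 3-minute saving and the €120K/year outcome come from the example above; the yearly form volume and hourly labor cost are assumptions I picked so the numbers line up:

```python
# Hypothetical figures: the 3-minute saving and the €120K/year result
# come from the example; completions/year and hourly cost are assumed.
minutes_saved_per_form = 3
completions_per_year = 100_000   # assumed organization-wide volume
hourly_labor_cost_eur = 24       # assumed blended white-collar rate

hours_saved = minutes_saved_per_form * completions_per_year / 60
value_eur = hours_saved * hourly_labor_cost_eur

print(f"{hours_saved:,.0f} hours/year ≈ €{value_eur:,.0f}/year")
# → 5,000 hours/year ≈ €120,000/year
```

Change either assumption and the headline number moves with it, which is exactly why this is a hypothesis to be tested, not a result.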
This is a perfectly credible hypothesis. If you keep A/B testing until you reach statistical confidence, you can then affirm with, say, 98% confidence that the new UI saves about €120K every year, and secure the budget for your next design project.
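What "statistical confidence" looks like in practice can be sketched with a simple test on completion times. All the numbers below are hypothetical (no raw data is published here); with large samples, a z-test on the difference in means is a reasonable approximation:

```python
import math

# Hypothetical A/B results: mean form-completion times in seconds for
# the old and new UI, with samples large enough for a normal
# approximation. None of these figures come from the article.
old = {"mean": 540.0, "sd": 120.0, "n": 400}   # old UI
new = {"mean": 360.0, "sd": 110.0, "n": 400}   # new UI

# z-statistic for the difference in means (Welch-style standard error)
se = math.sqrt(old["sd"] ** 2 / old["n"] + new["sd"] ** 2 / new["n"])
z = (old["mean"] - new["mean"]) / se

# Two-sided critical value for 98% confidence is about 2.33
confident = abs(z) > 2.33
print(f"z = {z:.1f}, significant at 98%: {confident}")
```

If the z-statistic clears the critical value, the observed saving is very unlikely to be noise, and the €120K hypothesis graduates into a defensible claim.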
So, which dataset to choose? The bottom line may be to stop seeing qualitative and quantitative data as rivals or substitutes for one another.
In the words of Steve Johnson, "data tracking informs what you already know". That is quite a statement, because it is often unclear how data analytics can or should work for designers.
Take this to your next data & design discussion:
Even Netflix understands that to get insight into what users want but isn't there yet, one needs qualitative insights from a variety of qualitative methods.
Quantitative data is great for identifying how an implemented idea is performing, and how much value it generates.
To understand how much value a design feature is worth, you need to A/B test your design and attach it to an action the user must perform. The easiest way is to create a hypothesis: if the background has a product photo, will users purchase more?
The data should then be taken to the data science or business team, so they can calculate how much it is worth at scale: if customers are 1% more likely to purchase with the product picture in the background, that means a €100K/year increase in sales. Win!
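For readers who want to see how that pencils out, here is a minimal sketch of the whole loop: a two-proportion z-test on the A/B results, then the value-at-scale calculation. The 1% lift and the €100K/year figure come from the paragraph above; the visitor counts and average order value are made-up assumptions chosen to match:

```python
import math

# Hypothetical A/B counts (only the 1% lift and the €100K/year outcome
# appear in the text; everything else is assumed for illustration).
control = {"purchases": 20_000, "visitors": 100_000}  # plain background
variant = {"purchases": 21_000, "visitors": 100_000}  # product photo

p1 = control["purchases"] / control["visitors"]
p2 = variant["purchases"] / variant["visitors"]

# Pooled two-proportion z-test for the lift
pool = (control["purchases"] + variant["purchases"]) / (
    control["visitors"] + variant["visitors"]
)
se = math.sqrt(pool * (1 - pool) * (1 / control["visitors"] + 1 / variant["visitors"]))
z = (p2 - p1) / se

# Value at scale: lift × yearly traffic × average order value (assumed)
yearly_visitors = 1_000_000
avg_order_eur = 10
extra_sales_eur = (p2 - p1) * yearly_visitors * avg_order_eur

print(f"lift = {p2 - p1:.1%}, z = {z:.1f}, ≈ €{extra_sales_eur:,.0f}/year")
```

The design team owns the hypothesis and the test; the data science team owns the scaling arithmetic. The handoff works precisely because both sides agree on the metric being moved.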