How do high-functioning startups deal with this problem? Insights from our conversations with 50+ founders and leaders at customer-obsessed companies.
The goal of customer-driven development is to quantify how well your team is “listening” to customers, and therefore how well it is solving their problems. If you’re tracking feedback carefully, it’s much easier to map your output to customer feedback.
Every team has a cadence at which they plan and complete work. For engineering teams, this is often referred to as a sprint, and for the purpose of this guide, we’ll be referring to it as a sprint too. Usually, these happen in 1-2 week intervals. In order to understand how well you’re listening to customers, we recommend you track the percent of qualified problems that your team was able to solve in a given planning period.
When doing sprint planning, it is essential that you review recent feedback as a team. This keeps everyone in touch with your customers and incorporates customer empathy into decision making. If a team reviews customer feedback regularly, new information in each successive review skews towards recent features shipped by your team, new bugs discovered by your users, and longer-term recurring customer concerns.
The Flex Company also hosts monthly lunch-and-learns for their employees to share learnings from customers. Whenever decisions need to be made, we make sure we have people from the team who have had a direct line of communication with customers. Oftentimes this will be a member of our Customer Success team.
In order to track what percentage of problems are being addressed each week, it’s necessary to first quantify those problems. If you implement the process recommended in this guide, that’s fairly simple. As you collect quotes and organize them around problems, you can “weigh” each of them by the number of unique customers that have brought up that concern.
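The weighting step above can be sketched in a few lines. This is a minimal illustration with a hypothetical feedback log: each quote is recorded as a (problem, customer) pair, and a problem’s weight is the number of unique customers who raised it, so repeat quotes from the same customer don’t inflate it.

```python
from collections import defaultdict

# Hypothetical feedback log: one (problem, customer) pair per collected quote.
quotes = [
    ("slow search", "acme"),
    ("slow search", "globex"),
    ("slow search", "acme"),        # second quote from the same customer
    ("confusing billing", "initech"),
]

# Group the unique customers behind each problem.
customers_by_problem = defaultdict(set)
for problem, customer in quotes:
    customers_by_problem[problem].add(customer)

# A problem's weight = number of unique customers who brought it up.
weights = {problem: len(customers) for problem, customers in customers_by_problem.items()}

print(weights)  # → {'slow search': 2, 'confusing billing': 1}
```

In practice these pairs would come from wherever you store quotes (a spreadsheet, a feedback tool); the point is only that weight counts customers, not quotes.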
It’s important to note that for every customer problem you solve in a given sprint, you might be uncovering 2-3 more. So in order to compute the percentage of problems solved, divide the total weight of problems solved by the total weight of all problems that were “active” during that time span.
For example, say a team plans in two-week sprints. During their planning meeting, they have qualified problems with a collective weight of 100. Over the course of the sprint, they close qualified problems with a total weight of 30. But they also find new qualified problems with a total weight of 20. This means they addressed 30 / (100 + 20) = 25% of customer problems during the sprint.
Unfortunately, shipping a feature does not mean you’ve solved the problem. Once a feature is shipped, the team should always follow up with all the customers who articulated the problem and do two things:
- Notify them that you’ve shipped a feature that might address some of their concerns (see the section on engage your users for more information on this).
- Ask them if it did, in fact, help fix the problem they were facing.
If the majority of respondents say it did indeed help, it’s reasonable to deem the problem solved (for now). Depending on the complexity of the problem, your feature might not be a complete solution, and that’s OK: iteration is necessary. If and when more feedback comes in about the same issue, feel free to start tracking quotes as part of a new problem.
Solving customer problems is a collaborative effort. Design and engineering teams find that referring to customer quotes helps them scope down the implementation and ship the feature faster by solving real customer problems vs. grander, hypothetical ones. Understanding the right trade-offs to make is impossible without understanding the insights coming from customers.
Additionally, seeing customer problems firsthand builds empathy from design and engineering teams. There is nothing more painful than seeing someone struggle to use the software you designed or built. Ultimately, this will result in teams shipping better solutions.
Our engineering team does most of the inbound support. As such, our customers always get quick and technically complete answers. It also allows the engineers to “feel the pain” of our customers. The amount of customer love that we can generate by going the extra mile is the kind of stuff that they tell their friends when they’re starting their podcasts.
It’s natural to wonder what exactly is the right benchmark for this metric. There is no one-size-fits-all answer to this question—the nature of your business and feedback volume largely dictates what your goal should be.
As a rule of thumb, we recommend comparing your team’s week-over-week numbers to get a sense of where you stand, and continually trying to improve them.