From Support Noise to Product Signal: How to Quantify Customer Feedback

Why we need to stop acting like a cost center and start acting like an insight engine
By Alex Barnett
One of the most frustrating moments in a Support Manager's career is walking into a product meeting, knowing a specific feature is broken, and being told, "We aren't prioritizing that right now."
You know it's a problem. Your team has been fielding angry customers all day, but Product's dashboards look normal, and when Engineering checks the bug tracker, the issue is marked "low severity." They need you to prove it.
Proving it means turning "customers are frustrated" into quantifiable customer feedback that justifies pulling engineers off their roadmap and risking deadlines. This is the core friction of customer support: you are the first to know when something is wrong, but often the last to be heard.
Support speaks in anecdotes, but Product needs data.
Eight Years in the Trenches
I didn't start as a founder building software. I started answering phones at Tier 1.
Over eight years, I moved through Tier 3 escalations, operations engineering, program management, and product management. Each role taught me a different piece of the puzzle. At Tier 3, I learned to read logs when engineering couldn't respond fast enough. As an ops engineer, I built the integrations connecting our tools. As a program manager, I owned the analytics and feedback loop. As a product manager, I used that loop to build a roadmap.
I've sat in every seat. I've seen exactly how the disconnect between support and product actually happens, and how it’s solved.
At every stage, the same problem kept showing up. When you tell engineering "customers are mad about billing," they hear an opinion. When you show them "billing complaints rose 10% yesterday, three standard deviations above normal," they hear a priority.
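That "three standard deviations" framing isn't hand-waving; it's arithmetic any team can run. Here's a minimal sketch of the check behind that sentence. The counts are made up for illustration:

```python
from statistics import mean, stdev

# Daily "billing" complaint counts for the trailing 30 days (made-up numbers).
history = [102, 97, 110, 105, 99, 108, 101, 96, 104, 100,
           103, 98, 107, 102, 95, 109, 101, 100, 106, 99,
           104, 97, 103, 108, 100, 102, 98, 105, 101, 96]
yesterday = 132

# Baseline and spread of the normal day-to-day noise.
mu, sigma = mean(history), stdev(history)
z = (yesterday - mu) / sigma

print(f"baseline: {mu:.1f} +/- {sigma:.1f} complaints/day")
print(f"yesterday: {yesterday} (z = {z:.1f})")
if z >= 3:
    print("Three standard deviations above normal. That's a priority, not an opinion.")
```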
The Manual Trap of Generating Insights
To get that priority, you have to prove the issue is causing churn or blocking growth. But historically, turning support conversations into hard data required a massive operational lift.
In my last role, I managed a department handling 3.5 million tickets a year. Building a reliable pipeline to extract insights at that scale took me (and a team of ten) roughly six months and $50,000 in labor.
The process was grueling:
Manually review thousands of tickets.
Build a taxonomy.
Write training docs and train the team.
QA their work and measure accuracy.
Iterate.
We ran three-week cycles. It took seven to ten iterations before we hit our coverage and accuracy targets. That is too slow and expensive for modern teams. By the time you have the data, the customers have already paid the price.
The solution is automated ticket categorization, but getting it right requires a fundamental shift in how we view AI.
Imagination, Not Hallucination
There is a valid fear around AI "hallucinating." In legal or medical contexts, that fear is justified. But in the context of analyzing customer feedback, what many call hallucination, we call imagination. Imagination is how we sympathize with customers, and it’s exactly what you need to interpret human language.
Customer feedback is messy. There is no hard metric for "annoying" or "confusing" or "feels broken." A script can't process that. But an AI can infer meaning from context, the same way a human analyst would.
Think about a customer who writes, "I keep clicking the button but nothing happens and I'm losing my mind."
That isn't a bug report with clean reproduction steps. It’s frustration wrapped in vague language. An LLM can make the interpretive leap: this indicates high user effort.
Through trial and error building chatbots at scale, I learned one core lesson: ONLY use LLMs when fuzzy interpretation helps. Use traditional code for everything else.
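To make that lesson concrete, here's a rough sketch of what "fuzzy interpretation" looks like in practice. The call_llm function, the prompt, and the field list are placeholders for illustration, not our actual implementation; the point is the shape of the work: one messy message in, one structured record out.

```python
import json

def call_llm(prompt: str) -> str:
    """Placeholder for whatever model API you use (OpenAI, Anthropic, a local
    model). Assumed to return the model's raw text completion."""
    raise NotImplementedError

EXTRACTION_PROMPT = """\
Read ONE customer message and respond with JSON containing exactly these keys:
  "sentiment": integer 1-10 (1 = furious, 10 = delighted)
  "friction_point": short noun phrase naming what blocked the customer
  "intent": one of "bug_report", "billing", "how_to", "feedback", "other"

Message: {message}
JSON:"""

def extract(message: str) -> dict:
    # The LLM does the fuzzy interpretation -- one interaction at a time.
    raw = call_llm(EXTRACTION_PROMPT.format(message=message))
    return json.loads(raw)  # downstream code only ever sees structured fields

# extract("I keep clicking the button but nothing happens and I'm losing my mind.")
# might come back as something like:
# {"sentiment": 2, "friction_point": "unresponsive button", "intent": "bug_report"}
```

Everything after this step is traditional code: the model makes one narrow, interpretive judgment per ticket, and nothing more.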
Why You Can’t Just "Ask ChatGPT"
When LLMs first emerged, I built a prototype in days. I was excited. Then I tested it at scale and realized why so many "AI for Support" tools fail.
If you paste 500 tickets into ChatGPT and ask "what are the trends?", you get an answer that sounds reasonable but is often mathematically wrong.
LLMs are great at language, but they are bad at math. If you ask an LLM to give you the average sentiment score of 1,000 interactions, it will likely guess "7 out of 10." Not because it calculated anything, but because that is the most statistically probable answer to that question.
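The fix is boring on purpose. Once each interaction carries a numeric score (extracted as above), the average is one line of ordinary code, not a guess:

```python
from statistics import mean

# Per-interaction sentiment scores written by the LLM, one number per ticket.
scores = [3, 7, 2, 8, 6, 4, 9, 5, 2, 7]

# Don't ask a model to "average these." Just compute it.
print(f"average sentiment: {mean(scores):.2f}")  # exact: 5.30
```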
To generate reliable customer support insights, we had to build an enterprise-grade architecture that separates interpretation from calculation:
The Agentic Layer (The Reader): We use the LLM to analyze one interaction at a time—extracting sentiment, friction points, and intent.
The Storage Layer (The Library): We store those structured data points in traditional databases.
The Insight Layer (The Math): We run traditional, hard-math queries (SQL, standard deviation) on that structured database.
AI does the analysis. Code handles the counting. You move the needle.
We can tell you, with statistical significance, that billing complaints rose 10% yesterday. That isn't a guess; it's a fact derived from fuzzy data.
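Here's a minimal sketch of how the storage and insight layers fit together, using SQLite for brevity. The schema, table name, and rows are illustrative, not our production design; what matters is that the LLM never touches the aggregation:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE interactions (
        id        INTEGER PRIMARY KEY,
        day       TEXT,     -- e.g. '2024-05-01'
        intent    TEXT,     -- written by the agentic layer, one row per ticket
        sentiment INTEGER   -- 1-10, also written by the agentic layer
    )""")

# The agentic layer's only job is writing rows like these.
conn.executemany(
    "INSERT INTO interactions (day, intent, sentiment) VALUES (?, ?, ?)",
    [("2024-05-01", "billing", 3), ("2024-05-01", "billing", 2),
     ("2024-05-02", "billing", 4), ("2024-05-02", "how_to", 7)],
)

# The insight layer is plain SQL: counts, averages, trends. No LLM in the loop.
for day, complaints, avg_sentiment in conn.execute("""
        SELECT day, COUNT(*), AVG(sentiment)
        FROM interactions
        WHERE intent = 'billing'
        GROUP BY day
        ORDER BY day"""):
    print(day, complaints, round(avg_sentiment, 2))
```

Everything the LLM writes is a narrow, structured judgment about a single ticket; everything downstream is deterministic and auditable.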
Immediate Value, Better Over Time
Because of that approach, our system doesn't need months to learn. Out of the box, Make Data Speak Human uses industry-standard practices and rubrics.
As you use it and provide feedback (thumbs up or thumbs down), it evolves into a bespoke analyst that knows your business. It learns your internal language. It learns what you want for your customers, and what they want from you.
The analysis that used to take my team days now happens continuously… in real time.
From Cost Center to Insight Engine
The goal of this technology isn't just to make support more efficient. It’s to fundamentally change the relationship between Support and Product.
Most companies view support as a cost center. I've spent my career proving it's actually the most valuable insight source a company has. When you can instantly show that an issue affected 400 users this week and represents $15k in potential churn, you aren't just complaining about a bad day. You are helping the company hit its goals.
That is how you turn support chaos into product signal.
You don't need a data science team to understand your customers. You just need to let the data speak for itself.
Your data has the answers. Can you hear what it’s saying?
About the Author

Alex Barnett is the Founder of Make Data Speak Human. His perspective on support operations wasn't formed in a boardroom; it was formed in the queue. Over a fifteen-year career, Alex rose from answering phones as a Tier 1 Support Agent to leading Tier 3 Escalations, filling in as an Analyst, and eventually becoming the Product Manager for Customer Support Operations at Earnin.
At Earnin, he managed the support tooling, automation, and program for a 500-person department handling 3.5 million tickets annually. He has physically done the work he now automates, rebuilding categorization matrices, managing CRM migrations, and bridging the gap between angry customers and busy engineers. Alex now builds the tools he wished he had a decade ago, helping companies turn support noise into clear engineering signal.