The Signal Was Always There

Alex Barnett

CEO

Tools

Why we built the Signal Engine and what it means for proactive teams.

There is a specific kind of dread that comes with the Monday morning dashboard check.

You open your support metrics. Volume looks normal. QA scores are green. Resolution rates are holding steady. You take a breath and tell yourself things are under control.

Then, sometime around noon, someone drops a screenshot in Slack. A customer is furious in a public forum. They have been complaining about the same bug for three weeks. You check your tickets. Yes, the issue is there. Multiple reports are scattered across different agents and tagged inconsistently. Each one looks small on its own. Nobody connected the dots, and nobody flagged it as a pattern.

By the time it is visible on a dashboard, it is already a crisis.

This is the problem I have been trying to solve since I was a Tier 3 agent reading logs at 2 am. It is not just the bug itself. It is the fact that the signal was always there, and we couldn’t find it in time.


The Limits of Standard Support Analytics

Most support tooling is built to answer a simple question: what happened?

It answers that question beautifully. You can slice tickets by category, pull CSAT trends, or export a CSV of everything. The data is all there.

But "what happened" is a lagging question. By the time you are asking it, customers have already felt the pain. The issue has already compounded, and the churn risk is already baked in.

The question I care about is different: what is happening right now, and is it normal?

Answering this does not need more data. It needs math. Specifically, it needs the ability to compare what you see today against a solid baseline and flag the moments where the two split in ways that matter. This is what anomaly detection does, and it is what we just shipped.


How Automated Anomaly Detection Works

Starting now, the Signal Engine monitors your support operations in real time and automatically surfaces anomalies before you think to go looking for them.

The system builds a 30-day rolling baseline for your key metrics like conversation volume, QA scores, first response time, handle time, CSAT, resolution rate, and touches per resolution. It accounts for day-of-week patterns (since Monday is almost always different from Friday) so you are not comparing apples to oranges.
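If you want to picture the mechanics, here is a rough sketch in Python. This is illustrative, not our production code: the data shape (a map from date to a daily metric value) and the three-day minimum are assumptions made for the example.

```python
from datetime import date, timedelta
from statistics import mean, stdev

def weekday_baseline(history: dict[date, float], today: date,
                     window_days: int = 30):
    """Baseline a metric against the same weekday in a trailing window.

    `history` maps each date to that day's metric value (CSAT, volume, etc.).
    Returns (mean, stdev) across matching weekdays, or None if too sparse.
    """
    cutoff = today - timedelta(days=window_days)
    # Compare Mondays only to Mondays, Fridays only to Fridays.
    peers = [value for day, value in history.items()
             if cutoff <= day < today and day.weekday() == today.weekday()]
    if len(peers) < 3:  # too few matching days for a stable estimate
        return None
    return mean(peers), stdev(peers)
```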

This is not a vague alert, but a specific one. It doesn't just say "CSAT seems lower than usual." It tells you something like: "Today's CSAT is lower than any Monday in the last 30 days, affecting 120 people." Think of it like a weather report: you're not getting a barometer reading, you're getting told to bring an umbrella. The statistical rigor is there, under the hood (rolling baselines, day-of-week normalization, standard deviation thresholds), but what you see is a clear signal: this is unusual, here's how unusual, and here's the blast radius.

That is the difference between noise and signal.
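For the curious, the flagging step of the same hypothetical sketch could look like this. The two-sigma threshold and the exact wording of the message are assumptions for illustration, not the engine's actual tuning.

```python
from datetime import date

def check_metric(name: str, today_value: float, affected: int,
                 baseline: tuple[float, float], threshold: float = 2.0):
    """Turn a baseline into a clear signal: how unusual, plus the blast radius.

    `baseline` is the (mean, stdev) pair from the weekday baseline above.
    Deviations are flagged in either direction, so improvements surface too.
    """
    base_mean, base_std = baseline
    if base_std == 0:
        return None  # a flat baseline cannot support a deviation claim
    z = (today_value - base_mean) / base_std
    if abs(z) < threshold:
        return None  # within the normal range: stay quiet
    direction = "lower" if z < 0 else "higher"
    return (f"Today's {name} is {abs(z):.1f} standard deviations {direction} "
            f"than a typical {date.today():%A}, affecting {affected} people.")
```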


What Changes When the Signal Finds You

When I was responsible for insights and alerts within a 500-person support department, the hardest part of my job was not resolving tickets. It was convincing engineering and product managers that a pattern existed.

Saying "customers are frustrated with billing" is an opinion. It gets a nod and a "we'll look into it." Showing up with "billing-related contacts are up 14 percent this week, making this the WORST Tuesday in over a month. ~340 people are affected" is different. 

The distance between those two statements is the difference between being ignored and being prioritized.

This is what I think gets missed in conversations about support analytics. The value is not just catching problems faster, though you will. It is that when the data surfaces itself, when the deviation is quantified and the blast radius is clear, you stop advocating and start reporting. Your team is no longer the department that says "something feels off." You are the department that says "here is exactly what changed, when it started, and how many customers it is touching."

That shifts support from a cost center to an intelligence source. And that shift changes how the rest of the company treats you.


Separating Fires from Slow Burns

Not every anomaly is urgent, and the system treats them differently.

Some alerts are fires. A metric has moved so far outside normal that you need to look at it today. Others are slow burns: a metric that has been quietly drifting in the wrong direction for three or four days. It is not dramatic yet, but if you catch it now, you avoid a crisis next week. The system distinguishes between these and ranks by real impact: how severe the deviation is, how many conversations it touches, and how long it has been building. A week-long trend that is quietly worsening will outrank a one-day spike.
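To make that concrete, a ranking along those lines can fold persistence into the score so that a slow burn outranks a spike. The weights below are invented for illustration; only the three inputs reflect what the ranking actually considers.

```python
def impact_score(deviation_sigmas: float, conversations: int,
                 days_building: int) -> float:
    """Rank anomalies by severity, blast radius, and persistence."""
    severity = abs(deviation_sigmas)              # how far outside normal
    reach = conversations ** 0.5                  # diminishing returns on raw volume
    persistence = 1 + 0.5 * (days_building - 1)   # each extra day raises the stakes
    return severity * reach * persistence

# A modest 2-sigma drift, five days running, across 300 conversations...
slow_burn = impact_score(2.0, 300, 5)
# ...outranks a dramatic 4-sigma one-day spike on 50 conversations.
spike = impact_score(4.0, 50, 1)
assert slow_burn > spike
```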

It also catches the good stuff. When CSAT jumps, when resolution rates improve, when handle time drops, that surfaces too. Your team does good work. You should know when the numbers reflect it.

Alerts show up in a daily digest inside the app, and they pipe into whatever you already use. Slack, PagerDuty, your own internal tooling. Our goal is to meet you where you are, not make you check another dashboard.
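For teams wiring this into their own tooling, the Slack path is only a few lines against a standard incoming webhook. A minimal sketch, assuming a placeholder webhook URL and whatever alert strings the engine produced that day:

```python
import requests

# Placeholder: create an incoming webhook in your Slack workspace and keep
# the real URL in a secret store, not in source code.
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"

def post_digest(alerts: list[str]) -> None:
    """Send the daily anomaly digest to a Slack channel via incoming webhook."""
    if not alerts:
        return  # nothing unusual today; silence beats an empty message
    text = "Signal Engine daily digest:\n" + "\n".join(f"- {a}" for a in alerts)
    response = requests.post(SLACK_WEBHOOK_URL, json={"text": text}, timeout=10)
    response.raise_for_status()
```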


What Comes Next

Right now, the Signal Engine monitors your metrics at the company level. The next phase is segment-level detection (teams, categories, etc.). When a primary alert fires, the system will cascade into specific categories, customer segments, and domains to tell you not just that something changed, but where. Instead of "CSAT dropped," you will see "CSAT dropped for enterprise customers filing billing tickets on Mondays."
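Because this phase has not shipped yet, take the following as a sketch of the shape rather than the implementation: the same anomaly check, re-run per slice, keeping only the segments that independently break threshold. Here `check` stands in for a detection function like the earlier `check_metric` sketch, and each segment maps to the keyword arguments that function expects.

```python
def cascade(metric: str, segments: dict[str, dict], check) -> list[str]:
    """After a company-level alert fires on `metric`, re-run the same
    anomaly check per segment and keep the slices that independently
    break threshold: "CSAT dropped" becomes "CSAT dropped for <segment>".
    """
    localized = []
    for segment, inputs in segments.items():
        alert = check(name=f"{metric} ({segment})", **inputs)
        if alert is not None:
            localized.append(alert)
    return localized
```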

That is the future we are building: a system that does not just tell you something is wrong, but narrows it down fast enough for you to do something about it.

The baseline is running. The hourly checks are live. If a pattern is forming in your support queue right now, you will know about it before it becomes a story someone drops in Slack.
