What Does Outlearn Track and Why Does It Matter?
Your agent is live - now find out how it's actually performing.
It is the end of your first month with Outlearn live. You open Analytics and see the numbers: 847 conversations handled, 91% resolved without a human, 4.6 satisfaction rating, and a Content Gaps list showing exactly which questions to add next. You know precisely how your agent is performing, what it is doing well, and what to improve this week.
That visibility starts from the moment your first conversation happens. Analytics tracks everything - resolutions, handoffs, satisfaction ratings, content gaps, conversation volume - and turns it into a live feedback loop that makes your agent measurably better over time.
In this article, you'll learn:
- What the Analytics tab contains
- What each metric measures and why it matters
- How to use Analytics to continuously improve your agent
What Gets Tracked
Outlearn tracks everything that happens in your agent's conversations automatically - there's nothing to configure or enable. Open the Analytics tab and the data is already there.
Analytics is organized into two views:
- Overview - high-level metrics, charts, and summary panels that tell you how your agent is performing at a glance
- Conversation & Training - every individual conversation your agent has had, searchable and filterable
Why Analytics Matters
Most teams set up their agent, go live, and then wonder whether it's actually working. Analytics answers that question with data - and more importantly, it tells you what to do next.
The four core metrics tell you:
| Metric | What it measures | Why it matters |
|---|---|---|
| Total Conversations | How many conversations your agent has handled | Shows adoption - are users actually using it? |
| Auto-Solved | Percentage of conversations resolved without human handoff | The most important metric - shows how much your agent resolves on its own |
| Time Saved | Estimated time your team saved from automated resolutions | Translates agent performance into business value |
| Satisfaction Score | Average user rating after conversations | Shows whether users are actually happy with the answers they're getting |
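As a rough illustration, the core metrics reduce to simple arithmetic over conversation records. This is a sketch, not Outlearn's actual implementation - the field names and the minutes-per-ticket figure are hypothetical assumptions:

```python
# Hypothetical conversation records - field names are illustrative,
# not Outlearn's real data model.
conversations = [
    {"resolved_without_human": True,  "rating": 5},
    {"resolved_without_human": True,  "rating": 4},
    {"resolved_without_human": False, "rating": 3},
]

# Total Conversations: raw adoption count
total = len(conversations)

# Auto-Solved: share resolved with no human handoff
auto_solved = sum(c["resolved_without_human"] for c in conversations) / total

# Time Saved: automated resolutions x an assumed per-ticket estimate
MINUTES_PER_TICKET = 8  # assumption - tune to your team's own averages
time_saved_minutes = (
    sum(c["resolved_without_human"] for c in conversations) * MINUTES_PER_TICKET
)

# Satisfaction Score: mean of post-conversation ratings
rated = [c["rating"] for c in conversations if c["rating"] is not None]
satisfaction = sum(rated) / len(rated)

print(total, round(auto_solved, 2), time_saved_minutes, satisfaction)
# → 3 0.67 16 4.0
```

The point of the sketch: each headline number is a plain ratio or sum, so when a metric moves, you can always trace it back to individual conversations in the Conversation & Training view.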
What a Healthy Agent Looks Like
There's no universal benchmark - it depends on your use case, your content, and your users. But here are useful signals to watch for:
- Auto-Solved rate going up - your agent is getting better at resolving conversations independently. This is what you want to see over time as you add sources and improve instructions.
- Auto-Solved rate going down - something changed. New questions your agent can't answer, outdated sources, or a change in what users are asking. Investigate with Content Gaps and Unresolved conversations.
- Satisfaction Score dropping - users aren't happy with the answers. Could be accuracy, tone, or response length. Check Conversation & Training to read actual exchanges.
- Handoff Reasons showing a pattern - if the same reason keeps triggering handoffs, it's a signal that your agent should be able to handle that scenario but can't yet. Add sources or update instructions.
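If you export conversation data, spotting a dominant handoff reason is a one-liner. A minimal sketch, assuming hypothetical exported records with a `reason` field (the labels below are made up):

```python
from collections import Counter

# Hypothetical exported handoff records - labels are illustrative only.
handoffs = [
    {"reason": "billing dispute"},
    {"reason": "refund request"},
    {"reason": "billing dispute"},
    {"reason": "billing dispute"},
]

# Tally reasons; one reason dominating the counts is the pattern to act on.
counts = Counter(h["reason"] for h in handoffs)
top_reason, top_count = counts.most_common(1)[0]
print(top_reason, top_count)  # → billing dispute 3
```

A reason accounting for most of your handoffs is usually the single highest-leverage place to add a source or tighten an instruction.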
Best Practices
- Check Analytics at least once a week for the first month after going live - the early data shapes your most important improvements.
- Don't obsess over Total Conversations in isolation. A low volume with a high Auto-Solved rate is better than high volume with lots of handoffs and low satisfaction.
- Use the date range filter to compare periods - week over week or month over month comparisons reveal trends that a single snapshot misses.
- Think of Analytics as a feedback loop, not a report card. Every gap it reveals is an opportunity to make your agent better.