AI Agent Analytics in AgentFlow helps you understand how deployed AI agents are performing. It provides visibility into agent reliability, knowledge usage, lead quality, and cost, making it easier to optimize flows, improve fallback logic, and plan team coverage.
In this article, we’ll walk you through key metrics available in the dashboard, how to access and filter the data, and what insights you can derive.
Benefits and use cases
Benefits of using AI Agent Analytics
Before deployment (planning and setup):
- Unclear which knowledge sources are used: Identify which content will be useful or needs updating before launching.
- Limited cost control: Estimate potential credit usage per conversation and plan for usage caps.
- Unknown handoff rates by channel: Use channel-level data to decide how to route conversations or assign team coverage.
After deployment (monitoring and optimization):
- No visibility into agent behavior: Track how your agent performs across flows and channels.
- Can’t diagnose why AI exits: Spot system errors, low confidence replies, and frequent handovers.
- Hard to quantify business value: Visualize lead scores and handoff rates to assess AI effectiveness.
- Fragmented channel insights: Compare how your agent performs on WhatsApp, Instagram, and more.
Common use cases
- Reliability monitoring and guardrails tuning: Adjust fallback logic and thresholds based on exit patterns.
- Knowledge coverage audit: Identify which documents are used or ignored to refine your knowledge base.
- Channel-level staffing and routing: Monitor handoff % by platform to plan human support more effectively.
- Cost visibility and control: Review credit usage trends and export data for finance tracking.
- Lead quality tracking: Use average lead scores to evaluate if the agent is reaching the right users.
- Post-deployment troubleshooting: Investigate issues after launching new flows, prompts, or campaigns.
Accessing the AI Agent Analytics dashboard
ℹ️ Note: Requires at least one deployed AI agent in AgentFlow and connected channels to show usage and handoff metrics.
You can follow the steps below to access the AI Agent Analytics dashboard:
- Click the SleekFlow AI icon in the left navigation bar to go to the “SleekFlow AI” page
- On the top menu panel, click on “Analytics”
- In the AI Agent Analytics dashboard, you can pick an existing AI agent in the “AI agent” dropdown to drill into a specific agent
Setting the date range
The AI Agent Analytics dashboard only shows data for the selected time period.
- By default, the dashboard shows data from the last 7 days.
- You can change the Date range filter at the top of the page to view historical trends or zoom in on a specific period.
- All charts and tables will update based on your selected range.
Make sure to select a longer date range if you want to review weekly or monthly performance trends.
Understanding the metrics
The AI Agent Analytics dashboard is divided into 4 key sections. Each section surfaces a different dimension of your AI agents’ performance — from volume and reliability to cost and business outcomes. Use these insights to monitor trends, troubleshoot issues, and evaluate the impact of AI on your customer experience.
Data updates once a day at 12 AM UTC.
Conversations
This section helps you understand how often the selected AI agent is triggered, how active the conversations are, and where the traffic is coming from. Use these metrics to track adoption, troubleshoot flow performance, and measure how effectively your agent is being used across channels.
Total conversations handled
This line chart shows the total number of conversations the selected AI agent was involved in during the selected date range.
What it represents:
- A conversation is counted when the AI agent is triggered and responds at least once
- Each data point reflects daily totals within the selected time period
How it helps:
- Helps you track how often the AI agent is being used
- Use this to monitor changes in engagement volume after campaigns, flow updates, or bot configuration changes
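As a rough illustration of how each data point in this chart is derived, the minimal Python sketch below groups conversation start dates into daily totals. The dates are invented, and this is not the dashboard’s actual implementation:

```python
from collections import Counter
from datetime import date

# Hypothetical conversation start dates within the selected range
# (invented values, not the real export format).
conversation_dates = [
    date(2024, 5, 1), date(2024, 5, 1),
    date(2024, 5, 2),
    date(2024, 5, 3), date(2024, 5, 3), date(2024, 5, 3),
]

# One data point per day: the number of conversations started that day.
daily_totals = Counter(conversation_dates)
for day, total in sorted(daily_totals.items()):
    print(day, total)  # e.g. 2024-05-01 2
```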
Average messages sent per conversation
This chart shows the average number of messages sent by the selected AI agent in each conversation.
It reflects how active the agent is: a higher number means longer or more complex conversations, while a lower number may signal early exits or short replies.
What it represents:
- The average number of messages the AI agent sent per conversation (see the example below)
- Each data point reflects the daily average within the selected time period

How it helps:
- Helps you gauge how much back-and-forth a typical conversation involves
- Use this to spot changes in conversation depth after campaigns, flow updates, or bot configuration changes
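To make the calculation concrete, here is a minimal sketch with made-up message counts; the real figure is computed by the dashboard:

```python
# Hypothetical counts of messages the AI agent sent in each conversation.
messages_per_conversation = [3, 5, 2, 8, 4]

# Average messages sent per conversation = total messages / conversations.
average = sum(messages_per_conversation) / len(messages_per_conversation)
print(f"Average messages per conversation: {average:.1f}")  # 4.4
```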
Conversations per flow
This table shows how many conversations were handled by the selected AI agent, grouped by the Flow Builder flow that triggered it.
What it represents:
- Each row shows a flow name and how many conversations it initiated with the selected AI agent
- Only flows that routed into the AI agent are counted
How it helps:
- Lets you compare the effectiveness of different entry points
- Helps you identify underperforming flows that may need better targeting, setup, or visibility
Flows triggered by channel
This table shows how often the selected AI agent was involved in flows triggered from each connected messaging channel.
What it represents:
- Breaks down AI-triggered flows by originating channel (e.g. WhatsApp, Web Chat, Instagram)
- Includes all flows where the selected AI agent was deployed
How it helps:
- Helps you compare AI usage across platforms
- If a connected channel shows very low activity, it may signal configuration issues or poor channel visibility
AI agent behavior
In this section, you will find metrics that help monitor how often your AI agent exits conversations and which knowledge sources it relies on to generate replies.
Exit occurrences by reason
This table shows the total number of conversations where the selected AI agent exited, grouped by the reason for exit.
What it represents:
- Each row represents a specific reason the AI agent exited a conversation.
- Reasons include:
- Speak to human: The AI handed off the conversation to a live agent — triggered either by fallback logic or user request.
- System error: The agent exited due to a technical failure such as a timeout or broken prompt.
- Low confidence: The AI couldn’t confidently generate a reply and exited to avoid giving a poor response.
- High lead score: If configured, the agent exited when the conversation reached a set lead score, often used to pass qualified leads to humans.
How it helps:
- Identifies the most frequent exit reasons so you can troubleshoot accordingly.
- A high number of Low confidence exits may indicate missing or unclear content in your knowledge base.
- Recurring System errors could point to prompt issues or agent deployment problems.
- Frequent Speak to human exits may suggest that your agent isn’t yet trained to handle common queries — consider expanding its coverage.
- High lead score exits can help confirm that lead qualification logic is working as expected.
Data sources used
This table shows how many times the selected AI agent referred to each knowledge base source when replying to user messages.
What it represents:
- Each row lists a data source that has been linked to this AI agent, including uploaded files, indexed URLs, or custom answers.
- The count indicates how many times that source was referenced during conversations handled by the agent within the selected date range.
How it helps:
- Helps you identify which sources are actively contributing to replies — and which ones aren’t being used at all.
- If a source has zero or very low usage, it may mean:
- The content isn’t relevant to what users are asking
- The source wasn’t indexed correctly or is unreadable
- The flow or AI agent isn’t configured to access that source
💡 Use this metric to:
- Audit which documents are actually useful to the AI
- Remove outdated or unused content
- Fill gaps by adding missing FAQs or fixing broken links
Lead score and AI handover
This section helps you assess how well the selected AI agent is qualifying leads and how often it escalates conversations to human agents. These metrics give you insight into both the quality of interactions and the limits of your AI automation.
Average lead score by conversation
This line chart shows the average lead score of all contacts who spoke with the selected AI agent during the selected time period.
What it represents:
- Lead scores are calculated based on the rules you’ve set in your workspace (via contact properties or Flow Builder)
- This chart averages those scores to give you a sense of the quality of leads coming through AI conversations
How it helps:
- A higher average lead score may suggest your AI agent is attracting or qualifying better leads
- A sudden drop in score could indicate changes in targeting, messaging, or agent logic that are attracting less relevant users
Note: This metric only appears if lead scoring is enabled in your contact properties or agent setup.
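The averaging itself is simple. The sketch below uses invented lead scores; your actual scores come from the rules configured in contact properties or Flow Builder:

```python
# Hypothetical lead scores of contacts who spoke with the AI agent.
lead_scores = [20, 45, 80, 55]

# The chart plots the mean of these scores over the selected period.
average_lead_score = sum(lead_scores) / len(lead_scores)
print(average_lead_score)  # 50.0
```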
Average conversation handover rate
This metric shows the percentage of AI-handled conversations that ended with a handover to a human staff member, whether due to the configured “Exit conversation” logic, low confidence, or a customer request.
What it represents:
- The rate is calculated by dividing the number of handovers by the total number of AI conversations during the selected period (see the worked example below)
How it helps:
- A high handover rate may indicate that your agent isn’t equipped to handle certain topics, or fallback thresholds are too sensitive
- A low handover rate may be a good sign, but only if user needs are being met (not dropped)
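Here is that formula with made-up numbers:

```python
# Hypothetical totals for the selected period.
total_ai_conversations = 200
handovers = 30  # conversations that ended with a handover to a human

# Handover rate = handovers / total AI conversations.
handover_rate = handovers / total_ai_conversations * 100
print(f"Handover rate: {handover_rate:.0f}%")  # 15%
```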
Credits usage
This section helps you monitor how many SleekFlow AI credits your selected agent consumes, both in total and on average per conversation. These insights help you understand cost impact, track efficiency, and plan credit usage more effectively across flows, channels, and agent types.
Credits used
This line chart shows the total number of SleekFlow AI credits consumed by the selected AI agent during the selected date range.
What it represents:
- Includes all credits used by the AI agent when replying to users, regardless of outcome (e.g. successful replies, low confidence exits).
- Each data point reflects daily usage totals based on your date range.
How it helps:
- Lets you monitor overall usage patterns and cost impact over time.
- Spikes in usage may signal higher traffic, long conversations, or inefficient flow logic.
- Useful for reviewing trends after changes to flows, prompts, or fallback logic.
Average credits used per conversation
This chart shows the average number of credits consumed each time the selected AI agent handles a conversation.
What it represents:
- Calculated by dividing total credits used by the number of conversations handled (see the example below).
- Reflects the efficiency of the agent in terms of credit consumption.
How it helps:
- A higher average may indicate longer or more complex conversations — or inefficient prompt setup.
- A consistently low average may reflect optimized flows, short conversations, or high-confidence replies.
- Helps you compare efficiency across agents or use cases, especially if you’re monitoring AI costs closely.
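As a quick worked example of the formula, with invented figures:

```python
# Hypothetical totals for the selected date range.
total_credits_used = 1250
conversations_handled = 500

# Average credits per conversation = total credits / conversations handled.
average_credits = total_credits_used / conversations_handled
print(average_credits)  # 2.5
```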
How to use AI Agent Analytics to improve your AI agent
Use the insights from each section to continuously refine your AI agent setup:
- Reduce early exits: Check the “Exit occurrences by reason” metric to find common failure points. Adjust fallback logic or expand knowledge coverage where needed.
- Clean up your knowledge base: Use the “Data sources used” metric to spot unused or underperforming content. Remove what’s irrelevant and add what’s missing.
- Improve flow design: Review “Conversations per flow” and “Flows triggered by channel” to see which journeys are working. Tweak entry points or targeting for flows with low volume.
- Validate lead quality: Use “Average lead score by conversation” to track whether your AI agent is attracting the right users. Combine it with the handover rate to adjust qualification logic.
- Manage credit usage: Monitor “Credits used” and “Average credits used per conversation” to control costs and identify efficiency gaps.