
Conversation Quality is a binary metric that assesses whether a chatbot interaction left the user feeling satisfied and positive or frustrated and dissatisfied, based on tone, engagement, and overall experience.
The Conversation Quality metric evaluates user satisfaction across an entire chatbot session by analyzing tone, engagement, and sentiment. It classifies each conversation as GOOD or BAD depending on whether the user’s overall experience reflects positive engagement or frustration directed at the bot. The metric focuses on conversational flow rather than task success, emphasizing how naturally and politely the user and bot interact. It excludes non-textual or purely action-based agent outputs (e.g., button clicks). This is a boolean metric, returning a confidence score that the conversation quality is good. The score ranges from 0% (no confidence the conversation quality is good) to 100% (complete confidence that the conversation quality is good).
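
To make the scoring model concrete, the sketch below shows one way a session-level, LLM-as-a-judge boolean metric can be turned into a percentage confidence score. It is a minimal illustration only, not the platform's implementation; the judge_llm callable, the prompt wording, and the multi-judge voting are all assumptions.

```python
from typing import Callable, Dict, List

Turn = Dict[str, str]  # e.g. {"role": "user", "content": "..."}


def conversation_quality_confidence(
    session: List[Turn],
    judge_llm: Callable[[str], str],  # hypothetical judge: returns "GOOD" or "BAD"
    num_judges: int = 3,
) -> float:
    """Return a 0-100% confidence that the conversation quality is good."""
    # Only textual turns are considered; purely action-based outputs
    # (e.g. button clicks) are excluded from the transcript.
    transcript = "\n".join(
        f"{turn['role']}: {turn['content']}"
        for turn in session
        if turn.get("content")
    )
    prompt = (
        "Based on tone, engagement, and overall sentiment, was the user's "
        "experience in this conversation GOOD or BAD?\n\n" + transcript
    )
    # Ask several judges and treat the share of GOOD verdicts as the confidence.
    votes = [judge_llm(prompt) for _ in range(num_judges)]
    good_votes = sum(vote.strip().upper() == "GOOD" for vote in votes)
    return 100.0 * good_votes / num_judges
```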

Conversation Quality at a glance

  • Name: Conversation Quality
  • Category: Agentic AI
  • Can be applied to: Session
  • LLM-as-a-judge Support
  • Luna Support
  • Protect Runtime Protection
  • Value Type: Boolean, shown as a percentage confidence score

When to use this metric

Conversation Quality is a useful metric when working with chatbots or other applications that involve extended, conversation-driven user interaction.

Score interpretation

Expected Score: 80%-100%

  • Poor (0-60%): Many conversations indicate frustration, impatience, or dissatisfaction directed at the bot
  • Fair (60-80%)
  • Excellent (80-100%): Most conversations reflect positive user sentiment, polite engagement, and satisfaction
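
When acting on these scores programmatically, a common pattern is to flag any session that falls below the expected range for review. The 60% and 80% thresholds below mirror the bands above and are illustrative cut-offs, not built-in behavior:

```python
def quality_band(score: float) -> str:
    """Map a Conversation Quality confidence score (0-100) to a band."""
    if score >= 80.0:  # documented expected range: 80%-100%
        return "excellent"
    if score >= 60.0:
        return "fair"
    return "poor"


# Flag sessions that fall outside the expected range for manual review.
scores = {"session-1": 92.0, "session-2": 55.0, "session-3": 71.0}
needs_review = [sid for sid, s in scores.items() if quality_band(s) != "excellent"]
print(needs_review)  # ['session-2', 'session-3']
```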

How to improve Conversation Quality scores

Some techniques to improve Conversation Quality scores are:
  • Ensure bots provide clear, empathetic, and concise responses
  • Detect and mitigate repeated clarification loops (a simple detection heuristic is sketched after these lists)
  • Train models to de-escalate external frustration effectively
  • Log complete sessions to allow accurate tone assessment
Common issues that can cause low scores are:
  • Mislabeling external frustration as bot-directed
  • Incomplete logs
  • Abrupt session truncation
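
One of the techniques above, detecting repeated clarification loops, can be approximated with a lightweight heuristic before sessions are ever scored. The marker phrases and threshold below are illustrative assumptions, not part of the metric itself:

```python
from typing import Dict, List

CLARIFICATION_MARKERS = (
    "could you clarify",
    "can you rephrase",
    "i'm not sure what you mean",
    "what do you mean",
)


def has_clarification_loop(session: List[Dict[str, str]], max_in_a_row: int = 2) -> bool:
    """Return True if the bot asks for clarification more than `max_in_a_row` times in a row."""
    streak = 0
    for turn in session:
        if turn.get("role") != "assistant":
            continue  # only assistant turns can be clarification requests
        text = turn.get("content", "").lower()
        if any(marker in text for marker in CLARIFICATION_MARKERS):
            streak += 1
            if streak > max_in_a_row:
                return True
        else:
            streak = 0
    return False
```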

Performance Benchmarks

We evaluated Conversation Quality against human expert labels on an internal dataset of agentic conversation samples using top frontier models.
Model                      F1 (True)
GPT-4.1                    0.89
GPT-4.1-mini (judges=3)    0.85
Claude Sonnet 4.5          0.85
Gemini 3 Flash             0.88

GPT-4.1 Classification Report

Class    Precision    Recall    F1-Score
False    0.91         0.83      0.87
True     0.85         0.93      0.89
Confusion Matrix (Normalized)

                  Predicted True    Predicted False
Actual True       0.925             0.075
Actual False      0.173             0.827
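
As a quick sanity check on the numbers above, each F1 score is the harmonic mean of its precision and recall, and the True-class recall (0.93) matches the Actual True / Predicted True cell of the normalized confusion matrix (0.925) up to rounding:

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)


print(round(f1(0.85, 0.93), 2))  # True class  -> 0.89
print(round(f1(0.91, 0.83), 2))  # False class -> 0.87
```
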
Benchmarks based on internal evaluation dataset. Performance may vary by use case.
If you would like to dive deeper or start implementing Conversation Quality, check out the following resources:

Examples

  • Conversation Quality Examples - Log in and explore the “Conversation Quality” Log Stream in the “Preset Metric Examples” Project to see this metric in action.

How-to guides