Call Scoring allows managers to create custom scorecards that use AI and human-evaluated criteria to assess call performance.
This feature is available in all AI-applicable languages: English, French, Spanish, German, Dutch, Italian, and Portuguese.
Access and license requirements
| Feature | AI Assist Standard | AI Assist Pro |
|---|---|---|
| Custom manual questions | Yes | Yes |
| Predefined AI questions | Yes | Yes |
| Advanced analytics dashboard | Yes | Yes |
| Custom AI questions | No | Yes |
| Automated call scoring | No | Yes |
Note: Admins and Supervisors with AI Assist licenses can create and manage scorecards.
Users can evaluate any call with a recording, but only calls with transcripts benefit from AI evaluation. Voicemails cannot be evaluated.
Creating scorecards
Scorecard structure
Scorecards contain questions grouped into categories. Each scorecard has:
- Title and description
- Language setting, which determines the AI question language
- Questions with point values, normalized to 100%
- Categories for organizing evaluation areas
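To illustrate what "normalized to 100%" could mean in practice, here is a minimal sketch. The question names and point values are hypothetical; the product handles normalization automatically.

```python
# Hypothetical point values assigned to scorecard questions.
points = {"Greeting": 10, "Empathy": 30, "Resolution": 20}

# Normalize so each question's weight is its share of the total, summing to 100%.
total = sum(points.values())
weights = {question: 100 * p / total for question, p in points.items()}

for question, weight in weights.items():
    print(f"{question}: {weight:.1f}%")
```

With these values the Empathy question would carry 50% of the scorecard's weight (30 of 60 points).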
Question types
Predefined questions (Standard & Pro)
- Pre-built questions that can be automatically evaluated by AI
- Advanced sensitivity controls and evaluation guidance
Custom questions (Standard & Pro)
- Manually evaluated by supervisors or admins
- Full control over content and criteria
Custom AI questions (Pro only)
- Create your own AI-evaluated questions
- Advanced sensitivity controls and evaluation guidance
- Automated Call Scoring available for hands-free evaluation
Creating custom AI questions (Pro)
When adding a new question, you configure the following fields.
Question type selection
- Yes/No question: Binary evaluation with Yes/No answers
- Ranged evaluation: 5-point scale, from Very poor to Excellent
Scorecard question
- Enter your evaluation question in the text field
- Example: “Did the agent show empathy towards the customer?”
- Use clear, specific language that can be objectively assessed
Not applicable option
- Toggle to allow evaluators, human or AI, to skip this question when it does not apply
- Useful when insufficient data exists to make an assessment
Enable automatic AI evaluations
- Toggle ON to have AI automatically evaluate this question
AI question suggestions
- You can use AI to help you improve your question
AI evaluation sensitivity
Controls how strictly AI evaluates responses:
- Lenient: More forgiving, higher scores for borderline cases
- Strict: More demanding, lower scores for borderline cases
Guidance to answer (recommended)
- Toggle ON to provide evaluation criteria
- Yes criteria field: Define what qualifies as a “Yes” answer
- No criteria field: Define what qualifies as a “No” answer
Example:
- Yes: “The agent identified with the customer's pain, acknowledged their problem, and expressed sympathy.”
- No: “The agent did not express any empathy, was rude, or tried to dismiss the problem.”
Answer types and scoring
| Answer type | Choices | Weight |
|---|---|---|
| Yes/No | Yes / No / Not applicable | 100% / 0% / Excluded |
| Ranged | Excellent / Good / Fair / Poor / Very poor / Not applicable | 100% / 75% / 50% / 25% / 0% / Excluded |
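A global score could follow from the weight table above roughly as sketched below: each answer contributes its weight times the question's points, and Not applicable questions are excluded from the total. The point values and answers are hypothetical.

```python
# Weight fractions from the answer-type table above.
RANGED = {"Excellent": 1.0, "Good": 0.75, "Fair": 0.5, "Poor": 0.25, "Very poor": 0.0}
YES_NO = {"Yes": 1.0, "No": 0.0}

def global_score(answers):
    """answers: list of (question_points, weight_fraction); None = Not applicable."""
    scored = [(pts, w) for pts, w in answers if w is not None]  # N/A excluded
    total_pts = sum(pts for pts, _ in scored)
    if total_pts == 0:
        return None  # nothing applicable to score
    return 100 * sum(pts * w for pts, w in scored) / total_pts

answers = [
    (20, YES_NO["Yes"]),   # earns all 20 points
    (30, RANGED["Good"]),  # earns 75% of 30 points
    (10, None),            # Not applicable: removed from the denominator
]
print(global_score(answers))  # (20 + 22.5) of 50 applicable points -> 85.0
```

Note how excluding the Not applicable question shrinks the denominator, so the remaining answers are re-weighted rather than penalized.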
Scorecard management
All licensed users with the Admin or Supervisor role can create and edit scorecards from AI Assist → Call Scoring.
A scorecard has two states:
- Draft: Not yet available for call evaluations
- Published: Available for use in call evaluations
A user can edit a published scorecard. They can:
- Edit the scorecard name and description
- Add or delete questions and categories
- Edit category titles
- Change the points for each question
When publishing changes, users should be aware that:
- The changes will impact scorecard analytics going forward
- Past conversations will remain evaluated based on the previous version
Users can delete a scorecard, which removes it from the list of available scorecards. Evaluations already made with that scorecard are kept, but its aggregated stats can no longer be accessed.
Conversation review
When reviewing a conversation, managers can:
- Navigate to the Evaluation tab on the conversation page
- Select the desired scorecard
For custom questions, users must select the answer choice, which is used to calculate the question score. They can optionally add an explanation for their choice.
If the scorecard contains AI questions, AI will automatically fill in the choices, scores, and explanations when the scorecard is selected. AI uses the conversation transcript as the only source of information to answer the questions. Users can always edit the AI scores and answers. AI answers are filled in using the AI question language.
Users can also:
- Fill in a general feedback comment for the global score, which is automatically calculated based on the choices and scores for each question
- Delete an evaluation and use the same scorecard, or another one, to run a new evaluation
Aggregation
By clicking a published scorecard in the Scorecard tab, users can access its Scorecard stats.
This section shows:
- The global score per scorecard, calculated by averaging the global scores for all evaluated calls with that scorecard for the selected time period
- A date filter, which uses call dates, not evaluation dates
Users can also see:
- How many conversations were evaluated
- How many agents were evaluated
- The list of conversations included in the scorecard’s global score
The stats for each scorecard also show the global scores per category.
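The aggregation described above can be sketched as an average of per-call global scores, filtered by call date rather than evaluation date. All data here is hypothetical.

```python
from datetime import date

# Hypothetical evaluations made with one scorecard.
evaluations = [
    {"call_date": date(2024, 5, 2),  "global_score": 85.0},
    {"call_date": date(2024, 5, 20), "global_score": 65.0},
    {"call_date": date(2024, 6, 1),  "global_score": 90.0},  # outside the filter
]

# Date filter uses call dates, not evaluation dates.
start, end = date(2024, 5, 1), date(2024, 5, 31)
in_period = [e["global_score"] for e in evaluations
             if start <= e["call_date"] <= end]

print(len(in_period), "conversations, average", sum(in_period) / len(in_period))
```

Here two conversations fall inside the selected period, giving a scorecard global score of 75.0 for May.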
Permission and sharing
All users with the Admin or Supervisor role can:
- Create, edit, and delete a scorecard
- Evaluate a conversation
- View scorecard stats
To share an evaluation with a user who has the Agent role, share the conversation page URL. The evaluated agent will then have access to that evaluation.