Virtual Agent Cognitive Analytics console

Introduction

Virtual Agent administrators can use the Cognitive Analytics console to analyze and gain insight into use case performance, focusing on whether each use case satisfied or failed to provide a response to the user's query, with emphasis on the factors associated with failure.

By taking these failures into account, the Cognitive Analytics console helps you analyze each use case and provides insight into its performance. Each use case can then be trained further to identify user intents more precisely and accomplish specific tasks.

The console keeps a record of each use case that was triggered during conversations.

Navigation

Setup > Global Settings (or Service Provider) > Service Portal > Virtual Agent > Open Analytics.

Features across dashboards

By default, the Overview dashboard opens first.

Switch to another dashboard

By clicking the Dashboard list dropdown arrow next to Overview in the upper-left corner, you can switch to other dashboards. You may need to scroll up to see the dropdown list, and after selection, wait for the dashboard to load.

Driver details

From the Drivers table on any dashboard, you can click the eye-shaped icon in the Details column to view additional information about each individual use case.

See Details visible from a Drivers table for more information.

Overview dashboard

The Overview dashboard opens first by default. It is an interactive dashboard that provides a unified view of the metrics found across all the other dashboards in the Cognitive Analytics module, presented as cards with graphs and tables.

The following tiles are visible on four different rows on the Overview dashboard.

On the first row, by default, data is presented over the last 7 days within the selected period. On the remaining rows, by default, data is presented over the last 3 months. You can use a filter to set a different data period.

  • Total sessions is a count of the number of times users have initiated a chat session. A session can contain multiple queries, and each query may have resulted in a use case or remained unresolved. The graph represents the number of sessions in the last 7 days within the selected period.

  • Engagement Rate is the percentage of user queries that turned into a valid use case. Formula: (Number of valid use cases / Total queries) %. The graph represents the valid use cases in the last 7 days within the selected period.

  • Resolution Rate is the percentage of Satisfied responses among the feedback received from users. That is, of the use cases for which the user provided feedback as Satisfied or Dissatisfied, the proportion rated Satisfied. Formula: (Number of feedback responses received as Satisfied / Total feedback) %. The graph depicts the use cases that have received feedback in the last 7 days within the selected period.

  • Escalation Rate is the percentage of valid use cases that resulted in a chat transfer or a generated ticket. That is, the number of times a user created a ticket, or started a chat that was transferred to a human agent, because the user was not satisfied with the response. Formula: ((Number of tickets generated + Number of chats transferred) / Total valid use cases) %. The graph represents the sum of tickets generated and chats transferred in the last 7 days within the selected period.

  • Abandonment Rate is the percentage of use cases that did not receive feedback from users. That is, the number of cases where the user did not provide any feedback and left the chat. Formula: ((Number of valid use cases – Number of use cases that received feedback) / Total valid use cases) %. The graph represents the number of use cases that didn't receive feedback in the last 7 days within the selected period. For a worked example of these four rates, see the sketch after this list.

  • Coverage contains the count of channels, domains, languages, regions, and customers from which users have raised queries.

  • Users provides metrics regarding the number of unique users and returning users, along with the average interactions per user.

  • Engagement

    There are two Engagement tiles.

    • The bar chart on the left compares the total queries asked by users that turned into a valid use case (engagement) with the total queries that did not turn into a valid use case.

    • The line chart on the right compares the total number of tickets created by users plus chats transferred to a human agent (escalation), the total valid use cases that received feedback from the user (resolution), and the total valid use cases that did not receive feedback from the user (abandoned).

  • The Rate Drivers table lists the top 5 use cases based on the total number of times they were triggered by users during conversations. Rate percentage denotes how often that particular use case was triggered by users. Impact / Weighted Avg. denotes the overall impact of that particular use case on the tenant.

    In the table view, you can click the four tabs at the top to change the data displayed in the table:

    • Resolution
    • Escalation
    • Abandon
    • Engagement

    Rate Driver details

    From the Rate Drivers table you can click the eye-shaped icon in the Details column to view additional information about each individual use case.

    See Details visible from a Drivers table for more information.
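
The four rate formulas above share a common pattern: a count divided by a base count, expressed as a percentage. The following sketch is illustrative only, written in Python with made-up counts (none of these variable names or numbers come from the product); it simply shows how the Engagement, Resolution, Escalation, and Abandonment rates described above relate to each other.

    # Illustrative only: hypothetical counts, not taken from any real tenant.
    total_queries = 400        # all queries asked during the period
    valid_use_cases = 300      # queries that turned into a valid use case
    feedback_received = 180    # use cases rated Satisfied or Dissatisfied
    satisfied_feedback = 150   # use cases rated Satisfied
    tickets_generated = 40     # tickets created after an unsatisfactory response
    chats_transferred = 20     # chats handed over to a human agent

    # Engagement Rate: valid use cases / total queries
    engagement_rate = valid_use_cases / total_queries * 100                            # 75.0%
    # Resolution Rate: Satisfied feedback / total feedback received
    resolution_rate = satisfied_feedback / feedback_received * 100                     # 83.3%
    # Escalation Rate: (tickets + chat transfers) / total valid use cases
    escalation_rate = (tickets_generated + chats_transferred) / valid_use_cases * 100  # 20.0%
    # Abandonment Rate: use cases with no feedback / total valid use cases
    abandonment_rate = (valid_use_cases - feedback_received) / valid_use_cases * 100   # 40.0%

    print(f"Engagement {engagement_rate:.1f}%, Resolution {resolution_rate:.1f}%, "
          f"Escalation {escalation_rate:.1f}%, Abandonment {abandonment_rate:.1f}%")

Note that with these sample numbers Resolution and Abandonment do not sum to 100%, because they use different denominators (total feedback received versus total valid use cases).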

Users dashboard

Use the Dashboard dropdown list to switch to the Users dashboard.

The Users dashboard provides information about your users overall. It includes user metrics, such as user types and other relevant user information.

The following tiles are visible on five different rows on the Users dashboard. By default, data is presented over the last 3 months. You can use a filter to set a different data period.

  • Unique Users is the count of distinct users who have initiated at least one session during the period.

  • Returning Users is the count of users who have returned again after their first session. The progress bar indicates the percentage of returning users with respect to total users.

  • User Feedback is the count of feedback received from users. That is, the total feedback received from users for a question/query for which a resolution has been provided. The progress bar indicates the percentage of users who provided feedback among the total number of users.

  • Daily Users Interactions is the average count of interactions per day by users. That is, the total number of times users have interacted by either clicking an option or typing a message. The progress bar indicates the percentage of daily user interactions with respect to total interactions. (A worked example of these user percentages appears after this list.)

  • Served Users is the count of unique users who were served with a response.

  • Returning Served Users is the count of users who returned after getting a response in their first session.

  • Dissatisfied Users is the count of users who provided feedback that they were dissatisfied.

  • Dropped Users is the count of users who stopped interacting for some reason and dropped off.

  • Average Users is the average count of total users who interacted over a specific period of time. The graph represents the total number of users over the last 6 months.

  • Exit Breakup shows how users exit the conversation:

    • Human Escalation indicates the percentage of times a user's case was transferred to a human agent.

    • Abandon Rate indicates the percentage of users who exited the conversation without providing feedback.

    • Ticket Generation indicates the percentage of users who created a ticket due to an unsatisfactory response. The graph represents the total count of each for the months of the selected period.

  • The Users/Use Cases graph represents the comparison between the total number of users and the total number of use cases generated in the selected period.

  • Users by Region & Domain is the count of users who have raised queries or questions related to a domain, categorized by region.

  • Users by Region & Customer is the count of users who have raised queries or questions, categorized by customer within each region.
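
The user tiles above are straightforward ratios of user counts. As a rough illustration only (hypothetical Python with invented numbers; this is not the product's internal calculation), the returning-user and feedback percentages and the interaction averages could look like this:

    # Hypothetical counts for illustration only.
    unique_users = 500          # users with at least one session in the period
    returning_users = 120       # users who came back after their first session
    users_with_feedback = 200   # users who provided feedback on a resolution
    total_interactions = 6000   # option clicks plus typed messages
    days_in_period = 30

    returning_pct = returning_users / unique_users * 100     # 24.0%
    feedback_pct = users_with_feedback / unique_users * 100  # 40.0%
    daily_interactions = total_interactions / days_in_period       # 200 per day
    interactions_per_user = total_interactions / unique_users      # 12 per user

    print(f"Returning users: {returning_pct:.0f}%  Users with feedback: {feedback_pct:.0f}%")
    print(f"Average daily interactions: {daily_interactions:.0f}  Per user: {interactions_per_user:.0f}")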

CSAT dashboard

Use the Dashboard dropdown list to switch to the CSAT dashboard.

The CSAT (customer satisfaction) dashboard displays data and metrics related to customer satisfaction with the service or experience. The CSAT score is measured using a questionnaire that asks users to rate their satisfaction on a scale of 1 to 5, with 1 being the lowest and 5 being the highest. The visualizations, such as a chart or progress bar, show the trend over specific criteria. Satisfaction Drivers cards have an advanced Details option to view more detailed information about that particular metric.

The following tiles are visible on two different rows on the CSAT dashboard. By default, data is presented over the last 3 months. You can use a filter to set a different data period.

  • CSAT score over time is the customer satisfaction score received from users. The score depicts the users' level of satisfaction on a scale of 1 to 5, with 1 being the lowest and 5 being the highest. The graph represents the CSAT scores received from users during the selected period. Average CSAT is calculated from the total CSAT scores accumulated for a month. The CSAT Survey Response Rate is the percentage of users who provided a CSAT score in their conversations. To see details, hover over the chart. (A worked example of these calculations follows this list.)

  • CSAT responses by region is the count of Satisfied and Dissatisfied ratings, and the CSAT scores, provided by users from different regions. The graph represents the total number of ratings received as Satisfied or Dissatisfied, as well as the CSAT score received from users in a specific region.

  • CSAT by domain is the count of CSAT scores provided by users in various domains.

  • The Customer satisfaction drivers table provides detailed information for the top 10 use cases (based on how often they were triggered by users), such as the count of valid use case sessions, the user feedback rate, the rate of users who left without providing feedback, the rate of tickets generated or chats transferred, the average CSAT, and the CSAT impact on the overall tenant.

    Customer satisfaction drivers details

    From the Customer satisfaction drivers table you can click the eye-shaped icon in the Details column to view additional information about each individual use case.

    See Details visible from a Drivers table for more information.
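
As a rough sketch of how the numbers above combine (hypothetical Python with invented ratings; the product's exact aggregation may differ), the Average CSAT and the CSAT Survey Response Rate can be thought of as:

    # Hypothetical CSAT ratings (scale 1-5) collected during one month.
    csat_scores = [5, 4, 4, 3, 5, 2, 5, 4]   # one entry per survey response
    conversations_in_month = 40               # conversations held in the same month

    average_csat = sum(csat_scores) / len(csat_scores)                      # 4.0
    survey_response_rate = len(csat_scores) / conversations_in_month * 100  # 20.0%

    print(f"Average CSAT: {average_csat:.1f} / 5")
    print(f"CSAT survey response rate: {survey_response_rate:.0f}%")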

Conversations dashboard

Use the Dashboard dropdown list to switch to the Conversations dashboard.

The Conversations dashboard provides information on use case and intent metrics with reference to language, customer, region, channel, and domain.

The following tiles are visible on nine different rows on the Conversations dashboard. By default, data is presented over the last 3 months. You can use a filter to set a different data period.

  • Unique Intents is the count of unique intents asked by users. That is, a distinct request or question a user made while having a conversation.

  • Total Queries is the count of queries raised by users.

  • Average Strike Rate is the average percentage of user queries identified as valid questions, that is, questions that can be answered by one of the configured or trained use cases with a satisfactory response or resolution. Formula: (Valid Questions / Total Queries) %. (See the sketch after this list for a worked example.)

  • Unanswered Queries is the count of queries that were not resolved to a use case or not responded to with a solution. This can happen because of unclear user input, grammatical errors, or issues outside the scope of the implementation or configuration.

  • Response Rate is the count of user queries for which a valid response or a solution is provided to the user. The progress bar indicates the percentage of valid responses with respect to total queries.

  • Article Triggered is the number of times an article is triggered in response to users for a specific query. This refers to an informative or educational article that is suggested based on the user's query or request for more information.

    The progress bar indicates the percentage of articles triggered with respect to the use case.

  • Script Triggered is the number of times a script is triggered in response to users for a specific query. A script refers to a predefined sequence of messages or actions initiated based on the user's input or query.

    The progress bar indicates the percentage of scripts triggered with respect to the use case.

  • SOP Triggered is the number of times an SOP (Standard Operating Procedure) is triggered in response to users for a specific query. The progress bar indicates the percentage of SOPs triggered with respect to the use case.

  • Confidence Score is the confidence with which the user query is understood and mapped to an intent. The confidence score is presented as a percentage and represents the level at which the Virtual Agent attempts to resolve the user query based on the training of the use cases.

    The progress bar indicates the percentage of the average confidence score with respect to use cases.

  • Fallback is the count of fallbacks provided as a backup response. Based on the implementation/configuration, a list of the top 5 intents/use cases that most closely match the user's query, ranked by confidence score, is provided. The Virtual Agent can also prompt questions with options to further narrow down the use cases that might solve the user's query.

    The progress bar indicates the percentage of fallbacks triggered with respect to total queries.

  • Human Takeover is the count of human takeovers that occurred in a conversation to provide personalized assistance when the user's issue or needs are not fully addressed with a satisfactory response.

    The progress bar indicates the percentage of human takeover with respect to the use case.

  • Abandonment is the number of times a user exits the conversation without or before providing feedback. The progress bar indicates the percentage of use cases for which no feedback was provided.

  • The Conversations graph represents a cumulative count of conversations that occurred in each week of the month of the selected period.

  • The Feedbacks graph represents the total count of conversations that received feedback as Satisfied or Dissatisfied, as well as those where the user did not provide feedback.

  • Conversations by Channel is the total count of conversations triggered across multiple channels, for example, MS Teams, Web, or Slack.

  • The Conversations by Language graph represents the total count of conversations across multiple languages, for example, English, French, or German, configured or implemented.

  • The Conversation By graph represents the total conversations for a combination of parameters (Region & Domain or Region & Customer) to provide a comparison.

  • The Usecase By graph represents the top 10 use cases based on how often they were triggered in conversations for the selected combination of parameters.

  • The Tree View graph provides a visual representation of the intents from the origin. This is displayed in a way that allows for easy navigation by language, channel, region, and customer.

  • The Intents table provides detailed information for each intent, such as the count of that particular intent, the rate of users who left without providing feedback, the users who provided Satisfied or Dissatisfied feedback, the total count of chats transferred and tickets generated, totals by channel, region, customer, and language, the total count of unique users, and the average feedback for that intent.

    You can change the number of entries displayed by clicking the down arrow at the bottom of the page. You may need to use the horizontal scroll bar at the bottom to see the entire contents of the table.
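
Several of the Conversations tiles above are percentages relative to total queries. The following is a minimal sketch (hypothetical Python, invented counts, not the product's internal implementation) of the strike rate formula given above, alongside response and fallback percentages:

    # Hypothetical query counts for one reporting period.
    total_queries = 1000
    valid_questions = 720    # queries matched to a configured or trained use case
    valid_responses = 680    # queries answered with a valid response or solution
    fallbacks = 90           # queries answered with a fallback list of close matches

    strike_rate = valid_questions / total_queries * 100    # 72.0%
    response_rate = valid_responses / total_queries * 100  # 68.0%
    fallback_rate = fallbacks / total_queries * 100        # 9.0%

    print(f"Strike {strike_rate:.0f}%, Response {response_rate:.0f}%, Fallback {fallback_rate:.0f}%")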

Performance dashboard

Use the Dashboard dropdown list to switch to the Performance dashboard.

The Performance dashboard provides an overview of Key Performance Indicators (KPIs). The metrics are displayed visually in both graph and table format. By default, data is presented over the last 3 months. You can use a filter to set a different data period.

  • Users is a count of unique users who initiated at least one session for the period.

  • Usecases is a count of user queries that turned into a valid use case.

  • SOP is the number of times an SOP (Standard Operating Procedure) is triggered in response to users for a specific query.

  • Transfer Rate is the percentage of conversations that are transferred to a human agent when the user's needs are not fully addressed with a satisfactory response.

  • Ticket Deflection is the count of user queries resolved by providing a satisfactory solution that otherwise could have resulted in a ticket.

  • Resolved W/O Ticket is the percentage of user queries resolved without the need for users to open a support ticket.

  • W/O Solution is the percentage of user queries that were not resolved and did not result in a solution for the user. (See the sketch after this list for a worked example of these percentages.)

  • The Performance graph compares the monthly total count of use cases that received feedback (Resolution), the total count of tickets generated or chats transferred (Escalation), and the use cases that did not receive feedback (Abandoned).

  • The Customer Conversation Breakdown table shows the total articles triggered together with the total tickets generated, per customer.

  • Performance By represents the total count of feedback provided by users (Resolution), the total count of tickets generated and chats transferred (Escalation), the total count of cases with no feedback provided (Abandonment), and the total counts of Satisfied and Dissatisfied, with respect to region, channel, domain, and language.

    You can click the Switch chart icon in the upper-right corner of the chart to switch the type of chart that displays.
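
As a short, illustrative sketch (hypothetical Python with invented counts; the exact denominators the dashboard uses may differ), the Transfer Rate, Resolved W/O Ticket, and W/O Solution percentages described above relate to each other like this:

    # Hypothetical counts for the period. The dashboard's exact denominators
    # (conversations versus queries) may differ; this is only an illustration.
    total_queries = 500
    transferred_to_agent = 50       # conversations handed to a human agent
    resolved_without_ticket = 350   # queries resolved without a support ticket
    without_solution = 60           # queries that never received a solution

    transfer_rate = transferred_to_agent / total_queries * 100               # 10.0%
    resolved_wo_ticket_rate = resolved_without_ticket / total_queries * 100  # 70.0%
    wo_solution_rate = without_solution / total_queries * 100                # 12.0%

    # Ticket Deflection is reported as a count of queries that were resolved
    # satisfactorily and might otherwise have become tickets.
    ticket_deflection = resolved_without_ticket

    print(f"Transfer {transfer_rate:.0f}%, Resolved w/o ticket {resolved_wo_ticket_rate:.0f}%, "
          f"W/O solution {wo_solution_rate:.0f}%, Tickets deflected: {ticket_deflection}")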

Unresolved dashboard

Use the Dashboard dropdown list to switch to the Unresolved dashboard.

The Unresolved dashboard includes information about questions or queries that were not resolved to a use case or were not responded to with a solution. This page contains graphical views of the metrics, along with a table that provides more detailed information about each query.

The following charts are visible on the Unresolved dashboard. By default, data is presented over the last 3 months. You can use a filter to set a different data period.

  • Unresolved Queries is the count of queries that were not resolved to a use case or not responded to with a solution. This can happen because of unclear user input, grammatical errors, or issues outside the scope of the implementation or configuration. The graph represents the total count of unresolved queries in each week of a particular month of the selected period.

  • The Tree View graph provides a visual representation of unresolved queries from the origin. The related unresolved queries are grouped together and displayed in a way that allows for easy navigation by language, channel, region, and customer.

  • The Unresolved graph represents the total count of unresolved queries in a particular region, language, channel, customer, and domain over the selected period.

  • The Unresolved Region By 2-dimensional graph represents the total count of unresolved queries from various regions that belong to specific customers and the total count of unresolved queries from various regions that belong to specific domains.

  • The Unresolved Queries List table provides detailed information for each query, for example, personality, solution used, chat and conversation details, and the created time of each query.

Reporting dashboard

Use the Dashboard dropdown list to switch to the Reporting dashboard.

The Reporting dashboard visually tracks, analyzes, and displays key performance indicators (KPI), metrics, and data points for Virtual Agent conversations. You can click Graphs in the upper-right corner to switch to a view that displays graphs instead of metrics.

The following statistics-related tiles are visible on three different rows on the Reporting dashboard.

  • Questions is a count of valid queries submitted by users.

  • Satisfied is a count of valid queries for which positive feedback was provided.

  • Dissatisfied is the count of valid queries for which negative feedback was provided.

  • Resolution Rate is the ratio of positive feedback to the total number of queries for which feedback was provided.

  • No Feedback is the count of valid queries for which no feedback was provided.

  • Unique Users & Returning Users is the count of distinct users (unique users) and the count of unique users who have returned after their first session (returning users).

    Each of the tiles is broken down into the following geographic areas:

    • Asia Pacific
    • Europe
    • Middle East / Africa
    • Central & South America
    • North America
  • The Frequently Asked Questions table represents the list of use cases with the total count of Total Questions, Feedback as Satisfied (Resolution Yes), and Feedback as Dissatisfied (Resolution No).

In the upper-right corner of the Reporting dashboard, you can click the Graphs button to display information in graph format.

Reporting graphs display the following:

  • The Total Questions graph represents the total count of valid queries raised from specific regions.

  • The Users by Region graph represents the total number of Unique and Returning users from specific regions.

  • The Monthly Breakup graph represents the total count of monthly data for each category.

  • The Unique Users Per Month graph represents the total count of unique users in that specific month for each region.

Details visible from a Drivers table

From any Drivers table that you can access from a dashboard, you can click the eye-shaped icon in the Details column to view additional information about each individual use case.

  • The Questions total indicates the count of valid queries made by users. Satisfied indicates the count of valid queries for which positive feedback was provided. Dissatisfied indicates the count of valid queries for which negative feedback was provided.

  • Users provides metrics regarding the number of unique users and returning users.

  • Performance displays the counts for the previous and current month, with a graph that indicates the data for each day of the week.

  • SOP displays the unique count and the total number of times an SOP (Standard Operating Procedure) was triggered in response to users for a specific query.

  • Solution is the total count of Articles and Scripts provided as solutions for queries.

  • Escalation is the total count of tickets generated and chats transferred for that query.

  • Region is the count of occurrences from the list of regions.

  • Language is the count of occurrences from the list of languages.

  • Channel is the count of occurrences from the list of channels.

  • The User Queries table lists each query along with its satisfaction result, whether an SOP was provided, what type of escalation occurred for that query, and chat and conversation information, including the created time of each query.