The first set of chatbot analytics that is important to admins is generic usage statistics. Key questions include: is the chatbot being used, on which devices, how often, what is the user experience like, and what are the retention and bounce rates in a given time frame? These are the kinds of valuable insights you would get from a chatbot analytics tool for a website.
Other generic statistics, required for all chatbots that use natural language processing, are conversational analytics such as misunderstood phrases, the most frequently used words, and the number of human-intervention/escalation incidents.
These statistics should not only be measured but also integrated into the software in ways that improve the chatbot's user interaction. For example, misunderstood phrases can be added automatically to the list of phrases associated with a given intent in the NLP setup, so that the bot develops a better conversational flow. It is important that admins, or more likely support agents, have easy ways of adding and validating these phrases so that the bot improves quickly.
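As a minimal sketch of this validation loop (the class and method names here are hypothetical, not from any particular bot platform), misunderstood phrases can be queued for agent review and, once approved, attached to an intent as new NLP training data:

```python
from collections import Counter

class MisunderstoodPhraseQueue:
    """Hypothetical sketch: collect phrases the NLP failed to match so a
    support agent can review them and attach them to the right intent."""

    def __init__(self):
        self.pending = Counter()  # phrase -> times seen
        self.intents = {}         # intent name -> list of training phrases

    def record_miss(self, phrase):
        # Called whenever NLP confidence falls below the match threshold.
        self.pending[phrase.strip().lower()] += 1

    def top_misses(self, n=5):
        # Agents review the most frequent misses first.
        return self.pending.most_common(n)

    def approve(self, phrase, intent):
        # Agent validates the phrase and assigns it to an intent;
        # the phrase then becomes new training data for the NLP.
        self.intents.setdefault(intent, []).append(phrase)
        del self.pending[phrase]

queue = MisunderstoodPhraseQueue()
queue.record_miss("wheres my parcel")
queue.record_miss("wheres my parcel")
queue.record_miss("cancel sub")
queue.approve("wheres my parcel", "track_order")
```

Keeping a frequency count means agents spend their time on the misses that affect the most users first.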
While all of the above generic analytics are important, in many cases custom access to chatbot data turns out to be even more important. This is particularly true when the chatbot is being rolled out and piloted, because at the beginning of a bot project, sponsors are eager to show adoption and usage. They will therefore try to make sure the bot is adequately marketed to the pilot users, and if they have done their job correctly, the statistics will show good usage and chatbot success. It is also partly because the chatbot is a novel product: users may be curious to try it initially, which can artificially inflate the usage statistics.
What interests chatbot admins, however, are signs that usage may not be as robust as the initial statistics indicate. And even when the statistics clearly show a usage problem, sponsors want to know why it is happening.
The problem may be easily identified and fixed with generic analytics. At the start, for example, it is very often the case that the NLP setup is not as comprehensive as it should be so the bot misunderstands more than it should. This problem is normally quickly rectified by adding more phrases to the relevant intent in the NLP setup.
Often, however, custom analytics is needed to diagnose a problem. It may be, for example, that explicit feedback needs to be built into the conversation flow in order to identify issues.
Custom analytics is also of particular interest when the bot itself is more customized.
If a chatbot is simple — for example, it takes a user through some sort of decision tree — and there is a usage problem, it may be easy to identify the problem from the generic usage statistics alone. The analytics could indicate the point in the conversation where users lost interest and abandoned it, or the amount of time spent on the bot before abandoning; in both cases this can point to a problem with the flow or with the bot's use case overall.
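A drop-off report of this kind can be computed from conversation logs alone. The sketch below assumes each session is recorded as the ordered list of conversation nodes it visited (the node names are invented for illustration):

```python
from collections import Counter

def drop_off_report(sessions):
    """Hypothetical sketch: given each session as the ordered list of
    conversation nodes it visited, report what share of abandonments
    ended at each node, worst first."""
    last_nodes = Counter(path[-1] for path in sessions if path)
    total = sum(last_nodes.values())
    return {node: count / total for node, count in last_nodes.most_common()}

sessions = [
    ["greet", "menu", "shipping"],
    ["greet", "menu"],
    ["greet", "menu"],
    ["greet"],
]
report = drop_off_report(sessions)
```

A node that accounts for a disproportionate share of last-seen events is a natural candidate for a flow problem.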
If the bot is more complicated, i.e. it has custom logic, the generic statistics will not tell the full story. They might be able to tell you the point that the user abandons, but they won't be able to tell you why the user abandons.
Imagine, as a simple example, that you build a chatbot to help children learn their times tables. A basic approach may be that the children choose the times table in question and the bot randomizes questions from that table. The problem is that children learning their times tables are at different levels, so the engagement rate might fall if they find the questions too difficult at the beginning, or even at a later point in the interaction.
To pick this up we need the analytics to also reflect the difficulty of the questions among other things (and ideally automatically adjust the level). This needs to be built into the custom analytics. And this can only be done if the chatbot building platform supports custom analytics (or more to the point, easily adding custom analytics).
Once the custom analytics is available, developers can use the actionable insight gained to implement a sophisticated approach such as using an algorithm to match the child's level to the questions asked to maximize user retention in the game.
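One simple matching algorithm is a one-up/one-down staircase: step the difficulty up after a correct answer and down after a wrong one, while logging each question's difficulty with its outcome so the custom analytics can relate difficulty to engagement. This is an illustrative sketch, not the method of any specific platform:

```python
def adjust_level(level, correct, min_level=1, max_level=12):
    """Hypothetical one-up/one-down staircase: step the difficulty up
    after a correct answer, down after a wrong one, within bounds."""
    level += 1 if correct else -1
    return max(min_level, min(max_level, level))

class TimesTableSession:
    """Sketch of logging each question's difficulty with its outcome,
    so custom analytics can relate difficulty to engagement."""

    def __init__(self, level=3):
        self.level = level
        self.log = []  # (level, correct) per question answered

    def answer(self, correct):
        self.log.append((self.level, correct))
        self.level = adjust_level(self.level, correct)

session = TimesTableSession(level=3)
for correct in [True, True, False, True]:
    session.answer(correct)
```

The log is the custom-analytics payload: it records both how hard each question was and how the child fared, which is exactly what the generic statistics cannot see.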
This brings us to a critical subject related to custom analytics: A/B testing. With any software, it is difficult to know at the start what might work best in terms of functionality, graphics, and content, and the only way to know definitively is to A/B test different alternatives.
This is true for chatbots as well. The custom analytics needs to be linked to an A/B testing engine inside the chatbot-building platform: within the platform it must be possible not only to generate and tag custom analytics, but also to define A/B tests within the conversation flow.
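The core of such an engine is a deterministic variant assignment plus variant-tagged analytics events, so outcomes can later be compared per variant. A minimal sketch, with invented experiment and event names:

```python
import hashlib

def assign_variant(user_id, experiment, variants=("A", "B")):
    """Hypothetical deterministic split: hash the user and experiment
    names so each user always sees the same variant of a flow."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

events = []

def track(user_id, experiment, event):
    # Every analytics event carries the variant the user saw, so
    # completion and feedback rates can be compared per variant.
    events.append({
        "user": user_id,
        "experiment": experiment,
        "variant": assign_variant(user_id, experiment),
        "event": event,
    })

track("user-1", "greeting_copy", "flow_completed")
track("user-1", "greeting_copy", "feedback_positive")
```

Hashing rather than random assignment means a returning user never flips between variants mid-experiment, which would contaminate the results.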
The sponsor, manager, and developer of the chatbot are all responsible for helping define the analytics required. As mentioned, the custom analytics, at least, depend on the use cases the bot addresses.
The sponsor is clearly interested in the adoption of the conversational interface, and in identifying any impediments to adoption or other issues that might negatively impact user satisfaction.
From a generic chatbot analytics point of view, sponsors would be interested in the key chatbot metrics and KPIs discussed above: usage and retention rates, bounce rate, devices used, misunderstood phrases, and escalation incidents. They may, of course, want to view these statistics over specific periods or with other filters applied.
In terms of custom analytics, they might be interested in feedback, which is normally captured through a node placed in the conversation flow, especially at the endpoint of each flow (whether the outcome was successful or not). They could also be interested in ranking the flows by feedback rating.
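Such a ranking is straightforward to compute once each flow's end node records a rating. A sketch, assuming feedback arrives as (flow name, rating) pairs with invented flow names:

```python
def rank_flows(feedback):
    """Hypothetical sketch: rank conversation flows by average
    feedback rating, best first."""
    totals = {}
    for flow, rating in feedback:
        total, count = totals.get(flow, (0, 0))
        totals[flow] = (total + rating, count + 1)
    averages = {flow: total / count for flow, (total, count) in totals.items()}
    return sorted(averages.items(), key=lambda item: item[1], reverse=True)

feedback = [
    ("track_order", 5), ("track_order", 4),
    ("cancel_order", 2), ("cancel_order", 3),
]
ranking = rank_flows(feedback)
```

The flows at the bottom of the ranking are the ones whose conversation design deserves attention first.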
For the times-table chatbot example, they may be interested in seeing whether there is any correlation between the difficulty level and engagement (the number of nodes traversed).
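That correlation can be checked directly once both quantities are logged per session. The sketch below uses a plain Pearson correlation over invented per-session figures (average question difficulty versus nodes traversed):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient, computed without external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-session data: average question difficulty
# versus engagement (nodes traversed before the session ended).
difficulty = [2, 4, 6, 8, 10]
engagement = [30, 26, 20, 12, 7]
r = pearson(difficulty, engagement)
```

A strongly negative coefficient here would support the hypothesis that harder questions are driving children away, and would justify the adaptive-difficulty approach described earlier.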
The bot managers are of course interested in the above statistics but are also interested in ensuring the smooth operation of the bot. They may require analytics on the performance of the bot across different devices and statistics on the availability of the bot. Have there been any infrastructure or security issues?
They might be interested not only in the behaviour of the user base but also in the behaviour of the super users such as how often they update content or modify the flow. This kind of information could also be mandatory for security reasons.
They would of course also be interested in information regarding the progression of the bots from development, to staging, to production environments, and statistics on developer releases, etc.
The developers are interested in all of the above to the extent that they can use the information to make their enterprise chatbots better. Of course, they would be interested in statistics that identify bugs, such as the statistics coming out of the testing process which will have special tests for bots such as testing for NLP success. In practice, however, the developers and super users are more involved in implementing custom analytics than monitoring them.
I mentioned briefly that integrating analytics into the bot's functionality is critical for successful bot building. A/B testing needs to consume the custom analytics and can then use a simple algorithm to optimize the conversation. More complex integration can optimize the performance of the bot itself, such as the difficulty adjustment mentioned previously for the times-table bot (or, more realistically, a more complex game).
Many large software companies, such as Google, Microsoft, and IBM, offer chatbot analytics services. Although these services can easily provide generic analytics, the case I have made is that, to get the full benefit, analytics needs to be customized and tightly coupled to the functionality of the bot in a way that differs from non-conversational software such as websites. It is therefore essential that the chatbot framework used allows developers to customize the admin panel.