There is a lot of hype about chatbots and AI, and of course people implementing a chatbot want to demonstrate something remarkable and intelligent.
Even though they know that chatbots are good at identifying the intent behind one-off natural-language statements, and that human-level conversation is not possible (if it were, Google Home and Alexa would be orders of magnitude better), they still expect to see some impressive level of intelligence and AI.
With every technology it is critical to understand the limitations and to make sure that the user experience design not only makes the best use of the technology but also takes account of the limitations.
The simplest and most effective strategy for ensuring that a chatbot makes the best use of AI is actually counterintuitive: always keep a human in the loop as a backup for the chatbot, so that whenever the chatbot doesn't understand what the end user wants, the conversation is escalated to a human.
The reason for this is twofold. Without a human backup, a chatbot that doesn't understand a question simply fails. This creates a bad user experience, because either the user needs to know everything the chatbot can do up front, or they have to guess what it can do and be prepared to be disappointed frequently.
If you have human agents behind the chatbot, it totally changes the experience for the end user: they can be confident that the question they ask will be handled, whether by the chatbot or by a human. Instead of representing potential downside when it doesn't know the answer, the chatbot now represents potential upside, because if it can answer the question, the question is answered faster.
It is important that the human agent provides support in real time rather than after a delay, as this preserves the conversational feel and reinforces a positive impression of the chatbot.
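The escalation logic described above can be sketched in a few lines. This is a minimal, illustrative example, not any particular platform's API: it assumes a hypothetical NLU function that returns an intent plus a confidence score, and the threshold value is an arbitrary placeholder you would tune per deployment.

```python
from dataclasses import dataclass

# Assumption: below this confidence, the bot hands off to a human agent.
CONFIDENCE_THRESHOLD = 0.7


@dataclass
class NluResult:
    intent: str
    confidence: float


def route_message(text, nlu):
    """Route to the bot when the NLU is confident, otherwise escalate."""
    result = nlu(text)
    if result.confidence >= CONFIDENCE_THRESHOLD:
        return f"bot:{result.intent}"
    return "human:escalated"


# Toy stand-in for a real NLU engine (hypothetical behaviour).
def toy_nlu(text):
    if "hours" in text:
        return NluResult("opening_hours", 0.93)
    return NluResult("unknown", 0.2)


print(route_message("what are your hours?", toy_nlu))  # bot:opening_hours
print(route_message("asdf qwerty", toy_nlu))           # human:escalated
```

In a real deployment the `human:escalated` branch would push the conversation into a live-agent queue rather than return a string, but the shape of the decision is the same.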
By having humans answer questions in the process, the AI can keep improving its map of questions to answers so that it can answer more questions in the future. This can work through suggested improvements that a human operator approves, or through a process in which tentative answers are validated by end users (by observing their response to the answer provided, in much the same way that Google does it).
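That validation loop can be sketched as a small store of candidate answers: a human agent's reply is recorded as a candidate, and it is only promoted to an approved answer once enough end users signal that it was helpful. All names and the promotion threshold here are hypothetical, chosen just to illustrate the flow.

```python
from collections import defaultdict


class AnswerStore:
    """Candidate Q&A pairs that are promoted only after user validation."""

    def __init__(self, promote_after=3):
        self.promote_after = promote_after  # assumption: tune per use case
        self.candidates = defaultdict(lambda: {"answer": None, "positive": 0})
        self.approved = {}

    def record_human_answer(self, question, answer):
        """A human agent answered; store it as a tentative candidate."""
        self.candidates[question]["answer"] = answer

    def record_user_feedback(self, question, helpful):
        """Implicit or explicit feedback from the end user."""
        candidate = self.candidates.get(question)
        if candidate and helpful:
            candidate["positive"] += 1
            if candidate["positive"] >= self.promote_after:
                self.approved[question] = candidate["answer"]

    def lookup(self, question):
        """The bot only answers from validated pairs."""
        return self.approved.get(question)


store = AnswerStore(promote_after=2)
store.record_human_answer("do you ship abroad?", "Yes, to the EU and UK.")
print(store.lookup("do you ship abroad?"))  # None (not validated yet)
store.record_user_feedback("do you ship abroad?", helpful=True)
store.record_user_feedback("do you ship abroad?", helpful=True)
print(store.lookup("do you ship abroad?"))  # Yes, to the EU and UK.
```

A production system would match paraphrases of the same question rather than exact strings, but the gating idea — the bot only serves answers that users have validated — is the point.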
The one exception to needing a human in the loop (or at least a way of limiting human involvement) is building chatbots for an extremely narrow domain.
Such a bot would very quickly learn all the detours involved in that kind of conversation and could almost eliminate the need for a human agent for all but the most difficult questions, allowing it to interact at close to a human level.
Something else to consider, however, is that a 100% natural-language bot is better suited to voice than to text. Typing, especially on a phone, is extremely inefficient, so pure natural-language interaction is not the best fit for a typed interface.
If the user is typing, it's often best to use graphical widgets, with the option for the user to type if an issue arises. The user can usually complete the task at hand just by clicking widgets, as they would on a web page (what we in the bot world call staying on the "happy path"), but there is always the option to go off script and type a question (what we call taking a "detour").
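The happy-path/detour split amounts to two small pieces: a message payload that carries clickable options alongside text, and a reply handler that treats anything outside those options as a detour. The payload shape below is a generic, channel-agnostic sketch (real platforms each have their own format), and all field names are illustrative.

```python
def make_message(text, buttons=None):
    """Build a message payload; buttons keep the user on the happy path,
    but free-text replies are always accepted as a detour."""
    message = {"text": text}
    if buttons:
        message["quick_replies"] = [
            {"title": label, "payload": label.lower()} for label in buttons
        ]
    return message


def handle_reply(reply, expected_payloads):
    """Button clicks stay on the happy path; anything else is a detour
    to be handed to the NLU (and, failing that, a human)."""
    if reply in expected_payloads:
        return ("happy_path", reply)
    return ("detour", reply)


msg = make_message("When would you like to come in?", ["Today", "Tomorrow"])
print(handle_reply("today", {"today", "tomorrow"}))
# ('happy_path', 'today')
print(handle_reply("do you have parking?", {"today", "tomorrow"}))
# ('detour', 'do you have parking?')
```

The key design choice is that the detour branch never dead-ends: it routes the free-text question onward instead of forcing the user back onto the buttons.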
Of course, each situation is different, and different chatbot platforms have different capabilities in terms of allowing the use of AI technology or, at the very least, letting developers connect to third-party tools.
Hopefully the above can help inform your chatbot design decisions in the future.