When considering the best use case for your bot, the first question should be whether the chatbot adds real value for the customer compared to other ways of doing the same job. Most importantly, ask whether it is a job the customer actually wants done at all.
If designed well, chatbots make it easier for companies to interact with their customers. They are much cheaper to develop than apps and have some built-in advantages: by definition they can be used directly in the chat, and they don't need to be downloaded.
The following are common mistakes that can detract from the chatbot experience, along with notes on what chatbots can do well and how to approach the development process.
Where possible, the chatbot should be designed so that it does not require text input. Users prefer clicking simple buttons to typing, so graphical widgets should be used instead of text wherever the platform supports them. Failing that, options can be numbered in the text so users can reply with a single digit.
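As a minimal sketch of that fallback, here is one way to render numbered options and map a free-text reply back onto a choice. The function names and data shapes are illustrative assumptions, not the API of any particular bot framework.

```python
# Hypothetical sketch: falling back to numbered options when the chat
# platform offers no buttons or widgets. Names here are illustrative.

def render_options(prompt, options):
    """Render a prompt followed by numbered choices the user can reply to."""
    lines = [prompt] + [f"{i}. {label}" for i, label in enumerate(options, start=1)]
    return "\n".join(lines)

def parse_choice(reply, options):
    """Map a reply back to an option, accepting either a number or the label."""
    text = reply.strip().lower()
    if text.isdigit() and 1 <= int(text) <= len(options):
        return options[int(text) - 1]
    for label in options:
        if label.lower() == text:
            return label
    return None  # unrecognized: re-prompt rather than guess
```

Accepting both the number and the literal label keeps the touch count low while still tolerating users who type.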
The number of touches needed to get the job done is the key metric here.
The chat user interface is a poor fit for many jobs because it has significant disadvantages.
Most mainstream chat applications allow developers to add web views with custom graphics as screens for their bots. Using web views definitely improves the UX.
Of course, responsiveness is also a consideration with web views and needs to be addressed by the developer and ultimately the chat platform developer.
A lesser-known use case: web views should also be used for entering secure information like passwords, which you don't want saved in the chat history.
A little personality, used appropriately, is good. A personality that gets in the way of utility, for example by asking people to read too much text, will detract from the experience.
It’s worth thinking about how much the user will appreciate the bot personality when it is failing to comprehend what the user wants to do. In that case, it will no doubt make the situation much worse!
There is definitely novelty value to chatbots at the moment which makes users put up with their flaws. This won’t always be the case.
Most of the points in this blog post could be the subject of their own post, and this one is definitely in that category.
The advantage of a scripted bot is that the user experience is tightly controlled, served in bite sized chunks and the user has some ability to navigate to topics of interest. This makes the experience very understandable to the end user.
The downside of this approach is that the navigation is clunky and limited compared to a custom user interface in an app or on a website. For example, the back button is heavily used on both desktop and mobile, but the bot equivalent is navigating back through menus, which is an awkward workaround.
The scripted approach falls short on many of the same dimensions that we mentioned regarding chat interfaces previously such as not being able to get a good overview of the options up front and being locked into sequential navigation.
Using graphical interfaces and NLP appropriately can make the bot much more efficient to use.
Of course, there may be a trade-off between speed of development and limits on functionality. A PowerPoint presentation might be much improved by turning it into a fully interactive website, but if it does the job at hand well enough, you won't be able to justify building the website in terms of time, effort and cost.
Even if you use natural language processing (NLP) only for one-off commands (as opposed to multi-level dialogues, which are impossible to get right at this point), it has limitations, because in reality it's hard to anticipate all the ways a user can phrase a question.
On their blog, Google states that 15% of the searches on their site are completely new, i.e. never seen before. That's around 800 million searches per day.
For infrequent or one-off jobs it's therefore important to limit the range of questions people will ask by making the bot's scope clear up front.
For frequent tasks it is possible to “train” the user along with the machine. For frequent household tasks like managing the playlist or ordering food the user will adapt to the NLP.
It’s hard to see users typing in full sentence questions unless they believe there is at least a possibility that a human is behind the bot.
Of course, typing sentences is more effort than clicking a few buttons, especially on mobile, and therefore if the bot fails to respond correctly user frustration will escalate fast.
Human in the loop is the ability for a human to take over from the bot and respond to the end user manually.
Human in the loop is obviously more expensive to provide than a purely automated system. In many cases, however, human agents are already the ones answering the questions, and the bot is introduced to improve efficiency by handling the simple, repetitive ones.
To avoid frustrating the end user, it is important that the conversation is escalated to a human unless the bot's confidence in its answer is very high.
The bot's confidence in an answer is an output of the NLP engine, so it's usually easy to set a confidence threshold below which the conversation is automatically escalated to a human.
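A minimal sketch of that routing logic might look like the following. The threshold value and function names are assumptions for illustration; in practice the confidence score comes from your NLP engine and the threshold is tuned per deployment.

```python
# Illustrative sketch of confidence-based escalation to a human agent.
# The threshold and names are assumptions, not a specific framework's API.

ESCALATION_THRESHOLD = 0.85  # start high: a wrong automated answer costs more than a handoff

def route_message(intent, confidence, threshold=ESCALATION_THRESHOLD):
    """Answer automatically only when the NLP engine is confident;
    otherwise hand the conversation over to a human agent."""
    if confidence >= threshold:
        return f"bot:{intent}"
    return "human"  # escalate, optionally alongside suggested canned answers
```

Starting with a high threshold and lowering it as the bot's training data grows is a common way to keep early users from hitting wrong answers.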
Even in the case that the bot conversation is escalated to a human, the bot can still add value. It’s possible for the bot to show the human canned answers that they can use to respond more quickly and these can even include graphical widgets. And of course by monitoring the human’s response the bot can train itself to answer more questions in future.
As mentioned previously, chatbots using NLP work well within a narrowly defined scope, where the responses to a query are limited and can be curated. In fact, one of the best uses of NLP is a kind of limited search that gets the user to the option they want without walking them through a tedious decision tree.
Open-ended searches are different, however. In this case the user doesn't necessarily know upfront what all the relevant factors are. They want to quickly get an overview of the available options and relevant factors, and then rapidly narrow the options down using search criteria and filters.
While it’s possible to ask a bot to tell you the best hotel for you to stay in in a given location, it’s highly unlikely that it will give you a better solution in the same amount of time than you could find searching a good hotel website using the extensive search and filtering tools.
The NLP route may be a good one where you value speed and convenience more than the quality or value of the solution. For example, when buying flowers you might be relatively happy to use the most convenient option rather than spend time finding the florist with the best price.
There are many reasons why you shouldn’t develop your bot from scratch when you can use a bot framework that will provide you with all the features you need.
Not only should you not waste time and effort coding up features that could be provided as common components, you should also not spend time writing multiple integrations to third party services when one integration to the framework provides you with many third party integrations out of the box.
All the above is especially true if the framework gives you access to the source code of the components and integrations so you can customize them as required.
If the customer does not need to leave the chat channel to complete their job then the bot will be truly useful. If the customer has to leave the chat to complete certain steps in the process then the bot will be less useful.
Of course, it may be impossible for the bot to take care of all the required steps, but where it can handle the entire process, it will be much more valuable to the end customer.
For example, when a customer uses a chatbot to purchase a cup of coffee, they should be able to complete the transaction by paying the bot, not be forced to pay cash to the barista at the end of the process. Uber wouldn't be as good if you had to pay the driver in cash at the end of the ride.
It’s obvious, but great customer experience comes from simplicity and convenience. The effort that the customer needs to exert to accomplish a given task needs to be minimized in every way, including the number of steps they need to carry out.
For example, if a customer always orders the same type of coffee, they should be given a quick order option for this type of coffee at the beginning of the process.
Chatbots aren't perfect for every task. In some cases, companies can use applications to serve customers better.
If for example a process requires feedback and adjustments on multiple screens and a high level of detail, it makes more sense to use an app than a bot.
This is especially true for desktop applications but is also applicable to mobile applications.
This general rule of software development applies equally to chatbots: allowing users flexibility in how they interact with the bot is important, even if it results in more input errors.
As long as the customer can undo what they have done if they make an error, the user experience will be much better if it is less rigid and controlled.
AI has made data the new oil. The business model of many AI companies is to give away services for free in the hope of collecting data which can be used to make the AI better among other things.
By surrendering your data to third parties, you create a number of issues for yourself.
As chatbots move toward the mainstream, many best practices for UX and development will emerge. Hopefully some of the points above will help you identify issues that detract from your bot's UX.
Disclaimer: We encourage our blog authors to give their personal opinions. The opinions expressed in this blog are therefore those of the authors. They do not necessarily reflect the opinions or views of Botpress as a company.