What is a chatbot? A chatbot is software that can conduct a human-like conversation with a user. A user can either speak to it or message it through a chat application, and it will respond appropriately by speaking, typing or showing something graphical. The main use case for chatbots at the moment is customer support, where they are used to answer simple, repetitive questions and escalate more complex questions to human agents.
Although there are some issues to resolve before they are used widely for customer enablement (beyond Amazon Alexa ordering products and a few other examples), the conversational interface is being rapidly adopted for customer support functions.
From a business point of view a chatbot project, like any other project, needs to be assessed in terms of risks and returns.
In this paper, we will examine the possible chatbot implementation challenges and how to avoid them.
Many of the risks highlighted here are avoidable as many of the problems faced by early adopters are now well known.
It is inevitable that chatbots and voice will soon become widely adopted for customer support, because the ROI for chatbots in many cases is above 1,000%. This is not just due to cost savings, but equally due to increased customer engagement and satisfaction, and the revenue opportunities that flow from that.
The bot platforms out there have matured to the point that this is now low hanging fruit for enterprises. Not only will chatbots be widely used for customer support, but the use cases will rapidly expand to customer enablement which will eventually dominate customer support as the main use case.
It’s often hard to evaluate a new technology because you know that some of the hype around the product is just that, hype. Tech companies make all sorts of bold promises about what will happen when you implement their technology, but of course you know that it’s not that easy, nothing is guaranteed and they are certainly not emphasizing the downsides. The same applies to chatbots.
Chatbots have been through many stages of hype. A lot of this hype has to do with overestimating what chatbots can do.
It is true that there have been some genuine breakthroughs in chatbot-related AI in the last few years, but these breakthroughs need to be understood to have a true picture of what to expect from chatbots.
The main breakthroughs have been in three primary technologies: natural language processing (NLP), speech recognition at scale (for voice assistants) and natural language generation.
NLP allows a chatbot to identify the common intent behind different natural language phrases that have the same meaning. For example, “book a flight” and “I want to fly to Paris” share the same intent: book a flight. A software developer can code what to do once that intent is identified.
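To make the intent idea concrete, here is a minimal sketch of intent detection. The intent names and keyword lists are invented for illustration; a real NLP engine uses a trained classifier, not keyword overlap.

```python
# Toy rule-based intent matcher (illustrative only; production NLP
# engines use trained classifiers rather than keyword lists).
INTENT_KEYWORDS = {
    "book_flight": {"book", "flight", "fly"},
    "cancel_booking": {"cancel", "refund"},
}

def detect_intent(utterance: str) -> str:
    words = set(utterance.lower().replace(",", " ").split())
    # Pick the intent whose keyword set overlaps the utterance most.
    best_intent, best_score = "unknown", 0
    for intent, keywords in INTENT_KEYWORDS.items():
        score = len(words & keywords)
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent

print(detect_intent("book a flight"))          # book_flight
print(detect_intent("I want to fly to Paris")) # book_flight
```

Both phrasings resolve to the same `book_flight` intent, which is the handle the developer then codes against.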
Speech recognition translates spoken words to text. While speech recognition has been around for a long time, it is only recent advances in the performance of computers, and the ability to delegate work to the cloud, that make it possible for these systems to identify millions of words, as the algorithms are very compute intensive.
Natural language generation takes in a set of parameters and generates a grammatically correct sentence in natural language.
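The parameters-in, sentence-out contract of natural language generation can be sketched with a simple template. The function name and slots here are assumptions for illustration; modern NLG systems are far more sophisticated.

```python
# Toy template-based NLG: fill slots into a grammatically correct
# sentence. Real NLG handles agreement, tense and variation.
def confirm_booking(name: str, destination: str, date: str) -> str:
    return f"{name}, your flight to {destination} is booked for {date}."

print(confirm_booking("Ana", "Paris", "12 May"))
# Ana, your flight to Paris is booked for 12 May.
```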
All of these technologies to some extent have advanced because of fairly recent advances in computing power.
The hype at the extreme is that chatbots will soon fully replace human agents. The reality is that chatbots perform extremely well within a narrow domain where the context is limited, and perform best when answering one-off questions that have no context.
That does not mean that the underlying technology is not powerful and useful. It is. That does not mean that chatbots cannot generate a huge return on investment (ROI). They can.
It means, however, the chatbot experience needs to be crafted with the limitations in mind.
There are many ways that the objective chosen when implementing a chatbot can be wrong. Common mistakes in setting objectives include solving a problem that doesn’t exist, e.g. using a chatbot to do something better done by a graphical interface.
The biggest mistake you can make is to buy the hype and try to implement a human-like chatbot that engages in conversation with customers almost at a human level. Many companies have tried this and failed. Trying to build a chatbot outside of the scope of things it does really well is always a problem.
The best chatbot experiences are guided conversational experiences, not open-ended ones. The Botpress software, for example, defines a happy path: a guided path that the software needs to keep the user on. If the user diverges from this path, the software will try to bring them back to the happy path or offer them the chance to initiate another path, but will not allow them to go off on a tangent.
A poorly designed chatbot leads users to use it in ways that were not intended. This causes frustration and has all sorts of negative repercussions.
Chatbots need to be designed conservatively: the scope needs to be made very clear, and it is better to escalate conversations to a human too often rather than not often enough (or to use an equivalent strategy for the use case in question).
It goes without saying that the developers working on the bot need to be competent and familiar with best practices in this area.
Chatbots today are a mix of NLP and decision trees. The NLP allows the user to ask open-ended questions in a very narrow domain, and the decision trees take the user through a predefined sequence of steps (the happy path) to solve a problem or complete a task. As mentioned above, there is limited scope for the user to deviate from the happy path.
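The NLP-plus-decision-tree pattern can be sketched as a small state machine. The node names, prompts and keyword matching below are hypothetical (a platform like Botpress would use real intent detection, not substring checks); the point is that unrecognized input never derails the flow, it just pulls the user back.

```python
# Happy-path flow as a decision tree: each node has a prompt and
# the transitions it accepts. Unrecognized input does not derail
# the flow; the bot re-prompts to bring the user back on path.
FLOW = {
    "start": {"prompt": "Do you want to book or cancel a flight?",
              "next": {"book": "destination", "cancel": "confirm_cancel"}},
    "destination": {"prompt": "Where would you like to fly?", "next": {}},
    "confirm_cancel": {"prompt": "Which booking should I cancel?", "next": {}},
}

def step(node: str, user_input: str) -> tuple[str, str]:
    """Return (next_node, bot_reply) for one conversational turn."""
    for keyword, target in FLOW[node]["next"].items():
        if keyword in user_input.lower():
            return target, FLOW[target]["prompt"]
    # Off the happy path: stay on the current node and re-prompt.
    return node, "Sorry, I can only help with bookings. " + FLOW[node]["prompt"]

node, reply = step("start", "I'd like to book a trip")
print(node, "-", reply)   # destination - Where would you like to fly?
```

A tangent like “what’s the weather?” leaves the conversation on the `start` node with a gentle redirect, which is exactly the guided behaviour described above.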
It is a mistake to choose a black box approach to conversations. Black box solutions are data-driven solutions where the logic is essentially held in the AI algorithms. The problem with this is that no one knows for sure what the AI solution will do: it is extremely difficult to debug, it cannot be comprehensively tested, and new information may change its behaviour.
While even Botpress does use some of this technology, it limits the domain where this “black box” AI can operate to the narrow scope around the happy path. The goal of the AI is therefore always to bring the user back to the happy path or allow them to transition to a new path. This is much easier to understand and debug.
I should mention that this “black box” AI works extremely well in domains that are bounded and where there is a huge amount of relevant data surrounding the task at hand. This is why AI can play games so well. The problem with language is that it has infinite dimensions: any statement means different things depending on the context, which includes statements that were made previously and other relevant information that the conversational agent should be aware of.
Implementing black box AI for conversations using the current state of technology is similar to the mistake of trying to implement an open ended chatbot.
In addition, despite the flaws mentioned above, this sort of black box approach is extremely data intensive and therefore costs a fortune to implement. And the fact that it is a black box means that it is very difficult to switch vendors, which means an extremely high switching cost and therefore lock-in.
It is better to use simple NLP and decision tree technologies to build bots, and then to use AI with a limited scope around the edges to get the user back to completing the task at hand. We have found that companies are actually surprised at how accessible and easy the technology is to use. A competent developer can learn how to build a bot that uses NLP and decision trees in just a few hours.
It’s also critical to understand that conversations with chatbots should not replicate conversations with humans. Graphical interfaces, for example, are much more efficient to use in many cases than text or voice. Option buttons are quicker to click than to type or speak a response. This would be true even if it was possible to create a human level chatbot. This reality is often overlooked in using a black box or primarily word based AI approach.
The problems with choosing the wrong bot framework may not be apparent immediately but will become obvious over time.
The fastest way to build a chatbot is to use a drag and drop platform. The problem with this is that in most cases developers soon run into hard limitations. In addition, the generic approach means that what should be simple features are hacked into the system, making the bot clunky and difficult for administrators to use.
At the other end of the spectrum are code-based proprietary platforms that allow developers to code the bot from scratch. The problem with this approach is that it takes a very long time to build even simple bots.
The best approach is a framework that provides all the necessary components and visual interfaces, including the drag and drop interfaces, out of the box, but at the same time allows all these components and interfaces to be easily customized for the task at hand.
This is particularly important because bot sponsors typically focus most of their attention on how the bot will work for end users. The problem with this is that there are many other components and interfaces that are important to other users of the bot, such as the administrators (who want to monitor chatbot analytics and manage backend access), technical and non-technical creators (who want to modify the bot’s behaviour and content) and human agents (who respond to the conversations that are escalated by the bot).
Building these components from scratch is an extremely time consuming exercise. Of course, simple drag and drop frameworks have very generic and limited versions of this functionality and cannot be easily customized.
The ability to customize is essential to the end user bot itself, even if it is not obvious upfront. For example, when building flows using the drag and drop flow builder there may be some tasks that need to be repeated over and over in different flows, such as authenticating a user with a company system or processing a payment.
The framework should allow you to add these components as visual components to the flow builder so that less technical content creators can easily add these functions to the processes.
A platform that is not easily customized will make it hard to offer non-technical users ways to update content because the methods to do this need to be “hacked” into the framework. A framework that allows customization of everything should make it easy to create built for purpose screens for non-technical users that are easy and intuitive to use.
In addition, it is also highly beneficial for your developers to have access to the underlying source code for the system. This will allow them to more quickly understand how to do things and will enable them to identify issues quickly if they arise.
Something that is vitally important for a framework is the ability to control and migrate your data. The platform should allow enterprises to deploy the bot anywhere they so choose, be it on a private cloud or on-prem (on internal servers).
ROI is also an important consideration in terms of platform. The platform should make it possible to reuse work from one bot for other bots, i.e. building functionality for one bot makes it easier to build the next bot. This makes scaling from one bot to many bots incrementally cheaper, which improves the overall ROI.
One example of this is that poorly designed platforms will make you create a new bot for every new language you add, instead of simply allowing you to provide the same content in a different language. Similarly, not separating the flow design from the content makes managing the content more difficult and error prone, because non-technical staff need to edit the actual flows rather than simply updating content.
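What “separating flow from content” looks like in practice can be sketched as follows. The message IDs and locale table are invented for illustration: the flow references only IDs, so adding a language means adding a column of translations, not building a new bot.

```python
# Flow nodes reference message IDs only; the translations live in
# a separate content table that non-technical staff can edit.
CONTENT = {
    "greeting":     {"en": "Hello! How can I help?",
                     "fr": "Bonjour ! Comment puis-je vous aider ?"},
    "ask_order_id": {"en": "What is your order number?",
                     "fr": "Quel est votre numéro de commande ?"},
}

def render(message_id: str, locale: str) -> str:
    # Fall back to English if a translation is missing.
    texts = CONTENT[message_id]
    return texts.get(locale, texts["en"])

print(render("greeting", "fr"))  # Bonjour ! Comment puis-je vous aider ?
```

Content editors update the table; the flow itself never changes, which removes a whole class of errors.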
Allowing admins and other backend users to do their work in an efficient and easy way also saves time and leads to fewer mistakes, which improves ROI.
Vendor lock-in is a problem in a number of ways.
If you are forced to use the vendor’s technology, i.e. the platform doesn’t allow you to use third party components, you are betting that all their components will be best in class forever. If not, you will be forced to use outdated technology while the rest of the market moves on, or undergo a very costly switching exercise.
If there are any components they are missing, or if you need to change how something works, you need to rely on them to do the custom development, which not only causes delays but can be an expensive exercise.
Finally, if you are a captive customer, they can set the pricing which can be very expensive. They know that the fully loaded switching costs can be very high, especially if they make it difficult to migrate data and code to other platforms.
Using a proprietary system as opposed to an open system makes lock-in more likely and switching costs higher. In addition, choosing a complex approach to chatbots that can only be implemented by data specialists means lock-in will be even harder to escape and the costs of lock-in higher.
This is a common and obvious mistake for any software project that entails changing existing behaviours and the solutions are well known. Of course, customer service agents are particularly important in the bot world as they can feel threatened by this technology. They need to be retrained in order to offer a set of services that complement the services that the bot offers, especially to offer deeper services to customers who have more complex needs that cannot be resolved by the bots.
There are two ways that ignoring the ROI can lead to failure. The first is that without a compelling ROI number the project won’t get sponsored, even if there was sponsorship for a POC to prove the technology. The second is that stakeholders in the project realize that there is no ROI once the project is up and running.
There is no reason not to calculate the expected ROI upfront and then update this number as you get more information about the use case and context. There are many use cases that have extremely high ROIs, so finding use cases should not be difficult.
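A back-of-the-envelope ROI calculation is straightforward. Every figure below is a placeholder assumption, not a benchmark; the point is only the shape of the arithmetic.

```python
# Illustrative ROI estimate -- every figure here is a placeholder
# assumption to be replaced with your own use-case data.
tickets_per_month = 10_000
deflection_rate = 0.30          # share of tickets the bot resolves
cost_per_human_ticket = 5.00    # fully loaded agent cost, in dollars
monthly_platform_cost = 2_000.00

monthly_saving = tickets_per_month * deflection_rate * cost_per_human_ticket
roi = (monthly_saving - monthly_platform_cost) / monthly_platform_cost

print(f"Monthly saving: ${monthly_saving:,.0f}, ROI: {roi:.0%}")
# Monthly saving: $15,000, ROI: 650%
```

Revisiting these inputs as real usage data arrives keeps the ROI figure credible for stakeholders.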
Of course, many of the risks above can be avoided by following an approach of incrementally implementing the bot.
It is extremely easy to build solutions that can be tested incrementally. Start off with a single use case POC and route a few end users to the bot to assess the performance. This way the effectiveness of the solution, including the response of users, can be inexpensively tested and refined at each step.
Of course, it is important to select use cases that challenge the most “at risk” assumptions for this exercise, so that the most uncertain and critical assumptions are tested upfront.
Many vendors would like to get you to engage in a big bang approach where a great deal of work and effort goes in upfront before a working bot, even a POC, is presented to users. Not only that, but the vendors insist that only high priced consultants are able to manage and monitor the bot for you. This should be a big red flag.
There are many considerations to bear in mind when building a chatbot. As long as you are aware of the main risks and take an incremental approach to implementation, you have a great chance of building a successful chatbot and achieving the phenomenal ROIs that come with doing that.