Our first step in creating an MVA is to make a basic set of choices about how the chatbot will work, sufficient to implement the Minimum Viable Product (MVP). In our example, the MVP has just the minimum features necessary to test the hypothesis that the chatbot can achieve the product goals we have set for it. If no one wants to use it, or if it will not meet their needs, we don’t want to continue building it. We therefore intend to deploy the MVP to a limited user base with a simple menu-based interface, and we assume that the latency introduced by accessing external data sources is acceptable to customers. As a result, we want to avoid incorporating more requirements—both functional requirements and quality attribute requirements (QARs)—than we need to validate our assumptions about the problem we are trying to solve. This results in the initial design shown below. If our MVP proves valuable, we will add capabilities to it and incrementally build out its architecture in later steps. An MVP is a useful component of product development strategies, and unlike mere prototypes, an MVP is not intended to be “thrown away.”
An open-source, reusable framework (such as RASA) can be used to implement a range of customer service chatbots, from a simple menu-based bot to more advanced ones that use Natural Language Understanding (NLU). Using this framework, the initial MVA design supports the implementation of a menu-based, single-purpose chatbot capable of handling straightforward queries. This chatbot presents a short list of choices to its users on smartphones, tablets, laptops, or desktop computers. Its architecture is depicted in the following diagram:
The chatbot interfaces with the following backend services:
As described in our previous article, we start the MVA process by making a small set of fundamental choices about the solution and use a simple checklist to ensure that we make appropriate architectural decisions. Our checklist includes the following items:
The following choices are not of concern at the moment but may become concerns at a later time if the user base and usage grow substantially.
For your own applications, consider this checklist a reasonable starting point that you may need to adapt or expand depending on the technical issues you are exploring.
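To see how thin the MVP’s interaction layer can be, note that the menu-based chatbot described above is essentially a dispatch table mapping choices to backend lookups. The sketch below illustrates this; the menu labels, handlers, and canned responses are hypothetical placeholders, not part of the actual design:

```python
# Minimal sketch of one menu-based chatbot turn: the user picks a numbered
# option and the bot dispatches to a handler. All labels and responses here
# are hypothetical stand-ins for calls to the real backend services.

MENU = {
    "1": ("Check my coverage", lambda: "Your dwelling coverage is ..."),
    "2": ("Estimate reconstruction cost", lambda: "Estimated cost is ..."),
    "3": ("Get a home valuation", lambda: "Current valuation is ..."),
}

def render_menu() -> str:
    """Build the numbered list of choices shown to the user."""
    return "\n".join(f"{key}. {label}" for key, (label, _) in MENU.items())

def handle_choice(choice: str) -> str:
    """Dispatch a user's menu selection, re-prompting on invalid input."""
    if choice not in MENU:
        return "Sorry, please pick one of:\n" + render_menu()
    _, handler = MENU[choice]
    return handler()

print(render_menu())
print(handle_choice("2"))
```

A structure this simple is the point of the MVP: it is just enough to put the three backend lookups in front of real users and test whether they find the product valuable.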
After the MVP is delivered, users seem relatively pleased with the capabilities of the product but find the menu-based interface too limiting: even the simple menus used in the MVP are rather cumbersome, and expanding the menu options would only worsen the user experience, especially on smartphones and tablets. Users would prefer to converse with the chatbot using natural language.
The open-source chatbot framework used for the MVP implementation includes support for Natural Language Understanding (NLU), so we will continue using it to add NLU to the capabilities of the chatbot. Using NLU transforms the simple chatbot into a Machine Learning (ML) application.
Switching to an NLU interface changes the chatbot’s architecture, as shown in the diagram below. Offline data ingestion and data preparation for training data become architecturally important steps, as do model deployment and model performance monitoring. Monitoring the model for language-recognition accuracy, as well as for throughput and latency, is especially important: business users employ certain “industry jargon” terms, and the chatbot should get better at understanding them over time.
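Model performance monitoring of this kind can start very simply. The sketch below records per-request NLU confidence and latency over a rolling window and flags drift; the window size and thresholds are invented for illustration and would need to be set from real measurements:

```python
from collections import deque
from statistics import mean

class ModelMonitor:
    """Rolling window over NLU confidence and response latency.
    Window size and alert thresholds are illustrative assumptions,
    not prescriptions."""

    def __init__(self, window: int = 100,
                 min_confidence: float = 0.7,
                 max_latency_ms: float = 500.0):
        self.confidences = deque(maxlen=window)
        self.latencies = deque(maxlen=window)
        self.min_confidence = min_confidence
        self.max_latency_ms = max_latency_ms

    def record(self, confidence: float, latency_ms: float) -> None:
        """Log one chatbot turn's NLU confidence and end-to-end latency."""
        self.confidences.append(confidence)
        self.latencies.append(latency_ms)

    def alerts(self) -> list:
        """Return human-readable warnings when rolling averages drift."""
        issues = []
        if self.confidences and mean(self.confidences) < self.min_confidence:
            issues.append("NLU confidence drifting low: review jargon terms, retrain")
        if self.latencies and mean(self.latencies) > self.max_latency_ms:
            issues.append("latency exceeding budget: investigate external calls")
        return issues

monitor = ModelMonitor(window=3)
monitor.record(confidence=0.55, latency_ms=620)
monitor.record(confidence=0.60, latency_ms=580)
print(monitor.alerts())
```

Alerts like the confidence warning are what would trigger adding new “industry jargon” examples to the training data.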
The evolved architecture includes two models that need to be trained in a sandbox environment and deployed to a set of IT environments for eventual use in production. The two models can be thought of as an NLU model pertaining to users’ questions, and a Dialog Management (DM) model pertaining to the chatbot’s answers. More specifically, the NLU model is used by the chatbot to understand what users want to do, and the DM model is used to build the dialogues so that the chatbot can respond satisfactorily to users’ messages. Both the models and the data they use should be treated as first-class development artifacts that are versioned.
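Treating models and their data as versioned, first-class artifacts can start with something as small as recording a content hash of the training-data snapshot alongside each model version. The registry entry below is a hypothetical illustration, not part of the actual design:

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelArtifact:
    """Hypothetical registry entry tying a trained model version to the
    exact training-data snapshot it was built from."""
    name: str                   # e.g. "nlu" or "dialog-management"
    version: str                # version of the trained model
    training_data_sha256: str   # fingerprint of the training-data snapshot

def fingerprint(data: bytes) -> str:
    """Content hash that identifies a training-data snapshot."""
    return hashlib.sha256(data).hexdigest()

# Invented one-line snapshot standing in for the real training corpus.
snapshot = b"intent,text\ncheck_coverage,what does my policy cover\n"
nlu_model = ModelArtifact(name="nlu", version="1.0.0",
                          training_data_sha256=fingerprint(snapshot))
print(nlu_model.version, nlu_model.training_data_sha256[:12])
```

Pairing each deployed model with the fingerprint of its training data makes a result in the sandbox reproducible in the other IT environments.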
Reconstruction costs and home valuation data are, at least initially, maintained and stored by other organizations; only insured coverage information is under the insurance company’s control. Even at low levels of usage, users may experience undesirable latency as the chatbot gathers necessary data from the two external data services. The assumption that latency delays are acceptable to customers can and should be tested early as part of the MVP phase using the initial menu-driven UI.
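One lightweight way to test that assumption is to time each external call from inside the menu-driven MVP. The sketch below uses a stubbed valuation service; the function name, address, and simulated delay are invented for illustration:

```python
import time

def timed_call(fn, *args):
    """Run one external-service call and return (result, elapsed_ms)."""
    start = time.perf_counter()
    result = fn(*args)
    elapsed_ms = (time.perf_counter() - start) * 1000
    return result, elapsed_ms

def fetch_home_valuation(address: str) -> float:
    """Stand-in for the real external home-valuation service."""
    time.sleep(0.05)  # simulate network and service delay
    return 350_000.0

value, ms = timed_call(fetch_home_valuation, "123 Main St")
print(f"valuation={value} latency={ms:.0f}ms")
```

Feeding these timings into whatever monitoring the team already has is enough to confirm or refute the latency assumption before any NLU work begins.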
If latency caused by accessing external services proves unacceptable, the architecture must be adapted to cache external service data locally (or at least in the same location as the insured coverage data) and to refresh the cached data periodically. Assuming that home valuations and reconstruction costs change little, if at all, over short time periods, caching that data seems like a reasonable trade-off. However, this assumption needs to be tested as well, and the impact of latency on customers’ experience should be weighed against the cost of maintaining cache coherency to determine whether caching is worth the time and effort. In addition, the MVA checklist should be revisited frequently to ensure that the assumptions made at the beginning of this process are still valid and that the architectural choices remain satisfactory as the MVP evolves into a full-fledged product.
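The caching trade-off described above can be prototyped with a simple time-to-live cache in front of each external service. In the sketch below, the 24-hour TTL is an assumption based on how slowly valuations and reconstruction costs change, and should be validated before being relied on:

```python
import time

class TTLCache:
    """Cache external-service responses for a fixed time-to-live.
    The default 24-hour TTL is a guess, not a measured value."""

    def __init__(self, fetch, ttl_seconds: float = 24 * 3600):
        self.fetch = fetch          # the slow external-service call
        self.ttl = ttl_seconds
        self._store = {}            # key -> (value, fetched_at)

    def get(self, key):
        """Return a fresh cached value, or fall through to the service."""
        entry = self._store.get(key)
        if entry is not None:
            value, fetched_at = entry
            if time.monotonic() - fetched_at < self.ttl:
                return value        # fresh enough: skip the slow call
        value = self.fetch(key)
        self._store[key] = (value, time.monotonic())
        return value

calls = []
def slow_valuation_service(address):
    """Stand-in for the external home-valuation service."""
    calls.append(address)
    return 350_000.0

cache = TTLCache(slow_valuation_service)
cache.get("123 Main St")
cache.get("123 Main St")   # served from cache; no second external call
print(len(calls))
```

Comparing user-perceived latency with and without a cache like this gives the data needed to decide whether maintaining cache coherency is worth the effort.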
At first glance, a chatbot application doesn’t seem like something that really requires much architectural thought; frameworks abound that provide most of the building blocks, and developing the application appears to involve little more than training some NLU models and integrating some off-the-shelf components. But as anyone who has experienced the challenges involved with obtaining useful information from many chatbots will know, getting the chatbot application right isn’t easy and the cost of getting it wrong can dramatically affect customer satisfaction. Even a simple application like a chatbot needs an MVP and an MVA.
With more complex applications, the issues that the MVA needs to address will vary depending on the objectives of the MVP. While the MVP tests whether a product is worthwhile from the customer’s perspective, the MVA typically examines whether it is feasible to deliver that solution to customers, from a technical perspective, and whether that solution can be effectively supported. The MVA must also look beyond the MVP to at least preserve the option of dealing with issues that must be handled if the MVP succeeds; otherwise, a successful MVP can leave the organization unable to afford to sustain the product over the long run.