April 19, 2018 | Written by: Anton McConville
This is the third and final part of a series describing the design and implementation approach used to build a data-integrated chatbot with Watson Conversation on the IBM Cloud (Parts 1 and 2 here and here). In this post, I cover the design thinking, front-end design, and development approaches used in making this chatbot. When I visit our clients, they often ask how they should build user interfaces, and my answer is to simplify, since plain CSS, JavaScript, and HTML have matured so well.
Most of all, I wanted to experiment with a chatbot that worked with personal data retrieved from a database. I have tried plenty of chatbot examples that respond to a person's input with canned messages, but I hadn't come across any examples integrated with dynamic data from a database, and I wanted to understand how well a bot could respond in such a case. Here's an experimental website set up for this purpose. To use it, make a local account so that it's personalized; it's quick to do.
When I'm working on an exploratory app like this chatbot, I try to go through some basic IBM design thinking, starting with empathy mapping and scenario mapping exercises (even if it's only me working on the app). In this case, I imagined the current scenario, i.e. what I think, feel, see, and do when I try to access benefit information today, so that I could address the negative sentiment and experiences in the solution later.
Keeping my previous 'lost glasses' scenario in mind, I thought about the ideal steps for learning about my policy from a chatbot, and about the scenario of filing a claim using the bot. Since I was working quickly, I sketched out a rough visual outline of the pieces I'd need to work with in a UI. I wanted to experiment not only with a chatbot, but with a compact visual interface that responded to the chatbot, and to imagine a more ideal user interface that might help me find this information more quickly, even without a bot.
The user interface I came up with has two sides to it. On the left-hand side is a 'benefit map' list concept.
It's basically an expandable accordion of benefits. I could have implemented it as a standard table, but I took the opportunity to experiment with how the UI relates to a chatbot.
I borrowed some thinking from how Google Maps works on my phone when I’m navigating a metro system. I love how it tucks away the individual stops of a route in an expandable section.
I'm stretching a little, I guess, by imagining the benefit information UI as an interactive 'health journey': a collection of grouped procedure types in each benefit cluster. What I've found when visiting financial companies is that they're always interested in fresh ideas. They run internal hack-a-thons to shake out new thinking.
Many of our clients want to use the cloud to create new, fast, and fresh systems of engagement through microservices working with existing data sources. I want to take the chance to share material that approaches problems in fresh ways too. These ideas may not always work, but they at least trigger more thinking.
When the chatbot recognizes that a person is chatting about a procedure, it automatically expands that procedure in the benefit map, so I wanted an interface that allowed that to happen fluidly.
I like how it works, and I've found it helps emphasize the potential for bringing data to life quickly. It also goes in the other direction: when I expand a procedure, the chatbot prompts me about it as well. For example, if I expand orthodontics, it might tell me that I've hit my limit, so I shouldn't even think about claiming. The app also provides a more traditional little form for entering a claim, along with a claim history.
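As a rough sketch of this two-way link between the conversation and the accordion, the logic might look something like the following. All the names here (the procedure-to-section map, the benefit fields) are my own illustrations, not the app's actual source:

```javascript
// Hypothetical sketch: linking recognized procedures to accordion sections,
// and volunteering a bot prompt when a section is expanded by the user.

// Map a procedure the bot recognizes in chat to an accordion section id.
function sectionForProcedure(procedure) {
  const sections = {
    orthodontics: 'benefit-orthodontics',
    eyewear: 'benefit-eyewear'
  };
  return sections[procedure.toLowerCase()] || null; // null: nothing to expand
}

// Going the other way: when a person expands a section, decide whether the
// bot should volunteer a message, e.g. when the claim limit is already hit.
function promptForExpandedSection(benefit) {
  if (benefit.claimed >= benefit.limit) {
    return `You have already claimed your full ${benefit.name} allowance this year.`;
  }
  return null; // stay quiet when there is nothing useful to say
}
```

In the real app, the first function would drive a DOM update (expanding the matching accordion row), and the second would push a message into the chat window.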
'Ana' (short for Analytics), the persona of the chatbot, was a character I made for a location-based game a couple of years ago. One of the things I've learned about chatbots is that it is important to convey that they are an actual bot, not a human. It was convenient for me to reuse a robot character I already had, but it doesn't need to be a robot.
When I experimented with removing the bot image from the UI, it became less clear what the thing was. There are probably much stronger ways of illustrating this information; my approach here is fairly minimal and gentle. In the dialog, I'm borrowing the conversation style I see in Snapchat, which uses bars against the side of each chat snippet to indicate the participant, rather than bubbles. Again, I'm just experimenting, but I like how tidy that approach is.
This year I really embraced the CSS flexbox model in my work. It lets you split the content of your page into ordered rectangles, and recursively position the content of each rectangle into further rectangles, by making rules for the flow of the DOM element children on the page.
Here’s how this health insurance page is split into flex components:
Basically, there's a flex row with two <div> elements for the two sides of the view. Each of those divs is a flex column with DOM elements beneath it. For the benefit blocks on the left-hand side, each block is another flex row with sub flex columns inside. You can study this by looking at health.html and benefit.css.
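As a rough illustration, the split described above might look something like this in CSS. The class names are my own, not the ones actually used in benefit.css:

```css
/* Illustrative sketch of the layout: a top-level flex row split into two
   flex columns, with each benefit block being a nested flex row. */
.page        { display: flex; flex-direction: row; }    /* whole view        */
.benefit-map { display: flex; flex-direction: column;
               flex: 1; }                               /* left side         */
.chat-panel  { display: flex; flex-direction: column;
               flex: 1; }                               /* right side        */
.benefit-row { display: flex; flex-direction: row; }    /* one benefit block */
.benefit-col { display: flex; flex-direction: column; } /* columns inside it */
```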
If you work with HTML and haven't already embraced the flex model, I urge you to try it. It is native in modern browsers, and combined with media queries it can yield good responsive design outcomes.
The benefits data is read from the database and returned to the HTML page as JSON data. In the createBenefitEntity() function of cloudco.js, you can see where the code iterates over the JSON data to dynamically add rows to the benefits 'roadmap' section of the page.
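The idea behind that iteration can be sketched as a small pure function. This is a simplified stand-in, not the actual createBenefitEntity() code, and the field names are illustrative rather than the app's real schema:

```javascript
// Simplified sketch: turn an array of benefit records (parsed from the
// JSON returned by the server) into HTML rows for the 'roadmap' section.
function renderBenefitRows(benefits) {
  return benefits
    .map(b =>
      `<div class="benefit-row">` +
        `<span class="benefit-name">${b.name}</span>` +
        `<span class="benefit-limit">$${b.limit}</span>` +
      `</div>`)
    .join('');
}
```

The real function inserts the rows into the live DOM instead of returning a string, but the shape of the loop over the JSON data is the same.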
Summarizing a few of the things I’ve learned about chatbot UX:
- Step back and design a human conversation flow for your problem space, as if you were talking to a person at a desk instead of a Chatbot.
- Consider using empathy mapping to tease apart the expectations you have, or wish for.
- Think about the tone, gender and character of an ideal person answering your questions. Try to infuse that into the responses of the bot.
- Make it clear that it is a bot, not a real person. Inevitably the bot will not be able to answer a question a human would be able to.
- Use natural language to remove friction. For example: recognize date information like 'last week,' instead of forcing a person to type in an unnaturally formatted way.
- Keep observing. Users of the bot are writing down, in plain English, what they want from your service. That is gold!
- Keep building. The more you learn, the more intents, entities, and dialog paths you can add to improve the bot.
- Chatbots can fail spectacularly. Test as much as you can, ask colleagues to test before going public, log the conversations and repeat testing until the bot becomes more useful.
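To make the natural-language point above concrete, here is one way relative date phrases could be normalized before hitting the database. Watson Conversation can surface dates via its system entities, but this standalone sketch of mine shows the general idea:

```javascript
// Illustrative sketch: resolve a relative date phrase like 'last week'
// to a concrete YYYY-MM-DD string, instead of forcing the person to
// type a formatted date. Unrecognized phrases return null so the bot
// can ask a follow-up question.
function resolveRelativeDate(phrase, today = new Date()) {
  const d = new Date(today);
  switch (phrase.toLowerCase().trim()) {
    case 'today':
      break;
    case 'yesterday':
      d.setDate(d.getDate() - 1);
      break;
    case 'last week':
      d.setDate(d.getDate() - 7);
      break;
    default:
      return null; // fall back to asking for an exact date
  }
  return d.toISOString().slice(0, 10); // e.g. '2018-04-12'
}
```

A claims form could run each free-text date field through a function like this and only prompt for a precise date when it returns null.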
I'm hopeful that this series can accelerate development for readers who wish to build personal-data-oriented chatbots with IBM Watson, Node, and the IBM Cloud. Chatbots and virtual agents have established themselves in the mainstream of personal and business applications, and I expect this trend to keep growing as conversational interfaces become second nature in society.
Learn more about Watson Conversation
Build your own chatbot with Watson
Some other useful links:
Watson Conversation Developer Resources
IBM Bot Asset Exchange
“Improve Your Chatbot Using Watson Conversation Chat Logs”
Building Ana the insurance chatbot: Part 1 of 3 – Getting started with Watson Conversation and IBM Cloud
Building Ana the insurance chatbot: Part 2 of 3 – Behind the scenes