In my last post, I explained how K’Ching, a financial app for young people, has a familiar look and feel to apps they’re already using.
Now I’ll share a little about how we developed our open chatbot with Watson Conversation on the IBM Cloud.
Building a chatbot from scratch
K’Ching is an open chatbot, which means users can ask it anything. For example, users can ask K’Ching to tell them a joke, tell a story, play a game or explain something.
The developer team had no idea what questions kids were going to ask, and no reference point: young people don’t go into bank branches, so the starting point was difficult.
The team asked KBC’s social media team for questions that youngsters had asked them. The developer team took those conversational questions and used them as input to map and create a baseline of information that would be used to train K’Ching in the language young people use when they chat with each other.
K’Ching was trained with a mind map that includes intents that deal with financial products and services, as well as the fun side.
The power of Watson AI
One of the primary benefits of Watson Conversation on the IBM Cloud is its business interface.
The business team can add training samples and intents and construct dialogs, while the IT team deals with behind-the-scenes work such as logging and spell check.
For a typical new project, the work splits roughly evenly between IT days (building an application) and business days (marketing, communication). For chatbots, the ratio is 1:10: for every 10 days of IT work, 100 days of business work are needed to train the engine. This means not only is the developer team free to work on other projects, but the business team is also empowered.
During a demo of K’Ching, a KBC executive wanted to understand what is so special about artificial intelligence.
The developer team suggested, “Go ahead and ask K’Ching anything.”
He asked, “I dropped my bank card in a glass of water. What should I do?”
K’Ching replied, “Oh, your card is broken. Use the procedure to order a new card.”
The executive nearly fell off his chair. “How can it reply to that? How does it know?”
The developer team didn’t know exactly how K’Ching knew the answer, so they went back through the reference and training material to find out. A youngster had once asked about a bank card that accidentally went through a washing machine. Because K’Ching had been trained on that intent, the executive’s mention of a glass of water triggered it, and K’Ching delivered the correct answer.
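The kind of generalization described above can be illustrated with a toy word-overlap classifier. Watson’s actual model is far more sophisticated; the intent names, training examples, and function here are only hypothetical stand-ins:

```python
# Toy intent matcher: pick the intent whose training examples share the
# most words with the user's utterance. Purely illustrative.
INTENT_EXAMPLES = {
    "card_damaged": ["my bank card went through the washing machine",
                     "my card is broken"],
    "tell_joke": ["tell me a joke", "say something funny"],
}

def classify(utterance: str) -> str:
    words = set(utterance.lower().split())
    def best_overlap(intent: str) -> int:
        # Score an intent by its best-matching training example
        return max(len(words & set(ex.split()))
                   for ex in INTENT_EXAMPLES[intent])
    return max(INTENT_EXAMPLES, key=best_overlap)

# "glass of water" never appears in training, but "bank card" words overlap
# with the washing-machine example, so the card intent still wins.
print(classify("I dropped my bank card in a glass of water"))
```

Even this crude overlap score lands the glass-of-water question on the card intent; a statistical model like Watson’s does the same with far weaker surface cues.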
Training K’Ching is like teaching a child. Suddenly, the child starts to talk and walk. A parent may have done a lot of things before these milestones, but doesn’t necessarily remember them individually.
It’s the same thing with Watson. The developer team feeds things to K’Ching, then somebody asks the question that’s stored somewhere in memory and K’Ching responds in exactly the correct way.
That’s the power of AI.
The team began developing with Watson Conversation in February 2017. By April, K’Ching was already in production with the Watson engine, which replaced the primitive keyword-matching chatbot K’Ching had used since its launch in 2016.
Authentic conversational interface
Young people speak a language laced with emojis and abbreviations.
Questions often generate mixed reactions from users: instead of responding with words, they frequently reply with emojis.
Because each emoji is a unique character code, and the first thing Watson Conversation does is strip special characters (it doesn’t need colons, dashes or parentheses to analyze language), the team had to create filters for K’Ching. Whenever a user replies with an emoji, the filter translates it into a fixed text string, for example, “U0001F600” for the grinning face emoji.
KBC developers translated 5,000 emojis into text so that K’Ching can respond appropriately.
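A minimal sketch of such a filter, assuming a simple per-character substitution into the “U0001F600”-style tokens mentioned above; the function name and code-point ranges are illustrative, not KBC’s actual implementation:

```python
def emoji_to_token(text: str) -> str:
    """Replace each emoji character with a fixed ASCII token such as
    'U0001F600', so a pipeline that strips special characters keeps it."""
    out = []
    for ch in text:
        cp = ord(ch)
        # Rough check against common emoji code-point blocks
        if 0x1F300 <= cp <= 0x1FAFF or 0x2600 <= cp <= 0x27BF:
            out.append("U%08X" % cp)  # e.g. grinning face -> U0001F600
        else:
            out.append(ch)
    return "".join(out)

print(emoji_to_token("thanks \U0001F600"))  # -> thanks U0001F600
```

Once every emoji is a stable ASCII token, those tokens can be used in training examples like any other word.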
Emojis account for about 10 percent of the reactions K’Ching receives, so this training mattered: once the chatbot was trained on emojis, it answered them correctly 72 percent of the time.
The developer team also introduced latency into K’Ching’s response time. For example, if K’Ching was asked to recite the first four sentences of a poem, it would pause before giving the entire answer, as if it were typing the sentences like a person. The team trained the Watson engine to calculate how long an answer should take to return based on its number of characters.
Testing K’Ching: Man or machine?
One of the top questions kids ask K’Ching is, “Are you a man or are you a machine?” K’Ching will say, “I’m neither man nor machine. I’m a small piece of smart software.”
If a user doesn’t believe it, they might start cursing at K’Ching to try to antagonize the person they believe to be behind it.
Because of this, K’Ching’s curse intent, which started with five or six words, has grown to more than 700. New words are fed to the intent manually, but the reactions are randomized. K’Ching might reply, “Hey, I’m sorry, but I don’t like these words,” or “Well, there’s a word I don’t know.” K’Ching reacts in a neutral or funny way, acknowledging the curse word but indicating that it’s not OK with it.
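The randomized reactions could be sketched like this, with the reply strings taken from the examples above; the function name is a hypothetical stand-in for whatever the dialog layer actually does:

```python
import random

# Neutral or funny acknowledgements, drawn at random so repeated
# cursing doesn't get the same canned response every time.
CURSE_REPLIES = [
    "Hey, I'm sorry, but I don't like these words.",
    "Well, there's a word I don't know.",
]

def curse_reply(rng=random) -> str:
    """Return one of the neutral reactions at random."""
    return rng.choice(CURSE_REPLIES)

print(curse_reply())
```

Randomizing over a small pool of neutral replies keeps the bot in character without ever echoing or escalating the abuse.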
K’Ching is never going to curse back. Instead, the developer team takes being cursed at as a compliment: it demonstrates that they’ve done a good job of respecting the conversational UX and providing users with the content they want or expect.
Learn more about Watson on IBM Cloud.