May 28, 2021 | Written by: Johan Rodin
When I spoke at the webinar “Interactive Assistants – Give the customer the right answer in all situations”, I was asked how to avoid including sensitive data in chatbot logs, and what data security problems such data can cause. This is a very relevant and important question – and definitely one of the aspects you need to consider when incorporating an interactive assistant into your business.
The logs are a key area
When choosing a tool for your upcoming chatbot project, it is important to check how the logs and the logging itself work. How easy or difficult is it to retrieve information afterwards about what has been saved in a certain dialogue session? Will the provider of a potential cloud service use your dialogue data to train their own machine learning model to improve the experience for other customers of the same cloud service? Can an individual citizen ask to be removed from the dialogue archive and exercise the right to be “forgotten”?
Create business value for your organization
What I found rewarding about the webinar was that a common thread ran through it, without being planned or rehearsed. Even though I work with these solutions and customers all day, I listened with interest whenever I was not speaking myself, and exciting questions and perspectives emerged.
The overarching theme was how AI and automatic text analysis can be applied in an organization and generate business value. My experience is that the chatbot project is usually the first AI project in an organization where the AI is actually taken into production, instead of ending up in the prototype cemetery.
Find out more in this webinar: Interactive Assistants – Give the customer the right answer in all situations
Four thoughts to take with you
Here I list four things to keep in mind if you are faced with choosing a tool and platform for an interactive assistant:
- Language support – which languages should be supported, and how good is the understanding on the first contact attempt?
Today’s users are impatient and move quickly to other pages unless the assistant understands directly or can answer in the language with which the user is most comfortable. Can multiple languages be easily added? What does the in-depth text analysis look like in the languages to be supported?
- The location of the assistant – how close is it to your data?
Can the chatbot only “live” in a cloud service? Then data transport questions often arise if you need to access customer data in the dialogue. Most organizations today work in some type of hybrid environment where tools must be available both in cloud services and on premises.
- Analysis – can we analyze and take action based on how the chatbot behaves?
Is there an integrated and easy-to-understand support for analyzing logs from the assistant’s dialogs with users? Can we easily find the dialogues that had low understanding? Can we follow trends and suggest new dialogue goals automatically based on what we see is common for people to ask?
- How do I avoid including sensitive data in chatbot logs, and the data security issues it can cause?
My answer to the question I received during the webinar was:
- Detect in the language analysis of the dialogue when sensitive data has been included, such as social security numbers, or names and places that can be linked to a specific person.
- Steer the dialogue so that the user of the interactive assistant understands not to provide sensitive information.
- Place the interactive assistant inside the company’s firewalls, in its own data center, so that no data leaves it.
- If sensitive data must be included in the dialogue, make sure it is sent over an encrypted channel and that authentication takes place, preferably two-factor.
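To illustrate the first point, detection of sensitive data before a dialogue turn is written to the log can start as simple pattern matching. The sketch below is a minimal, hypothetical example – the `redact` helper and the patterns (a Swedish-style personal identity number and an email address) are my own assumptions, and a production system would use a proper named-entity recognition model on top of rules like these.

```python
import re

# Hypothetical patterns for sensitive data. The first matches a
# Swedish-style personal identity number (YYMMDD-XXXX); the second
# matches a simple email address.
SENSITIVE_PATTERNS = {
    "personnummer": re.compile(r"\b\d{6}[-+]?\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace detected sensitive spans with labels before logging."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

# Only the redacted version of the utterance would be written to the log.
print(redact("My number is 850709-1234, reach me at anna@example.se"))
```

The key design point is that redaction happens in the logging path itself, so the raw utterance never reaches the dialogue archive.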
Find out more:
Watch the entire webinar: Interactive assistants – Give the customer the right answer in all situations (in Swedish)
How good is the language comprehension on the first try? IBM has released a new intent detection algorithm that is more accurate than others on the market.
Read this blog in Swedish: Att tänka på när man väljer verktyg och plattform för en interaktiv assistent