Getting started with the new Watson Assistant Part III: test and deploy

You’ve built and refined your first assistant. Now it’s time to test and deploy it into the world.

10 minute read | January 4, 2022


It’s a big day: you practiced building actions with the new Watson Assistant in Part I and refined your assistant in Part II. Now it’s time for your assistant to launch and start interacting with real customers. Today we’ll walk you through testing your assistant and deploying it to the live environment. Just like Parts I and II, it will take you less than 30 minutes.

Test your assistant

By now you’re familiar with the Task Tracker that helps you measure your progress throughout this initial build. Click the arrow next to Test your assistant to reveal the drop-down menu. (At any time during the initial build, you can hover over the text of any step and click it to go to the appropriate field.) Click Deploy your assistant to a test channel (or click the rocket icon in the side menu) to go to the Publish page.

You’ll see a full list of the content types you’ve created. At this point, the list will consist exclusively of actions. Click the blue Publish button at the top of the table, and voilà: your assistant has graduated from the draft environment, and your first version is live!

The assistant is not visible to customers yet (we’ll cover that process soon), but this version is now finalized, and any subsequent changes you make (new actions, edits to existing actions, etc.) will be saved in the draft environment.

Now it’s time to share this first version of your assistant with your colleagues! To share V1, click the preview icon (the play icon) from the menu, or click Deploy your assistant to a test channel from the Task Tracker. Copy link to share is at the top of the screen; it’s the center button with the ‘copy’ icon. Click on it to copy the Preview URL to your clipboard, then start sharing the URL with your team.


Your colleagues are your best resource for determining the efficacy of the content you’ve built into your assistant: they know the business, they know your customers’ pain points, and they know what questions most frequently need answering.

Some best practices we’ve identified for getting the most out of testing your assistant:

  • Send the preview link to 10-15 colleagues
  • Don’t invite anybody who was part of the building process — you want your testers interacting with your assistant “cold”, the same way your customers will
  • Ask your colleagues to spend at least 10 minutes interacting with the assistant
  • Ask them to log and categorize any issues they had when interacting with your assistant
  • Instruct your testers to ask for a human agent if they get frustrated. This is an inevitability in real-world interactions with virtual assistants, so you want to get a sense of when your current version will need to funnel users to a human agent
  • Make sure you collect at least 20-30 conversations

When reviewing the conversation logs and your colleagues’ notes, there are two overarching areas of analysis to focus on:

  1. Understanding: Is your assistant properly understanding user requests? Are there any major topics that you haven’t trained your assistant to handle yet?
  2. Resolution: Are users successfully completing actions? Is the assistant escalating actions to human agents more frequently for certain actions?

You can review the details of each conversation in the Conversations tab of your assistant’s Analytics page (more on that later). Reviewing the details of every conversation is a perfect way to learn about and improve your assistant’s skills in this early stage. But once you deploy your assistant to your full customer base, that type of review will become unrealistic, so take full advantage of this early testing phase to rigorously refine your assistant’s actions.

Customize your live assistant’s branding

In Part II you customized your assistant’s greeting. That’s a key first step, but it’s just the start of your options for customizing your assistant’s branding. With the latest updates to Watson Assistant’s UI, you can customize all the design elements of your assistant’s home page for alignment with your organization’s personality and aesthetic.

Click Customize your live assistant’s branding from the drop-down menu, or go to the draft environment and click Web Chat to open the customization page. As you can see, your options for customizing your users’ UI are both varied and well organized.

The first tab, Style, lets you name your assistant, adjust the text bubble and header colors, and add your company’s avatar (note: the avatar image must first be hosted at a URL). Type your company’s hex color codes into the appropriate fields, or click the color swatch next to each field to open the palette and pick a color of your choosing.

The next tab over, Home Screen, is where you can edit your assistant’s initial greeting to the user. As the page states, “Good greetings are welcoming, actionable, and expressive of your assistant’s personality.” Once you’ve got a greeting that you’re happy with, add conversation starters in the field below. These conversation starters should be simple phrases that align with actions that your assistant has been trained on. They’ll appear as options on your assistant’s home page beneath the text field.

Let’s take a second to appreciate just how slick you and your assistant are: when the user selects an option from the starters field, your assistant will be able to funnel the user into an action — no heavy lifting required. That’s how powerful Watson’s NLP abilities are!

Now that you’ve fine-tuned the design of your assistant, you’ll want to ensure its users are set up for success.

Set up suggestions

Click Suggestions, the fourth tab over on the right (or click Set up Suggestions in the Task Tracker if you’ve already closed the draft page). Suggestions are automatically turned on when you create your assistant. They surface as action options, in the same manner as the conversation starters you configured when customizing your UI, and they’re designed to funnel the user into an action when your assistant receives input it hasn’t been trained on.

The option label is preloaded with Connect to an agent. This option surfaces when the user clicks the ? icon in the corner, or after three failed attempts (when the assistant is unable to funnel the user into an action based on their input). In later posts, we’ll cover integrating a support desk with your assistant. For now, know that you can customize the option label to reflect whatever option is available to the end user (e.g., filling out a support ticket). Just make sure you’ve trained your assistant to respond to that request.

Get acquainted with your assistant’s analytics

The final step in testing your assistant is familiarizing yourself with the built-in analytics page. Your assistant collects data from its interactions with your testers in both the draft and live environments (draft interactions are tracked separately from live ones). Once your customers start interacting with your assistant, the analytics page will populate with data to guide you as you continuously refine your assistant, based on how well it contains users within actions without escalating the interaction to a human agent.

You can view analytics for a fixed date range (select a range from the drop-down menu) or for a custom range based on specific dates. The centerpiece of the analytics page is the pair of Completion and Recognition tables, which show how often your assistant successfully guides users through to the end of an action, and how often it recognizes user requests. Beneath the tables, you’ll see lists of the most frequent actions, least frequent actions, and least completed actions.
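To build intuition for what these two tables measure, here’s a minimal sketch of computing completion and recognition rates from a set of conversation records. The record fields below (`recognized`, `actionCompleted`) are illustrative placeholders, not Watson Assistant’s actual export schema; the analytics page computes these for you.

```javascript
// Hypothetical sketch: recognition rate = share of conversations where the
// assistant matched the request to an action; completion rate = share where
// the user reached the end of an action without escalating.
function conversationMetrics(conversations) {
  const total = conversations.length;
  if (total === 0) return { recognition: 0, completion: 0 };
  const recognized = conversations.filter((c) => c.recognized).length;
  const completed = conversations.filter((c) => c.actionCompleted).length;
  return { recognition: recognized / total, completion: completed / total };
}

// Illustrative test data, e.g., from your colleagues' trial conversations.
const sample = [
  { recognized: true, actionCompleted: true },
  { recognized: true, actionCompleted: false }, // escalated to an agent
  { recognized: false, actionCompleted: false }, // unrecognized request
  { recognized: true, actionCompleted: true },
];

console.log(conversationMetrics(sample)); // recognition: 0.75, completion: 0.5
```

A gap between the two rates is itself informative: high recognition with low completion suggests the action flows need work, while low recognition suggests missing training topics.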

Finally, the Conversations section lets you view actual interactions between end users and the assistant. You won’t need to guess what’s working and what isn’t: you’ll have data tracking built into your assistant, plus real-world examples of interactions between your assistant and its users.

Once you’ve taken the tour of the analytics and you’re ready to make use of it, take a moment to congratulate yourself: you’ve reached “the end of the beginning” and have an assistant that’s ready to meet your customers!

Deploy your assistant

There are two short steps left before your customers can chat with your assistant on your website:

1. Publish your latest content to the live environment

Return to the Publish page by clicking the rocket icon in the side menu, or by clicking Publish your latest content to the live environment in the Task Tracker. In the table, you’ll see a list of all the edits you’ve made since you first published to the live environment. Click the blue Publish button to publish V2 of your assistant to the live environment.

Pro tip: You can overwrite your unpublished content with content from a previous version by clicking the revert icon next to Publish. You can switch the live version by clicking switch to this version (the copy icon in the V1 field under Version history).

Again, every time you publish a version of your assistant, you create a snapshot of all the actions you’ve mapped out and the content you’ve written. The latest published version will be the one your customers interact with, and any new content you add to your assistant will be saved in the draft environment until you’re ready to publish the next version.

2. Deploy your assistant on a live channel across a broader set of customers.

When you’re ready to launch your assistant on your website, it’s as simple as copy and paste. On the same page where you customized your assistant, you’ll find the JavaScript code snippet you need to embed Watson Assistant in your website.

Click the Embed tab, then hover over the copy icon in the right corner and click it to copy the script to your clipboard. From there, open the HTML source for any page on your website and paste the snippet in.

Pro tip: Paste the snippet as close to the closing </body> tag as possible to ensure that your page renders as fast as possible. Refer to the documentation for the new Watson Assistant to get the deepest dive possible on deploying your assistant to your website.
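As a rough illustration only, the snippet you copy will look something like the sketch below. The IDs are placeholders, and the exact script may differ by version, so always paste the snippet from the Embed tab rather than retyping this.

```html
<!-- Sketch of a Watson Assistant web chat embed (placeholder IDs).
     Paste your copied snippet just before the closing </body> tag. -->
<script>
  window.watsonAssistantChatOptions = {
    integrationID: "YOUR_INTEGRATION_ID",          // placeholder
    region: "YOUR_REGION",                         // placeholder, e.g. "us-south"
    serviceInstanceID: "YOUR_SERVICE_INSTANCE_ID", // placeholder
    onLoad: function (instance) {
      instance.render(); // display the chat launcher once the widget loads
    }
  };
  setTimeout(function () {
    const t = document.createElement("script");
    t.src =
      "https://web-chat.global.assistant.watson.appdomain.cloud/versions/" +
      (window.watsonAssistantChatOptions.clientVersion || "latest") +
      "/WatsonAssistantChatEntry.js";
    document.head.appendChild(t);
  });
</script>
```

Because the widget script is loaded asynchronously after the page’s own content, placing it at the bottom of the body keeps the chat launcher from delaying your page render.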

Not only is your assistant now live and ready to start answering customers’ questions, but you also have the option of deploying your assistant on as many of your site’s pages as you want. Just make sure you only add the code snippet once per page.

Success! Now the fun begins

Congrats! You now have a fully functional, NLP-powered chatbot on your website, and you built, tested, and deployed it in less than an hour.

As great as this first version of your assistant is, you’ve only scratched the surface of what it can do. In forthcoming posts, we’ll take you on a deep dive through the whole constellation of integrations available beyond web chat. We’ll walk you through building search capability into your assistant through integration with Watson Discovery. And we’ll break down best practices for designing a UI that ensures that the end users’ experience with your assistant is efficient, effective, and delightful.

Your customers are going to get everything they need from your assistant, and they’re going to have some serious fun doing it.

Ready for more? We have the final installment of our getting started guide all lined up for you. In Part IV, you will learn the final steps to take to get your assistant live. You can also review the whole Getting started series if there’s anything you missed along the way.