Using Watson services with Google Docs


I recently worked with a client on an experimental approach for natural language analysis of responses to interview questions. We wanted to see if Watson could automatically report concepts and tone from the interview responses.

My first test case and results are somewhat raw, but they showed me how easily I could use Watson and Google Sheets to analyze text and share the results.

Test case to demonstrate integration of Google Spreadsheet with Watson services

The client’s interview responses were in Google Docs, so I put together a reference example that reads questions from rows of a Google spreadsheet, runs AlchemyLanguage’s concept tagging function and the Watson Tone Analyzer on them, and then writes the Watson data into cells alongside the question. For fun, I chose a 1971 interview between John Lennon and Rolling Stone.

Here’s an excerpt of the resulting spreadsheet:

And here’s the full spreadsheet. You can try this out for yourself. Just go to the GitHub repo.

Working with Google Sheets

Very often, when I start a project, I use Google Sheets. It comes with a marvelous API and a great npm module for easy integration with Node.js. This means I can easily model, change, and experiment with data in a convenient visual way, treating the spreadsheet like a database without writing a user interface to work with the data.

If you look at the app.js code in the GitHub repo, you can see that including and using the spreadsheet is pretty easy:

var GoogleSpreadsheet = require('google-spreadsheet');
var doc = new GoogleSpreadsheet('1UVbnv8KJ5ycYxdl_1LIr7XzFaPGhRqvf5SQRCrBqwpg');

The long scramble of letters in that call is the unique id of the published spreadsheet.

If you follow the code down to getInfoAndWorksheets(), you can see how the worksheet and its rows are loaded. You can clone this code to try it yourself; just replace the unique id with that of your own published spreadsheet. For more instructions, refer to the readme file.
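Once the rows are loaded, the app only analyzes rows that actually contain answer text. Here is a minimal sketch of that selection step, with mock row objects standing in for what the google-spreadsheet module’s sheet.getRows() callback delivers; the question/answer column names match this example’s spreadsheet, and the helper name is my own:

```javascript
// In the real app, rows arrive via the google-spreadsheet module:
//   sheet.getRows({ offset: 1 }, function (err, rows) { ... });
// Each row object exposes the sheet's column headers as properties.

// Keep only rows that have answer text worth sending to Watson.
function rowsWithAnswers(rows) {
  return rows.filter(function (row) {
    return row.answer && row.answer.trim().length > 0;
  });
}

// Mock rows standing in for a fetched worksheet.
var rows = [
  { question: 'Why did you choose Zimmerman, not Dylan?',
    answer: 'Dylan is bullshit. Zimmerman is his name.' },
  { question: 'An empty row', answer: '' }
];

console.log(rowsWithAnswers(rows).length); // 1
```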

Calling Watson

Making a call to Watson is very easy; it’s often just a single line of code. In this case, I make calls to the Watson Tone Analyzer and AlchemyLanguage concept tagging. You can see a call in the code in app.js:

var parameters = { text: row.answer, knowledgeGraph: 1 };
alchemy_language.concepts(parameters, function (err, response) {

This code iterates through the rows of the spreadsheet and reads the answer text, then passes it as a parameter to AlchemyLanguage concept tagging.
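The Tone Analyzer call works the same way and returns a nested JSON document of scored tones. Here is a sketch of pulling the strongest emotion out of a response; the tone_categories/tones shape is an assumption based on the Tone Analyzer API of this era, and dominantEmotion is a hypothetical helper, not part of the app:

```javascript
// Find the highest-scoring emotion tone in a Tone Analyzer-style response.
function dominantEmotion(response) {
  var emotion = response.document_tone.tone_categories
    .filter(function (cat) { return cat.category_id === 'emotion_tone'; })[0];
  // reduce with no initial value starts from the first tone.
  return emotion.tones.reduce(function (best, tone) {
    return tone.score > best.score ? tone : best;
  });
}

// Sample response shaped like the Tone Analyzer's output.
var toneResponse = {
  document_tone: {
    tone_categories: [
      {
        category_id: 'emotion_tone',
        tones: [
          { tone_name: 'Anger', score: 0.31 },
          { tone_name: 'Disgust', score: 0.82 },
          { tone_name: 'Joy', score: 0.12 }
        ]
      }
    ]
  }
};

console.log(dominantEmotion(toneResponse).tone_name); // "Disgust"
```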

Following the file, you can see where I write the AlchemyLanguage response back into the spreadsheet row and save. I love the simplicity and efficiency. It happens almost instantly, too. Here’s a video clip of it happening in real time:
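Writing the response back means flattening the JSON into something a single cell can hold. A sketch of one way to do that, assuming the AlchemyLanguage concepts response is an array of objects with text and relevance fields (conceptsToCell is an illustrative helper, not the app’s actual code):

```javascript
// Flatten an AlchemyLanguage concepts response into one cell value,
// e.g. "Bob Dylan (0.95), The Beatles (0.87)".
function conceptsToCell(response) {
  if (!response || !response.concepts) return '';
  return response.concepts
    .map(function (c) { return c.text + ' (' + c.relevance + ')'; })
    .join(', ');
}

// Sample response shaped like AlchemyLanguage concept-tagging output.
var conceptResponse = {
  status: 'OK',
  concepts: [
    { text: 'Bob Dylan', relevance: '0.95' },
    { text: 'The Beatles', relevance: '0.87' }
  ]
};

console.log(conceptsToCell(conceptResponse));
// "Bob Dylan (0.95), The Beatles (0.87)"
```

The row object from the google-spreadsheet module can then take this string as a new column value and be saved back in one call.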

IBM Watson working with a Google Spreadsheet from Anton McConville on Vimeo.

Surprising results

For my example, I chose John Lennon’s interview as a test case. Looking at the output data, the tone and concepts can be difficult to reconcile at times. The concept analysis clearly picks out appropriate concepts based on the bands and celebrities he name-drops, but the tone is harder to interpret, and I find I sometimes have to read more into the output. I did smile at the tone results for the question in row 9 of the spreadsheet:

Why did you choose or refer to Zimmerman, not Dylan?

In his response, John Lennon criticizes Bob Dylan for not using his real name. It registers 82% disgust! Lennon’s tone is generally angry throughout the interview, though he does reach moments of joy in some answers. The concept tagging surfaces relevant results throughout, too.


Setting the coding aspects aside, just think about the possibilities.

For example, using Watson Speech to Text, you could take the text from any video or audio clip and easily run language analysis on it to share. You could do the same with Snapchat videos, video calls, videos embedded in tweets, and so on. You could even do this in real time, watching the concepts Watson picks out as a conversation or interview unfolds, or monitoring for the intensity of a particular tone. There are many more dimensions of understanding you could explore with this kind of approach.

My example is raw and experimental. I haven’t tried other interviews, or proved the concept works. I’ll be digging more deeply into the concept, but for now, as a developer and designer, I was delighted with just the combination of Watson and Google Sheets. They helped me set up an easy way to analyze text and share the results.

The more I use Watson, the more certain I am that natural language analysis will keep evolving and inevitably form part of many solutions. The possibilities are endless for automated understanding of our words.

Learn more

  • Watson Tone Analyzer uses linguistic analysis to detect three types of tones from text: emotion, social tendencies, and language style.
  • AlchemyLanguage is a collection of APIs, including concept tagging, that offer text analysis through natural language processing.
  • Watson Speech to Text converts the human voice into the written word.