
Using Watson services with Google Docs


I recently worked with a client on an experimental approach for natural language analysis of responses to interview questions. We wanted to see if Watson could automatically report concepts and tone from the interview responses.

My first test case and results are somewhat raw, but they showed me how easily I could use Watson and Google Sheets to analyze text and share the results.

A test case: integrating Google Sheets with Watson services

The client’s interview responses were in Google Docs, so I put together a reference example that reads questions from the rows of a Google spreadsheet, runs AlchemyLanguage’s concept tagging function and the Watson Tone Analyzer on them, and then writes the Watson data into cells alongside each question. For fun, I chose a 1971 interview between John Lennon and Rolling Stone.

Here’s an excerpt of the resulting spreadsheet:

And here’s the full spreadsheet. You can try this out for yourself. Just go to the GitHub repo.

Working with Google Sheets

Very often, when I start a project, I use Google Sheets. It comes with a marvelous API and a great npm module for easy integration with Node.js. This means that I can easily model, change, and experiment with data in a convenient visual way by treating the spreadsheet like a database, without writing a user interface to work with the data.

If you look at the app.js code in the GitHub repo, you can see that including and using the spreadsheet is pretty easy:

// Load the google-spreadsheet npm module, then open the sheet by its unique id.
var GoogleSpreadsheet = require('google-spreadsheet');
var doc = new GoogleSpreadsheet('1UVbnv8KJ5ycYxdl_1LIr7XzFaPGhRqvf5SQRCrBqwpg');

The long scramble of letters in that call is the unique id of the published spreadsheet.

If you follow the code down to getInfoAndWorksheets(), you should be able to trace how the spreadsheet is read. You can always clone the code if you want to try it yourself; just be sure to replace the unique id with the id of your own published spreadsheet. For more instructions, refer to the readme file.
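For orientation, here’s roughly what that read looks like with the google-spreadsheet module. This is a minimal sketch rather than the repo’s exact code, and it assumes a worksheet whose header row includes question and answer columns:

var GoogleSpreadsheet = require('google-spreadsheet');
var doc = new GoogleSpreadsheet('<your published spreadsheet id>');

// Fetch the spreadsheet metadata, then read the rows of the first worksheet.
doc.getInfo(function (err, info) {
  if (err) { return console.error(err); }
  var sheet = info.worksheets[0];
  sheet.getRows({ offset: 1 }, function (err, rows) {
    if (err) { return console.error(err); }
    rows.forEach(function (row) {
      // Each row exposes its cells as properties named after the header row.
      console.log(row.question, row.answer);
    });
  });
});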

Calling Watson

Making a call to Watson is very easy; it’s often just a single line of code. In this case, I make calls to the Watson Tone Analyzer and AlchemyLanguage concept tagging. You can see a call in the code in app.js:

var parameters = { text: row.answer, knowledgeGraph: 1 };
alchemy_language.concepts(parameters, function (err, response) {
  // response.concepts holds the tagged concepts for this answer
});

This code iterates through the rows of the spreadsheet and reads the answer text, then passes it as a parameter to AlchemyLanguage concept tagging.
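The Tone Analyzer call follows the same pattern. Here’s a minimal sketch using the watson-developer-cloud npm module; the credentials and version date are placeholders, not values from the repo:

var watson = require('watson-developer-cloud');

var tone_analyzer = watson.tone_analyzer({
  username: '<service username>',
  password: '<service password>',
  version: 'v3',
  version_date: '2016-05-19'
});

// Analyze the same answer text; the response groups scores into
// emotion, language style, and social tendency categories.
tone_analyzer.tone({ text: row.answer }, function (err, tone) {
  if (err) { return console.error(err); }
  console.log(JSON.stringify(tone.document_tone.tone_categories, null, 2));
});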

Following the file, you can see where I write the AlchemyLanguage response back into the spreadsheet row and save it. I love the simplicity and efficiency, and it happens almost instantly. Here’s a video clip of it happening in real time:

IBM Watson working with a Google Spreadsheet from Anton McConville on Vimeo.
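For reference, the write-back itself is just a couple of lines: set properties on the row object and call save(). A minimal sketch, assuming a concepts column in the sheet (the column name is my placeholder, not necessarily the repo’s):

// Join the concept labels and store them in a cell of the same row.
row.concepts = response.concepts.map(function (c) { return c.text; }).join(', ');
row.save(function (err) {
  if (err) { console.error(err); }
});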

Surprising results

In my example, I chose John Lennon’s interview as a test case. Looking at the output data, the tone and concepts can be difficult to reconcile at times. The concept analysis clearly picks out appropriate concepts based on the bands and celebrities he name-drops, but the tone results are harder to interpret, and I sometimes have to read more into the output. I did smile at the tone results for the question in row 9 of the spreadsheet:

Why did you choose or refer to Zimmerman, not Dylan?

In his response, John Lennon criticizes Bob Dylan for not using his real name. It registers 82% disgust! Lennon’s tone is generally angry throughout the interview, but he does reach moments of joy in some of his answers. Appropriate concepts are definitely surfacing, too.

Conclusion

Setting the coding aspects aside, just think about the possibilities.

For example, using Watson Speech to Text, you could take the text from any video or audio clip and easily run language analysis on it to share. You could do the same with Snapchat videos, video calls, videos embedded in tweets, and so on. You could even do this in real time, as a conversation happens, and watch the concepts Watson picks out as the conversation or interview unfolds, or monitor for a sharp rise in a particular tone. There are many more dimensions of understanding you could explore with this kind of approach.
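To make the transcription step concrete, here’s a rough sketch with the watson-developer-cloud npm module; the file name and credentials are placeholders:

var watson = require('watson-developer-cloud');
var fs = require('fs');

var speech_to_text = watson.speech_to_text({
  username: '<service username>',
  password: '<service password>',
  version: 'v1'
});

// Transcribe an audio clip, then feed the transcript to the same
// concept and tone analysis shown above.
speech_to_text.recognize({
  audio: fs.createReadStream('interview.wav'),
  content_type: 'audio/wav'
}, function (err, res) {
  if (err) { return console.error(err); }
  console.log(res.results[0].alternatives[0].transcript);
});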

My example is raw and experimental. I haven’t tried other interviews or proved that the concept works. I’ll be digging more deeply into the concept, but for now, as a developer and designer, I was delighted with just the combination of Watson and Google Sheets. They gave me an easy way to analyze text and share the results.

The more I use Watson, the more certain I am that natural language analysis will keep evolving and inevitably form part of many solutions. The possibilities for automated understanding of our words are endless.

Learn more

  • Watson Tone Analyzer uses linguistic analysis to detect three types of tones from text: emotion, social tendencies, and language style.
  • AlchemyLanguage is a collection of APIs, including concept tagging, that offer text analysis through natural language processing.
  • Watson Speech to Text converts the human voice into the written word.
