
Home automation powered by Cloud Functions, Raspberry Pi, Twilio and Watson

Over the past few years, we’ve seen a significant rise in popularity for intelligent personal assistants, such as Apple’s Siri, Amazon Alexa, and Google Assistant. Though they initially appeared to be little more than a novelty, they’ve evolved to become rather useful as a convenient interface to interact with service APIs and IoT connected devices.

This series will guide users through setting up their own starter home automation hub using a Raspberry Pi. This first blog post provides a step-by-step tutorial to create an RF circuit that’ll enable the Raspberry Pi to turn power outlets off and on. Once the circuit and software dependencies are installed and configured properly, users will also be able to leverage Watson’s language services to control the power outlets via voice and/or text commands.

Furthermore, we’ll show how IBM Cloud Functions (formerly OpenWhisk) can be leveraged to trigger these sockets based on a timed schedule, changes to the weather, motion sensors being activated, etc. We’ll assume that the reader has a basic understanding of Linux and electronic circuits.

This tutorial requires the following components:

  • A Raspberry Pi with internet access
  • A 433MHz RF transmitter and receiver pair wired to the Pi’s GPIO pins
  • Etekcity remote-controlled power outlets (and their RF remote)
  • A USB microphone (for the voice-command portion of the tutorial)

Install software dependencies

Log in to the Raspberry Pi and install the wiringPi library and its prerequisites. This library enables applications to read and control the Raspberry Pi’s GPIO pins.

sudo apt-get -y update
sudo apt-get -y install git-core
git clone git://git.drogon.net/wiringPi
cd wiringPi
git pull origin master
./build

Ensure the wiringPi library is installed properly by running the following command.

gpio readall
Pi GPIO output

Next, install 433Utils, which calls the wiringPi library to transmit and receive messages on the 433MHz frequency. In our case, each outlet has a unique RF code to turn power on and off, and we’ll use one of the 433Utils utilities, RFSniffer, to register each of these unique codes. The 433MHz frequency is standard among many common devices such as garage door openers, thermostats, window/door sensors, and car keys, so this setup is not limited to controlling power outlets. Install the library by running the following commands on the Raspberry Pi.

sudo apt-get install build-essential
git clone --recursive git://github.com/ninjablocks/433Utils.git
cd 433Utils/RPi_utils
make

Set up the RF circuit

Arrange the hardware components to complete the following circuit.

Raspberry Pi RF circuit diagram

Now we will determine which RF codes correspond to the Etekcity outlets. RFSniffer and codesend are built in the 433Utils/RPi_utils directory; the paths below assume the binaries have been copied to /var/www/rfoutlet, so adjust them if yours live elsewhere. Start by running:

sudo /var/www/rfoutlet/RFSniffer

This listens on the RF receiver for incoming signals and writes them to stdout. If the circuit is wired correctly, pressing the on/off buttons on the Etekcity remote should produce output like the following on the Raspberry Pi:

pi@raspberrypi:~ $ sudo /var/www/rfoutlet/RFSniffer
Received 5528835
Received pulse 190
Received 5528844
Received pulse 191

After determining the on/off signals for the RF sockets, place the captured codes and pulse lengths into the /etc/environment file, like so (log out and back in so the new values are picked up):

RF_PLUG_ON_1=5528835
RF_PLUG_ON_PULSE_1=190
RF_PLUG_OFF_1=5528844
RF_PLUG_OFF_PULSE_1=191

Now, plug in the associated socket and run the following commands to ensure the Raspberry Pi can turn it on and off. Each command simply sends the RF code at the requested pulse length, which is provided via the -l parameter.

/var/www/rfoutlet/codesend ${RF_PLUG_ON_1} -l ${RF_PLUG_ON_PULSE_1}

/var/www/rfoutlet/codesend ${RF_PLUG_OFF_1} -l ${RF_PLUG_OFF_PULSE_1}

Now that we can control the sockets manually via the CLI, we’ll move on and experiment with different ways to control them in an automated fashion. Rather than writing and executing pipelines and complex automation logic on the Raspberry Pi, we’ll use a serverless, event-driven platform called Cloud Functions. In this implementation, Cloud Functions actions communicate with the Raspberry Pi via MQTT messages.

An IBM Cloud account is required to set up Cloud Functions and the accompanying Watson services.

Provision Watson services

Moving forward, log in to IBM Cloud and provision the following services:

  • Watson Speech To Text — This service transcribes voice input from the Raspberry Pi microphone
  • Watson Conversation — This service analyzes the transcribed text and determines the intent behind the user’s command. The included training data can currently detect the requested state (on or off) and device type.
  • Watson IoT Platform — This serves as an MQTT broker. The credentials to connect to the broker can be generated by following these instructions. Once the credentials have been generated, set the following values in the /etc/environment file: IOT_ORG, IOT_API_KEY, IOT_AUTH_TOKEN, IOT_DEVICE_TYPE (see the example below).
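
For reference, the new /etc/environment entries might end up looking something like this. The values below are placeholders; substitute the organization ID, API key, auth token, and device type generated for your Watson IoT instance.

IOT_ORG=yourOrgId
IOT_API_KEY=yourApiKey
IOT_AUTH_TOKEN=yourAuthToken
IOT_DEVICE_TYPE=yourDeviceType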

IBM Cloud Functions

Serverless architectures are taking the cloud and IoT industries by storm for several reasons. First, these solutions allow developers and makers to offload operational concerns such as networking, server configuration and management, and disk failures. Second, these architectures provide a very effective billing model where functions are executed only on demand, and users are charged only for the total amount of time their code is running, down to the millisecond. These benefits let developers avoid being weighed down by operational tasks and focus purely on their application code and business logic.

If you’re unfamiliar with Cloud Functions, it might be best to run through the introductory documentation.

To get started, log in to IBM Cloud. If you do not have an account, you can register for one. Select the menu icon at the upper-left corner and navigate to the Cloud Functions section.

IBM Cloud Functions

Follow the instructions to download and install the OpenWhisk CLI. Run wsk action list to ensure your credentials and API endpoint are set properly.

Continue by cloning the home automation GitHub repository. This repository contains a set of Cloud Functions actions and training data for the Watson Conversation service.

sudo git clone https://github.com/IBM/serverless-home-automation.git /opt/serverless-home-automation

Cloud Functions allows multiple code snippets, or “actions,” to be chained together as a sequence. To get started, we will create a sequence that consists of three actions. The first action will transcribe an audio payload to text.

The second action will analyze the transcribed text using the Watson Conversation service. This analysis extracts the intent behind the spoken message and determines what the user would like the Raspberry Pi to do.

So, for example, if the user says something along the lines of “Turn on the light” or “Flip the switch,” the Conversation service will be able to interpret that. Finally, the third action will send an MQTT message that notifies the Raspberry Pi to switch the socket on or off.

Hub Architecture

The speech to text action is already built into Cloud Functions as a public package, so we just need to supply our credentials for that service. Moving forward, we can create actions to call the Conversation and Watson IoT services with the following commands.

cd /opt/serverless-home-automation/ibm_cloud_functions
wsk action create conversation conversation.js
wsk action create parser-python parser-python.js

The additional actions simply create a JavaScript promise that makes a request to the given service and returns the result when the call completes. For example, here is a snippet of the action responsible for calling the Conversation service.

Conversation code
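
The exact contents of conversation.js in the repository may differ, but a minimal sketch of such an action, using the watson-developer-cloud Node.js SDK, could look roughly like the following (the version_date value and the assumption that the transcription arrives as params.data are ours):

// Sketch of a Cloud Functions action that forwards transcribed text to Watson Conversation.
const ConversationV1 = require('watson-developer-cloud/conversation/v1');

function main(params) {
  const conversation = new ConversationV1({
    username: params.username,
    password: params.password,
    version_date: '2017-05-26' // assumed API version
  });

  return new Promise((resolve, reject) => {
    conversation.message({
      workspace_id: params.workspace_id,
      input: { text: params.data } // transcription from the speech to text action
    }, (err, response) => {
      if (err) {
        reject(err);
      } else {
        // Pass the detected intents and entities on to the next action in the sequence.
        resolve(response);
      }
    });
  });
}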

Once the actions are successfully created, we can set default service credentials for each of the actions. Otherwise we’d have to pass in the service credentials every time we’d like our actions to call the Watson services. To obtain these credentials, click each provisioned service in the IBM Cloud dashboard, and then select the “View credentials” dropdown.

Watson Speech to Text

Then insert the corresponding credentials when running the commands below.

wsk action update conversation -p username ${conversation_username} -p password ${conversation_password} -p workspace_id ${conversation_workspace_id}
wsk action update parser-python -p org ${iot_org_id} -p device_id ${device_id} -p api_token ${api_token}
wsk package bind /whisk.system/watson-speechToText myWatsonSpeechToText -p username ${stt_username} -p password ${stt_password}

Next, we can arrange the actions into a sequence.

wsk action create homeSequence --sequence myWatsonSpeechToText/speechToText,conversation,parser-python

For the sequence to be able to return the result to the Raspberry Pi, an MQTT client needs to be listening to the Watson IoT service. If the proper values have been set in the /etc/environment file, you should just have to run the following commands to create and enable a systemd service, which will automatically start on boot.

sudo cp /opt/serverless-home-automation/iot-gateway/node-mqtt.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable node-mqtt
sudo systemctl start node-mqtt
sudo systemctl status node-mqtt
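
The node-mqtt service runs the gateway code from the repository’s iot-gateway directory. Its exact implementation may differ, but conceptually it does something like the sketch below: connect to Watson IoT, subscribe to a command topic, and shell out to codesend when a command arrives. The client ID, topic name, and command payload fields here are assumptions (the real gateway might instead authenticate as an application using IOT_API_KEY), and the environment variables are expected to be exposed to the service, for example via an EnvironmentFile= entry in the unit file.

// Conceptual sketch of the MQTT gateway running on the Raspberry Pi.
const mqtt = require('mqtt');
const { exec } = require('child_process');

const org = process.env.IOT_ORG;
const client = mqtt.connect('mqtts://' + org + '.messaging.internetofthings.ibmcloud.com:8883', {
  clientId: 'd:' + org + ':' + process.env.IOT_DEVICE_TYPE + ':rpi-gateway', // assumed device ID
  username: 'use-token-auth',
  password: process.env.IOT_AUTH_TOKEN
});

client.on('connect', () => {
  // Subscribe to incoming device commands (topic format assumed).
  client.subscribe('iot-2/cmd/+/fmt/json');
});

client.on('message', (topic, message) => {
  // Expecting a payload along the lines of { "state": "on" } from the parser action.
  const command = JSON.parse(message.toString());
  const on = command.state === 'on';
  const code = on ? process.env.RF_PLUG_ON_1 : process.env.RF_PLUG_OFF_1;
  const pulse = on ? process.env.RF_PLUG_ON_PULSE_1 : process.env.RF_PLUG_OFF_PULSE_1;
  exec('/var/www/rfoutlet/codesend ' + code + ' -l ' + pulse, (err) => {
    if (err) console.error('codesend failed:', err);
  });
});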

To test the sequence, plug the USB microphone into the Raspberry Pi and run rec sample.wav. Record a command, saying something like “Turn on the light” or “Turn off the socket.” Then, use the commands below to write the audio file’s base64-encoded contents to a JSON file and execute the sequence.

echo "{\"payload\": \"$(base64 -w 0 sample.wav)\"}" > parameters.json
wsk action invoke homeSequence --blocking --result --params-file parameters.json

Finally, we can set up a hotword using the Snowboy detection toolkit. A “hotword” allows a device to listen passively in the background and “wake up” once a specific phrase is detected. When the hotword is spoken by the user, the Raspberry Pi will begin recording a voice command and forward the audio to the Cloud Functions sequence when the recording is complete.

Twilio

As an alternative to using a microphone, we can also control the device outlets using a phone by leveraging Twilio, a communications platform that enables developers to integrate SMS and VoIP capabilities into their applications. Texts or phone calls can be made from a registered phone number via a simple HTTP call or library client, like so:

#!/usr/bin/python
from twilio.rest import Client

# Credentials from the Twilio console (placeholders)
account_sid = "ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
auth_token = "your_auth_token"
client = Client(account_sid, auth_token)

# Send text message
message = client.messages.create(
    to="+15558675309",
    from_="+15017250604",
    body="Hello from Python!"
)

# Make phone call
# 'url' points to a TwiML document that dictates what will be said on the
# outbound call, how to respond to user input, etc.
# See the Twilio TwiML docs: https://www.twilio.com/docs/api/twiml
call = client.calls.create(
    to="+14155551212",
    from_="+15017250604",
    url="http://demo.twilio.com/docs/voice.xml"
)

In addition to making outbound calls and texts, we can configure the Twilio platform to take action in response to incoming calls and texts. So in our case, we’d like to be able to text something like “turn on the light” or “switch off the fan”, and have the message contents be forwarded to a Cloud Functions sequence. This can be done by configuring our active Twilio number’s “Messaging” settings to react to incoming SMS messages by triggering the webhook associated with our Cloud Functions sequence.

Twilio phone number configuration (Messaging settings)

The exact value for the webhook URL can be found in the Cloud Functions “Develop” dashboard by selecting the “View Action Details” button and then the “Enable as Web Action” checkbox.

Now that we have our Twilio number configured to trigger the Cloud Functions sequence, we’ll need to update our Cloud Functions action to extract and forward the relevant SMS information (sender number, message body, time) to our services. As we can see in the Twilio webhook documentation, the information from the incoming text message is forwarded with the webhook request. We can see the contents of the incoming request by creating an action titled “printparams,” which simply prints all parameters forwarded from the webhook request.
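
A minimal version of such an action just needs to log and return whatever it receives, for example:

// Log and return every parameter the web action receives from the Twilio webhook.
function main(params) {
  console.log(params);
  return params;
}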

IBM Cloud Functions

Here we see that the Twilio platform received a text reading “Turn on the fan” from a phone number ending in 7799. The text message value can be accessed via the “params.Body” variable, so we’ll simply add an “or” expression that uses the text value if it is set and falls back to the “params.data” value if not.

Conversation code 2
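
In the conversation action, that fallback might look something like this sketch, with the result passed along as the input text for the Conversation call shown earlier:

// Prefer the Twilio SMS body when present, otherwise fall back to the
// transcription forwarded by the speech to text action.
function getCommandText(params) {
  return params.Body || params.data;
}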

Also, since a text message can be sent from any phone, the “params.From” value can be used to add some level of security. This can possibly be achieved by adjusting the action to only respond to certain phone numbers.

In part 2 of this series, you will learn how to build out a full application that focuses on using voice commands to control additional types of household devices via infrared and Wi-Fi.

Feedback or contributions to this GitHub repository are encouraged! Also, feel free to post comments to this article.
