Hackers at the Disrupt 2015 hackathon in San Francisco
Hackathons may often seem like pandemonium. There is loud, pulsating music; hundreds and sometimes even thousands of people milling around, greeting friends and strangers, mingling, carrying sleeping bags, pillows, and laptops, trying to find a place to sit and, ultimately, to sleep. There is Wi-Fi that doesn't always work, there are lectures and announcements that aren't always audible, and there is flustered staff running around trying to make order out of the chaos.
But behind all the seemingly disjointed activity there is a certain structure to hackathons, regardless of where in the world they take place. And the structure is important, because it explains what a hackathon is all about and how to participate in one successfully.
During the first few hours, as people come trickling in and make themselves at home, it is all about hackers walking from booth to booth looking for swag.
It is like a swarm of locusts: you hear a swishing noise, and within a few minutes all your swag is gone, all your bags and packages are empty, and all the swag you had hoped would last at least until noon has simply evaporated.
But soon you get hackers coming around asking "so what APIs have you got?" Now we are in business, this is what we came for. And this is really the point of the entire hackathon, hackers looking for APIs that they can cobble together in various patterns to create cool apps that can change the world.
And this means that if you are running in another cloud and don't feel that you have the time to move your workload over to Bluemix during the hackathon, you can use the Bluemix RESTful APIs and add Watson Cognitive computing services and any other of the Bluemix services to your app then and there.
Bluemix also comes loaded with the API Management service, which allows you to quickly create and manage your own APIs, expose the services you create for others to use, and enforce policies around the consumption of those services.
And the recent addition of the StrongLoop API platform to Bluemix - the first end-to-end platform for the full API lifecycle, which lets you visually develop REST APIs in Node and connect them to new and legacy data - makes the Bluemix RESTful API story all the more exciting.
In the end it is all about APIs, the API economy, and the ability to quickly build apps around existing APIs.
Which is what makes hackathons so important and fun to attend.
My son (10) wanted to build a website. Last year, he helped me test my Build Your First Node.js website series, where I drafted him to follow the instructions step by step. This time was different - he knew what he wanted to build, and wasn’t about to follow any script.
I pointed him to my NodeJS_Simple_2 project to get started, as I felt it would be easiest for him to start by just editing HTML pages in the DevOps Services Web IDE, deploy to Bluemix, and see quick results. He did an initial deploy of the project and had the sample website running within minutes. Then, I showed him how to start editing the HTML index page using the Web IDE.
At this point, I also had him turn on the “Live Edit” capability in DevOps Services and do the one-time redeploy. Live Edit was added earlier this year, and is a great improvement when you are making small changes and want to see them fast. With Live Edit, a full redeploy is not required. The quick turnaround is so much better, as my son would never have the patience to wait for full re-deploys as he built his site. His style is to try something, see if it works, and try again.
I showed him the URL link in the DevOps Services runbar and how to follow this link to see the running website, and he started making his HTML edits. He made one little HTML change and immediately clicked on the URL link for the website. I forgot to tell him to hit the Live Edit restart button first.
Much to my surprise, there were his changes running live on his site!
As it turns out, with Live Edit, static changes can usually be seen immediately, and only changes to Node.js code need to use the Restart. So, the HTML changes my son made were immediately available (just like changing a file on your desktop). When changes are made to the actual Node.js code, the Restart button is needed - the button kicks off a restart of the node task on Bluemix to pick up the code change. Though not quite immediate, restart is really quick, and much faster than a full deploy since it bypasses staging and startup time.
Live edit is part of a larger set of Live Sync capabilities which also include desktop sync and debug. You can edit code dynamically, insert breakpoints, step through code, restart the runtime, and more, while your app is running on Bluemix. The full capabilities are available for Node.js, and I’m hoping will be extended to other languages in the future. They make a huge difference as you are developing out a website using DevOps Services and Bluemix. Get your 30 day free trial to get started with IBM Bluemix and DevOps Services.
I’ve spent the last week messing around in Eclipse, downloading stuff, installing plug-ins, and adding servers, etc. I hate to think about how many hours I spent without making any forward progress on my application, and without learning anything I consider meaningful. So, I was very happy to get our final draft of Analyzing notes with the AlchemyAPI Service on Bluemix back from the editor and be able to go through all the steps in the article WITHOUT installing anything.
Our new article is written so you can get started with the Alchemy Concept Tagging and Taxonomy APIs without downloading or installing anything. The application runs in IBM Bluemix, and we use DevOps Services, a Hosted IDE, for editing, building, and deploying the application. This is deliberately a very simple sample application, providing you the UI to take text in, send it off to the Alchemy APIs, and format back the results. I expect in less than 20 minutes you’ll have your app running, be making code changes, and working directly with the APIs. You’ll be through the article and ready for more in the time it usually takes just to download pre-reqs.
Our sample application is written in Python. This was my first experience using Python within DevOps Services, and I have to admit I found the experience more tedious than developing in Node.js. For Node.js, DevOps Services has Live Edit and Deploy capabilities, allowing for code changes and deployment without committing the code, creating a Build&Deploy pipeline, or performing a full deploy. To use Python with DevOps Services, you have to define a pipeline, and commit your changes in order to build and deploy. If you’re going to do serious Python development, I expect once you complete this example, you’ll want to move to your favorite IDE, and use the Cloud Foundry (cf) command line to deploy to Bluemix. Unlike my Eclipse installation woes, the cf command line is actually a quick and easy set-up.
I hope you have an opportunity to experiment with these two Alchemy APIs, and continue on to check out the complete set of Alchemy APIs. Face-detection is easy to get running by following the article, Build a simple face detection web app, by Chun Bin Tang. It’s just so easy to try out all types of new services on Bluemix - get started today with a free IBM Bluemix 30 day trial.
You know how they say "No great mind has ever existed without a touch of madness"? TechCrunch Disrupt San Francisco 2015 was definitely testimony to that. I spent my weekend there, and oh boy, it was definitely something: about 500 people "disrupting" the tech world with their new tech creations. Personally, it was so rewarding to interact with people who are passionate about technology in more ways than one: staying up all night to code, investing in state-of-the-art gear. Sleep is clearly overrated. Some cool new things I saw at the hackathon: self-balancing hoverboards, more popular than ever before, and Hush headphones quietly rocking the entire event. The hoverboards aren't exactly a solution for the lazy - it takes quite some concentration and exquisite balancing skill to fully reap their benefits. And as for the Hush headphones, whoa! The DJ powered up music on three radio stations, and the headphones pick up those channels. Color-coded stations flashing on the headphones help you identify fellow 'hush'-ers swaying their heads to the same tunes as you. IBM Bluemix and Watson rocked it out, with about 30 teams using the PaaS and its services for their project submissions.
Everybody is talking about it. It’s like the sensation that “Cloud Computing” created a few years back. Alright, what is it? IoT is today's fascination with enabling anything and everything with the internet. Need a temperature-controlled room? Bring in Nest. Need a watch that tracks your heart rate? Bring in Fitbit. A t-shirt that tracks your exercise? OMsignal. Whoa whoa whoa! Pause. Take a deep breath. Repeat. So yes, basically everything these days can be internet-enabled. And thus I jumped onto the bandwagon too.
I decided to participate in the Intel IoT Roadshow in LA (many thanks for the sensors and devices, folks!). Exactly two days to create something unbelievable from scratch. Today, the cloud has really made it possible to get things up and running in no time at all. I mean, can you imagine a fully blown product and its URL created within 48 hours? It's freakin' awesome, right? That's exactly what IBM Bluemix has made possible. And that's how Matt Pinner (so glad we met you), Colin McCabe, and I realized our own IoT dream and went on to win 2nd prize. Check out the post here.
Bluemix being both a platform as a service and a deployment medium, we used its Internet of Things Foundation and the Node-RED editor to connect a device to the cloud. Just key in the ID of your device and click Connect, and voila - your device is now sending information to the cloud. Using Node-RED to fetch this data and build the app logic was nothing but drag and drop. No fear of the black screen (*shrugs*).
The Node-RED editor is designed to minimize coding. It has components (nodes) which are connected by wires (squiggly lines with two ends), and each component has a little editor where you can write a couple of lines of code based on your requirements.
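For example, a function node's "couple of lines" are plain JavaScript that receives a msg object and returns it. Here the body is wrapped in a named function so it can run standalone; the { d: { temp: ... } } payload shape is an assumption matching what a device simulator flow typically publishes:

```javascript
// What the body of a Node-RED function node might look like. In the editor
// you only write the body; msg is provided by the runtime.
function functionNodeBody(msg) {
  var temp = msg.payload.d.temp;
  // Rewrite the payload so downstream nodes (e.g. the Debug tab) see a label.
  msg.payload = temp > 30 ? "Too hot: " + temp : "Comfortable: " + temp;
  return msg;   // returning msg passes it along the wire to the next node
}
```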
Want to see for yourself how easy it is? Try these steps yourself and you’ll believe your eyes:
Go to ibm.biz/iotsenor and note down the alphanumeric string on the top right corner.
Click “Deploy” in the top right corner and watch data appear under the Debug tab!
And there you go - your very own temperature and humidity reader from a virtual device. It’s just a little more work to power up an actual Intel Edison, Raspberry Pi, Sphero, etc., hook up sensors, and feed in the MAC address. Double-click the other boxes in the Node-RED editor to understand the other components, and have fun!
Getting Started: Connecting my Edison Board to the Cloud
I am getting started with my Edison board, connecting it to Bluemix for cloud back-end data storage and processing. My first goal was to learn some Edison basics and run a basic application that pushes data into the cloud for possible data analytics in the future. You can follow along to get your own board running. These steps took me about 3 hours.
First, I followed the one-time board setup tasks found here with no problems. Note, however, that I needed the Intel XDK IoT Edition development IDE installed on my laptop.
So by now you have hopefully done the simple Onboard LED template project.
Next, I wanted to see something working, so I decided to do a simple test grabbing the voltage from the board. I found a simple application in the recipes in the IoT documentation. The application has me punch in the MAC address, and it displays a graph of the voltage being used by the Edison.
Go to https://github.com/chipgarner/EdisonBluemixNode/tree/quickstart and click the Download Zip (on the right hand side).
Make sure to extract it into an easy-to-find location, like the desktop. From there, open the project in the XDK IDE.
If your device is connected and running, grab the MAC address: open a terminal, run ifconfig, and look at the HWaddr for the interface connected to the internet. Copy that address and replace the one found in main.js.
Once that is completed, hit the hammer icon for a Build/Install (make sure to save first, then proceed). Once it completes, run the program. Now open a browser and go to https://quickstart.internetofthings.ibmcloud.com/#/. Enter the device ID (the MAC address) and click Go. A chart pops up and starts to fill with data from the Edison board.
Now that I had a simple application running on the Edison board, I moved on to connecting to IBM Bluemix to get at the sensor data. If you already have a Bluemix account, great; if not, please sign up. It only takes a few moments, and then a validation email is sent.
In the Bluemix catalog, under Boilerplates, you will find the Internet of Things Foundation Starter. Click on it, enter a name and a unique host, then click Create. This creates a new Node-RED application.
Once it has deployed, go to Overview (on the left-hand side) and click Add a Service. Scroll to the bottom and click on Internet of Things, then click Create. When asked to restage, please do so.
There is just one more step on the Bluemix side: creating the Edison device in the IoT service. Click on the IoT service and click Open Dashboard. From there, go to the Devices tab and click Add New Device. Enter a device type (I used “Edison”); the ID will be the MAC address (the same one as above). Click Add, and it will bring you to a page with the device information. Please copy this, as the auth key is only shown once - if you don’t grab it, a new device has to be created.
Now we are all set with our Bluemix application and authentication credentials, so we move back to the Edison board. Go back to GitHub, this time to https://github.com/chipgarner/EdisonBluemixNode/tree/registered
Same as before, click Download ZIP and extract into a known location. In the XDK IDE, click Open Project and find the project you just extracted. Now one simple update and you are off: go into main.js and update the device credentials with the ones copied from the Bluemix IoT service.
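For orientation, here is a sketch of what a credentials block for a registered IoT Foundation device roughly looks like. The field names are illustrative assumptions (check main.js for the sample's actual names), while the client-id and host formats are the ones the IoT Foundation expects:

```javascript
// Build the MQTT connection options for a registered IoT Foundation device.
function iotfOptions(org, deviceType, deviceId, authToken) {
  return {
    // each organization gets its own MQTT broker endpoint
    host: org + ".messaging.internetofthings.ibmcloud.com",
    port: 1883,
    // registered devices connect as d:<org>:<device type>:<device id>
    clientId: "d:" + org + ":" + deviceType + ":" + deviceId,
    username: "use-token-auth",   // literal string for token-based auth
    password: authToken           // the auth token copied from the dashboard
  };
}
```

These options would be handed to an MQTT client, which then publishes events on topics of the form iot-2/evt/&lt;event&gt;/fmt/json.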
Now we can do a Build/Install (again, make sure to save, then proceed). Once completed, click Run. Now go back to the IoT dashboard in Bluemix where the device was added and see it connected and sending information about every second.
The Edison is connected to Bluemix and sending data! My next step is to use the point-and-click Node-RED capabilities to create a workflow that puts the data into a database and runs analytics on it. I’ll save that for another blog. Time to invent the way to the next big thing with the Edison board and Bluemix.
Our IBM Cloud Marketplace partner Instaclustr is now ready to deliver an Apache Cassandra DBaaS to Bluemix through SoftLayer - running on bare metal servers - and the performance is just amazing. I've been supporting Instaclustr for a long time, and now we are really proud to see these amazing performance figures.
Since every partner will provide a free tier of its service, you'll be able to use Apache Cassandra as a service for free as well - among other services like Apache Spark (50 GB free), Hadoop MapReduce (20 GB free), MongoDB (500 MB free), CouchDB (20 GB free), Elasticsearch, Node.js, Liberty, PHP, Python, Go, R, Scala, Java EE, ...
There are also free infrastructure tiers: 2 GB RAM for Docker, 500 MB RAM for Cloud Foundry, and 12 GB RAM - 80 GB HD - 8 vCores for OpenStack, ...
Just register here for the trial, and after 30 days you'll end up in the free tier with all the services mentioned above... have fun! :)
But now let's turn back to the original blog post by Ben Slater, Product Engineer at Instaclustr. He basically states that SoftLayer is twice as fast as the largest machine from our competition. This is pretty amazing, since we have clients relying on Cassandra as their central data repository who need queries answered with less than 100 ms latency - for a real-time recommender system, for example. So they are very happy as well.
In this article I show you how to create a GPS tracker app for Android. We are going to use the Android SDK for the app, Node.js for the REST service, and a DB2 instance to store the data. The REST service and the DB2 instance are deployed to IBM Bluemix. For developing/coding I am using the Eclipse IDE. This tutorial requires the following know-how/installations:
Everything described in this article can be done with the IBM Bluemix “Free Tier”. You can sign up for Bluemix here: http://Ibm.biz/joinIBMCloud
01 Let’s start with the concept.
Our GPS tracker app sends the location (latitude and longitude) and the ID of the device to the REST service, which runs on Bluemix. The REST service writes this information into the DB2 database, which also runs on Bluemix.
Our REST service also has a web interface from which we can retrieve the location of a device by its device ID. The communication between the app and the REST service is carried out over HTTP. The REST service uses SQL to interact with DB2. The graphic below visualises the simplified concept.
A favorite in Bluemix prototyping is the Node-RED interface, a wiring tool that simplifies programming by turning common functions into nodes that can be added, removed, and connected at will. In particular, Node-RED is a rather magical interface that simplifies MQTT, a publish-subscribe messaging protocol, in such a way that just about anyone can connect a device without much effort. What does this mean? I can send data from a device with a few simple lines of code.
Bluemix has its own Internet of Things management interface called the Internet of Things Foundation (IoTF), and the minds in the IoT group at IBM have put together several quick start guides for getting various devices connected to IoTF. With the thousands of devices out there, it's reasonable to assume that not all of them are going to have quick start guides. The closest you'll get to the Intel Edison is the Intel Galileo, the Edison's older brother.
While the Galileo quick start is the best place to start connecting the Edison to the IoT Foundation, the two devices are far from identical, and I assumed going in that I was going to have some problems. In trying to send data to and from the Edison, I ran into two major problems using the quick start code and documentation: the example reading CPU data, and then receiving data from Node-RED.
After digging around for a few seconds I realized that the problem was in the fs.readFile function:
If you navigate to the file in question (test out those Linux navigation skills), it's unreadable. However, you may have noticed that the thermal directory has several other directories for zones. In my case, zone1 had a readable temp file and merely changing this line to direct to zone1 instead of zone0 got the quick start working.
Now that Node-RED had data, I did some very basic temperature handling and sent a message back to the Edison via an IBM IoT out node. Except I quickly found I was receiving no data, and then finally a sporadic twelve to fourteen messages at most.
So what is the secret to getting it working? The following example code updates the createClient method to the newer connect method and handles an LCD output display as data is passed to Node-RED and back to the Edison.
Looking at the subscribe-handling portion, the Edison needs the QoS (Quality of Service) qualifier to properly handle messages:
If you change the QoS to 0 or 2, you'll see some differences in how MQTT handles messages. A QoS of 1 worked best for what I was doing. You can do custom error and success handling, but I went for a simple message that a subscription existed.
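To make the three levels concrete, here is a small helper that builds the options object you would pass alongside a subscribe call, with each level's delivery guarantee spelled out (the helper itself is illustrative, not part of the quick start code):

```javascript
// The three MQTT QoS levels and what each means for the Edison subscriber.
function subscribeOptions(qos) {
  var semantics = {
    0: "at most once - fire and forget, messages can be silently dropped",
    1: "at least once - broker retries until acknowledged, duplicates possible",
    2: "exactly once - heaviest handshake, no loss and no duplicates"
  };
  if (!(qos in semantics)) {
    throw new Error("QoS must be 0, 1, or 2");
  }
  return { qos: qos, note: semantics[qos] };
}
```

QoS 0's silent drops explain the sporadic handful of messages I was seeing; QoS 1 trades occasional duplicates for reliable delivery, which suited this flow.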
With those two sections of code, you have a basis for connecting the Edison while using the simple quick start recipe already available for the Galileo. Easy fix, hours of fun with sensors!
OK, I admit to the “kid in a candy store” feeling after finding the Node-RED library full of additional nodes and starter flows. There are so many nodes to choose from, but the two that immediately caught my eye were – Fitbit and charts. Two I’ve been looking for - all ready for me to use.
Well, not quite ready - I had no idea how to add these nodes into my Bluemix Node-RED flow editor. I checked the forum and found instructions at: https://developer.ibm.com/answers/questions/180359/node-red-instructions-for-adding-a-new-node.html. Unfortunately, the instructions talked about adding the nodes into the package.json of my application, and all I had was a Node-RED service on Bluemix. Then I remembered what we did for the user interface of my Node-RED to Watson article, and used the Add Git button in my Bluemix Node-RED project to populate a sample application into DevOps Services. In the DevOps Services project, I edited the package.json to add these two lines:
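For illustration only - the exact package names and version ranges depend on which Fitbit and chart nodes you pick from the library, so treat these as placeholders rather than my literal lines - the additions go into the dependencies section of package.json:

```json
"dependencies": {
  "node-red-node-fitbit": "0.x",
  "node-red-contrib-chart": "0.x"
}
```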
I deployed the application from DevOps Services, and when I went back into the Node-RED flow editor, the new nodes were in the palette:
I’m making good progress using the new nodes. Here’s an example chart, where the data from my Fitbit is live, though the other two data points are hard-coded for now (we only have one Fitbit in the family). I hope you find great new Node-RED node capabilities in the library. You can get started with Node-RED and all the other Bluemix capabilities using a 30-day free trial.
I’ve been enamored with Node-RED since I built my first Twitter to SMS flow, so I’m really excited about the application built for my new article, “Add Machine Translation to your app using IBM Watson”, co-authored with Romeo Kienzler and Fabian Dubacher. When the Watson Services first appeared as nodes in the Bluemix Node-RED flow editor, I quickly built a flow using the Watson Machine Translation service. It was fun to inject a word in and have it translated; however, it was just a flow, and what I really wanted was a user interface that I could put on top.
I started my quest for an example UI for a Node-RED flow, and fortunately for me, along came Romeo and Fabian. We started with the application Romeo built for this blog, which included a Java/AngularJS application for the user interface invoking a Node-RED hosted REST service that called Watson machine translation and sentiment analysis.
For the article, I asked Romeo to show me how the user interface could be built and deployed via DevOps Services. I don’t have anything against Eclipse, I just wanted the article to be even easier to get started with. Romeo and Fabian did even more than I asked by also updating the original application’s interface to the REST service to remove Java, allowing our AngularJS user interface code to deploy directly into the same runtime as the Node-RED workflow. The result is that you can reproduce the entire application in just a few minutes, without installing anything locally.
Instagram, SnapChat, WeChat, WhatsApp, you name it, mobile chat and messaging apps are everywhere. And when we communicate in the mobile space we usually do so with pictures. Snapped in an instant on our smartphones with mobile cameras that rival what serious photo journalists had available a few years ago.
But thousands, millions, billions of pictures are just the beginning, because mobile picture apps have created their own world that isn't just about cute kittens or the gang on vacation clowning in front of the Golden Gate Bridge.
The pictures we snap throughout the day aren't just consumed by our friends and followers; they can be analyzed by advanced image technologies that run in the cloud and are already opening up an amazing world of picture recognition, machine learning and analytics that can change the simplest picture into a treasure trove of information.
The AlchemyVision API allows us to snap a picture on our smartphones and then to instantly match it against millions of pictures in the cloud. AlchemyAPI uses natural language processing technology and machine learning algorithms to extract semantic meta-data from content, such as information on people, places, companies, topics, facts, relationships, authors, and languages.
The AlchemyVision API allows us to augment our pictures with relationships, background, and contexts that we weren't even aware existed.
To learn more about what is possible with image analysis, look at the Watson Visual Recognition service, which is where the picture above comes from. It enables us to analyze the visual appearance of images or video frames to understand what is happening. Using machine learning technology, semantic classifiers recognize many visual entities, such as settings, objects, and events. The service applies these models to identify imagery and returns candidate responses with confidence levels.
Watson is all about cognitive computing, and the AlchemyVision API and Watson Visual Recognition service, running on Bluemix, apply cognitive computing to the world of smartphones and its billions upon billions of pictures.
Last year I participated in a Bluemix-based hackathon, and my team and I came up with a solution that, in part, included using Twitter as a source for restaurant recommendations. At the time we did not implement that particular feature because it was daunting: it would have involved indexing millions of tweets, singling out references to restaurants, and determining whether a reference was positive or negative. Way too much effort to squeeze into a 48-hour time period!
Fast forward to today, and that task doesn’t seem so daunting anymore. IBM is actively collaborating with Twitter, and one of the fruits of that collaboration is a Bluemix service called IBM Insights for Twitter that makes use cases like the one from our hackathon much easier to implement. IBM Insights for Twitter indexes tweets and adds searchable metadata like the gender of the tweet's author, the location from which the tweet was made, and the overall sentiment of the tweet (positive, negative, or neutral). The service also gives you aggregated search results, where you can count the number of tweets that meet your search criteria.
With this new service I was able to get a rudimentary version of our previously unimplemented use case up and running very quickly. In this blog post I’ll share the sample application I came up with to mine the Twitterverse for restaurant recommendations.
The sample application
There are 2 major parts to the sample application:
Generate a list of restaurants within a certain radius of the user’s current location.
Rank the list by the number of positive tweets for each restaurant on the list.
For item 2, I used the IBM Insights for Twitter Bluemix service. For each restaurant on the list, I was able to get a count of the number of positive tweets referring to it. I implemented the interface to the IBM Insights for Twitter service in Node.js running on Bluemix. For each restaurant returned by the Places API, the app queries for the count of tweets with positive sentiment. There is also the capability to limit the tweets counted to a certain radius around the location of the restaurant.
I also added a simulation mode where the number of positive tweets can be generated randomly. The IBM Insights for Twitter service caps the number of free searches you can do each month based on the number of tweets it needs to search to meet your query criteria, so the simulation mode allows you to debug the rest of the app without using up your monthly search allowance. You can turn off simulation mode (and actually search against Twitter) by changing a configuration option in the app's source code. This is described in the next section.
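Sketched in Node (with illustrative names, not the sample's actual identifiers), the ranking step and the swap-in simulation counter look roughly like this:

```javascript
// In simulation mode the counter is random; in live mode it would query the
// IBM Insights for Twitter service for positive mentions instead.
function simulatedPositiveCount(restaurant) {
  return Math.floor(Math.random() * 100);
}

// Rank restaurants by positive tweet count, keeping the top 5.
function rankRestaurants(restaurants, countPositiveTweets) {
  return restaurants
    .map(function (r) {
      return { name: r.name, positiveTweets: countPositiveTweets(r) };
    })
    .sort(function (a, b) { return b.positiveTweets - a.positiveTweets; })
    .slice(0, 5);
}
```

Passing a different counter function is all it takes to flip between simulation and live mode, which is what the config option described below toggles.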
When the app is run, the user is prompted by the browser to allow the app the get his or her location. The user then chooses the search radius and initiates the search. The result is a list of 5 restaurants ordered by the number of positive Twitter mentions. Here's a video of the app in action.
Click the button below to get it up and running in your environment. Follow the prompts to either log in (if you already have a Bluemix account) or sign up for a free trial. Once you complete the process, the app will be deployed to Bluemix and you'll have your own copy of the app in IBM DevOps Services.
When complete, the screen will look like the following. Click View Your App to run your copy of the app, or Edit Code to bring up the source code in IBM DevOps Services.
The app defaults to simulation mode, so while you'll get restaurants close to your location when you run it, the Twitter mention data is generated randomly. To get it to query real Twitter data instead, do the following:
In IBM DevOps Services, edit the file config.js in the project's root folder and change the value of the property config.ibmtw.useSimulator from true to false. Save your changes.
Click on the deploy button (as shown below) to redeploy the app to Bluemix.
This is a great start for less than a day's work, but there are a couple of improvements that come to mind. Twitter users abbreviate a lot, so we could add different variations of the restaurant names to the search criteria. Also, the name of a particular restaurant might also be the name of another entity, so additional screening might be needed to make sure that a particular tweet refers to a restaurant and not something else with the same name.
When all is said and done, though, the IBM Insights for Twitter service has drastically reduced the time it takes to get meaningful insight from the Twitterverse. Bon appetit!
PS: I would like to thank the other members of the team that generated this idea last year - a big thank you to Tuhin Mahmud, Michael Senkow, Kathryn McElroy, and Wen Zhu WZ Liu.