How does a group of 13-, 14- and 15-year-old kids react when exposed to the IBM Bluemix cloud environment for the first time, and what do they create?
I must admit that I didn't have especially high hopes when I signed up as a tutor and mentor, mostly because Bluemix isn't really an environment for kids that young.
But they quickly took to the Node-RED visual editor for the Internet of Things. We showed them the default app in Node-RED, the thermometer app, and how to connect it to Twitter and their own smartphones.
When we teach Node-RED and Bluemix to grown-ups, we have to spend a lot of time explaining what we are asking the students to do. But these kids didn't need justifications. They just wanted to be shown what to do, and they were off doing it.
They also quickly realized that the Node-RED environment works just like Lego blocks. So they started connecting Twitter to Watson nodes, and then the output from Watson to their smartphones with Twilio.
Now there was no stopping them. "Is this a hackathon?" one of them asked. "Can we stay all night?"
Moving beyond the Internet of Things was a little bit harder. Some of the kids had been exposed to Java, but by sticking to a pre-integrated Boilerplate and showing them how to write HTML code we quickly got them started coding. It took them a few minutes to learn how to download the code to their laptops, make changes and then push it up to Bluemix.
After a weekend of hacking the kids needed less and less mentoring and were off exploring new possibilities. And when one of the kids discovered something that worked they immediately showed it to the others, often sharing code snippets on Google Plus, which they use at school, before we even knew what they were doing.
They were a lot more social and cooperative than grownups.
It is clear that the Node-RED environment is perfect for teaching kids programming. One thing that would help is to create Node-RED nodes that let kids connect to their own favorite apps, foremost Instagram.
Surprisingly, they took to Watson like a duck takes to water, especially Personality Insights and Questions and Answers.
Kids and the Internet of Things on Bluemix, a weekend of constant surprises.
Modified by felixf
One of the most asked questions during my Bluemix workshops is how to integrate on-premises applications with the Bluemix cloud. I had a quick look at the Bluemix Cloud Integration service a few months back. The way it worked at that time was to expose on-premises applications/services via RESTful APIs. While this solution works well for many in-house applications, it may not be the best way to work with databases such as DB2 or Oracle, as one would have to expose each table and use APIs to interact with them (i.e., with CRUD calls). A detailed walkthrough of this approach can be found here.
However, what I want is to be able to connect to my database in the normal way, so that no changes to my application and no new coding are required. Well, Bluemix is evolving constantly, and two days ago I found a blog on developerWorks India where Rajesh talked about how to use the Bluemix Cloud Integration service to connect to an on-premises database without using APIs. This really caught my eye, so I followed his instructions, and within 5 minutes I was able to run a Bluemix application with an on-premises DB2 database.
The trick is to use the newly introduced Basic secure connector instead of the Standard secure connector. See the following chart for each connector's capabilities.
Please note that you can have only one basic connector, and it requires a supported Linux operating system. However, you can create as many endpoints as you like, and they can run on Windows or Linux as long as they are on the same network as the basic connector. You can think of the basic connector as a gateway between your endpoints and the Bluemix cloud. The best part of the deal is that your first 5 endpoints are free. In other words, you can have up to 5 on-premises databases or applications connected to Bluemix free of charge. That's really awesome. So why don't you go ahead and try it out by signing up for your free 30-day trial at http://ibm.biz/blog2BM
Modified by deleeuw
As a JEE developer at heart, I really like the way I can use Vaadin on Bluemix to create rich, modern user interfaces, using the Java technology that is familiar to me. The Bluemix Vaadin boilerplate deploys an application running on the Liberty for Java runtime environment. I needed a way to secure the application, and decided to look at the latest features of the Bluemix Single Sign On (SSO) service. The Bluemix SSO service documents how to secure both Node.js and Java web applications. However, the Vaadin sample code does not exactly match the examples – this blog will help you bridge the gaps.
Initially, you’ll want to deploy the Vaadin boilerplate which provides a Liberty for Java runtime environment, an SQL Database, and a web application which renders a simple welcome page.
Unlike other boilerplates, you’re directed to download the Vaadin sample application, optionally make changes and recompile, then push the resulting web application war file to Bluemix. The changes required to utilize the Bluemix SSO service from the Vaadin sample are:
The Vaadin sample code does not include a web.xml; instead, it uses Java annotations to make these definitions. The Bluemix SSO service secures an app on the Liberty for Java runtime by way of standard JEE web.xml deployment descriptor elements that define security constraints. If the web application contains only a servlet, the required security constraints can also be specified using Java annotations. However, if the web application does not use a servlet, then a web.xml file is still required; see this link. In this case, the simplest approach is to add a new web.xml, as that aligns with the Bluemix SSO service documentation. The new web.xml file needs to be added to the Vaadin source code at \vaadin-jpa-app\src\main\webapp\WEB-INF\web.xml. For example, it could add a security constraint like this:
<?xml version="1.0" encoding="UTF-8"?>
<web-app id="WebApp_ID" version="3.0" xmlns="http://java.sun.com/xml/ns/javaee"
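The listing above is cut off in this post; a minimal complete version might look like the following sketch (the /* URL pattern and the "user" role name are assumptions, chosen to be consistent with the SSO service documentation):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<web-app id="WebApp_ID" version="3.0" xmlns="http://java.sun.com/xml/ns/javaee">
    <!-- Constrain the whole Vaadin application to authenticated users in the "user" role -->
    <security-constraint>
        <web-resource-collection>
            <web-resource-name>Vaadin application</web-resource-name>
            <url-pattern>/*</url-pattern>
        </web-resource-collection>
        <auth-constraint>
            <role-name>user</role-name>
        </auth-constraint>
    </security-constraint>
    <!-- Declare the role referenced by the constraint above -->
    <security-role>
        <role-name>user</role-name>
    </security-role>
</web-app>
```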
Note that the URL pattern for the Vaadin application has been added to a <security-constraint> element, constraining access to users in the “user” role. After adding the web.xml file, the source must be recompiled and packaged into a war file using Maven; for more information, see the documentation for the Vaadin sample code DevOps project.
Package a Liberty server runtime, to include a modified server.xml
The Liberty for Java runtime needs to be configured, using its server.xml file, to map the role(s) used to secure the application (in the web.xml), to users in a user directory (in this case, using the Bluemix SSO service). Note the server.xml must also enable the Liberty for Java features required by the Vaadin application:
<?xml version="1.0" encoding="UTF-8"?>
<server description="new server">
<!-- Enable features -->
<!-- To access this server from a remote client add a host attribute to the following element, e.g. host="*" -->
<webApplication id="vaadin-jpa-application" location="vaadin-jpa-application.war" name="vaadin-jpa-application">
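The role-to-user mapping itself can be expressed on the webApplication element. Here is a sketch, assuming the “user” role from the web.xml should be granted to every user the SSO service authenticates:

```xml
<webApplication id="vaadin-jpa-application" location="vaadin-jpa-application.war"
                name="vaadin-jpa-application">
    <application-bnd>
        <!-- Any user authenticated by the bound SSO service is in the "user" role -->
        <security-role name="user">
            <special-subject type="ALL_AUTHENTICATED_USERS"/>
        </security-role>
    </application-bnd>
</webApplication>
```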
To update the configuration of the Liberty for Java runtime on Bluemix, it is necessary to package a local Liberty runtime environment to include the modified server.xml, and application war file (which should be copied to the “apps” folder of the local Liberty runtime). The resulting package can then be pushed to Bluemix.
The server is packaged into a zip using the "server package" command. The resulting Liberty server package is pushed to Bluemix using the cf tool. It makes sense to specify a previously deployed Vaadin boilerplate as the application name, as this will ensure an SQL Database service is already bound to the Liberty for Java runtime on Bluemix.
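A sketch of these two steps from the command line (the server name defaultServer and the application name are examples; use your own local server name and the name of your deployed Vaadin boilerplate):

```shell
# Package the local Liberty server, including server.xml and the apps folder, into a zip
server package defaultServer --include=usr

# Push the packaged server to Bluemix under the existing boilerplate app name
cf push vaadin-jpa-application -p defaultServer.zip
```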
Note that after deploying a packaged Liberty server, the URL to access the application needs a context-route appended to the route defined in the Bluemix dashboard. The resulting URL will render the sample Vaadin app, but it is not yet secured.
Add the Bluemix SSO service to the application
After the service has been created, it requires some simple configuration to give it a name, and create an identity source. I used the simple cloud directory, and configured some users.
Once the configuration is complete, the SSO service can be bound to the Liberty for Java runtime. Before testing, it's wise to clear any browser cookies. The app should now be secured:
The Sign In dialogue can be changed to match the presentation of the application – see the Single Sign On service documentation.
By default, the user will be prompted to consent to the app accessing their details in the cloud directory identity source. Alternatively, this consent can be granted by default in the cloud directory settings.
That’s it. Enjoy working with your secured Vaadin Bluemix app.
Modified by RomeoKienzler
In this blog I'll describe how to use IBM Bluemix, Watson, Node-RED, Liberty and AngularJS to build an SPA (Single Page Application) that uses Watson Machine Translation and Sentiment Analytics to process a user query. The final result looks like this, although the design can be improved :)
First of all, we have to build a REST service using a Node-RED workflow which takes a message, translates it using Watson (into Spanish in our case) and also calls the Watson Sentiment Analytics service. Since security constraints restrict an AJAX application from calling foreign REST services, we will implement a little proxy in Java EE and deploy it to WebSphere Liberty. Finally, we build a small AngularJS application as the frontend:
1. NodeRED Workflow (REST Service)
The job itself is completely done by Node-RED and the services, so you will notice how easy this is:
a) Using this link, follow steps 1-9 in order to create a Node-RED workflow editor, but instead of adding "IBM Analytics for Hadoop" at step 6, add "IBM Watson Machine Translation"
We are now creating the following Workflow
b) Choose the HTTP node from the input nodes and provide the following parameters
(Now you have already created a REST service listening on /olli :)
c) In the palette under "function" choose "function", drag it to the canvas and connect it with the http node
d) Double-Click on the new node and edit as follows:
e) Under "analytics", select the "sentiment" node, drag it to the canvas and connect it with the "function" node
f) Under "IBM Watson", select the "Machine Translation" node, drag it to the canvas, connect it with the "sentiment" node, double-click, select "from English to Spanish" and click OK
g) Add another "function" node, connect it to the "Watson" node, double-click, edit as follows, then click OK (this just extracts the output from the "Machine Translation" and "Sentiment" nodes and transforms it into an easily consumable form):
h) Add a final HTTP response node and connect it to the function node
i) Hit "deploy"
Congratulations, you have already finished your REST service. It is up and running and can be used by everybody.
You can test it this way:
(Of course, you have to replace sampleName.mybluemix.net with the name you've provided.)
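For example, with curl (the /olli path comes from the HTTP In node above; the "text" query-parameter name depends on what your first function node reads, so treat it as an assumption):

```shell
# Invoke the Node-RED REST service created above; "text" is an assumed parameter name
curl "http://sampleName.mybluemix.net/olli?text=I%20love%20this%20cloud"
```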
Unfortunately, due to browser restrictions we are not allowed to access this service directly, so we have to create a little REST proxy at the URL serving our AngularJS application. I've already done this for you, so just import the following application into your Eclipse:
The only thing you have to do is update the REST_ENDPOINT in RestService.java to your Node-RED REST endpoint, e.g. http://sampleName.mybluemix.net
Then just deploy it to Bluemix...
and you should be able to see the following screen, DONE :)
Modified by DavidCarew
I use DB2 in different places - locally, in Bluemix as the SQL Database service, and in various images in SoftLayer - so a DB2 client with the DB2 Command Line Processor (CLP) is always something that I need to have handy. Since I juggle my time between Windows 8.1, OS X Yosemite, Red Hat/CentOS and Ubuntu, I thought it would be useful to have a portable DB2 client that could run in all of these environments, so I wouldn't have to deal with different native installations or live without the DB2 client on the platforms it doesn't support.
After spending some time getting acquainted with Docker recently I decided that a portable DB2 Client Docker image was just what the doctor ordered and decided to learn how to build one. I'm sharing the results of that effort here. In this first pass I'm assuming you'll be running the image locally - in the next iteration I'll add SSH so the image can be deployed in a remote Bluemix Docker Container and accessed via SSH.
According to the DB2 10.5 Client prerequisites, several Linux distros are supported. I chose Ubuntu 14.04 (Trusty Tahr) because it's my favorite flavor of Linux and is probably the most widely used distro among Docker users.
Before we get started, here's what you'll need to follow along:
Prerequisite #1 - A functional Docker environment
See https://docs.docker.com/installation and choose the instructions for your platform. For Windows and Mac you are told to install boot2docker which runs a very small VirtualBox Linux VM. Alternatively you can run your own Linux Virtual Machine on those platforms and then follow the instructions for the appropriate Linux distro. I prefer the latter approach myself as it's very useful to have a fully functional Linux environment that matches the distro of the image you're trying to build for testing things out etc.
In this case I used Docker on Ubuntu 14.04 installed on a separate partition on my laptop to build this Ubuntu 14.04 based DB2 client image.
Prerequisite #2 - The installation file for the DB2 10.5 client. You can get it here: http://www.ibm.com/support/docview.wss?uid=swg21385217. You need the IBM Data Server Runtime Client (which includes the DB2 Command Line Processor tools) for 64-bit Intel-based Linux. The name of the downloaded file should be ibm_data_server_runtime_client_linuxx64_v10.5.tar.gz. No need to extract it; that will be done automatically during the build process.
Prerequisite #3 - The Dockerfile I used and the response file (db2rtcl_nr.rsp) I used to allow DB2 to be installed silently. They are both available in this zip file. Note that the Dockerfile comments describe each stage of the build process.
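The zip file contains the actual Dockerfile; as a rough, hypothetical sketch of the stages it walks through (the package list, the extracted rtcl directory name, and the instance user are all assumptions, not the real file):

```dockerfile
# Sketch only -- see the zip file for the actual Dockerfile used in this post
FROM ubuntu:14.04

# Libraries a DB2 installer typically expects on a minimal Ubuntu image (assumption)
RUN apt-get update && apt-get install -y libaio1 libpam0g numactl

# ADD auto-extracts the client install image; copy in the silent-install response file
ADD ibm_data_server_runtime_client_linuxx64_v10.5.tar.gz /tmp/
COPY db2rtcl_nr.rsp /tmp/

# Create a user for the client instance, then run the silent install
RUN useradd -ms /bin/bash db2user && \
    /tmp/rtcl/db2setup -r /tmp/db2rtcl_nr.rsp

# Drop into a login shell as the instance user so the DB2 environment is sourced
USER db2user
CMD ["bash", "-l"]
```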
Here are the steps to follow:
Create a new folder in your Docker environment for this project. Copy the DB2 install file (Prerequisite #2), the Dockerfile and the response file (Prerequisite #3) to this new folder.
Run the following command to build the DB2 Client image (note the period . at the end signifying the location of the Dockerfile).
sudo docker build -t db2client .
Verify that you get a successful completion as shown in the following screenshot
Run the new image with the following command
sudo docker run -i -t db2client
You should be at the command prompt in your DB2 client container and ready to run commands!
Connect to a DB2 server and run commands through the CLP. I tested it on a Bluemix instance of the SQL Database service. Here's an example of the commands you'll need, for those of you who haven't connected to a remote DB2 database from the CLP before. For the following DB2 server credentials:
The command sequence would be as follows (note that items in bold italics are taken from the credentials above, and the node name has to be the same in the first and second commands).
db2 catalog tcpip node bmdb1 remote 10.1.10.5 server 50000
db2 catalog database SQLDB at node bmdb1
db2 connect to SQLDB user dockerman using foo
At this point you can run all your typical DB2 commands (db2 "select * from mytable1", db2 list tables, etc.)
That's it for now - you can build and run this container wherever Docker runs. Stay tuned for the next installment where SSH support is added as well as instructions to deploy in the Bluemix Container service.
Modified by rewillen
When you see the updated DevOps Services interface, you can’t help but notice the similarity in look and feel to Bluemix. By default, the color schemes are now the same, though you can change the Editor style and theme.
Though I couldn’t really care less about the color change, I really like the integration improvements that rolled out over the same time period in the first half of February. My favorite addition is the Live Edit capability for Node.js. Live edit and quick restart allow a much quicker cycle for seeing your changes. Under the covers, DevOps Services is restarting the Node.js process rather than doing a full stop and restart of the underlying droplet. The performance improvement is marked. Just be aware that once you stop the application, you need to do a full redeploy to resync your changes.
The other improvements are more minor, but simplify navigation, and certainly make it easier to explain to first time users. The DevOps team has expanded what is referred to as the “runbar” by adding icons that link to the URL of the running application, the Bluemix application console, and the logs (see the screen shot).
There is no longer any need to go to the “Manual Deployment page”. Also, sign-on is now transparent across Bluemix and DevOps Services – just sign in to DevOps Services and you’re good to go. There is also a new debug capability which I hope to try out soon.
There is still the capability to override the manifest file by editing the Launch configuration parameters. The panel no longer appears during deployment; to edit the launch configuration, use the little upside-down triangle next to the application status in the runbar. This leads to the options to edit, add, and delete launch configurations. Perfectly easy to use, but I admit I didn’t notice it initially. So my first deploy went off using the manifest file without updates. No real harm, but now I try to remember to update the configuration before hitting deploy.
Check out these new capabilities with a free 30-day Bluemix trial. I updated all my Node.js Step by Step sushi cards for the changes. The updated cards are on the CoderDojo Kata and also in the DevOps Services projects worksheets directories (https://ibm.biz/nodejs1, https://ibm.biz/nodejs2, and https://ibm.biz/nodejs3). My articles and the Kids Code Kit should be updated shortly. Please use the cards from the CoderDojo site until then. I hope I caught all the changes, but if you find any problems, just let me know and I’ll get them fixed up.
Modified by RomeoKienzler
Hi, in this blog article I just want to show you how everybody can do a very simple hot-fix deployment with zero downtime on Cloud Foundry / Bluemix in 3 simple steps.
We assume that you have installed an application called sample1 using the route sample1.mybluemix.net, and now you have a new version of this application that you want to deploy without any downtime. Here are the three simple steps:
1. Deploy the new version using a 2nd route:
cf push sample2 -p ./myapp.war
2. remap the routes in one go:
cf map-route sample2 mybluemix.net -n sample1 && cf unmap-route sample1 mybluemix.net -n sample1
3. cleanup (delete the old application and unused routes):
cf delete sample1 && cf delete-orphaned-routes
In order to check if you REALLY have ZERO DOWNTIME you can use the following script
while true; do
  if curl -s 'sample1.mybluemix.net' > /dev/null 2>&1; then echo OK; else echo ERROR; fi
done
Please let me know in the comments section what you think. You can register for Bluemix here and try it for FREE.
Modified by RomeoKienzler
Hi, I just want to explain to you how a Twitter analytics workflow can be built using Hadoop and Bluemix
in less than 10 minutes
without writing a single line of code
completely for free
This application illustrates the use of Node-RED and the Hadoop service within Bluemix to create a little chart showing the top-N companies related to a specific search term in the Twittersphere. So basically, we are doing the following:
Create a little ETL (Extract, Transform, Load) workflow using Node-RED in order to retrieve tweets directly from twitter and store them in HDFS
Once the tweets are stored in HDFS, we use BigInsights, IBM's commercially supported Hadoop offering, which runs in Bluemix as well
We use BigSheets (part of BigInsights, running on top of Hadoop), a spreadsheet-like tool capable of analyzing terabytes of data, in order to
Extract company names from tweet texts using a Watson service
Draw a bubble chart out of the extracted company names
1. Logon to Bluemix
2. Click on Catalog
3. Choose "Node-RED Starter Community"
4. Enter a "sampleName" as Name and Host and click on "CREATE"
(Please be aware that since you are sharing a top-level domain, you have to replace sampleName with something you choose.)
5. Click on "Add a Service"
6. Choose "IBM Analytics for Hadoop" (you can type hadoop into the search field to accelerate things)
7. Click on "CREATE"
8. Open a new browser window and go to "sampleName.mybluemix.net"; please ensure that you replace "sampleName" with the name you've chosen before
9. Click on the red "Go to your Node-RED flow editor" button
10. Choose the first twitter node in the social category and drag it to the canvas
11. Double-click on the new twitter node
12. In the Drop-Down "Log-in as" choose "Add new twitter credentials" and then hit the pencil icon and authenticate with Twitter
13. Enter "cloud" in the "for" text field (you can choose any key word(s) you like)
14. Enter "cloud tweets" as "name" and hit "OK"
15. From the palette on the left-hand side, choose the second ibm hdfs node (write) in the storage category and drag it to the canvas
16. Using your mouse connect the twitter node with the hdfs node
17. Double-click on the hdfs node
18. Enter "sampleTwitterData/stream" as file name and hit "OK"
19. Hit the big red "Deploy" button on the top right corner
20. Return to Bluemix, click on "Dashboard" and in the "Services" section click on your "IBM Analytics for Hadoop" icon
21. Then click on "Launch", this will open the BigInsights console for you
22. Click on "file" and browse through the file explorer until you can choose the file you've just created: /user/biblumix/sampleTwitterData/stream
23. Try to find the small "Sheet" radio button above the file and click on it
24. Click on "Save as Master Workbook"
25. Enter "tweets" as the name
26. Click on "Save"
27. Note that you've been forwarded to the "BigSheets" tab, click on "Build new workbook"
28. Click on "Add sheets" -> "Function" -> "Categories" -> "Entities" -> "Organization" in order to use a Watson/NLP component for extracting company names out of the tweets
29. Choose "Header" as "Fill in parameter" and accept by clicking on the green hook
30. BigSheets only works on data samples during the design phase, so at this point you see only a sample of the different companies mentioned in conjunction with the term "cloud" in your tweets
31. Click on Save & Exit, then on Save, then on RUN. This will start a MapReduce job that applies the analytics to all the collected tweets in HDFS
(wait until the progress bar on the top right side shows 100%)
32. Click on "Add chart" -> "cloud" -> "Bubble Cloud" and hit the green hook
33. Again, the chart has been drawn based on the sample data only, so please hit "Run" in order to calculate aggregates over all data in HDFS, and wait until the progress bar in the top right corner shows 100%
Et voilà, this should be the final result, showing the count distribution of the tweets from the last 10 minutes that mention "cloud" and an organization
And this is the situation after IBM Interconnect Conference 2015 in Las Vegas :)
Modified by rewillen
I just launched my first “production” application for my team to use. This may not seem like much to some of you, but with my limited programming skills, a live application, with data input and retrieval, browser and mobile UIs, email messages, and more, is definitely blog worthy.
I created a scaled-down copy of my application at https://trackerexample.mybluemix.net for you to try out, and you can download the source from my DevOps Services project. The application keeps track of articles published by my team. Instead of manually maintaining a spreadsheet, the application provides a self-service interface. My application lets authors enter their article information, provides a list of the articles, and calculates how many articles are written each month. Rapid Apps automatically tailors the UI for different devices and includes a Preview Application feature to simulate your application on different devices. Here’s my application on the iPhone simulator:
I used the Data, Screens, and Rules components of RapidApps to develop the application.
1. Data. I created several tables, including the Articles and Publishers tables from scratch – which is just point and click naming the attributes and selecting data types. To create the Author List table, I used the Import from Spreadsheet capability, and imported a list of people in my organization. Then, I linked the Authors attribute in the Articles table to the Author List. When you use the application, you’ll see that the UI pulls up a selection list of authors. I also used a link for Publisher.
2. Screens – The List and Form screens are the cornerstone of my UI. I have these Screens for each table. By default, Rapid Apps will create two screens for your table (data set) - a List screen that displays all the records of the data set, and a Form screen which can be used to add and modify an individual record. The List screen also links to the Form to add records. Some simple editing of the properties on the buttons, and you have a UI with all the CRUD functions going. Rapid Apps automatically provides filtering, sorting, and basic navigation on the screens. I also use a Navigation screen of Buttons as my home page.
When you are happy with how your application is working in the Preview Application, you just deploy and your application is available on Bluemix. There are also Roles you can set to limit what different folks can do. Give it a try - there is a forum for Q&A so don’t hesitate. I’m really looking forward to watching these capabilities evolve.
I am really excited to be a Technical Mentor at the AT&T Dev Summit, which is about to start at 8 am tomorrow morning at the Palms Casino Resort in Las Vegas. This is the fourth AT&T hackathon I am mentoring at, starting with the event in Seattle focused on Women in Technology. We have an impressive list of sponsors, including IBM showcasing Bluemix. The primary theme of this hackathon is Automation, be it Home, Connected Car, or mBed devices using the Internet of Things.
In our TechTalk on December 18th, we focused on showcasing home automation using the Internet of Things Foundation, Node-RED and the AT&T M2X APIs. We are offering a $5000 kicker prize for the best use of the runtimes and services offered in Bluemix. I am looking forward to working with the participants to come up with cool apps using Bluemix!
Take a look at some of the sponsors at this Summit; more later...
Modified by Lennart
Bluemix has always had a "Bring your own Buildpack" option. The buildpacks you see on the Bluemix homepage have never been the sum total of the computer languages you can use in Bluemix.
But now Bluemix adds support for the Go, PHP and Python programming languages out of the box, right on the Bluemix homepage. Just a mouse click away.
All three languages are extremely popular in the programming and startup community, and the out of the box support in Bluemix will help startups quickly get apps up and running in Bluemix with a minimum of effort.
More information on the Go language can be found on the golang website.
More on Python can be found on the Python website.
And for PHP, see this website.
Time to get coding.
Modified by rewillen
All I can say is Wow! I was blown away with Demo Day of the IoT Design Challenge in Research Triangle Park. We had seven teams demonstrate a wide range of innovative projects, all integrating various “things”, and using Bluemix in the backend.
The Grand Prize went to Elliot Poger, who applied technology to a problem all of us with aging parents see regularly. Elliot’s project, PharmAssist, instrumented a 7 day pill box, tying it together with an Arduino board and Bluemix to generate beeps and then text messages if the box isn’t opened according to schedule. My mother-in-law was visiting, and I would have bought one on the spot if he’d been selling them. Congratulations Elliot!
Several of the teams used beacons, loaned to them by our joint sponsor, BKON. This event was my first real exposure to beacons, and I have to admit, I still have the creeps. I grew up joking “Big Brother is watching”, but honestly, we had no idea what was coming. The People’s Choice winning application, from an NC State team (Vrushti Shah, Dharin Upadhyay, Siddhant Oza, Chetan Pawar and IBM mentor, Mihir Shah), used beacons to help navigate their library. The team demonstrated how upon entering the library, the beacons would interface with their app on their phone to tell them the closest available study rooms, provide maps, and more. I think all of us were easily imagining how this type of technology could be applied to help navigate any large venue.
Another winner (and joint sponsor) was the RTP Foundation. Demo day was hosted at The Frontier, a new project of the RTP Foundation, officially launching January 15th. Though the People’s Choice Application vote was really close across all the great applications, The Frontier was a landslide winner:
What an amazing experience bringing together RTP Foundation, our great local universities, IBM, and start-up companies like BKON. To find out more about all the applications in the challenge, and get their code from GitHub, visit the IoT Design Challenge website. You can start building your own applications using the 30 day free Bluemix trial. To quickly get started on Node-RED, check out my “thing-less” (well, you need a phone) getting started video and step-by-step directions.
Modified by MarkWeber
Bluemix applications using the Cloudant NoSQL "DB-as-a-Service" have been available for some time now, using Cloudant's distributed NoSQL "Data Layer" for transactional JSON "document" database management that distributes effectively across data centers and devices for scaling and high availability. Important for industry-specific applications such as Transportation, Travel and Shipping, a key capability area in Cloudant is its support for developing geospatial and GIS applications.
You can explore some of these features in a Cloudant-based geospatial web app that recently became available in Bluemix. This "Where" application uses AngularJS, Bootstrap and Leaflet.js for the UI, and implements geospatial capabilities using Cloudant as well as geolocation and travel-boundary services from Pitney Bowes in Bluemix. You can run the web app here: https://where.mybluemix.net/ You can access the code repository here:
Some key design points for this and other geospatial applications can be illustrated by drawing on examples and use cases from the Shipping, Logistics, and Transportation industries. Geospatial applications for the transportation industry must be able to handle the large volumes of data generated by millions of travelers each month, and similar data loads from shipping and logistics companies. Cloudant NoSQL leverages the cloud to create a distributed data layer that enables GIS and geospatial applications with high scalability and availability. Applications can ingest large streams of data from a wide range of sources like devices, sensors, and satellites, and can store, process, and syndicate that data across web apps. Application developers can focus on analyzing travel, shipment and customer data to improve operational efficiency and customer service instead of managing and replicating the data itself.
A large variety of data types across structured and unstructured data must be processed in an integrated manner - data such as travel/shipping reservations, bookings, customer sentiment, and equipment maintenance. In rail freight applications alone, data is generated across a large number of sources for GPS location tracking, fuel consumption, track conditions, heat sensors, shipment tracking and emissions monitoring. Add to this one rail shipping example all the other types of applications for use cases across customer rail, air freight, automotive, ocean freight, air travel, trucking, logistics and retail.
Cloudant stores data in self-describing JSON document formats that bridge the gap across applications and these many different data types through ease of data access across its distributed data layer, its RESTful API, and flexible schema designs that can avoid the need for expensive or complex ETL processes. Cloudant supports these design patterns by acting as a "data broker" responsible for publishing data points of interest to subscribers, using technologies for message queueing, event sourcing, and data sync. This data-broker, event-sourcing, publish-subscribe pattern reduces data movement, which can be critical for services and applications in transportation and shipping where large, highly diverse data is generated from devices, sensors, and IoT endpoints.
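To make the data-broker idea concrete, here is a minimal, hypothetical in-memory sketch of the event-sourcing publish-subscribe pattern described above. A real deployment would sit this behind Cloudant's replication or a message queue rather than a plain JavaScript class; the class and topic names are illustrative only.

```javascript
// Minimal sketch of a data broker: subscribers register interest in a
// topic, and every published document is appended to an event log
// (event sourcing) before being fanned out to subscribers.
class DataBroker {
  constructor() {
    this.subscribers = new Map(); // topic -> [handler, ...]
    this.eventLog = [];           // append-only record of all events
  }
  subscribe(topic, handler) {
    if (!this.subscribers.has(topic)) this.subscribers.set(topic, []);
    this.subscribers.get(topic).push(handler);
  }
  publish(topic, doc) {
    this.eventLog.push({ topic, doc, ts: Date.now() }); // event sourcing
    (this.subscribers.get(topic) || []).forEach(h => h(doc));
  }
}

// Example: a shipment-tracking subscriber receiving GPS events.
const broker = new DataBroker();
broker.subscribe('gps', doc =>
  console.log('shipment', doc.shipmentId, 'at', doc.lat, doc.lon));
broker.publish('gps', { shipmentId: 'RAIL-042', lat: 39.74, lon: -104.99 });
```

The append-only event log is what lets late-arriving subscribers replay history instead of forcing every producer to push data to every consumer.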
Cloudant also provides key geospatial features typically available only in specialized GIS systems, including:
GeoJSON for high precision location-based services
Geospatial indexing and query support
Nearest neighbor and predictive path analyses
Support for complex geometric data types
Support for the Open Geospatial Consortium (OGC) SFSQL and SQL/MM specifications
Integration with coordinate reference systems (CRS)
Full integration with ESRI via Koop
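To illustrate the first two items, here is a sketch of what a GeoJSON document and a radius query against a Cloudant Geo index might look like. The account, database, design document, and index names are placeholders, and the specific ski-area document is illustrative, not taken from the article's sample.

```javascript
// A GeoJSON Feature as it could be stored in Cloudant. Note that
// GeoJSON coordinates are ordered [longitude, latitude].
const skiArea = {
  _id: 'ski:arapahoe-basin',   // hypothetical document id
  type: 'Feature',
  geometry: { type: 'Point', coordinates: [-105.8717, 39.6425] },
  properties: { name: 'Arapahoe Basin', state: 'CO' }
};

// Cloudant Geo exposes indexes under _design/<ddoc>/_geo/<index>;
// a radius query takes lat, lon, and a radius in meters.
function geoRadiusUrl(account, db, ddoc, index, lat, lon, radiusMeters) {
  return `https://${account}.cloudant.com/${db}/_design/${ddoc}/_geo/${index}` +
         `?lat=${lat}&lon=${lon}&radius=${radiusMeters}&format=geojson`;
}

// "Ski areas within 25 km of this point" (placeholder names):
console.log(geoRadiusUrl('myaccount', 'skiareas', 'geodd', 'geoidx',
                         39.6425, -105.8717, 25000));
```

Because the result comes back as GeoJSON, it can be handed directly to a mapping library like Leaflet.js on the client.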
You can learn more about developing geospatial applications with these capabilities in this article, which shows how to build a geospatial application that displays Colorado ski areas.
Want to see the capabilities demonstrated right away? The app is live, and it's winter in North America, so check out the ski areas.
Modified by rewillen
We were out of cat food, and my husband (Doug) was heading to the store anyway, so he volunteered to get some. After I gave him a manufacturer’s coupon, a store coupon, and a store discount card, he was sorry he’d volunteered. Doug just wants the price to be the price.
He is accepting reality, and the other night he turned to our son and said he wants him to write an app that will tell him the lowest price for something he wants to buy - one that takes into account all possible discounts, coupons, and taxes, and figures out shipping versus the cost of driving to the store. I couldn't help laughing, as my new article, written with Sandhya Kapoor, "Create a coupon-finding app by combining Yelp, Google Maps, Twitter, and Klout services," was being edited for publication.
Our article includes a sample Java application that pulls together information using APIs from Yelp, Google, Klout, and Twitter. The application collects the data, merges it, and then ranks the results using an aggregation sort. We store the data in MongoDB and use a JSP to display it. I was happy when my favorite local Italian restaurant came out first:
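The aggregation-sort idea can be sketched in a few lines: merge results from several services into one record per business, compute a combined score, and sort by it. This is a hypothetical JavaScript illustration, not the article's actual Java code; the field names and weights are invented for the example.

```javascript
// Merge per-service data and rank by a combined score. Ratings
// dominate; social signals (Twitter mentions, Klout) break ties.
function aggregateAndSort(yelpResults, twitterMentions, kloutScores) {
  return yelpResults
    .map(biz => {
      const mentions = twitterMentions[biz.name] || 0;
      const klout = kloutScores[biz.name] || 0;
      const score = biz.rating * 10 + mentions * 2 + klout; // illustrative weights
      return { ...biz, score };
    })
    .sort((a, b) => b.score - a.score); // highest combined score first
}

// Example: two businesses, one with extra social-media signal.
const ranked = aggregateAndSort(
  [{ name: 'Trattoria', rating: 4.5 }, { name: 'Pizza Hut', rating: 3.8 }],
  { 'Trattoria': 12 },   // Twitter mention counts
  { 'Trattoria': 40 }    // Klout scores
);
console.log(ranked[0].name);
```

The same shape works whichever services supply the inputs; only the scoring function needs tuning.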
We also include directions for using Eclipse to develop and deploy the application to Bluemix. This was my first experience using Eclipse with Bluemix. I still prefer the no-install approach of DevOps Services, but the Eclipse setup was easy enough, and deployment to Bluemix is nicely integrated.
The sample started at a weekend hackathon at the University of Missouri, Kansas City. Our thanks to Chetan Jaiswal, Parikshit Juluri, and Xuan Liu for getting it started. We hope this article proves helpful in showing how quickly you can build apps that integrate across many different services. It's not exactly the application Doug is asking for, but it's a great sample if you want to build one. My son is currently more interested in designing an automatic retrieval system for his Nerf Gun bullets, so you have a head start. You can get started with a free 30-day trial of IBM Bluemix.
There is an IBM Bluemix t-shirt with the slogan “Commit & Deploy & Scale & Repeat”. Whenever I see this shirt, I think to myself “huh?”. I really want to deploy my changes to see if they work before I commit anything. Maybe others are perfect coders who don’t need to test first, but more likely, the dichotomy stems from different usage patterns.
This distinction came home to me earlier this week. I’ve written my Node.js articles, and the new Kids Code! Activity Kit using the “Manual Deploy” in DevOps Services. The manual deploy picks up all files I've edited in the Web IDE and deploys my application. This way, I can see if my changes work before I commit them. Also, I’m lazy and it’s far faster to just hit Deploy than to go to the Git Repository, Commit my changes, and Push them to the repository.
Now, there are others firmly in the commit-and-push-your-changes-first camp, who just use the automated "Build & Deploy" capabilities. If you look at the screen capture below, you see a DEPLOY button in the left center and a BUILD & DEPLOY button on the top right. DEPLOY is used for "Manual Deploy", and it picks up all the changes you have made in your Web IDE project. BUILD & DEPLOY on the top right only sees changes that are committed and pushed. You can see why, in a lab setting, these two deploys could be a little confusing, especially when the directions follow the manual DEPLOY path and the instructors are from the automated camp that uses BUILD & DEPLOY on the top right.
I suspect most of the automated camp typically uses Eclipse or other local IDEs first, so when they push to DevOps Services and Bluemix their code is ready to go. I, on the other hand, use DevOps Services exclusively, so the Web IDE is in essence my local laptop. Also, Manual Deploy only does a deploy, whereas Build & Deploy includes the build steps, which are important for some other runtimes. I don't think either path is the right or wrong one; both have merit, and it's just important to understand the distinct behaviors. I love the suggestion here to configure the Web IDE deploy (DEPLOY) and the Auto-Deploy (BUILD & DEPLOY) to use different app names, so that you can use the Web IDE deploy as a personal test environment and the Auto-Deploy as a team integration environment.
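One way to realize that different-app-names suggestion is through a standard Cloud Foundry manifest.yml for the manual path, with the Auto-Deploy job configured under a separate name in its Build & Deploy settings. This is a hypothetical sketch; the app names, host, and memory value below are placeholders, not from any of my labs.

```yaml
# Hypothetical manifest.yml used by the Web IDE (manual) DEPLOY.
# The Auto-Deploy (BUILD & DEPLOY) job would be pointed at a
# different name, e.g. myapp-integration, in its configuration.
applications:
- name: myapp-dev-jane      # personal test environment
  host: myapp-dev-jane      # route for the personal copy
  memory: 256M
```

With that split, breaking your personal deploy never disturbs the team's integration app.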
Personally, I’m going to continue my lazy approach to deploy my changes before I commit anything. So, if you are using one of my labs, please be aware of the distinctions. If you are using my labs, and want to also learn how to use the Shared Git repository, here are step-by-step directions and a concept card.