
Big data, cloud and Smarter Cities

Smarter Cities and big data

Unlike in a traditional environment, information in the IBM Smarter Cities environment can be generated by anybody and anything, anytime, anywhere. The purpose of Smarter Cities is to make life more convenient for citizens by generating many kinds of information from everyday behaviors and situations: how many people are waiting at a bus stop, which routes they are waiting for, the road traffic conditions that affect bus arrival times, even the music they listen to.
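To make that variety concrete, one way such heterogeneous observations could be represented is a simple event record. The sketch below is purely illustrative; the class and field names are assumptions for this article, not part of any IBM product or schema.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any, Dict

@dataclass
class CityEvent:
    """One observation from the city: a bus-stop count, a traffic reading, and so on.
    All field names here are illustrative assumptions, not a real schema."""
    source: str                      # e.g. "bus_stop_42" or "road_segment_9"
    kind: str                        # e.g. "waiting_count", "traffic_speed_kmh"
    value: Any                       # the measurement itself
    observed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    extra: Dict[str, Any] = field(default_factory=dict)  # free-form context, such as the bus route

# Two of the examples from the paragraph above, expressed as events
events = [
    CityEvent(source="bus_stop_42", kind="waiting_count", value=12, extra={"route": "301"}),
    CityEvent(source="road_segment_9", kind="traffic_speed_kmh", value=23.5),
]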

To manage the massive volumes of varied data generated every day, big data technology is essential to the Smarter Cities environment. Smart sensors and equipment, which capture huge volumes of data, are the sensory organs of Smarter Cities; big data technology is its brain. Just as sensory organs and the brain are closely linked and must work coherently, Smarter Cities and big data belong together, each meeting the needs of the other.

Big data is needed to analyze massive amounts of information that could not be understood or analyzed before. As technologies once thought impossible, such as voice recognition and automatic translation, become reality thanks to big data, IT industries are again dreaming of developing thinking machines. In fact, some are already in use. One of the leading examples is the intelligent IBM Watson supercomputer, which can understand natural-language queries. Last year, on the Jeopardy! game show, IBM Watson beat its human competitors and won the championship. That result points to big changes ahead, and big data technology has played a leading role in all of these innovations and successes.

Using cloud-based big data

Big data should be provided as a service by combining it with cloud computing, not as a separate product solution.

IBM provides big data analysis solutions and storage to customers who face serious budgetary constraints on infrastructure investment. Instead of deploying their own servers and self-developed solutions, customers can use a cloud-based big data service to store and analyze their data. It is cheaper than individual commercial solutions and more stable than self-managed open-source Hadoop, delivering beneficial economics and high efficiency.
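For context, the kind of batch analysis such a service runs on a customer's behalf can be as simple as counting events per sensor. The sketch below follows the open-source Hadoop Streaming convention (a mapper and a reducer that read stdin and write tab-separated stdout); the input layout and the map/reduce selection by command-line argument are assumptions for illustration, not the service's interface.

#!/usr/bin/env python3
"""Minimal Hadoop Streaming-style job: count readings per sensor.
The stage is chosen with sys.argv[1]; input format is an assumed
<sensor_id>\t<reading> layout."""
import sys

def mapper():
    # Emit one "<sensor_id>\t1" pair per input line.
    for line in sys.stdin:
        parts = line.rstrip("\n").split("\t")
        if parts and parts[0]:
            print(f"{parts[0]}\t1")

def reducer():
    # Hadoop Streaming delivers mapper output sorted by key,
    # so all counts for one sensor arrive contiguously.
    current_key, count = None, 0
    for line in sys.stdin:
        key, _, value = line.rstrip("\n").partition("\t")
        if key != current_key:
            if current_key is not None:
                print(f"{current_key}\t{count}")
            current_key, count = key, 0
        count += int(value or 0)
    if current_key is not None:
        print(f"{current_key}\t{count}")

if __name__ == "__main__":
    mapper() if len(sys.argv) > 1 and sys.argv[1] == "map" else reducer()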

What is the main difference between a big data service on a siloed system environment and a cloud-based big data service? It is where the data is collected. A big data service on a siloed system collects data on individual servers purchased by public safety services, hospitals, and schools. In contrast, a cloud-based big data service collects data in the service provider's systems, so cloud-based big data providers can maintain enormous data infrastructures.
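The difference in collection point can be sketched in a few lines. This is a toy comparison under stated assumptions: the local file path, the ingestion endpoint URL, and the payload layout are hypothetical, not a real IBM API.

"""Siloed model: the organization writes readings to hardware it bought itself.
Cloud model: the same reading is posted to the provider's shared infrastructure."""
import json
import urllib.request

reading = {"sensor": "bus_stop_42", "waiting_count": 12}

def collect_on_silo(path="/var/data/local_readings.jsonl"):
    # Data stays on the individual server owned by the hospital, school, or agency.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(reading) + "\n")

def collect_in_cloud(endpoint="https://bigdata.example.com/ingest"):  # hypothetical URL
    # Data lands in the service provider's systems instead.
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(reading).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status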

Cloud-based big data services can also be more cost-effective than the individual computing systems that each organization invests in on its own, because they are delivered by professional service providers who spread infrastructure costs across many customers.

When the operational excellence and cost-effectiveness of a big data system rise beyond what a standalone solution offers, the cloud-based big data service can become a single platform: a computing environment that understands all kinds of data and analyzes it in real time, on which customers can implement their own service systems. Such a platform can become an intelligent foundation for a large ecosystem of individual services, such as intelligent medical care, intelligent education, and so on.
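As a toy illustration of the real-time side of such a platform, the sketch below keeps a sliding-window average over incoming readings, for example people waiting at a stop. It is pure Python with assumed names; a real platform would use a dedicated streaming engine rather than this hand-rolled loop.

"""Sliding-window average over a stream of (timestamp, value) readings."""
from collections import deque
from typing import Deque, Tuple

class SlidingWindowAverage:
    def __init__(self, window_seconds: float = 300.0):
        self.window_seconds = window_seconds
        self._samples: Deque[Tuple[float, float]] = deque()  # (timestamp, value)
        self._total = 0.0

    def add(self, timestamp: float, value: float) -> float:
        """Record one reading and return the average over the last window_seconds."""
        self._samples.append((timestamp, value))
        self._total += value
        # Drop samples that have fallen out of the window.
        while self._samples and timestamp - self._samples[0][0] > self.window_seconds:
            _, old = self._samples.popleft()
            self._total -= old
        return self._total / len(self._samples)

# Example: readings of people waiting at a stop, arriving over time
win = SlidingWindowAverage(window_seconds=300)
for t, people in [(0, 4), (60, 7), (120, 9), (400, 3)]:
    print(win.add(t, people))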

By combining big data solutions with cloud computing, the resulting service becomes better suited to the Smarter Cities environment.
