Congratulations to the hardworking team for updating the Streams Redbook "IBM InfoSphere Streams: Assembling Continuous Insight in the Information Revolution" to Version 2. The team contributed vast knowledge, time, and expertise to this new version and will be happy to hand out copies to 250 lucky IOD attendees.
Come to the Conference bookstore on Wednesday, October 26, 11:30 am – 1:30 pm, to get your copy and to meet the author and project team. Use Smart Site to sign up for this session: Session #4205, IBM InfoSphere Streams and the Streams Processing Language.
Note: if you prefer a softcopy version, simply go to the IBM Redbooks site and download the book… always green and free!
Besides updating all material from Streams version 1.2 to version 2.0, this edition adds new chapters and appendices, links to a streamtool reference, and examples that you can download.
About the book:
In this IBM® Redbooks® publication, we discuss and describe the positioning, functions, capabilities, and advanced programming techniques for IBM InfoSphere™ Streams, a new paradigm and key component of the IBM Big Data platform. Data has traditionally been stored in files or databases and then analyzed by queries and applications. With stream computing, analysis is performed moment by moment, as the data is in motion. In fact, the data itself might never be stored (perhaps only the analytic results). The ability to analyze data in motion is called real-time analytic processing (RTAP).
IBM InfoSphere Streams takes a fundamentally different approach to Big Data analytics and differentiates itself with its distributed runtime platform, programming model, and tools for developing and debugging analytic applications that process a high volume and variety of data types. Using in-memory techniques and analyzing data record by record enables high velocity. Volume, variety, and velocity are the key attributes of Big Data. The data streams that IBM InfoSphere Streams can consume originate from sensors, cameras, news feeds, stock tickers, and a variety of other sources, including traditional databases. IBM InfoSphere Streams provides an execution platform and services for applications that ingest, filter, analyze, and correlate potentially massive volumes of continuous data streams.
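To make the record-by-record idea concrete, here is a minimal sketch of the stream-computing pattern described above: each record is analyzed the moment it arrives, and only the analytic result is kept, never the raw data. This is illustrative Python, not SPL, and the function and variable names are invented for this example.

```python
# Minimal sketch of stream computing: analyze each record as it arrives,
# keeping only the analytic result (here, a running mean) in memory.
# Illustrative Python only -- not SPL, and not the InfoSphere Streams API.

def running_stats(stream):
    """Yield (count, mean) after each record, record by record."""
    count, mean = 0, 0.0
    for value in stream:
        count += 1
        mean += (value - mean) / count  # incremental mean, O(1) memory
        yield count, mean

# Simulated sensor feed: each reading is processed immediately,
# and the raw readings are never accumulated anywhere.
readings = iter([21.5, 22.0, 21.8, 23.1])
for count, mean in running_stats(readings):
    print(f"after {count} readings, mean = {mean:.2f}")
```

The point of the sketch is that the analysis state stays constant in size no matter how many records flow through, which is what makes high-velocity, in-motion analytics feasible.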
This book is intended for professionals who need to understand how to process high volumes of streaming data, or who need information about how to implement systems that satisfy those requirements.
Many thanks to Chuck Ballard, Sandy Tucker, Elizabeth Peterson, Vitali Zoubov, Brian Williams, Warren Pettit, Diego Folatelli, and the team of reviewers who contributed to this new edition.
The Conference bookstore will be a busy place this year. Here are the blog entries that summarize all the events and provide session numbers:
Here are the other signings and giveaways that are taking place at the event:
Coming soon: A podcast series where I’ll interview as many of these authors as I can!