Congratulations to the hardworking team for updating the Streams Redbook "IBM InfoSphere Streams: Assembling Continuous Insight in the Information Revolution" to Version 2. The team contributed vast knowledge, time, and expertise to this new version and will be happy to hand out copies to 250 lucky IOD attendees.
Come to the Conference bookstore on Wednesday, October 26 11:30 am – 1:30 pm to get your copy and to meet the author & project team. Use Smart Site to sign up for this session: Session #4205 IBM InfoSphere Streams and the Streams Processing Language.
Note, if you prefer a softcopy version, simply go to the IBM Redbooks site and download the book… always green & free!
Besides updating all materials from Streams version 1.2 to version 2.0, this edition adds new chapters and appendices, links to a streamtool reference, and examples that you can download.
About the book:
In this IBM® Redbooks® publication, we discuss and describe the positioning, functions, capabilities, and advanced programming techniques for IBM InfoSphere™ Streams, a new paradigm and key component of the IBM Big Data platform. Data has traditionally been stored in files or databases and then analyzed by queries and applications. With stream computing, analysis is performed moment by moment as the data is in motion. In fact, the data might never be stored (perhaps only the analytic results are). The ability to analyze data in motion is called real-time analytic processing (RTAP).
IBM InfoSphere Streams takes a fundamentally different approach to Big Data analytics and differentiates itself with its distributed runtime platform, programming model, and tools for developing and debugging analytic applications that handle a high volume and variety of data types. Using in-memory techniques and analyzing data record by record enables high velocity. Volume, variety, and velocity are the key attributes of Big Data. The data streams that are consumable by IBM InfoSphere Streams can originate from sensors, cameras, news feeds, stock tickers, and a variety of other sources, including traditional databases. The product provides an execution platform and services for applications that ingest, filter, analyze, and correlate potentially massive volumes of continuous data streams.
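Streams applications are actually written in SPL, but the record-by-record idea behind RTAP is easy to sketch in any language. Here is a minimal, hypothetical Python illustration of analyzing readings as they arrive, keeping only a small window in memory instead of storing the stream:

```python
from collections import deque

def moving_average(stream, window=3):
    """Emit a running average over the last `window` readings
    without ever storing the full stream."""
    buf = deque(maxlen=window)
    for reading in stream:
        buf.append(reading)
        yield sum(buf) / len(buf)

# A real source could be a sensor feed; a simple list stands in here.
sensor_feed = [10.0, 12.0, 11.0, 50.0, 13.0]
alerts = [avg for avg in moving_average(sensor_feed) if avg > 20.0]
print(alerts)
```

Only the analytic result (the alerts) needs to be kept; the readings themselves flow through and are discarded, which is the essence of analyzing data in motion.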
This book is intended for professionals who need to understand how to process high volumes of streaming data, or who need information about how to implement systems that satisfy those requirements.
Many thanks to Chuck Ballard, Sandy Tucker, Elizabeth Peterson, Vitali Zoubov, Brian Williams, Warren Pettit, Diego Folatelli, and the team of reviewers who contributed to this new edition.
The Conference bookstore will be a busy place this year. Here are the blog entries that summarize all the events and provide session numbers:
Flashbooks at IOD
Bookstore Activities – IBM Information on Demand Conference
Here are the other signings and giveaways that are taking place at the event:
- Meet Author Sandy Carter at IOD2011 - Get Bold: Using Social Media
- Meet Netezza Expert & Author David Birmingham at IOD11
- Meet Data Warehouse expert & author: Bob Laberge
- Meet author Tony Giordano, “Data Integration Blueprint and Modeling”
- Meet FileNet expert and author Bill Carpenter at IOD11
- Meet author & decision management expert James Taylor at IBM’s Information on Demand Conference
- From Idea to Print by Roger Sanders
- Flashbook: Understanding Big Data: Analytics for Enterprise Class Hadoop and Streaming Data
- Meet author and DB2 expert Roger Sanders at IOD11
- Meet Flashbook author Arvind Sathi - Customer Experience Analytics
- Meet DB2 authors and experts Lawson & Luksetich at IOD: DB2 10 for z/OS DBA Cert Guide
- Meet Sunil Soares - Selling Information Governance to the Business: Best Practices by Industry and Job Function
- IBM Redbooks – get free printed copies at IBM’s Information on Demand Conference
Coming soon: A podcast series where I’ll interview as many of these authors as I can!
Join host Scott Hayes and special guest Maria Schwenger from
IBM for a free 90-minute episode of “The DB2Night Show”. The
topic is “Crucial things you want to know as a DBA to manage a pureScale
system” and Maria will share her opinions and insights.
Date: Friday, May 25, 2012
Time: 11-12:30 EDT
Do you know Maria yet? Here’s more about her:
Maria Schwenger is a member of the Competitive Software & Power Systems team at IBM, North America, where she works in a “high touch” model with early release participants to promote the newest IBM and DB2 technologies, encourage their early adoption, gather feedback and references on pre-released products, and enable applications written for competitive databases to run on DB2.
For the last 3 years, Maria has specialized in the architecture and implementation of active-active high availability database solutions using the DB2 pureScale feature. Currently, Maria works with customers on adoption of PureApplication System, part of the newest IBM offering of Expert Integrated Systems. Maria has over 15 years of experience in performance engineering, database architecture, administration, and development on DB2, Oracle, and MS SQL Server, as well as extensive experience in migrating legacy systems to relational databases. She has authored articles, presentations, tutorials, and an IBM Redbook about the DB2 9.7 SQL Compatibility feature and DB2 pureScale.
After attending this talk, you’ll be happy that you “met” Maria!
PS… I’m posting this information rather late (i.e., the day before), so if you
read this after the webinar date, don’t worry. Scott records all these talks
and you can find them on his site.
If pureScale is your thing… check out the Flashbook that you can download for free.
Happy New Year! I hope you had a restful holiday!
I had a present waiting for me when I returned to work: two books from the publisher Packt. I was recently made aware of this publisher and hope to work with them this year. Here are the two books that they sent me to take a look at:
by Ned Riaz, Jason Edwards, and Rich Babaran
This was published in July 2009 and is aimed at those who need an introduction to IBM Cognos 8 Planning. According to the cover, the book gives clear, easy-to-understand instructions on how to design, build, and deploy Planning models. It focuses on the essential tools that first-time developers need to know.
The authors are very experienced consultants who provide step-by-step guidelines for using the modeling tool, Analyst.
by Anthony Chaves
This book is aimed at intermediate Java EE developers who want to build applications that handle larger data sets with massive scalability requirements. IBM WebSphere eXtreme Scale 6 provides a solution to scalability issues through caching and grid technology. Working with a data grid requires new approaches to writing highly scalable software. This book covers both the practical eXtreme Scale libraries and the design patterns that will help you build scalable software.
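The eXtreme Scale APIs themselves are Java, but one of the "new approaches" the blurb alludes to, the cache-aside pattern that a data grid typically serves, can be sketched in a few lines. This is a hypothetical Python model (the class name, store, and TTL are invented for illustration):

```python
import time

class CacheAside:
    """Minimal cache-aside pattern: check the (grid-like) cache first,
    fall back to the slow backing store on a miss."""
    def __init__(self, backing_store, ttl=60.0):
        self.store = backing_store
        self.ttl = ttl
        self.cache = {}  # key -> (value, expiry timestamp)

    def get(self, key):
        hit = self.cache.get(key)
        if hit and hit[1] > time.monotonic():
            return hit[0]                      # served from the cache
        value = self.store[key]                # slow path: backing store
        self.cache[key] = (value, time.monotonic() + self.ttl)
        return value

db = {"user:1": "alice"}      # stand-in for a database
grid = CacheAside(db)
grid.get("user:1")            # miss: loads from db and caches
db["user:1"] = "changed"
print(grid.get("user:1"))     # hit: still the cached value until TTL expires
```

The trade-off the sketch exposes is exactly what grid programming is about: reads scale because they avoid the database, but the application must now reason about staleness and expiry.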
If you've read either of these books... or other titles by Packt, let me know what you think.
IBM Data Studio
provides an integrated data management environment that offers a comprehensive solution to help you design, develop, deploy, and manage database applications throughout the data lifecycle.
Give it a try!
Download IBM Data Studio Administrator for DB2 for Linux, UNIX, and Windows V2.1
at no cost.
Improve database administrator (DBA) productivity and reduce application outages by automating and simplifying complex DB2 structural changes. This helps DBAs easily manage DB2 structural changes while consistently ensuring data integrity and reducing application downtime. Enhancements for V2.1 include:
- New task assistants for common database administration tasks
- Faster navigation to the data sources you care about
- Easier, more intuitive task initiation from the Data Source Explorer
Download IBM Data Studio Developer V2.1
at no cost.
Improve development productivity by up to 50 percent for developing and testing SQL, XQuery, and Java queries, stored procedures, Web services, and data access layers.
IBM Data Studio Developer 2.1 enhancements include:
- Reducing or eliminating SQL injection risk for Java database applications
- Giving developers more information to focus SQL tuning efforts
- Simplifying impact analysis for any database changes
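The Data Studio tooling targets Java database applications, but the principle behind reducing SQL injection risk is language-neutral: bind user input as parameters instead of concatenating it into the SQL text. A small sketch using Python's built-in sqlite3 module (the table and values are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (owner TEXT, balance REAL)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100.0), ('bob', 50.0)")

user_input = "alice' OR '1'='1"  # a classic injection attempt

# Vulnerable: the input is spliced directly into the SQL text, so the
# attacker's quote characters rewrite the query logic.
vulnerable = "SELECT owner FROM accounts WHERE owner = '%s'" % user_input
leaked = conn.execute(vulnerable).fetchall()   # matches every row

# Safe: the ? placeholder sends the input as data, never as SQL.
safe = conn.execute(
    "SELECT owner FROM accounts WHERE owner = ?", (user_input,)
).fetchall()                                   # matches no rows
print(len(leaked), len(safe))
```

The same contrast applies to JDBC's `Statement` versus `PreparedStatement` in the Java applications Data Studio analyzes.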
Not sure where to start? Check out these demos
that illustrate how a fictional company uses Data Studio software to enhance collaboration among team members for improved productivity and application performance while solving a real-world business problem.
Don't forget to sign up for the Data Studio newsletter
so you can keep on top of changes as they happen.
IBM Information On Demand EMEA Conference
19 – 21 May 2010, Rome, Italy
Registration is now open for our 3rd Information On Demand EMEA Conference 2010! This industry-leading, pan-IBM event will be held from 19 – 21 May in Rome, Italy, and provides you with the strategy and roadmap you can use to turn information into a strategic driver of innovation, business optimisation and competitive differentiation.
IOD EMEA 2010 offers a world-class technical and business leadership programme and provides an unrivalled forum to share best practices, learn the latest on information-led transformation, and network with peers. If you register before 26 February 2010, you can take advantage of our "Early Bird" offer and save up to €610, so sign up early!
I don't have my approval to attend the conference ... yet... but I'm working on it. Planning sessions for the bookstore begin next week. Regardless of the location, the IOD EMEA conference is worth attending! But going to Rome to attend this conference makes it a MUST-attend event.... do you agree?
Plenty of books exist, and if you've read any of my blog entries,
you'll know that I blog quite frequently about books before they
publish, when they publish, or if we have an event around a book. To
make it easy for you to keep up to date on the growing collection of
books, I encourage you to join my group:
This group is on My developerWorks. With My developerWorks, you can create your own personal profile and custom home page (My Home) to get instant access to the people, feeds, tags, bookmarks, blogs, groups, forums, and other content that you care about. See this how-to article.
Once you have a profile, you can join my group and learn about
discounts, offers, events, and when a book publishes. I've invited the
authors of these books to also join the group to make it easy for you
to contact the author if needed.
Scott Hayes, the fabulous host of The DB2Night Show, has published the details of the upcoming shows that he has planned.
ONLINE TABLE MOVES with special guest Ergin Babani, IBM Toronto Lab
In this episode, special guest Ergin Babani from the IBM Toronto Lab will share with us information and tips on the new Online Table Move capabilities introduced in DB2 9.7. Be a guest in our virtual studio audience so you can learn how easy it is becoming to move data in DB2. There will be an opportunity to ask questions after Ergin's presentation and demonstration.
RESERVE YOUR Studio Audience FREE SEAT: Friday, April 2, 2010 at 10am CENTRAL.
DB2 AUTONOMICS UPDATES and STMM with special guest Adam Storm, IBM Toronto Lab
In this episode, special guest Adam Storm, DB2 Kernel Development, from the IBM Toronto Lab will share with us recent updates to DB2 autonomics in V9.5 and V9.7, plus updates on STMM and best practices. We will take questions from our virtual studio audience after Adam's presentation.
DB2 LUW Performance Update & Best Practices with special guest Berni Schiefer, Distinguished Engineer, IBM Toronto Lab
In the DB2 LUW Community, Berni Schiefer is a man that doesn't need an introduction - but, just in case you are newer to DB2 LUW (like one of the many Oracle converts), Berni is "the performance guy". He's a Distinguished Engineer who leads the Information Management Performance and Benchmarks team at the IBM Toronto Lab. Berni has been a frequent speaker at IDUG conferences and other venues, and he gives terrific presentations that share DB2 Performance Benchmark results and performance best practices.
As always, if something prevents you from attending one of these sessions live, remember that they are recorded. Scott just mentioned that the Oracle session was downloaded more than 5,000 times! Have you seen that one yet?
Scott is also trying to fit Sam Lightstone into a session, either as a secondary guest or for a show of his own, to promote his new book: Making it Big in Software.
Join DB2 expert Kelly Schlamb and panelists Ben Ream and Debbie Hauss as they discuss The New Rules Of Always-On Retailing in a free webinar.
Date: Tuesday, December 18, 2012
Time: 1:00 pm ET
Retailers
have specific database needs, especially at this time of year. Any outage to a retailer's computer systems quickly means lost revenue. To help retailers ensure that their systems can handle unexpected transaction spikes and adjust to new demands quickly, they must deploy and manage a continuously available environment. A continuously available environment means transactional data can be accessed in real time. The best data management systems evaluate the performance of each specific task, offering new levels of availability and scalability on distributed platforms and enabling extreme workload capacity while cutting costs across the enterprise. Learn how DB2 pureScale can minimize outages and
keep systems up and running.
Register for this webinar
to hear how retailers can leverage advanced data management in order to
maximize their capacity to fill orders and process transactions.
Attendees also will learn how to respond to demand spikes easily, meet
SLA demands with proper workload management, and address performance
concerns in order to ‘keep the lights on’ at all times.
Panelists:
• Kelly Schlamb, WW IM Technical Sales Acceleration, IBM
• Ben Ream, Principal Retail Analyst, Ream Consulting Services
• Debbie Hauss, Editor-in-Chief, Retail TouchPoints
Related content: Ember Crooks writes a blog called DB2 Commerce, where she focuses on issues related to the retail industry. Two recent articles in IBM Data Management Magazine about retailing were written by IBM Champion Scott Hayes:
IDUG EMEA took place in Prague a few weeks ago and like all European conferences, it was fun and educational. In some ways it feels like a DB2 reunion as well since it is much smaller than other conferences that I attend. This makes it easy to connect with people and to catch up since we last got together.
Personally I have a few stories I’d like to share with you, but not now. Now I’d like to tell you about meeting "DB2’s Got Talent” winner Norberto. This was a trip of “firsts” for Norberto: his first time in Europe, first IDUG, and likely a few other firsts as well. If you missed the conference, I strongly encourage you to join Friday’s DB2 Night Show episode where you can hear directly from Norberto about his trip, what he learned, and what he thought of IDUG.
Space is limited, so Register now. Here are the details:
The DB2Night Show Episode #64: Things I Learned at IDUG EMEA with DB2's GOT TALENT Winner Norberto Filho
Date: Friday, December 2, 2011
Time: 11:00 AM - 12:00 EST
After registering you will receive a confirmation email containing information about joining the Webinar.
Required: Windows® 7, Vista, XP or 2003 Server
During this unique episode of The DB2Night Show, Norberto will share with our audiences many of the most interesting things that he learned while attending IDUG EMEA.
Refresh your memory on the absolutely wonderful series of “DB2’s Got Talent” Shows:
First Episode of “DB2 has Talent”
Second Episode of DB2 has Talent
Third Episode of “DB2 has Talent”
Fourth Episode of "DB2 has Talent"
DB2's Got Talent - March 11 Episode
DB2's Got Talent - 3rd Episode of the Finals
DB2’s Got Talent Finale
DB2, DB2Night Show and IDUG as viewed by JB
Tips for DBAs from Guest Blogger Norberto.... DB2's Got Talent contestant
And the winner of the DB2's Got Talent Competition is….
PS. I’ve completely lost my voice due to a bad cold. I’m glad Norberto is presenting and not me. You likely wouldn’t be able to hear my whispers :)
I encourage everyone to give this a try today! I blogged about this virtual conference (No-travel conference: Feb 25 - Data in Action)
last week, but today is the day.
One thing that I didn't mention in my previous blog is that this conference is FREE. You have nothing to lose, and a wonderful new technology experience to gain.
Cool things that I've noticed so far.... the "booth presentations" are short and very well done. You can chat with an expert... and when you do, check out their super-cool avatar! You can also go to the Chat Zone to chat with others who are "attending" this conference.
How do you join? Register here: Data in Action Virtual Event: Solutions for an efficient environment
Have fun! Let me know what you think.
One more thing... pass it around... we want as many people to join this conference as possible.
Two of the publishers I work with have sent me information about sales they are running this weekend, until Sept 6. I hope you’re able to take advantage of these sales to stock up on books that will help you keep your skills as sharp as they can be:
Save 50% on all ebooks through Sept 6, 2011:
Enter coupon code LABORIBM at step 3 of checkout to save 50% off IBM Press eBooks in your shopping cart.
Books you may be interested in:
Making the World Work Better: The Ideas That Shaped a Century and a Company
by Kevin Maney, Steve Hamm, Jeffrey O'Brien
IBM Cognos 10 Report Studio: Practical Examples, Rough Cuts
By Filip Draskovic, Roger Johnson
The IBM Style Guide: Conventions for Writers and Editors
By Francis DeRespinis, Peter Hayward, Jana Jenkins, Amy Laird, Leslie McDonald, Eric Radzinski
Data Integration Blueprint and Modeling: Techniques for a Scalable and Sustainable Architecture
By Anthony David Giordano
DITA Best Practices: A Roadmap for Writing, Editing, and Architecting in DITA, Rough Cuts
By Laura Bellamy, Michelle Carey, Jenifer Schlotfeldt
Save an additional 10% off the price of all their books. Books you’ll be interested in:
DB2 9.7 for Linux, UNIX, and Windows Database Administration (Exam 541) by Roger Sanders
Here is everything you need to know to pass the DB2 9.7 for Linux, UNIX, and Windows DBA Certification exam (Exam 541)!
List Price $21.95
Our Price $19.76
(You Save 10%)
Viral Data in SOA
An Enterprise Pandemic
Author: Neal A. Fishman
Happy Holidays & Happy Reading!
Today I spent an hour taking part in the TweetChat at Big Datamgmt focused on governance to avoid a data landfill: http://t.co/j2wojSb9Hf: "Getting Control of Data in Big Data Era". It went too fast for me to actually be a contributor, so I participated as a reader/listener. This kept me busy enough, since by the end we had generated a fair amount of Big Data ourselves: 647 tweets and 180 users, with a reach of 136,229 and 1,506,585 impressions.
Who were the experts?
and facilitators / moderators:
There were 8 questions posed over the hour, but I'm only posting the first 4 here.
Q1 In this Big Data era, do traditional concepts of data quality, data governance & data stewardship even apply?
A summary of the answers:
Big Data refers to datasets whose size, type and speed of creation make
it impractical to process and analyze with traditional tools. That Big
Data definition comes from wikibon; see http://t.co/awsPyuqXjZ. So given that, definitionally then, traditional concepts are at the very least “impractical”… no?
dvellante My belief is that ingest process & analysis of data changes with big data.
BigDataAlex Yes, I think they apply. Our clients are very concerned about these issues and it does apply.
jeffreyfkelly Absolutely, but vastly more complex.
Natasha_D_G Traditional concepts are even more critical in Big Data era especially in data governance.
craigmullins But, of course data quality, data governance and data stewardship SHOULD apply in the age of Big Data Management.
You still need clean and common policies for data taxonomies; but the
unstructured and semi-structured data texture requires some new thinking
and technology. Specifically ideas around function shipping, name value
pairs, Hadoop, etc - applying traditional concepts to new model.
Dmattcarter In order for Big Data to be enterprise-ready, it needs to include those traditional concepts.
jeffreyfkelly The challenge is applying DQ and governance to high velocity data - hard enough with "traditional" data, ie CRM, ERP.
craigmullins Failing to apply these concepts will result in poor data quality. Analytics performed on bad quality data produces bad results.
BigDataAlex I think transparency is important too in this era of Big Data and how we govern. I would suggest Big Data Ethics manager.
BTRG_MikeMartin IG concepts apply to Big Data even more so as the issues solved by Information governance are only exaggerated.
furrier Data quality has to take on the idea that it will be moving around different systems/APIs.
Yet there are issues and adaptations that will be required as we apply
data quality, data governance and data stewardship to Big Data
BigDataAlex Love the challenge on high velocity data....algorithms in streams.
jeffreyfkelly Big Data is experimenting with data sets, while governance is applying policies that sometimes restrict experimentation.
BTRG_MikeMartin You can’t make good business decisions on bad data. http://t.co/8J1pQPy6eW
Natasha_D_G Data quality is an issue as "94% biz believe some of their customer/prospect info is inaccurate".
Data governance is critical in the Big Data management era, as Big Data makes small problems bigger. You need data quality to enable BigInsights http://t.co/yVTA9NpXIB
furrier Data as a resource for applications; ownership of data is important to individual and/or company.
BigDataAlex In health care sector, orgs are combining medical ethics with their CIOs.
Aarti_Borkar Governance is even more important with Big Data as the security and trust is a bigger business issue now.
dvellante In part this is a discussion around the balance between data being an asset an a liability - good DQ is important for both.
searchCIO Metadata practices are gaining momentum as companies tackle Big Data. http://t.co/DSkdH4Yk6S
Q2 With data at unprecedented speed/volume, how can data quality measures be applied in time for analysis?
A summary of the answers:
With data quality, cleansing can occur as humans eyeball the data - but most raw Big Data is never eyeballed. In some cases (e.g., medical devices, automated metering) only rudimentary cleansing, if any, may be needed - at least as long as the meters are calibrated.
BigDataAlex Real-time analytics is critical. We love Streams. The right algorithm at the right time.
Natasha_D_G Trust = Word we try to avoid. @Aarti_Borkar: Governance is even more important with Big Data as security & trust bigger biz issue.
To deal with Big Data, speed, and volume: be proactive by starting
Big Data Management across the enterprise now & maintain http://t.co/hGJ3QkTiJf
Aarti_Borkar Data Quality for Big Data can be handled right upfront before starting Big Data analysis
BigDataAlex A next-generation of KPIs for quality vs. quantity are being implemented to separate quality from quantity in real-time.
furrier Data quality is about the context of the application & what users experience for each use case is not always the same.
jeffreyfkelly Machine learning is required to improve data quality for Big Data - velocity too high for human methods IMHO
nenshad Variety of algorithms include semantics
zacharyjeans Ask your Big Data well crafted questions. Sloppy questions lead to sloppy answers.
craigmullins Speed + volume make data quality challenging…
searchCIO Data Quality is essential to master Big Data Management http://t.co/pxZ49Xgimm
BTRG_MikeMartin Start now on data quality because if you don’t have it in now Big Data only magnifies data issues http://t.co/hGJ3QkTiJf
Natasha_D_G Excellent question especially given social media data and its 18 minute life span
jeffreyfkelly Also with Big Data, volume of data can sometimes smooth over anomalies in data quality.
Aarti_Borkar Data quality should also be handled as the results of the analysis are merged back into the reporting marts.
BigDataAlex The right analytics at the right time against the systems of systems integration.
dvellante Perspectives from a former CIO on the importance of data quality http://t.co/mYPfqNCCjm
nenshad It’s all about the data first
dvellante In my view you can't deal with Big Data quality unless you can automate the classification of data at the point of creation.
Kari_Agrawal How exactly do we clean the data when it has no structure?
BTRG_MikeMartin You can’t make good decisions and enable business biginsights without high data quality.
furrier Dirty data equals poor user experience. I wrote about it in 2009 re: twitter facebook & social data http://t.co/vpkfB0xS3h
Aarti_Borkar Data quality should be handed as part of data integration as the Information Server customers do - its the same with Big Data.
Q3 How do data governance policies apply when the point of Big Data is to explore novel use cases?
A summary of the answers:
craigmullins Finding novel uses of data does not diminish the need for data governance policies.
Natasha_D_G True, but still need boundaries.
BTRG_MikeMartin Exploring Big Data still requires trusted data so you must secure and govern even more so. http://t.co/UL0VNCiivP
craigmullins The novel uses need to be documented as part of the data governance policies.
BigDataAlex The right policy at the right time. I think you can have agility with accountability.
craigmullins Keeping in mind that even under ideal circumstances data governance policies can be difficult to enact.
Big data isn't just for novel new business cases - it can also vastly
improve value in existing ones - i.e. R&D, cust service.
craigmullins Consider non-intrusive data governance; see this article by my friend Bob Seiner http://t.co/GogojXCcoV
Seiner states: data governance refers to the administering
(formalizing) of discipline (behavior) around the management of data.
craigmullins And data governance is an on-going process; it should formalize what already exists + address opportunities to improve.
jeffreyfkelly There is a need to set up boundaries but give analysts freedom to explore Big Data.
furrier Innovation will not come from regulations but creative developers to play with data -#slipperyslope
Q4 How does Big Data change data retention policies, ie, deciding what data to keep vs dispose?
A summary of the answers:
tomjkunkel Formal Data Destruction processes minimize the growing data landfill and need to be incorporated into Data Lifecycle Mgmt.
dvellante Still must be able to defensibly delete data. You may not want WIP data hanging around - too much of a risk.
BTRG_MikeMartin Big Data is not immune to the laws of information economics: http://t.co/Ta361ASBkP
BigDataAlex Focus on workflow, business process, optimization. There is no set answer. Filtration - distillation
BTRG_MikeMartin Velocity of Big Data means current best data is changing rapidly, you want decisions on the best info.
BTRG_MikeMartin: It is important to have Big Data Management framework for good business outcomes inc. policy, security, ILM & quality.
Data is retained for internal + external reasons... Internal because
the org needs it for business – external because the law demands it.
tomjkunkel Isn't there also a need for Data Entrepreneurs (A business perspective with a knack for data)?
You may choose to retain more data for Big Data Management analytics, but be careful, because data, once retained, is discoverable during court proceedings.
furrier Big data complicates data retention policies - we have shadow IT and now "shadow data" or what I call "dark data".
Natasha_D_G Big Data can extend data retention esp in R&D. Pharmas can leverage old research to accelerate new research.
jeffreyfkelly This is a major issue: with hadoop you can now store all data inexpensively - not possible before and new challenge.
BTRG_MikeMartin NO still too costly.
Kari_Agrawal If we see the huge amount of IP packets flying around, can we process those packets to get something meaningful?
craigmullins There are over 150 different regulations (at the local, state, national, and international levels) that impact data retention.
Aarti_Borkar Retention is about storing what the business needs later vs everything - that core concept does not change with Big Data.
BigDataAlex Do we need to store everything? Can we, should we?
craigmullins No, no, and no to that last series of questions!
Natasha_D_G Data hoards say keep all! Fear of losing critical info.
jeffreyfkelly Nothing worse than looking for data you know you had only to remember you threw it away!
Aarti_Borkar Defensible disposal of data becomes harder if multiple copies are made as part of Big Data analytics.
craigmullins MT @Aarti_Borkar: Defensible disposal of data becomes harder if... hence the need for #datagovernance policies!
furrier We all want data retention but who owns it after it's retained..will a data marketplace economy develop?
TheSocialPitt Storage is a huge challenge, especially in cases with many streaming video feeds, e.g. defense.
Keep in mind regulations haven't caught up w the technology - industry
needs to be proactive on this issue or the government will.
Aarti_Borkar Big Data allows for pattern searches and trends in retained data that was not easy to do earlier.
That is a lot of information! I hope you can follow the discussions. I tried to clean it up a little bit and hope that I didn’t change any content from the participants.
To find out more about managing big data, join IBM for a free event: http://ibm.co/BigDataEvent
Webinar: Musings on DB2 Security for the DB2 LUW DBA
In this timely Webinar, my good friend Rebecca Bond, independent security consultant and author of Understanding DB2 9 Security, will share important DB2 Security Tips and Best Practices with participants. People who know Rebecca personally also know that she is a part-time comic, capable of turning the dullest, most boring topics into something truly entertaining, informative, and memorable.
Here is a sampling of topics Rebecca might cover if she doesn't have a headache:
- Security features in V9.1 & V9.5
- Cost, Risk, Mitigation, People
- Disaster Recovery & Trust
- Don't do these STUPID things
- Touching DB2's Best Known Tickle Points
- Authorizations, Privileges, Roles, LBAC, Trusted Contexts
- HOT Data & Encryption
- Thinking like a Hacker, but avoiding Jail
- Performance versus Security Smackdown
- Other cool things too numerous to list
"Rebecca Bond is an independent DB2 LUW Security Consultant. With a background in government, healthcare and financial DB2 consulting, she is adept at designing efficient, secure database architectures that balance the twin needs of performance and protection. Rebecca is the author of Understanding DB2 9 Security, published by IBM Press, and has written articles on security topics for the IDUG Solutions Journal. She holds numerous DB2 certifications and has been designated by IBM as a Subject Matter Expert."
ORDER THE BOOK: Understanding DB2 9 Security
One lucky Webinar attendee will be randomly selected to win a $50 Amazon.com gift certificate - enough cash to buy the security book plus other fun things!
US and European Phone numbers will be provided, as well as VoIP.
We look forward to seeing you online!
Title: Musings on DB2 Security for the DB2 LUW DBA
Date: Thursday, May 7, 2009
Time: 9:30 AM - 10:30 AM CDT
After registering you will receive a confirmation email containing information about joining the Webinar. Register Now
See the latest DB2 Magazine edition for an excellent article by Roger Sanders: Could Label-Based Access Control be the Magic Bullet that Tames the Privacy Beast?
After reading the Distributed DBA column on database cloning for Linux, Unix, and Windows in DB2 Magazine, a reader who works in the heavily regulated banking industry cried out for help. How, he wondered, could he keep sensitive information away from developers' eyes without having to obfuscate multiple tables? "I think I'm searching for a magic bullet that doesn't exist." But does it exist? Columnist Roger Sanders thinks DB2 9 might offer a solution in the form of label-based access control. Find the answer, and lots of other useful information, in the DB2 Magazine Email Newsletter Volume 8, Issue 4
Also on LBAC, see Rebecca Bond and her team's book Understanding DB2 9 Security
and the Web Chat: DB2 9: Securing your data with Label-Based Access Control. A replay of this chat, held on October 12, 2006, is now available.
Securing information assets and restricting data access to only those with a need to know is becoming a significant challenge for many organizations, especially in highly classified environments such as government agencies, armed forces, and healthcare institutions. Label-Based Access Control (LBAC) is a new security capability in DB2 9 that allows you to control access to your data at a very granular level. With LBAC you can limit data access at the row and/or column level based on security labels. Unlike traditional implementations of mandatory access control (e.g., Multilevel Security), the DB2 LBAC capability allows you to tailor the security label definition to best suit your application-specific needs. DB2 LBAC integrates well with other DB2 capabilities (like data partitioning) and can be combined with them to offer even stronger security.
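The row-level idea behind LBAC can be shown with a tiny sketch. This is plain Python, not DB2 syntax, and the label names, levels, and row data are all made up for illustration: a reader sees a row only when the reader's security label dominates (is at least as high as) the label attached to the row.

```python
# Toy illustration of the LBAC row-level idea (not DB2 syntax).
# Labels here form a simple ordered hierarchy, as in Multilevel Security.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2}

def dominates(reader_label, row_label):
    # A reader's label dominates a row's label when its level is at least as high.
    return LEVELS[reader_label] >= LEVELS[row_label]

# Each row carries a security label alongside its data.
rows = [
    ("account-1001", "UNCLASSIFIED"),
    ("account-2002", "CONFIDENTIAL"),
    ("account-3003", "SECRET"),
]

def visible_rows(reader_label):
    # Rows the reader may not see are silently filtered out, as if they
    # did not exist -- the behaviour LBAC provides at the row level.
    return [data for data, label in rows if dominates(reader_label, label)]
```

A CONFIDENTIAL reader would see only the first two rows; in real DB2 9 the same filtering is declared with security label components, a security policy, and GRANT statements rather than application code.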
Join Sal Vella, VP of DB2 Development, and Walid Rjaibi, Senior DB2 Software Engineer - Security and Privacy, for a deep dive into the LBAC capability in DB2 9 on Linux, UNIX, and Windows.
See also the article: DB2 Label-Based Access Control, a practical guide, Part 1: Understand the basics of LBAC in DB2, by Carmen Wong and Stan Musker.
Magnus Lindkvist, keynote speaker and facilitator
Author of Book: Everything We Know Is Wrong – The Trendspotter’s Handbook
Magnus is a trendspotter and futurologist based in Stockholm, Sweden. His mission is to help companies make sense (and money) of future possibilities and to inspire people into action. He founded his company Pattern Recognition AB in 2005 and has a mix of large conglomerates and nimble upstarts in finance, energy, media, and IT on his client list. Furthermore, he was elected Sweden’s Business Speaker of the Year in 2009.
Meet Magnus now at the IOD 2010 Bookstore at IOD EMEA. Buy a copy of his book and get it signed.
Last week at IDUG, there were several side discussions about social media and how it fits in with database experts. I’ve noticed that many more people are using Twitter, Facebook, and LinkedIn to learn about things that are happening in the DB2 and Big Data spaces of the world. Are you one?
This post will focus on Twitter, with a few comments about the other major sites.
One of the comments that I get frequently from people who don’t use Twitter is “I don’t want to read about what people are eating or when they go to the bathroom”. You’ll only get these comments from genuine “twits”! I can assure you that I see none of that in my tweetstream, and if I did, I’d quickly drop them.
By the way, dropping someone you follow on Twitter is very easy to do... and don’t worry, they are not notified that you have dropped them. If I follow someone for database information but find that the person tweets about religion, politics, or markets to me... I drop them. OK, some of that information doesn’t bother me so much, but if it happens too often, I drop them.
- The @ sign is used to call out usernames in tweets, and it is also how your own username is represented. Examples: @susvis, @ibm_db2, @idugdb2
- Use the # before relevant keywords to categorize tweets in Twitter search. Examples: #IBMBLU, #DB2zos, #IDUG, #IDUGNA, #IBMChampion, #bigdata, #database, #DB2
Who should you follow?
If you want this to be a tool for getting information about work-related items... and your work revolves around DB2, I’d suggest you follow these official accounts:
- The official Twitter ID of IBM DB2 for Linux, Unix and Windows
- The official Twitter ID of IBM DB2 for z/OS
- The Twitter aggregator of blog entries from planetdb2.com
- The official Twitter ID of the International DB2 Users Group (IDUG)
- IBM Information Management Support news, updates, and information
- Join us Nov 3-7, Las Vegas. http://ibm.co/IBMIODregister
- A fun virtual edutainment TV show for the DB2 community
- The official IBM Data Magazine Twitter account
The list of people joining twitter every day is growing so quickly, it is hard to keep up. I’m trying to create lists of people who you may wish to follow by putting them into categories such as IBM Champion, IDUG NA Attendees, IBMers... etc. You can see some of my lists here: https://twitter.com/IBM_DB2/lists If you want to be added, just send me a note or message via twitter and I’ll add you!
Now start listening on twitter
There is no need to have followers or to tweet (create & send messages) when listening. You don’t even need an account! You can simply enter a hashtag in Google and you’ll see what messages were created and tagged with it. Give it a try: type #bigdatamgmt in your search engine to see.
You don’t need to live on Twitter to get benefit from it. Think of Twitter as a way for you to scan the headlines. If you see something that interests you, you can click the link to read more about the topic. If you then like what is said in the link, you can retweet to share the posting with others, or mark it as a favourite so that you can see it again later.
Start sending messages!
When you feel comfortable with Twitter, you can begin retweeting interesting tweets to your followers. Followers will come once you have a well-defined profile set up. Eventually, start tweeting your own 140-character messages. You must fit your message, link, mentions, and hashtags within this 140-character limit. You’ll quickly learn how to be concise with your messages.
The most successful tweets have a coherent message with a link to related content. If you can include a photo or a graphic, your tweet may also get additional attention.
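The squeeze described above (message plus link plus hashtags inside one 140-character budget) can be sanity-checked mechanically. This is a purely illustrative sketch; the helper name and the sample text are made up, and real clients also shorten links, which this toy does not do:

```python
TWEET_LIMIT = 140  # Twitter's character limit at the time of writing

def compose_tweet(message, link="", hashtags=()):
    # Join the message, an optional link, and any hashtags with spaces,
    # then verify the whole tweet fits within the limit.
    parts = [message, link] + list(hashtags)
    tweet = " ".join(p for p in parts if p)
    if len(tweet) > TWEET_LIMIT:
        raise ValueError("tweet is %d chars; the limit is %d"
                         % (len(tweet), TWEET_LIMIT))
    return tweet
```

For example, `compose_tweet("Listening on Twitter is a great way to scan headlines", hashtags=("#bigdata", "#DB2"))` fits comfortably, while a long message with several hashtags would raise an error before you ever hit send.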
There are many related tools that have been developed to make tweeting and following tweets more effective. Most of them are aimed at advanced users, but I’ll suggest two that you’ll find quite useful.
First... tweetchat.com. This is an easy way for you to read all the tweet messages related to a specific hashtag category. I recommend that you use this if you are taking part in a tweetchat or tweet conversation such as a debate. Once you sign in, you’ll easily be able to contribute to the conversation.
Second, Flipboard.com... I was introduced to this tool as a way to look at Twitter messages on my iPhone. If you have another smartphone or a tablet, you’ll also find it useful. You can group Twitter accounts or lists together and have them presented to you in a magazine-like format. I have just learned that you can also add other content into the stream as well: Facebook, videos, and more!
As an example, see the flipboard for big data content that was created by IBM Champion Frank Fillmore.
That’s a good summary of Twitter. Let me know if you have questions so I can build on the content here.
Welcome to Twitter
See also my slides that I created for the DB2Night Show a few months ago.
The Beginner’s Guide to Twitter
I’m looking forward to listening to Sam Lightstone’s latest webinar about his best-selling book “Making it Big in Software”. This webinar is presented by Safari Books Online and is free.
Date: Wednesday March 16, 2011
Time: 1:00 pm EDT (10:00 am PDT)
Register Now: Safari Books Online
This webcast is perfect for anyone who wants to jumpstart their career in software! Individuals are all too easily confined by the scope of their current position. Gain insight into how to create your own path toward greater success through inspiring advice and real-life stories from author Sam Lightstone. Discover how to:
- Accelerate your career, and promotions, with integrity
- Master the nontechnical skills crucial to your success
- "Work the org" to move up rapidly
- Successfully manage your time, projects and life
- Move up to "medium-shot," "big-shot" and finally "visionary"
The software business is constantly changing; to make it big, you need a finger on the pulse of today's realities.
Sam Lightstone is the creator of MakingItBigCareers.com as well as Program Director and Senior Technical Staff Member with IBM’s Software Group, where he works on product strategy and R&D for one of the world’s largest software engineering teams.
Sam is a sought-after public speaker, author, and prolific inventor who still spends a good part of his professional time recruiting and mentoring software engineers.
If you’re a regular reader of my blog, you’ll remember a few other entries I’ve posted in relation to Sam’s book.
Campus visit - how to improve technical writing skills
Wine and Cheese and Sam Lightstone at IOD
Meet author Sam Lightstone “Making it Big in Software”
Making it Big .... Live on the DB2Night Show!
Two live sessions coming up with author Sam Lightstone...
Making it Big in Software
Yes, I have heard Sam speak about his book several times, but I can tell you that he changes the topic for each talk. His book is so rich with advice that he could do quite a few more sessions before he runs out of topics.
Congratulations on the success of your book Sam!
Maybe it's a bit sad that in the book world, a book that was published in 2004 is considered a classic! But with the number of new books being published on a daily basis... such is life! Developing Quality Technical Information: A Handbook for Writers and Editors (2nd Edition)
by Gretchen Hargis, Michelle Carey, Ann Kilty Hernandez, Polly Hughes, Deirdre Longo, Shannon Rouiller, and Elizabeth Wilde.
I have this book on my desk and think fondly of the team of people who wrote the book. Seven talented women from IBM wrote the book using their collective years of skill that they built working in Information Development. Sadly, Gretchen is no longer with us and several of the other authors have since retired from IBM. But, we can still benefit from their learned experience.
The book was so well written that Amazon displays 20 reader comments... most of them four- or five-star reviews! I recently went back to the author team to ask whether an update was needed. The answer came back... NO. The content is still very relevant and useful to anyone who is writing or reviewing technical content.
Who is writing technical content these days? I would have to say... just about everyone! Yes, the talented people working on the Information Development team are the leading contributors as they write the product docs, online help, and more. But... I write a blog, friends of mine write articles, tutorials, white papers, and IBM Redbooks. I work with many technical people and ask them to write books.... or chapters of books. Not many of these people took actual training to write about technical subjects, so this is very much a skill that people build over a number of attempts via trial and error, repeated writing, and learning from reviews.
I also ask many people to review chapters of books before they are published. Developers working at IBM are constantly asked to review documents to ensure that they are technically accurate. For that matter, developers are required to write technical specs before they design and code.
So, really the answer is just about everyone is required to write or review as part of their technical career. Do you feel qualified? Help yourself... and increase the quality of your work, with the help of this book. It is one of the books that I'll forever be proud to be part of... in the small way that I was.
Read the customer comments on Amazon regarding this book, and take advantage of the 35% discount you can get when buying it directly from the publisher: Developing Quality Technical Information: A Handbook for Writers and Editors, 2nd Edition
- List Price: $49.99
- Your Price: $32.49 (Save 35%)
One of the things we’ve heard about the awesome DB2 10.5 with BLU Acceleration release is that the content is scattered everywhere! This was true, until now!
Introducing ibmBLUhub.com where you can find all the great content about BLU in a very well designed website.
On the first page, you’ll find the hot assets: latest papers, videos, podcasts, etc.
From here you can navigate to several pages:
What is BLU?
Learn the reasons why this product is so different than anything you’ve seen before. A few related videos, white papers and analyst reports are attached to the page to make it easy for you to explore a few ideas further.
Now that you know the basics, you can go deep into the details of the technology via an overview of the technology, learning roadmap, FAQ, a complete listing of related resources, as well as a technical forum where you can ask peers and experts detailed technical questions.
Learn how you can get your hands on BLU for free. Choose the cloud or the 90-day preview. Feeling skeptical about some of the claims? Give the product a try and see that it really is remarkable.
This is my favourite page. I am working with experts who are blogging, tweeting, podcasting, making videos, writing papers, and more. I gather all of this information and choose the entries that I think you’ll be most interested in!
This page provides you with details on the many ways you can take the next step toward using BLU in your environment. Be it education for yourself or others in your organization, details on how the product works with SAP, or information on consultants who can help you, this page covers it all.
SAP, Intel, Cognos: just some of the solutions that work very well with BLU. Use this page to learn more.
Ready to talk to someone in IBM who can help you go the next step? Whether you are a customer or a business partner, this is the page to go to.
I hope you like the great work that was done to satisfy your need to have everything in one place. Be sure that new content will be created and posted often! Have a suggestion for us? Tweet, comment on the blog, send an email... whatever. We want to hear from you!
The ads say “I LOVE NY”. I’ve visited often and have many friends in NY. If you’re looking for an excuse to visit the Big Apple, consider some of these events that are taking place during Data Week.
When: October 22 - October 26. I’ll be in Las Vegas for the IBM Information on Demand Conference, but some of my colleagues will be in NYC at this event.
Where: Various awesome Manhattan locations.
Price: Most NYC Data Week events are free to attend, and anyone can attend.
What is Data Week: According to their website, NYC Data Week is co-produced by the City of New York's Department of Information Technology & Telecommunications (DoITT) and O'Reilly Media's Strata + Hadoop World Conference.
It celebrates and explores the people, industries, and organizations using data to fuel innovation in New York City. The Data Innovation in Finance Panel on October 24 and the Data Innovation Across the City Panel on October 25 showcase New York City business and government leaders using data to implement change, and talking frankly about what it takes to succeed with data initiatives.
Data Week events include:
- A Startup Showcase with Fred Wilson and Tim O'Reilly.
- Ignite NYC @Strata, a hackathon, numerous meetups, and more.
- IBM Big Data Developer Day • Oct 22 • 8:00am–6:00pm • IBM Client Center, 590 Madison Avenue, New York, NY
Explore IBM’s enterprise-class big data platform at IBM's Big Data Developer Day, hosted by the IBM Big Data Development team. The morning will include interactive discussions and live demonstrations of big data for social media and log analytics; then get hands-on with Hadoop scripting and text analytics with guidance from development experts. Seating is limited and you must register to be guaranteed a seat. Register today!
- If you can't make this one, see the list of other Big Data Developer Days.
- DataKind DataSprint • Oct 23 • 9:00am–5:00pm • Sheraton New York, Empire Ballroom, 811 7th Avenue at 53rd Street, New York
A hackathon focused on a critical New York City data project. DataKind is incredibly excited to announce that we will be setting up shop all day at the Strata NY Conference on October 23rd with a bunch of great data problems for you to stop by and work on! We will be serving non-profits and charities, using data to solve some of their toughest problems, so bring your data skills and get ready to make the world a better place. If you're a socially conscious data hacker who wants to make the world a better place, RSVP now! Entrance to our DataSprint is completely free.
- The Future of Security • Oct 24 • 9:00am–3:30pm • Theresa Lang Community and Student Center; The New School; 55 West 13th Street, 2nd Floor
The Future of Security: Ethical Hacking, Big Data and the Crowd conference will convene a daylong series of discussions to highlight the emerging, disruptive forces changing the landscape of the global community. Key panels include the following topic areas: Ethical Hacking / Hacktivism; Big Data and Networks; and The Crowd and Crowdsourced Science. Organized by The Parsons Institute for Information Mapping (PIIM), The Center for Transformative Media (CTM) of Parsons The New School for Design, and The Richard Lounsbery Foundation.
Be sure to see the agenda as there are many choices that may appeal to you. Wish I was going to be there!