Join this free SSWUG Webcast: The Big Deal about Big Data,
starring expert Paul Zikopoulos. The webinar is free only during the
live broadcast, which takes place Wednesday, February 20, 2013, at 1:00 PM
Eastern. What you'll learn: Big
Data can mean a lot of things to a lot of people, but one thing we're
sure of: it's the hottest thing to hit the IT landscape. In this chat
you'll get a comprehensive introduction to Big Data. You'll learn how to
spot Big Data, its characteristics, and what the opportunities are.
(Hint: be prepared to be shocked by the volume, and more.) You'll get a
taste of Hadoop, but realize Big Data is so much more. Paul
will also share top things to consider in the Big Data world that are
often overlooked (governance, integration, search, and more). Consider
this: if you search Google for 'What is Big Data', you will get almost 1
billion hits! Attend this session, and you'll never have to search
that phrase again. To register and for additional information, see: Webcast Structure and Cost.
Note that listening to the live broadcast is free, but ordering the replay has a charge.
Get ready for another action-packed tweetchat! Join us April 24 at noon (Eastern) to discuss the topic Storing Big Data.
Join us to listen or contribute!
You don't need to be on twitter to "listen", but you do to contribute to the conversation. Here's what you do:
Go to tweetchat.com
Sign in with your twitter handle
Search on #bigdatamgmt
A new window will open that makes it easy for you to follow & contribute.
And you'll see the comments come flying. It is very fast-paced.
With the launch last week, there were a number of articles written about this topic that I’d like to recommend you read:
New IBM storage chief Ambuj Goyal: I like all-flash and I cannot lie
Data in DRAM is a Flash in the Pan
Reduce IT, Legal and Compliance costs through Value Based Archiving and Defensible Disposal
Archiving the Big Data Old Tail
Finding gems in big data archives
Ten Properties of the Perfect Big Data Storage Architecture
Top 5 Myths About Big Data
Attend the April 30 event, Big Data at the Speed of Business: find out more about managing big data, and join IBM for a free event.
Another great tweetchat on the hashtag #bigdatamgmt! Here are the results: 625 tweets, 157 users, 153,849 reach, 1,849,624 impressions.
This is the recap of the Mobile Data tweetchat for questions 4 and 5. See also Recap of Tweetchat: "Mobile Data: Taking Your Big Data On the Road" - Part 1
Q4 Does mobile big data require different retention policies?
BigDataAlex A4:Retention for mobility is a key factor: do we really need to remember every location we have been - It links back to privacy
craigmullins A4: Data is retained for internal + external reasons: internal because it is needed for biz – external because laws demand it.
A4: You may choose to retain more data for mobile bigdatamgmt but be
careful because data once retained is discoverable during court trials
BigDataAlex A4: The question is who is doing the retaining of the data....
jeffreyfkelly A4 mobile data growing exponentially, so how much is retained and for how long gets tricky
craigmullins A4 Bigdata does not necessarily change retention reqmts but it complicates the issue
BigDataAlex A4: Mobile data is the new oil field.
Natasha_D_G Useless unless we mine & leverage it correctly RT @BigDataAlex: A4: Mobile data is the new oil field.
cristianmolaro A4: maybe not: retention should be a business variable not a access device matter
craigmullins A4: There are over 150 different regulations (at local, state, national, and international levels) that impact data retention.
furrier A4: yes it does - to save or not to save that is the question legal issue
IBMbigdata Whether tis nobler to save or not... RT @furrier: A4: yes it does - to save or not to save that is the question #legalissue
cristianmolaro A4: is not the amount of data or the older that it is that matters the most, but what you can get from it... insight!
craigmullins A4 #bigdata can be more costly to retain for long periods simply due to its massive volume
Dmattcarter can hadoop help? RT @craigmullins: A4 #bigdata can be more costly to retain for long periods simply due to its massive volume
BigDataAlex A4: Some folks save it all - some say save what you need - Big Data is about figuring out the difference.
Empirix A4 Different groups within an org might need the data for longer, to make informed decisions based on cust history
jameskobielus A4: Mobile doesn't impact retention policies at server--but should @ client. Keep sensitive info on device limited
jeffreyfkelly #BigData Shakespeare RT @furrier: A4: yes it does - #mobile data: to save or not to save that is the question
IBMbigdata RT @Empirix: A5 Your biz model & regs define retention policies- becomes more of a ? of do you have the right tech to hold it
TheSocialPitt Quoting Shakespeare on #bigdatamgmt chat! Well done!
cristianmolaro A4: there is a trend to increase data retention
Empirix @cristianmolaro Also need the right tools, questions, and people to get you there
furrier As navigation promotion here are all the videos from IOD http://t.co/tugNzS4MGE #bigdata
zacharyjeans Cold storage data solutions like @ironmountain will have more and more play in big data management
AllanKoivo A4: Key is HOW the data is retained. Policies always need to be dynamic to reflect new technologies
craigmullins @Dmattcarter Not sure how/if Hadoop would decrease storage costs?
cristianmolaro A4: not all bigdata have to be saved for later... think about live digital video recording... it lives just for a short while
motohero Drill ready? Mine is! "@BigDataAlex: A4: Mobile data is the new oil field.
craigmullins A4 Over large spans of time technology becomes the problem, not the solution, for data retention
Natasha_D_G How so? RT @craigmullins: A4 Over large spans of time technology becomes the problem, not the solution, for data retention
craigmullins A4 Consider retaining data back in the 1960s. Could've been on punched cards. Can you even read them today?
BigDataAlex A4:is there enough power to run or water to cool the data we retain?
jeffreyfkelly A4 yes @cristianmolaro, w #Hadoop, storage constraints lifted - but retention policy still an issue and, while cheap, not FREE
cristianmolaro and what about being able to read today's bigdata in 20 years from now?
Betharonoff @AllanKoivo They've started testing digital data stored in DNA as coding system
motohero @craigmullins @Dmattcarter IBM is the oldest masters of storage, maybe they can provide insight about what they are creating
craigmullins A4 The longer you retain data the more likely the tech used to retain it has become obsolete
furrier A4: this is a good video to watch on compliance data issues http://t.co/crGpu821Sv
jameskobielus A4 Mobile #bigdata makes imperative 2 have enterprise rights mgmt policies 2 prevent multi-device over-retain/leak
Q5 Which type of #bigdata platform (#Hadoop, Streams, etc.) is best for #mobile clients?
A5 Your business model & regs define your retention policies - it
becomes more of a question of do you have the right tech to hold it
BigDataAlex A5 Streams!
A5: this has too many dimensions: all areas are affected - no one
technology wins; ease of ingest & extraction are key to this
IBMbigdata Should have known Alex would vote for InfoSphere Streams
cristianmolaro A5: any platform that would allow users to get information from bigdata
jeffreyfkelly A5 real-time key to mobile BigData - must get insights to mobile workers at the right time to take action
BigDataAlex A5: Big Telcos are trying to figure how to parse 700,000 calls per second and deal with dropped call management and service.
craigmullins A5: I’d look for customizable reports, dashboards, + graphs that users can adapt to their mobile preferences + needs
craigmullins A5: I think the presentation + interface on the mobile device is more important than Hadoop, etc.
BigDataAlex A5: Streaming analytics allows customers to keep up with the volume and interoperate with Hadoop for non-real-time analytics.
jameskobielus A5: No particular back-end bigdata platform preferred 4 mobile. Need front-end mobile access infrastructure agnostic to all
furrier @craigmullins I totally agree on the ease of use I would add reduce the steps to get extraction + insights
craigmullins A5: Maybe innovative interaction support (gesture, voice, etc.) too
jeffreyfkelly A5 definitely streaming, CEP-style tech to allow automated actions - must take action while customer still engaged
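As an aside, the windowed, streaming computation these tweets allude to can be sketched in a few lines of plain Python. This is an illustrative toy, not InfoSphere Streams or any real CEP engine, and the latency readings are invented:

```python
from collections import deque

def sliding_window_average(events, window_size=3):
    """Yield a running average over the last `window_size` events,
    updated as each event arrives (rather than in a later batch job)."""
    window = deque(maxlen=window_size)  # old events fall off automatically
    for value in events:
        window.append(value)
        yield sum(window) / len(window)

# Invented stream of call-latency readings, in milliseconds
readings = [120, 80, 100, 300, 90]
averages = list(sliding_window_average(readings))
```

The point of the design is that each incoming event immediately produces an up-to-date result, which is what lets you act "while the customer is still engaged".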
cristianmolaro A5: sometimes bigdata users are like people having a solution looking for a problem to solve...
AllanKoivo A5: not all mobile clients are the same- it depends on the objective of the client. There is no one size fits all solution
Natasha_D_G Usability, scalability all key RT @furrier: @craigmullins ease of use I would add reduce the steps to get extraction + insights
motohero A5 nosql on the bottom, webtech (pig, R) on top
furrier A5: many uses cases hadoop is great but what does real time mean?
Empirix A5 Predictive Analytics capabilities are important for keeping a competitive edge
craigmullins Like most new tech, marketers co-opt it! MT @cristianmolaro: A5: Bigdata users are ... looking for a problem to solve...
cristianmolaro A5: the one able to provide fast access to organization's insight with ease of operation...
BigDataAlex A5: Streams means real-time which means millions in the Telco space, esp., in Eurozone.
mjcavaretta BigData opportunities across the entire space of high-volume, high-value data generation.
Empirix @cristianmolaro Yes - you need the right questions first or there's no direction or focus
Natasha_D_G Should start w/ biz prob RT @cristianmolaro: A5: sometimes bigdata users =people having solution looking 4 problem to solve
IBMbigdata Is "real time" changing? RT @furrier: A5: many uses cases hadoop is great but what does real time mean?
furrier @motohero not all nosql but mostly yes SQL is a easy way to extract ontop of semi structured data- agree on above db layer
jeffreyfkelly how about in-time? RT @furrier: A5: many uses cases hadoop is great but what does real time mean?
cristianmolaro A5: Bigdata is everywhere... in any organization... you just have to discover how to leverage it
BigDataAlex Real-time at network line speed - 1, 10, and 100 GigE speeds - analytics in process, in flow - compute memory.
jameskobielus A5: In-memory #bigdata clients/servers with back-end streaming best for real-time mobile
motohero @furrier real-time is the here and now model of customer(for exmpl) that a decision can be derived from
Empirix Need the right tools to extract meaning @cristianmolaro: A5: bigdata is everywhere in an org, have to discover how to leverage it
AllanKoivo & other industries as well RT @BigDataAlex: A5: #Streams = real-time which means millions in the Telco space
motohero @furrier my vision is that SQL could be the cheese in the bigdatamgmt sandwich - that middle layer that can be exploited also
jameskobielus A5: Need lower-latency in-motion bigdata platforms closer to mobile client, batch & "data-at-rest" further
BigDataAlex So many problems require real-time analytics as an initial distillation process or layer.
furrier @motohero awesome! except some are making a "wish sandwich" - customer wish they had some "meat" :-)
jeffreyfkelly suddenly I'm hungry for lunch
TheSocialPitt Super-size me RT @motohero: @furrier my vision is SQL cld be cheese in bigdatamgmt sandwich - that middle layer that can be exploited also
jameskobielus A5: The mobile bigdata environment should have SQL-query-virtualization front-end to simplify access
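The SQL "middle layer" idea in these tweets can be illustrated with Python's built-in sqlite3: load semi-structured records into an in-memory table, then query them with plain SQL. The event records and field names here are invented for illustration:

```python
import sqlite3

# Semi-structured mobile events (e.g. parsed from JSON logs);
# the values are invented for illustration.
events = [
    ("user1", "checkin", 40.74, -73.99),
    ("user2", "search",  40.76, -73.98),
    ("user1", "search",  40.75, -73.97),
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user TEXT, action TEXT, lat REAL, lon REAL)")
conn.executemany("INSERT INTO events VALUES (?, ?, ?, ?)", events)

# Plain SQL as the query layer over the raw data
rows = conn.execute(
    "SELECT action, COUNT(*) FROM events GROUP BY action ORDER BY action"
).fetchall()
```

The same query would work regardless of which back end actually stores the events, which is the front-end-agnostic access the chat participants are describing.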
Q6 What types of insight are orgs getting from mobile generated bigdata?
craigmullins A6: Proximity and buying patterns can be used to target potential customers
Natasha_D_G Agreed esp when u can connect w/ historical RT @BigDataAlex: So many problems require real-time analytics as an initial layer.
jeffreyfkelly A6 sentiment, engagement patterns, geo-locations patterns
cristianmolaro A6: by analyzing big amounts of data, users are discovering cause - effect correlations that they were not aware of
Betharonoff @TheSocialPitt From Shakespeare to McD's! BigData covers the gamut
craigmullins A6: With GPS/sensors in mobile devices, when u walk past your favorite coffee shop your device can alert u to the daily special
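The proximity alert Craig describes boils down to a distance check against the device's GPS fix. A minimal sketch in plain Python, using the haversine formula for great-circle distance; the shop names and coordinates are hypothetical:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby_offers(user_pos, shops, radius_km=0.2):
    """Return the shops within radius_km of the user's position."""
    return [name for name, (lat, lon) in shops.items()
            if haversine_km(user_pos[0], user_pos[1], lat, lon) <= radius_km]

# Hypothetical coordinates, purely for illustration
shops = {
    "favorite coffee shop": (40.7415, -73.9897),
    "bookstore across town": (40.7590, -73.9845),
}
hits = nearby_offers((40.7416, -73.9895), shops)
```

In a real deployment the shop list would come from an opt-in preference store, which is exactly the push-vs-pull debate later in the chat.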
cristianmolaro: 90% of data unstructured... 10% of data structured; this 10% matters the most
mjcavaretta Much value in unstructured to structured.. MT @cristianmolaro: 90% of data unstructured... 10% of data structured; this 10% matters the most
TheSocialPitt A6 Or is it deriving meaning from that 90%? @cristianmolaro
BigDataAlex I think it is deriving meaning for the 90% - whether it is structured or unstructured.
BigDataAlex A6:they are getting a wealth of location information, proximity, distance, rate, clustering polygons-the where of consumerism.
A6: Many businesses are gleaning social & customer sentiment,
customer feedback, location data, etc from mobile generated bigdata
craigmullins A6: When offering up mobile bigdata it is important not to get overwhelmed by the volume of the data that is available
cristianmolaro A6: Bigdata sitting in an enormous hard-disk is worth nothing if you cannot get insight from it
cristianmolaro A6: a practical example: how weather conditions in Brussels can impact the car accident rate 40 KM south of the city?
BigDataAlex A6:Combine mobility and immersive visualization and you have created a new experience layer.
cristianmolaro A6: today's technology makes possible what we call bigdata... but the question is how do you prepare to get advantage of it
furrier A6: the big issue for mobile generated data is learning about user experience patterns
Natasha_D_G Becomes a junk yard RT @cristianmolaro: A6: bigdata in enormous hard-disk worth nothing w/o insight from it
craigmullins A6: Avoid push technologies that inundate users with a ton of unwanted data
craigmullins A6: Instead use tools that provide filtering to enable users to get only the information they need to make business decisions
furrier A6: IBM gets the notion of the "learning machine". It's true here with mobile data-loop in the data & loop back value
jeffreyfkelly A6 can glean insight on buying patterns - mobile v. online v. brick-and-mortar
Empirix A6 M2M insights can be discovered that can help insurance orgs and in customizing the user experience
AllanKoivo A6: or increasing % that matters
TheSocialPitt Ex: http://t.co/AZTwWUpjMQ RT @craigmullins: A6: W/ GPS/sensors when u walk past fave coffee shop device can alert u special
Natasha_D_G Gold mine 4 #CX personalization RT @furrier: A6: big issue 4 mobile generated data = learning abt #UX patterns
jameskobielus A6: deep machine-data analytics insights on geo-localization, sentiment, behavior, and other signals sourced from mobiles
cristianmolaro A6: geo-localization data correlated with behavior patterns adds a complete new dimension to the way we approach data
furrier @jameskobielus right on geo is key
Empirix A6 you can also learn how apps are used & how to improve service delivery
jbondre @Natasha_D_G @craigmullins True, but it is intrusive. Marketing on demand is better, then marketing as a intrusion.
jameskobielus A6: mobile-sourced bigdata fleshes out the "720-degree customer view" (external behavior + internal experiences)
TheSocialPitt A6 #Watson is using data from mobile devices to help doctors diagnose, treat patients better, faster.
Natasha_D_G @jbondre @craigmullins Not intrusive if you request it in settings though
cristianmolaro A6: you carry a small almost-supercomputer with a gps in your pocket every day... think about the potential...
jbondre @Natasha_D_G @craigmullins PPL looking for coffee will be way more receptive to the msg. Pushing Msgs is like Minority report.
motohero Nods in agreement "@Empirix: A6 you can also learn how apps are used & how to improve service delivery
Natasha_D_G Data can tell if they were doing a look up RT @jbondre: @craigmullins PPL looking for coffee = more receptive to the msg.
IBMbigdata Push v pull RT @jbondre PPL looking for coffee will be way more receptive to the msg. Pushing Msgs is like Minority report.
craigmullins @jbondre True, I don't want my smartphone buzzing when I pass every store... just my faves
cristianmolaro A6: one of the biggest challenges that mobile bigdata comes with is organizations realizing the opportunities behind
furrier @edd Dumbill lays it out http://t.co/33GrnKlXLk great watch
jbondre @Natasha_D_G @craigmullins good idea for 1, turns into good idea for everyone, becomes noise. bigdatamgmt CX is about the consumers wants
motohero @IBMbigdata @cristianmolaro privacy.concerns :-)
Natasha_D_G Done right can be successful MT @craigmullins: @jbondre True, I don't want my smartphone at every store... just my faves
jameskobielus A6: mobile-gen data + data sourced from all other channels = fodder 4 dynamic multi-channel experience optimization
Betharonoff Can u get metadata on how Bigdata is used? Whether users like push v. pull, intrusion v. on-demand in yr app?
AllanKoivo Was thinking same thing RT @IBMbigdata Push v pull RT @jbondre PPL looking for coffee will be way more receptive to the msg.
jeffreyfkelly More here http://t.co/XdwvGburDV RT @TheSocialPitt A6 Watson using data from #mobile devices to help diagnose, treat patients
motohero gotta feed, cheeeeeze! thanks for the great insight folks
mjcavaretta Value in internal analytics. MT @motohero: Nods in agreement @Empirix: A6 learn how apps are used to improve service delivery
furrier Economist editor Ken Cukier @kncukier has amazing important book Big Data Revolution http://t.co/m2OXdmeCsb
cristianmolaro Actually mobile bigdata is a mass grid of interconnected devices... do you internet mobile?
jameskobielus A6: wearable & implanted mobile devices will deliver unparalleled insights into wellness, health, & experience
TheSocialPitt Lots of talk about sandwiches, cheese, coffee on #bigdatamgmt. The downside of Twitterchat over lunch time.
craigmullins But I want certain vendors to shout for me to come to them!
craigmullins Certain = opt-in (or some other verification mechanism)
Natasha_D_G Hilarious! RT @craigmullins: But I want certain vendors to shout for me to come to them!
jbondre @craigmullins 4sqr model is nice. Oh, you're near X, did you think about trying Y? Only at interaction is there a suggestion
jeffreyfkelly yes, based on my (your) interests RT @craigmullins: But I want certain vendors to shout for me to come to them!
craigmullins Is the smartphone constantly buzzing the "rubber biscuit" (since we already mentioned the wish sandwich)
TheSocialPitt Bow bow bow! RT @craigmullins: Is smartphone constantly buzzing the "rubber biscuit" (since we mentioned the wish sandwich)
craigmullins I won't sacrifice battery life for auto check in on 4Square!
jbondre @craigmullins In a year, when phones are more efficient, and batteries better, this type of location push is very feasible.
craigmullins @jbondre I still wish 4Square was more automatic. RFID to auto check in?
jbondre @craigmullins Totally possible, RFID not needed. Look up Sonar. The issue is constant GPS pinging drains battery.
CrystaAnderson @craigmullins @jbondre Glad 4Square not automatic - want to retain more #privacy control
craigmullins @CrystaAnderson Makes me wonder where you've been! ;-)
AllanKoivo & customer backlash @craigmullins Need some form of opt-in or we run up against privacy concerns, no?
CrystaAnderson Amen! RT @craigmullins: I won't sacrifice battery life for auto check in on 4Square!
jbondre @TheSocialPitt Agreed. I use 4sqr, but I do not push the feed to my FB or Twitter unless I have something to say.
craigmullins @AllanKoivo backlash if the customer even knows about it
craigmullins A6 Need some form of opt-in or we run up against privacy concerns, no?
jbondre @Natasha_D_G @craigmullins An opt-in might work. Still, its better to be there for the consumer, not shout to come to you.
jameskobielus A6: vehicle-sourced mobile data insights will help traffic planners dynamically optimize world transportation grids
TheSocialPitt Already happening in EU RT @jameskobielus: vehicle-sourced mobile data insight will help traffic planners optimize transportation
Last one! Q7 When should you tap into smartphones, mobile clients as sources for bigdata apps?
Natasha_D_G A7: Yesterday was too late to tap into smartphones & mobile clients as sources for #bigdata!!
jeffreyfkelly A7 whenever such data would add value to analytics and resulting insights
BigDataAlex A7:Mobility will drive the Internet of Things, connecting us to billions of sensors measuring our world.
jeffreyfkelly A7 mobile devices are akin to tracking devices in your pocket - back to the privacy issue, not everyone realizes this
BigDataAlex A7: What to extract, what do I pursue, how does this improve our lives? If I have a million sensors can I predict the weather?
cristianmolaro A7: insurance companies could correlate geo-localization data with weather and traffic conditions to draw car accident patterns
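Cristian's insurance example is, at its core, a correlation between two time series. A toy sketch in plain Python with invented daily figures; a real analysis would control for traffic volume, seasonality, and much more:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Invented daily figures, purely for illustration:
rainfall_mm = [0, 2, 10, 15, 1, 20, 5]   # rainfall near Brussels
accidents   = [3, 4, 9, 11, 3, 14, 5]    # accidents 40 km south
r = pearson_r(rainfall_mm, accidents)    # close to 1.0 for this toy data
```

Correlation alone is only a starting point, but on mobile-scale data even this simple statistic can surface patterns worth investigating.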
BigDataAlex A7: what do we measure, why, and how do we extract the metadata from the edge and make it actionable, valuable?
Always. Smartphones becoming the most ubiquitous, valuable source of
ambient, geo, sentiment, & experience data; traffic conditions
Empirix A7 As mobile devices continue to proliferate it becomes a critical necessity to tap into the resulting bigdata available
Empirix A7 At some point the not so distant future using BigData from mobile devices will become a need to have not nice to have
cristianmolaro: A7: anyone with a mobile device is almost a bigdata walking sensor today
jeffreyfkelly i'd take out the word "almost"
TheSocialPitt Indeed. Human machines
craigmullins Nice image
Natasha_D_G Nicely put
the ads say “I LOVE NY”. I visit often and have many friends in NY.
If you’re looking for an excuse to visit the big apple, consider some
of these events that are taking place during Data Week.
When: October 22 - October 26. I’ll be in Las Vegas for the IBM Information on Demand Conference, but some of my colleagues will be in NYC at this event.
Where: Various awesome Manhattan locations.
Price: Most NYC Data Week events are free, and anyone can attend.
What is Data Week: According to their website, NYC Data Week is co-produced by the City of New York's Department of Information Technology & Telecommunications (DoITT) and O'Reilly Media's Strata + Hadoop World Conference.
It celebrates and explores the people, industries, and organizations using data to fuel innovation in New York City. The Data Innovation in Finance Panel on October 24 and Data Innovation Across the City Panel
on October 25 showcase New York City business and government leaders
using data to implement change, and talking frankly about what it takes
to succeed with data initiatives.
Data Week events include:
- A Startup Showcase with Fred Wilson and Tim O'Reilly.
- Ignite NYC @Strata, a hackathon, numerous meetups, and more.
- IBM Big Data Developer Day • Oct 22 • 8:00am–6:00pm • IBM Client Center, 590 Madison Avenue, New York, NY
Learn about IBM’s enterprise-class big data platform at IBM's Big Data Developer
Day, hosted by the IBM Big Data Development team. The morning will
include interactive discussions and live demonstrations of big data for
social media and log analytics; then get hands-on with Hadoop scripting
and text analytics with guidance from development experts. Seating is
limited, and you must register to be guaranteed a seat. Register today!
- If you can't make this one, see the list of other Big Data Developer Days.
- DataKind DataSprint • Oct 23 • 9:00am–5:00pm • Sheraton New York, Empire Ballroom, 811 7th Avenue 53rd Street, New York
A hackathon focused on a critical New York City data project. DataKind is
incredibly excited to announce that we will be setting up shop all day
at the Strata NY Conference on October 23rd with a bunch of great data
problems for you to stop by and work on! We will be serving non-profits
and charities, using data to solve some of their toughest problems,
so bring your data skills. If you're a socially conscious data hacker
who wants to make the world a better place, RSVP now! Entrance to our
DataSprint is completely free.
- The Future of Security • Oct 24 • 9:00am–3:30pm • Theresa Lang Community and Student Center; The New School; 55 West 13th Street, 2nd Floor
The Future of Security: Ethical Hacking, Big Data and the Crowd conference
will convene a daylong series of discussions to highlight the emerging,
disruptive forces changing the landscape of the global community. Key
panels include the following topic areas: Ethical Hacking / Hacktivism;
Big Data and Networks; and The Crowd and Crowdsourced Science. Organized
by The Parsons Institute for Information Mapping (PIIM), The Center
for Transformative Media (CTM) of Parsons The New School for Design,
and The Richard Lounsbery Foundation.
Be sure to see the agenda as there are many choices that may appeal to you. Wish I was going to be there!
Today I spent an hour taking part in the tweetchat on #bigdatamgmt focused on governance to avoid a data landfill: http://t.co/j2wojSb9Hf: "Getting Control of Data in Big Data Era"
It went too fast for me to actually be a contributor, so I
participated as a reader / listener. This kept me busy enough, since by
the end we had generated a fair amount of Big Data ourselves: 647 tweets, 180 users, with a reach of 136,229 & 1,506,585 impressions.
Who were the experts and facilitators / moderators?
There were 8 questions posed over the hour, but I'm only posting the first 4 here.
Q1 In this Big Data era, do traditional concepts like data quality, data governance & data stewardship even apply?
A summary of the answers:
Big Data refers to datasets whose size, type and speed of creation make
it impractical to process and analyze with traditional tools. That Big
Data definition comes from wikibon; see http://t.co/awsPyuqXjZ. So given that, definitionally then, traditional concepts are at the very least “impractical”… no?
dvellante My belief is that ingest process & analysis of data changes with big data.
BigDataAlex Yes, I think they apply. Our clients are very concerned about these issues and it does apply.
jeffreyfkelly Absolutely, but vastly more complex.
Natasha_D_G Traditional concepts are even more critical in Big Data era especially in data governance.
craigmullins But, of course data quality, data governance and data stewardship SHOULD apply in the age of Big Data Management.
You still need clean and common policies for data taxonomies; but the
unstructured and semi-structured data texture requires some new thinking
and technology. Specifically ideas around function shipping, name value
pairs, Hadoop, etc - applying traditional concepts to new model.
Dmattcarter In order for Big Data to be enterprise-ready, it needs to include those traditional concepts.
jeffreyfkelly The challenge is applying DQ and governance to high velocity data - hard enough with "traditional" data, ie CRM, ERP.
craigmullins Failing to apply these concepts will result in poor data quality. Analytics performed on bad quality data produces bad results.
BigDataAlex I think transparency is important too in this era of Big Data and how we govern. I would suggest Big Data Ethics manager.
BTRG_MikeMartin IG concepts apply to Big Data even more so as the issues solved by Information governance are only exaggerated.
furrier Data quality has to take on the idea that it will be moving around different systems/APIs.
Yet there are issues and adaptations that will be required as we apply
data quality, data governance and data stewardship to Big Data
BigDataAlex Love the challenge on high velocity data....algorithms in streams.
jeffreyfkelly Big Data is experimenting with data sets, while governance is applying policies that sometimes restrict experimentation.
BTRG_MikeMartin You can’t make good business decisions on bad data. http://t.co/8J1pQPy6eW
Natasha_D_G Data quality is an issue as "94% biz believe some of their customer/prospect info is inaccurate".
Data governance is critical in the Big Data management era, as Big Data makes
small problems bigger. You need data quality to enable BigInsights http://t.co/yVTA9NpXIB
furrier Data as a resource for applications; ownership of data is important to individual and/or company.
BigDataAlex In health care sector, orgs are combining medical ethics with their CIOs.
Aarti_Borkar Governance is even more important with Big Data as the security and trust is a bigger business issue now.
dvellante In part this is a discussion around the balance between data being an asset and a liability - good DQ is important for both.
searchCIO Metadata practices are gaining momentum as companies tackle Big Data. http://t.co/DSkdH4Yk6S
Q2 With data at unprecedented speed/volume, how can data quality measures be applied in time for analysis?
A summary of the answers:
With data quality, cleansing can occur as humans eyeball the data - but
most raw Big Data is not eyeballed. In some cases (e.g. medical
devices, automated metering, etc.) only rudimentary cleansing (if any)
may be needed - at least as long as the meters are calibrated and
working properly.
BigDataAlex Real-time analytics is critical. We love Streams. The right algorithm at the right time.
Natasha_D_G Trust = Word we try to avoid. @Aarti_Borkar: Governance is even more important with Big Data as security & trust bigger biz issue.
To deal with Big Data, speed, and volume: be proactive by starting
Big Data Management across the enterprise now & maintain http://t.co/hGJ3QkTiJf
Aarti_Borkar Data Quality for Big Data can be handled right upfront before starting Big Data analysis
BigDataAlex A next-generation of KPIs for quality vs. quantity are being implemented to separate quality from quantity in real-time.
furrier Data quality is about the context of the application & what users experience for each use case is not always the same.
jeffreyfkelly Machine learning is required to improve data quality for Big Data - velocity too high for human methods IMHO
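A minimal sketch of the automated triage Jeff is pointing at: at stream velocity, a rule (or a trained model, in practice) flags suspect records for quarantine instead of a human eyeballing each one. The field names and thresholds below are invented:

```python
def flag_suspect_records(records, field, lo, hi):
    """Split records into (clean, suspect) by a simple range rule.

    A rule-based stand-in for automated data-quality screening; in a
    production pipeline the rule might be a learned anomaly model.
    """
    clean, suspect = [], []
    for rec in records:
        value = rec.get(field, float("nan"))  # missing field -> suspect
        (clean if lo <= value <= hi else suspect).append(rec)
    return clean, suspect

# Invented smart-meter readings, for illustration
readings = [
    {"meter_id": 1, "kwh": 12.5},
    {"meter_id": 2, "kwh": -3.0},    # negative consumption: suspect
    {"meter_id": 3, "kwh": 9999.0},  # implausibly high: suspect
    {"meter_id": 4, "kwh": 8.1},
]
clean, suspect = flag_suspect_records(readings, "kwh", 0.0, 500.0)
```

The quarantined records can then be reviewed or re-sourced later, so the real-time analysis keeps moving on the clean stream.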
nenshad Variety of algorithms include semantics
zacharyjeans Ask your Big Data well crafted questions. Sloppy questions lead to sloppy answers.
craigmullins Speed + volume make data quality challenging…
searchCIO Data Quality is essential to master Big Data Management http://t.co/pxZ49Xgimm
BTRG_MikeMartin Start now on data quality because if you don’t have it in now Big Data only magnifies data issues http://t.co/hGJ3QkTiJf
Natasha_D_G Excellent question especially given social media data and its 18 minute life span
jeffreyfkelly Also with Big Data, volume of data can sometimes smooth over anomalies in data quality.
Aarti_Borkar Data quality should also be handled as the results of the analysis are merged back into the reporting marts.
BigDataAlex The right analytics at the right time against the systems of systems integration.
dvellante Perspectives from a former CIO on the importance of data quality http://t.co/mYPfqNCCjm
nenshad It’s all about the data first
dvellante In my view you can't deal with Big Data quality unless you can automate the classification of data at the point of creation.
Kari_Agrawal How exactly do we clean the data when it has no structure?
BTRG_MikeMartin You can’t make good decisions and enable business biginsights without high data quality.
furrier Dirty data equals poor user experience. I wrote about it in 2009 re: twitter facebook & social data http://t.co/vpkfB0xS3h
Aarti_Borkar Data quality should be handled as part of data integration as the Information Server customers do - its the same with Big Data.
Q3 How do data governance policies apply when the point of Big Data is to explore novel use cases?
A summary of the answers:
craigmullins Finding novel uses of data does not diminish the need for data governance policies.
Natasha_D_G True, but still need boundaries.
BTRG_MikeMartin Exploring Big Data still requires trusted data so you must secure and govern even more so. http://t.co/UL0VNCiivP
craigmullins The novel uses need to be documented as part of the data governance policies.
BigDataAlex The right policy at the right time. I think you can have agility with accountability.
craigmullins Keeping in mind that even under ideal circumstances data governance policies can be difficult to enact.
Big data isn't just for novel business cases - it can also vastly
improve value in existing ones, e.g. R&D and customer service.
craigmullins Consider non-intrusive data governance; see this article by my friend Bob Seiner http://t.co/GogojXCcoV
Seiner states: data governance refers to the administering
(formalizing) of discipline (behavior) around the management of data.
craigmullins And data governance is an on-going process; it should formalize what already exists + address opportunities to improve.
jeffreyfkelly There is a need to set up boundaries but give analysts freedom to explore Big Data.
furrier Innovation will not come from regulations but creative developers to play with data -#slipperyslope
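One concrete suggestion in the answers above is to automate data classification at the point of creation, so governance policies can be enforced later without a manual inventory. An illustrative toy version of that idea (not any particular product's API; the classes and rules are hypothetical):

```python
# Sketch of classifying data at the point of creation: each new record
# is tagged with a governance class as it is written, so downstream
# policies can act on the tag. The rules below are hypothetical.

import re

def classify(record):
    """Assign a governance class based on the fields present."""
    # Anything that looks like a US SSN makes the record restricted.
    if any(re.fullmatch(r"\d{3}-\d{2}-\d{4}", str(v)) for v in record.values()):
        return "restricted"
    if "email" in record:
        return "personal"
    return "internal"

def ingest(record, store):
    """Tag the record at creation time, then persist it."""
    record["_governance_class"] = classify(record)
    store.append(record)
    return record

store = []
ingest({"name": "Ann", "ssn": "123-45-6789"}, store)
ingest({"email": "ann@example.com"}, store)
ingest({"metric": 42}, store)
print([r["_governance_class"] for r in store])
# ['restricted', 'personal', 'internal']
```

Because the tag travels with the record from creation onward, novel downstream uses of the data can still be checked against policy without re-scanning everything.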
Q4 How does Big Data change data retention policies, i.e., deciding what data to keep vs dispose?
A summary of the answers:
tomjkunkel Formal Data Destruction processes minimize the growing data landfill and need to be incorporated into Data Lifecycle Mgmt.
dvellante Still must be able to defensibly delete data. You may not want WIP data hanging around - too much of a risk.
BTRG_MikeMartin Big Data is not immune to the laws of information economics: http://t.co/Ta361ASBkP
BigDataAlex Focus on workflow, business process, optimization. There is no set answer. Filtration - distillation
BTRG_MikeMartin Velocity of Big Data means current best data is changing rapidly, you want decisions on the best info.
BTRG_MikeMartin It is important to have a Big Data Management framework for good business outcomes, including policy, security, ILM & quality.
Data is retained for internal + external reasons... Internal because
the org needs it for business – external because the law demands it.
tomjkunkel Isn't there also a need for Data Entrepreneurs (A business perspective with a knack for data)?
You may choose to retain more data for Big Data Management analytics,
but be careful, because data, once retained, is discoverable in court.
furrier Big data complicates data retention policies - we have shadow IT and now "shadow data" or what I call "dark data".
Natasha_D_G Big Data can extend data retention esp in R&D. Pharmas can leverage old research to accelerate new research.
jeffreyfkelly This is a major issue: with Hadoop you can now store all data inexpensively - not possible before, and a new challenge.
BTRG_MikeMartin NO still too costly.
Kari_Agrawal If we see the huge amount of IP packets flying around, can we process those packets to get something meaningful?
craigmullins There are over 150 different regulations (at the local, state, national, and international levels) that impact data retention.
Aarti_Borkar Retention is about storing what the business needs later vs everything - that core concept does not change with Big Data.
BigDataAlex Do we need to store everything? Can we, should we?
craigmullins No, no, and no to that last series of questions!
Natasha_D_G Data hoarders say keep all! Fear of losing critical info.
jeffreyfkelly Nothing worse than looking for data you know you had only to remember you threw it away!
Aarti_Borkar Defensible disposal of data becomes harder if multiple copies are made as part of Big Data analytics.
craigmullins MT @Aarti_Borkar: Defensible disposal of data becomes harder if... hence the need for #datagovernance policies!
furrier We all want data retention but who owns it after it's retained..will a data marketplace economy develop?
TheSocialPitt Storage is a huge challenge, especially in cases with many streaming video feeds, e.g. defense.
Keep in mind regulations haven't caught up with the technology - industry
needs to be proactive on this issue or the government will.
Aarti_Borkar Big Data allows for pattern searches and trends in retained data that was not easy to do earlier.
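The keep-vs-dispose tradeoffs discussed above (regulatory minimums, legal holds, defensible disposal of everything else) can be combined into a simple decision rule. A minimal sketch, with hypothetical retention periods:

```python
from datetime import date, timedelta

# Sketch of a keep-vs-dispose decision combining the themes above:
# regulatory minimum retention, legal hold, and defensible disposal
# of everything else. The retention periods are hypothetical examples.

RETENTION_DAYS = {
    "financial": 7 * 365,    # e.g. a tax-style regulation
    "log": 90,
    "analytics_copy": 30,    # derived copies should not linger
}

def disposition(category, created, today, legal_hold=False):
    """Return 'keep' or 'dispose' for a record of a given category."""
    if legal_hold:
        return "keep"        # data under a hold must stay, full stop
    limit = RETENTION_DAYS.get(category, 365)
    age = (today - created).days
    return "keep" if age <= limit else "dispose"

today = date(2013, 4, 24)
print(disposition("log", today - timedelta(days=200), today))        # dispose
print(disposition("log", today - timedelta(days=200), today, True))  # keep
print(disposition("financial", today - timedelta(days=2000), today)) # keep
```

Real policies pull their periods from the regulations Craig mentions (150+ of them), but the point stands: disposal only becomes defensible when a rule like this, rather than ad hoc judgment, makes the call.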
That is a lot of information! I hope you can follow the discussions. I
tried to clean up a little bit and hope that I didn’t change any content
from the participants.
To find out more about managing big data, join IBM for a free event: http://ibm.co/BigDataEvent
In the world of big data, Hadoop is not the answer to every question. While the promise of big data is clear, selecting the right architecture and technologies to invest in is becoming a game-changer for a growing number of organisations. So how do you get ahead? Join IBM technical experts to learn how to maximize your IT opportunities using no nonsense presentations, live demonstrations and real-world performance comparisons.
During this half-day agenda, you will learn how to make the right decisions for your current and future architecture:
•Recognize the core technology innovations and landscape for big data that matter
•Compare IBM DB2 10.5 with BLU Acceleration with other in-memory, columnar database technologies such as SAP HANA
•Optimize the use of predictive analytics and business intelligence pervasively in your architecture
•Understand how to make wise open source investments in Hadoop for your enterprise
•Compare the capabilities and performance of analytical appliances
•Create a strategy to protect and secure your big data
So join us and learn how IBM can help your company make the right technology decisions today and in planning for what is next.
There are three options for registering:
call the hotline 010-68200063
send a fax to 010-68207655
send an email to: IBMEvent05@dimei.cn
For the invitation in Chinese, see these links:
Thanks to Steven Rubin for providing me with this information.
There are only a few shows remaining in the rest of this season’s collection of DB2Night Show episodes. Mark your calendar for these upcoming events and take a look at previous shows that you may have missed. The education opportunity is huge.
#53 - Don't flip out! How to stop your query access plans from flopping! (aka DB2 HINTS!)
John Hornibrook, IBM STSM, Manager Query Optimization
Friday May 20: 11am ET, 90 minutes
Did you know you can "force" the DB2 LUW optimizer to choose a specific access strategy of your choosing? The secret is out...
Special guest John Hornibrook from the IBM Toronto Lab presented in Episode #52 where he talked about best practices for query tuning. In this show he’ll teach you how to exploit DB2 LUW Optimizer "hints", or, maybe more properly, how to tell the DB2 optimizer how to execute your queries.
Note that this show is scheduled for 90 minutes so that John can share all of his incredible presentation with you and have time for questions.
#z04 - What's new from the optimizer in DB2 10 for z/OS?
Terry Purcell, IBM SVL, SQL & Optimization
Monday May 23: 11am ET, 60 minutes
DB2 10 for z/OS is no exception to the goal of delivering incremental query optimization enhancements to the world’s most respected cost-based optimizer. With skip-release migration supported, it is expected that many more customers may adopt DB2 10 in the coming year - so understanding the performance enhancements can be critical for those customers. Terry Purcell will share the insight uncovered from beta customers and early adopters, and provide the motivation for each enhancement, including:
- “Safe” query optimization
- Improvements to complex OR and IN list processing
- RUNSTATS management and performance improvements
- And more!
#54 - DB2 9 LUW Core Engine Data Movement Utilities Overview with Oracle database comparisons
Burt Vialpando, Executive IT Specialist, IBM
Friday June 3 - 11am ET, 60 minutes
Special guest Burt Vialpando from IBM presented in Episode #22, which was the 15th most downloaded show in 2010. He talked about comparing DB2 LUW and Oracle architectures and administration. In this episode, Burt will give you an overview of each of the DB2 core engine data movement utilities: Load, Import, Export (with db2look), db2move, ADMIN_COPY_SCHEMA, ADMIN_MOVE_TABLE, db2relocatedb, restore from backup and split mirror. These will be compared to each other so that you can get a good idea of when to use and when not to use each of them. To help round out the discussion, the corresponding Oracle database core engine data movement utilities will be compared to these DB2 utilities.
#z05 - DB2 V10 Migration Planning and Early User Experiences
John Campbell, IBM Distinguished Engineer
Monday June 6: 11am ET, 60 minutes
In this episode, host Klaas Brant and guest John Campbell will introduce and discuss early experiences and lessons to be learned with DB2 10 for z/OS. It will provide quick hints on preparing for and executing the migration, performance expectations and opportunities, virtual storage constraint relief, some instrumentation changes, use of 1MB real storage frame size, use of hash access, value of rebind, etc. Key topics covered will include:
- Lessons learned
- Surprises and pitfalls
- Provide hints and tips
- Address some myths
- Provide additional planning information
- Provide usage guidelines
- Provide positioning on new enhancements
#55 - DB2 LUW Multi-Temperature Data Management
Kate Kurtz and IBM Smart Analytics Best Practices Team members
Friday June 17: 11am EDT, 60 minutes
Did you know that data has temperatures? Can data run a fever? What happens if your data catches a cold? Or rather, turns cold from hot? What can and should you do? What techniques are available for optimizing performance in databases where some data is more popular (hot) than other data?
In this episode of The DB2Night Show, various IBM experts will help us answer these challenging questions! Kate Kurtz along with team members from the IBM Smart Analytics Systems Best Practices team will share with us their expertise!
#56 Season #2 Finale! - Data Warehouse Performance Tuning!
Kate Kurtz and IBM Smart Analytics Best Practices Team members
Friday June 24: 11am ET, 60 minutes
Kate Kurtz and her IBM Smart Analytics Systems Best Practices team return again to share with us IBM recommendations and suggested best practices for optimizing performance of Data Warehouse Databases. If you're trying to make DPF or multi-partition databases run queries as fast as possible, then you should sign up for this show!
Hard to believe another season is over already! Thanks Scott Hayes and Klaas Brant for Entertaining, Informing, and best of all, EDUCATING us! We look forward to seeing what amazing ideas you come up with next season.
PS Click here for the recorded shows & commentary.
Bigger Big Data, Big Thoughts, and Big Ideas
Last week, Leon Katsnelson was the guest on The DB2Night Show with IBM Champion Scott Hayes. I missed it! Did you as well? If so, we’re both in luck! We can watch the replay and learn.
From Scott: 97% of our studio audience learned something! Leon gave us a fantastic presentation on BIG DATA. Not only did he explain what it is and why it is important, but finally, at last, someone explained DB2's relationship with BIG DATA! Leon also talked about HADOOP and other important technologies.
Leon is the Program Director, Information Management Cloud Computing Center of Competence and Evangelism at IBM.
Be sure to browse through the large and growing library of replays of The DB2Night Show that are available for you to watch: REPLAYS.
Upcoming shows include:
Nov 16: DB2 XXL - How to handle large volumes of data? with IBM Champion Klaas Brant.
Nov 30: IBM DB2 LUW Data Warehouse & DPF Performance with Glenn Sheffield from IBM.
Mark your calendar so you don’t miss a single episode of these award-winning webinars.
Susan
I know… what happens in Vegas is supposed to STAY in Vegas. But I’m going to break that rule and give you links to a few videos that you’ll want to check out.
First, check out the interviews I did with some of the authors who were doing book signings at IOD11. I interviewed:
Sandy Carter about her latest book Get Bold: Using Social Media
Tony Giordano for his book Data Integration Blueprint and Modeling
James Taylor for his latest book Decision Management Systems: A Practical Guide to Using Business Rules and Predictive Analytics to Build Adaptive, Agile, Intelligent Systems
You can find all three videos on the IBM Press Books channel on YouTube.
To go with the author theme, remember to check out the audio interviews I did before the conference for the IM Skills Cast Series. I interviewed Roger Sanders, Roger Johnson, Filip Draskovic, Sunil Soares, and Bob Laberge.
Here are a few interesting clips that I found on YouTube that you may want to take a look at.
I’m already looking forward to IOD12. If you are too… here is the info so you can block your calendar and start building a justification to attend.
IBM Impact 2014 is the place to learn how to use disruptive technologies like Cloud, Big Data, and Mobile to create faster, more adaptive and secure solutions to overcome challenges and thrive in the digital economy.
The conference takes place at the Venetian Hotel in Las Vegas, April 27 - May 1.
There is much taking place at Impact 2014 for those of you in the Data Management business. Attend these sessions to help your organization get faster answers and breakthrough performance, with exceptional time-to-value.
From our Executives:
On Monday 4:00-5:15, join the keynote "Give Your Application an Unfair Advantage" featuring Beth Smith and Leon Katsnelson in Palazzo L. You’ll get a practical perspective on how to turn the world of insight into a weapon that gives your organization and applications an unfair advantage.
On Tuesday 8:30am - 10:00am the General Session presentation is “Made with IBM” and features Bob Picciano. This takes place in Level 2, Hall B.
You will learn how to drive better engagement through deeper insight and smarter applications. And you will learn how to embrace open innovation to put data to work for your organization.
At the Expo Theater:
We’ve carefully chosen two of our most exciting topics to share with you:
Internet of Things: Choose an Intelligent Database by Fred Ho
Expo Theater, Session #1, Sunday Apr 27, 7:00 pm
The "Internet of Things" refers to the growing number of devices and sensors that communicate and interact via the Internet, offering businesses new customers and revenue opportunities. The key to harnessing data from billions of connected devices lies in the ability to capture, store, access and query multiple data types seamlessly and to use that data in meaningful ways. Attend this session to see the Internet of Things in action with a demo of this end-to-end solution from IBM and Shaspa.
Data Warehousing for everyone with BLU Acceleration by Adam Ronthal
Expo Theater, Session #2, Monday Apr 28, 7:00 pm
IBM’s BLU Acceleration is a game-changer for data warehousing and analytics. When paired with the cloud infrastructure and business models, BLU Acceleration opens up the world of analytics to clients looking to benefit from business intelligence (BI) technology, without lengthy project approval times.
Stop by the BLU Experience, located in the Social Impact Lounge, Sands Foyer. Get a BLU tattoo to show your colors! You can also take a "selfie" and upload to facebook or twitter so it "doesn't have to stay in Vegas!" Meet with our BLU Ambassadors onsite to get further details.
I’ve pulled out these two sessions related to big data. Unfortunately, they occur at the same time, so you have to choose which you want to attend.
Getting Started with Big Data - 5 Game Changing Use Cases
with Rick Clements
Tuesday, April 29 5:00 - 6:00 pm
Session 3145: Lando 4201 A
IBM has seen a pattern emerge within and across its clients’ organizations and has identified the top five high-value applications of big data technology that can be the first step into big data. During this session, hear about each of these use cases - data warehouse modernization, enhanced 360-degree view of the customer, security/intelligence extension, big data exploration, and operations analysis - and how clients are identifying and tackling big data projects today. In addition, presenters explore the IBM big data and analytics platform - Watson Foundations - and how it sets the standard in the market with its breadth and depth of capabilities and is packaged so clients can address their immediate need, build on what they have, and realize value at every step of their journey.
Taming Big Data Derived from the Internet of Things with Big SQL
with Berni Schiefer
Tuesday April 29, 5:00 - 6:00 pm
Session 3477: Delfino 4103
Big data means many things. But with the many trillions of objects in the Internet of Things, each with many attributes, including geospatial and temporal, big data takes on new meaning. Gaining insight from all this data effectively and efficiently is a daunting challenge. SQL access over Hadoop data is an ideal and highly productive interface to extract value from all this data. In this talk, presenters describe why SQL is the right interface and how IBM InfoSphere BigInsights, with its next generation of big SQL processing, opens new frontiers for exploring big Data. Presenters provide performance-oriented best practices for storing, searching, and analyzing big data with Big SQL.
In the Expo:
We have three booths staffed with experts to answer your questions related to these areas:
InfoSphere - Information Integration & Governance
Information Integration and Governance (IIG), a critical element of Watson Foundations, increases trust in your information, makes business operations more efficient, and mitigates risk. Learn how IIG brings together a unified set of capabilities, including data integration, master data management, data security and lifecycle management.
Think BIG: Big Data, BigInsights and Big SQL
Big SQL 3.0 is the next generation of IBM’s SQL-on-Hadoop offering in InfoSphere BigInsights. Big SQL 3.0 delivers full, rich SQL language support, industry-leading performance, open integration with analytics and reporting tools, and built-in security. Learn how Big SQL can give you a single point of access to your data within Hadoop.
Data Management for the Era of Big Data
Business and IT leaders in forward-thinking organizations are taking an integrated approach to unlocking value from all available data by exploiting a new generation of data management solutions. Learn how the next generation of in-memory computing can help deliver greater scale and efficiency in the era of big data.
As always, the Expo area is huge, so find us with these directions: find the Big Data area and go to Booth BD-7. Besides expert advice, visit for a few trinkets, books, and other surprises.
Social, Bookstore, Certification
Three of my favourite things… and yes, they’ll be well represented at the Impact conference. Join the social lounges to meet the faces behind the social messaging; drop by the bookstore to browse through the great selection of books, and pick one or two up at a discounted price; take a certification exam to prove to the world that you have the skills to perform your job.
This will be my first time attending Impact. I’m looking forward to learning as much as I can while at the conference and networking with many people. I’ll be taking photos and live tweeting as much as I can. If you’re at the conference, come say hi and tell me you’ve read my blog!
What does “big data” mean to you? It is a term that is widely used and can
convey all sorts of concepts, including huge quantities of data, social
media analytics, next-generation data management capabilities,
real-time data, and much more. Once you’ve figured out what “big data”
actually means to you, you must then figure out how to manage and store
it. Are your current systems able to handle this type of data? How
can you be sure?
Several recent articles and blog entries discuss this theme.
The Commoditization of Commercial Database Management Systems?
by Craig Mullins
Database management systems are so popular these days that they are being
taken for granted. This article argues that databases are intricate,
multi-faceted and useful tools that are necessary to get the most out of
the data being collected, and should not be considered a commodity, as
no two offerings are alike.
The Continuing Role of the Database in the New Era of Big Data
by Bernie Spang
Large amounts of complex big data must be stored in databases in order
to be analyzed and made useful to businesses. What features in a
database are necessary to take on this challenge?
Big Data: IBM’s Mainframe Customer Base Grows
by Dave Beulke
Big Data solutions are being embraced by companies worldwide. This
discussion shows that IBM Information Management solutions are well
positioned to handle all your big data needs.
The Maturing of Big Data: From Herding Cats to Taming Tigers
by James Kobielus
With all the recent innovation in the big data market, has big data matured?
Have we figured out how to make big data tigers jump through hoops, or
are we still just herding cats?
Infographic: Taming Big Data
Lots of detail here. Take a look, share the graphic and tell us what you think of it.
Taming Big Data: 12 Best Practices for Analysts
by Maria Deutscher
Are you ready to “sink your teeth” into Big Data technology? Learn how to
start using IBM’s technology correctly, and get the most out of
your efforts, by learning these 12 best practices.
In addition to these articles, I highly suggest that you read the report Analytics: The Real-World Use of Big Data.
It is based on the Big Data @ Work Survey conducted by IBM in mid-2012
with 1144 professionals from 95 countries across 26 industries.
“Across industries and geographies, our study found that
organizations are taking a pragmatic approach to big data.
The most effective big data solutions identify business
requirements first, and then tailor the infrastructure, data
sources and analytics to support the business opportunity.
These organizations extract new insights from existing and
newly available internal sources of information, define a big
data technology strategy and then incrementally upgrade their
infrastructures accordingly over time.”
Originally published on July 16, 2012 by Crysta Anderson at MasteringDataManagement
Mike Martin may be the only IBM Champion with an album currently available on iTunes. Based in the Philadelphia area, Mike and his band, Lo-Fi Genius, recorded an album, This is Rocket Science,
including the song, “George Wendt Lives in My Building.” After all,
George Wendt really did live in Mike’s building at one point.
When he’s not making music, Mike is busy securing data. Luckily, Mike’s
company, the Business & Technology Resource Group (BTRG), makes
data security far easier than rocket science.
Mike is practice director of information governance for BTRG. He solves data
quality, data growth, data privacy, data security, test data management
and compliance problems for Fortune 1000 customers across many
industries. “A lot of our customers have mature systems that have accumulated quite a
bit of data,” Mike said, noting that this leads to a familiar challenge:
How can you manage volume, ensure high-performing systems and reduce
costs? Many companies seek to solve this challenge with a top-down approach. They
launch an information governance program by establishing a council,
naming data stewards and rolling out organization-wide policies. But as a
member of the Information Governance Council, Mike takes a more solution-oriented, bottom-up approach.
By looking for tangible pain points, such as data quality, data security
or data volume, he offers clients a specific solution that delivers
reportable ROI. Clients can then build from that success. “By solving
significant problems that are in your enterprise systems, it gives you a
foothold to really establish broader information governance,” Mike
explained. “Then you already have an ROI and a case that you can use to
internally sell that initiative.”
In the face of big data, information governance best practices and models
still apply. In fact, Mike believes the stakes grow higher as data
volumes increase. He noted, “Without good quality, the insights we would
draw with a big data solution won’t be as valuable.”
Mike works especially closely with IBM InfoSphere Optim, which enables data
masking for non-production systems that are often shared with partners,
employees, or others who need the data. Masking helps obscure personally
identifiable or sensitive information. BTRG’s Data Masking Factory™
helps automate the requirements gathering, analysis, design,
documentation and actual development. Mike explains that the tool
exposes Optim’s “robust capabilities” while creating an easy-to-use
checklist that lets users easily pick which fields to mask. The solution
was one of only three finalists for the highly competitive IBM Beacon Award.
In one recent implementation spanning 18 mission-critical applications,
BTRG completed the project in a mere 14 weeks. This was two weeks ahead
of schedule, and 40% under budget. The best part? Based on the
industry-standard Ponemon study
that states every row of breached data costs a company approximately
$204, Mike estimates they created $22 billion of data protection across
109 million rows of data.
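The kind of repeatable, format-preserving masking described above can be illustrated with a short sketch. This is not InfoSphere Optim's API; the field names and hashing scheme are hypothetical, chosen only to convey the idea:

```python
import hashlib

# Illustrative sketch of masking sensitive fields in a non-production
# copy. NOT InfoSphere Optim's API: the field names and the masking
# scheme are hypothetical. Two properties matter for test data:
#  1) the shape of the value survives (hyphens, length), and
#  2) the same input always masks to the same output, so joins across
#     tables still line up.

def mask_digits(value, salt="dev-copy"):
    """Replace digits deterministically while preserving format."""
    digest = hashlib.sha256((salt + value).encode()).hexdigest()
    digits = (d for d in digest if d.isdigit())
    out = []
    for ch in value:
        out.append(next(digits, "0") if ch.isdigit() else ch)
    return "".join(out)

SENSITIVE = {"ssn", "phone"}

def mask_record(record):
    """Mask only the fields a user has marked sensitive."""
    return {k: mask_digits(v) if k in SENSITIVE else v
            for k, v in record.items()}

row = {"name": "Ann", "ssn": "123-45-6789", "phone": "555-0100"}
masked = mask_record(row)
print(masked["name"])  # non-sensitive fields pass through unchanged
# The masked SSN keeps the NNN-NN-NNNN shape but hides the real digits.
```

A production tool adds much more (requirements gathering, referential integrity across applications, audit trails), which is exactly the gap products like Optim and BTRG's Data Masking Factory aim to fill.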
On the production side, privileged users like DBAs and system admins
certainly need greater access. However, Mike helps clients use InfoSphere
Guardium to monitor and log these activities, which can help stave off
cyber attacks. Clients can mark certain data tables as “protected” and
terminate connections if users try to access them. BTRG also helps
clients take InfoSphere Guardium’s vulnerability assessments to a new
level, extending the best practices and standards to the entire ERP environment.
Listen to the podcast that
was recorded with Mike, David Pittman, and Crysta Anderson to hear some
great stories from Mike! You’ll be glad that you did!
Follow Mike on Twitter @BTRG_MikeMartin and check out the fun Mike had in the February 27 Tweetchat about Big Data & Governance:
Reading list mined from "Getting Control of Data in Big Data Era" Tweetchat
Recap of Tweetchat: "Getting Control of Data in Big Data Era"
Part 2: Recap of Tweetchat: "Getting Control of Data in Big Data Era"
What are you sacrificing for the promise of big data?
What is a Tweetchat and why should you join us?
Big Data & Database Technology
To find out more about managing big data, join IBM for a free event on April 30: Big Data at the Speed of Business
The GigaOM Structure:Data Conference is taking place next week in New York City: March 19-21.
Our very own Paul Zikopoulos will be speaking at the conference... so make
sure you add that to your agenda: Oceanic Suite, 1:05 pm, Wednesday, March 20. Paul’s
talks are both entertaining and informative. Another draw is that Paul
will be handing out and signing copies of his book "Harness the Power
of Big Data."
For more details about the conference, see GigaOM Structure: Data Conference.
More about Paul’s talk:
Date: March 20
Time: 1:05 PM
Room: Oceanic Suite
Venue: Pier Sixty at The Chelsea Piers, New York, NY 10011
IBM has worked with hundreds of clients to identify the highest-impact big
data analytics use cases. Learn from the author of “Harness the Power
of Big Data” about these use cases and the technologies needed to turn
big data into a competitive advantage.
Speaker: Paul Zikopoulos. Director, Information Management WW Technical Professionals, IBM
Social media connections for the conference:
tweet about it (@gigaom / #dataconf)
Last year, our team created this Valentine's Day InfoGraphic: http://www.ibmbigdatahub.com/infographic/6-ways-love-big-data
We decided to update it with 6 more ways to love your data!
Exploration: Can you access and use all your data?
Context: Does your data give you a 360 degree view?
Confidence: Can you track, monitor, and explain your data?
Operational Insight: Can you analyze machine data?
Modernization: Are you augmenting your data warehouse?
Purpose: Are you solving big business problems?
DB2 10.5 with BLU Acceleration provides all these features and more! To learn more and to stay on top of the latest news, follow us on the following social sites:
Enjoy your weekend. May it be filled with love for your family!