A funny thing happened to me on the Long Island Expressway yesterday. I was driving from my home on Long Island to a meeting in Springfield, MA, and got on the Expressway at Exit 36 at about 8:20 in the morning. My navigation system showed the LIE was full of heavy traffic (orange and red on the map) up to the Clearview Expressway, which runs north-south between the Throgs Neck Bridge and the Grand Central Parkway. My oh-so-clever NAV system advised me to exit the Expressway at Community Drive (Exit 34) and take Northern Blvd to the bridge. The exit already had a line of cars looking to get off, but I was in the far left lane and couldn't slide over in time to make the queue.
Next, my NAV voice asked me to get off at Little Neck Parkway (Exit 33), where an even longer line of cars was backed up in the far right lane. Curiously, the Expressway was clear and moving briskly while the exit ramp was bumper to bumper. I ignored the NAV and went ahead. At the Cross Island Expressway, which has a double exit ramp, the cars were backed up a quarter of a mile, yet the Expressway was moving fast.
I looked into the windows of the cars waiting in the exit queue and noticed they all had NAV systems. Maybe their NAV systems showed the Expressway was full of traffic and urged the drivers to exit and take the same detour over Northern Blvd. I chuckled and drove on at full speed to the Clearview Expressway, made it to the Throgs Neck Bridge with no traffic, and wondered how all those data-driven drivers were doing, snarled at traffic lights on Northern Blvd.
And now I wonder what will happen more broadly when everyone is using the same analytical engines and tools to make the same decisions in their health, financial, and travel lives. Will it lead to overcrowding on exit ramps? Will automated Big Data Navigation produce over-conformist group-think in business? Or will some wild duck contrarians reap huge benefits by ignoring the detours taken by the masses?
What do you think?
64 years ago, when my house was built, the Long Island Power Company installed electric meters in my basement. Two large grey metal meters are affixed to my foundation with insulated wires connecting them to my fuse box. They have a variety of dials and arrows beneath thick glass. I can see the meters but don't really understand what the numbers mean or how to read their information. Every other month or so, a polite person from the power company rings my bell to ask if they can come into my home and walk into my basement to read my meters and enter the results into a handheld device that radios the information back to the power company.
The last time the meter reader was here, I asked why the power company didn't trust me, the homeowner, to call in the numbers or enter them in a form on the internet. She told me that many people don't understand the meters, and those who do often lie to the power company about what they read to under-report their electricity usage. I asked why the power company couldn't read my meter remotely, or why they couldn't simply measure how much electricity my home was using.
Like, "don't THEY know that?" Nope. THEY don't, and the reason they don't has as much to say about how the electricity grid works as it does the way all complex modern industrial systems work and we in the Data Governance world can learn a lot from this.
The electricity grid was created as a downstream electrical production network. Upstream power plants create electricity and send it downstream to factories and homes to be consumed. The power company did not build in mechanisms to measure how much electricity is being transmitted over the wires, how much is being consumed, and exactly by whom. Your monthly electric bill is not based on your actual electricity usage. It's based on estimates derived from your historical usage. That is, the meters read your past, and the power company forecasts your current usage and future consumption from that historical information.
The grid itself is run at 70% of capacity to allow up to a 20% margin of error. If the lines carry over 90% of their rated capacity in aggregate, some lines could be running at 100% and could overload and explode. And if some lines overload, capacity reroutes and burns up other lines, transformers, and sub-stations. So the whole system is calibrated on historical analytics. The power producers have no real-time understanding of how the electricity is used, in what quantity, when, and where. And even the end users don't really understand where the electricity came from, how it was produced, or how the system actually functions.
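To make that headroom arithmetic concrete, here is a minimal sketch with invented numbers (not real grid data): an aggregate load that looks comfortable can still hide individual lines running at their limit.

```python
# Illustrative only: why a "safe" aggregate utilization can hide overloaded lines.
# The line ratings and loads below are made-up numbers, not real grid data.

lines = {
    "line_a": {"rating_mw": 100, "load_mw": 100},  # running at 100% -- at risk
    "line_b": {"rating_mw": 100, "load_mw": 95},
    "line_c": {"rating_mw": 100, "load_mw": 75},
}

total_rating = sum(l["rating_mw"] for l in lines.values())
total_load = sum(l["load_mw"] for l in lines.values())

print(f"Aggregate utilization: {total_load / total_rating:.0%}")  # 90% overall
for name, l in lines.items():
    utilization = l["load_mw"] / l["rating_mw"]
    flag = "OVERLOAD RISK" if utilization >= 1.0 else "ok"
    print(f"{name}: {utilization:.0%} {flag}")
```

The aggregate reads 90%, which sounds within the margin, yet one line is already at 100% of its rating.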
In the Industrial Age, we human beings created many complex systems that function without many of the system participants understanding how the system works, and this is fine if we are all happy running our systems at 70% efficiency.
In enterprises today, we run our Data production and consumption systems with similar levels of complexity, performance, and ignorance. Most business users have no idea where the data came from, how it was produced, transmitted, and consumed. Conversely, most, if not all, Data Governance professionals have no idea how business people collect and use information to generate value. And this grid was created without any meters to read data volume, velocity, veracity, and utility.
Councils, Stewards, Policies, and Standards will improve human communication about the importance of data in an enterprise, but they won't change human behavior over time without new Data Governance "Smart Meters" that measure and report how data was created, who refined it, how it was transmitted, aggregated, repurposed, criticized, commented, stuffed in envelopes, posted in trades, hedged in inventory, reserved against premium, debated in legislation, trademarked, copyrighted, patented, packaged, and a million other uses and abuses. Until we can demonstrate a clean line between creation and use, Data Governance will be two steps forward and two steps back over and over again, generation after generation.
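As a sketch of what such a data "smart meter" might record (the field names and events here are hypothetical, not an existing standard), each touch of a data asset could be logged as a small, queryable reading that ties creation to use:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DataUsageEvent:
    """One metered 'reading': who did what to which data asset, and why."""
    asset_id: str   # e.g. a dataset, report, or feed identifier
    actor: str      # the person or system that touched the asset
    action: str     # created / refined / transmitted / aggregated / repurposed ...
    purpose: str    # the business purpose claimed for the use
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

# A hypothetical trace from creation to consumption:
trace = [
    DataUsageEvent("policy_premiums_2012", "underwriting_feed", "created", "premium booking"),
    DataUsageEvent("policy_premiums_2012", "finance_etl", "aggregated", "reserve calculation"),
    DataUsageEvent("policy_premiums_2012", "analyst_jdoe", "repurposed", "marketing forecast"),
]

for event in trace:
    print(asdict(event))
```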
We need meters and readers and a new Information Age infrastructure that tells us, intelligently, what we are doing, and why and when we are doing it. It should connect maintenance to operations, front office to back, middle office to board, outside to in. We don't know enough today to tell regulators what we know, and until we do we won't be able to close the gap between our forecasted capacity and our current and optimal states.
The Information Governance Community is running a landmark survey on the Cost of Data Quality, and the #1 answer to all of the questions is "I don't know." Data Governance professionals don't know how poor Data Quality affects business outcomes because they don't measure that. After Lehman Brothers disintegrated in 2008 and the global financial meltdown spun out of control, the number one question from the public was "Didn't they know this would happen?"
No. No one knew then. No one knows now. And no one will ever know until we build more intelligent systems that connect information production to consumption and measure the gaps every step of the way. This last recession has demonstrated that we are reaching the limits of our unintelligent Industrial Age networks and systems, and it's time for a major upgrade.
I am a relative newcomer to System Dynamics. I first learned about systems thinking from Helmut Wilke, a German professor who wrote a book called Smart Governance, which talked about systems of governance and their influence on society. I met Professor Wilke in Cologne in 2007 and was so impressed with his ideas that I used his book in a course I was teaching with Christa Menke-Suedbeck at the Bucerius Law School in Hamburg, Germany.
A few years later, a colleague introduced me to some work IBM did with the City of Portland to build a very large SD simulation enabling urban planners to understand how even the smallest policy changes had ripple effects across many municipal departments, neighborhoods, families, and individuals. We created that simulation using Vensim and Forio, and I was immediately captivated by the potential to model and simulate the impact of policy on complex environments.
IBM Smarter Cities SD Demo
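For readers new to SD, here is a minimal stock-and-flow simulation in Python. It is a toy model with invented parameters (a backlog of data-quality issues worked by stewards), not the Portland model, but it shows the basic mechanics that tools like Vensim and Forio automate: stocks accumulate flows, and a small policy change ripples forward over time.

```python
# Toy stock-and-flow model: a backlog of data-quality issues.
# All parameters are invented for illustration.

def simulate(fix_rate_per_steward=5.0, stewards=2, months=24):
    backlog = 100.0               # stock: open issues
    new_issues_per_month = 12.0   # inflow
    history = []
    for month in range(months):
        inflow = new_issues_per_month
        outflow = min(backlog, fix_rate_per_steward * stewards)  # can't fix more than exist
        backlog = backlog + inflow - outflow                     # Euler integration, dt = 1 month
        history.append(backlog)
    return history

baseline = simulate(stewards=2)
policy = simulate(stewards=3)   # the "policy change": add one steward

print(f"Backlog after 24 months with 2 stewards: {baseline[-1]:.0f}")
print(f"Backlog after 24 months with 3 stewards: {policy[-1]:.0f}")
```

With two stewards the backlog keeps growing; with three it steadily drains. Real SD models chain dozens of such stocks and feedback loops together.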
For over 15 years, I've been an inventor and market builder at IBM. In 1996, I invented Internet Insurance, persuading AIG, Reliance National, Chubb, Codan, and other insurers to invest in developing internet exposure coverage products and underwriting capabilities so that businesses could depend on insurance coverage as they expanded commercial operations online. In 2001, I led a team of IBMers to create the Enterprise Privacy Architecture, a patented methodology for embedding privacy policies and obligations into business processes. In 2004, I founded IBM's Data Governance Council and led an international group of 60 companies to create the Data Governance Maturity Model, a vast piece of commonly developed IP that benchmarks Data Governance behaviors across 11 categories and 5 levels of maturity. In 2009, I hosted a series of roundtable forums with large banks, the SEC, and the Federal Reserve as we explored the causes and effects of the Credit Crisis and what new standards in risk calculation and expression could be developed to mitigate future crises. And in 2010, I created the Information Governance Community to publish the Maturity Model under an open source license and invite a global community to work with IBM, the Data Governance Council, and many new leaders in developing a larger market for Information Governance and a new leadership role called the Chief Data Officer.
I love building markets through international collaboration, and this is why I have urged and lobbied iseeSystems, Ventana, Forio, AnyLogic, IBM, and the SD Society to embrace an open standards process at OASIS. SD is a complex discipline that is hard to learn and hard to use. It has grown in episodes over the past 50 years, but it has never really broken out of its strong academic foundations. At first, I thought I could help it grow through the Information Governance Community. In 2011, I held a series of informational webinars on SD, the City of Portland project, and some work Steve Peterson had done on urban violence in South Boston. Michael Bean from Forio.com gave us generous amounts of his time to educate our community in how SD works, how models are built, and how simulations can be used to test strategic ideas and transform organizations. Some of our community members built Data Governance models in Vensim and tested them online in Forio.
But widespread adoption eluded us. You can have great webinars with great content and discussions, but that doesn't mean everyone understands what you are talking about. I saw many of my members thinking about systems, but not in a dynamic SD way. They took the words we used to mean different things and found the math content totally confusing. After six months of work, I had to admit my efforts at community education were not succeeding.
Undeterred, I started talking about the need for SD Open Standards. In the IT world, Open Standards are a way to spread adoption among vendors because they lower proprietary barriers to entry in new markets. They enable better software solutions, which end users appreciate. And the process of Open Standards consideration and specification approval helps build market demand. As early as 2011, I saw clearly that SD lacked a robust IT vendor community. Five or six small vendors providing software modeling tools was a niche market that was not growing.
In 2012, I met iseeSystems at the System Dynamics Conference in St. Gallen. My participation in the conference was very last-minute. St. Gallen isn't close to anything in Switzerland, it was summer, and I didn't want to travel. But boy, am I glad I did. For three days, I saw incredibly thought-provoking, transformational work in every industry, all using a common SD methodology. I speak at many conferences throughout the world, and you never see so many interesting presentations across so many diverse industries written in a common way.
I was blown away by the quality of the content but, sadly, equally depressed by the complete lack of business participation. The conference was run by academics for academics. I was the only representative from a large IT vendor. There were no banks, insurance companies, oil and gas companies, utilities, governments, or even Big 4 consultants attending. The SD Society had held its 2011 conference in Washington, DC, so I asked the organizers how many people from the federal government had attended. The answer was hardly any. Why the heck not, I asked? The answer was that no one had thought to prioritize their participation as a target audience. The target audience was local universities.
If the purpose of the SD Society is to service the university marketplace with educational offerings and knowledge transfer, mission accomplished. If the purpose is to grow the industry and attract business audiences, current approaches are inadequate.
This is where OASIS comes in. Following St. Gallen, I went to work persuading my colleagues in IBM that an open SD standard based on iseeSystems' XMILE could help grow business demand for SD simulations. The open standards process would attract new ideas to SD and open the SD Society to new ideas as well. But it took a lot of persuading. I had to sell a vision internally that SD concepts could be used with Big Data analytics to illustrate policy options on complex ecosystems. I had to tell my colleagues that an open standard would allow IBM to embed SD vocabulary in other modeling tools such as WebSphere Business Process Modeler, Rational Method Composer, and ILOG. And I had to demonstrate that our investment would be modest, the risk small, and the potential payoff reasonable. It took me a year to find the sponsorship I needed to persuade our Standards Committee to approve IBM's sponsorship of the OASIS TC.
And that brings us to where we are today. We have a TC. We have a vision for XMILE. These are table stakes. A TC is a sales effort, and we must now expand our membership to be global, business-oriented, diverse, and inclusive. Over the next 24 months, we have to expand TC membership to 70. I'd like to see representation from North America, South America, Asia, Africa, and Europe. I see my job on this Technical Committee as helping to expand customer demand for SD solutions and build a far larger market than exists today.
We are not just building a technical standard. We are building a market and I will continue to engage my peers to expand the use of XMILE worldwide as we work to develop an Open Standard for System Dynamics at OASIS.
We are growing again. Not as fast as we want, but our economies and companies are inexorably shifting from a deep recession where risk and cost ruled decisions to renewal, profit, and a hunger for growth.
Today, organizations around the world are looking for new products to sell into new markets. And everyone is sitting on untold petabytes of hidden value in data and information that can be turned into new Information Products. Everyone is used to buying Information Products every day in the form of newspapers, analytical reports, music, movies, smartphone apps, and cloud-based services. Old definitions of data and information, security and access, have prevented us from realizing that the vast inland sea of information we collect and process every day could have markets and customers beyond our borders who lack what we take for granted.
It is time to exploit this untapped resource, to develop new tools and methods to discover what we have, to engage with new ecosystems of partners, suppliers, and customers, and to provide Information Products that inspire internal and external innovation, creativity, and growth.
It is the mission of the IBM Information Strategy Council to define the role of Information Strategy and to bring mature Information Product Management and Market Development tools and practices that help its members build and reap new markets for Information Products and Services. The Council will encompass past and current practices of Data and Content Management, Governance, Security and Privacy, Open and Linked Data, Big Data, Analytics, and Semantics. It will add Social Networking and Crowdsourcing, Cloud and Mobile, Product Management, and Sales.
The Council will look at these functions from a business perspective, with the goal of creating strategic tools, assets, and methods for bringing high quality Information Products and Services to global and national markets that delight customers and drive new revenue and growth.
This Council will be open to members of the Information Governance Community and will meet for the first time in late September. This is one of the most exciting topics in a new industry all of us can have a hand in creating. I'm looking for global participation from thought leaders, business practitioners, and pragmatic strategists.
If this Council interests you and you would like to join, please send me a note with the following:
1. Why you want to join
2. What Information Products you sell or would like to sell from your organization
3. What you expect the Council to deliver that would benefit your organization
4. What you can contribute to deliver those benefits.
Upon invitation, I will send an IBM Information Strategy Council Agreement for you to countersign. Please feel free to write to me with any questions or comments. I am looking forward to your feedback.
This is the right topic at the right time and working together, we can do great things!
On May 6th, I delivered a presentation, entitled "The Six Steps to Building an Information Strategy," at the ISACA ASEAN Conference in Singapore. A copy of my presentation can be seen here:
An outline of the topics covered:
1. The world is emerging from recession
2. Growth is on everyone's minds again
3. Information is a hidden asset with enormous potential
4. Now is the time to turn those assets into Information Products
5. To do so, you need to develop an Information Strategy
6. Understand the 5 major schools of thought around Information
7. Use these Six Steps to build Information Products that generate real revenue
- Discover Your Data with Semantic Search (see the sketch after this outline)
- Tag it with Boundaries and Obligations
- Use your crowd to find issues and opportunities
- Make your Council reward innovation
- Develop Information Product Management as an operational discipline
- Think ecosystems and distributed value chains
8. There is a vast new industry of Information Products that is already generating huge value
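As a rough illustration of the first two of those six steps (discover your data, then tag it with boundaries and obligations), here is a minimal sketch; the catalog entries, keywords, and tags are hypothetical, not an IBM tool's output.

```python
# Hypothetical catalog entries produced by a discovery scan,
# then tagged with the boundaries and obligations that govern their reuse.

catalog = [
    {"asset": "claims_history",   "keywords": ["claim", "payout", "policyholder"]},
    {"asset": "clickstream_logs", "keywords": ["session", "page_view", "device"]},
]

def tag(asset_entry):
    """Attach boundary and obligation tags based on simple keyword rules."""
    tags = {"boundaries": [], "obligations": []}
    if "policyholder" in asset_entry["keywords"]:
        tags["boundaries"].append("contains_personal_data")
        tags["obligations"].append("purpose_limited:claims_processing")
    if "device" in asset_entry["keywords"]:
        tags["obligations"].append("retention:13_months")
    return {**asset_entry, **tags}

for entry in catalog:
    print(tag(entry))
```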
Are you ready?
Let me tell you about the automated future of American manufacturing and distribution that will soon displace millions of jobs:
In Norfolk, Virginia, the automated shipping port uses remote-controlled cranes with GPS, cameras, and computers, allowing two operators in a room to move containers off docked ships onto railbeds without a human being touching the container.
In California, Governor Jerry Brown just signed a Google-sponsored bill that allows driverless cars and trucks to navigate roads, and similar bills have been passed in Nevada and Florida. Across the country, 18-wheelers deliver tons of cargo to manufacturers, distribution centers, and retail outlets. The drivers often flout regulations requiring no more than 8-hour shifts, and tractor-trailer accidents across the country attest to how long hours behind the wheel result in errors. Just imagine how efficient and safe our roads might be with driverless tractor-trailers and other delivery vehicles running 24-hour shifts with no driver fatigue.
In distribution centers across the country, small and powerful orange robots move vast stacks of goods through a warehouse with only an operator at a terminal moving a mouse to deliver a pallet from Area DD to the loading zone, where a driverless delivery van waits to scoot the products to Best Buy, where robots wait to off-load the materials and put them on the sales floor, where NFC terminals will let any shopper pick up a product, touch their credit card, and walk out the door without a human being having touched the product since it landed at port.
This is the future of American product shipping, distribution, and retail. It is automated, efficient, inexpensive, and nearly jobless. It runs 24/7 without health insurance, pension benefits, withholding taxes, sick days, or vacation. The sophistication of robots and automation is going to sweep through the economy, leaving millions of unskilled workers permanently out of work. Economists will talk about structural dislocations, the need for job-retraining, and transitions. But the jobs will be lost nonetheless and the consequences for American social tension, income inequality, and poverty will be profound.
We are moving inexorably towards a knowledge economy in which large-scale manual labor will be replaced by automation. At a time when college education costs are skyrocketing and public subsidies are shrinking, when political parties are polarized and the nation lacks consensus on just about every topic, it's hard to imagine a nation less well prepared for the jobless future of unskilled citizens.
I am a photographer. Wherever I go in the world, my camera comes with me and after my work day is over I use the camera as a creative outlet. Capturing how light falls in the world around us is for me a way to express myself and create art, to be an artist in addition to an IT professional, husband, father, friend.
Unfortunately for me, I am also a Linux user, and thus my options for post-production photography management are extremely limited. There are some open source tools which have promise but are as yet more limited than my needs. Most photographers use Adobe Photoshop or Lightroom, but neither of these tools runs in Linux yet. Fortunately, there is one program, called Bibble 5, that is in many ways just as good as the Adobe tools, and it runs in Linux. I had been using Bibble 5 for a couple of years when Corel decided to acquire Bibble and its developers in January 2012. Bibble had been developed by a group of 5-6 very skilled developers who created a wonderful tool. They were like open source developers: open, available, interactive, and proactive about communication. They maintained blogs, hosted forums, answered emails, and were extremely responsive to bug reports, customer needs, and new ideas. They created a 3rd-party plugin model for their software and had an enthusiastic ISV developer ecosystem that provided awesome additional tools for their tool.
The Corel acquisition was warmly welcomed by the Bibble customer and ISV community and Corel won immediate good will by quickly releasing a new version of the product, now called Aftershot Pro (ASP), in February, just two months after the close of the acquisition. The new version had the same power as Bibble, but with a nicer interface, some new tools, and additional camera support. However, Corel was not quick to migrate the communication tools that Bibble had effectively used to maintain customer loyalty.
They had a different communication culture. They talked to us when they had new product versions to announce or other tools to sell us, and otherwise nothing - silence. The user community, used to vibrant interactions amongst themselves to solve problems and trade advice, set up their own user forum. On the forum, users posted general discussions about their uses of the product, posted bugs and improvement suggestions, posted and tested plugins, and discussed photography in general. Corel set up a Facebook page to promote their software, and users gratefully posted photography examples made with Aftershot. A small point release was announced and shipped in May, and it seemed that the product would become a real Adobe competitor for the Linux world.
Over the summer, one of the plugin developers learned that all of the former Bibble developers had been terminated by Corel. This information was posted in the ASP Forum and it caused immediate alarm among users. The Bibble developers were extremely popular among end-users and their termination was seen as an ominous sign of cost-cutting at Corel that would have dire consequences on the future of the product. Some users expressed their fears that the Linux version of ASP would soon be discontinued. Others analyzed past Corel acquisitions and noted historical declines in innovation that followed similar terminations of acquired development teams. Others fumed about outsourcing in general and discussed fears that new developers would not understand customer needs.
This conversation devolved over the summer. In August, one customer started a thread called "Goodbye ASP" and discussed the features he was missing and why he would move to another product. Another customer started a thread called "What are you switching to?", where many of us discussed competing open source products and running Adobe in virtual machines. The fear and anger in these threads accelerated in September, and more customers started threads bashing the ASP product and the lack of innovation and of new camera and lens support. One thread invited users to post photos with unattractive artifacts created by ASP, illustrating how the product ruined photos. Another thread helped users transition to competing products, with tips to migrate photo catalogs. Soon, plugin developers started posting on the forums of competitors discussing their projects to port their plugins.
You can see this conversation here: http://forum.corel.com/EN/viewforum.php?f=90
All the while, over three months of rising rumors and customer fear and anger, Corel was absolutely silent. Customers posted threads asking Corel to comment on the Bibble terminations and product improvement plans. No answer.
The damage that was done to this company's reputation is enormous. I am a customer who has already learned to use other tools to process my photos. Hundreds of other current customers have been turned into vocal antagonists. Corel has lost customer trust and goodwill not 10 months after spending millions of dollars to acquire a terrific product and its loyal customer base.
For all we know, Corel is working on a terrific upgrade that will soon ship and do amazing things for us. But I can say clearly that few existing customers will recommend it because the company has demonstrated complete ignorance of customer communication.
What should they be doing, and what can they do now to change the situation? Corel should be using Big Data to analyze social media feeds from their user forums, Facebook, and Twitter. They should be looking for negative feedback, and they should respond immediately with information and reassurances. Companies build sophisticated call-center applications to answer questions from customers who proactively call. They need to build similar systems to read customer sentiment passively when customers discuss the product, and other products, online. This is extremely valuable information that must be captured, analyzed, and used proactively every day. There is also a lot of important information about product features and customer needs in other product forums. Companies today need to be seen as responsive, sensitive, and caring. They need to behave like open source communities: sharing bugs, discussing features, and working with their user base as an extended community of experts and product architects who know the product as well as the developers do.
Companies who do not transparently integrate user-base communications and social media into their customer service solutions via big data won't long have customers to serve.
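What listening to the forum might have looked like in practice: a minimal sketch (the keyword list, post titles, and threshold are illustrative assumptions, not a production sentiment engine) that flags a rising share of negative posts so someone can respond before the thread titles turn into "Goodbye ASP."

```python
# Toy monitoring of forum post titles: flag when negative sentiment spikes.
# Keyword list, sample titles, and the alert threshold are illustrative assumptions.

NEGATIVE = {"goodbye", "switching", "abandoned", "silence", "ruined", "leaving"}

posts_by_month = {
    "2012-06": ["New lens profiles?", "Batch export tips"],
    "2012-08": ["Goodbye ASP", "What are you switching to?", "Still silence from Corel"],
}

for month, titles in posts_by_month.items():
    negative = sum(1 for t in titles if NEGATIVE & set(t.lower().split()))
    share = negative / len(titles)
    alert = "RESPOND NOW" if share >= 0.5 else "ok"
    print(f"{month}: {negative}/{len(titles)} negative titles ({share:.0%}) -> {alert}")
```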
Take my advice. Learn from Corel's mistakes. You don't want users complaining like this without company communication:
"more than one month since the beginning of the crisis and still no news from COREL. The post on facebook seems to be just words and only words but actually no real fact were shown. No a single beta, not a single concrete action were presented to the customer (for all of the customer, the corel one let'say the new comers and the ""old"" former customer of Bibble)
I am really considering to switch now. I was a customer of Bibble 4, then I went to Bibble 5, with its pro and cons but at least I was happy with this SW, and its community, its evolution, and its third party plugins. Today it s only rumours, there is no evolution, the old users are leaving, the company seems be completely static, and thirds parties developers are loosing complete interest into the product.
I don't want to waste more time in a software which goes nowhere, where there is absolutely no concrete plan for the future, and might disappear from one day to the other. As a linux users I wonder if I should switch to a solution with a virtual machine - that one will be dedicated to RAW processing, without any internet access neither crazy anti-virus running on it...
- what are your experience: VirtualBox or Vmware ?
- which software to put on the virtual machine ? lightroom, Capture one, DXO ... ? I was still happy with the speed of ASP and its responsiveness. I really want a solution which doesnt freezes every few seconds before applying a change. Once again what are your experience in this virtualized world ?
I know that there are still some solution under linux, at least 2 : rawtherapee and darktable. But from my point of view they are not yet ready.
- rawtherapee is quite good in terms of rendering, but it s so slow that it become unusable for a long and correct workflow. More there are some important features for me which are missing, such as layer, region correction, grad filter, black and white tools, catalogues ...
- darktable is a little bit too young, it might become a solution. The quality output is still not there, and the user interface is a bit unbelievable sometimes.
I was a really happy user of bibble 4, an happy user of bibble 5. I wished that ASP could have been a real alternative... That's the past, I will not recommend this software as I could have done."
On September 20-21, IBM is hosting The Big Data Governance Summit at the Ritz-Carlton Bachelors Gulch in Vail, Colorado. Velocity, Volume, and Variety without Veracity creates Vulnerability.
This event is about Metadata, Stewardship, Security, Privacy, Data Quality, and Big Data. We can reach to the skies and pull in petabytes of relational tables, Twitter feeds, video, audio, and documents, but it's all garbage in and garbage out without Data Governance. Everyone knows this, and it's our task to do something about it. We have to show how it can be done – how anyone can build vibrant, dynamic Big Data Ecosystems that use common standards, ontologies, and methods to tag huge volumes of data, index its value and context at high Velocity, and search across its Variety to discover trends with large clusters of computational power that deliver high Veracity and low Vulnerability.
That is the promise of Big Data Solutions: uniting disparate data sources across our organizations, our cities, and our planet; leveraging data sets based on purpose specification; searching for meaning and value with brute-force speed.
I can see this promise. It's within our grasp. We can bridge our stovepipes of data and non-standard behaviors into lean, mean transformation machines that yield incredible insights and informational power.
But this promise is only in reach with Data Governance Solutions to provide metadata tagging, standards, ontologies, purpose-based access protocols, audits, security & privacy, data quality, discrete retention rules, and new tools and technologies to automate how we do it.
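A minimal sketch of what a purpose-based access protocol could look like in practice (the datasets, purposes, and rules here are hypothetical, not an IBM product feature): a query runs only when its declared purpose matches the purposes the data was collected for.

```python
# Hypothetical purpose-based access check for a Big Data query.

# Purposes each data set was collected for (assumed policy metadata).
ALLOWED_PURPOSES = {
    "customer_transactions": {"fraud_detection", "regulatory_reporting"},
    "twitter_feed_archive": {"sentiment_analysis"},
}

def authorize(dataset: str, declared_purpose: str) -> bool:
    """Allow a query only if its declared purpose matches the dataset's policy."""
    return declared_purpose in ALLOWED_PURPOSES.get(dataset, set())

print(authorize("customer_transactions", "fraud_detection"))    # True
print(authorize("customer_transactions", "marketing_outreach"))  # False -- purpose mismatch
```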
The purpose of this event is to explore how we can bring these ideas forward to help the world adopt Big Data Ecosystems more rapidly and more successfully. We are meeting at the Ritz-Carlton Bachelor's Gulch, which is the wonderful venue where we first shared the IBM Data Governance Council Maturity Model with the world in 2007. We will look at real-life examples of firms using Big Data, exploring ecosystems, and developing standards to model and simulate them.
This meeting is hosted by the IBM Data Governance Council but it is open to all.
Join us as we move Big Data Governance forward.
This morning, General Motors announced that it would no longer advertise its cars on Facebook. This announcement comes a day before the Facebook IPO, and casts a shadow on the business model of Facebook. GM said that they will continue to support their page and user community on Facebook, but that ads just weren't effective in helping consumers to make car buying decisions. Ford jumped on this announcement to say they would continue to buy ads on Facebook and that Social Media requires a consistent commitment to innovation and community development.
Maybe. But I think GM's decision does illustrate a key problem for Facebook and Twitter - the revenue model. Social Media grew up without dependencies on ad-based revenue. On Facebook, you aren't a customer. You are a product, and it's your likes, dislikes, friends, photos, videos, and content that generate value. Selling products to products via advertising is hard. Members don't use Social Media to go shopping. There's no commerce platform there. They use it to be social. There are so many other outlets that are more effective for advertising than Social Media.
So how should Facebook and Twitter make money? My idea: make it collective. The value is in the data.
1. Make it explicit in the terms and conditions that every member owns their own data via copyright. This does two positive things.
A. It indemnifies Facebook and Twitter for the crazy, infringing, and potentially libelous posts of their members by allowing them to claim that they are conduits of content rather than publishers or distributors.
B. Copyright establishes the rights to royalties for content created and posted on their networks, which enables the next step.
2. Allow members to opt-in to Big Data analysis by Social Media partners and intermediaries.
3. Charge Social Media for Big Data Searches by data volume.
4. Pay members royalties every time their data is used in Big Data Searches.
This simple model creates powerful incentives that transform members from products into mutual social network content providers with an economic interest in posting content that will be used in Big Data searches. It establishes data property rights that insulate Facebook and Twitter from vouching for the content on their networks. Members will also discover that providing high quality data that companies want to search means more royalties, and so the system will produce better behaviors. And it creates a 2-tier royalty distribution model that will also pay Facebook and Twitter handsome revenue, change online advertising, and make every other content aggregator change too.
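A back-of-the-envelope sketch of that two-tier split; the per-gigabyte price and the revenue percentages are invented numbers, purely to show the mechanics of the model.

```python
# Illustrative royalty flow: a company pays per gigabyte searched,
# the platform keeps a share, and members whose data was used split the rest.
# All prices and percentages are invented for illustration.

PRICE_PER_GB = 0.50      # what the searching company pays
PLATFORM_SHARE = 0.30    # the social network keeps 30%

def settle(search_volume_gb, member_contributions_gb):
    revenue = search_volume_gb * PRICE_PER_GB
    platform_cut = revenue * PLATFORM_SHARE
    member_pool = revenue - platform_cut
    total_contrib = sum(member_contributions_gb.values())
    royalties = {m: member_pool * gb / total_contrib
                 for m, gb in member_contributions_gb.items()}
    return platform_cut, royalties

platform_cut, royalties = settle(
    search_volume_gb=1_000,
    member_contributions_gb={"member_a": 600, "member_b": 400},
)
print(f"Platform keeps ${platform_cut:,.2f}")
print(royalties)  # members are paid in proportion to the data of theirs that was searched
```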
Of course, Facebook and Twitter will have to sort out who's a person and who's a bot, and will have to provide content creation tutorials to help users/customers create content that has value by sharing the top 100 Big Data queries and sample results.
But this Business Model has something for everyone and is a true win:win. It benefits customers by establishing data property rights and royalties for content. It benefits organizations who want to do Big Data searches by providing ever richer data streams of high quality and availability. And it benefits Facebook, Twitter, and their investors by providing an enormous profit making engine selling Data.
The Data is the Value. The more there is, the more valuable it becomes. Pay your customers to create higher quality data and charge your partners to use it. It's a simple Business Model.
Dick Costolo - @dickc - and Mark Zuckerberg - @finkd - are you listening?
I use Big Data every day. I don't have Hadoop, a Data Warehouse, ETL, or a big analytical engine. But I use search engines, which are indexes of web-pages from around the world, to discover related and unrelated facts. I use Twitter and Linkedin, which aggregate the ideas of millions of people, to understand the sentiments of the people I follow. And I make decisions, and mistakes, with this information every day.
We all do. And in that context, we are all Big Data users and abusers, and we can identify with larger enterprises that are also confronting vast streams of information from every corner of the globe, created by individuals, communities, corporations, and governments. We as individuals never had industrial data management applications. We never had Data Governance Councils, Stewards, or Data Management professionals. So we've been selecting data streams first and using the ultimate analytical engine - our brains - to integrate that information, glean trends, and make decisions.
What's new about Big Data is that large enterprises are copying the information processes that We The People use every day. They are selecting streams first, aggregating them second, determining application third, making decisions fourth. Judging consequences of decisions... later, if at all. Organizations around the world are deciding to retain information much longer because there is a belief that latent, slow developing, trends may lie dormant in that information that can be discovered much later.
But with vast volumes of information and long retention cycles, high-velocity decision-making has the potential to do enormous damage as much as enormous good. And we know from experience that decision-making is often influenced by cyclical trends, personal prejudice, and national dogma. Counter-cyclical views can be marginalized. Whistle-blowers can be fired.
But Big Data also offers an historic opportunity for Data Management. This industry for too long has been seen as back-office archivists recording the deeds and attributes of heroic business leadership in dingy databases in large glass-house mainframes and data warehouses. They have taken back seats to application developers and business analysts who first and foremost collect the requirements of business users for new applications, features, and functions.
But Big Data changes all of that. It makes information sources and streams more important than applications, features, and functions. It changes the emphasis in value creation and puts the onus on Information Management to produce better sources and streams, easier aggregation and integration, manufacturing information products any user can leverage in any application they wish.
It's large enterprises automating the way We The People use online information every day, and the power and consequences of this paradigm shift are profound and potentially quite scary.
We need Information Governance over every part of Big Data to assure that organizations can answer these fundamental questions:
1. Can we trust our sources?
2. Do we know where they came from?
3. How do we verify the authenticity of the information?
4. Can we verify how the information will be used?
5. What decision options do we have?
6. What is the context for each decision?
7. Can we simulate the decisions and understand the consequences?
8. Will we record the consequences and use that information to improve our Big Data information gathering, context, analysis, and decision-making processes?
9. How will we protect all of our sources, our processes, and our decisions from theft and corruption?
This morning, the Information Governance Community began discussing these issues in a global teleconference moderated by IDC. We have just scratched the surface of these issues and have much more to discuss. We have agreed to create a new category - Big Data - in our Maturity Model to provide organizations with new methods to benchmark their Big Data Governance maturity. But we also agreed that our existing Maturity Model categories also apply and we need to update them to include Big Data issues and questions.
I believe this is critical work. Big Data is an enormous opportunity to make information the arbiter of value creation in the Information Age. But it is also an enormous risk because the same solutions can be used to make dangerous and destructive decision-making a high volume, high velocity science.
Every new technology can be used for both good and evil. Join the Information Governance Community to help ensure Big Data serves the best possible uses.
If you have a Data Governance program today, you already know it's easier to start one than to run one. Real governing is not like a Hollywood movie. It's hard to know what's wrong, why it's wrong, how to fix it, and how to get people to care about or follow the fixes. And you have to do this every day, while all the gurus tell you to get metrics and KPIs, build a framework, and follow their process. But those gurus don't live your life, they don't work in your space, and they don't have to make tons of messy compromises to get things done.
But you do, and you know that Governance is tough stuff.
In the Data Governance Council, we know that too, and we want to help. We helped build the market with the landmark work we did on the Maturity Model. That gave you a way of knowing that what you already know isn't enough. You could use it to help others realize it wasn't enough too. And that gave you a place to start your program.
Well, now that you are in the thick of it, we think there's a way to communicate how your organization really works - to simulate your environment so you can help folks learn what's going on, how stuff gets done, and what would happen if you made some changes. We know you do that anyway, all the time. But we want to help you do it in a safe test environment before you put your ideas into production.
We call this Predictive Governance - the SCIENCE of describing the world as it is in order to run simulations on how we'd like it to be. Normally, most folks do it the other way around: they simulate the way they think the world works so they can describe how they want it to be.
Now I could tell you all about how this new way of working is going to look, how it's going to help you, and what it's going to do. But it's more powerful if you see it for yourself. What I'm sharing with you today is an early preview of the Predictive Governance Simulation we are building. It's not pretty or polished, but it works and you can play with it now.
Have a look and let us know what you think:
If you'd like to join the IBM Data Governance Council and help us do more with this, drop me a line.
Temporary Suspension of Disbelief.
What I'm going to tell you now may seem like a fairy tale, but I want you to calm down and listen carefully. The world we know now in 2012 will seem a distant memory by 2022. The Big Data we have today will seem quite small and quaint in 2022. Because on the horizon today we can already see the emergence of a future in which all business decisions will be simulated before being committed to action.
Everything. Manufacturing, Pharmaceuticals, Government Programs. It will all be simulated in a detail and richness that will be fun and immersive, like the best 3D video games.
Spreadsheets will go the way of the Dodo bird - extinct. And everything we do in the real world will have a Data Model, and those models will form the basis of all value creation in the world. Let me give you a few examples of what I mean:
Today, manufacturing is a highly automated process that happens in discrete phases using raw materials, labor, machinery, large physical locations, logistics, and supply chains. It's possible to design a product in San Rafael, make a prototype in China, and go to mass production in Thailand, serving markets in North America in under six months. That's fast by historical standards for simple products like eReader binders, smartphone cases, and even decorative objects. But it's also a highly energy-intensive process. Product prototyping relies on remote manufacturing and postal services to transmit the prototype and design ideas, shipping to transport goods to market ports, and rail and trucks to deliver to stores and customers.
But in the next decade, that long supply chain will be transformed by 3D Printing, which will enable intricate product manufacturing in every home. A 3D Printer uses lasers to fuse layers of metal, polymer, and other materials into highly intricate products in hours. You can create products in 3D Printers that are impossible to create in any other way. They can have moving parts and can be printed disassembled or assembled. And this capability will turn every home into a factory.
Need a new part for your washing machine? Print it. Want a cup with four spigots? Print it. Develop a new gas/diesel hybrid motor with a hydrogen battery backup? Print it.
In the future, online retailers like Amazon will sell vast catalogs of digital product designs that individuals create and license under copyright for local 3D Printing. Open-source networks will share designs under community license terms. And designers will create their own catalogs of digital assets that have astronomical value protected by patent. 3D Printers are available today for about $2000, and with one of those you can create polymer-based products in about 3-8 hours depending on the complexity. By 2022, we will all be able to purchase 3D printers that can create new products with hundreds of intricate moving parts, layer by layer, in under an hour. And a new supply chain of raw materials, shipping, and recycling will replace the goods delivery supply chain of today. Of course, you won't yet be able to print a car in your garage in a day, but you will be able to print a new steering wheel, brake pads, or chrome-plated exhaust for your 1949 Studebaker.
Of course, this assumes you can trust the data. Trusting sales data in a spreadsheet is one thing. But trusting design data in a brake pad that will or will not stop your car in an intersection is quite another form of trust. And 3D Printing won't just be used to print locally what we produce today. It will also be used to print things we can't produce today, things that are incredibly complex. And because in the beginning the cost of printing and risk of failure will be high, all designs will be simulated online before they are created.
The simulation environments will look like 3D first-person shooters or MMORPG games like World of Warcraft, in which new product designs are created in data first and tested in simulation environments with live people acting in avatar roles. These simulations will be created to mimic the complexities of the real world, and human beings will use them for everything. You won't buy any digital designs to print locally without a simulation certificate. And that brings us to our next area.
In 2022, we will look back at drug trial testing on rabbits, monkeys, and humans as being incredibly brutal and wasteful. Why would someone subject another living being to a painful experiment without first testing the drug or remedy in a bio-data simulation? In 2012, we can already see and describe our world at the atomic level. We can create memory arrays with just 12 atoms, and manipulate cells and molecules. By 2022, we will be able to simulate organic interactions at the sub-cell level. New drugs will be tested in 3D simulations of simple and complex organisms many times before they are ever tested on live beings. It will be far faster, cheaper, and easier to test new drug ideas on artificial environments with detail on par with real organisms than risking injury and death in live subjects.
Want to develop a new pesticide? Test it in a corn or wheat simulation, including the entire bio-ecosystem of a corn or wheat farm - insects, birds, people, wind, rain, earth, and sky. All included. Test first in the computer, refine and retest, refine and retest. Then try outside. It's already possible to simulate some simple organisms, but this area will grow rapidly. And in the future, new drugs and chemicals can be tested and developed in weeks and months instead of years or decades. Drug companies will store huge libraries of simulated drugs, environmental simulations, and their outcomes for audit and review. Labs will constantly compare simulation results to real drug trials to refine the simulations and learn from mistakes.
A program is a policy, and a policy is neither a product nor a chemical. It's a set of rules designed to influence or change human behavior. Cut a tax, and you subsidize a program. Write a new law, and you create a new restriction. Declare war, and who knows the outcome. In 2022, each of these decisions will be simulated in large online simulations with millions of people, miles of territory, buildings, cars, planes, trains, and all. There will be online simulations with people interacting as avatars (like Second Life), and anyone with an idea will be able to test policy assumptions on that population, changing the size and scale of democratic participation in ways we can't fully imagine today. No one will ever think about testing a policy on the population without first testing it in a simulation.
Want to build a battleship? Simulate it. Want to put a Naval Simulation in the Pentagon? Simulate the design of the design. Want to audit the Federal Reserve? Simulate it.
Where are we today with simulations? We have some rudimentary vocabulary for taking complex human and machine interactions and reducing the complexity to simple simulations everyone can understand. Like a lot of things, the simulation industry has become popular by dumbing itself down and making its simulations consumable.
Where do we need to go? We need to develop new methods and vocabulary to capture human knowledge of complex ecosystems and transform that knowledge into equally complex simulations that convey understanding of the most feature-rich and intricate environments.
In 2012, we have Big Data. In 2022, the world will be Data-Driven. All physical goods will have a Data artifact, and many data artifacts will have no physical comparison. We will make no decision without a simulation. The simulations will look, feel, and be almost real. We will wade into them and out of them like walking into another room.
And what we think of today as big will get MUCH BIGGER. In 2022 the Future of Everything will be Simulation. We will Predict the Future before we Live It, and we will use Predictive Governance to make sure we can trust what we simulate.
Join the IBM Data Governance Council as we Simulate and Explore the Future: http://dgcouncil.eventbrite.com
Data Governance Programs are popping up all over the globe. It isn't hard to get one started anymore. But it is hard to be good at it and to make it last. In fact, I see more programs taking one step forward and two steps back – narrowing focus to demonstrate results and falling in line with other IT projects rather than charting a clear path towards larger transformation.
But let's be clear – Data Governance is about Business Transformation. We can't change organizational behavior to take data seriously if we can't change how we work.
We in the Data Governance Council have a vision that Data Governance is a coordination of people collaborating on common goals and purposes – to use data as an asset. That vision requires that piecemeal project management of data issues must evolve into systemic governance structures and methods, whose goals and purposes themselves transcend the people, applications, and interactions.
Until last year, we didn't fully know how to close the gap between where we are today and where we'd all like to go. But today we see the way forward, and the Data Governance Council is embarking on a bold new program to develop Predictive Governance: systemic ways of describing our world and modeling potential interactions to understand what works and how to improve it.
Traditional scientific analysis says that to understand a problem you have to take apart the issue and decompose it into all its components and sub-components and find the root cause.
But this assumes there is always just one root cause and one thing to blame:
“Data Quality in our branch operations is atrocious, so we have to fix our incentive structure.”
“Our network was hacked and our customer data was exposed, so fire the CISO.”
It's almost irresistible to search for scapegoats for common problems using simple cause-and-effect analysis.
People rarely ever imagine that individual data quality problems are symptomatic of larger systemic challenges in the information supply chains we have created over decades to handle information flows from source to target, and no CEO expects that network hacks are the result of systemic weaknesses in IT systems that are themselves a reflection of organizational culture and priorities.
It's hard to accept that people created the systems that enable Poor Data Quality, Global Jurisdictional Jungles, Metadata misunderstanding, Lax Security, Privacy Invasions, and Big Data Mischief. No one deliberately creates these problems. No one wants them to continue. But they continue nonetheless because people really don't understand the elements and interdependencies of the systems they have created.
The point of Predictive Governance is that we work in large ecosystems and we must work to understand them. If we can't describe our ecosystems, we can't rise above the superstitions and organizational behaviors that constantly hold us back.
This event will explore the ideas and methods behind Predictive Governance, new Enterprise Data Governance Solutions that integrate multiple business and IT domains, and Internet Jurisdiction and Multi-Stakeholder Governance in the context of global regulatory confusion as an archetype of Predictive Governance Challenges.
These are big problems and we are working on big solutions.
See the agenda. Read our blogs. Understand our mission. Be prepared to interact.
This is a thought leadership forum for change. Join us and make a difference.
This event is open to all who wish to join the IBM Data Governance Council. Register to attend here: http://dgcouncil.eventbrite.com/
Does the European Union "promise to be true in good times and in bad, in sickness and in health?" Will the Union survive the current Debt Crisis and become more integrated, or will it break apart under the pressure and allow insolvent states to exit the common currency?
Can the United States maintain its high standard of living and reduce its debt burden at the same time?
You may read these questions in the press every day and never believe they have anything to do with Data Governance, but they very much do. Governments make tactical decisions every day to increase debt by small fractions, thinking that their incremental spending is nothing in comparison to what others have done in the past - failing to see the correlations between current consumption and long-term systemic consequences.
With 7 billion people on the planet Earth, our societies have become so complex that it is impossible with past methods of governance to foresee how policies impact even the smallest ecosystems. So we rely on blunt cause-and-effect relationships to over-simplify our options and fit our ideas into media soundbites. And the result is non-correlated policies that are anything but smart or predictive.
We seek to change this. We know that without new tools and techniques to see beyond the next effect, every cause will yield policies that fail. We are the IBM Data Governance Council, and we see that Data is the raw material of the Information Age and that effective Governance relies on conceptual thinking, integrated approaches, correlated analysis, and a relentless search for truth.
We call this Predictive Governance, and this meeting will explore what this means, how it works, and how we as a Community can create predictive models that:
1. See the Relationships between Data Quality and Security & Privacy and Data Architecture and ILM and Metadata and Audit and Reporting and Stewardship and Policy and Organizational Awareness and Business Outcomes - the Forest and the Trees in our Information Ecosystems.
2. Model and Simulate how new integrated policies, people, and technologies are available to Govern in these complex Ecosystems.
3. Understand and articulate these relationships to laymen who only see the problems at hand and have no patience for larger integrated approaches.
Please join us for this important two-day event. Participation is open only to members of the IBM Data Governance Council. Organizations wishing to join the Council may sign up for this event and execute a Council Agreement in New York at the meeting.
I understand the moral outrage that Germans, French, and Dutch feel over the financial mess that Greek debt is causing in the EU, but their crisis-management policies have made the situation far worse than it had to be, and it will get even worse over the next few years. I've written in the past about how the lack of Data Governance in the EU allowed member states to accumulate national debt without verified reporting for the last decade. Greece was let into the Euro with only promises to reduce historic debt levels of 13% down to the treaty-obligated 3%, but there were no audit controls or verified reporting requirements.
OK, that's the past.
This week, the NY Times reported that in 2009 the IMF presented a secret report to EU leaders disclosing that Greece had lied about its deficits for a decade. Why anyone was surprised by this is a mystery to me. Did the EU really expect Greece to change decades of post-war economic behavior once they were granted membership in the Euro? In my experience, behavior only changes when there is transparency and information illustrating deviations and consequences.
This was five months before Papandreou disclosed the amount of Greek debt to his nation and the world. But rather than deal with the issue quickly, EU leaders hid the report and delayed real action on the situation, and instead forced Greece to accept crippling budget cuts that are now destroying that nation's economy.
Let me explain how this works. The world is in a serious global recession. In a recession, unemployment rises and incomes fall. Tax collection follows income. A nation needs GDP growth greater than 2% to increase employment beyond the rate of new workers entering the workforce. When people have less money, they spend less on consumption and the economy suffers. Normally, governments increase spending during these times to compensate for the lost consumer demand, and that stimulus spending in turn increases tax revenues. So it's normal to run a deficit during a recession, because it takes a while for consumer demand to recover.
The EU is forcing Greece to cut government spending by 20%. Greece already has a reported unemployment rate of 16.3%. The Greek government should be deficit spending now to make up the gap and help bring people back to work. The debt can be cut when the economy recovers and people are back at work. The labor market can be liberalized, pensions can be reformed, and new audit standards can be enacted in a new EU Treaty to make sure that actual budget deficit numbers are part of the public record. All of this can be done over a decade, when tax collections are rising and the government can ease changes into the economy in a non-destructive manner. This will save money for Greece and the EU.
The EU-enforced budget actions are having the opposite effect, forcing the Greek economy to contract even further. This will in turn increase unemployment, which will reduce tax collections and make the budget deficit worse.
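A simplified worked example of that feedback loop (the multiplier, tax rate, and starting figures below are assumptions for illustration, not actual Greek statistics): cutting spending shrinks GDP, which shrinks tax revenue, which claws back much of the intended deficit reduction.

```python
# Toy austerity arithmetic. All numbers are illustrative assumptions.

gdp = 220.0                   # billions
spending_cut = 0.20 * 50.0    # cut 20% of an assumed 50B spending program = 10B
multiplier = 1.4              # assumed fiscal multiplier in a recession
tax_rate = 0.35               # assumed average tax take on lost output

gdp_loss = spending_cut * multiplier   # economy contracts by 14B
tax_loss = gdp_loss * tax_rate         # tax revenue falls by about 4.9B
net_deficit_reduction = spending_cut - tax_loss

print(f"Spending cut:            {spending_cut:.1f}B")
print(f"GDP contraction:         {gdp_loss:.1f}B ({gdp_loss / gdp:.1%} of GDP)")
print(f"Lost tax revenue:        {tax_loss:.1f}B")
print(f"Net deficit improvement: {net_deficit_reduction:.1f}B -- far less than the headline cut")
```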
As is normal for EU policy, the policies being enacted worsen the very thing they seek to avoid - Debt.
There is no silver lining. The Unity Government will preside over an historic reduction in living standards for everyone in Greece over the next decade.
This prescription will now be applied to Italy. Watch how the contagion creeps North.