Adler on Data Governance
DataGovernor 120000GKJR 1,983 Views
"USSR Nationalizes World's Largest Insurance Company" - A headline like that in Pravda in the 1950's might not have been that unusual. The Soviet Union was always bragging about having the world's largest buildings, rocket ships, and industrial organizations.
But the same story in today's NY Times about the anti-regulation Bush Administration taking an 80% interest in AIG, today's largest insurance company worth $1T, for a mere $80B, and replacing its entire management team is simply breathtaking.
In the past six months the US government has brokered a leveraged buyout (Bear Stearns), become the world's largest mortgage issuer through the nationalization of Fannie Mae and Freddie Mac, and acted as M&A advisor to Bank of America and Merrill Lynch.
Now it's in the commercial insurance business!
No doubt automakers in Detroit have AIG-envy today.
This morning, General Motors announced that it would no longer advertise its cars on Facebook. The announcement comes a day before the Facebook IPO and casts a shadow on Facebook's business model. GM said it would continue to support its page and user community on Facebook, but that ads just weren't effective in helping consumers make car-buying decisions. Ford jumped on the announcement to say it would continue to buy ads on Facebook and that Social Media requires a consistent commitment to innovation and community development.
Maybe. But I think GM's decision does illustrate a key problem for Facebook and Twitter - the revenue model. Social Media grew up without dependencies on ad-based revenue. On Facebook, you aren't a customer. You are a product, and it's your likes, dislikes, friends, photos, videos, and content that generate value. Selling products to products via advertising is hard. Members don't use Social Media to go shopping. There's no commerce platform there. They use it to be social. There are so many other outlets that are more effective for advertising than Social Media.
So how should Facebook and Twitter make money? My idea: make it collective. The value is in the data.
1. Make terms and conditions explicit that every member owns their own data via copyright. This does two positive things.
A. It insulates Facebook and Twitter from liability for the crazy, infringing, and potentially libelous posts of their members by allowing them to claim that they are conduits of content rather than publishers or distributors.
B. Copyright establishes the rights to royalties for content created and posted on their networks, which enables the next step.
2. Allow members to opt-in to Big Data analysis by Social Media partners and intermediaries.
3. Charge Social Media for Big Data Searches by data volume.
4. Pay members royalties every time their data is used in Big Data Searches.
This simple model creates powerful incentives that transform members from products into social network content providers with an economic interest in posting content that will be used in Big Data searches. It establishes data property rights that insulate Facebook and Twitter from vouching for the content on their networks. Members will also discover that providing high-quality data that companies want to search means more royalties, so the system will produce better behavior. And it creates a 2-tier royalty distribution model that pays Facebook and Twitter handsome revenue, changes online advertising, and will push every other content aggregator to change too.
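The four steps above can be sketched as a toy settlement function. This is a hypothetical illustration of the two-tier royalty model, not any real Facebook or Twitter API; the rates, function names, and member data are all invented assumptions.

```python
# Hypothetical sketch of the 2-tier royalty model described above.
# PLATFORM_SHARE and ROYALTY_PER_USE are invented illustrative rates.

PLATFORM_SHARE = 0.30     # assumed cut retained by the network (tier 1)
ROYALTY_PER_USE = 0.001   # assumed fee per member record used in a search

def settle_search(records_used_by_member, price_per_record=ROYALTY_PER_USE):
    """Split one Big Data search's fees between the platform and members."""
    payouts = {}
    platform_revenue = 0.0
    for member, n_records in records_used_by_member.items():
        gross = n_records * price_per_record
        platform_revenue += gross * PLATFORM_SHARE          # platform tier
        payouts[member] = gross * (1 - PLATFORM_SHARE)      # member tier
    return platform_revenue, payouts

# One search that touched 10,000 of Alice's records and 2,500 of Bob's:
platform, members = settle_search({"alice": 10_000, "bob": 2_500})
```

The incentive falls out of the arithmetic: members whose data is searched more often earn more, and the platform earns on every search, which is the point of the model.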
Of course, Facebook and Twitter will have to sort out who's a person and who's a bot, and will have to provide content creation tutorials to help users/customers create content that has value by sharing the top 100 Big Data queries and sample results.
But this Business Model has something for everyone and is a true win:win. It benefits customers by establishing data property rights and royalties for content. It benefits organizations who want to do Big Data searches by providing ever richer data streams of high quality and availability. And it benefits Facebook, Twitter, and their investors by providing an enormous profit making engine selling Data.
The Data is the Value. The more there is, the more valuable it becomes. Pay your customers to create higher quality data and charge your partners to use it. It's a simple Business Model.
Dick Costolo - @dickc - and Mark Zuckerberg - @finkd - are you listening?
Amazon has some Information Governance problems.
A week ago, I placed a large order of Nerf Guns that Amazon keeps refusing to process. My kids love these things and I guess some adults I know kind of like them too. We're all heading out to my sister's house in Point Reyes for Christmas this year and a combined Family Reunion. Both my sisters will be there with 7 kids in a medium-sized house for four days and the best we could all come up with to keep them occupied was felt-warfare among the tall grasses of the Inverness wetlands.
If only Amazon would cooperate.
I have no desire to carry ten Nerf weapons on trans-continental jets. I can see explaining to turgid DHS officials why a family of four needs automatic Nerf cannons with heat-seeking velcro missiles. So I prefer to order them online and let Fedex make the arms shipments discreetly.
But my order is stuck in Amazon credit card limbo. It seems that the last time I bought something and shipped it to my sister instead of my home address I used a credit card which expired in May. Problem is, Amazon somehow associates that credit card with my sister's mailing address. I've deleted it in my online account, and I buy things from them all the time with the current card, but Amazon hasn't purged this relationship.
From an Information Governance perspective, what kind of problem is this? It is of course a Data Quality issue, but normal DQ tools might have a hard time with rules matching in this case. My gut is that Amazon just doesn't sweep and purge their accounts for outdated credit cards. It's pretty frustrating as a consumer, especially during these busy days. Some records management would solve that problem, but by now the point is moot for me. I just don't have the time or patience to bother fixing their sloppy Information Governance issues.
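The sweep-and-purge idea is simple enough to sketch. This is a minimal illustration of the records-management fix suggested above, assuming a toy schema; the field names and structures are invented and bear no relation to Amazon's actual systems.

```python
# Illustrative records-management sweep: purge expired cards and any
# card-to-address associations that reference them. The schema here
# (id/expires/card_id/address fields) is an invented assumption.

from datetime import date

def sweep_expired_cards(cards, address_links, today=None):
    """Remove expired cards and the address links that dangle from them."""
    today = today or date.today()
    valid = {c["id"]: c for c in cards if c["expires"] >= today}
    # Drop any address association that points at a purged card, so no
    # stale card/address relationship survives the sweep.
    clean_links = [l for l in address_links if l["card_id"] in valid]
    return list(valid.values()), clean_links

cards = [
    {"id": "old", "expires": date(2008, 5, 31)},   # the card that expired in May
    {"id": "new", "expires": date(2012, 12, 31)},
]
links = [{"card_id": "old", "address": "sister"},
         {"card_id": "new", "address": "home"}]
kept, kept_links = sweep_expired_cards(cards, links, today=date(2008, 12, 1))
```

Run periodically, a job like this would have severed the expired-card/address association before it could block a new order.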
Fortunately, Walmart sells Nerf Guns too...
Last August seems like a long time ago. There have been so many miles traveled and so many purchases made since, that what I did on August 6th is beyond my recall. Amazon doesn't seem afflicted with the same memory challenges because yesterday I received a note from Amazon "customer service" notifying me that they would soon charge me for the 4 furnace filters I had not returned from my August 6th purchase. Sadly for Amazon, this was the last straw in an otherwise congenial relationship and the story, in all its data details, does illustrate the enormous challenges firms face today in understanding what their data means across many disparate systems.
As you can guess, I purchased some furnace filters from Amazon on August 6, 2009. They come 4 in a box, they are 25x20x5, and at that size the box costs $122 with shipping. Ordered on the 6th, a big box arrived on the 12th. Once opened, the box held only one filter. A quick check of my account showed that I had indeed ordered and paid for 4, but only 1 was sent. Try to call Amazon and you will find they have no number. In fact, trying to communicate with them is very difficult. An email form is buried in their site. I found it and sent a note asking why they sent 1 when 4 were requested. Would the other three arrive shortly? I needed one urgently, but we use 4 a year so a full shipment was required.
Within a couple of days an apologetic note from Amazon was returned. They offered several attractive options:
1. Return the filter for a full refund.
2. Keep the filter and get a full refund.
3. Get a replacement shipment of 4 filters at no additional cost.
The first filter was already doing service in the furnace, so taking it out didn't seem convenient. I still needed more filters for the winter, and getting 4 more at no additional cost was very appealing, so I opted for that choice. In two weeks another large box arrived with the four filters. At this point I was very happy with myself and with Amazon and thought the matter closed.
Three weeks later, Amazon sent a strange note asking me to return the filters I had purchased. I didn't read it fully as I guessed it was a mistake and ignored it. A week later, Amazon debited $122 from my account for no apparent reason and when I checked my email I found another note explaining that since they had not received my RMA they would now debit my account for the purchased filters. I immediately sent Amazon a note asking why I was now being charged twice for the same filters. A day later, Amazon responded with an apology and claimed that they had two different customer service systems and the other one had somehow malfunctioned. Of course, this wasn't a straightforward transaction. I didn't just buy something and have it shipped. Amazon had made a shipping error and offered a non-standard solution that somehow didn't conform to their accounting system. Unfortunately, Amazon made their mistake my problem.
I thought the matter closed until yesterday, when I received another reminder note to send back my RMA Filters. I wonder what Amazon now wants with 2 dirty 6-month old furnace filters. I'd be happy to send them back along with all the household dust I can find because after this latest "customer service" snafu, furnace dust is about all that Amazon will be getting from me in the future.
The internet is the world's greatest strip mall. If one store can't meet your needs, there are others who will.
Last night, I was one of two panelists at a Global Association of Risk Professionals (GARP) symposium on Systemic Risk at Fordham Business School in New York. We were to be a moderator with three panelists, but one canceled at the last minute, presumably to stay home and watch the Yankees lose to the Phillies last night. The room was on the 12th floor in a mid-60's squat tower accessible from two elevators among a bank of six in the stone cold open and office-like lobby. Twelve is the top floor in the building, with a Rockefeller penthouse atmosphere. Black marble floors, mahogany paneling, subdued sixties swank.
The symposium room was longer than wide, seated classroom-style for one hundred in three neat blocks. We panelists were paired at a white-clothed table with microphones we didn't need. The moderator introduced us both: the NYU Business School professor and the IBM Data Governance guy. The audience looked half-asleep, and the first question rolled out on the table, "What is Systemic Risk?" Our gracious moderator had prepared a raft of intelligent questions for us that evening, but we would only get through two in the brief hour we had.
What is Systemic Risk? The professor told us it was the result of exogenous market conditions that created upper atmospheric bubbles in complex derivative instruments capable of devastating global economies. It could be measured in the up and down-swing of aggregate equity performance and controlled through the central banks he currently advises. He saw Systemic Risk as a macro-economic phenomenon, the product of weak government regulation, greed on Wall Street, outrageous compensation packages, and unnecessary complexity in financial markets.
Before the event, I wasn't quite sure what I was going to talk about. It was a hectic Monday full of ten conference calls on twenty different topics. I left late, had traffic on the Grand Central, got lost at Lincoln Center looking for parking, and there was no coffee when I arrived. I'm not an evening person un-caffeinated, and perhaps not the best morning person in the same condition. But droll media babble passed as tenured professorial wisdom will rouse me on the sleepiest of days.
Systemic Risk is the probability of loss to a system. It is not a single thing that can be calculated; it is a series of things that result in a loss event, with causality and impact. Systemic Risk is not only about macro-economic catastrophe, because to say so is to say that we are not involved in Systemic Risk except as victims. And that ain't true. Insofar as all of us, The People, are members of communities, parties, religions, nations, and environments, we are part of a System. We are inter-related, inter-dependent, capable of causality, errors and omissions, losses and claims. Each incremental failure can cascade and result in systemic exposure.
The Credit Crisis is the result of a series of public policy mistakes from 1999 to 2006 that encouraged bad business practices at many different stages of the mortgage underwriting and securitization process. These were incremental failures that contributed to loss events that destroyed parts of the economic systems upon which markets rely. The lesson to humanity from this experience is that We The People are all members of SYSTEMS large and small that can fail as a result of incremental policy mistakes. Actuarial Science has for too long focused on the probabilities of contained loss events.
My body is a SYSTEM and Cancer is a systemic risk to me. It causes a chain of events which can result in organ failure and death. Your company is a system, and bankruptcy is a systemic loss event. If bees die, plants won't be pollinated, and that can trigger a systemic risk to our ecoSYSTEM. The BBC reports (http://news.bbc.co.uk/2/hi/science/nature/8338880.stm) that record numbers of plants, mammals, and amphibians are under threat of extinction. This is a systemic risk. When entire species of frogs in remote places like Tanzania become extinct in the wild, humans should take note - this incremental failure is closer to your role in the food chain than you may think.
Every System has risk. Every person in every system has a role.
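The claim that incremental failures compound into systemic exposure can be illustrated with one line of probability. This is a toy sketch under a strong simplifying assumption (independent, identically likely component failures); the 1% rate and 100-component count are invented for illustration.

```python
# Toy illustration: small per-component failure probabilities compound
# into large system-level exposure. Assumes independent failures, which
# understates real systems where failures cascade and correlate.

def p_any_failure(p_each, n_components):
    """P(at least one of n independent components fails)."""
    return 1 - (1 - p_each) ** n_components

# A 1% failure chance at each of 100 steps in, say, a mortgage
# underwriting and securitization chain:
risk = p_any_failure(0.01, 100)   # roughly a 63% chance of some failure
```

Even this independence assumption, which is generous to the system, turns a 1-in-100 incremental failure into better-than-even odds of a loss event somewhere in the chain; correlation and cascading make the real picture worse.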
If we accept the gossip-press gospel that the Credit Crisis is purely the result of greed on Wall Street, and can only be fixed by wise regulators in Washington, shame on all of us for missing the opportunity to internalize the economic externalities. It is not an academic exercise to study the risk in every system large and small. Systemic Risk is a real-world imperative for all of us.
In today's American Banker,
Allan Mendelowitz and John Liechty wrote a viewpoint article calling for the creation of a National Institute of Finance to be a data collection point and archive of financial information. They write:
"As a solution, we propose the creation of a National Institute of Finance, which would serve as a national data archive and think tank for the financial regulatory community. The mission would be to provide regulators the data, software, computing power and analytic capacity they need to oversee and safeguard the health of the modern financial system.
A centerpiece of the NIF would be a Federal Finance Data Center. Participants in the U.S. financial and insurance markets would be required to report all positions to the center at regular intervals. Crucially, these reports would include both exchange-traded and over-the-counter contracts, complete with counterparty relationships. The center would let regulators assess systemwide contagion and concentration risks, perform stress tests, including the impact of the failure of a large institution, and hence make better-informed decisions in times of crisis.
Given the sensitivity of this data to financial institutions and, in broad terms, America, top-level data security would be essential. Financial institutions would have serious, legitimate concerns about third parties learning details about their positions. But the military and national security communities provide functioning models for handling such problems. More to the point, there is no alternative. Without detailed information on transactions, positions and counterparty relationships, any attempt to identify systemically important institutions is guesswork.
A second major role of the NIF would be to maintain a research center, including a National Risk Lab with skilled staff and facilities analogous to those used by the quants on Wall Street. In the spirit of the National Academy of Science, the NIF would act as a clearing house for ideas and problems between industry and a broader research community — identifying essential problems, sponsoring the sustained research efforts needed to solve these problems and integrating solutions into a rigorously tested, well-understood set of models."
If you read my last blog, you might think I would agree with this idea, but for some practical and philosophical reasons I do not.
First, the practical. The government already collects quite a lot of financial data from many regulated institutions that it does not use well, share between agencies, or compare across companies. But the data is there, and building an effective information architecture to better leverage, analyze, share, and compare that data is cheaper, easier, and more effective than building a new academic think tank.
Second, the philosophical. In a democracy, a single aggregation point of information is a single point of control. That control can be abused like any other power, and the information can be restricted or changed based on the political beliefs of whoever governs the data center. And since our government is controlled by politicians, we don't want that.
What we want is multiple aggregation points and lots of access-restricted, and anonymized data sharing across multiple agencies. We want information innovation to grow in different parts of the government given different levels of relative maturity and interest, budget and skills. We want organizational freedom to invent new ways to collect and use information collected through reporting, audits, hearings, and investigations.
We want our regulatory authorities to develop their own best practices and share them across government, and that kind of innovative environment can only thrive when there are multiple sources of data collection, common entity registration, and empowering technology that makes sharing, analysis and interpretation easy.
Building this isn't difficult, and technology can overcome the organizational obstacles that prevent data sharing among agencies today. But aggregating data responsibilities in a new organizational structure will only create a new fiefdom for someone else to control.
I think we can do better.
I use Big Data every day. I don't have Hadoop, a Data Warehouse, ETL, or a big analytical engine. But I use search engines, which are indexes of web pages from around the world, to discover related and unrelated facts. I use Twitter and LinkedIn, which aggregate the ideas of millions of people, to understand the sentiments of the people I follow. And I make decisions, and mistakes, with this information every day.
We all do. And in that context, we are all Big Data users and abusers, and we can identify with larger enterprises that are also confronting vast streams of information from every corner of the globe, created by individuals, communities, corporations, and governments. We as individuals never had industrial data management applications. We never had Data Governance Councils, Stewards, or Data Management professionals. So we've been selecting data streams first and using the ultimate analytical engine - our brains - to integrate that information, glean trends, and make decisions.
What's new about Big Data is that large enterprises are copying the information processes that We The People use every day. They are selecting streams first, aggregating them second, determining application third, making decisions fourth. Judging consequences of decisions... later, if at all. Organizations around the world are deciding to retain information much longer because there is a belief that latent, slow developing, trends may lie dormant in that information that can be discovered much later.
But with vast volumes of information and long retention cycles, high-velocity decision-making has the potential to do as much enormous damage as enormous good. And we know from experience that decision-making is often influenced by cyclical trends, personal prejudice, and national dogma. Counter-cyclical views can be marginalized. Whistle-blowers can be fired.
But Big Data also offers an historic opportunity for Data Management. This industry for too long has been seen as back-office archivists recording the deeds and attributes of heroic business leadership in dingy databases in large glass-house mainframes and data warehouses. They have taken back seats to application developers and business analysts who first and foremost collect the requirements of business users for new applications, features, and functions.
But Big Data changes all of that. It makes information sources and streams more important than applications, features, and functions. It changes the emphasis in value creation and puts the onus on Information Management to produce better sources and streams, easier aggregation and integration, manufacturing information products any user can leverage in any application they wish.
It's large enterprises automating the way We The People use online information every day, and the power and consequences of this paradigm shift are profound and potentially quite scary.
We need Information Governance over every part of Big Data to assure that organizations can answer these fundamental questions:
1. Can we trust our sources?
2. Do we know where they came from?
3. How do we verify the authenticity of the information?
4. Can we verify how the information will be used?
5. What decision options do we have?
6. What is the context for each decision?
7. Can we simulate the decisions and understand the consequences?
8. Will we record the consequences and use that information to improve our Big Data information gathering, context, analysis, and decision-making processes?
9. How will we protect all of our sources, our processes, and our decisions from theft and corruption?
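The nine questions above amount to a governance checklist that could be attached to every data source. This is a minimal sketch of how such a checklist might be operationalized, assuming an invented provenance-record schema; none of the field names come from any real product or standard.

```python
# Minimal sketch: attach a provenance record to each Big Data source and
# report which governance questions it fails to answer. Field names are
# invented assumptions for illustration.

def governance_gaps(source):
    """Return the list of unanswered governance questions for a source."""
    checks = {
        "trusted source": source.get("trusted", False),
        "known origin": bool(source.get("origin")),
        "authenticity verified": source.get("verified", False),
        "intended use recorded": bool(source.get("intended_use")),
        "consequences logged": source.get("logging_enabled", False),
    }
    return [name for name, ok in checks.items() if not ok]

# A feed with known origin and use, but no authenticity check or logging:
feed = {"origin": "partner-api", "trusted": True, "intended_use": "churn model"}
gaps = governance_gaps(feed)
```

A gate like this doesn't answer the hard questions, but it forces every source to carry its answers with it, which is where governance of Big Data has to start.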
This morning, the Information Governance Community began discussing these issues in a global teleconference moderated by IDC. We have just scratched the surface of these issues and have much more to discuss. We have agreed to create a new category - Big Data - in our Maturity Model to provide organizations with new methods to benchmark their Big Data Governance maturity. But we also agreed that our existing Maturity Model categories also apply and we need to update them to include Big Data issues and questions.
I believe this is critical work. Big Data is an enormous opportunity to make information the arbiter of value creation in the Information Age. But it is also an enormous risk because the same solutions can be used to make dangerous and destructive decision-making a high volume, high velocity science.
Every new technology can be used for both good and evil. Join the Information Governance Community to help ensure Big Data serves the best possible uses.
On Monday, October 19, 1987, the NYSE Dow Jones Industrial Average tumbled 508 points. That was 22% of the DOW then, and while today's DOW loss of 504 points is less than 5%, it is nevertheless a major market correction.
Back in 1987, I was a professional liability insurance underwriter at Continental Insurance on Maiden Lane and my best friend worked at Goldman Sachs in their Mortgage Backed Securities division. We spent the day on the phone calling each other as the DOW sank to new levels. We were both History Majors in college and we knew then that we were participating in a history-making event. At the close of the trading day, we rushed to Fraunces Tavern and ordered dinner while news crews circled interviewing everyone about the day's stunning losses. Several journalists asked us for interviews and we declined.
I've never forgotten that day. It took two years for the stock market to recover to the level at the opening bell. And today, September 15, 2008, as we watch Wall Street vanguards Merrill Lynch and Lehman Brothers crumble under a massive de-leveraging that is shaking the foundations of the post-WWII economic order, I see again we are in the midst of an important historical moment.
IBM Data Governance Council Leads XBRL Initiative to Create New Reporting Standards for Risk Measurement
ARMONK, NY - 15 Dec 2008: In a move to provide businesses worldwide with consistent tools for measuring aggregate risk in the financial world and provide a real-time view of market exposure, the IBM (NYSE: IBM) Data Governance Council is seeking input from banks and financial institutions, corporations, vendors and regulators to create a standards-based approach to risk reporting.
Today, organizations have inconsistent methods and vague language for disclosing operational, market, and credit risk. These inconsistencies make regulatory oversight both extremely difficult and complex. The first step to enabling new transparency of risk and exposures in the financial services industry is semantic clarity -- a precise method for consistently describing and reporting risk across all organizations. Such transparency could provide a new macro-economic tool and greater fiscal accountability for regulators, investors and Central Banks worldwide, making it easier to identify toxic assets on the books, mitigate fraud, help prevent wide scale fiscal crisis and rebuild confidence in financial systems.
The IBM Data Governance Council is exploring the use of Extensible Business Reporting Language (XBRL), a software language for describing business terms in financial reports, for risk reporting. XBRL could provide a non-proprietary way of reporting risk that could potentially be applied worldwide. It is already widely used for financial reporting throughout Europe, Australia and Japan. The widespread use of this standard ensures adequate skills and understanding among firms and regulators.
"Creating a risk taxonomy using XBRL will provide a vocabulary and a common language allowing everyone to understand what risk means, and that's the first step in making it easier to calculate and report," said Steve Adler, chairman of the IBM Data Governance Council. "When we have semantic clarity around the way organizations describe risk, incidents, events, losses, claims, exposures, forecasts and reserves, it gets easier to aggregate loss information, analyze it with standard actuarial methods, compare past exposures to present conditions and opportunities, and forecast potential outcomes."
According to the Council, an XBRL Taxonomy of Risk could serve as a fundamental building block to enable interoperability and standard practices in measuring risk worldwide. Such standards could potentially enable Central Banks to manage vast databases of loss history and trend analyses that could better inform policymakers and member banks helping to minimize risk and produce better returns.
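To make the idea of machine-readable risk disclosure concrete, here is a toy sketch of tagging exposures with XML elements, which is the family of technology XBRL builds on. The element names and structure below are invented for illustration only and are NOT part of any actual XBRL taxonomy or the Council's proposed specification.

```python
# Toy illustration of machine-readable risk tagging in the spirit of
# XBRL. The <riskReport>/<exposure> vocabulary is invented; a real XBRL
# instance would reference a published taxonomy and context/unit schemas.

import xml.etree.ElementTree as ET

def risk_report(entity, exposures):
    """Serialize a dict of named exposures as a tagged XML fragment."""
    root = ET.Element("riskReport", {"entity": entity})
    for name, amount in exposures.items():
        el = ET.SubElement(root, "exposure", {"type": name, "unit": "USD"})
        el.text = str(amount)
    return ET.tostring(root, encoding="unicode")

xml = risk_report("ExampleBank", {"credit": 125_000_000, "market": 40_000_000})
```

The point of the semantic-clarity argument is visible even in this toy: once every firm tags "credit exposure" the same way, aggregation across firms becomes a parsing exercise rather than a reconciliation project.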
"XBRL is gaining widespread adoption among global capital markets, banking and securities regulators, and plays an important role in market reforms by contributing to transparency and process enhancements," said Anthony T. Fragnito, chief executive officer, XBRL International, Inc. "XBRL International is pleased to be a part of this important initiative by the IBM Data Governance Council."
The Council is immediately seeking proposals and discussion on this topic to help drive a year-long effort to create a proposed specification for XBRL for risk reporting. Initial discussions about this specification will take place February 26-27, 2009 in New York City at a meeting to be attended by the Enterprise Data Management Council, the Financial Services Technology Consortium, XBRL International, XBRL.US, and U.S. Securities and Exchange Commission staff.
"This is an opportunity for both improving the effectiveness of the risk management function and the quality of reports," said Dan Schutzer, executive director of Financial Services Technology Consortium. "XBRL for risk reporting also holds the potential for cost-reduction through the development of consistent, clear and comprehensive reporting standards."
The IBM Data Governance Council is a group of 50 global companies, including Abbott Labs, American Express, Bank of America, Bank of Tokyo-Mitsubishi UFJ, Ltd, Bank of Montreal, Bell Canada, Citibank, Deutsche Bank, Discover Financial, Kasikornbank, MasterCard, Nordea Bank, Wachovia, and the World Bank, among others, that have pioneered best practices around risk assessment and data governance to help the business world take a more disciplined approach to how companies handle data.
Data governance helps organizations govern appropriate use of and access to critical information such as customer information, financial details and unstructured content, measuring and reporting information quality and risk to enhance value and mitigate exposures. IBM's work in this area supports and furthers the company's Information on Demand strategy, which has delivered results through consistent earnings growth, hundreds of new customer wins, strategic acquisitions and industry-first software offerings.
For more information on the Data Governance Council, visit http://www-306.ibm.com/software/tivoli/governance/servicemanagement/data-governance.html
For more information on IBM, visit http://www.ibm.com/think
Holli Haswell, IBM Media Relations, firstname.lastname@example.org
Data Governance isn't a new word for the same old stuff. If your organization isn't achieving sustainable results from your data and information management projects, Data Governance can help. But you'll need to do more than just adopt a new name. You'll need to do something far harder - you will need to change how you work and how your IT systems work.
This isn't easy. Best practices, Maturity Models, and Starter's guides can help. But at the end of the day if you don't change, everything stays the same and the results are desultory and predictable.
I meet a lot of people who ask me about the Data Governance products or roadmaps organizations should buy. The best products you can buy are the ones that tell you what you don't already know. To govern effectively, you need to know what's going on in context as it is happening, what it means, and how it relates to other things. Governance without awareness is a dictatorship of ignorance - people make decisions in their comfort zones because they don't know any better, and don't know that they don't know any better either.
OK, nice words Adler but what does that really look like? It looks like Android.
Last week I switched from an iPhone 3GS to a Samsung Galaxy S. There were lots of reasons behind the switch, but a primary motivator for me is that Android is based on Linux, which in turn is based on the collective contributions of a global community coordinating their ideas for the common good. I like that, and I like Android because as a mobile operating system it integrates lots of disparate applications to provide me with useful information when I need to know it.
Example: Boingo. Boingo is a wifi service that works in some 80,000 airports, hotels, and other hotspots around the world. You pay Boingo a monthly fee to connect for "free" in these hotspots. Very handy for a global traveler. On the iPhone, you have an app, but you first have to connect the iPhone to a local hotspot and then see if Boingo works there. This is an example of the old, industrial model of application development: a single application developed for a single purpose that the operator has to initialize.
In Android, Boingo is integrated into the wifi backbone of the phone and the information notification system. As I drive around my neighborhood, the phone alerts me automatically when I enter a Boingo hotspot and can connect. It tells me what I don't know and helps me take advantage of services I may need. It is intelligent and by sharing information it offers me new opportunities. It gives me content and context, when I know I need it and when I don't.
That's the point of Data Governance. You need to learn what you don't know and help others benefit from that information. You need to enable and empower new information-sharing technologies and methodologies. Include the excluded, bring in the outliers, benefit from diverse points of view, and find new solutions to age-old problems that have befuddled your organization for decades. You can't warm over the same old stuff and call it Data Governance. You can't govern data, information, or knowledge, because these things are inert.
But you can govern people and empower their decision-making with trusted information and insight about what's going on every day that they don't already know. Because with knowledge, human beings can change their behavior and that's what Data Governance is all about - changing organizational behavior.
This isn't a small thing. This is a very big thing. It's about the influence of Information on organizational structures, and how corporations change the way they work in an Information-driven transformation. This change isn't coming from within. We aren't transforming organizations with information. My God, if that were the case we would have succeeded decades ago with the first mainframes. What's happening today is that our organizations are being confronted by billions of new sources of autonomous information production that we don't control. This is the mass of humanity communicating with each other over the Internet with the speed of now and the intimacy of a small village.
We aren't transforming with information; we are being transformed by information, and this is a wave of change we are either riding or drowning in.
Newspapers, magazines, music, and movie production are already being replaced by global and autonomous information distribution. Not everywhere, not all at once. But even the strongest brands feel the pressure and are adapting to change. In the beginning they will change their models of distribution. Soon after, they will change their models of work.
Industrial models of organization - Thomas Gradgrind and the repetitive drudgery of assembly-line work, the process controls and the enslaving stopwatch measurements of efficiency - these last vestiges of the way we worked in the late 19th and 20th Centuries hold on in our organizations like a virus resisting antibiotics. There are power structures invested in these models, and they will continue to hold on for some time yet to come.
But you need to ask yourself: where do you want to be working - in the past or in the future? Riding on the wave or under it?
Change isn't just a word. Data Governance isn't an option.
Fog. I thought we were still in the clouds as the plane's wheels hit the ground like a fighter jet landing on a carrier deck. Visibility was maybe three feet, and the fog was so dense the plane parked on the tarmac and we were brought to the terminal in bright yellow buses. Kastrup is so efficient. Clean walnut parquet greets you as you climb up to the neatly organized passport control, where kind border control guards actually smile when you arrive at the window. At JFK they growl at you and treat you like a criminal begging for mercy to enter a dingy airport that feels more like a mid-'50s bowling alley. In Copenhagen, the baggage is at the carousel when you arrive, and the airport feels like a luxury shopping arcade. Mercedes taxis whisk you into the city on a sleepy Sunday when most of the city is just having brunch.
My hotel room isn't ready when I arrive, but I'm happy to have some hours to relax in Vesterbro and wander the empty streets as the fog burns off into early-autumn, sun-drenched splendor. The grass is green, the trees are yellow and red, the sky is bright blue. It takes me two hours to adjust and remember the life I led when I called this city home for five years in the 1990's. Bicycles whiz by on their own lanes next to the sidewalks. Late 19th Century apartment buildings hide hip modern interiors. Small, heavily taxed cars conceal a standard of living that is the envy of most other nations. What a remarkable governance experiment. High personal income taxes (top rate is 52%), VAT (25%), car tax (220%), and all manner of other taxes are balanced by very low corporate tax rates (26%) and a free labor market, yielding universal healthcare, excellent pensions, and free education through the PhD. This country is a net oil exporter, thanks to lucrative North Sea oil platforms, yet produces 60% of its energy needs from wind, solar, and geothermal.
While America watched its bridges and roads deteriorate, Denmark built huge public works projects extending road and rail bridges to Sweden, Germany, and from Jutland to Zealand. They unified their rail system in Copenhagen, and deployed high speed rail to Hamburg and Stockholm. They made Kastrup into the logistical hub of Scandinavia, linking the Nordic countries to the EU mainland. It is a remarkable little country, and this week the weather is also wonderful.
I'm here to speak at a conference - IBM Software Group Day. I'm in a Global Services track and have 35 minutes to go through some dense Data Governance content. The conference site is a mile from my hotel, and I love the walk through Vesterbro, along the many sleazy streets west of the main train station that today feel quite a bit better than they did a decade ago when I lived nearby. The conference venue is an old slaughterhouse, now filled with 1200 IBM customers and some fantastic artwork on the walls. The conference organization is fantastic, and everything seems to run as efficiently as the rest of Denmark. My session is just after lunch, and my slides suffer some strange PowerPoint virus that mixes them up just before delivery. But the audience is wonderful, and we have a great time going through the discussion. Somehow I finish on time, which is rare, and get some great questions afterward.
Enclosed is what I presented. It's similar to the SIMposium 2010 deck, with two new use cases. They worked well in Copenhagen, and I have plans for something even better at IOD:
Copenhagen SWG Day Presentation
The rest of the week is full of customer meetings, but every day I'm here I'm reminded of the life I once lived in Denmark and the part of me that lies dormant when I'm not here. That's the paradox of international travel: you learn great things not only about the places you visit but also about yourself - things that are only evident when you are there again.
Please join us for an international crowdsourcing experience!
In May 2006, the IBM Data Governance Council used poster board and sticky notes in an oak-paneled room in the Chateau Frontenac in Quebec City to create the categories, elements, and levels in the first version of the Maturity Model. About 35 people participated in that process in Quebec, and perhaps another 50 more in subsequent meetings.
On September 14-16 2010, the Council will use social networking crowdsourcing technology to include a global community in a discussion about the Maturity Model - Live!
Brainstorming ideas in the room will be transcribed immediately into the topics on www.infogovcommunity.com.
Suggestions and comments from practitioners all around the world will be relayed to the participants in the room.
Of course, this venue is awesome, and there is no substitute for live, face-to-face communication. But if you can't travel to Tamaya and spend three fabulous days with The Council in the Desert, you can still tune into the action by going to infogovcommunity.com.
In the room or in Rangoon, you can watch the ideas flow and chime in live or tune in later and add your views.
Either way, what you contribute will impact the community and change the Maturity Model. Synchronous or Asynchronous, this meeting is the beginning of a global dialog on Data Governance Maturity.
What we do in the room will make a difference. And what you contribute from your own room will make a difference.
Please join us in Tamaya or online at www.infogovcommunity.com to capture the best ideas from the Global Information Governance Community, contributed for the Community and published in an open-sourced IBM Data Governance Council Maturity Model.
Steven B. Adler
IBM Data Governance Council
DataGovernor 120000GKJR 2,574 Views
A funny thing happened to me on the Long Island Expressway yesterday. I was driving from my home on Long Island to a meeting in Springfield, MA, and got on the Expressway at Exit 36 at about 8:20 in the morning. My navigation system showed the LIE was full of heavy traffic (orange and red on the map) up to the Clearview Expressway, which runs north-south between the Throgs Neck Bridge and the Grand Central Parkway. My oh-so-clever NAV system advised me to exit the Expressway at Community Drive (Exit 34) and take Northern Blvd to the bridge. The exit already had a line of cars looking to get off, but I was in the far left lane and couldn't slide over in time to make the queue.
Next, my NAV voice asked me to get off at Little Neck Parkway (Exit 33), which had an even longer line of cars backed up in the far right lane. Curiously, the Expressway was clear and moving briskly while the exit ramp was bumper to bumper. I ignored the NAV and went ahead. At the Cross Island Parkway, which has a double exit ramp, the cars were backed up a quarter of a mile, yet the Expressway was moving fast.
I looked into the windows of the cars waiting in the exit queue and noticed they all had NAV systems. Maybe their NAV systems showed the Expressway was full of traffic and urged the drivers to exit and take the same detour over Northern Blvd. I chuckled, drove on at full speed to the Clearview Expressway, made it to the Throgs Neck Bridge with no traffic, and wondered how all those data-driven drivers were doing, snarled in traffic lights on Northern Blvd.
And now I wonder what will happen more broadly when everyone is using the same analytical engines and tools to make the same decisions in their health, financial, and travel lives. Will it lead to overcrowding on exit ramps? Will automated Big Data Navigation produce over-conformist group-think in business? Or will some wild duck contrarians reap huge benefits by ignoring the detours taken by the masses?
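The herding dynamic on that exit ramp can be sketched as a toy model. The Python snippet below is entirely my own illustration with made-up numbers (the congestion formula, capacities, and base times are all hypothetical, not real traffic data); it just shows that when travel time grows with load, identical advice given to everyone makes the "shortcut" slower than the road it routes around.

```python
# Toy herding model (hypothetical numbers): travel time on each
# route grows with the number of cars that choose it. If every
# driver's NAV gives the same advice, they all pile onto the
# detour and make it slower than the expressway they abandoned.

def travel_time(base_minutes, cars, capacity):
    """Simple congestion curve: time rises linearly with load per capacity."""
    return base_minutes * (1 + cars / capacity)

drivers = 1000

# Everyone follows identical advice: all 1000 cars take the detour.
herd = travel_time(base_minutes=20, cars=drivers, capacity=200)

# One contrarian stays on the "congested" expressway, now nearly empty.
contrarian = travel_time(base_minutes=15, cars=1, capacity=800)

print(round(herd), round(contrarian))  # prints: 120 15
```

The contrarian wins precisely because the analytics everyone else trusts have emptied the road; the advice is self-defeating once it is universally followed.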
What do you think?
I'm writing this blog entry in my hotel room on the 14th floor of the Grand Hyatt in Jakarta, Indonesia. Traffic screams by the massive fountain circle outside in a constant torrent of horns. I've been here all of two days. I met a customer in town this morning, and yesterday we drove three hours to meet a customer in Western Java. I've seen rice paddies, jungle, mountains, tea plantations, small villages with ways of life unchanged for centuries, glittering shopping malls with every brand available, fantastic office towers, and levels of luxury unembarrassed by the poverty in every street. It is at once fascinatingly familiar and different at every corner.
This year, I've visited customers in Jakarta, Manila, Tampa, Columbus, Johannesburg, Dallas, Hamburg, Warsaw, San Francisco, New York, Brussels, and Cologne. And everywhere I go I hear the same stories, the same issues, the same needs.
Data Governance is a global market. Everyone is doing it.
Tomorrow I fly to Bangkok, where the Red Shirts have held the government hostage for six weeks. On the edge of a knife, a nation split Red and Yellow, and I'm hosting a Data Governance Workshop for two dozen customers.
The market need is hotter than Red.
If your company doesn't have a program working today, it's a competitive disadvantage.
Don't wait. Just do it.
Recently, I played tennis with my son. At 16, he's tall and lanky like me, but full of boundless energy, and I have to play smart to keep up with him. I taught him most of what he knows in tennis and we both play at the same level - though I do enjoy when he wins. But on this day, there was no winning or losing. Our rallies were endless. We exchanged volleys, drops, topspin, and slice. If I won a point, he came back and won the next. There was no mercy and no letup. At one point, he sliced a ball low to my mid-court forehand and I had to rush from the backhand side of the court across to reach it. I'm not as fast as I once was, but on this day I crossed the court with speed. As I got to the ball and lined up a chip drop, I looked up and found that my intrepid son had already anticipated that move and was rushing to the net to cut me off. I stopped short and just laughed. I said, "You know what I'm going to do next, don't you," and he said, "Like, yeah, I know all your shots." That happens when you play with your son, because we know each other so well.
We played out the rest of the match, and afterward I thought about that laugh we shared at the net as a metaphor for much of what I've learned about Data Governance, risk measurement, the financial crisis, and the challenges of information and knowledge. You see, people are best at anticipating what they expect - especially in situations that breed familiarity. That's the reason Value at Risk (VaR) was such a seductively attractive formula: in a largely pro-cyclical business culture, a formula that helps you anticipate what you expect (that tomorrow will look mostly like today, yesterday, and the day before) is a winner. People who anticipate other outcomes are either brilliant visionaries who make "discoveries" (the minority) or outliers who make trouble (the majority).
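You can see the seduction in miniature. The sketch below is my own toy version of historical-simulation VaR (illustrative data and a deliberately simplified percentile calculation, not anyone's production risk model): the "risk" estimate is literally a percentile of past returns, so it can never warn about a loss the data has never shown.

```python
# Toy historical-simulation Value at Risk (VaR).
# Illustration only: the risk estimate is just a percentile of PAST
# returns, so the model "anticipates" only what it has already seen.

def historical_var(returns, confidence=0.95):
    """One-day VaR: the loss at the given confidence level,
    estimated purely from the historical return sample."""
    losses = sorted(-r for r in returns)  # losses ascending, worst last
    index = min(int(confidence * len(losses)), len(losses) - 1)
    return losses[index]

# Five calm years of small daily moves (hypothetical data)...
calm_history = [0.001, -0.002, 0.0005, -0.001, 0.002] * 250

print(historical_var(calm_history))  # prints: 0.002  ("low risk!")

# ...say nothing about a crash the sample has never contained.
```

A 0.2% worst-case loss looks reassuring right up until the first day that doesn't resemble the sample, which is exactly the pro-cyclical trap described above.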
I began the year thinking that financial regulatory authorities could make better policy decisions if they had the right data. But I now understand that many of them had the right data in 2005, 2006, and even 2007, but they didn't understand it, chose to ignore it, or lacked the political will to make radical, outlier decisions that would adversely affect many key constituencies.
Hence my conclusion: Data Governance isn't enough. Collecting and aggregating data is an important step, but people need to understand what the data means as information, and that information needs to be communicated widely as knowledge. Not the finite biological knowledge we all have in our brains - the organic translation all of you reading this article are performing right now - but the metaphysical knowledge of a community knowing a common truth about the world so they are prepared to accept a decision to avoid an outcome they did not expect.
I don't care what kind of new Systemic Risk Council gets built at the Federal level of our government, or indeed what kind of new Regulatory Information Architecture is designed to support it. All of that is important, but not as important as the steps people take to disseminate the information, in both raw and interpreted form, to a wide and varied constituency. The more people inside and outside the group who know what the group knows, the better the chance that outliers will interpret things the group will miss. And it is upon those outliers - the ones who anticipate what we don't expect - that crisis prevention most rests.
This last point is the hardest. In the financial crisis, only a few economists like Nouriel Roubini predicted the credit crisis before it began. Most of the other economists predicted it perfectly only in hindsight. But Nouriel was largely dismissed by those economists and the media as "Dr. Doom," the naysayer who saw only the bad while so much good was going on. And that is human nature. If you aren't in the tribe of believers, you are a barbarian, an outsider, who can't be trusted and must be demonized or destroyed.
This is of course very bad for the discovery of unexpected results - unless, of course, you ARE a barbarian trying to hack your way into the group, in which case you should be destroyed. Trusting what you know, where it came from, where it's going, and who's going to know it and do something about it will require new forms of transparency and self-governance. George Orwell wrote about the alternative, and we don't need to follow his example.
Because what we want is Trusted Information that empowers Doubt. Doubt about what information means is essential to effective decision making. And this is where I think we need a new Information Governance discipline - one that focuses on the information needs of Governance as well as the challenges of Governing the use of Information.
That's at least what I learned from my son on the tennis court last week. We'll see what he teaches me today.