On September 20-21, IBM is hosting The Big Data Governance Summit at the Ritz-Carlton Bachelor Gulch in Vail, Colorado. Velocity, Volume, and Variety without Veracity creates Vulnerability.
This event is about Metadata, Stewardship, Security, Privacy, Data
Quality, and Big Data. We can reach to the skies, pull in petabytes of
relational tables, Twitter feeds, video, audio, and documents, but it's
all garbage in and garbage out without Data Governance.
Everyone knows this, and it's our task to do something about it. We have to show
how it can be done – how anyone can build vibrant, dynamic, Big Data
Ecosystems that use common standards, ontologies, and methods to tag
huge volumes of data, index its value and context at high Velocity, and
search across its variety to discover trends with large clusters of
computational power that deliver high Veracity and low Vulnerability.
This is the promise of Big Data Solutions, uniting disparate data sources
across our organizations, our cities, and our planet; leveraging data
sets based on purpose specification; searching for meaning and value
with brute force speed.
I can see this promise. It's within our
grasp. We can bridge our stovepipes of data and non-standard behaviors
into lean, mean, transformation machines that yield incredible insights
and informational power.
But this promise is only in reach with
Data Governance Solutions to provide metadata tagging, standards,
ontologies, purpose-based access protocols, audits, security &
privacy, data quality, discrete retention rules, and new tools and
technologies to automate how we do it.
The purpose of this event
is to explore how we can bring these ideas forward to help the world
adopt Big Data Ecosystems more rapidly, more successfully, and more effectively.
We are meeting at the Ritz-Carlton Bachelor Gulch,
which is the wonderful venue where we first shared the IBM Data
Governance Council Maturity Model with the world in 2007. We will look
at real life examples of firms using Big Data, exploring ecosystems, and
developing standards to model and simulate them.
This meeting is hosted by the IBM Data Governance Council but it is open to all.
Join us as we move Big Data Governance forward.
On Saturday, I took the train from Brussels to Cologne. The train is one of those modern ICEs - sleek, clean, quiet, and fast. The terrain through Belgium is hilly and the tracks pass over rolling fields, deep ravines, and wooded glens. As we neared the German border, the landscape leveled out and the train picked up speed, reaching 200 km/h at one point. And as the small towns whisked by, I couldn't help thinking how magical it is to travel from Belgium to Germany by train with no border crossing and no passport control. It is so simple and easy, and without even a word you pass from one country to another.
This is a marvel of modern Europe, and it reminds me that the last 65 years are the longest period of peace in Central European history. Europeans have somehow, perhaps accidentally, realized a reality about modern warfare that has so far escaped the United States of America - modern war is Dumb Governance. During the same 65-year period, the United States has been involved in five large-scale wars lasting over five years each, six smaller military adventures, and of course one very long Cold War.
If you read my last blog post, you will understand my statement and reasoning that modern war is Dumb Governance. To paraphrase Von Clausewitz, war is the extension of diplomacy by other means. That is, it is an articulation of national policy - the communication of it.
Now back up a minute. If we have a policy that is communicated, according to the principles of Smart Governance it must also have had a decision-making process, some metrics and a business case, hopefully either sustainable or situational goals, and some measurable results that we should care to compare to the goals.
In the old days, back before the Industrial Revolution, it took eight people working on farms to support two people working in cities. That meant that you had to have a lot of arable land and unskilled labor to support those cosmopolitan types in cities who made all the decisions. War then was one means of acquiring more arable land for civilized expansion. If you conquered more territory through war, you could expect to feed more city dwellers who produced more income via trade and crafts, and that made your society wealthier.
In the early Industrial Age, this logic began to wane because industrial capacity isn't only dependent on land and labor. It's also dependent on capital, and capital tends to dry up when tanks cross borders. Of course, natural resources are also important to industrial economies. But warfare tends to be a fairly resource-intensive activity, so gains won on the battlefield can be difficult to hold and the net benefit of acquired resources can be undermined by the resource drain of battle.
In the Information Age, knowledge is power and both intellectual labor and capital flow so freely throughout the world that warfare gains on the battlefield don't provide sustainable balance sheet benefits. In fact, they are a net cost to any society waging war.
Think about it for a minute. On 9/11 the World Trade Center was destroyed by heinous terrorists based in Afghanistan. Immediately, the United States sent 30,000 troops to invade that country. The stated goal of this policy was to protect Americans from terrorism. The measured need for the policy was the attack on 9/11. The policy decision was made by the President of the United States with full support of Congress and the American people. The policy was communicated with 30,000 American troops and a good contingent from international allies.
And the outcome? Eight years later, we are still occupying one of the poorest countries in the world with over 60,000 troops. Afghanistan is not even a real nation in modern terms. It is a tribal collage of small warlord-controlled fiefdoms. Pakistan is barely a modern state, and Afghanistan is 40 years behind Pakistan. Kabul has 4 million people and 95% of them have no running water in their homes. The GDP is only $12.8 billion. It has little agriculture, little industry, few natural resources, and no significant knowledge resources.
The war in Afghanistan has cost US Taxpayers $172 billion to date. That is 13 times the GDP of the entire country. We are spending more each year to wage war than Afghanistan is even worth.
Compare the Outcome to the Goals. From an economic perspective, it's a huge loss.
War today is a net economic loss for any country that wages it. Resource control is simply not worth the costs. The Europeans have figured that out. If only the US could learn the same lesson...before we bankrupt our nation through warfare...
Winter and I have arrived in Warsaw. It is November 9, 2009, twenty years to the day since the Berlin Wall fell, and I am in a gorgeous Hilton Hotel in the city that still has the scars of the 20th Century written in its streets. The trees are bare here, and the temperature hovers around 5°C. A foggy rain shrouds the city. All around this hotel are the scarred foundations and empty lots of the Jewish Ghetto, destroyed in 1943 by the barbaric Nazis.
It's an eerie feeling, but this day is like any in Warsaw. My hotel is full of professional wrestlers and their groupies from a large match nearby. The lobby has men with necks the size of my waist. They are drinking at the bar, flirting with women drawn to the spectacle, and loudly proclaiming their happy personalities.
We remember this day as the end of tyranny in the East, the final chapter in a 50 year book of horror that began in 1939. Lucky those alive today who don't have to remember.
Last week I hosted a Data Governance Executive Breakfast for 20 CIOs in Warsaw, Poland. It was my first trip to the Iron Curtain Capital and I expected a concrete grid of grim apartment complexes and monumental communist office architecture. Instead, I found a lovely city still working hard, and succeeding, at erasing 50 years of Nazi occupation, annihilation, and communist oppression. Warsaw today is a gem of a city, with warm and friendly people, beautiful architecture, an eager business atmosphere, and a deep, historically rich intellectual tradition.
My one day in Warsaw was graced with gorgeous weather and a terrific morning event that combined both Data Governance content and XBRL. My partner in the Breakfast presentation was Michal Pienchofsky from Business Reporting AG, a Data Governance Council Member specializing in XBRL consulting who is based in Warsaw. Michal gave a terrific presentation linking Data Governance goals and structures to XBRL taxonomies, regulatory compliance, and business optimization.
After the event, I met an old family friend who lives in Warsaw. Stacy is the father of my brother-in-law, and in the summer of 1944, at the age of 16, Stacy joined the Warsaw Uprising and fought against the Nazis. It was a valiant and tragic effort that for over two months engaged German units in a bloody campaign to win back the Polish Capital. The effort was largely unassisted by both the Americans and the Soviets - who were actually sitting outside the city some 11 miles away and waited for the Germans to mop up the resistance before liberating what was left of Warsaw - rubble - themselves.
It happened that this summer marked the 65th Anniversary of the Warsaw Uprising, and Stacy took me on an uprising tour of Warsaw, showing me the manhole cover where he entered the sewer to cross the city underground to evade Nazi patrols, the intersection where his Gozdawa Battalion set up a barricade, the churches where Nazi tanks hid in waiting, and many walls where bullet holes and plaques still mark the spots where thousands of Polish civilians were executed by the Nazis in reprisal for the uprising.
We visited the Uprising Museum, which is a fascinating and well-done museum documenting the events of the uprising. They have the B-24 Liberator that the Polish Government-in-Exile used to send supplies to the resistance fighters, replicas of the sewer pipes that you can walk and crawl through to get an idea of what it was like - without the sewage - and many photos detailing the grim battle and the utter destruction of Warsaw afterwards. The Nazis leveled the city after the uprising was crushed as an example to any other nation that wanted to rise up against their tyrannical rule. Not one building, not one facade even, was left standing in the city.
The lovely inner city that one sees in Warsaw today was completely rebuilt by the Communists after the war. I've been to Prague many times, where it is often remarked that the old city was preserved after the war because the Communists didn't have the money to put up new buildings. I think Warsaw demonstrates the lie of that assumption. Communists obviously love good architecture and cultural heritage as much as Capitalists do, because they did a marvelous job restoring Warsaw to some of its pre-war splendor. There are still many sites outside the inner city where scars from WWII are visible. I haven't seen that in other WWII-stricken cities, like Hamburg, which was 80% destroyed by Allied bombs. Just across the street from the Hilton Hotel where I stayed there were empty lots and the war ruins of buildings, which is quite amazing in the 21st Century.
But in the 20 years since the Iron Curtain came down, Warsaw has already seen many modern changes, and I can well imagine that this city, with its great people, hunger for innovation, and rich traditions, will regain its former glory as a great city in the 21st Century.
I posted some photos I took while in Warsaw on Picasaweb. Have a look if you are interested:
It was a great trip business-wise and it certainly demonstrated the resilience of the human spirit even under the most barbaric forms of oppression.
Last week I was in Hamburg, Germany, teaching my Data Governance Course with my friend Christa Menke-Suedbeck at the Bucerius Law School (www.law-school.de). One evening, I went to the Deichtorhalle to see a lecture by one of my old Hamburg friends, Tom Holert. His lecture was part of a larger panel discussion on modern photography, focusing on the 19th Century artistic techniques photographers use to "stage" their photos, blurring the line between photojournalism and art.
Tom Holert is one of Germany's most prolific and well-respected art and music critics, and his presentation left me deeply concerned. Have a look at this photo that Tom presented. It is by Eric Baudelaire and it is called "The Dreadful Details."
At first glance, it looks like any other horrific photo from the Iraq War that we have all become uncomfortably comfortable viewing. These depictions of fear, death, power, and dread no longer shake the subconscious as they once did during Vietnam.
However, this one should, because it is entirely fake. The photo was taken on a Hollywood backlot and all the people in it are actors. It is a modern example of art staged to resemble reality. But you will think it is reality unless you are aware of this context - unless you know the provenance.
Yesterday, the NY Times reported that 16 retired US Generals working as military analysts on Fox, ABC, NBC, CBS, and CNN News had been secretly collaborating with the Pentagon to shape US public opinion in the most brazenly documented form of government propaganda I have seen in my lifetime. These military analysts acted as supposedly "impartial" experts on network TV but in reality were toeing the Pentagon's public line.
Again, staged reality. We all believe this stuff unless we are informed of the context, the provenance of this data.
Whether you support the war or not, you have to recognize that these distortions have an enormous cost. A recent Harvard University study put the direct and indirect costs of the Iraq War at $1 trillion, a figure I'm sure the Pentagon has already developed analyst talking points to refute.
Of course the loss in human life on all sides of the conflict outweighs the economic costs. But I would argue that the most enduring damage is to the global brand of the United States of America, whose public image around the world has been so tarnished.
In the Information Age, we are all victims of Toxic Content in our data streams. This Toxic Content makes the truth the most endangered data asset in the world and I fear that what we have lost so far we will struggle to regain against new data distortions that will make old lies seem like quaint nostalgia.
And if you think propaganda only invades public data streams, read my articles on Subprime...
In yesterday's Financial Times, Hank Paulson, the former Treasury Secretary, wrote an article entitled "Reform the Architecture of Regulation." In the article, Hank blames inadequate regulatory authority and overlapping jurisdictions for the failure to forecast and prevent the current credit crisis. He recommends an ideal regulatory infrastructure composed of three agencies: "one charged with maintaining market stability across the entire financial sector, one for supervising the soundness of those institutions with explicit government support, and one responsible for protecting consumers and investors."
Hank wants the Federal Reserve to have the systemic risk authority in the first case. He wants the Fed "to have access to information from a broader set of financial organizations, including hedge funds and systemically important payment systems. This authority should also have the power to intervene if it concluded that the financial system was at risk."
He goes on to say that both the Treasury and the Fed lacked appropriate powers to allow Lehman Brothers to wind down in an orderly way, and of course that might be true, but the Lehman failure was not about the lack of an orderly bankruptcy. The failure was due to Hank Paulson's decision to let Lehman Brothers fail in the first place. AIG, Fannie Mae, Freddie Mac, Bear Stearns, and Merrill Lynch were all too big to fail, but Lehman wasn't. When the government picks winners and losers in a time of national crisis, the public is the ultimate loser.
It must be nice to look back on past failures and propose future solutions, but Hank's analysis omits too much.
1. This crisis was preventable. The government had the data, and the Federal Reserve's own economists admitted as much in a report written in October 2007:
"Were problems in the subprime mortgage market apparent before the actual crisis showed signs in 2007? Our answer is yes, at least by the end of 2005. Using the data available only at the end of 2005, we show that the monotonic degradation of the subprime market was already apparent. Loan quality had been worsening for five consecutive years at that point. Rapid appreciation in housing prices masked the deterioration in the subprime mortgage market and thus the true riskiness of subprime mortgage loans. When housing prices stopped climbing, the risk in the market became apparent."
2. The government is in fact awash with the right data that provides leading indicators about many aspects of the economy. Individual agencies collect that data and either do not understand it, do not compare it across companies or geographies, or do not disseminate it in an intelligent manner.
3. In some cases, the government in fact lacks important data that can yield indicators of systemic risk, but initiatives like the XBRL Risk Taxonomy can remedy these deficiencies quickly.
Changing the organizational structure of financial regulation will be disruptive and will not necessarily produce better results. Existing structures with new authorities can produce the same or better results.
But the government does need a Business Intelligence strategy to make better use of the information it already collects, and integrate new sources effectively. This isn't just about reporting standards. Every day, auditors request information from financial firms. Regulatory auditors are often camped in regulated firms for extended periods. They collect all kinds of important data about business practices, assets, liabilities, and losses. Where does this information go? Is anyone collecting it using standard practices and integrating the structured and unstructured content into comparable repositories? Can anyone compare practices from firm to firm without having participated in site audits?
I suspect the answer to these questions is no, and we are all witnessing the results of the government's failure to use its own information intelligently.
In any administration, pro or anti regulation, it will always be in someone's interest to disregard information that doesn't fit their philosophy or needs. The only safeguard in a democracy is information proliferation to many diverse interests. This enables others to regard information that others disregard, to glean meaning that others miss.
Hank Paulson is wrong. The government doesn't need a new regulatory architecture. The government needs a new Regulatory Information Architecture.
Tim Geithner is on Capitol Hill today asking Congress to provide regulators with new powers to control the derivatives markets. He claims that derivatives blindsided the Administration and nearly destroyed the world's economy. Congress, by all accounts, seems willing to provide these new powers to both the SEC and the CFTC, which will include the power to collect positional data from key broker/dealers and enforce positional limits on trading to constrict bubble formation. These are good ideas. But these powers alone won't fix the current problems in our economy or prevent future financial catastrophes. They are at best addressing a symptom of the credit crisis, not the source problem.
The source problem was bad home mortgage underwriting, and those bad loans continue to produce 12% foreclosure rates that have not abated. That problem was the result of misguided policy decisions by Congress and the FHA in 2004-2006 that were not corrected by the Federal Reserve until June of 2008. The tail of those bad loans still haunts our mortgage market. The non-GSE mortgage market today is effectively dead. Anything without a federal insurance program isn't being underwritten, and none of the policies of the Obama Administration have changed the nasty state of foreclosure nationwide.
Rising unemployment across the country is adding to the delinquency and foreclosure rates. Tinkering with the derivatives market is a nice sideshow. But until the Obama administration gets serious about mortgage reform, they are only addressing the symptoms of our problems, not the core.
A colleague wrote to me today with the following question:
"Would you be able to point me to information that describes what can make a data governance effort fail? Points about technology problems, organizational politics, a lack of organizational leadership, etc. come to mind. Are you able to expand this list or explain how to avoid failure in a data governance program?"
I get asked this question often and my answer is an easy elevator pitch for anyone looking to explain Data Governance:
"Two things always lead to failure:
- Lack of outcome-oriented programs
- Lack of power and accountability in the data governance board.
Of course, no one has a monopoly on screw-ups, so there are innumerable additional options for modest and wholesale disaster in any organization run by human beings. The important thing is to be ready for failures, catch them while they are small, institutionalize the learnings, adjust, and move forward."
People who say that failure is not an option are dangerous fools. Statistically, failure has the same odds as success, and if you plan for both, your program will grow in wisdom and effectiveness - which is the very best outcome from Data Governance that we can all plan for.
64 years ago, when my house was built, the Long Island Power Company installed electric meters in my basement. Two large grey metal meters are affixed to my foundation with insulated wires connecting them to my fuse box. They have a variety of dials and arrows beneath thick glass. I can see the meters but don't really understand what the numbers mean or how to read their information. Every other month or so, a polite person from the power company rings my bell to ask if they can come into my home and walk into my basement to read my meters and input the results into a handheld device that radios the information back to the power company.
The last time the meter reader was here I asked why the power company didn't trust me, the homeowner, to call in the numbers or input them in a form on the internet. She told me that many people don't understand the meters, and those who do often lie to the power company about what they read in order to under-report their electricity usage. I asked why the power company couldn't read my meter remotely, or why they couldn't simply measure how much electricity my home was using.
Like, "don't THEY know that?" Nope. THEY don't, and the reason they don't has as much to say about how the electricity grid works as it does the way all complex modern industrial systems work and we in the Data Governance world can learn a lot from this.
The electricity grid was created as a downstream electrical production network. Upstream power plants create electricity and send it downstream to factories and homes to be consumed. The power company did not build in mechanisms to measure how much electricity is being transmitted over the wires, how much is being consumed, and exactly by whom. Your monthly electric bill is not based on your actual electricity usage. It's based on estimates derived from your historical usage information. That is, the meters read your past, and the power company forecasts your current usage and future performance based on that historical information.
The grid itself is run at 70% of capacity to allow up to a 20% margin of error. If the lines carry over 90% of their rated capacity in aggregate, some lines could be running at 100% and therefore could overload and explode. And if some lines overload, capacity reroutes and burns up other lines, transformers, and substations. So the whole system is calibrated based on historical analytics. The power producers have no real-time understanding of how the electricity is used, in what quantity, when, and where. And even the end users don't really understand where the electricity came from, how it was produced, or how the system actually functions.
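To make the billing and headroom arithmetic above concrete, here is a minimal Python sketch with invented rates and readings. It only illustrates the idea that the utility forecasts this month's usage from past readings rather than measuring it live, and that operators keep aggregate load under a 70% operating target; no real utility computes bills this simply.

```python
# Minimal sketch (hypothetical numbers) of estimate-based billing and
# capacity headroom: usage is forecast from history, not measured live.

RATE_PER_KWH = 0.21          # assumed retail rate, $/kWh
TARGET_UTILIZATION = 0.70    # grid operated at ~70% of rated capacity

def estimated_bill(historical_kwh):
    """Forecast this month's usage as the average of past meter readings."""
    forecast_kwh = sum(historical_kwh) / len(historical_kwh)
    return forecast_kwh, forecast_kwh * RATE_PER_KWH

def within_operating_margin(aggregate_load_mw, rated_capacity_mw):
    """Check whether aggregate load stays under the 70% operating target."""
    return aggregate_load_mw <= TARGET_UTILIZATION * rated_capacity_mw

past_readings = [620, 580, 610, 640]   # kWh from prior meter visits
kwh, bill = estimated_bill(past_readings)
print(f"Estimated usage: {kwh:.0f} kWh, estimated bill: ${bill:.2f}")
print("Line within margin:", within_operating_margin(70, 100))
```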
In the Industrial Age, we human beings created many complex systems that function without many of the system participants understanding how the system works, and this is fine if we are all happy running our systems at 70% efficiency.
In enterprises today, we run our Data production and consumption systems with similar levels of complexity, performance, and ignorance. Most business users have no idea where the data came from, how it was produced, transmitted, and consumed. Conversely, most, if not all, Data Governance professionals have no idea how business people collect and use information to generate value. And this grid was created without any meters to read data volume, velocity, veracity, and utility.
Councils, Stewards, Policies, and Standards will improve human communication about the importance of data in an enterprise, but they won't change human behavior over time without new Data Governance "Smart Meters" that measure and report how data was created, who refined it, how it was transmitted, aggregated, repurposed, criticized, commented, stuffed in envelopes, posted in trades, hedged in inventory, reserved against premium, debated in legislation, trademarked, copyrighted, patented, packaged, and a million other uses and abuses. Until we can demonstrate a clean line between creation and use, Data Governance will be two steps forward and two steps back over and over again, generation after generation.
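To illustrate what such a Data Governance "smart meter" might capture, here is a minimal Python sketch. Every field name and event in it is hypothetical, not a product API; the point is only that each touch of a data asset becomes a recorded lineage event, so creation can be traced all the way to use.

```python
# Illustrative sketch of a Data Governance "smart meter": every touch of a
# data asset is recorded as a lineage event, linking creation to consumption.
# All names and events here are hypothetical.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LineageEvent:
    asset: str          # e.g. "trade_positions.csv"
    actor: str          # who touched the data
    action: str         # "created", "refined", "aggregated", "repurposed", ...
    purpose: str        # declared purpose of use
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class DataMeter:
    def __init__(self):
        self.events: list[LineageEvent] = []

    def record(self, asset, actor, action, purpose):
        self.events.append(LineageEvent(asset, actor, action, purpose))

    def usage_report(self, asset):
        """Trace the path from creation to consumption for one asset."""
        return [(e.timestamp.isoformat(), e.actor, e.action, e.purpose)
                for e in self.events if e.asset == asset]

meter = DataMeter()
meter.record("trade_positions.csv", "etl_job_42", "created", "daily risk feed")
meter.record("trade_positions.csv", "risk_team", "aggregated", "VaR calculation")
for row in meter.usage_report("trade_positions.csv"):
    print(row)
```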
We need meters and readers and a new Information Age infrastructure that tells us, intelligently, what we are doing, why, and when we are doing it. It should connect maintenance to operations, front office to back, middle to board, outside to in. We don't know enough today to tell regulators what we know, and until we do we won't be able to close the gap between our forecasted capacity and our current and optimal states.
The Information Governance Community is running a landmark Survey on the Cost of Data Quality and the #1 answer to all of the questions is "I don't know." Data Governance Professionals don't know how poor Data Quality affects Business Outcomes because they don't measure that. After Lehman Brothers disintegrated in 2008, and the global financial meltdown spun out of control, the number one question from the public was "Didn't they know this would happen?"
No. No one knew then. No one knows now. And no one will ever know until we build more intelligent systems that connect Information Production to Consumption and measure the gaps every step of the way. This last recession has demonstrated that we are reaching the limits of our unintelligent Industrial Age networks and systems, and it's time for a major upgrade.
The Subprime Credit Crisis is not even half over, but one thing we should have all learned by now is that Banks that paid attention to Data Governance lost the least. Whether they were able to improve decision-making with better-quality loan origination data or calculate risk with enhanced cross-functional tools, some banks had better operational programs and show better returns.
In the wake of Subprime, fund managers should be asking companies they invest in about their Data Governance practices. Questions like:
1. Do they have a DG Organizational Structure that creates enterprise policies and reports results to the Board of Directors?
2. Is there a Stewardship function that assesses data quality on an ongoing basis and works to improve operational decision-making with high-quality data?
3. Are Data, Security, and Risk functions working together to maximize internal intelligence about operational, credit, and market risk controls?
4. Can the organization calculate risk and forecast potential losses?
If a company today can't answer these questions intelligently, they are not governing their information assets and the market should represent those management failures in share prices.
Data is the raw material of the Global Information Economy. Companies that govern its uses well will demonstrate better bottom line performance. Companies that don't govern it well carry an investment risk. It's that simple.
On October 9th, as stock markets around the world were plunging, and the US election was about to pop, President Bush invited G7 finance ministers to the White House for an urgent summit on the Credit Crisis. During the meetings on the 9th and 10th, Treasury Secretary Hank Paulson urged the Europeans to back the recently ratified US plan to buy up toxic debt assets from bank balance sheets. He wanted to calm global markets with a common bailout strategy driven by the White House.
But the Europeans had other ideas.
They had been listening to their Swedish colleagues describe how the Swedes had dealt with their own national credit crisis in the early 1990s. The Swedish solution, which worked very well, called for direct capital infusions into banks in return for preferred stock shares. Gordon Brown, Prime Minister of the United Kingdom, had already been calling for that strategy earlier in the week. The Europeans were also not at all inclined to follow US leadership in a crisis caused in large part by US financial mismanagement.
So that Saturday, the G7 issued a bland statement about the importance of coordinated action and the Europeans flew to Paris, where on Sunday they issued their own statement announcing their adoption of the Swedish option. On Monday, October 13th, Secretary Paulson met with the CEOs of 8 top US banks and handed them a piece of paper which they had to sign before leaving. The paper stated that the US Treasury Department would be making mandatory capital investments in each of their institutions in exchange for preferred stock. By 6:30pm, all the CEOs had signed and the Americans had followed the lead of the Europeans.
In London, Paris, and Berlin, this about-face in the US bailout plan was seen as a stunning political victory. The finance minister of Germany declared on the floor of the Bundestag that the era of US financial supremacy was at an end. Newspapers and talk shows across Europe heralded the new age of EU financial regulatory supremacy. At a conference I attended in Athens in late October, the EU officials I met were flush with regulatory victory. The facts of the G7 meeting were lost on no one.
In the US, Americans were far too consumed with the presidential elections to care about what the Europeans thought. In many of my conversations with economists, business people, and regulators, most Americans didn't even understand what had happened over that October weekend.
Let me state what happened in the most clear and unambiguous terms - The Bush Administration lost control of the post-Bretton Woods financial order. Since that meeting, the EU has been talking up the need for global financial regulation and is pressing for new powers and funding for the IMF. The lame duck Bush Administration has no power to resist.
Last week, Angela Merkel suggested that what the world needs is a World Map of Risk, some kind of global dashboard that will peer through the complexity of financial derivatives and complex credit default swaps to illustrate trouble spots and areas needing more EU regulatory oversight.
This is a bad idea for several reasons:
1. People causing financial risk have more financial incentive to hide it than share it.
2. It is age-old cultures in financial firms that create incremental risks that rise to systemic failures.
3. The only regulatory reform that will work is the kind of business segregation that Glass-Steagall provided from 1933 to 1999. And even then, regulators will need to expect creative entrepreneurs to invent new ways to make money that evade regulatory definition.
4. The European Union repealed their banking segregation laws in 1993 when they opened up the common market for labor and banking across the EU. And even though that law and the US Financial Services Modernization Act of 1999 prohibited so-called "Universal Banks" from investing bank deposits in financial markets, these institutions used leverage ratios to skirt the regulations.
Great data can't overcome poor leadership. People make decisions, computers execute. Technology can help people to better measure risk, but only people can decide to govern it.
Today, we the global Information Governance Community are announcing that we are publishing the Data Governance Council Maturity Model under an open source copyright (for non-commercial purposes) on a website called www.infogovcommunity.com.
The purpose of the website and the publication is to invite the world to participate in a crowdsourcing project to involve thousands of Information Governance practitioners from around the world and help the global community to update the Maturity Model and broaden the definition of Information Governance.
The site is powered by Chaordix, a fantastic company to work with. We've been working together in a two-month beta test of crowdsourcing in which the Council reviewed the site and submitted ideas each week which Chaordix took and implemented. What you see today is a product of Community interaction and technology.
Take a tour. On this site, you can interact with peers from around the world in the time and timezone most convenient to you. You can use the Maturity Model to self-assess your organization's capabilities, work on topics to define Best Practices, and establish your credentials as a leader in the growing international market known as Information Governance.
Check out the leaderboard, where the best and brightest can see how their ideas are recognized by the community, or the blog where longer ideas are published to inspire insight and discussion. Infogovcommunity.com brings together Information Governance and Social Networking to inspire innovation for the common good.
The site is brought to you by IBM but supported by the Community for the Community with a self-funding subscription model. Starting on September 1st, Community members will pay $299/year for individual membership and $699/year for corporate membership to cover the yearly costs of maintaining the site.
The XBRL Risk Taxonomy work we've begun has the potential to reshape the dynamics of financial regulation. But XBRL is just a tool, not a solution. The solution that an XBRL Risk Taxonomy can enable is standardized loss reporting from financial institutions to regulatory authorities and back again.
What do I mean by this?
Every regulated financial institution would provide loss event and tail data to regulatory authorities in XBRL via the Risk Taxonomy. With thousands of institutions reporting losses on a regular basis, the repository would grow large quickly, but meaningful trend data would still take at minimum a few years to accumulate. But over time, this repository would not only provide regulatory authorities with a risk pulse on the financial system, but would also enable financial institutions to compare their own loss reporting to industry aggregates to improve trending and forecasting. The key is to link this loss trending information to management decision cycles so that every decision point can be compared to past experiences and future forecasts. This is not to make already risk averse executives hide beneath their desks, but rather to enlighten human decision-making with risk probability information at the point of action, record decisions and results, and constantly learn from mistakes and improve over time.
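As a rough illustration of that aggregation idea, here is a small Python sketch with made-up institutions and loss amounts, showing how a shared repository could total industry losses by year and let one firm compare its own experience to the aggregate. The record structure is invented for illustration and is not the XBRL Risk Taxonomy itself.

```python
# Sketch of a shared loss-event repository: institutions report loss events,
# the repository aggregates them, and any one firm can compare its own loss
# experience to the industry total. All data below is hypothetical.

from collections import defaultdict

# (institution, year, loss_amount_usd) -- hypothetical reported loss events
loss_events = [
    ("Bank A", 2008, 1_200_000), ("Bank A", 2009, 900_000),
    ("Bank B", 2008, 450_000),   ("Bank B", 2009, 2_100_000),
    ("Bank C", 2008, 300_000),   ("Bank C", 2009, 650_000),
]

def industry_totals(events):
    """Aggregate reported losses by year across all institutions."""
    totals = defaultdict(float)
    for _, year, amount in events:
        totals[year] += amount
    return dict(totals)

def firm_share(events, firm):
    """Compare one firm's losses to the industry aggregate, year by year."""
    industry = industry_totals(events)
    firm_only = industry_totals([e for e in events if e[0] == firm])
    return {y: firm_only.get(y, 0.0) / industry[y] for y in sorted(industry)}

print(industry_totals(loss_events))
print(firm_share(loss_events, "Bank A"))   # Bank A's share of industry losses
```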
We humans do this on an intuitive level every day, but our best decisions are dependent on human memory and painful reminders of past individual failures. To combat systemic risk resulting from incremental action, human experience needs to be captured, profiled, and broadcast to more humans who may have an interest to regard what others disregard. This creates autonomic opportunities as well as governance checks and balances.
The Insurance Services Office (ISO) performs this kind of loss data aggregation for a variety of insurance lines, and many insurance companies purchase this data via subscriptions to calculate insurance premiums and reserves. Sadly, despite the rapid rise of insurance-like hedging strategies on Wall Street, like credit default swaps and portfolio insurance, no one is using traditional insurance products to cover Operational exposures, and without loss history insurance carriers can't price coverage or buy reinsurance.
With several years of loss data accumulated it should be possible to create an open insurance exchange to underwrite the losses with insurance coverage. This would allow banks to transfer operational risks (which are most similar to professional liability exposures) off their balance sheets to insurance vehicles. The banks would pay a premium for the coverage, the market would price risk on a near-realtime basis, and regulators like the SEC could govern premiums and fair trade mechanisms. In some ways this would function like credit default swaps but the trades would be on an open market, and rising risk in financial institutions would result in higher premiums, which in turn could be correlated with equity and bond markets to create additional incentive and penalty mechanisms for risk management.
I think this idea has enormous benefits for many market participants. Risk self-insurance is inherently inefficient capital allocation without deep loss history. In the insurance market self-insurance is most practical when commercial coverage is unavailable or too expensive. In the banking world, banks have self-insured their own losses for decades without empirical risk measurement programs.
Today of course the Taxpayer is providing catastrophic insurance coverage for banking failures and that is the most inefficient coverage in the world!
A better model would price future risks based on past losses and make banks pay premiums for loss-producing behavior. The XBRL Risk Taxonomy can define a data model to facilitate loss history aggregation and build up enough data for accurate underwriting. And when that information can be placed on an open market, banks would have a financial incentive to report losses - because the market would transfer the losses to insurance coverage and banks would have more capital for investments. At a time of capital constraints, this solution has something for everyone - market mechanisms, regulatory reform, and better capital allocation.
What's hard about pricing operational risk coverage is the long tail of losses. Traditional insurance policies, with fixed duration, deductible, incident and aggregate policy coverage won't scale to the volume of loss events and severity tail. An exchange, however, can price large volumes of loss events and tail growth in near-real-time, providing both incentives and penalties for poor risk management in firms that transfer via the exchange. That in turn will transform loss reporting from the cat and mouse game it is within firms today into a business necessity because every unreported loss is a balance-sheet deduction in capital allocation that will get penalized severely by the market when reported late.
Turning this solution into reality will require a new Risk Information Management infrastructure in financial institutions, regulatory authorities, and market exchange mechanisms. It will depend on a common data model, standard risk measurement reporting processes and technologies, and cultural changes on Wall Street. This is why we are starting with an XBRL Risk Taxonomy to standardize loss reporting.
We can't create solutions like these overnight, but by starting with common reporting standards we can inspire a 21st Century infrastructure that regulators can build upon to enable risk analysis and oversight at nearly the same speed the market participants create it.
I'm hosting a meeting on these topics on February 26-27 at the Levin Institute in NY. More information about this meeting can be found here: www.fstc.org/docs/conferences/IBMDataGovernanceForumonXBRL.pdf
In the last five days, a lot of people have asked many great questions that I thought I'd answer on this page to provide a better accounting of what this is all about and what we hope will result.
Q: What is XBRL?
A: XBRL (Extensible Business Reporting Language) is an XML language for describing business terms, and the relationship of terms, in a report. It enables semantic clarity of terminology by standardizing a data model - the field names and their relationships - for reporting purposes.
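To make the idea tangible, here is a toy Python sketch that assembles an XBRL-style instance with one tagged fact. The concept and element names are made up for illustration and are not drawn from any real taxonomy; the point is simply that a business fact carries its own context and units so software can interpret it unambiguously.

```python
# Toy illustration of the XBRL idea: a business fact is tagged with a named
# concept, a reporting context, and a unit. Element and concept names here
# are invented, not taken from a real XBRL taxonomy.

import xml.etree.ElementTree as ET

instance = ET.Element("xbrl")

# Reporting context: which entity, and as of when
context = ET.SubElement(instance, "context", id="FY2008")
entity = ET.SubElement(context, "entity")
ET.SubElement(entity, "identifier", scheme="http://example.com/cik").text = "0000123456"
ET.SubElement(ET.SubElement(context, "period"), "instant").text = "2008-12-31"

# Unit of measure for the fact
unit = ET.SubElement(instance, "unit", id="USD")
ET.SubElement(unit, "measure").text = "iso4217:USD"

# The fact itself: a hypothetical operational loss amount, tied to its context and unit
fact = ET.SubElement(instance, "risk:OperationalLossAmount",
                     contextRef="FY2008", unitRef="USD", decimals="0")
fact.text = "1250000"

print(ET.tostring(instance, encoding="unicode"))
```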
Q: Why do we need a Risk Taxonomy in XBRL?
A: Because Risk measurement, calculation, and reporting are mysterious, arcane, and underutilized business processes in banking and financial markets, and reporting standards can demystify, simplify, and commoditize risk calculation as a more ubiquitous part of business decision-making.
In the insurance world, risk measurement, calculation, and forecasting are THE BUSINESS. But insurance companies don't tell you what formulas they use to calculate your premium, how they determine their own reserves, or what protocols and methods they use to pay out claims. Actuaries study for years to learn these methods, and very few business professionals - and virtually no IT professionals - have any idea how risk is measured, calculated, and reported.
Q: But what do you mean by Risk Measurement? Don't we need Risk Management?
A: Sure. Risk Management is important. But only human beings can manage risk, and before we get there we need to measure past losses, compare them to current events, and forecast potential outcomes. Making a business decision without this analysis is risky. Making a business decision with this analysis is also risky, but when the inputs and decisions are recorded, we have the opportunity to learn from our mistakes and improve over time. We will never eliminate risk, but we can use scientific decision-making techniques to improve our odds.
Today, most people focus on Risk Management. They use qualitative risk assessments to imagine what kinds of vulnerabilities, loss events, and losses may be incurred from business activities. This is a valid method for forecasting and preventing potential losses. But the methods and results vary with the qualitative insight and skill of the practitioner, and they are dependent on disciplined application. Over time, it is very difficult to compare quantitative loss results to qualitative risk assessments.
We can leverage standards in risk measurement reporting to apply quantitative risk assessment to the practices of risk measurement and management so that inputs and outputs have a mathematical foundation. That foundation allows automation, and automation enables ubiquity of application. And that's the purpose of a standard - to enable widespread application and value - so that everyone can measure, calculate, and report risk, without an actuarial degree.
Q: Why do we need risk standards?
A: One of the things we've seen in the current Credit Crisis is the ambiguity and confusion about risk. Regardless of whether you are a trader paid to take risks or an IT professional paid to avoid risk, it is nearly impossible to understand the incremental impact of your decisions on your department, your division, your company, your industry, your market, economy, or nation. There is just too much data today and our regulators haven't tooled up to take advantage of the information companies could produce to help regulators and markets operate more transparently.
We know now in dramatic hindsight that incremental risks have systemic impact. People can only understand that impact when they can aggregate the incremental losses in the past, compare them to current circumstances, and make forecasts about the future.
To aggregate and compare risk data, we need standards and XBRL seems to us to be the most logical and effective tool to create those standards.
Q: How could the XBRL Risk Taxonomy be used?
A: These standards will enable more effective risk measurement and reporting within firms, new macro-economic tools for regulators and policy-makers, transparency for financial markets, and a more ubiquitous use of risk calculation in decision-making across innumerable disciplines.
Let me give you an example:
The insurance industry does risk calculation all the time. If you are a doctor, lawyer, accountant, or financial advisor, chances are you buy professional liability insurance. When you apply for the coverage, you tell your insurance company about yourself, your business activities, past losses, claims, and insurance coverage. The insurance company will compare your application to their own database of insureds, losses, and rates.
The insurance company will also compare your loss profile to claims data it purchases from the Insurance Services Office (ISO). ISO aggregates loss data from insurance companies across the US and provides anonymous records back to the same companies. Insurance companies need that 3rd-party verification of loss data for loss rating and trending. No matter how large an insurance company, and no matter how many years a company has been doing business and collecting loss history, everyone compares in-house data to aggregate industry data. It's a larger statistical sample size and it helps everyone set aside the right amount of premium from each insured for reserves to pay out future losses.
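Here is a simplified Python sketch of why that larger sample matters, using made-up numbers and a textbook-style credibility weighting rather than ISO's actual methods: the industry aggregate stabilizes the estimate an insurer uses to set aside premium for future losses.

```python
# Simplified sketch of blending in-house loss history with the larger industry
# aggregate when setting reserves. The credibility weighting and all numbers
# are illustrative, not any insurer's or ISO's real method.

def expected_loss(frequency_per_year, average_severity):
    """Expected annual loss = how often losses occur x how big they are."""
    return frequency_per_year * average_severity

# In-house experience (small sample) vs. industry aggregate (large sample)
in_house = expected_loss(frequency_per_year=0.8, average_severity=40_000)
industry = expected_loss(frequency_per_year=1.1, average_severity=55_000)

# Weight the two estimates by how credible the in-house sample is.
credibility = 0.3   # low, because the insured's own loss history is short
reserve_per_insured = credibility * in_house + (1 - credibility) * industry

print(f"In-house estimate: ${in_house:,.0f}")
print(f"Industry estimate: ${industry:,.0f}")
print(f"Blended reserve:   ${reserve_per_insured:,.0f}")
```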
We need the same kind of system in the financial markets. It is partially there today. Under the Basel II accord, banks are required to report the amount of gross income they set aside to self-insure against forecasted losses. But they only report that in the aggregate. No one is reporting the underlying data from which the risk reserves are calculated, and data reporting on that level could have huge benefits.
One benefit is that regulators could compare reported loss information across national and international economies. This would provide enormous new insight into macro-economic trends that could help reduce business cycle volatility.
Another benefit is that banks and financial firms could compare their own loss information to very large samples of industry losses. This would make their own forecasting models far more efficient and that would help everyone manage risks more effectively and reduce paid losses over time.
A final benefit is that markets and rating agencies would gain new insights into underlying exposures in financial instruments and that would enable far more accurate and timely forms of risk rating, making markets more transparent and efficient.
Q: Why is the Data Governance Council leading this standards initiative?
A: Because Risk measurement, calculation, and reporting within and between enterprises is not possible without semantic clarity around how we classify, describe, and document incidents, losses, events, formulas, and a host of other terminology. This is a very complex topic, and it is so easy to be confused and confounded by the terminology. Before we can all talk about this topic intelligently, we need a common vocabulary. That vocabulary will enable efficient communication and transferable methods and skills.
And this is very much a Data Governance challenge. The Data Governance Council has been studying these issues for four years and - together with our partners in the FSTC, EDM Council, OCEG, and other organizations - we think we can make a difference with this standard.
Q: Why would organizations want to apply XBRL to risk?
A: We can see clearly from the subprime credit crisis that there are still some non-standard methods for appraising risk. We don't have semantic interoperability to allow us to take an aggregate look at risk across multiple organizations. This makes it hard for companies and regulators to agree on what risk there is and it is difficult to consistently report the risk companies are taking. XBRL can be a tool to help organizations use common standards for the way risk is described.
Q: What benefit would XBRL for risk reporting provide companies and regulators?
A: By translating risk reporting into a consistent software language, organizations will be able to more easily perform advanced analysis and meaningful research, and compare risk and loss history among multiple organizations. It could be used for internal reporting purposes or external. Regulators could potentially use it to create a global loss history database of anonymous credit, market, and operational incidents, events, and losses from every institution, much like the insurance industry relies upon. XBRL could make risk simpler and more powerful, and that should create broad market benefits.
Q: What are the primary obstacles to the adoption of XBRL for risk reporting?
A: The real challenge is not in creating a risk taxonomy using XBRL. The challenge is getting agreement upon it and ensuring there is willingness worldwide to use it. That is why the Data Governance Council is seeking input from organizations and regulators worldwide.
Q: Who is supporting this initiative?
A: In addition to more than 50 IBM Data Governance Council members, the Securities and Exchange Commission, the Enterprise Data Management Council, the Financial Services Technology Consortium, the Open Compliance and Ethics Group (OCEG), XBRL International and XBRL.US are all contributing to the process.
Q: How far along are you in the process today?
A: We have a starter taxonomy that we will begin socializing at an XBRL for Risk Forum on February 26-27 at the Levin Institute in New York. The Data Governance Council's role is that of a facilitator, seeking proposals and comments to begin defining a taxonomy for risk that can be agreed upon by many organizations worldwide. This work will continue through the first half of next year with a final recommendation expected by the end of the year.
On February 26-27, I hosted an XBRL Risk Taxonomy Forum in NY at The Levin Institute, in which we explored the concepts of operational, market, and credit risk. Through interactive discussions, we looked at how those concepts could be articulated in an XBRL Taxonomy and what benefits regulatory authorities and market participants could derive from new key risk indicator monitoring. We looked at the ORX example of Operational Risk loss event reporting and saw how 50+ banks are sharing operational loss data to better trend individual losses and learn cross-industry loss patterns.
And on the last day, we explored positional reporting as a key risk indicator of market crowding and bubble formation. One outcome of the meeting was a call for a follow-up meeting to review the ORX example in greater depth and explore both existing risk reports and sources of positional data.
On April 23, we will meet again at the Levin Institute to focus more deeply on the ORX data model, an examination of existing regulatory reporting, and positional reporting options from Swift and DTCC.
The work will be done in English – no XML – to make it easy for everyone to participate. Our goal is to answer some fundamental questions:
1. Is the ORX data model sufficient for Operational Risk reporting on a national level?
2. What is the right business model for Operational Risk reporting and who should maintain the taxonomy?
3. What kinds of key risk indicator data are already collected by financial regulators that are either not used on a systemic basis or not shared across the government?
4. What is the most efficient method for collecting end of day/week positional data?
- from market participants directly?
- via clearing and settlement firms?
5. What should be the role of a semantic repository in the construction of risk reporting taxonomies?
6. How should the regulatory authorities build and maintain regulatory taxonomies?
7. How should the world maintain semantic consistency between many regulatory taxonomies?
8. What should a 21st Century Regulatory Information Architecture look like?
We can't possibly answer all of these questions in one day, but we can begin an informed dialog and encourage global participation - No one else is addressing these issues and I think we can make a difference doing so.
I look forward to seeing you on April 23rd.
https://www.ibm.com/developerworks/blogs/resources/adler/IBM%20Data%20Governance%20Risk%20Taxonomy%20Meeting.pdf
I am a relative newcomer to System Dynamics. I first learned about systems thinking from Helmut Willke, a German professor who wrote a book called Smart Governance, which talked about systems of governance and their influence on society. I met Professor Willke in Cologne in 2007 and was so impressed with his ideas that I used his book in a course I was teaching with Christa Menke-Suedbeck at the Bucerius Law School in Hamburg, Germany.
A few years later, a colleague introduced me to some work IBM did with the City of Portland to build a very large SD Simulation enabling urban planners to understand how even the smallest policy changes had ripple effects across many municipal departments, neighborhoods, families, and individuals. We created that simulation using Vensim and Forio, and I was immediately captivated by the potential to model and simulate the impact of policy on complex environments.
IBM Smarter Cities SD Demo
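For readers new to SD, here is a toy stock-and-flow model in Python, integrated with a simple Euler step, to show the kind of feedback behavior tools like Vensim and Forio simulate. The "city" structure and every number are invented for illustration; this is not the Portland model.

```python
# Toy System Dynamics model: two stocks (population, jobs) and a handful of
# flows, advanced with Euler integration. All structure and numbers are
# invented for illustration only.

def simulate(years=50, dt=0.25):
    population = 500_000            # stock
    jobs = 280_000                  # stock
    history = []
    t = 0.0
    while t <= years:
        # Flows: people are attracted when jobs are plentiful relative to people
        attractiveness = jobs / (population * 0.6)
        net_migration = population * 0.01 * (attractiveness - 1.0)
        births_minus_deaths = population * 0.005
        job_growth = jobs * 0.015

        # Euler integration: stock(t + dt) = stock(t) + flow * dt
        population += (net_migration + births_minus_deaths) * dt
        jobs += job_growth * dt
        history.append((round(t, 2), int(population), int(jobs)))
        t += dt
    return history

for t, pop, jobs in simulate()[::20]:   # print roughly every 5 years
    print(f"year {t:5.1f}: population {pop:,}, jobs {jobs:,}")
```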
For over 15 years, I've been an inventor and market builder at IBM. In 1996, I invented Internet Insurance, persuading AIG, Reliance National, Chubb, Codan, and other insurers to invest in developing internet exposure coverage products and underwriting capabilities so that businesses could depend on insurance coverage as they expanded commercial operations online. In 2001, I led a team of IBMers to create the Enterprise Privacy Architecture, which is a patented methodology for embedding privacy policies and obligations into business processes. In 2004, I founded IBM's Data Governance Council and led an international group of 60 companies to create the Data Governance Maturity Model, a vast piece of commonly developed IP that benchmarks Data Governance behaviors across 11 categories and 5 levels of maturity. In 2009, I hosted a series of roundtable forums with large banks, the SEC and the Federal Reserve as we explored the causes and effects of the Credit Crisis and what new standards in risk calculation and expression could be developed to mitigate future crises. And in 2010, I created the Information Governance Community to publish the Maturity Model under an open source license and invite a global community to work with IBM, the Data Governance Council, and many new leaders in developing a larger market for Information Governance and a new leadership role called the Chief Data Officer.
I love building markets through international collaboration, and this is why I have urged and lobbied iseeSystems, Ventana, Forio, Anylogic, IBM, and the SD Society to embrace an open standards process at OASIS. SD is a complex discipline that is hard to learn and hard to use. It has grown in episodes over the past 50 years but it has never really broken out of its strong academic foundations. At first, I thought I could help it grow through the Information Governance Community. In 2011, I held a series of informational webinars on SD, the City of Portland Project, and some work Steve Peterson had done on urban violence in South Boston. Michael Bean from Forio.com gave us generous amounts of his time to educate our community in how SD works, how models are built, and how simulations can be used to test strategic ideas and transform organizations. Some of our community members built Data Governance models in Vensim and tested them online in Forio.
But widespread adoption eluded us. You can have great webinars with great content and discussions, but that doesn't mean everyone understands what you are talking about. I saw many of my members thinking about systems, but not in a dynamic SD way. They took the words we used to mean different things and found the math content totally confusing. After six months of work, I had to admit my efforts at Community education were not succeeding.
Undeterred, I started talking about the need for SD Open Standards. In the IT world, Open Standards are a way to spread adoption among vendors because they lower proprietary barriers to entry in new markets. They enable better software solutions, which end-users appreciate. And the process of Open Standards consideration and specification approval helps build market demand. As early as 2011, I saw clearly that SD lacked a robust IT vendor community. Five or six small vendors providing software modeling tools made for a niche market that was not growing.
In 2012, I met iseeSystems at the System Dynamics Conference in St. Gallen. My participation in the conference was very last minute. St. Gallen isn't close to anything in Switzerland and it was summer and I didn't want to travel. But boy am I glad I did. For three days, I saw incredibly thought-provoking transformational work in every industry all using a common SD methodology. I speak at many conferences throughout the world and you never see so many interesting presentations across so many diverse industries written in a common way.
I was blown away by the quality of the content but, sadly, equally depressed by the complete lack of business participation. The conference was run by academics for academics. I was the only representative from a large IT vendor. There were no banks, insurance companies, oil and gas companies, utilities, governments, or even Big 4 consultants attending. The SD Society had a conference in 2011 in Washington DC, so I asked the organizers how many people from the federal government had attended. The answer was hardly any. Why the heck not, I asked. The answer was that no one had thought to prioritize their participation as a target audience. The target audience was local universities.
If the purpose of the SD Society is to service the university marketplace with educational offerings and knowledge transfer, mission accomplished. If the purpose is to grow the industry and attract business audiences, current approaches are inadequate.
This is where OASIS comes in. Following St. Gallen, I went to work persuading my colleagues in IBM that an Open SD standard based on iseeSystems' XMILE could help grow business demand for SD simulations. The open standards process would attract new ideas to SD and open the SD Society to new ideas as well. But it took a lot of persuading. I had to sell a vision internally that SD concepts could be used with Big Data analytics to illustrate policy options on complex ecosystems. I had to tell my colleagues that an open standard would allow IBM to embed SD vocabulary in other modeling tools such as Websphere Business Process Modeler, Rational Method Composer, and iLog. And I had to demonstrate that our investment would be modest, the risk small, and the potential payoff reasonable. It took me a year to find the sponsorship I needed to persuade our Standards Committee to approve IBM's sponsorship of the OASIS TC.
And that brings us to where we are today. We have a TC. We have a vision for XMILE. These are table stakes. A TC is a sales effort, and we must now expand our market of members to be global, business-oriented, diverse, and inclusive. Over the next 24 months, we have to expand TC membership to 70. I'd like to see representation from North America, South America, Asia, Africa, and Europe. I see my job on this Technical Committee as helping to expand customer demand for SD solutions and build a far larger market than exists today.
We are not just building a technical standard. We are building a market and I will continue to engage my peers to expand the use of XMILE worldwide as we work to develop an Open Standard for System Dynamics at OASIS.