Two years ago, I met Helmut Willke, the author of Smart Governance: Governing the Global Knowledge Society, at a hotel cafe near the great cathedral of Cologne. Professor
Willke is a sociologist who teaches Global Governance at the Zeppelin
University in Friedrichshafen, Germany. Late in 2009 I became
interested in Governance as a system of decision-making and Professor
Willke had written an excellent book exploring this topic. While the
Professor is German, he writes extremely well in English, and his book is very well written and insightful. Like a lot of philosophical texts, though, it
is not an easy read. Dense descriptions, long sentences, and theory
backed by ample example make it a book you have to read at least twice
to fully comprehend.
I was in Cologne in late February 2010 to meet the CIO of the City and attend Rosenmontag at City Hall. I had already seen several days of Karneval, with the endless parades, costumes,
and candy strewn about the streets. For five or six days in February,
the staid and reserved city of Cologne becomes an endless drunken party
attracting visitors from all over the world who wear outrageous costumes
and march in parades on incredible floats and throw candy to the
bystanders. It's unlike any parade I have ever seen. Quite amazing.
It had snowed a lot that year. It was white from Brussels to Berlin,
and Cologne was still covered by eight inches. The square in front of
the Dom was clear, and I had spent the morning before our meeting
visiting the Roman museum across the square. Cologne is an ancient
Roman city and the ruins are collected in a fantastic museum right next
to the Dom. Of course there are columns and pediments, but also beautiful mosaic floors, jewellery, stained glass,
and decorative arts. There is a model of the Roman city and you can
see how the Germans built the city on the same street grid with walls
built on top of the Roman walls. Of course, much of this was destroyed
by allied bombs in WWII, but some remnants remain.
Looking back at Roman colonial rule of Cologne was an excellent introduction to the systemic ideas of Governance that Professor Willke and I discussed over coffee that afternoon. He is not a tall man; mostly grey, late fifties I would say, with bright blue eyes. He makes an immediate
impression, and is passionate about his book. I had used the book as
text for a class I taught at the Bucerius Law School on Data Governance
in Hamburg that January. My students did not entirely appreciate the
dense prose and abstract ideas, but through class conversation we did
ultimately appreciate the idea that Governance is a system of
decision-making that could be described and modelled. And we used
Social Networking metaphors to explore the idea of policy-making, human
behaviours in a system of Governance, and how to model potential
outcomes. Of course there is political science, which describes
political models of Governance – Democracy, Dictatorship, Monarchy, etc –
but what is unique and important about Professor Willke’s book is the
application of systems theory to Governance.
We had some coffee and talked mostly about how the Professor wrote
the book and why. As I had in 2007-8, the Professor had used the Global
Credit Crisis as a use case to describe failures in Governance. I had
covered this topic from a Data Governance perspective, arguing that
hundreds of incremental failures in business processes and data quality
had produced a domino effect that plunged the global economy into
Depression. He covered the topic from a decision-making perspective,
and while we approached this topic from different directions we arrived
at similar conclusions – policy-makers can’t possibly make the best
decisions without understanding the consequences of those decisions on
incredibly complex and interconnected global systems. And those
consequences are impossible to understand without new information
systems that render the complexity with software and illustrate how the
policies will be accepted and resisted.
In my class at Bucerius, my students complained that the Professor
had not done enough to provide solutions to the problems he had
identified, or that his solutions were too abstract. I presented these
criticisms to him at our meeting and he responded that it was not
possible to offer concrete solutions because every systemic problem
needs to be modelled to understand the variables and outcomes – that
there is no one size fits all. At the time, I thought this was a
dodge. It took me a few more years to understand that he was right.
There are no Governance Solutions that can auto-magically produce the
best outcomes for every decision. But it is possible for policy-makers
to use systems theory and software to construct decision-making models
that can plot many of the actors, objects, variables, and potential
outcomes to understand the impact of policies on complex systems made up
of hundreds, thousands, and even millions of human beings with unique
behaviours.
After my course, I synthesised concepts from the book with ideas from my students to create the Six Steps to Smart Governance.
It’s not meant to be a Framework. Frameworks and models are nice tools
to help people feel more secure about challenges they seek to overcome,
but they are not useful in making better decisions. The Six Steps are
meant to be a structure for decision-making that one would apply
iteratively; in which each of the six steps would involve different data
points and variables. Of course, it is highly summarised and flavoured with marketing. And I would say, in hindsight, it's not really useful as a
practical or operational tool. It’s really just a theory, a
simplification of the better documented ideas Professor Willke writes
about in his book.
And I think we can do better. In the IBM Data Governance Council we
will soon begin to explore dynamic simulation models that go far beyond
the Six Steps to Smart Governance, and I recommend reading both the white paper and Professor Willke’s book:
Smart Governance: Governing the Global Knowledge Society
Today, thanks to really powerful simulation software, we can create
dynamic models that help demonstrate the impact of policy on people,
processes, and technology. The Data Governance Simulation Project will
revolutionise the field of Data Governance by applying theory, software,
and observed practices to an interactive model that will yield powerful
insights into Data Governance Value Creation and Risk Mitigation.
A lot of people ask me, “how do I show the value of metadata?” Some
say, “how do I make the business case for Data Governance?” Consultants
and Gurus will have a framework or process to offer you, a get started
guide with use-case examples, graphics, and legends about their
successes. But these myths won’t help you, because your challenges are
unique. Your politics are special, and your people are not machines.
Best practices are useful examples of glorified solutions that are very
hard to replicate outside the lab. And as many are already finding out,
people resist policies they don't think apply to them, and it's really
tricky to understand how to change organisational behaviours on an
on-going basis without policies that dynamically change with new
circumstances.
Data Governance is, by nature, a systemic challenge and you can’t
solve systemic problems without systemic solutions. Projects and teams
that expect quick hits and 90-day results are the reason you have systemic
Data Governance problems in the first place. But it is possible to
create software models that allow you to plot the goals, metrics,
policies, communications, outcomes, variables, and modifiers and
evaluate the impact of new policies and controls on your environment.
And that’s the lesson of Smart Governance: you can model complex
environments through Simulation and make better decisions. To learn
more about using Simulations to make better decisions, take a look at
the IBM Smarter Cities Demo.
In that demo, the complex interactions of human beings living in a city
are compared to the goals of human policies, the metrics measured by
interactions, and potential outcomes.
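To make that kind of model a little more concrete, here is a toy sketch in Python. It is purely illustrative and is not the Smarter Cities demo or any IBM product; the population size, adoption and resistance rates, and the notion of "communication effort" are all invented parameters.

```python
import random

# Toy policy-adoption simulation (illustrative only, not the IBM Smarter Cities model).
# Each simulated person adopts a new policy with a probability that rises with
# communication effort and with the share of peers who have already adopted it.

def simulate_policy_rollout(population=1000, weeks=26,
                            communication_effort=0.05, peer_influence=0.4,
                            resistance=0.02, seed=42):
    random.seed(seed)
    adopted = [False] * population
    adoption_curve = []
    for week in range(weeks):
        share = sum(adopted) / population
        for i in range(population):
            if not adopted[i]:
                p = communication_effort + peer_influence * share
                if random.random() < p:
                    adopted[i] = True
            elif random.random() < resistance:
                adopted[i] = False  # some people quietly revert to the old behaviour
        adoption_curve.append(sum(adopted) / population)
    return adoption_curve

if __name__ == "__main__":
    curve = simulate_policy_rollout()
    print(f"Adoption after 26 weeks: {curve[-1]:.0%}")
```

Even a toy loop like this makes the point: change the communication effort or the peer influence and the adoption curve, and therefore the policy outcome, changes in ways a static framework cannot show.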
Many of our organisations are as complex as small cities. Policy and
Politics share the same ancient Greek root word – polis. The polis is a city, which itself is an aggregation of human beings who require
Governance to arbitrate their diverse interests and achieve better
outcomes for all. Today, we can simulate those interactions and help
Policy makers profile the impact of their policies before they are
deployed. It's a kind of Visual Risk Calculation.

If you would like to participate in the Data Governance Simulation
project, please read the Six Steps to Smart Governance White Paper, the book by Professor Willke, and join the IBM Data Governance Council by executing this membership agreement.
Only members of the Council will be able to participate in this
exercise and you don’t want to miss this because it will fundamentally
change Data Governance.
|
On Saturday, I sat with an old friend at a secluded restaurant on a grassy river bank North of Bangkok. We are both actively engaged in the banking industry as observers, speakers, and peripheral participants. My friend has a more direct engagement with a Thai Bank, but still as an adopted outsider. Lunch was excellent, and we sat on a wooden pier just feet from the river's edge as barges, trawlers, and all manner of ships slowly passed by with and against the current. A pair of large floor fans blew hot air our way and an umbrella shaded us from the searing sun playing tag with the clouds above. The heat in Thailand is soft, enveloping, pervasive, and quietly oppressive. You have no hope of resisting its dictatorship. Somehow the Thais have developed a sweating immunity to their own condition, whereas this Western visitor is deficient in that regard.
During lunch we compared notes on current events in Thailand, where the Red Shirts have barricaded themselves behind sharpened bamboo poles and tires doused with gasoline. Their encampment was many miles from our lunch spot, and indeed encompasses but a small corner of the entire city of Bangkok. Yet their determination to resist the current government, who themselves are only in power due to a similar incident involving a Yellow Shirt protest two years ago, has driven away western tourists and continues to cause confusion and insecurity in the highest elements of Thai society.
And we discussed the Credit Crisis, Greek Debt, US Politics, and Regulatory Reform. On Greek Debt, we discussed how the former Greek government hid the massive debt it had accumulated from EU Regulators (reporting a deficit of only 3.5% each year instead of the 12% it was actually accumulating), and how this massive amount came to light only with a change in government - when one group had an interest in reporting the bad data that another group had an interest in hiding. Most today call this an act of Fraud, but it also has to be admitted that it was not just the former Greek government that had an interest in hiding the debt. The Germans, French, Belgians, and perhaps even the European Central Bank had an interest in ignoring the reality of Greek economic underdevelopment and overextension. The data about Greek debt was available. Greece can't borrow on the black market. Their debt has to be issued in
bond markets, and the amounts, yield, and maturity dates are all public
record. Bond markets are largely transparent.
But Transparency creates its own information asymmetries. First, the availability of information doesn't mean everyone collects the same amounts, has the power to use it, or knows what it means. Second, there is a private sector deference to public sector data aggregation, analysis, and reporting, and the public sector relies on static information reporting programs that limit source authentication, audit, and repudiation. These two behaviors allowed the Greek Government to report fraudulent deficit figures to the EU, and the EU didn't bother to verify that information against publicly available market data. One could argue that the construction and expansion of the EU Common Currency without adequate audit powers created an environment ripe for fraud, but this is too easy. EU regulators could have at any time used data from bond markets to verify Greek debt. Why the EU didn't monitor the discrepancy between public reports and private market data has more to do with EU politics than Data Governance.
Every government is comprised of politicians who owe their hold on power to public perception. Everyone in Europe played See No Evil, Hear No Evil, Speak No Evil on the subject of emerging market debt in the EU. The information was available. Net inflows of financing and debt accumulation can be gleaned from the bond markets. Public obligations in Greece are also no secret. Everyone in Europe knew that pension guarantees starting at age 50 in Greece were a ridiculous luxury in a country with such low productivity and wages. Transparency and Reporting do not, in themselves, guarantee that anyone is using or validating information sources correctly. Every report needs to be validated with external sources, because Transparency is not the same as the Truth.
If the EU wants to fix this structural problem in its own multi-nation confederation, it will need to create an independent auditor, like the US Government Accountability Office, whose role is to audit member programs and reports and to discover waste and abuse. All reported data must be verified. If we didn't learn this in the Mortgage Credit Crisis, now is the time to take it home in the Sovereign Credit Crisis. Banks, Hedge Funds, and other investment institutions should not wait for the EU and other governments worldwide to get the audit role right. They should build their own Information Analytics programs to validate the assertions of governments as well as listed companies, because what Greece did is not new. Fraud is a part of business. Data Validation should be seen as an important part of Market and Credit Risk Measurement and Mitigation programs. This is where Data Governance and Risk Management intersect, and new technologies will be needed to make reporting aggregation and analysis easier and faster.
On the river in Bangkok, I asked my friend if his bank monitored the market and credit activities of their Thai competitors. They do not. They expect the government to collect data from every bank, aggregate it, and report that to the banking community. And his bank reads those reports. I would argue that the events of the last three years clearly demonstrate that governments are not well equipped to be doing primary market data analysis on behalf of themselves or any industry. They lack the technology infrastructure and the analytical skill to make intelligent use of the data the market already provides, and their political dependencies create natural conflicts of interest.
Businesses must perform their own due diligence to verify government reports and conduct primary market data analysis of every potential investment opportunity. Unverified data should not be trusted. This is Data Governance Rule #1.
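To make Rule #1 concrete, here is a minimal sketch, with entirely invented figures and thresholds, of the kind of cross-check a bank's own Information Analytics program could run: compare a government-reported deficit against a figure implied by public bond-market issuance and flag the gap.

```python
# Toy cross-validation of a reported deficit against bond-market data.
# All figures, field names, and the tolerance below are invented for illustration.

reported_deficit_pct_gdp = 3.5          # what the government reports
gdp_eur_bn = 240.0                      # published GDP estimate

# Net new borrowing inferred from public bond-market records (issuance minus redemptions)
bond_issuance_eur_bn = 42.0
redemptions_eur_bn = 14.0
implied_deficit_pct_gdp = (bond_issuance_eur_bn - redemptions_eur_bn) / gdp_eur_bn * 100

tolerance_pct_points = 1.0
gap = implied_deficit_pct_gdp - reported_deficit_pct_gdp
if abs(gap) > tolerance_pct_points:
    print(f"FLAG: reported {reported_deficit_pct_gdp:.1f}% vs market-implied "
          f"{implied_deficit_pct_gdp:.1f}% of GDP ({gap:+.1f} points) - verify sources")
else:
    print("Reported deficit is consistent with market-implied borrowing")
```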
|
It starts with the definition. Systemic Risk is the risk inherent to an entire market or market segment, so says one website. From the definition, we can already see that systemic risk is primarily concerned with the prevention of risks to The System. The System in question is the global financial "system." So the goal of Systemic Risk regulation is the prevention of loss to those involved in The System. You might ask, "why is Adler focusing on the obvious?" Well, I don't think it is that obvious who The System is and what their interests are.
There are lots of well-meaning people running around trying to craft new laws and methodologies to assess and prevent Systemic Risk. Most of them will fail without first understanding the needs, interests, and goals of The System. The Goal of every System should be to serve the needs and interests of The Customer. Corruption of The System is when individuals or groups place their own needs as actors in The System above the needs of The Customer. When The System is mostly serving the needs of itself, it is mostly corrupt.
The Global Financial System has had corruption for decades. In the last decade, the influence of Systemic Actors placing their own needs above the needs of The Customer has become acute. The Financial Meltdown of the past 30 months is the result of this imbalance. So I wonder, which new brew of experts and which new conference will measure the needs of The System and compare them to the needs of The Customer to assess Risk? I have a self-serving answer:
|
We want our leaders to make the right decisions. But often they lack the right information. And when they lack it, they "trust their gut." Unfortunately, the stomach is the wrong organ for decision-making, and ignorance, fear, and prejudice are often poor substitutes for trusted information and rigorous analysis. And we know the result. We don't want every decision belabored by bureaucracy. We want decisions that are Smart; informed by what we know we don't know, contextual to past experience, compared to current conditions, and prepared for future exigencies. Few organizations have mastered this process today. Few in fact have any notion of what information they need to make Smart Governing Decisions. But everyone wants to. The aspiration is universal, because the competitive challenges in a flat world make bad decisions very costly.
Smart Decisions demand Smart Information - Information about what's going on. Operational Awareness. That's the kind of information we will be exploring at The Smart Governance Forum I will be hosting on February 1-3 at the Ritz Carlton Half Moon Bay in California. I think we can make a difference and I hope you will join us.
Smart Governance Forum Agenda
|
Last night, I was one of two panelists at a Global Association of Risk Professionals (GARP) symposium on Systemic Risk at Fordham Business School in New York. We were to be a moderator and three panelists, but one canceled at the last minute, presumably to stay home and watch the Yankees lose to the Phillies. The room was on the 12th floor of a squat mid-60s tower, accessible from two elevators among a bank of six in the stone-cold, open, office-like lobby. Twelve is the top floor in the building, with a Rockefeller penthouse atmosphere. Black marble floors, mahogany paneling, subdued sixties swank. The symposium room was longer than wide, seated classroom-style for one hundred in three neat blocks. We panelists were paired at a white-clothed table with microphones we didn't need. The moderator introduced us both; the NYU Business School professor and the IBM Data Governance guy.
The audience looked half-asleep, and the first question rolled out on the table, "What is Systemic Risk?" Our gracious moderator had prepared a raft of intelligent questions for us that evening, but we would only get through two in the brief hour we had.
What is Systemic Risk? The professor told us it was the result of exogenous market conditions that created upper atmospheric bubbles in complex derivative instruments capable of devastating global economies. It could be measured in the up and down-swing of aggregate equity performance and controlled through the central banks he currently advises. He saw Systemic Risk as a macro-economic phenomenon, the product of weak government regulation, greed on Wall Street, outrageous compensation packages, and unnecessary complexity in financial markets.
Before the event, I wasn't quite sure what I was going to talk about. It was a hectic Monday full of ten conference calls on twenty different topics. I left late, hit traffic on the Grand Central, got lost at Lincoln Center looking for parking, and there was no coffee when I arrived. I'm not an evening person un-caffeinated, and perhaps not the best morning person in the same condition. But droll media babble passed off as tenured professorial wisdom will rouse me on the sleepiest of days.
Systemic Risk is the probability of loss to a system. It is not actually a thing that can be calculated. It is a series of things that result in a loss event with causality and impact. Systemic Risk is not only about macro-economic catastrophe, because to say so is to say that we are not involved in Systemic Risk except as victims. And that ain't true. Insofar as all of us, The People, are members of communities, parties, religions, nations, and environments, we are part of a System. We are inter-related, inter-dependent, capable of causality, errors and omissions, losses and claims. Each incremental failure can cascade and result in systemic exposure.
The Credit Crisis is the result of a series of public policy mistakes from 1999 to 2006 that encouraged bad business practices at many different stages of the mortgage underwriting and securitization process. These were incremental failures that contributed to loss events that destroyed parts of the economic systems upon which markets rely. The lesson to humanity from this experience is that We The People are all members of SYSTEMS large and small that can fail as a result of incremental policy mistakes. Actuarial Science has for too long focused on the probabilities of contained loss events. My body is a SYSTEM and Cancer is a systemic risk to me. It causes a
chain of events which can result in organ failure and death. Your company
is a system, and bankruptcy is a systemic loss event. If bees die,
plants won't be pollinated, and that chain of causality can become a systemic risk to our ecoSYSTEM. The BBC reports
(http://news.bbc.co.uk/2/hi/science/nature/8338880.stm) that record
numbers of plants, mammals, and amphibians are under threat of
extinction. This is a systemic risk. When entire species of frogs in
remote places like Tanzania become extinct in the wild, humans take
note - this incremental failure is closer to your role in the food
chain than you may think. Every System has risk. Every person in every system has a role. If we accept the gossip-press gospel that the Credit Crisis is purely
the result of greed on Wall Street, and can only be fixed by wise regulators in Washington, shame on all of us for missing the opportunity to internalize the economic externalities. It is not an academic exercise to study the risk in every system large and small. Systemic Risk is a real-world imperative for all of us.
|
Recently, I played tennis with my son. At 16, he's tall and lanky like me, but full of boundless energy and I have to play smart to keep up with him. I taught him most of what he knows in tennis and we both play at the same level - though I do enjoy when he wins. But on this day, there was no winning or losing. Our rallies were endless. We exchanged volleys, drops, topspin, and slice. If I won a point, he came back and won the next. There was no mercy and no letup. At one point, he sliced a ball low to my mid-court forehand and I had to rush from the backhand side of the court across to reach it. I'm not as fast as I once was, but on this day I crossed the court with speed. As I got to the ball and lined up a chip drop, I looked up and found that my intrepid son had already anticipated that move and was rushing to the net to cut me off. I stopped short and just laughed. I said "you know what I'm going to do next, don't you," and he said "like, yeah, I know all your shots." That happens when you play with your son, because we know each other so well.
We played out the rest of the match, and afterwards I thought about that laugh we shared at the net as a metaphor for much of what I've learned about Data Governance, Risk Measurement, the financial crisis, and the challenges of information and knowledge. You see, people are best at anticipating what they expect - especially in situations that breed familiarity. That's the reason why Value at Risk (VAR) was such a seductively attractive formula - in a largely pro-cyclical business culture, a formula that helps you anticipate what you expect (that today will look mostly like tomorrow, yesterday and the day before) is a winner. People who anticipate other outcomes are either brilliant visionaries who make "discoveries" (minority), or outliers who make trouble (majority).
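To see why VAR flatters that instinct, here is a minimal historical-simulation VaR in Python, using made-up portfolio numbers rather than any real book: the estimate is just a percentile of recent returns, so a calm lookback window yields a comfortingly small number right up until the day conditions change.

```python
import random

# Minimal historical-simulation Value at Risk (illustrative only).
# VaR here is simply a percentile of recent daily returns, so it can only
# "anticipate what you expect": losses no worse than the lookback window has seen.

def historical_var(daily_returns, confidence=0.99):
    losses = sorted(-r for r in daily_returns)            # losses as positive numbers, ascending
    index = int(confidence * len(losses))
    return losses[min(index, len(losses) - 1)]

if __name__ == "__main__":
    random.seed(1)
    calm_year = [random.gauss(0.0005, 0.006) for _ in range(250)]   # a placid market
    portfolio = 1_000_000
    var_99 = historical_var(calm_year) * portfolio
    print(f"1-day 99% VaR estimated from a calm year: ${var_99:,.0f}")
    # A single crisis day outside the window can dwarf this estimate.
```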
I began the year thinking that financial regulatory authorities could make better policy decisions if they had the right data. But I now understand that many of them had the right data in 2005, 2006, and even 2007, but they didn't understand it, chose to ignore it, or lacked the political will to make radical, outlier decisions that would adversely affect many key constituencies.
Hence my conclusion: Data Governance isn't enough. Collecting and aggregating data is an important step, but people need to understand what the data means as information, and that information needs to be communicated widely as knowledge. Not the finite biological knowledge we all have in our brains - the organic translation all of you reading this article are performing right now - but the metaphysical knowledge of a community knowing a common truth about the world so they are prepared to accept a decision to avoid an outcome they did not expect.
I don't care what kind of new Systemic Risk Council gets built at the Federal level of our government, or indeed what kind of new Regulatory Information Architecture is designed to support it. All of that is important but not as important as the steps people take to disseminate the information in both raw and interpreted form to a wide and varied constituency. The more people inside and outside the group who know what the group knows, the better the chance that outliers will interpret things the group will miss. And it is on those outliers - the ones who anticipate what we don't expect - that crisis prevention most rests.
This last point is the hardest. In the financial crisis, only a few economists like Nouriel Roubini predicted the credit crisis before it began. Most of the other economists predicted it perfectly only in hindsight. But Nouriel was largely dismissed by those economists and the media as "Dr. Doom," the naysayer who only saw the bad while so much good was going on. And that is human nature. If you aren't in the tribe of believers you are a barbarian, an outsider, who can't be trusted and must be demonized or destroyed.
This is of course very bad for the discovery of non-expected results, unless of course you ARE a barbarian trying to hack your way into the group, in which case you should be destroyed. Trusting what you know, where it came from, where it's going, and who's going to know it and do something about it will require new forms of transparency and self-governance. George Orwell wrote about the alternative, and we don't need to follow his example.
Because what we want is Trusted Information that empowers Doubt. Doubt about what information means is essential to effective decision making. And this is where I think a new Information Governance discipline, one that focuses on the Information needs of Governance as well as the challenges of Governing the use of Information is needed. That's at least what I learned from my son on the tennis court last week. We'll see what he teaches me today.
|
This morning, EU Regulators announced that they propose to create a Risk Board to monitor financial market performance and systemic risk indicators among the 27 member nations in the European Union. I've advocated a Council approach to risk-based decision-making since the beginning of this year and I think the EU proposal is a good idea in concept. Unfortunately, in Europe it seems decision-making takes a large number of people, because the European proposal would have 63 people participating on the Risk Board. A deliberative body with 63 people is not a "Board" - it is a legislature. To complicate matters, "only" 32 members of this board would have voting rights. Unfortunately, the only power they can vote on is a warning to member states that some part of their market performance contains systemic risk. How they plan to determine that threat and get everyone to agree on what it means in any reasonable amount of time is not clear. My guess is that this is a proposal to set up an intra-governmental think-tank that will study issues, write economic reports that no one reads, and only threaten to issue warnings because a vote on a warning will never happen.
http://www.bloomberg.com/apps/news?pid=20601087&sid=aHMAgVn36kzw
Note to Obama Administration: If you want to create a Systemic Risk Regulatory Structure that is guaranteed to fail due to political indecision and lack of authority, copy the EU model.
|
On September 14, David Bogoslaw published an article in BusinessWeek
entitled "How Banks Should Manage Risk." Rick Bookstaber and I are
quoted in this article because we first had an interview with David
following the XBRL Risk Taxonomy Meeting I hosted at the Levin
Institute in New York on May 13, and we had follow-up interviews two
weeks ago. As is the case with any press interview, some of what you
say gets printed and a lot doesn't. In this case, I think much of the
substance of what I told David was out of scope for the BusinessWeek
audience and the goals of his article.
In
terms of a banking audience, David gets it all right, and I agree with
Rick Bookstaber's comments too. But what the article omits is the fact
that from 1999 to July of 2008 the US Congress, the White House, FHA,
the SEC, and the US Federal Reserve all participated in an
industry-backed weakening of the financial regulatory framework that
was built in the 1930's. In 1999, The Financial Services Modernization
Act (named Gramm-Leach-Bliley, or GLBA for short, after its authors)
removed 70-year-old restrictions on bank, investment bank, and insurance
cross-ownership. At the same time, derivative market oversight was
specifically excluded from GLBA and financial markets were allowed to
create and trade complex derivative instruments without regulatory
reporting or control.
In 2001, President Bush exhorted
Americans to "go shopping" to support the US economy following 9/11 and
the Federal Reserve obliged by cutting interest rates down to 1% to
pump liquidity into the US market. In 2004, Congress lobbied Fannie
Mae and Freddie Mac to relax underwriting guidelines on home loans to
allow sub-prime borrowers to participate in "The American Dream," and
own a home, and FHA provided loan subsidies to make it easier. In
2006, Congress pressured the same GSE's to relax underwriting on Alt-A
mortgages, allowing self-employed individuals to declare their income
with a signed affidavit instead of documenting their income through tax
filings. As I've written in past blogs, that change gave license to
mortgage fraud across the country as Alt-A borrowers could make wild
income declarations without validation and that pumped tens of
thousands of fraudulent mortgages into the global financial system.
This change wasn't reversed until July 2008, when the Federal Reserve
finally changed Alt-A underwriting guidelines. The long tail of the bad mortgages underwritten from 2006 to 2008 means we will suffer significant foreclosure rates well into 2011, extending the depth and
breadth of this recession.
2006 proved to be the top of the
Housing Market in terms of house valuations and bank fees generated
from loan securitization and derivative markup. The pile-on
legislation and market encouragement from Congress, the White House,
and the Federal Reserve came from industry pressure to keep the party
going as long as possible.
Yes, Banks took on too much risk
from 2001 to 2007. But the US Government encouraged and enabled
excessive risk taking during that period, and both need to be monitored
to prevent future crises. There is an inherent conflict of interest in expecting the government that enabled the current credit crisis to participate in the forecasting and prevention of the next one.
There is a history of financial
de-regulation followed by marked innovation and crash that goes back
100 years in the US. The innovation generates enormous wealth on Wall
Street and new tax revenues for Federal, State, and Local Governments.
The relationship between government enablement and financial innovation
was omitted in David's account and needs closer scrutiny because
policy-makers, and the public, will need new information management tools to realize the
impact of incremental policy decisions on financial market performance
over the longer term to be able to regulate wisely in the future.
In
the article, I recommended that the government create a new Regulatory
Information Architecture, modeled on the Information Sharing Councils
created by the Bush Administration for terrorism intelligence gathering
following the 9/11 Commission Report and the Intelligence Reform and Terrorism Prevention Act (IRTPA) of 2004. But more is needed.
A
year ago, I believed that new information technology and data
collection would enable the US Government to better analyze the
performance of financial markets and forecast potential bubbles and
crises. I'm sure that enhanced information sharing in the US
Government will enable better regulatory enforcement, but it's not
enough to prevent future crises. The public needs to play a role in
the oversight process because the Government has its own interests
which are not always perfectly aligned with those of the public.
Administrations change, and with those changes come new philosophies of
governing and regulation, and in a Democracy like ours you always want
to enable some to regard and report information that others disregard
or deny.
Therefore, what's needed is more information
transparency about market holdings and the actions of market
participants so that anyone in any firm, university, or industry
watchdog can analyze nearly the same macro and micro economic data that
federal regulators observe and make their own forecasts and
predictions.
Without public access to better market data, we are just enabling government to encourage risk taking more efficiently in the future.
You can read the BusinessWeek article here: http://www.businessweek.com/print/investor/content/sep2009/pi20090914_336015.htm
|
Data=Information=Knowledge. Or so we would like to say. In theory, data is unorganized information, and knowledge is information put to use by human beings. But theories are for academics. And this theory is super convenient if semantic consistency is important. There are Data Architects who only think about data in databases, Information and Content Architects who only work with unstructured repositories, and even Knowledge Architects who I suppose work with information taken out of human brains and put into... structured or unstructured repositories on computers...
In real life, in real companies, these are artificial distinctions. Organizations want to control data/information supply chains because they are full of quality control problems, security vulnerabilities, compliance challenges, and operational exposures. Those risks imperil decision-making, increase operational costs, and reduce revenue opportunities. Quality control and risk mitigation are challenges for every data type.
Five years ago, "Data Governance" seemed like a great name for a new discipline to help transform organizational behavior from vertical to horizontal; because information is transformational. What we meant then and mean now is not just about "Data" in the purest structured sense. We mean Data in the most plural and unlimited sense. People want to govern other people's use of all kinds of information in every form.
No data stovepipes please! We need Data Governance Solutions for all human uses of information regardless of their form or structure, use or abuse.
Anyone who tells you different is just so 20th Century...
|
On October 7-9, I will be hosting a conference on The Future of Data Governance at the Mohonk Mountain House (www.mohonk.com) in New Paltz, NY. This event has been designed to explore the challenges and solutions of Data Governance that organizations constantly ask about:
1. How do I transform data into an asset? Data isn't an asset until you make it one, and it's not an asset like gold, stocks, or oil. Those assets have commodity values based on their scarcity and demand. Data is an asset with infinite availability, so its value can't be based on the amount you own or the amount someone wants. The value of data is purely perceptional, unless there is a market for that data. iTunes, DVDs, Newspapers, and cable TV are all examples of data with values based on market demand through external sales channels.
But internally, we have no market for data sales. So the best we can do within an enterprise is increase the perceptional value of data as an asset. It has a perceptional value to Business when IT can demonstrate incremental revenue obtained through data consolidation, aggregation, cleansing, business intelligence, and new sales.
Your data either is producing new revenue or it isn't. But when it is, getting business and operations to take notice and care about how the uses of this data are to be governed is easy. At this conference, we'll hear from customers who are both struggling with these issues and also those who have solved them. And I think we'll see that there are indeed best practices in working horizontally with Trusted Information that is a cause celebre for governance.
2. What are the risks to data assets everyone in the organization should be aware of? There are so many risks and liabilities from working with data today. We read about data breaches, privacy violations, and compliance challenges so often we become inured to the issues. But when Data becomes a perceived asset in your organization, knowing which risks to mitigate, avoid, or transfer out is critically important. Because no one has infinite resources to protect against every exposure, new methods in risk calculation, embedded deep in business processes and decision-making, are needed. And risk calculation can only take place when past mistakes and losses are accurately recorded, trended over time, and integrated into BI applications.
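As a minimal sketch of what "recorded and trended over time" might look like in practice (the event categories and amounts below are invented for illustration), even a simple loss register grouped by quarter starts to show where exposures are growing:

```python
from collections import defaultdict
from datetime import date

# Toy operational-loss register trended by quarter (fields and figures invented).
loss_events = [
    {"date": date(2009, 1, 14), "category": "data entry error", "amount": 12_000},
    {"date": date(2009, 2, 3),  "category": "privacy breach",   "amount": 85_000},
    {"date": date(2009, 4, 22), "category": "data entry error", "amount": 9_500},
    {"date": date(2009, 5, 30), "category": "system outage",    "amount": 140_000},
    {"date": date(2009, 6, 11), "category": "privacy breach",   "amount": 60_000},
]

totals = defaultdict(float)
for event in loss_events:
    quarter = f"{event['date'].year}Q{(event['date'].month - 1) // 3 + 1}"
    totals[(quarter, event["category"])] += event["amount"]

# Print quarterly loss totals by category, ready to feed a BI dashboard.
for (quarter, category), amount in sorted(totals.items()):
    print(f"{quarter}  {category:<18} {amount:>10,.0f}")
```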
At the conference, we will explore the increased scrutiny that risk is getting and some of the best practices available in risk calculation, risk taxonomies, and forecasting solutions. We'll hear from customers with real use cases and experiences, as well as some vendors with exciting new solutions.
3. Organizationally, how do we govern the use of data assets and protect against risks? Data is unorganized Information, and Knowledge is information digested by a human being. Data itself can't be governed. It is inert until organized into information and transformed by a person into knowledge. A person can create data and information assets or put them at risk, and therefore only a person can be governed. Governance is a political process for organizing behavior to achieve certain goals.
Data Governance can be called other things, but the political organization can't succeed without x-organizational support. Just as we seek to create information assets by overcoming data stovepipes, so too do we need to overcome organizational stovepipes and link Business, Operations, and IT to achieve Data Governance goals.
Many organizations in the Data Governance Council have been successful in creating information assets, protecting them from risks, and organizing x-functional participation in Data Governance Councils. And they have achieved some stunning results.
Five years ago, Mohonk was the venue where I hosted our very first Data Governance event. Back then we organized three tracks to focus on Policy, Content, and Infrastructure questions. We had a lot of questions and ran each track as an interactive forum to frame common issues, understand the dimension of Data Governance, and identify convergent areas our customers wanted to explore. We had long discussions about data supply chains, policies and rules, metadata and data classification, security and risk. The dialog was extremely interactive, and coming out of that meeting there were many who wanted to continue. That was the genesis for the IBM Data Governance Council.
We knew then that Data Governance would become an important field. Some early visionaries like Robert Garigue from Bell Canada, Christa Menke-Suedbeck from Deutsche Bank, Charlie Miller from Merrill Lynch, Ed Keck from Key Bank, and Richard Livesley from Bank of Montreal helped us all to see the dimensions of the emergent market. And it was those leaders who helped to shape the Data Governance Council Maturity Model, which in turn helped define the elements of the Data Governance marketplace.
Of course, what we couldn't see then is how failures in Data Governance would threaten the world economy itself. The Credit Crisis was caused by incremental policy failures in almost every stage of the mortgage data supply chain. Loose credit led to bad home loan underwriting decisions, which were masked by rising home values. Huge fees in MBS and CDO trading led to inside-deals with credit rating agencies and banks and vast amounts of poorly documented mortgages came to be regarded as Tier 1 assets on many balance sheets around the world. These instruments were insured by complex derivatives traded without clearinghouses and created interconnected obligations among the largest banks with huge exposures should any one of them fail.
The media has focused on the wide segment of the funnel, the derivative market failure. Credit Default Swaps in this market had a notional market exposure exceeding $100 trillion. But the failure was within a supply chain and poor underwriting standards in loan origination from 2005 to 2008 continue to pollute banks with Toxic Assets and the long tail of mortgage foreclosure haunts our economy. Our mortgage market remains heavily discredited around the world and new Data Governance solutions are needed to restore investor confidence in the US Mortgage Market.
I've been working with a range of policy-makers and thought leaders on providing concrete solutions to those challenges, and I will host a round-table discussion on US Housing Data as a use case example on the value of data, the terrible risks that can still plague our economy from data pollution in that supply chain, and the concrete steps that can be taken now to address these issues.
I think this conference will be thought-provoking and practical. The market is looking for Data Governance solutions. Not just know-how and not just software. But know-how and software and examples of how to apply them. That's what we'll do and I hope you can join us. I think it will be the best Data Governance Conference ever. The venue is fantastic, the room rate unbelievable, and the conference fee is a true bargain.
This agenda will continue to evolve, so come back often for updates.
Conference Page
Introduction
Conference Agenda
Directions to Mohonk
Registration
|
While academics contort over the rise of successful bank lobbying on Capitol Hill, Jack Reed has introduced the Rating Accountability and Transparency Enhancement (RATE) Act of 2009, which "would provide new oversight and transparency rules for Credit Rating Agencies." This is a serious bill with excellent ideas that will do more to correct one area of abuse in the credit crisis than many other current proposals. Credit Rating should be transparent so that market participants can validate rating methods and the SEC can provide oversight and audit over problems and failures. RATE includes further strengthening of existing regulatory structures, with new authorities provided to the SEC. But the important component here is new rating disclosure requirements, which would make the methods credit rating agencies use to rate bonds, MBS, CDOs, and other derivatives transparent and auditable. I also like the proposal for a new independent Compliance Officer, which is a power long overdue in ALL corporations.
SUMMARY: The Rating Accountability and Transparency Enhancement (RATE) Act of 2009 (http://reed.senate.gov/newsroom/details.cfm?id=313172)
The bill strengthens the Securities and Exchange Commission's (SEC) oversight of Nationally Recognized Statistical Rating Organizations (NRSROs) through enhanced disclosure and improved oversight of conflicts of interest, and makes credit rating firms more accountable through greater legal liability.
Accountability of NRSROs
• Holds NRSROs liable when it can be proved that they knowingly failed to review factual elements for determining a rating based on their methodology or failed to reasonably verify that factual information.
• Requires the SEC to explore alternative means of NRSRO compensation, and requires a Government Accountability Office study on payment methods, in order to create incentives for greater accuracy.
SEC Authority
• Establishes an office in the SEC to coordinate activities for regulating NRSROs.
• Directs the SEC to ensure that NRSRO methodologies follow internal NRSRO guidelines and requirements for accuracy and freedom from conflicts of interest.
Due Diligence Certification
• Requires certification if due diligence services are used to ensure that appropriate and comprehensive information was received by the NRSRO for an accurate rating.
Ratings Disclosures
• Requires NRSROs to notify users when model or methodology changes occur that could impact the rating, and to apply the changes to the rating promptly.
• Requires the SEC to establish a form for NRSROs to provide disclosures on ratings, including methodological assumptions, fees collected from the issuer, and factors that could change the rating.
• Requires NRSROs to provide rating performance information, such as information on the frequency of rating changes over time.
Conflicts of Interest
• Requires NRSROs to have an independent compliance officer to manage conflicts of interest and independently review policies and procedures governing ratings so they are free from conflicts.
• Requires the SEC to regularly review NRSRO conflict of interest guidelines.
• Creates a look-back provision requiring that if an NRSRO employee later becomes employed by an issuer, the NRSRO must review any ratings that the employee participated in over the previous year to identify and remedy any conflicts of interest; and provides for SEC reviews of NRSRO look-back policies and their implementation.
I see this bill as another indication that financial regulatory reform will fix underlaps and gaps in existing authority rather than build a new systemic risk regulatory institution.
|
Agriculture is not the first word that comes to mind when contemplating systemic risk regulation, but the Senate Agriculture, Nutrition, and Forestry Committee was the gladiatorial arena for systemic risk regulation of derivatives last week. Agricultural commodities are traded on the Chicago Mercantile Exchange, the Commodity Futures Trading Commission (CFTC) regulates commodities trading, and the Senate Agriculture Committee oversees the CFTC. A week ago, the Senate completed nomination hearings for Gary Gensler, the new CFTC Chairman. Gary's nomination was approved unanimously by the committee, and his participation in the hearings last week on "Regulatory Reform and the Derivatives Market" was his 8th day on the job. But judging by his testimony performance, it is easy to see why both Democrats and Republicans love him. He's smooth, diplomatic, and combines left and right positions in the same sentence. Other expert testimony came from:
Ms. Lynn Stout
Stout Testimony
Professor
UCLA School of Law
Los Angeles, CA
Mr. Mark Lenczowski
Lenczowski Testimony
Managing Director
J.P. Morgan Chase & Co.
Washington, DC
Dr. Richard Bookstaber
Bookstaber Testimony
New York, NY
Mr. David Dines
Dines Testimony
President
Cargill Risk Management
Hopkins, MN
Mr. Michael Masters
Masters Testimony
Masters Capital Management, LLC
St. Croix, USVI
Mr. Daniel A. Driscoll
Driscoll Testimony
Executive Vice President and Chief Operating Officer
National Futures Association
Chicago, IL
Lynn Stout and Michael Masters presented populist, anti-establishment arguments for regulatory reform. Mr. Masters has impressed me in the past with his presentations on derivative markets, and in his testimony he pushed hard for notional derivative clearing and exchange trading. Mark Lenczowski and David Dines toed the bank party line on the need for choice in derivative markets, the complexity of the OTC market, and the extra costs standardization of derivatives would add to transactions. Rick Bookstaber made some reasoned and logical remarks about how easy it would be to standardize derivative trading and why it would be desirable to put it into an exchange. He said that the opacity of derivatives makes them the weapon of choice for gaming the regulatory system, and that banks use them to achieve investment goals that hide leverage, skirt taxes, and obfuscate investor advantage.
The key battle positions now are:
Conservative: Leave things as they are, with greater capital and margin requirements and some transactional reporting. The banks contend that exchange trading is an option in today's market but that customers should decide whether they want to buy derivatives on exchanges or via OTC. Banks already face capital and margin requirements on derivative trading, so new limits would largely impact non-bank derivative market players. An enhanced status quo seems unlikely, and I think the banks know this and thus are taking this position as a negotiating tactic to limit the Moderate choice.
Moderate: Force derivative trading into clearing houses, require capital and margin requirements, set new position limits on holdings, and use TRACE to track market transactions. This is the essence of the Geithner proposal, and Mr. Gensler espoused this position eloquently. I also believe that the banks are comfortable with this solution, because they created the clearing houses and have enormous influence there. The new capital and margin requirements would benefit the 14 primary broker dealers, and if the banks are going to give up some opacity through clearing houses they want at least to ensure a cartel status for derivative dealing. Because Gensler and Geithner are already on board with this, and bank lobbyists are behind their support, I see the moderate option as the most likely.
Liberal: Force derivative trading into an open exchange in which all transactional volume, price discovery, bid/ask, etc. is fully transparent. This option creates the greatest market efficiencies and allows any dealer of any size to participate in a very liquid and open derivative market. In the beginning, there would be some semantic challenges packaging bespoke derivatives into mass-customized and standardized products. But the data models and technology exist to perform these data gymnastics, and the industry would, over time, become adept at providing customized derivative products in standard offerings. In an exchange, it is harder for banks to game the system, and the benefits of derivative trading are more widely shared. Thus, banks want to avoid this. Unless Obama comes out in favor of exchanges, I see the Liberal option falling to the bank cartel.
The challenge with any of these scenarios is enforcing positional limits. CFTC, and the Senators, want the regulatory power to impose position limits. This would entail positional reporting and some kind of kick-back function at the clearing house or exchange to limit registered broker/dealer transactions.
But the technical solution has some complexities not obvious to the untrained senatorial eye... A derivative position is not the same as an equity position. When I own two shares of IBM Stock, they are two units of the same instance. When I own two XYZ currency swaps with the same maturity date, they are two instances of the same unit, and they may also have other characteristics that make them different. It is not possible to add up all the derivative units at the end of the day and compare them in the same way as you might with equities. You have to record each transaction and tally up the common elements, and then you need to analyze all the composite positions to determine what they mean.
One important thing that all the panelists missed is the fact that it is not possible to standardize derivative products, per se. It is the components and their semantic definitions that can and must be standardized. That is, a Chevy and a Ford are both cars, but they are different types of cars. Yet both have standardized components (often made by the same parts suppliers) that make them subject to classification and their functions interchangeable. We need the same kind of classification of derivative components, so that every buyer and seller can set the features they want for the financial goals they have.
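A rough sketch of that tallying in Python may help; the component fields here are invented for illustration and are not any industry standard. Each trade is recorded with its standardized components, and positions are aggregated by those components rather than by instrument name:

```python
from collections import defaultdict

# Toy positional aggregation by standardized derivative components.
# Field names are invented for illustration, not an industry standard.
trades = [
    {"type": "currency_swap", "pair": "EUR/USD", "notional": 10_000_000,
     "maturity": "2011-06-30", "direction": +1},
    {"type": "currency_swap", "pair": "EUR/USD", "notional": 10_000_000,
     "maturity": "2011-06-30", "direction": +1},   # same terms, still a separate instance
    {"type": "currency_swap", "pair": "EUR/USD", "notional": 5_000_000,
     "maturity": "2012-06-30", "direction": -1},
]

positions = defaultdict(float)
for t in trades:
    key = (t["type"], t["pair"], t["maturity"])     # the standardized components
    positions[key] += t["direction"] * t["notional"]

# Composite positions, netted by component rather than by product name.
for key, net in positions.items():
    print(key, f"net notional {net:+,.0f}")
```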
By standardizing derivative components, and plugging them into a configuration engine, it will be possible for an exchange to offer customizable derivative products to any buyer and seller in the same way as banks do today via the OTC market. The conditions may vary, but the components will be interchangeable. This is the dirty little secret banks don't want anyone to know. Because when exchanges can offer mass-customized derivative products, the huge transactional fees that banks derive from the opacity of risk will evaporate...
A few months ago, the big talk in DC, NY, and among academic circles was that the CFTC would get merged into the SEC, and that the Fed would assume responsibility as the systemic risk regulator. I think that talk is now dead. Last week, Mr. Harkin, Chairman of the Committee, and Mr. Chambliss, the ranking Republican, made many mentions and requests of Mr. Gensler on his resource requirements for regulating derivatives in CFTC. Mr. Gensler mentioned that the CFTC is woefully underfunded, with only 570 people on staff, and that the commission would have to at least double in size to manage the complex derivative market. Harkin and Chambliss made it quite clear that Mr. Gensler would be getting new authorities and new funding, signaling to Treasury that CFTC will remain independent and overseen by Harkin and Chambliss in Senate Agriculture, thank you very much.
Power being what it is, the deck chairs in Washington will not be changed. Systemic Risk will be regulated in parts and pieces. I predict we have Systemic Risk Governance Councils in our future and that all the major regulators will get new authorities, new funding, and oversight from the same crusty old men and women in Congress who failed to oversee and fund them correctly prior to the crisis...
|
ComplianceWeek covered the XBRL Risk Taxonomy Forum Meeting in NY last week with an excellent article, enclosed here. It is a longer article, but this is from the front page:
Using XBRL to Attack Systemic Risk
By Todd Neff — April 7, 2009
Already hard at work making Securities and Exchange Commission filings interactive, XBRL technology now finds itself at the heart of plans to save the U.S. financial system from future calamity. A group of risk-management leaders in the financial industry has begun studying how XBRL might bring clarity and transparency to the murky world of financial risks, much the same way Corporate America has just begun using XBRL to bring more clarity to financial statements. While any such system is a long way off, proponents say the technology is tailor-made to help regulators (and investors) root out hidden threats to corporate balance sheets before they, well, break the bank. XBRL could, for example, let a regulator peer through a bad debt line item and see the individual loans feeding it; that task would take hours of spreadsheet diving today.
But XBRL could also do much more. Steven Adler, director of IBM Data Governance Solutions, says the computer language provides a standard vehicle for regulators to track not only weeks-old summary data, but also financial positions accruing across many banks and market segments. That would shed more light on systemic risks—which, left unchecked, can bring financial calamity of the sort we're witnessing today. Any potent XBRL-based scheme to report risks, however, would require the reporting of daily financial positions, a major shift in how trading firms, hedge funds, and investment banks do business. To that end, Adler's IBM Data Governance Council is spearheading a movement that would change how investment banks and hedge funds interact with regulators. "At this point, everybody is aware change is coming," Adler says. "And parties would rather be in the room together talking about common solutions."
A speech Federal Reserve Chairman Ben Bernanke delivered last month shows him to be in agreement. Bernanke advocated taking a "macro-prudential" approach to risks that are "cross-cutting," affecting many firms and markets or concentrating in unhealthy ways. It would involve "monitoring large or rapidly increasing exposures—such as to sub-prime mortgages—across firms and markets."
You can read the full article here.
On February 26-27, I hosted an XBRL Risk Taxonomy Forum in NY at The Levin Institute, where we explored the concepts of operational, market, and credit risk. Through interactive discussions, we looked at how those concepts could be articulated in an XBRL taxonomy and what benefits regulatory authorities and market participants could derive from new key risk indicator monitoring. We looked at the ORX example of operational risk loss-event reporting and saw how more than 50 banks are sharing operational loss data to better trend individual losses and learn cross-industry loss patterns. And on the last day, we explored positional reporting as a key risk indicator of market crowding and bubble formation.

One outcome of the meeting was a call for a follow-up meeting to review the ORX example in greater depth and to explore both existing risk reports and sources of positional data. On April 23, we will meet again at the Levin Institute to focus more deeply on the ORX data model, an examination of existing regulatory reporting, and positional reporting options from Swift and DTCC. The work will be done in English – no XML – to make it easy for everyone to participate. Our goal is to answer some fundamental questions:

1. Is the ORX data model sufficient for operational risk reporting on a national level?
2. What is the right business model for operational risk reporting, and who should maintain the taxonomy?
3. What kinds of key risk indicator data are already collected by financial regulators that are either not used on a systemic basis or not shared across the government?
4. What is the most efficient method for collecting end-of-day/week positional data – from market participants directly, or via clearing and settlement firms?
5. What should be the role of a semantic repository in the construction of risk reporting taxonomies?
6. How should the regulatory authorities build and maintain regulatory taxonomies?
7. How should the world maintain semantic consistency between many regulatory taxonomies?
8. What should a 21st Century Regulatory Information Architecture look like?

We can't possibly answer all of these questions in one day, but we can begin an informed dialog and encourage global participation. No one else is addressing these issues, and I think we can make a difference by doing so. (A rough sketch of what a loss-event record might look like follows this post.) I look forward to seeing you on April 23rd.

https://www.ibm.com/developerworks/blogs/resources/adler/IBM%20Data%20Governance%20Risk%20Taxonomy%20Meeting.pdf
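As promised above, here is a minimal sketch of what an operational loss-event record and a cross-institution aggregation could look like. The field names and categories below are simplified stand-ins chosen for illustration; they are not the actual ORX data model, which is exactly what the April 23 session is meant to examine.

```python
# Hypothetical sketch of an operational loss-event record and a simple
# cross-institution aggregation. The field names below are simplified
# stand-ins for illustration only; they are NOT the actual ORX data model.
from dataclasses import dataclass
from collections import defaultdict
from datetime import date
from typing import Iterable, Dict, Tuple

@dataclass(frozen=True)
class LossEvent:
    institution: str      # reporting firm (would be anonymized by the aggregator)
    business_line: str    # e.g. "retail banking", "trading and sales"
    event_type: str       # e.g. "external fraud", "system failure"
    occurrence_date: date
    gross_loss: float     # loss amount before recoveries
    recoveries: float     # amounts recovered after the event

def aggregate_losses(events: Iterable[LossEvent]) -> Dict[Tuple[str, str], float]:
    """Sum net losses by (business_line, event_type) across all reporting firms."""
    totals: Dict[Tuple[str, str], float] = defaultdict(float)
    for e in events:
        totals[(e.business_line, e.event_type)] += e.gross_loss - e.recoveries
    return dict(totals)

# Example: two firms reporting events; the aggregate view hides who reported what.
events = [
    LossEvent("Bank A", "retail banking",    "external fraud", date(2009, 1, 15), 250_000, 40_000),
    LossEvent("Bank B", "retail banking",    "external fraud", date(2009, 2, 3),  120_000, 0),
    LossEvent("Bank B", "trading and sales", "system failure", date(2009, 3, 20), 900_000, 150_000),
]
for (line, etype), net in aggregate_losses(events).items():
    print(f"{line} / {etype}: net loss {net:,.0f}")
```

The aggregation step is the point made above: individual firms' losses only become a usable key risk indicator once they can be pooled and trended across the industry.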
In the last five days, a lot of people have asked many great questions, so I thought I'd answer them on this page to provide a better accounting of what this is all about and what we hope will result.

Q: What is XBRL?
A: XBRL (Extensible Business Reporting Language) is an XML language for describing business terms, and the relationships between terms, in a report. It enables semantic clarity of terminology by standardizing a data model – the field names and their relationships – for reporting purposes. (A rough, hypothetical sketch of what a tagged risk fact might look like appears after this Q&A.)

Q: Why do we need a Risk Taxonomy in XBRL?
A: Because risk measurement, calculation, and reporting are mysterious, arcane, and underutilized business processes in banking and financial markets, and reporting standards can demystify, simplify, and commoditize risk calculation, making it a more ubiquitous part of business decision-making. In the insurance world, risk measurement, calculation, and forecasting are THE BUSINESS. But insurance companies don't tell you what formulas they use to calculate your premium, how they determine their own reserves, or what protocols and methods they use to pay out claims. Actuaries study for years to learn these methods, and very few business professionals – and virtually no IT professionals – have any idea how risk is measured, calculated, and reported.

Q: But what do you mean by Risk Measurement? Don't we need Risk Management?
A: Sure. Risk Management is important. But only human beings can manage risk, and before we get there we need to measure past losses, compare them to current events, and forecast potential outcomes. Making a business decision without this analysis is risky. Making a business decision with this analysis is also risky, but when the inputs and decisions are recorded, we have the opportunity to learn from our mistakes and improve over time. We will never eliminate risk, but we can use scientific decision-making techniques to improve our odds.

Today, most people focus on Risk Management. They use qualitative risk assessments to imagine what kinds of vulnerabilities, loss events, and losses may be incurred from business activities. This is a valid method for forecasting and preventing potential losses. But the methods and results vary with the qualitative insight and skill of the practitioner, and they depend on disciplined application. Over time, it is very difficult to compare quantitative loss results to qualitative risk assessments. We can leverage standards in risk measurement reporting to apply quantitative risk assessment to the practices of risk measurement and management so that inputs and outputs have a mathematical foundation. That foundation allows automation, and automation enables ubiquity of application. And that is the purpose of a standard – to enable widespread application and value – so that everyone can measure, calculate, and report risk, without an actuarial degree.

Q: Why do we need risk standards?
A: One of the things we've seen in the current Credit Crisis is the ambiguity and confusion about risk. Regardless of whether you are a trader paid to take risks or an IT professional paid to avoid risk, it is nearly impossible to understand the incremental impact of your decisions on your department, your division, your company, your industry, your market, your economy, or your nation. There is just too much data today, and our regulators haven't tooled up to take advantage of the information companies could produce to help regulators and markets operate more transparently. We know now, in dramatic hindsight, that incremental risks have systemic impact. People can only understand that impact when they can aggregate the incremental losses of the past, compare them to current circumstances, and make forecasts about the future. To aggregate and compare risk data, we need standards, and XBRL seems to us to be the most logical and effective tool for creating those standards.

Q: How could the XBRL Risk Taxonomy be used?
A: These standards will enable more effective risk measurement and reporting within firms, new macro-economic tools for regulators and policy-makers, transparency for financial markets, and a more ubiquitous use of risk calculation in decision-making across innumerable disciplines.

Let me give you an example: the insurance industry does risk calculation all the time. If you are a doctor, lawyer, accountant, or financial advisor, chances are you buy professional liability insurance. When you apply for the coverage, you tell your insurance company about yourself, your business activities, past losses, claims, and insurance coverage. The insurance company will compare your application to its own database of insureds, losses, and rates. The insurance company will also compare your loss profile to claims data it purchases from the Insurance Services Office (ISO). ISO aggregates loss data from insurance companies across the US and provides anonymous records back to the same companies. Insurance companies need that third-party verification of loss data for loss rating and trending. No matter how large an insurance company is, and no matter how many years it has been doing business and collecting loss history, everyone compares in-house data to aggregate industry data. It's a larger statistical sample size, and it helps everyone set aside the right amount of premium from each insured as reserves to pay out future losses.

We need the same kind of system in the financial markets. It is partially there today. Under the Basel II accord, banks are required to report the amount of gross income they set aside to self-insure against forecasted losses. But they only report that in the aggregate. No one is reporting the underlying data from which the risk reserves are calculated, and data reporting on that level could have huge benefits. One benefit is that regulators could compare reported loss information across national and international economies, which would provide enormous new insight into macro-economic trends that could help reduce business-cycle volatility. Another benefit is that banks and financial firms could compare their own loss information to very large samples of industry losses, which would make their own forecasting models far more efficient and help everyone manage risks more effectively and reduce paid losses over time. A final benefit is that markets and rating agencies would gain new insight into the underlying exposures in financial instruments, enabling far more accurate and timely forms of risk rating and making markets more transparent and efficient.

Q: Why is the Data Governance Council leading this standards initiative?
A: Because risk measurement, calculation, and reporting within and between enterprises is not possible without semantic clarity around how we classify, describe, and document incidents, losses, events, formulas, and a host of other terminology. This is a very complex topic, and it is easy to be confused and confounded by the terminology. Before we can all talk about this topic intelligently, we need a common vocabulary. That vocabulary will enable efficient communication and transferable methods and skills. And this is very much a Data Governance challenge. The Data Governance Council has been studying these issues for four years and – together with our partners in the FSTC, EDM Council, OCEG, and other organizations – we think we can make a difference with this standard.

Q: Why would organizations want to apply XBRL to risk?
A: We can see clearly from the subprime credit crisis that there are still some non-standard methods for appraising risk. We don't have the semantic interoperability to allow us to take an aggregate look at risk across multiple organizations. This makes it hard for companies and regulators to agree on what risk exists, and it is difficult to consistently report the risk companies are taking. XBRL can be a tool to help organizations use common standards for the way risk is described.

Q: What benefit would XBRL for risk reporting provide companies and regulators?
A: By translating risk reporting into a consistent software language, XBRL will enable organizations to more easily perform advanced analysis, conduct meaningful research, and compare risk and loss history among multiple organizations. It could be used for internal or external reporting purposes. Regulators could potentially use it to create a global loss-history database of anonymous credit, market, and operational incidents, events, and losses from every institution, much like the one the insurance industry relies upon. XBRL could make risk reporting simpler and more powerful, and that should create broad market benefits.

Q: What are the primary obstacles to the adoption of XBRL for risk reporting?
A: The real challenge is not in creating a risk taxonomy using XBRL. The challenge is getting agreement upon it and ensuring there is willingness worldwide to use it. That is why the Data Governance Council is seeking input from organizations and regulators worldwide.

Q: Who is supporting this initiative?
A: In addition to more than 50 IBM Data Governance Council members, the Securities and Exchange Commission, the Enterprise Data Management Council, the Financial Services Technology Consortium, the Open Compliance and Ethics Group (OCEG), XBRL International, and XBRL.US are all contributing to the process.

Q: How far along are you in the process today?
A: We have a starter taxonomy that we will begin socializing at an XBRL for Risk Forum on February 26-27 at the Levin Institute in New York. The Data Governance Council's role is that of a facilitator, seeking proposals and comments to begin defining a taxonomy for risk that can be agreed upon by many organizations worldwide. This work will continue through the first half of next year, with a final recommendation expected by the end of the year.

http://www.fstc.org/docs/conferences/IBMDataGovernanceForumonXBRL.pdf
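To make the "What is XBRL?" answer above a little more concrete, here is a minimal sketch of the general idea: a reported value is tagged with a concept from a shared taxonomy and tied to a context identifying the reporter and the period, which is what makes it comparable and aggregatable across firms. The namespace, element names, and values below are invented for illustration; they are not the XBRL specification or the draft risk taxonomy discussed here.

```python
# Hypothetical sketch: what a tagged "risk fact" might look like in an
# XBRL-style instance document. The namespace, element names, and context
# structure below are invented for illustration; they are NOT the actual
# XBRL specification or any proposed risk taxonomy.
import xml.etree.ElementTree as ET

root = ET.Element("xbrl", {
    "xmlns:risk": "http://example.org/risk-taxonomy/draft",  # made-up namespace
})

# A "context" identifies who is reporting and for what period.
context = ET.SubElement(root, "context", {"id": "Q1-2009"})
entity = ET.SubElement(context, "entity")
ET.SubElement(entity, "identifier").text = "BANK-A"          # hypothetical reporter id
period = ET.SubElement(context, "period")
ET.SubElement(period, "startDate").text = "2009-01-01"
ET.SubElement(period, "endDate").text = "2009-03-31"

# The facts themselves: each value is tied to a taxonomy concept and a context,
# which is what makes it comparable and aggregatable across reporters.
loss = ET.SubElement(root, "risk:operationalLossGross",
                     {"contextRef": "Q1-2009", "unitRef": "USD"})
loss.text = "1270000"
events = ET.SubElement(root, "risk:operationalLossEventCount",
                       {"contextRef": "Q1-2009"})
events.text = "3"

print(ET.tostring(root, encoding="unicode"))
```

Once the concept names come from a shared taxonomy, a regulator can aggregate the same fact across thousands of filers without guessing what each field in each firm's spreadsheet means.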