Do today’s MBAs need Analytical Skills? That was the question that a recent Symposium
tried to answer.
On October 21, George Washington University's
Institute for Integrating Statistics in Decision Sciences (I2SDS)
and IBM's Analytics Solution Center
held a Symposium entitled: Analytics and the 21st Century MBA. The abstract provides a good description of
the thesis of the Symposium:
The 21st century belongs to those who can think and act analytically. No
longer is it good enough to make business decisions, no matter what the field,
based on little more than feelings or gut reactions to events. Consumer
products companies, insurance companies, banks, governments, and even sports
teams are turning to Analytics to improve their bottom line and assure their
survivability in this age of hyper-competition and increasingly severe…
This Symposium… will demonstrate how Analytics is a critical component
of 21st century business careers, whether the practitioner's primary responsibility is
in a functional area (Marketing, Operations, Finance, Strategy, International
Business, HR) or a vertical such as Health Care or Tourism.
The Symposium provided talks by leading users of Analytics in Marketing,
Retail, Finance, and the Public Sector.
More on the Symposium is at: http://business.gwu.edu/decisionsciences/i2sds/pdf/GWU%20ASCOutline.pdf
Do you agree with the thesis? Are you
seeing more need for employees with analytical skills? Do you think those with these skills are
having an easier time getting jobs?
I’d like to hear your thoughts.
Frank Stein, Director, IBM Analytics Solution Center
The six years since IBM ushered in the new era of Cognitive Business have witnessed several pivotal transitions. The massive system of servers and disk drives that beat Jeopardy! using an advanced orchestration of machine learning, natural language processing, and statistical reasoning has evolved into a sophisticated set of services delivered through a world-class cloud infrastructure. To help you understand the direction of these enhancements and their impact on Cognitive Business, the IBM Analytics Solution Center was pleased to have Rob High, IBM Fellow, VP and CTO Watson Solutions, present on the future of cognitive augmented intelligence.
Rob started by taking us back to the Jeopardy Challenge in 2011, reminding us how hard it is for a machine to answer a question correctly, but also how good people, like Ken Jennings, are at answering questions. What changed that allowed Watson to win at the game of Jeopardy? IBM took a different approach than classical AI which focused on semantics, ontologies, and rules -- IBM focused on linguistics and the use of machine learning to help uncover signals to the right answer.
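For readers who like to see the idea in code, here is a minimal sketch of that ensemble approach – emphatically not IBM's DeepQA code, just an illustration in Python. Several independent scorers each contribute weak evidence for a candidate answer, and a set of weights (learned from training data in the real system, hand-set here) combines them into a confidence. The clue, candidates, scorers, and weights are all invented for the example.

```python
# Illustrative only -- not IBM's DeepQA. Each scorer returns weak
# evidence in [0, 1]; a weighted sum turns the evidence into a
# confidence, and the highest-confidence candidate wins.

def keyword_overlap(clue, candidate):
    """Fraction of clue words that also appear in the candidate's supporting text."""
    clue_words = set(clue.lower().split())
    cand_words = set(candidate["context"].lower().split())
    return len(clue_words & cand_words) / max(len(clue_words), 1)

def type_match(clue, candidate):
    """Crude check that the candidate's type (e.g. 'city') appears in the clue."""
    return 1.0 if candidate["type"] in clue.lower() else 0.0

SCORERS = [keyword_overlap, type_match]
WEIGHTS = [0.7, 0.3]  # learned by machine learning in the real system

def confidence(clue, candidate):
    return sum(w * s(clue, candidate) for w, s in zip(WEIGHTS, SCORERS))

clue = "This city hosted the 1900 and 1924 summer Olympics"
candidates = [
    {"answer": "Paris", "type": "city",
     "context": "Paris hosted the 1900 and 1924 summer Olympics"},
    {"answer": "London", "type": "city",
     "context": "London hosted the 1908 and 1948 summer Olympics"},
]
best = max(candidates, key=lambda c: confidence(clue, c))
print(best["answer"], round(confidence(clue, best), 2))  # Paris 0.84
```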
Rob cited the consulting firm IDC's FutureScape report that said that “by 2018, half of all consumers will regularly interact with services based on cognitive computing.” Why this remarkable adoption of cognitive technologies? We collectively are generating so much data today that we can't consume and make sense of all of it. Doctors can't read everything in their field – they would need to spend 150 hours a week to read everything, leaving no time for doing their job – or sleeping. Every one of us is in a similar situation.
What are Cognitive systems? Cognitive systems have 4 characteristics – they understand, reason, learn, and interact with people. Rob explained that these systems are different from traditional rule-based systems because they are taught based on data rather than programmed. This training data impacts how the system will answer questions – customers will use training data specific to their organization and thus create cognitive systems that conform to their organization’s business approach, and more broadly, its philosophy.
Since the Jeopardy Challenge, IBM has been very active in enhancing the technology and providing new Cognitive offerings. Rob focused on IBM’s latest work on Conversation services. Conversations are much broader than just answering fact-based questions. Conversations, whether between two people, or people and machines, should engage the user, understand the user’s concerns, build on an idea, and leave the user inspired and satisfied at the end of the conversation. In the best conversations, each party comes away from the conversation with new thoughts that were generated within the conversation. It will be hard to develop such a sophisticated Conversation service but this is our goal.
Rob then showed a video of a future Cognitive Mergers & Acquisition Advisor named Celia that responded to questions from two people analyzing acquisition targets. Celia could understand the conversation between the two people and then interrupted to ask, “It sounds like you are discussing the work we did last week, would you like me to bring up the results from that session?” Imagine a cognitive assistant that could participate in your conference calls, recalling previous action items, checking to see if the items had been accomplished, or performing analysis that it deems pertinent to the discussion.
One of the crowd-pleasers at the Seminar was the demo of “Embodied Cognition” using a Pepper humanoid robot (from Softbank Robotics) connected to Watson Conversation service. Besides answering questions, Pepper would turn to face the speaker, gesture with her (?) hands, and provide inflection in her voice. Pepper can also use the Watson Visual Recognition Service to recognize individuals and Watson Tone Analyzer to understand the user’s emotional state. Although the answers were no different than what Watson could provide without the Pepper embodiment, the human-like interactions were a strong draw to the humans attending the seminar!
Rob’s slides are available at www.ibm.com/ascdc under the May 31 event. Or email me if you’d like more information: firstname.lastname@example.org
In medieval times, Alchemists hoped to convert base metals
into the noble metal gold through the use of a Philosopher's Stone.
Today, in the field of information science, we talk about
Information Alchemy, converting data into information and then into
knowledge. Some people even add a 4th
stage of converting knowledge into wisdom[i], but
that will be for another blog post.
Data is defined as raw characters or numbers, whereas information is
that data processed into various relationships so that it carries
some meaning. Dr. Eisenberg at the University of Washington describes knowledge as the
“collected, combined, organized, processed information for a purpose.” Over time, it is thought that accumulated and
refined knowledge leads to Wisdom.
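A toy example makes the ladder concrete. The sales figures and threshold below are invented; the point is only to show raw data becoming organized information and then purposeful knowledge.

```python
# A toy illustration of the data -> information -> knowledge ladder.
# The figures and the threshold are invented for the example.

# Data: raw numbers with no context
raw = [("east", 120), ("west", 80), ("east", 140), ("west", 60)]

# Information: the data organized into a meaningful relationship
totals = {}
for region, amount in raw:
    totals[region] = totals.get(region, 0) + amount
print(totals)  # {'east': 260, 'west': 140}

# Knowledge: information collected and combined for a purpose --
# here, deciding where to focus attention
underperforming = [r for r, t in totals.items() if t < 200]
print("Regions needing attention:", underperforming)
```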
This year, the total of all digital data created is forecast
to reach close to 4 Zettabytes, or 4×10²¹ bytes, according to IDC[ii]. This is nearly four times the 2010 volume and
it is growing rapidly. All of this data
should let us make a smarter and better planet.
However, today we’re drowning in all this data because we don’t have the
time as individuals to process all this information, and we don’t have computer
systems that can turn this data into insight.
But soon that will change.
We are entering a new era in computing which IBM is calling Cognitive
Computing. The first of these systems is
the IBM Watson system which debuted on the Jeopardy! Show 2 years ago. Traditional computing systems have done a
great job with handling data, including storing it and manipulating it into
information. So now we have lots of
financial, inventory, customer, and all sorts of other, mostly numerical, data.
We also have lots of unstructured information such as text,
audio, graphics, and video. We used to say that 80% of the new bytes being
created today were associated with unstructured data, but that number is
probably closer to 90% given all the video being created these days. This text and multimedia information is
human-readable – in fact, it is designed by humans for humans to understand but
is not easily understandable by today’s computers.
And that is a considerable problem. Today, the transformation of information into
knowledge is primarily done in people’s heads.
Not just by scientists, engineers, or financial analysts, but by
everyone who reads an article or watches a video. The time available for people (some would
say skilled people) to analyze information to gain insights (knowledge) is the
limiting factor in the production of new knowledge today. To say this another way, we are now
information-rich, but knowledge-poor.
The goal of the cognitive computing efforts is to remove
this limitation by designing computer systems that can take this abundance of
information, much of it in human readable/viewable formats, and convert it into
knowledge. For example, in the Jeopardy!
IBM Challenge, the Watson computer system analyzed its deep information stores
to find the answer that best matched the clue and the category. It accomplished this feat by utilizing many different
algorithms to attempt to “understand” the text information and a machine
learning (artificial intelligence) scoring system to select the best response.
In a more significant effort, IBM is working with Memorial
Sloan-Kettering and WellPoint (a major BC/BS licensee) to use cognitive
computing technology to assist doctors by helping to identify individualized
treatment options for patients with cancer. It is, in effect, creating knowledge of the
appropriate treatment options from information about the patient’s condition
and medical history, and information from clinical trials and best practices on cancer treatment.
While the field of cognitive computing is just beginning, I believe
over the next several years, we will learn how to perform “Information Alchemy”
and we’ll see how this newly created knowledge can benefit our organizations
and our lives.
As the quintessential information-based organizations, government agencies may be in the biggest need for "Information Alchemy." Do you see this need? Do you see opportunities for Cognitive Computing at your agency?
Frank Stein, Director of IBM’s Analytics Solution Center
[i] Eisenberg, Mike,
“Information Alchemy: Transforming Data and Information into Knowledge and
Wisdom”, March 30, 2012, http://faculty.washington.edu/mbe/Eisenberg_Intro_to_Information%20Alchemy.pdf
Derechos, Droughts, Hottest July on Record, Shattered
High Temp Records, Greenland Ice Sheet Melts. Just what is going on with the weather these
days? Is this weather really abnormal or
does it just seem to be that way? Is this part of a trend? Does global climate change mean we’ll have
more of these extreme weather events? Being
a data and analytics person, I started looking to see what data analysis had
been done on this subject.
The US Climate Extremes Index[i] provides
a measure to track the occurrence of extreme data (although it doesn’t take
into account Derechos and other severe wind events). The trend of the index (smoothed) has been on
the rise since 1970 and is now at an all-time high. The Index
was at a record-high 46% during the January-July period, over twice the average
value, surpassing the previous record CEI of 42%, which
occurred in 1934. Extremes in warm
daytime temperatures (83 percent) and warm nighttime temperatures (74 percent)
both covered record large areas of the nation, contributing to the record high
year-to-date USCEI value.
This index is
compiled by combining measurements throughout the country (1,218-station US Historical Climatology Network)
that show the percentage of the country impacted by extreme weather in terms of
maximum temperatures much above or below normal, minimum temperatures
above/below normal, percentage of country in severe drought/severe moisture
surplus, percentage of the country with a much greater than normal proportion
of precipitation derived from extreme 1 day events, and the percentage of the
country with a much greater than normal number of days with precipitation.
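To illustrate how such an index is put together, here is a rough sketch in Python of averaging component percentages into a single CEI-like number. Only the two temperature figures come from the text above; the other component values are invented to make the arithmetic visible, and the real CEI uses NOAA station data and far more careful definitions.

```python
# Invented component values (except the two temperature figures cited
# above); the real CEI is computed from NOAA's station network.
components = {
    "max temp much above normal": 83.0,   # percent of country, from the text
    "min temp much above normal": 74.0,   # percent of country, from the text
    "severe drought / moisture surplus": 35.0,
    "precip from extreme 1-day events": 30.0,
    "days-with-precipitation anomaly": 8.0,
}

# The index is (roughly) the average of its component percentages
cei = sum(components.values()) / len(components)
print(f"Illustrative CEI: {cei:.1f}%")  # 46.0% -- cf. the record 46% above
```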
The U.S. Global
Change Research Program in 2009 published a study which documented the changing
climate and its impact on the United States[ii]. The
study uses 3 standard forms of data analysis: 1) reports on observations, 2)
predictions based on the observed trends, and 3) modeling to better predict future
climate changes based on various assumptions about the amount of heat-trapping
gases in the atmosphere. While the first
two types are based on large quantities of collected data, they use only U.S.
observations. The modeling, however,
must be done on a global basis which substantially increases the amount of data
that must be crunched.
Here are some of the findings as they relate to extreme weather.
Overall Warming of the Climate
Temperatures, on average, in the 1993-2008 period are 1-2ºF
higher than in the 1961-79 baseline. By
the end of the century, the average U.S. temperature is projected to
increase by approximately 7-11ºF under a high emissions model and by
approximately 4-6.5ºF under a lower emissions scenario. The temperature observations show that there
has been an increase in warmer and more frequent warm days and warm nights, and
warmer and less frequent cold days and cold nights in most areas.
More intense, more frequent, and longer-lasting heat waves
In the past several decades, there has been an increasing
trend in high-humidity heat waves, characterized by extremely high nighttime
temperatures. Parts of the South that
currently have about 60 days per year with temperatures over 90ºF are projected
to experience 150 or more days a year above 90ºF under a higher emissions
scenario. In addition to occurring more
frequently, at the end of this century these very hot days are projected to be
about 10ºF hotter than they are today.
Increased extremes of summer dryness and winter wetness with a generally
greater risk of droughts and floods.
Trends in drought have strong regional variations. Over the past 50 years, with increasing
temperatures, the frequency of drought in many parts of the West and Southeast
has increased significantly. Models show
that the Southwest, in particular, is expected to experience increasing drought
as the dry zone just outside of the tropics expands northward with global warming.
Precipitation coming in heavier downpours, with longer dry periods in between
While average precipitation over
the nation as a whole increased by about 7% over the past century, the amount
of precipitation falling in the heaviest 1% of rain events increased nearly
20%. One of the outputs of the climate
modeling is to project the probability of certain events. For example, heavy downpours that are now a “1
in 20 year occurrence” are projected to occur about “once every 4-15 years” by
the end of the century. These heavy downpours are expected to be
10-25% heavier by the end of the century than they are now. This will likely cause more flooding events
(flooding depends both upon the weather and the susceptibility of the area to flooding).
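The “1 in 20 year” language is just return-period arithmetic, which is easy to verify yourself: a 1-in-N-year event has an annual exceedance probability of about 1/N, and the chance of seeing at least one over a longer horizon follows from the complement. The 30-year horizon below is an illustrative choice.

```python
# Return-period arithmetic: a "1 in N year" event has annual
# probability ~1/N, so over H years the chance of at least one
# occurrence is 1 - (1 - 1/N)**H.

def prob_at_least_one(return_period_years, horizon_years):
    annual_p = 1.0 / return_period_years
    return 1.0 - (1.0 - annual_p) ** horizon_years

# Today's 1-in-20-year downpour, over an (illustrative) 30-year horizon
print(f"{prob_at_least_one(20, 30):.0%}")  # ~79%

# If the same event becomes 1-in-15 or 1-in-4 years by century's end
print(f"{prob_at_least_one(15, 30):.0%}")  # ~87%
print(f"{prob_at_least_one(4, 30):.0%}")   # ~100%
```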
More intense but fewer severe storms
Reports of severe weather such as
tornadoes and severe thunderstorms have increased during the past 50 years.
However, the climate study indicates that much of this may be due to better
monitoring technologies, changes in population areas, and increasing public
awareness. Climate models do project an increase in the frequency of
environmental conditions favorable to severe thunderstorms. But the report notes, “the inability to
adequately model the small-scale conditions involved in thunderstorm
development remains a limiting factor in projecting the future character of
severe thunderstorms and other small-scale weather phenomena.”[iii] Advances in modeling and big data analytics,
as well as improved monitoring networks, are likely to reduce this limitation in the future.
The June Derecho that hit the Washington metropolitan
area shows an example of the current state of the art in forecasting a severe
storm. The Storm Prediction Center of
NOAA was able to provide approximately 4 hours advance warning of the
storm. Longer term predictions would
require additional data about the atmospheric instability that propelled the
Derecho from Iowa to the Washington
Metro area, as well as better real time modeling.
Shift of storm tracks towards the poles
Cold season storm tracks have been
shifting northward over the last 50 years, with a decrease in the frequency of
storms in mid-latitude areas. The
northward shift is projected to continue, and strong cold season storms are
likely to become stronger and more frequent, with greater wind speeds and more
extreme wave heights.
The climate changes will have an
interesting effect on the so-called “lake effect”. Over the past 50 years, there has been a record of
increased lake-effect snowfall near the Great Lakes. As the climate has warmed, there is less ice
on the Great Lakes which has allowed greater
evaporation from the surface resulting in heavier snowstorms. Eventually, the temperatures are expected to
rise sufficiently that much of the precipitation will end up falling as rain,
reducing the snow totals.
While trending of individual elements such as temperatures
is useful, accurate predictions require consideration of the interaction
between the climate elements. For
example, there is mutual enhancement effect between droughts and heat
waves. Heat waves enhance soil drying,
and drier soil heats the air above more since no energy goes into evaporating
the soil moisture. Big data modeling can
show the results of this escalating cycle of warming on the future climate.
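Here is a deliberately toy version of that feedback loop in Python. The coefficients are invented purely to show the escalation mechanism; real climate models couple these effects through physics, not two made-up constants.

```python
# Toy drought/heat-wave feedback; coefficients are invented.
soil_moisture = 1.0  # relative units (1.0 = normal)
temp_anomaly = 1.0   # degrees F above normal

for week in range(1, 6):
    soil_moisture *= 1.0 - 0.05 * temp_anomaly   # heat dries the soil
    temp_anomaly += 0.5 * (1.0 - soil_moisture)  # drier soil heats the air
    print(f"week {week}: moisture={soil_moisture:.2f}, anomaly=+{temp_anomaly:.2f}F")
```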
The New Normal
So it seems that all this abnormal weather we are seeing
will become the new normal. Forewarned is forearmed.
Frank Stein, Director, IBM Analytics Solution Center, Washington, DC
[ii] Global Climate Change
Impacts in the United States,
Thomas R. Karl, Jerry M. Melillo, and Thomas C. Peterson (eds.), Cambridge University Press, 2009.
On July 4th, CERN scientists announced that they
observed a particle that strongly resembles the Higgs boson, a critical element
of the standard model of particle physics.
This particle is thought to be responsible for the characteristic of
mass, which gives objects weight when combined with gravity.
Detection of the Higgs Boson would not have been possible
without the last decade’s advances in processing big data. Joe Incandela, CMS Spokesman at CERN,
explained that if every collision they scanned were a sand grain, those
sand grains would have filled an Olympic-sized pool over the last 2
years. They had to find the several
dozen or so grains of sand that exhibited characteristics consistent with the Higgs boson.
In addition to developing the Large Hadron Collider, the
CERN teams also developed a data strategy to deal with the data from the
hundreds of millions of particle collisions occurring each second. The sensors record the raw data on billions
of events occurring in the proton collider. These readings are then reconstructed
to show the energy and directions of many particle traces. The data goes through 2 stages of filtering
to reduce the data on 40 million collisions/sec down to 10 million interesting
ones per second, and then to 100 or 200 collisions that are studied in detail.
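Conceptually, the trigger works like any multi-stage filter: a cheap first cut discards most events and a more expensive second cut keeps the handful worth full reconstruction. The sketch below shows the shape of such a pipeline in Python; the event fields, thresholds, and rates are invented and bear no relation to CERN's actual trigger code.

```python
# A two-stage filter pipeline; fields, thresholds, and rates invented.
import random

def generate_events(n):
    """Stand-in for the detector readout."""
    for _ in range(n):
        yield {"energy": random.expovariate(1.0), "tracks": random.randint(0, 12)}

def stage1(events):
    """Fast, coarse cut: drop obviously uninteresting collisions."""
    return (e for e in events if e["energy"] > 2.0)

def stage2(events):
    """Slower, reconstruction-level cut on the survivors."""
    return (e for e in events if e["tracks"] >= 6)

kept = list(stage2(stage1(generate_events(100_000))))
print(f"kept {len(kept)} of 100,000 simulated events")
```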
According to Rolf-Dieter Heuer, director general at CERN, “The
computing power and network is a very important part of the research.” Over
15 Petabytes (a Petabyte is a million Gigabytes) are stored each year. This data is distributed through the Worldwide
Large Hadron Collider Computing Grid (WLCG) to each of 11 major Tier 1 centers
around the world, and from there to research centers and individual
scientists. In the U.S., the Open
Science Grid, supported by NSF and DOE, provides much of the compute and
storage power for this work. The
scientists use Monte Carlo simulations for
generating and propagating the physics interactions of the elementary particles
passing through the collider to determine which ones correspond to the
hypothesized behavior of the Higgs Boson.
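In the same spirit, here is a toy Monte Carlo in Python: simulate many background-only “experiments” and ask how often chance alone produces as many events in a search window as were observed. The expected and observed counts are invented; real analyses simulate full detector physics.

```python
# Toy significance estimate via Monte Carlo; counts are invented.
import random

expected_background = 50  # events expected in the search window
observed = 65             # events "seen" (illustrative)
trials = 100_000

def poisson_draw(mean):
    """Poisson sample via unit-rate exponential inter-arrival times."""
    count, t = 0, random.expovariate(1.0)
    while t < mean:
        count += 1
        t += random.expovariate(1.0)
    return count

fakes = sum(poisson_draw(expected_background) >= observed for _ in range(trials))
print(f"background-only fakes the excess in ~{fakes / trials:.4f} of experiments")
```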
What they found was a never-before-seen elementary particle
that seems to fit the behavior of the Higgs Boson and is very heavy –
approximately 133 proton masses. Further
data analysis is now needed to ascertain its spin, decay modes, and other properties.
Think the amount of data generated by the Large Hadron
Collider is huge? The forthcoming Square
Kilometre Array radio telescope is expected to generate 100’s of Petabytes of
data per day. More on that in a future blog post.
Does your government agency monitor social media for information relevant to your mission? Should it?
IBM's Analytics Solution Center recently held a seminar to explore
how agencies and companies can obtain value and insight using social media analytics.
Pat Fiorenza discussed how agencies can develop an ROI Model - Return
on Influence Model - for social media. Agencies use social media
analytics to help inform their decision making by gathering
information and research, and by learning what other agencies and citizens are
saying. Interesting examples from CDC and GovLoop were provided.
Learn more here.
Ed Burek, IBM, talked about how savvy companies are now tapping into
customer-generated content, and how government agencies could do the same to
learn how taxpayers feel about government actions and messaging. He
gave examples of how regulatory agencies could receive the unvarnished
comments from those impacted by regulations, as well as how they could
stay on top of "negative chatter." IBM has created a framework to
derive business insight from the vast amounts of social media data now
being transmitted. Called Cognos Consumer Insight, it provides real-time
information on trends and sentiment.
Rick Lawrence, IBM Manager for Machine Learning at Watson Research
Center, next talked about the leading edge of social media analytics. He
provided examples from the research portfolio on discovering who the
key influencers are, identifying emerging topics of discussion, and
mapping the billions of tweets to concepts that we really care about.
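As a flavor of the influencer question, here is a deliberately naive sketch: score each user by how many distinct other users repeat their posts. The sample data is invented, and the real research used far richer graph and machine learning methods.

```python
# Naive influencer ranking from (author, retweeter) pairs; data invented.
from collections import defaultdict

retweets = [
    ("alice", "bob"), ("alice", "carol"), ("alice", "dave"),
    ("bob", "carol"), ("bob", "carol"),  # duplicate pair counts once
    ("carol", "dave"),
]

audiences = defaultdict(set)
for author, retweeter in retweets:
    audiences[author].add(retweeter)  # distinct retweeters only

for author in sorted(audiences, key=lambda a: len(audiences[a]), reverse=True):
    print(author, len(audiences[author]))  # alice 3, bob 1, carol 1
```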
All of the presentations are available on the ASC website under Past Events (May 10, 2012)
Does your agency care about what its constituents are saying about it
on social media? Does your agency need to have real time intelligence
on events within its mission space? With 340 million tweets per day, 2
million blog posts, and 500 million Facebook updates, how can you find
the important information? Social Media Analytics may be an idea
whose time has come.
Frank Stein, Director, IBM Analytics Solution Center
P.S. The Center for the Business of Government issued a new report on Tweeting in Government. Pat provided a good overview here.
At the end of the Superbowl, people created 12,233 tweets per second. And it turns out that was less than half the
number of tweets created in Japan
on December 9th, when 25,088 tweets per second were recorded about
the Castle in the Sky anime movie.
That, according to Chinese reports, is nothing compared to the 32,312
messages per second sent on their Twitter-like Sina Weibo system during the
beginning of the Chinese New Year.
Within the government space, we’re no strangers to our own Big Data. Whether you’re in the DOD or NASA, the IRS or
SSA, you’ve got your own Big Data to deal with.
Last week, Forrester Research released a report that should help those in
government understand the Big Data Market.
It is called “The Forrester Wave™: Enterprise Hadoop Solutions, Q1 2012”
(February 2, 2012). The IBM technologies evaluated were IBM InfoSphere
BigInsights (IBM’s Hadoop-based offering), and IBM Netezza Analytics. In this
evaluation, IBM was placed in the Leaders category of the Wave and achieved the
highest possible score in both the Strategy and Market Presence segments. In
the third segment, Current Offering, IBM received the second highest score. You
can get the complete report here.
The report by analyst James
Kobielus states, “IBM has the deepest Hadoop platform and application portfolio.”
The IBM Analytics Solution
Center in Washington, DC
also focused on how to handle Big Data at its January 19th
seminar. The seminar covered various
aspects of Big Data including data-in-motion processing software, Hadoop
software, SONAS (scale out network attached storage), and the Netezza data
warehouse appliance.
1. Big Data in Motion
Going back to the tweeting: if you’re a government agency and you need to get
actionable insights into 10s of thousands of tweets per second which might be
about an unfolding crisis, how would you do it?
InfoSphere Streams is unlike anything else in the market in its ability
to ingest, analyze and act on data “in motion” – that is, data is processed and
analyzed at microsecond latencies.
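To show the “data in motion” idea without the actual InfoSphere Streams API (which this is not), here is a plain-Python generator pipeline: each tweet is scored and acted on as it arrives, with nothing landed to storage first. The keyword list and alert rule are invented.

```python
# Generator pipeline standing in for stream processing; not the
# InfoSphere Streams API. Keywords and the alert rule are invented.

def tweet_source():
    """Stand-in for a live tweet feed."""
    for text in ["traffic normal", "flooding on I-95!", "lunch was great",
                 "major flooding downtown, send help"]:
        yield text

def score(stream, keywords=("flooding", "fire", "outage")):
    """Score each tweet as it arrives -- nothing is stored first."""
    for text in stream:
        yield text, sum(k in text.lower() for k in keywords)

def alert(stream, threshold=1):
    """Act on high-scoring tweets immediately."""
    for text, hits in stream:
        if hits >= threshold:
            print("ALERT:", text)

alert(score(tweet_source()))
```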
2. Hadoop
Hadoop is an open source codebase supported by the Apache Software Foundation. It is designed to process large volumes of
unstructured data. For example, if a government agency wanted to analyze months
of tweets or documents in non-real time, the Hadoop distributed file system
would be a good choice. The enterprise-class
IBM Hadoop-based offering, BigInsights, is designed with system
management, security, and performance features that go beyond what is available
in the open source. It provides the
ability to analyze and extract information from a wide variety of data sources,
and promotes data exploration and discovery.
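The canonical Hadoop example is a word count expressed as map and reduce steps. The sketch below simulates the two phases in plain Python; on a real cluster, whether BigInsights or stock Hadoop, the same two functions would run distributed across the data blocks.

```python
# Word count in the map/reduce style; documents are invented.
from itertools import groupby
from operator import itemgetter

documents = ["big data needs big tools", "hadoop handles big data"]

# Map phase: emit (word, 1) for every word in every document
mapped = [(word, 1) for doc in documents for word in doc.split()]

# Shuffle phase: bring identical keys together
mapped.sort(key=itemgetter(0))

# Reduce phase: sum the counts for each word
counts = {word: sum(c for _, c in group)
          for word, group in groupby(mapped, key=itemgetter(0))}
print(counts)  # {'big': 3, 'data': 2, 'hadoop': 1, ...}
```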
3. Scale Out NAS
Network Attached Storage, or NAS, has become a very popular way to provide storage
within an organization. However NAS has
a number of limitations when dealing with
Big Data, including the number of objects (files) it can support, support
for very large files, the i/o bandwidth
it can deliver to applications, and fragmented data management across multiple
systems. The IBM SONAS system is
designed to overcome these limitations and look like a very large virtual
system to the applications.
4. Data Warehouse Appliance
Traditional data warehouses, when used for large volumes of structured data, can be costly to
operate and maintain, and can be very slow when used for sophisticated
analysis. The Netezza appliance is a
dedicated device requiring no tuning or storage administration and with special
hardware chips to accelerate the performance of advanced analytics.
Want to learn more?
- More details on the topics can
be found at the ASC Website under Past Events.
- On the educational front, we
provide free online training through BigDataUniversity.com. To
date, more than 13,000 students have registered for courses on Hadoop,
cloud computing and more.
We are working with a broad range of clients to help them define
their big data strategies. We look forward to working with you on your Big Data projects.
The Forrester Wave™: Enterprise Hadoop Solutions, Q1 2012,
Forrester Research, Inc., February 2, 2012. The Forrester Wave is copyrighted
by Forrester Research, Inc. Forrester and Forrester Wave are trademarks of
Forrester Research, Inc. The Forrester Wave is a graphical representation of
Forrester's call on a market and is plotted using a detailed spreadsheet with
exposed scores, weightings, and comments. Forrester does not endorse any
vendor, product, or service depicted in the Forrester Wave. Information is
based on best available resources. Opinions reflect judgment at the time and
are subject to change.
On November 30, the Partnership for Public
Service (www.ourpublicservice.org) released
their new study, “From Data to Decisions: The Power of Analytics.” [i]
Keynoting the event was Shelley Metzenbaum, Associate Director for
Performance and Personnel Management, OMB.
She told the audience that Performance Management is a core pillar of
the Obama Administration and that Measurement and Analysis is the key tenet of
Performance Management. She encouraged the audience to
identify analytics practices that work and spread the word to others. She exhorted the audience to not just collect
data but to use the data to pinpoint problems – “Ask Why, Why, Why” with
respect to performance problems.
The report studied 7 programs[ii] in 8 federal
agencies to understand how they use analytics and how it helped them achieve
better program results. The study
provides clear examples of how data is being used to understand problems and
improve mission performance. It
documents how CMS is using data to answer the questions: why isn’t health care
quality better, and how can we direct scarce resources to improve it? In a similar fashion, VA and HUD are using
data to figure out how to reduce homelessness of Veterans including identifying
bottlenecks that are keeping their voucher program from being more
successful. David Zlowe, Performance
Improvement Officer at VA, emphasized in the study that the power of VA’s
analytics approach “isn’t in the numbers but in the discussions that are sparked… having
leadership engage in an appreciative conversation guided by hard data.” The 4th program that the
Partnership reported on in some detail is the FAA’s Safety Management
System. This program helps to identify
risks and to understand what contributes to all levels of hazards.
The Partnership event included a panel discussion with Michelle Snyder,
Deputy COO, CMS; Estelle Richman, COO
and Acting Deputy Secretary, HUD; and David Zlowe, PIO, VA. Ms.
Snyder’s advice to the audience was, “Take data, analyze it, tell the story to
the people so it relates and influences the decision makers.” Ms. Richman’s recommendation was to remember
that the analytics are but a method to accomplish the goal of creating an
outcome that can improve people’s lives.
And Mr. Zlowe summarized by saying, “We don’t lack data, we lack…”
We’d like to hear your experiences driving decisions based on data
in the government. If you'd like a copy of the report, write to me at: ASCdc@us.ibm.com
[i] The Study was a
collaboration between IBM’s Center for the Business of Government and the
Partnership for Public Service
[ii] HUD and VA Veterans
Affairs Supportive Housing (HUD-VASH) program; Safety Management System (SMS)
in the FAA; HHS CMS nursing homes and transplant programs; Coast Guard’s
Business Intelligence system (CGBI); NHTSA “Click It or Ticket” campaign;
Navy’s Naval Aviation Enterprise; SSA’s use of mission analytics in customer service
As government leaders, do you believe the world is getting
more complex? More volatile? If so, you’re not alone - - sixty percent of
the CEOs surveyed by IBM in our 2010 CEO Study thought the world was getting
more complex, and even more, 69%, felt the world was getting more volatile.
For the first time, we also posed a similar set of questions
to college students. These future
leaders viewed the world as even more complex than the CEOs we surveyed. But
they saw less volatility, and significantly less uncertainty than the CEOs (65%
of the CEOs, but only 48% of the students).
Could it be that the students are more acclimated to economic boom/bust
cycles and feel more comfortable with the uncertainty of today’s world?
Or could it be that in the instrumented, interconnected,
collaborative world that they are used to (most of the students never knew a
world without web browsing and many don’t remember the pre-Facebook era), they
feel more comfortable dealing with this complex world? As a student in France put it, “We will have more
information, so it [the world] should be more predictable.”
We found that students who had the greatest sense of
complexity put much more emphasis on the analytics and predictive capabilities
of information. They were 50% more
likely to expect significant impact from increased information than peers who
did not have the same sense of complexity.
And they were 22% more likely to believe that organizations should focus
on insight and intelligence to enable their strategies. Also,
interestingly, students in China
were significantly more likely to prefer a fact- and research-based style of
decision making than their peers around the world. Does that indicate that the Chinese students
have been trained to feel more comfortable dealing with data than their peers?
With the baby boom heading towards retirement in the coming
years, does this mean the government workers who replace them will be more
comfortable using information and analytical techniques to handle the world’s
problems? Or could it be that complexity
will always rise to be just beyond our ability to manage it with our current
level of technology?
Click here to see the IBM Report: “Inheriting
a complex world”
Click here to see the IBM Report: “2010 Global CEO Study”
More on Analytics for Government here: www.ibm.com/ASCdc
Do you think our future leaders are inheriting a more
complex world? And do you feel they are
more prepared to manage it?
Comment on this blog or write to me at ASCdc@us.ibm.com
Frank Stein, Director of IBM’s Analytics Solution Center