Measuring ROI on social software is an elusive topic, so it’s wonderful when I find projects that have managed to quantify it in some way. The following story focuses on a particular task, that of social tagging.
IBM’s Enterprise Tagging Service (ETS) aims to provide an alternative to traditional search engines for helping people find information. Keyword-based search often relies on a rigid taxonomy, constrained by how the software structurally analyzes web pages to identify and classify keywords. Social tagging lets people attach human semantics to keywords they define themselves, which can often surface a resource faster because the tags reflect what people actually find relevant.
IBM’s ETS cost $700k to develop and deploy across the worldwide intranet as a sidebar to a number of key web properties: traditional search engine results, top content pages, and web applications like the IBM internal social brainstorming tool, Thinkplace. As a service it can really be added to any internal page. Readers can tag any page with the widget, look up tags they contributed, find others who have used the same tag, and certainly find other relevant resources by that same tag. The ETS tool was based on the Lotus Connections Dogear software.
The ETS team instituted a survey to ask users how this tool helped them. What they found was remarkable when you look at it in context: the average person saved 12 seconds, across the 286,000+ searches performed through ETS each week. This sums up to 955 hours saved each week across the company. In terms of cost savings, it amounts to a rough estimate of $4.6 million a year in productivity gains. The reusability of this page widget also resulted in $2.4 million in cost avoidance (versus reimplementing it for each site).
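For readers who want to check the arithmetic, here is a minimal sketch. The loaded hourly labor rate is my own assumption (the article reports only the totals); roughly $93/hour makes the published figures line up:

```python
# Back-of-the-envelope reproduction of the ETS savings estimate.
# The loaded hourly labor rate is an assumption (not stated in the
# article); roughly $93/hour makes the published figures line up.

SECONDS_SAVED_PER_SEARCH = 12
SEARCHES_PER_WEEK = 286_000
ASSUMED_HOURLY_RATE = 93.0  # hypothetical loaded labor cost, USD

hours_per_week = SECONDS_SAVED_PER_SEARCH * SEARCHES_PER_WEEK / 3600
annual_savings = hours_per_week * 52 * ASSUMED_HOURLY_RATE

print(f"{hours_per_week:.0f} hours saved per week")              # ~953
print(f"${annual_savings / 1e6:.1f}M annual productivity gain")  # ~$4.6M
```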
This social task is spread across the IBM intranet, but it is essentially a single body of content built as a mass collaboration of knowledge; in other words, the knowledge does not get balkanized into separate tag systems, avoiding the classic problem of information getting locked away in pockets of the organization. It involves the swarm effect of many users contributing, each for their own need, but resulting in an overall benefit for all employees.
This is based on an internal IBM news story by Kieran Cannistra.
I’ve increased my attendance at E2.0 by 100% by going two
years in a row; okay, that was a bad metrics joke. The Enterprise 2.0 conference in
Boston was the big gathering of customers, analysts, bloggers and
aficionados this year. We’re still debating how many people really attended but
I’m guessing it is around a thousand.
The week began early for me starting with presenting during
the Black Belt practitioner’s workshop on Monday. I’m proud of my fellow
members of The 2.0 Adoption Council who presented the workshops all day long.
There were about 10 speakers, starting in the morning with the effervescent Jamie Pappas (EMC) speaking on
business value; the cheery Megan
Murray (Booz Allen Hamilton) on planning; and myself on adoption. The afternoon
had several pairs of speakers: Stan Garfield (Deloitte) and Luis Suarez (IBM) on community building;
Donna Cuomo (MITRE) and Ted Hopton
(UBM) on metrics and analysis; Bryce
Williams (Eli Lilly) and Richard Rashty (Schneider Electric) on positioning
tools; and Bart Schutte (St Gobain) and Kevin Jones (NASA) on mitigating risks.
I’m also thrilled so many people stayed from 8:30am till
4:15pm. It really is a fire hose of knowledge, even when spread across so many
hours. These were real issues and scenarios that the speakers have experienced
in trying to bring Enterprise 2.0 to life in their own organizations.
Has E2.0 gained ground? I definitely think so. For any idea
to take hold, there needs to be stability in what it means, and increasing
adoption and expression of the notions within it. Seeing The 2.0 Adoption Council’s rapid growth worldwide within just one year (to over 100 large member companies), with active practitioners, is one form of social proof.
The other is hearing less of “What is it?” and more of “How do we do it?”
E2.0 seems to be entrenched in the domain of the CIO and IT
organizations. That’s a shame because it really does spread across many
domains. Gautam Ghosh lamented the
lack of attendees or speakers from the HR realm in a few tweets during the
event. Yet many of the talks were certainly around employee behavior and culture.
I have to be honest. Many things were still left unanswered this year. I didn’t expect solutions, but I was looking for more thought on the following ideas:
Surprisingly, I agree with Dennis Howlett. I don’t think people should be looking for a single answer or approach to figuring this out. What was being affirmed is that there are some cases of ROI, particularly in external or public-facing environments, but they are very rare for internal Enterprise 2.0 environments. Moreover, most examples of an approach to ROI I know are still very specific to scenarios that cannot be easily replicated. The industry is going through its period of denial (“Don’t try to look for ROI”), but organizations still need that.
Privacy and Personally Identifying Information – I raised this last year at the conference, and it was great that there was at least one session, by Carl Frappaolo (Information Architected, Inc.), describing the interviews and study he did early this year on this subject. The focus was very Euro-centric because several countries there subject this area to intense legal scrutiny.
Carl’s point that organizations fall along an interest scale (‘Big S’ security, ‘small c’ collaboration, and ‘Big C’ collaboration) certainly described the differing views on the legal fog organizations face.
Enterprise 2.0 is about transforming human behaviors at work – More folks are starting to recognize that it is not trivial to bring communities and other social environments to life. There were numerous cases talking about adoption, including my own part of the workshop. I’ve heard several different philosophies:
fascist / ‘Hitler’ approach (as described by the speaker) of mandating
that people use these tools;
‘taking the toys away’ – removing alternatives so they have no choice but
to use social tools.
carrot principle through monetary rewards or a point system to purchase
goods – apparently some folks still have that available.
visibility principle – non-monetary rewards but peer recognition (again
surprisingly, from @dahowlett).
‘We need to get beyond “adoption”’ – This was another sentiment I heard several times, but I attribute it to short attention spans. The general statement was that ‘adoption’ was last year’s thing, and we needed a new ‘thing’. As evident from our own experience, which my excellent peer Jeanne Murray and I described, adoption goes through many stages of evolution. At each step people need new things, and you need new adoption tactics. The big-picture Enterprise 2.0 doesn’t happen in a year, although you can achieve many milestones along the way.
Social Media vs. Enterprise 2.0 – I think people are starting to agree
that working with the external audience entirely is a different context
than Enterprise 2.0. I did hear several questions on this front, so this
distinction hasn’t completely permeated yet.
“Grassroots efforts tend to get weedkiller put on it” – I quote Oliver Marks (ZDNet, Sovos Group) here. E2.0 adoption efforts without official executive support do not tend to last long. This goes along with the next realization.
E2.0 transformation teams even in large companies are small and understaffed – I made a joke: “For a global organization of about 400,000 people, I think the right size [for an E2.0 team] is about 7 or 8.” The reality is that most organizations have only one person working on it if they are lucky. Frankly, I think this is a recipe for failure, because a single person, even with some volunteer help, would find such an organization-wide transformation truly monumental. However, this is a catch-22: you can’t get more staff until you can prove its value, but you need people to help you prove it.
Small and Medium Businesses have different problems than large organizations – I heard this brought up only once, but I think it is a very important statement in reflection of the last point. Large companies, including our own, can afford to staff experts focused on Enterprise 2.0. SMBs with only a thousand people or so don’t have that luxury.
“There has to be something for everyone” – The speaker (I don’t recall who) was mostly making the point that individuals need to feel the impact to see the value. However, I want to note that the pendulum can’t swing entirely toward the individual. Too much emphasis on gaining organizational value can lead to poor adoption, but too little focus on it deemphasizes the business reasons to support such a significant transformation project.
Maturity and Lifecycle models – This was a gaping hole. I’m of the school of thought that there are many different archetypes for social environments. Yet many describe theirs as if it is the answer, or use their single case to refute other claims. Thomas Vander Wal’s wrap-up on the myths continuing from this conference revisits the participation inequality principle made famous, but not originated, by Jakob Nielsen: 90% are lurkers/readers, 9% are contributors, and 1% are intense contributors.
Some activity metrics case studies in our organizations have shown that it
really depends on the goals of the communities. For example, some are
decidedly intended as only an outlet for information albeit in a more
social sphere; others focus on intense rewriting of content.
Yet these myths often persist because the metrics systems are quite poor, and because they look at the external social media context, not internal interaction within the organization. A great weakness is the inability to uniquely distinguish who is participating in a community and the different forms of participation beyond just reads and writes. In the external world, with a potentially endless pool of different users, this limitation might be unavoidable, but within the boundaries of known employees in organizational E2.0, that clarity of detail reshapes how we see this. There are many other factors: affinity to the community, time within a member’s workflow to participate, recognizable value and outcomes for the member, and rhythms of activity.
It will still take some time before we can figure out better patterns of a maturity lifecycle, if we ever do, but let’s not jump to default conclusions simply because they are easy to remember.
The core competency here is facilitating relationships and communications between different parties. There are in fact many different types of interactions that this role takes on; as such, community managers participate in many different role-interaction patterns. This is significant because when such patterns are frequent and repeated, they become almost transactional, and therefore measurable. For an example of a more common role-interaction pattern, think of a support call from initial contact to completion, after a solution or resolution has been reached and the customer is verified as satisfied. Each such complete interaction has a measurable value; you could also measure it in terms of the cost or time it took to conduct that interaction end-to-end, or in terms of the quantity of interactions actually reaching completion rather than ending in partial or incomplete resolutions (likely meaning an unhappy customer left hanging).
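As a sketch of what “measurable” can mean here, the support-call example could be instrumented roughly like this; the record fields and cost rate are illustrative assumptions, not any particular support system’s schema:

```python
# Sketch: measuring a role-interaction pattern as a transaction.
# Field names and the cost rate are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Interaction:
    opened: datetime
    closed: Optional[datetime] = None  # None = abandoned / still open
    resolved: bool = False

def summarize(interactions, cost_per_hour=50.0):
    """Completion rate, and average end-to-end cost of completed ones."""
    done = [i for i in interactions if i.closed and i.resolved]
    if not done:
        return 0.0, 0.0
    completion_rate = len(done) / len(interactions)
    avg_hours = sum(
        (i.closed - i.opened).total_seconds() / 3600 for i in done
    ) / len(done)
    return completion_rate, avg_hours * cost_per_hour

calls = [
    Interaction(datetime(2010, 7, 1, 9), datetime(2010, 7, 1, 11), resolved=True),
    Interaction(datetime(2010, 7, 1, 9)),  # customer left hanging
]
print(summarize(calls))  # -> (0.5, 100.0)
```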
The role-interaction (RI) patterns for community managers are of a different sort, but the following tables suggest the kinds of patterns they could participate in.
Table 9.1 -- The Value of Community Managers

To the membership
- Improving relationships with members by providing a human face to an organization or a large social group
- Bringing the value of their own relationships and contact networks within the organization
- Arbitrating conflicts between members, or between members and the organization
- Coordinating member projects and activities
- Serving as a repository of situational knowledge about the organization, the members, or the content

To the sponsoring organization
- Serving as an organizational spokesperson to the membership
- Providing a view into the climate of the members about the topic or purpose (the business climate within the enterprise, across business partners, or across the industry)
- Housing a repository of situational knowledge about members, the content, or the topic
- Encouraging and monitoring member involvement and participation in the topics that interest the sponsor
- Mediating issues members might have with the organization
- Describing the value or outcomes of the social group
- Identifying potential for hires or rehires
Table 9.2 -- Supporting Customers or Partners
Audience: customers or business partners (public-facing, cross-boundary, third-party)

Marketing or sales
- Increasing the number of touches with customers
- Identifying customer evangelists and activists
- Discovering industry trends and customer interests
- Acting as marketing liaisons to customers
- Guiding marketing on appropriate messaging or tactics

Product development and delivery
- Assisting in gathering product requirements from the audience
- Conducting market research with customers
- Identifying competitor activity or offerings
- Conducting design tests and product beta-testing
- Delivering products to customers online

Customer relations or product support
- Providing a human interface to the organization or social group
- Serving as a “finger on the pulse” of audience concerns
- Helping partners locate internal representatives or departments
- Helping customers find appropriate support resources
- Identifying troubled or at-risk customers
There's another table on their roles within the enterprise supporting employee and organizational interactions.
[I should say right ahead that I’m not picking on them
(since I disagreed before), but when many good ideas come across from Hutch Carpenter
and the Spigit folks, sometimes I just have to disagree.]
The article Maslow’s Hierarchy of Enterprise 2.0 ROI on the Spigit blog from last week proposed a framework for a pyramidal hierarchy of needs
aimed specifically at the ROI of Enterprise 2.0. They are correct in some ways in describing a pyramid of levels, starting at the base with tangible needs and moving up towards increasingly intangible ones.
I’ve linked to their image here, source Spigit Blog. [I may take this image off
if they ask so but you can generally find it on their blog post]
However, I’m not so sure it can be so easily applied here in terms of the levels. For one, Maslow’s theory indicates that humans cannot focus on the higher levels until the lower levels are satisfied. It would be nice to conclusively say the same of Enterprise 2.0 ROI, but I can give examples where it is very difficult to identify “cost savings” at the bottom of the pyramid in a conclusive and replicable way, yet easy to identify “employee satisfaction” somewhere around the middle.
Cost savings is a comparative measure; you need to determine that it is more efficient to do things with one or more E2.0 tools than with existing or traditional non-E2.0 processes. The trouble is that this is not systematic
across all e2.0 experiences. It’s not simply a matter of deploying a discussion
forum, for example, to support customers before you start seeing results (even
before you see cost-savings); in fact, there’s no guarantee it will ever become
enough of a social environment where the vendor, partners, other users etc. are
properly supporting the needs of a customer. In comparison, a support workflow,
even if more expensive, has immediate results. Until the social environment
actually does support customers, it is a cost-center.
However, even without knowing cost savings per Maslow’s theory, you can use survey instruments to determine employee satisfaction. Qualitative measures such as “satisfaction” work best by gathering input directly from people; it’s simply something in their heads that you need to get to. This means surveys, interviews, and focus groups. These still produce a metric (which is what ROI requires) of the level of satisfaction, without ever having to find out whether the social environment creates cost savings. The same is true for “customer satisfaction,” and I’d argue for “cross-org collaboration” as well.
So, while the idea of relative dependencies and a ranking of hard and soft metrics that indicate some beneficial return is appealing, I don’t think this approach works. The logic has some holes, and I wouldn’t be able to sell this idea to folks around here.
On returning from the recent Enterprise 2.0 Conference in Boston, I had time to reflect on the scaling issues that come up in social software adoption across an enterprise. Watching Gentry Underwood’s excellent presentation on how they designed the social computing environment at IDEO, I tweeted to him that new issues start to pop up when you move from an enterprise social environment for 500 people to 200,000, or in IBM, nearly 400,000 people in 170 countries. This is not a bragging point, rather one of frustration.
There certainly are other large or technological-oriented companies deploying social environments, but from my experience in hearing from others, no one has hit some of the scale issues that we have in IBM. Obviously we are talking about an enterprise's deployment rather than a social site like Facebook; they're very different issues for each.
For one, while employee profiles and directories are starting to become commonplace in other enterprises, we have had one for well over a decade in one form or another. We’ve already gone through the issues and practices others have found: (a) include everyone; (b) prepopulate with relevant contact and work info, projects, etc.; (c) popularize it as the place to look up data; (d) integrate it into, or make it THE basis for, contact info for other existing internal and extranet Web apps; (e) invite partners, contractors, and suppliers; (f) tie it to enterprise-wide LDAP and single sign-on; (g) integrate it into common work processes and behaviors. In fact, the last I looked, we had nearly 600,000 profiles in our Bluepages (including employees, supplementals, contractors, bots, and some partners and suppliers).
While the Bluepages system is certainly popular, it is but one of several dozen commonly used social software tools, some of which themselves have hundreds of thousands of unique users. We have thousands of smaller communities and wikis, some of which have tens of thousands of members. The multiplicity of tools comes out of our laissez-faire attitude of allowing many software ideas to emerge, and letting our user base test and advocate the best ideas.
The population size of this system isn’t quite the issue, but I put some thought into what enterprise 2.0 deployment issues might appear with scale and came up with the following list. I hope this can help other maturing E2.0 environments consider some of the issues they may be coming up against:
A. distribution of people across time zones
B. distribution of people across cultures / countries
C. distribution of people across physical locations
D. distribution of people across job categories or dissimilar job roles
E. projects people work on are very different in nature
F. distribution across access devices (desktops, laptops, mobile)
G. many separate (non-integrated) social tools, or different interfaces
H. many separate databases as information sources
I. many separate or isolated social instances
J. number/reach
In working recently on the topic of leadership and decision-making processes in social environments, I thought I’d clarify something from my book. Quite often I see these decision-making methods split into simple categories (centralized versus marketplace, or distributed) when there is so much more. Additionally, the way people work to produce results is not the same as who is involved in making the decisions.
One lingering question from those who’ve looked closely at my book, Social Networking for Business, is that leadership and decision-making processes seem to appear in two different areas: Chapter 3, “Leadership in Social Environments,” and then again in the section “Describing the Form of Aggregation” in Chapter 4 on Social Tasks. I should explain the key differences here.
Chapter 3 focuses on six common leadership models: Centralized, Centralized w/ Input, Delegated, Representative, Starfish, and Swarm. These models focus on who is allowed to participate in the decision-making process, set direction for the social group, and select leaders. They range from very strongly centered to very distributed control.
The Aggregation methods, on the other hand, describe how these decisions are made or the work executed: Independent, Autonomous, Consensus, Deliberative, and Combative. These are alternative ways of creating results.
Independent—Members work on the task separately, but the results are aggregated across all members
Autonomous—Members work on the task separately from each other, and their results are distinctly visible to other members as separate work.
Consensus—A group of members works directly together on the task with the intent to deliver an overall collective result, even if it’s not unanimous or convergent.
Deliberative—A group of members works directly together without the intent or necessity of coming to a consensus on a single result.
Combative—Members must compete against each other to derive the best result from the group, denying other choices.
Certain pairs are more likely to occur: e.g., a swarm is likely to use Independent aggregation, where only the combined results (voting) across many members produce a single value. A Delegated model is likely to have autonomous decisions spread across the different delegated domains.
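The two dimensions above can be sketched as independent taxonomies. The model and method names come from the book; the pairing table below only encodes the two “likely pairs” examples just given, not a complete mapping:

```python
# Sketch of the two independent dimensions: who may decide (Chapter 3
# leadership models) vs. how results are aggregated (Chapter 4).
from enum import Enum

class Leadership(Enum):
    CENTRALIZED = "Centralized"
    CENTRALIZED_WITH_INPUT = "Centralized w/ Input"
    DELEGATED = "Delegated"
    REPRESENTATIVE = "Representative"
    STARFISH = "Starfish"
    SWARM = "Swarm"

class Aggregation(Enum):
    INDEPENDENT = "Independent"
    AUTONOMOUS = "Autonomous"
    CONSENSUS = "Consensus"
    DELIBERATIVE = "Deliberative"
    COMBATIVE = "Combative"

# Pairings called out in the text; any combination is possible.
LIKELY_PAIRS = {
    Leadership.SWARM: Aggregation.INDEPENDENT,   # e.g., voting
    Leadership.DELEGATED: Aggregation.AUTONOMOUS,
}
```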
The moral here:
Set the right expectations -- Be clear not only about who can make the decisions, but also, for those who can do so, how they can make them.
Per my previous note, I mentioned that we have 400,000
people collaborating across 170 countries in IBM.
That raises a great question: what does it mean to have 400,000 people collaborating? Are they all in one massive social network connected to each other? Are they participating in the same spaces? Are they contributing to each other’s work?
To give an idea, we first need to look at the state of social computing in IBM. There is not one but at least 32 different social applications, each of which can have
hundreds of thousands of unique users, and tens of thousands of instances
(e.g., separate wikis, individual blogs, etc.)
By another count, there are over 200 applications--it varies based on what different folks consider as a "social application".
For example, in rough numbers, some of the tools include:
- 200k replies (aging removes some)
- Dogear (social bookmarking)
- Beehive (social net)
- Cattail (social file sharing)
This is just a subset and unofficial list of these services.
There are other tools for enterprise wide social searching, social
brainstorming, instant messaging, tweeting, podcast/videocast sharing, social
profiles, and analysis tools. Some of these other tools are used by 100% of employees, particularly instant messaging and our Bluepages (profiles) systems.
Others have even more people because non-employees such as business partners,
customers, and even suppliers have access to them.
People generally use them as follows:
- across the enterprise: e.g., a blogger
- team spaces: departments and hierarchical teams
- community spaces: across multiple departments
- small group spaces: e.g., someone creates a Lotus Connections Activity and invites a specific set of people
So the groupings vary significantly, and a number of individuals do use many of these tools for different reasons, but unique users still reach across the company.
The types of activities or projects in these spaces are just as varied as the job roles, products, and markets. Just in terms of products alone, I think we have over 5,000 distinctly different ones (and not just variations); some are very complex (imagine working on the DB2 database), and others are smaller. That still doesn’t include the many thousands of customer projects people are working on at any one time. So in general there aren’t any common scopes or scales for what people work and interact on.
The general philosophy that creates this mix is that as a
company we encourage an internal free-market environment to allow many tools to
appear and compete with each other. This helps the best ideas to emerge out of
new social experiments and methods. While someone has to pay for the environments, it is up to each social app project to figure out how to fund them. There
are official tools that are universally supported, but there are also other research
and experimental projects—even Beehive as a research project easily includes over
We also do not police these activities. People are talking
about their non-work activities, but that is a natural outcome of social
interaction. As long as people are not breaking the business conduct and social computing guidelines, they are free to use these tools as they like.
This kind of quantitative information really doesn’t show how people are collaborating, just where. Rather, our BlueIQ team collects success stories, especially recreatable and reusable scenarios, from individuals illustrating how they are productively working together in these environments.
In general, it is complex to say how people are
collaborating, but safe to say that they are collaborating widely in the
social environments in IBM.
For our social computing metrics system, we have the ability to see how people act on others’ contributions. For example, given one person’s post, we can tell who is sharing, tagging, and sometimes reading it, with the identities of all involved. This can tell us how much a person is impacting those around them, who, and how.
[Note: From an enterprise measurement viewpoint, who the individuals are is not important, but you need their IDs to key off other demographics such as job role, geographic location, or organizational location. The individual view might be of interest to each person, but I’m looking at the gestalt of the organization. Also, this is information we are allowed to see per our internal guidelines.]
This leads to several possibilities, given person X’s post. The first set is diversity of reach:
a) What job roles are consuming their posts?
b) Where in the organization are the consumers?
c) Given a single post, how much consumption is happening, and what’s the average per post?
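A minimal sketch of computing this first set, assuming consumption events have already been joined with consumer demographics; the event shape and field names here are hypothetical, not our actual system’s schema:

```python
# Sketch: "diversity of reach" for one person's posts. Event records
# and demographic fields are illustrative assumptions; the real system
# keys consumer IDs to job role and org unit as described above.
from collections import Counter

def diversity_of_reach(events):
    """events: dicts like {"post_id", "consumer_role", "consumer_org"}."""
    roles = Counter(e["consumer_role"] for e in events)
    orgs = Counter(e["consumer_org"] for e in events)
    posts = {e["post_id"] for e in events}
    avg_per_post = len(events) / len(posts) if posts else 0.0
    return roles, orgs, avg_per_post

events = [
    {"post_id": 1, "consumer_role": "sales", "consumer_org": "EMEA"},
    {"post_id": 1, "consumer_role": "engineer", "consumer_org": "US"},
    {"post_id": 2, "consumer_role": "sales", "consumer_org": "EMEA"},
]
roles, orgs, avg = diversity_of_reach(events)
print(roles, orgs, avg)
```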
On the business level, this can tell us a lot about how well the organization is connected, and whether our expectations of which job roles rely on which others are actually borne out, and to what degree; for example, sales people working with their sales engineers or seeking domain knowledge experts. It can show how far they reach across the organization, and which other roles they connected to that were not expected; for example, sales people in Slovenia working with researchers in Israel.
The second set may look at secondary effects. Given that person X posts, and person Y shares or tags it, who is the person Z that eventually consumes it?
a) What job roles (the persons Z) are the end consumers?
b) Where in the org do they come from?
c) How much, and what’s the average?
d) Is there additional resharing or retagging?
This extends the first set by looking at eventual impact
from the source.
So far, I’ve just talked about one path of action from a creator (source) to a consumer (sink). The next level is to look across many actions to see if bidirectional interaction is happening between the roles.
This looks for ‘lasting’ relationships based on continued bidirectional
interaction. This can happen in immediate sequence (e.g., I post,
someone replies to me, I reply back, and so on); or it can be delayed
sequence of events (e.g., I post, someone reads/tags it, a week later they
send something else through a different social tool).
Here we are looking beyond immediate or unidirectional consumption, towards the question of whether people are forming lasting relationships.
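Detecting such bidirectional relationships could be sketched as follows, assuming (and this is my assumption) that activity from the many different tools has been normalized into simple (source, target, tool) events:

```python
# Sketch: flag "lasting" relationships as pairs who have interacted in
# both directions, regardless of which tool each direction used.
# The event tuple shape is an assumed normalized form of activity logs.
from collections import defaultdict

def bidirectional_pairs(events):
    """events: iterable of (source_id, target_id, tool) tuples."""
    directed = defaultdict(set)
    for src, dst, tool in events:
        directed[(src, dst)].add(tool)
    return {
        tuple(sorted((a, b)))
        for (a, b) in directed
        if (b, a) in directed
    }

events = [("x", "y", "blog"), ("y", "x", "wiki"), ("x", "z", "blog")]
print(bidirectional_pairs(events))  # -> {('x', 'y')}
```

Note that the reply can come a week later through a different tool, as in the delayed-sequence example above; only the pair of identities matters here, not the channel.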
Notice for one that I didn’t even say it was necessary for people to friend each other before any of this happens. In fact, I think the friending action, while certainly making relationships obvious, is highly variable in meaning. Some people use friending to identify those they have lasting relationships with, but others use it simply to keep track of people they are watching rather than interacting with. The difference lies in the bidirectional vs. unidirectional nature of the relationship. In other cases, some folks never actually friend others but certainly interact with them, thereby indicating a relationship.
Why is this any different than SNA (social network analysis)
tools? Perhaps it’s the limitation of the SNA tools I have found in terms of
the level of demographics and granularity they can show. For example, some do
not show the demographics I need because they simply don’t contain that info,
or don’t understand which demographics are useful for business reasons.
In terms of granularity, most SNA tools can show the
structure for each person; i.e., the relationships and interactions between
person X and those around them, but I need info about the aggregate level of
everyone of one demographic (e.g. job category), and the relationships they
form. This is beyond most SNA tools today.
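A sketch of the aggregate-level rollup I’m describing: collapsing a person-to-person interaction graph into a demographic-to-demographic one, assuming a person-to-group lookup (e.g., from an employee directory) is available:

```python
# Sketch: collapsing person-to-person interaction counts into a
# demographic-to-demographic view (e.g., job category), the aggregate
# level most SNA tools don't provide. The lookup table is assumed to
# come from an employee directory such as Bluepages.
from collections import Counter

def aggregate_by_demographic(edges, demographic):
    """edges: (person_a, person_b, weight); demographic: person -> group."""
    rolled = Counter()
    for a, b, weight in edges:
        key = tuple(sorted((demographic[a], demographic[b])))
        rolled[key] += weight
    return rolled

edges = [("ann", "bob", 3), ("bob", "cai", 1)]
demo = {"ann": "sales", "bob": "engineer", "cai": "sales"}
print(aggregate_by_demographic(edges, demo))
```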
The biggest part is that it takes a lot of data collection and number crunching over many, many people to even begin to analyze this. It goes beyond system-level metrics (how many users, how many documents) or object-level metrics (how much activity per person or object), into the meta level that we would like to understand. And this is only one aspect among many others.
On the business side, the goal is to better understand the connections across our organization, and where we can try to focus energies to improve communications or encourage interaction. It is using information from social systems to create a smarter organization. For enterprise 2.0 to become a success, it is not just about empowering individuals to use social computing systems, but it is to make the organization itself function better.
I tried the MIT Personas site just to see what it would come up with for my name. Basically, it does a Web search for any name you give it and
analyzes the text on a semantic level to find common themes or
categories for content from you or about you (e.g., it might suggest
topics like "sports", "legal", "social", "management", "online", etc.)
How it does this is a little beyond my knowledge of semantic processing, and its practical use seems to escape me.
But it's fun. :)
One issue is that, because it searches on people’s names across the Web, it may mistake, e.g., one "Luis Suarez" (our own blog-evangelist) for another "Luis Suarez" (the soccer player). It helps
if you have a globally unique name like "Rawn Shah" (so far). Another
problem is that the results may vary depending on how it processes
whatever search results it finds. So, in trying it out three times to
see how close it was, it showed slightly different categories for me
and sized these differently as well.
I can't figure out why the following categories appear since I rarely if ever talk about them: military, religious, religion, genealogy