For our social computing metrics system, we have the ability
to see how people act on others' contributions. For example, given one person's
post, we can tell who is sharing, tagging, and sometimes reading it, with the
identities of all involved. This can tell us how much a person is impacting those around
them, who is affected, and how.
[Note: From an enterprise measurement viewpoint, who the
individuals are is not important, but you need their IDs to key off other
demographics such as their job roles, geographic location, or organizational
location. This might be of interest to each person, but I'm looking at the
gestalt of the organization. Also, this is information we are allowed to see.]
This leads to several possibilities, given person X’s post.
The first set is diversity of reach:
a) What job roles are consuming their posts
b) Where in the organization are the consumers
c) Given a single post, how much consumption is happening, and what's the average per post
On the business level, this can tell us a lot about how
well the organization is connected, and whether the expected patterns of which
job roles rely on others are actually occurring, and how much. For example,
sales people working with their sales engineers or seeking out domain knowledge
experts. It can show how far they
reach across the organization, and what other roles they connected to that
were not expected. For example, sales people in Slovenia working with
researchers in Israel.
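As a sketch of how the diversity-of-reach questions above might be computed, assuming a simple consumption-event log keyed by anonymous user IDs joined against demographics (all names, fields, and values here are hypothetical):

```python
from collections import Counter, defaultdict

# Hypothetical event log: who consumed (read/shared/tagged) which post.
events = [
    {"post": "p1", "consumer": "u2", "action": "read"},
    {"post": "p1", "consumer": "u3", "action": "share"},
    {"post": "p2", "consumer": "u3", "action": "tag"},
    {"post": "p2", "consumer": "u4", "action": "read"},
]

# Demographics keyed by anonymous user ID, as in the note above.
demographics = {
    "u2": {"role": "sales", "geo": "Slovenia"},
    "u3": {"role": "research", "geo": "Israel"},
    "u4": {"role": "sales engineer", "geo": "Germany"},
}

def reach_profile(events, demographics):
    """Job roles and locations consuming a creator's posts, plus
    the average number of consumption events per post."""
    roles = Counter(demographics[e["consumer"]]["role"] for e in events)
    geos = Counter(demographics[e["consumer"]]["geo"] for e in events)
    per_post = defaultdict(int)
    for e in events:
        per_post[e["post"]] += 1
    avg = sum(per_post.values()) / len(per_post)
    return roles, geos, avg

roles, geos, avg = reach_profile(events, demographics)
```

The same aggregation works for any demographic you key the IDs against (job role, geography, organizational unit).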
The second set may look at secondary effects. Given that person X
posts, and person Y shares or tags, who is the person Z that eventually consumes it?
a) What job roles (the persons Z) are the end consumers
b) Where in the org they come from
c) How much, and what's the average
d) Is there additional resharing or tagging
This extends the first set by looking at eventual impact
from the source.
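A minimal sketch of these secondary-effects questions, assuming a hypothetical log that records which reshare each end consumer (person Z) came through:

```python
from collections import Counter

# Hypothetical two-hop log: person X's post p1 is reshared by Y's,
# and each reshare is then consumed by further people (the Z's).
consumptions = [  # (post, via_sharer, end_consumer)
    ("p1", "y1", "z1"),
    ("p1", "y1", "z2"),
    ("p1", "y2", "z3"),
]
roles = {"z1": "support", "z2": "sales", "z3": "research"}

def second_hop_profile(consumptions, roles):
    """Job-role distribution of the end consumers (the persons Z),
    plus the average number of end consumers per reshare."""
    role_counts = Counter(roles[z] for (_, _, z) in consumptions)
    sharers = {s for (_, s, _) in consumptions}
    avg_per_share = len(consumptions) / len(sharers)
    return role_counts, avg_per_share

role_counts, avg_per_share = second_hop_profile(consumptions, roles)
```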
So far, I've just talked about one path of action from a
creator (source) to a consumer (sink). The next level is to look across many
actions at whether there is bidirectional interaction happening between the roles.
This looks for 'lasting' relationships based on continued bidirectional
interaction. This can happen in immediate sequence (e.g., I post,
someone replies to me, I reply back, and so on); or it can be a delayed
sequence of events (e.g., I post, someone reads/tags it, and a week later they
send something else through a different social tool).
Here we are looking beyond immediate or unidirectional
consumption, toward the question of whether people are forming lasting relationships.
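This bidirectional test can be sketched as a check for pairs who have interacted in both directions, in any tool and at any time; the log format and names are hypothetical:

```python
from collections import defaultdict

# Hypothetical cross-tool interaction log: (actor, target, tool, day).
log = [
    ("a", "b", "blog_comment", 1),
    ("b", "a", "bookmark", 8),      # delayed response via another tool
    ("a", "c", "blog_comment", 2),  # never reciprocated
]

def reciprocal_pairs(log):
    """Pairs who have interacted in both directions, in any tool,
    at any time: a rough signal of a 'lasting' relationship."""
    directed = defaultdict(int)
    for actor, target, _, _ in log:
        directed[(actor, target)] += 1
    return {tuple(sorted(p)) for p in directed
            if (p[1], p[0]) in directed}
```

Note that friending never enters into it: the signal comes entirely from observed interaction.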
Notice, for one, that I didn't even say it was necessary
for people to friend each other before any of this happens. In fact, I think
the friending action, while it certainly makes a relationship obvious, is highly variable.
Some people use friending to identify those with whom they have lasting
relationships, but others use it simply to keep track of people they are
watching rather than interacting with; the difference lies in a
bidirectional vs. a unidirectional relationship. In other cases, some folks
never actually friend others but certainly interact with them, thereby
indicating a relationship.
Why is this any different from SNA (social network analysis)
tools? Perhaps it's the limitation of the SNA tools I have found in terms of
the level of demographics and granularity they can show. For example, some do
not show the demographics I need because they simply don't contain that info,
or don't understand which demographics are useful for business reasons.
In terms of granularity, most SNA tools can show the
structure for each person; i.e., the relationships and interactions between
person X and those around them. But I need info at the aggregate level of
everyone in one demographic (e.g., job category), and the relationships they
form. This is beyond most SNA tools today.
The biggest part is that it takes a lot of data collection
and number crunching over many, many people to even begin to analyze this. This is beyond system-level metrics (how many users, how many documents) or object-level metrics (how much activity per person or object); it goes into the meta level that we would like to understand. This is also only one aspect of many others.
On the business side, the goal is to better understand the connections across our organization, and where we can try to focus energies to improve communications or encourage interaction. It is using information from social systems to create a smarter organization. For enterprise 2.0 to become a success, it is not just about empowering individuals to use social computing systems, but it is to make the organization itself function better.
It seems so old school to try to classify social computing metrics, but I keep getting the same requests from various internal teams, who are sometimes not familiar with some of the metrics, don't understand them, or simply use other metrics better suited to Web sites rather than social sites. A second goal is to evaluate the qualities of
these metrics to determine if they are useful (e.g. using the SMART
analysis approach). A third is to see the relationship of the metrics
to each other—whether there are dependencies, or if some metrics
are more meaningful when reported alongside or compared with others.
To give an idea, while it's considered outdated by others, some still look for
Pageviews and Unique Visitors--classic web metrics better suited to
measuring how people visit pages than to interaction in social
environments. Similarly, "Interaction" itself becomes
another stopping point for metrics. These are the metrics most
commonly recorded by social software tools: number of posts, the
number of downloads, the number of connection invites, etc.
In working with our social computing researchers, we're also looking at Network Effect
metrics, such as the topics people discuss that come out of the
system, or the ratio of consumption to a person's content contributions.
Other teams, such as marketing, have an emphasis on Engagement metrics,
considering how much a person is becoming involved in a social
environment, an event, a marketing offering, or other engagements.
Other engagement metrics aren't specific to marketing only. For
example, thought-leadership metrics include the ratings on content
someone has submitted, or how often they have been quoted or
retweeted by others. A more complex one is to determine the Impact a
person has on their target audience.
To go further
along on marketing metrics, these can even build up towards the sales
pipeline—how many interested individuals are there, are they
potential sales leads, have they actually asked for sales info, has
that lead been validated, and then closed. Joe Cothrel, Chief Community Officer of Lithium,
suggested similar ideas in an article for Strategy+Leadership magazine back in 2000, on conversion rates from
visitor to sale, as applied to social environments.
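The pipeline stages above amount to a simple funnel computation. A sketch with made-up counts (the stage names and numbers are illustrative, not from the article):

```python
# Hypothetical funnel counts from a social environment, following the
# stages above: interested -> asked for info -> validated lead -> closed.
funnel = [("interested", 800), ("asked_for_info", 120),
          ("validated_lead", 40), ("closed", 10)]

def stage_conversions(funnel):
    """Conversion rate between each adjacent pair of funnel stages,
    plus the end-to-end rate (the visitor-to-sale idea)."""
    rates = {}
    for (s1, n1), (s2, n2) in zip(funnel, funnel[1:]):
        rates[f"{s1}->{s2}"] = n2 / n1
    rates["end_to_end"] = funnel[-1][1] / funnel[0][1]
    return rates

rates = stage_conversions(funnel)
```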
Beyond marketing and sales, there are other indicators that relate to business value
metrics. Some suggestions from a recent email exchange with Dr. Walter
Carl, Chief Researcher of ChatThreads and a member of WOMMA's board, on metrics include
cost reduction (using this tool to communicate costs less than
other existing ways), accelerating adoption of business
philosophies, values or company directives, processes that minimize
lost revenue, etc.
Lots of Metrics, but what are their qualities?
So what should be
obvious is that there are lots of metrics, categories, subclasses,
variations, and inter-relations that different organizations or even
different teams within the same organization utilize. What
constitutes business metrics and delivered value for one team may not
even be relevant to another. So I'm still surprised when people ask
for a generic ROI methodology.
All the same, the next step is to look at the qualities of these metrics. I mentioned
the SMART acronym earlier, which asks basic questions of whether a given metric
type or unit is:
Specific (specific and targeted to an area of measurement),
Measurable (a data point that can be captured and collected),
Actionable (robust data that can be analyzed and utilized by a stakeholder),
Relevant (a realistic, meaningful and consistent measurement),
Timely (current and possible to collect in good time).
Without all of these qualities, there will likely be a problem with
collecting the data in a way that is meaningful and available in time
for use in a business.
There are other qualities that I think are important to consider as well:
Is it scalable in quantity? Can you capture larger and larger volumes of data, or does it become computationally intractable?
Does it apply across social environments of the same type? Is the metric relevant to a single social environment, or can it apply to many environments of the same structure (e.g., a discussion forum)?
Is it scalable and still meaningful across different social environments (e.g., a blog and a forum)?
Does it drive behavior? Does it encourage that person or other people to interact?
Is it credible? Is it a measure that is accepted by other teams, organizations or even industry-wide?
Is it significant as a performance and/or a diagnostic metric? Performance metrics are useful for comparisons across like types. Diagnostic metrics help determine the state of the system.
Is it a quality metric? That is, counting it does not really describe the value contained within it, so you need a secondary way of looking at it.
Is it helpful to look at it across different demographics? This is very insightful for some metrics, and just not necessary for others.
I'm sure there are more relevant qualities, but this is already quite a lot to think
about. These qualities can help decide which metrics are the most
useful, and what each can tell us independently of the others. A further step
is to look at which metrics should be reported alongside each other,
or which ones depend on others directly or indirectly. That's where
things start to get really interesting, and much more subjective.
No conclusion here
because this is on-going work trying to map out all these variants of
metrics, but here's to hoping it inspires others to think and work
along these lines.
In looking at @prem_k's mindmap on social learning today, I spent a few minutes considering what events
can be measured relative to this topic. Unfortunately, I cannot embed the
diagram in this blog, but please take a look at his mindmap.
I came up with the following measurable elements, and
hopefully most are self-explanatory. The mechanics of how you actually measure
these items can vary from trivial counting exercises to some fairly complicated
metrics for mapping networks and measuring influence and sentiment. However, I
think most of it has been done before, perhaps just not applied specifically to
learning and pedagogy. So who's up to that challenge?
I’m also just starting on Marcia Conner (@marciamarcia) and Tony
Bingham’s book, The
New Social Learning(ASTD & Berrett-Koehler, Sep 2010), and I
expect I’ll be learning a lot from it too.
- dissemination relative to origin (generalized SN diagram)
- dissemination of topics across overall network (generalized SN diagram)
- rate & velocity of dissemination
- Resharing/promoting (e.g., RT)
- Acknowledging/rating (e.g., +1)
- Relationship effects: friending/following/connecting, or unfollowing/negative externalities/outcomes
- Searching / search results (text, tags, social searches)
[I should say right up front that I'm not picking on them
(since I have disagreed before), but when many good ideas come from Hutch Carpenter
and the Spigit folks, sometimes I just have to disagree.]
The article Maslow's Hierarchy of Enterprise 2.0 ROI on the Spigit blog from last week proposed a framework for a pyramidal hierarchy of needs
aimed specifically at the ROI of Enterprise 2.0. They are correct in some ways in describing
a pyramid of levels starting at the base with tangible needs and moving up
towards increasingly intangible ones.
I’ve linked to their image here, source Spigit Blog. [I may take this image off
if they ask so but you can generally find it on their blog post]
However, I'm not so sure that it can be so easily applied
here in terms of the levels. For one, Maslow's theory indicates that humans cannot
focus on the higher levels until the lower levels are satisfied. It would be
nice to say this conclusively of Enterprise 2.0 ROI, but I can give examples
where it is very difficult to identify "cost savings" at the bottom of the pyramid
in a conclusive and replicable way, yet easy to identify "employee satisfaction"
somewhere around the middle.
Cost savings is a comparative measure; you need to determine that it
is more efficient to do things with one or more e2.0 tools than with existing or
traditional non-e2.0 processes. The trouble is that this is not systematic
across all e2.0 experiences. It's not simply a matter of deploying a discussion
forum, for example, to support customers before you start seeing results (even
before you see cost savings); in fact, there's no guarantee it will ever become
enough of a social environment where the vendor, partners, other users, etc. are
properly supporting the needs of a customer. In comparison, a support workflow,
even if more expensive, has immediate results. Until the social environment
actually does support customers, it is a cost center.
However, even without knowing cost-savings per Maslow’s
theory, you can use survey instruments to determine employee satisfaction. Qualitative
measures such as “satisfaction” work best by gathering input directly from
people; it’s simply something in their heads that you need to get to. This
means surveys, interviews, and focus groups. However, it does get a metric—which
ROI is—of the level of satisfaction, without ever having to find out if the
social environment creates cost-savings. This is similarly so for “customer
satisfaction,” and I’d argue for “cross-org collaboration” as well.
So, while the idea of relative dependencies and a ranking of hard
and soft metrics that indicate some beneficial return is appealing, I don't think this
approach works. The logic has some holes, and I wouldn't be able to sell this idea to folks around here.
Per my previous note, I mentioned that we have 400,000
people collaborating across 170 countries in IBM.
That raises a great question: what does it mean to have 400,000 people
collaborating? Are they all in one massive social network connected to each
other? Are they participating in the same spaces? Are they contributing to each other's content?
To give an idea, first we need to look at the state of
social computing in IBM. There is not
one but at least 32 different social applications, each of which can have
hundreds of thousands of unique users, and tens of thousands of instances
(e.g., separate wikis, individual blogs, etc.)
By another count, there are over 200 applications--it varies based on what different folks consider as a "social application".
For example, in rough numbers for some of the tools:
- 200k replies (aging removes some)
- Dogear (social bookmarking)
- Beehive (social net)
- Cattail (social file sharing)
This is just a subset and unofficial list of these services.
There are other tools for enterprise wide social searching, social
brainstorming, instant messaging, tweeting, podcast/videocast sharing, social
profiles, and analysis tools. Some of these other tools are used by 100% of
employees, particularly instant messaging and our BluePages (profiles) systems.
Others have even more people because non-employees such as business partners,
customers, and even suppliers have access to them.
People generally use them as follows:
- across the enterprise: e.g., a blogger
- team spaces: departments and hierarchical teams
- spaces across multiple departments
- group spaces: e.g., someone creates a Lotus Connections Activity
So the groupings vary significantly, and a number of individuals
do use many of these tools for different reasons, but unique users still reach
across the company.
The types of activities or projects in these spaces are just
as varied as the job roles, products and markets. Think of it: just in terms of
products alone, I think we have over 5,000 distinctly different ones (and not
just variations); some are very complex (imagine working on the DB2 database), and others are smaller. That still doesn't include the many thousands of customer projects
people are working on at any one time. So in general there aren't any common scopes or scales for
what people work and interact on.
The general philosophy that creates this mix is that as a
company we encourage an internal free-market environment to allow many tools to
appear and compete with each other. This helps the best ideas to emerge out of
new social experiments and methods. While someone has to pay for the
environments, it is up to each social app project to figure out its funding. There
are official tools that are universally supported, but there are also other research
and experimental projects—even Beehive as a research project easily includes over
We also do not police these activities. People do talk
about their non-work activities, but that is a natural outcome of social
interaction. As long as people are not breaking the business conduct and
social computing guidelines, they are free to use the tools how they like.
This kind of quantitative information really doesn't show how
people are collaborating, just where. Rather, our BlueIQ team collects
success stories, especially repeatable and reusable scenarios, from
individuals illustrating how they are productively working together in these
environments.
In general, it is complex to say how people are
collaborating, but safe to say that they are collaborating widely in the
social environments in IBM.
There are several different ways of looking at what to measure and how to measure benefit or value in social software systems.
First, who receives the benefit from the system, and how do you measure their benefit:
the individual view - the question: "How do I as an individual benefit from the social systems and networks I am involved in?"
the comparable individual view - If I can measure how each person benefits, can we compare that benefit between the persons? This isn't always so, because the value to an individual may be specific to themselves, and not quantifiable in a universal manner
the organizational view - How does the organization benefit from social software (at different levels of social system, teams, departments, units, etc.)? Is this organizational view a composite of the comparable individual view or is it different?
the comparable organizational view - Just like the individual view, there may be a comparable organizational view as well. These again rely on establishing a comparable basis of measurement of one organization versus another.
Then there is a difference between the value of the social network as a structure, versus the content in that network:
the structural view - How do you measure the value of the structure of contacts, partnerships, collaborations, connections, networks, and other connectivity-based views of the social systems you are involved in?
the content or knowledge view - How do you measure the value of knowledge or content from that network? Can they be packaged as assets specifically?
Aside from the value of structure or content as different forms of assets, how do you measure goal-achievement from the network?
These are just my own ruminations. I believe that there are some ways to develop or gather metrics of some of these, but it may be a while before we can agree on how to measure all aspects of these. Before thinking about how to formalize this, you should take a look at these ideas of how to carefully define a measurement process by Peter Andrews, part of the Senior Consulting Faculty for the IBM Executive Business Institute. These are the brainiacs that think about the "thing behind the thing" (to paraphrase the classic statement from many a mobster movie): how to define or measure abstract concepts like innovation, strategy and more.
We had a call today posing the question of how to work with Influencers in your community. I compared community influencers to how the industry works with analysts. Many organizations have Analyst Relations teams whose sole job is to interact and communicate with analysts that cover their products or services. Analysts come from many origins, but are usually recognized as experts in their topic, and hold a primary job function to cover the topic, or consult and interact with many companies, the press, or other customers about the topic.
It's not a far stretch to say that leading bloggers, forum members, and others who interact in a community are starting to gain (or even become) the same status. The primary difference for most is the "amateur" status: rather than "being an expert" as their official job (the professional analyst), they are experts because of an existing job, function or coverage they have. However, my point is that we shouldn't disregard them because of their amateur status. In fact, quite the opposite: this represents an opportunity to work with others who can help spread the word.
For the converted, this is not news, but in general this is still a new notion. In fact, many organizations haven't even considered how to address this new population of amateur analysts. They are looking for a new generation of PR/AR people who do get it. Businessweek's July 24th issue raises this issue.
On the other hand, some have (see the Nike example in that article). Then there's the current debate on blogging-for-pay, where some bloggers are looking to get paid for their activities in relation to product mentions, etc. I find this debate not very surprising when you consider that this already exists in the field of professional analysts. What is happening is that they are trying to make a transition from amateur to professional status.
That is a bigger leap than a lot of people may think.
It's more than a matter of showing that you have X number of people who read your blog, hence you should be paid $Y. In fact, what it points to is the somewhat mysterious/mystical reputation factor. Anyone who tells you they have an equation for fame is talking horse's eggs (it's one of my mom's favorite Bengali sayings: ghorar-dim).
This circles back to our Influencers, and trying to figure out who they are and how to support them. With the social networking software of today, however, it is possible to learn some general behaviors, like how many people read something you've written, how often they may come back to it, what they think of your posting, etc. (Obviously, this only enters the picture when the influencer in question wants to actually participate in this. Otherwise, you'd be running into some heavy privacy invasions.) You need a number of different software elements to find that out, including a good metrics collection and analysis system, a ratings system, per-user histories, peer networks, etc. This is more than a few separate pieces of social software we are talking about, which means a lot of investment in developing the right tools. It's already going on in the industry, as is apparent from the growing number of social software sites and products that many new startups are trying to capitalize on. So, there are and will be many different ways for companies to start leveraging social systems.
But what is the value to the company? How do you even value something like that, especially when influencers are so widespread across many different social networking sites and tools, all of which provide different types of features, metrics, etc.?
Many web sites in the industry have generally agreed upon the Unique Visitor metric as a common measurement for people coming to sites, but how do you measure reputation? Just because you have a lot of people reading a post does not necessarily mean you have any kind of influence over them. In fact, the population may actively dislike an influencer's views. Amusingly enough, this reminds me of how Howard Stern became a big hit: not just his fans but also the anti-fans listened to his show. Because of how the traditional Wanamaker advertising model works (see my earlier post), it is considered a success. Is this still true in the evolving pin-point targeted marketing model available on the Internet? (I'm sure Mr. Stern's producers certainly hope so: can he deliver on his $500M salary?)
Do we need an industry-wide reputation metric? Is that even possible?Would this have any impact on what people use now: the traditional CPMadvertising/marketing metric model?
Well to get off the pontification of where it may go, I'll step into what we are thinking in terms of a reputation model next...
I came across a little gem out in the field of data we collect on ourpageviews yesterday: "Topic Popularity". It looks at the pageviews toour articles and tutorials, per the taxonomy topic keyword in metadataof each article, then divides it by the number of articles that keywordappears in to get an average pageviews per article (per topic name).
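The "Topic Popularity" computation described above (total pageviews of the articles carrying a taxonomy keyword, divided by the number of articles carrying it) can be sketched as follows, with made-up numbers:

```python
from collections import defaultdict

# Hypothetical per-article data: pageviews plus taxonomy topic keywords.
articles = [
    {"pv": 9000, "topics": ["XML", "Java"]},
    {"pv": 3000, "topics": ["XML"]},
    {"pv": 6000, "topics": ["Java"]},
]

def topic_popularity(articles):
    """Average pageviews per article, per taxonomy topic: the total PVs
    of articles carrying a keyword, divided by that article count."""
    totals, counts = defaultdict(int), defaultdict(int)
    for a in articles:
        for t in a["topics"]:
            totals[t] += a["pv"]
            counts[t] += 1
    return {t: totals[t] / counts[t] for t in totals}

tp = topic_popularity(articles)
```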
Good business plans deliver on results, and to get results you first have to be able to determine what they should be and be able to measure them. I've seen many business operations that aren't quite sure what results they are supposed to be delivering, or have no easy way to measure those results. They end up not really progressing or succeeding in the long run.
With the new territory that is Web 2.0, this comes sharply into view. Organizations that are implementing or running Web 2.0 services like blogs, forums, wikis, and other social interaction systems all need to know how to measure them and what measurements are meaningful. At least we're lucky that in the online world, collecting data and doing business analytics can be more automated.
Many companies already agree that for Web sites (Web 1.0) you need to be able to determine pageviews (PVs), and unique monthly visitors (UVs) as your two key metrics, to determine the success of the site.
But now consider what Web 2.0 is about, and think about whether those metrics still give meaningful information. If you're an organization like ours, where our Community has a wide range of Web 2.0 services, how do you compare those metrics between a forum and a blog? Does it even make sense to do that when what you're really interested in are things more like: How vibrant or healthy is our community? Who do people interact with? Is our community self-supporting, or do we have to do a lot to keep it alive? How much does it cost us to support our community?
My idea on this is that PVs and UVs are too low-level to answer these questions, and we need another level of metrics beyond them, which I'll call participation metrics. These metrics are used to try to answer the questions, or at least get a sense of what those levels are.
Now, the catch: How do you determine participation metrics in a Web 2.0 system when even the ways people participate are very different between blogs, forums and other services?
The key, I think, is to go back to social network theory and the core ideas of collaboration; in particular, the idea of relationships between the members of any social network or community. It's fairly easy to quantify a relationship, but it's very hard to determine the quality of the relationship.
In this case, I'm focusing on the quantity of relationships, as well as the population mixes. Taking dW as an example, there are many ways of looking at our population, but the one that interests me here is the relationships between a consumer and a producer. Simply said, you can look at four main population segments:
the developerWorks staff (dW)
our internal "customers" across the many product and technology groups in IBM (IBM)
the general membership audience of dW (members)
the authors, content sources, bloggers, and major contributors/interactors (the experts)
Thus, you can create a matrix of sorts here based on the interaction activity going on in a specific area:
General technology blog
Blog by IBMer on tech/product
alphaWorks tech forum: Member-2-member (m2m), IBM-2-member (i2m)
dW produced podcast: expert-2-member (e2m), dW-2-IBM (d2i)
Expert roundtable forum
You can go on defining more and more based on every (repeatable) use-case you can think of. More significantly, what this does is coalesce all the Community uses that generally contribute to specific relationships. While not entirely accurate, you could generalize that each use case mostly contributes to one or two types of relationships.
Thus, you develop a mapping across your entire landscape of interaction types for participation metrics based on relationships. If you have multiple communities (or dozens, like dW has), you could limit the scope of the data to all service uses relevant to a specific community (e.g., the IBM Rational ClearCase community) or a specific set of communities (e.g., all IBM Rational communities), or you could look across all your communities at once (e.g., all dW). You have, essentially, a set of participation metrics that applies to a range of data.
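A rough sketch of rolling interaction volume up through such a mapping, and of the member-2-member ratio behind the self-supporting question below. The use-case-to-segment table, the counts, and the even split of a count across a use-case's segments are all assumptions for illustration:

```python
from collections import Counter

# Hypothetical mapping from each service use-case to the relationship
# segment(s) it mostly contributes to, per the matrix above.
usecase_segments = {
    "alphaWorks tech forum": ["m2m", "i2m"],
    "dW produced podcast": ["e2m", "d2i"],
}

# Hypothetical interaction volumes observed over some period.
interactions = [
    ("alphaWorks tech forum", 500),
    ("dW produced podcast", 120),
]

def segment_counts(interactions, usecase_segments):
    """Roll interaction volume up into relationship segments. Where a
    use-case feeds two segments, split the count evenly (an assumption)."""
    seg = Counter()
    for usecase, n in interactions:
        targets = usecase_segments[usecase]
        for t in targets:
            seg[t] += n / len(targets)
    return seg

def m2m_ratio(seg):
    """Share of all relationship activity that is member-2-member:
    one rough answer to 'is the community self-supporting?'."""
    return seg["m2m"] / sum(seg.values())

seg = segment_counts(interactions, usecase_segments)
```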
How do you use these metrics? It depends upon the questions you ask:
Is our community self-supporting? This is a relatively easier question because it generally looks at your ratio of member-2-member relationships versus all others. On the other hand, if you find your staff responding with all the answers all the time (e.g., i2m), then the answer might be no.
How vibrant is your community? First you need to define what vibrancy or health means to your organization. For some organizations that might mean how many people are talking about product X (the # of relationships in community around X). Others might look at how many experts have formed around your community. Yet others may be interested if the self-supporting segments of community are growing.
How much does it cost us to support our community? What are the cost centers? These are tougher ones. It requires a second set of data defining how much it costs to deliver each service or community use. But you can map, e.g., e2m is costing us $X across the entire community, and $X1, $X2, for the top activities in e2m.
How do we learn what is effective in our community? Here it helps if you have several or many communities to compare against. E.g., within expert2member, you could have a dozen different communities spread across different topics. Examine why the top e2m communities are growing faster than the ones on the bottom. It might be subjective elements (population for topic 1 is just growing much faster) or it might be things which you can address such as use-case features, or the approaches of the experts. It will at least help you to recognize how these are doing and provide a basis of comparison.
Again, this idea is more of a method than actual steps to take for your communities. You can see that the information is subjective to the goals and direction of your organization.
I was reading an article by Om Malik in the current Dec 05 issue of Business 2.0, called The Return of Monetized Eyeballs. In essence, it's talking about the fact that buyers are once again valuing the ideas of pageviews and monthly unique visitor counts.
They refer to recent purchases like MySpace.com (sold to News Corp. for about $580 million), with their 40 million registered members.
Apparently, the current value for a single unique monthly visitor hovers around $38. Using that value, they determined (amongst others):
Slashdot/OSTG to be worth about $155 million (around 4 million UVs/month)
TheFacebook, a university student-targeted site that gives each member their own social space - $127M (3.34 M UVs/mo)
The Drudge Report - Matt Drudge's political blog - $120M (3.2 M UVs/mo)
If you are curious how dW stacks up, using the 2 million unique visitors each month stat from an October 2005 news item, we would be worth about $76 million, based on those visitors to our online site alone. [FYI: dW does a lot more than just the online sites].
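Checking those back-of-the-envelope figures at the quoted roughly $38 per monthly unique visitor (small differences from the article's numbers come from rounding in the reported UV counts):

```python
# Valuation = monthly unique visitors * value per UV, per the article.
PER_UV = 38  # dollars per monthly unique visitor

sites = {
    "Slashdot/OSTG": 4_000_000,
    "TheFacebook": 3_340_000,
    "Drudge Report": 3_200_000,
    "developerWorks": 2_000_000,
}

valuations = {name: uv * PER_UV for name, uv in sites.items()}
# developerWorks: 2,000,000 * $38 = $76,000,000, matching the $76M above.
```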
At least they do point out that not all pageviews are alike. I'd add to that not all unique visitors are alike either.
So next comes some ideas of how to measure community activity relative to these industry metrics...