Cyberspace, The Virtual World, The Matrix, Mainframe, ENCOM, etc. Now The Living Web.
As part of a generation that has grown up with online games, books, stories and movies about virtual worlds and moving at the "speed of the Net", it seems like it is taking quite a bit of time to get to the idea of having such a world in common use (and hopefully not a dystopic vision like that in The Matrix). I get the feeling we are looking so far ahead that we are not focusing on taking the steps involved in getting there.
Most of the mainstream media (books & movies) still focus on the 3D world, because obviously it's cool to imagine an alternate reality that you can actually move around in. Even games tend toward that direction because it is the environment that can more easily excite people. They all talk about having a persona that moves and interacts with this world and the other personas in it. In fact, in the Matrix, the world is so real, it is the "real world" as we know it; it's only when you understand the actual world of the Matrix behind it that you can do interesting things like defying gravity.
The truth is probably a little depressing. Most of the virtual world that we know as the Web is 2D; in fact, by just a guesstimate, I'd say over 98% of all the info we have online is text. I also think the virtual world will remain overwhelmingly text-based for decades to come. To get to that futuristic vision as portrayed in the media, a lot of work behind the scenes still needs to happen.
Ten steps (not in order) to get to virtual worlds:
Establish ubiquitous individual identities or online personas.
Enable personas with actions they can do (e.g., create content, initiate contact/discover others, exchange information, etc.)
Enable personas to categorize, aggregate, identify, mark or otherwise "control" the information around them (i.e. tagging)
Define "homes" or bases that each persona inhabits and controls.
Establish reputations: enable rankings or ratings of personas, based on what others think of their interactions with them
Allow personas to "move freely" across system boundaries, or export their personas or info about them
Establish online economies (virtual valuation and common exchange rates, based on fixed rather than unlimited valuations)
Establish domains and guidelines of how they operate (i.e., online "cities", and proper governance of these cities)
Global accessibility to the persona or personas you control, using any device
Actual visualization of the personas, their homes and the domains they live in (yes 3D worlds)
These details are a little more mundane, and most people would just prefer it if someone else created and provided them. In fact, I think there is already a trend toward this new kind of "hosting" of online personas, not just Web pages. Even the idea of blogging fits somewhere beyond a Web home page but still short of a complete persona.
Don't get me wrong: in limited areas, most of these properties already exist. In particular, I'm talking about online games and MMORPGs. However, they are limited in the sense that they create a separate fictitious world that you have to apply your context to. In other words, yes, it's make-believe, which is also why it is fun. They are also limited in the sense that they exist almost entirely within their own domain. You don't see characters moving out of a game like EverQuest and into World of Warcraft, or even any reason or correlation to do so.
In terms of people and businesses, however, the information is real (not artificially created for role-play) and pervasive wherever you go. The good news is that some of these items are already starting to happen, as you can see in what Newsweek describes as The Living Web. What's more, these personas are a limited thing: my belief is that people in general do not want to maintain multiple personas, for the same reason people do not really want to keep track of multiple email accounts.
So there you have it. There's much work to do in the middle to get to pervasive virtual worlds.
My friend John--also known as Action Figure John but that's a different story--brought by the most expensive coffee I'd never heard of until then. This coffee is so hard to produce that I doubt Starbucks or Peet's could ever list it on their boards.
Jamaican Blue Mountain, you say? Pshaw... that's middle class stuff... :)
At around $150 or more a pound for the roasted beans, this coffee has to be shipped directly from the plantation. It is the legendary Kopi Luwak... and here's where the snickering begins.
This exotic coffee from Indonesia can only be found on plantations in Sumatra, Java and Sulawesi. Not only do they have to grow a good bean, but it requires the assistance of Paradoxurus hermaphroditus, the Palm Civet (/snicker). This small mammal is common in many parts of South-East Asia and performs the very important function of eating the raw red berries, digesting them, and then pooping them out! (/snicker /snicker) The enzymes from the digestive tract apparently help to break down some of the bitter proteins. The happily fed mammal then walks away to eat another day. Farmers collect the beans, give them a light roast, then vacuum pack them and ship them to coffee extremists worldwide. John ordered it from AnimalCoffee.com, I believe.
I just had to try this out, even though I'm not a coffee drinker myself.
For our afternoon of watching the Tivo'd new season episode of Battlestar Galactica, John brought his pristinely packaged poo poo coffee, along with his shiny brass coffee pot and burner, which he uses to make Turkish/SE Mediterranean coffee (yes, the true gritty stuff).
John ground a handful of beans in his brand new matching brass hand-mill coffee grinder, since it gets smaller grains than an automatic mill. It takes a good 5-10 minutes of grinding to get it that way, though. Then, with some fine drinking water (for fewer impurities) boiled over a small alcohol stove, the coffee came out quite nicely.
He thinks we still need to refine the ratio of coffee to water and how fine to grind it. The grit was not as fine as the Turkish coffee he usually drinks (about 2 pots a day). But as you can see, none of it went to waste, and people quite enjoyed it to the bottom. (/snicker)
The HBR June issue has an article (requires subscription) by John Gourville which is one of the first I've read to explain so clearly the issues behind the psychology of adopting new products. I take this to mean not just products but also services from any organization.
In summary, the idea was raised by the Nobel Prize-winning psychologist Daniel Kahneman, who explores why people deviate from rational economic behavior. This, combined with other work on how individuals value choices in the marketplace, defines the basis of how people handle the introduction of a new product or service.
The four behaviors that arise are: people evaluate new products as alternatives, based on perceived value rather than objective value; they consider new products relative to points of reference to existing ones; they view such references as benefits/gains or shortcomings/losses; and finally, the most important behavior, losses have a far greater impact than gains.
This leads to the endowment effect, which the author reports as the behavior where people hold things they already have in much greater value than those they don't. In fact, it leads to a multiplying effect in the market: consumers value their existing holdings as three times more valuable, while organizations with new ideas/products value their own offerings as three times as valuable. Thus, convincing consumers to change from the old to the new product may require up to nine times the improvement over the older product. As the author explains, Andy Grove of Intel has held the belief that an innovation that can transform the industry rapidly needs to offer a 10x improvement over existing alternatives. (I know my Sharp Aquos HDTV doesn't quite reach that high, but I still like the switch over from my 25" Sony tube :)
The author continues by giving several categories of probability: the Sure Failure, the Easy Sell, the Long Haul, and the Smash Hit. An Easy Sell involves limited (small) changes to the existing product and limited changes to the behavior necessary. A Smash Hit has significant changes to the product but only limited changes to the behavior.
I strongly suggest you pick up a copy of the HBR to understand the full extent of this article.
This has direct impact on any new technology, a topic of constant focus for us at dW. Every company wants the smash hit, but for many it comes more by accident than on purpose. Often they focus on improvements to products based on what they think is important, and during the product development stage do not really consider the factors of user adoption behavior because they are so hard to measure. I've come across so many cases where the technology is considered quite advanced by the technical team that develops it, supported by surveys of groups of bleeding-edge customers who are already raring to use it; but they fall short when they call out to the general market, which is significantly slower to adopt the idea.
In fact, in some cases, the improvements are just not enough (per the above marketing theory). In other cases, the perceived value of the product is just not conveyed adequately to the consumers, or by the right influencers who can change perception. It takes the right mix of technical know-how, eloquence, stubbornness, ingenuity, and charm to find the proper evangelists for a project. But first you need to be able to find those who are fans of the idea in the first place (this goes back to my other thought on building a fan base); the evangelists themselves must be convinced fans, or others will eventually consider them a sham.
This theory applies to the tens or hundreds of thousands of new projects that start up on the Internet. A product manager needs to consider whether the product falls into the win category of Smash Hit or will be a Long Haul sell, as much as the innovation of the product itself. This means understanding the user behavior and need for the product in the first place. The difficulty lies in quantifying the improvements.
In some cases, you can do it by this process:
get a sample population of users of an existing alternative to the product
investigate what part of the product they utilize the most, which has the biggest impact, or which causes the most pain
measure how much of the time or effort is spent by users on those elements
consider what, if any, improvements the new product has to those impacting elements of the old
if the product improves the time- or effort-consumed for that element, measure how much it does so
consider this improvement versus the type of win you could get
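The steps above can be sketched roughly in code. This is a hypothetical illustration: the function names, the sample numbers, and the use of the ~9x endowment-effect figure as a hard threshold are all mine, not the article's.

```python
# Hypothetical sketch: compare a new product's measured improvement on a
# pain-point element against the roughly 9x threshold implied by the
# endowment effect (3x consumer bias times 3x developer bias).

def improvement_factor(old_minutes_per_task, new_minutes_per_task):
    """How many times faster the new product handles the element."""
    return old_minutes_per_task / new_minutes_per_task

def adoption_verdict(factor, threshold=9.0):
    """Crude bucket: does the improvement clear the endowment-effect bar?"""
    return "clears the ~9x bar" if factor >= threshold else "falls short of the ~9x bar"

# Example: users spend 45 minutes per task on the painful element today;
# the new product cuts that to 10 minutes.
factor = improvement_factor(45, 10)
print(round(factor, 1), "->", adoption_verdict(factor))  # 4.5 -> falls short of the ~9x bar
```

A 4.5x improvement sounds impressive in isolation, which is exactly why step one (measuring the existing alternative) matters before celebrating.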
Easy to say all this; hard to carry it out.
The impact of the endowment effect is the old adage: people fear change. It's not fear per se, but more inertia. To overcome the inertia, you need to change perceptions. To grow customers and fans, you need to build the audience and community around the idea, to build up momentum of understanding and acceptance that overcomes this inertia. (I mentioned earlier that this is a crucial second step of the Open Source business model.)
As our developerWorks Community Editor, I'm faced with issues which are part talent management and part information management. Aside from all the regular issues of focusing effort and developing content to build a stronger community, I'm facing a happy problem of having so many enthusiastic contributors: what happens when there's too much to read?
It isn't hard to see this problem; just try googling a topic and see how many hits you get. Would you really want to read more than 100 entries?
In talking with James Snell and others a few weeks ago regarding corporate blogs, wikis and other such tools, we agreed that there are typically two ways of looking at who should contribute. We both agreed everyone should be able to do so; but when you start running into large numbers, how do you organize this information?
First, there's the top-down approach where you pick the topics and find people to contribute to each category. This is quite commonly used; dW does this in our many zones.
Then there's also a bottom-up approach where you want topic experts to self-emerge from a population.
This second approach is harder to figure out, but the answer may not be as complicated, especially when you have a large population and numbers on your side.
I think the idea lies in building a reputation network, modeled on trust. Each individual essentially would rate how a particular interaction with a potential expert turned out (using two types of trust, which I'll discuss later). You can then average all the trust ratings per candidate; the number of interactions, combined with this average trust rating, describes their reputation.
Those with higher reputations automatically emerge as "experts" in whatever categories they want to focus on. The process is democratic and self-emergent within the community. For that matter it also automatically helps to define topic areas that people are interested in.
That's the basic principle. I'm quite sure there are many ways of looking at this, so I'd be curious to hear counterpoints.
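As a minimal sketch of that averaging idea (the names and ratings here are made up; this is just the aggregation principle, not any particular system):

```python
# Hypothetical sketch: aggregate per-interaction trust ratings (1-5) into
# an average plus an interaction count, then let high scorers surface.
from collections import defaultdict

ratings = [  # (candidate, rating from one interaction)
    ("alice", 5), ("alice", 4), ("alice", 5),
    ("bob", 3), ("bob", 2),
]

by_candidate = defaultdict(list)
for who, score in ratings:
    by_candidate[who].append(score)

# reputation = (average trust, number of interactions)
reputation = {who: (sum(s) / len(s), len(s)) for who, s in by_candidate.items()}

# Sorting by (average, count) lets "experts" self-emerge from the population.
experts = sorted(reputation, key=lambda w: reputation[w], reverse=True)
print(experts[0])  # alice
```

Note the tuple ordering: average trust comes first, so a well-rated occasional contributor still outranks a heavily active but poorly rated one.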
This rings back to the most recent Pirates of the Caribbean movie, where at one point a whole fleet of pirate vessels appears at sea with their own pirate flags. It looked like they all shopped at the same Pirate Museum gift store. Obviously the movie is fiction, with more attitude added to pirate life than reality, but sometimes you see just how fake it is. In terms of the actual movie, I still think the first of the three was the best, with this third overreaching far too much in terms of plot and direction. Overstimulation is sometimes just that. The characters behave as the director figured self-interested pirates would, meaning they would do whatever they needed to achieve their own goals, but at some point, it got far too complicated to keep straight who wanted what. Visually, they also tried far too much, pushing into ludicrous speed... yes, they went plaid.
According to a recent New York Times article, Tim Berners-Lee is partnering with MIT and the Univ of Southampton (UK) to launch their Web Science Research Initiative. I don't have much more information than that, but it sounds like a graduate-level research effort into a more modern version of social networking analysis. The key people are Berners-Lee, Wendy Hall (head of the School of Electronics and CS at the Univ of Southampton), Nigel Shadbolt (professor of AI, Univ of Southampton), and Daniel Weitzner (principal research scientist at MIT).
Commenting on the new initiative, Tim Berners-Lee, inventor of the World Wide Web and a founding director of WSRI, said, "As the web celebrates its first decade of widespread use, we still know surprisingly little about how it evolved, and we have only scratched the surface of what could be realized with deeper scientific investigation into its design, operation and impact on society.
"The Web Science Research Initiative will allow researchers to take the web seriously as an object of scientific inquiry, with the goal of helping to foster the web's growth and fulfill its great potential as a powerful tool for humanity."
I'm pretty sure the web is already an object of serious scientific and even commercial inquiry, but more effort is always a good thing. In comparison, our undergraduate UAMIS class seems much less conceptual. I'm sure those ideas will eventually trickle down to us too. Social network analysis is a complex enough task, and not a simple topic; i.e., it'd need a full-blown semester to teach.
I listened to our pre-release version of the 40-minute podcast interview with Tim O'Reilly, recorded recently by our editor Scott Laningham. It starts out as a backgrounder on how O'Reilly & Associates got started but moves quickly into the nature of participation, how the Internet and the web are the medium for new business, and the participatory nature of the net (what is now referred to as Web 2.0). Tim agrees with the other Tim (Sir Tim Berners-Lee) that the term Web 2.0 really only serves to clarify an aspect first initiated with the Internet: network effects.
It's available on dW Radio and definitely worth listening to. I'll probably make it a supplementary piece for our MIS students to listen to for the class.
I came across a little gem in the field of data we collect on our pageviews yesterday: "Topic Popularity". It looks at the pageviews for our articles and tutorials, per the taxonomy topic keyword in the metadata of each article, then divides by the number of articles that keyword appears in, to get an average pageviews per article (per topic name).
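In code, the metric as I understand it would look something like this (the article data below is invented purely for illustration):

```python
# Hypothetical sketch of "Topic Popularity": total pageviews per taxonomy
# keyword divided by the number of articles carrying that keyword.
articles = [  # (pageviews, taxonomy keywords) -- made-up numbers
    (1200, ["java", "web"]),
    (300,  ["java"]),
    (900,  ["web"]),
]

views, counts = {}, {}
for pv, keywords in articles:
    for kw in keywords:
        views[kw] = views.get(kw, 0) + pv
        counts[kw] = counts.get(kw, 0) + 1

# average pageviews per article, per topic name
popularity = {kw: views[kw] / counts[kw] for kw in views}
print(popularity)  # {'java': 750.0, 'web': 1050.0}
```

Dividing by article count is what makes this a popularity signal rather than a volume signal: a topic with two articles and heavy traffic outranks one with fifty lightly read ones.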
I've been playing around with TouchGraph's Google Browser to see how our blogs are linked in from other sites; it displays a graph network of other sites, searchable on Google, that have pointers to your own blog.
I'll try to take a snapshot of the results later and paste that in here.
The basis of community in my mind is in the interaction between people. The tools, mechanisms, processes, and communications we use are the outward expression of these interactions. A community really lies on a subliminal level in the form of the trust we develop directly with other members of the community.
For the most part, this trust is an implicit factor that we associate mentally with another entity (a person or a group). However, externalizing this factor into a meaningful measurement that many others can relate to can only yield a rough approximation. Beyond that, normalizing that measurement across anyone, anywhere reduces it to a lowest common denominator of trust.
This is still important, however. By indicating how you would rate trust between yourself and another person, and by sharing this rating with others, you describe the relationship network around you. Metcalfe's law states that the usefulness, or utility, of a network is proportional to approximately the square of the number of users of the system. This is useful in understanding the context of interactions between people. Context, after all, makes the difference between princes and purloiners.
You can build a complex directed graph by combining the various relationship networks of all the individuals in a community. However, the graph becomes increasingly complex with each person you add: per Metcalfe's law, the possible number of connections grows with the square of the users, quickly making for huge numbers of connections.
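The arithmetic behind that complexity is easy to check: in a directed trust graph, every ordered pair of people can carry its own rating, so the edge count grows as n(n-1), roughly n squared.

```python
# Each ordered pair of people can carry its own trust rating, so a directed
# relationship graph grows as n * (n - 1) -- roughly n**2, per Metcalfe.
for n in (10, 100, 1000):
    print(n, "people ->", n * (n - 1), "possible directed ratings")
```

At a thousand members that is nearly a million possible ratings to track, which is what motivates collapsing the graph into one aggregate score per person.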
A simpler way is to create an aggregate trust rating per person across all the ratings they receive from others. Each entity thus has a single rating factor.
Actually, two rating factors, really. After any interaction, you need to ask:
1. Helpfulness - Was this knowledge source helpful? Do you trust that they were providing the information in good faith and for your benefit?
2. Usefulness - Was the information provided by the knowledge source useful to you? Did the information help? Can you put it to use?
A person can be a source of useful information but be too busy to help people get that info. On the other hand, a person can truly want to help but not really know much about a subject.
How do you apply these ratings? Keep it simple: rate 1 through 5, lowest to highest, on each of these factors per interaction you have with someone.
What you really end up with over time is a measure of the reputation of an entity, as well as the number of people they interact with. By measuring recurring interactions with the same knowledge source, you can also measure how loyal people are to that source.
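A sketch of that two-factor bookkeeping (the interaction data is invented, and "loyalty" here is my simple interpretation: raters who come back to the same source more than once):

```python
# Hypothetical sketch: record helpfulness and usefulness (1-5) per
# interaction, then derive averages and a simple loyalty count per source.
from collections import defaultdict

interactions = [  # (rater, knowledge source, helpfulness, usefulness)
    ("amy", "carol", 5, 4),
    ("amy", "carol", 4, 5),   # amy returns: counts toward loyalty
    ("ben", "carol", 3, 5),
]

helpful = defaultdict(list)
useful = defaultdict(list)
visits = defaultdict(lambda: defaultdict(int))

for rater, source, h, u in interactions:
    helpful[source].append(h)
    useful[source].append(u)
    visits[source][rater] += 1

for source in helpful:
    avg_help = sum(helpful[source]) / len(helpful[source])
    avg_use = sum(useful[source]) / len(useful[source])
    loyal_raters = sum(1 for n in visits[source].values() if n > 1)
    print(source, round(avg_help, 2), round(avg_use, 2), loyal_raters)
# carol 4.0 4.67 1
```

Keeping the two factors separate matters: a source who is knowledgeable but too busy shows high usefulness and low helpfulness, and averaging them together would hide that.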
As part of the Managing online communities course, students have a semester-long assignment to blog weekly. This is to force the habit of writing on a regular basis and to let them feel the experience for themselves. As community managers they would have to encourage other people to blog regularly, but as most bloggers know (most blogs fail due to lack of updates), it isn't trivial to write on a regular basis. When you hit the workforce, it becomes even more difficult.
In any case, all our MIS 300 blogs can be read by the public. I think only the first 38 of those blogs are operational (the number of students we have). You'll find varying degrees of creativity and coverage, but that is to be expected.
Other tools-based assignments include those on forums, group editing a wiki, and creating a podcast. I'll try to point to those when they are ready.
Lee Provoost’s post, “Adopting Enterprise 2.0 in large organisations: Fiat or Ferrari?” talks about how people can start with smaller cars like a Fiat and eventually upgrade, if need be, to a Ferrari, rather than spend decades riding public transportation until they save enough for their top car. I don’t see public transportation as entirely negative, but when it comes to social software, this analogy ignores a large problem: migrating from one social software system to another is a lot more complicated than just replacing the tool itself.
Having seen first-hand several generations of social tools in our company, and having tried to get people to migrate from one existing environment to a new one, I know it takes a bit of work to get people to switch to a new system. A better analogy than cars may be “transportation networks”. In the US, think about asking people to stop driving and use trains instead.
First of all, change is hard: people become used to certain features and know how to work them quickly. Unless the new social tool has the exact same features handled in exactly the same way, it means new learning, often new terminology, and trial and error.
Social tools also don’t always make it easy to migrate from one system to another. So you may also need to reenter your profile information, preferences, and generally reinstate what you may have already had.
People place a lot of content and context into their social environment, and unless that is all migrated with them too, they may see it as “losing their standing in that social environment.” For some this is in the form of rankings; for others, useful or valuable content that they left behind. New social environments don’t need to start at zero, but more often than not they are not fully compatible with the old ones, or provide different tools and require different fields; thus, migration is not a simple prospect.
Finally, a new system should probably not only perform better, but allow people to interact in better ways. This also means new features to ask people to try out. The power of the new social tool may lie in those new features, but in maintaining the status quo, many users will keep using what they know until enough people have adopted the new features.
These are just a few basic adoption hurdles that make it difficult to simply “level up” to a new social system. If you run a social software system in your enterprise, you certainly should not treat it like just buying a new car.
We'll be upgrading the Managing online communities course next semester. The last I heard, the new course number will be MIS 424/524, which means it will be both an undergrad (senior) and a grad-level course. Partially it is administrivia: grad students and professionals want to take it and can't as a 300-level (junior) course. Next semester, you don't have to be a degree-seeking student to register for this course, but yes, it is still local (rather than an online course) and a full semester long.
To go with that, we will need to revise some of the curriculum as well as add work appropriate for graduate-level students. Unfortunately, I have not had much time to think about that and may not have it done before I leave on vacation in two weeks. I guess I get to take my laptop along and work a little over the holidays.
One of the regular assignments we have is analysis of news articles and papers on Web 2.0 and social projects and sites, and a presentation to the class. This semester has been more freeform to see what students will come across. For the grad level, I think we will need them to provide greater insight into the site they review: what is their business model/how do they make money or pay for it; how do they attract/retain visitors; how do they reward members; etc.
The final project also needs to start much earlier to give our students more time to complete the work. Apparently there was some confusion about what they were supposed to do. Also, the selection of tools at their disposal was not sufficiently varied. We'll try to look for more tools per my previous note.
We like the aspect of interfacing with high schools, but we may need to expand to more schools to have a larger user base. The complication is that each trip to the high school involves some overhead for both parties; the high school computers have firewalls/filters blocking some relevant sites; they may not be adequately equipped for enough students; and high school classes may have a lot of movement (people come in and out of class often). A better way would be to bus the high school students to a controlled environment like the Hoffman ecommerce lab at UAMIS.
The core competency here is facilitating relationships and communications between different parties. There are in fact many different types of interactions that this role takes on; as such, they participate in many different role-interaction patterns. This is significant since, when such patterns are frequent and repeated, they become almost transactional, and therefore measurable. If you need an example of a more common role-interaction pattern, think of a support call from initial contact to completion, after a solution or resolution has been reached and the customer is verified as satisfied. Each such complete interaction has a measurable value; you could also measure it in terms of the cost or time it took to conduct that interaction end-to-end. Finally, you could measure it in terms of the quantity of interactions actually reaching completion rather than ending in partial or incomplete resolutions (likely meaning an unhappy customer left hanging).
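Sticking with the support-call example, the end-to-end measurements might be sketched like this (the interaction records are made up for illustration):

```python
# Hypothetical sketch: measure a role-interaction pattern end-to-end --
# completion rate across interactions and average duration of completed ones.
interactions = [  # (minutes from initial contact to close, reached completion?)
    (30, True), (45, True), (20, False), (60, True),
]

completed = [mins for mins, done in interactions if done]
completion_rate = len(completed) / len(interactions)
avg_minutes = sum(completed) / len(completed)
print(completion_rate, avg_minutes)  # 0.75 45.0
```

The same two numbers (completion rate, average end-to-end time) would apply to any repeated pattern once you define what "complete" means for it.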
The role-interaction (RI) patterns for community managers are of a different sort, but the following tables give some suggestions of the kinds of patterns they could participate in.
Table 9.1 -- The Value of Community Managers

To the membership
- Improving relationships with members by providing a human face to an organization or a large social group
- Bringing the value of their own relationships and contact networks within the organization
- Arbitrating conflicts between members, or between the member and the organization
- Coordinating member projects and activities
- Serving as a repository of situational knowledge about the organization, the members, or the content

To the sponsoring organization
- Serving as an organizational spokesperson to the membership
- Providing a view into the climate of the members about the topic or purpose (the business climate within the enterprise, across business partners, or across the industry)
- Housing a repository of situational knowledge about members, the content, or the topic
- Encouraging and monitoring member involvement and participation in the topics that interest the sponsor
- …members might have with the organization
- Describing value or outcomes of the social group and potential for hires or rehires
Table 9.2 -- Supporting Customers or Partners

Audience: Customers or business partners (public-facing, cross-boundary, third-party)

Marketing or sales
- Increasing the number of touches with customers
- Identifying customer evangelists and activists
- Discovering industry trends and customer interests
- Acting as marketing liaisons to customers
- Guiding marketing on appropriate messaging or tactics

Product development and delivery
- Assisting in gathering product requirements from the audience
- Conducting market research with customers
- Identifying competitor activity or offerings
- Conducting design tests and product beta-testing
- Delivering products to customers online

Customer relations or product support
- Providing a human interface to the organization or social group
- Serving as a “finger on the pulse” of audience concerns
- Helping partners locate internal representatives or departments
- Helping customers find appropriate support resources
- Identifying troubled or …
There's another table on their roles within the enterprise supporting employee and organizational interactions.