The late, great management guru Peter Drucker, who helped pioneer the idea that there is a whole economic class known as the Knowledge Worker, once said: "Nobody has really looked at productivity in white collar work in a scientific way. But whenever we do look at it, it is grotesquely unproductive." In other words, trying to define how to measure the productivity or performance of knowledge workers (commonly called white collar workers) is an exercise in futility.
Davenport's book points out that it is hard to create a common, correlative measurement for knowledge workers as a whole class. In fact, he describes the typical way of dealing with knowledge workers as the HSPALTA approach: Hire Smart People And Leave Them Alone. Unfortunately, this doesn't really examine how to improve the system or help people improve themselves.
Maybe we will eventually discover some magical formula that measures this performance and shows how to improve it. In the past, in agricultural societies, we found ways to improve agricultural output. Farmers from the dawn of time will tell you that "farming is an art", but the truth is that farming is also a science. Art is subjective, hard to measure, quantify, and teach. Science is more structured and actually can be taught (although not necessarily easily). During the Industrial Age, we achieved similar goals for manufacturing output. Now that we are in the Information Age, we are stumped, because rather than a physical unit of output, it is more of a mental, qualitative output, and that seems to us a very subjective thing.
The good news is, as Davenport points out, there is at least one way to measure the quality of knowledge. It's been done for centuries: the Peer Review process. It's most common in academia, where a group of your peers examines your output and gives an analysis of what they think of it. It's how Masters and PhDs are still awarded, for the most part, worldwide.
I think this is a good thing for us, because the Peer Review process is a technique that can be applied to unstructured knowledge on our site. In its simplest form it is a Ratings system, whereby anyone reading a piece of information on the site can vote 1 through 5 on what they think of the article. It's entirely subjective, but if you get a large number of ratings, it tends to average out what people think of the information. This can apply to structured as well as unstructured knowledge. This is the first level of a Ratings model.
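As a minimal sketch of this first-level Ratings model (the function name and validation rule are my own assumptions, not anything from the original), the whole idea reduces to averaging 1-to-5 votes:

```python
from statistics import mean

def average_rating(ratings):
    """Average a list of 1-5 star votes for an article.

    Returns None when there are no valid ratings yet. Out-of-range
    votes are discarded (an assumed policy, not from the original).
    """
    valid = [r for r in ratings if 1 <= r <= 5]
    return mean(valid) if valid else None

# With enough votes, the average settles toward the crowd's opinion:
# average_rating([5, 4, 3, 5]) -> 4.25
```

Note that the "large number of ratings" caveat in the text matters: a single 5-star vote and a thousand votes averaging 4.25 both produce a number, but only the latter is a meaningful signal.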
That's a very basic notion. To be more useful, you may want to collect all those ratings per person's knowledge output and store them, along with the knowledge output items themselves, as part of that person's identity. Thus, you can see what a person has contributed and produced, and what people generally think of their output (their level of quality). This is a more evolved Ratings system, generally referred to as a Reputation model.
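One way to picture this Reputation model is a small ledger that ties rated items back to their author's identity. This is a sketch under my own assumptions (the class and method names are invented for illustration):

```python
from collections import defaultdict
from statistics import mean

class ReputationBook:
    """Track each contributor's items and the ratings those items receive.

    A hypothetical sketch of the Reputation model described above:
    ratings are stored per item, per author, so both the person's
    contributions and their overall quality score are visible.
    """

    def __init__(self):
        # author -> {item_id: [list of 1-5 star ratings]}
        self.items = defaultdict(dict)

    def rate(self, author, item_id, stars):
        """Record a 1-5 star rating against one of an author's items."""
        if not 1 <= stars <= 5:
            raise ValueError("ratings run from 1 to 5")
        self.items[author].setdefault(item_id, []).append(stars)

    def contributions(self, author):
        """Everything this person has produced, with its ratings."""
        return dict(self.items[author])

    def reputation(self, author):
        """The author's overall quality score: the mean of all
        ratings across all of their items; None if unrated."""
        all_ratings = [r for rs in self.items[author].values() for r in rs]
        return mean(all_ratings) if all_ratings else None
```

The design choice here is that reputation is derived from the stored ratings rather than kept as a separate counter, so it always reflects the current state of a person's output.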
Then, in turn, you could use a person's current rating as a weighting factor for any rating they apply to others; i.e., multiply each rating by the rater's own current rating, then normalize over the total weight. Thus, when an industry luminary says you have a good idea, it counts more toward the rating of that information than when a novice rates it. You end up with a weighted average of your Reputation based on who actually rates your articles. This is a second evolution of Ratings into a weighted, or Ranked, Reputation model.
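The weighting step above is just a weighted average: each vote is scaled by the rater's own reputation, and the total is normalized by the sum of those weights. A minimal sketch, assuming reputations are non-negative numbers (the function name and the sample reputation values are illustrative, not from the original):

```python
def weighted_score(votes):
    """Weighted-average rating for one item.

    votes: list of (stars, rater_reputation) pairs. Each vote is
    multiplied by the rater's reputation, then the sum is normalized
    by the total weight, so high-reputation raters count for more.
    Returns None when there is no weight to normalize by.
    """
    total_weight = sum(weight for _, weight in votes)
    if total_weight == 0:
        return None
    return sum(stars * weight for stars, weight in votes) / total_weight

# A luminary (reputation 5.0) rating an item 5 stars outweighs
# a novice (reputation 1.0) rating it 2 stars:
# weighted_score([(5, 5.0), (2, 1.0)]) -> (25 + 2) / 6 = 4.5
```

Feeding these weighted item scores back into each author's reputation is what turns the flat Ratings model into the Ranked Reputation model the paragraph describes.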
How do you yourself become such an "industry luminary"? Essentially, a lot of high-ranked people giving you good ratings implies that a lot of knowledgeable or influential people think your output has a high level of quality. Thus, you would appear higher in the rankings, hopefully amongst those lofty people who are the luminaries.
Measuring the performance of Knowledge workers