Wednesday, September 24, 2008

Structural equivalence: related tags in social bookmarking

In my "Holy Trinity of Network Power," structural equivalence is conceptually the most obscure. But practically speaking, it is easy to use. For example, searching for "sna" with the social bookmarking engine delicious provides the following:

I have enlarged and highlighted the "Related Tags" provided by delicious. This sort of information helps people find and learn from others with shared interests, using structural equivalence, regardless of how many degrees of separation they have on Facebook or LinkedIn. I'll expand more on this idea soon.

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License and is copyrighted (c) 2008 by Connective Associates LLC except where otherwise noted.

Monday, September 22, 2008

Structural equivalence: social bookmarking on a corporate intranet

Last week Laurie Damianos of MITRE presented to the Boston KM Forum, sharing her experience implementing a social bookmarking system within the enterprise.

For newbies, I often describe social bookmarking as similar to Amazon.com in its ability to track both people who read the same "books" and "books" that share common audiences--whether those "books" are literal or metaphorical. For the mathematically curious, structural equivalence is the underlying principle. Also, here's an introduction to social bookmarking I wrote a while back. Bill Ives has written a few times about applying social bookmarking within the enterprise, including specific references to MITRE's and IBM's experiences.
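
To make the Amazon analogy concrete, here is a minimal sketch in Python with made-up bookmarkers and tags (none of this is MITRE's or delicious's actual code): two tags count as "related" to the extent that the same people use them, which is structural equivalence in miniature.

```python
# A minimal sketch with made-up data: "related tags" via structural equivalence.
# Two tags are related to the extent that the same people use them, whether or
# not those people have any direct connection to each other.

from itertools import combinations

# Hypothetical bookmarkers and the tags they apply
tags_by_user = {
    "ana":   {"sna", "networks", "visualization"},
    "bob":   {"sna", "networks", "km"},
    "carol": {"sna", "km", "bookmarking"},
    "dave":  {"visualization", "python"},
}

# Invert the data: for each tag, the set of people who use it
users_by_tag = {}
for user, tags in tags_by_user.items():
    for tag in tags:
        users_by_tag.setdefault(tag, set()).add(user)

def jaccard(a, b):
    """Overlap of two user sets: 1.0 means identical audiences."""
    return len(a & b) / len(a | b)

# Rank tag pairs by how similar their audiences are
related = sorted(
    ((jaccard(users_by_tag[t1], users_by_tag[t2]), t1, t2)
     for t1, t2 in combinations(users_by_tag, 2)),
    reverse=True,
)
for score, t1, t2 in related[:5]:
    print(f"{t1} ~ {t2}: {score:.2f}")
```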

Laurie's presentation was great and left me feeling more excited than ever about business applications of social bookmarking. But I also left feeling puzzled by the response I got to one of my (many) questions. One way MITRE manages its in-house social bookmark system is by deleting bookmarks created by people who have since left the company. When I asked if there had been any debate within MITRE about deleting this information, I got two responses from the group: (1) Bookmarks are deleted, but the content (referenced by the former bookmarks) remains; and (2) Without the context of an owner, what good is a bookmark?

These two assertions strike me as odd, especially coming from a group that aims to solve the "lost knowledge" problem (e.g., Dave DeLong).

Deleting bookmarks of ex-employees seems to me on a par with burning the bibliographies of articles whose authors are dead. After all, the articles and their references still exist. Furthermore, the authors are no longer around to provide context to their bibliographies. So why don't we save library shelf space and rip out all those bibliographies? Anyone who has ever done research can answer that question.

If bibliography-burning seems extreme, here's a milder example much closer to the MITRE reality: Amazon.com could save tons of disk space if it deleted the purchase records of people who haven't bought anything for the past year (i.e., those who have "left Amazon"). I wonder what the managers of Amazon would say to someone who suggested this strategy and argued that (1) the products purchased are still listed, and (2) the purchasers have left, so why bother to keep those records?

As pioneers of collaborative filtering, managers at Amazon would probably recognize the purchase records of the departed as a valuable resource. Acquiring those records in the first place is one of the biggest competitive advantages a service like Amazon can achieve--commonly known as surmounting the "cold start problem."
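
To see why, here is a toy sketch with invented purchase histories (not Amazon's algorithm, just simple co-purchase counting): compare the "people who bought X also bought Y" signal with and without the records of customers who have moved on.

```python
# Toy sketch (invented data): item-item recommendations from co-purchase counts,
# computed with and without the purchase histories of "departed" customers.

from collections import Counter
from itertools import combinations

purchases = {                     # customer -> items bought
    "alice": {"book_a", "book_b", "book_c"},
    "bruno": {"book_a", "book_c"},
    "chloe": {"book_b", "book_c", "book_d"},
}
departed = {"alice", "bruno"}     # hypothetical: no purchases in the past year

def co_purchase_counts(histories):
    """Count how often each pair of items appears in the same customer's history."""
    counts = Counter()
    for items in histories.values():
        for pair in combinations(sorted(items), 2):
            counts[pair] += 1
    return counts

everyone = co_purchase_counts(purchases)
active_only = co_purchase_counts(
    {c: items for c, items in purchases.items() if c not in departed}
)

print("with departed customers:", everyone.most_common(3))
print("active customers only:  ", active_only.most_common(3))
# Dropping the departed customers' records throws away most of the
# "people who bought X also bought Y" signal -- a self-inflicted cold start.
```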

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License and is copyrighted (c) 2008 by Connective Associates LLC except where otherwise noted.

Friday, September 19, 2008

Network Centrality: Rob Cross Braintrust Keynote and Density

As an example of network-cluster-driven-behavior, last time I suggested a simple way to stereotype the work of Rob Cross. The first row of the table below, from his "Braintrust Keynote" presentation, was my Exhibit A:
The other rows of the above table deserve comment as well. Let's focus today on the third row, Centrality, with apologies to those who thought that my recent series on network centrality was finished.

In all my posts on centrality, I never actually described a mathematical formula for calculating it. There are quite a few reasonable ways to define centrality. See this post for links to a few of them. We see above that Cross's Braintrust Keynote describes centrality as the "average # of relationships per person." Unfortunately, this notion of centrality has nothing at all to do with what other people mean when they say "centrality."
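
For readers who want something concrete, here is a quick sketch using the networkx library (my choice of tool, not anything from the Braintrust Keynote) of a few standard node-level definitions applied to a toy network.

```python
# A few standard node-level centrality measures on a toy network, via networkx.
# The graph and the choice of library are mine, not anything from the keynote.
import networkx as nx

G = nx.Graph([("a", "b"), ("b", "c"), ("c", "d"), ("b", "d"), ("d", "e")])

print("degree:     ", nx.degree_centrality(G))       # share of others each node touches
print("closeness:  ", nx.closeness_centrality(G))    # inverse of average distance to others
print("betweenness:", nx.betweenness_centrality(G))  # share of shortest paths through a node
```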

First, a preliminary clarification: "Centrality" is most commonly used to describe a single node in a network, but it is also used to describe a global property of an entire network (much like "centralization" in the bottom row of the Braintrust Keynote table above). So we should be clear that "average # of relationships per person" is a global property of an entire network.

With that in mind, observe the following two networks that have exactly the same number of nodes, exactly the same number of edges, and hence exactly the same value of "centrality" or "average # of relationships per person":
I don't think too many people would describe the above two networks as having equal centrality, despite the Braintrust Keynote assertion.
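
If you want to check the arithmetic yourself, here is a sketch along the same lines (again with networkx, and with a star and a path standing in for the two networks pictured): identical node and edge counts, identical "average # of relationships per person," and very different degree centralization.

```python
# Two networks with the same number of nodes and edges, hence the same
# "average # of relationships per person", but very different centralization.
import networkx as nx

star = nx.star_graph(7)    # 8 nodes, 7 edges: one hub tied to everyone else
path = nx.path_graph(8)    # 8 nodes, 7 edges: a simple chain

def avg_degree(G):
    return 2 * G.number_of_edges() / G.number_of_nodes()

def degree_centralization(G):
    """Freeman's graph-level centralization, based on degree."""
    n = G.number_of_nodes()
    degrees = [d for _, d in G.degree()]
    max_d = max(degrees)
    return sum(max_d - d for d in degrees) / ((n - 1) * (n - 2))

for name, G in [("star", star), ("path", path)]:
    print(f"{name}: avg degree = {avg_degree(G):.2f}, "
          f"centralization = {degree_centralization(G):.2f}")
# Both report an average degree of 1.75; centralization is 1.00 for the star
# and about 0.05 for the path.
```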

It's a shame to equate "centrality" with "average # of relationships per person." They are two of my favorite network metrics. I have devoted enough recent bandwidth to centrality to make clear my affinity for that metric. Soon, I will explain why I like "average # of relationships per person" as an alternative to density (top row of the Braintrust Keynote table) that is much less susceptible to the network size bias noted by Kathleen Carley.

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License and is copyrighted (c) 2008 by Connective Associates LLC except where otherwise noted.

Thursday, September 11, 2008

Network Clustering: Guide to Stereotyping Rob Cross and Kathleen Carley

Recently I mentioned how network clustering on the WWW indicates that Rob Cross and Kathleen Carley each have their own close-knit camps that co-dominate the world of "organizational network analysis." Before that, I shared Ron Burt's point that such close-knit camps are known not only for amazing productivity but also for stereotyping outsiders.

I am outside both the Cross and Carley camps, but I enjoy stereotyping as much as anyone, so today I provide convenient superficial labels with which my readers can simplify the contributions of these two notable network leaders.

Guide to stereotyping Rob Cross and Kathleen Carley:
  1. Rob Cross provides stories for business
  2. Kathleen Carley provides computer models for the military
Wasn't that easy? Now let's look at one example of each stereotype.

(1) The recent research of the Network Roundtable features Cross's "Braintrust Keynote Presentation." Here is his third slide:
Note the simple and compelling story in the top row of the table: network density of less than 20% within and across departments indicates little collaboration. If you read the actual presentation, you'll see that the "target density" is only 9.4%; because the current density is less than half of that, the target is a healthy step up towards 20%. I will skip the other rows of the table for now.

(2) Kathleen Carley's camp responds to the above story with the following article:
As far as stories go, this article sucks. But look, it is classified under "statistical simulation," because the researchers use computer programs not only to analyze networks, but also to create the very networks that they study (no pesky data collection necessary).

For those whose eyes are glazing over, let me summarize the computer model punchline with a picture. The following three networks all have exactly the same density, 20%, so according to Cross each of them has exactly the minimum recommended allowance of connectivity to indicate collaboration:
As you can see, density of 20% means different things depending on how many nodes are in the network.
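
A few lines of arithmetic make the point, using the usual definition of density as the fraction of possible ties that actually exist (density = 2E / (n(n-1)), so average degree = density * (n-1)).

```python
# What a fixed density of 20% implies as the network grows.
# density = 2E / (n * (n - 1)), so average degree = 2E / n = density * (n - 1).
density = 0.20
for n in (10, 25, 100, 500):
    avg_degree = density * (n - 1)
    print(f"{n:>4} people at {density:.0%} density -> "
          f"about {avg_degree:5.1f} relationships per person")
# 10 people: ~1.8 ties each; 500 people: ~100 ties each. The same "20%"
# describes radically different amounts of collaboration.
```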

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License and is copyrighted (c) 2008 by Connective Associates LLC except where otherwise noted.

Tuesday, September 09, 2008

Read email or smoke pot---The choice is yours

While playing hooky from Rob Cross's school of networks, I am free to indulge in all kinds of reckless neuron-destroying behavior. One option is attending to email, which is even better than pot-smoking at reducing IQ.

Chances are you know someone with an email problem. Give them the gift of 5 additional IQ points by inviting them to take this survey, created by Peggy Kuo at the University of New South Wales, Australia:

Email Addiction in the Workplace.

The aim of this study is to determine if Email Addiction exists in the workplace; if so, what factors contribute to it and how it can be measured or determined. In addition, we also aim to determine the impacts it has on productivity in the workplace.

If you decide to participate, you will be asked to complete an online survey. It is envisaged that the survey will take between 5 and 10 minutes to complete. There are no known or foreseeable risks associated with the survey.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License and is copyrighted (c) 2008 by Connective Associates LLC except where otherwise noted.

Friday, September 05, 2008

Network Clustering: Rob Cross and Kathleen Carley

Next Monday, Sept 8, begins the 2-day Network Roundtable Fall Conference. Rob Cross at UVA has led the Network Roundtable from its inception. He and his colleagues have quite an agenda planned for their time in DC.

My regular readers with sharp eyes may have noticed Rob Cross in a recent post of mine. That post introduced network clustering with an example --- a WWW clustering analysis of "organizational network analysis" computed by Grokker:

One of my favorite metaphors for clustering analysis is the table of contents. It is useful for seeing the big picture, all-inclusively, broken down into sub-categories. In an organizational network setting, a natural application would be identifying communities of practice (including those that don't yet recognize themselves as such).
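
For the technically inclined, here is a small sketch of that idea using networkx's modularity-based community detection (my stand-in, since I have no idea what algorithm Grokker actually runs) on an invented network of two tight-knit camps joined by a single bridge.

```python
# Sketch: clustering a small network into "chapters" (communities), using
# networkx's greedy modularity method as a stand-in for Grokker's algorithm.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical co-citation ties among pages about organizational network analysis
G = nx.Graph([
    ("cross_1", "cross_2"), ("cross_2", "cross_3"), ("cross_1", "cross_3"),
    ("carley_1", "carley_2"), ("carley_2", "carley_3"), ("carley_1", "carley_3"),
    ("cross_3", "carley_1"),            # a lone bridge between the two camps
])

for i, chapter in enumerate(greedy_modularity_communities(G), start=1):
    print(f"chapter {i}: {sorted(chapter)}")
```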

Continuing with the book metaphor, we can see that the WWW authors of organizational network analysis have devoted "chapters" to these topics:
  1. Social networks
  2. Organizational systems
  3. Public health
  4. Information management
  5. Knowledge
  6. Tools
  7. Rob Cross
  8. Kathleen M Carley
  9. Other
Most of these "chapters" are based on fields or methods of work. Two "chapters" stand out for being based on individual people.

Another way to view these "book chapters" is as "closed networks" (relatively speaking), as I described in my last post. I refer my readers again to that post, this time keeping Rob Cross and Kathleen Carley in mind. It's fun to speculate how the Cross and Carley camps employ stereotypes to describe their counterparts.

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License and is copyrighted (c) 2007 by Connective Associates LLC except where otherwise noted.