A blizzard of holiday celebrations has passed since then, so to recap: in my Working Wikily series of posts, I spoke to the healthy priorities of those who do not write and publish on wikis but merely use information shared by others. This seemed to provoke violent agreement.
During the holiday season, I thought a great deal about those who do write and publish. The contemplation was entirely self-serving: Claire Reinelt and I spent those months slavishly editing our paper, "SNA and Evaluation of Leadership Networks."
We are optimistic that the result will be accepted for publication in Leadership Quarterly (LQ), and hopeful that LQ will allow us to share the manuscript before its estimated publication date in 2010. If and when LQ does, you'll see it here first.
Most germane to this post, Claire and I are incredibly grateful to the anonymous reviewers at LQ. Their pointed criticisms and constructive suggestions enabled us to improve our originally submitted draft into something dramatically better. The extra time also gave Claire a chance to convince me to read Skye Bender-deMoll's overview of SNA. Again, I am grateful.
In my networked world, I very rarely encounter editorial demands as stringent as those imposed by an old-school academic peer-reviewed printed-on-paper journal. What an invaluable experience this has been.
More often in a networked world, editorial demands are subconsciously self-imposed. Every day, it gets easier to avoid those with a different point of view and simply google our way to information that confirms what we already believed. That, at least, is what is suggested by this recent press release from the NSF, which I originally commented about here. Of course, the NSF only suggests that this phenomenon applies to the behavior of scientists. Perhaps we non-scientists are more dedicated to the tireless pursuit of truth.
Certainly I credit Skye Bender-deMoll with the scientific pursuit of truth. In this May 2008 paper, Skye presents the network Tree of Knowledge; then surveys the Babel of confounded terminology and lack of synthesis that characterizes the field; and then offers some editorial advice that is worth taking. For example:
"Although many people are advocating that network techniques will help a great deal with evaluation tasks, there have not been any large scale systematic studies comparing the various pilot projects. Most projects have been fairly small, both in sizes of networks and numbers of participants.
"The majority of projects appear to be doing more diagnosis than assessment. This may be partially because in most cases there is no “known standard” for comparing assessments. Also, researchers tend to be cautious because the data collection is not rigorous or comprehensive enough to be safely used for evaluation at this time.
"Although a few of the academic studies show high methodological maturity, much of the work still seems quite exploratory. In several papers the insight appears to come more from the in-depth interviews and data collection process, with the network analysis component serving as a parallel approach."

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License and is copyrighted (c) 2009 by Connective Associates LLC except where otherwise noted.