Saturday, March 9, 2013

Quantifying Quality for the Individual Scholar

When I went on the job market last year I had an unsettling this-is-how-things-work realization: beyond writing a strong teaching statement, I had no way to "prove" to hiring committees that I was a thoughtful and effective teacher. I could "prove" myself as a productive scholar to some extent by listing conference presentations and publications on my C.V., and I could quantify my service to the profession and my department by listing my contributions on the same document. I realized, for better or worse, that I needed to make my teaching more visible to those who would read my job documents, because that's all they would know about me. One way to do this was to try to win a teaching award, or some other commendation that I could include in my C.V.

[cool collage by Leo Reynolds available at Flickr Creative Commons]


This need to quantify made me feel a bit sleazy, like I was only teaching to win an award to get a job. That wasn't at all the reality of my situation, but the way the profession works forced me at least to add that dimension to my thinking about becoming a serious member of the community.

The bottom line here is visibility. Knowledge workers are often required to render predominantly intangible aspects of their work tangible, or at least visible, to others both inside and outside their fields.

Back in 2005 a physicist named Jorge E. Hirsch recognized a problem similar to the one I've described above when he pointed out in an article in Proceedings of the National Academy of Sciences of the United States of America that, short of winning a Nobel Prize or some other highly-visible award, it is very difficult for a scientist to "quantify the cumulative impact and relevance" of her/his "research output" (see the first paragraph of the article linked above).

So, he proposed an index that would quantify both the impact and relevance of scholarship by tracking the number of articles a scholar publishes and the number of times the scholar's articles have been cited in other articles. Or, rendered in Hirsch's more technical language (the following comes directly from the article linked above as well):

"A scientist has index h if h of his or her Np papers have at least h citations each and the other (Np − h) papers have ≤ h citations each."

In terms of practical application, Alan Marnett explains, "So we can ask ourselves, 'Have I published one paper that's been cited at least once?' If so, we've got an H-index of one and we can move on to the next question, 'Have I published two papers that have each been cited at least twice?' If so, our score is 2 and we can continue to repeat this line of questioning until we can't answer 'yes' anymore."
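Marnett's line of questioning translates directly into a small calculation: sort the citation counts from highest to lowest, then find the largest rank h at which the h-th paper still has at least h citations. Here is a minimal sketch in Python (my own illustration, not from Hirsch's or Marnett's articles):

```python
def h_index(citations):
    """Compute the h-index from a list of per-paper citation counts.

    A scholar has index h if h of their papers have at least h
    citations each (Hirsch, 2005).
    """
    # Rank papers from most-cited to least-cited.
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        # Keep asking "do my top `rank` papers each have >= rank citations?"
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Five papers with 10, 8, 5, 4, and 3 citations:
# the top 4 papers each have at least 4 citations, but the
# top 5 do not each have 5, so the h-index is 4.
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

Note that the h-index is insensitive to a single blockbuster paper: one article with 500 citations and nothing else still yields an h-index of 1.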

What's cool about the h-index (as it has come to be called) is that it depends entirely on the impact a researcher's work has, and not on the perceived prestige of a particular journal. This is not to suggest that some journals are not prestigious for good reasons. In fact, in the next post I'll address how this quantification manifests itself in terms of journals and how we can use this information to become better members of the scholarly community despite what may seem a necessary distastefulness inherent in this whole process.
