Saturday, March 23, 2013

Quantifying Research at the Journal Level: The h5-Index

Try this:

1. Go to Google Scholar
2. Click on metrics
3. Click on your preferred scholarly discipline on the left-hand side

Once you get to your chosen field or subfield, you'll find a list of the top journals in that field, ranked from highest to lowest by their respective h5-indexes.


The h5-index works similarly, but not identically, to the individual h-index I discussed in the last post, which you can use to quantify the work of an individual scholar. An individual scholar's h-index is the largest number of articles, h, that have each been cited at least h times. A journal's h5-index applies the same logic to the journal as a whole over the last 5 years: it is the largest number of articles, h, published by the journal in the last 5 years that have each been cited at least h times.

So, if your favorite journal has an h5-index of 10, that means the journal has published 10 articles over the last 5 years that have each been cited at least 10 times. The most-cited of those articles may have been cited 50 times, but the tenth most-cited article will have been cited only 10 times. More articles would qualify if we lowered the threshold to 8, but remember that the h5-index is the largest number h for which the journal has h articles from the last 5 years with at least h citations each.
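To make the arithmetic concrete, here is a minimal Python sketch of the calculation. The citation counts are hypothetical, and the function name h5_index is mine; Google Scholar computes this from its own citation data.

```python
def h5_index(citations):
    """Largest h such that h articles have at least h citations each."""
    counts = sorted(citations, reverse=True)  # most-cited first
    h = 0
    # Keep extending h while the (h+1)-th most-cited article
    # still has at least h+1 citations.
    while h < len(counts) and counts[h] >= h + 1:
        h += 1
    return h

# Hypothetical journal from the example above: the most-cited article
# has 50 citations, the tenth has 10, and the rest trail off.
journal_citations = [50, 41, 33, 28, 22, 19, 15, 13, 11, 10, 7, 5, 2]
print(h5_index(journal_citations))  # -> 10
```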

The Google Scholar metrics thus rank journals in your field according to how much they are being cited. These metrics, then, give you a sense of which journals are having the greatest impact on your field as a whole, on individual scholars' works, and thus on your formation as a knowledge worker.

So far the posts in this series have been purely informative, avoiding interpretation of these metrics as much as possible. Beginning with the next post, however, I want to offer some useful ways of thinking about this data that will be geared towards non-sleazy strategies for making this information work for you.

Saturday, March 9, 2013

Quantifying Quality for the Individual Scholar

When I went on the job market last year I had an unsettling this-is-how-things-work realization: beyond writing a strong teaching statement, I had no way to "prove" to hiring committees that I was a thoughtful and effective teacher. I could "prove" myself as a productive scholar, to some extent, by listing conference presentations and publications on my C.V., and I could quantify my service to the profession and my department by listing my contributions on the same document. I realized, for better or worse, that I needed to make my teaching more visible to those who would read my job documents, because that's all they would know about me. One way to do this was to try to win a teaching award, or some other commendation that I could include on my C.V.

[cool collage by Leo Reynolds available at Flickr Creative Commons]


This need to quantify made me feel a bit sleazy, like I was only teaching to win an award to get a job. That wasn't at all the reality of my situation, but the way the profession works forced me at least to add that dimension to my thinking about becoming a serious member of the community.

The bottom line here is visibility. Knowledge workers are often required to render predominantly intangible aspects of their work tangible, or at least visible, to others both inside and outside their fields.

Back in 2005, a physicist named Jorge E. Hirsch recognized a problem similar to the one I've described above when he pointed out in an article in Proceedings of the National Academy of Sciences of the United States of America that, short of winning a Nobel Prize or some other highly visible award, it is very difficult for a scientist to "quantify the cumulative impact and relevance" of her/his "research output" (see the first paragraph of the article linked above).

So, he proposed an index that would quantify both the impact and relevance of scholarship by tracking the number of articles a scholar publishes and the number of times the scholar's articles have been cited in other articles. Or, rendered in Hirsch's more technical language (the following comes directly from the article linked above as well):

"A scientist has index h if h of his or her Np papers have at least h citations each and the other (Np h) papers have ≤h citations each."

In terms of practical application, Alan Marnett explains, "So we can ask ourselves, 'Have I published one paper that's been cited at least once?' If so, we've got an H-index of one and we can move on to the next question, 'Have I published two papers that have each been cited at least twice?' If so, our score is 2 and we can continue to repeat this line of questioning until we can't answer 'yes' anymore."
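Marnett's question-by-question procedure translates directly into a short loop. What follows is a minimal sketch with made-up citation counts; the function name h_index is my own, not something from Hirsch's or Marnett's articles.

```python
def h_index(citations):
    """Ask Marnett's questions in order: have I published
    h papers that have each been cited at least h times?"""
    h = 0
    while True:
        enough = sum(1 for c in citations if c >= h + 1)
        if enough >= h + 1:
            h += 1  # yes -> ask the next question
        else:
            return h  # no -> the last "yes" is the h-index

# Hypothetical scholar with five papers:
my_citations = [9, 4, 4, 1, 0]
print(h_index(my_citations))  # -> 3: three papers cited at least 3 times
```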

What's cool about the h-index (as it has come to be called) is that it depends entirely on the impact a researcher's work actually has, not on the perceived prestige of a particular journal. This is not to suggest that some journals aren't prestigious for good reasons. In fact, in the next post I'll address how this quantification manifests itself at the journal level and how we can use this information to become better members of the scholarly community, despite the distastefulness that may seem inherent in this whole process.