Hadas Shema muses on the importance, or lack of importance, of certain new measurements — measurements of the supposed “impact” of a published study. She writes, in Scientific American’s Information Culture blog:
When in trouble or in doubt, invent new words. We have bibliometrics and scientometrics from the Age of Print. Now they are joined by informetrics, cybermetrics, webometrics and altmetrics… Psychologically speaking, there’s something exciting about alternative metrics. You can watch the metrics go up daily, an almost instant gratification, while citation-based gratification can take years. On the other hand, that’s a lot of pressure. Is my article being covered in blogs, tweeted, bookmarked? And what does it even mean? …
Not all alternative metrics sources were born equal. I’m very biased here, but I don’t think one can learn a lot from the number of tweets. An article can be tweeted because it has a catchy title or won an Ig-Nobel prize, but that’s not going to tell us much about its scholarly impact. Now that I think of it, winning an Ig-Nobel can also earn you blog coverage, so maybe it’s not a good example, but in general, I think that genuine (not promotional/spam) blog coverage says more about an article’s future impact than tweets do. Twitter is about the dissemination of news; it does not require deep thought and consideration.
Shema’s thoughts here are in harmony with those of our old, departed friend Jerry Lettvin, who loved to say, about many things in academia: “If you can measure it, it must be important!”