University administrations and funding agencies like to rank academics using numbers that can be looked up on the internet. When it comes to research publications, there are two main themes, citations and impact factors. I recently learned that these do not seem to be correlated.
What is a citation? One of your papers receives a citation whenever it is referenced by another paper. Several companies track how many citations each of your papers has, so anyone can look up how many papers you have published and how often each has been cited, and compute various metrics from those counts. The implication is that good papers attract many citations and bad papers very few, so you can judge the quality of someone's work from their citation counts. Of course, publication and citation patterns vary across subjects, as the following table shows.
This table is from the 1998-2008 Thomson Reuters Essential Science Indicators database. Here's another table:
A second number is the impact factor. The impact factor is a function of a journal and a window of years. For example, you could calculate the 2011 impact factor for the Journal of Craptology over the window 2006-2010: it is the average number of citations received in 2011 by the papers published in that journal during 2006-2010. Most impact factors use only a 2-year window, which is unsuitable for some subjects, such as mathematics, where citations accumulate slowly. The implication is that good journals have high impact factors and bad journals have low ones.
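The calculation above is just an average, and a minimal sketch makes that explicit. The citation and paper counts below are invented for illustration; the real numbers for any journal would come from a citation database.

```python
# A sketch of the impact-factor calculation described above.
# All numbers are hypothetical.

def impact_factor(citations_in_target_year, papers_in_window):
    """Average citations received in the target year by the papers
    published in the journal during the window of years."""
    if papers_in_window == 0:
        raise ValueError("no papers published in the window")
    return citations_in_target_year / papers_in_window

# Hypothetical figures for the Journal of Craptology:
# 120 papers published in 2006-2010, which together received
# 54 citations during 2011.
papers_2006_2010 = 120
citations_in_2011 = 54

print(impact_factor(citations_in_2011, papers_2006_2010))  # 0.45
```

Note that a single heavily cited paper can raise this average for every other paper in the journal, which is one reason the average says little about any individual paper.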
Let's consider these two metrics together. If both work as advertised, there should be a relationship between them: journals with high impact factors should publish papers with many citations, and journals with low impact factors should publish papers with few. This can be calculated, so we can check whether it is true.
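To see why an average can mislead here, consider a toy comparison. The journal names and citation counts below are entirely made up; the point is only that a high mean (the impact-factor proxy) can coexist with mostly low per-paper counts.

```python
# A sketch, with invented data, of the comparison described above:
# does a journal's average citation count reflect its typical paper?
from statistics import mean

# Hypothetical per-paper citation counts for two imaginary journals.
papers = {
    "Journal A": [0, 1, 2, 150, 0, 3],  # one blockbuster paper
    "Journal B": [4, 5, 6, 5, 4, 6],    # steady, modest citations
}

for journal, cites in papers.items():
    avg = mean(cites)  # a crude impact-factor proxy
    print(f"{journal}: mean {avg:.1f}, min {min(cites)}, max {max(cites)}")
```

Journal A's mean (26.0) dwarfs Journal B's (5.0), yet five of Journal A's six papers have fewer citations than any paper in Journal B. A skewed distribution like this is exactly how a high impact factor can fail to predict the citations of an individual paper.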
It turns out to be false. I read this here and here. That is, there appears to be no connection between a journal's impact factor and the citation counts of the individual papers in it: journals with high impact factors contain some papers with many citations and others with very few, and the same goes for journals with low impact factors.
I’ll do my own comparison soon. For now let me refer to a SIAM article which states “It has long been noted that what the impact factor measures is not well correlated with the quality of a journal, and even much less with the scientific quality of the papers appearing in it or of the authors of those papers. In our field, the 2008 IMU–ICIAM–IMS report Citation Statistics made that case eloquently. Less emphasized has been that these metrics are open to gaming, and are in fact being gamed;…”
Here's a recent blog post suggesting that ecology papers receive fewer citations as the number of equations they contain increases.
I don't have a problem with an individual academic stating on their CV that a paper has N citations and M downloads. Academics use whatever they can when applying for jobs, grants, etc. These numbers do indicate something, although they should be used very carefully. Impact factors should be used with extreme caution – see the 2008 Citation Statistics report for a good example of why.
I’ll be up for promotion soon, so if you want to do a deal – I’ll cite you and you’ll cite me – just get in touch… and if anyone has an automated script that will automatically download my online papers, let’s talk.