Should I vote in the QS World Rankings?

I was invited recently by email to vote in the QS world rankings of universities. Part of the QS ranking (40%) goes to something called “academic reputation”, and I suppose my vote/opinion counts towards that. My main problem with rankings in general is that people sometimes use them for things they are not supposed to be used for. For example, those in power might use the rankings to make policy decisions. Further, it is never really disclosed how the rankings are arrived at. Yes, there are some vague explanations, but it is never possible to replicate the results. Given all that, should I vote?

There have been many posts recently about world rankings. One describes some impacts of the world rankings on higher education and its quality and regulation. Another is called “the ultimate absurdity of college rankings”. A third, by Richard Holmes, gives some reasons why rankings are unreliable, especially the subject rankings.

Anyway, to be more specific about my own dilemma, should I give my opinion about other universities? I have a problem because I don’t really know enough. I have to name up to 30 (foreign) universities that produce the best research in the natural sciences, as well as 10 in my own country (Ireland). Sure, we can all name Caltech and so on at the top. After the top ten or so, I run out of names.  So what do I do? Who should I name? How do I decide if one university is better than another?  Can I compare, for example, Kyoto University with the University of New South Wales? To be honest, I know they are both very good with top class reputations, but I couldn’t rank one above the other. (Kyoto was 35 and NSW was 52 in the overall QS rankings in 2012, by the way.) To be honest, I probably wouldn’t name either of them in my 30, but that’s just because I don’t know them and I don’t know anyone there. The same goes for hundreds of other universities.

So what sort of a picture does my response, and the responses of other academics like me, actually give? The number of votes a university gets is probably proportional to how many foreigners know someone who works there. The top ten will remain the top ten, because everyone automatically assumes they are great and everyone has heard of them. But when you move down the list to around number 50 and below, how precise can we be? Some volatility is to be expected, and indeed happens. (Why volatility happens is not always clear, as discussed here.)

The data for “reputation” is gathered in such a random fashion that it cannot be meaningful.

The president of University College Cork wrote an email to all staff in May 2011, and got into hot water. The full text is here. QS now says that they use “sophisticated anomaly detection algorithms”, among other things, to stop academics asking their friends to vote for them.

It was recently exposed that the QS rankings used a site that pays people to fill out surveys. There is an excellent article about this here by Elizabeth Redden. Indeed, I was offered 300 pounds in credits to complete my survey. After I did so, out of curiosity, I tried to buy QS reports with my credits. The website didn’t work: I got an error message, and the credits were worthless.

I wonder if anyone has done a survey to see how many students actually use rankings when deciding on a university.

Impact Factor – what it should and shouldn’t be used for

There was a good editorial in Nature Materials that clarified a few things for me about the impact factor. They made the point that the impact factor of a journal, taken together with the median number of citations per paper, does tell you something about the journal. It does not tell you anything about an individual person, or an individual paper. It should not be used for grant-giving, tenure, appointment or promotion.

There was another editorial in Nature on this topic in 2005.

And again in 2003.  This one comments on the fact that most people just copy references from another paper. I have definitely observed this. The rich get richer, and the poor get poorer, when it comes to citations. You have to get a paper in the loop, and then sit back and watch the citations pile up.

Unlike impact factor, citations do tell you something about an individual paper, after a suitable period of time has elapsed. Some people say that the only way to tell if a paper is a good paper is to read it yourself. I disagree. First of all, that doesn’t work if the field is not my field and I am not qualified to judge. Secondly, my opinion is just one person’s opinion, whereas if I look at the number of citations, I am getting the opinion of all the other researchers in that field in the whole world (in some sense). It would of course be better to pick up the phone and ask all the other people in that field individually what their opinion is, but that is not practical. I think the number of citations is a compromise: it’s not perfect, because there are different reasons a paper might be cited, but it’s better than nothing.

There’s a related blog here, about the REF in the UK. The author makes the point that averaging the h-index over a department seems to be a reasonable measure. Another thing I learned here is that the impact factor will not be used in the 2014 REF.
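
To make the idea concrete, here is a minimal sketch of a departmental average h-index. The researcher names and citation counts are made up for illustration; only the idea of averaging individual h-indices comes from the post mentioned above.

```python
def h_index(citations):
    # h is the largest number such that h papers each have at least h citations
    cites = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cites, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Hypothetical department: each researcher's per-paper citation counts
department = {
    "Researcher A": [25, 8, 5, 3, 3, 0],  # h = 3
    "Researcher B": [60, 12, 11, 9, 2],   # h = 4
    "Researcher C": [4, 4, 1],            # h = 2
}

average_h = sum(h_index(c) for c in department.values()) / len(department)
print(average_h)  # 3.0
```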

One of the comments makes the interesting point that once we start using a metric to make our decisions, this metric ceases to have any value because people will start playing games to manipulate the metric. One way around this is to keep changing the metric.

Taiwan Research Rankings

Through Ninth Level Ireland I saw a post by Richard Holmes on the Taiwan rankings. These are university rankings just for research, and just for science and engineering.

Here is how they compute their rankings, which are based on the Thomson Reuters (formerly ISI) databases:

  • 25% to research productivity (number of articles over the last 11 years, number of articles in the current year);
  • 35% to research impact (number of citations over the last 11 years, number of citations in the current year, average number of citations over the last 11 years);
  • 40% to research excellence (h-index over the last 2 years, number of highly cited papers, number of articles in the current year in highly cited journals).

Some of the measures seem to be *absolute* numbers, like the total number of articles over the last 11 years, and not relative numbers. This favours larger universities. Also, arts and humanities are not counted.
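
Out of curiosity, here is a rough sketch of how such a weighted composite could be put together. The 25/35/40 pillar weights come from the list above, but the indicator values are invented, and the simple within-pillar averaging is my own assumption; the real methodology (normalization across universities, etc.) is not fully disclosed.

```python
# Only the pillar weights come from the published weighting scheme above;
# the indicator scores (assumed to be normalized 0-100 values) are hypothetical.
pillar_weights = {"productivity": 0.25, "impact": 0.35, "excellence": 0.40}

indicators = {
    "productivity": [62.0, 55.0],        # articles (last 11 yrs), articles (current yr)
    "impact":       [48.0, 51.0, 40.0],  # citations (11 yrs), citations (current yr), avg citations
    "excellence":   [35.0, 30.0, 44.0],  # h-index (2 yrs), highly cited papers, articles in top journals
}

composite = sum(
    pillar_weights[p] * (sum(indicators[p]) / len(indicators[p]))
    for p in pillar_weights
)
print(round(composite, 1))
```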

I looked up the Irish universities:

235 Trinity College Dublin
277 University College Dublin
311 Queen’s University Belfast
398 University College Cork

No others are in the top 500.

I find it interesting that Royal Holloway, University of London comes outside the top 500 here, yet is number 11 in the world for research impact according to the THE world rankings. Why is there such a big difference?

Citation indicator in world rankings

An interesting post by Richard Holmes about the THE university world rankings, and why the citation part of the rankings is not yet reliable. I just noticed that he withdrew the post and replaced it with an apology.

The world top 20 in the THE citation indicator rankings are

1.   Moscow (State) Engineering Physics Institute (MEPhI)
1.   Rice University
3.   University of California Santa Cruz
3.   MIT
5.   Princeton
6.   Caltech
7.   University of California Santa Barbara
7.   Stanford
9.   University of California Berkeley
10.  Harvard
11.  Royal Holloway London
12.  Chicago
13.  Northwestern
14.  Tokyo Metropolitan University
14.  University of Colorado Boulder
16.  University of Washington Seattle
16.  Duke
18.  University of California San Diego
18.  University of Pennsylvania
18.  Cambridge

The list looks slightly odd to me. I would like to know how these numbers are calculated, but I can’t find the information anywhere. The indicator is apparently calculated by Thomson Reuters, who use their large database of citation data to compute it. I would like to calculate it for my own university, UCD, but I don’t know how. In 2011 UCD got 80.5 and in 2012 UCD got 74.9. What happened to us? The formula for calculating the number seems to be a secret, so we can’t replicate the calculation. Just accept it: we got worse between 2011 and 2012.

On a different matter, there is an interesting post by Phil Davis on the relation between impact factors and citations. There is none. [Essentially none, in my opinion.]  I made an earlier post about this, and gave links to other writings.

University Rankings 2

There is an interesting post here about where TCD-UCD would be in the THES university rankings if the two universities merged. To answer this question, you need to know the criteria on which the rankings are based. Currently, TCD is 117 and UCD is 159. Being optimistic, the poster says that the joint university could be as high as 104.

Rankings should not be the reason to merge.  The universities should merge if it makes good academic sense to merge, if the overall education and research would improve. If the new university was better than the sum of the parts, then one assumes it would naturally rise in the rankings as a consequence. Then again, do rankings really measure what they are supposed to measure?

This merger was first proposed by Minister of Education Donogh O’Malley in 1967, by the way. Peter Sutherland in 2010 also proposed the merger, saying the result would be in the top 20. That might take a while.

University Rankings

The ranking of universities is a thorny topic in the academic world.  You can read endless articles and blogs about it.  The release of a new ranking usually makes the news whether the local universities go up or go down – there’s a story there anyway. What does bother me is what these rankings are based on. Sometimes we might go up or down simply because they changed how the ranking is calculated!

Do world university rankings matter? Does anyone make a decision based on these rankings? Do students decide where to study based on rankings? Here’s a Nature article on it. The usual problems with rankings appear – what started out as an entertaining newspaper story ends up being used for things it was not intended to be used for. On March 15 2012 the following appeared in the Irish Times.

The Department of Education has said that ranking systems and league tables rating educational institutions should be interpreted “with caution”.

Responding to the publication of the Times Higher Education World Reputation Rankings today, in which Ireland was not represented in the top 100 universities in the world, the spokesman said differences in criteria between ranking systems can affect the outcomes of such league tables.

“Notwithstanding these reservations, it is recognised that league tables are referenced by international investors, employers and students as a marker of quality across systems and as such they cannot be ignored,” the spokesman said.

Good to see that someone else is aware that different criteria will produce different rankings. I can believe that investors might use rankings, and employers. I would really like to know if students actually use them. I see rankings as a crude measure of something, and one should be aware of exactly what is being measured. Certainly rankings are a big business and a lot of money is at stake. I do have my doubts that the enormous amount of data on which rankings are based is correctly collected and tabulated and normalized. Data collection errors have been made. Where does the data come from anyway? Is some of it provided by the universities?

An expert group even published a paper on best practice in compiling rankings. The European Union launched a project to design a new ranking system, and put it out for tender. I’m not sure what is happening, but the website is here; it says the first results of U-Multirank will be out in 2013.

Anyway, what criteria are rankings based on? It depends on which ranking you look at. There are lots of different rankings, and more are appearing all the time. Currently there are probably two major rankings in the world: the QS ranking, which comes from Quacquarelli Symonds, and the Times Higher Education Supplement (THES) ranking, which comes from The Times and Thomson Reuters. These two rankings used to be the same, the THES-QS ranking, until they split in 2009. THES now use Thomson Reuters for their citation numbers, and QS use Scopus.

The THES ranking makes it clear what their rankings are based on – which is a good thing.  The same cannot be said for the QS rankings. I decided to delve a little deeper and have a look at my university (University College Dublin) under the most recent (2011) THES ranking.

In the 2011 THES rankings UCD was ranked 159 in the world, with a score of 45.9 (joint with two Dutch universities). The number 1 ranked university (California Institute of Technology) had a score of 94.8. The first question is: what do these numbers even mean? The score of 45.9 is a weighted average of scores in five categories. The five categories do not count equally: some have a higher weighting than others. Here are the scores of UCD in the five categories.

Teaching: 25.2 (30% of total)
International Outlook: 83.2 (7.5% of total)
Industry Income: 32.3 (2.5% of total)
Research: 23.7 (30% of total)
Citations: 80.5 (30% of total)

Are you any the wiser? The second question is: what do these category scores mean? Before going into that, let’s check UCD’s overall score. According to the weightings given, we should compute

(0.3)(25.2)+(0.075)(83.2)+(0.025)(32.3)+(0.3)(23.7)+(0.3)(80.5)

and this turns out to be equal to 45.8675. I checked this myself on my calculator. THES gave UCD a score of 45.9, so it looks like they simply rounded to one decimal place.
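
For anyone who wants to check the arithmetic themselves, here is the same weighted sum written as a few lines of Python. The weights and category scores are just the 2011 UCD figures quoted above; nothing else is assumed.

```python
# Category weightings and 2011 UCD category scores as listed above
weights = {
    "Teaching": 0.30,
    "International Outlook": 0.075,
    "Industry Income": 0.025,
    "Research": 0.30,
    "Citations": 0.30,
}
scores = {
    "Teaching": 25.2,
    "International Outlook": 83.2,
    "Industry Income": 32.3,
    "Research": 23.7,
    "Citations": 80.5,
}

overall = sum(weights[c] * scores[c] for c in weights)
print(overall)  # 45.8675, which THES reports as 45.9
```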

The THES website gives the scores within each category but doesn’t give the rankings within each category, by the way.

The next question is: how are the scores for each category calculated? THES provides a diagram on their website that breaks each category down into subcategories.

Well, OK. So we must then ask: how do they calculate the score for each subcategory? There is an article on the website that outlines this breakdown but doesn’t give the full details. I suppose the full details are not made public.

The three main categories are teaching, research and citations. They have 30% weighting each. Let’s look at citations, which was worth 32.5% last year by the way.  UCD did well here, with a score of 80.5. The outline doesn’t say how the score is calculated, but it does say the following:

The data are drawn from the 12,000 academic journals indexed by Thomson Reuters’ Web of Science database and include all indexed journals published in the five years between 2005 and 2009. Citations to these papers made in the six years from 2005 to 2010 are collected – increasing the range by an additional year compared with 2010-11, thus improving the stability of the results and decreasing the impact of exceptionally highly cited papers on institutional scores.

Again, this does not say how the number is arrived at. Anyway, it is clear that citations are important when it comes to this ranking. I’ll write another post about citations.

I also note that 18 of the 30 percentage points for research come from a “reputational survey”. What does that mean? And 15 of the 30 points for teaching come from the same source. This refers to the Academic Reputation Survey that THES conduct. As far as I know this involves sending an email to ALL university professors around the world and asking them to fill in a survey giving their opinions about other universities. I think this is highly unscientific. This seems to me to be the same as the Academic Peer Review for the QS rankings, which I will discuss next.

Let’s move on to the QS rankings. UCD was ranked 134 in the world in the 2011 QS rankings, with a score of 56.9. Again we should ask: what does this number mean and how was it calculated? I have not been able to find this information on their website directly; however, I found it using a search engine. Here are the categories and their weightings:

Academic Peer Review 40%
Global Employer Review 10%
Citations Per Faculty 20%
International Student Ratio 5%
International Faculty Ratio 5%
Faculty Student Ratio 20%

I could not find the breakdown of scores in each category for UCD (or any university).

Clearly the “academic peer review” is a huge factor, counting for 40%. What does this mean? As far as I know, this score is found by sending an email to ALL university professors around the world and asking them to fill in a survey giving their opinions about other universities. I think this is highly unscientific. Philip Altbach of Boston College wrote, “Whether the QS rankings should be taken seriously by the higher education community is questionable.” There is an interview in today’s newspaper with the TCD provost where he is asked why TCD dropped to number 65 in the world rankings. This refers to the QS rankings; TCD is ranked 117 in the 2011 THES ranking, with a score of 51.1.

Both THES and QS say that they normalize scores using z-scores (how many sigmas you are from mu). They have only been doing this since 2007. THES use Thomson Reuters for their citation numbers, and QS use Scopus.
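
In other words, as I understand it, each raw indicator is rescaled against the mean and standard deviation of all the ranked universities. Here is a minimal sketch of that idea, with made-up citation figures:

```python
import statistics

def z_scores(values):
    # Express each value as standard deviations above/below the mean
    mu = statistics.mean(values)
    sigma = statistics.stdev(values)
    return [(v - mu) / sigma for v in values]

# Hypothetical citations-per-paper figures for five universities
raw = [3.1, 5.4, 2.2, 8.0, 4.6]
print([round(z, 2) for z in z_scores(raw)])
```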

A third widely used ranking system is the Academic Ranking of World Universities (ARWU), also known as the Shanghai rankings. They don’t appear to have made any announcement in 2011. In 2010 they rated UCD in the 301-400 category. Clearly not worth further comment!! As far as I know they only use research criteria, such as citations, funding, number of Nobel prizewinners on the staff, and publications in Nature and Science.

In the USA there is an annual ranking known as the US News and World Report ranking, which has been going since 1983. This is not without controversy; see the Wikipedia article, for example.

The danger of rankings is that universities start to make decisions in order to move up the rankings, instead of making decisions based on what they actually want to do. One might think that these two aims go hand in hand, but they don’t.  Reed College pulled out of the US News and World Report in 1995, and their president said “by far the most important consequence of sitting out the rankings game, however, is the freedom to pursue our own educational philosophy, not that of some newsmagazine.”

Another danger is that governments will make funding decisions for universities based on rankings.

However, rankings are not going to go away. We have to live with them. What we can do is try to ensure they are done properly, and that people know what they measure. Because they don’t measure the most important things.