Big Science

I came across a very interesting article about 'big science' by the geneticist Bill Amos. He makes a few points I agree with, some of which I mentioned in a previous post about putting all our eggs in one basket.

The main point is about how we are moving towards a world where research funding goes to a few large groups, rather than to many small groups. We put all our eggs into a few baskets. One obvious consequence is that a greater percentage of scientists will have no research funding. The detrimental consequences of this are many, such as good scientists leaving the country.

It makes good sense to ask whether this practice of big science leads to good science. What exactly will throwing ever larger amounts of money at a problem solve? Further, does it lead to good training of our PhD students? Are they trained to be innovative thinkers and leaders when they are junior members of a huge team?

Another interesting point refers to value for money. In my experience, small grants to small research groups provide much better value for money than large grants to large groups. In a small group, with a small grant, each penny has to be counted and there is very little wastage.

GPS and Neutrinos

In 2011 some scientists on the OPERA team announced that they had found particles (called neutrinos) that appeared to travel faster than the speed of light. Since we believe that nothing can travel faster than light, this was a bit of a surprise. Most people didn't believe it and assumed there was a mistake. It turned out that, yes indeed, there was a mistake. I only recently read that the mistake involved GPS.

Other scientists tried to replicate the results and were unable to. This suggested that something was wrong with the equipment the OPERA team was using. The team tested all their equipment and finally narrowed the mistake down to two possible sources. This was the scientific method in action. And it worked, which is great.

Neutrinos are expected to travel at almost exactly the speed of light, which is 299,792 km per second. In order to measure speeds this fast we need a very accurate clock, and these scientists used the clocks in GPS satellites. I wrote before about how pervasive GPS has become all over the world, largely because of its clocks. This is just another example. There was a similar post yesterday on the BBC about what would happen if satellites stopped working.

Anyway, it seems that one of the problems was that a certain cable has to be fitted with exactly the correct orientation. If the orientation is changed, a different time is recorded. This seems quite a subtle error, in fairness. The other error is linked to the oscillator used to produce the time-stamps in between the GPS synchronizations.
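To see why such subtle timing errors matter, here is a rough back-of-the-envelope calculation. The figures (a baseline of roughly 730 km and an early arrival of about 60 ns) are the widely reported ones from news coverage of the result, not taken from this post:

```python
# Back-of-the-envelope numbers for the OPERA anomaly.  The baseline
# and the 60 ns figure are the widely reported ones (assumptions
# here, not measurements of mine).
C = 299_792_458.0        # speed of light, m/s
BASELINE_M = 730_000.0   # approximate CERN-to-Gran-Sasso distance, m
EARLY_S = 60e-9          # reported early arrival of the neutrinos, s

light_time = BASELINE_M / C              # time light needs: ~2.435 ms
apparent_excess = EARLY_S / light_time   # fractional excess over c

print(f"light travel time : {light_time * 1e3:.3f} ms")
print(f"apparent (v - c)/c: {apparent_excess:.2e}")
```

The apparent excess works out at roughly 2.5 parts in 100,000, so a timing offset of mere tens of nanoseconds, whether from a cable connection or a drifting oscillator, is exactly the size of the whole effect.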

The whole event caused a lot of tension in the OPERA team and some team members resigned.

Should I vote in the QS World Rankings?

I was invited recently by email to vote in the QS world rankings of universities. Forty per cent of the QS ranking score comes from something called "academic reputation", and I suppose my vote/opinion counts towards that. My main problem with rankings in general is that people sometimes use them for things they were never intended for. For example, those in power might use the rankings to make policy decisions. Further, it is never really disclosed how the rankings are arrived at. Yes, there are some vague explanations, but it is never possible to replicate the results. Given all that, should I vote?

There have been many posts recently about world rankings. One describes some impacts of the world rankings on higher education and its quality and regulation. Another is called the ultimate absurdity of college rankings. A third by Richard Holmes gives some reasons why rankings are unreliable, especially the subject rankings.

Anyway, to be more specific about my own dilemma, should I give my opinion about other universities? I have a problem because I don’t really know enough. I have to name up to 30 (foreign) universities that produce the best research in the natural sciences, as well as 10 in my own country (Ireland). Sure, we can all name Caltech and so on at the top. After the top ten or so, I run out of names.  So what do I do? Who should I name? How do I decide if one university is better than another?  Can I compare, for example, Kyoto University with the University of New South Wales? To be honest, I know they are both very good with top class reputations, but I couldn’t rank one above the other. (Kyoto was 35 and NSW was 52 in the overall QS rankings in 2012, by the way.) To be honest, I probably wouldn’t name either of them in my 30, but that’s just because I don’t know them and I don’t know anyone there. The same goes for hundreds of other universities.

So what sort of a picture does my response, and the responses of other academics like me, actually give? The number of votes a university gets is probably proportional to how many foreigners know someone who works there. The top ten will remain the top ten, because everyone automatically assumes they are great and everyone has heard of them. But when you move down the list to around number 50 and below, how precise can we be? Some volatility is to be expected, and indeed happens. (Why volatility happens is not always clear, as discussed here.)

The data for "reputation" is gathered in such a haphazard fashion that it cannot be meaningful.

The president of University College Cork wrote an email to all staff in May 2011, and got into hot water. The full text is here. QS now says that they use "sophisticated anomaly detection algorithms", among other things, to stop academics asking their friends to vote for them.

It was recently exposed that the QS rankings used a site that pays people to fill out surveys. There is an excellent article about this here by Elizabeth Redden. Indeed, I was offered 300 pounds in credits to complete my survey. After I did so, out of curiosity, I tried to buy QS reports with my credits. The website didn't work: I got an error message, and the credits were worthless.

I wonder if anyone has actually done a survey to see how many students use rankings when deciding on a university.

Percentage of Academic/Non-Academic Staff

Figures were released (via Dáil written answers) for Irish third-level institutions of the percentage of staff that are academic and non-academic.

This is a screenshot from the post.

UCD and TCD are the only institutions where academic staff make up less than 50% of the total.

The average percentage of academic staff across Irish universities is 52.4%. According to an interesting article from Australia, the average across Australian universities is 45%, and in the UK it is 48%. There must be a balance point that maximizes the efficiency of the system. I find it hard to believe that less than 50% academic staff is optimal, but I haven't done any studies.

The Economist posted a humorous 1955 article by Parkinson, which contains Parkinson's Law ("work expands so as to fill the time available for its completion") and a funny yet realistic treatise on the growth of public administrative departments.


Replicating Austerity Results

An interesting story came to light this week when a student tried to replicate the results in a paper. He found that he wasn’t able to.

I wrote something before about the fundamental principle of the scientific method: the results of an experiment should be capable of replication and verification by others. If you can't do that, it's not science. What happened here is that a graduate class was given an assignment: pick a paper and replicate the results. An excellent assignment for graduate students, in my opinion, and a valuable contribution to the scientific community. This particular student was unable to replicate the results in his chosen paper. It turned out that he had found an error. The authors made a mistake.
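A replication exercise of this kind often boils down to recomputing the paper's summary statistics from the published data and comparing. A minimal sketch with invented numbers (nothing here is from the actual paper or its dataset), showing how a spreadsheet-style range error that silently drops some rows from an average changes the answer:

```python
# Replication sketch: recompute a reported average from the raw data.
# All numbers here are invented for illustration; they are NOT from
# the paper discussed above or its dataset.
growth_by_country = {
    "A": 1.2, "B": -0.3, "C": 2.1, "D": 0.8, "E": 1.5,
}

# An honest recomputation over all rows:
full_mean = sum(growth_by_country.values()) / len(growth_by_country)
print(f"mean over all rows : {full_mean:.2f}")

# A spreadsheet-style range error (averaging only the first four
# rows) gives a different answer:
subset = list(growth_by_country.values())[:4]
buggy_mean = sum(subset) / len(subset)
print(f"mean, rows dropped : {buggy_mean:.2f}")
```

With these made-up figures the two means differ (1.06 versus 0.95), which is exactly the kind of discrepancy a careful replication will surface.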

First and foremost, this shows the scientific method and the scientific community at work. When the system works, papers with errors are found out, the errors are corrected, and science as a whole moves on. Collectively we learn something. This is a good example of that.

The authors are 100% responsible for the error. It was a bad one, and it does their reputations no good: other papers by the same authors will now be questioned.

A second point is that published scientific papers are supposed to be checked by reviewers/referees. That's part of the process. Why didn't peer review catch the error in this case? Because the paper wasn't peer reviewed: it was presented at a conference that does not referee its papers.

This student picked a paper that is being quoted a lot by politicians and economists who are in favour of "austerity measures." Apparently it has been used as an argument in favour of austerity. One cannot blame the authors of the paper for that (unless there are things going on behind the scenes that I am not aware of, as has been alleged on reddit, for example). If anyone is going to use the findings of a paper, they are responsible for checking the paper first. If you don't check the results, especially the results of a paper that has not been peer-reviewed, and there is a mistake, you cannot blame the authors for your consequent errors. You are responsible for your own papers.

The paper in question is available here. The research was funded by the U.S. National Science Foundation.

Problem solving versus basic skills

I’ve posted before about the teaching of mathematics, and the question of teaching “problem solving” versus teaching basic mathematical skills, like how to add numbers, add fractions, or solve quadratic equations.

It’s all part of teaching mathematics, so we must teach both, and we must find a balance. On the one hand, students cannot solve interesting problems without having the basic skills needed to solve them. Skills are learnt by practice, by drill, by doing lots of similar exercises. On the other hand, learning skills by themselves is a bit dry and becomes more interesting when students can see a use for them.

Many years ago we used to teach lots of skills, with no problem solving. Now we are in danger of swinging completely the other way. There was a comment on this by mathematician and Fields Medallist Vaughan Jones in a New Zealand newspaper recently.

I believe there is a difference between asking a child “what is 3+2” and asking “if I have three apples and my friend gives me two apples, how many apples do I have?”  The latter is known as a word problem, albeit a simple example. A word problem has to be converted into an equation, which then has to be solved. In this example, the child must realize that the solution to the word problem is the solution to the sum “3+2”. That conversion, from a word problem to an equation, is a crucial part of problem solving. When the equation is found, it must then be solved, using the skills (in this case, addition) previously learnt by drill.

Word problems can be contrived, but there are enough good examples to use. If you haven’t already seen it, there is a great article A Mathematician’s Lament by Paul Lockhart about teaching mathematics and solving problems.


The Link between Research and Education

Last week's announcement of research centres has reinforced the government's policy of concentrating research resources in areas that bring together research and enterprise.

The government has decided that the nation’s research budget should be targeted towards those research areas with the greatest potential for economic return. Last March, the Research Prioritization Steering Group recommended fourteen specific areas that funding should be directed towards.

What will happen to other areas, and other subjects? They are going to receive little, if any, government support for their research. In the university environment, due to the close links between education and research, the quality of education in those subjects is going to suffer, as well as the research.

The teaching of a professor or lecturer in a university is informed and influenced by their research. Ask any academic, and they will tell you that teaching and research fertilize each other.  The job of an academic is all about the discovery and the dissemination of knowledge. If research in our low-priority subjects comes to an end due to lack of government funding, there won’t be any research-informed teachers left.

Doctoral students are vital for the continuity of both research and education. A consequence of research prioritization is that we will only have PhD students in the chosen areas; other subjects will not get the funding to support PhD students. Apart from depleting the overall research base in the country, this is a double whammy.

Firstly, the research of the PhD supervisor suffers if there are no PhD students and postdoctoral researchers. The research team starts to collapse, and an international reputation built over years may be lost.

Secondly, there are knock-on effects for the education of our undergraduates. Labs and tutorials cannot be held without PhD students. The teaching infrastructure in the universities starts to collapse. Furthermore, our future school teachers will not be properly taught in those non-priority areas.

In a few years we could have a country where university staff in non-priority subjects spend all their time on teaching, and none on research, because they have no assistants. Priority areas, on the other hand, will have the PhD students and infrastructure to enable the research to continue.  A kind of two-tier system develops.

One example to consider is maths. As part of the current major curriculum reform, the Minister for Education and Skills places great emphasis on the importance of mathematical standards at all levels – bottom up and top down. Ironically, one of the subjects not mentioned anywhere in the Research Prioritization report is mathematics. Mathematics could be said to have fallen between fourteen stools. Maths is everywhere, and at the same time, nowhere, because there is no research support. This could have consequences for mathematical standards at all levels.

Ignoring maths fails to recognize the importance of mathematics as a subject in its own right, one that needs support in itself and not just as a servant to a priority application area.

The solution is for the government to not prioritize certain areas to the total exclusion of other areas. Prioritizing means allocating a majority of resources, but not all resources. Indeed, the press release of the launch of the RPSG report last March stated “…the Government’s plan to target the majority of the Government’s core €500million budget…”.

The government can and should fund research across all subjects, to varying degrees according to the priorities and available resources. There are several funding agencies available to distribute the €500 million research budget. Some leadership and joined-up thinking is required to allocate funds across the different agencies, which come under different government departments. There are encouraging signs in this regard from the Prioritization Action Group, chaired by Minister Seán Sherlock, which has been established to accomplish this task.

The whole research prioritization strategy could have other negative consequences. It is known as "picking winners" and is a high-risk strategy. An article, 'Picking winners, saving losers', in The Economist (April 2010) concluded that the strategy usually fails and works only in certain circumstances.

One consequence is that we lose expertise in the non-priority areas, leaving us unable to respond to future challenges if there is a major development in one of those fields.

This strategy could also bring about a further downgrading of our universities, which are currently struggling in the world rankings. Research is one of the key components for these rankings. Excellent world-class researchers who are already working in Ireland will be excluded from funding simply because their area of research is not considered a priority area.  In order to receive support, they may move to another country.


Impact Factor – what it should and shouldn’t be used for

There was a good editorial in Nature Materials that clarified a few things for me about the impact factor. They made the point that the impact factor of a journal, in conjunction with the median number of citations, does tell you something about the journal. It does not tell you anything about an individual person or an individual paper, and it should not be used for grant-giving, tenure, appointment or promotion.

There was another editorial in Nature on this topic in 2005.

And again in 2003.  This one comments on the fact that most people just copy references from another paper. I have definitely observed this. The rich get richer, and the poor get poorer, when it comes to citations. You have to get a paper in the loop, and then sit back and watch the citations pile up.

Unlike impact factor, citations do tell you something about an individual paper, after a suitable period of time has elapsed. Some people say that the only way to tell if a paper is a good paper is to read it yourself. I disagree. First of all, that doesn't work if the field is not my field and I am not qualified to judge. Secondly, my opinion is just one person's opinion, whereas if I look at the number of citations, I am getting the opinion of all the other researchers in that field in the whole world (in some sense). It would of course be better to pick up the phone and ask all the other people in that field individually what their opinion is, but that is not practical. I think the number of citations is a compromise: it's not perfect, because there are different reasons a paper might be cited, but it's better than nothing.

There’s a related blog here, about the REF in the UK. The author makes the point that averaging the h-index over a department seems to be a reasonable measure. Another thing I learned here is that the impact factor will not be used in the REF in 2014.
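For reference, the h-index mentioned above has a standard definition: the largest h such that the author has h papers with at least h citations each. A small sketch (my own illustration, not anything specific to the REF):

```python
def h_index(citations):
    """Largest h such that at least h of the papers have
    at least h citations each (the standard definition)."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Five papers with citation counts [10, 8, 5, 4, 1]: four of them
# have at least 4 citations each, but there are not five with at
# least 5, so the h-index is 4.
print(h_index([10, 8, 5, 4, 1]))  # -> 4
```

Averaging over a department, as the blog suggests, would then just be the mean of the members' individual h-indices.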

One of the comments makes the interesting point that once we start using a metric to make our decisions, the metric ceases to have any value, because people will start playing games to manipulate it (essentially Goodhart's Law). One way around this is to keep changing the metric.

Eggs in one basket in the university sector

I wrote an earlier post about the idea of a country putting all its research funding (and perhaps other resources too) into a few top places. I think this is a very interesting topic for debate.

Recently there was another article on the matter in The Guardian. The former president of DCU posted about it.

One of the arguments in favour of funding only the top universities is that "the money is going there anyway." The Guardian reports that 75% of funding goes to the top 30 institutions. Therefore it is seemingly correct to say that most of the funding goes to the top universities. This is probably because most of the top researchers are at the top universities. But the word "most" is the key point to me. The point is that there are some excellent researchers at universities that are not ranked highly. Those researchers account for the other 25% of the funding. If you take away that 25% from them, you are saying to them that they either need to move to a top ranked university, or stop getting funded. Presumably this will result in the movement of those people to the top ranked universities. This cements the two-tier world, cements the position of the top ranked universities, and makes it impossible for other universities to move up there. In the long term we will get a polarized situation where only the top ranked places do funded research. We will move from a 75-25 split to a 100-0 split.

Next question: is that good or bad? It depends on your point of view, and what you are trying to achieve.

Next consider a similar scenario, where a country puts all its funding into a few selected areas/subjects instead of a few selected universities. The same story plays out. Excellent researchers in the non-chosen areas do not get funded. They have a choice: move research area, move country, or stop getting funded.

Next question: is that good or bad? It depends on your point of view, and what you are trying to achieve.

There is also an indirect way of putting all eggs in one basket – which could be happening right now in the UK. It is interesting to observe what is happening there – there was an article in the Guardian the other day about it. The upshot of raising the limit on fees to 9,000 pounds is that the top universities are thriving, and weaker ones are possibly struggling. Applications have dropped, so the weaker universities have to admit students with bad grades. Furthermore, they have to pass these students all the way through, because they need the money. Talk about buying a degree. It will take some years to see how this plays out.

They say all things are cyclical. I think this will probably happen here: we will put all our eggs in one basket for a while, then diversify, then go back, then diversify again, and so on. As new people come into power, they need to make changes. Nobody gets noticed and rewarded for saying "it ain't broke, so I'm not going to fix it."

Taiwan Research Rankings

Through Ninth Level Ireland I saw a post by Richard Holmes on the Taiwan rankings. These are university rankings just for research, and just for science and engineering.

Here is how they compute their rankings, which are based on the Thomson-Reuters (formerly ISI) databases.

  • 25% for research productivity (number of articles over the last 11 years; number of articles in the current year)
  • 35% for research impact (number of citations over the last 11 years; number of citations in the current year; average number of citations over the last 11 years)
  • 40% for research excellence (h-index over the last 2 years; number of highly cited papers; number of articles in the current year in highly cited journals)
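Assuming each of the three components is first normalised to a common 0-100 scale (an assumption on my part; how the Taiwan rankings normalise the raw counts is not spelled out here), the weighting above amounts to a simple weighted sum:

```python
# Sketch of the 25/35/40 weighting scheme described above.  The
# normalisation of each component to a 0-100 score is an assumption
# for illustration, not the rankings' documented method.
WEIGHTS = {"productivity": 0.25, "impact": 0.35, "excellence": 0.40}

def composite(scores):
    """Weighted sum of the three component scores (each 0-100)."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# A university scoring 60/70/80 on the three components:
print(f"{composite({'productivity': 60, 'impact': 70, 'excellence': 80}):.1f}")
```

Note that the excellence component carries the most weight, so a strong h-index and highly cited papers move a university further than raw article counts do.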

Some of the measures seem to be *absolute* numbers, like the total number of articles over the last 11 years, and not numbers relative to the size of the institution. This favours larger universities. Also, arts and humanities are not counted.

I looked up the Irish universities.

235 Trinity College Dublin

277 University College Dublin

311 Queen’s University Belfast

398 University College Cork

No others in top 500.

I find it interesting that Royal Holloway, University of London comes outside the top 500 here, yet is number 11 in the world for research impact according to the THE world rankings. Why is there such a big difference?