Sunday, April 26, 2015

Supervoid superhype, or the publicity problem in science

Part of the reason this blog has been quiet recently is that I decided at the start of this year to try to avoid — as far as possible — purely negative comments on incorrect, overhyped papers, and focus only on positive developments. (The other part of the reason is that I am working too hard on other things.)

Unfortunately, last week a cosmology story hit the headlines that is so blatantly incorrect and yet so unashamedly marketed to the press that I'm afraid I am going to have to change that stance. This is the story that a team of astronomers led by Istvan Szapudi of the University of Hawaii have found "the largest structure in the Universe", which is a "huge hole" or "supervoid" that "solves the cosmic mystery" of the CMB Cold Spot. This story was covered by all the major UK daily news outlets last week, from the Guardian to the Daily Mail to the BBC, and has been reproduced in various forms in all sorts of science blogs around the world. 

There are only three things in these headlines that I disagree with: that this thing is a "structure", that it is the largest in the Universe, and that it solves the Cold Spot mystery.

Let's focus on the last of these. Readers of this blog may remember that I wrote about the Cold Spot mystery in August last year, referring to a paper my collaborators and I had written which conclusively showed that this very same supervoid could not explain the mystery. Our paper was published back in November in Phys. Rev. D (journal link, arXiv link). And yet here we are six months later, with the same claims being repeated!

Does the paper by Szapudi et al. (journal link, arXiv link) refute the analysis in our paper? Does it even acknowledge the results in our paper? No, it pretends this analysis does not exist and makes the claims anyway.

Just to be clear, it's possible that Szapudi's team are unaware of our paper and the fact that it directly challenged their conclusions several months before their own paper was published, even though Phys. Rev. D is a very high profile journal. This is sad and would reflect a serious failure on their part and that of the referees. The only alternative explanation would be that they were aware of it but chose not to even acknowledge it, let alone attempt to address the argument within it. This would be so ethically inexcusable that I am sure it cannot be correct.

I am also frankly amazed at the standard of refereeing, which I'm afraid reflects extremely poorly on the journal, MNRAS.

Coming now to the details: in our paper last year, we made the following points:
  1. Unless our understanding of general relativity in general, and the $\Lambda$CDM cosmological model in particular, is completely wrong, this particular supervoid, which is large but only has at most 20% less matter than average, is completely incapable of explaining the temperature profile of the Cold Spot.
  2. Unless our understanding is completely wrong as above, the kind of supervoid that could begin to explain the Cold Spot is incredibly unlikely to exist — the chances are about 1:1,000,000!
  3. The corresponding chances that the Cold Spot is simply a random fluctuation that requires no special explanation are at worst 1:1000, and, depending on how you analyse the question, probably a lot better.
  4. This particular supervoid is big and rare, but not extremely so. In fact several voids that are as big or bigger, and as much as 4 times emptier, have already been seen elsewhere in the sky, and theory and simulation both suggest there could be as many as 20 of them.
To illustrate point 1 graphically, I made the following figure showing the actual averaged temperature profile of the Cold Spot versus the prediction from this supervoid:

Image made by me.

If this counts as a "solution to a cosmic mystery" then I'm Stephen Hawking.

The supervoid can only account for less than 10% of the total temperature decrement at the centre of the Cold Spot (angle of $0^\circ$). At other angles it does worse, failing to even predict the correct sign! And remember, this prediction only assumes that our current understanding of cosmology is not completely, drastically wrong in some way that has somehow escaped our attention until now.

You'll also notice that if the entire red line is somehow magically (through hypothetical "modified gravity effects") scaled down to match the blue line at the centre, it remains wildly, wildly wrong at every other angle. This is a direct consequence of the fact that the supervoid is very large, but really not very empty at all.

By contrast, the simple fact that the Cold Spot is chosen to be the coldest spot in the entire CMB already accounts for 100% of the cold temperature at the centre:

The red line is the observed Cold Spot temperature profile. 68% of the coldest spots chosen in random CMB maps have temperatures lying within the dark blue band, and 95% lie within the light blue band.

Similarly, the fact that Mt. Everest is much higher than sea level is not at all surprising. The highest mountains on other planets (Mars, for instance) can be a lot higher still.

But how to explain the fact that a large void does appear to lie in the same direction as the Cold Spot? Is this not a huge coincidence that should be telling us something?

Let's try the following calculation. Take the hypothesis that this particular void is causing the Cold Spot, let's call it hypothesis H1. Denote the probability that this void exists by $p_\mathrm{void}$, and the probability that all of GR is wrong and that some unknown physics leads to a causal relationship as $p_\mathrm{noGR}$. Then
$$p_\mathrm{H1}=p_\mathrm{void}\,p_\mathrm{noGR}.$$

On the other hand, let H2 be the hypothesis that the void and the Cold Spot are separate rare occurrences that happen by chance to be aligned on the sky. This gives

$$p_\mathrm{H2}=p_\mathrm{void}\,p_\mathrm{CS}\,p_\mathrm{align},$$

where $p_\mathrm{CS}$ is the probability that the Cold Spot is a random fluctuation on the last scattering surface, and $p_\mathrm{align}$ the probability that the two are aligned.

The relative likelihood of the two rival hypotheses is given by the ratio of the probabilities:
$$\frac{p_\mathrm{H1}}{p_\mathrm{H2}}=\frac{p_\mathrm{noGR}}{p_\mathrm{CS}\,p_\mathrm{align}}.$$

Suppose we assume that $p_\mathrm{CS}=0.05$, and that the chance of alignment at random is $p_\mathrm{align}=0.001$.[1] Then the likelihood we should assign to the "supervoid-caused-the-Cold-Spot" hypothesis depends on whether we think $p_\mathrm{noGR}$ is more or less than 1 in 20,000.
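For anyone who wants to check the arithmetic, the comparison takes a few lines of Python. This is just a sketch using the illustrative probability values quoted above, nothing more:

```python
from fractions import Fraction

# Odds of H1 (the void causes the Cold Spot, which requires GR to fail)
# versus H2 (void and Cold Spot are independent and merely aligned by chance).
# The factor p_void appears in both hypotheses and cancels in the ratio,
# leaving p_H1 / p_H2 = p_noGR / (p_CS * p_align).
p_CS = Fraction(1, 20)       # Cold Spot as a random fluctuation (0.05)
p_align = Fraction(1, 1000)  # chance alignment on the sky (0.001)

# H1 is favoured over H2 only if p_noGR exceeds this threshold:
threshold = p_CS * p_align
print(threshold)  # 1/20000
```

Using exact fractions avoids any floating-point noise in such a small product.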

This exact calculation appears in Szapudi et al's paper, except that they mysteriously leave out the numerator on the right hand side. This means that they assume, with probability 1, that general relativity is wrong and that some unknown cause exists which makes a void with only a 20% deficit of matter create a massive temperature effect. In other words, they've effectively assumed their conclusion in advance.

Well, call me old-fashioned, but I don't think that makes any sense. We have a vast abundance of evidence, gathered over the last 100 years, which shows that if indeed GR is not the correct theory of gravity, it is still pretty damn close to it. What's more, we have lots of cosmological evidence — from the Planck CMB data, from cross-correlation measurements of the ISW effect, as well as from weak lensing — that gravity behaves very much as we think it does on cosmological scales. Looking at the figure above, for the supervoid to explain the Cold Spot requires at least a factor of 10 increase in the ISW effect at the void centre, as well as a dramatic effect on the shape of the temperature profile. And all this for a void with only a 20% deficit of matter! If the ISW effect truly behaved like this we would have seen evidence of it in other data.

For my money, I would put $p_\mathrm{noGR}$ at no higher than $2.9\times10^{-7}$, i.e. I would rule out the possibility at $5\sigma$ confidence. This is a lot less than 1:20,000, so I would say chance alignment is strongly favoured. Of course you should feel free to put your own weight on the validity of all of the foundations of modern cosmology, but I suggest you would be very foolish indeed to think, as Szapudi et al. seem to do, that it is absolutely certain that these foundations are wrong.
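(For the curious: the $2.9\times10^{-7}$ figure is just the standard one-sided Gaussian tail probability at $5\sigma$, which can be checked with nothing but the Python standard library.)

```python
import math

# One-sided tail probability of a Gaussian beyond n standard deviations:
# P(X > n * sigma) = erfc(n / sqrt(2)) / 2
def tail_prob(n_sigma):
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

print(f"{tail_prob(5):.2e}")  # 2.87e-07, i.e. the ~2.9e-7 quoted above
```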

So much for the science, such as it is. The sociological observation that this episode brings me back to is that, almost without exception, whenever a paper on astronomy or cosmology is accompanied by a big press release, either the science is flawed, or the claims in the press release bear no relation to the contents of the paper. This is a particularly blatant example, where the authors have generated a big splash by ignoring (or being unaware of) existing scientific literature that runs contrary to their argument. But the phenomenon is much more ubiquitous than this.

I find this deeply depressing. Like most other young researchers (I hope), I entered science with the naive impression that what counted in this business was the accuracy and quality of research, the presentation of evidence, and — in short — facts. I thought the scientific method would ensure that papers would be rigorously peer-reviewed. I did not expect that how seriously different results are taken would instead depend on the seniority of the lead author and the slickness of their PR machine. Do we now need to hold press conferences every time we publish a paper just to get our colleagues to cite our work?

One possible response to this is that I was hopelessly naive, so more fool me. Another, which I still hope is closer to the truth, is that, in the long run, the crap gets weeded out and that truth eventually prevails. But in an era when public "impact" of scientific research is an important criterion for career advancement, and such impact can be simply achieved by getting the media to hype up nonsense papers [2], I am sadly rather more skeptical of the integrity of scientists [3].

[1] This probability for alignment is the number quoted by Szapudi's team, based on the assumption that there is only one such supervoid, which could be anywhere in the sky. In fact, as I've already said, theory and simulation suggest there should be as many as 20 supervoids, and several have already been seen elsewhere in the sky (including one other by Szapudi's team themselves!). The probability that any one supervoid should be aligned with the Cold Spot should therefore be roughly 20 times larger, or 0.02.

[2] Not everything in Szapudi's paper is nonsense, of course. For instance, it seems quite likely that there is indeed a large underdensity where they say. But there is still a good deal of nonsense (described above) in the actual paper, and vastly more in the press releases, especially the one from the Institute for Astronomy in Hawaii.

[3] On the whole, given the circumstances I thought journalists handled the hype quite well, especially Hannah Devlin in the Guardian, who included a skeptical take from Carlos Frenk. (I suspect Carlos at least was aware of our paper!)


  1. Sesh, did you realize that they submitted their paper to arXiv [] about 3 months before your paper appeared there? This means they possibly did the main writing more than 6 months before your paper was available. From MNRAS I find "received In original form 2014 May 6" and "Received 2015 February 24, Accepted 2015 March 4." So the review process took an exceptionally long time - it is not clear whether the delay between the first and second version is due to a slow referee or to slowness of response from the authors. Anyway, it is very likely that the referee read the paper and wrote the report MUCH before your paper appeared! It then took the authors surprisingly long to produce the final version. (I did not compare the first arXiv version with the published one, but it seems that either the referee required some quite elaborate changes, additions or corrections, or the authors were just very busy and/or slow.) I would not blame the review process in these circumstances as strongly as you do in your blog. As a referee (though not a referee of this paper ;-) I usually only check that the changes I asked for have been made, and even more often click "I do not need to see the paper after the changes have been made" if I think my requests are straightforward. Let's assume that the referee did indeed read the second version. If she/he was not a hard-core specialist in voids AND the ISW effect (simultaneously) as you are, he/she might have missed your paper just by chance, by not checking the arXiv or PRD at the right time. On the other hand, the authors might have focused on the changes required by the referee and not on checking the literature that appeared after their original submission. However, I agree with you that it seems quite strange that they missed your paper, since they work on the very same subject as you do. Given the length of the author list, it is even more strange that ALL OF THEM MISSED YOUR PAPER.
However, in practice, often even papers with long author list are in reality written by just some of them. In this case it seems likely that most authors had a small contribution on some observational issues at early stages of the project and the paper (and in particular the theory part of it) was finalized just by one or two authors. So, if both of them missed your paper, then they missed it and we cannot do anything anymore. And, as said, I don't wonder why the referee missed your paper.

    I have a question for you. You certainly knew of their paper when you were writing yours. And you knew that their paper had remained unpublished for almost a year! Did you at any point think that their paper might still be under review, and that they hadn't given up on it and possibly had not seen your paper? I ask because you might actually have been able to prevent this publicity fiasco. Did you contact any of them at any point? Maybe not, since it is not so nice to write (to a more senior guy) a 'cruel' email: "BTW, I recently submitted a paper to arXiv that shows your arguments are flawed, so you might want to withdraw/modify your paper". ;-) But I think this would have prevented what happened now, since I do not believe they ignored your paper on purpose. (Well, the cynical side of me says that they could have ignored your paper on purpose if the reviewing and resubmission process was far enough along when your paper appeared, but taking into account the above-mentioned time stamps, this fortunately seems quite a distant scenario.)

    1. Hi, thanks for the comment. Let me first state that in my opinion when you are substantially revising a preprint of a paper (Szapudi et al's paper has increased in length by 50% from May to February) it is incumbent on you to also update it in light of the new scientific literature. Sure, people make honest mistakes, but honest mistakes are still mistakes.

      In answer to your other detailed questions: when the Szapudi et al paper went on the arXiv in May 2014, it was accompanied by a companion paper by Finelli et al (with Szapudi and Kovacs the two authors in common on both) which presented calculations claiming the void as an explanation for the Cold Spot. Our paper in August showed that the Finelli paper was simply wrong, and that some of the claims in the Szapudi paper were overstated.

      I personally met some of the authors of the Finelli paper at workshops in August 2014 and in January 2015, where I talked about our work and had further detailed private conversations with them. I had previously also exchanged several emails with them in May/June 2014. In August (a couple of weeks before our paper hit the arXiv) they still believed their calculation was correct and were surprised to hear my talk; in January they said they were aware of our paper (and my previous blog post) and accepted that the results in Finelli et al were wrong.

      Neither Szapudi nor Kovacs were amongst the people I met. It is possible that their co-authors simply never informed them about these conversations, or about our paper that they had read, or about my blog post which they had also read. It's also possible that they didn't read the arXiv themselves. This still doesn't reflect very well on them.

    2. Two other things are worth mentioning. The May version of Szapudi et al claims that the Finelli et al paper shows that the void model "matches well the profile observed on the CMB". The February version now says that Finelli et al find that it doesn't match. (By the way, the Finelli paper has not changed since May.)

      And also Andras Kovacs commented on a blog post of mine in July 2014 (it's a shame he didn't continue reading in August) where I pointed him to earlier papers which had found even larger and much emptier voids in other data, elsewhere in the sky. And yet in April 2015 he is quoted in the Guardian as saying "This is the greatest supervoid ever discovered."

  2. Hey Sesh, I share your frustration about this. The only explanation I have is that Szapudi probably didn't notice your paper. Anyway, I don't think that other scientists working in the field take this seriously - except maybe the cosmic-texture fans (I wonder why there hasn't been a paper by them). Another point is that there is something wrong with the communication between science and the media. Anyone can issue a press release at any time. The fact that a paper has been published in a reputable journal should be the gatekeeper here, but as you said, the refereeing process just doesn't always work as it should.

  3. I don't go to that many conferences anymore (mainly because, surprisingly, there seem to have been more which interested me, say, 15 or 20 years ago), but I've seen you, and István, and Joe, all during the last year, and seen presentations on this topic. Were you really not at any conference where one of the authors of the other paper was? If not, then go to more conferences. :-) If so, then did you ask any penetrating questions?

    In this particular case where, as you note, the paper spent a long time between submission of the original and revised version (not that long between submission of the latter and acceptance), I think that if the authors put it on arXiv before acceptance, presumably because they want people to read it, then it is their duty to check for citations to the arXiv version during the revision process and address them at this stage, even if the referee did not (in this case: could not) point out the existence of your paper. (Even if they disagree, they should at least acknowledge the criticism.)

    In general, I don't think that one should blame the referee for this. The referee should ask to see the revised version if changes were substantial and make sure that enough of his suggestions were implemented.

    In general. In this particular case, anyone who follows blogs, goes to conferences, or reads the literature (or even the Daily Mail, presumably---at least the Guardian) and has just a general interest in cosmology cannot have failed to notice this, especially because of the hype. This isn't even in my field and even I was aware of your refutation of their paper.

    Sadly, it does happen that people just ignore papers which refute their own, even if they are aware of them. In the long run, I think this hurts their reputation, but if those doing the ignoring already have permanent jobs, and those doing the refuting don't, and might never get one because their papers don't have enough "impact", then it is nevertheless not in the interest of healthy science. Just because justice might be done in the long run doesn't mean that this can't be improved.

    Time to moot another suggestion which I have heard a lot recently: After acceptance, the name of the referee should be published. Best might be to do so as a footnote on the title page of the paper.

    In this particular case, I think that you should write to all of the authors and ask them to cite your paper in the upcoming erratum. :-|

    1. While I was commenting, you were as well, and so answered my question about going to enough conferences!

    2. I'm not sure I understand this assumption that you and Anonymous seem to share - correct me if I am wrong - that the referee(s) could not have seen the revised draft with the (substantial) changes. The revised version was received by MNRAS on Feb 24th, but not accepted until March 4th. To me this suggests that it was sent back to the referee(s) for further comment in the intervening period before eventual acceptance.

    3. Sorry if I was unclear. Though I said that the referee should see the revised version if there are substantial changes, I didn't mean to imply that he didn't. As you note, there was a good week of time during which he could have, and probably did. (If he didn't want to see it, it probably would have been accepted shortly after receipt.)

      The question is whether the referee, when looking at the revised draft, should take into account events which have happened since the original version was submitted. I would say that, in general, this might be expecting too much but, in this particular case, I'm surprised that he (apparently) didn't.

      Of course, the referee couldn't have pointed out your paper in his first comments, as it had not yet appeared (assuming, as is probably the case, that the long revision time was not due to waiting so long for the initial report).

    4. MNRAS has a 6-month time limit for authors to submit revisions in response to the referee's comments. August 20th (when our paper came out) to February 24th is slightly more than six months.

    5. Right. (I think the six-month deadline is "we assume the author isn't interested in revision if we don't hear anything by then".) I don't see how that's relevant here, though. However, since the original paper was "received In original form 2014 May 6", it is possible that they got the referee report before your paper appeared. As I said, since they posted their paper to the arXiv before acceptance, I think it is the job of the authors to see if anyone has cited it and take this into account, especially with such a long time between the original and revised versions. They didn't, and that's bad, but 6 months is neither here nor there.

      In other words, I don't think the fact that your paper and their revised version are a bit more than 6 months apart is relevant here. The 6-month deadline doesn't mean that they sent in their revised version on 24 August and thus could have (just) seen your paper. Rather, I suspect that they sent in the revised version before your paper came out, and that correspondence kept the deadline extended. (Sometimes, one asks for clarification by email or whatever, which doesn't result in a revised version.) Again, I think the 6-month deadline is a timeout to remove a paper from consideration if nothing is heard and doesn't imply that the revised version must be submitted within 6 months of the original. (It would imply that only if there was no email correspondence, etc.) Here's the official boilerplate: "If you do not submit your revision within six months, we may consider it withdrawn and request it be resubmitted as a new submission."

    6. I'm suggesting that at least one referee report was probably sent out after August 20th.

    7. 24 August and thus could have (just) seen your paper ---> 24 August at the latest and thus must have seen your paper, though possibly only shortly before submission

    8. Maybe, but maybe just a response to a question which had nothing to do with your paper.

      Again, I think the authors should have reacted to your paper, and in this special case even the referee, but I don't think one can deduce much from the various dates, especially since not every extension of the deadline means a new version, or even a formal referee report.

    9. Sorry, I'm clearly being too cryptic. Setting aside what the authors should or should not have done on their own, my point in this instance is that the referee also bears responsibility.

      If there were not one but two (Jim Zibin's arXiv comment also appeared in August) new papers on the arXiv casting doubt on aspects of a paper I were reviewing, I would regard it as part of my job to make sure that the authors accounted for or at least acknowledged these works. This would still be the case even if I had previously sent a report that only asked for other minor changes.

      Maybe the referee didn't read the arXiv either, but one should be allowed to expect better from referees.

    10. I agree, although I might think that a bit more responsibility lies with the authors than with the referee, at least if his report went in before your paper appeared.

      I think this is a good example where one can see that it would make sense that a referee loses his anonymity once a paper is accepted. We then know whom to blame. :-) (This allows the valid reasons for anonymity before publication (which, of course, might never happen) to stand.)

      I do suggest you write the authors and ask them to cite your paper in the erratum.

    11. "I'm clearly being too cryptic"

      Is that even possible? :-)

  4. Another case is when a paper has already been published (i.e. final version) and then another paper refutes it. Ignoring this in followup papers to the refuted paper is, in my view, unethical (even if one disagrees with the criticism), but what if no followup papers appear? This might be seen as tacit recognition that the authors of the original paper agree with the criticism, but might also be a result of ignorance, or just "nolo contendere".

    Maybe in such cases the journal should ask for a one-page reply from the authors of the original paper, noting the fact if they choose not to reply, and also allow the author of the refutation a one-page reply to the reply. This would at least make it clear whether the original authors noticed the refutation.

  5. Maybe they were just jealous that the BICEP-2 people have been getting so much press for jumping to conclusions, and are trying to emulate them. :-)

    Seriously, do all these "impact" analyses which count citations count those which refute the paper in question as adding to the impact?

  6. I just realized that about a third of my papers point out mistakes in papers by others. Not sure what, if anything, this says about me or about anyone else. :-|

  7. Theo Nieuwenhuizen, April 27, 2015 at 9:43 PM

    Well, dear learned gentlemen, I face a more brutal and trivial injustice: rejection of papers that model the data but point to an interpretation outside LCDM. In today's status quo only issues within LCDM can be discussed, and that is what you do at length. A discussion outside LCDM (which is just a model) is routinely rejected by editors.

    1. Post a link to the paper and the referee report. Otherwise, how can anyone judge whether this is unjust?

      I regularly see papers which question the LCDM framework. While stubborn orthodoxy does exist to some extent, its influence is usually exaggerated, especially by people who blame any and all criticism of their work on stubborn orthodoxy. If one disagrees with orthodoxy then, yes, criticism might be due to stubborn folks wanting to retain the status quo, but more probably the work is just wrong.

      Again, without seeing the paper and the referee report, no-one can judge.

      It isn't that long ago that LCDM itself was a radical alternative; at least, it was seen by some (to some extent defenders of the old orthodoxy) as such. But I think it's not a case of one orthodoxy supplanting another (again, the extent of such orthodoxy is usually exaggerated); many LCDM folks remember when they were on the other side of the fence, and tend to be more open-minded. But, as Matt Strassler said, there is a difference between having an open mind and having an empty head.

      "There are only two things wrong with LambdaCDM: Lambda and CDM." ---Tom Shanks

    2. This is off topic, so please don't continue.

  8. I'm sorry, I think I should clarify this point again - this blog is not the place to discuss your own alternative theories of everything. Sensible, informative comments on topic are great, questions are encouraged, but expositions of crackpot ideas will be deleted as soon as I see them.