Monday, September 2, 2013

A long summer

Indeed it has been a long summer, though the good weather appears to be drawing to a close. Over the last few months, I have attended three cosmology conferences or workshops and also been on a two-week holiday in the Dolomites, where I occupied my time by doing things like this:

La Guglia Edmondo de Amicis, near the Misurina lake.
and enjoying views like this:

Cima Piccola di Lavaredo, from the Dibona route on Cima Grande.
This explains the lack of activity here in recent times.

Returning home a couple of weeks ago, I was full of ideas for several exciting blog posts, including a summary of all the hottest topics in cosmology that were discussed at the conferences I attended, and perhaps an account of my argument (ahem, stimulating discussion) with Uros Seljak. However, it has come to my attention that there are other physicists in other parts of the world who happen to be working on the exact same topic that my collaborators and I have been investigating for the last few months. The rule in the research world is of course "publish or perish" (though some wit has suggested that "publish and perish" is more accurate) – so most of my time now will be spent on avoiding being scooped, and the current hiatus on this blog will continue for a short period. Looking on the bright side, once normal service resumes, I hope to have some interesting science results to describe!

In the meantime, I can only direct you to other blogs for your entertainment and enlightenment. Those of you who like physics discussions and have not already read Sean Carroll's blog (a vanishingly small number perhaps?) might enjoy this post about Boltzmann brains. I personally also enjoyed this argument against philosopher Tom Nagel.

For people interested in climbing news, I can report that my friends on the Oxford Greenland Expedition that I mentioned once here have returned safely after a successful series of very impressive climbs. I found their regular reports of their activities in the expedition diary well-written and rather thrilling – not just the climbing, but also the account of the journey to Greenland by sea in the face of seemingly never-ending gales! Well worth a read, as is this.

Thursday, July 11, 2013

Quasars, homogeneity and Einstein

[A little note: This post, like many others on this blog, contains a few mathematical symbols which are displayed using MathJax. If you are reading this using an RSS reader such as Feedly and you see a lot of $ signs floating around, you may need to click through to the blog to see the proper symbols.]

People following the reporting of physics in the popular press might remember having come across a paper earlier this year that claimed to have detected the "largest structure in the Universe" in the distribution of quasars, that "challenged the Cosmological Principle". This was work done by Roger Clowes of the University of Central Lancashire and collaborators, and their paper was published in the Monthly Notices of the Royal Astronomical Society back in March (though it was available online from late last year). 

The reason I suspect people might have come across it is that it was accompanied by a pretty extraordinary amount of publicity, starting from this press release on the Royal Astronomical Society website. This was then taken up by Reuters, and featured on various popular science websites and news outlets, including New Scientist, The Atlantic, National Geographic, Space.com, The Daily Galaxy, Phys.org, Gizmodo, and many more. The structure they claimed to have found even has its own Wikipedia entry.

Obligatory artist's impression of a quasar.

One thing that you notice in a lot of these reports is the statement that the discovery of this structure violates Einstein's theory of gravity, which is nonsense. This is sloppy reporting, sure, but the RAS press release is also partly to blame here, since it includes a somewhat gratuitous mention of Einstein, and this is exactly the kind of thing that non-expert journalists are likely to pick up on. Mentioning Einstein probably helps generate more traffic after all, which is why I've put him in the title as well.

But aside from the name-dropping, what about the main point about the violation of the cosmological principle? As a quick reminder, the cosmological principle is sometimes taken to be the assumption that, on large scales, the Universe is well-described as homogeneous and isotropic. 

The question of what constitutes "large scales" is sometimes not very well-defined: we know that on the scale of the Solar System the matter distribution is very definitely not homogeneous, and we believe that on the scale of the observable Universe it is. Generally speaking, people assume that on scales larger than about $100$ Megaparsecs, homogeneity is a fair assumption. A paper by Yadav, Bagla and Khandai from 2010 showed that if the standard $\Lambda$CDM cosmological model is correct, the scale of homogeneity must be at most $370$ Mpc.

On the other hand, this quasar structure that Clowes et al. found is absolutely enormous: over 4 billion light years, or more than 1000 Mpc, long. Does the existence of such a large structure mean that the Universe is not homogeneous, the cosmological principle is not true, and the foundation on which all of modern cosmology is based is shaky?

Well actually, no. 

Unfortunately Clowes' paper is wrong, on several counts. In fact, I have recently published a paper myself (journal version here, free arXiv version here) which points out that it is wrong. And, on the principle that if I don't talk about my own work, no one else will, I'm going to try explaining some of the ideas involved here.

The first reason it is wrong is something that a lot of people who should know better don't seem to realise: there is no reason that structures should not exist which are larger than the homogeneity scale of $\Lambda$CDM. You may think that this doesn't make sense, because homogeneity precludes the existence of structures, so no structure can be larger than the homogeneity scale. Nevertheless, it does and they can.

Let me explain a little more. The point here is that the Universe is not homogeneous, at any scale. What is homogeneous and isotropic is simply the background model we use to describe its behaviour. In the real Universe, there are always fluctuations away from homogeneity at all scales – in fact the theory of inflation basically guarantees this, since the power spectrum of potential fluctuations is close to scale-invariant. The assumption that all cosmological theory really rests on is that these fluctuations can be treated as perturbations about a homogeneous background – so that a perturbation theory approach to cosmology is valid.

Given this knowledge that the Universe is never exactly homogeneous, the question of what the "homogeneity scale" actually means, and how to define it, takes on a different light. (Before you ask, yes it is still a useful concept!) One possible way to define it is as that scale above which density fluctuations $\delta$ generally become small compared to the homogeneous background density. In technical terms, this means the scale at which the two-point correlation function for the fluctuations, $\xi(r)$, (of which the power spectrum $P(k)$ is the Fourier transform) becomes less than $1$. Based on this definition, the homogeneity scale would be around $10$ Mpc.

It turns out that this definition, and the direct measurement of $\xi(r)$ itself, is not very good for determining whether or not the Universe is a fractal, which is a question that several researchers decided was an important one to answer a few years ago. This question can instead be answered by a different analysis, which I explained once before here: essentially, given a catalogue with the positions of many galaxies (or quasars, or whatever), draw a sphere of radius $R$ around each galaxy, and count how many other galaxies lie within this sphere, and how this number changes with $R$. The scale above which the average of this number for all galaxies starts scaling as the cube of the radius, $$N(<R)\propto R^3,$$ (within measurement error) is then the homogeneity scale (if it starts scaling as some other constant power of $R$, the Universe has a fractal nature). This is the definition of the homogeneity scale used by Yadav et al. and it is related to an integral of $\xi(r)$; typically measurements of the homogeneity scale using this definition come up with values of around $100-150$ Mpc.
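To make the counts-in-spheres idea concrete, here is a rough sketch in Python of how one might apply it to a catalogue of 3D positions. It is purely an illustration of the scaling test described above, run on a uniform random box with arbitrary numbers; a real measurement has to deal with the survey geometry, the selection function and redshift-space effects, none of which appear here.

    import numpy as np
    from scipy.spatial import cKDTree

    def scaled_counts_in_spheres(positions, radii):
        # Average number of neighbours N(<R) within radius R of each point,
        # divided by R^3. If the point set is homogeneous above some scale,
        # this ratio flattens to a constant there; for a fractal it keeps
        # scaling as some other power of R.
        tree = cKDTree(positions)
        # cumulative pair counts for an array of radii (includes self-pairs)
        pair_counts = tree.count_neighbors(tree, radii)
        avg_neighbours = pair_counts / len(positions) - 1.0  # remove the self-pair
        return avg_neighbours / radii**3

    # Illustrative usage: a uniform random "catalogue" in a 1 Gpc cube,
    # homogeneous by construction, so the ratio should come out roughly
    # constant (apart from edge effects near the box boundary).
    rng = np.random.default_rng(42)
    points = rng.uniform(0.0, 1000.0, size=(10000, 3))   # positions in Mpc
    print(scaled_counts_in_spheres(points, np.array([10.0, 20.0, 50.0, 100.0])))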

The figure that proves that the distribution of quasars is in fact homogeneous on the expected scales. For details, see arXiv:1306.1700

To get back to the original point, neither of these definitions of the homogeneity scale makes any claim about the existence of structures that are larger than that. In fact, in the $\Lambda$CDM model, the correlation function for matter density fluctuations is expected to be small but positive out to scales larger than either of the two homogeneity scales defined above (though not as large as Yadav et al.'s generous upper limit). The correlation function that can actually be measured using any given population of galaxies or quasars will extend out even further. So we already expect correlations to exist beyond the homogeneity scale – this means that, for some definitions of what constitutes a "structure", we expect to see large "structures" on these scales too.

The second reason that the claim by Clowes et al. is wrong is however less subtle. Given the particular definition of a "structure" they use, one would expect to find very large structures even if density correlations were exactly zero on all scales.

Yes, you read that right. It's worth going over how they define a "structure", just to make this absolutely clear. About the position of each quasar in the catalogue they draw a sphere of radius $L$. If any other quasars at all happen to lie within this sphere, they are classified as part of the same "structure", which can now be extended in other directions by repeating the procedure about each of the newly added member quasars. After repeating this procedure over all $18,722$ quasars in the catalogue, the largest such group of quasars identified becomes the "largest structure in the Universe".
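In other words, this is just the standard "friends-of-friends" grouping algorithm with linking length $L$. Here is a minimal sketch of it in Python – my own illustration, not the authors' code: points closer than the linking length are linked, and a "structure" is simply a connected component of the resulting graph.

    import numpy as np
    from scipy.spatial import cKDTree
    from scipy.sparse.csgraph import connected_components

    def friends_of_friends(positions, linking_length):
        # Label each point with the index of the "structure" it belongs to:
        # any two points closer than linking_length are linked, and
        # membership is extended transitively through the links.
        tree = cKDTree(positions)
        links = tree.sparse_distance_matrix(tree, linking_length,
                                            output_type='coo_matrix')
        n_groups, labels = connected_components(links, directed=False)
        return labels

With the quasar positions expressed as comoving coordinates, the "largest structure in the Universe" is then simply the connected group with the most members – nothing in this procedure knows or cares whether the linked points are actually physically associated with one another.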

It should be pretty obvious now that the radius $L$ chosen for these spheres, though it is rather arbitrary, is crucial to the end result. If it is too large, all quasars in the catalogue end up classified as part of the same truly ginormous "structure", but this is not very helpful. This is known as "percolation" and the critical percolation threshold has been thoroughly studied for Poisson point sets – which are by definition random distributions of points with no correlation at all. The value of $L$ that Clowes et al. chose to use, for no apparent reason other than that it gave them a dramatic result, was $100$ Mpc – far too large to be justified on any theoretical grounds, but slightly lower than the critical percolation threshold would be if the quasar distribution were similar to that of a Poisson set. On the other hand, the "largest structure in the Universe" only consists of $73$ quasars out of $18,722$, so it could be entirely explained as a result of the poor definition ...

Now I'll spare you all the details of how to test whether, using this definition of a "structure", one would expect to find "structures" extending over more than $1000$ Mpc in length or with more than $73$ members or whatever, even in a purely random distribution of points, which are by definition homogeneous. Suffice it to say that it turns out one would. This plot shows the maximum extent of such "structures" found in $10,000$ simulations of completely uncorrelated distributions of points, compared to the maximum extent of the "structure" found in the real quasar catalogue.

The probability distribution of extents of largest "structures" found in 10,000 random point sets for two different choices of $L$. Vertical lines show the actual values found for "structures" in the quasar catalogue. The actual values are not very unusual. Figure from arXiv:1306.1700
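For what it's worth, here is the skeleton of such a Monte Carlo test in Python, reusing the friends_of_friends sketch from above. The box size, point count and number of realisations are illustrative choices of mine only; the real test has to match the number density and survey geometry of the actual quasar catalogue, which this does not attempt to do.

    import numpy as np
    from scipy.spatial.distance import pdist

    n_points, box_size, linking_length = 10000, 3000.0, 100.0   # Mpc, illustrative
    rng = np.random.default_rng(1)
    largest_extents = []
    for _ in range(100):   # the real test used 10,000 realisations
        points = rng.uniform(0.0, box_size, size=(n_points, 3))  # uncorrelated (Poisson) box
        labels = friends_of_friends(points, linking_length)      # sketch defined earlier
        # members of the largest "structure" found in this realisation
        members = points[labels == np.bincount(labels).argmax()]
        # maximum extent = largest pairwise separation among its members
        largest_extents.append(pdist(members).max())
    print(np.percentile(largest_extents, [50, 95]))   # typical and extreme extents, in Mpc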

To summarise then: finding a "structure" larger than the homogeneity scale does not violate the cosmological principle, because of correlations; on top of that, the "largest structure in the Universe" is actually not really a "structure" in any meaningful sense. In my professional opinion, Clowes' paper and all the hype surrounding it in the press is nothing more than that – hype. Unfortunately, this is another verification of my maxim that if a paper to do with cosmology is accompanied by a big press release, it is odds-on to turn out to be wrong.

Finally, before I leave the topic, I'll make a comment about the presentation of results by Clowes et al. Here, for instance, is an image they presented showing their "structure", which they call the 'Huge-LQG', with a second "structure" called the 'CCLQG' towards the bottom left:

3D representation of the Huge-LQG and CCLQG. From arXiv:1211.6256.

Looks impressive! Until you start digging a bit deeper, anyway. Firstly, they've only shown the quasars that form part of the "structure", not all the others around it. Secondly, they've drawn enormous spheres (of radius $33$ Mpc) at the position of each quasar to make it look more dramatic. In actual fact the quasars are way smaller than that. The combined effect of these two presentational choices is to make the 'Huge-LQG' look far more plausible as a structure than it really is. Here's a representation of the exact same region of space that I made myself, which rectifies both problems:

Quasar positions around the "structures" claimed by Clowes et al.

Do you still see the "structures"?

Sunday, June 23, 2013

Across the Himalayan Axis

I had promised to try to write a summary of the workshop on cosmological perturbations post Planck that took place in Helsinki in the first week of June, but although the talks were all interesting, I didn't feel very inspired to write much about them. Plus life has been intervening, so I'll have to leave you to read Shaun's accounts at the Trenches of Discovery instead.

I also recently put a new paper on the arXiv; despite promising to write about my own papers when I put them out, I'm going to have to postpone an account of this one until next week. This is because I am spending the next week at a rather unique workshop in the Austrian Alps. (This is one of the perks of being a physicist, I suppose!)

Therefore today's post is going to be about mountaineering instead. It is an account I wrote of a trek I did with my father and sister almost exactly seven years ago: we crossed the main Himalayan mountain range from south to north over a mountain pass known as the Kang La (meaning 'pass of ice' in the local Tibetan dialect, I believe), and then crossed back again from north to south over another pass as part of a big loop. In doing so we also crossed from the northern Indian state of Himachal Pradesh into Zanskar, a province of the state of Jammu and Kashmir, and then back again.

The account below was first written as a report for the A.C. Irvine Travel Fund, who partly funded this trip, and it has been available via a link on their website for several years. At the time, the Kang La was a very infrequently-used pass, in quite a remote area and only suitable for strong hikers with high-altitude mountain experience. But in the seven years since my trip it has seen quite a rise in popularity — I sometimes flatter myself that my account had something to do with raising the profile of the area!

Anyway, the account itself follows after the break. There is also a sketch map of the area I drew myself (it's hard to obtain decent cartographical maps of the area, and illegal to possess them in India due to the proximity to the border), and a few photographs to illustrate the scenery ...

Tuesday, June 4, 2013

CPPP 13

This week I am attending this workshop in Helsinki. The focus of the workshop is on re-evaluating theoretical issues in cosmology in light of the new data from the Planck satellite. 

Although the data were released in March, so far as I know they have not yet inspired any major theoretical breakthroughs. This is partly because the results were somewhat disappointingly boring, in that there is no smoking-gun indication in the data of failures of our current cosmological model (for more on this, see here), and therefore no clear hints of which extensions of the model we should be looking to explore further. There are still some niggles in the data, to be sure – such as the much advertised "anomalies". But these have not yet led to any major advances either. As a community it seems we cosmologists are still digesting the Planck results.

This workshop should aid that process of digestion. There are many scientists from all over the world attending, and I'm looking forward to hearing what they think about what the data mean. The way the workshop has been organised deliberately leaves plenty of time for discussion in between the scheduled talks, which I think is always the best way to go. I'm not giving a talk myself, though some work I did recently with Shaun Hotchkiss and Samuel Flender, who are both based in Helsinki, will feature in the poster session. Samuel gets most of the credit for preparing our poster though!

I'm not going to attempt to blog about the workshop in real time. Instead I will try to make a few notes and provide a single post at the end of the week touching on what I thought were the most interesting topics of the week. If you want more detail on each day, you should read Shaun's introduction and day-by-day accounts at The Trenches of Discovery. He did a similar thing before for the official ESA Planck conference, which was very successful. But I'd rather him than me, especially as internet access is rather expensive at my hotel!


Meanwhile, the most remarkable fact to note about Helsinki right now is the weather: a thermometer in my hotel said it was 28ºC this morning, which is wonderful outdoors but when combined with quadruple-glazed windows makes nights rather uncomfortable ...

Monday, May 27, 2013

An inconsistent CMB?

When the Planck science team announced their results in March, they also put out a great flood of papers. You can find the list here; there are 29 of them, plus an explanatory statement.

Except if you look carefully, only 28 of the papers have actually been released. Paper XI, 'Consistency of the data', is still listed as "in preparation". Now, what this paper was supposed to cover was the question of how consistent Planck results were with previous CMB experiments, such as WMAP. We already knew that there were some inconsistencies, both in the derived cosmological parameters such as the dark energy density and the Hubble parameter, and in the overall normalization of the power seen on large scales. We might expect this missing paper to tell us the reason for the inconsistencies, and perhaps to indicate which experiment got it wrong (if any). The problem is that at present there is no indication when we can expect this paper to arrive – when asked, members of the Planck team only say "soon". I presume that the reason for the delay is that they are having some unforeseen difficulty in the analysis.

However, if you were paying attention last week, you might have noticed a new submission to the arXiv that provided an interesting little insight into what might be going on. This paper by Anne Mette Frejsel, Martin Hansen and Hao Liu – the authors are at the Niels Bohr Institute in Copenhagen, and in fact all three recently visited Bielefeld for our Kosmologietag workshop – applied a particular consistency check to Planck and WMAP data ... and found WMAP wanting.

The test they applied is really pleasingly simple. Suppose you want to measure the CMB temperature anisotropies on the sky using your wonderful satellite – either WMAP or Planck. Unfortunately, there's a great big galaxy (our galaxy) in the way, obscuring quite a large fraction of the sky:

The CMB sky as seen by Planck in the 353 GHz channel.  Obviously there's a lot of foreground in the way. (This is not the best frequency for viewing the CMB, by the way. I chose it only because it illustrates the foregrounds quite nicely!)

Now, as I've mentioned before, there are clever ways of removing this foreground and getting to the underlying CMB signal. The CMB signal is what is interesting for cosmologists, because that is what gives us the insight into fundamental physics. Foregrounds are about messy stuff to do with the distribution of dust in our galaxy: the details are complicated, but the underlying physics is not that interesting (ok, maybe it is, but to different people). Anyway, using their clever techniques (and measurements of the CMB+foreground at several different frequencies), the guys at Planck or WMAP come up with the best map they can of the CMB with the foreground removed.

Planck's SMICA map of the CMB.

The map above shows the Planck team's effort. Well actually they produced four different such "CMB only" maps, constructed by four different methods of removing the foregrounds. These are known as the SMICA, SEVEM, NILC and Commander-Ruler maps, the names indicating the different foreground-removal algorithms used. For some reason, Commander-Ruler appears not to be recommended for general use. WMAP on the other hand produced only one, known as the Internal Linear Combination or ILC map. (Planck's NILC is meant to be a counterpart to WMAP's ILC.)
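As an aside, the basic idea behind an ILC-type map is simple enough to sketch in a few lines of Python. In thermodynamic temperature units the CMB contributes identically to every frequency channel, so one looks for the linear combination of the frequency maps, with weights summing to one, that has the smallest variance: the constraint preserves the CMB while the variance minimisation suppresses everything else. This is only the textbook version and my own sketch – the real WMAP and Planck pipelines work in separate sky regions and/or in harmonic or needlet space, which is partly where the different acronyms come from.

    import numpy as np

    def ilc_map(freq_maps):
        # freq_maps: array of shape (n_freq, n_pix), all in thermodynamic
        # temperature units so that the CMB enters every row identically.
        cov = np.cov(freq_maps)                       # n_freq x n_freq empirical covariance
        cov_inv_ones = np.linalg.solve(cov, np.ones(len(freq_maps)))
        weights = cov_inv_ones / cov_inv_ones.sum()   # w = C^-1 e / (e^T C^-1 e), sum(w) = 1
        return weights @ freq_maps                    # minimum-variance estimate of the CMB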

Now, although the algorithms used to produce these maps are, as I said, very clever, the resultant maps are never going to be completely foreground-free. Let's express this as the equation

map = CMB + noise 

where the "noise" term includes foregrounds as well as instrument noise, systematics and other contaminants. If you have more than one map, they see the same fundamental CMB, but the noise contribution to each is different. So you can subtract one from the other to get a new map consisting of their difference:

difference = map1 – map2 = noise1 – noise2.
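In practice, forming and inspecting such a difference map takes only a few lines with the healpy package. The sketch below uses placeholder filenames, assumes the maps are in galactic coordinates (as CMB maps usually are), and glosses over the fact that the two maps first need to be brought to a common resolution and common units before being compared.

    import numpy as np
    import healpy as hp

    map1 = hp.read_map("cmb_map_method1.fits")   # placeholder filenames
    map2 = hp.read_map("cmb_map_method2.fits")
    diff = map1 - map2                           # the CMB cancels, leaving noise1 - noise2

    # Crudely mask the galactic plane (|b| < 20 degrees, an arbitrary cut)
    # to see how much of the residual lives away from the galaxy.
    nside = hp.get_nside(diff)
    _, lat = hp.pix2ang(nside, np.arange(hp.nside2npix(nside)), lonlat=True)
    diff_masked = np.where(np.abs(lat) < 20.0, hp.UNSEEN, diff)
    hp.mollview(diff_masked, title="Difference map, galactic plane masked")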

Since most of the residual noise should be due to the galactic foreground, most of the features in the difference map should be around the galactic plane. If the foreground removal has been reasonably successful, these features should also be small. And for the Planck maps, that is in fact what Frejsel, Hansen and Liu find:

 NILC–SMICA, NILC–SEVEM and SMICA–SEVEM difference maps. Figure from arXiv:1305.4033.

So the various different methods used by Planck seem to give self-consistent answers.

The same is not true, however, for WMAP. Of course WMAP only use the one method of removing foreground, but they did provide different maps based on the data they had collected after 7 years of operation and after 9 years. The ILC9–ILC7 difference map looks quite different:

ILC9–ILC7 difference map on the left, and with a galactic mask overlaid on the right. Figure from arXiv:1305.4033.

Most of the difference appears well away from the galactic plane, as you can see in the right-hand figure, where the galaxy is masked out. So there is some important source of noise that is not foreground contamination – probably some systematic effect – that has affected the WMAP ILC map. Even more importantly, it is some kind of systematic effect that has changed between the 7-year and the 9-year WMAP data releases, meaning that the ILC9 and ILC7 maps do not appear to be consistent with each other. Frejsel et al. discuss a method of quantifying this, but I won't go into that here because the impression created by the images is both dramatic enough and entirely in line with the quantitative analysis.

As you might have expected, the same method shows that WMAP's ILC9 map is thoroughly inconsistent with the various Planck maps (the picture here is even worse than that between the two ILC maps). But perhaps surprisingly, ILC7 is perfectly consistent with Planck. So it appears that whatever might have affected the WMAP results only affected the final data release.

I guess one should be careful not to make too much of a fuss about this. The results from Planck and WMAP are, generally speaking, in pretty good agreement, except for some problems at the very largest scales. It is also true that the WMAP team themselves do not use the ILC map for most of their analysis (except for the low multipoles, $\ell<32$ – that is, the very largest scales!). But I'm sure this paper will provoke some head-scratching among the WMAP team as they try to figure out what has happened here. Oh, and if you are a cosmologist using the ILC9 map for your own analysis, you should probably check whatever conclusions you draw using some other maps before publishing!

All in all, I think I'm rather looking forward to Planck's consistency paper when it does finally come out!

Tuesday, May 21, 2013

One year on

It has been a long month-and-a-bit since I last had the time to write a proper post here. Primarily this is because I am not very good at doing more than one thing at a time – at least while attempting to do those things properly – and I have been working on some real research papers, which I thought I should try to do properly. As a result science communication, and blogging in general, has had to wait. But I will get back in the swing of things very soon: there are some interesting new results, some interesting rehashes of old arguments about inflation, and one of my own papers that I will write about over the next couple of weeks.

One of the many things that I omitted to mention during this little break was the fact that Blank On The Map had its first anniversary a couple of weeks ago. All in all, I would say it has been quite a satisfying year of blogging, and, at least relative to my prior expectations, a reasonably successful one too. Blogger doesn't provide me with many detailed statistics, but it does tell me that despite the low recent rate of new postings, there have been roughly 23,000 clicks over the last twelve months. A sizeable portion of this traffic came from one link on Peter Woit's blog (boy he must have a huge number of readers!) – though the majority appear to arrive via Google search, either by accident or design.

Anyway, to all those who have arrived here at some point over the last year, welcome! I hope you found what you were looking for, and enjoyed what you read. This explains what this blog is about. In case you haven't worked your way through the archives, here is a short selection of some highlights from the last year:


I should also use this post to try to get some feedback from (all three) regular readers. Do you think I should post more often? Less often? More posts about cosmology and fewer about mountaineering and other stuff in general? Or the other way round? Let me know through the comments box. Constructive criticism of the writing style or any other aspect of this blog will also be well received, though I can't promise to change ...

Tuesday, April 9, 2013

Celebrating Tom Lehrer

This is not a post about physics, but one to mark the birthday today of mathematician, teacher, satirist, lyricist and performer Tom Lehrer. Today he turns 85 – or, since he apparently prefers to measure his age in Centigrade, 29: convert 85 as if it were a temperature in Fahrenheit and you get about 29°C (I must remember to use that one myself sometime!).

To commemorate the occasion, the BBC ran a half-hour long radio feature on his life and work last Saturday. This is available to listen to here for another four days; do try to catch it before then!

Even readers who have not heard of Lehrer might have heard of some of his better-known songs, such as The Elements Song. Other pieces of simple comedy gold include Lobachevsky, or New Math. But for me the best of Lehrer's songs are the ones with darkly satirical lyrics juxtaposed with curiously uplifting melodies. (These were probably also part of the reason that he never achieved the mainstream popularity he deserved.) So I want to feature one such example here:


Kim Jong-un, I hope you are listening.

Sunday, April 7, 2013

Unnecessary spin

A few people have asked me why I have not blogged about the recent announcement about, and publication of, results from the Alpha Magnetic Spectrometer, which were widely touted as a possible breakthrough in the search for dark matter.

The reason I have not is simply that there are many other better informed commenters who have already done so. In case you have not yet read these accounts, you could do worse than going to Résonaances, or Ethan Siegel, or Stuart Clark in the Guardian, who provide commentary at different levels of technical detail. The simple short summary would be: AMS has not provided evidence about the nature of dark matter, nor is it likely to do so in the near future. The dramatic claims to the contrary are spin, pure and simple. Siegel in fact goes so far as to say "calling it misleading is generous, because I personally believe it is deceitful" (emphasis his own).

So I'm not going to make any more comments about that.

However, since this incident brought it up again, I do want to comment on a related piece of annoying spin, which is the habit that physicists in the business of communicating science to the public have of making vastly exaggerated claims about the possible practical applications of fundamental physics. The example that caught my attention this week occurred when Maggie Aderin-Pocock – who is apparently a space science research fellow at UCL – appeared on the BBC's Today programme on Thursday to discuss the significance of the AMS findings.

At one point in the discussion the interviewer John Humphrys asked a slightly tricky question: I understand that dark matter and dark energy are endlessly fascinating, he said, and that learning about the composition of the universe is very exciting. But what practical benefits might it bring? The answer Aderin-Pocock gave was that if we understood what dark matter and dark energy were, we might be able to use them to supply ourselves with energy – dark matter as a fuel source.

I'm sorry, but that is just rubbish.

Unfortunately, it's the kind of rubbish that is increasingly commonly voiced by scientists. You may argue that Aderin-Pocock was simply commenting on something she didn't understand – and if you listen to the whole interview (available here for a few days; skip to the segment between 1h 23m and 1h 26m), including the cringe-worthy suggestion that dark matter and dark energy are the same thing really (because $E=mc^2$, apparently), it's hard to avoid that conclusion. But a few weeks ago I thought I heard Andrew Pontzen and Tom Whyntie suggest something similar about the Higgs boson on BBC Radio 5 (unfortunately this episode is no longer on the iPlayer so I can't check the exact words they used). And here is Jon Butterworth seeming to suggest (in the midst of an otherwise reasonable piece) that the Higgs could be used to power interstellar travel ...

Why do people feel the need to do this? It's patently rubbish, and they know better. Do we as a scientific community feel that continued public support of science is so important that we should mislead or deceive the public in order to guarantee our future access to it? Do we feel that there is no convincing honest case to be made instead? Or are we just too lazy to make the honest case, and so rely on the catchy but inaccurate soundbite instead?

I think the sensible answer to the question John Humphrys posed would go something like this. Discovering the nature of dark matter is a fascinating and exciting adventure. Knowing the answer will almost certainly have no practical applications whatsoever. However, on the journey to the answer we will have to develop new technologies and equipment (made of ordinary matter!) which may serendipitously turn out to have spin-off applications that we cannot yet foresee. More importantly, the very fact that the search is fascinating is part of what draws talented and creative young minds to physics – indeed to science – in the first place, from where they go on to enrich our society in a myriad of different ways, none of which may later be connected to dark matter at all. I tried to make this case at greater length here in the early days of this blog.

It's a more subtle argument than just throwing empty phrases about "energy source" around, and it might be hard to reduce to a soundbite. But it is justifiable, and also honest. And since science is after all about careful argumentation, let's have less spin all round please.

Wednesday, March 27, 2013

Explaining Planck by analogy


Explaining physics to the public is hard. Most physicists do a lousy job of conveying a summary of what their research really means and why it is important, without the use of jargon and in terms that can be readily understood. So it is not particularly surprising that occasionally non-experts trying to translate these statements for the benefit of other non-experts come up with misleading headlines such as this, or this.

Just to be clear: Planck has not mapped the universe as it was in the first tiny fraction of a second. (To be fair, most other reports correctly make this distinction, though they differ widely on when inflation is supposed to have occurred.) I think this is an important thing to get right, and I'm going to try to explain why, and what the CMB actually is.

However, I'm going to try to do so with the help of an analogy. This analogy is not my original invention – I heard Simon White use it during the Planck science briefing – but I think it is brilliant, simple to understand and not vastly misleading. So, despite the health warning about analogies above, I'm going to run with it and see how far we get.

Thursday, March 21, 2013

What Planck has seen

Update at 16:30 CET: I've now had a chance to listen to the main science briefing, and also to glance at some of the scientific papers released today, albeit very briefly. So here are a few more thoughts, though in actual fact it will take quite some time for cosmologists to fully assess the Planck results.

The first thing to say – and it's something easy to forget to say – is just what a wonderful achievement it is to send a satellite carrying two such precise instruments up into space, station it at L2, cool the instruments to a tenth of a degree above absolute zero with fluctuations of less than one part in a million about that, spin the satellite once per minute, scan the whole sky in 9 different frequency bands, subtract all the messy foreground radiation from our own galaxy and even our solar system, all to obtain this perfect image of the universe as it was nearly 14 billion years ago:

The CMB sky according to Planck.

So congratulations and thanks to the Planck team!

Now I said all that first up because I don't want to now sound churlish when I say that overall the results are a little disappointing for cosmologists. This is because, as I noted earlier in the day, there isn't much by way of exciting new results to challenge our current model of the universe. And of course physicists are more excited by evidence that what they have hitherto believed was wrong than by evidence that it continues to appear to be right.

There are however still some results that will be of interest, and where I think you can expect to see a fair amount of debate and new research in the near future.

Firstly, as I pointed out earlier, Planck sees the same large scale anomalies as WMAP, thus confirming that they are real rather than artifacts of some systematic error or foreground contamination (I believe Planck even account for possible contamination from our own solar system, which WMAP didn't do). These anomalies include not enough power on large angular scales ($\ell\leq30$), an asymmetry between the power in two hemispheres, a colder-than-expected large cold spot, and so on.

The problem with these anomalies is that they lie in the grey zone between being not particularly unusual and being definitely something to worry about. Roughly speaking, they're unlikely at around a 1% level. This means that how seriously you take them depends a lot on your personal prejudices (sorry, priors). One school of thought – let's call it the "North American school" – tends to downplay the importance of anomalies and question the robustness of the statistical methods by which they were analysed. The other – shall we say "European" – school tends instead to play them up a bit: to highlight the differences with theory and to stress the importance of further investigation. Neither approach is wrong, because as I said this is a grey area. But the Planck team, for what it's worth, seem to be in the "European" camp.

The second surprise is the change in the best-fit values for the parameters of the simplest $\Lambda$CDM model. In particular the Hubble parameter is lower than WMAP's, which was already getting a bit low compared to distance-ladder measurements from supernovae. This will be a cause for concern for the people working on distance-ladder measurements, and potentially something interesting for inventive theorists.

And finally, something close to my own heart. A few days ago I wrote a post about the discrepancy in the integrated Sachs-Wolfe signal seen from very rare structures, and pointed out that this effect had now been confirmed in two independent measurements. Almost immediately I had to change that statement, because one of those independent measurements had been partially retracted.

Well, the Planck team have been on the case (here, paper XIX), and have now filled that gap with a second independent measurement (as well as re-confirming the first one). The effect is definitely there to be seen, and it is still discrepant with $\Lambda$CDM theory (though I'll need to read the paper in more detail before quantifying that statement).

So there's a ray of hope for something exciting.

 
11:30 am CET: Well, ESA's first press conference to announce the cosmological results from Planck has just concluded. The full scientific papers will be released in about an hour, and there will be a proper technical briefing on the results in the afternoon (this first announcement was aimed primarily at the media). However, here is a very quick summary of what I gathered is going to be presented:
  • The standard Lambda Cold Dark Matter Model continues to be a good fit to CMB data
  • However, the best fit parameters have changed: in particular, Planck indicates slightly more dark matter and ordinary (baryonic) matter than WMAP did, and slightly less dark energy. (This is possibly not a very fair comparison – my hunch is that the Planck values are obtained from Planck data alone, whereas the "WMAP values" that were quoted were actually the best fit to WMAP plus additional (non-CMB) datasets.)
  • The value of the Hubble parameter has decreased a bit, to around 67 km/s/Mpc. Given the error bars this is actually getting a bit far away from the value measured from supernovae, which is around 74 km/s/Mpc. I think the quoted error bars on the measurement from supernovae are underestimated (a rough estimate of the size of this discrepancy follows after this list).
  • The Planck value of the spectral tilt is a bit smaller than, but consistent with, what WMAP found.
  • There is no evidence for extra neutrino-like species.
  • There is no evidence for non-zero neutrino masses.
  • There is no evidence for non-Gaussianity.
  • There is no evidence for deviations from a simple power-law form of the primordial power spectrum.
  • No polarisation data, and therefore no evidence of gravitational waves or their absence, for around another year.
  • There is evidence for anomalies in the large-scale power, consistent with what was seen in WMAP. We'll have to wait and see how statistically significant this is – the general response to the anomalies WMAP saw could be summarised as "interesting, but inconclusive"; I don't think Planck is going to do a lot better than this (and the bigging-up of it in the press conference might have had more to do with the lack of other truly exciting discoveries), but I'd love to be surprised!
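To put a rough number on the size of that Hubble parameter discrepancy: taking $67\pm1.2$ and $74\pm2.4$ km/s/Mpc as illustrative central values and error bars (the error bars here are my own rough approximations to the numbers being quoted at the time, not official figures), the difference amounts to $$\frac{74-67}{\sqrt{1.2^2+2.4^2}}\approx 2.6$$ standard deviations – a genuine tension, but only if both sets of error bars are taken at face value, which is exactly why the question of whether the supernova errors are underestimated matters.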
That's about all I got out of the media briefing. Obviously we are all waiting for more details this afternoon! 

Wednesday, March 20, 2013

The Planck guessing game

At 10 am CET on Thursday morning, the Planck mission will hold a press conference and announce the first cosmology results based on data from their satellite, which has now been in orbit for nearly 1406 days, according to the little clock on their website. (I think the conference information will be available live here, though the website's not as clear as it could be.)

Planck is an incredible instrument, which has been measuring the pattern of cosmic microwave background (CMB) temperature anisotropies with great precision. And the CMB itself is an incredible treasure trove of information about the history of the universe, telling us not only about how it began, but what it consists of, and what might happen to it in the future. When the COBE and WMAP satellites first published detailed data from measurements of the CMB, the result was basically a revolution in cosmology and our understanding of the universe we live in. Planck will provide a great improvement in sensitivity over WMAP, which in turn was a great improvement on everything that came before it.

Another feature of the Planck mission has been the great secrecy with which they have guarded their results. The members of the mission themselves have known most of their results for some time now. Apparently on the morning of March 21st they will release a cache of something like 20 to 30 scientific papers detailing their findings, but so far nobody outside the Planck team itself has much of an idea what will be in them.

So let's have a little guessing game. What do you think they will announce? Dramatic new results, or a mere confirmation of WMAP results and nothing else? I'll list below some of the things they might announce and how likely I think they are (I have no inside information about what they actually have seen). Add your own suggestions via the comments box!
 
Tensors: Planck is much more sensitive to a primordial tensor perturbation spectrum than the best current limits. If they did see a non-zero tensor-to-scalar ratio, indicative of primordial gravitational waves, this would be pretty big news, because it is a clear smoking gun signal for the theory of inflation. Of course there are other bits of evidence that make us think that inflation probably did happen, but this really would nail it.

Unfortunately, I think it is unlikely that they will see any tensor signal – not least because many (and some would argue the most natural) inflation models predict it should be too small for Planck's sensitivity.

Number of relativistic species: CMB measurements can place constraints on the number of relativistic species in the early universe, usually parameterised as the effective number of neutrino species. I wrote about this a bit here. The current best fit value is $N_{\rm eff}=3.28\pm0.40$ according to an analysis of the latest WMAP, ACT and SPT data combined with measurements of baryon acoustic oscillations and the Hubble parameter (though some other people find a slightly larger number).

I would be very surprised indeed if Planck did not confirm the basic compatibility of the data with the Standard Model value $N_{\rm eff}=3.04$. It will help to resolve the slight differences between the ACT and SPT results and the error bars will probably shrink, but I wouldn't bet on any dramatic results.

Non-Gaussianity: One thing that all theorists would love to hear is that Planck has found strong evidence for non-zero non-Gaussianity of the primordial perturbations. At a stroke this would rule out a large class of models of inflation (and there are far too many models of inflation to choose between), meaning we would have to somehow incorporate non-minimal kinetic terms, multiple scalar fields or complicated violations of slow-roll dynamics during inflation. Not that there is a shortage of these sorts of models either …

Current WMAP and large-scale structure data sort of weakly favour a positive value of the non-Gaussianity parameter $f_{\rm NL}^{\rm local}$ that is larger than the sensitivity claimed for Planck before its launch. So if it lives up to that sensitivity billing we might be in luck. On the other hand, my guess (based on not very much) is it's more likely that they will report a detection of the orthogonal form, $f_{\rm NL}^{\rm ortho}$, which is more difficult – but not impossible – to explain from inflationary models. Let's see.

Neutrino mass: The CMB power spectrum is sensitive to the total mass of all neutrino species, $\Sigma m_\nu$, through a number of different effects. Massive neutrinos form (hot) dark matter, contributing to the total mass density of the universe and affecting the distance scale to the last-scattering surface. They also increase the sound horizon distance at decoupling and increase the early ISW effect by altering the epoch of matter-radiation equality.

WMAP claim a current upper bound of $$\Sigma m_\nu<0.44\;{\rm eV}$$ at 95% confidence from the CMB and baryon acoustic oscillations and the Hubble parameter value. But a more recent SPT analysis suggests that WMAP and SPT data alone give weak indications of a non-zero value, so it is possible that Planck could place a lower bound on $\Sigma m_\nu$. This would be cool from an observational point of view, but it's not really "new" physics, since we know that neutrinos have mass.

Running of the spectral index: Purely based on extrapolating from WMAP results, I expect Planck will find some evidence for non-zero running of the spectral index. But given the difficulty in explaining such a value in most inflationary models, I also expect the community will continue to ignore this, especially since the vanilla model with no running will probably still provide an acceptable fit to the data.

Anything else? Speculate away … we'll find out on Thursday!

Tuesday, March 19, 2013

A real puzzle in cosmology: part II

(This post continues the discussion of the very puzzling observation of the integrated Sachs-Wolfe effect, the first part of which is here. Part II is a bit more detailed: many of these questions are real ones that have been put to me in seminars and other discussions.)

Update: I've been informed that one of the papers mentioned in this discussion has just been withdrawn by the authors pending some re-investigation of the results. I'll leave the original text of the post up here for comparison, but add some new material in the relevant sections.

Last time you told me about what you said was an unusual observation of the ISW effect made in 2008.

Yes. Granett, Neyrinck and Szapudi looked at the average CMB temperature anisotropies along directions on the sky where they had previously identified large structures in the galaxy distribution. For 100 structures (50 "supervoids" and 50 "superclusters"), after applying a specific aperture photometry method, they found an average temperature shift of almost 10 micro Kelvin, which was more than 4 standard deviations away from zero.
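(For the curious, that kind of measurement can be sketched in a few lines of Python with healpy. The compensated top-hat filter below – the mean temperature inside an aperture minus the mean in the surrounding annulus – and the 4-degree aperture radius are illustrative assumptions of mine; see the papers for the exact choices actually made, and note that the sky positions must be given in the same coordinate system as the map.)

    import numpy as np
    import healpy as hp

    def stacked_aperture_photometry(cmb_map, lon_deg, lat_deg, theta_deg=4.0):
        # For each sky position, take the mean temperature inside a disc of
        # radius theta and subtract the mean in the annulus out to sqrt(2)*theta
        # (so that a featureless background averages to zero), then average the
        # result over all the supplied positions.
        nside = hp.get_nside(cmb_map)
        theta = np.radians(theta_deg)
        signal = []
        for lon, lat in zip(lon_deg, lat_deg):
            vec = hp.ang2vec(lon, lat, lonlat=True)
            inner = hp.query_disc(nside, vec, theta)
            outer = hp.query_disc(nside, vec, np.sqrt(2.0) * theta)
            annulus = np.setdiff1d(outer, inner)
            signal.append(cmb_map[inner].mean() - cmb_map[annulus].mean())
        return np.mean(signal)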

Then you claimed that this observed value was five times too large. That if our understanding of the universe were correct, they should not have seen a value definitely bigger than zero. Theory and observation grossly disagree.

Right again. Our theoretical calculation showed the signal should have been at most around 2 micro Kelvin, which is pretty much the same size as the contamination from random noise.

But you used a simple theoretical model for your calculations. I don't like your model. I think it is too simple. That's the answer to your problem – your calculation is wrong.

That could be true – though I don't think so. Why don't we go over your objections one by one?

Thursday, March 7, 2013

Higgs animations

The news recently from the LHC experiments hasn't been very exciting for my colleagues on the particle theory side of things (see for instance here for summaries and discussion). But via the clever chaps at ATLAS we do have a series of very nice gif animations showing how the evidence for the existence of the Higgs changed with time, as they collected more and more data.

This example shows the development of one plot, for the Higgs-to-gamma-gamma channel:

ATLAS Higgs boson diphoton channel animation
That's pretty cool. Also nice to see the gif format being put to better use than endless animations of cats doing silly things! (Though if you are a PhD student, you might find this use of gifs amusing ... )

Here's another one, this time for the decay channel to 4 leptons:

ATLAS Higgs boson 4 lepton channel animation

Note that in this case the scale on the y axis is also changing with time! There's a version of this animation with a fixed axis here, and one of the gamma-gamma channel with a floating axis here.

Tuesday, March 5, 2013

A real puzzle in cosmology: part I

In a previous post, I wrote about recent updates to the evidence from the cosmic microwave background for extra neutrino species. This was something that a lot of people in cosmology were prepared to get excited about, but I argued that reality turned out to be really rather boring. This is because the new data neither showed anything wrong with the current model of what the universe is made of, nor managed to rule out any competing models.

Today I'd like to write about something else, which currently is a really exciting puzzle. Measurements have been made of a particular cosmological effect, known as the integrated Sachs-Wolfe or ISW effect, and the data show a measured value that is five times larger than it should be if our understanding of gravitational physics, our model of the universe, and our analysis of the experimental method are correct. No one yet knows why this should be so. The point of this post is to try to explain what is going on, and to speculate on how we might hope to solve the puzzle. It has been written in a conversational format with the lay reader in mind, but there should be some useful information even for experts.

Before beginning, I should point out that this is what a lot of my own research is about at the moment. In fact, this was the topic of a seminar I gave at the University of Helsinki last week (and much of this post is taken from the seminar). My host in Helsinki, Shaun Hotchkiss, with whom I have written two papers on this "ISW mystery", has also put up several posts about it at The Trenches of Discovery blog over the last year (see here for parts I, II, III, IV, V, and VI). I will be more concise and limit myself to just two!

Obviously you could view this as a bit of an effort at self-publicity. But at a time when, both in particle physics and cosmology, many experiments are disappointingly failing to provide much guidance on new directions for theorists to follow, this is one of the few results that could do so. (Unlike a lot of the rubbish you might read in other popular science reports, it also has a pretty good chance of being true.) So I won't apologise for it!

What is the integrated Sachs-Wolfe effect?

The entire universe is filled with very cold photons. These photons weren't always very cold; on the contrary, they are leftovers from the time soon after the Big Bang when the universe was still very young and very small and very hot, so hot that all the protons and electrons (and a few helium nuclei) formed a single hot plasma, the photons and electrons bouncing off each other so often that they all had the same temperature. And as the universe was expanding, this plasma was also cooling, until suddenly it was cool enough for the electrons and protons to come together to form hydrogen atoms, without immediately getting swept apart again. And when this happened, the photons stopped bouncing off the electrons, and instead just continued travelling straight through space minding their own business, cooling as the universe continued to expand. (The neutrinos, which only interact weakly with other stuff, had stopped bouncing and started minding their own business some time before this.)

What I've just told you is a cartoon picture of the history of the early universe. These cooling photons streaming through space form the cosmic microwave background radiation, or CMB for short. They fill the universe, and they arrive at Earth from all directions – they even make up about 1% of the 'snow' you see on an (old-school) untuned TV set. 

The most important property of the CMB photons is that, to a very great degree of accuracy, they are all at the same temperature, whichever direction they come from. This is how we know that the universe used to be very hot, and how we learned that it has been expanding since then. It is also why we think it is probably very uniform. The second important property of CMB photons is that they are not all at the same temperature – by looking carefully enough with an extremely sensitive instrument, we can see tiny anisotropies in temperature across the sky. These differences in temperature are the signs of the very small inhomogeneities in the early matter-radiation plasma which are responsible for all the structure we see around us in the night sky today. When the photons decoupled from the primordial plasma, they kept the traces of the tiny inhomogeneities as they streamed across the universe. The matter, on the other hand, was subject to gravity, which took the small initial lumpiness and over billions of years caused it to become bigger and lumpier, forming stars, galaxies, clusters of galaxies and vast clusters of clusters.

The CMB sky as seen by the WMAP satellite. The colours represent deviations of the measured CMB temperature from the mean value – the CMB anisotropies (red is hot and blue is cold). This map uses the Mollweide projection to display a sphere in two dimensions. Image credit: NASA / WMAP Science team. 

Yes I knew that, but what is the integrated Sachs-Wolfe effect?

Thursday, February 28, 2013

The nature of publications

A paper in the journal of Genome Biology and Evolution has been doing the rounds on the internet recently and was shown to me by a friend. It is titled "On the immortality of television sets: “function” in the human genome according to the evolution-free gospel of ENCODE", by Graur et al. The title is blunt enough, but the abstract is extraordinarily so. Let me quote the entire thing here:
A recent slew of ENCODE Consortium publications, specifically the article signed by all Consortium members, put forward the idea that more than 80% of the human genome is functional. This claim flies in the face of current estimates according to which the fraction of the genome that is evolutionarily conserved through purifying selection is under 10%. Thus, according to the ENCODE Consortium, a biological function can be maintained indefinitely without selection, which implies that at least 80 – 10 = 70% of the genome is perfectly invulnerable to deleterious mutations, either because no mutation can ever occur in these “functional” regions, or because no mutation in these regions can ever be deleterious. This absurd conclusion was reached through various means, chiefly (1) by employing the seldom used “causal role” definition of biological function and then applying it inconsistently to different biochemical properties, (2) by committing a logical fallacy known as “affirming the consequent,” (3) by failing to appreciate the crucial difference between “junk DNA” and “garbage DNA,” (4) by using analytical methods that yield biased errors and inflate estimates of functionality, (5) by favoring statistical sensitivity over specificity, and (6) by emphasizing statistical significance rather than the magnitude of the effect. Here, we detail the many logical and methodological transgressions involved in assigning functionality to almost every nucleotide in the human genome. The ENCODE results were predicted by one of its authors to necessitate the rewriting of textbooks. We agree, many textbooks dealing with marketing, mass-media hype, and public relations may well have to be rewritten.
Ouch.

The paper that Graur et al. implicitly deride as "marketing, mass-media hype and public relations" is one of a series of publications in Nature (link here for those interested) by the ENCODE consortium. I'm not going to claim any expertise in genetics, though the arguments put forward by Graur appear sensible and convincing.1 But I do think it is interesting that the ENCODE papers were published in Nature.

Nature is of course a very prestigious journal to publish in. In some fields, the presence or lack of a Nature article on a young researcher's CV can make or break their career chances. It is very selective in accepting articles: not only must contributions meet all the usual requirements of peer-review, they should also be judged to be in "the five most significant papers" published in that discipline that year. It has a very high Impact Factor rating, probably one of the highest of all science journals. In fact it is apparently one of the very few journals that does better on citation counts than the arXiv, which accepts everything.

But among some cosmologists, Nature has a reputation for often publishing claims that are exaggerated, that describe dramatic results which turn out to be less dramatic in subsequent experiments, or that are just plain wrong.2 One professor even once told me – and he was only half-joking – that he wouldn't believe a particular result because it had been published in Nature.

It is easy to see how such things can happen. The immense benefit of a high-profile Nature publication to a scientist's career creates pressure to find results that are dramatic enough to pass the "significance test" imposed by the journal, or to exaggerate the interpretation of results that are not quite dramatic enough. On the other hand, if a particular result does start to look interesting enough for Nature, the authors may be – perhaps unwittingly – less likely to subject it to the same level of close scrutiny they would otherwise give it. The journal is then more reliant on its referees to provide the scrutiny needed to weed out the hype from the substance, but even with the most efficient refereeing system in the world, given enough submitted papers full of earth-shattering results, some amount of rubbish will always slip through.

I was thinking along these lines after seeing Graur et al.'s paper, and I was reminded of a post by Sabine Hossenfelder at the Backreaction blog, which linked to this recent pre-print on the arXiv titled "Deep Impact: Unintended Consequences of Journal Rank". As Sabine discusses, the authors point to quite a few undesirable aspects of ranking journals by "impact factor", and the consequent rush to publish in the top-ranked journals. The publication bias effect (and, in some cases, the subsequent retractions that follow) appears to be influenced to a degree by the impact factor of the journal in which a study is published. Another thing that might be interesting (though probably hard to check) is the link between the likelihood of scientists holding a press conference or issuing a press release to announce a result, and the likelihood of that result being wrong. I'd guess the correlation is quite high!

Of course, the only real reason that the impact factor of the journal in which your paper is published matters is that it can be used as a proxy indication of the quality of your work, for the benefit of people who can't be bothered, or are unable, to read the original work and judge it on its merits.

The other yardstick by which researchers are often judged is the number of citations their papers receive, which at least has the (relative) merit of being based on those papers alone, rather than other people's papers. Combining impact factor and citation count is even sillier – unless they are counted in opposition, so that a paper that is highly cited despite being in a low-impact journal gets more credit, and a moderately cited one in a high-impact journal gets less!
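Just to make that "counted in opposition" arithmetic concrete, here is a toy scoring rule. This is entirely made up by me for illustration – the function name and numbers are hypothetical, and it is emphatically not a real or recommended bibliometric:

    # Toy "opposition" score: reward citations earned despite a low-impact venue.
    # Purely illustrative function name and numbers; not a real metric.
    def opposition_score(citations, journal_impact_factor):
        return citations / max(journal_impact_factor, 1.0)

    # A well-cited paper in a modest journal beats a moderately cited one in a glamour journal:
    print(opposition_score(citations=150, journal_impact_factor=2.5))   # 60.0
    print(opposition_score(citations=60, journal_impact_factor=40.0))   # 1.5

Of course, the real point is that any such formula is a poor substitute for actually reading the paper.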

Anyway, bear these things in mind if you ever find yourself making a reflexive judgement about the quality of a paper you haven't read based on where it was published.

1The paper includes a quote which pretty well sums up the problem for ENCODE:
"The onion test is a simple reality check for anyone who thinks they can assign a function to every nucleotide in the human genome. Whatever your proposed functions are, ask yourself this question: Why does an onion need a genome that is about five times larger than ours?"
2Cosmologists (the theoretical ones, at any rate) actually hardly ever publish in Nature; even observational cosmology only rarely features there. So you might regard this as a bit of a case of sour grapes. I don't think that is the case, simply because it isn't really relevant to us. Not having a Nature publication is not a career-defining gap for a cosmologist: it's just normal.

Tuesday, February 19, 2013

Things to Read, 19th February

I have just arrived in Helsinki, where I am visiting current collaborators and future colleagues at the Helsinki Institute of Physics for a few days. I will give a talk next Wednesday, about which more later. In the meantime though, a quick selection of interesting things I have read recently:
  • Did you know that about 6 million years ago the Mediterranean Sea is believed to have basically evaporated, leaving behind a dry seabed? This is called the Messinian Salinity Crisis, which I first learned about from this blog. There's also an animated video showing a hypothesised course of events leading to the drying up:


    Very soon after, the Atlantic probably came flooding back in over the Strait of Gibraltar – an event known as the Zanclean Flood – and, according to some models, could have refilled the whole basin in a remarkably short time. Spare a thought for the poor hippopotamuses that got stuck on the seabed ...
  • A long feature in next month's issue of National Geographic Magazine is called The Drones Come Home, by John Horgan. Horgan has written a blog piece about this at Scientific American, which he has titled 'Why Drones Should Make You Afraid'. In the blog piece he has a bullet-point summary of the most disturbing facts about unmanned aircraft (military or otherwise) taken from the main piece. Some of these include:

    - "The Air Force has produced [a video showing] possible applications of Micro Air Vehicles [...] swarming out of the belly of a plane and descending on a city, where [they] stalk and kill a suspect."
    - "The Obama regime has quietly compiled legal arguments for assassinations of American citizens without a trial"
    - "The enthusiasm of the U.S. for drones has triggered an international arms race. More than 50 other nations now possess drones, as well as non-governmental militant groups such as Hezbollah."

    Scary stuff; worth reading the whole thing.
  • I wrote some time ago about Niall Ferguson's economics argument with Paul Krugman (this was in the context of a lot of nonsense Ferguson was coming up with at the time, both in his Reith lectures for the BBC and in other publications). I just learned (via a post by Krugman, who also only just learned) that Ferguson had apparently already admitted, about a year ago, that he got it wrong. Krugman's response to that is here; I'd add that this admission didn't seem to stop Ferguson from continuing the same economic reasoning in his Reith lectures a few months later!
  • A review of John Lanchester's new novel Capital, by Michael Lewis in the New York Review of Books. Almost always with the NYRB, I read reviews of books before I have read the actual book. In this case the result was to make me resolve to buy a copy.

Saturday, February 16, 2013

Oxford Greenland Expedition

I sometimes wonder about the mix of topics I mention on this blog – is there too much physics, or not enough? Well, today's post is about something completely different: it is a straight-up publicity plug for the Oxford Greenland Expedition. This is a sea-faring and rock-climbing expedition organised by a group of students, and is exactly the kind of exciting exploratory adventure to which the title of this blog refers, so I'd like to support it as best I can. They have a fundraising page here. As I'll try to explain, I think it is a great plan, and they deserve all the support they can get!

The climbing members of the expedition team are all current or former students of Oxford University, and members of the Oxford University Mountaineering Club, whom I have known for a varying number of years (I've climbed on the same rope as many but not all of them). Between them, they have dreamt up an outrageously bold and brilliant plan for a climbing expedition in the Arctic – as the 'Objectives' page of their website puts it, their goals are:
  1. Sail to Greenland!
  2. Climb the Horn of Upernavik!
  3. Sail north! Find an 800 m pillar! Climb it!
  4. Explore for new climbs!
  5. Sail back!
(The rest of the website is quite amusing too, if you poke around it.) The Horn of Upernavik appears to be a very dramatic 1000-metre-high piece of rock rising straight out of the sea near Uummannaq:

The Horn of Upernavik.
Climbing the Horn has apparently been the objective of several previous expeditions, but none of them has succeeded. So if this team succeeds, they will be making (a small amount of) history ...

Although all are very good climbers and mountaineers, they are firmly within the ranks of the enthusiastic amateurs rather than the elite professionals. I think this makes the audacity of the undertaking all the more wonderful, and all the more appealing to romantic ideals.

Incidentally, two of the members of this team (Ian Faulkner and Tom Codrington) were last year part of a different and equally inspiring expedition to Kyrgyzstan, during which they, along with Ian Cooper, made only the second ever free ascent of the Mirror Route or Rusyaev Route on Peak 4810, after Alex Lowe and Lynn Hill. A report of this climb and some other things they did was published here. So they've definitely got a track record of achieving amazing things!

Anyway, best of luck to the team in their efforts this summer!

Tuesday, February 12, 2013

Seeing neutrinos in the CMB

One of the really cool things about cosmology is the fact that you can look at the sky and use the cosmic microwave background to determine that there must be at least three generations of neutrinos, without having directly observed them in any ground-based experiments.

Well, actually we already knew that there were at least three neutrino generations from ground-based experiments before we had anywhere near the technological capability to determine this from the CMB, but it's arguably still pretty cool. What would inarguably be pretty amazing is if we could use CMB observations to prove that there were further, as-yet-undiscovered generations of neutrinos that we could then go and look for elsewhere. Or, for a particular type of person, it might be quite satisfying to use the CMB to prove that there definitely weren't any more than the three generations we already know about, thus killing a lot of other people's pet theories stone dead.
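(For anyone wanting to see where the neutrino count actually enters – this is standard textbook material rather than anything specific to the new data – the relevant parameter is the effective number of relativistic species $N_{\rm eff}$, which fixes the total radiation density:
$$\rho_{\rm rad} = \rho_\gamma\left[1 + \frac{7}{8}\left(\frac{4}{11}\right)^{4/3} N_{\rm eff}\right],$$
where $\rho_\gamma$ is the photon energy density and the standard three-neutrino value is $N_{\rm eff}\simeq3.046$. Extra relativistic species increase the expansion rate around recombination, which alters the damping of the small-scale CMB anisotropies – and that is essentially what the CMB experiments are measuring when they quote constraints on $N_{\rm eff}$.)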

All this is (somewhat) topical because of the release of new data from three experiments measuring the CMB: the Wilkinson Microwave Anisotropy Probe (WMAP) satellite, the Atacama Cosmology Telescope (ACT) and the South Pole Telescope (SPT), which could in principle do something like this. Given my various recent blogging delays, however, I haven't been particularly quick about writing about this, and there are other good discussions elsewhere on the web, especially at Résonaances, that many readers will already have seen. I'm also afraid I don't have time for a layperson-level introduction to the topic right now. But there's a bit of a jumble of information from the different collaborations, some of it mildly contradictory, and in this post I'd like – partly as a note to myself – to summarise it all and sort it into a logical order.

In case you'd like to just see the executive summary, it is that – in my opinion – the new data doesn't tell us anything very interesting. For a fuller justification of this statement, keep reading.

Thursday, January 31, 2013

Type Ia single degenerate survivors must be overluminous

I noticed a paper on the arXiv today with exactly this title (well, except that I removed the superfluous capitalisation of words), which is due to be published in the Astrophysical Journal. The abstract of the paper says:
In the single-degenerate (SD) channel of a Type Ia supernovae (SN Ia) explosion, a main-sequence (MS) donor star survives the explosion but it is stripped of mass and shock heated. An essentially unavoidable consequence of mass loss during the explosion is that the companion must have an overextended envelope after the explosion. While this has been noted previously, it has not been strongly emphasized as an inevitable consequence. We calculate the future evolution of the companion by injecting $2$-$6\times10^{47}$ ergs into the stellar evolution model of a $1\,M_\odot$ donor star based on the post-explosion progenitors seen in simulations. We find that, due to the Kelvin-Helmholtz collapse of the envelope, the companion must become significantly more luminous ($10$-$10^3\, L_\odot$) for a long period of time ($10^3$-$10^4$ years). The lack of such a luminous "leftover" star in the LMC supernova remnant SNR 0509-67.5 provides another piece of evidence against the SD scenario. We also show that none of the stars proposed as the survivors of the Tycho supernova, including Tycho G, could plausibly be the donor star. Additionally, luminous donors closer than $\sim10$ Mpc should be observable with the Hubble Space Telescope starting $\sim2$ years post-peak. Such systems include SN 1937C, SN 1972E, SN 1986G, and SN 2011fe. Thus, the SD channel is already ruled out for at least two nearby SNe Ia and can be easily tested for a number of additional ones. We also discuss similar implications for the companions of core-collapse SNe.
Now, technical scientific papers are full of jargon and maybe the meaning of that paragraph isn't immediately clear to everyone (there's an accompanying YouTube video purporting to explain the content of the paper, but I didn't think it quite achieved that aim!). But I think this result is really quite interesting, and probably important in a broader cosmological sense.
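To get a rough feel for the numbers – this is my own back-of-the-envelope estimate, not a calculation taken from the paper – if the shock-heated donor has to radiate away an extra few $\times10^{47}$ ergs at a few hundred solar luminosities, the excess brightness should persist for something like
$$ t \sim \frac{\Delta E}{L} \sim \frac{3\times10^{47}\,\mathrm{erg}}{300\times3.8\times10^{33}\,\mathrm{erg\,s^{-1}}} \approx 2.6\times10^{11}\,\mathrm{s} \approx 8000\ \mathrm{years},$$
which is in the same ballpark as the $10^3$-$10^4$ year range the authors obtain from their detailed stellar evolution models.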