Blank On The Map: Explorations of physics and whimsy. By Sesh Nadathur.

Change of scenery (posted 2015-10-28)<div dir="ltr" style="text-align: left;" trbidi="on">
I should have reported this a while ago, but better late than never: I have moved to a new job, at a new institution, in a new country. In fact since the 1st of October this year I have been employed by the <a href="http://www.icg.port.ac.uk/">Institute for Cosmology and Gravitation</a> at the University of Portsmouth, where I now hold a <a href="http://ec.europa.eu/research/mariecurieactions/about-msca/actions/index_en.htm">Marie Skłodowska-Curie</a> individual fellowship and the ICG's Dennis Sciama fellowship (though unfortunately I do not get paid two salaries at once!).<br />
<br />
It's partly a sign of how much I have been neglecting this blog in recent times that I've only just got around to posting about this now, nearly a month after arriving here. But it is also partly due to the fact that I am <i>still</i> waiting for a functioning internet connection in my new home, so any blog postings must be done while still at my desk!<br />
<br />
Anyway, I'm very excited to be working here at the ICG, because it is one of the leading cosmology institutes in the UK, and therefore by extension in Europe and also the world. During staff induction meetings the institute directors mentioned several times the statistics for how much of the research output here was graded "world-leading" or "internationally excellent" in the recent UK <a href="http://www.ref.ac.uk/">REF review</a> — but I forget the numbers. In any case what's more important is that the ICG is home to world experts in many of the fields that I work in, and — crucially — that it is a very large, exciting and young department, with around 60 members, of whom 20 or so (20!) are young postdoctoral researchers, and another 20 are PhD students.<br />
<br />
Since I like including pictures with my posts, let me put some up of the famous names associated with my fellowships. Here is Marie Curie, from her 1903 Nobel Prize portrait:<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://upload.wikimedia.org/wikipedia/commons/9/93/Marie_Curie_1903.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="https://upload.wikimedia.org/wikipedia/commons/9/93/Marie_Curie_1903.jpg" width="282" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">"Marie Curie 1903" by the Nobel foundation. (Public domain)</td></tr>
</tbody></table>
and here is Dennis Sciama:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://upload.wikimedia.org/wikipedia/en/f/f9/Sciama2.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="396" src="https://upload.wikimedia.org/wikipedia/en/f/f9/Sciama2.jpg" width="400" /></a></div>
<br />
Marie Curie is of course justly famous around the world and I'm sure everyone reading this blog is aware of her fantastic achievements — two Nobel prizes, in two different sciences, first woman to win a Nobel prize, discoverer of two elements, the theory of radioactivity, and so on.<br />
<br />
Dennis Sciama is perhaps not quite so well known to those outside cosmology, but my, what a towering figure he is within the field, and in the history of British science. Even the <a href="http://www.genealogy.math.ndsu.nodak.edu/id.php?id=72653">list of his PhD students</a> reads like a who's-who of modern cosmology: Stephen Hawking, Martin Rees, George Ellis, Gary Gibbons, John Barrow, James Binney ...<br />
<br />
The ICG in particular seems to have rather a fondness for Sciama — in addition to the fellowship I've already mentioned, we work in the Dennis Sciama building, and many of us make use of the SCIAMA supercomputer. I was a little puzzled by this, because although Sciama moved from Cambridge to Oxford to Trieste during his career, I wasn't aware of any special link to Portsmouth.<br />
<br />
In fact the answer appears to be that a large proportion of the staff at the ICG happen to be (academically speaking) his grandchildren, having received their PhDs under the supervision of George Ellis or John Barrow. That, and the fact that it is always nice to name new buildings after really famous people!<br />
<br />
Anyway, this will hopefully mark the start of a rather more regular series of posts about cosmology here — for one thing, my Marie Curie proposal included a commitment to write short explanations of each new paper I produce over the next two years!<br />
<br />
PS: A small factoid that caught my attention about Dennis Sciama is that although born in Manchester, his family was actually of Syrian Jewish origin. They originally came from Aleppo, in fact, though his mother was born in Egypt. In light of recent events it seems worth pondering on that. </div>
10 tips for making postdoc applications (Part 2) (posted 2015-09-17)<div dir="ltr" style="text-align: left;" trbidi="on">
<div>
<div style="text-align: left;">
This post is part 2 of a series with unsolicited advice for postdoc applicants. Part 1, which includes a description of the motivation behind the posts and tips 1 through 5, can be found <a href="http://blankonthemap.blogspot.com/2015/09/10-tips-for-making-postdoc-applications.html">here</a>.</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
----</div>
<h3>
6. Promote yourself</h3>
</div>
<div>
<br /></div>
<div>
This sounds sort of obvious, but for cultural reasons may come more easily to some people than to others. I don't mean to suggest you should be boastful or oversell yourself in your CV and research statement. But be aware that people reading through hundreds of applications will not have time to read between the lines to discover your unstated accomplishments — so present the information that supports your application as clearly and as matter-of-factly as possible.</div>
<div>
<br /></div>
As a very junior member of the academic hierarchy, you will quite likely find that nobody in the hiring department has heard of you or read any of your papers, no matter how good they were. They are far more likely to recognise your name if you have taken some steps to make yourself known to them, for instance by arranging a research visit, giving a talk at their local journal club or seminar series, initiating a collaboration on topics of mutual interest, or simply introducing yourself and your research at some recent conference.<br />
<br />
Organisers of seminar series and journal clubs are generally more than happy to have volunteers help fill some speaking slots — put anxieties to one side and just email them to ask! And if they do have time for you, make sure you give a good talk.<br />
<br />
<h3>
7. Know what type of postdoc you're applying for</h3>
<div>
<br /></div>
<div>
There are, roughly speaking, three different categories of postdoc positions in high-energy and astro (and probably more generally in all fields of physics, if not in all sciences).</div>
<div>
<br /></div>
<div>
The first category is <i>fellowships</i>. These are positions which provide funding to the successful candidate to pursue a largely independent line of research. They therefore require you to propose a detailed and interesting research plan. They may sometimes be tied to a particular institution, but often they provide an external pot of money that you will be bringing to the department you go to. They are also highly prestigious. Examples applicable to a cosmology context are <a href="http://www.stsci.edu/institute/smo/fellowships/hubble">Hubble</a> and <a href="http://cxc.harvard.edu/fellows/">Einstein</a> fellowships in the US, <a href="http://www.cita.utoronto.ca/opportunities/national-fellows-programs/">CITA national fellowships</a> in Canada, <a href="https://royalsociety.org/grants-schemes-awards/grants/university-research/">Royal Society</a> and <a href="https://www.ras.org.uk/awards-and-grants/fellowships">Royal Astronomical Society</a> fellowships in the UK, <a href="https://www.humboldt-foundation.de/web/humboldt-fellowship-postdoc.html">Humboldt fellowships</a> in Germany, <a href="http://ec.europa.eu/programmes/horizon2020/en/h2020-section/marie-sklodowska-curie-actions">Marie Skłodowska-Curie fellowships</a> across Europe, and many others.</div>
<div>
</div>
<div>
<br />
The second category is roughly a <i>research assistant</i> type of position. Here you are hired by some senior person who has won a grant for <i>their</i> research proposal, or has some other source of funding out of which to pay your salary. You are expected to work on their research project, in a pretty closely defined role.<br />
<br />
The third category is a sort of mix of the above two, which I'll call a <i>semi-independent postdoc</i>. This is where the postdoc funding comes from someone's grant, but they do not specify a particular research programme at the outset, giving you a large degree of independence in what work you actually want to do.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://www.phdcomics.com/comics/archive/phd060713s.gif" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://www.phdcomics.com/comics/archive/phd060713s.gif" height="276" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Image credit: Jorge Cham.</td></tr>
</tbody></table>
<br />
When you apply, it is imperative that you know which of these three types of positions the people hiring you have in mind. There is no point trying to sell a detailed independent research plan — no matter how exciting — to someone who is only interested in whether you have the specific skills and experience to do what they tell you to. Equally, if what they want is evidence that you will drive research in your own directions, an application that lists your technical skills but doesn't present a coherent plan of what you will do with them is no good.<br />
<br />
Unfortunately postdoc ads don't use these terms, so it is generally not clear whether the position is of the second or third type. If in doubt, contact the department and find out what they want.<br />
<br />
Also, even when the distinction may be clear, many people still produce the same type of application for all jobs they apply for in one cycle. I know of an instance of a highly successful young scientist who managed to win not one but <i>two</i> prestigious individual fellowship grants worth hundreds of thousands of Euros each, and yet did not get a single offer from any of the non-fellowship positions they applied to! So tailor your applications to the situation.</div>
<br />
<div>
</div>
<h3 style="text-align: left;">
8. Apply for fellowships</h3>
<div>
<br /></div>
<div>
Applications for the more prestigious fellowships require more work — a lot more work — than other postdoc applications. They will require you to produce a proper research <i>proposal</i>, which will need to include a clear and inspiring outline — of anything between one and twenty pages in length — of what you intend to do with the fellowship. This will take a long time to think of and longer to write. They will probably require you to work on the application in collaboration with your host department, and they may have a million other specifically-sized hoops for you to jump through.<br />
<br />
Nevertheless, you should make a serious attempt to apply for them, for the following reasons.<br />
<br />
<ul style="text-align: left;">
<li>Any kind of successful academic career requires you to write lots of such proposals, so you might as well start practising now.</li>
<li>Writing a proposal forces you to prepare a serious plan of what research you want to do over the next few years, which will help clarify a lot of things in your mind, including how much you actually want to stay in the field (<a href="http://blankonthemap.blogspot.com/2015/09/10-tips-for-making-postdoc-applications.html">see point 2 again</a>!). </li>
<li>You will often write the proposal in collaboration with your potential host department. This makes it far more likely that they will think favourably of you should any other opportunities arise there later! For instance, I know of many cases where applicants for Marie Curie fellowships have ended up with positions in the department of their choice even though the fellowship application itself was ultimately unsuccessful.</li>
<li>Major fellowship programmes are more likely to have the resources and the procedures in place to thoroughly evaluate each proposal, reducing the unfortunate random element I'll talk about below. Many will provide individual feedback and assessments, which will help you if you reapply next year.</li>
<li>Counter-intuitively, success rates may be significantly higher for major fellowships than for standard postdoc jobs! Last year, the success rate for Hubble fellowships was 5%, for Einstein fellowships 6%, and for Marie Curie fellowships (over all fields of physics) almost 18%. All of these numbers compare quite well with those for standard postdoc positions! Granted, this is at least partly due to self-selection by applicants who don't think they can prepare a good enough proposal in the first place, but it is still something to bear in mind.</li>
<li>Obviously, a successful fellowship application counts for a lot more in advancing your career than a standard postdoc.</li>
</ul>
<div>
<br /></div>
<h3 style="text-align: left;">
9. Recognize the randomness</h3>
<div>
<br /></div>
<div>
Potential employers are faced with a very large number of applications for each postdoc vacancy; a ratio of 100:1 is not uncommon. Even with the best of intentions, it is just not possible to give each application equal careful consideration, so some basic pre-filtering is inevitable.<br />
<br />
Unfortunately for you, each department will have its own criteria for pre-filtering, and you do not know what those criteria are. Some will filter on recommendation letters, some on number of publications, some on number of citations. (As a PhD student I was advised by a well-meaning faculty member at a leading UK university that although they found my research very interesting, I did not yet meet their cutoff of X citations for hiring postdocs.) Others may deduce your field of interest only from existing publications rather than your research statement — this is particularly hard on recent PhDs who may be trying to broaden their horizons beyond their advisor's influence.<br />
<br />
Beyond this, it's doubtful that two people in different departments will have the same opinion of a given application anyway. They're only human, and their assessments will always be coloured by their own research interests, their plans for the future of the department, their different personal relationships with the writers of your recommendation letters, maybe even what they had for breakfast that morning.<br />
<br />
You can't control any of this. Your job is to produce as good and complete an application as possible (remember to send everything they ask for!), to apply to lots of suitable places, and then to learn not to fret.<br />
<br />
<h3 style="text-align: left;">
10. Don't tie your self-esteem to the outcome</h3>
<div>
<br /></div>
<div>
You will get rejections. Many of them. Even worse, there will be many places who don't even bother to let you know you were rejected. You will sometimes get a rejection at the exact same time as someone else you know gets an offer, possibly for the same position. (Things are made worse if you read the postdoc rumour mills regularly.)</div>
<div>
<br /></div>
<div>
It's pretty hard to prevent these rejections from affecting you. It's all too easy to see them as a judgement of your scientific worth, or to develop a form of <a href="https://en.wikipedia.org/wiki/Impostor_syndrome">imposter syndrome</a>. Don't do this! Read point 9 again. </div>
<br />
I'd also highly recommend reading <a href="https://medium.com/@reneehlozek/academic-job-hiring-a-letter-from-the-trenches-d3f25a341060">this post</a> by Renée Hlozek, which deals with many of the same issues. (Renée is one of the rising stars of cosmology, with a new faculty position after a very prestigious postdoc fellowship, but she too got multiple rejections the first time she applied. So it does happen to the best too, though people rarely tell you that.)<br />
<br />
-------<br />
<br />
That's it for my 10 tips on applying for (physics) postdocs. They were written primarily as advice I would have liked to give my former self at the time I was completing my PhD.<br />
<br />
There's plenty of other advice available elsewhere on the web, some of it good and some not so good. I personally felt that far too much of it concerned how to choose the best of multiple offers, which is both a bit pointless (if you've got so many offers you'll be fine either way) and really quite far removed from the experience of the vast majority of applicants. I hope some people find this a little more useful.</div>
</div>
<div>
<br /></div>
</div>
10 tips for making postdoc applications (Part 1) (posted 2015-09-13)<div dir="ltr" style="text-align: left;" trbidi="on">
Around this time of year in the academic cycle, thousands of graduate students around the world will be starting to apply for a limited supply of short-term postdoctoral research positions, or 'postdocs'. They will not only be competing against each other, but also against slightly more senior colleagues applying for their second or possibly third or fourth postdocs.<br />
<br />
The lucky minority who are successful — and, as Richie Benaud once said about cricket captaincy, it is a matter of 90% luck and 10% skill, but don't try it without the 10% — will probably need to move their entire life and family to a new city, country or continent. The entire application cycle can last two or three months — or much longer for those who are not successful in the first round — and is by far the most stressful part of an early academic career.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://phdcomics.com/comics/archive/phd081409s.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://phdcomics.com/comics/archive/phd081409s.gif" height="273" width="640" /></a></div>
<br />
<br />
What I'd like to do here is to provide some unsolicited advice on how best to approach the application process, which I hope will be of help to people starting out on it. This advice mostly consists of a collection of things that I wish people had told me when I was starting out myself, plus things that people <i>did</i> tell me, but that for whatever reason I didn't understand or appreciate.<br />
<br />
My own application experience has been in the overlapping fields of cosmology, astrophysics and high-energy particle physics, and most of my advice is written with these fields in mind. Some points are likely to be more generally useful, but I don't promise anything!<br />
<br />
I'm also not going to claim to know much about what types of things hiring professors or committees <i>actually</i> look for — in fact, I strongly suspect that there are very few useful generalizations that can be made which cover all types of jobs and departments. So I won't tell you what to wear for an interview, or what font to use in your CV. Instead I'll try to focus on things that might help make the application process a bit less stressful for you, the applicant, giving you a better chance of coming out the other side still happy, sane, and excited about science.<br />
<br />
With that preamble out of the way, here are the first 5 of my tips for applying for postdocs! The next 5 follow in <a href="http://blankonthemap.blogspot.com/2015/09/10-tips-for-making-postdoc-applications_17.html">part 2</a> of this post.<br />
<br />
<h3 style="text-align: left;">
1. Start early</h3>
<div>
<br /></div>
<div>
At least in the high-energy and astro fields, the way the postdoc job market works means that for the vast majority of jobs starting in September or October of a given year, the application deadlines fall around September to October <i>of the previous year</i>. Sometimes — particularly for positions at European universities — the deadlines may be a month or two later. However, for most available positions job offers are made around Christmas or early in the new year, and the number of positions still advertised after about February is small to start with and decreases fast with each additional month.</div>
<div>
<br /></div>
<div>
This means if you want to start a postdoc in 2016, you should already have started preparing your application materials. If not, it's not too late, but start immediately!</div>
<div>
<br /></div>
<div>
Applying for research jobs is a very different type of activity from doing research: it is not as interesting, it requires learning a different set of skills, and it can therefore be quite daunting. This makes it all too easy to procrastinate and put it off! In my first application cycle, I came up with a whole lot of excuses and didn't get around to seriously applying for anything until at least December, which is <i>way</i> too late. </div>
<div>
<br /></div>
<h3 style="text-align: left;">
2. Consider other options</h3>
<div>
<br /></div>
<div>
This sounds a bit harsh, but I think it is vital. My point is not that getting into academia is a bad career move, necessarily. But don't get into it out of inertia. I've met a few people who, far too many months into the application cycle, with their funding due to run out, and despite scores of rejections, continue the desperate search for a postdoc position somewhere, anywhere, simply because they <i>cannot imagine what else they might do</i>.</div>
<div>
<br /></div>
<div>
Don't be that person. There are lots of cool things you can do even if you don't get a postdoc. There are many other interesting and fulfilling careers out there, which will provide greater security, won't require constant upheaval, and will almost certainly pay better. Many of them still require the kinds of skills we've spent so many years learning — problem solving, tricky mathematics, cool bits of coding, data analysis — but most projects outside academia will be shorter and less nebulous, success will be more quantifiable and the benefits of success may well be more tangible.</div>
<div>
<br /></div>
<div>
If you have no idea what kinds of jobs you could do outside academia, find out now. Get in touch with previous graduate students from your department who went that way, find out what they are doing and how they got there. The AstroBetter website provides a great <a href="http://www.astrobetter.com/blog/category/career/career-profiles/">collection of career profiles</a> which may provide inspiration. </div>
<div>
<br /></div>
<div>
If after examining the alternatives you decide you'd still prefer that postdoc, great. But at least when you apply you won't be doing it purely out of inertia, and you'll have the reassurance that if you don't get it, there are other cool things you could do instead. And I'm pretty sure this will help your peace of mind during the weeks or months you spend re-drafting those research statements!<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://phdcomics.com/comics/archive/phd082313s.gif" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://phdcomics.com/comics/archive/phd082313s.gif" height="276" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Image credit: Jorge Cham.</td></tr>
</tbody></table>
<br />
<h3 style="text-align: left;">
3. Apply everywhere</h3>
</div>
<div>
<br /></div>
<div>
It's not uncommon in physics for some postdoc ads to attract 100 qualified applicants or more per available position, and the number of advertised positions isn't that large. So apply for as many as you can! It's not a great idea to decide where to apply based on 'extraneous' reasons — e.g., you only want to live in California or Finland or some such. </div>
<div>
<br /></div>
<div>
Particularly if you're starting within Europe, you will probably have to move to a new country, and probably another new country after that. So if you have a strong aversion to moving countries, I'd suggest going back to point 2 above. </div>
<div>
<br /></div>
<div>
On the other hand, you can and should take a more positive view: living somewhere new, learning a new language and discovering a new culture and cuisine can be tremendous <i>fun</i>! Even remote places you've never heard of, or places you think you might not like, can provide you with some of the best memories of your life. Just as a small example, before I moved to Helsinki, my mental image of Finland was composed of endless dark, depressing winter. After two years here this image has been converted instead to one of summers of endless sunshine and beautiful days spent at the beach! (Disclaimer: of course Finland is also dark, cold and miserable sometimes. Especially November.)</div>
<div>
<br /></div>
<div>
So for every advertised position, unless you are absolutely 100% certain that you would rather quit academia than move there for a few years — don't think about it, just apply. For the others, think about it and then apply anyway. If you get offered the job you'll always be able to say no later.</div>
<div>
<br /></div>
<h3 style="text-align: left;">
4. Don't apply everywhere</h3>
<div>
<br />
However, life is short. Every day you spend drafting a statement telling people what great research you would do if they hired you is a day spent not doing research, or indeed anything else. If you apply for upwards of 50 different postdoc jobs (not an uncommon number!), all that time adds up.<br />
<br />
So don't waste it. Read the job advertisement carefully, and assess your chances realistically. There's not much to be gained from applying to departments which are not a good academic fit for you.<br />
<br />
When I first applied for postdocs several years ago, I would read an advert that said something like "members of the faculty in Department X have interests in, among other things, string theory, lattice QCD, high-temperature phase transitions, multiloop scattering amplitudes, collider phenomenology, BSM physics, and cosmology," and I'd focus on those two words "and cosmology". So despite knowing that "cosmology" is a very broad term that can mean different things to different people, and despite not being qualified to work on string theory, lattice QCD, high-temperature phase transitions etc., I'd send off my application talking about analysis of CMB data, galaxy redshift surveys and so on, optimistically reasoning that "they <i>said</i> they were interested in cosmology!" And then I'd never hear back from them.<br />
<br />
Nowadays, my rule of thumb would be this: look through the list of faculty, and if it doesn't contain at least one or two people whose recent papers you have read carefully (not just skimmed the abstract!) because they intersected closely with your own work, don't bother applying. If you don't know them, they almost certainly won't know you. And if they don't know you or your work, your application probably won't even make it past the first round of sorting — faced with potentially hundreds of applicants, they won't even get around to reading your carefully crafted research statement or your glowing references.<br />
<br />
Being selective in where you apply will save you a heap of time, allow you to produce better applications for the places which really do fit your profile, and most importantly leave you feeling a lot less jaded and disillusioned at the end of the process.<br />
<br /></div>
<div>
<h3>
5. Choose your recommendations well</h3>
<div>
<br /></div>
<div>
Almost all postdoc adverts ask for three letters of recommendation in addition to research plans and CVs. These letters will probably play a crucial part in the success of your application. Indeed for a lot of PhD students applying for their first postdoc, the decision to hire is based almost entirely on the recommendation letters; there's not much of an existing track record by this stage, after all.<br />
<br />
So it's important to choose well when asking senior people to write these recommendations for you. As a graduate student, your thesis advisor has to be one of them. It helps if one of the others is from a different university to yours. If possible, all three should be people you have worked, or are working, closely with, e.g. coauthors. But if this is not possible, one of the three could also be a well-known person in the field who <i>knows your work</i> and can comment on its merit and significance in the literature.<br />
<br />
Having said that, there are several other factors that go into choosing who to get recommendations from. Some professors are much better at supporting and promoting their students and postdocs in the job market than others. You'll notice these people at conferences and seminars: in their talks they will go out of their way to praise and give credit to the students who obtained the results they are presenting, whereas others might not bother. These people will likely write more helpful recommendations; they also generally provide excellent career advice, and may well help your application in other, less obvious, ways. They are the ideal mentors, and all other things being equal, their students typically fare much better at getting that first and all-important step on the postdoc ladder. Of course ideally your thesis advisor will be such a person, but if not, find someone in your department who is and ask them for help.<br />
<br />
Somewhat unfortunately, I'm convinced that how well your referees themselves are known in the department to which you are applying is almost as important as how much they praise you. If neither you nor any of your referees have links — previous collaborations, research visits, invitations to give seminars — with members of the advertising department, I think the chances of your application receiving the fullest consideration are unfortunately much smaller. (I realise this is a cynical view and having never been on a hiring committee myself I have no more than anecdotal evidence in support of it. But I do see which postdocs get hired where.) So choose wisely.<br />
<br />
It is also a good idea to talk frankly to your professors/advisor beforehand. Explain where you are planning to apply, what those departments are looking for, and what aspects of your research skills you would like their letters to emphasise. Get their advice, but also provide your own input. You don't want to end up with a research statement saying you're interested in working in field A, while your recommendations only talk about your contributions in field B.<br />
<br />
---<br />
<br />
That's it for part 1 of this lot of unsolicited advice. Part 2 is available <a href="http://blankonthemap.blogspot.com/2015/09/10-tips-for-making-postdoc-applications_17.html">here</a>!</div>
</div>
</div>
Sesh Nadathurhttp://www.blogger.com/profile/07155102110438904961noreply@blogger.com10tag:blogger.com,1999:blog-6976071487922527618.post-31475554730088819392015-04-26T14:08:00.000+02:002015-04-28T09:01:38.535+02:00Supervoid superhype, or the publicity problem in science<div dir="ltr" style="text-align: left;" trbidi="on">
Part of the reason this blog has been quiet recently is that I decided at the start of this year to try to avoid — as far as possible — purely negative comments on incorrect, overhyped papers, and focus only on positive developments. (The other part of the reason is that I am working too hard on other things.)<br />
<br />
<div>
Unfortunately, last week a cosmology story hit the headlines that is so blatantly incorrect and yet so unashamedly marketed to the press that I'm afraid I am going to have to change that stance. This is the story that a team of astronomers led by Istvan Szapudi of the University of Hawaii have found "<a href="http://www.theguardian.com/science/2015/apr/20/astronomers-discover-largest-known-structure-in-the-universe-is-a-big-hole#comments">the largest structure in the Universe</a>", which is a "<a href="http://www.independent.co.uk/life-style/gadgets-and-tech/news/biggest-structure-in-the-universe-is-huge-hole-scientists-find-10191344.html">huge hole</a>" or "<a href="https://www.ras.org.uk/news-and-press/2616-cold-spot-suggests-largest-structure-in-universe-a-supervoid-1-3-billion-light-years-across">supervoid</a>" that "<a href="http://www.ifa.hawaii.edu/info/press-releases/ColdSpot/">solves the cosmic mystery</a>" of the CMB Cold Spot. This story was covered by all the major UK daily news outlets last week, from the Guardian to the Daily Mail to the BBC, and has been reproduced in various forms in all sorts of science blogs around the world. </div>
<div>
<br />
There are only three things in these headlines that I disagree with: that this thing is a "structure", that it is the largest in the Universe, and that it solves the Cold Spot mystery.<br />
<br />
Let's focus on the last of these. Readers of this blog may remember that I wrote about the Cold Spot mystery <a href="http://blankonthemap.blogspot.com/2014/08/a-supervoid-cannot-explain-cold-spot.html">in August last year</a>, referring to a paper my collaborators and I had written which conclusively showed that this very same supervoid could <i>not</i> explain the mystery. Our paper was published back in November in Phys. Rev. D (<a href="http://journals.aps.org/prd/abstract/10.1103/PhysRevD.90.103510">journal link</a>, <a href="http://arxiv.org/abs/1408.4720">arXiv link</a>). And yet here we are six months later, with the same claims being repeated!<br />
<br />
Does the paper by Szapudi et al. (<a href="http://mnras.oxfordjournals.org/lookup/doi/10.1093/mnras/stv488">journal link</a>, <a href="http://arxiv.org/abs/1405.1566">arXiv link</a>) refute the analysis in our paper? Does it even acknowledge the results in our paper? No, it pretends this analysis does not exist and makes the claims anyway.<br />
<br />
Just to be clear, it's possible that Szapudi's team are unaware of our paper and the fact that it directly challenged their conclusions several months before their own paper was published, even though Phys. Rev. D is a very high-profile journal. This is sad and would reflect a serious failure on their part and that of the referees. The only alternative explanation would be that they were aware of it but chose not to even acknowledge it, let alone attempt to address the argument within it. This would be so ethically inexcusable that I am sure it cannot be correct.<br />
<br />
I am also frankly amazed at the standard of refereeing, which I'm afraid reflects extremely poorly on the journal, MNRAS.<br />
<br />
Coming to the details. In our paper last year, we made the following points:<br />
<ol style="text-align: left;">
<li>Unless our understanding of general relativity in general, and the $\Lambda$CDM cosmological model in particular, is completely wrong, <i>this particular</i> supervoid, which is large but only has at most 20% less matter than average, is completely incapable of explaining the temperature profile of the Cold Spot.</li>
<li>Unless our understanding is completely wrong as above, the kind of supervoid that could <i>begin</i> to explain the Cold Spot is incredibly unlikely to exist — the chances are about 1:1,000,000!</li>
<li>The corresponding chances that the Cold Spot is simply a random fluctuation that requires no special explanation are <i>at worst</i> 1:1000, and, depending on how you analyse the question, probably a lot better.</li>
<li>This particular supervoid is big and rare, but not extremely so. In fact several voids that are as big or bigger, and as much as 4 times emptier, have <i>already been seen</i> elsewhere in the sky, and theory and simulation both suggest there could be as many as 20 of them.</li>
</ol>
<div>
To illustrate point 1 graphically, I made the following figure showing the actual averaged temperature profile of the Cold Spot versus the prediction from this supervoid:</div>
<div>
<br /></div>
<div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-WfXMb_WjLx0/VTzI1bFACAI/AAAAAAAAHpw/5ZzsSnxkwiI/s1600/supervoid.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://4.bp.blogspot.com/-WfXMb_WjLx0/VTzI1bFACAI/AAAAAAAAHpw/5ZzsSnxkwiI/s1600/supervoid.png" height="500" width="600" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Image made by me.</td></tr>
</tbody></table>
<br />
If this counts as a "solution to a cosmic mystery" then I'm Stephen Hawking.</div>
<br />
The supervoid can only account for less than 10% of the total temperature decrement at the centre of the Cold Spot (angle of $0^\circ$). At other angles it does worse, failing to even predict the correct sign! And remember, this prediction only assumes that our current understanding of cosmology is not completely, drastically wrong in some way that has somehow escaped our attention until now.<br />
<br />
You'll also notice that if the entire red line is somehow magically (through hypothetical "modified gravity effects") scaled down to match the blue line at the centre, it remains wildly, wildly wrong at every other angle. This is a direct consequence of the fact that the supervoid is very large, but really not very empty at all.<br />
<br />
By contrast, the simple fact that the Cold Spot is chosen to be the coldest spot in the entire CMB <i>already accounts for 100% of the cold temperature at the centre</i>:<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-YY_cTzM9qEU/U_segLAjG0I/AAAAAAAAGKI/Gfh3xqaBSmM/s1600/coldspotprof.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://1.bp.blogspot.com/-YY_cTzM9qEU/U_segLAjG0I/AAAAAAAAGKI/Gfh3xqaBSmM/s1600/coldspotprof.png" height="400" width="600" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">The red line is the observed Cold Spot temperature profile. <strike>95%</strike> 68% of the coldest spots chosen in random CMB maps have temperatures lying within the dark blue band, and <strike>99%</strike> 95% lie within the light blue band. Image credit: <a href="http://arxiv.org/abs/1408.4720">http://arxiv.org/abs/1408.4720</a>.</td></tr>
</tbody></table>
<br />
Similarly, the fact that Mt. Everest is much higher than sea level is not at all surprising. The highest mountains on other planets (<a href="http://en.wikipedia.org/wiki/Olympus_Mons">Mars</a>, for instance) can be a lot higher still.<br />
<br />
But how to explain the fact that a large void does appear to lie in the same direction as the Cold Spot? Is this not a huge coincidence that should be telling us something?<br />
<br />
Let's try the following calculation. Take the hypothesis that this particular void is <i>causing</i> the Cold Spot, let's call it hypothesis H1. Denote the probability that this void exists by $p_\mathrm{void}$, and the probability that all of GR is wrong and that some unknown physics leads to a causal relationship as $p_\mathrm{noGR}$. Then<br />
$$p_\mathrm{H1}=p_\mathrm{void}p_\mathrm{noGR}$$On the other hand, let H2 be the hypothesis that the void and the Cold Spot are separate rare occurrences that happen by chance to be aligned on the sky. This gives<br />
$$p_\mathrm{H2}=p_\mathrm{void}p_\mathrm{CS}p_\mathrm{align},$$where $p_\mathrm{CS}$ is the probability that the Cold Spot is a random fluctuation on the last scattering surface, and $p_\mathrm{align}$ the probability that the two are aligned by chance.<br />
<br />
The relative likelihood of the two rival hypotheses is given by the ratio of the probabilities:<br />
$$\frac{p_\mathrm{H1}}{p_\mathrm{H2}}=\frac{p_\mathrm{noGR}}{p_\mathrm{CS}p_\mathrm{align}}.$$Suppose we assume that $p_\mathrm{CS}=0.05$, and that the chance of alignment at random is $p_\mathrm{align}=0.001$.[1] Then the likelihood we should assign to "supervoid-caused-the-Cold-Spot" hypothesis depends on whether we think $p_\mathrm{noGR}$ is more or less than 1 in 20,000.<br />
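Spelling out that arithmetic in a few lines of Python (a minimal sketch; the numbers are the ones quoted in the text, and $p_\mathrm{noGR}$ is left as the free parameter):

```python
# Relative likelihood of H1 (the void causes the Cold Spot) versus
# H2 (chance alignment). p_void appears in both hypotheses, so it cancels.
p_CS = 0.05      # probability the Cold Spot is a random fluctuation
p_align = 0.001  # probability of a chance alignment (Szapudi et al.'s figure)

def likelihood_ratio(p_noGR):
    """p_H1 / p_H2 = p_noGR / (p_CS * p_align)."""
    return p_noGR / (p_CS * p_align)

# H1 is favoured only when the ratio exceeds 1, i.e. when p_noGR > p_CS * p_align:
threshold = p_CS * p_align
print(threshold)                  # ~5e-05, i.e. 1 in 20,000
print(likelihood_ratio(2.9e-7))   # ~0.006: chance alignment strongly favoured
```

Leaving $p_\mathrm{noGR}$ out of the numerator, as discussed below, is equivalent to calling `likelihood_ratio(1.0)`.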
<br />
This exact calculation appears in Szapudi et al's paper, except that they mysteriously leave out the numerator on the right hand side. This means that they assume, <i>with probability 1</i>, that general relativity is wrong and that some unknown cause exists which makes a void with only a 20% deficit of matter create a massive temperature effect. In other words, they've effectively assumed their conclusion in advance.<br />
<br />
Well, call me old-fashioned, but I don't think that makes any sense. We have a vast abundance of evidence, gathered over the last 100 years, which shows that if indeed GR is not the correct theory of gravity it is still pretty damn close to it. What's more, we have lots of cosmological evidence — from the Planck CMB data, from cross-correlation measurements of the ISW effect, as well as from weak lensing — that gravity behaves very much as we think it does on cosmological scales. Looking at the figure above, for the supervoid to explain the Cold Spot requires at least a factor of 10 increase in the ISW effect at the void centre, as well as a dramatic effect on the shape of the temperature profile. And all this for a void with only a 20% deficit of matter! If the ISW effect truly behaved like this <i>we would have seen evidence of it in other data.</i><br />
<br />
For my money, I would put $p_\mathrm{noGR}$ at no higher than $2.9\times10^{-7}$, i.e. I would rule out the possibility at $5\sigma$ confidence. This is a lot less than 1:20,000, so I would say chance alignment is strongly favoured. Of course you should feel free to put your own weight on the validity of all of the foundations of modern cosmology, but I suggest you would be very foolish indeed to think, as Szapudi et al. seem to do, that it is absolutely certain that these foundations are wrong.<br />
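That $2.9\times10^{-7}$ is just the standard one-tailed Gaussian tail probability at $5\sigma$, which can be reproduced with the standard library (a quick check, not part of the original argument):

```python
import math

# One-tailed probability of a Gaussian fluctuation beyond 5 sigma:
# P(Z > 5) = 0.5 * erfc(5 / sqrt(2))
p_5sigma = 0.5 * math.erfc(5 / math.sqrt(2))
print(p_5sigma)  # ~2.87e-7, the value quoted above as 2.9e-7
```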
<br />
So much for the science, such as it is. The sociological observation that this episode brings me back to is that, almost without exception, whenever a paper on astronomy or cosmology is accompanied by a big press release, either the science is flawed, or the claims in the press release bear no relation to the contents of the paper. This is a particularly blatant example, where the authors have generated a big splash by ignoring (or being unaware of) existing scientific literature that runs contrary to their argument. But the phenomenon is much more ubiquitous than this.<br />
<br />
I find this deeply depressing. Like most other young researchers (I hope), I entered science with the naive impression that what counted in this business was the accuracy and quality of research, the presentation of evidence, and — in short — facts. I thought the scientific method would ensure that papers would be rigorously peer-reviewed. I did not expect that how seriously different results are taken would instead depend on the seniority of the lead author and the slickness of their PR machine. Do we now need to hold press conferences every time we publish a paper just to get our colleagues to cite our work?<br />
<br />
One possible response to this is that I was hopelessly naive, so more fool me. Another, which I still hope is closer to the truth, is that, <i>in the long run,</i> the crap gets weeded out and that truth eventually prevails. But in an era when public "impact" of scientific research is an important criterion for career advancement, and such impact can be simply achieved by getting the media to hype up nonsense papers [2], I am sadly rather more skeptical of the integrity of scientists [3].<br />
<div style="text-align: center;">
----</div>
<br />
<span style="font-size: small;">[1] This probability for alignment is the number quoted by Szapudi's team, based on the assumption that there is only one such supervoid, which could be anywhere in the sky. In fact, as I've already said, theory and simulation suggest there should be as many as 20 supervoids, and several have already been seen elsewhere in the sky (including one other by Szapudi's team themselves!). The probability that any one supervoid should be aligned with the Cold Spot should therefore be roughly 20 times larger, or 0.02.</span><br />
<span style="font-size: small;"><br />
</span> <span style="font-size: small;">[2] Not everything in Szapudi's paper is nonsense, of course. For instance, it seems quite likely that there is indeed a large underdensity where they say. But there is still a deal of nonsense (described above) in the actual paper, and vastly more in the press releases, especially <a href="http://www.ifa.hawaii.edu/info/press-releases/ColdSpot/">the one</a> from the Institute for Astronomy in Hawaii. </span><br />
<span style="font-size: small;"><br />
</span> <span style="font-size: small;">[3] On the whole, given the circumstances, I thought journalists handled the hype quite well, especially <a href="http://www.theguardian.com/science/2015/apr/20/astronomers-discover-largest-known-structure-in-the-universe-is-a-big-hole">Hannah Devlin in the Guardian</a>, who included a skeptical take from Carlos Frenk. (I suspect Carlos at least was aware of our paper!)</span><br />
<br /></div>
</div>
Sesh Nadathurhttp://www.blogger.com/profile/07155102110438904961noreply@blogger.com23tag:blogger.com,1999:blog-6976071487922527618.post-69312437801324012182014-12-23T14:00:00.000+01:002014-12-24T06:19:10.782+01:00Planck's starry sky<div dir="ltr" style="text-align: left;" trbidi="on">
Well, December 22nd has come and gone, and the promised release of Planck data has, perhaps unsurprisingly, not materialised. Some of the talks presented at the Ferrara conference are available <a href="http://www.cosmos.esa.int/web/planck/ferrara2014">here</a>, and there was a second conference in Paris more recently, video recordings from which can be found <a href="http://webcast.in2p3.fr/videos-lcdm_extension">here</a>.<br />
<br />
I've seen a bit of speculation about the delays and what the data might or might not be showing on a few physics blogs, some of which I think are a little mistaken. So I thought I'd put up a quick post summarising the situation as I see it — but note that my opinion is not at all official and may be wrong on some of the details (especially since I wasn't at either of the conferences).<br />
<br />
For a start, you may notice that some of the talks at Ferrara are not made available on the website. I'm informed that this internal censorship was applied by the Planck science team, and it is based on their estimation that the censored talks are ones containing results which are still preliminary and liable to change before the eventual data release. The flip side of this is that the talks that are available contain data which they are confident will <i>not</i> change, so these are the ones you'd want to pay attention to in any case.<br />
<br />
In terms of the data itself, there appear to be two and a half important improvements so far. The first is that the overall calibration of the temperature power spectrum — which was previously somewhat discrepant with the WMAP measurement — has been improved, and now Planck and WMAP agree very well with each other. The second is that the apparent anomaly in the temperature power spectrum at multipole values of $\ell\sim1800$ has been identified as being due to a glitch in the 217 GHz data, and has been corrected. The anomaly has therefore disappeared. This can be seen by comparing the 2013 and 2014 versions of the TT power (if you look carefully):<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/--isQFDaWly8/VJj8l2hKmpI/AAAAAAAAG2g/3Mbeic19jwo/s1600/PlanckTT2013.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/--isQFDaWly8/VJj8l2hKmpI/AAAAAAAAG2g/3Mbeic19jwo/s1600/PlanckTT2013.jpg" height="312" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-M92U-7QWLQE/VJj83sSIMoI/AAAAAAAAG2o/AvyQUk1Xn5Y/s1600/PlanckTT2014.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-M92U-7QWLQE/VJj83sSIMoI/AAAAAAAAG2o/AvyQUk1Xn5Y/s1600/PlanckTT2014.jpg" height="310" width="400" /></a></div>
<br />
The remaining half an improvement comes from the polarization data. Previously, this was so badly affected by systematics at large scales that the Planck team were only able to even show the data points at $\ell>100$, and were unable to use them for any science analysis, relying instead on the WMAP polarization. These systematics have still not been completely resolved — apparently it is the HFI instrument which is the problematic one — but they have been somewhat improved, such that the EE and TE power spectra are trustworthy at $\ell>30$, which is enough to start using them for parameter constraints in place of the WMAP data. (This means that the error bars on various derived parameter values have decreased a little from 2013, but they will decrease a lot more when all the data is finally available.)<br />
<br />
This last half improvement is somewhat relevant to the BICEP2 issue which I discussed <a href="http://blankonthemap.blogspot.com/2014/09/biting-dust.html">here</a>, since the improved polarization data in 2014 was an important reason that Planck was able to say something about the dust polarization in the BICEP2 window. The fact that they still aren't 100% happy with this data yet could be a bit concerning. On the other hand, the relevant range of multipoles for BICEP2 is $\ell\sim80$ rather than $\ell<30$.<br />
<br />
In terms of what these new data tell us, I'm afraid the story appears mostly rather boring, since there is very little change from what we learned already in 2013. As expected, the values of all cosmological parameters are consistent with what Planck announced in 2013; insofar as there have been any minor changes in the values, they tend to move in the direction of making Planck and WMAP more consistent with each other, but really these shifts appear very small and not worth worrying about. It appears the only parameter which has shifted at all significantly is the optical depth $\tau$. Constraints on $\tau$ rely on the use of polarization data; the previous constraints were obtained by combining Planck temperature with WMAP polarization measurements whereas the current value comes from Planck alone.<br />
<br />
At some point I suppose the various systematics with the HFI polarization data will be sorted out to the extent that we will get the long-awaited release and the papers. But I have given up trying to predict when. In the meantime, I thought the coolest thing to come out of the recent conferences was this image:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-ZDwf4o_gDrs/VJlkr_WeI4I/AAAAAAAAG24/5nUzpLXZsKE/s1600/LIC_3-7.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-ZDwf4o_gDrs/VJlkr_WeI4I/AAAAAAAAG24/5nUzpLXZsKE/s1600/LIC_3-7.jpg" height="400" width="400" /></a></div>
<br />
which rather reminded me of this:<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-QDPdotmVBA4/VJllX2KNaNI/AAAAAAAAG3I/Ieyc6s2fBSw/s1600/USA-Museum_of_Modern_Art-Vincent_van_Gogh0t.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://3.bp.blogspot.com/-QDPdotmVBA4/VJllX2KNaNI/AAAAAAAAG3I/Ieyc6s2fBSw/s1600/USA-Museum_of_Modern_Art-Vincent_van_Gogh0t.jpg" height="265" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Detail from <i><a href="http://en.wikipedia.org/wiki/The_Starry_Night">The Starry Night</a>.</i></td></tr>
</tbody></table>
and this:<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-ot9Q7wcfk4o/VJllBZvnMVI/AAAAAAAAG3A/0NTCNf1uNSc/s1600/haystacks.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://1.bp.blogspot.com/-ot9Q7wcfk4o/VJllBZvnMVI/AAAAAAAAG3A/0NTCNf1uNSc/s1600/haystacks.jpg" height="160" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Detail from <i>Haystacks Near a Farm in Provence.</i> </td></tr>
</tbody></table>
</div>
Sesh Nadathurhttp://www.blogger.com/profile/07155102110438904961noreply@blogger.com0tag:blogger.com,1999:blog-6976071487922527618.post-32236473961079221982014-12-01T14:42:00.000+01:002014-12-01T14:42:15.041+01:00Planck at Ferrara<div dir="ltr" style="text-align: left;" trbidi="on">
There is a <a href="http://www.cieffeerre.it/Eventi/eventi-in-programmazione-nel-2014/planck-2014-the-microwave-sky-in-temperature-and-polarization/PLANCK-2014">conference</a> starting today in Ferrara on the final results from Planck.<br />
<br />
Though actually these won't be the final results from Planck, since although all scientists in the Planck team have been scrambling like mad to prepare for this date, they haven't been able to get all their results ready for presentation yet. So the actual release of most of the data and the scientific papers is scheduled for later this month. December 22nd, in fact — for European scientists, almost the last working day of the year (Americans tend to have some conferences between Christmas and New Year) — so at least we will technically have the results in 2014.<br />
<br />
Except even that isn't really it, because the actual Planck likelihood code will only be released in January 2015. Or at least, I'm pretty sure that's what <a href="http://www.cosmos.esa.int/web/planck">the Planck website</a> <i>used</i> to say: now it doesn't mention the likelihood code by name, referring instead to "a few of the derived products."<br />
<br />
If you're confused, well, so am I. The likelihood code is one of the most important Planck products for anyone planning to actually use Planck data for their own research — to do so properly normally means re-running fits to the data for your favourite model, which means you need the likelihood code. (Of course, some people do take the short cut of simply quoting Planck constraints on parameters derived in other contexts, and this is not always wrong.) This means that having the final, correct version of the likelihood code is rather important even for Planck scientists themselves to be completely confident in the results they are presenting. So it would make more sense to me if the likelihood code were released at the same time as the rest of the data. Perhaps that is what is actually going to happen; I suppose we'll find out soon.<br />
<br />
Incidentally, my information is that the "final, correct" version of the likelihood code was distributed for internal use within the Planck collaboration about 4 weeks ago or so. Considering that it is only after this happens that proper model comparison projects can begin, that obtaining parameter constraints for each model can take a surprisingly large amount of computing time, that the various Planck teams responsible for this step had scores of different models to investigate, that the "final, correct" version may well have undergone a subsequent revision, and that the process of drafting each paper at the end of the analysis must itself take a couple of weeks minimum ... I suppose I'm not very surprised that the date for data release has been pushed back.<br />
<br />
There's some uncertainty about whether the videos from the conference will be made available, as a statement on the website saying this would happen has been removed. For those interested here is <a href="https://www.youtube.com/user/unifetv/videos?flow=grid&view=2">a Youtube channel</a> purporting to provide video from the conference, but disappointingly it doesn't appear to actually work. </div>
Sesh Nadathurhttp://www.blogger.com/profile/07155102110438904961noreply@blogger.com0tag:blogger.com,1999:blog-6976071487922527618.post-71089842642070127992014-11-26T20:00:00.000+01:002014-11-26T20:00:00.063+01:00Quasar structures: a postscript<div dir="ltr" style="text-align: left;" trbidi="on">
A few days ago I discussed the purported 'spooky' alignment of quasar spins and the cosmological principle <a href="http://blankonthemap.blogspot.com/2014/11/a-spooky-alignment-of-quasars-or-just.html">here</a>. So as to focus better on the main point, I left a few technical comments out of that discussion which I want to mention here. These don't have any direct bearing on the main argument made in that post — rather they are interesting asides for a more expert audience.<br />
<div>
<br /></div>
<h4 style="text-align: left;">
Quasars can't prove the Universe is homogeneous</h4>
<div style="text-align: left;">
<span style="font-weight: normal;"><br />
</span> <span style="font-weight: normal;">Readers of the original post might have noticed that I was quite careful to always state that the distribution <i>of quasars</i> was statistically homogeneous, but not that the quasars showed <i>the Universe </i>was homogeneous. The reason for this lies in the properties of the quasar sample</span> itself.</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
There are two main ways of constructing a sample of galaxies or quasars to use for further analysis, such as testing homogeneity. The first is that you simply include every object seen by the survey instruments within a certain patch of sky that lies between two redshifts of interest. But these objects will vary in their intrinsic brightness, and the survey instruments have a limited sensitivity, so can only record dim objects when they are relatively close to us. Intrinsically bright objects are rarer, but if they are very far away we will only be able to see the rare bright ones. So this strategy results in a sample with very many, but largely dim, galaxies or quasars relatively close to us, and fewer but brighter objects far away. This is known as a flux-limited sample.<br />
<br />
The other strategy is to correct the measured brightness of each object for the distance from us, to determine its 'intrinsic' brightness (otherwise known as its absolute magnitude), and then select a sample of only those objects which have similar absolute magnitudes. The magnitude range is chosen in accordance with the range of distances such that within the volume of the Universe surveyed, we can be confident we have seen every object of that magnitude that exists. This is called a volume-limited sample.<br />
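As a concrete, deliberately simplified illustration of that selection (the cosmological parameters, magnitude limit and catalogue format here are all hypothetical, and K-corrections and evolution corrections are ignored):

```python
import math

C_KMS = 299792.458  # speed of light in km/s

def comoving_distance(z, H0=70.0, Om=0.3, steps=5000):
    """Comoving distance in Mpc for a flat LCDM cosmology (midpoint-rule integral)."""
    dz = z / steps
    integral = 0.0
    for i in range(steps):
        zi = (i + 0.5) * dz
        E = math.sqrt(Om * (1 + zi) ** 3 + (1 - Om))
        integral += dz / E
    return (C_KMS / H0) * integral

def absolute_magnitude(m_apparent, z):
    """M = m - 5*log10(d_L / 10 pc), with luminosity distance d_L = (1+z) x comoving distance."""
    d_L_pc = (1 + z) * comoving_distance(z) * 1e6  # Mpc -> pc
    return m_apparent - 5 * math.log10(d_L_pc / 10.0)

def volume_limited(catalogue, z_min, z_max, m_lim):
    """Keep only objects intrinsically bright enough to stay above the survey
    flux limit m_lim even if placed at the far edge z_max of the volume."""
    M_cut = absolute_magnitude(m_lim, z_max)
    return [(z, m) for (z, m) in catalogue
            if z_min <= z <= z_max and absolute_magnitude(m, z) <= M_cut]
```

A bright object anywhere in the volume survives the cut; a dim object close to the flux limit at intermediate redshift is discarded, because an identical object at $z_\mathrm{max}$ would have been missed by the survey.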
<br />
Testing the homogeneity of the Universe requires a volume-limited survey of objects. For a flux-limited sample the distribution in redshift (i.e., in the line-of-sight direction) would not be expected to be uniform in the first place: the number density of objects would ordinarily decrease sharply with redshift. But looking out away from Earth also involves looking back in <i>time</i>; so if the redshift range of the survey is large, the farthest objects are seen as they were at an earlier time than the closest ones. If the objects in question had evolved significantly in that time, near and far objects could represent significantly different populations even in a volume-limited sample, and once again we wouldn't expect to see homogeneity along the line of sight, even if the Universe were homogeneous.<br />
<br />
So to really test the cosmological principle without having to assume homogeneity at the outset,<sup>1</sup> we really need a volume-limited sample of galaxies that cover a very large volume of the Universe but span a relatively narrow range of redshifts. Such surveys are hard to come by. For example, the study confirming homogeneity in WiggleZ galaxies (see <a href="http://blankonthemap.blogspot.com/2012/08/the-largest-patterns-in-universe.html">here</a> and <a href="http://telescoper.wordpress.com/2014/06/27/the-fractal-universe-part-2/">here</a>) actually used a flux-limited sample, so required additional assumptions. In this case one doesn't obtain a proof, rather a check of the self-consistency of those assumptions — which people may regard as good enough, depending on taste.<br />
<br />
Anyway, the key point is that the DR7QSO quasar sample everyone uses is most definitely flux-limited and not volume-limited (I was myself reminded of this point by Francesco Sylos Labini). Despite this, the redshift distribution of quasars is remarkably uniform (between redshifts 1 and 1.8). So what's going on? Well, unlike certain types of galaxies that live much closer to home, distant quasar populations are expected to evolve rather quickly with time. And the age difference between objects at redshifts 1 and 1.8 is more than 2 billion years!<br />
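That age gap is easy to verify with a rough numerical integration, assuming flat $\Lambda$CDM with illustrative parameters $H_0 = 70$ km/s/Mpc and $\Omega_m = 0.3$ (a sketch, not a calculation from any of the papers discussed):

```python
import math

def lookback_time(z, H0=70.0, Om=0.3, steps=10000):
    """Lookback time in Gyr for flat LCDM:
    t_L(z) = (1/H0) * integral_0^z dz' / ((1+z') E(z'))."""
    hubble_time_gyr = 977.8 / H0  # 1/H0 in Gyr when H0 is given in km/s/Mpc
    dz = z / steps
    integral = 0.0
    for i in range(steps):
        zi = (i + 0.5) * dz
        E = math.sqrt(Om * (1 + zi) ** 3 + (1 - Om))
        integral += dz / ((1 + zi) * E)
    return hubble_time_gyr * integral

# Age difference between objects seen at z = 1 and at z = 1.8:
dt = lookback_time(1.8) - lookback_time(1.0)
print(dt)  # ~2.2 Gyr: indeed more than 2 billion years
```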
<br />
It would appear that this effect and the flux-limited nature of the survey coincidentally roughly cancel each other out for the sample in question. A volume-limited subset of these quasars would be (is) highly <i>in</i>homogeneous — but then because of the time evolution the homogeneity or otherwise of any sample of quasars says nothing much about the homogeneity or otherwise of the Universe in general.<br />
<br />
Luckily this is only incidental to the main argument. The fact that the distribution of these (flux-limited) quasars is statistically homogeneous on scales of 100-odd Megaparsecs despite claims for the existence of Gigaparsec-scale 'structures' simply demonstrates the point that the existence of single structures of any kind doesn't have any bearing on the question of overall homogeneity. Which is the main point.<br />
<br />
<h4 style="text-align: left;">
Homogeneity is sample-dependent </h4>
<div>
<br />
Of course the argument above cuts both ways.<br />
<br />
Let's imagine that a study has shown that the distribution of a particular type of galaxy — call them luminous red galaxies — approaches homogeneity above a certain distance scale, say 100 Megaparsecs. Such a study was done by <a href="http://arxiv.org/abs/astro-ph/0411197">David Hogg and others in 2005</a>. From this we may reasonably conclude (though not, strictly speaking, prove) that the matter distribution in the Universe is homogeneous above at most 100 Mpc. But we are not allowed to conclude that the distribution of some other sample of objects — radio galaxies, quasars, blue galaxies etc. — approaches homogeneity above the same scale, or indeed at all!<br />
<br />
Even in a Universe with a homogeneous matter distribution, the scale above which a volume-limited sample of galaxies whose properties are constant with time approaches homogeneity depends on the <a href="http://ned.ipac.caltech.edu/level5/March12/Coil/Coil5.html">galaxy bias</a>. This number depends on the type of galaxies in question, and so too to a lesser extent will the expected homogeneity scale. Of course if the sample is not volume-limited, or does evolve with time, all bets are off anyway.<br />
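As a rough illustration of how bias shifts the expected homogeneity scale, one can use a toy power-law correlation function. The values of $r_0$, $\gamma$ and the 1% threshold below are my assumptions, chosen only to give ballpark numbers:

```python
import numpy as np

def homogeneity_scale(b, r0=5.0, gamma=1.8, threshold=0.01):
    """Scale (h^-1 Mpc) at which the scaled count-in-spheres,
    N(<R)/N_uniform = 1 + b**2 * xibar(R), gets within `threshold` of 1,
    for a toy matter correlation function xi(r) = (r/r0)**-gamma,
    whose volume average is xibar(R) = (3/(3-gamma)) * (r0/R)**gamma."""
    return r0 * (3.0 * b**2 / ((3.0 - gamma) * threshold))**(1.0 / gamma)

R_unbiased = homogeneity_scale(b=1.0)  # roughly 100 h^-1 Mpc for these numbers
R_biased = homogeneity_scale(b=2.0)    # more strongly biased tracers: larger
```

More strongly clustered (higher-bias) tracers take longer to approach homogeneity, which is exactly the sense in which the homogeneity scale is sample-dependent.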
<br />
More generally, for <i>each</i> sample of galaxies that we wish to use for higher order statistical measurements, the statistical homogeneity <i>of that particular sample</i> must in general be demonstrated first. This is because higher order statistical quantities, such as the correlation function, are conventionally normalized in units of the sample mean, but in the absence of statistical homogeneity this becomes meaningless.<br />
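The normalization point can be seen in a couple of lines: estimated correlations are normalized by the square of the sample mean density, so an ill-defined or mis-estimated mean shifts the whole estimated correlation function (toy numbers, for illustration only):

```python
import numpy as np

r = np.array([10.0, 50.0, 150.0])   # separations in h^-1 Mpc
xi_true = (r / 5.0)**-1.8           # a toy correlation function

# pair counts are normalised by nbar**2, so
#   1 + xi_est = (1 + xi_true) * (nbar_true / nbar_assumed)**2
xi_est_good = (1 + xi_true) * 1.0**2 - 1   # mean correctly estimated
xi_est_bad = (1 + xi_true) / 0.9**2 - 1    # mean underestimated by 10%
# xi_est_bad is offset upwards on ALL scales: a 10% error in the mean
# masquerades as spurious large-scale correlations of ~0.2
```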
<br />
There was a time when the homogeneity of the Universe was less well accepted than it is today, and the possibility of a fractal distribution of matter was still an open question. At that time demonstrating the approach to homogeneity on large scales in a well-chosen sample of galaxies was worth a publication (even a well-cited publication) in itself. This is probably no longer the case, but it remains a necessary sanity check to perform for each galaxy survey.<br />
<br /></div>
<div>
<sup>1</sup><span style="font-size: small;">Properly speaking, even the creation of a volume-limited sample requires an assumption of homogeneity at the outset, since the determination of absolute magnitudes requires a cosmological model, and the cosmological model used will assume homogeneity. In this sense all "tests" of homogeneity are really consistency checks of our assumption thereof.</span></div>
<br /></div>
</div>
Sesh Nadathurhttp://www.blogger.com/profile/07155102110438904961noreply@blogger.com0tag:blogger.com,1999:blog-6976071487922527618.post-54542821631975677872014-11-23T16:38:00.002+01:002014-11-24T08:58:08.944+01:00A 'spooky alignment' of quasars, or just hype?<div dir="ltr" style="text-align: left;" trbidi="on">
In the news this week we've had a story on the alignment of quasar spins with large-scale structure, based on <a href="http://arxiv.org/abs/1409.6098">this paper</a> by Hutsemekers et al. The paper was accompanied by <a href="http://www.eso.org/public/news/eso1438/">this press release</a> from the European Southern Observatory, which was then reproduced in <a href="http://www.dailygalaxy.com/my_weblog/2014/11/spooky-discovery-about-the-largest-structure-in-the-universe-vast-quasar-groupings-found-in-astoundi.html">various</a> <a href="http://news.discovery.com/space/study-shows-quasars-black-holes-spinning-in-sync-141119.htm">forms</a> in a <a href="http://www.world-science.net/othernews/141119_quasar.htm">number</a> of <a href="http://astronomynow.com/2014/11/19/quasar-axes-align-with-large-scale-cosmic-structures/">blogs</a> and <a href="http://www.dailymail.co.uk/sciencetech/article-2842418/Mystery-spooky-pattern-universe-Scientists-supermassive-black-holes-aligned.html">news outlets</a> — almost all of which stress the 'spooky' or 'mysterious' nature of the claimed alignment 'over billions of light years'.<br />
<br />
At least one of these blogs (the one at <a href="http://www.dailygalaxy.com/my_weblog/2014/11/spooky-discovery-about-the-largest-structure-in-the-universe-vast-quasar-groupings-found-in-astoundi.html">The Daily Galaxy</a>) explicitly claims that the alignment of these quasar spins is a challenge for the <a href="http://en.wikipedia.org/wiki/Cosmological_principle">cosmological principle</a>, which is the assumption of large-scale statistical homogeneity and isotropy of the Universe, on which all of modern cosmology is based. This claim is not contained in the press release, but originates from a statement in the paper itself, where the authors say<br />
<blockquote class="tr_bq">
The existence of correlations in quasar axes over such extreme scales would constitute a serious anomaly for the cosmological principle.</blockquote>
I'm afraid that this claim is completely unsupported by any of the actual results contained within the paper, and is therefore one of those annoying examples of scientific hype. In this post I will try to explain why.<br />
<br />
I have actually covered much of this ground before — in a <a href="http://blankonthemap.blogspot.com/2013/07/quasars-homogeneity-and-einstein.html">blog post here</a>, but more importantly in a <a href="http://arxiv.org/abs/1306.1700">paper</a> published in <i>Monthly Notices</i> last year — and I must admit I am a little surprised at having to repeat these points (especially since my paper is cited by Hutsemekers et al.). Nevertheless, in what follows I shall try not to sound too grumpy.<br />
<br />
The immediate story started with a <a href="http://arxiv.org/abs/1211.6256">paper by Roger Clowes and collaborators</a>, who claimed to have detected the 'largest structure' in the Universe (dubbed the 'Huge-LQG') in the distribution of <a href="http://en.wikipedia.org/wiki/Quasar">quasars</a>, and also claimed that this structure violated the cosmological principle. My paper last year was a response to this, and made the following points:<br />
<br />
<ol style="text-align: left;">
<li>the detection of a single large structure has essentially no relevance to the question of whether the Universe is statistically homogeneous and isotropic;</li>
<li>the quasar sample within which the Huge-LQG was identified <i>is</i> statistically homogeneous, and approaches homogeneity at the scale we expect theoretically, thus providing an explicit demonstration of point 1;</li>
<li>the definition of 'structure' by which the Huge-LQG counts as a structure is so loose that by using it we would find equally vast 'structures' even in completely random distributions of points which (by construction!) contain no correlations and therefore no structure whatsoever; and </li>
<li>therefore the classification of the Huge-LQG set of quasars as a 'structure' is essentially empty of meaning.</li>
</ol>
<h4 style="text-align: left;">
<br />
</h4>
<h4 style="text-align: left;">
Quasar structures don't violate homogeneity</h4>
<div>
<br /></div>
<div>
Since I am already repeating myself, let me elaborate a little more on points 1 and 2. Our Universe is <i>not</i> exactly homogeneous. The fact that you exist — more generally, the fact that stars, galaxies and clusters of galaxies exist — is sufficient proof of this, so it would be a very poor advertisement for cosmology indeed if it were all founded on the assumption of <i>exact</i> homogeneity. Luckily it isn't. In fact our theories could be said to predict the existence of structure in the potential $\Phi$ on all scales (that's what a scale-invariant power spectrum from inflation means!), and even the galaxy-galaxy correlation function only goes asymptotically to zero at large scales.</div>
<div>
<br /></div>
<div>
Instead we have the assumption of <i>statistical</i> homogeneity and isotropy, which means that we assume that when looked at on large enough scales, different regions of the Universe are <i>on average</i> the same. Clearly, since this is a statement about averages, it can only be tested statistically by looking at large numbers of different regions, not by finding one particular example of a 'structure'. In fact there is a well-established procedure for checking the statistical homogeneity of the distribution of a set of points (the positions of galaxies or quasars, in this case), which involves measuring its fractal dimension and checking the scale above which this is equal to 3. I've described the procedure before, <a href="http://blankonthemap.blogspot.fi/2012/08/the-largest-patterns-in-universe.html">here</a> and <a href="http://blankonthemap.blogspot.fi/2013/07/quasars-homogeneity-and-einstein.html">here</a>, and Peter Coles describes a bit of the history of it <a href="http://telescoper.wordpress.com/2014/06/27/the-fractal-universe-part-2/">here</a>.</div>
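For intuition, here is a toy flat-space version of that counts-in-spheres procedure (real analyses must handle survey geometry and selection functions much more carefully). For a genuinely homogeneous point set, here a Poisson distribution, the scaled count is close to 1 on all scales, i.e. the fractal dimension is 3:

```python
import numpy as np

rng = np.random.default_rng(42)
pts = rng.uniform(0.0, 1.0, size=(4000, 3))   # homogeneous points in a unit box

def scaled_count(pts, r, n_centres=200):
    """Mean number of points within r of a centre, divided by the uniform
    expectation; -> 1 on scales where the set is statistically homogeneous."""
    # keep centres away from the box edge so the spheres fit inside the volume
    inner = pts[np.all((pts > r) & (pts < 1 - r), axis=1)]
    centres = inner[rng.choice(len(inner), n_centres, replace=False)]
    d = np.linalg.norm(pts[:, None, :] - centres[None, :, :], axis=-1)
    counts = (d < r).sum(axis=0) - 1          # don't count the centre itself
    expected = len(pts) * (4.0 / 3.0) * np.pi * r**3
    return counts.mean() / expected

N_r = [scaled_count(pts, r) for r in (0.05, 0.1, 0.15)]
```

For a fractal set the same statistic would stay away from 1, scaling like $r^{D_2-3}$ with $D_2 < 3$.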
<div>
<br /></div>
<div>
The bottom line is that, as I showed last year, the quasar distribution in question <i>is </i>statistically homogeneous above scales of at most $\sim130h^{-1}$Mpc. There is therefore no 'structure' you can find in this data which could violate the cosmological principle. End of story.</div>
<div>
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-0UT6t1pMNIY/VHHixKQsFZI/AAAAAAAAGxE/4utSQY-a1Qg/s1600/quasar_homogeneity.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://1.bp.blogspot.com/-0UT6t1pMNIY/VHHixKQsFZI/AAAAAAAAGxE/4utSQY-a1Qg/s1600/quasar_homogeneity.png" height="430" width="620" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Scaled number counts in spheres as a measure of the fractal dimension of the quasar distribution. On scales where this number approaches 1, the distribution is statistically homogeneous. From <a href="http://arxiv.org/abs/1306.1700">arXiv:1306.1700</a>.</td></tr>
</tbody></table>
<div>
<h4 style="text-align: left;">
<br />
</h4>
<h4 style="text-align: left;">
Structures and probability</h4>
</div>
<div>
<br />
Of course, there are many different ways of being statistically homogeneous. It is perfectly possible that within a statistically homogeneous distribution one could find a particular structure or feature whose existence in our specific cosmological model (which is one of many possible models satisfying the cosmological principle) is either very unlikely or impossible. This would then be a problem for that cosmological model despite not having any wider implications for the cosmological principle. But to prove this requires some serious analysis, which should include a proper treatment of probabilities — you can't just say "this structure is big, so it must be anomalous."</div>
<div>
<br /></div>
<div>
In particular, any serious analysis of probabilities must take into account how a 'structure' is defined. Given infinitely many possible choices of definition, and a very large Universe in which to search, the probability of finding <i>some </i>'structure' that extends over billions of light years is practically unity. In fact the definition used for the Huge-LQG would be likely to throw up equally vast 'structures' even if quasar positions were not at all correlated with each other (and we know they must be at least somewhat correlated, because of gravity). So it really isn't a very useful definition at all.</div>
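This is easy to check numerically. The sketch below (all parameters are toy values of my choosing) runs a friends-of-friends grouping, the same style of definition used for LQGs, with a linking length comparable to the mean separation, on points that are uniformly random by construction:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1500
pts = rng.uniform(0.0, 1.0, size=(n, 3))   # Poisson points: no structure at all
link = 1.1 * n**(-1.0 / 3.0)               # linking length ~ mean separation

# friends-of-friends grouping via union-find
parent = list(range(n))
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]      # path halving
        i = parent[i]
    return i

for i in range(n):
    d2 = ((pts - pts[i])**2).sum(axis=1)   # distances from point i
    for j in np.nonzero(d2 < link**2)[0]:
        if int(j) > i:
            parent[find(i)] = find(int(j)) # link 'friends' into one group

roots = np.array([find(i) for i in range(n)])
labels, sizes = np.unique(roots, return_counts=True)
biggest = pts[roots == labels[np.argmax(sizes)]]
extent = (biggest.max(axis=0) - biggest.min(axis=0)).max()
# `extent` is many times the linking length: a vast 'structure' found in
# data that contains, by construction, no correlations whatsoever
```

With a linking length this close to the percolation threshold, the largest 'structure' spans a large fraction of the whole box, which is the sense in which such a definition is empty of meaning.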
<h4 style="text-align: left;">
<br />
</h4>
<h4 style="text-align: left;">
'Spooky' alignments</h4>
<div>
<br />
This brings us to the current paper by Hutsemekers et al. The starting assumption of this paper is that the Huge-LQG <i>is</i> a real structure which is somehow distinguished from its surroundings. This assumption is manifest in the decision that the authors make to try to measure the polarization of light from only those quasars that are classified as part of the Huge-LQG rather than a more general sample of quasars. This classic case of circular reasoning is the first flaw in the logic, but let's put it to one side for a minute.</div>
<div>
<br /></div>
<div>
The press release then tells us that the scientists</div>
<blockquote class="tr_bq">
found that the rotation axes of the central supermassive black holes in a sample of quasars are parallel to each other over distances of billions of light years</blockquote>
<div>
and that the spins of the central black holes are aligned along the filaments of large-scale structure in which they reside.</div>
<div>
<br /></div>
<div>
I find this statement extremely problematic. Here is a figure from the paper in question, showing the sky positions of the 93 quasars in question, along with the polarization orientations for the 19 which are used in the actual analysis:</div>
<div>
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-jmTNNL5_MhQ/VHHr9NhcHCI/AAAAAAAAGxU/I-vTp7Y0nrE/s1600/quasar_alignment.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://1.bp.blogspot.com/-jmTNNL5_MhQ/VHHr9NhcHCI/AAAAAAAAGxU/I-vTp7Y0nrE/s1600/quasar_alignment.png" height="400" width="620" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Quasar positions (black dots) and polarization alignments (red lines). From <a href="http://arxiv.org/abs/1409.6098">arXiv:1409.6098</a>.</td></tr>
</tbody></table>
<div>
<br /></div>
<div>
Do you see the alignment? No, me neither. In fact, looking at the distribution of angles in panel b, I would say that looks very much like a sample drawn from a perfectly uniform distribution.</div>
<div>
<br /></div>
<div>
So what is the claim actually based on? Well, for a start one has to split up the (arbitrarily defined) 'structure' into several (even more arbitrarily defined) 'sub-structures'. Each of these sub-structures then defines a different reference angle on the sky:</div>
<div>
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-_4xmi37QOmg/VHHtp9YQmNI/AAAAAAAAGxg/wJuF4JX_Q5g/s1600/quasar_substructures.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://1.bp.blogspot.com/-_4xmi37QOmg/VHHtp9YQmNI/AAAAAAAAGxg/wJuF4JX_Q5g/s1600/quasar_substructures.png" height="610" width="620" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Chopping the data to suit the argument (Figure 4 of <a href="http://arxiv.org/abs/1409.6098">arXiv:1409.6098</a>). On what basis are sub-structures 1 and 2 defined as separate from each other?</td></tr>
</tbody></table>
<div>
<br /></div>
<div>
And now one has to measure the angles between the quasar polarization direction and the reference direction of the particular sub-structure, <i>and</i> the direction perpendicular to the reference direction, and <i>choose the smaller of the two</i>. In other words, rather than prove that quasars are aligned parallel to each other over distances extending over billions of light years (the claim in the press release), what Hutsemekers et al. are actually doing is attempting to show that given arbitrary choices of some smaller sub-structures and reference directions, quasars in different sub-structures are typically aligned <i>either</i> parallel to <i>or perpendicular to</i> this direction. This is a much less exacting standard.</div>
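To see what this folding step does to the null hypothesis, consider a quick simulation (illustrative only, not a reproduction of the paper's analysis):

```python
import numpy as np

rng = np.random.default_rng(3)
# angles between polarization and an arbitrary reference direction, for
# completely unaligned quasars: uniform on [0, 90) degrees
theta = rng.uniform(0.0, 90.0, 100_000)

# the "parallel OR perpendicular" statistic: take whichever of the angle to
# the reference direction and the angle to its perpendicular is smaller
folded = np.minimum(theta, 90.0 - theta)

# the folded angles are uniform on [0, 45), mean 22.5 deg, even with NO
# alignment at all: any claimed signal must beat this null, not zero
```

So the relevant null distribution is uniform on [0, 45) degrees, and a claimed parallel-or-perpendicular alignment has to be shown to be inconsistent with that, a weaker test than alignment with a single direction.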
<div>
<br /></div>
<div>
Even this claim is not particularly well supported by the evidence. That is, looking at the distribution of angles, I am really not at all convinced that this shows evidence for a bimodal distribution with peaks at 0 and 90 degrees:</div>
<div>
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-aEsxY51bdgs/VHHw6c0yH_I/AAAAAAAAGxs/CR0c8xYlnw0/s1600/bimodal_evidence.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://4.bp.blogspot.com/-aEsxY51bdgs/VHHw6c0yH_I/AAAAAAAAGxs/CR0c8xYlnw0/s1600/bimodal_evidence.png" height="300" width="620" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Distribution of angles purportedly showing two distinct peaks at 0 and 90. Figure 5 of <a href="http://arxiv.org/abs/1409.6098">arXiv:1409.6098</a>.</td></tr>
</tbody></table>
<div>
<br /></div>
<div>
So in summary I think the statistical evidence of alignment of quasar spins is already pretty weak. I don't see any analysis in the paper dealing with the effects of a different arbitrary choice of sub-structures, nor do I see any error analysis (the error in measuring the polarization direction of a quasar can be as large as 10 degrees!). And I haven't even dealt with the fact that the polarization data is used for only 19 quasars out of the full 93 — in other words, for the majority of quasars in the sample the central black hole spins are aligned along some other, undetermined, direction such that we can't measure the polarization.</div>
<div>
<h4 style="text-align: left;">
<br />
</h4>
<h4 style="text-align: left;">
Extraordinary claims require extraordinary evidence</h4>
</div>
<div>
<br />
Now, it's worth repeating that we've already seen that in fact the space distribution of quasars is statistically homogeneous in accordance with the cosmological principle. That simple test has been done, the cosmological principle survives. So if you've got some more nuanced claim of an anomaly, I think the onus is on you not only to describe the measurement you made, but also say what exactly is anomalous about it. What is the theoretical prediction we should compare it to? Which model is being rejected (or otherwise) by the new data?</div>
<div>
<br /></div>
<div>
So, for instance, if quasar spins in sub-structures are indeed aligned either parallel or perpendicular to each other (and I still remain to be convinced that they are), is this really something 'spooky', or would we expect some degree of alignment in the standard $\Lambda$CDM model?<br />
<br />
Such an analysis has not been presented, but even if it had, it's worth bearing in mind the principle that extraordinary claims require extraordinary evidence. I'm afraid throwing out a <i>p</i>-value of about 1% simply doesn't cut it. Not only is that actually not an enormously impressive number (especially given all the other things I mentioned above), such a frequentist statistic doesn't take account of all our prior knowledge.<br />
<br />
Other people have <a href="http://telescoper.wordpress.com/2014/09/15/frequentism-the-art-of-answering-the-wrong-question/">banged this drum at length</a> before, but the point is easily summarized: the <i>p</i>-value tells us the probability of getting this data given the model, but doesn't tell us the probability of the model being correct despite the new data appearing to contradict it. This is the question we really wish to answer. To do this requires a Bayesian analysis, in which one must account for the prior belief in the model, which is the result of confidence built up from all other experimental results that agree with it. We have an incredible amount of observational evidence in favour of our current model, evidence that would probably not be consistent with a model in which gigantic structures could exist (I say 'probably' because no such model actually exists at present). </div>
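A toy numerical version of this argument, with every number invented purely for illustration:

```python
# prior probability of the standard model, built up from all the independent
# observations that support it (the 0.99 is purely illustrative)
prior_std = 0.99

# p ~ 1%: probability of data at least this extreme if the standard model holds
p_data_given_std = 0.01
# be generous to the hypothetical alternative: suppose it predicts data like
# this fully half the time
p_data_given_alt = 0.5

# Bayes' theorem for the posterior probability of the standard model
posterior_std = (prior_std * p_data_given_std) / (
    prior_std * p_data_given_std + (1 - prior_std) * p_data_given_alt)
# posterior ~ 0.66: the standard model is still favoured, despite p = 1%
```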
<div>
<br /></div>
<div>
So my prior in favour of $\Lambda$CDM is pretty high — 19 quasars and an analysis so full of holes are not going to change that so quickly.</div>
</div>
Sesh Nadathurhttp://www.blogger.com/profile/07155102110438904961noreply@blogger.com8tag:blogger.com,1999:blog-6976071487922527618.post-86960655380282922422014-09-22T17:30:00.000+02:002014-10-22T13:38:22.029+02:00Biting the dust<div dir="ltr" style="text-align: left;" trbidi="on">
Sorry about the obvious pun in the title. Today's important announcement is of course the long-awaited <a href="http://arxiv.org/abs/1409.5738">Planck verdict</a> on the level at which the BICEP2 "discovery" of primordial gravitational waves had been contaminated by foreground dust. That verdict does not look good for BICEP.<br />
<br />
(Incidentally, back in July I <a href="http://blankonthemap.blogspot.com/2014/07/short-news-items.html">reported</a> a Planck source as saying this paper would be ready in "two or three weeks". Clearly that was far too optimistic. But interestingly many members of the Planck team themselves were confidently expecting today's paper to appear about 10 days ago, and the rumour is that the current version has been "toned down" a little, perhaps accounting for some of the additional delay. Despite that it's still pretty devastating.)<br />
<br />
Let me attempt to summarize the new results. Some important points are made right in the abstract, where we read:<br />
<blockquote class="tr_bq">
"... even in the faintest dust-emitting regions there are no "clean" windows in the sky where primordial CMB B-mode polarization measurements could be made without subtraction of foreground emission"</blockquote>
and that<br />
<blockquote class="tr_bq">
"This level [of the dust power in the BICEP2 window, over the multipole range of the primordial recombination bump] is <i>the same magnitude</i> as reported by BICEP2 ..."</blockquote>
(my emphasis). Although<br />
<blockquote class="tr_bq">
"the present uncertainties are large and will be reduced through an ongoing, joint analysis of the Planck and BICEP2 data sets,"</blockquote>
from where I am looking unfortunately it now does not look as if there is a realistic chance that what BICEP2 reported was anything more than a very precise measurement of dust.<br />
<br />
The Planck paper is pretty thorough, and actually quite interesting in its own right. They make use of the fact that Planck observes the sky at many frequencies to study the properties of dust-induced polarization. Whereas BICEP2 was limited to a single frequency channel at 150 GHz, the Planck HFI instrument has 4 different frequencies, of which the most useful is at 353 GHz. Previous Planck results have already shown that dust emission behaves sort of like a (modified) blackbody spectrum at a temperature of 19.6 Kelvin. Since this is a significantly higher temperature than the CMB temperature of 2.73 K, dust emission dominates at higher frequencies, which means that the 353 GHz channel essentially sees only dust and nothing else. Which makes it perfect for the task at hand, since in this particular situation roles are reversed and it is the dust that is the signal and the primordial CMB is noise!<br />
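The frequency argument can be made quantitative with a small sketch. The emissivity index beta below is an assumed, illustrative value; Planck's actual fit is a specific modified blackbody as described in their papers:

```python
import numpy as np

H = 6.626e-34   # Planck constant, J s
KB = 1.381e-23  # Boltzmann constant, J/K

def planck_law(nu_ghz, T):
    """Blackbody spectral shape (arbitrary overall units)."""
    nu = nu_ghz * 1e9
    return nu**3 / np.expm1(H * nu / (KB * T))

def dust_emission(nu_ghz, T=19.6, beta=1.6):
    """Modified blackbody: Planck function times a nu**beta emissivity.
    beta = 1.6 is an assumed value, for illustration only."""
    return planck_law(nu_ghz, T) * (nu_ghz * 1e9)**beta

# dust emission falls steeply from 353 GHz down to the BICEP2 band at 150 GHz,
# to under 10% of its 353 GHz level in these units
ratio_150 = dust_emission(150.0) / dust_emission(353.0)
```

This steep fall-off is why the 353 GHz channel is so dust-dominated, and why it can be used as a dust template to extrapolate down to 150 GHz.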
<br />
The analysis proceeds in a number of steps. First, they study the power spectra of the two polarization modes (EE and BB) in several different large regions in the sky:<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-APkFjimYr0I/VCAwgTGy3MI/AAAAAAAAGtc/JN86k1E9wTE/s1600/fsky.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://2.bp.blogspot.com/-APkFjimYr0I/VCAwgTGy3MI/AAAAAAAAGtc/JN86k1E9wTE/s1600/fsky.png" height="201" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">The different large sky regions studied are shown as increments of red, orange, yellow, green and two different shades of blue. The darkest blue region is always excluded. Figure from <a href="http://arxiv.org/abs/1409.5738">arXiv:1409.5738</a>.</td></tr>
</tbody></table>
In all these different regions, both power spectra $C_\ell^{EE}$ and $C_\ell^{BB}$ are proportional to $\ell^{\alpha}$, consistent with a value of $\alpha=-2.42\pm0.02$. Fixing $\alpha$ to this value, the amplitude of the power spectra in the different large regions then shows a characteristic dependence on the mean intensity of the dust emission — i.e. regions with more dust overall also show more polarization power — and this purely empirical relationship is characterized by<br />
$$A^{EE,BB}\propto\langle I_{353}\rangle^{1.9},$$ though with a bit of uncertainty in the fit. The amplitudes of the polarization power spectra then also show a dependence on frequency from 353 GHz down to 100 GHz, which matches previous Planck results (the dependence is something close to a blackbody spectrum at 19.6 K, but with a specific modification).<br />
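A slope like $\alpha=-2.42$ is the kind of number one recovers from a straight-line fit in log-log space; here is a minimal mock (synthetic data, nothing to do with the real Planck measurement):

```python
import numpy as np

rng = np.random.default_rng(5)
ell = np.arange(40, 600).astype(float)
# mock dust BB spectrum: a power law with slope -2.42 plus lognormal scatter
cl = 1e-3 * ell**-2.42 * rng.lognormal(mean=0.0, sigma=0.05, size=ell.size)

# fit log C_ell = alpha * log ell + const; alpha recovers ~ -2.42
alpha, log_amp = np.polyfit(np.log(ell), np.log(cl), 1)
```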
<br />
It then turns out that if the sky is split into very many much smaller regions close to the poles rather than the 6 large ones above, the same results continue to hold on average, though obviously there is some scatter introduced by the fact that dust in different bits of the sky behaves differently. So this allows the Planck team to take the measured dust intensity in any one of these smaller regions and extrapolate down to see what the contribution to the BB power would be if measured at the BICEP2 frequency of 150 GHz. The result looks like this:<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-D6haghfRAmQ/VCA20btzWnI/AAAAAAAAGts/Yu2fU_hgriE/s1600/BBdustmap.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://1.bp.blogspot.com/-D6haghfRAmQ/VCA20btzWnI/AAAAAAAAGts/Yu2fU_hgriE/s1600/BBdustmap.png" height="217" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">The level of dust contamination across the sky in measurements of the primordial B-mode signal. Blue is good, red is bad. The BICEP2 window is the black outline on the right.</td></tr>
</tbody></table>
This really sucks for BICEP2, who chose their particular patch of the sky precisely because, according to estimates from the 1990s and early 2000s, it was supposed to have very little dust. Planck is now saying that isn't true, and that there is a better region just a little further south. Even that better region isn't perfect, of course, but it may be clean enough to see a primordial GW signal of $r\sim 0.1$ to $0.2$ — if such a signal exists, and if we're lucky and/or figure out cleverer ways of subtracting the dust foreground.<br />
<br />
The problem with the BICEP2 region is that Planck's estimate of the dust contribution there looks like this:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-ts08ysz_2fA/VCA5S_bS4iI/AAAAAAAAGuA/q19JcXzheAo/s1600/BBdustpower.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://4.bp.blogspot.com/-ts08ysz_2fA/VCA5S_bS4iI/AAAAAAAAGuA/q19JcXzheAo/s1600/BBdustpower.png" height="263" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Planck's estimate of the dust contribution to the BB power spectrum at 150 GHz and in the BICEP2 sky window. The first bin is the one that's most relevant. The black line is the contribution primordial GW with $r=0.2$ would make, if they existed.</td></tr>
</tbody></table>
So it appears that in the BICEP2 window, in the $\ell$ region where primordial gravitational waves produce a measurable BB signal (and BICEP2 has measured something), dust is expected to produce the same amplitude of signal as does an $r=0.2$. In fact, even accounting for the uncertainties in the Planck analysis (the extent of the pink error bars on the plot) it is clear that (a) dust <i>will</i> be contributing significantly to the BICEP2 measurement, and (b) it's pretty likely that <i>only</i> dust is contributing.<br />
<br />
Planck avoid explicitly saying that BICEP2 haven't seen anything but dust. This is because they haven't directly measured the dust contribution in that window and at 150 GHz. Rather what's shown in the plot above is based on a number of little steps in the chain of inference:<br />
<ol style="text-align: left;">
<li>generally, the BB polarization amplitude is dependent on the average total dust intensity in a region;</li>
<li>the relationship between these two doesn't vary too much across the sky;</li>
<li>generally, the frequency dependence of the amplitude shows a certain behaviour;</li>
<li>and again this doesn't appear to vary too much across the sky;</li>
<li>Planck have measured the average dust intensity in the BICEP2 window, and this gives the value shown in the plot above when extrapolated to 150 GHz;</li>
<li>and the BICEP2 window doesn't <i>appear to be</i> a special outlier region on the sky that would wildly deviate from these average relationships;</li>
<li>so, the dust amplitude calculated is probably correct.</li>
</ol>
<div>
<i>Update: See the correction in the comments — the Planck paper actually does better than this. That is to say, they present one analysis that relies on all steps 1-7, but </i>in addition <i>they also measure the BB amplitude directly at 353 GHz and extrapolate that down to 150 GHz relying only on steps 3 and 4. The headline result is the one based on the second method, which actually gets a lower number for the dust amplitude. </i><br />
<i><br />
</i> So they leave open the small possibility that despite having been unlucky in the original choice of the BICEP2 window, we've somehow ultimately got very lucky indeed and nevertheless measured a true primordial gravitational wave signal. </div>
<div>
<br /></div>
<div>
Time will tell if this is true ... but the sensible betting has now got to be that it is not.</div>
<br />
Incidentally, I have just learned that in two days' time I will be presenting a 30 minute lecture to a group of graduate students about this result. The lecture is not supposed to be very detailed, but I'm also not very much of an expert on this. So if you spot any errors or omissions above, please do let me know through the comments box!</div>
Sesh Nadathurhttp://www.blogger.com/profile/07155102110438904961noreply@blogger.com21tag:blogger.com,1999:blog-6976071487922527618.post-63044070761651470452014-08-25T14:40:00.002+02:002014-08-28T18:42:47.063+02:00A Supervoid cannot explain the Cold Spot<div dir="ltr" style="text-align: left;" trbidi="on">
In my <a href="http://blankonthemap.blogspot.fi/2014/07/short-news-items.html">last post</a>, I mentioned the <a href="http://www.newscientist.com/article/mg22329762.800-biggest-void-in-universe-may-explain-cosmic-cold-spot.html#.U8OtUHV53UZ">claim</a> that the Cold Spot in the cosmic microwave background is caused by a very large void — a "supervoid" — lying between us and the last scattering surface, distorting our vision of the CMB, and I promised to say a bit more about it soon. Well, my colleagues (Mikko, Shaun and Syksy) and I have just written a paper about this idea which came out <a href="http://arxiv.org/abs/1408.4720">on the arXiv</a> last week, and in this post I'll try to describe the main ideas in it.<br />
<br />
First, a little bit of background. When we look at sky maps of the CMB such as those produced by WMAP or Planck, obviously they're littered with very many hot and cold spots on angular scales of about one degree, and a few larger apparent "structures" that are discernible to the naked eye or human imagination. However, as I've <a href="http://blankonthemap.blogspot.com/2013/07/quasars-homogeneity-and-einstein.html">blogged about</a> before, the human imagination is an extremely poor guide to deciding whether a particular feature we see on the sky is real, or important: for instance, Stephen Hawking's initials are <a href="http://www.newscientist.com/article/dn18489-found-hawkings-initials-written-into-the-universe.html#.U_ruoJ-uZoA">quite easy to see</a> in the WMAP CMB maps, but this doesn't mean that Stephen Hawking secretly created the universe.<br />
<br />
So to discover whether any particular unusual features are actually significant or not we need a well-defined statistical procedure for evaluating them. The statistical procedure used to find the Cold Spot involved filtering the CMB map with a special wavelet (a spherical Mexican hat wavelet, or SMHW) of a particular width (in this case $6^\circ$), and identifying the direction of the pixel with the coldest filtered temperature as the direction of the Cold Spot. Because of the nature of the wavelet used, this ensures that the Cold Spot is actually a reasonably sizable spot on the sky, as you can see in the image below:<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-4RNxYJRST5A/U_r5Ikr3_kI/AAAAAAAAGJ0/MinJJGNqL9A/s1600/cmb7_wmap_circle_900.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://4.bp.blogspot.com/-4RNxYJRST5A/U_r5Ikr3_kI/AAAAAAAAGJ0/MinJJGNqL9A/s1600/cmb7_wmap_circle_900.jpg" height="300" width="500" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">The Cold Spot in the CMB sky. Image credit: <a href="http://apod.nasa.gov/apod/ap110321.html">WMAP/NASA</a>.</td></tr>
</tbody></table>
<br />
Well, so we've found a cold spot. To elevate it to the status of "Cold Spot" in capitals and worry about how to explain it, we first need to quantify how unusual it is. Obviously it is unusual compared to other spots on our observed CMB, but this is true by construction and not very informative. Instead the usual procedure, quite rightly, compares the Cold Spot in our CMB to the cold spots found in random Gaussian maps using exactly the same SMHW technique. It is this procedure which results in the conclusion that our Cold Spot is statistically significant at roughly the "3-sigma level", i.e. only about 1 in every 1000 random maps has a coldest spot that is as "cold" as* our Cold Spot.** (The reason why I'm putting scare quotes around everything should become clear soon!)<br />
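For concreteness, here is a toy flat-sky version of that procedure — entirely my own caricature, with a Laplacian-of-Gaussian filter standing in for the spherical Mexican hat wavelet, and made-up map sizes and filter widths:

```python
# Toy significance test: generate random Gaussian "maps", filter each with a
# Mexican-hat-like kernel, record the coldest filtered pixel, then ask how
# often the random maps are at least as cold as some observed value.
# A flat-sky caricature with assumed parameters, not the real spherical analysis.
import numpy as np
from scipy.ndimage import gaussian_laplace

rng = np.random.default_rng(42)

def coldest_filtered_value(gmap, width=6.0):
    """Mexican-hat-like filtering via the (negated) Laplacian of a Gaussian."""
    filtered = -gaussian_laplace(gmap, sigma=width)
    return filtered.min()

# coldest filtered pixel in each of 200 random Gaussian maps
coldest = np.array([coldest_filtered_value(rng.normal(size=(128, 128)))
                    for _ in range(200)])

# a hypothetical "observed" coldest value, 3 standard deviations out
observed = coldest.mean() - 3.0 * coldest.std()
p_value = np.mean(coldest <= observed)
print(f"fraction of random maps with a spot at least this cold: {p_value:.3f}")
```

The real analysis works on the sphere with healpix maps drawn from the best-fit CMB power spectrum, but the logic — same filter, same "pick the coldest spot" rule, applied identically to data and simulations — is the same.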
<br />
So there appears to be a need to explain the existence of the Cold Spot using additional new physics of some kind. One such idea is that of the supervoid: a giant region hundreds of millions of light years across which is substantially emptier than the rest of the universe and lies between us and the Cold Spot. The emptiness of this region has a gravitational effect on the CMB photons that pass through it on their way to us, making them look colder (this is called the integrated Sachs-Wolfe or ISW effect) — hence the Cold Spot.<br />
<br />
Now this is a nice idea in principle. In practice, unfortunately, it suffers from a problem: the ISW effect is very weak, so to produce an effect capable of "explaining" the Cold Spot the supervoid would need to be truly super — incredibly large and incredibly empty. And no such void has actually been seen in the distribution of galaxies (a previous claim to have seen it turned out to not be backed up by further analysis).<br />
<br />
It was therefore quite exciting when in May a group of astronomers, led by Istvan Szapudi of the Institute for Astronomy in Hawaii, <a href="http://arxiv.org/abs/1405.1566">announced</a> that they had found evidence for the existence of a large void in the right part of the sky. Even more excitingly, in a <a href="http://arxiv.org/abs/1405.1555">separate theoretical paper</a>, Finelli <i>et al.</i> claimed to have modeled the effect of this void on the CMB and proven that it exactly fit the observations, and that therefore the question had been effectively settled: the Cold Spot was caused by a supervoid.<br />
<br />
Except ... things aren't quite that simple. For a start, the void they claimed to have found doesn't actually have a large ISW effect — in terms of central temperature, less than one-seventh of what would be needed to explain the Cold Spot. So Finelli <i>et al.</i> relied on a rather curious argument: that the second-order effect (in perturbation theory terms) of this void on CMB photons was somehow much larger than the first-order (i.e. ISW) effect. A puzzling inversion of our understanding of perturbation theory, then!<br />
<br />
In fact there were a number of other reasons to be a bit suspicious of the claim, among which were that N-body simulations don't show this kind of unusual effect, and that several other larger and deeper voids have already been found that aren't aligned with Cold Spot-like CMB features. In our paper we provide a fuller list of these reasons to be skeptical before diving into the details of the calculation, where one might get lost in the fog of equations.<br />
<br />
At the end of the day we were able to make several substantive points about the Cold Spot-as-a-supervoid hypothesis:<br />
<ol style="text-align: left;">
<li>Contrary to the claim by Finelli <i>et al.</i>, the void that has been found is neither large enough nor deep enough to leave a large effect on the CMB, either through the ISW effect or its second-order counterpart — in simple terms, it is not a super enough supervoid.</li>
<li>In order to explain the Cold Spot one needs to postulate a supervoid that is so large and so deep that the probability of its existence is essentially zero; if such a supervoid did exist it would be more difficult to explain than the Cold Spot currently is!</li>
<li>The possible ISW effect of any kind of void that could reasonably exist in our universe is already sufficiently accounted for in the analysis using random maps that I described above.</li>
<li>There's actually very little need to postulate a supervoid to explain the central temperature of the Cold Spot — the fact that we chose the coldest spot in our CMB maps already does that!</li>
</ol>
<div>
Point number 1 requires a fair bit of effort and a lot of equations to prove (and coincidentally it was also shown in an independent paper by Jim Zibin that appeared just a day before ours), but in the grand scheme of things it is probably not a supremely interesting one. It's nice to know that our perturbation theory intuition is correct after all, of course, but mistakes happen to the best of us, so the fact that one paper on the arXiv contains a mistake somewhere is not tremendously important.</div>
<div>
<br /></div>
<div>
On the other hand, point 2 is actually a fairly broad and important one. It is a result that cosmologists with a good intuition would perhaps have guessed already, but that we are able to quantify in a useful way: to be able to produce even half the temperature effect actually seen in the Cold Spot would require a hypothetical supervoid almost twice as large and twice as empty as the one seen by Szapudi's team, and the odds of such a void existing in our universe would be something like one-in-a-million or one-in-a-billion (whereas the Cold Spot itself is at most a one-in-a-thousand anomaly in random CMB maps). A supervoid therefore cannot help to explain the Cold Spot.***</div>
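For reference, translating between "N-sigma" language and "one-in-X" odds is just a property of the Gaussian distribution, nothing specific to our paper:

```python
# Convert an N-sigma Gaussian deviation into "one in X" odds,
# using the two-tailed probability of exceeding N sigma.
from math import erfc, sqrt

def one_in(nsigma):
    """Two-tailed Gaussian probability of an nsigma deviation, as 'one in X' odds."""
    return 1.0 / erfc(nsigma / sqrt(2.0))

print(f"3 sigma -> one in {one_in(3):,.0f}")  # roughly one in 370
print(f"5 sigma -> one in {one_in(5):,.0f}")  # roughly one in 1.7 million
```

So a one-in-a-million-or-worse supervoid is a far more extreme fluctuation than the one-in-a-thousand Cold Spot it was supposed to explain away.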
<div>
<br /></div>
<div>
Point 3 is again something that many people probably already knew, but equally many seem to have forgotten or ignored, and something that has not (to my knowledge) been stated explicitly in any paper. My particular favourite though is point 4, which I could — with just a tiny bit of poetic licence — reword as the statement that</div>
<blockquote class="tr_bq">
"the Cold Spot is not unusually cold; if anything, what's odd about it is only that it is surrounded by a <i>hot</i> ring"</blockquote>
I won't try to explain the second part of that statement here, but the details are in our paper (in particular Figure 7, in case you are interested). Instead what I will do is to justify the first part by reproducing Figure 6 of our paper here:<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-YY_cTzM9qEU/U_segLAjG0I/AAAAAAAAGKE/5xEKUCRZkhY/s1600/coldspotprof.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://3.bp.blogspot.com/-YY_cTzM9qEU/U_segLAjG0I/AAAAAAAAGKE/5xEKUCRZkhY/s1600/coldspotprof.png" height="300" width="500" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">The averaged temperature anisotropy profile at angle $\theta$ from the centre of the Cold Spot (in red), and the corresponding 1 and $2\sigma$ contours from the coldest spots in 10,000 random CMB maps (blue). Figure from <a href="http://arxiv.org/abs/1408.4720">arXiv:1408.4720</a>.</td></tr>
</tbody></table>
<br />
What the blue shaded regions show are the confidence limits on the <i>expected</i> temperature anisotropy $\Delta T$ at angles $\theta$ from the direction of the coldest spots found in random CMB maps using exactly the same SMHW selection procedure. The red line, which is the measured temperature for our actual Cold Spot, never goes outside the $2\sigma$ equivalent confidence region. In particular, at the centre of the Cold Spot the red line is pretty much exactly where we would expect it to be. The Cold Spot is not actually unusually cold.<br />
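In case it's useful, the blue bands in a figure like this are typically just percentile envelopes over the stack of simulated profiles. Schematically (with random numbers standing in for the real profiles):

```python
# Build 1- and 2-sigma-equivalent confidence bands from a stack of simulated
# profiles, bin by bin, via percentiles. Stand-in Gaussian numbers only --
# the real profiles come from the coldest spots of simulated CMB maps.
import numpy as np

rng = np.random.default_rng(7)
nsims, nbins = 10_000, 30
sim_profiles = rng.normal(size=(nsims, nbins))  # stand-in simulated profiles

# percentiles corresponding to the Gaussian 1- and 2-sigma levels
lo2, lo1, hi1, hi2 = np.percentile(
    sim_profiles, [2.275, 15.865, 84.135, 97.725], axis=0)

observed = rng.normal(size=nbins)  # stand-in for the measured Cold Spot profile
n_outside = int(np.sum((observed < lo2) | (observed > hi2)))
print(f"bins where the 'observed' profile leaves the 2-sigma band: {n_outside}")
```

An observed profile that stays inside the 2-sigma band everywhere — as the red line in our Figure 6 does — is entirely consistent with being the coldest spot of an ordinary Gaussian sky.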
<br />
Just before ending, I thought I'd also mention that Syksy has written about this subject on <a href="https://www.ursa.fi/blogit/kosmokseen-kirjoitettua/index.php/laikka-onkalon-takana">his own blog</a> (in Finnish only): as I understand it, one of the points he makes is that this form of peer review on the arXiv is actually more efficient than the traditional one that takes place in journals.<br />
<br />
<i>Update: You might also want to have a look at <a href="http://trenchesofdiscovery.blogspot.com/2014/08/the-cold-spot-is-not-particularly-cold.html">Shaun's take on the same topic</a>, which covers the things I left out here ...</i><br />
<div>
<br /></div>
<span style="font-size: x-small;">* <i>People often compare other properties of the Cold Spot to those in random maps, for instance its kurtosis or other higher-order moments, but for our purposes here the total filtered temperature will suffice.</i></span><br />
<i><span style="font-size: x-small;"><br />
</span></i> <i><span style="font-size: x-small;">** Although as Zhang and Huterer <a href="http://arxiv.org/abs/0908.3988">pointed out</a> a few years ago, this analysis doesn't account for the particular choice of the SMHW filter or the particular choice of $6^\circ$ width — in other words, that it doesn't account for what particle physicists call the "look-elsewhere effect". Which means it is actually much less impressive.</span></i><br />
<i><span style="font-size: x-small;"><br />
</span></i> <i><span style="font-size: x-small;">*** If we'd actually seen a supervoid which had the required properties, we'd have a proximate cause for the Cold Spot, but also a new and even bigger anomaly that required an explanation. But as we haven't, the point is moot.</span></i></div>
Sesh Nadathurhttp://www.blogger.com/profile/07155102110438904961noreply@blogger.com32tag:blogger.com,1999:blog-6976071487922527618.post-64446329259110952382014-07-14T12:25:00.000+02:002014-07-21T09:40:18.786+02:00Short news items<div dir="ltr" style="text-align: left;" trbidi="on">
Over the past two months I have been on a two-week seminar tour of the UK, taken a short holiday, attended a conference in Estonia and spent a week visiting collaborators in Spain. Posting on the blog has unfortunately suffered as a result: my apologies. Here are some items of interest that have appeared in the meantime:<br />
<ul style="text-align: left;">
<li>The BICEP and Planck teams are to share their data — here's the <a href="http://www.bbc.com/news/science-environment-28127576">BBC report</a> of this news. The information I have from Planck sources is that Planck will put out a paper with new data very soon (about a week ago I heard it would be "maybe in two weeks", so let's say two or three weeks from today). This new data will then be shared with the BICEP team, and the two teams will work together to analyse its implications for the BICEP result. From the timescales involved my guess is that what Planck will be making available is a measurement of the polarised dust foreground in the BICEP sky region, and the joint publication will involve cross-correlating this map with the B-mode map measured by BICEP. A significant cross-correlation would indicate that most (or all) of the signal BICEP detected was due to dust.</li>
</ul>
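The logic of such a cross-correlation test is easy to sketch. The toy below uses random numbers standing in for real maps — it uses no actual BICEP or Planck data and proves nothing about either — but it shows why a high correlation with the dust template would be bad news:

```python
# Toy cross-correlation test: if the "observed" map is mostly dust, its
# correlation with the dust template is high; if mostly primordial signal, low.
# All maps here are random stand-ins, not real data.
import numpy as np

rng = np.random.default_rng(1)
npix = 10_000
dust = rng.normal(size=npix)        # stand-in for a polarised-dust template
primordial = rng.normal(size=npix)  # stand-in for a true cosmological signal

def corr(a, b):
    """Pearson correlation coefficient between two 'maps'."""
    return float(np.corrcoef(a, b)[0, 1])

mostly_dust = 0.9 * dust + 0.1 * primordial
mostly_cosmo = 0.1 * dust + 0.9 * primordial

r_dust = corr(mostly_dust, dust)
r_cosmo = corr(mostly_cosmo, dust)
print(f"dust-dominated map vs template:   r = {r_dust:.2f}")
print(f"signal-dominated map vs template: r = {r_cosmo:.2f}")
```

The real version would be done harmonic-mode by harmonic-mode on the B-mode maps, with noise and masking carefully accounted for, but the underlying question is the same.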
<ul style="text-align: left;">
<li> What Planck will not be releasing in the next couple of weeks is their own measurement of the polarization of the CMB, in particular their own estimate of the value of $r$. The timetable for this release is still October: this is a deadline imposed by the fact that ESA requires Planck to release the data by December, but another major ESA mission (I forget which) is due to be launched in November and ESA don't like scheduling "competing" press conferences in the same month because there's only so much science news Joe Public can absorb at a time. From what I've heard, getting the full polarization data ready for October is a bit of a rush as it is, so it's fairly certain that's not what they're releasing soon.</li>
</ul>
<ul style="text-align: left;">
<li> By the way, I think I've recently understood a little better how a collaboration as enormous as Planck manage to remain so disciplined and avoid leaking rumours: it's because most of the people in the collaboration don't know the full details of the results either! That is to say, the collaboration is split into small sub-groups with specified responsibilities, and these sub-groups don't share results with each other. So if you ask a randomly chosen Planck member what the preliminary polarization results are looking like, chances are they don't know any better than you. (Though this may not stop them from saying "Well, I've seen some very interesting plots ..." and smiling enigmatically!)</li>
</ul>
<ul style="text-align: left;">
<li> The conference I attended in Estonia was the IAU symposium in honour of the 100th birth anniversary of the great Ya. B. Zel'dovich, on the general topic of large-scale structure and the cosmic web. I'll try to write a little about my general impressions of the conference next time. In the meantime all the talks are available for download from the website <a href="http://iau.maido.ee/programme">here</a>.<br />
</li>
</ul>
<ul style="text-align: left;">
<li> A science news story you may have seen recently is <a href="http://www.newscientist.com/article/mg22329762.800-biggest-void-in-universe-may-explain-cosmic-cold-spot.html#.U8OtUHV53UZ">"Biggest void in universe may explain cosmic cold spot"</a>: this is a claim that a recently detected region with a relative deficit of galaxies (the "supervoid") explains the existence of the unusual <a href="http://en.wikipedia.org/wiki/CMB_cold_spot">Cold Spot</a> that has been seen in the CMB, without the need to invoke any unusual new physics. The claim of the explanation is based on <a href="http://arxiv.org/abs/1405.1555">this paper</a>. Unfortunately this claim is wrong, and the paper itself has several problems. My collaborators and I are in the process of writing a paper of our own discussing why, and when we are done I will try to explain the issues on here as well. In the meantime, you heard it here first: a supervoid does not explain the Cold Spot!<br />
</li>
</ul>
<div>
<b>Update:</b> It has been pointed out to me that last week Julien Lesgourgues gave a talk about Planck and particle physics at the <a href="http://www.sewm14.unibe.ch/index.html">Strong and Electroweak Matter (SEWM14)</a> symposium, in which he also discussed the timeline of forthcoming Planck and BICEP papers. You can see this on page 12 of his talk (<a href="http://www.sewm14.unibe.ch/lesgourgues.pdf">pdf</a>) and it is roughly the same as what I wrote above (except that there's a typo in the year — it should be 2014 not 2015!).</div>
</div>
Sesh Nadathurhttp://www.blogger.com/profile/07155102110438904961noreply@blogger.com7tag:blogger.com,1999:blog-6976071487922527618.post-35573108917766145172014-05-16T15:28:00.001+02:002014-05-16T15:31:50.993+02:00BICEP and listening to real experts<div dir="ltr" style="text-align: left;" trbidi="on">
First up, I'd like to provide a health warning for all people landing here after following links from Sean Carroll or Peter Woit (thanks for the traffic!): I am not a CMB data analysis expert. What I provide on this blog is my own interpretation and understanding of the news and papers I have read, largely because writing such things out helps me understand them better myself. If it also helps people reading this blog, that's great, and you're welcome. But there are no guarantees that any of what I have written about BICEP is correct! If you truly want the best expert opinions on CMB analysis issues, you should listen to the best CMB experts — in this case, probably people who were in the WMAP collaboration, but are not in either Planck or BICEP. Also, if you want to ask somebody to write a scholarly review article on BICEP (yes, I get strange emails!), please don't ask me.<br />
<br />
Having said that, I'm not sure whether any WMAP scientists write blogs, so I can at least try to provide some sources for the non-expert reader to refer to. One thing that you definitely should look at is Raphael Flauger's talk (<a href="http://pcts.princeton.edu/PCTS/SpecialEventSimplicity2014/SpecialEventSimplicity2014.html">slides and video</a>) at Princeton yesterday. I think it is this work which was the source of the "<a href="http://resonaances.blogspot.fi/2014/05/is-bicep-wrong.html">is BICEP wrong</a>" rumours first publicly posted at Resonaances, and indeed I see that Resonaances today has a <a href="http://resonaances.blogspot.fi/2014/05/follow-up-on-bicep.html">follow-up</a> referring to these very slides.<br />
<br />
There are several interesting things to take away from this talk. The first is to do with the question of whether BICEP misinterpreted the preliminary Planck data that they admit having taken from a digitized version of a slide shown at a meeting. Here Flauger essentially simulates the process by digitizing the slide in question (and a few others) himself and analyzing them both with and without the correct CIB (cosmic infrared background) subtraction. His conclusion is that with the correct treatment, the dust models appear to predict higher dust contamination than BICEP accounted for; the inference being, I guess, that they didn't subtract the CIB correctly.<br />
<br />
How important is this dust contribution? Here there is a fair amount of uncertainty: even if the digitization procedure were foolproof, one of the dust models underestimates the contamination and another one overestimates it. Putting the two together, "foregrounds may be OK if the lower end of the estimates is correct, but are potentially dangerous" (page 40). Flauger tries another method of estimation based on the HI column density, using yet more unofficial Planck "data" taken from digitized slides. This seems to give much the same bottom line.<br />
<br />
A key point here is that everybody who isn't privy to the actual Planck data is really just groping in the dark, digitizing other people's slides. Flauger acknowledges this by trying to estimate the effect of the whole process: converting real data into a gif image, embedding that in a pdf as part of a talk, somebody nicking the pdf and converting it back to gif and then back to usable data. As you can imagine, the amount of noise introduced in this version of Chinese Whispers is considerable! So I think the following comment from Lyman Page towards the end of the video (as helpfully <a href="https://www.facebook.com/groups/574544055974988/">transcribed by Eiichiro Komatsu for the Facebook audience</a>!) is perhaps the most relevant:<br />
<blockquote class="tr_bq">
"This is, this is a really, peculiar situation. In that, the best evidence for this not being a foreground, and the best evidence for foregrounds being a possible contaminant, both come from digitizing maps from power point presentations that were not intended to be used this way by teams just sharing the data. So this is not - we all know, this is not sound methodology. You can't bank on this, you shouldn't. And I may be whining, but if I were an editor I wouldn't allow anything based on this in a journal. Just this particular thing, you know. You just can't, you can't do science by digitizing other people's images."</blockquote>
Until Planck answers (or fails to definitively answer) the question of foregrounds in the BICEP window, or some other experiment confirms the signal, we should bear that in mind.<br />
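To see just how lossy that game of Chinese Whispers is, here is a toy version of a single link in the chain — my own caricature: real-valued data pushed through a discrete colour scale and read back:

```python
# Caricature of digitizing a plot: real values are rounded onto a discrete
# colour scale (like a gif palette) and mapped back, once and then again
# through a coarser scale. Illustrative only; the real gif->pdf->gif pipeline
# adds compression artefacts, axis-reading errors, and worse.
import numpy as np

rng = np.random.default_rng(0)
data = rng.uniform(0.0, 1.0, size=1000)  # some "band power" values

def through_colour_scale(values, levels=256):
    """Round values onto a discrete colour scale and map them back."""
    lo, hi = values.min(), values.max()
    codes = np.round((values - lo) / (hi - lo) * (levels - 1))
    return codes / (levels - 1) * (hi - lo) + lo

once = through_colour_scale(data)               # e.g. data -> gif
twice = through_colour_scale(once, levels=64)   # e.g. gif -> pdf -> gif again

print(f"max error after one pass:   {np.abs(once - data).max():.4f}")
print(f"max error after two passes: {np.abs(twice - data).max():.4f}")
```

Each extra pass through a discrete representation only degrades the numbers further — which is exactly Page's point about not doing science this way.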
<br />
There are some other issues that remain confusing at the moment: the cross-correlation of dust models with BICEP signal doesn't seem to support the idea that all the signal is spurious (though there are possibly some other complicating factors here), and the frequency evidence — such as it is — from the cross power with BICEP1 also doesn't seem to favour a dust contaminant. But all in all, the BICEP result is currently under a lot of pressure. Having seen this latest evidence, I now think the <a href="http://resonaances.blogspot.fi/2014/05/follow-up-on-bicep.html">Resonaances verdict</a> ("until [BICEP convincingly demonstrate that foregrounds are under control], I think their result does not stand") is — at least — a justifiable position.<br />
<br />
<i>Footnote: I should also perhaps explain that throughout my physics education I had been taught, and had come to believe, that the types of models of inflation BICEP provided evidence for (those with inflaton field values larger than the Planck scale) were fundamentally unnatural and incomplete, and that those small-field models that BICEP apparently ruled out were much more likely to be true. So perhaps my conscious attempts to compensate for this acknowledged theoretical prejudice could have biased me too far in the opposite direction in some previous posts!</i></div>
Sesh Nadathurhttp://www.blogger.com/profile/07155102110438904961noreply@blogger.com4tag:blogger.com,1999:blog-6976071487922527618.post-67121789887060640472014-05-14T18:40:00.002+02:002014-05-14T18:40:52.538+02:00New BICEP rumours: nothing to see here<div dir="ltr" style="text-align: left;" trbidi="on">
This week there has been a minor kerfuffle about some rumours, originally posted on Adam Falkowski's <a href="http://resonaances.blogspot.co.uk/2014/05/is-bicep-wrong.html">Resonaances blog</a>, regarding the claimed gravitational wave detection by BICEP. The rumours asserted that Planck had proven BICEP had made a mistake, BICEP had admitted the mistake, and that this might mean that all the excitement about the detection of gravitational waves was misplaced and all that BICEP had seen was some foreground dust emission contaminating their maps. (Since then there has been a <a href="http://news.sciencemag.org/physics/2014/05/blockbuster-big-bang-result-may-fizzle-rumor-suggests">strong public denial</a> of this by the BICEP team.)<br />
<br />
Now, with the greatest respect to Resonaances, which is an excellent particle physics blog, this is really a non-issue, and certainly not worth offending lots of people for (see for instance Martin Bucher's comment <a href="http://resonaances.blogspot.co.uk/2014/05/is-bicep-wrong.html?showComment=1399968445075#c3021741269505250715">here</a>). I really do not see what substantial information these rumours have provided us with that was not already known in March, and therefore why we should alter <a href="http://blankonthemap.blogspot.com/2014/03/bicep2-reasons-to-be-sceptical-part-1.html">assessments</a> of the data <a href="http://blankonthemap.blogspot.com/2014/03/bicep2-reasons-to-be-sceptical-part-2.html">made at that time</a>.<br />
<br />
Let me explain a bit more. One of the important limitations of the BICEP2 experiment is that it essentially measured the sky at only one frequency (150 GHz) — the data from BICEP1, which was at 100 GHz, was not good enough to see a signal, and the data from the Keck Array at 100 GHz has not yet been analysed. When you only have one frequency it is much harder to rule out the possibility that the "signal" seen is not due to primordial gravitational waves at all but due to intervening dust or other contamination from our own Galaxy.<br />
<br />
The way that BICEP addressed this difficulty was to use a set of different models for the dust distribution in that part of the sky, and to show that all of them predict that the possible level of dust contamination is <i>an order of magnitude</i> too small to account for the signal that they see. Now, some of these models may not be correct. In fact none of them are likely to be exactly right, because they may be based on old and likely less accurate measurements of the dust distribution or rely on a bit of extrapolation, wishful thinking, whatever. But the point is that they all roughly agree about the order of magnitude of dust contamination. This does not mean that we <i>know</i> there is or isn't any foreground contamination; this is merely a plausibility argument from BICEP (that is supported by and supports some other plausibility arguments in the paper).<br />
<br />
Now the "new" rumour is based on the fact that it turns out that one of the dust models was based on BICEP's interpretation of preliminary Planck data, and that this data was not officially sanctioned but digitally extracted from a pdf of a slide shown at a talk somewhere. This is not exactly news, since the slide in question is in fact referenced in the BICEP paper. What's new is that now somebody unnamed is suggesting that the slide was in fact misinterpreted, and therefore this one dust model is more wrong than we thought, though we already accepted it was probably somewhat wrong. This is not the same as proving that the BICEP signal has been definitively shown to be caused by dust contamination! In fact I don't see how it changes the current picture we have at all. Ultimately the only way we can be sure about whether the observed signal is truly primordial or due to dust is to have measurements that combine several different frequencies. For that we have to wait a bit for other experiments — and that's the same as we were saying in March.<br />
<br />
It's worth noting that when BICEP quote their result in terms of the tensor-to-scalar ratio <i>r</i>, the headline number $r=0.2$ assumes that there is literally <i>zero</i> foreground contamination. This was always an unrealistic assumption, but that hasn't stopped some 300 theorists from writing papers on the arXiv that take the number at face value and use it to rule out or support their favourite theories. The foreground uncertainty means that while we can be reasonably confident that the gravitational wave signal does exist (see here), model comparisons that strongly depend on the precise value of <i>r</i> are probably going to need some revision in the future.<br />
<br />
So what new information <i>have</i> we gained since March? Well, Planck released some more data, this time a map of the polarized dust emission close to the Galactic plane.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-I4vjR3NBiNs/U3M7zjJIuGI/AAAAAAAAFgk/qyGgHAz0XoA/s1600/Planck_pfrac.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://3.bp.blogspot.com/-I4vjR3NBiNs/U3M7zjJIuGI/AAAAAAAAFgk/qyGgHAz0XoA/s1600/Planck_pfrac.jpg" height="321" width="620" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">The polarization fraction at 353 GHz observed by Planck. From <a href="http://arxiv.org/abs/1405.0871">arXiv:1405.0871</a>.</td></tr>
</tbody></table>
<br />
<div>
Since these maps do not include the part of the sky that BICEP looked at (which is mostly in the grey region at the bottom), they don't tell us a huge amount about whether that part of the sky is or is not contaminated by polarized dust emission! Some people have speculated that this is something to do with the rivalry between Planck and BICEP, which is a bit over-the-top. Instead the reason is more scientific: the mask excludes areas where the error in determining the polarisation fraction is high, or the overall dust signal itself is too small. So the fact that the BICEP patch is in the masked region indicates that the dust emission does <i>not</i> dominate the total emission there, at least at 353 GHz (dust emission increases with frequency). This means there is not a whole lot of dust showing up in the BICEP region — if anything, this is good news! But even this interpretation should be treated with caution: dust doesn't contribute too much to the total <i>intensity</i> in that region, but it may well still contribute a large fraction of whatever B-mode polarization is seen. Based on my understanding and things I have learned from conversations with colleagues, I don't think Planck is going to be sensitive enough to make <i>definitive</i> statements about the dust in that <i>specific</i> region of the sky.</div>
<div>
<br /></div>
<div>
Another interesting paper that has come out since March has been <a href="http://arxiv.org/abs/1404.1899">this one</a>, which claims evidence for some contamination in the CMB arising from the "radio loops" of our Galaxy. It also has the great benefit of being an actual scientific paper rather than a rumour on somebody's blog. <i>(Full disclaimer: one of the authors of this paper was my PhD advisor, and another is a friend who was a fellow student when I was at Oxford.)</i> </div>
<div>
<br /></div>
<div>
The radio loops are believed to be due to ejected material from past supernova explosions; the idea is that if this dust contains ferrimagnetic molecules or iron, it would contribute polarized emission that might be mistaken for true CMB when it is in fact more local. What this paper argues is that there does appear to be some evidence that one of the CMB maps produced by the WMAP satellite (which operated before Planck) shows a correlation between map temperature and the position of one of these radio loops ("Loop I"). In particular, synchrotron emission from Loop I appears to be correlated with the temperature in the WMAP Internal Linear Combination (or ILC) map. I'm not going to comment on the strength of the statistical evidence for this claim; doubtless someone more expert than I will thoroughly check the paper before it is published. For the time being let us treat it as proven.</div>
<div>
<br /></div>
<div>
The relevance of this to BICEP is somewhat intricate, and proceeds like this: given our physical understanding of how the radio loops formed, it seems likely that they produce both synchrotron and dust emission which follow the same pattern on the sky. Therefore perhaps the correlation of the synchrotron emission from Loop I with the ILC map is because both are correlated with dust emission from the loop. If the correlation is because of dust emission, this might be polarized because of the postulated ferrimagnetic molecules etc., leading to a correlation between the WMAP polarization and Loop I. And if Loop I is contaminating the WMAP ILC map, it is perhaps plausible that a different radio loop, called the "New Loop", is also contaminating other CMB maps, in particular those of BICEP. Whereas Loop I doesn't get very close to the BICEP region, the New Loop goes right through the centre of it (see the figure below), so it is possible that there is some polarized contamination appearing in the BICEP data because of the New Loop. At any rate, the foreground dust models that BICEP used didn't account for any radio loops, so likely underestimate the true contamination.</div>
<div>
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-GMr97t6oY6Q/U3NSQEUYtNI/AAAAAAAAFg0/-RjKoXO2CPw/s1600/bicep2_loops.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://3.bp.blogspot.com/-GMr97t6oY6Q/U3NSQEUYtNI/AAAAAAAAFg0/-RjKoXO2CPw/s1600/bicep2_loops.png" height="394" width="620" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Position of some Galactic radio loops and the BICEP window. "Loop I" is the large one in the upper centre, which only skims the BICEP window; the "New Loop" is the one in the lower centre that passes through the centre of it. Figure from Philipp Mertsch.</td></tr>
</tbody></table>
<div>
<br /></div>
<div>
So far so good, but this is quite a long chain of reasoning and it doesn't <i>prove</i> that it is actually dust contamination that accounts for any part of the BICEP observation. Instead it makes a plausible argument that it might be important; further investigation is required.</div>
<div>
<br /></div>
<div>
At the end of the day then, we are left in pretty much the same position we were in back in March. The BICEP result is exciting, but because it is only at one frequency, it cannot rule out foreground contamination. Other observations at other frequencies are required to confirm whether the signal is indeed cosmological. One scenario is that Planck, operating on the whole sky at many frequencies but with a lower sensitivity than BICEP, confirms a gravitational wave signal, in which case pop the champagne corks and prepare for Stockholm. The other scenario is that Planck can't confirm a detection, but also can't definitively say that BICEP's detection was due to foregrounds (this is still reasonably likely!), in which case we wait for other very sensitive ground-based telescopes pointed at that same region of sky but operating at different frequencies to confirm whether or not dust foregrounds are actually important in that region, and if so, how much they change the inferred value of <i>r.</i></div>
<div>
<i><br />
</i></div>
<div>
Until then I would say ignore the rumours.</div>
</div>
Sesh Nadathurhttp://www.blogger.com/profile/07155102110438904961noreply@blogger.com6tag:blogger.com,1999:blog-6976071487922527618.post-81058160714912436092014-03-24T18:36:00.000+01:002014-03-24T18:37:20.978+01:00BICEP2: reasons to be sceptical, part 2<div dir="ltr" style="text-align: left;" trbidi="on">
This is the second part of three posts in which I wanted to lay out the various possible causes of concern regarding the BICEP2 result, and provide my own opinion on how seriously we should take these worries. I arranged these reasons to be sceptical into three categories, based on the questions<br />
<ul style="text-align: left;">
<li>how certain can we be that BICEP2 observed a real B-mode signal?</li>
<li>how certain can we be that this B-mode signal is cosmological in origin, i.e. that it is due to gravitational waves rather than something less exciting?</li>
<li>how certain can we be that these gravitational waves were caused by inflation?</li>
</ul>
<div>
The <a href="http://blankonthemap.blogspot.fi/2014/03/bicep2-reasons-to-be-sceptical-part-1.html">first post</a> dealt with the first of the three questions, this one addresses the second, and a post yet to be written will deal with the third.</div>
<div>
<br /></div>
<h4 style="text-align: left;">
How certain can we be that the observed B-mode signal is cosmological? </h4>
<div>
<br /></div>
<div>
Let's take it as given that none of the concerns in the previous post turn out to be important, i.e. that the observed B-mode signal is not an artefact of some hidden systematics in the analysis, leakage or whatever. From my position of knowing a little about data in general, but nothing much about CMB polarization analysis, I guessed that the chances of any such systematic being important were about 1 in 100.</div>
<div>
<br /></div>
<div>
The next question is then whether the signal could be caused by something other than the primordial gravitational waves that we are all so interested in. The most important possible contaminant here is other nearby sources of polarized radiation, particularly dust in our own Galaxy. We don't actually <i>know</i> how much polarized dust or synchrotron emission there might be in the sky maps here, so a lot of what BICEP have done is educated guesswork.<br />
<br />
To start with, the region of the sky that BICEP looks at was chosen on the basis of a study by <a href="http://arxiv.org/abs/astro-ph/9905128">Finkbeiner et al.</a> from 1999, which extrapolated from measurements of dust emission at certain other frequencies to estimate that, at the frequency of relevance to CMB missions such as BICEP, that particular part of the sky would be exceptionally "clean", i.e. with exceptionally low foreground dust emission. Whether this is actually true or not is not yet known for certain, but there exist a number of <i>models</i> of the dust distribution, and most of these models predict that the level of contamination to the B-mode detection from polarized dust emission would be <i>an order of magnitude smaller</i> than the observed signal. Similar model-dependent extrapolation to the observation frequency based on WMAP results suggests that synchrotron contamination is also an order of magnitude too small.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-uDAS4phiNNA/UzBr0tpn-DI/AAAAAAAAFfY/Xbn5Pe_QcKQ/s1600/bicep2_dust.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://4.bp.blogspot.com/-uDAS4phiNNA/UzBr0tpn-DI/AAAAAAAAFfY/Xbn5Pe_QcKQ/s1600/bicep2_dust.png" height="297" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Predictions for foreground contamination for different dust models (the coloured lines at the bottom) versus the actual B-mode signal observed by BICEP2 (black points).</td></tr>
</tbody></table>
<br />
<br />
Now one real test of these assumptions will come from Planck, because Planck will soon have the best map of dust in our Galaxy and therefore the best limits on the possible contamination. This is one of the reasons to look forward to Planck's own polarization results, due in about October or November. In the absence of this information, the other thing that we would like to see from BICEP in order to be sure their signal is cosmological is evidence that the signal exists at multiple frequencies (and has the expected frequency dependence).<br />
<br />
BICEP do not detect the signal at multiple frequencies. The current experiment, BICEP2, operates at 150 GHz only, and that is where the signal is seen. A previous experiment, BICEP1, did run at 100 GHz as well, but BICEP1 did not have the same sensitivity and could only place an upper limit on the B-mode signal. Data from the Keck Array will eventually also include observations at 100 GHz, but this is not yet available. Until we have confirmation of the signal at different frequencies, most cosmologists will treat the result very carefully.<br />
<br />
In the absence of this, we must look at the cross-correlation between B2 and B1. Remember that although B1$\times$B1 did not have the sensitivity to make a detection of non-zero power, B2$\times$B1 can still tell us something useful. If B1 maps were purely noise, or B2 maps were due to dust, we would not expect them to be correlated. If both were due to synchrotron radiation, we would expect them to be strongly correlated. In fact the B2$\times$B1 cross power is non-zero at the $3\sigma$ level or about 99% confidence, which is something <a href="http://telescoper.wordpress.com/2014/03/19/time-for-a-cosmological-reality-check/">Peter Coles' sceptical summary</a> ignores. This is indeed evidence that the signal seen at 150 GHz is cosmological.<br />
<br />
Still, some level of cross-correlation could be produced even if both B2 and B1 were only seeing foregrounds. Combining the B2$\times$B1 data with B2$\times$B2 and B1$\times$B1 means that polarized dust or synchrotron emission of unexpected strength are rejected as explanations – though at a not-particularly-exciting significance of about $2.2-2.3\sigma$.<br />
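The logic of these cross-correlation arguments can be illustrated with a toy calculation (all numbers invented; this is a sketch of the general principle, not the BICEP analysis): maps that share a common signal show up in the cross power, while independent noise does not.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5_000

sky = rng.normal(size=n)                  # common cosmological signal
b2 = sky + 0.3 * rng.normal(size=n)       # "B2": sees the sky, low noise
b1_noise = rng.normal(size=n)             # "B1" if it were pure noise
b1_sky = sky + 1.0 * rng.normal(size=n)   # "B1" if it also sees the sky

def xcorr(a, b):
    """Normalised cross-correlation of two toy maps."""
    return np.corrcoef(a, b)[0, 1]

print(xcorr(b2, b1_noise))  # consistent with zero: no shared signal
print(xcorr(b2, b1_sky))    # clearly non-zero: shared sky signal
```

Even a noisy experiment like B1 can therefore contribute useful evidence through the cross power, which is why B2$\times$B1 is informative despite B1$\times$B1 being only an upper limit.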
<br />
<h4 style="text-align: left;">
Verdict </h4>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
It's fair to say, on the basis of models of the distribution of polarized dust and synchrotron emission, that the BICEP2 signal <i>probably</i> isn't due to either of these contaminants. However, we don't yet have confirmation of the detection at multiple frequencies, which is required to judge for sure. At the moment, the frequency-based evidence against foreground contamination is not very strong, but we'd still need some quite unexpected stuff to be going on with the foregrounds to explain the <i>amplitude</i> of the observed signal.</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
Overall, I'd guess the odds are about 1:100 against foregrounds being the whole story. (This should still be compared with the quoted headline result of 1:300,000,000,000 against $r=0$ assuming no foregrounds at all!)</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
The chances are <i>much </i>higher – I'd be tempted to say perhaps even better than even money – that foregrounds contribute a <i>part</i> of the observed signal, and that therefore the actual value of the tensor-to-scalar ratio will come down from $r=0.2$, perhaps to as low as $r=0.1$, when Planck checks this result using their better dust mapping.</div>
</div>
</div>
Sesh Nadathurhttp://www.blogger.com/profile/07155102110438904961noreply@blogger.com8tag:blogger.com,1999:blog-6976071487922527618.post-86310211575974781452014-03-21T15:40:00.000+01:002014-03-24T18:37:52.725+01:00BICEP2: reasons to be sceptical, part 1<div dir="ltr" style="text-align: left;" trbidi="on">
As the dust begins to settle following the amazing announcement of the discovery of gravitational waves by the BICEP2 experiment, physicists around the world are taking stock and scrutinizing the results.<br />
<br />
Remember that the claimed detection is enormously significant, in more ways than one. The BICEP team have apparently detected an exceedingly faint B-mode polarization pattern in the CMB, at an order of magnitude better sensitivity than any previous experiment probing the same scales. They have then claimed to have been able to ascribe this B-mode signal unambiguously to cosmological gravitational waves, rather than any astrophysical effects due to intervening dust or other sources of radiation. And finally they have interpreted these results as direct evidence for the theory of inflation, which is really the source of all the excitement, because if true it would pin down the energy scale of inflation at an incredibly high level, with extensive and dramatic consequences for our understanding of high energy particle physics.<br />
<br />
However, as all physicists have been saying, with results of this magnitude it is important to be very careful indeed. Speculating who should get the Nobel Prize (or Prize<i>s</i>) for this is still premature. The paper containing the results will of course be subjected to anonymous peer review when it is submitted to a journal, but it has also already faced a rather extraordinary open peer review by social media, with a <a href="https://www.facebook.com/groups/574544055974988/">live group on Facebook</a>, and all sorts of other discussion on blogs, Twitter and the like. (And to the great credit of the scientists on the BICEP team, they have patiently responded to questions and comments on these forums, and the whole process has been carried out very civilly!)<br />
<br />
What I wanted to do today is to possibly contribute to that by gathering together all the main points of concern and reasons to be sceptical of the BICEP result. This is partly for my own purposes, since writing things down helps to clarify my thoughts. I will divide these concerns into three main categories, addressing the following questions:<br />
<br />
<ul style="text-align: left;">
<li>how certain can we be that BICEP2 observed a real B-mode signal?</li>
<li>how certain can we be that this B-mode signal is cosmological in origin, i.e. that it is due to gravitational waves rather than something less exciting?</li>
<li>how certain can we be that these gravitational waves were caused by inflation?</li>
</ul>
<div>
<br /></div>
<div>
I'll discuss the first category of concerns in part 1 of this post and the next two <strike>together</strike> in <a href="http://blankonthemap.blogspot.com/2014/03/bicep2-reasons-to-be-sceptical-part-2.html">parts 2</a> and 3. I do not claim that any of the concerns I raise here are original, however any mistakes are definitely mine alone. I'd like to encourage discussion of any of these points via the comments below.</div>
<div>
<br /></div>
<h4 style="text-align: left;">
How certain can we be that BICEP2 observed a real B-mode signal?</h4>
<div>
<br />
This is obviously the most basic issue. The general reason for concern here — and this applies to any B-mode detection experiment — is that the experimental pipeline has to be able to decompose the polarization signal seen into two components, the E-mode and the B-mode, and the level of the signal in the B-mode is orders of magnitude smaller than in E. Now, <a href="http://telescoper.wordpress.com/2014/03/19/time-for-a-cosmological-reality-check/">as Peter Coles explains here</a>, the E and B polarization components are in principle orthogonal to each other when the spherical harmonic decomposition can be performed over the whole sky, but this is in practice impossible. BICEP observes only a small portion of the sky, and therefore there is the possibility of "leakage" from E to B when separating out the components. It would not take much leakage to spoil the B-mode observation.</div>
<div>
<br /></div>
<div>
Obviously the BICEP team implemented many tests of the obtained maps to check for such systematics. One of the ways to do this is to cross-correlate the E and B maps: if there is no leakage the cross-correlation should be consistent with zero. Another important test is the jackknife technique, also nicely explained <a href="http://philbull.wordpress.com/2014/03/17/how-solid-is-the-bicep2-b-mode-result/">here</a>: you split your data into two equal halves, and subtract the signal found in one half from that in the other; the answer should also be consistent with zero.</div>
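The jackknife idea is simple enough to sketch in a few lines (a toy illustration with invented noise levels, not the actual pipeline): any signal common to the two halves of the data cancels in the difference, so the residual should be consistent with pure noise.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy jackknife null test: split the observations into two halves,
# e.g. by detector or by observing season (numbers invented).
n_pix = 10_000
sky = rng.normal(0.0, 1.0, n_pix)            # common sky signal
half_a = sky + rng.normal(0.0, 0.1, n_pix)   # half 1 = signal + noise
half_b = sky + rng.normal(0.0, 0.1, n_pix)   # half 2 = signal + noise

# Difference the halves: the common signal cancels, leaving only noise.
diff = 0.5 * (half_a - half_b)

# The residual should match the expected noise level, not the signal:
expected_noise_var = 0.5 * 0.1**2  # var of 0.5*(n_a - n_b) for sd-0.1 noise
print(diff.mean(), diff.var(), expected_noise_var)
```

A failed jackknife (a difference map with excess power) would point to a systematic that affects the two halves differently.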
<div>
<br /></div>
<div>
Now one source of concern arises because of a combination of these two tests. The blue points in the following figure show the results of a jackknife test on the BB power:</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-crsG316EKaI/UywJPQM1xKI/AAAAAAAAFe0/-f5rzwGMSxM/s1600/bicep_bb_jack.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-crsG316EKaI/UywJPQM1xKI/AAAAAAAAFe0/-f5rzwGMSxM/s1600/bicep_bb_jack.jpg" height="400" width="397" /></a></div>
<div>
<br /></div>
<div>
These points are consistent with zero ... but they are possibly <i>too</i> consistent with zero! The $1\sigma$ error bars of every single one of them pass through zero, whereas it would be more natural to expect some more scatter. In fact from the number on the plot you can see that there is only a 1% chance that all 9 blue points should be so close to zero.</div>
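The quoted ~1% comes from the collaboration's own simulations, which account for correlations between bandpowers; a back-of-the-envelope version that (incorrectly but instructively) treats the 9 points as independent Gaussian measurements gives the same order of magnitude:

```python
from math import erf, sqrt

# Probability that a single Gaussian measurement lands within 1 sigma
# of zero:
p_one = erf(1.0 / sqrt(2.0))   # about 0.683

# Naive probability that all 9 points do so simultaneously, if they
# were independent (the real bandpowers are correlated, so the number
# on the plot differs somewhat):
p_all_nine = p_one ** 9
print(f"{p_one:.3f}, {p_all_nine:.3f}")
```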
<div>
<br /></div>
<div>
This raises the possibility, pointed out by <a href="http://www.mn.uio.no/astro/english/people/aca/hke/">Hans Kristian Eriksen</a>, that the errorbars on the blue points are <i>over</i>estimated. It may then be the case that the errorbars on other points in other jackknife tests are also too large. If that were the case then reducing those errors might mean that some of the other jackknife tests now fail — the points are no longer consistent with zero. As it happens, of the 168 jackknife test results listed in the table in the paper, quite a large number (about 7) of them already "fail" by the stricter standards (2% probability) some other experiments such as <a href="http://quiet.uchicago.edu/">QUIET</a> might apply. Obviously some number of tests are always expected to fail, but more than 7 out of 168 starts to look like quite a large number. This then becomes a little worrying.</div>
<div>
<br /></div>
<div>
On the other hand, this extrapolation may be a little exaggerated, because we are surmising that the errorbars might be too large purely on the basis of the one figure above. Clearly if you do a large number of jackknife tests, it becomes less surprising that one of them gives a surprising result, if you see what I mean. Looking through the table for the other BB jackknife results, the particular example from the figure is the only one that stands out as being odd, so it is hard to conclude from this that the errorbars <i>are</i> too large. Overall I'm not convinced that there is necessarily a problem here, but it is something that deserves a little more quantitative attention.<br />
<br />
The second source of concern that has been highlighted is that the data at large multipole values appear to be doing something odd. Look at the 5th, 6th and 7th black points from the figure above, which are quite a long way from the theoretical expectation. Peter Coles helpfully drew a little blue circle around them:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-o28U-NLmv2s/UyxCsplY1RI/AAAAAAAAFfE/QlERim2V57I/s1600/bicep2worry.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-o28U-NLmv2s/UyxCsplY1RI/AAAAAAAAFfE/QlERim2V57I/s1600/bicep2worry.png" height="153" width="400" /></a></div>
<br />
The worry here is that even if the data appear to be passing jackknife tests for internal consistency and null tests for EB cross power, the fact that these points are so high suggests that there is still some undetected systematic that has crept in somewhere. This hypothesized systematic could account for the measured values of the crucial first four points, which constitute the detection of the gravitational waves.<br />
<br />
Similarly, people are worried about the EE power spectrum, which appears to be too high in the $50< \ell<100$ region — again this could be a sign of leakage from temperature into polarization, which could perhaps be contaminating the B-mode maps despite not explicitly showing up in the jackknife consistency checks.<br />
<br />
Now, the BICEP response to this is that you shouldn't judge things simply "by eye". The EE excess does not appear to be statistically significant. It's also not incredibly unlikely that the final two of the circled BB data points could simultaneously be as high as they are just due to random chance — they say "their joint significance is $<3\sigma$", which means that the chance is about 1%. (Of course the chance that all <i>three</i> of the circled points could simultaneously be high is smaller than that, and so presumably less than 1% ... )<br />
<br />
Another justification some people have been providing (mostly people from outside the BICEP collaboration to be fair, though some from within it as well) is that the preliminary data from the Keck array, which is a similar instrument to BICEP but with higher sensitivity, appear to show no anomaly in that region. I think this is a somewhat dangerous argument, because the Keck data also don't seem to be quite so high in the region of the crucial first four bandpowers! In any case, the "official" word from BICEP is that any such speculation on the basis of Keck is to be discouraged, because the Keck data is still very preliminary and has not been properly checked.<br />
<br />
<h4 style="text-align: left;">
Verdict</h4>
</div>
<div>
I'm a little bit worried about the various issues raised here, though overall I would say the odds are in favour of the B-mode detection being secure (this is a different issue to whether this detected signal is due to gravitational waves! More on that in the next post). I would not, however, put those odds at anywhere near 1 in 300,000,000,000 against there being an error, which is the headline significance claimed for the detection of a non-zero tensor-to-scalar ratio ($7\sigma$). If I were forced to quantify my belief, I would say something more like 1 or 2 in 100. That's not <i>particularly</i> secure, but luckily there are follow-up experiments, such as Keck and Planck itself, which should be able to reassure us on that score soon.</div>
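For anyone wanting to translate between sigmas and odds, the conversion is just a Gaussian tail probability (a quick illustrative calculation; the headline 1 in 300,000,000,000 figure is of the same order as the two-sided $7\sigma$ tail):

```python
from math import erfc, sqrt

def sigma_to_odds(z):
    """Two-sided Gaussian tail probability for a z-sigma result,
    returned as (p, 1/p)."""
    p = erfc(z / sqrt(2.0))
    return p, 1.0 / p

# 2 sigma is roughly 1 in 20; 3 sigma roughly 1 in 370;
# 7 sigma is in the hundreds-of-billions-to-one range.
for z in (2.0, 3.0, 7.0):
    p, odds = sigma_to_odds(z)
    print(f"{z:.0f} sigma: p = {p:.2e}  (about 1 in {odds:,.0f})")
```

Of course, these odds quantify only the chance of a statistical fluke under the stated model; they say nothing about the probability of an unmodelled systematic, which is the point being made above.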
<div>
<br /></div>
<div>
A final point: seeing the preliminary Keck data shown in a figure in the paper suggests to me that perhaps the final analysis of Keck data will now not be done "blind". I hope that's not the case, it would be very disturbing indeed if it were. </div>
</div>
Sesh Nadathurhttp://www.blogger.com/profile/07155102110438904961noreply@blogger.com3tag:blogger.com,1999:blog-6976071487922527618.post-58264605681117930052014-03-17T18:42:00.000+01:002014-03-17T19:35:39.106+01:00First Direct Evidence for Cosmic Inflation<div dir="ltr" style="text-align: left;" trbidi="on">
That was the title of the BICEP2 presentation today. Gives you some idea about the magnitude of the result, if it holds up: it really is astonishingly exciting.<br />
<br />
Unfortunately it was so exciting that we in Helsinki couldn't even access the Harvard server and so couldn't watch any of the webcast at all. It seems the same was true for most other cosmologists around the world. So my comments here are based purely on a preliminary reading of the paper itself, and a distillation of the conversations occurring via Facebook and the like.<br />
<br />
Firstly, the headline results: the BICEP team claim to have detected a B-mode signal in the CMB at exceedingly high statistical significance. Their headline claim is<br />
<br />
<div style="text-align: center;">
$r=0.2^{+0.07}_{-0.05}$, with $r=0$ disfavoured at $7.0\sigma$</div>
<div style="text-align: center;">
<br /></div>
<div style="text-align: left;">
That is frankly astonishing. Here's the likelihood plot:<br />
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-JSbnfmHYf3M/UycluPxfIsI/AAAAAAAAFd0/-S6kzzglGEM/s1600/bicep-r1.jpeg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://4.bp.blogspot.com/-JSbnfmHYf3M/UycluPxfIsI/AAAAAAAAFd0/-S6kzzglGEM/s1600/bicep-r1.jpeg" height="392" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">BICEP2 constraint on the tensor-to-scalar ratio r. </td></tr>
</tbody></table>
<br />
(All figures are taken from the paper available <a href="http://bicepkeck.org/b2_respap_arxiv_v1.pdf">here</a>.)<br />
<div>
<br /></div>
<div>
The actual measurement of the BB power spectrum looks like this:</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-5wxbEpCVK6s/Uycmfo8DNgI/AAAAAAAAFd8/XWStLpg48a4/s1600/b2_and_previous_limits.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-5wxbEpCVK6s/Uycmfo8DNgI/AAAAAAAAFd8/XWStLpg48a4/s1600/b2_and_previous_limits.png" height="316" width="400" /></a></div>
<div>
<br /></div>
<div>
The black points are the new measurements, the other coloured points are the previously available best upper limits. The solid red curve is the theoretical expectation from lensing (the relatively boring contribution to BB), the dashed red curve that dies off is the theoretical expectation from a model with inflationary gravitational waves and $r=0.2$, and the other dashed red curve (were they short of colours?!) is the total.</div>
<div>
<br /></div>
<div>
They've also done a pretty good job of eliminating other foreground sources (dust, synchrotron emission etc.) as possible explanations for the signal seen, which means it is much more likely that the signal is actually due to primordial gravitational waves from inflation. In doing this, it helps that the signal they see is actually as large as it is, since there's less chance of confusing it with these foregrounds (which are much smaller). [<b>Update:</b><i style="font-weight: bold;"> </i><i>I'm not an expert here, apparently some others were less convinced about the removal of foregrounds. Not sure why though – I'd have thought other systematic errors were far more likely to be a problem than foregrounds.</i>]</div>
<div>
<br /></div>
<div>
So far so good. In fact — and I really can't stress this enough — this is an extraordinary, wonderful, unexpected result and huge congratulations to the BICEP team for achieving it. It will mean a lot of happy theorists as well, because we finally have something new to try to explain!</div>
<div>
<br /></div>
<div>
However, it is very important that as a community we remain skeptical, particularly so when - as here - the result is one that we would so desperately love to be true. Given that, I'm going to list a series of things that are potentially worrying/things to think about/things I don't understand. (Some of these are not things I noticed myself, but were points raised by Dave Spergel, Scott Dodelson and other experts at the ongoing <a href="https://www.facebook.com/groups/574544055974988/">live discussion on Facebook</a>.) Doubtless these are questions the BICEP team will have thought about themselves; perhaps they already have all the answers and will tell us about them in due course — as I said, no one I know was able to watch the webcast live.</div>
<div>
<br /></div>
<div>
<ul style="text-align: left;">
<li>In the BB-spectrum plot above, the data seem to be showing a significant excess above expectations for multipoles about $\ell\sim200-350$. What's going on with that?</li>
<li>This is particularly noticeable in another figure (Fig. 9) in the paper:</li>
</ul>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-qn1aK-5zne4/UycyQX8t2zI/AAAAAAAAFec/FXIZxs-2qhE/s1600/bicep_BB.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-qn1aK-5zne4/UycyQX8t2zI/AAAAAAAAFec/FXIZxs-2qhE/s400/bicep_BB.png" /></a></div>
<ul style="text-align: left;">
<li>From the above figure, preliminary results of the cross-correlation with Keck don't show the excess at high-$\ell$ (a reason to believe it might go away), but the same cross-correlation also shows less power at lower $\ell$ (which is a bit confusing).</li>
<li>At lower values of $\ell$ the EE power spectrum also shows an excess (Fig. 7):</li>
</ul>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-ZC_f1MkRGvk/UycyxaMMupI/AAAAAAAAFek/QRFSCM1mm1w/s1600/bicep_EE.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-ZC_f1MkRGvk/UycyxaMMupI/AAAAAAAAFek/QRFSCM1mm1w/s400/bicep_EE.png" /></a></div>
<ul style="text-align: left;">
<li>All the above points put together suggest that perhaps there is some leakage in the polarisation maps coming from the temperature anisotropy — a large part of the analysis work is concerned with accounting for and correcting for any such leakage, of course, the question is to what extent independent experts will be satisfied that these methods worked.</li>
<li>Although the headline figure is $r=0.2$, they rather confusingly later say that when the best possible dust model is used for foreground subtraction, this becomes $r=0.16^{+0.06}_{-0.05}$. But if this is the best possible dust model, why is this not the quoted headline number? Is this related somehow to the power excess at $\ell\sim200-350$?</li>
<li>If $r$ is as large as they have measured why was it not seen by Planck? Actually this is a fairly complicated question: the point being that if the tensor amplitude is so large, it should make a non-negligible contribution to the temperature power spectrum as well, which would have affected Planck's results. Planck had a constraint $r<0.11$, but this specifically assumed that the primordial power spectrum had a power-law form with no running (sorry about the technical jargon, unfortunately not enough time to explain here today). So BICEP suggest one way around this tension is to simply introduce a running, but it seems (but this bit was not entirely clear to me from the paper) that you need a fairly large value of the running for this explanation to fly. And if you've got a large running then you have to worry about why not a running of the running, a running of the running of the running and so on <i>ad infinitum</i> - in fact how do we know that the power-law expansion form of the $P(k)$ is the correct way to go at all?</li>
<li>Besides, are there viable inflationary models that predict both large $r$ as well as large running (or non-power-law form of the primordial power)? Given the vast array of inflationary models, the answer to this question is almost certainly yes, but people may consider some other explanations more worthwhile ...</li>
</ul>
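For reference (my summary of the standard convention, not something spelled out in the BICEP paper), the "running" being discussed enters the usual parameterisation of the primordial power spectrum as

$$\mathcal{P}(k) = A_s \left(\frac{k}{k_*}\right)^{\,n_s - 1 + \frac{1}{2}\alpha_s \ln(k/k_*)},$$

where $n_s$ is the spectral index, $\alpha_s \equiv \mathrm{d}n_s/\mathrm{d}\ln k$ is the running, and $k_*$ is an arbitrary pivot scale; the Planck constraint $r<0.11$ mentioned above was derived assuming $\alpha_s = 0$.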
<div>
Phew. There are probably lots of other things to think about, but that's about all I can manage today. It's been a very exciting day!</div>
</div>
</div>
Sesh Nadathurhttp://www.blogger.com/profile/07155102110438904961noreply@blogger.com4tag:blogger.com,1999:blog-6976071487922527618.post-3784638015434761562014-03-15T15:38:00.000+01:002014-03-17T15:16:32.539+01:00B-modes, rumours, and inflation<div dir="ltr" style="text-align: left;" trbidi="on">
<i><b>Update:</b> The announcement will definitely be about a major discovery by BICEP2, meaning it can only really be about a B-mode signal. You can follow the webcast at <a href="http://www.cfa.harvard.edu/news/news_conferences.html"> http://www.cfa.harvard.edu/news/news_conferences.html</a>, starting at 10:45 am EDT (14:45 GMT) for scientists, or 12:00 pm EDT (16:00 GMT) for the general public and news organisations.</i><br />
<br />
The big news in cosmology circles at the minute is the rumour that the "major discovery" due to be announced <a href="http://spaceref.com/news/viewpr.html?pid=42751">at a press conference</a> on Monday the 17th is in fact a claimed detection of the B-mode signal in the CMB by the <a href="http://www.cfa.harvard.edu/CMB/bicep2/">BICEP2 experiment</a>.<br />
<br />
Now, I'm not particularly well placed to comment on this rumour, since all the information I have comes at second- or third-hand, via people who have heard something from someone, people who <i>think</i> they heard something from someone, or people who are simply unashamedly speculating. (Perhaps this is a function of being on the wrong side of the Atlantic: although the BICEP2 experiment is based at the South Pole, the only non-North-American university participating in the <a href="http://www.cfa.harvard.edu/CMB/bicep2/collaboration.html">collaboration</a> is Cardiff University in Wales. Even worse, I'm not on Twitter.) In any case, by reading <a href="http://www.theguardian.com/science/2014/mar/14/gravitational-waves-big-bang-universe-bicep?CMP=twt_gu">this</a>, <a href="http://resonaances.blogspot.co.uk/2014/03/plot-for-weekend-flexing-biceps.html">this</a>, <a href="http://trenchesofdiscovery.blogspot.de/2014/03/a-major-discovery-bicep2-and-b-modes.html">this</a> and <a href="http://cosmobruce.wordpress.com/2014/03/14/108/">this</a>, you will be starting with essentially the same information as me.<br />
<br />
But having got that health warning out of the way, let's pretend that the rumours are entirely accurate and that on Monday we will have an announcement of a detection of a significant B-mode signal. What would this mean for cosmology?<br />
<br />
Firstly, the B-mode signal refers to a particular polarisation of the CMB (for a short and somewhat technical introduction, see <a href="http://background.uchicago.edu/~whu/polar/webversion/node8.html">here</a>; for a slightly longer one, see <a href="http://cosmology.berkeley.edu/~yuki/CMBpol/CMBpol.htm">here</a>). This polarisation can arise in various ways, one of which is the polarisation induced in the CMB by gravitational lensing, as the CMB photons travel through the inhomogeneous Universe on their way from the last scattering surface to us. There have been a few experiments, such as <a href="http://arxiv.org/abs/1403.2369">POLARBEAR</a>, which have already claimed a detection of this lensing contribution to the B-mode signal (though in this particular case after skim-reading the paper I was a little underwhelmed by the claim).<br />
<br />
Now, detecting a lensing B-mode would be cool, but significantly less exciting than detecting a <i>primordial</i> B-mode. This is because whereas the lensing signal comes from late-time physics that is quite well understood, a primordial signal would be evidence of primordial tensor fluctuations or primordial gravitational waves. And this is cool because inflation provides a possible way to produce primordial gravitational waves – therefore their detection could be a major piece of evidence in favour of inflation.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-XcKxc0xyj5U/UyRW0ZZ8D-I/AAAAAAAAFdY/eRa5ckfwL-Y/s1600/HDplate1b.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://1.bp.blogspot.com/-XcKxc0xyj5U/UyRW0ZZ8D-I/AAAAAAAAFdY/eRa5ckfwL-Y/s1600/HDplate1b.jpg" height="306" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">The contributions to the B-mode signal coming from gravitational waves and lensing are differentiated on the basis of the multipoles (essentially the length scale) at which they are important. Figure from Hu and Dodelson 2002.</td></tr>
</tbody></table>
<br />
People often say that detection of this tensor signal would be a "smoking gun" for inflation; something that would be very welcome, because although inflation has proved to be an attractive and fertile paradigm for cosmology, there is still a bit of a lack of direct, incontrovertible evidence in favour of it. Coupled with certain unresolved theoretical issues it faces, this lack of a smoking gun meant that <a href="http://arxiv.org/abs/1312.7619">arguments</a> <a href="http://arxiv.org/abs/1402.0526">for</a> or <a href="http://arxiv.org/abs/1402.6980">against</a> inflation were threatening to degenerate into what you might call "multiverse territory", definitely an unhealthy place to be.<br />
<br />
It may be worth introducing a note of caution about this "smoking gun" though. Although inflation is a possible source of primordial gravitational waves, it is not the only one. Artefacts of possible phase transitions in the early universe, known as cosmic defects, can also produce a spectrum of gravitational waves – and what's more, this spectrum can be <a href="http://arxiv.org/abs/1212.5458v2">exactly scale-invariant</a>, just like the one from inflation. I don't know a huge amount about this field, so I am not sure whether the amplitude of the perturbations which could be produced by these cosmic defects could be sufficiently large, nor – if it is – whether there are any other features which could help distinguish this scenario from inflation if the rumours turn out to be true. Perhaps better-informed people could comment below.<br />
<br />
Suppose we put that issue to one side though, and assume that not only has a significant tensor signal been detected, but that we have also been able to prove that it could not be due to anything other than inflation. The rumour is that the detection corresponds to a value of the tensor-to-scalar ratio <i>r</i> of about 0.2. What are the implications of this for the different inflation models?<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-QTbm4OzAfjk/UyRbl8KeetI/AAAAAAAAFdk/eFF29lxMQo4/s1600/planckinflation.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://4.bp.blogspot.com/-QTbm4OzAfjk/UyRbl8KeetI/AAAAAAAAFdk/eFF29lxMQo4/s1600/planckinflation.png" height="310" width="600" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Planck limits on various inflationary models.</td></tr>
</tbody></table>
Not all models of inflation result in tensor modes large enough to be observed in the CMB, so an observation of a large <i>r</i> would rule out a large class of these models. Generally speaking, the understanding is that models in which the inflaton field $\phi$ takes large values (i.e., values larger than the Planck mass $M_P$) are the ones which could produce observably large <i>r</i>, whereas the so-called "small-field models" where $\phi\ll M_P$ usually predict tiny values of <i>r</i> which could never be observed. (A note for non-experts: irrespective of the field value, the <i>energy scale</i> in both small-field and large-field models is always much less than the Planck scale.) Therefore, at a stroke, all small-field inflation models would be ruled out. Many people regard these as the better-motivated models of inflation, with in some respects fewer theoretical issues than the large-field models, so this would be quite significant.<br />
<br />
There are two small caveats to this statement: firstly, it isn't strictly necessary for $\phi$ itself to be larger than $M_P$ to generate a large <i>r</i>, only that the change in $\phi$ be large. So models in which the inflaton field winds around a cylinder, in effect travelling a large distance without actually getting anywhere, can still give large <i>r</i> (hat-tip to Shaun for that phrasing). Also, it is not even strictly true that the change in $\phi$ must be large: if some other rather specific conditions (including the temporary breakdown of the slow-roll approximation) are met, this requirement can be evaded and even small-field models can produce enough gravitational waves. This was something pointed out by <a href="http://arxiv.org/abs/arXiv:1110.5389">a paper</a> I wrote with Shaun Hotchkiss and Anupam Mazumdar in 2011, though other people had similar ideas at about the same time. Such rather forced small-field models would have other specific features though, so could be distinguished by other measurements.<br />
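To put a rough number on how large the change in the field needs to be: the standard slow-roll relation between the tensor-to-scalar ratio and the field excursion (the Lyth bound) isn't written out in this post, but a minimal numerical sketch of it is easy to give. The choice of 55 e-folds below is an illustrative assumption, not a measured quantity.

```python
import math

def lyth_excursion(r, n_efolds):
    """Lower limit on the inflaton excursion, in units of the Planck
    mass M_P, from the Lyth bound: Delta_phi / M_P >~ sqrt(r/8) * N,
    assuming r stays roughly constant over the N observable e-folds."""
    return math.sqrt(r / 8.0) * n_efolds

# The rumoured BICEP2 value r ~ 0.2 implies a super-Planckian excursion,
# while typical small-field values of r keep Delta_phi well below M_P.
for r in (0.2, 0.01, 1e-4):
    print(f"r = {r:g}:  Delta_phi >~ {lyth_excursion(r, 55):.2f} M_P")
```

For $r \approx 0.2$ this gives a field excursion of roughly $9\,M_P$, which is why a confirmed detection at that level would point towards large-field models, modulo the caveats above.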
<br />
One of the more interesting consequences of a detection of large <i>r</i> (aside from the earth-shattering importance of a confirmation of inflation itself) would be that the Higgs inflation model – which has been steadily gaining in popularity given the results from the LHC and Planck, and has begun to be regarded by many as the most plausible mechanism by which inflation could have occurred – would be disfavoured. In the plot above, the Higgs inflation prediction is shown by the orange points at the bottom centre of the figure. So a BICEP2 detection of $r\sim0.2$ as suggested by the rumours would be pretty serious for this model.<br />
<br />
On the other hand, a BICEP2 detection of $r\sim0.2$ would also <strike>strongly contradict</strike> appear to be at odds with the results from the Planck and WMAP satellites. Which probably goes to show that there is not much point believing every rumour ...<br />
<br />
We will find out on Monday!</div>
Sesh Nadathurhttp://www.blogger.com/profile/07155102110438904961noreply@blogger.com6tag:blogger.com,1999:blog-6976071487922527618.post-58223451143454189162014-02-03T20:13:00.000+01:002014-02-05T17:50:43.129+01:00Does the multiverse explain the cosmological constant?<div dir="ltr" style="text-align: left;" trbidi="on">
At the end of the last <a href="http://blankonthemap.blogspot.com/2014/01/is-falsifiability-scientific-idea-due.html">post on falsifiability</a>, I mentioned the possibility that the multiverse hypothesis might provide an explanation for the famous <i>cosmological constant problem</i>. Today I'm going to try to elaborate a little on that argument and why I find it unconvincing.<br>
<br>
Limitations of space and time mean that I cannot possibly start this post as I would like to, with an explanation of what the cosmological constant problem <i>is</i>, and why it is so hard to resolve it. Readers who would like to learn a bit more about this could try reading <a href="http://preposterousuniverse.com/writings/skytel-mar05.pdf">this</a>, <a href="http://profmattstrassler.com/articles-and-posts/particle-physics-basics/quantum-fluctuations-and-their-energy/">this</a>, <a href="http://arxiv.org/pdf/astro-ph/0004075v2.pdf">this</a> or <a href="http://www.itp.kit.edu/~schreck/general_relativity_seminar/The_cosmological_constant_problem.pdf">this</a> (arranged in roughly descending order of accessibility to the non-expert). For my purposes I will have to simply summarise the problem by saying that our models of the history of the Universe contain a parameter $\rho_\Lambda$ – which is related to the vacuum energy density and sometimes called the dark energy density – whose expected value, according to our current understanding of quantum field theory, should be <i>at least </i>$10^{-64}$ (in units of the Planck scale energy) and quite possibly as large as 1, but whose actual value, deduced from our reconstruction of the history of the Universe, is approximately $1.5\times10^{-123}$. (As ever with this blog, the mathematics may not display correctly in RSS readers, so you might have to click through.)<br>
<br>
This enormous discrepancy between theory and observation, of somewhere between 60 and 120 orders of magnitude, has for a long time been one of the outstanding problems – not to say embarrassments – of high energy theory. Many very smart people have tried many ingenious ways of solving it, but it turns out to be a very hard problem indeed. Sections 2 and 3 of <a href="http://arxiv.org/abs/arXiv:0708.4231">this review</a> by Raphael Bousso provide some sense of the various attempts that have been made at explanation and how they have failed (though this review is unfortunately also at a fairly technical level).<br>
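The "60 to 120 orders of magnitude" figure follows directly from the numbers quoted above; a two-line check (using the same round values, so the endpoints come out at roughly 59 and 123):

```python
import math

rho_obs = 1.5e-123   # observed value, in Planck units, as quoted above
rho_min = 1e-64      # minimum quantum field theory expectation quoted above
rho_max = 1.0        # naive expectation if nothing suppresses Planck-scale contributions

gap_low = math.log10(rho_min / rho_obs)    # smallest possible discrepancy, ~59 orders
gap_high = math.log10(rho_max / rho_obs)   # largest possible discrepancy, ~123 orders
print(f"discrepancy: {gap_low:.0f} to {gap_high:.0f} orders of magnitude")
```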
<br>
This is where the multiverse and the anthropic argument come in. In this <a href="http://prl.aps.org/abstract/PRL/v59/i22/p2607_1">very famous paper</a> back in 1987, Steven Weinberg used the hypothesis of a multiverse consisting of causally separated universes which have different values of $\rho_\Lambda$ to explain why we might be living in a universe with a very small $\rho_\Lambda$, and to predict that if this were true, $\rho_\Lambda$ in our universe would nevertheless be large enough to measure, with a value a few times larger than the energy density of matter, $\rho_m$. This was particularly important because the value of $\rho_\Lambda$ had not at that time been conclusively measured, and many theorists were working under the assumption that the cosmological constant problem would be solved by some theoretical advance which would demonstrate why it had to be exactly zero, rather than some exceedingly small but non-zero number.<br>
<br>
Weinberg's prediction is generally regarded as having been successful. In 1998, <a href="http://blankonthemap.blogspot.fi/2012/11/the-structure-of-scientific-revolution.html">observations of distant supernovae</a> indicated that $\rho_\Lambda$ was in fact non-zero, and in the subsequent decade-and-a-half increasingly precise cosmological measurements, especially of the CMB, have confirmed its value to be a little more than twice that of $\rho_m$.<br>
<br>
This has been viewed as strong evidence in favour of the multiverse hypothesis in general and in particular for string theory, which provides a potential mechanism for the realisation of this multiverse. Indeed in the absence of any other observational evidence for the multiverse (perhaps even in principle), and the ongoing lack of experimental evidence for other predictions of string theory, Weinberg's anthropic prediction of the value of the cosmological constant is often regarded as the most important reason for believing that these theories are part of the correct description of the world. For instance, to provide just three arbitrarily chosen examples, Sean Carroll argues this <a href="http://www.edge.org/response-detail/25322">here</a>, Max Tegmark <a href="http://www.math.columbia.edu/~woit/wordpress/?p=6551&cpage=2#comment-204369">here</a>, and Raphael Bousso in the review linked to above.<br>
<br>
I have a problem with this argument, and it is not a purely philosophical one. (The philosophical objection is loosely the one made <a href="http://rationallyspeaking.blogspot.com/2014/01/sean-carroll-edge-and-falsifiability.html">here</a>.) Instead I disagree that Weinberg's argument still correctly predicts the value of $\rho_\Lambda$. This is partly because Weinberg's argument, though brilliant, relied upon a few assumptions about the theory in which the multiverse was to be realised, and theory has subsequently developed not to support these assumptions but to negate them. And it is partly because, even given these assumptions, the argument gives the wrong value when applied to cosmological observations from 2014 rather than 1987. Both theory and observation have moved away from the anthropic multiverse.<br>
</div><a href="http://blankonthemap.blogspot.com/2014/02/does-multiverse-explain-cosmological.html#more">Read more »</a>Sesh Nadathurhttp://www.blogger.com/profile/07155102110438904961noreply@blogger.com27tag:blogger.com,1999:blog-6976071487922527618.post-72314082771445206462014-01-22T21:19:00.000+01:002014-01-23T11:55:23.523+01:00Is falsifiability a scientific idea due for retirement?<div dir="ltr" style="text-align: left;" trbidi="on">
Sean Carroll <a href="http://www.edge.org/response-detail/25322">argues</a> that it is.<br>
<br>
He characterises the belief that "theories should be falsifiable" as a "fortune-cookie-sized motto"; it's a position adopted only by "armchair theorizers" and "amateur philosophers", and people who have no idea how science really works. He thinks we need to move beyond the idea that scientific theories need to be falsifiable; this appears to be because he wants to argue that string theory and the idea of the multiverse are not falsifiable ideas, but are still scientific.<br>
<div>
<br></div>
<div>
This position is not just wrong, it's ludicrous. </div>
<div>
<br></div>
<div>
What's more, I think deep down Sean – who is normally a clear, precise thinker – realises that it is ludicrous. Midway through his essay, therefore, he flaps around trying to square the circle and get out of the corner he has painted himself into: a scientific theory must, apparently, still be "judged on its ability to account for the data", and it's still true that "nature is the ultimate guide". But somehow it isn't necessary for a theory to be falsifiable to be scientific.</div>
<div>
<br></div>
<div>
Now, I'm not a philosopher by training. Therefore what follows could certainly be dismissed as "amateur philosophising". I'm almost certain that what I say has been said before, and said better, by other people in other places. Nevertheless, as a practising scientist with an argumentative tendency, I'm going to have to rise to the challenge of defending the idea of falsifiability as the essence of science. Let's start by dismantling the alternatives.<br>
</div></div><a href="http://blankonthemap.blogspot.com/2014/01/is-falsifiability-scientific-idea-due.html#more">Read more »</a>Sesh Nadathurhttp://www.blogger.com/profile/07155102110438904961noreply@blogger.com17tag:blogger.com,1999:blog-6976071487922527618.post-83768952678180193822014-01-15T19:54:00.000+01:002014-01-15T19:54:26.935+01:00A new start to blogging in 2014<div dir="ltr" style="text-align: left;" trbidi="on">
Well, <i>Blank On The Map</i> has been sadly silent for rather longer than I intended.<br />
<br />
There were several reasons for this. I mentioned one of them in the <a href="http://blankonthemap.blogspot.fi/2013/09/a-long-summer.html">last post</a> on here a few months ago – the need to put my nose to the postdoc research grindstone in order to try to avoid being scooped. As it turns out, we were scooped after all, but there is still more to be said on the matter and in any case the result we were gunning for turned out to be not quite so exciting as we were hoping. More news on that in some future posts perhaps.<br />
<br />
Another reason for radio silence was that quite a lot of my work over the last couple of months turned out to involve more intensive writing – including a lot of time spent worrying over the careful choice of words, precise phrasing and tone of my written output – than I'd have liked; at times almost more of it than actual <i>research</i>. This was mostly because of a recent <a href="http://arxiv.org/abs/1310.2791">paper</a> I wrote, which led to a bit of a <a href="http://arxiv.org/abs/1310.5067">bad-tempered</a> <a href="http://arxiv.org/abs/1310.6911">spat</a> ... anyway, the upshot was that I did not feel much in the mood for more writing on here.<br />
<br />
It also turns out that any kind of a break from blogging is sort of self-sustaining. When you haven't had much time for writing, the simple fact of its scarcity makes you start to place unreasonably high expectations on your output: is this topic <i>really</i> more interesting than that other topic I didn't have time to write about last week?<br />
<br />
Ah well. I'll start the new year with this simple post, which also serves as a way of mentioning that I've moved universities and countries: I now live in Helsinki, and work at the <a href="http://www.helsinki.fi/university/">University of Helsinki</a> and the <a href="http://www.hip.fi/">Helsinki Institute of Physics</a>. As a result, I now have a <a href="http://research.hip.fi/user/nadathur/">new webpage</a>! (Indeed, for complicated reasons, I actually have <a href="http://www.helsinki.fi/~senadath/index.html">a second one</a> as well, but it's got the same content.)<br />
<br />
When I arrived here in October, Helsinki looked like this:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-XntPta2oJtM/UtbXAaV2amI/AAAAAAAAFGs/npx66abMy_A/s1600/P1020641.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="300" src="http://2.bp.blogspot.com/-XntPta2oJtM/UtbXAaV2amI/AAAAAAAAFGs/npx66abMy_A/s400/P1020641.JPG" width="400" /></a></div>
<br />
Now it looks like this:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-3fKT2naWVN0/UtbY2jsIriI/AAAAAAAAFG4/Dy1sGp1PFFM/s1600/Helsinki_beach.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="300" src="http://3.bp.blogspot.com/-3fKT2naWVN0/UtbY2jsIriI/AAAAAAAAFG4/Dy1sGp1PFFM/s400/Helsinki_beach.jpg" width="400" /></a></div>
<br />
The next post of this year will deal with more interesting topics!</div>
Sesh Nadathurhttp://www.blogger.com/profile/07155102110438904961noreply@blogger.com0tag:blogger.com,1999:blog-6976071487922527618.post-50780364299490803502013-09-02T10:13:00.002+02:002013-09-02T11:21:10.359+02:00A long summer<div dir="ltr" style="text-align: left;" trbidi="on">
Indeed it has been a long summer, though the good weather appears to be drawing to a close. Over the last few months, I have attended three cosmology conferences or workshops and also been on a two-week holiday in the Dolomites, where I occupied my time by doing things like this:<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-w02Ntqp4vhs/UiRBHSmPRRI/AAAAAAAAFE8/FCs0UGcGeOI/s1600/P036.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="" border="0" height="400" src="http://2.bp.blogspot.com/-w02Ntqp4vhs/UiRBHSmPRRI/AAAAAAAAFE8/FCs0UGcGeOI/s400/P036.jpg" title="La Guglia Edmondo de Amicis" width="300" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">La Guglia Edmondo de Amicis, near the Misurina lake.</td></tr>
</tbody></table>
and enjoying views like this:<br />
<div>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-40IReWbIvZQ/UiRD4zCv8tI/AAAAAAAAFFI/cRzHBQU4zpc/s1600/P067.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="" border="0" height="450" src="http://4.bp.blogspot.com/-40IReWbIvZQ/UiRD4zCv8tI/AAAAAAAAFFI/cRzHBQU4zpc/s640/P067.jpg" title="Cima Piccola di Lavaredo, view from Dibona route on Cima Grande" width="600" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Cima Piccola di Lavaredo, from the Dibona route on Cima Grande.</td></tr>
</tbody></table>
<div>
This explains the lack of activity here in recent times.<br />
<br />
Returning home a couple of weeks ago, I was full of ideas for several exciting blog posts, including a summary of all the hottest topics in cosmology that were discussed at the conferences I attended, and perhaps an account of my <strike>argument</strike> stimulating discussion with Uros Seljak. However, it has come to my attention that there are other physicists in other parts of the world who happen to be working on the exact same topic that my collaborators and I have been investigating for the last few months. The rule in the research world is of course "publish or perish" (though some wit has suggested that "publish <i>and</i> perish" is more accurate) – so most of my time now will be spent on avoiding being scooped, and the current hiatus on this blog will continue for a short period. Looking on the bright side, once normal service resumes, I hope to have some interesting science results to describe!<br />
<br />
In the meantime, I can only direct you to other blogs for your entertainment and enlightenment. Those of you who like physics discussions and have not already read Sean Carroll's blog (a vanishingly small number perhaps?) might enjoy <a href="http://www.preposterousuniverse.com/blog/2013/08/22/the-higgs-boson-vs-boltzmann-brains/">this post</a> about Boltzmann brains. I personally also enjoyed <a href="http://www.preposterousuniverse.com/blog/2013/08/22/mind-and-cosmos/">this argument</a> against philosopher Tom Nagel.<br />
<br />
For people interested in climbing news, I can report that my friends on the Oxford Greenland Expedition that I mentioned once <a href="http://blankonthemap.blogspot.de/2013/02/oxford-greenland-expedition.html">here</a> have returned safely after a successful series of very impressive climbs. I found their regular reports of their activities in the <a href="http://oxfordgreenlandexpedition.com/expedition-diary/">expedition diary</a> well-written and rather thrilling – not just the climbing, but also the account of the journey to Greenland by sea in the face of seemingly never-ending gales! Well worth a read, as is <a href="http://jacobclimbsthings.blogspot.co.uk/2013/08/upon-arriving-in-greenland-ian-and-i.html">this</a>.</div>
</div>
</div>
Sesh Nadathurhttp://www.blogger.com/profile/07155102110438904961noreply@blogger.com1tag:blogger.com,1999:blog-6976071487922527618.post-13210843161542270982013-07-11T10:48:00.001+02:002013-07-11T12:00:45.863+02:00Quasars, homogeneity and Einstein<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="p1">
<i>[A little note: This post, like many others on this blog, contains a few mathematical symbols which are displayed using MathJax. If you are reading this using an RSS reader such as Feedly and you see a lot of $ signs floating around, you may need to click through to the blog to see the proper symbols.]</i></div>
<div class="p1">
<i><br />
</i></div>
<div class="p1">
People following the reporting of physics in the popular press might remember having come across a paper earlier this year that claimed to have detected the "largest structure in the Universe" in the distribution of <a href="http://en.wikipedia.org/wiki/Quasar">quasars</a>, that "challenged the Cosmological Principle". This was work done by Roger Clowes of the University of Central Lancashire and collaborators, and their paper was published in the <i><a href="http://mnras.oxfordjournals.org/content/429/4/2910">Monthly Notices of the Royal Astronomical Society</a> </i>back in March (though it was available online from late last year). </div>
<div class="p1">
<br /></div>
<div class="p1">
The reason I suspect people might have come across it is that it was accompanied by a pretty extraordinary amount of publicity, starting from <a href="http://www.ras.org.uk/news-and-press/224-news-2013/2212-astronomers-discover-the-largest-structure-in-the-universe">this press release</a> on the Royal Astronomical Society website. This was then taken up by <a href="http://www.reuters.com/article/2013/01/12/us-space-quasars-idUSBRE90B01S20130112">Reuters</a>, and featured on various popular science websites and news outlets, including <i><a href="http://www.newscientist.com/article/dn23074-largest-structure-challenges-einsteins-smooth-cosmos.html#.Ud54fD6FCWt">New Scientist</a>, </i><a href="http://www.theatlantic.com/technology/archive/2013/01/the-largest-structure-ever-observed-in-the-universe/267161/" style="font-style: italic;">The Atlantic</a><i>, </i><a href="http://news.nationalgeographic.co.uk/news/2013/01/130111-quasar-biggest-thing-universe-science-space-evolution/" style="font-style: italic;">National Geographic</a>, <a href="http://www.space.com/19220-universe-largest-structure-discovered.html">Space.com</a>, <a href="http://www.dailygalaxy.com/my_weblog/2013/01/the-largest-structure-universe-discovered-quasar-group-4-billion-light-years-wide-challenges-current.html">The Daily Galaxy</a>, <a href="http://phys.org/news/2013-01-astronomers-largest-universe.html">Phys.org</a>, <a href="http://gizmodo.com/5976024/the-new-biggest-structure-in-the-universe-is-too-large-to-comprehend">Gizmodo</a>, and many more. The structure they claimed to have found even has its own <a href="https://en.wikipedia.org/wiki/Huge-LQG">Wikipedia entry</a>.</div>
<div>
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-LMZqSEaCkrk/Ud1hD3fjd8I/AAAAAAAAFDc/KvVwJLz3gbQ/s1600/quasar.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="236" src="http://3.bp.blogspot.com/-LMZqSEaCkrk/Ud1hD3fjd8I/AAAAAAAAFDc/KvVwJLz3gbQ/s400/quasar.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Obligatory artist's impression of a quasar.</td></tr>
</tbody></table>
<div>
<br /></div>
<div class="p1">
One thing that you notice in a lot of these reports is the statement that the discovery of this structure violates Einstein's theory of gravity, which is nonsense. This is sloppy reporting, sure, but the RAS press release is also partly to blame here, since it includes a somewhat gratuitous mention of Einstein, and this is exactly the kind of thing that non-expert journalists are likely to pick up on. Mentioning Einstein probably helps generate more traffic after all, which is why I've put him in the title as well.</div>
<div class="p1">
<br /></div>
<div class="p1">
But aside from the name-dropping, what about the main point about the violation of the cosmological principle? As a quick reminder, the cosmological principle is sometimes taken to be the assumption that, on large scales, the Universe is well-described as homogeneous and isotropic. </div>
<div class="p1">
<br /></div>
<div class="p1">
The question of what constitutes "large scales" is sometimes not very well-defined: we know that on the scale of the Solar System the matter distribution is very definitely not homogeneous, and we believe that on the scale of the observable Universe it is. Generally speaking, people assume that on scales larger than about $100$ Megaparsecs, homogeneity is a fair assumption. A paper by <a href="http://arxiv.org/abs/1001.0617">Yadav, Bagla and Khandai</a> from 2010 showed that if the standard $\Lambda$CDM cosmological model is correct, the scale of homogeneity <i>must be less than</i> $370$ Mpc. </div>
<div class="p1">
<br /></div>
<div class="p1">
On the other hand, this quasar structure that Clowes et al. found is absolutely enormous: over 4 billion light years, or more than 1000 Mpc, long. Does the existence of such a large structure mean that the Universe is not homogeneous, the cosmological principle is not true, and the foundation on which all of modern cosmology is based is shaky?</div>
<div class="p1">
<br /></div>
<div class="p1">
Well actually, no. </div>
<div class="p1">
<br /></div>
<div class="p1">
Unfortunately Clowes' paper is wrong, on several counts. In fact, I have recently published a paper myself (journal version <a href="http://mnras.oxfordjournals.org/content/early/2013/07/04/mnras.stt1028">here</a>, free arXiv version <a href="http://arxiv.org/abs/1306.1700">here</a>) which points out that it is wrong. And, on the principle that if I don't talk about my own work, no one else will, I'm going to try explaining some of the ideas involved here.</div>
<div class="p1">
<br /></div>
<div class="p1">
The first reason it is wrong is something that a lot of people who should know better don't seem to realise: there is no reason that structures should not exist which are larger than the homogeneity scale of $\Lambda$CDM. You may think that this doesn't make sense, because homogeneity precludes the existence of structures, so no structure can be larger than the homogeneity scale. Nevertheless, it does and they can.</div>
<div class="p1">
<br /></div>
<div class="p1">
Let me explain a little more. The point here is that the Universe is <i>not </i>homogeneous, at any scale. What is homogeneous and isotropic is simply the background model we use to describe its behaviour. In the real Universe, there are always fluctuations away from homogeneity at all scales – in fact the theory of inflation basically guarantees this, since the power spectrum of potential fluctuations is close to scale-invariant. The assumption that all cosmological theory really rests on is that these fluctuations can be treated as perturbations about a homogeneous background – so that a perturbation theory approach to cosmology is valid.</div>
<div class="p1">
<br /></div>
<div class="p1">
Given this knowledge that the Universe is never exactly homogeneous, the question of what the "homogeneity scale" actually means, and how to define it, takes on a different light. (Before you ask, yes it is still a useful concept!) One possible way to define it is as that scale above which density fluctuations $\delta$ generally become small compared to the homogeneous background density. In technical terms, this means the scale at which the two-point correlation function for the fluctuations, $\xi(r)$, (of which the power spectrum $P(k)$ is the Fourier transform) becomes less than $1$. Based on this definition, the homogeneity scale would be around $10$ Mpc.</div>
<div class="p1">
<br /></div>
<div class="p1">
It turns out that this definition, and the direct measurement of $\xi(r)$ itself, is not very good for determining whether or not the Universe is a fractal, which is a question that several researchers decided was an important one to answer a few years ago. This question can instead be answered by a different analysis, which I explained once before <a href="http://blankonthemap.blogspot.com/2012/08/the-largest-patterns-in-universe.html">here</a>: essentially, given a catalogue with the positions of many galaxies (or quasars, or whatever), draw a sphere of radius $R$ around each galaxy, count how many other galaxies lie within this sphere, and track how this number changes with $R$. The scale above which the average of this number for all galaxies starts scaling as the cube of the radius, $$N(<R)\propto R^3,$$ (within measurement error) is then the homogeneity scale (if it starts scaling as some other constant power of $R$, the Universe has a fractal nature). This is the definition of the homogeneity scale used by Yadav <i>et al</i>. and it is related to an integral of $\xi(r)$; typically measurements of the homogeneity scale using this definition come up with values of around $100-150$ Mpc.</div>
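As a rough sketch of how such a counts-in-spheres measurement works in practice, here is a toy version run on a mock Poisson catalogue (the box size, point count and radii are arbitrary choices of mine, and a real analysis must handle survey geometry far more carefully):

```python
import numpy as np

rng = np.random.default_rng(42)
box = 100.0
pts = rng.uniform(0.0, box, size=(2000, 3))   # mock homogeneous catalogue

# Only use centres far enough from the box edges that every sphere fits inside.
Rs = np.array([5.0, 8.0, 12.0])
inner = np.all((pts > Rs.max()) & (pts < box - Rs.max()), axis=1)
centres = pts[inner]

# Distance from every centre to every point; subtract 1 to drop the self-count.
d = np.linalg.norm(centres[:, None, :] - pts[None, :, :], axis=-1)
N = np.array([(d < R).sum(axis=1).mean() - 1.0 for R in Rs])

# Slope of log N(<R) vs log R: a value of ~3 signals homogeneity; any other
# constant value would indicate fractal scaling.
dim, _ = np.polyfit(np.log(Rs), np.log(N), 1)
print(dim)  # close to 3 for a homogeneous point set
```

For a clustered catalogue the slope approaches 3 only above the homogeneity scale, which is exactly what the figure below shows for the real quasar data.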
<div class="p1">
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-R7H6eFOSP1I/Ud5qODO97NI/AAAAAAAAFEM/wYyDn7UOEwQ/s1600/fractalscaling.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="227" src="http://1.bp.blogspot.com/-R7H6eFOSP1I/Ud5qODO97NI/AAAAAAAAFEM/wYyDn7UOEwQ/s640/fractalscaling.jpg" width="620" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">The figure that proves that the distribution of quasars is in fact homogeneous on the expected scales. For details, see <a href="http://arxiv.org/abs/1306.1700">arXiv:1306.1700</a>. </td></tr>
</tbody></table>
<br /></div>
<div class="p1">
To get back to the original point, neither of these definitions of the homogeneity scale makes any claim about the existence of structures that are larger than that. In fact, in the $\Lambda$CDM model, the correlation function for matter density fluctuations is expected to be small but positive out to scales larger than either of the two homogeneity scales defined above (though not as large as Yadav <i>et al</i>.'s generous upper limit). The correlation function that can actually be measured using any given population of galaxies or quasars will extend out even further. So <i>we already expect</i> correlations to exist beyond the homogeneity scale – this means that, for some definitions of what constitutes a "structure", we <i>expect</i> to see large "structures" on these scales too.<br />
<br />
The second reason that the claim by Clowes <i>et al. </i>is wrong is however less subtle. Given the particular definition of a "structure" they use, one would expect to find very large structures even if density correlations were exactly zero on <i>all</i> scales.</div>
<div class="p1">
<br /></div>
<div class="p1">
Yes, you read that right. It's worth going over how they define a "structure", just to make this absolutely clear. Around the position of each quasar in the catalogue they draw a sphere of radius $L$. If any other quasars at all happen to lie within this sphere, they are classified as part of the same "structure", which can then be extended in other directions by repeating the procedure around each of the newly added member quasars. After repeating this procedure over all $18,722$ quasars in the catalogue, the largest such group of quasars identified becomes the "largest structure in the Universe".</div>
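This grouping rule is essentially what cosmologists call a friends-of-friends algorithm. Here is a minimal, self-contained sketch of it (brute-force neighbour search on invented 2D toy points – not their actual code or catalogue):

```python
import numpy as np
from collections import deque

def friends_of_friends(pts, L):
    """Group points: any pair closer than L joins the same 'structure'."""
    n = len(pts)
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)  # all pairwise distances
    labels = np.full(n, -1)
    group = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        # Breadth-first search: keep absorbing unlabelled neighbours within L.
        queue = deque([i])
        labels[i] = group
        while queue:
            j = queue.popleft()
            for k in np.flatnonzero((d[j] < L) & (labels == -1)):
                labels[k] = group
                queue.append(k)
        group += 1
    return labels

# Toy catalogue: two tight clumps and one isolated point.
pts = np.array([[0.0, 0], [1, 0], [2, 0], [10, 0], [11, 0], [50, 0]])
labels = friends_of_friends(pts, L=1.5)
print(labels)  # [0 0 0 1 1 2]
```

Note that membership is chained: points 0 and 2 above end up in the same "structure" even though they are more than $L$ apart, which is precisely why the choice of $L$ matters so much.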
<div class="p1">
<br /></div>
<div class="p1">
It should be pretty obvious now that the radius $L$ of these spheres, though chosen rather arbitrarily, is crucial to the end result. If it is too large, all quasars in the catalogue end up classified as part of the same truly ginormous "structure", but this is not very helpful. This is known as "percolation" and the critical percolation threshold has been thoroughly studied for Poisson point sets – which are by definition random distributions of points with no correlation at all. The value of $L$ that Clowes <i>et al. </i>chose to use, for no apparent reason other than that it gave them a dramatic result, was $100$ Mpc – far too large to be justified on any theoretical grounds, but slightly lower than the critical percolation threshold would be if the quasar distribution were similar to that of a Poisson set. On the other hand, the "largest structure in the Universe" only consists of $73$ quasars out of $18,722$, so it could be entirely explained as a result of the poor definition ...</div>
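The percolation behaviour is easy to demonstrate numerically. Below is a toy sketch (my own setup, using scipy's KD-tree and graph utilities; the point numbers, box size and linking lengths are arbitrary choices): for a Poisson set, the largest friends-of-friends group abruptly swallows most of the points once $L$ passes roughly $0.87$ times the mean inter-point separation.

```python
import numpy as np
from scipy.sparse.csgraph import connected_components
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)

def largest_group_fraction(L, n=2000, box=1000.0):
    """Fraction of a Poisson point set absorbed into its largest
    friends-of-friends group, for linking length L."""
    pts = rng.uniform(0.0, box, size=(n, 3))
    tree = cKDTree(pts)
    # Sparse graph with an edge between every pair closer than L.
    adj = tree.sparse_distance_matrix(tree, max_distance=L).tocsr()
    _, labels = connected_components(adj, directed=False)
    return np.bincount(labels).max() / n

# Mean inter-point separation here is (box^3 / n)^(1/3) ~ 79, so the
# critical linking length is around 0.87 * 79 ~ 69.
for L in (40.0, 70.0, 110.0):
    print(L, largest_group_fraction(L))
```

Well below the threshold the largest group is a tiny fraction of the catalogue; well above it, nearly everything percolates into one "structure".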
<div class="p1">
<br /></div>
<div class="p1">
Now I'll spare you all the details of how to test whether, using this definition of a "structure", one would expect to find "structures" extending over more than $1000$ Mpc in length or with more than $73$ members or whatever, even in a purely random distribution of points, which are by definition homogeneous. Suffice it to say that it turns out one would. This plot shows the maximum extent of such "structures" found in $10,000$ simulations of completely uncorrelated distributions of points, compared to the maximum extent of the "structure" found in the real quasar catalogue.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-BuFQRdAwI5w/Ud5mJ1Va_NI/AAAAAAAAFD8/EcAwthds1Ks/s1600/Dmax.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="293" src="http://2.bp.blogspot.com/-BuFQRdAwI5w/Ud5mJ1Va_NI/AAAAAAAAFD8/EcAwthds1Ks/s400/Dmax.png" width="450" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">The probability distribution of extents of largest "structures" found in 10,000 random point sets for two different choices of $L$. Vertical lines show the actual values found for "structures" in the quasar catalogue. The actual values are not very unusual. Figure from <a href="http://arxiv.org/abs/1306.1700">arXiv:1306.1700</a>. </td></tr>
</tbody></table>
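The flavour of that Monte Carlo test can be conveyed by a toy sketch (my own simplified parameters, not those used in the paper): generate many uncorrelated mock catalogues, run the same grouping procedure on each, and record the extent of the largest "structure" found.

```python
import numpy as np
from scipy.sparse.csgraph import connected_components
from scipy.spatial import cKDTree, distance_matrix

rng = np.random.default_rng(7)

def max_structure_extent(n=500, box=1000.0, L=100.0):
    """End-to-end extent of the largest friends-of-friends 'structure'
    in one uncorrelated (Poisson) mock catalogue. L is chosen slightly
    below the critical percolation threshold, mimicking the real analysis."""
    pts = rng.uniform(0.0, box, size=(n, 3))
    tree = cKDTree(pts)
    adj = tree.sparse_distance_matrix(tree, max_distance=L).tocsr()
    _, labels = connected_components(adj, directed=False)
    biggest = pts[labels == np.bincount(labels).argmax()]
    return distance_matrix(biggest, biggest).max()

# Repeat over many mocks to build the null distribution of extents,
# against which the extent found in the real catalogue can be compared.
extents = [max_structure_extent() for _ in range(200)]
print(np.mean(extents))
```

Even these completely structureless mocks routinely produce chained "structures" spanning a substantial fraction of the box, which is the point of the figure above.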
<br />
To summarise then: finding a "structure" larger than the homogeneity scale does not violate the cosmological principle, because of correlations; on top of that, the "largest structure in the Universe" is actually not really a "structure" in any meaningful sense. In my professional opinion, Clowes' paper and all the hype surrounding it in the press is nothing more than that – hype. Unfortunately, this is another verification of my maxim that if a paper to do with cosmology is accompanied by a big press release, it is odds-on to turn out to be wrong.<br />
<br />
Finally, before I leave the topic, I'll make a comment about the presentation of results by Clowes <i>et al</i>. Here, for instance, is an image they presented showing their "structure", which they call the 'Huge-LQG', with a second "structure" called the 'CCLQG' towards the bottom left:<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-PTABfZuuuX0/Ud5lPCrIWdI/AAAAAAAAFDs/U-_GzJNUAzY/s1600/HugeLQG.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://2.bp.blogspot.com/-PTABfZuuuX0/Ud5lPCrIWdI/AAAAAAAAFDs/U-_GzJNUAzY/s400/HugeLQG.png" width="311" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">3D representation of the Huge-LQG and CCLQG. From <a href="http://arxiv.org/abs/1211.6256">arXiv:1211.6256</a>.</td></tr>
</tbody></table>
</div>
<div class="p1">
<br />
Looks impressive! Until you start digging a bit deeper, anyway. Firstly, they've only shown the quasars that form part of the "structure", not all the others around it. Secondly, they've drawn enormous spheres (of radius $33$ Mpc) at the position of each quasar to make it look more dramatic. In actual fact the quasars are way smaller than that. The combined effect of these two presentational choices is to make the 'Huge-LQG' look far more plausible as a structure than it really is. Here's a representation of the exact same region of space that I made myself, which rectifies both problems:<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-ISglpaE-z6g/Ud5uL4hVjmI/AAAAAAAAFEc/1QDabMIgY88/s1600/quasar_positions.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://2.bp.blogspot.com/-ISglpaE-z6g/Ud5uL4hVjmI/AAAAAAAAFEc/1QDabMIgY88/s400/quasar_positions.jpg" width="283" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Quasar positions around the "structures" claimed by Clowes <i>et al.</i></td></tr>
</tbody></table>
<br />
Do you still see the "structures"?</div>
</div>
Sesh Nadathurhttp://www.blogger.com/profile/07155102110438904961noreply@blogger.com13tag:blogger.com,1999:blog-6976071487922527618.post-28631528840107548212013-06-23T17:01:00.000+02:002013-06-23T18:14:32.175+02:00Across the Himalayan Axis<div dir="ltr" style="text-align: left;" trbidi="on">
<div dir="ltr" style="text-align: left;" trbidi="on">
I had promised to try to write a summary of the <a href="http://www.helsinki.fi/~lavinto/workshop/index.html">workshop on cosmological perturbations post Planck</a> that took place in Helsinki in the first week of June, but although the talks were all interesting, I didn't feel very inspired to write much about them. Plus life has been intervening, so I'll have to leave you to <a href="http://trenchesofdiscovery.blogspot.de/2013/06/cosmological-perturbations-post-planck_4.html">read</a> <a href="http://trenchesofdiscovery.blogspot.de/2013/06/cppp-ii.html">Shaun's</a> <a href="http://trenchesofdiscovery.blogspot.de/2013/06/cosmological-perturbations-post-planck_12.html">accounts</a> at the <i>Trenches of Discovery</i> instead.<br>
<br>
I also recently put a new paper on the arXiv; despite promising to write about my own papers when I put them out, I'm going to have to postpone an account of this one until next week. This is because I am spending the next week at a <a href="https://wiki.helsinki.fi/display/alpine/cosmology">rather unique workshop</a> in the Austrian Alps. (This is one of the perks of being a physicist, I suppose!)<br>
<br>
Therefore today's post is going to be about mountaineering instead. It is an account I wrote of a trek I did with my father and sister almost exactly seven years ago: we crossed the main Himalayan mountain range from south to north over a mountain pass known as the Kang La (meaning 'pass of ice' in the local Tibetan dialect, I believe), and then crossed back again from north to south over another pass as part of a big loop. In doing so we also crossed from the northern Indian state of Himachal Pradesh into Zanskar, a province of the state of Jammu and Kashmir, and then back again.<br>
<br>
The account below was first written as a report for the A.C. Irvine Travel Fund, who partly funded this trip, and it has been available via a link on their website for several years. At the time, the Kang La was a very infrequently-used pass, in quite a remote area and only suitable for strong hikers with high-altitude mountain experience. But in the seven years since my trip it has seen quite a rise in popularity — I sometimes flatter myself that my account had something to do with raising the profile of the area!<br>
<br>
Anyway, the account itself follows after the break. There is also a sketch map of the area I drew myself (it's hard to obtain decent cartographical maps of the area, and illegal to possess them in India due to the proximity to the border), and a few photographs to illustrate the scenery ...<br>
</div></div><a href="http://blankonthemap.blogspot.com/2013/06/across-himalayan-axis.html#more">Read more »</a>Sesh Nadathurhttp://www.blogger.com/profile/07155102110438904961noreply@blogger.com2tag:blogger.com,1999:blog-6976071487922527618.post-85642146959911146632013-06-04T12:55:00.001+02:002013-06-04T17:45:19.426+02:00CPPP 13<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="p1">
This week I am attending <a href="http://www.helsinki.fi/~lavinto/workshop/index.html">this workshop</a> in Helsinki. The focus of the workshop is on re-evaluating theoretical issues in cosmology in light of the new data from the Planck satellite. </div>
<div class="p2">
<br /></div>
<div class="p1">
Although the data were released in March, so far as I know they have not yet inspired any major theoretical breakthroughs. This is partly because the results were somewhat disappointingly boring, in that there is no smoking-gun indication in the data of failures of our current cosmological model (for more on this, see <a href="http://resonaances.blogspot.fi/2013/03/the-universe-after-planck.html">here</a>), and therefore no clear hints of which extensions of the model we should be looking to explore further. There are still some niggles in the data, to be sure – such as the much advertised "anomalies". But these have not yet led to any major advances either. As a community it seems we cosmologists are still digesting the Planck results.</div>
<div class="p2">
<br /></div>
<div class="p1">
This workshop should aid that process of digestion. There are many scientists from all over the world attending, and I'm looking forward to hearing what they think about what the data mean. The way the workshop has been organised deliberately leaves plenty of time for discussion in between the scheduled talks, which I think is always the best way to go. I'm not giving a talk myself, though some work I did recently with Shaun Hotchkiss and Samuel Flender, who are both based in Helsinki, will feature in the poster session. Samuel gets most of the credit for preparing our poster though!</div>
<div class="p2">
<br /></div>
<div class="p1">
I'm not going to attempt to blog about the workshop in real time. Instead I will try to make a few notes and provide a single post at the end of the week touching on what I thought were the most interesting topics of the week. If you want more detail on each day, you should read Shaun's <a href="http://trenchesofdiscovery.blogspot.de/2013/06/cosmological-perturbations-post-planck.html">introduction</a> and day-by-day accounts at <i><a href="http://trenchesofdiscovery.blogspot.de/">The Trenches of Discovery</a></i>. He did a similar thing before for the official ESA Planck conference, which was very successful. But I'd rather him than me, especially as internet access is rather expensive at my hotel!</div>
<div class="p2">
<br /></div>
<br />
<div class="p1">
Meanwhile, the most remarkable fact to note about Helsinki right now is the weather: a thermometer in my hotel said it was 28°C this morning, which is wonderful outdoors but when combined with quadruple-glazed windows makes nights rather uncomfortable ...</div>
</div>
Sesh Nadathurhttp://www.blogger.com/profile/07155102110438904961noreply@blogger.com0tag:blogger.com,1999:blog-6976071487922527618.post-58999392182407116302013-05-27T19:47:00.000+02:002013-05-27T19:47:42.666+02:00An inconsistent CMB?<div dir="ltr" style="text-align: left;" trbidi="on">
When the Planck science team announced their results in March, they also put out a great flood of papers. You can find the list <a href="http://www.sciops.esa.int/index.php?project=PLANCK&page=Planck_Published_Papers">here</a>; there are 29 of them, plus an explanatory statement.<br />
<br />
Except if you look carefully, only 28 of the papers have actually been released. Paper XI, 'Consistency of the data', is still listed as "in preparation". Now, what this paper was supposed to cover was the question of how consistent Planck results were with previous CMB experiments, such as <a href="http://lambda.gsfc.nasa.gov/product/map/current/">WMAP</a>. We already knew that there were some inconsistencies, both in the derived cosmological parameters such as the dark energy density and the Hubble parameter, and in the overall normalization of the power seen on large scales. We might expect this missing paper to tell us the reason for the inconsistencies, and perhaps to indicate which experiment got it wrong (if any). The problem is that at present there is no indication when we can expect this paper to arrive – when asked, members of the Planck team only say "soon". I presume that the reason for the delay is that they are having some unforeseen difficulty in the analysis.<br />
<br />
However, if you were paying attention last week, you might have noticed a new submission to the arXiv that provided an interesting little insight into what might be going on. This <a href="http://arxiv.org/abs/1305.4033">paper</a> by Anne Mette Frejsel, Martin Hansen and Hao Liu – the authors are at the Niels Bohr Institute in Copenhagen, and in fact all three recently visited Bielefeld for our <a href="http://www2.physik.uni-bielefeld.de/kosmologietag2013.html">Kosmologietag</a> workshop – applied a particular consistency check to Planck and WMAP data ... and found WMAP wanting.<br />
<br />
The test they applied is really pleasingly simple. Suppose you want to measure the CMB temperature anisotropies on the sky using your wonderful satellite – either WMAP or Planck. Unfortunately, there's a great big galaxy (our galaxy) in the way, obscuring quite a large fraction of the sky:<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-BQJ0JIjAKys/UVLAxTS6K6I/AAAAAAAAE-Q/XiiX_JvgLpQ/s1600/CMBforeground1.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="216" src="http://3.bp.blogspot.com/-BQJ0JIjAKys/UVLAxTS6K6I/AAAAAAAAE-Q/XiiX_JvgLpQ/s400/CMBforeground1.png" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">The CMB sky as seen by Planck in the 353 GHz channel. Obviously there's a lot of foreground in the way. (This is not the best frequency for viewing the CMB, by the way. I chose it only because it illustrates the foregrounds quite nicely!)</td></tr>
</tbody></table>
<br />
<div>
Now, as I've <a href="http://blankonthemap.blogspot.de/2013/03/explaining-planck-by-analogy.html">mentioned before</a>, there are clever ways of removing this foreground and getting to the underlying CMB signal. The CMB signal is what is interesting for cosmologists, because that is what gives us the insight into fundamental physics. Foregrounds are about messy stuff to do with the distribution of dust in our galaxy: the details are complicated, but the underlying physics is not that interesting (ok, maybe it is, but to different people). Anyway, using their clever techniques (and measurements of the CMB+foreground at several different frequencies), the guys at Planck or WMAP can come up with the best map they can that they think represents the CMB with foreground removed.</div>
<div>
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-kBcdpaHjX78/UUsikPkORSI/AAAAAAAAE9w/2dAmu-gmeGE/s1600/Planck_CMB.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="201" src="http://3.bp.blogspot.com/-kBcdpaHjX78/UUsikPkORSI/AAAAAAAAE9w/2dAmu-gmeGE/s400/Planck_CMB.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Planck's SMICA map of the CMB.</td></tr>
</tbody></table>
<br />
<div>
The map above shows the Planck team's effort. Well actually they produced four different such "CMB only" maps, constructed by four different methods of removing the foregrounds. These are known as the SMICA, SEVEM, NILC and Commander-Ruler maps, the names indicating the different foreground-removal algorithms used. For some reason, Commander-Ruler appears not to be recommended for general use. WMAP on the other hand produced only one, known as the Internal Linear Combination or ILC map. (Planck's NILC is meant to be a counterpart to WMAP's ILC.)</div>
<div>
<br /></div>
<div>
Now, although the algorithms used to produce these maps are, as I said, very clever, the resultant maps are never going to be <i>completely </i>foreground-free. Let's express this as the equation</div>
<div style="text-align: center;">
<br /></div>
<div style="text-align: center;">
map = CMB + noise </div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
where the "noise" term includes foregrounds as well as instrument noise, systematics and other contaminants. If you have more than one map, they see the same fundamental CMB, but the noise contribution to each is different. So you can subtract one from the other to get a new map consisting of their difference:</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
<div style="text-align: center;">
difference = map<sub>1</sub> – map<sub>2</sub> = noise<sub>1</sub> – noise<sub>2</sub>.</div>
</div>
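The arithmetic of a difference map is trivial to demonstrate. Here is a toy version using flat pixel arrays as stand-ins for sky maps (a real analysis would use HEALPix maps, e.g. via the healpy package; the amplitudes below are made-up numbers):

```python
import numpy as np

rng = np.random.default_rng(0)
npix = 10000

# One shared CMB realisation plus independent residuals for each
# foreground-cleaning method (amplitudes in muK are illustrative).
cmb = rng.normal(0.0, 100.0, size=npix)
map1 = cmb + rng.normal(0.0, 5.0, size=npix)   # "method 1" cleaned map
map2 = cmb + rng.normal(0.0, 5.0, size=npix)   # "method 2" cleaned map

diff = map1 - map2   # the common CMB cancels; only residuals survive

# The rms of the difference map is far below that of either input map.
print(np.std(map1), np.std(diff))
```

Because the cosmological signal drops out exactly, whatever survives in the difference map is pure contamination – which is what makes this such a clean consistency check.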
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
Since most of the residual noise should be due to the galactic foreground, most of the features in the difference map should be around the galactic plane. If the foreground removal has been reasonably successful, these features should also be small. And for the Planck maps, that is in fact what Frejsel, Hansen and Liu find:<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-yWoRnSHiXdE/UaOTPeWGaNI/AAAAAAAAFBQ/FXvbnF6ZGjU/s1600/Planck+difference+maps.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://1.bp.blogspot.com/-yWoRnSHiXdE/UaOTPeWGaNI/AAAAAAAAFBQ/FXvbnF6ZGjU/s400/Planck+difference+maps.png" width="240" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"> NILC–SMICA, NILC–SEVEM and SMICA–SEVEM difference maps. Figure from <a href="http://arxiv.org/abs/1305.4033">arXiv:1305.4033</a>.</td></tr>
</tbody></table>
<br />
So the various different methods used by Planck seem to give self-consistent answers.<br />
<br />
The same is not true, however, for WMAP. Of course WMAP only use the one method of removing foreground, but they did provide different maps based on the data they had collected after 7 years of operation and after 9 years. The ILC9–ILC7 difference map looks quite different:<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-JpQDwG-JIiU/UaOUeDez1AI/AAAAAAAAFBg/4RK1rCK6yfU/s1600/WMAP+ILC+anomalies.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="179" src="http://2.bp.blogspot.com/-JpQDwG-JIiU/UaOUeDez1AI/AAAAAAAAFBg/4RK1rCK6yfU/s640/WMAP+ILC+anomalies.png" width="600" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">ILC9–ILC7 difference map on the left, and with a galactic mask overlaid on the right. Figure from <a href="http://arxiv.org/abs/1305.4033">arXiv:1305.4033</a>.</td></tr>
</tbody></table>
<br />
Most of the difference appears well away from the galactic plane, as you can see in the right-hand figure, where the galaxy is masked out. So there is some important source of noise that is not foregrounds – so probably some systematics – that has affected the WMAP ILC map. Even more importantly, it is some kind of systematic effect that has changed between the 7-year and the 9-year WMAP data releases, meaning that the ILC9 and ILC7 maps do not appear to be consistent <i>with each other</i>. Frejsel <i>et al</i>. discuss a method of quantifying this, but I won't go into that here because the impression created by the images is both dramatic enough and entirely in line with the quantitative analysis.<br />
<br />
As you might have expected, the same method shows that WMAP's ILC9 map is thoroughly inconsistent with the various Planck maps (the picture here is even worse than that between the two ILC maps). But perhaps surprisingly, ILC7 is perfectly consistent with Planck. So it appears that whatever might have affected the WMAP results only affected the final data release.<br />
<br />
I guess one should be careful not to make too much of a fuss about this. The results from Planck and WMAP are, generally speaking, in pretty good agreement, except for some problems at the very largest scales. It is also true that the WMAP team themselves do not use the ILC map for most of their analysis (except for the low multipoles, $\ell<32$ – that is, the very largest scales!). But I'm sure this paper will provoke some head-scratching among the WMAP team as they try to figure out what has happened here. Oh, and if you are cosmologist using the ILC9 map for your own analysis, you should probably check whatever conclusions you draw using some other maps before publishing!<br />
<br />
All in all, I think I'm rather looking forward to Planck's consistency paper when it does finally come out!</div>
</div>
Sesh Nadathurhttp://www.blogger.com/profile/07155102110438904961noreply@blogger.com0