## Thursday, January 31, 2013

### Type Ia single degenerate survivors must be overluminous

I noticed a paper on the arXiv today with exactly this title (well, except that I removed the superfluous capitalisation of words), that is due to be published in the Astrophysical Journal. The abstract of the paper says:
> In the single-degenerate (SD) channel of a Type Ia supernovae (SN Ia) explosion, a main-sequence (MS) donor star survives the explosion but it is stripped of mass and shock heated. An essentially unavoidable consequence of mass loss during the explosion is that the companion must have an overextended envelope after the explosion. While this has been noted previously, it has not been strongly emphasized as an inevitable consequence. We calculate the future evolution of the companion by injecting $2$-$6\times10^{47}$ ergs into the stellar evolution model of a $1\,M_\odot$ donor star based on the post-explosion progenitors seen in simulations. We find that, due to the Kelvin-Helmholtz collapse of the envelope, the companion must become significantly more luminous ($10$-$10^3\, L_\odot$) for a long period of time ($10^3$-$10^4$ years). The lack of such a luminous "leftover" star in the LMC supernova remnant SNR 0609-67.5 provides another piece of evidence against the SD scenario. We also show that none of the stars proposed as the survivors of the Tycho supernova, including Tycho G, could plausibly be the donor star. Additionally, luminous donors closer than $\sim10$ Mpc should be observable with the Hubble Space Telescope starting $\sim2$ years post-peak. Such systems include SN 1937C, SN 1972E, SN 1986G, and SN 2011fe. Thus, the SD channel is already ruled out for at least two nearby SNe Ia and can be easily tested for a number of additional ones. We also discuss similar implications for the companions of core-collapse SNe.
Now, technical scientific papers are full of jargon and maybe the meaning of that paragraph isn't immediately clear to everyone (there's an accompanying YouTube video purporting to explain the content of the paper, but I didn't think it quite achieved that aim!). But I think this result is really quite interesting and probably important in a broader cosmological sense.

Essentially, the authors are questioning the canonical understanding of what causes a Type Ia supernova explosion: that it occurs when a degenerate white dwarf star, slowly stripping mass off its main-sequence or red giant companion, finally acquires enough mass to tip it over the Chandrasekhar mass limit (about 1.4 solar masses), triggering a thermonuclear explosion. This is called the single degenerate scenario. I wrote a fuller explanation of this here, where I also mentioned some of the problems with this picture.

One of the problems, and the one this paper is concerned with, is that in simulations of the explosion process, the supernova does not completely destroy the companion star. Therefore, after the supernova has subsided, the companion should still contribute to the spectrum of radiation seen from that location, and would probably even be visible directly. Generally speaking though, no surviving companions are seen, and no effects of one (e.g., hydrogen lines) are seen in the spectrum.

Of course, as mentioned in the abstract, this much was already known. What this paper claims is that if this scenario were true, the surviving companion star would necessarily become much brighter than normal, and stay bright for a long time, making it easy to spot. Consequently, the fact that one can't be seen in some nearby Type Ia supernova locations counts very strongly against the single degenerate scenario.
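As a rough sanity check (my own back-of-envelope arithmetic, not a calculation from the paper), the quoted luminosities and timescales are at least energetically consistent: radiating at, say, $100\,L_\odot$ for a few thousand years releases a few $\times10^{46}$ ergs, a modest fraction of the $2$-$6\times10^{47}$ ergs the authors inject into the donor. A minimal sketch, with the mid-range values picked by me purely for illustration:

```python
L_SUN = 3.828e33   # erg/s, nominal solar luminosity
YEAR = 3.156e7     # s, one year

def radiated_energy(lum_solar, years):
    """Total energy radiated at a constant luminosity of `lum_solar` L_sun
    over `years` years, in ergs."""
    return lum_solar * L_SUN * years * YEAR

# Illustrative mid-range pick from the paper's quoted ranges:
E = radiated_energy(100, 3000)
print(f"{E:.2e} erg")  # a few times 1e46 erg, well within the 2-6e47 erg injected
```

So the excess brightness is plausibly just the envelope radiating away (part of) the shock-deposited energy on its Kelvin-Helmholtz timescale.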

[As an aside: I can't comment on the technical details of the paper, so I am merely trusting the peer review system to have done its job and taking the results at face value. I'm also not familiar with the literature, so the results might not be particularly ground-breaking. After all, even an outsider like me knew about problems with the single degenerate scenario last May.]

The reason this is important for cosmology is that the just-so story of the single degenerate progenitor system seems to be one of the reasons cosmologists unfamiliar with the details of Type Ia supernovae believe they can actually be used as excellent standard candles. It isn't true; they can't. Clever people have been able to "standardize" these candles to an extent in order to obtain some information about the expansion history of the Universe, but there is a limit to how much they can tell us, as I wrote about here.

In particular, something you come across quite often is cosmologists making predictions for how well data from new supernova searches will constrain such things as the equation of state of dark energy. The assumption is that larger data sets will reduce the error bars, thus providing better constraints. This isn't true: systematic errors (which are probably due to intrinsic differences in the supernovae progenitor systems) are already at least as important as statistical errors. Doubling the size of the dataset will not help with this problem, and for some reason it really annoys me to see people continue to assume that it will.
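The scaling argument behind this annoyance can be sketched numerically: statistical errors average down as $1/\sqrt{N}$, but a systematic floor does not, so once you hit the floor, more supernovae buy you almost nothing. The per-object scatter and systematic floor below are hypothetical numbers of my own choosing, just to show the shape of the effect:

```python
import math

def total_error(n_sne, sigma_stat_single=0.15, sigma_sys=0.02):
    """Combined error (in magnitudes) on a binned distance measurement.

    sigma_stat_single: scatter per supernova -- hypothetical value.
    sigma_sys: irreducible systematic floor -- hypothetical value.
    The statistical part shrinks as 1/sqrt(N); the systematic part does not.
    """
    stat = sigma_stat_single / math.sqrt(n_sne)
    return math.sqrt(stat**2 + sigma_sys**2)

for n in (100, 200, 400):
    print(n, round(total_error(n), 4))
```

With these numbers, doubling the sample from 100 to 200 supernovae shrinks the combined error from 0.025 to only about 0.023 mag, nowhere near the $\sqrt{2}$ improvement a purely statistical analysis would promise.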

Anyway, I'm glad I got that off my chest!