In addition to the developing series of posts on probability and statistical inference, I also want to write another series discussing the patterns in the distribution of galaxies, clusters of galaxies and dark matter in the Universe: how we reconstruct these patterns from observation, and how we can use this information to learn about the very distant past back near the time of the Big Bang some 14 billion years ago. This is what much of my day-to-day research is about, so I can claim more expertise on this subject than on some of the others I post about.
I'll try to keep the majority of the discussion at a level suitable for readers with an interest in cosmology, but no detailed technical knowledge of it, though I hope to include enough information to interest more expert readers as well. However, rather than constructing a systematic development of ideas from first principles I'm afraid I will flit about like a butterfly, alighting on topics that are of particular interest to me at the moment! Questions and feedback are welcome via the comments box.
Today's post is about the homogeneity of the distribution: the absence of pattern.
The Cosmological Principle
The cosmological principle is the starting point for all of modern cosmology. You have to start somewhere, and this assumption — which states simply that we do not live in a special place in the Universe — is a philosophically and aesthetically attractive starting point. It turns out it is also a very useful starting assumption.
This is because when we look at the sky around us — more particularly, when we look at the cosmic microwave background (CMB) radiation left over from the early Universe — we see a very high degree of isotropy: the CMB radiation looks extraordinarily uniform in every direction. Assuming that we are not in a special position then amounts to assuming that the Universe looks extremely isotropic from every location, and the mathematical consequence of this is that it must be very close to perfectly homogeneous, with matter and galaxies uniformly distributed everywhere. Treating the Universe as homogeneous and isotropic allows a dramatic simplification of the complicated equations of general relativity that govern its evolution. Cosmologists' lives would be a whole lot harder without the cosmological principle. In a sense, this is very much a 'spherical cow' approximation.
Of course, just as cows aren't really spherical, so the Universe isn't really homogeneous on all scales. You and I and the Earth exist, and there is a very large amount of empty space between us and the Sun or the other planets; still more so between the Sun and the next nearest star and so on. Even on much larger scales, stars are bunched up in galaxies, galaxies cluster together in large groups, clusters of galaxies form enormous filaments and large voids and so on. Have a look at this excellent little video released by the Sloan Digital Sky Survey, featuring a simulated flight through the Universe based on the actual positions of observed galaxies, to get an idea of the structure in the matter distribution:
When cosmologists assume homogeneity, we really only mean on the very largest scales, on which even entire galaxies might be too small to notice. Exactly how big these scales are is something I will return to in a moment.
(A few side comments before moving on. Firstly, the CMB photons were emitted nearly 14 billion years ago, a mere 380,000 years or so after the Big Bang itself. So the isotropy of the CMB at that time must really be extrapolated to the present day; this is acceptable because, according to our current understanding of how structures grow, perturbations as tiny as those we see in the CMB will still be rather small today. Secondly, we actually assume the homogeneity of space itself, via the choice of the Friedmann-Robertson-Walker metric, rather than the homogeneity of the mass distribution, which is related to derivatives of the metric.)
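For the more technically inclined, the Friedmann-Robertson-Walker metric in question takes the standard form

$$ds^2 = -c^2\,dt^2 + a^2(t)\left[\frac{dr^2}{1-kr^2} + r^2\left(d\theta^2 + \sin^2\theta\,d\phi^2\right)\right],$$

where the scale factor $a(t)$ carries all the time dependence and the constant $k$ sets the spatial curvature. Homogeneity and isotropy are built in, because neither $a$ nor $k$ depends on position.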
The strategy in cosmology is to calculate the 'background' dynamics of the Universe in different models using the assumption of homogeneity, and then to include small corrections to this picture via a perturbation theory approach. The foundation of this strategy is the assumption that on 'large enough' scales the Universe is indeed close to homogeneous today, such that perturbation theory is an appropriate tool to use. ('Large enough' is to be understood in the context of the cosmological effect one is trying to model: for instance, the scale of the baryon acoustic oscillations (BAO) is roughly 150 megaparsecs (Mpc).) It is not unreasonable, therefore, to want to test this assumption at some point.
Testing the cosmological principle
The current standard cosmological model based on the assumption of homogeneity, known as the $\Lambda$ Cold Dark Matter model, does such a good job of fitting all the observational data we have so far that it is tempting to treat this as indirect evidence that the case is closed. But physicists are supposed to be sceptics, and would expect this assumption to be tested rigorously. So it was nice to see a paper on the arXiv earlier this year by Scrimgeour et al. (published this week in Monthly Notices) that attempted to directly answer the question of whether galaxies are homogeneously distributed using data from the WiggleZ galaxy survey. It's a nice paper and although it addresses a fundamental question, the main physical ideas behind it are straightforward, so I'm going to attempt to describe them in this post.
It's actually quite tricky to establish whether the galaxies in any particular survey are homogeneously distributed without already assuming the answer from the beginning. Simplifying slightly, what Scrimgeour et al. do is to centre an imaginary sphere of radius $r$ on each galaxy in the survey and determine the number of galaxies that lie within it. This number is a random variable. For a purely homogeneous distribution with no correlations, its mean value $\mathcal{N}(<r)$ should increase as $r^3$. On the other hand, if galaxies were distributed in a fractal manner, with structure on all scales and no large-scale homogeneity, $\mathcal{N}(<r)\propto r^D$ for some fractal dimension $D<3$.
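Just to illustrate the counting step, here is a minimal sketch in Python (nothing to do with the actual WiggleZ pipeline, and using a periodic toy box so that survey boundaries can be ignored entirely), applied to a homogeneous mock catalogue where the fitted exponent should come out close to 3:

```python
import numpy as np
from scipy.spatial import cKDTree

# Toy homogeneous mock: uniform random points in a periodic 1000 Mpc box.
rng = np.random.default_rng(1)
positions = rng.uniform(0.0, 1000.0, size=(20000, 3))
tree = cKDTree(positions, boxsize=1000.0)  # periodic, so no edge effects here

radii = np.logspace(1.0, 2.0, 10)  # sphere radii from 10 to 100 Mpc

# Mean number of *other* galaxies within r of each galaxy:
# count_neighbors counts all pairs, including each point with itself,
# so divide by the number of centres and subtract the self-pair.
N_of_r = tree.count_neighbors(tree, radii) / len(positions) - 1

# Fit N(<r) ~ r^D in log-log space; D is the slope.
D, _ = np.polyfit(np.log(radii), np.log(N_of_r), 1)
print(f"fitted dimension D = {D:.2f}")  # close to 3 for a homogeneous box
```

Run instead on a genuinely fractal point set, the same fit would return some $D<3$.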
The real distribution is not expected to scale as $r^3$ at small $r$ even in the standard cosmological model, because of correlations: galaxies tend to clump close together, so smaller spheres contain relatively more galaxies and $\mathcal{N}(<r)/r^3$ decreases with increasing $r$. Indeed, on small scales (tens of Mpc) the distribution is already known to be quite well described by a fractal. But in the standard model the correlations die down to zero with increasing $r$, so that $\mathcal{N}(<r)$ should get closer and closer to scaling as $r^3$. On the other hand, alternative models in which the Universe really does have a fractal nature even on the largest scales do exist and are taken seriously by some people. So: can the approach to homogeneity be confirmed, and, if so, does the distribution approach homogeneity in the manner predicted by $\Lambda$CDM, neither faster nor slower? The answer in this paper is yes on both counts.
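(For the quantitatively minded: a standard way to phrase this is via the volume-averaged two-point correlation function $\bar{\xi}(r)$, in terms of which

$$\mathcal{N}(<r) = \frac{4}{3}\pi r^3\,\bar{n}\left[1 + \bar{\xi}(r)\right], \qquad \bar{\xi}(r) \equiv \frac{3}{r^3}\int_0^r \xi(s)\,s^2\,ds,$$

where $\bar{n}$ is the mean galaxy number density and $\xi(s)$ is the usual two-point correlation function. A scale-dependent dimension can then be defined as $D_2(r) \equiv d\ln\mathcal{N}(<r)/d\ln r$, which tends to 3 as $\bar{\xi}(r)\to 0$ but stays below 3 for a pure fractal.)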
Science isn't so simple, though!
Actually, there were several complications that the WiggleZ team had to overcome. One arises from the fact that galaxies have variable brightness. When they are close to us we can see galaxies even if they are rather dim, but of those further away we only see the brightest. So the observed distribution of galaxies in a survey, which sees different fractions of nearby and far-away galaxies, will differ from the intrinsic distribution. Such surveys are called 'flux-limited'; WiggleZ is a flux-limited survey. There are a few other features of WiggleZ which have a similar effect.
The nature of the survey therefore means that spheres of the same volume placed in different locations would be expected to contain different numbers of galaxies, even if the distribution were homogeneous. To correct for this, the authors had to assign each sphere an effective volume accounting for these location-dependent differences, and then scale the number of galaxies actually seen in each sphere by this effective volume. Unfortunately, calculating the appropriate effective volume involves assuming that galaxies actually are homogeneously distributed on some scale, even though this is the proposition they set out to test in the first place.
Another problem is that the WiggleZ survey region isn't neat and continuous. The boundary of the surveyed region has a peculiar shape, and there are many holes within it where no observations were carried out. So when the spheres are large, it is hard to fit very many of them wholly within the survey volume. Some of them then poke out of the surveyed region and so would only be partly filled with galaxies; unless this were corrected for, one would get spurious results. In the relevant range of $r$ values, about 20% or more of the volume of the average sphere in the analysis lies outside the surveyed region. There are several different ways of accounting for these edge effects. One would be to use only spheres that lie wholly within the survey volume, though this means wasting a lot of the available data. What the WiggleZ team did instead was to factor these edge effects into their calculation of the effective volume, as sketched below. This makes use of all the data, but has the disadvantage of again presupposing homogeneity.
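To give a flavour of how a correction of this kind works in practice, here is a minimal sketch using a random catalogue: the standard trick, though the details of the actual WiggleZ effective-volume calculation differ. One fills the survey volume with a dense, unclustered set of random points subject to the same selection as the galaxies, and divides the galaxy counts by the expected counts derived from the randoms:

```python
import numpy as np
from scipy.spatial import cKDTree

def scaled_counts(gal_pos, ran_pos, radii):
    """Scaled counts-in-spheres N(<r) / N_expected(<r), which should
    approach 1 on scales where the galaxy distribution is homogeneous.

    gal_pos : (N, 3) galaxy positions
    ran_pos : (M, 3) unclustered random points with the same survey
              geometry and selection function, with M >> N

    Note the implicit circularity discussed above: the randoms' mean
    density is tied to the survey average, which presupposes
    homogeneity on some scale.
    """
    gal_tree, ran_tree = cKDTree(gal_pos), cKDTree(ran_pos)
    alpha = len(ran_pos) / len(gal_pos)  # random-to-galaxy density ratio

    # Mean galaxy and random counts in spheres centred on each galaxy;
    # subtract the self-pair from the galaxy-galaxy counts.
    n_gal = gal_tree.count_neighbors(gal_tree, radii) / len(gal_pos) - 1
    n_ran = gal_tree.count_neighbors(ran_tree, radii) / len(gal_pos)

    # Spheres poking out of the survey (or into holes) capture fewer
    # randoms, so n_ran automatically encodes each sphere's effective
    # volume; dividing by it scales the galaxy count back up.
    return n_gal * alpha / n_ran
```

On a homogeneous mock this ratio tends to 1 at large $r$; for a fractal with $D<3$ it would fall as $r^{D-3}$ instead.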
Given these problems, the test carried out in this paper, as Scrimgeour et al. carefully point out, is strictly speaking more of a consistency check. It would be a great surprise if such a test did not show an approach to homogeneity, both because we expect homogeneity anyway and because of the way the test was designed. And indeed it shows that on scales larger than around 100 Mpc the galaxy distribution is homogeneous. It even approaches homogeneity almost exactly as predicted by $\Lambda$CDM (see Figure 5 in the paper).
Scrimgeour et al. also tested their method on simulated data sets specifically designed to have a fractal nature: these did not approach homogeneity in the same way, which is convincing (though indirect) evidence that, although the method presupposes homogeneity, it is capable of returning a different answer. As a result, even with my sceptical hat on, I am now convinced that the WiggleZ galaxies are sufficiently uniformly distributed that the assumption of homogeneity is justified.
However, one thing I would say is that this sort of test is an essential prerequisite for cosmological analysis of any galaxy survey, especially because different surveys use different methods and sometimes target different types of galaxies. WiggleZ shows an approach to homogeneity, so we can extract reliable information on BAO and so on from WiggleZ galaxies. Other surveys may not, in which case we cannot extract that information from them. This really should be tested every time.
Anyway, at least as far as WiggleZ is concerned, there is no need to throw out the cosmological principle. Having established the large-scale absence of pattern, we can concentrate on what the existence of small-scale patterns tells us. I'll talk about some of these things in later posts.
Further reading: Peter Coles put up a blog post a few years ago that gives some more background on the fractal Universe theories. He also has a much briefer and more recent one discussing this WiggleZ paper, though not in much detail.