A few days ago I discussed the purported 'spooky' alignment of quasar spins and the cosmological principle here. So as to focus better on the main point, I left a few technical comments out of that discussion which I want to mention here. These don't have any direct bearing on the main argument made in that post — rather they are interesting asides for a more expert audience.
Quasars can't prove the Universe is homogeneous
Readers of the original post might have noticed that I was quite careful to always state that the distribution of quasars was statistically homogeneous, but not that the quasars showed the Universe was homogeneous. The reason for this lies in the properties of the quasar sample itself.
There are two main ways of constructing a sample of galaxies or quasars to use for further analysis, such as testing homogeneity. The first is simply to include every object seen by the survey instruments within a certain patch of sky lying between two redshifts of interest. But these objects vary in their intrinsic brightness, and the survey instruments have limited sensitivity, so they can only record dim objects when those objects are relatively close to us. Intrinsically bright objects are rarer, so at large distances only these rare bright ones remain visible. This strategy therefore results in a sample containing very many, but largely dim, galaxies or quasars relatively close to us, and fewer but brighter objects far away. This is known as a flux-limited sample.
The other strategy is to correct the measured brightness of each object for its distance from us, to determine its 'intrinsic' brightness (otherwise known as its absolute magnitude), and then to select only those objects with similar absolute magnitudes. The magnitude range is matched to the range of distances surveyed, such that within the volume of the Universe covered we can be confident of having seen every object in that magnitude range that exists. This is called a volume-limited sample.
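To make the distinction concrete, here is a minimal sketch of how one might carve a volume-limited subsample out of a flux-limited catalogue. Everything here is invented toy data (the magnitudes, distances and survey limit are illustrative numbers, not those of any real survey); the only real physics is the distance modulus relation m = M + 5 log10(d_L / 10 pc):

```python
import numpy as np

# Toy flux-limited catalogue: luminosity distances (Mpc) and absolute
# magnitudes drawn from made-up distributions, purely for illustration.
rng = np.random.default_rng(0)
d_L = rng.uniform(500.0, 4000.0, size=10000)     # Mpc
M_true = rng.normal(-22.0, 1.5, size=10000)      # absolute magnitudes

# Apparent magnitude via the distance modulus; with d_L in Mpc,
# d_L / 10 pc = d_L * 1e5.
m = M_true + 5.0 * np.log10(d_L * 1e5)

m_lim = 19.0                     # hypothetical survey flux limit
flux_limited = m < m_lim         # everything the instrument records

# Volume-limited cut: choose an outer distance d_max and keep only objects
# intrinsically bright enough to be seen even at d_max. Within d_max the
# resulting subsample is then complete.
d_max = 3000.0                                   # Mpc
M_lim = m_lim - 5.0 * np.log10(d_max * 1e5)      # faintest M visible at d_max
volume_limited = flux_limited & (d_L < d_max) & (M_true < M_lim)
```

By construction, every object brighter than M_lim and closer than d_max automatically passes the flux limit, which is exactly the completeness the volume-limited construction buys.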
Testing the homogeneity of the Universe requires a volume-limited survey of objects. For a flux-limited sample the distribution in redshift (i.e., in the line-of-sight direction) would not be expected to be uniform in the first place: the number density of objects would ordinarily decrease sharply with redshift. But looking out away from Earth also involves looking back in time; so if the redshift range of the survey is large, the farthest objects are seen as they were at an earlier time than the closest ones. If the objects in question had evolved significantly in that time, near and far objects could represent significantly different populations even in a volume-limited sample, and once again we wouldn't expect to see homogeneity along the line of sight, even if the Universe were homogeneous.
So to test the cosmological principle without having to assume homogeneity at the outset,¹ we need a volume-limited sample of galaxies that covers a very large volume of the Universe but spans a relatively narrow range of redshifts. Such surveys are hard to come by. For example, the study confirming homogeneity in WiggleZ galaxies (see here and here) actually used a flux-limited sample, so required additional assumptions. In this case one doesn't obtain a proof, rather a check of the self-consistency of those assumptions — which people may regard as good enough, depending on taste.
Anyway, the key point is that the DR7QSO quasar sample everyone uses is most definitely flux-limited and not volume-limited (I was myself reminded of this point by Francesco Sylos Labini). Despite this, the redshift distribution of quasars is remarkably uniform (between redshifts 1 and 1.8). So what's going on? Well, unlike certain types of galaxies that live much closer to home, distant quasar populations are expected to evolve rather quickly with time. And the age difference between objects at redshifts 1 and 1.8 is more than 2 billion years!
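That 2-billion-year figure is easy to check with a quick numerical integration of the lookback time. This sketch assumes a flat ΛCDM cosmology with H0 = 70 km/s/Mpc and Ωm = 0.3 (fiducial values of my own choosing, not parameters taken from any quasar analysis):

```python
import numpy as np

H0 = 70.0                 # Hubble constant, km/s/Mpc (assumed fiducial value)
Om = 0.3                  # matter density parameter (assumed fiducial value)
H0_inv_gyr = 977.8 / H0   # 1/H0 in Gyr, since 1 km/s/Mpc = 1/977.8 Gyr^-1

def lookback_time(z, n=2000):
    """Lookback time in Gyr for flat LCDM, by trapezoidal integration of
    t_lb = (1/H0) * integral_0^z dz' / [(1+z') E(z')],
    where E(z) = sqrt(Om (1+z)^3 + 1 - Om)."""
    zp = np.linspace(0.0, z, n + 1)
    f = 1.0 / ((1.0 + zp) * np.sqrt(Om * (1.0 + zp)**3 + 1.0 - Om))
    integral = np.sum((f[1:] + f[:-1]) * np.diff(zp)) / 2.0
    return H0_inv_gyr * integral

dt = lookback_time(1.8) - lookback_time(1.0)   # age gap between z=1.8 and z=1
```

With these parameters the gap comes out at roughly 2.2 Gyr, consistent with the 'more than 2 billion years' quoted above.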
It would appear that this effect and the flux-limited nature of the survey coincidentally roughly cancel each other out for the sample in question. A volume-limited subset of these quasars would be (is) highly inhomogeneous — but then because of the time evolution the homogeneity or otherwise of any sample of quasars says nothing much about the homogeneity or otherwise of the Universe in general.
Luckily this is only incidental to the main argument. The fact that the distribution of these (flux-limited) quasars is statistically homogeneous on scales of 100-odd Megaparsecs despite claims for the existence of Gigaparsec-scale 'structures' simply demonstrates the point that the existence of single structures of any kind doesn't have any bearing on the question of overall homogeneity. Which is the main point.
Homogeneity is sample-dependent
Of course the argument above cuts both ways.
Let's imagine that a study has shown that the distribution of a particular type of galaxy — call them luminous red galaxies — approaches homogeneity above a certain distance scale, say 100 Megaparsecs. Such a study was done by David Hogg and others in 2005. From this we may reasonably conclude (though not, strictly speaking, prove) that the matter distribution in the Universe becomes homogeneous above some scale that is at most 100 Mpc. But we are not allowed to conclude that the distribution of some other sample of objects — radio galaxies, quasars, blue galaxies etc. — approaches homogeneity above the same scale, or indeed at all!
Even in a Universe with a homogeneous matter distribution, the scale above which a volume-limited sample of galaxies whose properties are constant with time approaches homogeneity depends on the galaxy bias. The bias depends on the type of galaxies in question, and so, to a lesser extent, does the expected homogeneity scale. Of course if the sample is not volume-limited, or does evolve with time, all bets are off anyway.
More generally, for each sample of galaxies that we wish to use for higher order statistical measurements, the statistical homogeneity of that particular sample must in general be demonstrated first. This is because higher order statistical quantities, such as the correlation function, are conventionally normalized in units of the sample mean, but in the absence of statistical homogeneity this becomes meaningless.
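To illustrate why the sample mean matters, here is a toy demonstration of the standard counts-in-spheres check, using a Poisson (statistically homogeneous) point set of my own construction rather than any real survey. The mean number of neighbours within radius r, divided by the homogeneous expectation n̄V(r) built from the sample mean density, should settle to 1; if the sample had no well-defined mean density, this normalization, and with it the correlation function, would be meaningless.

```python
import numpy as np

rng = np.random.default_rng(1)
L = 1000.0                                   # toy box side (Mpc, illustrative)
pts = rng.uniform(0.0, L, size=(20000, 3))   # statistically homogeneous sample
nbar = len(pts) / L**3                       # the sample mean density

def scaled_counts(r, n_centres=200):
    """Mean counts in spheres of radius r around random interior centres,
    normalized by the homogeneous expectation nbar * (4/3) pi r^3.
    Approaches 1 for a statistically homogeneous sample."""
    centres = rng.uniform(r, L - r, size=(n_centres, 3))  # keep spheres in the box
    counts = [np.sum(np.sum((pts - c)**2, axis=1) < r * r) for c in centres]
    return np.mean(counts) / (nbar * 4.0 / 3.0 * np.pi * r**3)
```

For this toy sample, scaled_counts(100.0) comes out within a few per cent of 1; for a fractal-like distribution the ratio would keep drifting with r instead of settling down.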
There was a time when the homogeneity of the Universe was less well accepted than it is today, and the possibility of a fractal distribution of matter was still an open question. At that time demonstrating the approach to homogeneity on large scales in a well-chosen sample of galaxies was worth a publication (even a well-cited publication) in itself. This is probably no longer the case, but it remains a necessary sanity check to perform for each galaxy survey.
¹ Properly speaking, even the creation of a volume-limited sample requires an assumption of homogeneity at the outset, since the determination of absolute magnitudes requires a cosmological model, and the cosmological model used will assume homogeneity. In this sense all "tests" of homogeneity are really consistency checks of our assumption thereof.