I wonder if I am the only person who has found the various publicly available explanations of the expansion of the universe confusing. For about two years after learning the essentials of general relativity I struggled with the apparent contradictions in the idea of cosmological expansion. I asked several questions on internet forums but never got an answer that helped me resolve the confusion. That was probably because I was asking the wrong questions, but it is so hard to know what the right questions are.
I am fortunate to have finally been able to discover the right questions and, with the answers to those and some useful papers by cosmologists, to piece together a coherent understanding of what is actually going on. This essay is an attempt to explain that for anybody else who finds the concepts puzzling.
I’ll start by describing why the concept of an expanding universe is confusing. The usual explanation involves an analogy with a balloon that is being inflated, on which dots have been marked with a felt pen. As the balloon inflates, the dots – which correspond to galaxies – become further apart. There are three key problems with this analogy, and with the idea of expansion in general:
- The balloon analogy fails because it seems to require the existence of a physical object that is space, corresponding to the rubber of the balloon. But the Michelson-Morley experiments showed that there is no ‘absolute space’, which was referred to at the time as the ‘luminiferous ether’. The balloon analogy implies the existence of a ‘privileged reference frame’, and such frames are not supposed to exist.
- We are told that faraway galaxies are receding from us faster than the speed of light. This appears to contradict the rule that nothing can travel faster than light.
- If the universe’s expansion is accelerating then that seems to imply that energy is being created, as the kinetic energy of the galaxies will be ever increasing, without any compensatory reduction in potential energy.
The Cosmological Principle – a Privileged Reference Frame
To begin resolving these objections, we first need to state the fundamental principle on which this sort of analysis is based – the Cosmological Principle. This principle effectively says that there exists a system of coordinates for spacetime under which a snapshot of the universe at any ‘cosmic’ time coordinate is isotropic and homogeneous at the large scale.
A system of coordinates (also known as a ‘coordinate system’ or ‘reference frame’) is a scheme that assigns a unique set of four numbers to every point in spacetime. This enables any point in spacetime to be referred to by those numbers – its ‘coordinates’. One of those coordinates is a time coordinate and the other three are spatial – denoting a location in space.
Homogeneous means it is the same everywhere. This rules out possibilities such as there being a ‘centre of the universe’ where stars occur most frequently, with the distribution of stars getting ever sparser as you travel further from that centre.
Isotropic means it is the same in every direction. So an observer who is stationary with respect to this coordinate system would not, for instance, observe stars on her left moving towards her on average while stars on her right move away from her.
At the large scale means we ignore local variations. Of course stars are more frequent within a galaxy than outside one, but once we zoom our perspective out enough to incorporate many clusters of galaxies, the density of matter should appear roughly constant. Similarly, we may have more objects within our galaxy moving towards us from the left than from the right – meaning local isotropy does not hold – but if we zoom out enough and take enough distant galaxies into our view, their motion relative to us will be the same in all directions.
The homogeneity assumption is easy enough to grasp. It simply says there is nothing special about our part of the universe, that what we can see is a fair sample of the whole thing.
The isotropy assumption is the really powerful one, because it establishes the privileged reference frame that we can associate with the rubber in the balloon analogy. Given any point P in spacetime, we can define a location in space as the unique worldline through P from every point of which the universe appears isotropic. In the analogy, that worldline follows the path of a dot marked on the balloon’s surface as it is being blown up.
By the way, an isotropic universe must be homogeneous, but the reverse is not true. A homogeneous but non-isotropic universe could be one in which the speed of light is different depending on which direction the light is travelling.
We can now satisfy the first objection. There is indeed a privileged reference frame: the one that generates the required conditions of homogeneity and isotropy. It does not require the existence of a substance like the luminiferous ether, whose existence was disproven by the Michelson-Morley experiments. All it requires is the existence of matter throughout the universe, and the application of the Cosmological Principle. Nor does the existence of this frame contradict the principles of Galilean and Special Relativity. Those principles state that there is no privileged ‘inertial reference frame’, meaning a frame that is not accelerated or subject to gravitational forces. The frame identified by the Cosmological Principle is not inertial and hence does not contradict those theories. In one sense it is similar to the ‘laboratory frame’ on Earth – the frame in which the laboratory where experiments are being conducted is stationary. That frame is privileged in a sense, but like the cosmological frame it is not inertial, because of the effect of the Earth’s gravity.
The spatial coordinates of a point that is stationary with respect to the cosmic reference frame are frequently called ‘co-moving coordinates’. In the balloon analogy this means that a point with those coordinates is moving in the same way as the rubber, i.e. it is stationary with respect to the rubber.
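For readers who like to see the mathematics, this combination of cosmic time, co-moving spatial coordinates and expansion is captured by the standard Friedmann-Lemaître-Robertson-Walker (FLRW) metric. The sketch below is just the textbook form, not anything specific to this essay:

```latex
% FLRW line element in co-moving coordinates (r, theta, phi) and cosmic time t.
% a(t) is the scale factor, playing the role of the stretching rubber; k is the
% spatial curvature constant (+1, 0 or -1).
\[
  ds^2 = -c^2\,dt^2 + a(t)^2\left[\frac{dr^2}{1 - k r^2}
       + r^2\bigl(d\theta^2 + \sin^2\theta\,d\varphi^2\bigr)\right]
\]
% A galaxy at fixed (r, theta, phi) is co-moving: all of its recession from us
% comes from the growth of the scale factor a(t), not from motion through space.
```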
Given this special reference frame, we can now make sense of another commonly used aspect of the balloon analogy, that of ants crawling on the balloon. In the balloon analogy, there is a clear difference between the relative motion of dots marked on the balloon’s surface and the relative motion of ants crawling on that surface. In the former case, the motion is caused solely by the expansion, deflation or other deformation of the balloon. In the latter, the motion is a combination of that with the motion of the ant relative to the rubber.
We can use the same concept in our universe model. The coordinates of a stationary section of the ‘rubber’ are determined by application of the cosmological principle. Objects that are stationary in those coordinates may still be moving relative to one another due to the expansion of the universe. On top of that, an object may be moving locally, relative to that coordinate system. For instance, the Earth is orbiting the sun, which involves constant changes of direction, incorporating a full 180 degree change of direction of motion every six months. This motion is relative to the cosmic coordinates, and we call it ‘peculiar motion’. It is analogous to an ant walking around in circles on the rubber of the balloon.
The Andromeda galaxy is moving towards ours under gravitational attraction, and they seem destined to collide. Those relative motions are also peculiar motion, like two ants rushing towards one another along the surface of an expanding balloon.
One final note on this special cosmic reference frame. The standard way to determine it is by reference to the Cosmic Microwave Background Radiation (CMBR) – the radiation left over from the Big Bang, which fills the sky in every direction. We can identify the co-moving coordinates of our location by finding the velocity we would need to have for the wavelength of the CMBR to be the same in every direction. An observer that is not ‘co-moving’ with the cosmic reference frame will see longer wavelengths in one direction than in its opposite, because of the Doppler effect.
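To first order in v/c this directional shift takes a simple, well-known form, usually expressed in terms of the CMBR temperature (which is inversely proportional to the typical wavelength). A sketch, with T_0 the mean CMBR temperature and theta the angle between the line of sight and the observer’s velocity:

```latex
% Temperature of the CMBR seen by an observer moving at speed v relative to
% the co-moving frame, to first order in beta = v/c:
\[
  T(\theta) \;\approx\; T_0\left(1 + \frac{v}{c}\cos\theta\right)
\]
% Setting v = 0 gives the same temperature (and wavelength) in every
% direction, which is exactly the co-moving condition described above.
```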
But how can galaxies travel faster than light?
The resolution to the second objection comes from the realisation that the prohibition on faster than light (‘superluminal’) travel, precisely stated, is not the same as how it is often popularly described. The popular description is either:
1. an object cannot travel faster than light, or
2. two objects cannot have a relative speed that exceeds the speed of light
Neither of these is strictly correct. They should really be stated as:
1a. There is no inertial reference frame in which an object’s velocity is faster than light, and
2a. Two objects within the same inertial reference frame cannot have a relative speed that exceeds the speed of light
We can immediately see that 2a is not breached by the motion of distant galaxies, because there is no inertial reference frame that contains both us and such a distant galaxy. A reference frame is inertial if spacetime is very close to flat within that frame, and there is too much spacetime curvature between us and a distant galaxy for any reference frame containing both to be flat. This can be understood by comparison with the Earth. It is reasonable to assume the Earth is flat when measuring travelling distances between my house and the local shops. However, if I am planning a trip from my house in Sydney to London, I have to take the curvature of the Earth into account.
Nor is 1a breached by the distant galaxy, because it is not travelling faster than light in any inertial frame containing it, nor relative to any object close enough to it to be in the same inertial frame.
This may all seem a little unsatisfactory, as it leaves a big grey area between the local shops and the distant galaxy, in which we are uncertain how far we can extend an inertial reference frame. Fortunately, we can resolve this by expressing the prohibition on superluminal travel in a more precise way, as follows:
3. No object can have a spacelike four-velocity.
A four-velocity is a vector that can be used to represent an object’s motion, and which is independent of any reference frame (‘coordinate-independent’). Like all vectors, it has a magnitude (‘size’) and a direction, but the direction is in spacetime, not space. In any given reference frame, a four-velocity can be denoted by four numbers, called components, of which one will be a ‘time’ component and the other three will be ‘spatial’ components. The values of the four components will differ between reference frames, but they all refer to the same physical phenomenon, whose magnitude and direction do not differ between reference frames. There is a mathematical formula, involving something called the metric tensor, for determining the (squared) magnitude of any vector. In relativity, that squared magnitude can be positive, negative or zero. A vector with negative squared magnitude can be the velocity of an object with mass, and such vectors are called ‘timelike’. Light rays, which have no mass, must have velocity vectors with zero magnitude, and those vectors are called ‘lightlike’*. Vectors with positive squared magnitude are called ‘spacelike’.
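As a sketch of how that classification works in the simplest setting, here is the magnitude formula in flat spacetime, using the sign convention the paragraph above assumes (timelike vectors have negative squared magnitude):

```latex
% Squared magnitude of a four-vector u in flat spacetime with metric
% signature (-,+,+,+), in units where c = 1:
\[
  g_{\mu\nu}\,u^\mu u^\nu \;=\; -(u^t)^2 + (u^x)^2 + (u^y)^2 + (u^z)^2
\]
% timelike:  g_{mu nu} u^mu u^nu < 0   (four-velocity of a massive object)
% lightlike: g_{mu nu} u^mu u^nu = 0   (light rays; direction still well defined)
% spacelike: g_{mu nu} u^mu u^nu > 0   (not the four-velocity of any object)
```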
Law 3 is the most general and precise statement of the prohibition on superluminal travel. Prohibitions 1a and 2a are consequences of this rule and, given a local inertial reference frame, the popular statements 1 and 2 follow.
In an expanding universe no object, galaxies included, can have a spacelike four-velocity, so the prohibition is respected. It will be the case that the distance between two galaxies far away from one another is increasing at more than 3 × 10^8 m/s – the speed of light – but that does not breach the prohibition.
One last point on this objection. When we refer to distance in that last paragraph, we mean what is called the ‘proper distance’. That means the shortest distance between the two galaxies in the snapshot of the universe taken at an instant of cosmic time. In an expanding universe that shortest distance will be increasing with cosmic time.
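Using the FLRW quantities introduced earlier, the relationship between proper distance and its rate of increase is a one-line calculation. A sketch, taking the spatially flat case k = 0 for simplicity, with chi the fixed co-moving radial coordinate of a distant galaxy:

```latex
% Proper distance at cosmic time t to a galaxy at fixed co-moving
% coordinate chi, and its rate of increase:
\[
  D(t) = a(t)\,\chi
  \qquad\Longrightarrow\qquad
  \dot{D}(t) = \dot{a}(t)\,\chi = \frac{\dot{a}(t)}{a(t)}\,D(t) = H(t)\,D(t)
\]
% This is Hubble's law. For D(t) > c/H(t) the proper distance grows faster
% than light, yet the galaxy's four-velocity remains timelike throughout.
```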
Does the expansion violate conservation of energy?
If galaxies are rushing apart at ever-increasing rates, that appears to increase the net energy of the universe on two counts. Firstly, their kinetic energy is increasing with their increasing relative velocities. Secondly, their gravitational potential energy is also increasing as they get farther apart. This appears to violate the principle of conservation of energy.
The answer to this objection is that conservation of energy only applies locally, within an inertial reference frame. There is no coordinate-independent global equivalent to that local principle. In fact there is not even any global, coordinate-independent definition of total energy. There are approaches using coordinate-dependent pseudo-tensors, but these are controversial, and seem unsatisfying given their coordinate-dependence.
Even so, it appears that these pseudo-tensor approaches can be used to derive a conclusion that the net energy of the universe is zero, and will remain zero. Hence, under such an analysis, there does not appear to be any violation of conservation of energy.
* Note: If you are familiar with vectors in two- and three-dimensional Euclidean space, you might assume that any vector with zero magnitude must be a trivial vector whose components are all zero in every reference frame, and which hence has no direction. While that conclusion is true for Euclidean space, it is not true for spacetime, which is non-Euclidean. Lightlike vectors have zero magnitude but they also have a well-defined direction. They have non-zero components, which cancel one another out when we calculate the magnitude.
In discussions about free will, consciousness or interpretations of quantum mechanics, people talk a great deal about whether the world is deterministic, random or something else. Well, I don’t know what the ‘something else’ could be, but I’m also starting to wonder whether the idea of randomness makes any sense.
The general idea of the distinction between a deterministic and a random universe seems to be that, in the former, events are somehow ‘fixed’ before they occur, whereas in the latter they are not – there are multiple different possible events. That sounds clear enough if you don’t think too hard about it, but if we do think hard we come up against the question of what we mean by ‘fixed’. Fixed how, and by whom?
The apparent answer is that they are fixed by the ‘laws of nature’. But what are the laws of nature? Isn’t this some fairly heavy-duty reifying, to posit that there exist actual laws, perhaps dwelling in some Platonic kingdom of Forms or written on magical stone tablets? Sure, we have useful laws like Schrödinger’s and Einstein’s equations, but all we know for sure is that these are ideas we use to make sense of what we see and to make predictions about the future. Whether they have some mysterious metaphysical existence independent of human minds is an entirely separate matter. It seems quite unlikely to me, given that every century or so we have to tweak the laws we use when further experiments and theorising show they are not completely accurate.
A Platonist might argue that there really are laws of nature out there, which determine how the universe behaves. OK, if that’s the case, then what about the law that is simply a description of where every single particle in the universe will be at every instant in the history of time? This is essentially a very, very long shopping list, but it also happens to perfectly describe the behaviour of the universe. Let’s call it the M-law, as it is the ‘mother of all laws’. Is that a law of nature? If it is, then the universe cannot be random, because everything that ever happens is described by the M-law.
So what are our choices about universe types? If we deny that laws of nature have any existence independent of human brains then everything in the universe is random because there is nothing that fixes it. If we assert the separate existence of laws of nature then nothing is random because it is all predicted by the M-law. Or, we could try and be picky and assert the existence of laws but only count them if they meet certain criteria. But what would such criteria be?
One option is to say that a law only counts if it can be known prior to the event we want to use it to predict. That would certainly disqualify the M-law. The trouble is, it would also disqualify everything else, because we cannot prove any of the laws. All we can do is build up supportive evidence for them, and there is never enough to be certain.
Sure, says the law-enthusiast, you can’t ever be completely certain, but in practice, if we are 99% certain of something, we would consider that good enough. Alright then. That does seem a bit arbitrary, but let’s go with it and see where it leads. What can we say about the motions of the planets prior to the discoveries of Kepler, Galileo and Newton? Back then we knew nothing of the ‘laws’ we now use to describe those motions, so under this criterion those laws didn’t apply, which apparently makes the motions of planets in 1347 CE random. Is that what we want?
‘Ah yes’ replies the enthusiast, ‘but you are limiting yourself to what we managed to work out based on our imperfect interpretation of the available information and our limited ability to make observations. The validity of a law should be based on all the information that was available up to that time, to an omniscient – but not future-seeing – observer, who was able to develop the best possible theories with the available information.’
Well I have to say this is getting even weirder and more implausible! We now have an ideal scientist-observer that is our yardstick for what constitutes a law of nature. If we go with that, and accept some arbitrary threshold of confidence – say 99% – on the validity of a law (leaving aside the very difficult question of how we would try to implement that threshold and whether it would be possible to validly calculate probabilities against it) then maybe we have arrived at a definition that could be pressed into use for laws of nature while excluding the M-law.
But – haven’t we ended up with a definition of randomness that is entirely epistemological? We have effectively defined a random event as one for which our ideal observer could not have 99% confidence beforehand of what the outcome would be. The trouble with that is that we can no longer make the distinction that metaphysicians like to make between epistemological uncertainty over deterministic but chaotic events, such as a coin toss, and ‘genuine’ randomness, such as the decay of a radioactive isotope. With our new definition both types of uncertainty are ‘merely’ epistemological: if we take this path we have to conclude that all randomness is epistemological, and that there is no such thing as metaphysical or ontological randomness.
One last point to wrap up with. I went searching for a mathematical definition of randomness and drew a complete blank. There are definitions of random variable, random (stochastic) process, probability space and various other related objects. But none of them contains anything that captures the idea of ‘metaphysical undeterminedness’ that lurks under the popular conception of randomness. In fact, rather oddly, of the various interpretations of quantum physics, the only one that has close parallels to any of those mathematical objects in the field of probability theory is the ‘many worlds interpretation’, which looks very like the peculiar object that is a ‘stochastic process in continuous time’. That is ironic, as the many-worlds interpretation is regarded as a ‘deterministic’ interpretation, standing in contrast to the most popular ‘Copenhagen interpretation’, which is regarded as indeterministic, i.e. random.
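For the curious, here is the standard measure-theoretic definition of a random variable, which illustrates the point: nothing in it says anything about whether outcomes are metaphysically undetermined:

```latex
% A probability space is a triple (Omega, F, P): a set Omega of possible
% outcomes, a sigma-algebra F of subsets of Omega (the "events"), and a
% measure P with P(Omega) = 1. A random variable is then just a measurable
% function from outcomes to real numbers:
\[
  X : \Omega \to \mathbb{R},
  \qquad
  X^{-1}(B) \in \mathcal{F} \ \text{ for every Borel set } B \subseteq \mathbb{R}
\]
% The machinery assigns weights to sets of possibilities; it is silent on
% whether the outcome "really could have been otherwise".
```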
Andrew Kirk
Bondi Junction, July 2012
A popular argument used these days by those who seek to prove the existence of God is that a handful of key numbers that calibrate the laws of physics appear to be very ‘finely tuned’, as if a designer had specifically chosen the values that they have. ‘Finely tuned’ here means that if the numbers were significantly different, the universe would have been too chaotic for complex structures, intelligent life in particular, to evolve. The suggestion is that this is evidence that the numbers were chosen by some intelligent designer, i.e. God.
This argument may carry weight with some, and if it makes them more secure and happy in their theistic beliefs I wish them much joy from it, as long as they don’t attempt to impose their beliefs on anybody else. But in my mind, far from providing the knockout blow some believe it possesses, it gains no traction at all. An observation like that is principally an observation about how one feels. Sometimes a new idea comes along that makes us think “oh… there could be something in that”, which leads to a questioning of our existing ideas and in some cases a revision of our beliefs. It’s the feeling that starts the process, and the rational inquiry occasioned by the feeling that completes it. But in this case I felt nothing at all, not even surprise about the ‘fine tuning’. Why?
I decided to think about this, given that some people find the constants such a compelling argument, and after a period of reflection I concluded that the ‘fine tuning’ just didn’t seem remarkable to me at all, for three principal reasons, which I discuss below.
The first reason is the possibility of multiple universes. Perhaps there are a very large number, maybe even an infinite number, of universes, each of which has different values of the physical constants. Only a very small proportion of these would be able to support intelligent life, and we find ourselves in such a universe because we could not have been in any other.
To some people, the existence of an enormous number of universes seems highly implausible and the postulation of a divine creator seems more plausible. Some assert that Occam’s Razor favours an explanation that the universe was designed by a god over one with no god but billions of universes. This is entirely a matter of feeling, though, not logic. I cannot rebut their assertion that to them it seems implausible that there should be an enormous number of universes. However, to me it seems entirely plausible. In some of my moods it seems inevitable, almost obvious. At other times it seems less likely. Why should there be only one universe? There’s no answer to that question. It’s like asking “what’s the best number: one, a billion or infinity?”.
The Intelligent Designer hypothesis, proposed as an alternative to multiple universes, seems more plausible to some, but far less plausible to me. Neither hypothesis can be defended or attacked on logical grounds, as we are operating in the territory of fundamental axioms (what seems plausible or self-evident). It’s just a question of which you prefer.
From a mathematical perspective (I always have to have one of those), if you view the number of universes as a random variable drawn from some a priori Poisson distribution, then the argument just becomes one about what you consider to be a reasonable frequency parameter λ for the Poisson distribution. Theists and Deists find an extremely low value of λ most plausible, in which case there would be only one universe, or at most a handful. Atheists are more likely to be comfortable with very large frequency parameters. Again, which you find more plausible is just a matter of inclination and cannot be proved or disproved.
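For concreteness, here is that framing spelled out; the choice of a Poisson prior, and the example values of λ, are of course just illustrative assumptions:

```latex
% Number of universes N modelled as a Poisson random variable with
% frequency parameter lambda:
\[
  P(N = k) \;=\; \frac{\lambda^{k} e^{-\lambda}}{k!}, \qquad k = 0, 1, 2, \ldots
\]
% With lambda = 1:    P(N <= 1) = 2/e, about 0.74, so one universe or none
%                     is the likely outcome.
% With lambda = 10^9: the distribution concentrates near a billion universes.
% The whole disagreement is about lambda, which observation cannot settle.
```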
A deeper reason for the values
But we don’t need multiple universes to explain the constants. We may one day discover deeper scientific truths that explain why the constants have the values they do. To me it seems most likely that this will indeed turn out to be the case, and that these truths will emerge as part of a grand unified theory of everything, or on the way to one. I can imagine that theory explaining why it would be impossible for the constants to have any other values. This is speculation, but again it’s just a question of what appears plausible to you. To me, this seems not only plausible but highly likely. In the absence of evidence as to what is true I like to believe what is beautiful, and to me a universe in which the physical constants had values that were simply brute facts – apparently random, with no deeper explanation – would be ugly. An underlying pattern that explained why they had to have the values they have could be an aesthete’s delight.
What does ‘fine tuning’ mean?
Finally, I’d like to ask: who says the constants are finely tuned? The fine tuning knob on an AM radio might make adjustments of only 10kHz per revolution, whereas the gross tuning knob might adjust by 200kHz per revolution, and the frequency range for the AM spectrum might be about 700kHz, or more than three full turns of the gross tuning knob. (It’s a long while since I’ve had a radio with tuning knobs on, so forgive me if these numbers are not entirely accurate). We give the fine tuning knob its name because it only changes the frequency by a small fraction of the overall range for each turn of the knob.
But what is the range of possible values for the physical constants? If one of them has a value of 5.6321 and values that differ from that by more than 0.0001 in either direction would not allow life to develop then that is fine tuning indeed if the range of possible values is anything from 1 to 10, but it is very coarse tuning if the range of possible values is only 5.6320009 to 5.6321001. So far as I am aware, nobody knows what the range of possible values for these constants is, or even whether there is a range, so how can we say whether they are finely or coarsely tuned?
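To make that concrete with the made-up numbers above: if we measure fineness of tuning as the width of the life-permitting band divided by the width of the range of possible values, the two scenarios give wildly different answers:

```latex
% Life-permitting band: 5.6321 +/- 0.0001, i.e. a width of 0.0002.
\[
  \text{Possible range } [1,\ 10]:\qquad
  \frac{0.0002}{9} \approx 2 \times 10^{-5}
  \quad \text{(very fine tuning)}
\]
\[
  \text{Possible range } [5.6320009,\ 5.6321001]:\qquad
  \frac{0.0002}{0.0000992} \approx 2
  \quad \text{(no tuning at all: every possible value permits life)}
\]
% Without knowing the possible range, the ratio, and hence "fine" versus
% "coarse", is simply undefined.
```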
So, in summary, if the ‘finely tuned constants’ argument makes you feel warm and secure in an enhanced belief in your Intelligent Designer, that’s great. Just don’t mistake that feeling for a logical argument.