## Thursday, November 1, 2012

### General relativity in observer space

Tuesday, Oct 2nd.
Derek Wise, FAU Erlangen
Title: Lifting General Relativity to Observer Space
PDF of the talk (700k) Audio [.wav 34MB], Audio [.aif 3MB].

by Jeffrey Morton, University of Hamburg.

You can read a more technical and precise version of this post at Jeff's own blog.

This talk was based on a project of Steffen Gielen and Derek Wise, which has taken written form in a few papers (two shorter ones, "Spontaneously broken Lorentz symmetry for Hamiltonian gravity", "Linking Covariant and Canonical General Relativity via Local Observers", and a new, longer one called "Lifting General Relativity to Observer Space").

The key idea behind this project is the notion of "observer space": a space of all observers in a given universe. This is easiest to picture when one starts with a space-time.  Mathematically, this is a manifold M with a Lorentzian metric, g, which among other things determines which directions are "timelike" at a given point. Then an observer can be specified by choosing two things. First, a particular point (x0,x1,x2,x3) = x, an event in space-time. Second, a future-directed timelike direction, which is the tangent to the space-time trajectory of a "physical" observer passing through the event x. The space of observers consists of all these choices: what is known as the "future unit tangent bundle of  M". However, using the notion of a "Cartan geometry", one can give a general definition of observer space which makes sense even when there is no underlying space-time manifold.
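In symbols (using the sign convention that timelike vectors square to -1 in signature (-,+,+,+)), the observer space of a space-time (M, g) is the future unit tangent bundle:

```latex
O(M) \;=\; \{\, (x,u) \in TM \;:\; g_x(u,u) = -1,\ u \ \text{future-directed} \,\}.
```

This is a seven-dimensional space: four dimensions of events x, plus the three-dimensional hyperboloid of unit future timelike directions u at each event.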

The result is a surprising, relatively new physical intuition saying that "space-time" is a local and observer-dependent notion, which in some special cases can be extended so that all observers see the same space-time. This appears to be somewhat related to the idea of relativity of locality. More directly, it is geometrically similar to the fact that a slicing of space-time into space and time is not unique, and not respected by the full symmetries of the theory of relativity. Rather, the division between space and time depends on the observer.

So, how is this described mathematically? In particular, what did I mean up there by saying that space-time itself becomes observer-dependent? The answer uses Cartan geometry.

### Cartan Geometry

Roughly, Cartan geometry is to Klein geometry as Riemannian geometry is to Euclidean geometry.

Klein's Erlangen Program, proposed in 1872, systematically brought abstract algebra, and specifically the theory of Lie groups, into geometry, by placing the idea of symmetry in the leading role. It describes "homogeneous spaces" X, which are geometries in which every point is indistinguishable from every other point. This is expressed by an action of some Lie group G, which consists of all transformations of an underlying space which preserve its geometric structure. For n-dimensional Euclidean space En, the symmetry group is precisely the group of transformations that leave the data of Euclidean geometry, namely lengths and angles, invariant. This is the Euclidean group, and is generated by rotations and translations.
But any point x will be fixed by some symmetries and not by others, so there is a subgroup H, the "stabilizer subgroup", consisting of all symmetries which leave x fixed.

The pair (G,H) is all we need to specify a homogeneous space, or Klein geometry. In Euclidean space, for example, a point is fixed by precisely the rotations centered at that point. Klein's insight is to reverse this: we may obtain Euclidean space from the group G itself, essentially by "ignoring" (or more technically, by "modding out") the subgroup H of transformations that leave a particular point fixed. Klein's program lets us do this in general, given a pair (G,H). The advantage of this program is that it gives a great many examples of geometries (including ones previously not known) treated in a unified way. But the most relevant ones for now are:
• n-dimensional Euclidean space, as we just described.
• n-dimensional Minkowski space. The Euclidean group gets  replaced by the Poincaré group, which includes translations and rotations, but also the boosts of special relativity. This is the group of all transformations that fix the geometry determined by the Minkowski metric of flat space-time.
• de Sitter space and anti-de Sitter spaces, which are relevant for studying general relativity with a cosmological constant.
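Each of these model geometries can be written as a quotient G/H of its symmetry group by the stabilizer of a point, in the spirit of Klein's program:

```latex
\mathbb{E}^n \cong \mathrm{ISO}(n)/\mathrm{O}(n), \qquad
\mathbb{R}^{3,1} \cong \mathrm{ISO}(3,1)/\mathrm{O}(3,1), \qquad
\mathrm{dS}_4 \cong \mathrm{SO}(4,1)/\mathrm{SO}(3,1).
```

In each case H is the subgroup fixing a chosen point: rotations (and reflections) for Euclidean space, Lorentz transformations for Minkowski and de Sitter space.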
Just as a Lorentzian or Riemannian manifold is "locally modeled" by Minkowski or Euclidean space respectively, a Cartan geometry is locally modeled by some Klein geometry. Measurements close enough to a given point in the Cartan geometry look similar to those in the Klein geometry.

Since curvature is measured by the development of curves, we can think of each homogeneous space as a flat Cartan geometry with itself as a local model, just as the Minkowski space of special relativity is a particular example of a solution to general relativity.

The idea that the curvature of a manifold depends on the model geometry being used to measure it shows up in the way we apply this geometry to physics.

### Gravity and Cartan Geometry

The MacDowell-Mansouri formulation of gravity can be understood as a theory in which general relativity is modeled by a Cartan geometry. Of course, a standard way of presenting general relativity is in terms of the geometry of a Lorentzian manifold. The Palatini formalism describes general relativity not in terms of a metric, but in terms of a set of vector fields governed by the Palatini equations. This can be derived from a Cartan geometry through the theory of MacDowell-Mansouri, which "breaks the full symmetry" of the geometry at each point, generating the vector fields that arise in the Palatini formalism. So general relativity can be written as the theory of a Cartan geometry modeled on de Sitter space.

### Observer Space

The idea in defining an observer space is to combine two symmetry reductions into one. One has a model Klein geometry, which reflects the "symmetry breaking" that happens when choosing one particular point in space-time, or event. The time directions are tangent vectors to the world-line (space-time trajectory) of a "physical" observer at the chosen event. So the model Klein geometry is the space of such possible observers at a fixed event. The stabilizer subgroup for a point in this space consists of just the rotations of space around the corresponding observer; the boosts among the Lorentz transformations are what relate different observers at the same event. Locally, choosing an observer amounts to a splitting of the model space-time at the point into a product of space and time. If we combine both reductions at once, we get a 7-dimensional Klein geometry that is related to de Sitter space, which we think of as a homogeneous model for the "space of observers".
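The dimension count behind this is simple: the de Sitter group SO(4,1) has dimension 10, and the stabilizer of a single observer (an event together with a unit future time direction) is just the 3-dimensional group SO(3) of spatial rotations about that observer, so

```latex
\dim \bigl( \mathrm{SO}(4,1)/\mathrm{SO}(3) \bigr) \;=\; 10 - 3 \;=\; 7,
```

matching the count of four event coordinates plus three velocity coordinates.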

This may be intuitively surprising: it gives a perfectly concrete geometric model in which "space-time" is relative and observer-dependent, and perhaps only locally meaningful, in just the same way as the distinction between "space" and "time" in general relativity. That is, it may be impossible to determine objectively whether two observers are located at the same base event or not. This is a kind of "relativity of locality" which is geometrically much like the by-now more familiar relativity of simultaneity. Each observer will reach certain conclusions as to which observers share the same base event, but different observers may not agree. The coincident observers according to a given observer are those reached by a good class of geodesics in observer space  moving only in directions that observer sees as boosts.

When one has a certain integrability condition, one can reconstruct a space-time from the observer space: any two observers will agree on whether or not they are at the same event. This is the familiar world of relativity, where simultaneity may be relative, but locality is absolute.

### Lifting Gravity to Observer Space

Apart from describing this model of relative space-time, another motivation for describing observer space is that one can formulate canonical (Hamiltonian) general relativity locally near each point in such an observer space. The goal is to make a link between covariant and canonical quantization of gravity. Covariant quantization treats the geometry of space-time all at once, by means of what is known as a Lagrangian. This is mathematically appealing, since it respects the symmetry of general relativity, namely its diffeomorphism-invariance (or, speaking more physically, that its laws take the same form for all observers). On the other hand, it is remote from the canonical (Hamiltonian) approach to quantization of physical systems, in which the concept of time is fundamental. In the canonical approach, one quantizes the space of states of a system at a given point in time, and the Hamiltonian for the theory describes its evolution. This is problematic for diffeomorphism-, or even Lorentz-invariance, since coordinate time depends on a choice of observer. The point of observer space is that we consider all these choices at once. Describing general relativity in observer space is both covariant, and based on (local) choices of time direction. A "field of observers" is then a choice, at each base event in M, of an observer based at that event. A field of observers may or may not correspond to a particular decomposition of space-time into space evolving in time, but locally, at each point in observer space, it always looks like one. The resulting theory describes the dynamics of space-geometry over time, as seen locally by a given observer, in terms of a Cartan geometry.

This splitting, along the same lines as the one in MacDowell-Mansouri gravity described above, suggests that one could lift general relativity to a theory on an observer space. This amounts to describing fields on observer space and a theory for them, so that the splitting of the fields gives back the usual fields of general relativity on space-time, and the equations give back the usual equations. This part of the project is still under development, but there is indeed a lifting of the equations of general relativity to observer space. This tells us that general relativity can be defined purely in terms of the space of all possible observers, and when there is an objective space-time, the resulting theory looks just like general relativity. In the case when there is no "objective" space-time, the result includes some surprising new fields: whether this is a good or a bad thing is not yet clear.

## Thursday, October 18, 2012

### More on Shape Dynamics

Tim Koslowski, Perimeter Institute
Title: Effective Field Theories for Quantum Gravity from Shape Dynamics
PDF of the talk (0.5Mb) Audio [.wav 31MB], Audio [.aif 3MB].

By Astrid Eichhorn, Perimeter Institute

Gravity and Quantum Physics have resisted unification into a common theory for several decades. We know a lot about the classical nature of gravity, in the form of Einstein's theory of General Relativity, which is a field theory. During the last century, we have learnt how to quantize other field theories, such as the gauge theories in the Standard Model of Particle Physics. The crucial difference between a classical theory and a quantum theory lies in the effect of quantum fluctuations. Due to Heisenberg's uncertainty principle, quantum fields can fluctuate, and this changes the effective dynamics of the field. In a classical field theory, the equations of motion can be derived by minimizing a functional called the classical action. In a quantum field theory, the equations of motion for the mean value of the quantum field cannot be derived from the classical action. Instead, they follow from something called the effective action, which contains the effect of all quantum fluctuations. Mathematically, to incorporate the effect of quantum fluctuations, a procedure known as the path integral has to be performed, which, even within perturbation theory (where one assumes solutions differ little from a known one), is a very challenging task. A method to make this task doable is the so-called (functional) Renormalization Group: Not all quantum fluctuations are taken into account at once, but only those with a specific momentum, usually starting with the high-momentum ones. In a pictorial way, this means that we "average" the quantum fields over small distances (corresponding to the inverse of the large momentum). The effect of the high-momentum fluctuations is then to change the values of the coupling constants in the theory: Thus the couplings are no longer constant, but depend on the momentum scale, and so we more appropriately should call them running couplings.
As an example, consider Quantum Electrodynamics: We know that the classical equations of motion are linear, so there is no interaction between photons. As soon as we go over to the quantum theory, this is different: Quantum fluctuations of the electron field at high momenta induce a photon-photon interaction (however one with a very tiny coupling, so experimentally this effect is difficult to see).

Fig 1: Electron fluctuations induce a non-vanishing photon-photon coupling in Quantum Electrodynamics.

The question of whether a theory can be quantized, i.e., whether the full path integral can be performed, then finds an answer in the behavior of the running couplings: If the effect of quantum fluctuations at high momenta is to make the couplings divergent at some finite momentum scale, the theory is only an effective theory at low energies, but not a fundamental theory. On the technical side this implies that when we perform integrals that take into account the effect of quantum fluctuations, we cannot extend these to arbitrarily high momenta; instead we have to "cut them off" at some scale.

The physical interpretation of such a divergence is that the theory tells us that we are really using effective degrees of freedom, not fundamental ones. As an example, if we construct a theory of the weak interaction between fermions without the W-bosons and the Z-boson, the coupling between the fermions will diverge at a scale which is related to the mass scale of the new bosons. In this manner, the theory lets us know that new degrees of freedom - the W- and Z-boson - have to be included at this momentum scale. One example that we know of a truly fundamental theory, i.e., one where the degrees of freedom are valid up to arbitrarily high momentum scales, is Quantum Chromodynamics. Its essential feature is the ultraviolet-attractive Gaussian fixed point (one that corresponds to a free, non-interacting theory), which is nothing but the statement that the running coupling in QCD weakens towards high momenta, which is called asymptotic freedom (since asymptotically, at high momenta, the theory becomes non-interacting, i.e., free).
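This one-loop behavior is easy to see numerically. The following is a minimal sketch (not part of the talk): the standard one-loop solution of the QCD running, with the textbook coefficient b0 = 11 - 2 nf / 3 and the commonly quoted reference value alpha_s(MZ) ≈ 0.118 at the Z-boson mass:

```python
import math

def alpha_s(mu, mu0=91.19, alpha0=0.118, n_f=5):
    """One-loop running of the QCD coupling.

    Solves d(alpha)/d(ln mu) = -(b0 / 2 pi) * alpha^2, with the
    one-loop coefficient b0 = 11 - 2 n_f / 3 for n_f quark flavors.
    Reference value: alpha_s ~ 0.118 at mu0 = M_Z ~ 91.19 GeV.
    """
    b0 = 11.0 - 2.0 * n_f / 3.0
    return alpha0 / (1.0 + alpha0 * b0 / (2.0 * math.pi) * math.log(mu / mu0))

# Asymptotic freedom: the coupling weakens logarithmically toward high momenta.
for mu in (91.19, 1000.0, 1e6, 1e15):
    print(f"mu = {mu:>9.3g} GeV   alpha_s = {alpha_s(mu):.4f}")
```

The monotonic decrease of alpha_s with the momentum scale mu is exactly the statement that the Gaussian fixed point of QCD is ultraviolet-attractive.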

There is nothing wrong with a theory that is not fundamental in this sense; it simply means that it is an effective theory, which we can only use over a finite range of momenta. This concept is well-known in physics, and used very successfully. For instance, in condensed-matter systems, the effective degrees of freedom are, e.g., phonons, which are collective excitations of an atom lattice, and obviously cease to be a valid description of the system on distance scales below the atomic scale.

Quantum gravity actually exists as an effective quantum field theory, and quantum gravity effects can be calculated, treating the space-time metric as a quantum field like any other. However, being an effective theory means that it will only describe physics over a finite range of scales, and will presumably break down somewhere close to a scale known as the Planck scale (10^-33 centimeters). This implies that we do not understand the microscopic dynamics of gravity. What are the fundamental degrees of freedom which describe quantum gravity beyond the Planck scale, what is their dynamics, and what are the symmetries that govern it?

The question of whether we can arrive at a fundamental quantum theory of gravity within the standard quantum field theory framework boils down to understanding the behavior of the running couplings of the theory. In perturbation theory, the answer has been known for a long time: (in four space-time dimensions) instead of weakening towards high momenta, the Newton coupling increases. More formally, this means that the free fixed point (technically known as the Gaussian fixed point) is not ultraviolet-attractive. For this reason, most researchers in quantum gravity gave up on trying to quantize gravity along the same lines as the gauge theories in the Standard Model of particle physics. They concluded that the metric does not carry the fundamental microscopic degrees of freedom of a continuum theory of quantum gravity, but is only an effective description valid at low energies. However, the fact that the Gaussian fixed point is not ultraviolet-attractive really only means that perturbation theory breaks down. Beyond perturbation theory, there is the possibility to obtain a fundamental quantum field theory of gravity. The arena in which we can understand this possibility is called theory space. This is an (infinite-dimensional) space, which is spanned by all running couplings which are compatible with the symmetries of the theory. So, in the case of gravity, theory space usually contains the Newton coupling, the cosmological constant, couplings of curvature-squared operators, etc. At a certain momentum scale, all these couplings have some value, specifying a point in theory space. Changing the momentum scale, and including the effect of quantum fluctuations on these scales, implies a change in the value of these couplings. Thus, when we change the momentum scale continuously, we flow through theory space on a so-called Renormalization Group (RG) trajectory.
For the couplings to stay finite at all momentum scales, this trajectory should approach a fixed point at high momenta (more exotic possibilities such as limit cycles, or infinitely extendible trajectories, could also exist). At a fixed point, the values of the couplings do not change anymore when further quantum fluctuations are taken into account. Then, we can take the limit of arbitrarily high momentum scale trivially, since nothing changes if we go to higher scales, i.e., the theory is scale invariant. The physical interpretation of this process is that the theory does not break down at any finite scale: The degrees of freedom that we have chosen to parametrize the physical system are valid up to arbitrarily high scales. An example is given by QCD which, as we mentioned, is asymptotically free; the physical interpretation is that quarks and gluons are valid microscopic degrees of freedom. There is no momentum scale at which we need to expect further particles, or a possible substructure of quarks and gluons.

In the case of gravity, to quantize it we need a non-Gaussian fixed point. At such a point, where the couplings are non-vanishing, the RG flow stops, and we can take the limit of arbitrarily high momenta. This idea goes back to Weinberg, and is called asymptotic safety. Asymptotically, at high momenta, we are "safe" from divergences in the couplings, since they approach a fixed point, at which they assume some finite value. Since finite couplings imply finiteness of physical observables (when the couplings are defined appropriately), an asymptotically safe theory gives finite answers to all physical questions. In this construction, the fixed point defines the microscopic theory, i.e., the interaction of the microscopic degrees of freedom.
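A standard toy illustration of this idea (only a caricature: the real gravitational beta functions involve infinitely many couplings) is a single dimensionless Newton coupling g with beta function beta(g) = 2g - c g^2. It has a Gaussian fixed point at g = 0 and a non-Gaussian one at g* = 2/c, and the flow towards higher momenta drives any small positive coupling to the non-Gaussian fixed point:

```python
def flow(g0, c=1.0, t_max=20.0, dt=1e-3):
    """Integrate the toy RG equation dg/dt = 2 g - c g^2 (t = ln mu)
    with simple Euler steps.

    g = 0 is the Gaussian fixed point (UV-repulsive here), while
    g* = 2/c is the non-Gaussian, UV-attractive fixed point at which
    the coupling stays finite: a cartoon of asymptotic safety.
    """
    g = g0
    t = 0.0
    while t < t_max:
        g += (2.0 * g - c * g * g) * dt
        t += dt
    return g

# Different starting couplings all flow to g* = 2/c toward the UV:
for g0 in (0.1, 1.0, 3.0):
    print(f"g(0) = {g0:.1f}  ->  g(UV) = {flow(g0):.4f}")
```

Whether from below (g0 = 0.1) or above (g0 = 3.0), the trajectory settles at the interacting fixed point instead of diverging, which is the "safety" in asymptotic safety.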

As a side remark: being confronted with an infinite-dimensional space of couplings, you might worry about how such a theory can ever be predictive. Note, however, that fixed points come equipped with what is called a critical surface: Only if the RG flow lies within the critical surface of a fixed point will it actually approach the fixed point at high momenta. Therefore a finite-dimensional critical surface means that the theory will only have a finite number of parameters, namely those couplings spanning the critical surface. The low-momentum value of these couplings, which is accessible to measurements, is not fixed by the theory: Any value works, since they all span the critical surface. On the other hand, infinitely many couplings will be fixed by the requirement of being in the critical surface. This automatically implies that we will get infinitely many predictions from the theory (namely the values of all these so-called irrelevant couplings), which we can then (in principle) test in experiments.

Fig 2: A non-Gaussian fixed point has a critical surface, the dimensionality of which corresponds to the number of free parameters of the theory.

Two absolutely crucial ingredients in the search for an asymptotically safe theory of quantum gravity are the specification of the field content and the symmetries of the theory. These determine which running couplings are part of theory space: the couplings of all possible operators that can be constructed from the fundamental fields while respecting the symmetries have to be included. Imposing an additional symmetry on theory space means that some of the couplings will drop out of it. Most importantly, the (non)existence of a fixed point will depend on the choice of symmetries. A well-known example is the choice of a U(1) gauge symmetry (like that in electromagnetism) versus an SU(3) one (like the one in QCD). The latter case gives an asymptotically free theory, the former one does not. Thus the (gauge) symmetries of a system crucially determine its microscopic behavior.

In gravity, there are several classically equivalent versions of the theory (i.e., they admit the same solutions to the equations of motion). A partial list contains standard Einstein gravity with the metric as the fundamental field, Einstein-Cartan gravity, where the metric is exchanged for the vielbein (a set of vectors) and a unimodular version of metric gravity (we will discuss it in a second). The first step in the construction of a quantum field theory of gravity now consists in the choice of theory space. Most importantly, this choice exists in the path-integral framework as well as the Hamiltonian framework. So, in both cases there is a number of classically equivalent formulations of the theory, which differ at the quantum level, and in particular, only some of them might exist as a fundamental theory.

To illustrate that the choice of theory space is really a physical choice, consider the case of unimodular quantum gravity: Here, the metric determinant is restricted to be constant. This implies that the spectrum of quantum fluctuations differs crucially from the non-unimodular version of metric gravity, and most importantly, does not differ just in form, but in its physical content. Accordingly, the evaluation of Feynman diagrams in perturbation theory in both cases will yield different results. In other words, the running couplings in the two theory spaces will exhibit a different behavior, reflected in the existence of fixed points as well as critical exponents, which determine the free parameters of the theory.

This is where the new field of shape dynamics opens up important new possibilities. As explained, theories which classically describe the same dynamics can still have different symmetries. In particular, this actually works for gauge theories, where the symmetry is nothing else but a redundancy of description. Therefore, only a reduced configuration space (the space of all possible configurations of the field) is physical, and along certain directions, the configuration space contains physically redundant configurations. A simple example is given by (quantum) electrodynamics, where the longitudinal vibration mode of the photon is unphysical (in vacuum), since the gauge freedom restricts the photon to have two physical (transverse) polarizations.

One can now imagine how two different theories with different gauge symmetries yield the same physics. The configuration spaces can in fact be different, it is only the values of physical observables on the reduced configuration space that have to agree. This makes a crucial difference for the quantum theory, as it implies different theory spaces, defined by different symmetries, and accordingly different behavior of the running couplings.

Shape dynamics trades part of the four-dimensional, i.e., spacetime symmetries of General Relativity (namely refoliation invariance, i.e., invariance under different choices of spatial slices of the four-dimensional spacetime to become "space") for what is known as local spatial conformal symmetry, which implies local scale invariance of space. This also implies a key difference in the way that spacetime is viewed in the two theories. Whereas spacetime is one unified entity in General Relativity, shape dynamics builds up a spacetime from "stacking" spatial slices (for more details, see the blog entry by Julian Barbour). Fixing a particular gauge in each of the two formulations then yields two equivalent theories.

Although the two theories are classically equivalent for observational purposes, their quantized versions will differ. In particular, only one of them might admit a UV completion as a  quantum field theory with the help of a non-Gaussian fixed point.

A second possibility is that both theory spaces might admit the existence of a non-Gaussian fixed point, but what is known as the universality class might be different: Loosely speaking, the universality class is determined by the rate of approach to the fixed point, which is captured by what are known as the critical exponents. Most importantly, while details of RG trajectories typically depend on the details of the regularization scheme (this specifies how exactly quantum fluctuations in the path integral are integrated out), the critical exponents are universal. The full collection of critical exponents of a fixed point then determines the universality class. Universality classes are determined by symmetries, which is very well-known from second-order phase transitions in thermodynamics. Since the correlation length in the vicinity of a second-order phase transition diverges, the microscopic details of different physical systems do not matter: The behavior of physical observables in the vicinity of the phase transition is determined purely by the field content, dimensionality, and symmetries of a system.

Different universality classes can differ in the number of relevant couplings, and thus correspond to theories with a different "amount of predictivity". Thus classically equivalent theories, when quantized, can have a different number of free parameters. Accordingly, not all universality classes will be compatible with observations, and the choice of theory space for gravity is thus crucial to identify which universality class might be "realized in nature".

Clearly, the canonical quantization of standard General Relativity in contrast to shape dynamics will also differ, since shape dynamics actually has a non-trivial, albeit non-local, Hamiltonian.

Finally, what is known as doubly General Relativity is the last step in the new construction. Starting from the symmetries of shape dynamics, one can discover a hidden BRST symmetry in General Relativity. BRST symmetries are symmetries existing in gauge-fixed path integrals for gauge theories. Doing perturbation theory requires the gauge to be fixed, thus yielding a path-integral action which is not gauge invariant. The remnants of gauge invariance are encoded in BRST invariance, so it can be viewed as the quantum version of a gauge symmetry.

In the case of gravity, BRST invariance connected to gauge invariance under diffeomorphisms of general relativity is supplemented by BRST invariance connected to local conformal invariance. This is what is referred to as symmetry doubling. Since gauge symmetries restrict the Renormalization Group flow in a theory space, the discovery of a new BRST symmetry in General Relativity is crucial to fully understand the possible existence of a fixed point and its universality class. Thus the newly discovered BRST invariance might turn out to be a crucial ingredient in constructing a quantum theory of gravity.

## Monday, March 26, 2012

### Bianchi models in loop quantum cosmology

by Edward Wilson-Ewing, Marseille.

Parampreet Singh, LSU
Title: Physics of Bianchi models in LQC
PDF of the talk (500KB)
Audio [.wav 40MB], Audio [.aif 4MB].

The word singularity, in physics, is often used to denote a prediction that some observable quantity should be singular, or infinite. One of the most famous examples in the history of physics appears in the Rayleigh-Jeans distribution, which attempts to describe the thermal radiation of a black body in classical electromagnetic theory. While the Rayleigh-Jeans distribution describes black body radiation very well for long wavelengths, it does not agree with observations at short wavelengths. In fact, the Rayleigh-Jeans distribution becomes singular at very short wavelengths as it predicts that there should be an infinite amount of energy radiated in this part of the spectrum: this singularity, which did not agree with experiment, was called the ultraviolet catastrophe.

This singularity was later resolved by Planck when he discovered what is called Planck's law, which is now understood to come from quantum physics. In essence, the discreteness of the energy levels of the black body ensures that the black body radiation spectrum remains finite for all wavelengths. One of the lessons to be learnt from this example is that singularities are not physical: in the Rayleigh-Jeans law, the prediction that there should be an infinite amount of energy radiated at short wavelengths is incorrect and indicates that the theory that led to this prediction cannot be trusted to describe this phenomenon. In this case, it is the classical theory of electromagnetism that fails to describe black body radiation, and it turns out that it is necessary to use quantum mechanics in order to obtain the correct result.

In the figure below, we see that for a black body at a temperature of 5000 degrees Kelvin, the Rayleigh-Jeans formula works very well for wavelengths greater than 3000 nanometers, but fails for shorter wavelengths. For these shorter wavelengths, it is necessary to use Planck's law where quantum effects have been included.
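The comparison in the figure can be reproduced directly from the two formulas. A short sketch: the Rayleigh-Jeans spectral radiance B(lam, T) = 2 c k T / lam^4 grows without bound as lam -> 0, while Planck's law B(lam, T) = (2 h c^2 / lam^5) / (exp(h c / lam k T) - 1) remains finite everywhere and approaches Rayleigh-Jeans at long wavelengths:

```python
import math

# Physical constants (SI units)
h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light, m / s
kB = 1.380649e-23    # Boltzmann constant, J / K

def rayleigh_jeans(lam, T):
    """Classical spectral radiance: diverges as lam -> 0 (UV catastrophe)."""
    return 2.0 * c * kB * T / lam**4

def planck(lam, T):
    """Planck's law: finite at all wavelengths."""
    return (2.0 * h * c**2 / lam**5) / math.expm1(h * c / (lam * kB * T))

T = 5000.0  # black body at 5000 degrees Kelvin, as in the figure
for lam_nm in (100.0, 500.0, 3000.0, 10000.0):
    lam = lam_nm * 1e-9  # convert nanometers to meters
    print(f"{lam_nm:>7.0f} nm   RJ = {rayleigh_jeans(lam, T):.3e}   "
          f"Planck = {planck(lam, T):.3e}")
```

At 100 nm the classical formula overshoots the Planck result by many orders of magnitude, while in the far infrared the two curves nearly coincide, which is exactly the pattern in the figure.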


There are also singularities in other theories. Some of the most striking examples of singularities in physics occur in general relativity where the curvature of space-time, which encodes the strength of the gravitational field, diverges and becomes infinite. Some of the best known examples are the big-bang singularity that occurs in cosmological models and the black hole singularity that is found inside the event horizon of every black hole. While some people have argued that the big-bang singularity represents the beginning of time and space, it seems more reasonable that the singularity indicates that the theory of general relativity cannot be trusted when the space-time curvature becomes very large and that quantum effects cannot be ignored: it is necessary to use a theory of quantum gravity in order to study the very early universe (where general relativity says the big bang occurs) and the center of black holes.

In loop quantum cosmology (LQC), simple models of the early universe are studied by using the techniques of the theory of loop quantum gravity. The simplest such model (and therefore the first to be studied) is called the flat Friedmann-Lemaitre-Robertson-Walker (FLRW) space-time. This space-time is homogeneous (the universe looks the same no matter where you are in it), isotropic (the universe is expanding at the same rate in all directions), and spatially flat (the two other possibilities are closed and open models, which have also been studied in LQC), and is considered to provide a good approximation to the large-scale dynamics of the universe we live in. In LQC, it has been possible to study how quantum geometry effects become important in the FLRW model when the space-time curvature becomes so large that it is comparable to one divided by the Planck length squared. A careful analysis shows that the quantum geometry effects provide a repulsive force that causes a “bounce” and ensures that the singularity predicted in general relativity does not occur in LQC. We will make this more precise in the next two paragraphs.

By measuring the rate of expansion of the universe today, it is possible to use the FLRW model to determine what the size and the space-time curvature of the universe were in the past. Of course, these predictions necessarily depend on the theory used: general relativity and LQC will not always give the same predictions. General relativity predicts that, as we go back further in time, the universe becomes smaller and smaller and the space-time curvature becomes larger and larger. This continues until, around 13.75 billion years ago, the universe has zero volume and infinite space-time curvature. This is called the big bang.

In LQC, the picture is not the same. So long as the space-time curvature is considerably smaller than one divided by the Planck length squared, LQC makes the same predictions as general relativity. Thus, as we go back further in time, the universe becomes smaller and the space-time curvature becomes larger. However, there are important differences when the space-time curvature nears the critical value of one divided by the Planck length squared: in this regime there are major modifications to the evolution of the universe that come from quantum geometry effects. Instead of continuing to contract as in general relativity, the universe slows its contraction before starting to become bigger again. This is called the bounce. After the bounce, as we continue to go further back in time, the universe becomes bigger and bigger and the space-time curvature becomes smaller and smaller. Therefore, since the space-time curvature never diverges, there is no singularity.
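The bounce can be seen explicitly in the effective equations of LQC, where the Friedmann equation picks up a quantum correction, H² = (8πG/3)ρ(1 − ρ/ρ_c), whose right-hand side vanishes when the density ρ reaches the critical density ρ_c. As a rough numerical sketch (in Planck units, with the commonly quoted value ρ_c ≈ 0.41 and a massless scalar field as the matter source, so this is an illustration of the effective dynamics rather than the full quantum theory):

```python
import numpy as np

# Planck units (G = hbar = c = 1); rho_c ~ 0.41 is the commonly quoted
# critical density of effective LQC -- an assumed value for illustration.
G = 1.0
rho_c = 0.41
k = 24 * np.pi * G * rho_c

# exact solution of the effective equations for a massless scalar field,
# with the bounce placed at t = 0 and a(0) = 1
t = np.linspace(-2.0, 2.0, 40001)
rho = rho_c / (1 + k * t**2)        # energy density
a = (1 + k * t**2) ** (1.0 / 6.0)   # scale factor

# the density never exceeds rho_c, and a(t) has a minimum (never a = 0)
assert rho.max() <= rho_c + 1e-12
assert abs(a.min() - 1.0) < 1e-12

# verify the modified Friedmann equation H^2 = (8 pi G/3) rho (1 - rho/rho_c)
H = np.gradient(a, t) / a
lhs, rhs = H**2, (8 * np.pi * G / 3) * rho * (1 - rho / rho_c)
assert np.allclose(lhs[5:-5], rhs[5:-5], atol=1e-4)
print("max density / rho_c =", rho.max() / rho_c)   # stays at 1, not infinity
```

The checks confirm that the density saturates at ρ_c and the scale factor reaches a minimum (the bounce) instead of shrinking to zero; in pure general relativity (ρ_c → ∞) the same evolution would run all the way to a = 0.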


In loop quantum cosmology, we see that the big bang singularity in FLRW models is avoided due to quantum effects. This is analogous to what happened in the theory of black body radiation: the classical theory predicted a divergence that was resolved once quantum effects were included.
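The black-body analogy can be made quantitative: the classical Rayleigh-Jeans law grows without bound at high frequency (the "ultraviolet catastrophe"), while Planck's quantum formula stays finite. A small sketch, evaluated at the measured CMB temperature of about 2.7 K:

```python
import numpy as np

h = 6.62607e-34    # Planck constant, J s
kB = 1.38065e-23   # Boltzmann constant, J/K
c = 2.99792e8      # speed of light, m/s
T = 2.725          # measured CMB temperature, K

nu = np.geomspace(1e9, 1e13, 2000)               # frequencies, Hz

rayleigh_jeans = 2 * nu**2 * kB * T / c**2       # classical: grows forever
planck = (2 * h * nu**3 / c**2) / np.expm1(h * nu / (kB * T))  # quantum: finite

# the quantum spectrum peaks near 2.821 kB T / h ~ 160 GHz (microwaves)
nu_peak = nu[np.argmax(planck)]
print(f"Planck spectrum peaks near {nu_peak / 1e9:.0f} GHz")
```

The peak near 160 GHz is why the CMB is a *microwave* background, while the unregulated classical formula would keep growing at ever higher frequencies.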

This observation raises an important question: does LQC resolve all of the singularities that appear in cosmological models in general relativity? This is a complicated question, as there are many types of cosmological models and also many different types of singularities. In this talk, Parampreet Singh explains what happens to many different types of singularities in the so-called Bianchi models, which are homogeneous but anisotropic (each point in the universe is equivalent, but the universe may expand at different rates in different directions). The main result of the talk is that all “strong” singularities in the Bianchi models are resolved in LQC.

## Monday, February 13, 2012

### Inhomogeneous loop quantum cosmology

by David Brizuela, Albert Einstein Institute, Golm, Germany.

William Nelson, Penn State
Title: Inhomogeneous loop quantum cosmology
PDF of the talk (500k)
Audio [.wav 32MB], Audio [.aif 3MB].

William Nelson's talk is a follow-up to the work presented by Iván Agulló a few months ago in this seminar series, about their joint work in collaboration with Abhay Ashtekar. Iván's talk was reviewed on this blog by Edward Wilson-Ewing, so the reader is referred to that entry for completeness. Even though some material will overlap with that post, here I will try to focus on other aspects of this research.

Due to the finiteness of the speed of light, when we look at a distant point, like a star, we are seeing that point as it was in the past. For our regular daily distances this fact hardly affects anything, but over larger distances the effect is noticeable even for our slow-motion human senses. For instance, the sun is 8 light-minutes away from us, so if it were suddenly switched off we would be able to do a fair number of things before finding ourselves in complete darkness. Over cosmological distances this fact can be really amazing: we can see far back into the past! But how far? Can we really see the instant of creation?

The light rays that were emitted during the initial moments of the universe and that arrive at Earth today form our particle horizon, which defines the border of our observable universe. As a side remark, note that the complete universe could be larger (even infinite) than the observable one, but this is not necessarily so. We could be living in a universe with compact topology (like the surface of a balloon), and the light emitted from a distant galaxy would then reach us from different directions: one ray directly, and another after traveling around the whole universe. Thus, what we consider different galaxies could be copies of the same galaxy in different stages of its evolution. In fact, we could even see the solar system in a previous epoch!

Since the universe has existed for a finite amount of time (around 14 billion years), the first guess would be that the particle horizon is at that distance: 14 billion light-years. But this is not true, mainly for two reasons. On the one hand, our universe is expanding, so the sources of the light rays that were emitted during the initial moments of the universe are now farther away, around 46 billion light-years from us. On the other hand, at the beginning of the universe, the temperature was so high that atoms, or even neutrons and protons, could not form in a stable way. The state of matter was a plasma of free elementary particles, in which photons interacted very easily. The mean free path of a photon was extremely short, since it was almost immediately absorbed by some particle. In consequence, the universe was opaque to light, so none of the photons emitted at that epoch could make its way to us. The universe became transparent around 380,000 years after the Big Bang, in the so-called recombination epoch (when hydrogen atoms started to form), and the photons emitted at that time form what is known as the Cosmic Microwave Background (CMB) radiation. This is the closest event to the Big Bang that we can nowadays observe with our telescopes. In principle, depending on the technology, in the future we might also be able to detect the neutrino and gravitational-wave backgrounds. These were released before the CMB photons, since both neutrinos and gravitational waves could travel through the aforementioned plasma without much interaction. The CMB has been explored by very sophisticated satellites, like WMAP, and we know that it is highly homogeneous. It has an almost perfect black body spectrum, peaked at a microwave frequency corresponding to a temperature of 2.7 K. The tiny inhomogeneities that we observe in the CMB are understood as the seeds of the large-scale structure of our current universe.
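The 46-billion-light-year figure can be reproduced with a short calculation: the comoving distance to the particle horizon is c ∫ dt/a(t), which in a flat FLRW model becomes (c/H0) ∫ da / (a² E(a)), where E(a) = H(a)/H0. A sketch using assumed Planck-like parameter values (the exact answer depends on the cosmological parameters chosen):

```python
import numpy as np

# assumed flat LambdaCDM parameters (roughly Planck-satellite values)
H0 = 67.7                          # Hubble constant, km/s/Mpc
c = 299792.458                     # speed of light, km/s
Om, Orad, OL = 0.31, 9e-5, 0.69    # matter, radiation, dark energy fractions

# comoving particle horizon: D = (c/H0) * integral_0^1 da / (a^2 E(a))
a = np.geomspace(1e-10, 1.0, 200001)
E = np.sqrt(Om / a**3 + Orad / a**4 + OL)
g = a * (1.0 / (a**2 * E))         # integrand rewritten for d(ln a)

lna = np.log(a)
integral = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(lna))   # trapezoidal rule
D_mpc = c / H0 * integral
D_gly = D_mpc * 3.2616e-3          # Mpc -> billions of light-years
print(f"particle horizon ~ {D_gly:.0f} billion light-years")
```

The radiation term regularizes the integrand as a → 0, so the integral converges, and the result comes out near the 46 billion light-years quoted above.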

Furthermore, the CMB is one of the few places where one could look for quantum gravity effects, since the conditions of the universe during its initial moments were very extreme. The temperature was so high that the energies of interaction between particles were much larger than we could achieve with any accelerator. But we have seen that the CMB photons we observe were emitted quite a while after the Big Bang, around 380,000 years later. Cosmologically, this time is insignificant. (If we make an analogy and think of the universe as a middle-aged 50-year-old person, this would correspond to 12 hours.) Nevertheless, by that time the universe had already cooled down and the curvature was low enough that, in principle, Einstein's classical equations of general relativity should be a very good approximation to its evolution at this stage. Why, then, do we think it might be possible to observe quantum gravity effects in the CMB? At this point, the inflationary scenario enters the game. According to the standard cosmological model, around 10^(-36) seconds after the Big Bang, the universe underwent an inflationary phase which produced an enormous increase of its size. In a very short instant of time (a few times 10^(-32) seconds) the volume was multiplied by a factor of 10^78. Think for a while about the incredible size of that number: a regular bedroom would be expanded to the size of the observable universe!
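A volume factor of 10^78 corresponds to a linear stretch of 10^26 in each direction, or about 60 "e-folds" (the number N = ln(a_end/a_start) that inflationary cosmologists usually quote). A quick check, with the bedroom size as an assumed round number:

```python
import math

volume_factor = 1e78                       # quoted growth in volume
linear_factor = volume_factor ** (1 / 3)   # growth in each direction, ~1e26
N = math.log(linear_factor)                # e-folds: N = ln(a_end/a_start)

# a ~3 m bedroom stretched by 1e26 reaches ~3e26 m, the same order of
# magnitude as the ~4e26 m radius of the observable universe
bedroom_m = 3.0 * linear_factor
print(f"N ~ {N:.0f} e-folds, bedroom -> {bedroom_m:.1e} m")
```

This N ≈ 60 is the canonical figure quoted for the minimum amount of inflation needed to solve the horizon problem discussed below.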

This inflationary mechanism was introduced by Alan Guth in the 1980s in order to address several conceptual issues about the early universe, for instance why our universe (and in particular the CMB) is so homogeneous. Note that the CMB is composed of points that are very far apart and that, in a model without inflation, could not have had any kind of interaction or information exchange during the whole history of the universe. On the contrary, according to the inflationary theory, all these points were close together at some time in the past, which would have allowed them to reach thermal equilibrium. Furthermore, inflation has been a tremendous success and has proved to be much more useful than originally expected. Within this framework, the observed values of the small inhomogeneities of the CMB are reproduced with high accuracy. Let us see in more detail how this result is achieved.

In the usual inflationary models, one posits the existence in the early universe of a scalar field (called the inflaton). The inflaton is assumed to have a very large but flat potential. During the inflationary epoch it slowly loses potential energy (or, as it is usually put, it slowly rolls down its potential), and this produces the exponential expansion of the universe. At the end of this process the inflaton's potential energy is still quite large. Since nowadays we do not observe the presence of such a field, it is argued that after inflation, during the so-called reheating process, all this potential energy is converted into "regular" (Standard Model) particles, even though this process is not yet well understood.

It is also usually assumed that at the onset of inflation the quantum fluctuations of the inflaton (and of the different quantities that describe the geometry of the universe) were in a vacuum state. This quantum vacuum is not a static and simple object, as one might think a priori. On the contrary, it is a very dynamical and complex entity. Due to the Heisenberg uncertainty principle, the laws of physics (like the conservation of energy) can be violated during short instants of time. This is well known in regular quantum field theory, and it happens essentially because nature does not allow any observation to be performed during such a short time. Therefore, in the quantum vacuum there is a constant creation of virtual particles that, under regular conditions, are annihilated before they can be observed. Nevertheless, the expansion of the universe turns these virtual particles into real entities. Intuitively, one can think that a virtual particle and its corresponding antiparticle are created but, before they can interact again and disappear, the inflationary expansion of the universe pulls them so far apart that the interaction is no longer possible. These tiny initial quantum fluctuations, amplified through the process of inflation, then produce the CMB inhomogeneities we observe. Thus, inflation is a kind of magnifying glass that gives us experimental access to processes that happened at extremely short scales, and hence large energies, where quantum gravity effects might be significant.

On the other hand, loop quantum cosmology (LQC) is a quantum theory of gravity that describes the evolution of our universe under the usual assumptions of homogeneity and isotropy. The predictions of LQC coincide with those of general relativity in small-curvature regions, which covers the whole history of the universe except for its initial moments. According to general relativity, the beginning of the universe happened at the Big Bang, which is quite a misleading name: the Big Bang has nothing to do with an explosion; it is an abrupt event in which the whole space-time continuum came into existence. Technically, this point is called a singularity, where different objects describing the curvature of space-time diverge. Thus general relativity cannot be applied there and, as is often asserted, the theory contains the seeds of its own destruction. LQC smooths out this singularity by including quantum gravity effects, and the Big Bang is replaced by a quantum bounce (the so-called Big Bounce). According to the new paradigm, the universe already existed before the Big Bounce as a classical collapsing universe. When the energy density became too large, it entered a highly quantum regime, where the quantum gravity effects come with the correct sign so that gravity becomes repulsive. This caused the universe to bounce, and the expansion we currently observe began. The aim of Will's talk is to study the inflationary scenario in the context of LQC and obtain its predictions for the CMB inhomogeneities. In fact, Abhay Ashtekar and David Sloan already showed that inflation is natural in LQC, meaning that it is not necessary to choose very particular initial conditions in order to get an inflationary phase. But there are still several questions to be addressed, in particular whether there might be observable effects due to the pre-inflationary evolution of the universe.

As we have already mentioned, in the usual cosmological models the initial state is taken to be the so-called Bunch-Davies vacuum at the onset of inflation. This choice of time might be quite arbitrary. The natural point at which to choose initial conditions would be the Big Bang, but this is not feasible since it is a singular point where the equations of motion are no longer valid. In any case, the widespread view has been that, even if there were some particles present at the onset of inflation, the huge expansion of the universe would dilute them, and thus the final profile of the CMB would not be affected. Nevertheless, Iván Agulló and Leonard Parker recently showed that the presence of such initial particles does matter for the final result, since it causes the so-called stimulated emission of quanta: initial particles produce more particles, which themselves produce more particles, and so on. In fact, this is the same process on which today's widely used laser devices are based. Contrary to the usual models based on general relativity, LQC offers a special point where suitable initial conditions can be chosen: the Big Bounce. Thus, in this research, the corresponding vacuum state is chosen at that time. The preliminary results presented in the talk seem quite promising. The simplest initial state is consistent with the observational data but, at the same time, it slightly differs from the CMB spectrum obtained within the previous models. These results have been obtained under certain technical approximations, so the next step of the research will be to understand whether this deviation is really physical. If so, this could provide a direct observational test for LQC that would teach us invaluable lessons about the deep quantum regime of the early universe.