Tuesday, Nov 27th, 2012
Bianca Dittrich, Perimeter Institute
Title: Coarse graining: towards a cylindrically consistent dynamics
PDF of the talk (14MB) Audio [.wav 41MB] Audio [.aif 4MB]
by Frank Hellmann
Coarse graining is a procedure from statistical physics. In most situations we do not know how all the constituents of a system behave. Instead we only get a very coarse picture. Rather than knowing how all the atoms in the air around us move, we are typically only aware of a few rough properties, such as pressure and temperature. Indeed, it is hard to imagine a situation where one would care about the location of this or that atom in a gas made of 10^23 atoms. Thus when we speak of trying to find a coarse-grained description of a model, we mean that we want to discard the irrelevant detail and find out how the model would actually appear to us.
The technical way in which this is done was developed by Kadanoff and Wilson. Given a system made up of simple constituents, Kadanoff's idea was to take a set of nearby constituents and combine them back into a single such constituent, only now larger. In a second step we can then scale down the entire system and compare the behavior of this new, coarse-grained constituent to that of the original ones. If certain behaviors grow stronger under such a step we call them relevant; if they grow weaker we call them irrelevant. Indeed, as we build ever coarser descriptions of our system, eventually only the relevant behaviors survive.
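To make the block-spin idea concrete, here is a minimal sketch in Python for a toy 2D Ising-like lattice. The lattice size, block size and the majority rule for combining spins are illustrative choices of mine, not anything specific to the talk.

```python
# A minimal sketch of Kadanoff block-spin coarse graining on a 2D
# Ising-like spin configuration (majority rule, ties broken randomly).
import numpy as np

rng = np.random.default_rng(0)
L, b = 8, 2                          # lattice side, block side (toy values)
spins = rng.choice([-1, 1], size=(L, L))

def block_spin(config, b):
    """Combine each b x b block of spins into one coarse spin."""
    n = config.shape[0] // b
    # Sum the spins inside each b x b block.
    blocks = config.reshape(n, b, n, b).sum(axis=(1, 3))
    # Majority rule; break ties randomly so the coarse spin is always +-1.
    ties = blocks == 0
    blocks[ties] = rng.choice([-1, 1], size=int(ties.sum()))
    return np.sign(blocks)

coarse = block_spin(spins, b)        # an (L/b) x (L/b) lattice of coarse spins
print(coarse.shape)                  # (4, 4)
```

Iterating this map, and tracking how the couplings of the model change from one iteration to the next, is what separates the relevant behaviors from the irrelevant ones.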
In spin foam gravity we are facing this very problem. We want to build a theory of quantum gravity, that is, a theory that describes how space and time behave at the most fundamental level. We know very precisely how gravity appears to us; every observation of it we have made is described by Einstein's theory of general relativity. Thus in order to be a viable candidate for a theory of quantum gravity, it is crucial that the coarse-grained theory looks, at least in the cases that we have tested, like general relativity.
The problem we face is that coarse graining usually deals with small and large blocks sitting in space, but in spin foam models it is space-time itself that is built up of blocks, and these do not have a predefined size; they can be large or small in their own right. Further, we cannot handle the complexity of calculating with so many blocks of space-time. The usual tools, approximations and concepts of coarse graining do not apply directly to spin foams.
To me this constitutes the most important question facing the spin foam approach to quantum gravity. We have to make sure, or, as is often the case in this game, at least give evidence, that we get the known physics right before we can speak of having a plausible candidate for quantum gravity. So far most of our evidence comes from looking at individual blocks of space-time, and we see that their behaviour really makes sense geometrically. But as we have not yet seen any such blocks of space-time floating around in the universe, we need to investigate the coarse graining to understand how a large number of them would look collectively. The hope is that the smooth space-time we see arises as an approximation to a large number of discrete blocks, much as the smooth surface of water arises out of a large number of atoms.
Dittrich's work tries to address this question. This requires bringing over, or reinventing in the new context, a lot of tools from statistical physics. The first question is: how does one actually combine different blocks of spin foam into one larger block? And given a way to do that, can we understand how the result effectively behaves?
The particular tool of choice that Dittrich is using is called tensor network renormalization. In this scheme, the coarse graining is done by looking directly at which aspects of the original set of blocks are most relevant to the dynamics, and then keeping only those. It thus combines the two steps, first coarse graining and then looking for relevant operators, into a single one.
To get more technical, the idea is to consider maps from the boundary of a coarser lattice into that of a finer one. Mapping the dynamics of the fine variables back then provides the effective dynamics of the coarser ones. If the maps satisfy so-called cylindrical consistency conditions, that is, if they can be iterated, they can also be used to define a continuum limit.
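Schematically, and in notation of my own choosing rather than that of the talk, one can write these conditions as follows:

```latex
% Embedding maps from a coarser boundary b into a finer one b' must
% compose consistently under iterated refinement:
\iota_{b'' b'} \circ \iota_{b' b} \;=\; \iota_{b'' b}
\qquad \text{for } b \prec b' \prec b'' .
% An amplitude is then cylindrically consistent if refining a boundary
% state does not change its value:
\mathcal{A}_{b'}\bigl(\iota_{b' b}\,\psi\bigr) \;=\; \mathcal{A}_{b}(\psi) .
```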
In the classical case, the behaviour of the theory as a function of the boundary values is encoded in what is known as Hamilton's principal function. Studying the flow of the theory under such maps is then mostly useful for improving the discretizations of continuum systems used in numerical simulations.
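For reference, Hamilton's principal function is simply the action evaluated on the solution of the equations of motion with the boundary values held fixed; this is the standard definition, not anything particular to the talk:

```latex
% Hamilton's principal function: the action on the classical solution
% with fixed boundary data.
S(q_0, q_1) \;=\; \int_{t_0}^{t_1} L\bigl(q(t), \dot{q}(t)\bigr)\,\mathrm{d}t ,
\qquad q(t_0) = q_0 ,\quad q(t_1) = q_1 ,\quad q \text{ a solution}.
```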
In the quantum case, the principal function is replaced by the usual amplitude map. The pullback of the amplitude under this embedding then gives a renormalization prescription for the dynamics. Here Dittrich proposes to adapt the idea from condensed matter theory mentioned above, tensor network renormalization.
In order to select which degrees of freedom to map from the coarse boundary to the fine one, the idea is to evaluate the amplitude, diagonalize it, and keep only the eigenstates corresponding to the n largest eigenvalues.
At each step one then obtains a refined dynamics that does not grow in complexity, and one can iterate the procedure to obtain effective dynamics for very coarse variables that have been picked out by the theory itself, rather than by an initial choice of scale and a split into high- and low-energy modes.
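As a rough illustration of such a truncation step, here is a minimal numpy sketch. The random matrix standing in for the evaluated amplitude, the dimensions, and the use of a singular value decomposition to order the boundary states are my own illustrative choices, not the specific scheme presented in the talk.

```python
# A minimal sketch of the truncation step in a tensor network
# renormalization scheme: view the amplitude of a composite block as a
# matrix between two halves of its boundary, and keep only the states
# belonging to the n largest singular values.
import numpy as np

rng = np.random.default_rng(1)
dim_boundary, n_keep = 64, 8

# Stand-in for the evaluated amplitude of a composite block.
amplitude = rng.standard_normal((dim_boundary, dim_boundary))

# The SVD orders boundary states by how much they contribute to the
# dynamics, playing the role of the diagonalization described above.
u, s, vt = np.linalg.svd(amplitude)

# The embedding map keeps only the n_keep most relevant states ...
embedding = u[:, :n_keep]                      # dim_boundary x n_keep

# ... and the effective coarse amplitude lives on that truncated space.
coarse_amplitude = embedding.T @ amplitude @ vt[:n_keep].T
print(coarse_amplitude.shape)                  # (8, 8)
```

Because the truncated space has a fixed dimension, iterating this step keeps the complexity of the effective dynamics bounded, which is what makes the procedure repeatable.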
It is too early to say whether these methods will allow us to understand whether spin foams reproduce what we know about gravity, but they have already produced a whole host of new approximations and insights into how these types of models work, and how they can behave for large numbers of building blocks.