International Loop Quantum Gravity Seminar<a href="http://ilqgse.blogspot.com">En Español</a>
<p>
The International Loop Quantum Gravity Seminar is held every two weeks via teleconference among the main research groups in loop quantum gravity. Slides are distributed in advance and audio posted after the seminar at the Seminar's website.
This blog presents summaries, for the general public, of the content of the seminars.</p><span style="background-color: white;">Tuesday, Apr 4th</span><br /><b>Parampreet Singh, LSU</b><br /><b>Title: Transition times through the black hole bounce </b><br /><a href="http://relativity.phys.lsu.edu/ilqgs/singh040417.pdf">PDF</a><span style="background-color: white;"> of the talk (2M)</span><br /><a href="http://relativity.phys.lsu.edu/ilqgs/singh040417.mp4">Audio+Slides</a><span style="background-color: white;"> [.mp4 18MB]</span><br /><span style="background-color: white;"><br /></span><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-RoMaXdHqZYQ/WQNwkMR7ROI/AAAAAAAAJWs/BE_i6AfsUxw9vhWUqe0Rs3HIGQQmr5G0gCLcB/s1600/param2.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="320" src="https://4.bp.blogspot.com/-RoMaXdHqZYQ/WQNwkMR7ROI/AAAAAAAAJWs/BE_i6AfsUxw9vhWUqe0Rs3HIGQQmr5G0gCLcB/s320/param2.jpg" width="304" /></a></div><span style="background-color: white;">by Gaurav Khanna, University of Massachusetts Dartmouth</span><br /><span style="background-color: white;"><br /></span><span style="background-color: white;"><br /></span> Loop quantum cosmology (LQC) is an application of loop quantum gravity to spacetimes with a high degree of symmetry (e.g. homogeneity, isotropy). One of the main successes of LQC is the resolution of the "singularities" that generically appear in the classical theory. 
An example is the "big bang" singularity that causes a complete breakdown of general relativity (GR) in the very early universe. Models studied within the framework of LQC replace this "big bang" with a "big bounce" and do not suffer the singular breakdown of the classical theory. <br /><br /><br />It is, therefore, natural to consider applying similar techniques to study black holes; after all, these solutions of GR are also plagued by a central singularity. In addition, it is plausible that an LQC model may shed some light on long-standing issues in black hole physics, e.g., information loss, Hawking evaporation, and firewalls. <br /><br /><br />Now, if one restricts the model to the Schwarzschild black hole interior region, the spacetime can actually be considered as a homogeneous, anisotropic cosmology (the Kantowski-Sachs spacetime). This allows the techniques of LQC to be readily applied to the black hole case. In fact, a good deal of work has been done in this direction by Ashtekar, Bojowald, Modesto and many others for over a decade. While these models are able to resolve the central black hole singularity and include important improvements over previous versions, they still have a number of issues. <br /><br /><br />Recently, Singh and Corichi (2016) proposed a new LQC model for the black hole interior that attempts to address these issues. In this talk, Singh describes some of the phenomenology that emerges from that improved model.<br /><br />The main emphasis of the talk is on the following questions:<br /> <br /><br />(1) Is the "bounce" in a black hole LQC model, i.e., the transition from a black hole to a white hole, symmetric? Isotropic and homogeneous models in LQC have generally exhibited symmetric bounces. 
But that is not expected to hold in more general models.<br />(2) Does quantum gravity play a role only once during the bounce process?<br />(3) What quantitative statements can be made about the time-scales of this process, and what are the implications of those details?<br /> (4) Do all black holes, independent of size, exhibit very similar characteristics? <br /><br /><br />Based on detailed numerical calculations that he reviews in his presentation, Singh uncovers the following features of this model: <br /><br /><br />(1) The bounce is indeed not symmetric; for example, the sizes of the parent black hole and the offspring white hole differ widely. More details on this asymmetry appear below.<br />(2) Two distinct quantum regimes appear in this model, with very different associated time-scales.<br />(3) In terms of the proper time of an observer, the time spent in the quantum white hole geometry is much larger than in the quantum black hole. In particular, the time for the observer to reach the white hole horizon is exceedingly large. This also implies that the formation of the white hole interior geometry happens much more quickly than the formation of its horizon.<br />(4) The relation of the bounce time to the black hole mass does depend on whether the black hole is large or small. <br /><br /><br />On the potential implications of these details for some of the important open questions in black hole physics, Singh speculates: <br /><br /><br />(1) For large black holes, the time to develop a white hole (horizon) is much larger than the Hawking evaporation time. This may suggest that for an external observer, a black hole would disappear long before the white hole appears!<br />(2) For small black holes, the time to form a white hole is smaller than the Hawking evaporation time, i.e., small black holes explode before they can evaporate! 
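The competition between these two time-scales can be illustrated with a toy calculation. The scalings below are purely illustrative: the Hawking evaporation time grows as the cube of the mass (a standard result, with order-one prefactors dropped, in Planck units), while the white-hole formation time is modeled as a hypothetical power law with a steeper exponent, chosen only to reproduce the qualitative crossover Singh describes; the actual time-scales come from the detailed numerics of the model, not from this sketch.

```python
# Toy comparison of black hole time-scales, all in Planck units.
# hawking_time uses the standard t_evap ~ M^3 scaling (prefactor dropped).
# white_hole_time is a HYPOTHETICAL power law M^k, k > 3, an assumption
# made only for illustration; it is NOT the formula of the Corichi-Singh model.

def hawking_time(mass):
    """Hawking evaporation time ~ M^3 (order-one prefactor omitted)."""
    return mass ** 3

def white_hole_time(mass, k=4.0):
    """Hypothetical white-hole formation time ~ M^k (k is an assumption)."""
    return mass ** k

def fate(mass, k=4.0):
    """Which process wins for a black hole of the given mass?"""
    if white_hole_time(mass, k) < hawking_time(mass):
        return "explodes"    # white hole forms before evaporation completes
    return "evaporates"      # Hawking evaporation wins

# With any k > 3 the crossover sits at M = 1 in these units, so the toy
# model reproduces the qualitative picture: small holes explode first,
# large ones evaporate long before the white hole appears.
print(fate(0.5))    # small black hole -> "explodes"
print(fate(100.0))  # large black hole -> "evaporates"
```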
<br /><br /><br />These could have interesting implications for the various proposed black hole evaporation paradigms. Given the concreteness of the results Singh presents, they are also likely to be relevant to the many previous phenomenological studies of black hole to white hole transitions, including Planck stars. <br /><br /><br />The two main limitations of Singh's results are: (1) the current model ignores the black hole exterior entirely; and (2) the conclusions rely on effective dynamics, not the full quantum evolution. These may be addressed in future work. <br /><br /><span style="background-color: white;">Tuesday, March 21st</span><br /><b>Norbert Bodendorfer, LMU Munich</b><br /><b>Title: Holographic signatures of resolved cosmological singularities </b><br /><a href="http://relativity.phys.lsu.edu/ilqgs/bodendorfer032117.pdf">PDF</a><span style="background-color: white;"> of the talk (2M)</span><br /><a href="http://relativity.phys.lsu.edu/ilqgs/bodendorfer032117.mp4">Audio+Slides</a><span style="background-color: white;"> [.mp4 10MB]</span><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-vP0SUwQJ2V4/WNrQ1FdcHwI/AAAAAAAAJNU/mGU6cKh9WPgQBJnhswsI_QWgkzDVb1duQCLcB/s1600/Norbert_Bodendorfer.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://4.bp.blogspot.com/-vP0SUwQJ2V4/WNrQ1FdcHwI/AAAAAAAAJNU/mGU6cKh9WPgQBJnhswsI_QWgkzDVb1duQCLcB/s1600/Norbert_Bodendorfer.png" /></a></div><span style="background-color: 
white;">By Jorge Pullin, Louisiana State University</span><br /><span style="background-color: white;"><br /></span>One of the most important results in string theory is the so-called “Maldacena conjecture” or “AdS/CFT correspondence”, proposed by Juan Maldacena. The conjecture states that, for a space-time with a negative cosmological constant (known as anti-de Sitter space-time, or AdS), the behavior of gravity is equivalent to the behavior of a field theory living on the boundary of the space-time. These field theories are of a special type known as “conformal field theories”, hence the name AdS/CFT. Conformal field theories are considerably better understood than quantum gravity, so making the latter equivalent to them opens several new possibilities. The discussion of AdS/CFT has mostly taken place in the context of string theory, which has general relativity as a classical limit. This raises the question of what kind of imprint the singularities that are known to exist in general relativity leave on the conformal field theory. <br /><br />On the other hand, loop quantum gravity is known for eliminating the singularities that arise in general relativity. They get replaced by regions of high curvature, with fluctuations that are not well described by a semiclassical geometry. Nothing is singular, however: physical variables may take large, but finite, values. If AdS/CFT were to hold in the context of loop quantum gravity, the question arises of what imprint the elimination of the singularity would leave on the conformal field theory. The seminar addressed this point by considering certain functions of the conformal field theory, known as correlation functions, that characterize its behavior: in particular, how the singularities of general relativity get encoded in these correlation functions and how their elimination in loop quantum gravity changes them. The work is at the moment only a model, in five dimensions, of a particular space-time known as the Kasner space-time. 
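The kind of boundary quantity involved can be sketched with a standard AdS/CFT formula (shown schematically here; the geodesic approximation assumes operators of large conformal dimension, and the details of the loop-quantum-gravity-corrected geometry are those of the talk, not reproduced here):

```latex
% Geodesic approximation to a boundary two-point correlation function:
% for an operator O of large conformal dimension \Delta, the correlator
% between boundary points x and x' is dominated by the renormalized
% length L_{\mathrm{ren}} of the bulk geodesic joining them:
\langle \mathcal{O}(x)\,\mathcal{O}(x') \rangle \;\sim\; e^{-\Delta\, L_{\mathrm{ren}}(x,x')}
% Since L_{\mathrm{ren}} is sensitive to the bulk geometry, a curvature
% singularity leaves a characteristic imprint on the correlator, and its
% resolution in loop quantum gravity modifies that imprint.
```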
<br /><br />Future work will consist of extending the results to other space-times. Of particular interest would be the extension to black hole space-times, which loop quantum gravity also rids of singularities. As is well known, black hole space-times have the problem of the “information paradox”, stemming from the fact that black holes evaporate through the radiation that Hawking predicted, leaving in their wake only thermal radiation no matter what process led to the formation of the black hole. It is expected that when the evaporation is viewed in terms of the conformal field theory, this loss of information about what formed the black hole will be better understood.<br /><br />In addition to the specific results, the fact that this work suggests points of contact between loop quantum gravity and string theory makes it uniquely exciting, since both fields have developed separately over the years and could potentially benefit from cross-pollination of ideas. <span style="background-color: white;">Tuesday, February 21st</span><br /><b>Kirill Krasnov, University of Nottingham</b><br /><b>Title: 3D/4D gravity as the dimensional reduction of a theory of differential forms in 6D/7D </b><br /><a href="http://relativity.phys.lsu.edu/ilqgs/krasnov022117.pdf">PDF</a><span style="background-color: white;"> of the talk (5M)</span><br /><a href="http://relativity.phys.lsu.edu/ilqgs/krasnov022117.mp4">Audio+Slides</a><span style="background-color: white;"> [.mp4 16MB]</span><br /><span style="background-color: white;"><br /></span><br /><div class="separator" style="clear: both; text-align: center;"><a 
href="https://3.bp.blogspot.com/-vkrWIIbzM64/WK3OaVBRo8I/AAAAAAAAJI4/DcIpF_d2slI4TroeawZdApAXRECwfn3PQCLcB/s1600/re.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="320" src="https://3.bp.blogspot.com/-vkrWIIbzM64/WK3OaVBRo8I/AAAAAAAAJI4/DcIpF_d2slI4TroeawZdApAXRECwfn3PQCLcB/s320/re.png" width="320" /></a></div><span style="background-color: white;">by Jorge Pullin, Louisiana State University</span><br /><span style="background-color: white;"><br /></span><br /><div class="MsoNormal">Ordinary field theories, like Maxwell’s electromagnetism, are physical systems with infinitely many degrees of freedom: essentially, the values of the fields at all the points of space. There exists a class of field theories that are formulated like ordinary ones, in terms of fields that take different values at different points in space, but whose equations of motion imply that the number of degrees of freedom is finite. This makes some of them particularly easy to quantize. A good example is general relativity in two space and one time dimensions (known as 2+1 dimensions). Unlike general relativity in four-dimensional space-time, it only has a finite number of degrees of freedom, which depend on the topology of the space-time considered. This type of behavior tends to be generic for such theories, and as a consequence they are labeled Topological Field Theories (TFTs). These theories have found application in mathematics, where they are used to explore questions of geometry and topology, like the construction of knot invariants, using quantum field theory techniques. 
These theories have the property of not requiring any background geometric structure to define them, unlike, for instance, Maxwell theory, which requires a given space-time metric in order to be formulated.</div><div class="MsoNormal"><br /></div><div class="MsoNormal">Remarkably, it was shown by Plebanski in 1977, and further studied by Capovilla, Dell, Jacobson and Mason in 1991, that certain four-dimensional TFTs, if supplemented by additional constraints among their variables, are equivalent to general relativity. The additional constraints have the counterintuitive effect of adding degrees of freedom to the theory, because they modify the fields in terms of which the theory is formulated. Formulating general relativity in this fashion leads to new perspectives on the theory. In particular, it suggests certain generalizations of general relativity, which the talk refers to as deformations of GR.</div><div class="MsoNormal"><br /></div><div class="MsoNormal">The talk considered a series of field theories in six and seven dimensions. These theories do not require background structures for their definition but, unlike the topological theories mentioned before, they do have infinitely many degrees of freedom. The dimensional reduction of these theories to four dimensions was then considered. Dimensional reduction is a procedure in which one “takes a lower-dimensional slice” of a higher-dimensional theory, usually by imposing some symmetry (for instance, assuming that the fields do not depend on certain coordinates). One of the first such proposals was considered in 1919 by Kaluza and further developed later by Klein, so it is known as Kaluza-Klein theory. They considered general relativity in five dimensions and, by assuming the metric does not depend on the fifth coordinate, were able to show that the theory behaved like four-dimensional general relativity coupled to Maxwell’s electromagnetism and a scalar field. 
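The Kaluza-Klein mechanism just described can be summarized by a metric ansatz (shown schematically; the precise powers of the scalar field multiplying each term depend on frame conventions):

```latex
% Kaluza-Klein ansatz: a five-dimensional metric, assumed independent of
% the fifth coordinate y, decomposes into a four-dimensional metric
% g_{\mu\nu}, a vector field A_\mu and a scalar field \phi:
ds_5^2 \;=\; g_{\mu\nu}\,dx^\mu dx^\nu \;+\; \phi^2\left(dy + A_\mu\,dx^\mu\right)^2 .
% Inserting this ansatz into the five-dimensional Einstein-Hilbert action
% yields four-dimensional general relativity coupled to Maxwell's
% electromagnetism (through A_\mu) and a scalar field (\phi).
```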
In the talk it was shown that the seven-dimensional theory considered, when reduced to four dimensions, is equivalent to general relativity coupled to a scalar field. The talk also showed that certain topological theories in four dimensions known as BF theories (because the two variables of the theory are fields named B and F) can be viewed as dimensional reductions of topological theories in seven dimensions, and finally that general relativity in 2+1 dimensions can be viewed as a reduction of a six-dimensional topological theory.</div><div class="MsoNormal"><br /></div><div class="MsoNormal"><span>At the moment it is not clear whether these theories can be
considered as describing nature, because it is not clear whether the additional scalar field that is predicted is compatible with the known constraints on scalar-tensor theories. However, these theories are useful in illuminating the structures and dynamics of general relativity and connections to other theories.</span><o:p></o:p></div><span style="background-color: white;"><br /></span><span style="background-color: white;"><br /></span>Jorge Pullinhttp://www.blogger.com/profile/07465581283254332265noreply@blogger.com0tag:blogger.com,1999:blog-5826632960356694090.post-82186023070346698362017-02-07T15:02:00.002-06:002017-02-07T15:02:26.175-06:00Loop Quantum Gravity, Tensor Network, and Holographic Entanglement Entropy <span style="background-color: white;">Tuesday, February 7th</span><br /><b>Muxin Han, Florida Atlantic University</b><br /><b>Loop Quantum Gravity, Tensor Network, and Holographic Entanglement Entropy </b><span style="background-color: white;"></span><br /><a href="http://relativity.phys.lsu.edu/ilqgs/han020717.pdf">PDF</a><span style="background-color: white;"> of the talk (2M)</span><br /><a href="http://relativity.phys.lsu.edu/ilqgs/han020717.mp4">Audio+Slides</a><span style="background-color: white;"> [.mp4 18MB]</span><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-im79Xdc4yWk/WJowVPRr0DI/AAAAAAAAJGg/GzA_11oJtJUuZbEWvCPOQ6yDLDlyjYytQCLcB/s1600/re.JPG" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="213" src="https://4.bp.blogspot.com/-im79Xdc4yWk/WJowVPRr0DI/AAAAAAAAJGg/GzA_11oJtJUuZbEWvCPOQ6yDLDlyjYytQCLcB/s320/re.JPG" width="320" /></a></div><span style="background-color: white;">by Jorge Pullin, Louisiana State University</span><br /><span style="background-color: white;"><br /></span><span style="background-color: white;"><br /></span>The cosmological constant is an extra term that was introduced into the equations of 
General Relativity by Einstein himself. At the time he was trying to show that if one applied the equations to the universe as a whole, they had static solutions. People did not know in those days that the universe expanded. Some say that Einstein called the introduction of this extra term his “biggest blunder” since it prevented him from predicting the expansion of the universe which was observed experimentally by Hubble a few years later. In spite of its origin, the term is allowed in the equations and the space-times that arise when one includes the term are known as de Sitter space-times in honor of the Dutch physicist who first found some of these solutions. Depending on the sign of the cosmological constant chosen, one could have de Sitter or anti-de Sitter (AdS) space-times. <br /><br /><br />It was observed in the context of string theory that if one considered quantum gravity in anti-de Sitter space-times, the theory was equivalent to a certain class of field theories known as conformal field theories (CFT) living on the boundary of the space-time. The result is not a theorem but a conjecture, known as AdS/CFT or Maldacena conjecture. It has been verified in a variety of examples. It is a remarkable result. Gravity and conformal field theories are very different in many aspects and the fact that they could be mapped to each other opens many possibilities for new insights. For instance, an important open problem in gravity is the evaporation of black holes. Although nothing can escape a black hole classically, Hawking showed that if quantum effects are taken into account, black holes radiate particles like a black body at a given temperature. The particles take away energy and the black hole shrinks, eventually evaporating completely. This raises the question of what happened to matter that went into the black hole. 
Quantum mechanics has a property named unitarity that states that ordinary matter cannot turn into incoherent radiation, so this raises the question of how it could happen in an evaporating black hole. In the AdS/CFT picture, since the evaporating black hole would be mapped to a conformal field theory that is unitary, that would provide a way to study quantum mechanically how matter turns into incoherent radiation. <br /><br /><br />Several authors have connected the AdS/CFT conjecture to a mathematical construction known as tensor networks that is commonly used in quantum information theory. Tensor networks have several points in common with the spin networks that are the quantum states of gravity in loop quantum gravity. This talk spells out in detail how to make a correspondence between the states of loop quantum gravity and tensor networks, basically corresponding to a coarse graining or averaging at certain scales of the states of quantum gravity. This opens the possibility of connecting results from AdS/CFT with results in loop quantum gravity. In particular, the so-called Ryu-Takayanagi formula for the entropy of a region can be arrived at in the context of loop quantum gravity. 
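The entanglement entropy that appears in this correspondence can be made concrete in a minimal toy setting. The sketch below (an illustration assuming NumPy, not taken from the talk) computes the von Neumann entropy of one qubit in a two-qubit pure state by diagonalizing the reduced density matrix:

```python
import numpy as np

# Toy illustration: von Neumann entanglement entropy S = -Tr(rho_A ln rho_A)
# of subsystem A (one qubit) in a two-qubit pure state.

def entanglement_entropy(psi):
    """psi: normalized 4-vector in the basis |00>, |01>, |10>, |11>."""
    m = psi.reshape(2, 2)            # split amplitudes into (A index, B index)
    rho_a = m @ m.conj().T           # reduced density matrix of qubit A
    evals = np.linalg.eigvalsh(rho_a)
    evals = evals[evals > 1e-12]     # drop zero eigenvalues (0 log 0 = 0)
    return float(-np.sum(evals * np.log(evals)))

# Product state |00>: unentangled, S = 0.
product = np.array([1.0, 0.0, 0.0, 0.0])
# Bell state (|00> + |11>)/sqrt(2): maximally entangled, S = ln 2.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

print(entanglement_entropy(product))   # ≈ 0
print(entanglement_entropy(bell))      # ≈ 0.693 (= ln 2)
```

For the Bell state the reduced density matrix has two eigenvalues of 1/2, giving the maximal one-qubit entropy ln 2; holographic entanglement entropy generalizes this kind of computation from a qubit to a region of space.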
<br /><br /><br />Jorge Pullinhttp://www.blogger.com/profile/07465581283254332265noreply@blogger.com0tag:blogger.com,1999:blog-5826632960356694090.post-73560028788857376502017-01-25T14:33:00.000-06:002017-01-25T14:33:07.313-06:00Symmetries and representations in Group Field Theory<div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-4l7mxEBa--A/WIkBF9i_ABI/AAAAAAAAJDU/AoCkF_vVDbkWQHhTXyIEz6mgvsiWkAURQCLcB/s1600/kegeles.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="320" src="https://3.bp.blogspot.com/-4l7mxEBa--A/WIkBF9i_ABI/AAAAAAAAJDU/AoCkF_vVDbkWQHhTXyIEz6mgvsiWkAURQCLcB/s320/kegeles.png" width="239" /></a></div><span style="background-color: white;">Tuesday, January 24th</span><br /><b>Alexander Kegeles, Albert Einstein Institute</b><br /><b>Title: Field theoretical aspects of GFT: symmetries and representations</b><span style="background-color: white;"> </span><br /><a href="http://relativity.phys.lsu.edu/ilqgs/kegeles012417.pdf">PDF</a><span style="background-color: white;"> of the talk (1M)</span><br /><a href="http://relativity.phys.lsu.edu/ilqgs/kegeles012417.mp4">Audio+Slides</a><span style="background-color: white;"> [.mp4 11MB]</span><br /><span style="background-color: white;"><br /></span><span style="background-color: white;"><br /></span><span style="background-color: white;">by Jorge Pullin, Louisiana State University</span><br /><span style="background-color: white;"><br /></span><span style="background-color: white;"><br /></span>In loop quantum gravity the quantum states are labeled by loops, more precisely by graphs formed by lines that intersect at vertices and that are “colored”, meaning each line is associated with an integer. They are known as "spin networks". As the states evolve in time these graphs "sweep" surfaces in four dimensional space-time constituting what is known as a “spin foam”. 
This is a representation of a quantum space-time in loop quantum gravity. The spin foams connect an initial spin network with a final one and the formalism gives a probability for such a “transition” from a given spatial geometry to a future spatial geometry to occur. The picture that emerges has some parallel with ordinary particle physics, in which particles transition from initial to final states, but also some differences. <br /><br />However, it was found that one could construct ordinary quantum field theories whose transition probabilities coincided with those stemming from spin foams connecting initial to final spatial geometries in loop quantum gravity. This talk concerns itself with such quantum field theories, known generically as Group Field Theories (GFTs). The talk covered two main aspects of them: symmetries and representations. <br /><br />Symmetries are important in that they may provide mathematical tools to solve the equations of the theory and identify conserved quantities in it. There is a lot of experience with symmetries in local field theories, but GFTs are non-local, which adds challenges. Ordinary quantum field theories are formulated starting from a quantity known as the action, which is an integral on a domain. A symmetry is defined as a map of the points of such a domain and of the fields that leaves the integral invariant. In GFTs the action is a sum of integrals on different domains. A symmetry is then defined as a collection of maps acting on the domains and fields that leaves each integral in the sum invariant. An important theorem of great generality, stretching from classical mechanics to quantum field theory, is Noether’s theorem, which connects symmetries with conserved quantities. The above notion of symmetry for GFTs allows one to introduce a Noether’s theorem for them. 
The theorem could find applicability in a variety of situations, in particular in clarifying certain relations that were noted between GFTs and recoupling theory, and in better understanding various models based on GFTs. <br /><br />In a quantum theory like GFTs the quantum states structure themselves into a mathematical set known as a Hilbert space. The observable quantities of the theory are represented as operators acting on such a space. Hilbert spaces are generically infinite dimensional and this introduces a series of technicalities, both in their own definition and in the definition of observables for quantum theories. In particular, one can find different families of inequivalent operators related to the same physical observables. This is what is known as different representations of the algebra of observables. Algebra in this context means that one can compose observables to form either new observables or linear combinations of known observables. An important type of representation in quantum field theory is the Fock representation. It is the representation on which ordinary particles are based. Another type of representation is the condensate representation which, instead of particles, describes their collective behaviour (excitations) and is very convenient for systems with a large (or infinite) number of particles. A discussion of Fock and condensate-like representations in the context of GFTs was presented, and the issue of when representations are equivalent or not was also addressed. <br /><br />Future work looks at generalizing the notion of symmetries presented here to find further non-standard symmetries of GFTs, and at investigating “anomalies”: cases in which a symmetry of the classical theory does not survive quantization. The notion of symmetry can also be used to define an idea of a “ground state”, or fundamental state, of the theory. In ordinary quantum field theory in flat space-time this is done by seeking the state with the lowest energy. 
In the context of GFTs one will invoke more complicated notions of symmetries to define the ground state. Several other results of ordinary field theories, like the spin statistics theorem, may be generalizable to the GFT context using the ideas presented in this talk. <br /><br /><br />Jorge Pullinhttp://www.blogger.com/profile/07465581283254332265noreply@blogger.com0tag:blogger.com,1999:blog-5826632960356694090.post-87896684992091873782016-03-11T10:00:00.000-06:002016-03-11T10:00:07.821-06:00Symmetry reductions in loop quantum gravity<span style="background-color: white;"><br /></span><br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-cvWAc6lmksE/VuLq7IIBukI/AAAAAAAAIYU/02Z6ppHbxkonAr5s6EiE5QYElVsYjQxDQ/s1600/Screen%2BShot%2B2016-03-11%2Bat%2B9.57.10%2BAM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="196" src="https://2.bp.blogspot.com/-cvWAc6lmksE/VuLq7IIBukI/AAAAAAAAIYU/02Z6ppHbxkonAr5s6EiE5QYElVsYjQxDQ/s320/Screen%2BShot%2B2016-03-11%2Bat%2B9.57.10%2BAM.png" width="320" /></a></div><span style="background-color: white;"><br /></span><span style="background-color: white;"><br /></span><span style="background-color: white;"><br /></span><span style="background-color: white;">Tuesday, Dec. 8th</span><br /><b>Norbert Bodendorfer, Univ. Warsaw </b><br /><b>Title: Quantum symmetry reductions based on classical gauge fixings </b><span style="background-color: white;"></span><br /><a href="http://relativity.phys.lsu.edu/ilqgs/bodendorfer120815.pdf">PDF</a><span style="background-color: white;"> of the talk (1.4MB)</span><br /><a href="http://relativity.phys.lsu.edu/ilqgs/bodendorfer120815.wav">Audio</a><span style="background-color: white;"> [.wav 35MB]</span><br /><a href="https://www.youtube.com/watch?v=6pDvzlCYJx4">YouTube.</a><br /><br /><span style="background-color: white;">Tuesday, Nov. 
10th</span><br /><b>Jedrzej Swiezewski, Univ. Warsaw </b><br /><b>Title: Developments on the radial gauge </b><span style="background-color: white;"></span><br /><a href="http://relativity.phys.lsu.edu/ilqgs/swiezewski111015.pdf">PDF</a><span style="background-color: white;"> of the talk (4MB)</span><br /><a href="http://relativity.phys.lsu.edu/ilqgs/swiezewski111015.mp3">Audio</a><span style="background-color: white;"> [.mp3 40MB]</span><br /><span style="background-color: white;"><br /></span><span style="background-color: white;">by Steffen Gielen, Imperial College</span><br /><span style="background-color: white;"><br /></span><br /><div class="page" title="Page 1"><div class="layoutArea"><div class="column"><span style="font-family: "liberationserif"; font-size: 12.000000pt;">A few months ago, physicists around the world celebrated the centenary of the field equations of general relativity, presented by Einstein to the Prussian Academy of Sciences in November 1915. Arriving at the correct equations was the culmination of an incredible intellectual effort by Einstein, driven largely by mathematical requirements that the new theory of gravitation (superseding Newton's theory of gravitation, which proved ultimately incomplete) should satisfy. In particular, Einstein realized that its field equations should be generally covariant – they should take the same general form in any coordinate system that one chooses to use for the calculation, say whether one uses Cartesian, cylindrical, or spherical coordinates. This property sets the equations of general relativity apart from Newton's laws of motion, where changing coordinate system can lead to the appearance of additional “forces” such as centripetal or Coriolis forces. </span><br /><span style="font-family: "liberationserif"; font-size: 12.000000pt;"><br /></span> <span style="font-family: "liberationserif"; font-size: 12.000000pt;">Many conferences were held honoring the anniversary of Einstein's achievement. 
What was discussed at those conferences was partially the historical context, the beauty of the form of the equations, or the precise mathematical and conceptual significance of general covariance. However, the most important legacy of general relativity and the main inspiration for modern research have been the new physical phenomena that appear in general relativity but not in Newtonian gravity: </span><span style="font-family: &quot;liberationserif&quot;; font-size: 12.000000pt; font-style: italic;">black holes </span><span style="font-family: &quot;liberationserif&quot;; font-size: 12.000000pt;">are regions of spacetime where gravity becomes so strong that not even light can escape; the strong gravitational field outside a black hole leads to a </span><span style="font-family: &quot;liberationserif&quot;; font-size: 12.000000pt; font-style: italic;">time dilation </span><span style="font-family: &quot;liberationserif&quot;; font-size: 12.000000pt;">so strong that an hour near a black hole can correspond to years on Earth, as used recently in the film </span><span style="font-family: &quot;liberationserif&quot;; font-size: 12.000000pt; font-style: italic;">Interstellar; </span><span style="font-family: &quot;liberationserif&quot;; font-size: 12.000000pt;">and we now believe that the universe as a whole </span><span style="font-family: &quot;liberationserif&quot;; font-size: 12.000000pt; font-style: italic;">is expanding</span><span style="font-family: &quot;liberationserif&quot;; font-size: 12.000000pt;">, and has been since the Big Bang, which is thought of as the beginning of space and time. </span><br /><span style="font-family: &quot;liberationserif&quot;; font-size: 12.000000pt;"><br /></span> <span style="font-family: &quot;liberationserif&quot;; font-size: 12.000000pt;">In order to understand these dramatic consequences of the Einstein equations, physicists had to find solutions to these equations. 
This is rather challenging in general: the Einstein equations are complicated differential equations for ten functions, depending on one time and three space dimensions, that encode the gravitational field of spacetime. Furthermore, the conceptually appealing property of general covariance means that apparently different solutions of the equations can be simply the same physical configuration looked at in different coordinates. Indeed, both issues – finding solutions to the equations at all and understanding their meaning – were challenges in the early days of the theory, when physicists tried to make sense of Einstein's equations. </span><br /><span style="font-family: "liberationserif"; font-size: 12.000000pt;"><br /></span> <span style="font-family: "liberationserif"; font-size: 12.000000pt;">Despite this formidable challenge, the Prussian lieutenant of the artillery Karl Schwarzschild, while serving on the Eastern front in World War I, was able to derive an exact solution of Einstein's equations in vacuum within weeks of their publication, much to the surprise of Einstein himself. This solution, now called the </span><span style="font-family: "liberationserif"; font-size: 12.000000pt; font-style: italic;">Schwarzschild solution</span><span style="font-family: "liberationserif"; font-size: 12.000000pt;">, describes a black hole, and is one of the most important solutions of general relativity. What Schwarzschild did in order to solve the equations was to assume a </span><span style="font-family: "liberationserif"; font-size: 12.000000pt; font-style: italic;">symmetry </span><span style="font-family: "liberationserif"; font-size: 12.000000pt;">of the solution: he assumed that the configuration of the gravitational field should be spherically symmetric. In spherical coordinates, where each point in space is specified by one radial and two angular coordinates, it should be independent of any change in the angular directions. 
This means that one describes space as a collection of regular, concentric spheres. What Schwarzschild found was that the spheres did not have to be glued together to simply give normal flat space, but one could form a curved geometry out of them, with curvature increasing as one heads towards the centre (eventually forming a black hole), while still solving Einstein's equations. To be able to do the calculation, Schwarzschild had to choose a particularly suitable coordinate system, hence exploiting the property of general covariance in his favor. </span><br /><span style="font-family: "liberationserif"; font-size: 12.000000pt;"><br /></span> <span style="font-family: "liberationserif"; font-size: 12.000000pt;">This strategy of finding solutions is typical for practitioners of general relativity: cosmological solutions could similarly be found by assuming that the universe looks exactly the same at each point and in each direction in space (in mathematical terms, it is homogeneous and isotropic), and only changes in time. This reduces the problem of solving Einstein equations to a much simpler task, and explicit solutions could be written down, again in a suitable coordinate system. 
These simplest solutions already exhibit the main features of our universe (overall expansion and an initial </span><span style="font-family: liberationserif; font-size: 12pt;">Big Bang singularity) and are fairly realistic – indeed our Universe seems to display only small variations between different large-scale regions, and at the very largest scales is, within an approximation, well described by a geometry that simply looks the same everywhere in space.</span></div></div></div><div class="page" title="Page 2"><div class="layoutArea"><div class="column"><span style="font-family: &quot;liberationserif&quot;; font-size: 12.000000pt;"><br /></span><span style="font-family: &quot;liberationserif&quot;; font-size: 12.000000pt;">Loop quantum gravity is an approach to the quantization of general relativity, aiming to extend general relativity by making it compatible with quantum mechanics. What distinguishes it from other approaches is that the main property of general relativity, general covariance, is taken as a central guiding principle towards the construction of a quantum theory. In some respects, the status of loop quantum gravity can be compared to the early days of general relativity: while it is now known that a quantum theory compatible with general covariance can be constructed, and its mathematical structure is well understood, one now needs to understand the new physical phenomena implied by the quantization, beyond general relativity. Just as in the time after November 1915, today's physicists should find explicit solutions to the equations of loop quantum gravity that can be used to study the physical implications of the (relatively) new framework. </span><br /><span style="font-family: &quot;liberationserif&quot;; font-size: 12.000000pt;"><br /></span> <span style="font-family: &quot;liberationserif&quot;; font-size: 12.000000pt;">One of the main successes of loop quantum gravity has been its application to cosmology. 
Homogeneous solutions of the Einstein equations that approximately describe our universe have been shown to receive modifications once loop quantum gravity techniques are used, leading to a resolution of the Big Bang singularity by a Big Bounce, and potentially observable quantum effects. However, the resulting models of the universe are not solutions of the full theory of loop quantum gravity: rather, they arise from quantization of a reduced set of solutions of classical general relativity with loop quantum gravity techniques. There is no reason, in general, to expect that these are exact solutions of loop quantum gravity. Quantum mechanics is funny: quantization can lead to many inequivalent theories, depending on how one decides to do it. By assuming that the universe is homogeneous from the outset, one obtains a quantum theory of only a finite, rather than an infinite number of “degrees of freedom”. It is well known that quantum theories can behave differently depending on whether they have a finite or infinite number of degrees of freedom. </span><br /><span style="font-family: "liberationserif"; font-size: 12.000000pt;"><br /></span><span style="font-family: "liberationserif"; font-size: 12.000000pt;">In their ILQGS seminars, Jedrzej and Norbert presented work towards resolving this tension. Namely, they presented an approach in which, similar to how Schwarzschild and contemporaries proceeded 100 years ago, one identifies a suitable coordinate system in which the spacetime metric, representing the gravitational field, is represented. In a quantum theory where general covariance is implemented fundamentally, this means one has to perform a “gauge-fixing”; the freedom of changing the coordinate system must be “fixed” consistently in the quantum theory. Gauge-fixings mean that one works with fewer variables, and has to worry less about different but physically equivalent solutions that are only related by changes in the coordinate system. 
Achieving them is often quite hard technically. Together with collaborators in Warsaw, Jedrzej and Norbert have made progress on this issue in recent years. </span><br /><span style="font-family: "liberationserif"; font-size: 12.000000pt;"><br /></span> <span style="font-family: "liberationserif"; font-size: 12.000000pt;">The second step, after a convenient coordinate system (think of spherical coordinates for treating the Schwarzschild black hole) has been chosen, is to do a “symmetry reduction” in the full quantum theory: rather than on the most general quantum universes, one now focusses on those that have a certain symmetry property. Norbert showed a detailed strategy for how to do this. One identifies an equation satisfied by all classical solutions with the desired symmetry, such as isotropy (i.e. looking the same in all directions). The quantum version of this equation is then imposed in loop quantum gravity, leading to a full quantum definition of symmetries like “isotropy” or “spherical symmetry” in loop quantum gravity. The obvious applications of the mechanism, which are being explored at the moment, are identifying cosmological and black hole solutions in loop quantum gravity, studying their dynamics, and verifying whether the resulting effects are in accord with what has been found in the simpler finite-dimensional quantum models described above. In particular, one would like to know whether singularities inside black holes and at the Big Bang, where Einstein's theory simply breaks down, can be resolved by quantum mechanics, as is hoped. 
</span><br /><span style="font-family: "liberationserif"; font-size: 12.000000pt;"><br /></span> </div></div></div><br /><div class="page" title="Page 3"><div class="layoutArea"><div class="column"><span style="font-family: "liberationserif"; font-size: 12.000000pt;">Jedrzej also showed how the methods developed in different “gauge-fixings” for classical general relativity could be used to resolve a disputed issue in the context of the AdS/CFT correspondence in string theory, where one faces a similar problem of fixing the huge freedom under changes in the coordinate system in order to identify the invariant physical properties of spacetime. In particular, a certain choice of gauge-fixing has been discussed in AdS/CFT, which leads to unfamiliar consequences such as non-locality in the gauge-fixed version of the theory. The tools developed by Jedrzej and collaborators could be used to clarify precisely how this non-locality occurs. They hence provide a somewhat unusual example of the application of methods developed for loop quantum gravity in a string theory-motivated context, clearly a positive example that can inspire more work on closer connections between methods used in these different communities. 
</span></div></div></div>Jorge Pullinhttp://www.blogger.com/profile/07465581283254332265noreply@blogger.com0tag:blogger.com,1999:blog-5826632960356694090.post-83862033447230556392015-05-25T15:43:00.000-05:002015-05-29T15:12:10.508-05:00Separability and quantum mechanics<br /><a href="https://www.blogger.com/blogger.g?blogID=5826632960356694090" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"></a><span style="background-color: white;">Tuesday, Apr 21st</span><br /><b>Fernando Barbero, CSIC, Madrid </b><br /><b>Title: Separability and quantum mechanics </b><span style="background-color: white;"></span><br /><a href="http://relativity.phys.lsu.edu/ilqgs/barbero042115.pdf">PDF</a><span style="background-color: white;"> of the talk (758k)</span><br /><span style="background-color: white;"><a href="http://relativity.phys.lsu.edu/ilqgs/barbero042115.wav">Audio</a> [.wav 20MB]</span><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-NnrZKO65IAs/VWC9yP-725I/AAAAAAAAHlM/CZ6hGcWR-Wk/s1600/barbero.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-NnrZKO65IAs/VWC9yP-725I/AAAAAAAAHlM/CZ6hGcWR-Wk/s1600/barbero.jpg" /></a></div><br /><span style="background-color: white;">by Juan Margalef-Bentabol, UC3M-CSIC, Madrid</span> <br /><script src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"></script> <br /><h2>Classical vs Quantum: Two views of the world</h2><a href="https://www.blogger.com/blogger.g?blogID=5826632960356694090" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"></a>In classical mechanics it is relatively straightforward to get information from a system. For instance, if we have a bunch of particles moving around, we can ask ourselves: where is its center of mass? 
What is the average speed of the particles? What is the distance between two of them? In order to ask and answer such questions in a precise mathematical way, we need to know all the positions and velocities of the system at every moment; in the usual jargon, we need to know the dynamics over the state space (also called <a href="https://en.wikipedia.org/wiki/Configuration_space">configuration space</a> for positions and velocities, or <a href="https://en.wikipedia.org/wiki/Phase_space#Conjugate_momenta">phase space</a> when we consider positions and momenta). For example, the appropriate way to ask for the center of mass is given by the function that, for a specific state of the system, gives the weighted mean of the positions of all the particles. Also, the total momentum of the system is given by the function consisting of the sum of the momenta of the individual particles. Such functions are called <strong>observables</strong> of the theory; an observable is thus defined as a function that takes all the positions and momenta and returns a real number. Among all the observables there are some that can be considered <strong>fundamental</strong>. A familiar example is provided by the generalized positions and momenta, denoted as <script type="math/tex;">q^i</script> and <script type="math/tex;">p_i</script>.<br /><br />In a quantum setting, answering, and even asking, such questions is much trickier. It can be properly <a href="https://en.wikipedia.org/wiki/Matrix_mechanics">justified</a> that the needed classical ingredients have to be significantly changed:<br /><ol><li>The state space is now much more complicated: instead of positions and velocities/momenta we need a (usually infinite dimensional) complex vector space <script type="math/tex;">\mathcal{H}</script> with an inner product that is <a href="https://en.wikipedia.org/wiki/Complete_metric_space">complete</a>. 
Such a vector space is called a <a href="https://en.wikipedia.org/wiki/Hilbert_space">Hilbert space</a> and the vectors of <script type="math/tex;">\mathcal{H}</script> are called states (up to a complex multiplication).</li><li>The observables are functions <script type="math/tex;">A</script> from <script type="math/tex;">\mathcal{H}</script> to itself that "behave well" with respect to the inner product (these are called <a href="https://en.wikipedia.org/wiki/Self-adjoint_operator">self-adjoint</a> operators). Notice in particular that the outputs of the quantum observables are complex vectors and not numbers anymore!</li><li>In a physical experiment we do obtain real numbers, so somehow we need to retrieve them from the observable <script type="math/tex;">A</script> associated with the experiment. The way to do this is by looking at the <strong>spectrum</strong> of <script type="math/tex;">A</script>, which consists of a set of real numbers called <a href="https://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors">eigenvalues</a> associated with some vectors called <a href="https://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors">eigenvectors</a> (more precisely, what we obtain is a probability amplitude whose absolute value squared gives the probability of obtaining the outcome associated with a specific eigenvector).</li></ol>The questions that arise naturally are: How do we choose the Hilbert space? How do we introduce <strong>fundamental</strong> observables analogous to the ones of classical mechanics? In order to answer these questions we need to take a small detour and talk a little bit about the algebra of observables.<br /><h2>Algebra of Observables</h2>Given two classical observables, we can construct another one by means of different methods.
Some important ones are:<br /><ul><li> By adding them (they are real functions) <script type="math/tex;">h_1=f+g</script></li><li> By multiplying them <script type="math/tex;">h_2=f\cdot{}g</script></li><li> By a more sophisticated procedure called the <a href="https://en.wikipedia.org/wiki/Poisson_bracket">Poisson bracket</a> <script type="math/tex;">h_3=\{f,g\}</script></li></ul>The last one turns out to be fundamental in classical mechanics and plays an important role within the <a href="https://en.wikipedia.org/wiki/Hamiltonian_mechanics">Hamiltonian form</a> of the dynamics of the system. A basic fact is that the set of observables endowed with the Poisson bracket forms a <a href="https://en.wikipedia.org/wiki/Lie_algebra">Lie algebra</a> (a vector space with a rule to obtain an element out of two other ones satisfying some natural properties). The fundamental observables behave really well with respect to the Poisson bracket, namely, they satisfy simple <a href="https://en.wikipedia.org/wiki/Canonical_commutation_relation">commutation relations</a> <script type="math/tex;">\{q^i,p_j\}=\delta^i_j</script> i.e. if we consider the <script type="math/tex;">i</script>-<script type="math/tex;">th</script> position observable and "Poisson-multiply" it by the <script type="math/tex;">j</script>-<script type="math/tex;">th</script> momentum observable, we obtain the constant function <script type="math/tex;">1</script> if <script type="math/tex;">i=j</script>, or the constant function <script type="math/tex;">0</script> if <script type="math/tex;">i\neq j</script>.<br /><br />One of the best approaches to construct a quantum theory associated with a classical one is to reproduce at the quantum level some features of its classical formulation. One way to do this is to define a Lie algebra for the <strong>quantum</strong> observables such that some of them mimic the Poisson brackets of the <strong>classical</strong> fundamental observables.
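The classical side of this correspondence is easy to play with on a computer. Below is a minimal sketch (not from the talk) for a single degree of freedom, approximating the Poisson bracket {f, g} = ∂f/∂q ∂g/∂p − ∂f/∂p ∂g/∂q with central finite differences; the helper names are illustrative.

```python
def poisson_bracket(f, g, q, p, h=1e-5):
    """Approximate {f, g} = df/dq*dg/dp - df/dp*dg/dq at the point (q, p)
    using central finite differences (exact for linear/quadratic observables)."""
    df_dq = (f(q + h, p) - f(q - h, p)) / (2 * h)
    df_dp = (f(q, p + h) - f(q, p - h)) / (2 * h)
    dg_dq = (g(q + h, p) - g(q - h, p)) / (2 * h)
    dg_dp = (g(q, p + h) - g(q, p - h)) / (2 * h)
    return df_dq * dg_dp - df_dp * dg_dq

position = lambda q, p: q
momentum = lambda q, p: p
energy   = lambda q, p: p**2 / 2 + q**2 / 2  # harmonic-oscillator Hamiltonian

# The canonical relation {q, p} = 1 holds at every phase-space point:
print(poisson_bracket(position, momentum, 0.3, -1.2))  # ≈ 1.0
# Hamilton's equations in bracket form: {q, H} = dH/dp = p, the velocity:
print(poisson_bracket(position, energy, 0.3, -1.2))    # ≈ -1.2
```

The second print illustrates why the bracket "plays an important role within the Hamiltonian form of the dynamics": the time derivative of any observable f is just {f, H}.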
This procedure (modulo some technicalities) is known as finding a <a href="https://en.wikipedia.org/wiki/Representation_theory#Unitary_representations">representation</a> of this algebra. In order to do this, one has to choose:<br /><ol><li>A Hilbert space <script type="math/tex;">\mathcal{H}</script>.</li><li>Some fundamental observables that reproduce the canonical commutation relations when we consider the <a href="https://en.wikipedia.org/wiki/Commutator#Ring_theory">commutator</a> of operators.</li></ol>In standard Quantum Mechanics the fundamental observables are positions and momenta. It may seem that there is a great ambiguity in this procedure, however there is a central theorem due to <a href="https://en.wikipedia.org/wiki/Stone-von_Neumann_theorem">Stone and von Neumann</a> that states that, under some reasonable hypotheses, all the representations are essentially the same.<br /><h2> Separability </h2>One of the hypotheses of the Stone-von Neumann theorem is that the Hilbert space <script type="math/tex;">\mathcal{H}</script> must be <strong>separable</strong>. This means that it is possible to find a <strong>countable</strong> set of orthonormal vectors in <script type="math/tex;">\mathcal{H}</script> (called a <a href="https://en.wikipedia.org/wiki/Orthonormal_basis">Hilbert basis</a>) such that any state (vector) of <script type="math/tex;">\mathcal{H}</script> can be written as an appropriate countable sum of them. A separable Hilbert space, despite being infinite dimensional, is not "too big", in the sense that there are Hilbert spaces with uncountable bases that are genuinely larger. The separability assumption seems natural for standard quantum mechanics, but in the case of quantum field theory (with infinitely many degrees of freedom) one might expect to need much larger Hilbert spaces, i.e. non-separable ones.
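Incidentally, the infinite dimensionality of the Hilbert space mentioned earlier is forced on us by the commutation relations themselves. A standard argument (an aside, not from the talk): a commutator of matrices is always traceless, while the canonical relation QP − PQ = iħ·1 would have trace iħ·n on an n-dimensional space, so no finite matrices can satisfy it. A quick numerical check of the trace identity, with pure-Python matrix helpers:

```python
import random

def matmul(A, B):
    """Product of two square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

n = 4
Q = [[random.random() for _ in range(n)] for _ in range(n)]
P = [[random.random() for _ in range(n)] for _ in range(n)]

# tr(QP - PQ) vanishes for *any* finite matrices, so QP - PQ can never equal
# i*hbar times the identity, whose trace is i*hbar*n != 0.
comm_trace = trace(matmul(Q, P)) - trace(matmul(P, Q))
print(abs(comm_trace))  # ≈ 0 up to floating-point roundoff
```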
Somewhat surprisingly, most quantum field theories can be handled with our beloved and "simple" separable Hilbert spaces, with the remarkable exception of LQG (and its derivative LQC), where non-separability plays a significant role. Hence it seems interesting to understand what happens when one considers non-separable Hilbert spaces [3] in the realm of the quantum world. A natural and obvious way to acquire the necessary intuition is by first considering quantum mechanics on a non-separable Hilbert space.<br /><h2> The Polymeric Harmonic Oscillator </h2>The authors of [2,3] discuss two inequivalent (among the infinitely many) representations of the algebra of fundamental observables which share an unfamiliar feature, namely, in one of them (called the position representation) the position observable is well defined but the momentum observable <strong>does not even exist</strong>; in the momentum representation the roles of positions and momenta are exchanged. Notice that in this setting, some familiar features of quantum mechanics are lost for good. For instance, the position-momentum Heisenberg uncertainty formula makes no sense at all, since it requires both the position and momentum observables to be defined.<br /><br />To improve the understanding of such systems and gain some insight for the application to LQG and LQC, the authors of [1] (re)study the <script type="math/tex;">1</script>-dimensional polymeric harmonic oscillator (PHO), i.e. the harmonic oscillator on a non-separable Hilbert space (known in this context as a polymeric Hilbert space). As the space is non-separable, any Hilbert basis must be uncountable. This leads to some unexpected behaviors that can be used to obtain exotic representations of the algebra of fundamental observables.<br /><br />The motivation to study the PHO is much the same as always: the HO, in addition to being an excellent toy model, is a good approximation to any 1-dimensional mechanical system close to its equilibrium points.
Furthermore, free quantum field theories can be thought of as ensembles of infinitely many independent HO's. There are, however, many ways to generalize the HO to a non-separable Hilbert space and also many equivalent ways to realize a concrete representation, for instance by using Hilbert spaces based on:<br /><ul><li> the <a href="https://en.wikipedia.org/wiki/Bohr_compactification">Bohr compactification</a> of the real line.</li><li> the <a href="https://en.wikipedia.org/wiki/Almost_periodic_function#Besicovitch_almost_periodic_functions">Besicovitch almost periodic functions</a>.</li><li> constructions that generalize the usual separable Hilbert spaces based on sequences (<script type="math/tex;">\ell^2(\mathbb{R})</script> spaces).</li></ul>The eigenvalue equations in these different spaces take different forms: in some of them they are <a href="https://en.wikipedia.org/wiki/Recurrence_relation#Relationship_to_difference_equations_narrowly_defined">difference equations</a>, whereas in others they have the form of the standard Schrödinger equation with a periodic potential. It is important to notice nonetheless that writing Hamiltonian observables in this framework turns out to be really difficult, as only one of the position or momentum observables can be strictly represented. This means that for the other one it is necessary to rely on some kind of approximation, obtained by introducing an arbitrary scale and choosing a periodic potential whose minima match those of the usual quadratic potential. The huge uncertainty in this procedure has been highlighted by Corichi, Zapata, Vukašinac and collaborators.
The standard choice leads to an equation known as the <a href="https://en.wikipedia.org/wiki/Mathieu_function#Mathieu_equation">Mathieu equation</a> but other simple choices have been explored, as the one shown in the figure.<br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><figure><center></center><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-il2_uOj7reM/VWC_dQBWgJI/AAAAAAAAHlY/WpTwbNcNhTY/s1600/figurablog.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="288" src="http://4.bp.blogspot.com/-il2_uOj7reM/VWC_dQBWgJI/AAAAAAAAHlY/WpTwbNcNhTY/s320/figurablog.jpg" width="320" /></a></div><figcaption><strong>Energy eigenvalues (bands) of a polymerized harmonic oscillator.</strong> The horizontal axis shows the position (or the momentum depending on the chosen representation), the vertical axis is the energy and the red line represents the particular periodic extension of the potential used to approximate the usual quadratic potential of the HO. The other lines plotted in this graph correspond to auxiliary functions that can be used to locate the edges of the bands that define the point spectrum in the present example.</figcaption></figure> <br />As we have already mentioned, the orthonormal bases in non separable Hilbert spaces are uncountable. A consequence of this is the fact that the orthonormal basis provided by the eigenstates of the Hamiltonian must be uncountable, i.e. the Hamiltonian must have an uncountable infinity worth of eigenvalues (counted with multiplicity). A somewhat unexpected result that can be proved by invoking classical theorems on functional analysis in non-separable Hilbert spaces is the fact that these eigenvalues are gathered in bands. 
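The band structure of such a periodic eigenvalue problem can be probed numerically with textbook Floquet theory (an illustrative computation, not the method of [1]): an energy a lies in a band of the Mathieu equation y'' + (a − 2q cos 2x) y = 0 exactly when the trace of the monodromy (transfer) matrix over one period stays in [−2, 2].

```python
import math

def monodromy_trace(a, q=1.0, steps=2000):
    """Hill discriminant of y'' + (a - 2*q*cos(2x)) y = 0: trace of the
    transfer matrix over one period [0, pi], integrated with classical RK4."""
    h = math.pi / steps

    def rhs(x, y, v):
        # y' = v,  v' = (2*q*cos(2x) - a) * y
        return v, (2.0 * q * math.cos(2.0 * x) - a) * y

    def integrate(y, v):
        x = 0.0
        for _ in range(steps):
            k1y, k1v = rhs(x, y, v)
            k2y, k2v = rhs(x + h / 2, y + h / 2 * k1y, v + h / 2 * k1v)
            k3y, k3v = rhs(x + h / 2, y + h / 2 * k2y, v + h / 2 * k2v)
            k4y, k4v = rhs(x + h, y + h * k3y, v + h * k3v)
            y += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
            v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
            x += h
        return y, v

    y1, _ = integrate(1.0, 0.0)  # solution with y(0) = 1, y'(0) = 0
    _, v2 = integrate(0.0, 1.0)  # solution with y(0) = 0, y'(0) = 1
    return y1 + v2

def in_band(a, q=1.0):
    """Floquet criterion: a belongs to a band iff |discriminant| <= 2."""
    return abs(monodromy_trace(a, q)) <= 2.0

# For q = 1 the first band is roughly (-0.455, -0.110), followed by a gap,
# with a second band starting near 1.859:
print(in_band(-0.3), in_band(1.0), in_band(3.0))  # True False True
```

Scanning a over a fine grid with this criterion reproduces the band edges plotted in the figure below.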
It is important to point out here that only the lowest-lying part of the spectrum is expected to mimic reasonably well that of the standard HO; however, a huge difference persists: even the narrowest bands contain a continuum of eigenvalues.<br /><h2> Some physical consequences </h2>The fact that the spectrum of the polymerized harmonic oscillator displays this band structure is relevant for some applications of polymerized quantum mechanics. Two main issues were mentioned in the talk. On one hand, the statistical mechanics of polymerized systems must be handled with due care. Owing to the features of the spectrum, the counting of energy eigenstates necessary to compute the entropy in the microcanonical ensemble is ill defined. A similar problem crops up when computing the partition function of the canonical ensemble. These problems can probably be circumvented by using an appropriate regularization and also by relying on some superselection rules that eliminate all but a countable subset of energy eigenstates of the system.<br /><br />A setting where something similar can be done is in the polymer quantization of the scalar field (already considered by Husain, Pawłowski and collaborators). As this system can be thought of as an infinite ensemble of harmonic oscillators, the specific features of their (polymer) quantization will play a significant role. A way to avoid some difficulties here also relies on the elimination of unwanted energy eigenvalues by imposing superselection rules, as long as they can be physically justified.<br /><h2>Bibliography</h2>[1] J.F. Barbero G., J. Prieto and E.J.S. Villaseñor, <i>Band structure in the polymer quantization of the harmonic oscillator</i>, Class. Quantum Grav. <strong>30</strong> (2013) 165011.<br />[2] W. Chojnacki, <i>Spectral analysis of Schrödinger operators in non-separable Hilbert spaces</i>, Rend. Circ. Mat. Palermo (2), <strong>Suppl.
17</strong> (1987) 13551.<br />[3] H. Halvorson, <i>Complementarity of representations in quantum mechanics</i>, Stud. Hist. Phil. Mod. Phys. <strong>35</strong> (2004) 45-56.Jorge Pullinhttp://www.blogger.com/profile/07465581283254332265noreply@blogger.com0tag:blogger.com,1999:blog-5826632960356694090.post-12674970544961376372015-05-05T12:19:00.000-05:002015-05-05T12:19:01.547-05:00Cosmology with group field theory condensates<span style="background-color: white;">Tuesday, Feb 24th</span><br /><b>Steffen Gielen, Imperial College </b><br /><b>Title: Cosmology with group field theory condensates </b><span style="background-color: white;"></span><br /><a href="http://relativity.phys.lsu.edu/ilqgs/gielen022415.pdf">PDF</a><span style="background-color: white;"> of the talk (136K)</span><br /><a href="http://relativity.phys.lsu.edu/ilqgs/gielen022415.wav">Audio</a><span style="background-color: white;"> [.wav 39MB]</span><br /><span style="background-color: white;"><br /></span><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-38P0Kz1NgRA/VTuqFHfCzyI/AAAAAAAAHbg/Ll1nIzrzcSw/s1600/re.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-38P0Kz1NgRA/VTuqFHfCzyI/AAAAAAAAHbg/Ll1nIzrzcSw/s1600/re.jpg" /></a></div><span style="background-color: white;">by Mercedes Martín-Benito, Radboud University</span><br /><span style="background-color: white;"><br /></span><br /><div class="p1">One of the most important open questions in physics is how gravity (or in other words, the geometry of spacetime) behaves when the energy densities are huge, of the order of the Planck density. Our most reliable theory of gravity, general relativity, fails to describe the gravitational phenomena in high energy density regimes, as it generically leads to singularities.
These regimes are achieved for example at the origin of the universe or in the interior of black holes, and therefore we do not yet have a consistent explanation for these phenomena. We expect quantum gravity effects to be important in such situations, but general relativity, being a theory that treats the geometry of the spacetime as classical, does not take those quantum gravity effects into account. Thus, in order to describe black holes or the very early universe in a physically meaningful way it seems unavoidable to quantize gravity.</div><div class="p2"><br /></div><div class="p1">The quantization of gravity not only requires attaining a mathematically well-described theory with predictive power, but also comparing its predictions with observations to check that they agree. The regimes where quantum gravity plays a fundamental role, such as black holes or the early universe, might seem very far from our observational or experimental reach. Nevertheless, thanks to the big progress that precision cosmology has undergone in the last decades, in the near future we may be able to get observational data about the very initial instants of the universe that could be sensitive to quantum gravity effects. We need to be prepared for that, putting our quantum gravity theories to work in order to extract cosmological predictions from them.</div><div class="p2"><br /></div><div class="p1">This is the main goal of Steffen's analysis. He bases his research on the approach to quantum gravity known as Group Field Theory (GFT). GFT defines a path integral for gravity, namely, it replaces the classical notion of a unique solution for the geometry of the spacetime with a sum over an infinity of possibilities to compute a quantum amplitude. The formalism that it uses is pretty much like the usual quantum field theory formalism employed in particle physics.
There, given a process involving particles, the different possible interactions contributing to that process are described by so-called Feynman diagrams, which are later summed up in a consistent way to finally lead to the transition amplitude of the process that we are trying to describe. GFT follows that strategy. The corresponding Feynman diagrams are spinfoams, and represent the different dynamical processes that contribute to a particular spacetime configuration. GFT is thus linked to Loop Quantum Gravity (LQG), since spinfoams are one main proposal for defining the dynamics of LQG. The GFT Feynman expansion extends and completes this definition of the LQG dynamics by trying to determine how these diagrams must be summed up in a controlled way to obtain the corresponding quantum amplitude. </div><div class="p2"><br /></div><div class="p1">GFT is a fundamentally discrete theory, with a large number of microscopical degrees of freedom. These degrees of freedom might organize themselves, somehow following a collective behavior, to lead to different phases of the theory. The hope is to find a phase that in the continuum limit agrees with having a smooth spacetime as described by the classical theory of general relativity. In this way, we would make the link between the underlying quantum theory and the classical one that explains very well the gravitational phenomena in regimes where quantum gravity effects are negligible. To understand this, let us make the analogy with a more familiar theory: hydrodynamics. </div><div class="p2"><br /></div><div class="p1">We know that the fundamental microscopical constituents of a fluid are molecules. The dynamics of these micro-constituents is intrinsically quantum; however, these degrees of freedom display a collective behavior that leads to macroscopic properties of the fluid, such as its density, its velocity, etc. In order to study these properties it is enough to apply the classical theory of hydrodynamics.
However we know that it is not the fundamental theory describing the fluid, but an effective description coming from an underlying quantum theory (condensed matter theory) that explains how the atoms form the molecules, and how these interact among themselves giving rise to the fluid. </div><div class="p2"><br /></div><div class="p2">The continuum spacetime that we are used to might emerge, in a similar way to the example of the fluid, from the collective behavior of many many quantum building blocks, or atoms of spacetime. This is, in plain words, the point of view employed in the GFT approach to quantum gravity.</div><div class="p2"><br /></div><div class="p1">While GFT is still under construction, it is mature enough to try to extract physics from it. With this aim, Steffen and his collaborators are working on obtaining effective dynamics for cosmology starting from the general framework of GFT. The simplest solutions of Einstein equations are those with spatial homogeneity. These turn out to describe cosmological solutions, which approximate rather well at large scales the dynamics of our universe. Then, in order to get effective cosmological equations from their GFT, they postulate very particular quantum states that, involving all the degrees of freedom of the GFT, are states with collective properties that can give rise to a homogeneous and continuous effective description. The similarities between GFT and condensed matter physics allow Steffen and collaborators to exploit the techniques developed in condensed matter. In particular, based on the experience with Bose-Einstein condensates, the states that they postulate can be seen as condensates.
</div><div class="p2"><br /></div><div class="p2"><br /></div><div class="p2"><br /></div><div class="p2"><br /></div><div class="p2"><br /></div><div class="p2"><br /></div><div class="p2"><br /></div><div class="p2"><br /></div><div class="p2"><br /></div><div class="p2"><br /></div><div class="p2"><br /></div><div class="p2"><br /></div><div class="p2"><br /></div><br /><div class="p1">The collective behavior that the degrees of freedom display leads, in fact, to a homogeneous description in the macroscopic limit. The effective equations that they obtain agree in the classical limit with cosmological equations, but remarkably retain the main effects coming from the underlying quantum theory. More specifically, these effective equations know about the fundamental discreteness, as they explicitly get corrections (not present in the standard classical equations) that depend on the number of quanta (spacetime “atoms”) in the condensate. These results form the basis of a general programme for extracting effective cosmological dynamics directly from a microscopic non-perturbative theory of quantum gravity. 
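The flavor of such quantum-corrected cosmological equations can be sketched with a toy "bounce-type" effective Friedmann equation of the kind obtained in loop-inspired cosmologies, H² = (8πG/3) ρ (1 − ρ/ρ_c), with ρ_c a critical density set by the underlying quantum theory. The exact equations and their dependence on the number of quanta are in the talk; the form and numbers below are purely illustrative.

```python
# Toy bounce-type effective Friedmann equation, in units where 8*pi*G/3 = 1.
# rho_c (the critical density) is an illustrative placeholder value.

rho_c = 1.0

def hubble_squared(rho):
    """Quantum-corrected expansion rate squared, H^2 = rho * (1 - rho/rho_c)."""
    return rho * (1.0 - rho / rho_c)

# At low density the correction is negligible (H^2 ~ rho, the classical law),
# while at rho = rho_c the expansion rate vanishes: the universe bounces
# instead of running into a singularity.
for rho in (0.01, 0.5, 1.0):
    print(rho, hubble_squared(rho))
```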
</div>Jorge Pullinhttp://www.blogger.com/profile/07465581283254332265noreply@blogger.com0tag:blogger.com,1999:blog-5826632960356694090.post-14067246795120705562014-11-24T13:16:00.000-06:002014-11-24T13:16:27.198-06:00Quantum theory from information inference principles<div class="" style="font-size: 18px; margin: 0px;"><span style="background-color: white; font-size: small;">Tuesday, Nov 11th</span><br /><b style="font-size: medium;">Philipp Hoehn, Perimeter Institute </b><br /><b style="font-size: medium;">Title: Quantum theory from information inference principles </b><span style="background-color: white; font-size: small;"></span><br /><a href="http://relativity.phys.lsu.edu/ilqgs/hoehn111114.pdf" style="font-size: medium;">PDF</a><span style="background-color: white; font-size: small;"> of the talk (800k)</span><br /><a href="http://relativity.phys.lsu.edu/ilqgs/hoehn111114.wav" style="font-size: medium;">Audio</a><span style="background-color: white; font-size: small;"> [.wav 40MB]</span></div><div class="" style="font-size: 18px; margin: 0px;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-Zvaow2vNRzE/VHNb_tIGxfI/AAAAAAAAHIs/ZMEsYHatPiQ/s1600/hoehn_philipp.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-Zvaow2vNRzE/VHNb_tIGxfI/AAAAAAAAHIs/ZMEsYHatPiQ/s1600/hoehn_philipp.jpg" /></a></div><div class="" style="font-size: 18px; margin: 0px;"><br /></div><div class="" style="font-size: 18px; margin: 0px;">by Matteo Smerlak, Perimeter Institute</div><div class="" style="font-size: 18px; margin: 0px;"><br /></div><div class="" style="font-size: 18px; margin: 0px;"><br /></div><div class="" style="font-size: 18px; margin: 0px;"><br /></div><div class="" style="font-size: 18px; margin: 0px;">When a new theory enters the scene of physics, a succession of events normally takes place: at first, nobody cares; then a 
minority starts playing with the maths while the majority insists that the theory is obviously wrong; farther down the road, we find the majority using the maths on a daily basis and all arguing that the theory is so beautiful, it can only be right; along the way, thanks to many years of practice, a new kind of intuition grows out of the formalism, and our entire picture of reality changes accordingly. This is the process of science.</div><div class="" style="font-size: 18px; margin: 0px; min-height: 22px;"><br class="" /></div><div class="" style="font-size: 18px; margin: 0px;">For some reason, though, the eventual shift from formalism to intuition never happened for quantum mechanics (QM). Ninety years after its discovery, specialists still call QM “weird”, teachers still quote Feynman claiming that “nobody really understands QM”, and philosophers still discuss whether QM requires us to be “antirealist”, “neo-Kantian”, “Bayesian”… you name it. Niels Bohr wanted new theories to be “crazy enough”, but it seems this one is just <i class="">too </i>crazy. And yet it works!</div><div class="" style="font-size: 18px; margin: 0px; min-height: 22px;"><br class="" /></div><div class="" style="font-size: 18px; margin: 0px;">In the face of this puzzle, a school of thought initiated by Birkhoff and von Neumann in the thirties has declared it its mission to <i class="">reconstruct </i>QM. The idea is simple: if you don’t get how the machine works, then roll up your sleeves, take the machine apart, and build it again—from scratch. Indeed this is how Einstein dealt with the symmetry group of Maxwell’s equations (and its mysterious action on lengths and durations): he found two intuitive physical principles—the relativity principles—and <i class="">derived </i>the Lorentz group (the set of symmetries of Maxwell's equations) from them.
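That derivation can be made concrete in a few lines of linear algebra (an illustrative aside, not part of the seminar): a Lorentz boost Λ is precisely a linear map that preserves the Minkowski metric η, in the sense that ΛᵀηΛ = η, so the interval −t² + x² looks the same to every inertial observer.

```python
import math

def boost(beta):
    """Lorentz boost with velocity beta (units with c = 1), acting on (t, x)."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return [[gamma, -gamma * beta],
            [-gamma * beta, gamma]]

def congruence(L, eta):
    """Compute L^T * eta * L for 2x2 matrices given as nested lists."""
    return [[sum(L[k][i] * eta[k][m] * L[m][j]
                 for k in range(2) for m in range(2))
             for j in range(2)]
            for i in range(2)]

eta = [[-1.0, 0.0], [0.0, 1.0]]  # Minkowski metric on (t, x), signature (-, +)
g = congruence(boost(0.6), eta)
print(g)  # ≈ eta: the interval -t^2 + x^2 is preserved by the boost
```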
Thus special relativity was “really understood”.<br /><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-YXQ6OR_0Ihc/VHNap9NnaPI/AAAAAAAAHH8/h7LCoo73diU/s1600/Pasted%2BGraphic.tiff" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-YXQ6OR_0Ihc/VHNap9NnaPI/AAAAAAAAHH8/h7LCoo73diU/s1600/Pasted%2BGraphic.tiff" height="163" width="320" /></a></div><div class="" style="font-size: 18px; margin: 0px; min-height: 22px;"><br class="" /></div><div class="" style="margin: 0px;"><br /></div><div class="" style="font-size: 18px; margin: 0px; min-height: 22px;"><br class="" /></div><div class="" style="font-size: 18px; margin: 0px;">Much recent work towards a reconstruction of QM has taken place within a framework called “generalized probability theories” (GPT). This approach elaborates on basic notions such as <i class="">preparations, transformations</i> and <i class="">measurements</i>. The main achievement of GPT has been to locate QM within a more general landscape of possible modifications of classical probability theory. It has shown, for instance, that QM is not the most non-local theory consistent with what is known as the no-signaling property: stronger correlations than quantum entanglement are in principle possible, though they are not realized in nature. To understand what <i class="">is</i>, we must know what else<i class=""> could have been</i>—thus speak GPT proponents. </div><div class="" style="font-size: 18px; margin: 0px; min-height: 22px;"><br class="" /></div><div class="" style="font-size: 18px; margin: 0px;">Philipp uses a different language for his reconstruction of QM: instead of <i class="">measurements </i>and <i class="">states, </i>he talks about <i class="">questions </i>and <i class="">answers. 
</i>The semantic shift is not innocent: while a “measurement” uncovers the intrinsic state of a system, a “question” only brings information to whoever asks it—that is, a question relates to <i class="">two </i>entities (the system <i class="">and</i> the observer/interrogator) rather than just one (the system). Because there isn’t anybody out there to ask questions about <i class="">everything</i>, there is no such thing as the “state of the universe”, Philipp says!<br /><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-MAppSI7xsFQ/VHNaxYCea4I/AAAAAAAAHIE/mC8IFn0LPCU/s1600/world.tiff" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-MAppSI7xsFQ/VHNaxYCea4I/AAAAAAAAHIE/mC8IFn0LPCU/s1600/world.tiff" height="208" width="320" /></a></div><div class="" style="font-size: 18px; margin: 0px; min-height: 22px;"><br /></div><div class="" style="font-size: 18px; margin: 0px; min-height: 22px;"><br class="" /></div><div class="" style="font-size: 18px; margin: 0px;">This so-called “relational” questions/answers approach to QM was advocated twenty years ago by Rovelli, who emphasized its similarity with the structure of gravitation (time is relative, remember?). He also proposed two basic informational principles: one states that the total information that an observer O can gather about a system S is limited; the second specifies that, even when O has obtained the maximum amount of information about S, she can still learn something about S by asking other, “complementary” questions. Thence non-commuting operators! Similar ideas were discussed independently by Zeilinger and Brukner—and Philipp embraces them wholeheartedly.</div><div class="" style="font-size: 18px; margin: 0px; min-height: 22px;"><br class="" /></div><div class="" style="font-size: 18px; margin: 0px;">But he also takes a big step further. 
Adding four more postulates to Rovelli’s (which he calls completeness, preservation, time evolution and locality), Philipp shows how to reconstruct the set Σ of all possible states of S relative to O (together with its isometry group, representing possible time evolutions). For a quantum system allowing only one independent question—a qubit—Σ is a three-dimensional ball, the Bloch sphere. (Note that a 3-ball is a much bigger space than a 1-ball, the state space of a classical bit—enter quantum computing…) For systems with more independent questions, i.e. N qubits, Σ is the mathematical structure known as the convex cone over some complex projective space—not quite what is known as a Calabi-Yau manifold, but still a challenge for the mind to picture.</div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-oiNjs4H60WQ/VHNbWwfe6TI/AAAAAAAAHIc/A7CpjMYR0TE/s1600/Screen%2BShot%2B2014-11-24%2Bat%2B10.22.29%2BAM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-oiNjs4H60WQ/VHNbWwfe6TI/AAAAAAAAHIc/A7CpjMYR0TE/s1600/Screen%2BShot%2B2014-11-24%2Bat%2B10.22.29%2BAM.png" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="" style="font-size: 18px; margin: 0px; min-height: 22px;">N=2 turns out to be the most difficult case: once this one is solved—Philipp says this took him a full year, with inputs from his collaborator Chris Wever—, higher N’s follow rather straightforwardly. This is a reflection of a crucial aspect of QM: quantum systems are “monogamous”, meaning that they can establish strong correlations (aka “entanglement”) with just one partner at a time. Philipp’s questions/answers formulation provides a new and detailed understanding of this peculiar correlation structure, which he represents as a spherical tiling. 
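The 3-ball Philipp recovers for a single qubit is the familiar Bloch ball of textbook quantum mechanics, whose boundary sphere carries the pure states. A quick standalone check of the "states = unit 3-ball" statement, using the standard Bloch parametrization rather than Philipp's question-based derivation:

```python
# A qubit state can be written rho = (I + r.sigma)/2 with sigma the Pauli
# matrices; rho is a legitimate (positive, trace-one) density matrix exactly
# when the Bloch vector r = (rx, ry, rz) satisfies |r| <= 1.

def rho_from_bloch(rx, ry, rz):
    """Density matrix rho = (I + r . sigma)/2 as a 2x2 complex matrix."""
    return [[(1 + rz) / 2, (rx - 1j * ry) / 2],
            [(rx + 1j * ry) / 2, (1 - rz) / 2]]

def is_qubit_state(rx, ry, rz):
    """rho has unit trace by construction, so positivity is equivalent to
    det(rho) >= 0; a direct computation gives det(rho) = (1 - |r|^2)/4."""
    m = rho_from_bloch(rx, ry, rz)
    det = (m[0][0] * m[1][1] - m[0][1] * m[1][0]).real
    return det >= 0.0

print(is_qubit_state(0.0, 0.0, 0.0))  # True: maximally mixed state (center)
print(is_qubit_state(0.0, 0.0, 1.0))  # True: a pure state (boundary sphere)
print(is_qubit_state(0.9, 0.9, 0.0))  # False: |r| > 1, outside the Bloch ball
```

A classical bit, by contrast, only gets the 1-ball [−1, 1] of probability mixtures of its two values, which is why the 3-ball matters for quantum computing.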
“QM is beautiful!”, says Philipp.<br /><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-EAE5kTcSdxw/VHNbcmlaA3I/AAAAAAAAHIk/i3eurB82TZw/s1600/pentagons.tiff" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-EAE5kTcSdxw/VHNbcmlaA3I/AAAAAAAAHIk/i3eurB82TZw/s1600/pentagons.tiff" height="194" width="320" /></a></div><div class="" style="font-size: 18px; margin: 0px;">One limitation of Philipp’s current approach—also pointed out by the audience—is the restriction to binary (or yes/no) questions. A spin-1 particle, for instance, falls outside this framework, for it can give <i class="">three</i> different answers to the question “what is your spin in the z direction?”, namely “up”, “down” or “zero”. Can Philipp deal with such a ternary question, and reconstruct the 8-dimensional state space of a quantum “trit”? We wish him to find the answer within… less than a year! </div><div class="" style="font-size: 18px; margin: 0px; min-height: 22px;"><br class="" /></div>Jorge Pullinhttp://www.blogger.com/profile/07465581283254332265noreply@blogger.com0tag:blogger.com,1999:blog-5826632960356694090.post-52116688555892828342014-04-28T12:52:00.000-05:002014-04-28T12:52:39.873-05:00Holographic special relativity: observer dependent geometry<h3><b style="font-size: medium;">Derek Wise, FAU Erlangen</b><br style="font-size: medium; font-weight: normal;" /><b style="font-size: medium;">Title: Holographic special relativity: observer space from conformal geometry </b><br style="font-size: medium; font-weight: normal;" /><a href="http://relativity.phys.lsu.edu/ilqgs/wise101513.pdf" style="font-size: medium; font-weight: normal;">PDF</a><span style="background-color: white; font-size: small; font-weight: normal;"> of the talk (600k) </span><a href="http://relativity.phys.lsu.edu/ilqgs/wise101513.wav" style="font-size: medium; font-weight: normal;">Audio</a><span
style="background-color: white; font-size: small; font-weight: normal;"> [.wav 38MB] </span><a href="http://relativity.phys.lsu.edu/ilqgs/wise101513.aif" style="font-size: medium; font-weight: normal;">Audio</a><span style="background-color: white; font-size: small; font-weight: normal;"> [.aif 4MB] </span></h3><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-LJdSR1gXqQk/UH8mtAKDvtI/AAAAAAAADmA/Qdbh6bqWnoo/s1600/wise.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-LJdSR1gXqQk/UH8mtAKDvtI/AAAAAAAADmA/Qdbh6bqWnoo/s1600/wise.jpg" height="320" width="185" /></a></div><h3>by Sean Gryb, Radboud University</h3><h3><br /></h3><h3>Introduction</h3><br />In Roman mythology, <a href="http://en.wikipedia.org/wiki/Janus">Janus</a> was the god of gateways, transitions, and time, whose two distinct faces are depicted peering in opposite directions, as if bridging two different regions (or epochs) of the Universe. The term “Janus-faced” has come to mean a person or thing that simultaneously embodies two polarized features, and the Janus head has come to represent the embodiment of these two distinct features into one.<br /><br />In this talk (based on the paper <a href="http://arxiv.org/abs/1305.3258" target="_blank">[1]</a>), <a href="http://www.math.ucdavis.edu/~derek/index.html" target="_blank">Derek Wise</a> explores the possibility that spacetime itself might be Janus-faced. He explores an intriguing relationship between the structure of expanding spacetime and the scale-invariant description of a sphere. What he finds is a mathematical relationship providing a bridge between these two Janus faces that distinctly represent events in the Universe. 
This bridge is remarkably similar to the picture of reality proposed by the <a href="http://en.wikipedia.org/wiki/Holographic_principle" target="_blank"><em>holographic principle</em></a> and, in particular, the <a href="http://en.wikipedia.org/wiki/AdS/CFT_correspondence" target="_blank"><em>AdS/CFT correspondence</em></a> where, on one side, there is the usual spacetime description of events and, on the other, there is a way to imprint these events onto the 3-dimensional boundary of this spacetime. <br /><br />Aside from providing an alternative to spacetime, Derek's picture may even help illuminate the deeper structures behind a recent formulation of <a href="http://en.wikipedia.org/wiki/General_relativity" target="_blank">general relativity (GR)</a> called <a href="http://en.wikipedia.org/wiki/Shape_dynamics" target="_blank"><em>Shape Dynamics</em></a>, which I will come to at the end of this post. But to begin, I will try to explain Derek's result by first giving a description of the spacetime aspect of the Janus face and then describe how a link can be established to a completely distinct face, which, as we will see, is a description of events in the Universe that is completely free of any notion of scale. The key points of the discussion are summarized beautifully in the depiction of Janus given below, by <a href="http://www.bumblenut.com/" target="_blank">Marc Ngui</a>, who has provided all the images for this post. The diagram shows how, as I will describe later, events seen by observers in spacetime can be described by information on the boundary. I encourage the reader to revisit this image as its main elements are progressively explained throughout the text. 
<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-09paOzh0T3w/Uz22kMnKtfI/AAAAAAAAE0k/ojWJCpv6NYY/s1600/janus_final.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-09paOzh0T3w/Uz22kMnKtfI/AAAAAAAAE0k/ojWJCpv6NYY/s1600/janus_final.jpg" height="320" width="266" /></a></div><h3>Relativity, Observers, and Spacetime</h3>In 1908, <a href="http://en.wikipedia.org/wiki/Hermann_Minkowski" target="_blank">Hermann Minkowski</a> made a great discovery: Einstein's new theory of <a href="http://en.wikipedia.org/wiki/Special_relativity" target="_blank">Special Relativity</a> could be cast into a beautiful framework, one that Minkowski recognized as a kind of union of space and time. In his own words: “space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.”<a href="http://books.google.nl/books/about/The_Principle_of_Relativity.html?id=S1dmLWLhdqAC&redir_esc=y" target="_blank">[2]</a> To understand what Minkowski meant, let's go back to 1904-1905 in order to retrace the discoveries that spawned Minkowski's revolution. <br /><br />Relativity concerns the way in which different observers organize information about ‘when’ and ‘where’ events take place. Einstein realized that this system of organization should have two properties: i) it should work the same way for each observer, and ii) it should involve a set of rules that allows different observers to consistently compare information about the same events. 
This means that different observers don't necessarily need to agree on <em>when</em> and <em>where</em> a particular event took place, but they do need to agree on how to compare the information gathered by different observers. Einstein expressed this requirement in his <em>principle of relativity</em>, to which he gave primary importance within physical theories. The key point is that relativity is fundamentally a statement about <em>observers</em> and how they collect and compare information about events. Minkowski's conception of spacetime comes afterwards, and it comes about through the specific mathematical properties of the rules used to collect and compare the relevant information. <br /><br />To try to understand how spacetime works, we will use a slightly more modern version of spacetime than the one used by Minkowski — one with all the same essential properties as the original, but which can accommodate the observed accelerated expansion of space. This kind of spacetime was first studied by Willem de Sitter, and is named <a href="http://en.wikipedia.org/wiki/De_Sitter_space" target="_blank">de Sitter (dS) spacetime</a> after him. It has the basic shape depicted by the blue grid in the Janus image above. Because this space is curved, it is most convenient to describe it by putting it into a larger dimensional flat space (just like the 2D surface of a sphere depicted in a 3D space). This means that we can label events in this spacetime by 5 numbers: 4 space components, labeled <em>(x, y, z, w)</em> and one time component, <em>t</em>, that obey the relation<br /><br /><center><em>x<sup>2</sup> + y<sup>2</sup> + z<sup>2</sup> + w<sup>2</sup> - t<sup>2</sup> = ℓ<sup>2</sup></em>. (1)</center><br />This restriction (which serves as the definition of this spacetime) means that the 4 space components are not all independent. 
Indeed, the single constraint above removes one independent component, leaving the 3 space directions we know and love. The parameter <em>ℓ</em> is related to the cosmological constant and dictates how fast space is expanding. Adjusting its value changes the shape of the spacetime as illustrated in the figure below. <br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-f092bTS3TNo/Uz229kr0PjI/AAAAAAAAE0s/M-7h5UcEvTA/s1600/dS_limits_final.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-f092bTS3TNo/Uz229kr0PjI/AAAAAAAAE0s/M-7h5UcEvTA/s1600/dS_limits_final.jpg" height="192" width="320" /></a></div><br />The middle spacetime in blue depicts a typical dS spacetime. Increasing the parameter ℓ decreases the rate of expansion so that, if <em>ℓ → ∞</em>, the spacetime barely expands at all and looks more like the purple cylinder on the right. The opposite extreme, when <em>ℓ → 0</em>, is the yellow <em>light cone</em>, which is named that way because the space is expanding at its maximum rate: the speed of light. This extreme limit will be very important for our considerations later. <br /><br />Although this model of spacetime is dramatically simplified, it remarkably describes, to a good approximation, two important phases of our real Universe: i) the period of exponential expansion (or <a href="http://en.wikipedia.org/wiki/Inflation_(cosmology)" target="_blank"><em>inflation</em></a>), which we believe took place in the early history of our Universe, and ii) the present and foreseeable future. Different observers compare the labels they attribute to each event by performing transformations that leave the form of (1), and thus the shape of dS spacetime, unchanged. Because of this property, these transformations constitute <em>symmetries</em> of dS spacetime. 
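The defining relation (1) and one of its symmetries are easy to check numerically. The short Python sketch below (numpy assumed; the helper names are illustrative, not from the talk) tests whether a point lies on the dS hyperboloid with ℓ = 1, and verifies that a Lorentz boost mixing the w and t coordinates leaves the constraint, and hence the spacetime, unchanged.

```python
import numpy as np

ell = 1.0  # the length scale in eq. (1); sets the expansion rate

def on_dS(p, tol=1e-9):
    """Does p = (x, y, z, w, t) satisfy x^2 + y^2 + z^2 + w^2 - t^2 = ell^2?"""
    x, y, z, w, t = p
    return abs(x * x + y * y + z * z + w * w - t * t - ell * ell) < tol

def boost_wt(p, rapidity):
    """A Lorentz boost mixing the w and t coordinates: one dS symmetry."""
    x, y, z, w, t = p
    ch, sh = np.cosh(rapidity), np.sinh(rapidity)
    return (x, y, z, ch * w + sh * t, sh * w + ch * t)

p = (0.5, 0.5, 0.5, 0.5, 0.0)   # 4 * 0.25 - 0 = 1 = ell^2, so p is on dS
print(on_dS(p))                 # True
print(on_dS(boost_wt(p, 2.3)))  # True: the boost preserves the hyperboloid
```

The invariance follows from cosh²(η) − sinh²(η) = 1, so w′² − t′² = w² − t² for any rapidity η.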
Since the transformations that real observers must use to compare information about real events just happen to correspond to spacetime symmetries, it is no wonder that the notion of spacetime has had such a profound influence on physicists’ view of reality. However, we will shortly see that these rules can be recast into a completely different form, which tells a different story of what is happening. <br /><!-- SPACETIME'S JANUS FACE --> <br /><h3>Spacetime's Janus Face</h3>We will now see how the symmetries of observers in dS spacetime can be rewritten in terms of symmetries that preserve angles, but not necessarily distances, in space. In particular, all information about scale is removed. In mathematics, these are called <a href="http://en.wikipedia.org/wiki/Conformal_symmetry" target="_blank"><em>conformal symmetries</em></a>. This means that different observers have a choice when analyzing information that they collect about events: either they can imagine that these events have taken place in dS spacetime, and are consequently related by the dS symmetries; or they can imagine that these events are representing information that can be expressed in terms of angles (and not lengths), and are consequently related by conformal symmetries. <br /><br />To understand how this can be so, consider the very distant future and the very distant past: where dS spacetime and the light cone nearly meet. This extreme region is called the <em>conformal sphere</em> because it is a sphere and also because it is where the dS symmetries correspond to conformal symmetries. <br /><br />In fact, any cross-section of the light cone formed by cutting it with a spatial plane (as illustrated in the diagram below) is a different representative of the conformal sphere since these different cross-sections will disagree on distances but will agree on angles. 
Although the intersection looks like a circle (represented in dark green), it is actually a 3-dimensional sphere because we have cut out 2 of the spatial dimensions (which we can't draw on a 2 dimensional page). <br /><br />To see how events on this 3d sphere can be represented in a scale-invariant way on a 3d plane, we can use a handy technique called a <a href="http://en.wikipedia.org/wiki/Stereographic_projection" target="_blank">stereographic projection</a>. The stereographic projection is often used for map drawing where the round earth has to be drawn onto a flat map. One of its key properties, namely that it preserves angles, means that maps drawn in this way are useful for navigating since an angle on the map corresponds to the same angle on the Earth. It is precisely this property that will make the stereographic projection useful for us here. <br /><br />To perform a stereographic projection, imagine picking a point on a sphere, which we can interpret as the location of a particular observer on the sphere (represented by an eye in the diagram below), and call this the South Pole. Now imagine putting a light on the North Pole and letting it shine through the space that the sphere has been drawn in. Suppose our sphere is filled with points. Then, the shadow of these points will form an image on the plane tangent to the sphere at the South Pole. The picture below illustrates what is going on. Points on the sphere are represented by stars and the yellow rays indicate how their image is formed on the plane. 
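The projection just described can be written down explicitly. Here is a small Python sketch (numpy assumed; not from the talk), for the ordinary 2-sphere rather than the 3-sphere of the text: it projects from the North Pole (0, 0, 1) of the unit sphere onto the plane z = −1 tangent at the South Pole, and checks numerically that the angle between two curves through a point is the same before and after projection.

```python
import numpy as np

def project(p):
    """Stereographic projection from the north pole (0, 0, 1)
    onto the plane z = -1 tangent at the south pole."""
    x, y, z = p
    s = 2.0 / (1.0 - z)          # ray parameter where the light ray meets z = -1
    return np.array([s * x, s * y])

def angle(u, v):
    return np.arccos(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# A point on the unit sphere and two tangent directions there
p = np.array([0.6, 0.0, 0.8])
u = np.array([0.0, 1.0, 0.0])    # tangent direction (u . p = 0)
v = np.array([0.8, 0.0, -0.6])   # another tangent direction (v . p = 0)
eps = 1e-6

# Angle between the two directions on the sphere, and between their images
a_sphere = angle(u, v)
a_plane = angle(project(p + eps * u) - project(p),
                project(p + eps * v) - project(p))
print(np.isclose(a_sphere, a_plane, atol=1e-4))  # True: angles are preserved
```

Here u and v are orthogonal, so both angles come out as π/2; the finite-difference step eps only approximates the tangent map, hence the small tolerance.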
<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-LGWn9Y21ysQ/Uz23QnJn6zI/AAAAAAAAE00/Gjr04rppXKM/s1600/stereo_project_final.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-LGWn9Y21ysQ/Uz23QnJn6zI/AAAAAAAAE00/Gjr04rppXKM/s1600/stereo_project_final.jpg" height="300" width="320" /></a></div><br />It is now a relatively straightforward mathematical exercise to show that the symmetries of the light cone represent transformations on the plane that may change the size of the image, but will preserve the angles between the points. Thus, the symmetries of the cone can be understood in terms of the conformal symmetries of this plane. <br /><br />If we now move our cross-section ever further into the future or the past, then the dS spacetime begins to resemble more and more the light cone. Thus, if we can represent arbitrary events in dS spacetime by information imprinted on two cross-sections in the infinite future and infinite past, then these events can be represented in terms of the images they induce onto our projected planes, and we have obtained our objective. <br /><br />There is a simple way that this can be done. Imagine taking, as shown in the figure below, an arbitrary event in dS spacetime and drawing all the events in the distant past that could affect things that happen at this point (this region is a finite portion of the spherical cross-sections because no disturbance can travel faster than the speed of light). The result is a 2 dimensional spherical region, called the <em>particle horizon</em> indicated by the red regions in the diagram below, which grows steadily over time. You can think of this region as the proportion of dS spacetime that is visible at any particular place. In fact, you can use the relative size of this region as an indication of the time at which that event occurs. 
Because this is a notion of time that exists solely in terms of quantities defined in the distant past, it will transform under conformal symmetries. To give an idea of what this looks like, the motion of an observer from some point in the distant past to a new point in the distant future is represented by a series of concentric spheres, starting at the initial point and then spreading out to eventually cover the whole sphere. The diagram below shows how this works. The different regions <em>(a,b,c,d)</em> represent progressively growing regions corresponding to progressively later times. <br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-opqBQCLupWM/Uz23dxNfW1I/AAAAAAAAE08/claQbsA0rMY/s1600/past_horizon_final.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-opqBQCLupWM/Uz23dxNfW1I/AAAAAAAAE08/claQbsA0rMY/s1600/past_horizon_final.jpg" height="230" width="320" /></a></div><br />In this way, you can map information about events in dS spacetime to information on the conformal sphere. In other words, the picture of reality that one gets from Einstein's theory of Special Relativity is a story that can be told in two very different ways. In the first way, there are events which trace out histories in spacetime. ‘Where’ and ‘when’ a particular event takes place depends on who you are, and the information about these events can be transformed from one observer to another via the global symmetries of spacetime. In the new picture, it is the information about angles that is important. ‘Where’ and ‘how big’ things are depends on your point of view and the information about particular events can be transformed from one observer to another using conformal transformations. 
<br /><!-- From Special to General Relativity --> <br /><h3>From Special to General Relativity</h3>We have just described how to relate two very different views of how observers can collect information about the world. Until now, we have only been considering <a href="http://en.wikipedia.org/wiki/Homogeneous_space" target="_blank"><em>homogeneous spaces</em></a>: i.e., those that look the same everywhere. The class of observers we were able to consider was correspondingly restricted. It was Einstein's great insight to recognize that the same mathematical machinery needed to describe events seen by <em>arbitrary</em> observers could also be used to study the properties of gravity. The machinery in question is a generalization of Minkowski's geometry, named after Riemann. <br /><br />In order to describe <a href="http://en.wikipedia.org/wiki/Riemannian_geometry" target="_blank">Riemannian geometry</a>, it is easiest to first describe a generalization of it (which we will need later anyway), and then show how Riemannian geometry is just a special case. The generalization in question is called <a href="http://en.wikipedia.org/wiki/Cartan_geometry" target="_blank"><em>Cartan geometry</em></a>, after the great mathematician <a href="http://en.wikipedia.org/wiki/%C3%89lie_Cartan" target="_blank">Élie Cartan</a>. Cartan had the idea of building general curved geometries by modelling them off homogeneous spaces. The more general spaces are constructed by moving these homogeneous spaces around in specific ways. The geometry itself is defined by the set of rules one needs to use to compare vectors after moving the homogeneous spaces. 
These rules split into two different kinds: those that change the point of contact between the homogeneous space and the general curved space and those that don't. These different moves are illustrated for the case where the homogeneous space is a 2D sphere in the diagram below. <br /> The moves that don't change the point of contact (in the case above, this corresponds to spinning about the point of contact without rolling) constitute the local symmetries of the geometry and could, for example, correspond to what different local observers would see (in this case, spinning observers versus stationary ones) when looking at objects in the geometry. Einstein exploited this kind of structure to implement his general principle of relativity described earlier. The moves that change the point of contact (in the case above, this means rolling the ball around without slipping) give you information about the curved geometry of the general space. Einstein used a special case of Cartan geometry, which is just Riemannian geometry, where the homogeneous space is Minkowski space. He then exploited the analogue of the structure just described to explain an old phenomenon in a completely new way: gravity. In the process, he produced one of our most radical yet successful theories of physics: General Relativity. The figure below shows how the different kinds of geometry we've discussed are related. <br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-8yVTcUA0Qf0/Uz23yX3IopI/AAAAAAAAE1M/Q3S5dr71Fow/s1600/marc_geo_diag_final.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-8yVTcUA0Qf0/Uz23yX3IopI/AAAAAAAAE1M/Q3S5dr71Fow/s1600/marc_geo_diag_final.jpg" height="264" width="320" /></a></div><br />Now, consider what happens when we substitute, as we did in the last section, Minkowski's flat spacetime for de Sitter's curved, but still homogeneous, spacetime. 
We can still describe gravity, but in a way that naturally includes a cosmological constant. However, the conformal sphere is also a homogeneous space. Moreover, as we described earlier, the symmetries of this homogeneous space can be related to the dS symmetries. This suggests that it might be possible to describe gravity in terms of a Cartan geometry modelled off the conformal sphere. <br /><h3><br /></h3><h3>From the Conformal Sphere to Shape Dynamics?</h3>Cartan geometries modelled on the conformal sphere are called <em>conformal geometries</em> because the local symmetries of these geometries preserve angles, and not scale. Although we have laid out a procedure relating the model space of conformal geometries to the model space of spacetimes with a cosmological constant, it is quite another thing to rewrite gravity in terms of conformal geometry. This is, in part, because the laws governing spacetime geometry are complicated and, in part, because our prescription for relating the model spaces is also not straightforward, since it relates local quantities in spacetime to non-local quantities in the infinite future and past. Nevertheless, this exciting possibility provides an interesting future line of research. Furthermore, there are other hints that such a description might be possible. <br /><br />Using very different methods, it is possible to show that General Relativity is actually <em>dual</em> to a theory of evolving conformal geometry <a href="http://iopscience.iop.org/0264-9381/28/4/045005/" target="_blank">[3]</a>. However, the kind of conformal geometry used in this derivation has not yet been written in terms of Cartan geometry (which makes use of slightly different structures). 
This new way of describing gravity, called <a href="http://en.wikipedia.org/wiki/Shape_dynamics" target="_blank"><em>Shape Dynamics</em></a>, is perhaps making use of the interesting relationship between spacetime symmetries and conformal symmetries described here. Understanding exactly the nature of the conformal geometry in Shape Dynamics and its relation to spacetime could prove valuable in being able to understand this new way of describing gravity. Perhaps it could even be a window into understanding how the quantum theory of gravity should work? <br /><ul><li>[1] D. K. Wise, <em>Holographic Special Relativity, </em><a href="http://arxiv.org/abs/1305.3258" target="_blank">arXiv:1305.3258 [hep-th]</a>.</li><li>[2] H. Minkowski, <em>The Principle of Relativity: A Collection of Original Memoirs on the Special and General Theory of Relativity</em>, ch. Space and Time, pp. 75–91. New York: Dover, 1952.</li><li>[3] H. Gomes, S. Gryb, and T. Koslowski, <em>Einstein gravity as a 3D conformally invariant theory</em>, <a href="http://iopscience.iop.org/0264-9381/28/4/045005/" target="_blank">Class. Quant. Grav. 
28 (2011) 045005</a>, <a href="http://arxiv.org/abs/1010.2481" target="_blank">arXiv:1010.2481 [gr-qc]</a>.</li></ul><br />Jorge Pullinhttp://www.blogger.com/profile/07465581283254332265noreply@blogger.com1tag:blogger.com,1999:blog-5826632960356694090.post-86730738444006412052014-04-01T13:30:00.000-05:002014-04-01T13:30:59.082-05:00Spectral dimension of quantum geometries<b>Johannes Thürigen, Albert Einstein Institute </b><br /><b>Title: Spectral dimension of quantum geometries </b><span style="background-color: white;"></span><br /><a href="http://relativity.phys.lsu.edu/ilqgs/thueringen112613.pdf">PDF</a><span style="background-color: white;"> of the talk (1MB) </span><a href="http://relativity.phys.lsu.edu/ilqgs/thueringen112613.wav">Audio</a><span style="background-color: white;"> [.wav 39MB] </span><a href="http://relativity.phys.lsu.edu/ilqgs/thueringen112613.aif">Audio</a><span style="background-color: white;"> [.aif 4.1MB] </span><br /><span style="background-color: white;"><br /></span><span style="background-color: white;">By Francesco Caravelli, University College London</span><br /><span style="background-color: white;"><br /></span><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-LyrMqwCQGZM/UziOA6rSqJI/AAAAAAAAEz8/kNA_TfXm1LQ/s1600/thur.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-LyrMqwCQGZM/UziOA6rSqJI/AAAAAAAAEz8/kNA_TfXm1LQ/s1600/thur.jpg" height="320" width="213" /></a></div>One of the fundamental goals of quantum gravity is understanding the structure of space-time at very short distances, together with predicting physical and observable effects of having a quantum geometry. This is not easy. Since the introduction of fractal dimension in Quantum Gravity, and its importance emphasized in the work done in Causal Dynamical Triangulations (Loll et al. 
2005) and Asymptotic Safety (Lauscher et al. 2005), it has become more and more clear that space-time, at the quantum level, might undergo a radical transformation: the number of effective dimensions might change with the energy of the process involved. Various approaches to Quantum Gravity have collected evidence of a dimensional flow at high energies, which was popularized by Carlip as Spontaneous Dimensional Reduction (Carlip 2009, 2013). (The use of the term reduction is indeed a hint that a dimensional reduction is observed, but the evidence is far from conclusive. We find dimensional flow more appropriate.) <br /><br />Before commenting on the results obtained by the authors of the paper discussed in the seminar<br />(Calcagni, Oriti, Thuerigen 2013), let us first step back for a second and spend some time introducing the concept of fractal dimension, which is relevant to this discussion.<br /><br />The concept of non-integer dimension was introduced by the mathematician Benoit Mandelbrot half a century ago. What is this fuss about fractals and complexity? What is the relation with spacetimes, quantum space-times? <br /><br />Everything starts from an apparently simple question asked by Mandelbrot: What is the length of the coast of England (or more precisely, Cornwall)? As it turned out, the length of the coast of England depended on the lens used to magnify the map of the coast and, depending on the magnifying power, the length changed with a well-defined rule, known as scaling, which we will explain shortly. <br /><br />There are several definitions of fractal dimension, but let us try to keep things as easy as possible, and see why a granular space-time might indeed imply different dimensions at different scales (i.e., at different magnifying powers). The easy case is the one of a regular square lattice, which for the sake of clarity we consider infinite in every direction. 
<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-7A37ZJl57Xw/Uzcl8P-kFWI/AAAAAAAAEzc/CaJS3t394g0/s1600/manny.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-7A37ZJl57Xw/Uzcl8P-kFWI/AAAAAAAAEzc/CaJS3t394g0/s1600/manny.jpg" height="270" width="320" /></a></div><br /> Source: Manny Lorenzo<br /><br /> The lattice might look two dimensional, as it is planar: it can be embedded into a two dimensional surface (this is what is called the embedding dimension). However, if we pick any point of this lattice and count how many points are at a distance “d” from it, we will see that the number of points increases with a scaling law, given by*:<br /><br /><center><em>N ~ d<sup>γ</sup></em></center>
Locked="false" Priority="72" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful List Accent 2"/> <w:LsdException Locked="false" Priority="73" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful Grid Accent 2"/> <w:LsdException Locked="false" Priority="60" SemiHidden="false" UnhideWhenUsed="false" Name="Light Shading Accent 3"/> <w:LsdException Locked="false" Priority="61" SemiHidden="false" UnhideWhenUsed="false" Name="Light List Accent 3"/> <w:LsdException Locked="false" Priority="62" SemiHidden="false" UnhideWhenUsed="false" Name="Light Grid Accent 3"/> <w:LsdException Locked="false" Priority="63" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Shading 1 Accent 3"/> <w:LsdException Locked="false" Priority="64" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Shading 2 Accent 3"/> <w:LsdException Locked="false" Priority="65" SemiHidden="false" UnhideWhenUsed="false" Name="Medium List 1 Accent 3"/> <w:LsdException Locked="false" Priority="66" SemiHidden="false" UnhideWhenUsed="false" Name="Medium List 2 Accent 3"/> <w:LsdException Locked="false" Priority="67" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 1 Accent 3"/> <w:LsdException Locked="false" Priority="68" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 2 Accent 3"/> <w:LsdException Locked="false" Priority="69" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 3 Accent 3"/> <w:LsdException Locked="false" Priority="70" SemiHidden="false" UnhideWhenUsed="false" Name="Dark List Accent 3"/> <w:LsdException Locked="false" Priority="71" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful Shading Accent 3"/> <w:LsdException Locked="false" Priority="72" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful List Accent 3"/> <w:LsdException Locked="false" Priority="73" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful Grid Accent 3"/> <w:LsdException Locked="false" Priority="60" SemiHidden="false" UnhideWhenUsed="false" Name="Light Shading 
Accent 4"/> <w:LsdException Locked="false" Priority="61" SemiHidden="false" UnhideWhenUsed="false" Name="Light List Accent 4"/> <w:LsdException Locked="false" Priority="62" SemiHidden="false" UnhideWhenUsed="false" Name="Light Grid Accent 4"/> <w:LsdException Locked="false" Priority="63" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Shading 1 Accent 4"/> <w:LsdException Locked="false" Priority="64" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Shading 2 Accent 4"/> <w:LsdException Locked="false" Priority="65" SemiHidden="false" UnhideWhenUsed="false" Name="Medium List 1 Accent 4"/> <w:LsdException Locked="false" Priority="66" SemiHidden="false" UnhideWhenUsed="false" Name="Medium List 2 Accent 4"/> <w:LsdException Locked="false" Priority="67" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 1 Accent 4"/> <w:LsdException Locked="false" Priority="68" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 2 Accent 4"/> <w:LsdException Locked="false" Priority="69" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 3 Accent 4"/> <w:LsdException Locked="false" Priority="70" SemiHidden="false" UnhideWhenUsed="false" Name="Dark List Accent 4"/> <w:LsdException Locked="false" Priority="71" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful Shading Accent 4"/> <w:LsdException Locked="false" Priority="72" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful List Accent 4"/> <w:LsdException Locked="false" Priority="73" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful Grid Accent 4"/> <w:LsdException Locked="false" Priority="60" SemiHidden="false" UnhideWhenUsed="false" Name="Light Shading Accent 5"/> <w:LsdException Locked="false" Priority="61" SemiHidden="false" UnhideWhenUsed="false" Name="Light List Accent 5"/> <w:LsdException Locked="false" Priority="62" SemiHidden="false" UnhideWhenUsed="false" Name="Light Grid Accent 5"/> <w:LsdException Locked="false" Priority="63" SemiHidden="false" UnhideWhenUsed="false" 
Name="Medium Shading 1 Accent 5"/> <w:LsdException Locked="false" Priority="64" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Shading 2 Accent 5"/> <w:LsdException Locked="false" Priority="65" SemiHidden="false" UnhideWhenUsed="false" Name="Medium List 1 Accent 5"/> <w:LsdException Locked="false" Priority="66" SemiHidden="false" UnhideWhenUsed="false" Name="Medium List 2 Accent 5"/> <w:LsdException Locked="false" Priority="67" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 1 Accent 5"/> <w:LsdException Locked="false" Priority="68" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 2 Accent 5"/> <w:LsdException Locked="false" Priority="69" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 3 Accent 5"/> <w:LsdException Locked="false" Priority="70" SemiHidden="false" UnhideWhenUsed="false" Name="Dark List Accent 5"/> <w:LsdException Locked="false" Priority="71" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful Shading Accent 5"/> <w:LsdException Locked="false" Priority="72" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful List Accent 5"/> <w:LsdException Locked="false" Priority="73" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful Grid Accent 5"/> <w:LsdException Locked="false" Priority="60" SemiHidden="false" UnhideWhenUsed="false" Name="Light Shading Accent 6"/> <w:LsdException Locked="false" Priority="61" SemiHidden="false" UnhideWhenUsed="false" Name="Light List Accent 6"/> <w:LsdException Locked="false" Priority="62" SemiHidden="false" UnhideWhenUsed="false" Name="Light Grid Accent 6"/> <w:LsdException Locked="false" Priority="63" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Shading 1 Accent 6"/> <w:LsdException Locked="false" Priority="64" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Shading 2 Accent 6"/> <w:LsdException Locked="false" Priority="65" SemiHidden="false" UnhideWhenUsed="false" Name="Medium List 1 Accent 6"/> <w:LsdException Locked="false" Priority="66" 
SemiHidden="false" UnhideWhenUsed="false" Name="Medium List 2 Accent 6"/> <w:LsdException Locked="false" Priority="67" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 1 Accent 6"/> <w:LsdException Locked="false" Priority="68" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 2 Accent 6"/> <w:LsdException Locked="false" Priority="69" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 3 Accent 6"/> <w:LsdException Locked="false" Priority="70" SemiHidden="false" UnhideWhenUsed="false" Name="Dark List Accent 6"/> <w:LsdException Locked="false" Priority="71" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful Shading Accent 6"/> <w:LsdException Locked="false" Priority="72" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful List Accent 6"/> <w:LsdException Locked="false" Priority="73" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful Grid Accent 6"/> <w:LsdException Locked="false" Priority="19" SemiHidden="false" UnhideWhenUsed="false" QFormat="true" Name="Subtle Emphasis"/> <w:LsdException Locked="false" Priority="21" SemiHidden="false" UnhideWhenUsed="false" QFormat="true" Name="Intense Emphasis"/> <w:LsdException Locked="false" Priority="31" SemiHidden="false" UnhideWhenUsed="false" QFormat="true" Name="Subtle Reference"/> <w:LsdException Locked="false" Priority="32" SemiHidden="false" UnhideWhenUsed="false" QFormat="true" Name="Intense Reference"/> <w:LsdException Locked="false" Priority="33" SemiHidden="false" UnhideWhenUsed="false" QFormat="true" Name="Book Title"/> <w:LsdException Locked="false" Priority="37" Name="Bibliography"/> <w:LsdException Locked="false" Priority="39" QFormat="true" Name="TOC Heading"/> </w:LatentStyles></xml><![endif]--> <!--[if gte mso 10]><style> /* Style Definitions */ table.MsoNormalTable {mso-style-name:"Table Normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-parent:""; mso-padding-alt:0in 5.4pt 0in 5.4pt; 
mso-para-margin:0in; mso-para-margin-bottom:.0001pt; mso-pagination:widow-orphan; font-size:12.0pt; font-family:Cambria; mso-ascii-font-family:Cambria; mso-ascii-theme-font:minor-latin; mso-hansi-font-family:Cambria; mso-hansi-theme-font:minor-latin;} </style><![endif]--> <!--StartFragment--><!--EndFragment-->.<br /><br />If d is not too big, the value of gamma changes if the underlying structure is not a continuum, or is granular, and gamma can take non-integer values. This can be interpreted in various ways. For the case of fractals, this implies that the real dimension of fractals is not integer. Analogously to the case of the number of points within a certain distance d, it is possible to define a diffusion operation which will do the work of counting for us. However, the counting process depends on the operator which defines the diffusion process: how a swarm of particles move on the underlying discrete space. This is a crucial point of the procedure. <br /><br />In the continuum, the technology is developed to the point that it can to show that such an operator can be defined precisely**. The problem then is that the scaling not precise: for too long times, the scaling relation is not exact (as curvature effect might contribute). Thus, the time given to the particle to diffuse has to be appropriately tuned. This is what the authors define in Section 2 of the paper discussed in the talk and is a standard procedure in the context of the spectral dimension. Of course, what discussed insofar is valid for classical diffusion, but the operator can be defined for quantum diffusion as well, which is, put in simple terms, described by a Schroedinger unitary evolution like in ordinary quantum mechanics.<br /><br />It is important to understand that the combinatorial description of a manifold (how these are represented in the discrete setting), rather than the actual geometry, plays a very relevant role. 
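As an aside not from the talk, the counting definition of the dimension is easy to check numerically: count the points of a square lattice within a distance d and estimate gamma as the logarithmic derivative of N(d). A minimal sketch, assuming only NumPy:

```python
import numpy as np

# Count the points of the square lattice Z^2 lying within Euclidean
# distance d of the origin, then estimate
#     gamma = d log(N) / d log(d)
# by a finite difference.  For a 2D lattice gamma should approach 2.
def count_points(d):
    r = int(np.ceil(d))
    x, y = np.meshgrid(np.arange(-r, r + 1), np.arange(-r, r + 1))
    return int(np.sum(x**2 + y**2 <= d**2))

d1, d2 = 50.0, 100.0
gamma = (np.log(count_points(d2)) - np.log(count_points(d1))) / (np.log(d2) - np.log(d1))
print(round(gamma, 2))  # close to 2, the Hausdorff dimension of the lattice
```

For small d the estimate fluctuates because the discreteness is visible, which is exactly the effect discussed below.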
If you calculate the fractal dimension of these lattices, you find that, although at large scales they give the right fractal dimension, at small scales they do not. This shows that discreteness does have an effect on the spectral dimension, and that the results do indeed depend on the number of dimensions. More importantly, the authors observe that the spectral dimension, even in the classical case, depends on the precise structure of the underlying pseudo-manifold, i.e. on how the manifold is discretized. If you combine this with the fact that the fractal dimension is the global observable telling us how many dimensions we live in (a concept very important for other high-energy approaches), the interest is quite well justified. <br /><br />The case of a quantum geometry, considered using Loop Quantum Gravity (LQG), is then put forward at the end. The definition is different from the one given previously (Modesto 2009, which assumed that the scaling is given by the area operator of LQG), and it leads to different results. <br /><br />Without going into the details (described quite clearly in the paper), it is worth anticipating the results and explaining the difficulties involved in the calculation. The first complication comes from the calculation itself: it is very hard to compute the fractal dimension in the full quantum case. However, in the semiclassical approximation (when the geometry is in part quantum and in part classical), the main "quantum" part can be neglected. The next issue is that, in order to claim the emergence of a clear topological dimension, the fractal dimension has to be constant over a range of distances spanning several orders of magnitude. It is important to say that, if you use the fractal dimension as your definition of dimension, it is not possible to assign a given dimensionality unless the number of discrete points under consideration is large enough. 
This is a feature of the fractal dimension which is very important for Loop Quantum Gravity in many respects, as there has been for a long time a discussion on what is the right description of classical and quantum spacetime. Still, this approach gives the possibility of a bottom-up definition of dimension (in a top-down definition, there would not be any dimensional flow). <br /><br />As a closing remark, it is fair to say that this paper goes one step further into defining a notion of fractal dimension in Loop Quantum Gravity. The previous attempt was made by Modesto and collaborators using a rough approximation to the Laplacian. That approximation exhibited a dimensional flow towards an ultraviolet 2-dimensional space, which seems not to be present when a more elaborate Laplacian is used.<br /><br /> *For a square lattice, if d is big enough, gamma is equal to two: this is the Hausdorff dimension of the lattice, and indeed this dimension can be defined through the equation gamma = ∂log(N)/∂log(d).<br /><br />** Using the technical terminology, this is the Seeley-DeWitt expansion of the heat kernel on curved manifolds; the associated dimension is usually called the spectral dimension. The first term of the expansion depends explicitly on the spectral dimension, while the higher-order terms also receive contributions from the curvature. 
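The diffusion definition of the dimension (footnote **) can also be checked numerically on a flat lattice, where the return probability of a diffusing particle scales as P(t) ~ t^(-d_s/2). A toy illustration of my own, not from the paper, assuming only NumPy:

```python
import numpy as np

# Spectral dimension from diffusion: the probability that a random walker
# on a lattice has returned to its starting point after time t scales as
#     P(t) ~ t^(-d_s / 2).
# We evolve a delta function on Z^2 with the 4-neighbour hopping stencil
# and read off d_s = -2 dlog(P)/dlog(t), which should come out near 2.
L = 401
p = np.zeros((L, L))
p[L // 2, L // 2] = 1.0
ret = {}
for t in range(1, 201):
    p = 0.25 * (np.roll(p, 1, 0) + np.roll(p, -1, 0)
                + np.roll(p, 1, 1) + np.roll(p, -1, 1))
    ret[t] = p[L // 2, L // 2]

t1, t2 = 100, 200   # even times (the odd-time return probability vanishes)
d_s = -2 * (np.log(ret[t2]) - np.log(ret[t1])) / (np.log(t2) - np.log(t1))
print(round(d_s, 2))  # close to 2 for the 2D lattice
```

At short times the estimate deviates from 2, a simple analogue of the discreteness effects described above; on a curved or quantum geometry the diffusion operator, and hence d_s, would differ.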
<br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br />Jorge Pullinhttp://www.blogger.com/profile/07465581283254332265noreply@blogger.com0tag:blogger.com,1999:blog-5826632960356694090.post-45659371209440583512013-11-24T14:32:00.002-06:002013-11-24T14:32:34.376-06:00The Platonic solids of quantum gravity<b>Hal Haggard, CPT Marseille</b><br /><b>Title: </b><span style="background-color: white;">Dynamical chaos and the volume gap </span><br /><a href="http://relativity.phys.lsu.edu/ilqgs/haggard021213.pdf">PDF</a><span style="background-color: white;"> of the talk (8Mb) </span><a href="http://relativity.phys.lsu.edu/ilqgs/haggard021213.wav">Audio</a><span style="background-color: white;"> [.wav 37MB] </span><a href="http://relativity.phys.lsu.edu/ilqgs/haggard021213.aif">Audio</a><span style="background-color: white;"> [.aif 4MB]</span><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-h0O2SHe3XKg/UovPjtpmljI/AAAAAAAAEXE/eqrfqvRSwBk/s1600/HalHeadshot.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-h0O2SHe3XKg/UovPjtpmljI/AAAAAAAAEXE/eqrfqvRSwBk/s1600/HalHeadshot.jpg" /></a></div><span style="background-color: white;"><br /></span><span style="background-color: white;">by Chris Coleman-Smith, Duke University</span><br /><span style="background-color: white;"><br /></span>At the Planck scale, a quantum behavior of the geometry of space is expected. Loop quantum gravity provides a specific realization of this expectation. It predicts a granularity of space with each grain having a quantum behavior. In particular, the volume of the grain is quantized and its allowed values (what is technically known as "the spectrum") have a rich structure. Areas are also naturally quantized and there is a robust gap in their spectrum. 
Just as Planck showed that there must be a smallest possible photon energy, there is a smallest possible spatial area. Is the same true for volumes?<br /><br /> These grains of space can be visualized as polyhedra with faces of fixed area. In the full quantum theory these polyhedra are fuzzed out, and so, just as we cannot think of a quantum particle as a little spinning ball, we cannot think of these polyhedra as the definite Platonic solids that come to mind.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-W1VRHNXjug8/Uo1GBbHgCEI/AAAAAAAAEXU/zfVR3fj-QvQ/s1600/platonic-solids.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="http://2.bp.blogspot.com/-W1VRHNXjug8/Uo1GBbHgCEI/AAAAAAAAEXU/zfVR3fj-QvQ/s320/platonic-solids.jpg" width="216" /></a></div><br /><div style="text-align: center;">[The Platonic Solids, by Wenzel Jamnitzer] </div><br />It is interesting to examine these polyhedra at the classical level, where we can set aside this fuzziness, and see what features we can deduce about the quantum theory.<br /><br />The tetrahedron is the simplest possible polyhedron. Bianchi and Haggard [1] explored the dynamics arising from fixing the volume of a tetrahedron and letting the edges evolve in time. This evolution is a very natural way of exploring the set of constant volume polyhedra that can be reached by smooth deformations of the orientation of the polyhedral faces. The resulting trajectories in the space of polyhedra can be quantized using the original geometrical methods of Bohr and Einstein. The basic idea here is to map some parts of the smooth continuous properties of the classical dynamics into the quantum by selecting only those orbits whose enclosed phase-space area is an integer multiple of Planck's constant. The resulting discrete volume spectrum is in excellent agreement with the fully quantum calculation. 
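The Bohr-Sommerfeld recipe can be demonstrated on a textbook system. The following sketch (my own illustration, for the 1D harmonic oscillator rather than the tetrahedron, with m = ħ = 1) keeps only orbits whose phase-space area equals (n + 1/2)h and recovers the exact levels E_n = n + 1/2:

```python
import numpy as np

# Bohr-Sommerfeld quantization sketch for V(x) = x^2 / 2 (m = hbar = 1):
# an orbit of energy E encloses the phase-space area
#     S(E) = 2 * integral of sqrt(2 (E - V(x))) dx over the allowed region,
# and we keep only the orbits with S(E) = (n + 1/2) * h.
h = 2 * np.pi  # Planck's constant in units with hbar = 1

def action(E):
    xt = np.sqrt(2 * E)                       # classical turning point
    x = np.linspace(-xt, xt, 20001)
    p = np.sqrt(np.maximum(2 * (E - 0.5 * x**2), 0.0))
    return 2 * np.sum(0.5 * (p[:-1] + p[1:]) * np.diff(x))  # trapezoid rule

def level(n):
    lo, hi = 1e-6, 50.0                       # bisection on S(E) = (n + 1/2) h
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if action(mid) < (n + 0.5) * h else (lo, mid)
    return 0.5 * (lo + hi)

print([round(level(n), 3) for n in range(3)])  # the exact spectrum is [0.5, 1.5, 2.5]
```

For the tetrahedron the conserved quantity is the volume rather than the energy, but the logic of selecting orbits by their enclosed phase-space area is the same.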
Further work by Bianchi, Dona and Speziale [2] extended this treatment to more complex polyhedra.<br /><br /> Much as a bead threaded on a wire can only move forward or backward along the wire, a tetrahedron of fixed volume and face areas only has one freedom: to change its shape. Classical systems like this are typically integrable, which means that their dynamics is fairly regular and can be exactly solved. Two-degree-of-freedom systems like the pentahedron are typically non-integrable. Their dynamics can be simulated numerically but there is no closed-form solution for their motion. This implies that the pentahedron has a much richer dynamics than the tetrahedron. Is this pentahedral dynamics so complex that it is actually chaotic? If so, what are the implications for the quantized volume spectrum in this case? This system has recently been partially explored by Coleman-Smith [3] and Haggard [4] and was indeed found to be chaotic.<br /><br /> Chaotic systems are very sensitive to their initial conditions: tiny deviations from some reference trajectory rapidly diverge. This makes the dynamics of chaotic systems very complex and endows them with some interesting properties. This rapid spreading of any bundle of initial trajectories means that chaotic systems are unlikely to spend much time 'stuck' in some particular motion but rather they will quickly explore all possible motions. Such systems 'forget' their initial conditions very quickly and soon become thermal. This rapid thermalization of grains of space is an intriguing result. Black holes are known to be thermal objects and their thermal properties are believed to be fundamentally quantum in origin. The complex classical dynamics we observe may provide clues into the microscopic origins of these thermal properties.<br /><br /> The fuzzy world of quantum mechanics is unable to support the delicate fractal structures arising from classical chaos. 
However, its echoes can indeed be found in the quantum analogues of classically chaotic systems. A fundamental property of quantum systems is that they can only take on certain discrete energies. The set of these energy levels is usually referred to as the energy spectrum of the system. An important result from the study of how classical chaos passes into quantum systems is that we can generically expect certain statistical properties of the spectrum of such systems. In fact the spacing between adjacent energy levels of such systems can be predicted on very general grounds. For a non-chaotic quantum system one would expect these spacings to be entirely uncorrelated and so be Poisson distributed (like, e.g., the number of cars passing through a toll gate in an hour), resulting in most energy levels being very bunched up. In chaotic systems the spacings become correlated and actually repel each other, so that on average one would expect these spacings to be quite large.<br /><br /> This suggests that there may indeed be a robust volume gap, since we generically expect the discrete quantized volume levels to repel each other. However, the density of the volume spectrum around the ground state needs to be better understood to make this argument more concrete. Is there really a smallest non-zero volume?<br /><br /> The classical dynamics of the fundamental grains of space provide a fascinating window into the behavior of the very complicated full quantum dynamics of space described by loop quantum gravity. Extending this work to look at more complex polyhedra and at coupled networks of polyhedra will be very exciting and will certainly provide many useful new insights into the microscopic structure of space itself.<br /><br />[1]: "Discreteness of the volume of space from Bohr-Sommerfeld quantization", E.Bianchi & H.Haggard. PRL 107, 011301 (2011), "Bohr-Sommerfeld Quantization of Space", E.Bianchi & H.Haggard. 
PRD 86, 123010 (2012)<br /><br />[2]: "Polyhedra in loop quantum gravity", E.Bianchi, P.Dona & S.Speziale. PRD 83, 044035 (2011)<br /><br />[3]: "A “Helium Atom” of Space: Dynamical Instability of the Isochoric Pentahedron", C.Coleman-Smith & B.Muller, PRD 87 044047 (2013)<br /><br />[4]: "Pentahedral volume, chaos, and quantum gravity", H.Haggard, PRD 87 044020 (2013)Jorge Pullinhttp://www.blogger.com/profile/07465581283254332265noreply@blogger.com0tag:blogger.com,1999:blog-5826632960356694090.post-34187408741002717162013-11-17T10:45:00.000-06:002013-11-17T10:45:08.838-06:00Coarse graining theories<span style="background-color: white;">Tuesday, Nov 27th. 2012</span><br /><b>Bianca Dittrich, Perimeter Institute </b><br /><b>Title:</b><span style="background-color: white;"> Coarse graining: towards a cylindrically consistent dynamics</span><br /><a href="http://relativity.phys.lsu.edu/ilqgs/dittrich112712.pdf">PDF</a><span style="background-color: white;"> of the talk (14Mb) </span><a href="http://relativity.phys.lsu.edu/ilqgs/dittrich112712.wav">Audio</a><span style="background-color: white;"> [.wav 41MB] </span><a href="http://relativity.phys.lsu.edu/ilqgs/dittrich112712.aif">Audio</a><span style="background-color: white;"> [.aif 4MB]</span><br /><span style="background-color: white;"><br /></span><span style="background-color: white;">by Frank Hellmann</span><br /><span style="background-color: white;"><br /></span><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-gIUvp3AW7Ws/UnK032ccxEI/AAAAAAAAEUA/OYMCf2I87xM/s1600/bianca.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-gIUvp3AW7Ws/UnK032ccxEI/AAAAAAAAEUA/OYMCf2I87xM/s1600/bianca.jpg" /></a></div><span style="background-color: white;"><br /></span> Coarse graining is a procedure from statistical physics. 
In most situations we do not know how all the constituents of a system behave. Instead we only get a very coarse picture. Rather than knowing how all the atoms in the air around us move, we are typically only aware of a few rough properties, such as pressure and temperature. Indeed it is hard to imagine a situation where one would care about the location of this or that atom in a gas made of 10^23 atoms. Thus when we speak of trying to find a coarse grained description of a model, we mean that we want to discard irrelevant detail and find out how a particular model would appear to us.<br /> <br />The technical way in which this is done was developed by Kadanoff and Wilson. Given a system made up of simple constituents, Kadanoff's idea was to take a set of nearby constituents and combine them back into a single such constituent, only now larger. In a second step we could then scale down the entire system and find out how the behavior of this new, coarse grained constituent compares to the original ones. If certain behaviors grow stronger with such a step we call them relevant; if they grow weaker, we call them irrelevant. Indeed, as we build ever coarser descriptions of our system, eventually only the relevant behaviors will survive. <br /><br />In spin foam gravity we are facing this very problem. We want to build a theory of quantum gravity, that is, a theory that describes how space and time behave at the most fundamental level. We know very precisely how gravity appears to us: every observation of it we have made is described by Einstein's theory of general relativity. Thus in order to be a viable candidate for a theory of quantum gravity, it is crucial that the coarse grained theory looks, at least in the cases that we have tested, like general relativity. 
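Kadanoff's blocking step can be carried out exactly in the simplest statistical model. As an illustration of my own (not from the talk): decimating every other spin of a one-dimensional Ising chain gives the exact recursion tanh(K') = tanh(K)^2 for the dimensionless coupling K, and iterating it shows the coupling flowing to zero, i.e. behaving as an irrelevant perturbation. Assuming NumPy:

```python
import numpy as np

# Kadanoff-style coarse graining in the simplest setting: summing out
# every other spin of a 1D Ising chain gives the exact recursion
#     tanh(K') = tanh(K)^2
# for the coupling K = J/kT.  Iterating shows K flowing to 0: in 1D the
# coupling is irrelevant, and the coarse description is a free paramagnet.
K = 1.0
flow = [K]
for _ in range(6):
    K = np.arctanh(np.tanh(K) ** 2)
    flow.append(K)
print([round(k, 4) for k in flow])  # monotonically decreasing toward 0
```

In higher dimensions the analogous recursion has a non-trivial fixed point, which is where the relevant/irrelevant distinction becomes interesting.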
<br /><br />The problem we face is that usually we are looking at small and large blocks in space, but in spin foam models it is space-time itself that is built up of blocks, and these do not have a predefined size. They can be large or small in their own right. Further, we cannot handle the complexity of calculating with so many blocks of space-time. The usual tools, approximations and concepts of coarse graining do not apply directly to spin foams. <br /><br />To me this constitutes the most important question facing the spin foam approach to quantum gravity. We have to make sure, or, as it often is in this game, at least give evidence, that we get the known physics right before we can speak of having a plausible candidate for quantum gravity. So far most of our evidence comes from looking at individual blocks of space-time, and we see that their behaviour really makes sense, geometrically. But as we have not yet seen any such blocks of space-time floating around in the universe, we need to investigate the coarse graining to understand how a large number of them would look collectively. The hope is that the smooth space-time we see arises as an approximation to a large number of discrete blocks, much as the smooth surface of water arises from a vast number of atoms. <br /><br />Dittrich's work tries to address this question. This requires bringing over, or reinventing in the new context, a lot of tools from statistical physics. The first question is, how does one actually combine different blocks of spin foam into one larger block? Given a way to do that, can we understand how it effectively behaves? <br /><br />The particular tool of choice that Dittrich is using is called Tensor Network Renormalization. In this scheme, the coarse graining is done by looking at which aspects of the original set of blocks are the most relevant to the dynamics, and keeping only those. 
Thus it combines the two steps, of first coarse graining and then looking for relevant operators, into a single step. <br /><br />To get more technical, the idea is to consider maps from the boundary of a coarser lattice into that of a finer one. The mapping of the dynamics for the fine variables then provides the effective dynamics of the coarser ones. If the maps satisfy so-called cylindrical consistency conditions, that is, if we can iterate them, they can also be used to define a continuum limit. <br /><br />In the classical case, the behaviour of the theory as a function of the boundary values is coded in what is known as Hamilton's principal function. The main use of studying the flow of the theory under such maps is then to improve the discretizations of continuum systems used for numerical simulations. <br /><br />In the quantum case, the principal function is replaced by the usual amplitude map. The pull-back of the amplitude under this embedding then gives a renormalization prescription for the dynamics. Here Dittrich proposes to adapt an idea from condensed matter theory called tensor network renormalization. <br /><br />In order to select which degrees of freedom to map from the coarse boundary to the fine one, the idea is to evaluate the amplitude, diagonalize, and keep only the eigenstates corresponding to the n largest eigenvalues. <br /><br />At each step one then obtains a refined dynamics that does not grow in complexity, and one can iterate the procedure to obtain effective dynamics for very coarse variables that have been picked out by the theory, rather than by an initial choice of scale and a split into high and low energy modes. 
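The "keep the n largest eigenvalues" step can be illustrated in a setting far simpler than spin foams. In this toy sketch of my own (assuming NumPy), the transfer matrix of a 1D 3-state Potts chain is repeatedly blocked by squaring, while only the chi = 2 dominant eigenvalues are kept; because the dominant, "relevant" part of the dynamics is always retained, the free energy per site still converges to the exact value:

```python
import numpy as np

# Truncated coarse graining of a 1D 3-state Potts chain at coupling beta.
# Blocking two sites is matrix squaring; after each blocking we keep only
# the chi largest eigenvalues of the transfer matrix (the "relevant"
# part) and rescale to avoid overflow.  The exact free energy per site is
#     f = -log(exp(beta) + 2) / beta.
beta, chi, q = 1.0, 2, 3
T = np.ones((q, q)) + (np.exp(beta) - 1.0) * np.eye(q)

logc, nsites = 0.0, 1          # running log of the discarded scale factors
for _ in range(30):
    T = T @ T                                   # block two sites into one
    nsites *= 2
    logc *= 2
    w = np.sort(np.linalg.eigvalsh(T))[-chi:]   # keep chi largest eigenvalues
    T = np.diag(w / w[-1])                      # truncated, rescaled dynamics
    logc += np.log(w[-1])

f = -(logc + np.log(np.trace(T))) / (beta * nsites)
print(round(f, 4))  # exact value: -log(exp(1) + 2) = -1.5514...
```

Real tensor network renormalization keeps full eigenvectors and works in two or more dimensions, but the truncation logic is the same.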
<br /><br />It is too early to say whether these methods will allow us to determine whether spin foams reproduce what we know about gravity, but they have already produced a whole host of new approximations and insights into how these types of models work, and how they can behave for large numbers of building blocks.<br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br />Jorge Pullinhttp://www.blogger.com/profile/07465581283254332265noreply@blogger.com0tag:blogger.com,1999:blog-5826632960356694090.post-52561531049801635822013-05-06T16:49:00.000-05:002013-05-06T16:49:27.010-05:00Bianchi space-times in loop quantum cosmology<b>Brajesh Gupt, LSU </b><br /><div class="Standard"><b>Title: Bianchi I LQC, Kasner transitions and inflation</b><br /><a href="http://relativity.phys.lsu.edu/ilqgs/gupt032613.pdf">PDF</a> of the talk (800k) <a href="http://relativity.phys.lsu.edu/ilqgs/gupt032613.wav">Audio</a> [.wav 30MB] <a href="http://relativity.phys.lsu.edu/ilqgs/gupt032613.aif">Audio</a> [.aif 3MB]</div><div class="Standard"><br /></div><div class="Standard">by Edward Wilson-Ewing</div><div class="Standard"><br /></div><div class="Standard"><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-vQH7zJEpxLQ/UUCpfya69NI/AAAAAAAAD3M/tkWEUHJf_Lk/s1600/brajesh.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="320" src="http://2.bp.blogspot.com/-vQH7zJEpxLQ/UUCpfya69NI/AAAAAAAAD3M/tkWEUHJf_Lk/s320/brajesh.jpg" width="239" /></a></div><div style="text-align: left;">The Bianchi space-times are a generalization of the simplest Friedmann-Lemaitre-Robertson-Walker (FLRW) cosmological models. While the FLRW space-times are assumed to be homogeneous (there are no preferred points in the space-time) and isotropic (there is no preferred direction), in the Bianchi models the isotropy requirement is removed. 
One of the main consequences of this generalization is that in a Bianchi cosmology, the space-time is allowed to expand along the three spatial axes at different rates. In other words, while there is only one Hubble rate in FLRW space-times, there are three Hubble rates in Bianchi cosmologies, one for each of the three spatial dimensions.</div><br /><div class="Standard"><br /></div><div class="Standard">For example, the simplest Bianchi model is the Bianchi I space-time whose metric is given by</div><div class="Standard"><br /></div><div class="Standard">ds<sup>2</sup> = - dt<sup>2</sup> + a<sub>1</sub>(t)<sup>2</sup>(dx<sub>1</sub>)<sup>2</sup> + a<sub>2</sub>(t)<sup>2</sup>(dx<sub>2</sub>)<sup>2</sup> + a<sub>3</sub>(t)<sup>2</sup>(dx<sub>3</sub>)<sup>2</sup>,</div><div class="Standard"><br /></div><div class="Standard">where the a<sub>i</sub>(t) are the three scale factors. This is in contrast to the flat FLRW model where there is only one scale factor.</div><div class="Standard"><br /></div><div class="Standard">It is possible to determine the exact form of the scale factors by solving the Einstein equations. In a vacuum, or with a massless scalar field, it turns out that the i<sup>th</sup> scale factor is simply given by the time raised to the power k<sub>i</sub>: a<sub>i</sub>(t) = t<sup>k<sub>i</sub></sup>, where these k<sub>i</sub> are constant numbers and are called the Kasner exponents. There are some relations between the Kasner exponents that must be satisfied, so that once the matter content has been chosen, one Kasner exponent between -1 and 1 may be chosen freely, and then the values of the other two Kasner exponents are determined by this initial choice.</div><div class="Standard"><br /></div><div class="Standard">In addition to allowing more degrees of freedom than the simpler FLRW models, the Bianchi space-times are important due to the central role they play in the Belinsky-Khalatnikov-Lifshitz (BKL) conjecture. 
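Before turning to the BKL conjecture, the Kasner relations just described can be made concrete in the vacuum case, where the exponents satisfy k<sub>1</sub> + k<sub>2</sub> + k<sub>3</sub> = 1 and k<sub>1</sub><sup>2</sup> + k<sub>2</sub><sup>2</sup> + k<sub>3</sub><sup>2</sup> = 1 (with a massless scalar field the second relation is modified). The following minimal sketch, with a function name of our own choosing, recovers the other two exponents from a free choice of the first, which in the vacuum case must lie between -1/3 and 1:

```python
import math

def kasner_exponents(k1):
    """Given one vacuum Kasner exponent k1, return (k1, k2, k3)
    satisfying k1 + k2 + k3 = 1 and k1^2 + k2^2 + k3^2 = 1."""
    # k2 and k3 are the roots of x^2 - (1 - k1) x - k1 (1 - k1) = 0
    s = 1.0 - k1                 # k2 + k3
    p = -k1 * (1.0 - k1)         # k2 * k3
    disc = s * s - 4.0 * p       # equals (1 - k1)(1 + 3 k1)
    if disc < 0:
        raise ValueError("vacuum solutions require k1 between -1/3 and 1")
    r = math.sqrt(disc)
    return k1, (s - r) / 2.0, (s + r) / 2.0

k = kasner_exponents(2.0 / 3.0)
print(k)
print(sum(k), sum(x * x for x in k))   # both sums equal 1 up to rounding
```

Note that, except in degenerate cases, one of the three exponents is negative: a vacuum Bianchi I universe expands in two directions while contracting in the third.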
According to the BKL conjecture, as a generic space-like singularity is approached in general relativity, time derivatives dominate over spatial derivatives (with the exception of a small number of “spikes” which we shall ignore here) and so spatial points decouple from each other. In essence, as the spatial derivatives become negligible, the complicated partial differential equations of general relativity reduce to simpler ordinary differential equations close to a space-like singularity. Although this conjecture has not been proven, there is a wealth of numerical studies that support it.</div><div class="Standard"><br /></div><div class="Standard">If the BKL conjecture is correct, and the ordinary differential equations can be trusted, then the solution at each point is that of a homogeneous space-time. Since the most general homogeneous space-times are given by the Bianchi space-times, it follows that as a space-like singularity is approached, the geometry at each point is well approximated by a Bianchi model.</div><div class="Standard"><br /></div><div class="Standard">This conjecture is extremely important from the point of view of quantum gravity, as quantum gravity effects are expected to become important precisely when the space-time curvature nears the Planck scale. Therefore, we expect quantum gravity effects to become important near singularities. What the BKL conjecture is telling us is that understanding quantum gravity effects in the Bianchi models, which are relatively simple space-times, can provide significant insight into the problem of singularities in gravitation.</div><div class="Standard"><br /></div><div class="Standard">What is more, studies of the BKL dynamics show that for long periods of time, the geometry at any point is given by the Bianchi I space-time and during this time the geometry is completely determined by the three Kasner exponents introduced above. 
Now, the Bianchi I solution does not hold at each point eternally; rather, there are occasional transitions between different Bianchi I solutions called Kasner or Taub transitions. During a Kasner transition, the three Kasner exponents rapidly change values before becoming constant for another long period of time. Since the Bianchi I model provides an excellent approximation at each point for long periods of time, understanding the dynamics of the Bianchi I space-time, especially at high curvatures when quantum gravity effects cannot be neglected, may help us understand the behaviour of generic singularities when quantum gravity effects are included.</div><div class="Standard"><br /></div><div class="Standard">In loop quantum cosmology (LQC), the big-bang singularity is resolved by quantum geometry effects in all of the cosmological space-times studied so far, including Bianchi I. The fact that the initial singularity in the Bianchi I model is resolved in loop quantum cosmology, in conjunction with the BKL conjecture, gives some hope that all space-like singularities may be resolved in loop quantum gravity. While this result is encouraging, there remain open questions regarding the specifics of the evolution of the Bianchi I space-time in LQC when quantum geometry effects are important.</div><div class="Standard"><br /></div><div class="Standard">One of the main goals of Brajesh Gupt's talk is to address this precise question. Using the effective equations, which provide an excellent approximation to the full quantum dynamics for the FLRW space-times in LQC and are expected to do the same for the Bianchi models, it is possible to study how the quantum gravity effects that arise in loop quantum cosmology modify the classical dynamics when the space-time curvature becomes large and replace the big-bang singularity by a bounce. 
In particular, Brajesh Gupt describes precisely how the Kasner exponents, which are constant classically, evolve deterministically as they go through the quantum bounce. It turns out that a kind of Kasner transition occurs around the bounce, the details of which are given in the talk.</div><div class="Standard"><br /></div><div class="Standard">The second part of the talk considers inflation in Bianchi I loop cosmologies. Inflation is a period of exponential expansion of the early universe which was initially introduced in order to resolve the so-called horizon and flatness problems. One of the major results of inflation is that it generates small fluctuations of exactly the form observed in the small temperature variations in the cosmic microwave background today. For more information about inflation in loop quantum cosmology, see the previous ILQGS talks by William Nelson, Ivan Agullo, Gianluca Calcagni and David Sloan, as well as the blog posts that accompany these presentations.</div><div class="Standard"><br /></div><div class="Standard">Although inflation is often considered in the context of isotropic space-times, it is important to remember that in the presence of matter fields such as radiation and cold dark matter, anisotropic space-times will become isotropic at late times. Therefore, the fact that our universe appears isotropic today does not mean that it was necessarily isotropic some 13.8 billion years ago. Because of this, it is necessary to understand how the dynamics of inflation change when anisotropies are present. 
As mentioned at the beginning of this blog post, there is considerably more freedom in Bianchi models than in FLRW space-times, and so the expectations coming from studying inflation in isotropic cosmologies may be misleading for the more general situation.</div><div class="Standard"><br /></div><div class="Standard">There are several interesting issues that are worth considering in this context, and in this talk the focus is on two questions in particular. First, is it easier or harder to obtain the initial conditions necessary for inflation? In other words, is more or less fine-tuning required in the initial conditions? As it turns out, the presence of anisotropies actually makes it easier for a sufficient amount of inflation to occur. The second problem is to determine how the quantum geometry effects from loop quantum cosmology change the results one would expect based on classical general relativity. The main modification found here has to do with the relation between the amount of anisotropy present in the space-time (which can be quantified in a precise manner) and the amount of inflation that occurs. While there was a monotonic relation between these two quantities in classical general relativity, this is no longer the case when loop quantum cosmology effects are taken into account. Instead, there is now a specific amount of anisotropy which extremizes the amount of inflation that will occur, and there is a turnaround after this point. 
The details of these two results are given in the talk.</div></div>Jorge Pullinhttp://www.blogger.com/profile/07465581283254332265noreply@blogger.com0tag:blogger.com,1999:blog-5826632960356694090.post-15620551807937410382013-03-26T20:41:00.000-05:002013-03-26T20:41:51.186-05:00Reduced loop quantum gravityTuesday, Mar 12th.<br /><b>Emanuele Alesci, Francesco Cianfrani</b><br /><b>Title: </b>Quantum reduced loop gravity<br /><a href="http://relativity.phys.lsu.edu/ilqgs/cianfrani-alesci031213.pdf">PDF</a> of the talk (4Mb) <a href="http://relativity.phys.lsu.edu/ilqgs/cianfrani-alesci031213.wav">Audio</a> [.wav 29MB] <a href="http://relativity.phys.lsu.edu/ilqgs/cianfrani-alesci031213.aif">Audio</a> [.aif 3MB]<br /><br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-OvZA4v9uASo/UVI1PGZVJcI/AAAAAAAAD4c/uuJDJXULNKQ/s1600/alcian.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="172" src="http://1.bp.blogspot.com/-OvZA4v9uASo/UVI1PGZVJcI/AAAAAAAAD4c/uuJDJXULNKQ/s320/alcian.jpg" width="320" /></a></div>By Emanuele Alesci, Warsaw University and Francesco Cianfrani, Wrocław University <br /><br />We propose a new framework for the loop quantization of symmetry-reduced sectors of General Relativity, called Quantum Reduced Loop Gravity, and we apply this scheme to the inhomogeneous extension of the Bianchi I cosmological model (a cosmology that is homogeneous but anisotropic). To explain the meaning of this sentence we need several ingredients that will be presented in the next sections. But let us first focus on the meaning of “symmetry reduction”: this process simply means that if a physical system has some kind of symmetry, we can use it to reduce the number of independent variables needed to describe it. 
Symmetry then in general allows one to restrict the variables of the theory to its true independent degrees of freedom. For instance, let us consider a point-like spinless particle moving on a plane under a central potential. The system is invariant under 2-dimensional rotations on the plane around the center of the potential and as a consequence the angular momentum is conserved. The angular momentum around the origin is a constant of motion and the only “true” dynamical variable is the radial coordinate of the particle. Going to the phase space (the space of positions and momenta of the theory), it can be parameterized by the radial and angular coordinates together with the corresponding momenta, but the symmetry forces the momentum associated with the angular coordinate to be conserved. The reduced phase-space associated with such a system is parameterized by the radial coordinate and momentum, from which, given the initial conditions, the whole trajectory of the particle in the plane can be reconstructed. The quantization in the reduced phase space is usually easier to handle than in the full phase space, and this is the main reason why it is a technique frequently used in order to test the approaches towards Quantum Gravity, whose final theory is still elusive. In this respect, the canonical analysis of homogeneous models (Loop Quantum Cosmology) and of spherically-symmetric systems (Quantum Black Holes) in Loop Quantum Gravity (LQG) has been mostly performed by first restricting to the reduced phase space and then quantizing the resulting system (what is technically known as reduced quantization). The basic idea of our approach is to invert the order of “reduction” and “quantization”. The motivation will come directly from our analysis and, in particular, from the failure of reduced quantization to provide a sensible dynamics for the inhomogeneous extensions of the homogeneous anisotropic Bianchi I model. 
Hence, we will follow a different path by defining a “quantum” reduction of the Hilbert space of quantum states of the full theory down to a subspace which captures the relevant degrees of freedom. This procedure will allow us to treat the inhomogeneous Bianchi I system directly at the quantum level in a computable theory with all the ingredients of LQG (just simplified due to the quantum-reduction).<br /><br />To proceed, let us first review the main features of LQG.<br /><b><br /></b><b> Loop Quantum Gravity</b><br /> LQG is one of the most promising approaches for the quantization of the gravitational field. Its formulation is canonical and thus it is based on making a 3+1 splitting of the space-time manifold. The phase space is parameterized by the Ashtekar-Barbero connections, and the associated momenta, from which one can compute the metric of spatial sections. A key point of this reformulation is the existence of a gauge invariance (technically known as SU(2) gauge invariance), which, together with background independence, leads to the so-called kinematical constraints of the theory (every time there is a symmetry in a theory, an associated constraint emerges, implying that the variables are not independent and one has to isolate the true degrees of freedom). The quantization procedure is inspired by the approaches developed in the 70s to describe gauge theories on the lattice in the strong-coupling limit. In particular, the quantum states are given in terms of spin networks, which are graphs with "colors" on the links between intersections. An essential ingredient of LQG is background independence. The way this symmetry is implemented is a completely new achievement in Quantum Gravity and it allows one to define a regularized expression (free from infinities) for the operator associated with the Hamiltonian constraint, which encodes the dynamics of the theory. 
Thanks to a procedure introduced by Thiemann, the Hamiltonian constraint can be approximated over a certain triangulation of the spatial manifold. The limit in which the triangulation gets finer and finer gives us back the classical expression and it is well defined on a quantum level over s-knots (classes of spin networks related by smooth deformations). The reason is that s-knots are diffeomorphism invariant and, thus, insensitive to the characteristic length of the triangulation. This means that the Hamiltonian constraint can be consistently regularized and, moreover, the associated algebra is anomaly-free. Unfortunately, the resulting expression cannot be analytically computed, because of the presence of the volume operator, which is complicated. This drawback appears to be a technical difficulty, rather than a theoretical obstruction, and for this reason our aim is to try to overcome it in a simplified model, like a cosmological one.<br /><br /> <b>Loop Quantum Cosmology </b><br /> Loop Quantum Cosmology (LQC) is the best theory at our disposal to treat homogeneous cosmologies. LQC is based on a quantization in the reduced phase space, which means that the reduction according to the symmetry is made entirely at the classical level. Once the classical reduction is made, one then quantizes the remaining degrees of freedom with LQG techniques. We know that our Universe experiences a highly isotropic and homogeneous phase at scales larger than 100 Mpc. The simplest cosmological description is that of Friedmann-Robertson-Walker (FRW), in which one deals with an isotropic and homogeneous line element, described by only one variable, the scale factor. A generalization can be given by considering anisotropic extensions, the so-called Bianchi models, in which there are three scale factors defined along some fiducial directions. In LQC one fixes the metric to be of the FRW or Bianchi type and quantizes the dynamical variables. 
However, a direct derivation from LQG is still missing, and it is difficult to accommodate inhomogeneities in this setting because the theory is defined in the homogeneous reduced phase space.<br /><br /> <b>Inhomogeneous extension of the Bianchi models:</b><br /> We want to define a new model for cosmology able to retain all the nice features of LQG, in particular a sort of background independence by which the regularization of the Hamiltonian constraint can be carried out as in the full theory. In this respect, we consider the simplest Bianchi model, the type I (a homogeneous but anisotropic space-time), and we define an inhomogeneous extension characterized by scale factors that depend on space. This inhomogeneous extension contains as a limiting case the homogeneous phase in an arbitrary parameterization. The virtue of these models is that they possess what we called reduced-diffeomorphism invariance, that is, invariance under a restricted class of diffeomorphisms preserving the fiducial directions of the anisotropies of the Bianchi I model. This is precisely the kind of symmetry we were looking for! In fact, once quantum states are based on reduced graphs, whose edges are along the fiducial directions, we can define some reduced s-knots, which will be insensitive to the length of any cubulation of the spatial manifold (we speak of a cubulation because reduced graphs admit only cubulations and not triangulations). Therefore, all we have to do is to repeat Thiemann's construction for a cubulation rather than for a triangulation. But does it give a good expression for the Hamiltonian constraint? The answer is no, and the reason is that there is an additional symmetry in the reduced phase space that prevents us from repeating the construction used by Thiemann for the Hamiltonian constraint. 
Hence, the dynamical issue cannot be addressed by standard LQG techniques in reduced quantization.<br /><br /> <b>Quantum-Reduced Loop Gravity</b><br /><b><br /></b> What are we missing in reduced quantization? The idea is that we have reduced the gauge symmetry too much, and that is what prevents us from constructing the Hamiltonian. We therefore go back and do not reduce the symmetry and proceed to quantize first. We then impose the reduction of the symmetry at a quantum level. Hence, the classical expression of the Hamiltonian constraint for the Bianchi I model can be quantized according to the Thiemann procedure. Moreover, the associated matrix elements can be analytically computed because the volume operator takes a simplified form in the new Hilbert space. Therefore, we have a quantum description for the inhomogeneous Bianchi I model in which all the techniques of LQG can be applied and all the computations can be carried out analytically. This means that for the first time we have a model in which we can explicitly test numerous aspects of loop quantization: Thiemann's original graph-changing Hamiltonian, the master constraint program, Algebraic Quantization or the new deparameterized approach with matter fields can all be tested. Such a model is a cuboidal lattice, whose edges are endowed with quantum numbers and with some reduced relations between those numbers at vertices. In short, we have a sort of hybrid “LQC” along the edges with LQG relationships at the nodes, but with a graph structure and diagonal volume! This means that we have an analytically tractable model closer to LQG than LQC and potentially able to treat inhomogeneities and anisotropies at once. Is this model meaningful? What we have to do now is “only” physics: as a first test, work out the semiclassical limit. 
If this model yields General Relativity in the classical regime, then we can proceed to compare its predictions with Loop Quantum Cosmology in the quantum regime, inserting matter fields and analyzing their role, discussing the behavior of inhomogeneities, and so on. We will see.Jorge Pullinhttp://www.blogger.com/profile/07465581283254332265noreply@blogger.com0tag:blogger.com,1999:blog-5826632960356694090.post-6814252736143709872012-11-01T12:54:00.001-05:002012-11-02T15:34:57.718-05:00General relativity in observer space<br /><a href="http://1.bp.blogspot.com/-LJdSR1gXqQk/UH8mtAKDvtI/AAAAAAAADmA/Qdbh6bqWnoo/s1600/wise.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em; text-align: center;"><img border="0" height="320" src="http://1.bp.blogspot.com/-LJdSR1gXqQk/UH8mtAKDvtI/AAAAAAAADmA/Qdbh6bqWnoo/s320/wise.jpg" width="185" /></a>Tuesday, Oct 2nd.<br /><b>Derek Wise, FAU Erlangen</b><br /><b>Title:</b> Lifting General Relativity to Observer Space<br /><a href="http://relativity.phys.lsu.edu/ilqgs/wise100212.pdf">PDF</a> of the talk (700k) <a href="http://relativity.phys.lsu.edu/ilqgs/wise100212.wav">Audio</a> [.wav 34MB], <a href="http://relativity.phys.lsu.edu/ilqgs/wise100212.aif">Audio</a> [.aif 3MB].<br /><br />by Jeffrey Morton, University of Hamburg.<br /><br />You can read a more technical and precise version of this post at <a href="http://theoreticalatlas.wordpress.com/2012/10/08/observer-space-cartan-gr/">Jeff's own blog.</a><br /><br />This talk was based on a project of Steffen Gielen and Derek Wise, which has taken written form in a few papers (two shorter ones, "<a href="http://arxiv.org/abs/1111.7195">Spontaneously broken Lorentz symmetry for Hamiltonian gravity</a>", "<a href="http://arxiv.org/abs/1206.0658">Linking Covariant and Canonical General Relativity via Local Observers</a>", and a new, longer one called "<a href="http://arxiv.org/abs/1210.0019">Lifting General Relativity to Observer 
Space</a>").<br /><br />The key idea behind this project is the notion of "observer space": a space of all observers in a given universe. This is easiest to picture when one starts with a space-time. Mathematically, this is a manifold M with a Lorentzian metric, g, which among other things determines which directions are "timelike" at a given point. Then an observer can be specified by choosing two things. First, a particular point (x0,x1,x2,x3) = <b>x</b>, an event in space-time. Second, a future-directed timelike direction, which is the tangent to the space-time trajectory of a "physical" observer passing through the event <b>x</b>. The space of observers consists of all these choices: what is known as the "future unit tangent bundle of M". However, using the notion of a "Cartan geometry", one can give a general definition of observer space which makes sense even when there is no underlying space-time manifold.<br /><br />The result is a surprising, relatively new physical intuition saying that "space-time" is a local and observer-dependent notion, which in some special cases can be extended so that all observers see the same space-time. This appears to be somewhat related to the idea of <a href="http://arxiv.org/abs/1101.0931">relativity of locality</a>. More directly, it is geometrically similar to the fact that a slicing of space-time into space and time is not unique, and not respected by the full symmetries of the theory of relativity. Rather, the division between space and time depends on the observer.<br /><br />So, how is this described mathematically? In particular, what did I mean up there by saying that space-time itself becomes observer-dependent? 
The answer uses Cartan geometry.<br /><br /><h3>Cartan Geometry</h3>Roughly, Cartan geometry is to Klein geometry as Riemannian geometry is to Euclidean geometry.<br /><br />Klein's Erlangen Program, carried out in the mid-19th century, systematically brought abstract algebra, and specifically the theory of Lie groups, into geometry, by placing the idea of symmetry in the leading role. It describes "homogeneous spaces" X, which are geometries in which every point is indistinguishable from every other point. This is expressed by an action of some Lie group G, which consists of all transformations of an underlying space which preserve its geometric structure. For n-dimensional Euclidean space E<sup>n</sup>, the symmetry group is precisely the group of transformations that leave the data of Euclidean geometry, namely lengths and angles, invariant. This is the Euclidean group, and is generated by rotations and translations.<br />But any point x will be fixed by some symmetries, and not others, so there is a subgroup H, the "stabilizer subgroup", consisting of all symmetries which leave x fixed.<br /><br /><br />The pair (G,H) is all we need to specify a homogeneous space, or Klein geometry. In Euclidean space, for instance, a point is fixed by the group of rotations centered at that point. Klein's insight is to reverse this: we may obtain Euclidean space from the group G itself, essentially by "ignoring" (or more technically, by "modding out") the subgroup H of transformations that leave a particular point fixed. Klein's program lets us do this in general, given a pair (G,H). The advantage of this program is that it gives a great many examples of geometries (including ones previously not known) treated in a unified way. 
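As a concrete check of the Euclidean example (a purely illustrative sketch of our own; the function names are hypothetical), one can verify that a rotation followed by a translation leaves the distance between two points of the plane unchanged, which is exactly the invariance that singles out the Euclidean group:

```python
import math

def apply_euclidean(theta, tx, ty, point):
    """Act on a point of the plane with a Euclidean-group element:
    a rotation by theta followed by a translation by (tx, ty)."""
    x, y = point
    return (math.cos(theta) * x - math.sin(theta) * y + tx,
            math.sin(theta) * x + math.cos(theta) * y + ty)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

p, q = (0.0, 0.0), (1.0, 2.0)
g = lambda pt: apply_euclidean(0.7, 3.0, -1.0, pt)
# lengths (and angles) are preserved by every such transformation
print(dist(p, q), dist(g(p), g(q)))
```

Any quantity built out of such distances and angles is the same before and after the group acts, which is the sense in which the group G encodes the geometry.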
But the most relevant ones for now are:<br /><ul><li><strong>n-dimensional Euclidean space</strong>, as we just described.</li><li><strong>n-dimensional Minkowski space.</strong> The Euclidean group gets replaced by the Poincaré group, which includes translations and rotations, but also the boosts of special relativity. This is the group of all transformations that fix the geometry determined by the Minkowski metric of flat space-time.</li><li><strong>de Sitter space and anti-de Sitter spaces, </strong>which are relevant for studying general relativity with a cosmological constant.</li></ul>Just as a Lorentzian or Riemannian manifold is "locally modeled" by Minkowski or Euclidean space respectively, a Cartan geometry is locally modeled by some Klein geometry. Measurements close enough to a given point in the Cartan geometry look similar to those in the Klein geometry.<br /><br />Since curvature is measured by the development of curves, we can think of each homogeneous space as a <em>flat</em> Cartan geometry with itself as a local model, just as the Minkowski space of special relativity is a particular example of a solution to general relativity.<br /><br />The idea that the curvature of a manifold depends on the model geometry being used to measure it shows up in the way we apply this geometry to physics.<br /><br /><h3>Gravity and Cartan Geometry</h3>The MacDowell-Mansouri formulation of gravity can be understood as a theory in which general relativity is modeled by a Cartan geometry. Of course, a standard way of presenting general relativity is in terms of the geometry of a Lorentzian manifold. The Palatini formalism describes general relativity not in terms of a metric, but in terms of a set of vector fields governed by the Palatini equations. This can be derived from a Cartan geometry through the theory of MacDowell-Mansouri, which "breaks the full symmetry" of the geometry at each point, generating the vector fields that arise in the Palatini formalism. 
So General Relativity can be written as the theory of a Cartan geometry modeled on a de Sitter space.<br /><br /><br /><h3>Observer Space</h3>The idea in defining an observer space is to combine two symmetry reductions into one. One has a model Klein geometry, which reflects the "symmetry breaking" that happens when choosing one particular point in space-time, or <em>event</em>. The time directions are tangent vectors to the world-line (space-time trajectory) of a "physical" observer at the chosen event. So the model Klein geometry is the space of such possible <em>observers</em> at a fixed event. The stabilizer subgroup for a point in this space consists of just the rotations of space-time around the corresponding observer; the boosts in the Lorentz transformations relate different observers. Locally, choosing an observer amounts to a splitting of the model space-time at the point into a product of space and time. If we combine both reductions at once, we get a 7-dimensional Klein geometry that is related to de Sitter space, which we think of as a homogeneous model for the "space of observers".<br /><br />This may be intuitively surprising: it gives a perfectly concrete geometric model in which "space-time" is relative and observer-dependent, and perhaps only locally meaningful, in just the same way as the distinction between "space" and "time" in general relativity. That is, it may be impossible to determine objectively whether two observers are located at the same base event or not. This is a kind of "relativity of locality" which is geometrically much like the by-now more familiar relativity of simultaneity. Each observer will reach certain conclusions as to which observers share the same base event, but different observers may not agree. 
The coincident observers according to a given observer are those reached by a certain class of geodesics in observer space moving only in directions that observer sees as boosts.<br /><br />When one has a certain integrability condition, one can reconstruct a space-time from the observer space: two observers will agree whether or not they are at the same event. This is the familiar world of relativity, where simultaneity may be relative, but locality is absolute.<br /><br /><h3>Lifting Gravity to Observer Space</h3>Apart from describing this model of relative space-time, another motivation for describing observer space is that one can formulate canonical (Hamiltonian) general relativity locally near each point in such an observer space. The goal is to make a link between covariant and canonical quantization of gravity. Covariant quantization treats the geometry of space-time all at once, by means of what is known as a Lagrangian. This is mathematically appealing, since it respects the symmetry of general relativity, namely its diffeomorphism-invariance (or, speaking more physically, that its laws take the same form for all observers). On the other hand, it is remote from the canonical (Hamiltonian) approach to quantization of physical systems, in which the concept of time is fundamental. In the canonical approach, one quantizes the space of states of a system at a given point in time, and the Hamiltonian for the theory describes its evolution. This is problematic for diffeomorphism-, or even Lorentz-invariance, since coordinate time depends on a choice of observer. The point of observer space is that we consider all these choices at once. Describing general relativity in observer space is both covariant, and based on (local) choices of time direction. Then a "field of observers" is a choice, at each base event in M, of an observer based at that event. 
A field of observers may or may not correspond to a particular decomposition of space-time into space evolving in time, but locally, at each point in observer space, it always looks like one. The resulting theory describes the dynamics of space-geometry over time, as seen locally by a given observer, in terms of a Cartan geometry.<br /><br />This splitting, along the same lines as the one in MacDowell-Mansouri gravity described above, suggests that one could lift general relativity to a theory on an observer space. This amounts to describing fields on observer space and a theory for them, so that the splitting of the fields gives back the usual fields of general relativity on space-time, and the equations give back the usual equations. This part of the project is still under development, but there is indeed a lifting of the equations of general relativity to observer space. This tells us that general relativity can be defined purely in terms of the space of all possible observers, and when there is an objective space-time, the resulting theory looks just like general relativity. 
In the case when there is no "objective" space-time, the result includes some surprising new fields: whether this is a good or a bad thing is not yet clear.<br /><div><br /></div>Jorge Pullinhttp://www.blogger.com/profile/07465581283254332265noreply@blogger.com0tag:blogger.com,1999:blog-5826632960356694090.post-61484598640416919612012-10-18T12:22:00.000-05:002012-10-18T12:22:46.503-05:00More on Shape Dynamics<div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-g0MgfjfbPls/UG39_LF2Q5I/AAAAAAAADkg/yK6pH-p5AYM/s1600/tim.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-g0MgfjfbPls/UG39_LF2Q5I/AAAAAAAADkg/yK6pH-p5AYM/s1600/tim.jpg" /></a></div><b>Tim Koslowski, Perimeter Institute</b><br /><b>Title:</b> Effective Field Theories for Quantum Gravity from Shape Dynamics<br /><a href="http://relativity.phys.lsu.edu/ilqgs/koslowski041012.pdf">PDF</a> of the talk (0.5Mb) <a href="http://relativity.phys.lsu.edu/ilqgs/koslowski041012.wav">Audio</a> [.wav 31MB], <a href="http://relativity.phys.lsu.edu/ilqgs/koslowski041012.aif">Audio</a> [.aif 3MB].<br /><br /><br /><div><div style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;">By Astrid Eichhorn, Perimeter Institute</div></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div>Gravity and Quantum Physics have resisted unification into a common theory for several decades. We know a lot about the classical nature of gravity, in the form of Einstein's theory of General Relativity, which is a field theory. During the last century, we have learnt how to quantize other field theories, such as the gauge theories in the Standard Model of Particle Physics. The crucial difference between a classical theory and a quantum theory lies in the effect of quantum fluctuations. 
Due to Heisenberg's uncertainty principle, quantum fields can fluctuate, and this changes the effective dynamics of the field. In a classical field theory, the equations of motion can be derived by minimizing a functional called the classical action. In a quantum field theory, the equations of motion for the mean value of the quantum field cannot be derived from the classical action. Instead, they follow from something called the effective action, which contains the effect of all quantum fluctuations. Mathematically, to incorporate the effect of quantum fluctuations, a procedure known as the path integral has to be performed, which, even within perturbation theory (where one assumes solutions differ little from a known one), is a very challenging task. A method to make this task doable is the so-called (functional) Renormalization Group: Not all quantum fluctuations are taken into account at once, but only those with a specific momentum, usually starting with the high-momentum ones. In a pictorial way, this means that we "average" the quantum fields over small distances (corresponding to the inverse of the large momentum). The effect of the high-momentum fluctuations is then to change the values of the coupling constants in the theory: Thus the couplings are no longer constant, but depend on the momentum scale, so we should more appropriately call them running couplings. As an example, consider Quantum Electrodynamics: We know that the classical equations of motion are linear, so there is no interaction between photons. 
As soon as we go over to the quantum theory, this is different: Quantum fluctuations of the electron field at high momenta induce a photon-photon interaction (though with a very tiny coupling, so the effect is difficult to see experimentally).<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-znUoM-sagmg/UG4HsgnAKMI/AAAAAAAADkw/WHdE4Q288C4/s1600/Euler_Heisenberg.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-znUoM-sagmg/UG4HsgnAKMI/AAAAAAAADkw/WHdE4Q288C4/s1600/Euler_Heisenberg.jpg" /></a></div><br /><div style="text-align: center;"><span style="font-size: x-small;"> Fig 1: Electron fluctuations induce a non-vanishing photon-photon coupling in Quantum Electrodynamics.</span></div><br /> The question of whether a theory can be quantized, i.e., whether the full path integral can be performed, then finds an answer in the behavior of the running couplings: If the effect of quantum fluctuations at high momenta is to make the couplings divergent at some finite momentum scale, the theory is only an effective theory at low energies, but not a fundamental theory. On the technical side this implies that when we perform integrals that take into account the effect of quantum fluctuations, we cannot extend these to arbitrarily high momenta; instead we have to "cut them off" at some scale.<br /><br />The physical interpretation of such a divergence is that the theory tells us that we are really using effective degrees of freedom, not fundamental ones. As an example, if we construct a theory of the weak interaction between fermions without the W-bosons and the Z-boson, the coupling between the fermions will diverge at a scale which is related to the mass scale of the new bosons. In this manner, the theory lets us know that new degrees of freedom, the W- and Z-bosons, have to be included at this momentum scale. 
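To see concretely how a running coupling can diverge at a finite momentum scale, here is a small numerical sketch (my own illustration, not part of the talk). It uses the textbook one-loop running of the QED coupling with a single charged fermion; the scale at which the denominator vanishes is the famous "Landau pole":

```python
import math

ALPHA0 = 1 / 137.036   # fine-structure constant at the reference scale
MU0 = 0.000511         # reference scale: the electron mass, in GeV

def running_alpha(mu, mu0=MU0, alpha0=ALPHA0):
    """One-loop QED running coupling for a single fermion species (toy sketch).

    alpha grows logarithmically with the momentum scale mu (in GeV) and
    diverges at a finite scale, the Landau pole."""
    log = math.log(mu / mu0)
    denom = 1.0 - (2.0 * alpha0 / (3.0 * math.pi)) * log
    if denom <= 0.0:
        raise ValueError("coupling has diverged: beyond the Landau pole")
    return alpha0 / denom

# Scale at which the denominator vanishes. It is astronomically large for QED
# (far beyond the Planck scale), which is why this divergence is harmless in
# practice, but it illustrates a coupling blowing up at a finite scale.
landau_pole = MU0 * math.exp(3.0 * math.pi / (2.0 * ALPHA0))
```

The coupling at 100 GeV comes out slightly larger than 1/137, as expected for a coupling that strengthens towards high momenta.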
One example that we know of a truly fundamental theory, i.e., one where the degrees of freedom are valid up to arbitrarily high momentum scales, is Quantum Chromodynamics. Its essential feature is an ultraviolet-attractive Gaussian fixed point (one that corresponds to a free, non-interacting theory), which is nothing but the statement that the running coupling in QCD weakens towards high momenta, which is called asymptotic freedom (since asymptotically, at high momenta, the theory becomes non-interacting, i.e., free).<br /><br /> There is nothing wrong with a theory that is not fundamental in this sense; it simply means that it is an effective theory, which we can only use over a finite range of momenta. This concept is well-known in physics, and used very successfully. For instance, in condensed-matter systems, the effective degrees of freedom are, e.g., phonons, which are collective excitations of an atom lattice, and obviously cease to be a valid description of the system on distance scales below the atomic scale.<br /><br /> Quantum gravity actually exists as an effective quantum field theory, and quantum gravity effects can be calculated, treating the space-time metric as a quantum field like any other. However, being an effective theory means that it will only describe physics over a finite range of scales, and will presumably break down somewhere close to a scale known as the Planck scale (10^-33 centimeters). This implies that we do not understand the microscopic dynamics of gravity. What are the fundamental degrees of freedom, which describe quantum gravity beyond the Planck scale, what is their dynamics and what are the symmetries that govern it?<br /><br /> The question of whether we can arrive at a fundamental quantum theory of gravity within the standard quantum field theory framework boils down to understanding the behavior of the running couplings of the theory. 
In perturbation theory, the answer has been known for a long time: in four space-time dimensions, instead of weakening towards high momenta, the Newton coupling increases. More formally, this means that the free fixed point (technically known as the Gaussian fixed point) is not ultraviolet-attractive. For this reason, most researchers in quantum gravity gave up on trying to quantize gravity along the same lines as the gauge theories in the Standard Model of particle physics. They concluded that the metric does not carry the fundamental microscopic degrees of freedom of a continuum theory of quantum gravity, but is only an effective description valid at low energies. However, the fact that the Gaussian fixed point is not ultraviolet-attractive really only means that perturbation theory breaks down. Beyond perturbation theory, there is the possibility of obtaining a fundamental quantum field theory of gravity: The arena in which we can understand this possibility is called theory space. This is an (infinite-dimensional) space, which is spanned by all running couplings which are compatible with the symmetries of the theory. So, in the case of gravity, theory space usually contains the Newton coupling, the cosmological constant, couplings of curvature-squared operators, etc. At a certain momentum scale, all these couplings have some value, specifying a point in theory space. Changing the momentum scale, and including the effect of quantum fluctuations on these scales, implies a change in the value of these couplings. Thus, when we change the momentum scale continuously, we flow through theory space on a so-called Renormalization Group (RG) trajectory. For the couplings to stay finite at all momentum scales, this trajectory should approach a fixed point at high momenta (more exotic possibilities such as limit cycles or infinitely extendible trajectories could also exist). 
At a fixed point, the values of the couplings do not change anymore when further quantum fluctuations are taken into account. Then, we can take the limit of arbitrarily high momentum scale trivially, since nothing changes if we go to higher scales, i.e., the theory is scale invariant. The physical interpretation of this process is that the theory does not break down at any finite scale: The degrees of freedom that we have chosen to parametrize the physical system are valid up to arbitrarily high scales. An example is given by QCD, which as we mentioned is asymptotically free, the physical interpretation being that quarks and gluons are valid microscopic degrees of freedom. There is no momentum scale at which we need to expect further particles, or a possible substructure of quarks and gluons.<br /><br /> In the case of gravity, to quantize it we need a non-Gaussian fixed point. At such a point, where the couplings are non-vanishing, the RG flow stops, and we can take the limit of arbitrarily high momenta. This idea goes back to Weinberg, and is called asymptotic safety. Asymptotically, at high momenta, we are "safe" from divergences in the couplings, since they approach a fixed point, at which they assume some finite value. Since finite couplings imply finiteness of physical observables (when the couplings are defined appropriately), an asymptotically safe theory gives finite answers to all physical questions. In this construction, the fixed point defines the microscopic theory, i.e., the interaction of the microscopic degrees of freedom.<br /><br /> As a side remark: confronted with an infinite-dimensional space of couplings, you might worry how such a theory can ever be predictive. Note, however, that fixed points come equipped with what is called a critical surface: only if the RG flow lies within the critical surface of a fixed point will it actually approach the fixed point at high momenta. 
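The flow into a non-Gaussian fixed point can be illustrated with a toy one-coupling model. The beta function below is invented purely for illustration (it is not the actual gravitational flow): it has a Gaussian fixed point at g = 0 and an interacting, ultraviolet-attractive one at g* = 2.

```python
def beta(g):
    """Toy beta function dg/d(ln mu) = 2g - g^2 (an invented example).

    Fixed points: g = 0 (Gaussian, free theory) and g* = 2 (non-Gaussian,
    interacting). For this choice, g* = 2 attracts the flow towards the UV."""
    return 2.0 * g - g * g

def flow_to_uv(g0, steps=10000, dt=1e-3):
    """Integrate the flow towards high momenta (increasing ln mu) with
    simple Euler steps, starting from the coupling value g0."""
    g = g0
    for _ in range(steps):
        g += dt * beta(g)
    return g
```

Starting either below or above g* = 2, the coupling flows into the fixed point as the momentum scale grows, so it stays finite at all scales, which is the asymptotic-safety scenario in miniature. The direction along the flow towards the fixed point corresponds to a relevant coupling: its low-energy value is a free parameter of the toy theory.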
Therefore a finite-dimensional critical surface means that the theory will only have a finite number of parameters, namely those couplings spanning the critical surface. The low-momentum value of these couplings, which is accessible to measurements, is not fixed by the theory: any value works, since all of them lie within the critical surface. On the other hand, infinitely many couplings will be fixed by the requirement of being in the critical surface. This automatically implies that we will get infinitely many predictions from the theory (namely the values of all these so-called irrelevant couplings), which we can then (in principle) test in experiments.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-1WklhAmjKwc/UG4H_sqMrCI/AAAAAAAADk4/6biXEWhERPE/s1600/fixedpoint.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="223" src="http://3.bp.blogspot.com/-1WklhAmjKwc/UG4H_sqMrCI/AAAAAAAADk4/6biXEWhERPE/s320/fixedpoint.jpg" width="320" /></a></div><br /><div style="text-align: center;"> Fig 2: <span style="font-size: x-small;">A non-Gaussian fixed point has a critical surface, the dimensionality of which corresponds to the number of free parameters of the theory.</span></div><br /> Two absolutely crucial ingredients in the search for an asymptotically safe theory of quantum gravity are the specification of the field content and the symmetries of the theory. These determine which running couplings are part of theory space: the couplings of all possible operators that can be constructed from the fundamental fields while respecting the symmetries have to be included. Imposing an additional symmetry on theory space means that some of the couplings will drop out of it. Most importantly, the (non)existence of a fixed point will depend on the choice of symmetries. 
A well-known example is the choice of a U(1) gauge symmetry (like that in electromagnetism) versus an SU(3) one (like the one in QCD). The latter case gives an asymptotically free theory, the former one does not. Thus the (gauge) symmetries of a system crucially determine its microscopic behavior.<br /><br /> In gravity, there are several classically equivalent versions of the theory (i.e., they admit the same solutions to the equations of motion). A partial list contains standard Einstein gravity with the metric as the fundamental field, Einstein-Cartan gravity, where the metric is exchanged for the vielbein (a set of vectors) and a unimodular version of metric gravity (we will discuss it in a second). The first step in the construction of a quantum field theory of gravity now consists in the choice of theory space. Most importantly, this choice exists in the path-integral framework as well as the Hamiltonian framework. So, in both cases there is a number of classically equivalent formulations of the theory, which differ at the quantum level, and in particular, only some of them might exist as a fundamental theory.<br /><br /> To illustrate that the choice of theory space is really a physical choice, consider the case of unimodular quantum gravity: Here, the metric determinant is restricted to be constant. This implies, that the spectrum of quantum fluctuations differs crucially from the non-unimodular version of metric gravity, and most importantly, does not differ just in form, but in its physical content. Accordingly, the evaluation of Feynman diagrams in perturbation theory in both cases will yield different results. In other words, the running couplings in the two theory spaces will exhibit a different behaviour, reflected in the existence of fixed points as well as critical exponents, which determine the free parameters of the theory.<br /><br /> This is where the new field of shape dynamics opens up important new possibilities. 
As explained, theories which classically describe the same dynamics can still have different symmetries. In particular, this actually works for gauge theories, where the symmetry is nothing else but a redundancy of description. Therefore, only a reduced configuration space (the space of all possible configurations of the field) is physical, and along certain directions, the configuration space contains physically redundant configurations. A simple example is given by (quantum) electrodynamics, where the longitudinal vibration mode of the photon is unphysical (in vacuum), since the gauge freedom restricts the photon to have two physical (transverse) polarisations.<br /><br /> One can now imagine how two different theories with different gauge symmetries yield the same physics. The configuration spaces can in fact be different; it is only the values of physical observables on the reduced configuration space that have to agree. This makes a crucial difference for the quantum theory, as it implies different theory spaces, defined by different symmetries, and accordingly different behavior of the running couplings.<br /><br /> Shape dynamics trades part of the four-dimensional, i.e., spacetime symmetries of General Relativity (namely refoliation invariance, so invariance under different choices of spatial slices of the four-dimensional spacetime to become "space") for what is known as local spatial conformal symmetry, which implies local scale invariance of space. This also implies a key difference in the way that spacetime is viewed in the two theories. Whereas spacetime is one unified entity in General Relativity, shape dynamics builds up a spacetime from "stacking" spatial slices (for more details, see the blog entry by Julian Barbour). Fixing a particular gauge in each of the two formulations then yields two equivalent theories.<br /><br /> Although the two theories are classically equivalent for observational purposes, their quantized versions will differ. 
In particular, only one of them might admit a UV completion as a quantum field theory with the help of a non-Gaussian fixed point.<br /><br /> A second possibility is that both theory spaces might admit the existence of a non-Gaussian fixed point, but what is known as the universality class might be different: Loosely speaking, the universality class is determined by the rate of approach to the fixed point, which is captured by what is known as the critical exponents. Most importantly, while details of RG trajectories typically depend on the details of the regularization scheme (this specifies how exactly quantum fluctuations in the path integral are integrated out), the critical exponents are universal. The full collection of critical exponents of a fixed point then determines the universality class. Universality classes are determined by symmetries, which is very well-known from second-order phase transitions in thermodynamics. Since the correlation length in the vicinity of a second-order phase transition diverges, the microscopic details of different physical systems do not matter: The behavior of physical observables in the vicinity of the phase transition is determined purely by the field content, dimensionality, and symmetries of a system.<br /><br /> Different universality classes can differ in the number of relevant couplings, and thus correspond to theories with a different "amount of predictivity". Thus classically equivalent theories, when quantized, can have a different number of free parameters. 
Accordingly, not all universality classes will be compatible with observations, and the choice of theory space for gravity is thus crucial to identify which universality class might be "realized in nature".<br /><br /> Clearly, the canonical quantizations of standard General Relativity and of shape dynamics will also differ, since shape dynamics actually has a non-trivial, albeit non-local, Hamiltonian.<br /><br /> Finally, what is known as doubly General Relativity is the last step in the new construction. Starting from the symmetries of shape dynamics, one can discover a hidden BRST symmetry in General Relativity. BRST symmetries are symmetries existing in gauge-fixed path integrals for gauge theories. To do perturbation theory requires the gauge to be fixed, thus yielding a path-integral action which is not gauge invariant. The remnants of gauge invariance are encoded in BRST invariance, so it can be viewed as the quantum version of a gauge symmetry.<br /><br /> In the case of gravity, the BRST invariance connected to gauge invariance under diffeomorphisms of general relativity is supplemented by a BRST invariance connected to local conformal invariance. This is what is referred to as symmetry doubling. Since gauge symmetries restrict the Renormalization Group flow in a theory space, the discovery of a new BRST symmetry in General Relativity is crucial to fully understand the possible existence of a fixed point and its universality class. 
Thus the newly discovered BRST invariance might turn out to be a crucial ingredient in constructing a quantum theory of gravity.</div>Jorge Pullinhttp://www.blogger.com/profile/07465581283254332265noreply@blogger.com0tag:blogger.com,1999:blog-5826632960356694090.post-45046603414553075162012-03-26T11:31:00.001-05:002012-05-09T10:33:50.348-05:00Bianchi models in loop quantum cosmologyby Edward Wilson-Ewing, Marseille.<br /><b><br /></b><br /><b>Parampreet Singh, LSU</b><br /><b>Title:</b> Physics of Bianchi models in LQC<br /><a href="http://relativity.phys.lsu.edu/ilqgs/singh013112.pdf">PDF</a> of the talk (500KB)<br /><a href="http://relativity.phys.lsu.edu/ilqgs/singh013112.wav">Audio</a> [.wav 40MB], <a href="http://relativity.phys.lsu.edu/ilqgs/singh013112.aif">Audio</a> [.aif 4MB].<br /><br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-awI6ayoh-qU/T2EWRTTZWBI/AAAAAAAACjQ/YaccYYcq818/s1600/param3.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-awI6ayoh-qU/T2EWRTTZWBI/AAAAAAAACjQ/YaccYYcq818/s1600/param3.jpg" /></a></div>The word singularity, in physics, is often used to denote a prediction that some observable quantity should be singular, or infinite. One of the most famous examples in the history of physics appears in the Rayleigh-Jeans distribution which attempts to describe the thermal radiation of a black body in classical electromagnetic theory. While the Rayleigh-Jeans distribution describes black body radiation very well for long wavelengths, it does not agree with observations at short wavelengths. 
In fact, the Rayleigh-Jeans distribution becomes singular at very short wavelengths as it predicts that there should be an infinite amount of energy radiated in this part of the spectrum: this singularity, which did not agree with experiment, was called the ultraviolet catastrophe.<br /><br />This singularity was later resolved by Planck when he discovered what is called Planck's law, which is now understood to come from quantum physics. In essence, the discreteness of the energy levels of the black body ensures that the black body radiation spectrum remains finite for all wavelengths. One of the lessons to be learnt from this example is that singularities are not physical: in the Rayleigh-Jeans law, the prediction that there should be an infinite amount of energy radiated at short wavelengths is incorrect and indicates that the theory that led to this prediction cannot be trusted to describe this phenomenon. In this case, it is the classical theory of electromagnetism that fails to describe black body radiation and it turns out that it is necessary to use quantum mechanics in order to obtain the correct result.<br /><br />In the figure below, we see that for a black body at a temperature of 5000 kelvin, the Rayleigh-Jeans formula works very well for wavelengths greater than 3000 nanometers, but fails for shorter wavelengths. 
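The comparison can be reproduced numerically from the two textbook formulas: the Planck law and the classical Rayleigh-Jeans law (these are standard results, not something specific to this talk). The ratio of the two approaches 1 at long wavelengths and blows up at short ones:

```python
import math

# Physical constants in SI units
h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light, m/s
kB = 1.380649e-23    # Boltzmann constant, J/K

def planck(lam, T):
    """Planck spectral radiance per unit wavelength, W / (m^2 sr m)."""
    x = h * c / (lam * kB * T)
    return (2.0 * h * c**2 / lam**5) / math.expm1(x)

def rayleigh_jeans(lam, T):
    """Classical Rayleigh-Jeans spectral radiance; diverges as lam -> 0."""
    return 2.0 * c * kB * T / lam**4

T = 5000.0  # black-body temperature in kelvin, as in the figure
for lam in (100e-9, 3000e-9, 100e-6):
    ratio = rayleigh_jeans(lam, T) / planck(lam, T)
    print(f"{lam * 1e9:>9.0f} nm  RJ/Planck = {ratio:.3g}")
```

At 100 nanometers the classical formula overestimates the radiance by many orders of magnitude (the ultraviolet catastrophe), while deep in the infrared the two laws agree to within a few percent.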
For these shorter wavelengths, it is necessary to use Planck's law where quantum effects have been included.<br /><br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-lLZDP0bu4rA/T2EdyeqHHTI/AAAAAAAACjg/izerdT_8Gmk/s1600/rayleigh-jeans.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-lLZDP0bu4rA/T2EdyeqHHTI/AAAAAAAACjg/izerdT_8Gmk/s1600/rayleigh-jeans.gif" /></a></div>Picture <a href="http://hyperphysics.phy-astr.gsu.edu/hbase/mod6.html">credit.</a><br /><br />There are also singularities in other theories. Some of the most striking examples of singularities in physics occur in general relativity where the curvature of space-time, which encodes the strength of the gravitational field, diverges and becomes infinite. Some of the best known examples are the big-bang singularity that occurs in cosmological models and the black hole singularity that is found inside the event horizon of every black hole. While some people have argued that the big-bang singularity represents the beginning of time and space, it seems more reasonable that the singularity indicates that the theory of general relativity cannot be trusted when the space-time curvature becomes very large and that quantum effects cannot be ignored: it is necessary to use a theory of quantum gravity in order to study the very early universe (where general relativity says the big bang occurs) and the center of black holes.<br /><br />In loop quantum cosmology (LQC), simple models of the early universe are studied by using the techniques of the theory of loop quantum gravity. The simplest such model (and therefore the first to be studied) is called the flat Friedmann-Lemaitre-Robertson-Walker (FLRW) space-time. 
This space-time is homogeneous (the universe looks the same no matter where you are in it), isotropic (the universe is expanding at the same rate in all directions), and spatially flat (the two other possibilities are closed and open models which have also been studied in LQC) and is considered to provide a good approximation to the large-scale dynamics of the universe we live in. In LQC, it has been possible to study how quantum geometry effects become important in the FLRW model when the space-time curvature becomes so large that it is comparable to one divided by the <a href="http://en.wikipedia.org/wiki/Planck_length">Planck length</a> squared. A careful analysis shows that the quantum geometry effects provide a repulsive force that causes a “bounce” and ensures that the singularity predicted in general relativity does not occur in LQC. We will make this more precise in the next two paragraphs.<br /><br />By measuring the rate of expansion of the universe today, it is possible to use the FLRW model to determine what the size and the space-time curvature of the universe were in the past. Of course, these predictions will necessarily depend on the theory used: general relativity and LQC will not always give the same predictions. General relativity predicts that, as we go back further in time, the universe becomes smaller and smaller and the space-time curvature becomes larger and larger. This keeps on going until, around 13.75 billion years ago, the universe has zero volume and an infinite space-time curvature. This is called the big bang.<br /><br />In LQC, the picture is not the same. So long as the space-time curvature is considerably smaller than one divided by the Planck length squared, it predicts the same as general relativity. Thus, as we go back further in time, the universe becomes smaller and the space-time curvature becomes larger. 
However, there are some important differences when the space-time curvature nears the critical value of one divided by the Planck length squared: in this regime there are major modifications to the evolution of the universe that come from quantum geometry effects. Instead of continuing to contract as in general relativity, the universe slows its contraction before starting to become bigger again. This is called the bounce. After the bounce, as we continue to go further back in time, the universe becomes bigger and bigger and the space-time curvature becomes smaller and smaller. Therefore, as the space-time curvature never diverges, there is no singularity.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-dmoC1ck5KO8/T2Eg6gzAfjI/AAAAAAAACjw/XX1YaJ277Zw/s1600/bounce-ns.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="311" src="http://4.bp.blogspot.com/-dmoC1ck5KO8/T2Eg6gzAfjI/AAAAAAAACjw/XX1YaJ277Zw/s400/bounce-ns.jpg" width="400" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"></div>Photo <a href="http://www.newscientist.com/article/mg20026861.500-did-our-cosmos-exist-before-the-big-bang.html">credit.</a><br /><br />In loop quantum cosmology, we see that the big bang singularity in FLRW models is avoided due to quantum effects and this is analogous to what happened in the theory of black body radiation: the classical theory predicted a singularity which was resolved once quantum effects were included.<br /><br />This observation raises an important question: does LQC resolve all of the singularities that appear in cosmological models in general relativity? This is a complicated question as there are many types of cosmological models and also many different types of singularities. 
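As an aside, the FLRW bounce can be sketched quantitatively with the effective Friedmann equation often quoted in the LQC literature, H² = (8πG/3) ρ (1 − ρ/ρ_c). For a universe filled with a massless scalar field (ρ ∝ a⁻⁶) and in toy units where 8πG/3 = 1 and ρ_c = 1 (conventions assumed here purely for illustration, not taken from the talk), the equation can be solved exactly:

```python
# Toy sketch of the LQC bounce for a massless scalar field, in units where
# 8*pi*G/3 = 1 and the critical density rho_c = 1 (illustrative assumptions).
# Effective Friedmann equation: H^2 = rho * (1 - rho), with rho = a^-6.
# Exact solution (bounce at t = 0):
#   a(t) = (1 + 9 t^2)^(1/6),   rho(t) = 1 / (1 + 9 t^2)
# Classical GR (rho_c -> infinity) instead gives a(t) = (9 t^2)^(1/6),
# which vanishes at t = 0: the big-bang singularity.

def a_lqc(t):
    return (1.0 + 9.0 * t * t) ** (1.0 / 6.0)

def rho_lqc(t):
    return 1.0 / (1.0 + 9.0 * t * t)

def a_classical(t):
    return (9.0 * t * t) ** (1.0 / 6.0)

# The density never exceeds the critical density, and the scale factor has a
# nonzero minimum at the bounce, whereas the classical solution reaches zero:
times = [i / 10.0 for i in range(-50, 51)]
assert max(rho_lqc(t) for t in times) <= 1.0
assert min(a_lqc(t) for t in times) == a_lqc(0.0) == 1.0
assert a_classical(0.0) == 0.0

# Consistency check: H = (da/dt)/a, computed by finite differences, should
# satisfy H^2 = rho * (1 - rho) along the solution.
t = 1.0
H = (a_lqc(t + 1e-6) - a_lqc(t - 1e-6)) / (2e-6) / a_lqc(t)
assert abs(H * H - rho_lqc(t) * (1.0 - rho_lqc(t))) < 1e-6
```

The quantum correction factor (1 − ρ/ρ_c) is what shuts off the expansion rate at the critical density and replaces the classical singularity by a bounce.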
In this talk, Parampreet Singh explains what happens to many different types of singularities in models, called the Bianchi models, that are homogeneous but anisotropic (each point in the universe is equivalent, but the universe may expand in different directions at different rates). The main result of the talk is that all “strong” singularities in the Bianchi models are resolved in LQC.<br /><br />Jorge Pullinhttp://www.blogger.com/profile/07465581283254332265noreply@blogger.com0tag:blogger.com,1999:blog-5826632960356694090.post-31403030399984085452012-02-13T11:09:00.000-06:002012-02-13T11:09:13.052-06:00Inhomogeneous loop quantum cosmologyby David Brizuela, Albert Einstein Institute, Golm, Germany.<br /><b><br /></b><br /><b><br /></b><br /><b>William Nelson, PennState </b><br /><b>Title:</b> Inhomogeneous loop quantum cosmology<br /><a href="http://relativity.phys.lsu.edu/ilqgs/nelson101811.pdf">PDF</a> of the talk (500k)<br /><a href="http://relativity.phys.lsu.edu/ilqgs/nelson101811.wav">Audio</a> [.wav 32MB], <a href="http://relativity.phys.lsu.edu/ilqgs/nelson101811.aif">Audio</a> [.aif 3MB]. <br /><br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-eCcVXBh9e2w/TzlDrBrrs-I/AAAAAAAACgI/gVaNj0t2x5s/s1600/nelson.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-eCcVXBh9e2w/TzlDrBrrs-I/AAAAAAAACgI/gVaNj0t2x5s/s1600/nelson.jpg" /></a></div><br />William Nelson's talk is a follow-up of the talk <a href="http://ilqgs.blogspot.com/2011/04/by-edward-wilson-ewing-penn-state-ivan.html">presented by Iván Agulló</a> a few months ago in this seminar series on their joint work in collaboration with Abhay Ashtekar. Iván's talk was reviewed in this blog by Edward Wilson-Ewing, so the reader is referred to that entry for completeness. 
Even though substantial material overlaps with that post, here I will try to focus on other aspects of this research.<br /><br />Due to the finiteness of the speed of light, when we look at a distant point, like a star, we are looking at the state of that point in the past. For our regular daily distances this fact hardly affects anything, but if we consider larger distances, the effect is noticeable even for our slow-motion human senses. For instance, the sun is 8 light-minutes away from us, so if it suddenly were switched off we would be able to do a fair number of things in the meantime before finding ourselves in complete darkness. For cosmological distances this fact can be really amazing: we can see far back in the past! But, how far away? Can we really see the instant of creation?<br /><br />The light rays that were emitted during the initial moments of the universe and that arrive at the Earth nowadays form our particle horizon, which defines the border of our observable universe. As a side remark, note that the complete universe could be larger (even infinite) than the observable one, but not necessarily. We could be living in a universe with compact topology (like the surface of a balloon) and the light emitted from a distant galaxy would reach us from different directions. For instance, one directly, and another after traveling once around the whole universe. Thus, what we consider different galaxies would be copies of the same galaxy in different stages of its evolution. In fact, we could even see the solar system in a previous epoch!<br /><br />Since the universe has existed for a finite amount of time (around 14 billion years), the first guess would be that the particle horizon is at that distance: 14 billion light-years. But this is not true mainly for two different reasons. 
On the one hand, our universe is expanding, so the sources of the light rays that were emitted during the initial moments of the universe are by now much further away, around 46 billion light-years. On the other hand, at the beginning of the universe the temperature was so high that atoms, or even neutrons or protons, could not form in a stable way. The state of matter was a plasma of free elementary particles, in which the photons interacted very easily. The mean free path of a photon was extremely short, since it was almost immediately absorbed by some particle. In consequence, the universe was opaque to light, so none of the photons emitted at that epoch could make their way to us. The universe became transparent around 380,000 years after the Big Bang, in the so-called recombination epoch (when hydrogen atoms started to form), and the photons emitted at that time form what is known as the Cosmic Microwave Background (CMB) radiation. This is the closest event to the Big Bang that we can currently measure with our telescopes. In principle, depending on the technology, in the future we might also be able to detect the neutrino and gravitational-wave backgrounds. These were released before the CMB photons, since both neutrinos and gravitational waves could travel through the aforementioned plasma without much interaction. The CMB has been explored with very sophisticated satellites, like WMAP, and we know that it is highly homogeneous. It has an almost perfect black-body spectrum that peaks at a microwave frequency corresponding to a temperature of 2.7 K. The tiny inhomogeneities that we observe in the CMB are understood as the seeds of the large structures of our current universe.<br /><br />Furthermore, the CMB is one of the few places where one could look for quantum gravity effects, since the conditions of the universe during its initial moments were very extreme.
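As an aside, the 46-billion-light-year figure can be checked with a short numerical integration of the standard cosmological distance formula. The sketch below assumes a flat ΛCDM universe with illustrative, roughly Planck-like parameter values; the parameter choices and function names are mine, not taken from the post:

```python
import math

# Assumed flat-LambdaCDM parameters (illustrative, roughly Planck-like):
H0 = 67.7                # Hubble constant, km/s/Mpc
OMEGA_M = 0.31           # matter density fraction
OMEGA_R = 9.0e-5         # radiation density fraction (matters at high redshift)
OMEGA_L = 1.0 - OMEGA_M - OMEGA_R
C = 299792.458           # speed of light, km/s
MPC_TO_GLY = 3.2616e-3   # 1 Mpc = 3.2616 million light-years

def E(z):
    """Dimensionless Hubble rate H(z)/H0."""
    a = 1.0 + z
    return math.sqrt(OMEGA_R * a**4 + OMEGA_M * a**3 + OMEGA_L)

def comoving_distance_gly(z_max, steps=200_000):
    """Trapezoidal integral of (c/H(z)) dz, in billions of light-years."""
    dz = z_max / steps
    total = 0.5 * (1.0 / E(0.0) + 1.0 / E(z_max))
    for i in range(1, steps):
        total += 1.0 / E(i * dz)
    return (C / H0) * total * dz * MPC_TO_GLY

# Distance today to the surface that emitted the CMB (z ~ 1100):
print(comoving_distance_gly(1100.0))
```

With these parameters the integral out to the recombination redshift gives roughly 45 billion light-years, in line with the figure quoted above.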
The temperature was so high that the energies of interaction between particles were much larger than we could achieve with any accelerator. But we have seen that the CMB photons we observe were emitted well after the Big Bang, around 380,000 years later. Cosmologically this time is insignificant. (If we make an analogy and think of the universe as a middle-aged 50-year-old person, this would correspond to 12 hours.) Nevertheless, by that time the universe had already cooled down and the curvature was low enough that, in principle, Einstein's classical equations of general relativity should be a very good approximation to its evolution at this stage. Why, then, do we think it might be possible to observe quantum gravity effects in the CMB? At this point the inflationary scenario enters the game. According to the standard cosmological model, around 10^(-36) seconds after the Big Bang the universe underwent an inflationary phase which produced an enormous increase in its size. In a very short instant of time (a few times 10^(-32) seconds) the volume was multiplied by a factor of 10^78. Think for a moment about the incredible size of that number: a regular bedroom would be expanded to the size of the observable universe!<br /><br />This inflationary mechanism was introduced by Alan Guth in the 1980s in order to address several conceptual issues about the early universe, for instance why our universe (and in particular the CMB) is so homogeneous. Note that the CMB is composed of points that are very far apart and that, in a model without inflation, could not have had any kind of interaction or information exchange during the whole history of the universe. On the contrary, according to inflationary theory all these points were close together at some time in the past, which would have allowed them to reach thermal equilibrium. Furthermore, inflation has been a tremendous success and has proved much more useful than originally expected.
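Before moving on, a quick back-of-the-envelope check of the expansion numbers quoted above (the only input is the 10^78 volume factor; the rest is arithmetic):

```python
import math

volume_factor = 1e78                           # growth in volume quoted for inflation
linear_factor = volume_factor ** (1.0 / 3.0)   # growth in linear size: 10^26

# Cosmologists express exponential expansion in "e-folds" N, with the scale
# factor growing by e^N; a linear factor of 10^26 corresponds to:
N = math.log(linear_factor)
print(round(N, 1))   # about 60
```

Roughly 60 e-folds is indeed the benchmark amount of inflation usually quoted as the minimum needed to solve the homogeneity (horizon) problem.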
Within this framework, the observed values of the small inhomogeneities of the CMB are reproduced with high accuracy. Let us see in more detail how this result is achieved.<br /><br />In the usual inflationary models, one postulates the existence in the early universe of a scalar particle (called the inflaton). The inflaton is assumed to have a very large but flat potential. During the inflationary epoch it slowly loses potential energy (or, as it is usually put, it slowly rolls down its potential), and produces the exponential expansion of the universe. At the end of this process the inflaton's potential energy is still quite large. Since nowadays we do not observe the presence of such a particle, it is argued that after inflation, during the so-called reheating process, all this potential energy is converted into "regular" (Standard Model) particles, even though this process is not yet well understood.<br /><br />It is also usually assumed that at the onset of inflation the quantum fluctuations of the inflaton (and of the different quantities that describe the geometry of the universe) were in a vacuum state. This quantum vacuum is not a static and simple object, as one might think a priori. On the contrary, it is a very dynamical and complex entity. Due to the Heisenberg uncertainty principle, the laws of physics (like the conservation of energy) are allowed to be violated during short instants of time. This is well known in ordinary quantum field theory and happens essentially because nature does not allow any observation to be performed during such a short time. Therefore, in the quantum vacuum there is a constant creation of virtual particles that, under regular conditions, are annihilated before they can be observed. Nevertheless, the expansion of the universe turns these virtual particles into real entities.
Intuitively, one can think that a virtual particle and its corresponding antiparticle are created but, before they can interact again and disappear, the inflationary expansion of the universe tears them so far apart that the interaction is no longer possible. These initial tiny quantum fluctuations, amplified through the process of inflation, then produce the CMB inhomogeneities we observe. Thus, inflation is a kind of magnifying glass that gives us experimental access to processes that happened at extremely short scales and hence large energies, where quantum gravity effects might be significant.<br /><br />On the other hand, loop quantum cosmology (LQC) is a quantum theory of gravity that describes the evolution of our universe under the usual assumptions of homogeneity and isotropy. The predictions of LQC coincide with those of general relativity in regions of small curvature. That includes the whole history of the universe except for the initial moments. According to general relativity the beginning of the universe happened at the Big Bang, which is quite a misleading name. The Big Bang has nothing to do with an explosion; it is an abrupt event where the whole space-time continuum came into existence. Technically, this point is called a singularity, where different objects describing the curvature of the spacetime diverge. Thus general relativity cannot be applied there and, as is often asserted, the theory contains the seeds of its own destruction. LQC smooths out this singularity by taking quantum gravity effects into account, and the Big Bang is replaced by a quantum bounce (the so-called Big Bounce). According to the new paradigm, the universe existed already before the Big Bounce as a classical collapsing universe. When the energy density became too large, it entered a highly quantum regime, where the quantum gravity effects come with the correct sign so that gravity happens to be repulsive.
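For readers who want an equation: in the effective description standard in the LQC literature, this repulsion shows up as a density-dependent correction to the Friedmann equation, H^2 = (8πG/3) ρ (1 − ρ/ρ_c), which shuts expansion off at a critical density ρ_c. The sketch below uses illustrative units (8πG = ρ_c = 1) and a massless scalar field as matter; it is a standard textbook-style example, not something taken from the talk:

```python
# Effective LQC-corrected Friedmann equation (illustrative units: 8*pi*G = 1,
# critical density rho_c = 1; massless scalar field, for which rho ~ a^-6).

RHO_C = 1.0
EIGHT_PI_G = 1.0

def rho(a):
    """Energy density of a massless scalar field, equal to rho_c at a = 1."""
    return RHO_C / a**6

def hubble_sq(a):
    """H^2 = (8 pi G / 3) * rho * (1 - rho/rho_c): vanishes at the bounce."""
    return (EIGHT_PI_G / 3.0) * rho(a) * (1.0 - rho(a) / RHO_C)

def a_of_t(t):
    """Closed-form solution of the effective equations for this matter content:
    a(t)^6 = 1 + 24*pi*G*rho_c*t^2, with the bounce at t = 0."""
    return (1.0 + 3.0 * EIGHT_PI_G * RHO_C * t**2) ** (1.0 / 6.0)

# The density never exceeds rho_c, and far from the bounce the classical
# behaviour rho = 1/(24*pi*G*t^2) of a contracting/expanding universe returns:
for t in (-10.0, -1.0, 0.0, 1.0, 10.0):
    print(t, round(rho(a_of_t(t)), 5))
```

The solution is symmetric in t: a classical contracting branch for t < 0, a bounce at t = 0 where H = 0 and ρ = ρ_c, and the expanding branch we observe for t > 0.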
This caused the universe to bounce, and the expansion we currently observe began. The aim of Will's talk is to study the inflationary scenario in the context of LQC and obtain its predictions for the CMB inhomogeneities. In fact, Abhay Ashtekar and David Sloan already showed that inflation is natural in LQC. This means that it is not necessary to choose very particular initial conditions in order to get an inflationary phase. But there are still several questions to be addressed, in particular whether there might be any observable effects due to the pre-inflationary evolution of the universe.<br /><br />As we have already mentioned, in the usual cosmological models the initial state is taken to be the so-called Bunch-Davies vacuum at the onset of inflation. This choice of time is quite arbitrary. The natural point at which to choose initial conditions would be the Big Bang, but this is not feasible since it is a singular point and the equations of motion are no longer valid there. In any case, the widespread view has been that, even if there were some particles present at the onset of inflation, the huge expansion of the universe would dilute them and thus the final profile of the CMB would not be affected. Nevertheless, Iván Agulló and Leonard Parker recently showed that the presence of such initial particles does matter for the final result, since it causes the so-called stimulated emission of quanta: initial particles produce more particles, which themselves produce more particles, and so on. In fact, this is the same process on which today's widely used laser devices are based. Contrary to the usual models based on general relativity, LQC offers a special point where suitable initial conditions can be chosen: the Big Bounce. Thus, in this research, the corresponding vacuum state is chosen at that time. The preliminary results presented in the talk seem quite promising.
The simplest initial state is consistent with the observational data but, at the same time, it slightly differs from the CMB spectrum obtained within the previous models. These results have been obtained under certain technical approximations, so the next step of the research will be to understand whether this deviation is really physical. If so, this could provide a direct observational test for LQC that would teach us invaluable lessons about the deep quantum regime of the early universe.Jorge Pullinhttp://www.blogger.com/profile/07465581283254332265noreply@blogger.com0tag:blogger.com,1999:blog-5826632960356694090.post-68609778759934077702011-12-15T09:35:00.000-06:002011-12-15T09:35:13.738-06:00Shape dynamics<b>by Julian Barbour, College Farm, Banbury, UK.</b><br /><b><br /></b><br /><b>Tim Koslowski, Perimeter Institute </b><br /><b>Title:</b> Shape dynamics<br /><a href="http://relativity.phys.lsu.edu/ilqgs/koslowski110111.pdf">PDF</a> of the talk (500k)<br /><a href="http://relativity.phys.lsu.edu/ilqgs/koslowski110111.wav">Audio</a> [.wav 33MB], <a href="http://relativity.phys.lsu.edu/ilqgs/koslowski110111.aif">Audio</a> [.aif 3MB].<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-JGW95v_Qypg/TudJdlP6T8I/AAAAAAAACTc/8k7SxgnDTb8/s1600/TimKoslowski.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="320" src="http://2.bp.blogspot.com/-JGW95v_Qypg/TudJdlP6T8I/AAAAAAAACTc/8k7SxgnDTb8/s320/TimKoslowski.jpg" width="289" /></a></div>I will attempt to give some conceptual background to the recent seminar by Tim Koslowski (pictured left) on Shape Dynamics and the technical possibilities that it may open up. Shape dynamics arises from a method, called <i>best matching</i>, by which motion and more generally change can be quantified.
The method was first proposed in <a href="http://platonia.com/barbour_bertotti_prs1982_scan.pdf">1982</a>, and its furthest development up to now is described <a href="http://arxiv.org/abs/1105.0183">here</a>. I shall first describe a common alternative.<br /><br /><b>Newton’s Method of Defining Motion</b><br /><br />Newton’s method, still present in many theoreticians’ intuition, takes space to be real like a perfectly smooth table top (suppressing one space dimension) that extends to infinity in all directions. Imagine three particles that in two instants form slightly different triangles (1 and 2). The three sides of each triangle define the relative configuration. Consider triangle 1. In Newtonian dynamics, you can locate and orient 1 however you like. Space being homogeneous and isotropic, all choices are on an equal footing. But 2 is a different relative configuration. Can one say how much each particle has moved? According to Newton, many different motions of the particles correspond to the same change of the relative configuration. Keeping the position of 1 fixed, one can place the centre of mass of 2, C2, anywhere; the orientation of 2 is also free. In three-dimensional space, three degrees of freedom correspond to the possible changes of the sides of the triangle (relative data), three to the position of C2, and three to the orientation. The three relative data cannot be changed, but the choices made for the remainder are disturbingly arbitrary. In fact, Galilean relativity means that the position of C2 is not critical. But the orientational data are crucial. Different choices for them put different angular momenta L into the system, and the resulting motions are very different. Two snapshots of relative configurations contain no information about L; you need three to get a handle on L.
Now we consider the alternative.<br /><br /><b>Dynamics Based on Best Matching</b><br /><br />The definition of motion by best matching is illustrated in the figure. Dynamics based on it is more restrictive than Newtonian dynamics. The reason can be ‘read off’ from the figure. Best matching, as shown in b, does two things. It brings the centers of mass of the two triangles to a common point and sets their net relative rotation about it to zero. This last means that a dynamical system governed by best matching is <a href="http://platonia.com/barbour_bertotti_prs1982_scan.pdf">always constrained</a>, in Newtonian terms, to have vanishing total angular momentum L. In fact, the dynamical equations are Newtonian; the constraint L = 0 is maintained by them if it holds at any one instant.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-uTIAST4EU74/TuX62xrl99I/AAAAAAAACTU/O_3F5qnb7-g/s1600/barba.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em; text-align: center;"><img border="0" height="220" src="http://1.bp.blogspot.com/-uTIAST4EU74/TuX62xrl99I/AAAAAAAACTU/O_3F5qnb7-g/s320/barba.jpg" width="320" /></a></div><br /><br /><span class="Apple-style-span" style="font-size: x-small; text-align: left;"><b>Figure 1.</b> The Definition of Motion by Best Matching. Three particles, at the vertices of the grey and dashed triangles at two instants, move relative to each other. The difference between the triangles is a fact, but can one determine unique displacements of the particles? It seems not. Even if we hold the grey triangle fixed in space, we can place the dashed triangle relative to it in any arbitrary position, as in a. There seems to be no way to define unique displacements. However, we can bring the dashed triangle into the position b, in which it most nearly ‘covers’ the grey triangle.
A <a href="http://arxiv.org/abs/1105.0183">natural minimizing procedure</a> determines when ‘best matching’ is achieved. The displacements that take one from the grey to the dashed triangle are not defined relative to </span><span class="Apple-style-span" style="font-size: x-small; text-align: left;">space but relative to the grey triangle. The procedure is reciprocal and must </span><span class="Apple-style-span" style="font-size: x-small; text-align: left;">be applied to the complete dynamical system under consideration.</span><br /><br />So far, we have not considered size. This is where Shape Dynamics proper begins. Size implies the existence of a scale to measure it by. But, if our three particles are the universe, where is a scale to measure its size? Size is another Newtonian absolute. Best matching can be extended to include adjustment of the relative sizes. This is done for particle dynamics <a href="http://arxiv.org/abs/gr-qc/0211021">here</a>. It leads to a further constraint. Not only the angular momentum but also something called the dilatational momentum must vanish. The dynamics of any universe governed by best matching becomes even more restrictive than Newtonian dynamics.<br /><br /><b>Best Matching in the Theory of Gravity</b><br /><br />Best matching can be applied to the dynamics of geometry and compared with Einstein's general relativity (GR), which was created as a description of the four-dimensional geometry of spacetime. However, it can be reformulated as a dynamical theory in which three-dimensional geometry (3-geometry) evolves. This was done in the late 1950s by Dirac and Arnowitt, Deser, and Misner (ADM), who found a particularly elegant way to do it that is now called the ADM formalism and is based on the Hamiltonian form of dynamics. In the ADM formalism, the diffeomorphism constraint, mentioned a few times by Tim Koslowski, plays a prominent role. 
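As an aside, the best-matching minimization for particles can be made concrete in a few lines. The sketch below (triangle coordinates and equal unit masses are invented for illustration; this is not from the seminar) translates two triangles to a common centre of mass, finds the rotation minimizing the summed squared displacements, and checks that the resulting displacements carry no net rotation, i.e. the L = 0 condition described earlier:

```python
import math

# Toy best matching of two triangles in the plane (coordinates invented).
tri1 = [(0.0, 0.0), (2.0, 0.0), (1.0, 1.5)]
tri2 = [(0.3, 0.1), (2.4, 0.3), (0.9, 1.9)]

def centred(tri):
    """Shift a triangle so its centre of mass (equal masses) is at the origin."""
    cx = sum(p[0] for p in tri) / len(tri)
    cy = sum(p[1] for p in tri) / len(tri)
    return [(x - cx, y - cy) for x, y in tri]

def best_match(tri_a, tri_b):
    """Rotate tri_b about the common centre of mass to best cover tri_a,
    minimizing the sum of squared displacements between matched vertices."""
    a, b = centred(tri_a), centred(tri_b)
    # The minimizing angle has the closed form atan2(sum of crosses, sum of dots).
    cross = sum(bx * ay - by * ax for (ax, ay), (bx, by) in zip(a, b))
    dot = sum(ax * bx + ay * by for (ax, ay), (bx, by) in zip(a, b))
    t = math.atan2(cross, dot)
    c, s = math.cos(t), math.sin(t)
    return a, [(c * x - s * y, s * x + c * y) for x, y in b]

a, b = best_match(tri1, tri2)
# After best matching the displacements carry no net rotation: the angular
# momentum L of the change vanishes, as the text explains.
L = sum(bx * (ay - by) - by * (ax - bx) for (ax, ay), (bx, by) in zip(a, b))
print(abs(L) < 1e-9)   # True
```

In the geometrodynamic setting, the analogous shifting of one 3-geometry relative to another is carried out by the diffeomorphism constraint just mentioned.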
Its presence can be explained by a sophisticated generalization of the particle best matching shown in the figure. This shows that the notion of change was radically modified when Einstein created GR (though this fact is rather well hidden in the spacetime formulation). The notion of change employed in GR means that it is <a href="http://arxiv.org/abs/1003.1973">background independent </a>. In the ADM formalism as it stands, there is no constraint that corresponds to best matching with respect to size. However, in addition to the diffeomorphism constraint, or rather constraints as there are infinitely many of them, there are also infinitely many Hamiltonian constraints. They reflect the absence of an external time in Einstein's theory and the almost complete freedom to define simultaneity at spatially separated points in the universe. It has proved very difficult to take them into account in a quantum theory of gravity. Building on <a href="http://arxiv.org/abs/gr-qc/0407104">previous work</a>, Tim and his collaborators Henrique Gomes and Sean Gryb <a href="http://arxiv.org/abs/1010.2481">have found an alternative Hamiltonian representation</a> of dynamical geometry in which all but one of the Hamiltonian constraints can be swapped for conformal constraints. These conformal constraints arise from a best matching in which the volume of space can be adjusted with infinite flexibility. Imagine a balloon with curves drawn on it that form certain angles wherever they meet. One can imagine blowing up the balloon or letting it contract by different amounts everywhere on its surface. In this process, the angles at which the curves meet cannot change, but the distances between points can. This is called a conformal transformation and is clearly analogous to changing the overall size of figures in Euclidean space. 
The conformal transformations that Tim discusses in his talk are applied to curved 3-geometries that close up on themselves like the surface of the earth does in two dimensions. The alternative, or dual, representation of gravity through the introduction of conformal best matching seems to open up new routes to quantum gravity. At the moment, the most promising looks to be the symmetry doubling idea discussed by Tim. However, it is early days. There are plenty of possible obstacles to progress in this direction, as Tim is careful to emphasize. One of the things that intrigues me most about Shape Dynamics is that, if we are to explain the key facts of cosmology by a spatially closed expanding universe, we cannot allow completely unrestricted conformal transformations in the best matching but only the volume-preserving ones (VPCTs) that Tim discusses. This is a tiny restriction but strikes me as the very last vestige of Newton's absolute space. I think this might be <a href="http://arxiv.org/abs/1105.0183">telling us</a> something fundamental about the quantum mechanics of the universe. 
Meanwhile it is very encouraging to see technical possibilities emerging in the new conceptual framework.Jorge Pullinhttp://www.blogger.com/profile/07465581283254332265noreply@blogger.com0tag:blogger.com,1999:blog-5826632960356694090.post-3249812276142311952011-10-31T13:49:00.000-05:002011-10-31T13:49:04.886-05:00Spin foams from arbitrary surfacesby Frank Hellman, Albert Einstein Institute, Golm, Germany<br /><br /><b>Jacek Puchta, University of Warsaw</b><br /><b>Title:</b> The Feynman diagrammatics for the spin foam models<br /><a href="http://relativity.phys.lsu.edu/ilqgs/puchta092011.pdf">PDF</a> of the talk (3MB)<br /><a href="http://relativity.phys.lsu.edu/ilqgs/puchta092011.wav">Audio</a> [.wav 35MB], <a href="http://relativity.phys.lsu.edu/ilqgs/puchta092011.aif">Audio</a> [.aif 3MB].<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-ico3r-Sk9e8/Tqm7Oyf-ZbI/AAAAAAAACNk/xh-KH6nQu04/s1600/JacekPuchta.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="240" src="http://4.bp.blogspot.com/-ico3r-Sk9e8/Tqm7Oyf-ZbI/AAAAAAAACNk/xh-KH6nQu04/s320/JacekPuchta.jpg" width="320" /></a></div>In several previous blog posts (e.g. <a href="http://ilqgs.blogspot.com/2011/08/quantum-deformations-of-4d-spin-foam.html">here</a>) the spin foam approach to quantum gravity dynamics was introduced. To briefly summarize, this approach describes the evolution of a spin-network via a 2-dimensional surface that we can think of as representing how the network changes through time.<br /><br /><br />While this picture is intuitively compelling, at the technical level there have always been differences of opinion on what type of 2-dimensional surfaces should occur in this evolution. This question is particularly critical once we start trying to sum over all different types of surfaces.
The original proposal for this 2-dimensional surface approach was due to Ooguri, who allowed only a very restricted set of surfaces, namely those called "dual to triangulations of manifolds".<br /><br /><br />A triangulation is a decomposition of a manifold into simplices. The simplices in successive dimensions are obtained by adding a point and "filling in". The 0-dimensional simplex is just a single point. For the 1-dimensional simplex we add a second point and fill in the line between them. In 2 dimensions we add a third point, fill in the space between the line and the third point, and obtain a triangle. In 3-d we get a tetrahedron, and in 4-d what is called a 4-simplex.<br /><br /><br />The surface "dual to a triangulation" is obtained by putting a vertex in the middle of the highest dimensional simplex, then connecting these by an edge for every simplex one dimension lower, and filling in a surface for every simplex two dimensions lower. An example for the case where the highest dimensional simplex is a triangle is given in the figure: there the vertex abc is in the middle of the triangle ABC and connected by the dashed lines, indicating edges, to the neighboring vertices.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-WmvKtGTccXw/Tqcrr9qmTQI/AAAAAAAACNE/P9acTOe5CDY/s1600/Dual_triangulation.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="254" src="http://3.bp.blogspot.com/-WmvKtGTccXw/Tqcrr9qmTQI/AAAAAAAACNE/P9acTOe5CDY/s320/Dual_triangulation.png" width="320" /></a></div><br /><br />All current spin foam models were created with such triangulations in mind. In fact, many of the crucial results of the spin foam approach rely explicitly on this rather technical feature.<br /><br /><br />The price we pay for restricting ourselves to such surfaces is that we do not address the dynamics of the full Loop Quantum Gravity Hilbert space.
The spin networks we evolve will always be 4-valent, that is, there are always four links coming into every node, whereas in the LQG Hilbert space we have spin-networks of arbitrary valence. Another issue is that we might wish to study the dynamics of the model using the simplest surfaces first to get a feeling for what to expect from the theory, and for some interesting examples, like spin foam cosmology, the triangulation-based surfaces are immediately quite complicated.<br /><br /><br />The group of Jerzy Lewandowski therefore suggested generalizing the amplitudes considered so far to fairly arbitrary surfaces, and gave a method for constructing the spin foam models, considered before in the triangulation context only, on these arbitrary surfaces. This patches one of the holes between the LQG kinematics and the spin foam dynamics. The price is that many of the geometricity results from before no longer hold.<br /><br /><br />Furthermore, it now becomes necessary to handle these general surfaces effectively. A priori a great many of these exist, and it can be very hard to imagine them. In fact, the early work on spin foam cosmology overlooked a large number of surfaces that potentially contribute to the amplitude. The work Jacek Puchta presented in this talk solves this issue very elegantly by developing a simple diagrammatic language that allows us to very easily work with these surfaces without having to imagine them.<br /><br /><br />This is done by describing every node in the amplitude through a network, and then giving additional information that allows us to reconstruct a surface from these networks. Without going into the full details, consider a picture like the one in the next figure. The solid lines on the right hand side are the networks we consider; the dashed lines are additional data. Each node of the solid lines represents a triangle, every solid line is two triangles glued along an edge, and every dashed line is two triangles glued face to face.
Following this prescription, we obtain the triangulation on the left. While the triangulation generated by this prescription can be tricky to visualize in general, it is easy to work directly with the networks of dashed and solid lines. Furthermore, we don't need to restrict ourselves to networks that generate triangulations anymore but can consider much more general cases.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-QboDxsrnhO0/TqhecqV_RFI/AAAAAAAACNc/hgpRELZKrHY/s1600/network.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="115" src="http://2.bp.blogspot.com/-QboDxsrnhO0/TqhecqV_RFI/AAAAAAAACNc/hgpRELZKrHY/s320/network.png" width="320" /></a></div><br />This language has a number of very interesting features. First of all, these networks immediately give us the spin-networks we need to evaluate to obtain the spin foam amplitude of the surface reconstructed from them.<br /><br /><br />Furthermore it is very easy to read off what the boundary spin network of a particular surface is. As a strong demonstration of how this language simplifies thinking about surfaces, Puchta showed how all surfaces relevant for the spin foam cosmology context, which were long overlooked, are easily seen and enumerated using the new language.<br /><br /><br />The challenge ahead is to understand whether the results obtained in the simplicial setting can be translated into the more general setting at hand. For the geometricity results this looks very challenging.
But in any case, the new language looks like it is going to be an indispensable tool for studying spin foams going forward, and for clarifying the link between the canonical LQG approach and the covariant spin foams.Jorge Pullinhttp://www.blogger.com/profile/07465581283254332265noreply@blogger.com0tag:blogger.com,1999:blog-5826632960356694090.post-86760038608852539722011-10-01T15:18:00.000-05:002011-10-01T15:18:19.683-05:00The Immirzi parameter in spin foam quantum gravityby Sergei Alexandrov, Universite Montpellier, France.<br /><br /><b>James Ryan, Albert Einstein Institute</b><br /><b>Title: </b>Simplicity constraints and the role of the Immirzi parameter in quantum gravity<br /><a href="http://relativity.phys.lsu.edu/ilqgs/ryan041211.pdf">PDF</a> of the talk (11MB)<br /><a href="http://relativity.phys.lsu.edu/ilqgs/ryan041211.wav">Audio</a> [.wav 19MB], <a href="http://relativity.phys.lsu.edu/ilqgs/ryan041211.aif">Audio</a> [.aif 2MB].<br /><br /><a href="http://1.bp.blogspot.com/-Y02UWtJF53I/Tl5QO64Qf7I/AAAAAAAACIs/8IYYmDKyjmM/s1600/ryan.jpg" imageanchor="1" style="clear: left; display: inline !important; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-Y02UWtJF53I/Tl5QO64Qf7I/AAAAAAAACIs/8IYYmDKyjmM/s1600/ryan.jpg" /></a><br />Spin foam quantization is an approach to quantum gravity. Firstly, it is a "covariant" quantization, in that it does not break space-time into space and time as "canonical" loop quantum gravity (LQG) does. Secondly, it is "discrete" in that it assumes at the outset that space-time has a granular structure, rather than the smooth structure assumed by "continuum" theories such as LQG. Finally, it is based on the "path integral" approach to quantization that Feynman introduced, in which one sums probabilities for all possible trajectories in a system.
In the case of gravity one assigns probabilities to all possible space-times.<br /><br />To write the path integral in this approach one uses a reformulation of Einstein's general relativity due to Plebanski. Also, one examines this reformulation for discrete space-times. From the early days it was considered a very close cousin of loop quantum gravity because both approaches lead to the same qualitative picture of quantum space-time. (Remarkably, although one starts with smooth space and time in LQG, after quantization a granular structure emerges.) However, at the quantitative level, for a long time there was a striking disagreement. First of all, there were the symmetries. On the one hand, LQG involves a set of symmetries known technically as the SU(2) group, while on the other, spin foam models had symmetries associated with either the SO(4) group or the Lorentz group. The latter are symmetries that emerge in space-time, whereas the SU(2) symmetry emerges naturally in space. It is not surprising that working in a covariant approach the symmetries that emerge naturally are those of space-time, whereas working in an approach where space is singled out, as in the canonical approach, one gets symmetries associated with space. The second difference concerns the famous <a href="http://en.wikipedia.org/wiki/Immirzi_parameter">Immirzi parameter </a>which plays an extremely important role in LQG, but was not even included in the spin foam approach. This is a parameter that appears in the classical formulation but has no observable consequences there (it amounts to a change of variables). Upon LQG quantization, however, physical predictions depend on it, in particular the value of the quantum of area and the entropy of black holes.<br /><br />The situation changed a few years ago with the appearance of two new spin foam models due to Engle-Pereira-Rovelli-Livine (EPRL) and Freidel-Krasnov (FK). The new models appear to agree with LQG at the kinematical level (i.e.
they have similar state spaces, although their specific dynamics may differ). Moreover, they incorporate the Immirzi parameter in a non-trivial way.<br /><br />The basic idea behind these models is the following: in the Plebanski formulation general relativity is represented as a topological BF theory supplemented by certain constraints ("simplicity constraints"). BF theories are well-studied topological theories (their dynamics are very simple, being limited to global properties). This straightforwardness in particular implies that it is well known how to discretize and to quantize BF theories (using, for example, the spin foam approach). The fact that general relativity can be thought of as a BF theory with additional constraints gives rise to the idea that quantum gravity can be obtained by imposing the simplicity constraints directly at the quantum level on a BF theory. For that purpose, using the standard quantization map of BF theories, the simplicity constraints become quantum operators acting on the BF states. The insight of EPRL was that, once the Immirzi parameter is included, some of the constraints should not be imposed as operator identities, but in a weaker form. This makes it possible to find solutions of the quantum constraints which can be put into one-to-one correspondence with the kinematical states of LQG.<br /><br />However, such a quantization procedure does not take into account the fact that the simplicity constraints are not all the constraints of the theory. They should be supplemented by certain other ("secondary") constraints, and together they form what is technically known as a system of second-class constraints. These are very different from the usual kinds of constraints that appear in gauge theories. Whereas the latter correspond to the presence of symmetries in the theory, the former just freeze some degrees of freedom. In particular, at the quantum level they should be treated in a completely different way.
To implement second class constraints, one should either solve them explicitly or use an elaborate procedure called the Dirac bracket. Unfortunately, in the spin foam approach the secondary constraints have so far been completely ignored.<br /><br />At the classical level, if one takes all these constraints into account for continuum space-times, one gets a formulation which is independent of the Immirzi parameter. Such a canonical formulation can be used for a further quantization, either by the loop or the spin foam method, and leads to results which are still free from this dependence. This raises questions about the compatibility of the spin foam quantization with the standard Dirac quantization based on the continuum canonical analysis.<br /><br />In this seminar James Ryan tried to shed light on this issue by studying the canonical analysis of the Plebanski formulation for discrete space-times. Namely, in his work with Bianca Dittrich, they analyzed the constraints which must be imposed on the discrete BF theory to get a discretized geometry, and how they affect the structure of the theory. They found that the necessary discrete constraints are in a nice correspondence with the primary and secondary simplicity constraints of the continuum theory.<br /><br />Moreover, it turned out that the independent constraints naturally split into two sets. The first set expresses the equality of two sectors of the BF theory, which effectively reduces the SO(4) gauge group to SU(2). And indeed, if one explicitly solves this set of constraints, one finds a space of states analogous to that of LQG and of the new spin foam models, dependent on the Immirzi parameter.<br /><br />However, the corresponding geometries cannot be associated with piecewise flat geometries (geometries that are obtained by gluing flat simplices, just like one glues flat triangles to form a geodesic dome). These piecewise flat geometries are the ones usually associated with spin foam models. 
Instead they produce the so-called twisted geometries recently studied by Freidel and Speziale. To get the genuine discrete geometries appearing, for example, in the formulation of general relativity known as Regge calculus, one should impose an additional set of constraints given by certain gluing conditions. As Dittrich and Ryan succeeded in showing, the formulation obtained by taking into account all constraints is independent of the Immirzi parameter, as it is in the continuum classical formulation. This suggests that the quest for a consistent and physically acceptable spin foam model is far from being accomplished, and that the final quantum theory might eventually be free from the Immirzi parameter.<br /><br /><b>What is hidden in an infinity?</b> (2011-09-12)<br />by Daniele Oriti, Albert Einstein Institute, Golm, Germany<br /><br /><b>Matteo Smerlak, ENS Lyon</b><br /><b>Title:</b> Bubble divergences in state-sum models<br /><a href="http://relativity.phys.lsu.edu/ilqgs/smerlak113010.pdf">PDF</a> of the slides (180k)<br /><a href="http://relativity.phys.lsu.edu/ilqgs/smerlak113010.wav">Audio</a> [.wav 25MB], <a href="http://relativity.phys.lsu.edu/ilqgs/smerlak113010.aif">Audio</a> [.aif 5MB].<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-_mfv1lIE8Ig/Tm5rswG8EwI/AAAAAAAACJQ/QINbqz7XH_I/s1600/smerlak.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="180" src="http://4.bp.blogspot.com/-_mfv1lIE8Ig/Tm5rswG8EwI/AAAAAAAACJQ/QINbqz7XH_I/s320/smerlak.jpg" width="320" /></a></div>Physicists tend to dislike infinities. 
In particular, they take it very badly when the result of a calculation they are doing turns out to be not some number that they could compare with experiments, but infinity. No energy or distance, no velocity or density, nothing in the world around us has infinity as its measured value. Most of the time, such infinities signal that we have not been smart enough in dealing with the physical system we are considering, that we have missed some key ingredient in its description, or used the wrong mathematical language in describing it. And we do not like to be reminded of our own lack of cleverness.<br /><br />At the same time, and as a confirmation of the above, much important progress in theoretical physics has come out of a successful intellectual fight with infinities. Examples abound, but here is a historic one. Consider a large 3-dimensional hollow spherical object whose inside is made of some opaque material (thus absorbing almost all the light hitting it), and assume that it is filled with light (electromagnetic radiation) maintained at constant temperature. This object is named a black body. Imagine now that the object has a small hole from which a limited amount of light can exit. If one computes the total energy (i.e. considering all possible frequencies) of the radiation exiting from the hole, at a given temperature and at any given time, using the well-established laws of classical electromagnetism and classical statistical mechanics, one finds that it is infinite. Roughly, this calculation looks as follows: you have to sum all the contributions to the total energy of the radiation emitted (at any given time), coming from all the infinitely many modes of oscillation of the radiation, at the temperature T. Since there are infinitely many modes, the sum diverges. 
Notice that the same calculation can be performed by first imagining that there exists a maximum possible mode of oscillation, and then studying what happens when this supposed maximum is allowed to grow indefinitely. After the first step, the calculation gives a finite result, but the original divergence is recovered after the second step. In any case, this sum gives a divergent result: infinity! However, this two-step procedure allows one to understand better <i>how</i> the quantity of interest diverges.<br /><br />Besides being a theoretical absurdity, this is simply false on experimental grounds, since such radiating objects can be realized rather easily in a laboratory. This represented a big crisis in classical physics at the end of the 19th century. The solution came from Max Planck with the hypothesis that light is in reality constituted by discrete quanta (akin to matter particles), later named photons, with a consequently different formula for the radiation emitted from the hole (more precisely, for the individual contributions). This hypothesis, initially proposed for completely different motivations, not only solved the paradox of the infinite energy, but spurred the quantum mechanics revolution which led (after the work of Bohr, Einstein, Heisenberg, Schroedinger, and many others) to the modern understanding of light, atoms and all fundamental forces (except gravity).<br /><br />We see, then, that the need to understand what was really lying inside an infinity, the need to confront it, led to an important jump forward in our understanding of Nature (in this example, of light), and to a revision of our most cherished assumptions about it. The infinity was telling us just that. Interestingly, a similar theoretical phenomenon now seems to call for another, maybe even greater jump forward: a new understanding of gravity and of spacetime itself. 
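The two-step procedure just described is easy to try numerically. The following sketch (an illustration in dimensionless units, not part of the original post) integrates the classical and the Planck spectral energy densities up to a frequency cutoff: the classical total keeps growing as the cutoff is raised, while the Planck total settles at a finite value (pi^4/15 in these units).

```python
import math

def total_energy(spectral_density, x_max, n_steps=100_000):
    """Integrate a spectral energy density from 0 to the cutoff x_max,
    where x = (photon energy)/(k T) is a dimensionless frequency."""
    dx = x_max / n_steps
    # midpoint rule; also avoids evaluating at x = 0
    return sum(spectral_density((i + 0.5) * dx) for i in range(n_steps)) * dx

# Classical equipartition: every mode carries energy kT -> density ~ x^2
rayleigh_jeans = lambda x: x * x
# Planck: quantized modes -> density ~ x^3 / (e^x - 1)
planck = lambda x: x ** 3 / math.expm1(x)

for x_max in (10.0, 20.0, 40.0):
    print(f"cutoff {x_max:>4}: classical {total_energy(rayleigh_jeans, x_max):10.2f}"
          f"   Planck {total_energy(planck, x_max):.4f}")
```

The classical column grows like the cube of the cutoff (so it diverges when the cutoff is removed), while the Planck column approaches pi^4/15 ≈ 6.4939: Planck's discrete quanta suppress exactly the high-frequency modes that caused the divergence.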
<br /><br />An object that is theoretically very close to a perfect black body is a black hole. Our current theory of matter, quantum field theory, in conjunction with our current theory of gravity, General Relativity, predicts that such a black hole will emit thermal radiation at a constant temperature inversely proportional to the mass of the black hole. This is called Hawking radiation. This result, together with the description of black holes provided by general relativity, also suggests that black holes have an entropy associated with them, measuring the number of their intrinsic degrees of freedom. Because a black hole is nothing but a particular configuration of space, this entropy is then a measure of the intrinsic degrees of freedom of (a region of) space itself! However, first of all we have no real clue what these intrinsic degrees of freedom are; second, if the picture of space provided by general relativity is correct, their number, and the corresponding entropy, is infinite!<br /><br />This fact, together with a large number of other results and conceptual puzzles, prompted a large part of the theoretical physics community to look for a better theory of space (and time), possibly based on quantum mechanics (taking on board the experience from history): a quantum theory of space-time, a quantum theory of gravity. <br /><br />It should not be thought that the transition from classical to quantum mechanics led us away from the problem of infinities in physics. On the contrary, our best theories of matter and of fundamental forces, quantum field theories, are full of infinities and divergent quantities. What we have learned from quantum field theories, however, is exactly how to deal with such infinities in rather general terms, what to expect, and what to do when such infinities present themselves. In particular, we have learned another crucial lesson about nature: physical phenomena look very different at different energy and distance scales, i.e. 
if we look at them very closely or if they involve higher and higher energies. The methods by which we deal with this scale dependence go under the name of the renormalization group, now a crucial ingredient of all theories of particles and materials, both microscopic and macroscopic. How this scale dependence is realized in practice depends of course on the specific physical system considered. <br /><br />Let us consider a simple example. Consider the dynamics of a hypothetical particle with mass m and no spin; assume that what can happen to this particle during its evolution is only one of the following two possibilities: it can either disintegrate into two new particles of the same type or disintegrate into three particles of the same type. Also, assume that the inverse processes are allowed (that is, two particles can disappear and give rise to a single new one, and the same goes for three particles). So there are two possible ‘interactions’ that this type of particle can undergo, two possible fundamental processes that can happen to it. To each of them we associate a parameter, called a ‘coupling constant’, that indicates how strong each possible interaction process is (compared with the other, and with other possible processes due, for example, to the interaction of the particles with gravity or with light, etc.): one for the process involving three particles, and one for the one involving four particles (this is counting incoming and outgoing particles). Now, the basic object that a quantum field theory allows us to compute is the probability (amplitude) that, if I first see a number n of particles at a certain time, at a later time I will instead see m particles, with m different from n (because some particles will have disintegrated and others will have been created). All the other quantities of physical interest can be obtained using these probabilities. <br /><br />Moreover, the theory tells me exactly how this probability should be computed. 
It goes roughly as follows. First, I have to consider all possible processes leading from n particles to m particles, including those involving an infinite number of elementary creation/disintegration processes. These can be represented by graphs (called Feynman graphs) in which each vertex represents a possible elementary process (see the figure for an example of such a process, made out of interactions involving three particles only, with its associated graph).<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-ns2FL6xR_7o/Tl-ZFWXyxpI/AAAAAAAACIw/a1WO-doXoDo/s1600/fig1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="182" src="http://1.bp.blogspot.com/-ns2FL6xR_7o/Tl-ZFWXyxpI/AAAAAAAACIw/a1WO-doXoDo/s400/fig1.jpg" width="400" /></a></div><br /><span class="Apple-style-span" style="font-size: x-small;"><br /></span><br /><span class="Apple-style-span" style="font-size: x-small;">A graph describing a sequence of 3-valent elementary interactions for a point particle, with 2 particles measured both at the initial and at the final time (to be read from left to right)</span><br /><br /><br />Second, each of these processes should be assigned a probability (amplitude), that is, a function of the mass of the particle considered and of the ‘coupling constants’. Third, this amplitude is in turn a function of the energies of the particles involved in any given process (each corresponding to a single line in the graph representing the process), and each energy can be anything, from zero to infinity. The theory tells me what form the probability amplitude has. Now the total probability for the measurement of n particles first and m particles later is computed by summing over all processes/graphs (including those composed of infinitely many elementary processes) and over all the energies of the particles involved in them, weighted by the probability amplitudes. <br /><br />Now, guess what? 
The above calculation typically gives the always feared result: infinity. Basically, everything that could go wrong actually goes wrong, as in Murphy’s law. Not only does the sum over all the graphs/processes give a divergent answer, but the intermediate sum over energies diverges as well. However, as we anticipated, we now know how to deal with this kind of infinity; we are not scared anymore and, actually, we have learnt what it means, physically. The problem mainly arises when we consider higher and higher energies for the particles involved in the process. For simplicity imagine that all the particles have the same energy E, and assume this can take any value from 0 to a maximum value Emax. Just like in the black body example, the existence of the maximum implies that the sum over energies is a finite number, so everything up to here goes fine. However, when we let the maximal energy become infinite, the same quantity typically becomes infinite as well.<br /><br />We have done something wrong; let’s face it: there is something we have not understood about the physics of the system (simple as these particles may be). It could be that, as in the case of the blackbody radiation, we are missing something fundamental about the nature of these particles, and we have to change the whole probability amplitude. Maybe other types of particles have to be considered as created out of the initial ones. All this could be. However, what quantum field theory has taught us is that, before considering these more drastic possibilities, one should try to re-write the above calculation by considering coupling constants and a mass that themselves depend on the scale Emax, then compute again the probability amplitude, but now using these ‘scale-dependent’ constants, and check whether one can now consider the case of Emax growing up to infinity, i.e. consider arbitrary energies for the particles involved in the process. If this can be done, i.e. 
if one can find coupling constants dependent on the energies such that the result of sending Emax to infinity, i.e. considering larger and larger energies, is a finite, sensible probability, then there is no need for further modifications of the theory, and the physical system considered, i.e. the (system of) particles, is under control.<br /><br />What does all this teach us? It teaches us that the type of interactions that the system can undergo, and their relative strengths, depend on the scale at which we look at the system, i.e. on what energy is involved in any process the system is experiencing. For example, it could happen that when Emax becomes higher and higher, the coupling constant as a function of Emax becomes zero. This would mean that, at very high energies, the process of disintegration of one particle into two (or two into one) does not happen anymore, and only the one involving four particles takes place. Pictorially, only graphs of a certain shape become relevant. Or it could happen that, at very high energies, the mass of the particles becomes zero, i.e. the particles become lighter and lighter, eventually propagating just like photons do. The general lesson, besides technicalities and specific cases, is that for any given physical system it is crucial to understand exactly how the quantities of interest diverge, because in the details of such divergence lies important information about the true physics of the system considered. The infinities in our models should be tamed, explored in depth, and listened to.<br /><br />This is what Matteo Smerlak and Valentin Bonzom have done in the work presented at the seminar, for some models of quantum space that are currently at the center of attention of the quantum gravity community. 
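The cutoff-and-running-coupling logic can be caricatured in a few lines. In this toy model (everything here, including the resummed form of the amplitude and the constants b and e0, is invented purely for illustration and comes from no specific theory), an amplitude acquires a logarithmically growing dependence on the cutoff Emax; letting the coupling "run" with Emax keeps the physical prediction fixed no matter how large the cutoff becomes.

```python
import math

def resummed_amplitude(g, e_max, b=0.05, e0=1.0):
    """Toy amplitude: leading logarithms of the cutoff e_max, of the
    schematic form g - b g^2 L + b^2 g^3 L^2 - ..., summed as a geometric
    series into g / (1 + b g L), with L = log(e_max / e0)."""
    return g / (1.0 + b * g * math.log(e_max / e0))

def running_coupling(g_phys, e_max, b=0.05, e0=1.0):
    """Cutoff-dependent ('running') coupling, chosen so that the amplitude
    computed with cutoff e_max reproduces the measured value g_phys."""
    return g_phys / (1.0 - b * g_phys * math.log(e_max / e0))

g_phys = 0.3
for e_max in (10.0, 1e3, 1e6):
    naive = resummed_amplitude(g_phys, e_max)            # coupling held fixed
    tuned = resummed_amplitude(running_coupling(g_phys, e_max), e_max)
    print(f"Emax = {e_max:>9}: fixed coupling -> {naive:.4f}, running coupling -> {tuned:.4f}")
```

With the coupling held fixed, the prediction drifts as Emax grows; with the running coupling it stays at 0.3000 for every cutoff, which is the sense in which the system is "under control". It is against this quantum-field-theory background that Bonzom and Smerlak approached the divergences of their models of quantum space.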
These are so-called spin foam models, in which quantum space is described in terms of spin networks (graphs whose links are assigned discrete numbers, spins, representing elementary geometric data) or equivalently in terms of collections of triangles glued to one another along edges, and whose geometry is specified by the lengths of all such edges. Spin foam models are thus closely related both to loop quantum gravity, whose dynamical aspects they seek to define, and to other approaches to quantum gravity like simplicial gravity. These models, very much like models for the dynamics of ordinary quantum particles, aim to compute (among other things) the probability of measuring a given configuration of quantum space, represented again as a bunch of triangles glued together or as a spin network graph. Notice that here a ‘configuration of quantum space’ means both a given shape of space (it could be a sphere, a doughnut, or any other fancier shape) and a given geometry (it could be a very big or a very small sphere, a sphere with some bumps here and there, etc.). One could also consider computing the probability of a transition from a given configuration of quantum space to a different one.<br /><br />More precisely, the models that Bonzom and Smerlak studied are simplified ones (with respect to those that aim at describing our 4-dimensional space-time), in which the dynamics is such that, whatever shape and geometry of space one considers, a measurement of its curvature at any given location during its evolution would find zero. In other words, these models only consider flat space-times. This is of course a drastic simplification, but not one that makes the resulting models uninteresting. 
On the contrary, these flat models are not only perfectly fine to describe quantum gravity in the case in which space has only two dimensions, rather than three, but are also the very basis for constructing realistic models for 3-dimensional quantum space, i.e. 4-dimensional quantum spacetime. As a consequence, these models, together with the more realistic ones, have been a focus of attention of the community of quantum gravity researchers.<br /><br />What is the problem being discussed, then? As you can imagine, the usual one: when one tries to compute the mentioned probability for a certain evolution of quantum space, even within these simplified models, the answer one gets is the ever-present, but by now only slightly intimidating, infinity. What does the calculation look like? It looks very similar to the calculation for the probability of a given process of evolution of particles in quantum field theory. Consider the case in which space is 2-dimensional and therefore space-time is 3-dimensional. Suppose you want to consider the probability of measuring first n triangles glued to one another to form, say, a 2-dimensional sphere (the surface of a soccer ball) of a given size, and then m triangles now glued to form, say, the surface of a doughnut. Now take a collection of an arbitrary number of triangles and glue them to one another along edges to form a 3-dimensional object of your choice, just like kids stick LEGO blocks to one another to form a house or a car or some spaceship (you see, science is in many ways the development of children’s curiosity by other means). It could be as simple as a soccer ball, in principle, or something extremely complicated, with holes, multiple connections, anything. 
There is only one condition on the 3-dimensional object you can build: its surface should be formed, in the example we are considering here, by two disconnected parts: one in the shape of a sphere made of n triangles, and one in the shape of the surface of a doughnut made of m triangles. This condition would for example prevent you from building a soccer ball, which you could do, instead, if you wanted to consider only the probability of measuring n triangles forming a sphere, with no doughnut involved. Too bad. We’ll be lazy in this example and consider a doughnut but no soccer ball. Anyway, apart from this, you can do anything.<br /><br />Let us pause for a second to clarify what it means for a space to have a given shape. Consider a point on the sphere and take a path on it that starts at the given point and after a while comes back to the same point, forming a loop. Now you see that there is no problem in taking this loop to become smaller and smaller, eventually shrinking to a point and disappearing. Now do the same operation on the surface of a doughnut. You will see that certain loops can again be shrunk to a point and made to disappear, while others cannot. These are the ones that go around the hole of the doughnut. So you see that operations like these can help us determine the shape of our space. The same holds true for 3d spaces; in fact, one only needs more types of operations of this kind. Ok, now you finish building your 3-dimensional object made of as many triangles as you want. Just like the triangles on the boundary of the 3d object (those forming the sphere and the doughnut), the triangles forming the 3d object itself come with numbers associated with their edges. 
These numbers, as said, specify the geometry of all the triangles, and therefore of the sphere, of the doughnut and of the 3d object that has them on its boundary.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-0AJ1z_ykApU/Tl-ZMVz4_GI/AAAAAAAACI0/HXw6y6O1Srw/s1600/fig2.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="153" src="http://3.bp.blogspot.com/-0AJ1z_ykApU/Tl-ZMVz4_GI/AAAAAAAACI0/HXw6y6O1Srw/s400/fig2.jpg" width="400" /></a></div><br /><span class="Apple-style-span" style="font-size: x-small;">A collection of glued triangles forming a sphere (left) and a doughnut (right); the interior 3d space can also be built out of glued triangles having the given shape on the boundary: for the first object, the interior is a ball; for the second it forms what is called a solid torus. Pictures from <a href="http://www.hakenberg.de/">http://www.hakenberg.de/</a></span><br /><br />The theory (the spin foam model you are studying) should give you a probability for the process considered. If the triangles forming the sphere represent how quantum space was at first, and the triangles forming the doughnut how it is in the end, the 3d object chosen represents a possible quantum space-time. In the analogy with the particle process described earlier, the n triangles forming a sphere correspond to the initial n particles, the m triangles forming the doughnut correspond to the final m particles, and the triangulated 3d object is the analogue of a possible ‘interaction process’, a possible history of triangles being created/destroyed, forming different shapes and changing their size; this size is encoded in their edge lengths, which are the analogue of the energies of the particles. 
The spin foam model now gives you the probability for the process in the form of a sum over the probabilities for all possible assignments of lengths to the edges of the 3d object, each probability enforcing that the 3d object is flat (it gives probability equal to zero if the 3d object is not flat). As anticipated, the above calculation gives the usual nonsensical infinity as a result. But again, we now know that we should get past the disappointment and look more carefully at what this infinity hides. So what one does is again to imagine that there is a maximal length that the edges of triangles can have, call it Emax, define the truncated amplitude, and study carefully how it behaves as Emax grows, as it is allowed to become larger and larger.<br /><br />In a sense, in this case, what is hidden inside this infinity is the whole complexity of a 3d space, at least of a flat one. What one finds is that hidden in this infinity, and carefully revealed by the scaling of the above amplitude with Emax, is all the information about the shape of the 3d object, i.e. of the possible 3d spacetime considered, and all the information about how this 3d spacetime has been constructed out of triangles. That’s lots of information!<br /><br />Bonzom and Smerlak, in the work described at the seminar, have gone a very long way toward unraveling all this information, delving deeper and deeper into the hidden secrets of this particular infinity. Their work is developed in a series of papers, in which they offer a very elegant mathematical formulation of the problem and a new approach toward its solution, progressively sharpening their results and improving our understanding of these specific spin foam models for quantum gravity, of the way they depend on the shape and on the specific construction of each 3d spacetime, and of what shape and construction give, in some sense, the ‘bigger’ infinity. 
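A caricature of this kind of scaling analysis can be written in a few lines. The sketch below is purely illustrative: the weights are invented to mimic the cube-of-the-cutoff growth per "bubble" familiar from flat state-sum models, and are not the actual amplitudes studied in the seminar.

```python
import math

def truncated_state_sum(n_bubbles, j_max):
    """Toy truncated 'state sum': each independent bubble contributes an
    unconstrained sum over a spin-like label j with weight (2j+1)**2, so
    each bubble's factor grows roughly like the cube of the cutoff j_max."""
    per_bubble = sum((2 * j + 1) ** 2 for j in range(j_max + 1))
    return per_bubble ** n_bubbles

# The degree of divergence (slope of log amplitude against log cutoff)
# reads off combinatorial information: roughly three per bubble.
for n in (1, 2, 3):
    slope = math.log(truncated_state_sum(n, 200) / truncated_state_sum(n, 100)) / math.log(2)
    print(f"{n} bubble(s): divergence degree ~ {slope:.2f}")
```

The point of the toy is only this: the amplitude itself is meaningless once the cutoff is removed, but the *way* it diverges encodes discrete information about the object one built, a caricature of the information Bonzom and Smerlak extracted exactly.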
Their work represented a very important contribution to an area of research that is growing fast, and in which many other results, from other groups around the world, had already been obtained and are still being obtained today.<br /><br />There is even more. The analogy with particle processes in quantum field theory can be made sharper, and one can indeed study peculiar types of field theories, called ‘group field theories’, such that the above amplitude is generated by the theory and assigned to the specific process, as in spin foam models, and at the same time all possible processes are taken into account, as in standard quantum field theories for particles. <br /><br />This change of framework, embedding the spin foam model into a field theory language, does not do much to change the problem of the divergence of the sum over the edge lengths, nor its infinite result. Nor does it change the information about the shape of space encoded in this infinity. However, it changes the perspective from which we look at this infinity and at its hidden secrets. In fact, in this new context, space and space-time are truly dynamical; all possible spaces and space-times have to be considered together, on an equal footing, and compete in their contribution to the total probability for a certain transition from one configuration of quantum space to another. We cannot just choose one given shape, do the calculation and be content with it (once we have dealt with the infinity resulting from doing the calculation naively). The possible space-times we have to consider, moreover, include really weird ones, with billions of holes and strange connections from one region to another, and 3d objects that do not really look like sensible space-times at all, and so on. We have to take them all into account, in this framework. This is of course an additional technical complication. However, it is also a fantastic opportunity. 
In fact, it offers us the chance to ask and possibly answer a very interesting question: why is our space-time, at least at our macroscopic scale, the way it is? Why does it look so regular, so simple in its shape, actually as simple as a sphere? Try it! We can consider an imaginary loop located anywhere in space and shrink it to a point, making it disappear without any trouble, right? If the dynamics of quantum space is governed by a model (spin foam or group field theory) like the ones described, this is not obvious at all, but something to explain. Processes that look as nice as our macroscopic space-time are but a really tiny minority among the zillions of possible space-times that enter the sum we discussed, among all the possible processes that have to be considered in the above calculations. So, why should they ‘dominate’ and end up being the truly important ones, those that best approximate our macroscopic space-time? Why and how do they ‘emerge’ from the others and originate, from this quantum mess, the nice space-time we inhabit, in a classical, continuum approximation? What is the true quantum origin of space-time, in both its shape and geometry? 
The way the amplitudes grow with the increase of Emax is where the answer to these fascinating questions lies.<br /><br />The answer, once more, is hidden in the very same infinity that Bonzom, Smerlak, and their many quantum gravity colleagues around the world are so bravely taming, studying, and, step by step, understanding.<br /><br /><b>Spinfoam cosmology with the cosmological constant</b> (2011-08-30)<br />by David Sloan, Institute for Theoretical Physics, Utrecht University, Netherlands.<br /><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><br /></div></div></div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;">• <b>Francesca Vidotto, CNRS Marseille</b></div><b>Title:</b> Spinfoam cosmology with the cosmological constant<br /><a href="http://relativity.phys.lsu.edu/ilqgs/vidotto020111.pdf">PDF</a> of the slides (3 MB)<br /><a href="http://relativity.phys.lsu.edu/ilqgs/vidotto020111.wav">Audio</a> [.wav 26MB], <a href="http://relativity.phys.lsu.edu/ilqgs/vidotto020111.aif">Audio</a> [.aif 2MB].</div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-sNdtM3mR8tM/TkFVua5qurI/AAAAAAAACH0/Zx_fpIy0vnk/s1600/francesca.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="246" 
src="http://2.bp.blogspot.com/-sNdtM3mR8tM/TkFVua5qurI/AAAAAAAACH0/Zx_fpIy0vnk/s320/francesca.png" width="320" /></a></div><br /></div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><div class="moz-text-plain" graphical-quote="true" lang="x-western" style="font-family: -moz-fixed; font-size: 13px;" wrap="true"><pre wrap=""></pre></div></div></div><br />Current observations show that the universe appears to be expanding. This is observed through the red-shift - a cosmological Doppler effect - of supernovae at large distances. These giant explosions provide a 'standard candle', a fixed signal whose color indicates relative motion with respect to an observer. Distant objects, therefore, appear not only to be moving away from us, but to be accelerating as they do so. This acceleration cannot be accounted for in a universe filled with 'ordinary' matter such as dust or radiation. To produce acceleration there must be a form of matter which has negative pressure. The exact nature of this matter is unknown, and hence it is referred to as 'dark energy'.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-I5a7aqzTb3I/Tjm2DJKcHJI/AAAAAAAACHw/dWJPD3bLTnI/s1600/Supernova.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://2.bp.blogspot.com/-I5a7aqzTb3I/Tjm2DJKcHJI/AAAAAAAACHw/dWJPD3bLTnI/s400/Supernova.jpg" width="320" /></a></div><br /><div style="text-align: center;">An image of the remnants of the Tycho type Ia supernova as</div><div style="text-align: center;">recorded by NASA's Spitzer observatory and originally observed by</div><div style="text-align: center;">Tycho Brahe.</div><br />According to the standard model of cosmology, 73% of the matter content of the universe consists of dark energy. 
It is the dominant component of the observable universe, with dark matter making up most of the remainder (ordinary matter that makes up stars, planets, and nebulae comprises just 4%). In cosmology the universe is assumed to be broadly homogeneous and isotropic, and therefore the types of matter present are usually parametrized by the ratio (w) of their pressure to energy density. Dark energy is unlike normal matter in that it exhibits negative pressure. Indeed, in observations recently made by Riess et al., this ratio has been determined to be -1.08 ± 0.1. There are several models which attempt to explain the nature of dark energy. Among them are quintessence, which consists of a scalar field whose pressure varies with time, and void (or Swiss-cheese) models, which seek to explain the apparent acceleration as an effect of large-scale inhomogeneities. However, the currently favored model for dark energy is that of the cosmological constant, for which w=-1.<br /><br />The cosmological constant has an interesting history as a concept in general relativity (GR). Originally introduced by Einstein, who noted the freedom to include such a term in the equations of the theory, it was an attempt to counteract the expansion of the universe that appeared in general relativistic cosmology. It should be remembered that at that time the universe was thought to be static. The cosmological constant was quickly shown to be insufficient to lead to a stable, static universe. Worse, later observations showed the universe did expand, as general relativistic cosmology seemed to suggest. However, the freedom to introduce this new parameter into the field equations of GR remained of theoretical interest, its cosmological solutions yielding (anti-)de Sitter universes, which can have a different topology from the flat cases.
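The claim above that acceleration requires negative pressure can be made quantitative. In a homogeneous, isotropic universe the second Friedmann equation gives ä/a ∝ −(ρ + 3p), so a component with w = p/ρ below −1/3 drives accelerated expansion. A minimal sketch (the helper name and normalization are illustrative assumptions, not anything from the seminar):

```python
# Sign of the acceleration driven by a perfect fluid with equation of
# state w = p / rho.  Up to positive constants, the second Friedmann
# equation reads  a_ddot / a ∝ -(rho + 3 p) = -rho * (1 + 3 w),
# so a fluid with rho > 0 accelerates the expansion iff w < -1/3.

def accelerates(w, rho=1.0):
    """True if a fluid with equation-of-state parameter w drives
    accelerated expansion (illustrative helper; rho > 0 assumed)."""
    return -rho * (1.0 + 3.0 * w) > 0.0

for name, w in [("dust", 0.0), ("radiation", 1.0 / 3.0),
                ("cosmological constant", -1.0),
                ("measured dark energy", -1.08)]:
    print(f"{name:>21}: w = {w:+.2f}  accelerates: {accelerates(w)}")
```

Dust and radiation decelerate the expansion, while the cosmological constant (w = −1) and the measured value w ≈ −1.08 accelerate it, consistent with the supernova observations described above.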
The long-term fate of the universe is generally determined by the cosmological constant - for large enough positive values the universe will expand indefinitely, accelerating as it does so. For negative values the universe will eventually recollapse, leading to a future 'big crunch' singularity. Recently, through supernova observations, the value of the cosmological constant has been determined to be small yet positive. In natural (Planck) units, its value is 10^(-120), a number so incredibly tiny that it appears unlikely to have occurred by chance. This 'smallness' or 'fine-tuning' problem has elicited a number of tentative explanations ranging from anthropic arguments (since much higher values would make life impossible) to virtual wormholes; however, as yet there is no well-accepted answer.<br /><br />The role of the cosmological constant can be understood in two separate ways - it can be considered as part of either the geometry or the matter components of the field equations. As a geometrical entity it can be considered just one more factor in the complicated way that geometry couples to matter, but as matter it can be associated with a 'vacuum' energy of the universe: energy associated with empty space. It is this dual nature that makes the cosmological constant an ideal test candidate for the introduction of matter into fundamental theories of gravity. The work of Bianchi, Krajewski, Rovelli and Vidotto, discussed in the ILQG seminar, concerns the addition of this term to the spin-foam cosmological models. Francesca describes how one can introduce a term which yields this effect into the transition amplitudes (a method of calculating dynamics) of spin-foam models. This new ingredient allows Francesca to cook up a new model of cosmology within the spin-foam cosmology framework.
When added to the usual recipe of a dipole graph, vertex expansions, and coherent states, the results described indeed appear to match well with the description of our universe on large scales. The inclusion of this new factor brings insight from quantum deformed groups, which have been proposed as a way of making the theory finite.<br /><br />This is an exciting development, as the spin-foam program is a 'bottom-up' approach to the problem of quantum gravity. Rather than beginning with GR as we know it and making perturbations around solutions, the spin-foam program starts with networks representing the fundamental gravitational fields and calculates dynamics through a foam of graphs interpolating between two networks. As such, recovering known physics is not a sure thing ahead of time. The results discussed in Francesca's seminar provide a firmer footing for understanding the cosmological implications of the spin-foam models and take these closer to observable physics.Jorge Pullinhttp://www.blogger.com/profile/07465581283254332265noreply@blogger.com0tag:blogger.com,1999:blog-5826632960356694090.post-9220320976688015842011-08-03T15:08:00.000-05:002011-08-03T15:08:02.019-05:00Quantum deformations of 4d spin foam models<div class="MsoNormal"></div><div class="separator" style="clear: both; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; text-align: left;">by Hanno Sahlmann, Asia Pacific Center for Theoretical Physics and Physics Department, Pohang University of Science and Technology, Korea.</div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><br /></div></div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;">• <b>Winston Fairbairn, Hamburg University</b></div><b>Title:</b> Quantum
deformation of 4d spin foam models<br /><a href="http://relativity.phys.lsu.edu/ilqgs/fairbairn031511.pdf">PDF</a> of the slides (300k)<br /><a href="http://relativity.phys.lsu.edu/ilqgs/fairbairn031511.wav">Audio</a> [.wav 36MB], <a href="http://relativity.phys.lsu.edu/ilqgs/fairbairn031511.aif">Audio</a> [.aif 3MB].</div><br /><div class="MsoNormal"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-ss35vUAWJL4/TcB_2axoT2I/AAAAAAAAB9U/s_pVHa4StY0/s1600/winston_2.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-ss35vUAWJL4/TcB_2axoT2I/AAAAAAAAB9U/s_pVHa4StY0/s1600/winston_2.jpg" /></a></div><div class="MsoNormal">The work Winston Fairbairn talked about is very intriguing because it brings together a theory of quantum gravity, and some very interesting mathematical objects called quantum groups, in a way that may be related to the non-zero cosmological constant that is observed in nature! Let me try to explain what these things are, and how they fit together. </div><div class="MsoNormal"><br /></div><div class="MsoNormal"><b style="mso-bidi-font-weight: normal;">Quantum groups<o:p></o:p></b><br /><b style="mso-bidi-font-weight: normal;"><br /></b></div><div class="MsoNormal">A group is a set of things that you can multiply with each other to obtain yet another element of the group. So there is a product. Then there also needs to be a special element, the unit, that when multiplied with any element of the group, just gives back the same element. And finally there needs to be an inverse to every element, such that if one multiplies an element with its inverse, one gets the unit. For example, the integers are a group under addition, and the rotations in space are a group under composition of rotations. 
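The three requirements just listed - a product that stays inside the set, a unit, and inverses - can be checked mechanically for small finite examples. A sketch for the integers modulo n under addition (the helper `is_group` is a hypothetical illustration, not code from the talk):

```python
# Check the group axioms for Z_n = {0, 1, ..., n-1} under addition mod n.

def is_group(elements, op, unit):
    """Verify closure, unit, and inverses for a finite set `elements`
    with binary operation `op` (illustrative helper)."""
    elems = set(elements)
    # closure: combining any two elements gives another element
    if any(op(a, b) not in elems for a in elems for b in elems):
        return False
    # unit: combining with the unit gives back the same element
    if any(op(unit, a) != a or op(a, unit) != a for a in elems):
        return False
    # inverses: every element has a partner that yields the unit
    return all(any(op(a, b) == unit for b in elems) for a in elems)

n = 5
add_mod_n = lambda a, b: (a + b) % n
print(is_group(range(n), add_mod_n, 0))   # True: Z_5 is a group
print(is_group([1, 2], add_mod_n, 0))     # False: not closed, no unit
```

The same kind of check fails for, say, the positive integers under addition (no inverses), which is exactly why the group axioms single out the structures useful for describing symmetries.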
Groups are one of the most important mathematical ingredients in physical theories because they describe the symmetries of a physical system. A group can act in different ways on physical systems. Each such way is called a <i style="mso-bidi-font-style: normal;"><a href="http://en.wikipedia.org/wiki/Group_representation">representation</a></i>. </div><div class="MsoNormal"><br /></div><div class="MsoNormal">Groups have been studied in mathematics for hundreds of years, and so a great deal is known about them. Imagine the excitement when it was discovered that there exists a more general (and complicated) class of objects that nevertheless have many of the same properties as groups, in particular with respect to their mathematical representations. These objects are called quantum groups. Very roughly speaking, one can get a quantum group by thinking about the set of functions on a group. Functions can be added and multiplied in a natural way. And additionally, the group product, the inversion and the unit of the group itself induce further structures on the set of functions on the group.<br /><br />The product of functions is <a href="http://en.wikipedia.org/wiki/Commutativity">commutative</a> - fg and gf are the same thing. But one can now consider sets of functions that have all the extra structure required to make them look like the set of functions on a group - except for the fact that the product is no longer commutative. Then the elements cannot be functions on a group anymore -- in fact they can't be functions at all. But one can still pretend that they are “functions” on some funny type of set: a quantum group. </div><div class="MsoNormal"><br /></div><div class="MsoNormal">Particular examples of quantum groups can be found by deforming the structures one finds for ordinary groups. In these examples, there is a parameter q that measures how big the deformations are. q=1 corresponds to the structure without deformation.
If q is a complex number with q^n=1 for some integer n (i.e., q is a <i style="mso-bidi-font-style: normal;">root of unity</i>), the quantum groups have particular properties. Another special class of deformations is obtained for q a real number. Both of these cases seem to be relevant in quantum gravity.</div><div class="MsoNormal"><br /></div><div class="MsoNormal"><b style="mso-bidi-font-weight: normal;">Quantum gravity<o:p></o:p></b></div><div class="MsoNormal">Finding a quantum theory of gravity is an important goal of modern physics, and it is what loop quantum gravity is all about. Since gravity is also a theory of space, time, and how they fit together in a space-time geometry, quantum gravity is believed to be a very unusual theory, one in which quantities like time and distance come in discrete bits (atoms of space-time, if you like) and are not all simultaneously measurable. </div><div class="MsoNormal"><br /></div><div class="MsoNormal">One way to think about quantum theory in general is in terms of what is known as "path integrals". Such calculations answer the question of how probable it is that a given event (for example two electrons scattering off each other) will happen. To compute the path integral, one must sum up complex numbers (amplitudes), one for each way that the thing under study can happen. The probability is then given in terms of this sum. Most of the time this involves infinitely many possible ways: the electrons, for example, can scatter by exchanging one photon, or two, or three, or...; the first photon can be emitted in infinitely many different places, with different energies, etc. Therefore computing path integrals is very subtle, needs approximations, and can lead to infinite values. Path integrals were introduced into physics by Feynman. Not only did he suggest thinking about quantum theory in terms of these integrals, he also introduced an ingenious device useful for their approximate calculation.
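The rule "one complex amplitude per way it can happen" can be seen in a toy calculation with just two ways, say two slits a particle can pass through. The snippet below is an illustrative sketch, not a model of electron scattering:

```python
import cmath

# Toy sum over histories: each "way" contributes a complex amplitude
# exp(i * phase); the probability comes from the squared magnitude of
# the SUM, so different ways can interfere with each other.

def probability(phases):
    """Probability from summing one unit amplitude per way."""
    total = sum(cmath.exp(1j * p) for p in phases)
    return abs(total) ** 2

print(probability([0.0, 0.0]))        # in phase: 4.0 (constructive)
print(probability([0.0, cmath.pi]))   # opposite phase: ~0 (destructive)
```

With the two ways in phase, the probability is four times that of a single way; with opposite phases it vanishes. A real path integral sums infinitely many such amplitudes, which is where the subtleties mentioned above come from, and Feynman's diagrams organize exactly these kinds of sums.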
To each term in an approximate calculation of some particular process in quantum field theory, he associated what we now call its <a href="http://www2.slac.stanford.edu/vvc/theory/feynman.html">Feynman diagram</a>. The nice thing about Feynman diagrams is that they not only have a technical meaning. They can also be read as one particular way in which a process can happen. This makes working with them very intuitive. </div><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-Z3QzUYTercM/TcB_D87QR2I/AAAAAAAAB9M/fyKrpqCjXRo/s1600/200px-MollerScattering-t.svg.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-Z3QzUYTercM/TcB_D87QR2I/AAAAAAAAB9M/fyKrpqCjXRo/s1600/200px-MollerScattering-t.svg.png" /></a></div><div class="MsoNormal">(image from Wikipedia)<br /><br /></div><div class="MsoNormal">It turns out that loop quantum gravity can also be formulated using some sort of path integrals. This is often called spin foam gravity. The <a href="http://arxiv.org/abs/gr-qc/9905087">spin foams</a> in the name are actually very nice analogs to Feynman diagrams in ordinary quantum theory: They are also a technical device in an approximation of the full integral - but as for Feynman diagrams, one can read them as a space-time history of a process - only now the process is how space-time itself changes! </div><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-xJgoh49olXQ/TcB_f_-Dx8I/AAAAAAAAB9Q/7ENnTkysRUU/s1600/spinfoam.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="248" src="http://3.bp.blogspot.com/-xJgoh49olXQ/TcB_f_-Dx8I/AAAAAAAAB9Q/7ENnTkysRUU/s320/spinfoam.jpg" width="320" /></a></div><div class="MsoNormal"><br /></div><div class="MsoNormal">Associating the amplitude to a given diagram usually involves integrals. 
In the case of quantum field theory there is an integral over the momentum of each particle involved in the process. In the case of spin foams, there are also integrals or infinite sums, but those are over the labels of group representations! This is the magic of loop quantum gravity: Properties of quantized space-time are encoded in group representations. The groups most relevant for gravity are known technically as SL(2,C) -- a group containing all the Lorentz transformations, and SU(2), a subgroup related to spatial rotations. </div><div class="MsoNormal"><br /></div><div class="MsoNormal"><br /></div><div class="MsoNormal"><b style="mso-bidi-font-weight: normal;">Cosmological constant<o:p></o:p></b></div><div class="MsoNormal">In some theories of gravity, empty space “has weight”, and hence influences the dynamics of the universe. This influence is governed by a quantity called the <i style="mso-bidi-font-style: normal;"><a href="http://en.wikipedia.org/wiki/Cosmological_constant">cosmological constant</a></i>. Until a bit more than ten years ago, the possibility of a non-zero cosmological constant was not considered very seriously, but to everybody’s surprise, astronomers then discovered strong evidence <i style="mso-bidi-font-style: normal;">that there is</i> a positive cosmological constant. Creating empty space creates energy! The effect of this is so large that it seems to dominate cosmological evolution at the present epoch (and there has been theoretical evidence for something like a cosmological constant in earlier epochs, too). Quantum field theory in fact predicts that there should be energy in empty space, but the observed cosmological constant is many orders of magnitude smaller than what would be expected.
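Just how large that mismatch is can be put in one line. The figure of roughly 120 orders of magnitude is the commonly quoted estimate (taken here as an assumption; it is not a number from the talk), and the snippet below only illustrates the arithmetic:

```python
import math

# Order-of-magnitude statement of the cosmological constant problem,
# in Planck units.  Both numbers are rough, commonly quoted estimates.
observed_lambda = 1e-120   # observed value, as quoted earlier in this post
naive_qft_estimate = 1.0   # naive vacuum energy with a Planck-scale cutoff

mismatch = naive_qft_estimate / observed_lambda
print(f"discrepancy: about 10^{round(math.log10(mismatch))}")
```

A discrepancy of a hundred and twenty orders of magnitude between prediction and observation is often called the worst in the history of physics.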
So explaining the observed value of the cosmological constant presents quite a mystery for physics.</div><div class="MsoNormal"><br /></div><div class="MsoNormal"><b style="mso-bidi-font-weight: normal;">Spin foam gravity with quantum groups<o:p></o:p></b><br /><b style="mso-bidi-font-weight: normal;"><br /></b></div><div class="MsoNormal">Now I can finally come to the talk. As I've said before, path integrals are complicated objects, and infinities do crop up quite frequently in their calculation. Often these infinities are due to problems with the approximations one has made, and sometimes several can be canceled against each other, leaving a finite result. To analyze those cases, it is very useful to first consider modifications of the path integral that remove all infinities, for example by restricting the ranges of integrations and sums. This kind of modification is called "introducing a regulator", and it certainly changes the physical content of the path integral. But introducing a regulator can help to analyze the situation and rearrange the calculation in such a way that in the end the regulator can be removed, leaving a finite result. Or one may be able to show that the existence of the regulator is in fact irrelevant at least in certain regimes of the theory. </div><div class="MsoNormal"><br /></div><div class="MsoNormal">Now back to gravity: For the case of Euclidean (meaning a theory of pure space rather than of space-time; this is unphysical but simplifies certain calculations) quantum gravity in three dimensions, there is a nice spinfoam formulation due to Ponzano and Regge, but as can be anticipated, it gives divergent answers in certain situations. Turaev and Viro then realized that replacing the group by its quantum group deformation at a root of unity furnishes a very nice regularization. First of all, it does what a regulator is supposed to do, namely render the amplitude finite.
This happens because the quantum group in question, with q a root of unity, turns out to have only finitely many irreducible representations, so the infinite sums that were causing the problems are now replaced by finite sums. Moreover, as the original group was only deformed, and not completely broken, one expects that the regulated results stay reasonably close to the ones without regulator. In fact, something even nicer happens: It turned out (<a href="http://arxiv.org/abs/hep-th/9110057">work</a> by Mizoguchi and Tada) that the amplitudes in which the group is replaced by its deformation into a quantum group correspond to another physical theory -- quantum gravity with a (positive) cosmological constant! The deformation parameter q is directly related to the value of the constant. So this regulator is not just a technical tool to make the amplitudes finite. It has real physical meaning. </div><div class="MsoNormal"><br /></div><div class="MsoNormal">Winston's talk was not about three-dimensional gravity, but about the four-dimensional version - the real thing, if you like. He was considering what is called <i style="mso-bidi-font-style: normal;"><a href="http://arxiv.org/abs/arXiv:0711.0146">the EPRL vertex</a></i>, a new way to associate amplitudes to spin foams, devised by Engle, Pereira, Rovelli and Livine, which has created a lot of excitement among people working on loop quantum gravity. The amplitudes obtained this way are finite in a surprising number of circumstances, but infinities are nevertheless encountered as well. Winston Fairbairn, together with Catherine Meusburger (and, independently, <a href="http://arxiv.org/abs/arXiv:1012.4216">Muxin Han</a>), was now able to <a href="http://arxiv.org/abs/arXiv:1012.4784">write down</a> the new vertex function in which the group is replaced by a deformation into a quantum group. In fact, they developed a nice graphical calculus to do so. What is more, they were able to show that it gives finite amplitudes.
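The truncation mechanism at a root of unity can be made concrete with the "quantum integer", the q-deformed replacement for an ordinary integer in the representation theory: [m]_q = (q^m − q^(−m)) / (q − q^(−1)). For q = exp(iπ/k) this equals sin(mπ/k)/sin(π/k), which vanishes at m = k, so only finitely many representations carry nonzero weight. A small numerical sketch (the convention q = exp(iπ/k) is one common choice, taken here as an assumption):

```python
import cmath
import math

def q_integer(m, k):
    """Quantum integer [m]_q = (q^m - q^-m) / (q - q^-1) at the root
    of unity q = exp(i*pi/k); equals sin(m*pi/k) / sin(pi/k)."""
    q = cmath.exp(1j * math.pi / k)
    return ((q**m - q**(-m)) / (q - q**(-1))).real

k = 5
for m in range(1, k + 1):
    print(m, round(q_integer(m, k), 6))
# [1]_q = 1, the values rise and then fall back, and [k]_q = 0: the
# list of representations with nonzero quantum dimension stops at m < k.
```

This cutting off of the representation list is exactly what turns the divergent Ponzano-Regge sums into the finite Turaev-Viro ones.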
Thus the introduction of the quantum group does its job as a regulator.<br /><br /></div><div class="MsoNormal">As for the technical details, let me just say that they are fiercely complicated. To appreciate the intricacy of this work, you should know that the group SL(2,C) involved is what is known as non-compact, which makes its quantum group deformations very complicated and challenging structures (intuitively, compact sets have less chance of producing infinities than non-compact ones). Also, the EPRL vertex function relies on a subtle interplay between SU(2) and SL(2,C). One has to understand this interplay on a very abstract level to be able to translate it to quantum groups. The relevant type of deformation here has a real parameter q. In this case, there are still infinitely many irreducible representations – but it seems that it is the quantum version of the interplay between SU(2) and SL(2,C) that brings about the finiteness of the sums.</div><div class="MsoNormal"><br /></div><div class="MsoNormal">Thanks to this work, we now have a very interesting question on our hands: Is the quantum group deformation of the EPRL theory again related to gravity with a cosmological constant? Many people bet that this is the case, and the calculations to investigate this question have already begun, for example in a <a href="http://arxiv.org/abs/arXiv:1103.1597">recent preprint</a> by Ding and Han. Also, this raises the question of how fundamentally intertwined quantum gravity is with quantum groups. There were some interesting discussions about this already during and after the talk. At this point, the connection is still rather mysterious on a fundamental level.</div>Jorge Pullinhttp://www.blogger.com/profile/07465581283254332265noreply@blogger.com0