
Santa Fe Cosmology Workshop 2009 July 14, 2009

Posted by keithkchan in Cosmology.
2 comments

I am now at the Santa Fe cosmology workshop 2009. I have been here for 1.5 weeks and will be here for 1.5 more. As the title suggests, it is on cosmology. The talks are online; if you are interested in cosmology, you can check them out here. Some of them are pretty boring, though I am not going to name them. Our colleagues Kwan Chuen Chan and Eyal Kazin from room 538 also gave talks there. Since there is a lot going on here, I will shut up until I go back to New York.


N-body simulation of DGP model July 1, 2009

Posted by keithkchan in Cosmology, Journal club.
1 comment so far

I am very happy to have Kwan Chuen Chan from the Center for Cosmology and Particle Physics, New York University, talk about their new paper. He is a grad student at NYU, working with Roman Scoccimarro. His office is in Room 538 CCPP.
Keith

Thanks to Keith for inviting me to blog about our recent paper. In this post I will briefly talk about the paper that Roman Scoccimarro and I just uploaded to the arXiv. I will keep it brief and elementary, so for more details, please refer to the original paper, arXiv:0906.4548.

Here is the abstract

Large-Scale Structure in Brane-Induced Gravity II. Numerical Simulations
Authors: K. C. Chan, Roman Scoccimarro
(Submitted on 24 Jun 2009)
Abstract: We use N-body simulations to study the nonlinear structure formation in brane-induced gravity, developing a new method that requires alternate use of Fast Fourier Transforms and relaxation. This enables us to compute the nonlinear matter power spectrum and bispectrum, the halo mass function, and the halo bias. From the simulation results, we confirm the expectations based on analytic arguments that the Vainshtein mechanism does operate as anticipated, with the density power spectrum approaching that of standard gravity within a modified background evolution in the nonlinear regime. The transition is very broad and there is no well defined Vainshtein scale, but roughly this corresponds to k_*= 2 h/Mpc at redshift z=1 and k_*=1 h/Mpc at z=0. We checked that while extrinsic curvature fluctuations go nonlinear, and the dynamics of the brane-bending mode C receives important nonlinear corrections, this mode does get suppressed compared to density perturbations, effectively decoupling from the standard gravity sector. At the same time, there is no violation of the weak field limit for metric perturbations associated with C. We find good agreement between our measurements and the predictions for the nonlinear power spectrum presented in paper I, that rely on a renormalization of the linear spectrum due to nonlinearities in the modified gravity sector. A similar prediction for the mass function shows the right trends but we were unable to test this accurately due to lack of simulation volume and mass resolution. Our simulations also confirm the induced change in the bispectrum configuration dependence predicted in paper I.

The DGP model is an extra-dimension model with one co-dimension, and ordinary matter lives on the 3-brane. The graviton propagator is modified in the infrared. One of the interesting properties of this model is that it admits a self-accelerating solution. The hope was that the recently observed cosmic acceleration might be due to a modification of gravity rather than the mysterious dark energy. However, both theoretically and observationally, this model has proved to be disfavored. Nevertheless, it has inspired a bunch of more sophisticated models such as degravitation and the galileon. One of the serious problems with modifying gravity is that it introduces new degrees of freedom. The theory can usually be approximated as a scalar-tensor theory, but any scalar degree of freedom is likely to be highly constrained by current solar system experiments. Two nice mechanisms have been put forward to evade this kind of constraint. One of them is the chameleon mechanism, which has been realized in f(R) gravity. The other is called the Vainshtein effect, which is incorporated in DGP and some massive gravity models: the scalar degree of freedom becomes strongly coupled and frozen because of its derivative self-interactions, and the theory effectively reduces to GR.

In this paper, using numerical simulations, we study this type of brane-induced gravity in the nonlinear regime, in particular the Vainshtein effect. We compute the cosmological observables (the power spectrum, bispectrum, mass function, and bias), which give us the signatures of the DGP model and help us differentiate modified gravity models from dark energy. In the companion paper arXiv:0906.4545 by Scoccimarro, the model is studied with perturbative calculations, and some of those results are checked against the numerical results in this work.

The method we use is N-body simulation, which is largely similar to the standard gravity case. However, while in GR the field equation in the subhorizon, non-relativistic regime is just the Poisson equation, here we need to solve a fully nonlinear partial differential equation. Let me write down the equations, although I am not attempting to explain them in detail:
\bar{\nabla}^2 \phi - \frac{1}{\eta} \sqrt{-\bar{\nabla}^2}\, \phi + \frac{1}{2\eta} \bar{\nabla}^2 C + \frac{3\eta^2 - 5\eta + 1}{2\eta^2 (2\eta - 1)} \sqrt{-\bar{\nabla}^2}\, C = \frac{3}{2} \frac{\eta - 1}{\eta} \delta,
(\bar{\nabla}^2 C)^2 + \alpha \bar{\nabla}^2 C - (\bar{\nabla}_{ij} C)^2 + \frac{3\beta (\eta - 1)}{2\eta - 1} \sqrt{-\bar{\nabla}^2}\, C = \frac{3(\eta - 1)}{\eta} (1 - \beta \bar{\nabla}^{-1}) \delta.
The first equation is analogous to the Poisson equation, but now we have one more field, C, whose equation of motion is given by the second one. Nonlocal terms like \sqrt{-\bar{\nabla}^2} C can be easily handled in Fourier space. The real headache comes from the nonlinear derivative terms (\bar{\nabla}^2 C)^2 and (\bar{\nabla}_{ij} C)^2. One of the major achievements of this paper is that we developed a convergent method to solve this set of equations consistently. It involves alternating between relaxation and Fast Fourier Transforms (so we call it the FFT-relaxation method). Although that is a main result of the paper, I am not going to talk about it in detail so as not to get too technical and dry, but interested readers are welcome to read the original paper. A very rough sketch of the idea is given below.
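
To give a flavor of what splitting the problem between Fourier space and real space means, here is a toy Python sketch. It is not our actual code or algorithm: the parameter values (alpha, beta, eta, box size, damping factor) are made up, the source term is simplified, and the nonlinear terms are handled by a naive damped fixed-point iteration rather than proper relaxation sweeps. It only illustrates the idea of inverting the linear and nonlocal pieces with FFTs while evaluating the nonlinear derivative terms from the current guess.

```python
# Toy sketch of the FFT + iteration idea (NOT the paper's FFT-relaxation code):
# the linear and nonlocal pieces of the C equation are inverted in Fourier
# space, the nonlinear derivative terms are evaluated in real space from the
# current guess, and the two steps are iterated until C stops changing.
# alpha, beta, eta, box and the damping factor are placeholder values.
import numpy as np

def solve_C_toy(delta, box=100.0, alpha=1.0, beta=0.5, eta=1.5,
                n_iter=200, tol=1e-6):
    n = delta.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=box / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                                   # avoid 0/0 at k = 0

    # Fourier-space version of the linear operator acting on C:
    # alpha * laplacian -> -alpha k^2,  sqrt(-laplacian) -> |k|
    lin_op = -alpha * k2 + 3.0 * beta * (eta - 1.0) / (2.0 * eta - 1.0) * np.sqrt(k2)
    lin_op[np.abs(lin_op) < 1e-12] = 1e-12              # guard against division by zero

    src = 3.0 * (eta - 1.0) / eta * delta               # simplified source term
    C = np.zeros_like(delta)

    for _ in range(n_iter):
        # Nonlinear terms (lap C)^2 - (grad_i grad_j C)^2 from the current guess
        Ck = np.fft.fftn(C)
        lapC = np.fft.ifftn(-k2 * Ck).real
        nonlin = lapC**2
        for ki in (kx, ky, kz):
            for kj in (kx, ky, kz):
                dij = np.fft.ifftn(-ki * kj * Ck).real
                nonlin -= dij**2

        # Move the nonlinear terms to the RHS and invert the linear part via FFT
        C_new = np.fft.ifftn(np.fft.fftn(src - nonlin) / lin_op).real

        if np.max(np.abs(C_new - C)) < tol:
            return C_new
        C = 0.5 * C + 0.5 * C_new                       # damped update for stability
    return C
```

The actual implementation uses relaxation for the nonlinear part rather than this naive damped iteration; see the paper for the details.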

Let me get to the results. As I have mentioned, from the simulations we have measured the power spectrum, bispectrum, mass function and bias. Here I only show the power spectrum.
[Figures: power spectra of the nlDGP, lDGP, and GRH models at z = 0, and their ratios]
In the first figure we show the power spectrum from three different models: the fully nonlinear DGP model (nlDGP), the linearized DGP model (lDGP), and GR with the same expansion history as the DGP model (GRH), which is essentially the GR limit. In order to see the differences more clearly, we show the ratios P_{\rm nlDGP} / P_{\rm lDGP} and P_{\rm GRH} / P_{\rm nlDGP} in the lower figure. On large scales (small k), the fully nonlinear DGP model reduces to the linear one. More interestingly, in the nonlinear regime (large k), the fully nonlinear DGP model approaches GR with the same expansion history. This demonstrates that the Vainshtein effect drives the model towards the GR limit at large k. The transition is broad, and the limit is not yet fully attained in the range shown here.

OK, let me summarize the main results here. We have developed a convergent algorithm, the FFT-relaxation method, to solve the fully nonlinear field equations in the DGP model. This enables us to compute observables like the power spectrum in the DGP model using numerical simulations. We have demonstrated the Vainshtein effect, and the corresponding Vainshtein scale at z = 0 is roughly k_* = 1 h/Mpc. For more details, please refer to our original paper, arXiv:0906.4548.

Nongaussianity in cosmology June 1, 2009

Posted by keithkchan in Cosmology.
add a comment

It seems that I haven’t talked about physics for some time. After all, the About page of this blog says it is mainly about physics. Obviously that’s because of my limited knowledge, and because most people are not interested in my research area. Anyway, even if you are not interested, I will still introduce it a little bit.

Recently nongaussianity has become a rather popular topic in cosmology. The primordial density fluctuations in the early universe are very Gaussian; what I mean is that one can think of the density fluctuations as drawn from a Gaussian distribution. A Gaussian field is completely characterized by its 2-point correlation function (or power spectrum in Fourier space). Inflation, at least in simple models, predicts that the fluctuations are highly Gaussian. Because of the nonlinearity of gravity, a small amount of nongaussianity is generated, but that amount is not possible to detect in foreseeable observations (although some claim that 21 cm surveys may do it). The common parametrization for primordial nongaussianity is \Phi_{\rm NG}=\phi + f_{\rm NL} \phi^2, where \phi is the Gaussian potential. That is, the nongaussianity is generated by the term nonlinear in \phi. This is a phenomenological expansion in \phi, and you can add more terms if you like. The current observational efforts aim to constrain the value of f_{\rm NL}.
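
To make the parametrization concrete, here is a tiny Python sketch of how one could build such a nongaussian potential on a grid. The grid size and f_NL value are arbitrary, the Gaussian field is just white noise rather than one drawn from a realistic power spectrum, and I subtract the variance of \phi (a common convention) so that the nongaussian field keeps zero mean.

```python
# Minimal sketch of the local parametrization Phi_NG = phi + f_NL * phi^2.
# White-noise phi and the chosen f_NL are placeholders, not realistic values.
import numpy as np

rng = np.random.default_rng(42)
n, f_nl = 64, 50.0

phi = rng.normal(size=(n, n, n))              # stand-in Gaussian potential
phi_ng = phi + f_nl * (phi**2 - phi.var())    # subtract <phi^2> to keep zero mean

# Crude check: a nonzero f_NL gives the field a nonzero skewness
skew = np.mean((phi_ng - phi_ng.mean())**3) / phi_ng.std()**3
print(f"skewness = {skew:.2f}")
```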

There are two kinds of observations that allow people to probe nongaussianity: the cosmic microwave background and large scale structure. A Gaussian field has a vanishing 3-point function (or bispectrum in Fourier space), so people look for a nonzero bispectrum in these observations. So far the best limit on f_{\rm NL} is from WMAP data, though obtained by a group other than the WMAP team. The limit is -4 < f_{\rm NL} < 80 (95% confidence level). It will be really interesting if future data exclude 0.

I may talk about nongaussianity more in the future if I run out of other stuff to say.

Cosmic Topology April 16, 2009

Posted by keithkchan in Cosmology.
1 comment so far

Thanks to my colleagues, in particular Mr David, who urged me to update this blog. So I decided not to watch the Spiderman cartoon tonight and to write something here instead. By the way, Spiderman is also a geek. Although he studied at an imaginary university (Empire State University) in Manhattan, we believe the writers in fact meant NYU. He also frequently rests on the trademark of NYU, the characteristic NYU flag. Of course, I liked Spiderman before coming to NYU.

What do I want to talk about? Well, let me write about the paper I presented in the journal club this week. The paper I chose is arXiv:astro-ph/0402324, whose title is Cosmic Topology: a Brief Overview. This sounds rather off the mainstream of cosmology, and yes, that’s true. My main motivation for learning something about this stuff is that I wondered whether the currently observed cosmic acceleration could arise from some nontrivial global topology of the universe. It turns out that this is rather unlikely (should I just say impossible?).

In standard cosmology, we usually assume that the spatial topology of the universe is either Euclidean space E^3, the 3-sphere (S^3), or 3-hyperbolic space (H^3). However, as realized by Karl Schwarzschild, the spatial topology can in fact be M/\Gamma, where M is one of the homogeneous and isotropic spaces and \Gamma is some group of symmetry transformations that leave the metric invariant. For example, instead of the whole infinite Euclidean space, we may live in a cube with opposite faces identified, that is, a 3-torus. The point is that general relativity is a local metrical theory; it does not allow us to determine the spatial topology. So my hope of realizing cosmic acceleration through nontrivial topology evaporates, and my interest in this subject has dropped by a factor of 3. Incidentally, there are a few papers in the literature trying to argue that spatial topology is the cause of cosmic acceleration. I have read one, and my impression is that it is total nonsense.

Then can we really tell the spatial topology of the universe from observations? Yes, it is possible. The most striking consequence of nontrivial topology is that we can see multiple images of the same object in the sky. Unfortunately there are great difficulties in direct detection: the images of an object, such as a galaxy, are seen from different angles, at different stages of its life, and some of the images may be blocked by other objects. Thus it is almost impossible to tell by direct detection. Instead we can look for signatures of nontrivial global topology in statistics. I am not going into the details of the statistics, but I will outline the main ideas. One way is to compute the distances between the galaxies in a catalogue and plot the distribution as a function of separation (a toy version of this is sketched below). Because the separation between copies of an object is some characteristic length scale, it will show up as a sharp spike in the distribution. One can also look for matched circles in the CMB temperature map, which is so far the best way to probe nontrivial topology. It has ruled out the famous (or infamous) dodecahedral universe that some people proposed to explain the lower-than-expected power in the quadrupole and octopole of the CMB map. Incidentally, many people made fun of this model.
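
Just to illustrate the bookkeeping behind the separation-histogram idea, here is a toy Python sketch. The "catalogue" is random mock points, not real survey data, and the box size and bin count are arbitrary.

```python
# Toy pair-separation histogram: in a multiply connected universe, pairs of
# images of the same object would pile up at the topology's repetition scale,
# producing a spike in this histogram. Positions here are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
galaxies = rng.uniform(0.0, 1000.0, size=(500, 3))     # mock positions in Mpc/h

diff = galaxies[:, None, :] - galaxies[None, :, :]      # all pairwise offsets
dist = np.sqrt((diff**2).sum(axis=-1))
i, j = np.triu_indices(len(galaxies), k=1)              # count each pair once
separations = dist[i, j]

counts, edges = np.histogram(separations, bins=50)
print(counts.max(), edges[counts.argmax()])              # tallest bin and its location
```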

Now I should point out that if the particle horizon is much smaller than the “periodic scale” of the universe, then even if we live in a universe with nontrivial topology, we have no way to tell. So far, there is no observational evidence that the universe has nontrivial topology, so we can apply Occam’s razor to cut out the unnecessary complications and happily stick to standard cosmology. On the other hand, we should be open-minded, exhaust all the possibilities, and not make fun of those people who have good reasons to do non-standard cosmology.

Implication of NEC on compact extra dimension scenarios March 5, 2009

Posted by keithkchan in Cosmology.
add a comment

Yesterday Paul Steinhardt came to our department to give a talk on his recent paper Dark Energy, Inflation, and Extra Dimensions. Well, this all-inclusive title probably doesn’t tell you much.

He is a very good speaker; I could understand the ideas pretty well, which is not so typical for a high energy seminar. The results of their paper are simply impressive and stunning.

Let me summarize the main results. Of course, I can’t guarantee that I interpret everything 100% correctly, and you are strongly recommended to read the original paper. In their paper, they make pretty standard assumptions and then derive that to get the accelerating expansion we observe now in the 4D world, the null energy condition (NEC) must be violated. In simple cosmological terms, the NEC means \rho + p \geq 0. Violation of the NEC is generally associated with many nasty consequences like superluminal propagation, instabilities, or violation of unitarity.

It is important to note what assumptions are made. First, GR is assumed to be the theory of gravity in both the 4D world and the extra-dimensional manifold. Second, the 4D world is spatially flat. The extra dimensions are bounded, so the proof applies only to compact extra dimension models, but a lot of models in string theory and extra-dimensional model building fall into this category. The metric assumed is Ricci-flat or conformally Ricci-flat. These assumptions are either observational facts or typical assumptions made in various kinds of models; I don’t think they are controversial.

He then does some A-averaging, which is essentially an average over all spacetime points with some weighting factor, of quantities like \rho + p. This kind of A-averaging is new, but it seems natural to me. The equations they derive are essentially the conservation equations. It then becomes inevitable that to get accelerating expansion in a \Lambda CDM cosmology, the NEC has to be violated at at least one point. The NEC is violated even more strongly during inflation.

If nobody can find any flaws in their proof, this paper is expected to have big repercussions, as compact extra dimensions are essential to string theory and a lot of models are built on this idea. Nonetheless, Steinhardt does support the idea of extra dimensions, and he urges people to build models much more carefully. This may require some new way of thinking.

Mukhanov’s cosmology course February 20, 2009

Posted by keithkchan in Cosmology.
2 comments

Although it is my pleasure to shut up, I’d better keep this blog from getting rusty. I have decided to talk about the recent cosmology course given by Mukhanov here at NYU.

He is Russian, and his English is definitely Russian English; I need to concentrate to understand what the hell he is talking about. Aside from his Russian English, his teaching is very good, as far as I can tell from his first class. He can explain complicated things in simple terms. For example, he derived the Friedmann-Robertson-Walker metric in a few lines. In this sense he is similar to Gruzinov. Is it generally true that Russians can simplify complicated things and make them easy to understand?

Apparently his lectures follow his book, Physical Foundations of Cosmology. This book, at least the first chapter, is rather easy to understand; he tries to make things as clear as possible. Recently I tried to find some readable material on de Sitter space and could not find it in the common cosmology textbooks. In his book, he gives a rather detailed description of de Sitter space. One can derive the metric of dS spacetime by embedding a hyperboloid in Minkowski spacetime. The dS metric can be expressed in spatially closed, flat, and open coordinate systems, although only the closed one covers the whole manifold. I think this is an excellent textbook for cosmology.
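
For the curious, here is the embedding I have in mind, in my own shorthand (not copied from the book). dS space is the hyperboloid -X_0^2 + X_1^2 + X_2^2 + X_3^2 + X_4^2 = H^{-2} embedded in 5D Minkowski space with metric ds^2 = -dX_0^2 + dX_1^2 + \dots + dX_4^2. In the closed slicing, the induced metric is ds^2 = -dt^2 + H^{-2} \cosh^2(Ht)\, d\Omega_3^2, which covers the whole hyperboloid; the flat and open slicings only cover parts of it.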

In fact, he gave a similar lecture series at PI, so if you are interested you can watch the videos online. The sound quality of the videos is not very good, though; I tried, but I could not bear his Russian English through the poor audio.

Dvali-Gabadadze-Porrati Model January 1, 2009

Posted by keithkchan in Cosmology.
1 comment so far

First of all, happy new year to all of you.

I am going to talk about the work that I have been doing for several months. As a prelude, I would like to introduce some related background. In this post, I give a tiny review of the Dvali-Gabadadze-Porrati (DGP) model. I think readers at NYU physics should know a little bit about this model, as it was invented by three physicists at NYU.

The DGP model is an extra dimension (ED) model. Any ED model has to reproduce the well-known Newtonian gravitational potential \propto 1/r on the scales that have been well examined. Other ED models, like the Kaluza-Klein (KK) model and the Randall-Sundrum (RS) model, modify gravity in the ultraviolet regime; that is, we could discover the existence of the ED by probing sufficiently small distance scales. Currently, the constraint on the size of such an ED is sub-millimeter or so. KK hides the ED by compactifying it on some manifold, while RS makes use of a warped ED. DGP is different from these models in the sense that it is an infrared modification of gravity; that is, you find out about the existence of the ED by looking at large enough scales. In the DGP model, our world is a 3-brane embedded in 4+1 dimensional Minkowski spacetime. The gravitational action in DGP reads
S = M^3 \int d^5 X \sqrt{G}\, R^{(5)} + M_P^2 \int d^4 x \sqrt{-g}\, R^{(4)}. The second term is crucial for the recovery of Newtonian gravity on small scales.

The DGP model respects Lorentz invariance on the 3-brane. It is a covariant formulation of massive gravity. For an observer on the brane, the effective mass of the graviton is m_g = \frac{2M^3}{M_P^2}. The graviton mass m_g is related to the crossover scale r_c as r_c = 1/m_g. The crossover scale characterizes the scale at which the extra-dimensional gravity kicks in.
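
A quick back-of-the-envelope way to see what this means (my own summary, not a quote from the DGP papers): the static potential of a point source interpolates between the usual 4D behaviour and a 5D behaviour around r_c, with V(r) \propto 1/r for r \ll r_c and V(r) \propto 1/r^2 for r \gg r_c, where r_c = 1/m_g = M_P^2 / (2M^3). So gravity looks four-dimensional well inside r_c and leaks into the extra dimension beyond it.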

Another remarkable discovery, by Deffayet, who was at NYU at that time, was that the resulting DGP Friedmann equation admits a self-accelerating solution. That is, in the DGP model we can explain the observed cosmic acceleration without invoking the mysterious dark energy!

So far so good. However, it was later found that the DGP model is plagued by ghosts. In field theory, a ghost has a negative kinetic term, and it is a signal of an inconsistency of the theory. As if ghosts were not enough to kill DGP, some people recently claimed that the self-accelerating DGP branch has been ruled out at 5\sigma by observations, through the so-called ISW effect.

Probably DGP will go to hell with its ghosts pretty soon. However, DGP has motivated a more sophisticated generalization, Cascading Gravity. This model seems to be free of ghosts, but the details have yet to be worked out as it is very complicated.

Although DGP turns out not to be so successful, it is certainly a landmark of NYU physics. Many people at NYU have contributed to it. I will talk about my own work on it, probably at the end of this month.

Lyman Alpha Forest II December 8, 2008

Posted by keithkchan in Cosmology.
add a comment

I continue my discussion of the Lyman alpha forest.

The Lyman alpha forest is one of the few tools that enable us to probe structure at z ~ 3, where it is still relatively linear. As you may recall, the observed spectrum of a quasar is very choppy, so it is not surprising that one can only extract information statistically. It turns out to be useful to regard the whole spectrum as a continuous field rather than as a background plus individual absorption lines.

One reason the Lyman alpha forest is useful is that the IGM satisfies some simple relations. In the IGM, photoionization by the UV background is approximately balanced by recombination, so n_{HI}\Gamma = \alpha_{\rm Rec} n_e n_p, where n_{HI}, n_e, and n_p are the neutral hydrogen, electron, and proton number densities respectively, \Gamma is the photoionization rate, and \alpha_{\rm Rec} is the recombination coefficient. Also, the temperature T of the gas satisfies a power law, T = T_0 (\rho/\bar{\rho})^{\gamma - 1} with \gamma - 1 \approx 0.6. The recombination coefficient can be approximated as \alpha_{\rm Rec} \propto T^{-0.7} in the relevant temperature range. Using these relations and the fact that the optical depth \tau is proportional to n_{HI}, one can derive

\tau = A (\rho/\bar{\rho})^{\beta}, where \beta \approx 1.6. The expression for A is complicated, and I won’t bother to write it down here. This relation is called the Fluctuating Gunn-Peterson Approximation (FGPA). This simple relation turns out to be rather accurate even though it neglects peculiar velocities, thermal broadening, and shock heating. From the optical depth, one gets the flux F = F_c e^{-\tau}, where F_c is the continuum. The flux is important because it is what astronomers actually measure; therefore it is the central quantity in Lyman alpha forest measurements. A toy version of the FGPA is sketched below.
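
Here is a tiny Python sketch of imposing the FGPA on a density field. The amplitude A and the density field are just placeholders (in practice A is tuned to match the observed mean flux), and peculiar velocities and thermal broadening are ignored, as stated above.

```python
# Toy FGPA: tau = A * (rho/rho_bar)^beta, transmission F/F_c = exp(-tau).
# A and the lognormal density field below are placeholder choices.
import numpy as np

def fgpa_transmission(delta, A=0.3, beta=1.6):
    """Map the overdensity delta = rho/rho_bar - 1 to the transmitted flux fraction."""
    tau = A * (1.0 + delta) ** beta
    return np.exp(-tau)

rng = np.random.default_rng(1)
delta = np.exp(rng.normal(0.0, 0.5, size=1024)) - 1.0   # toy density along a sightline
flux = fgpa_transmission(delta)
print("mean transmission:", flux.mean())                 # tuned to data in practice
```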

It is not surprising that one needs hydrodynamic simulations in order to capture the details of the forest accurately. A cheap alternative is to run a gravity-only PM N-body simulation and impose the FGPA.

From the Lyman alpha forest, researchers have managed to extract the underlying matter power spectrum and to test the Gaussianity of the initial conditions. I may talk more later about the constraints we can get from the Lyman alpha forest.

Lyman Alpha Forest I November 30, 2008

Posted by keithkchan in Cosmology.
add a comment

In the Astrophysics course, we have to present a paper for the evaluation of the course. I chose to talk about the Lyman alpha forest. I am learning a little bit about it now, and I would like to make some posts on it.

The Lyman alpha forest (BTW, this name is very cool!) refers to the absorption features in the spectrum of a distant Quasi Stellar Object (QSO) due to neutral hydrogen atoms in the intervening InterGalactic Medium (IGM). There are a lot of names here, so let me try to make sense of them. QSOs (also called quasars) are very bright objects called Active Galactic Nuclei (AGN) (oh come on, I am introducing more and more terms!), believed to be galaxies with a (super)massive black hole at the centre. Because QSOs are very bright, they can be seen at large distances (the most distant QSO is at about redshift 6). The IGM is the gas and dust that fill the space in between galaxies. Lyman alpha refers to the transition between the n = 1 and n = 2 levels of the hydrogen atom; its wavelength is 121.6 nm.

The universe is expanding, so photons are stretched and the spectrum is redshifted (it shifts to longer wavelengths): \lambda_{\rm obs} = (1 + z) \lambda_{\rm em}. That is, the observed photon’s wavelength is increased by a factor of 1 + z.
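
As a quick example, Lyman alpha photons absorbed by gas at z = 3 are seen at \lambda_{\rm obs} = (1 + 3) \times 121.6 {\rm \ nm} \approx 486 {\rm \ nm}, i.e. in the visible, which is why the forest towards high-redshift quasars can be observed from the ground.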

This cartoon is from Ned Wright: [cartoon showing a quasar sightline and its Lyman alpha forest spectrum]
On the right, we see the QSO, and the peak corresponds to the Lyman alpha emission from the QSO itself, which suffers the largest redshift. In between, there are clouds in the IGM which contain neutral hydrogen. These neutral hydrogen atoms absorb Lyman alpha photons, and this produces the troughs in the spectrum. They appear at different wavelengths since the clouds have different redshift factors 1 + z.

Here is a real spectrum from Bill Keel: [observed quasar spectrum showing the Lyman alpha forest]
The whole bunch of messy lines to the left of the peak is the “forest” of Lyman alpha absorption lines. Note that the horizontal axis is the emitted wavelength in the frame of the QSO.

Energy Budget of the Universe November 15, 2008

Posted by keithkchan in Cosmology.
5 comments

Mr David is lazy and I don’t have anything new to say, but I have to say something, so I have decided to talk about my own research interest, cosmology. I will start a series of posts introducing basic cosmology.
Now we know that the universe consists of about 5% ordinary matter, 25% dark matter, and 70% dark energy. These contents can be inferred from the Cosmic Microwave Background. Ordinary matter is just the matter you and I are made of; its abundance also agrees with Big Bang Nucleosynthesis. Dark matter is more mysterious because we only “see” it through gravity. On many different occasions we are forced to invoke dark matter; the most famous evidence is the rotation curves of galaxies. According to Kepler’s law, the orbital velocity should drop as r^{-1/2}, but observations show that the rotation curves flatten. To accommodate this fact, one needs to introduce dark matter with a density distributed roughly as 1/r^2 in a galaxy (see the quick derivation below). There are dark matter candidates motivated by high energy physics, e.g. the axion and WIMPs. Some experiments are dedicated to detecting these elusive particles, although nothing convincing has been found so far.
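
Here is the quick argument (my own two-line derivation, using only Newtonian gravity): a flat rotation curve means v^2 = \frac{G M(r)}{r} = {\rm const}, so M(r) \propto r, and therefore \rho(r) = \frac{1}{4\pi r^2} \frac{dM}{dr} \propto \frac{1}{r^2}. This falls off much more slowly at large radii than the visible stellar light does, hence the need for a dark halo.
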
The remaining component is even more mysterious. The simplest candidate is a plain cosmological constant, which is still in agreement with observations. The cosmological constant may just be interpreted as a constant in the Einstein-Hilbert action; from general relativity alone, we have no clue how large or small it should be. If we interpret it as vacuum fluctuations and integrate the ground state energy up to some cut-off scale, a natural choice being the Planck scale, we get a huge number, about 120 orders of magnitude bigger than the observed value. The cosmological constant problem is one of the most embarrassing problems in theoretical physics, and there is still no satisfactory solution to it. Some people argue that the contribution from vacuum fluctuations vanishes for some reason, and that the observed cosmological constant is due to something else. Although the cosmological constant is still in pretty good agreement with observations, researchers are looking at other, more interesting alternatives. I will talk about those possibilities later.
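
For the curious, the rough estimate behind that famous number (ballpark figures only): the vacuum energy density with a Planck-scale cutoff is of order \rho_{\rm vac} \sim M_{\rm Pl}^4 \sim (10^{18} {\rm \ GeV})^4, while the observed dark energy density is of order \rho_\Lambda \sim (10^{-3} {\rm \ eV})^4 = (10^{-12} {\rm \ GeV})^4, so the ratio is \rho_{\rm vac}/\rho_\Lambda \sim (10^{30})^4 = 10^{120}.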