# How Does LIGO Point Back to a Source?

LIGO published the position of their gravitational wave on the sky when they announced the detection, so I was wondering how this could be done. While LIGO has more information than just timing, I decided to consider the simpler case of reconstructing a position based only on the relative timing between two detectors.

I started with a coordinate system with the origin at the center of Earth (assumed here to be a perfect sphere). The z-axis points toward the north pole, the x-axis toward the equator at the Greenwich meridian, and the y-axis 90 degrees east of the x-axis. I can then place the two detectors in this coordinate system (most easily using spherical coordinates). The time separation for a gravitational wave propagating along an axis given by a unit vector u(θ,φ) is just

$\Delta t(\theta,\phi) = \frac{R_\oplus}{c}({\bf v_1}-{\bf v_2})\cdot {\bf u(\theta,\phi)}$

where v1 and v2 are the unit vectors pointing toward the two detectors, R⊕ is the radius of Earth, and c is the speed of light (the speed of the waves). If each detector can measure the arrival time of the wave with some uncertainty σ, and the relative timing uncertainty between the sites (typically from GPS) is negligible, I can define a likelihood that u is the correct direction as

$L(\theta,\phi) = e^{-\frac{1}{2}\chi^2(\theta,\phi)}$

where

$\chi^2(\theta,\phi) = \frac{(\Delta t_{\rm meas} - \Delta t(\theta,\phi))^2}{2\sigma^2}$

The 2σ² in the denominator is the variance of Δt_meas, since it is the difference of two measurements that each have uncertainty σ.
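Here is a minimal sketch of this model in code. The detector latitudes and longitudes are rough approximations of LIGO Hanford and Livingston (not precise site coordinates), and σ is a made-up timing uncertainty, both chosen purely for illustration:

```python
# Two-detector timing model sketched above. Detector coordinates are
# approximate (Hanford and Livingston), used only for illustration.
import numpy as np

R_EARTH = 6.371e6  # radius of Earth in meters (perfect-sphere assumption)
C = 2.998e8        # speed of light (and of the waves) in m/s

def unit_vector(theta, phi):
    """Unit vector for polar angle theta and azimuth phi (radians)."""
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def detector_vector(lat_deg, lon_deg):
    """Unit vector toward a detector given latitude/longitude in degrees."""
    return unit_vector(np.radians(90.0 - lat_deg), np.radians(lon_deg))

v1 = detector_vector(46.5, -119.4)  # LIGO Hanford (approximate)
v2 = detector_vector(30.6, -90.8)   # LIGO Livingston (approximate)

def delta_t(theta, phi):
    """Predicted arrival-time difference for source direction u(theta, phi)."""
    return (R_EARTH / C) * np.dot(v1 - v2, unit_vector(theta, phi))

def likelihood(theta, phi, dt_meas, sigma):
    """exp(-chi^2/2), where 2*sigma^2 is the variance of the difference
    of two measurements that each have uncertainty sigma."""
    chi2 = (dt_meas - delta_t(theta, phi)) ** 2 / (2.0 * sigma ** 2)
    return np.exp(-0.5 * chi2)
```

With these approximate positions, the largest possible separation, (R⊕/c)|v1 − v2|, works out to roughly 10 ms, consistent with the behavior described below.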

Using this function for some test values, we can try to understand how pointing works. If the time separation is 0, then we expect the most likely values to lie along the great circle equidistant from the two detectors (the directions perpendicular to v1 − v2).

I will also note that this is plotting position on the sky in Earth coordinates. Earth is not in an inertial reference frame: it rotates and moves through space, so actually mapping this against the stars gets very complicated, and I’ll ignore that since it doesn’t affect what I want to show. The coordinates are also spherical coordinates, which are non-Euclidean, so shapes look strange if you’re not used to them. The curve above is actually the spherical version of a “straight” line.

For a time separation of 8 ms, we see a thin ring that widens out as the timing resolution is degraded.

At 10 ms, we’re getting a large separation, and the ring has shrunk almost to just a blob.

Basically, what seems to be going on is:

1. The general shape of the likelihood distribution is a ring of positions consistent with the measured time separation.
2. As the time separation increases, the ring on the sky shrinks.
3. When the separation is too large (it would require the wave to travel slower than c), no position works well, so there is a sharp peak at the best-fitting (but still poor) position.
4. The ring is characteristic of having two detectors. Its center never changes (there are actually two possible centers, depending on the sign of the separation) and is determined entirely by the positions of the detectors.
5. A third detector should break the degeneracy and allow the position to be reconstructed as a single point on the sky.
6. Since the measured position looks like a blob, the time separation is presumably large enough that the timing uncertainty prevents us from resolving a ring-like shape.
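These observations can be checked with a quick grid scan over the sky. The unit vectors below point roughly toward Hanford and Livingston, the 0.2 ms timing uncertainty is an assumption for illustration, and the likelihood threshold of 0.5 is arbitrary:

```python
# Grid scan of the two-detector timing likelihood over the whole sky.
# Detector directions are rough approximations; SIGMA is illustrative.
import numpy as np

R_EARTH, C = 6.371e6, 2.998e8
SIGMA = 2e-4  # assumed per-detector timing uncertainty in seconds

v1 = np.array([-0.34, -0.60, 0.73])  # roughly toward Hanford
v2 = np.array([-0.01, -0.86, 0.51])  # roughly toward Livingston
v1, v2 = v1 / np.linalg.norm(v1), v2 / np.linalg.norm(v2)

# Sky grid in spherical coordinates.
theta = np.linspace(0.0, np.pi, 200)
phi = np.linspace(-np.pi, np.pi, 400)
TH, PH = np.meshgrid(theta, phi, indexing="ij")
U = np.stack([np.sin(TH) * np.cos(PH),
              np.sin(TH) * np.sin(PH),
              np.cos(TH)])

# Predicted time separation for every grid direction.
dt_pred = (R_EARTH / C) * np.tensordot(v1 - v2, U, axes=1)

def sky_fraction(dt_meas):
    """Solid-angle-weighted fraction of the sky with likelihood > 0.5."""
    chi2 = (dt_meas - dt_pred) ** 2 / (2.0 * SIGMA ** 2)
    like = np.exp(-0.5 * chi2)
    weight = np.sin(TH)  # solid-angle element on the grid
    return float(np.sum(weight * (like > 0.5)) / np.sum(weight))

frac_ring = sky_fraction(0.0)    # great-circle ring: a few percent of the sky
frac_none = sky_fraction(0.011)  # separation beyond the physical maximum
```

For a measured separation beyond the physical maximum (about 10 ms for this baseline), essentially no direction fits the threshold, in line with point 3 above.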

# Finally Read the LIGO Paper

I finally found some time to read the LIGO paper. A couple things that I thought were interesting:

1. The peak power radiated in gravitational waves was equivalent to about 200 solar masses (times c²) per second. The power didn’t stay there for very long, since a total of only about 3 solar masses was radiated away.
2. The rate of false positives as large as the observed signal is less than one per tens of thousands of years, so this is a signal enormously above any known background.
3. LIGO also uses a template-matching routine, comparing the measured signal to a library of pre-calculated theoretical waveforms. This only gives approximate physics parameters, so they then supplement it with a full parameter-estimation fit.
4. The next biggest event had a false-positive rate of one every few years.

# Sean Carroll on Gravitational Waves

Sean Carroll, who wrote one of the most popular textbooks on general relativity, has a new article in The Atlantic. He goes over some of the historical context and some of the ideas behind gravitational waves so that people who know almost nothing about physics can understand a little about yesterday’s announcement.

# Gravitational Wave Rumor Back in the News

The rumor from earlier this year that LIGO has found gravitational waves has returned. This time, there’s a theorist who claims that a paper will be released by Nature in less than a week, so we won’t have to wait long to see if this particular rumor is true. The claim is that LIGO has found definitive evidence of a black hole merger, which would be very exciting. Measurable gravitational waves are expected to be generated when two very large objects orbit one another at a close distance, with the waves bleeding off energy and causing the orbital radius to decrease until the objects merge. An interferometry experiment like LIGO would then see a clear oscillating signal from the orbits of these two objects. Gravitational waves are one of the most important predictions of general relativity that we could measure.

Advanced LIGO, the newest iteration of the LIGO experiment, has finally turned on. LIGO is a gravitational wave observatory using extremely high precision interferometers to look for signatures of gravitational waves hitting Earth. Advanced LIGO is meant to finally have enough sensitivity to have a decent likelihood of finding gravitational waves within a reasonable amount of time.

Finding direct evidence of gravitational waves would be a huge discovery, as that would confirm an important prediction of the theory of general relativity, which states that mass warps the shape of spacetime. Events such as black hole mergers are thought to generate measurable gravitational waves due to the huge gravitational forces governing their dynamics.

# Dark Matter Evidence: Lensing

Here’s another post continuing my discussion of dark matter.

The previous two posts in this series dealt with two of the most important pieces of evidence for dark matter. Both galactic cluster and rotation curve data show a large disparity between the mass measured using the mass-to-luminosity ratio and the mass measured using kinetic energy measurements. However, you can easily imagine an alternative explanation: maybe gravity doesn’t really work the way we think it does on large scales. Maybe all we need to do is modify the equations of general relativity so that we retain the behavior at the smaller scales we can actually measure while producing the correct forces at large scales to explain the apparent mass deficit in luminosity measurements. In recent years, some newer evidence has cropped up that presents a serious challenge to theories of modified gravity with no dark matter.

In general relativity, gravity acts on energy rather than just mass. Light, which has energy but no mass, follows geodesics – basically the equivalent of straight lines in a non-Euclidean geometry. When light passes by a massive object, gravity deflects it from its original direction. To a faraway observer, objects whose light is deflected by very massive objects will appear distorted in some way. Many types of spatial distortion can be seen. To give a couple of examples, multiple copies of an object can appear due to light being deflected toward us from several different directions, and Einstein rings, where a point-like object appears as a ring instead, are observed as well. This is known as strong lensing. There is also weak lensing, where statistical methods must be used to find the distortions. By measuring the lensing of objects in the background, the mass density of the object doing the lensing can be reconstructed.
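For a sense of scale, the deflection angle general relativity predicts for light passing a point mass M at impact parameter b is

$\alpha = \frac{4GM}{c^2 b}$

which for light grazing the Sun is about 1.75 arcseconds – the value famously confirmed by Eddington’s 1919 eclipse expedition.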

The most significant lensing measurement relevant to dark matter research is that of the Bullet Cluster (1E0657-56) in the early 2000s. This led to the famous picture at the top of this post with data from the Chandra X-ray Observatory and the Magellan telescopes in Chile. The Bullet Cluster is believed to be the result of a collision between two galactic clusters. When two clusters collide, it is expected that the gas, which makes up most of the normal matter, will interact readily and clump together in the center. Galaxies act more like individual particles: they may be deflected but continue moving without interacting very much. So, it’s expected that most of the mass of normal matter will show up as a single diffuse cloud. Gas in a large gravitational potential is easily measured through X-ray emission. In several papers, Clowe, Markevitch et al. used X-ray data from Chandra to look at the distribution of gas (most of the normal matter) and optical data from telescopes like the VLT (Very Large Telescope) in the Atacama Desert in Chile to compare the mass distributions from X-rays and weak lensing. The X-ray data showed that the gas was located in the center, as expected, and the optical measurements showed the galaxies distributed in two lobes, again as expected. The lensing measurements, however, showed that most of the mass contributing to gravitational lensing was distributed with the two lobes of galaxies, not with the gas.

This result suggests that most of the mass exists as a diffuse cloud of non-interacting matter. When the clusters collided, this matter would just follow the galaxies, since there are no interactions to cause it to clump together as with gas (mostly hydrogen). This non-interacting matter neatly fits the description of dark matter. The dark matter hypothesis is a simple way to explain all three of these pieces of evidence*, while modified gravity (with no dark matter) struggles to explain these phenomena.

*Some have suggested that the Bullet Cluster and some similar objects are actually pretty difficult to model with the regular ΛCDM model. While this would suggest that our cosmological model is not entirely correct, the existence of dark matter would still be strongly favored in alternative models.

# Dark Matter Evidence: Galactic Clusters

In the latest entry in my Physics for Non-Physicists series, I will go over some of the evidence for dark matter from measurements of galactic clusters. This post is more of a historical overview of the subject than a review of the most recent literature.

I gave a very brief introduction to dark matter in my previous post on the thermal history of the universe. To recap, dark matter is a form of matter believed to exist that does not interact much either with itself or with the particles of the Standard Model.

As far back as the 1930s (and possibly earlier), the theoretical background and experimental methods of astrophysics had advanced to the point where things like the velocities and luminosities of far-away galaxies could be measured with a reasonable amount of accuracy.

A galactic cluster is a structure containing many galaxies and associated gas in a gravitationally bound system. In cases where the effects of general relativity can be ignored with little effect (i.e. most cases), gravity follows a Kepler potential: the gravitational potential energy is proportional to the inverse of the distance, while the gravitational force is proportional to the inverse square of the distance. For a Kepler potential, it can be shown that the time-averaged center-of-mass kinetic energy T (the sum of the kinetic energies of all objects in the cluster, with any overall motion removed) has a particular relationship to the internal gravitational potential energy U: 2T = -U. This is known as the virial theorem.
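As a quick sanity check, the virial relation can be verified directly for the special case of a small mass m on a circular orbit of radius r around a much larger mass M. Setting the centripetal force equal to gravity,

$\frac{mv^2}{r} = \frac{GMm}{r^2} \quad\Rightarrow\quad mv^2 = \frac{GMm}{r}$

so that

$2T = mv^2 = \frac{GMm}{r} = -U$

The full theorem extends this to time-averaged quantities for any bound system in a Kepler potential.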

Still in the 1930s, it was generally assumed that stars and galaxies follow an approximately constant mass-to-luminosity ratio. That is, for large enough objects, the amount of light has a known relation to the amount of mass, so by measuring the luminosity, the mass can be estimated. The gravitational potential energy between two objects is -Gm₁m₂/r, where G is Newton’s gravitational constant, the m’s are the masses, and r is the distance between them. From dimensional analysis, the total potential energy of the cluster is

$U = - \frac{xGM^2}{R}$

where M is the total mass, R is an approximate radius (size) of the cluster, and x is a constant of order 1 related to the shape of the cluster (spherical, ellipsoidal, etc.). The kinetic energy is the sum of (1/2)mv² (half of mass times center-of-mass velocity squared) over all the objects in the cluster. This can be estimated from a representative sample of galaxies in the cluster, since the sum can be rewritten as (1/2)MV², where again M is the total mass and V² is now the estimated mean squared velocity.

The velocity can’t be measured directly from apparent motion: the distances are so incredibly large that motion across the sky is undetectable. Instead, the velocity along the line of sight can be determined by measuring the light spectrum (the amount of light at different wavelengths). Certain spectral features occur at known wavelengths, and motion along the line of sight causes redshift (motion away from us) or blueshift (motion toward us), which are the terms for the Doppler effect applied to light. The sound version of the Doppler effect is something most people have noticed in everyday life: as a fast-moving object emitting sound approaches and then recedes from us, the pitch (frequency/wavelength) of the sound changes.

Returning to the topic at hand, applying the virial theorem shows that the mass is given by

$M = \frac{RV^2}{Gx}$.
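To get a feel for the numbers, here is the estimate with round, illustrative values for a Coma-like cluster. The radius, velocity, and shape factor below are assumptions chosen for this sketch, not measured inputs:

```python
# A rough virial-mass estimate in the spirit of Zwicky's argument.
# The Coma-like numbers below are round illustrative values, not measurements.
G = 6.674e-11      # Newton's gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass in kg
MPC = 3.086e22     # one megaparsec in meters

R = 3.0 * MPC      # assumed cluster radius
V2 = (1.0e6) ** 2  # assumed mean squared velocity: (1000 km/s)^2
x = 1.0            # shape factor of order one

M = R * V2 / (G * x)  # virial mass in kg
M_solar = M / M_SUN   # on the order of 10^15 solar masses
```

Even with these crude inputs, the estimate lands around 10¹⁵ solar masses, vastly more than the luminosity-based mass Zwicky had to compare against.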

In the early 1930s, an astronomer named Fritz Zwicky was studying the properties of the Coma Cluster (shown at the top of this post). Using the above results, he compared the mass as estimated from the virial theorem to the mass as estimated from the luminosity. The result was surprising: what I’ll call the “virial mass” was much larger than the “luminosity mass”; the galaxies in the cluster seemed to be moving much too fast. Zwicky’s 1937 paper in Ap.J. can be found here. An earlier paper was published in 1933, but it’s not in English so I cannot read it.

Zwicky notes that there are several ways to explain the discrepancy between the two mass estimates. One interesting possibility is that on very large scales (the mass-to-luminosity ratio was calibrated on nearby star systems), the gravitational mass is dominated by some form of matter that does not emit light (i.e. is “dark matter”). This is one of the earliest and most influential proposals for a theory of dark matter. It turns out that Zwicky actually overestimated the discrepancy by a significant amount. More modern values, based on a much stronger understanding of astrophysics, still show a discrepancy – and our evidence for dark matter has become much more convincing – but it’s more modest than Zwicky’s estimate, which put the “virial mass” at around 100 times the “luminosity mass.”