It was reported last week that the LHC will be shut down for a few weeks due to a weasel gnawing through a power cable. The weasel, as you might expect, did not survive.
The Edelweiss experiment has released a new preprint including updated limits in the low mass region from their WIMP dark matter search using germanium bolometers. There are still some stronger limits, but one interesting thing is that this further bolsters the case that the various purported WIMP signals from other experiments are probably not actually dark matter. The results from experiments like CoGeNT, CRESST, and DAMA all lie above the limit shown here.
Symmetry has a fairly new article on sterile neutrinos, explaining some of the basic ideas about what they are and why we’re interested in them. Measurements of the Z peak in e+/e- collisions at LEP showed that there are only 3 neutrinos, but there are some caveats to that. The measurements showing that there are 3 neutrinos really mean that there are 3 neutrinos that (1) have less than half the Z mass (91.2 GeV) and (2) interact with Standard Model particles via the weak force.
If we can instead add some “neutrinos” that don’t interact through the weak force, then there’s room for more neutrinos. One of the main ways people search for them is to find problems with the standard picture of 3-neutrino oscillations. If there are sterile neutrinos that mix with the three usual flavors (electron, muon, and tau), then maybe we can find evidence of neutrinos oscillating into sterile neutrinos. That is, regular neutrinos seemingly disappearing altogether rather than changing from one type to another.
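As a concrete illustration of what a "disappearance" search measures: in the simplest two-flavor approximation, the survival probability of a neutrino dips below 1 in an oscillating pattern, and a sterile admixture would show up as an unexplained deficit. A quick sketch (the 1.27 is the standard unit-absorbing constant for Δm² in eV², L in km, and E in GeV; the parameter values below are placeholders, not fits to any experiment):

```python
import math

def survival_probability(sin2_2theta, dm2_ev2, L_km, E_GeV):
    """Two-flavor survival probability:
    P(nu -> nu) = 1 - sin^2(2*theta) * sin^2(1.27 * dm2 * L / E)."""
    return 1.0 - sin2_2theta * math.sin(1.27 * dm2_ev2 * L_km / E_GeV) ** 2

# No mixing: nothing disappears
print(survival_probability(0.0, 1.0, 500.0, 1.0))   # -> 1.0

# Maximal mixing at an oscillation maximum: everything disappears
print(survival_probability(1.0, 1.0, math.pi / 2 / 1.27, 1.0))  # -> ~0.0
```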
SuperKEKB, an upgraded version of the KEKB electron/positron collider at the KEK lab in Tsukuba, has started up. There are no collisions yet, but beams are now being run around the main ring. The Belle II experiment will use the SuperKEKB facility to study B physics – that is, the physics of B mesons (and other particles containing b quarks). These are particularly interesting because many of these particles decay weakly, with lifetimes long enough to actually measure how far they travelled. B physics allows for high precision tests of various aspects of the Standard Model, including things like CP violation in the quark sector (i.e. matter/antimatter differences), particle spectroscopy (measuring the properties of the various kinds of composite particles), and searches for various kinds of new physics.
Symmetry magazine has an ABCs of Particle Physics feature right now, which provides fun explanations of various topics in physics (with a focus on high energy physics). Everything is meant to be accessible to basically anyone. You can check it out here.
LIGO published the sky position of their gravitational wave event when they announced that they had found something, so I was wondering how this could be done. While LIGO has more information than just timing, I decided to consider the case of reconstructing a position based just on the relative timing from two detectors.
I started with a coordinate system with the origin at the center of Earth (assumed to be a perfect sphere here). The z-axis points toward the north pole, the x-axis toward the equator at the Greenwich meridian, and the y-axis 90 degrees east of the x-axis. I can then place the two detectors in this coordinate system (easiest using spherical coordinates). The time separation for a gravitational wave along some axis given by a unit vector u(θ,φ) is just

Δt(u) = (R/c) u·(v1 − v2)

where v1 and v2 are the unit vectors pointing toward the two detectors, R is the radius of Earth, and c is the speed of light (the speed of the waves). If each detector can measure the time of the wave to some uncertainty σ and the relative timing uncertainty (typically from GPS) is negligible, I can define some likelihood that u is the correct vector as

L(u) ∝ exp(−(Δt_measured − Δt(u))² / (4σ²))

since the difference of two measurements, each with uncertainty σ, has variance 2σ².
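Here is a minimal sketch of this setup in Python. The detector coordinates below are rough approximations of the two LIGO sites (Hanford and Livingston), not their exact locations, and the spherical-Earth radius is the same simplification used above:

```python
import numpy as np

R_EARTH = 6.371e6   # m, spherical-Earth radius
C = 2.998e8         # m/s, speed of light (and of the waves)

def unit_vector(lat_deg, lon_deg):
    """Unit vector in the Earth-centered frame described above."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

# Approximate detector positions (illustrative values, not exact)
v1 = unit_vector(46.5, -119.4)   # ~LIGO Hanford
v2 = unit_vector(30.6, -90.8)    # ~LIGO Livingston

def delta_t(u):
    """Arrival-time difference for a plane wave from direction u."""
    return (R_EARTH / C) * np.dot(u, v1 - v2)

def likelihood(u, dt_measured, sigma):
    """Gaussian likelihood: the difference of two measured times,
    each with uncertainty sigma, has variance 2*sigma**2."""
    return np.exp(-(dt_measured - delta_t(u)) ** 2 / (4 * sigma ** 2))
```

Evaluating `likelihood` over a grid of directions u(θ,φ) reproduces the maps discussed below; the largest possible |Δt| here comes out to about 10 ms, the light travel time along the chord between the two sites.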
Using this function for some test values, we can try to understand how pointing works. If the time separation is 0, then we expect to see most likely values along the great circle between the two detectors.
I will also note that this is plotting position on the sky in Earth coordinates. Earth is not in an inertial reference frame: it rotates and moves in space, so actually mapping this against the stars in the sky gets very complicated and I’ll ignore that since it doesn’t affect what I want to show. The coordinates are also spherical coordinates, which are non-Euclidean, so shapes look strange if you’re not used to them. The curve above is actually the spherical version of a “straight” line. For a time separation of 8 ms, we see a thin ring that widens out as the resolution is degraded.
At 10 ms, we're near the maximum possible separation (the light travel time between the two detectors), and the ring has shrunk almost to just a blob.
Basically, what seems to be going on is:
- The general shape of the likelihood distribution is a ring of positions corresponding to the measured time separation
- As the time separation increases, the size of the ring on the sky decreases
- When the separation is too large (it would require the waves to travel slower than c), no position works well, so there is a sharp peak at the best (but still poor) position.
- The ring is characteristic of having two detectors. The center of the ring never changes (there are actually two centers depending on the sign of the separation) and its position is determined by the positions of the detectors
- A third detector should break the degeneracy and allow for reconstruction of the position as a single point on the sky
- Since the measured position looks like a blob, it seems the time separation is large enough that the timing uncertainty prevents us from seeing a ring-like shape.
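To illustrate the last two points, here is a rough sketch of how a third detector collapses the ring to (nearly) a single direction. The third site's coordinates roughly approximate Virgo's location, and the source direction and timing uncertainty are made up for illustration; note that timing alone still leaves a mirror-image degeneracy across the plane of the three detectors:

```python
import numpy as np

R, C = 6.371e6, 2.998e8   # Earth radius (m), wave speed (m/s)

def unit(lat, lon):
    lat, lon = np.radians(lat), np.radians(lon)
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

# Approximate site locations (illustrative): Hanford, Livingston, Virgo
dets = [unit(46.5, -119.4), unit(30.6, -90.8), unit(43.6, 10.5)]
pairs = [(0, 1), (0, 2), (1, 2)]
sigma = 1e-4   # assumed 0.1 ms timing uncertainty per detector

u_true = unit(20.0, 40.0)   # hypothetical source direction
dts = [(R / C) * np.dot(u_true, dets[i] - dets[j]) for i, j in pairs]

# Coarse grid scan: the product of the three pairwise likelihoods
# (sum of log-likelihoods) peaks at (nearly) a single direction
best, best_logl = None, -np.inf
for lat in np.arange(-89.0, 90.0, 1.0):
    for lon in np.arange(-180.0, 180.0, 1.0):
        u = unit(lat, lon)
        logl = sum(-((dt - (R / C) * np.dot(u, dets[i] - dets[j])) ** 2)
                   / (4 * sigma ** 2)
                   for dt, (i, j) in zip(dts, pairs))
        if logl > best_logl:
            best, best_logl = (lat, lon), logl

print(best)   # close to (20, 40), up to the mirror-sky degeneracy
```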
I finally found some time to read the LIGO paper. A couple things that I thought were interesting:
- The peak power radiated in gravitational waves was about 200 solar masses (of mass-energy) per second. The power didn't stay at that level for long, since only about 3 solar masses in total were radiated away.
- The rate of false positives as large as the signal seen is less than one per tens of thousands of years, so this is a signal that is enormously above any known backgrounds.
- LIGO also uses a template-matching routine, comparing the measured signal to a library of pre-calculated theoretical waveforms. This only gives approximate values for the physical parameters, so they then supplement it with a full fit.
- The next biggest event had a false positive rate of one every few years.
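To get a feel for the first of those numbers, here is a quick back-of-the-envelope conversion using only the figures quoted above (200 solar masses per second of mass-energy at peak, 3 solar masses radiated in total):

```python
M_SUN = 1.989e30   # kg, solar mass
C = 2.998e8        # m/s, speed of light

# Peak luminosity: ~200 solar masses of mass-energy per second, in watts
peak_power_W = 200 * M_SUN * C ** 2

# Total radiated energy: ~3 solar masses
total_energy_J = 3 * M_SUN * C ** 2

# If the source radiated at peak power the whole time, it could only
# last total / peak = 3/200 s = 15 ms, so the peak was indeed brief
print(f"peak power ~ {peak_power_W:.1e} W")
print(f"implied timescale ~ {total_energy_J / peak_power_W * 1e3:.0f} ms")
```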