T2K has a new result looking for short-baseline electron neutrino disappearance in its near detector. This kind of search looks for neutrinos oscillating into a non-interacting, or sterile, neutrino. There are various “anomalies” in certain oscillation results that can potentially be resolved by introducing sterile neutrinos, so this is a way to search for them or to exclude their existence.
MINOS has a new result on charged-current quasielastic (CCQE) scattering of muon neutrinos on iron using their near detector. They find some differences between their data and current models at low Q².
PRL published a couple of new papers from AMS today, including this one, which shows an updated plot of the positron fraction (e+/(e+ + e-)) as a function of energy. I don’t think these have been posted to the arXiv yet, which is unusual for our field. Unfortunately, PRL is behind a paywall, so it must be accessed through a subscribing institution’s network or a paid account; if you’re not at a university and aren’t an APS member, you might have trouble getting the paper. In conjunction with the papers being released, there was a seminar at CERN earlier today, but I didn’t find out about it until it was too late to try to call in remotely.
AMS is a large multipurpose detector based at the International Space Station that studies cosmic rays, which are high energy particles flying around in space. The search for dark matter is one of the main purposes of the experiment, but it can study many things about cosmic rays, such as their composition, energy spectra, directions, etc.
This paper shows the positron fraction up to 500 GeV, which is a bit higher in energy than previous measurements reached. The measurement is useful for searches for dark matter annihilating to electron-positron pairs: the expected signal is for the positron fraction to rise starting at some energy, peak, and then drop fairly sharply at an energy around the dark matter mass. Previous measurements from AMS, Fermi, PAMELA, and ATIC have all seen the ratio increase and even appear to level off. This new measurement shows that the leveling off continues, and the spectrum may even be starting to fall in the highest energy bin. However, the measurement is limited by both statistics and systematics at the highest energies, so the apparent decrease in the highest bin is not yet statistically significant. That may change with more events in the future and, hopefully, a better understanding of the detector and of astrophysical models for the non-dark-matter background.
MINERvA, a neutrino experiment using the NuMI beamline at Fermilab, has a new preprint out today on coherent charged pion production on carbon. They measure both charged pion species in neutrino and antineutrino beams, at energies up to 20 GeV, and find that agreement with the GENIE Monte Carlo generator is not very good.
ATLAS has yet another Higgs measurement on the arXiv, this time looking at the diphoton channel. This is the Higgs decay H→2γ.
Higgs couplings to other particles are related to those particles’ masses: couplings to fermions are proportional to the fermions’ masses, while couplings to the massive gauge bosons are proportional to the squares of their masses. The photon, however, is massless, so it does not couple directly to the Higgs. Then how does this channel even exist? The Higgs couples directly to particles that also couple to photons. In terms of Feynman diagrams, the Higgs can couple to two photons via a loop of some massive charged particle, like a heavy quark or a W boson. This leads to an indirect Higgs-photon coupling, but the diagram (representing the mathematical expression for the decay width) is more complicated, so the Higgs to two photon decay exists but is not particularly common. It is still a very useful channel for searches: the energies of photons can be measured very well, allowing an accurate reconstruction of the Higgs mass, and the two-photon channel is much cleaner than most. There aren’t many processes that produce two high-energy photons with large transverse momentum, so while the signal is small, the background is small as well, and a statistically strong measurement can be obtained from the relatively few events in this channel. Higgs production via gluon fusion is essentially this decay run in reverse, with gluons rather than photons coupling to the Higgs through a heavy-quark loop.
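The mass proportionalities can be made concrete. At tree level in the Standard Model, the Higgs-fermion (Yukawa) coupling is y_f = √2 m_f/v and the coupling to a massive vector boson is g = 2m_V²/v, where v ≈ 246 GeV is the Higgs field’s vacuum expectation value. A quick sketch (the masses are illustrative rounded values, not from this paper):

```python
import math

V_HIGGS = 246.22  # Higgs vacuum expectation value v, in GeV

def yukawa_coupling(m_fermion):
    """Tree-level Higgs-fermion (Yukawa) coupling: y_f = sqrt(2) m_f / v,
    proportional to the fermion's mass."""
    return math.sqrt(2) * m_fermion / V_HIGGS

def hvv_coupling(m_boson):
    """Tree-level Higgs coupling to a massive vector boson:
    g = 2 m_V^2 / v, proportional to the mass squared."""
    return 2 * m_boson**2 / V_HIGGS

# Illustrative (rounded) masses in GeV:
m_top, m_bottom, m_z = 172.8, 4.18, 91.19

print(yukawa_coupling(m_top))     # close to 1 for the top quark
print(yukawa_coupling(m_bottom))  # much smaller for the bottom quark
print(hvv_coupling(m_z))          # Z boson coupling, in GeV
print(hvv_coupling(0.0))          # massless photon: no direct coupling
```

Plugging in zero mass for the photon gives exactly zero coupling, which is why the two-photon decay has to proceed through a loop.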
As with the earlier paper on the four-lepton final state, this paper measures the total number of H→2γ events in the 7 and 8 TeV data and also tries to separate the sample into various Higgs production channels. Not surprisingly, no significant deviations from Standard Model predictions are found.
A new Higgs measurement from ATLAS is out on the arXiv now. The paper presents the result of a measurement looking for the process H→ZZ*→4ℓ, where Z* is a virtual Z boson, since the Higgs isn’t heavy enough for H→ZZ to be possible.
The four lepton channel is one of the best channels for Higgs precision measurements. While ZZ*→4ℓ represents only a small fraction of ZZ* events, it has a number of advantages over other channels. Detectors can measure electrons and muons much more accurately than most particles. Electrons are stable, while muons live long enough not to decay inside the detector. No energy is carried away by invisible neutral particles like neutrinos. By measuring the leptons, the invariant mass of each Z boson candidate can be reconstructed, and so can the full invariant mass of the Higgs, event by event. The channel is also cleaner than most (it has a good signal-to-background ratio), and the lepton reconstruction allows things like using the reconstructed Z mass from pairs of leptons to aid the selection. The paper also uses subsamples of the total data set to estimate the contributions from several different Higgs production mechanisms (like vector boson fusion and gluon fusion).
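The event-by-event reconstruction described above is plain four-vector arithmetic: the invariant mass of any group of particles is m = √((ΣE)² − |Σp|²). Here is a minimal Python sketch on a made-up, kinematically simplified event (each lepton pair is back-to-back with its parent at rest, so the numbers come out round; these momenta are illustrative, not from the ATLAS data):

```python
import math

def invariant_mass(particles):
    """Invariant mass of a set of four-vectors (E, px, py, pz), in GeV:
    m = sqrt((sum E)^2 - |sum p|^2)."""
    E, px, py, pz = (sum(p[i] for p in particles) for i in range(4))
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

# Toy event (GeV): an on-shell Z (m = 91.2) and an off-shell Z* (m = 33.8),
# each decaying to back-to-back massless leptons with the parent at rest.
# Real events are boosted; this is just the arithmetic.
lep1 = (45.6,  45.6,  0.0, 0.0)   # e+ from the Z
lep2 = (45.6, -45.6,  0.0, 0.0)   # e- from the Z
lep3 = (16.9,   0.0, 16.9, 0.0)   # mu+ from the Z*
lep4 = (16.9,  0.0, -16.9, 0.0)   # mu- from the Z*

print(invariant_mass([lep1, lep2]))              # Z candidate: 91.2
print(invariant_mass([lep3, lep4]))              # Z* candidate: 33.8
print(invariant_mass([lep1, lep2, lep3, lep4]))  # Higgs candidate: 125.0
```

The same function reconstructs both the Z candidates from lepton pairs (used in the selection) and the Higgs candidate from all four leptons.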
The PandaX group has released their first dark matter result, which can be found here. PandaX is a dual-phase liquid xenon detector, similar to XENON100 and LUX. The result itself is not particularly interesting, but PandaX is an interesting experiment because it is one of the first particle physics experiments to be hosted in China. The collaboration is mostly from China but includes a few people from institutions in the US.
In the latest entry in my Physics for Non-Physicists series, I will go over some of the evidence for dark matter from measurements of galactic clusters. This post is more of a historical overview of the subject than a review of the most recent literature.
I gave a very brief introduction to dark matter in my previous post on the thermal history of the universe. To recap, dark matter is a form of matter believed to exist that does not interact much with either itself or with the particles of the Standard Model.
As far back as the 1930s (and possibly earlier), the theoretical background and experimental methods of astrophysics had advanced to the point where things like the velocities and luminosities of far-away galaxies could be measured with a reasonable amount of accuracy.
A galactic cluster is a structure containing many galaxies, and their associated gas, in a gravitationally-bound system. In cases where the effects of general relativity can be ignored with little error (i.e. most cases), gravity follows a Kepler potential: the gravitational potential energy is proportional to the inverse of the distance, while the gravitational force is proportional to the inverse square of the distance. For a Kepler potential, it can be shown that the kinetic energy in the center-of-mass frame (removing any overall motion of the cluster), T, calculated by summing the kinetic energies of all objects in the cluster, has a particular relationship to the internal gravitational potential energy, U: 2T = -U. This is known as the virial theorem.
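The virial theorem holds as a time average for any gravitationally bound system, but a two-body circular orbit satisfies 2T = -U exactly at every instant, which makes it an easy numerical sanity check (a sketch in units where G = 1, with arbitrary illustrative masses):

```python
# Sanity check of the virial theorem, 2T = -U, for a circular two-body
# orbit, working in units where G = 1.
G = 1.0
m1, m2, r = 3.0, 1.0, 2.0            # arbitrary masses and separation

mu = m1 * m2 / (m1 + m2)             # reduced mass
v_rel = (G * (m1 + m2) / r) ** 0.5   # relative speed for a circular orbit
T = 0.5 * mu * v_rel**2              # total kinetic energy (center of mass)
U = -G * m1 * m2 / r                 # gravitational potential energy

print(2 * T, -U)                     # equal: the virial theorem holds
```

The same relation is what lets the cluster's total kinetic energy stand in for its (otherwise unmeasurable) gravitational potential energy.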
Still in the 1930s, it was generally assumed that stars and galaxies follow an approximately constant mass-to-luminosity ratio. That is, when we look at large enough objects, the amount of light has a known relation to the amount of mass, so by measuring the luminosity, the mass could be estimated. The gravitational potential energy between two objects is -G m1m2/r, where G is Newton’s gravitational constant, the m’s are the masses, and r is the distance between them. From dimensional analysis, the total potential energy of the cluster is

U = -x G M²/R
where M is the total mass, R is an approximate radius (size) of the cluster, and x is a constant of order 1 related to the shape of the cluster (spherical, ellipsoidal, etc.). The kinetic energy is the sum of (1/2)mv² (half of mass times center-of-mass velocity squared) over all the objects in the cluster. This can be estimated from a representative sample of galaxies in the cluster, since the sum can be rewritten as (1/2)MV², where again M is the total mass and V² is now the estimated mean squared velocity. The full velocity can’t be measured directly, because the distances are so incredibly large that motion across the sky cannot be detected. Instead, the velocity along the line of sight can be determined by measuring the light spectrum (the amount of light at different wavelengths). Certain spectral features occur at known wavelengths, and motion along the line of sight shifts them: redshift for motion away from us, blueshift for motion toward us. These are the terms for the Doppler effect applied to light; the sound version is something most people have noticed in everyday life, as the pitch (frequency/wavelength) of a fast-moving siren changes while it approaches and then recedes. Returning to the topic at hand, applying the virial theorem (2T = -U), we can see that the mass is given by

M = V²R/(xG)
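Combining the pieces above, the mass estimate is M = V²R/(xG). Here is a minimal Python sketch of the whole chain, from a Doppler-shifted spectral line to a virial mass; the wavelengths, velocity spread, and radius are illustrative Coma-like round numbers, not data from any particular measurement:

```python
# Virial mass estimate M = V^2 R / (x G), with illustrative cluster numbers.
C = 2.998e8        # speed of light (m/s)
G = 6.674e-11      # Newton's gravitational constant (m^3 kg^-1 s^-2)
M_SUN = 1.989e30   # solar mass (kg)
MPC = 3.086e22     # one megaparsec (m)

def los_velocity(lam_obs, lam_rest):
    """Line-of-sight velocity from the non-relativistic Doppler shift of a
    spectral line; positive means redshifted (moving away from us)."""
    return C * (lam_obs - lam_rest) / lam_rest

# Example: an H-alpha line (rest 656.28 nm) observed at 658.47 nm is
# receding at roughly 1000 km/s.
print(los_velocity(658.47, 656.28))

sigma_los = 1.0e6          # line-of-sight velocity spread (m/s), illustrative
V2 = 3.0 * sigma_los**2    # isotropy assumption: mean squared 3D velocity
R = 3.0 * MPC              # approximate cluster radius, illustrative
x = 1.0                    # shape factor of order 1

M = V2 * R / (x * G)       # virial mass estimate
print(M / M_SUN)           # around 2e15 solar masses
```

For Coma-like inputs this lands around 10¹⁵ solar masses, far above what a luminosity-based estimate of the same cluster would have suggested, which is exactly the discrepancy discussed next.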
In the early 1930s, an astronomer named Fritz Zwicky was studying the properties of the Coma Cluster (shown at the top of this post). Using the above results, he compared the mass as estimated from the virial theorem to the mass as estimated from the luminosity. The result was surprising: What I’ll call the “virial mass” was much larger than the “luminosity mass:” the galaxies in the cluster seemed to be moving much too fast. Zwicky’s 1937 paper in Ap.J. can be found here. An earlier paper was published in 1933, but it’s not in English so I cannot read it.
Zwicky notes that there are several ways to explain the discrepancy between the estimated masses. One interesting possibility is that on very large scales (the assumed mass-to-luminosity ratio was measured for nearby star systems), the gravitational mass is dominated by some form of matter that does not emit light (i.e. is “dark matter”). This is one of the earliest and most influential proposals for a theory of dark matter. It turns out that Zwicky actually overestimated the discrepancy by a significant amount. More modern values, based on a much stronger understanding of astrophysics, still show a discrepancy (and our evidence for dark matter has become much more convincing), but it is more modest than Zwicky’s estimate, which put the “virial mass” at around 100 times the “luminosity mass.”