Category Archives: Particle Physics

Physics for Non-Physicists: Cross Sections

I’ve decided to start adding explanations of physics topics every once in a while. I’ll make a page for these so they’re easy to find. The idea is that I can compile a series of explanations for non-physicists.

My first post is on cross sections. I chose this topic because it is important for understanding nearly all measurements in high energy and nuclear physics.

Cross Sections in Classical Physics

The cross section in classical physics is closely related to the informal, everyday notion of a cross section.

Consider a three-dimensional object, such as a sphere or a cylinder. If I rotate the object in some arbitrary way and look at it, I see that it covers a certain fraction of my field of vision. The effective size of that object as projected onto the plane perpendicular to my line of sight is its cross section. For a sphere, this is always just πr², where r is the radius of the sphere. For a cylinder, the size depends on how it’s been rotated. This kind of cross section is the informal definition.

The total cross section described above only gives a very rough description of the size of the object. A cross section can also be used to glean information about the shape of the object. In physics, this is the differential cross section.

Again imagine your field of vision as a plane with the object at some rotation with respect to this plane. Now shoot tiny particles uniformly from that plane (and perpendicular to it) toward the object. Mathematically, I will quantify this as the intensity, I, which is just the number of particles passing through the plane per unit area per unit time.

If the particles are too far from the center of the object (assuming it is finite in size) they will miss. If they hit the object, they will be deflected at some angle related to the orientation of the surface of the object at the intersection between the object and the path of the particle. Thus, the position at which I fire the particle and its deflection angle tell us about the shape of the object’s surface. This is described mathematically by the differential cross section:

\frac{d\sigma(\theta,\phi)}{d\Omega} = \frac{\text{Number of particles deflected into a region } d\Omega \text{ around } (\theta,\phi)}{I\,d\Omega}.

The d in dΩ indicates that we look at an infinitesimally small region of scattering direction space, although we can get a good estimate as long as the region is small enough for whatever we’re trying to measure. The total cross section is then the sum over all scattering directions,

\sigma = \int d\Omega \frac{d\sigma}{d\Omega}.
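
To make the definitions above concrete, here is a minimal Monte Carlo sketch in Python (the geometry and sample size are arbitrary choices for illustration) that estimates the total cross section of a hard sphere by firing particles uniformly from a plane and counting hits:

    import random

    def estimate_sphere_cross_section(radius=1.0, n_particles=1_000_000):
        """Fire particles perpendicular to a square plane of side 4*radius
        centered on the sphere and count how many would hit it."""
        side = 4.0 * radius
        hits = 0
        for _ in range(n_particles):
            x = random.uniform(-side / 2, side / 2)
            y = random.uniform(-side / 2, side / 2)
            if x * x + y * y <= radius * radius:  # path intersects the sphere
                hits += 1
        plane_area = side * side
        return plane_area * hits / n_particles  # effective area seen by the beam

    print(estimate_sphere_cross_section())  # ~ pi * r^2 = 3.14159...

Binning the hits by deflection angle instead of just counting them would give the differential cross section.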

Now add in some long-range force like gravity. Gravity pulls particles toward the object, so their paths are deflected. Some particles will get pulled into the object, so clearly the effective cross section has increased. Particles with more momentum will be harder to deflect, so this effective cross section will also depend on the properties of the incoming particles; a generic cross section is a function of properties such as particle mass, particle momentum, magnetic moment, etc.

What if I leave in gravity but reduce the physical size of my object to nothing? The incoming particles can no longer fall into the object, but they will still be deflected. But remember that the differential cross section is defined only by the properties of the scattered particles (and maybe the orientation of the object), not by anything related to the physical size of the object. In fact, we could even model an object with a physical size as simply being a point object that happens to have an infinitely strong repulsive force at what would normally be called the object’s surface. So, we can retain our original definition of the differential cross section and then define the total cross section as the integral of the differential cross section (hence the term “differential”). This gives us a definition of cross section based on the object’s interactions with particles. This definition is also a convenient one for comparing to experimental data.
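
A classic worked example of this point (standard textbook material, not something specific to this post) is Rutherford scattering, where particles of kinetic energy E scatter off a point charge through the inverse-square Coulomb force. In Gaussian units,

\frac{d\sigma}{d\Omega} = \left(\frac{Z_1 Z_2 e^2}{4E}\right)^2 \frac{1}{\sin^4(\theta/2)},

and integrating this over all angles diverges: a point object with a pure long-range force has a perfectly well-defined differential cross section but an infinite total cross section.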

Cross Sections in Quantum Physics

Quantum mechanics and quantum field theory (QFT – basically relativistic quantum mechanics) have different definitions of the cross section, although they should reduce to the classical one in the classical limit of the theories.

In quantum theories, the scattering process described by a cross section occurs in a probabilistic manner. Our classical definition of the differential cross section is basically probabilistic too: ignoring the overall normalization giving the total cross section, it describes the probability that a scattered particle will scatter in a particular way. However, the classical definition does not describe a probabilistic process. If we shoot a particle at our classical object, we can calculate exactly what it will do. In quantum physics, we can only calculate the probability that a given type of scattering will happen. Additionally, we can scatter particles off of a field localized in some region of space or scatter particles off of each other; objects as we know them don’t really exist at the scales of quantum theories.

The cross section in quantum theories basically just tells us the probability that a given process (which outgoing particles come out of a scattering of incoming particles with known properties) will happen. The differential cross section tells us the details (what directions, orientations, momenta, etc. the particles in the final state end up with). More formally,

R = \mathcal{L}\sigma.

The rate of interactions, R, for the cross section σ that we want to measure is the product of the cross section and the luminosity, ℒ – the analog of the intensity used in QFT calculations. (The quantum mechanics definition is based on waves rather than particles, so I won’t discuss it here.) The luminosity just quantifies the strength of the “beam” (or beams for a collider). This definition tells us how to get the cross section from a measurement: we know the properties of our beams (or beam and target), and we measure an event rate. The differential cross section just requires us to determine the rate of events with the given kinematics instead of the total rate.
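
As a toy illustration of extracting a cross section from a measurement (all numbers below are invented for the example, not taken from any real experiment):

    # Toy example: extract a cross section from a measured event rate.
    measured_rate = 0.5       # events per second (invented)
    luminosity = 1.0e34       # cm^-2 s^-1, typical order for an LHC-class collider
    sigma_cm2 = measured_rate / luminosity  # sigma = R / L
    sigma_pb = sigma_cm2 / 1.0e-36          # 1 picobarn = 1e-36 cm^2
    print(f"sigma = {sigma_pb:.1f} pb")     # 50.0 pb

In practice the measured rate also has to be corrected for detector efficiency and backgrounds, but the basic division is the same.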

By defining everything through probabilities, it is also easy to treat inelastic scattering, where the outgoing particle(s) (final state) are different from the incoming particle(s) (initial state). Even if there are dozens of particles in the final state, the cross section is really just the probability that that final state will occur (with an extra normalization factor making it an area).

Some Concluding Thoughts

Cross sections are a fundamental piece of much of modern physics, particularly high energy (particle) and nuclear physics. Event rates or numbers of events are very basic kinds of measurements, and these can be turned into cross sections by relating them to the expected or measured luminosity.

In some cases just measuring the existence (or nonexistence) of a process is interesting. It could show us the existence of a new particle or a new interaction. Quantum field theories such as the Standard Model predict cross sections (total and differential) for many kinds of interactions and disallow others. Comparing measured cross sections to theoretical models lets us test the validity of those models.

There are many examples of cross sections in measurements that have been in the news over the past few years.

In dark matter direct detection, experiments seek to place upper bounds on the possible interaction cross sections between dark matter particles and Standard Model particles. This requires theoretical modeling of the dark matter distribution near Earth to determine the correct “luminosity” since the “beam” is just particles flying around in space and through detectors on Earth.
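
As a rough sketch of what that “luminosity” input looks like (the numbers are commonly quoted fiducial values and the WIMP mass is just a benchmark, so treat this as order-of-magnitude only):

    # Rough estimate of the local WIMP flux, the "beam" for direct detection.
    rho = 0.3        # GeV / cm^3, commonly assumed local dark matter density
    m_wimp = 100.0   # GeV, benchmark WIMP mass
    v = 220.0e5      # cm / s, typical galactic velocity (~220 km/s)
    number_density = rho / m_wimp   # WIMPs per cm^3
    flux = number_density * v       # WIMPs per cm^2 per second
    print(f"flux ~ {flux:.0e} per cm^2 per s")  # ~ 7e+04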

At the LHC, physicists are trying to determine whether or not the particle believed to be the Higgs boson really is a Standard Model Higgs by studying its interactions. The cross sections (often ratios of cross sections rather than absolute cross sections to reduce uncertainties) for many initial and final states are being compared to Standard Model predictions (and even non-Standard Model ones to test other models) to see if they are consistent.

New LHCb Results Available

Over the past couple days, the LHCb collaboration has released a number of papers. LHCb is probably the least well known of the four large experiments at the LHC. ATLAS and CMS are general purpose detectors meant to search for a wide variety of physical processes. ALICE focuses on collisions of heavy ions. Unlike these detectors, which try to cover as much of the total 4π solid angle as possible and typically look for events with a large transverse (perpendicular to the beam) momentum, LHCb is a forward spectrometer focusing on events near the beam direction. One of the principal goals of LHCb is to study the properties of B mesons, which are mesons containing a b-quark.

The LHCb papers include one measuring CP violation in a particular decay of the B0s meson, one measuring the lifetime of the B0s using a particular decay channel, and a third measuring the production of two charmonium (charm-anticharm meson) states in proton-proton collisions.

The most interesting paper to non-specialists is the fourth paper, which presents evidence of direct CP violation in the decay of the B+ to a K+ and a proton-antiproton pair. Direct CP violation is basically just looking for differences between particles and their antiparticles. In this case, it was found that in some regions of the kinematic phase space of the final state, decays of the B+ and B− mesons seem to occur at different rates. This can occur due to interference between different diagrams leading to the same final state. The result is not significant enough to claim a discovery of direct CP violation in B decays. If the result holds up, it would represent the first measurement of direct CP violation in B decays using a decay channel involving baryons.
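
The quantity being measured here is the usual direct CP asymmetry, which compares the rate of the decay to that of its CP conjugate; schematically (sign conventions vary between papers),

A_{CP} = \frac{\Gamma(B^- \to K^- p\bar{p}) - \Gamma(B^+ \to K^+ p\bar{p})}{\Gamma(B^- \to K^- p\bar{p}) + \Gamma(B^+ \to K^+ p\bar{p})},

so a value inconsistent with zero means the particle and antiparticle decay rates differ.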

New T2K Cross Section Paper

Speaking of neutrinos, there is also a new measurement of the charged current inclusive cross section on hydrocarbon and iron using the T2K INGRID on-axis near detector.

Since I work on T2K (though I didn’t have any involvement on this paper), I won’t really give much commentary. Just a few things on what was actually measured:

  • The J-PARC neutrino beam, as with most accelerator-based beams, is largely muon neutrinos (νμ). This is mainly because pions and kaons, the most common hadrons produced in hadronic showers, decay mostly to muons (and muon neutrinos) rather than electrons. They’re also not heavy enough to decay to taus. So, the measurement is a measurement of a νμ cross section.
  • The on-axis detector sees a wider spectrum of neutrino energies than the off-axis detectors.
  • A bit on terminology: “charged current” means that a muon was found in the event. “Inclusive” means that the analysis only cares that a muon was found; whether or not other particles are found has no bearing on the selection.

New OPERA Tau Neutrino Event

OPERA has a new paper on the arXiv. They report finding a fourth candidate ντ event.

The tau neutrino (ντ) is probably the rarest particle in the Standard Model. Well, that technically isn’t true: tau neutrinos should be extremely common, but they are the hardest to identify, so very few candidate events have ever been found. In years of running, OPERA has only four candidate events.

There are several things that make the ντ hard to find:

  1. Neutrinos don’t interact very much, so huge numbers of them are required to have measurable numbers of events. The mean free path for neutrino interactions is often quoted as being more than a light year in lead (a back-of-the-envelope version of this estimate is sketched after this list).
  2. Neutrinos can interact via neutral current (the neutrino stays a neutrino) and charged current (the neutrino turns into a charged lepton). Neutral current events cannot be used to distinguish the neutrino type on an event-by-event basis.
  3. The τ (tau) is much heavier than the other charged leptons. It has roughly 17 times the mass of the muon and roughly 3500 times the mass of the electron. Thus, much more energy is needed for charged current interactions to occur. A high energy proton synchrotron is needed to create neutrinos with energies high enough to produce taus.
  4. Accelerator-based neutrino experiments primarily produce muon neutrinos from decays of mesons produced in hadronic showers. A tau neutrino experiment must then look for ντ appearance, in which the muon neutrinos (and any electron neutrinos) oscillate into tau neutrinos. This requires the detector to be at an appropriate distance from the target to maximize the number of tau neutrinos passing through the detector.
  5. Even if taus are produced in charged current events, they quickly decay to other particles. The neutrino detector must be able to identify tau candidates, which will typically require a high enough energy and fine enough spatial resolution to distinguish the primary interaction vertex from the secondary vertex created when the tau decays.
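
On point 1, here is a back-of-the-envelope version of the “light year of lead” estimate; the cross section is a rough order-of-magnitude value for an MeV-scale neutrino, put in by hand for illustration:

    # Back-of-the-envelope mean free path of a low energy neutrino in lead.
    N_A = 6.022e23              # Avogadro's number, per mol
    rho = 11.35                 # density of lead, g / cm^3
    sigma = 1.0e-44             # cm^2 per nucleon, rough MeV-scale value
    n_nucleons = rho * N_A      # nucleons per cm^3 (roughly 1 g/mol per nucleon)
    mfp_cm = 1.0 / (n_nucleons * sigma)
    light_year_cm = 9.46e17
    print(f"mean free path ~ {mfp_cm / light_year_cm:.0f} light years")  # ~ 15

At the much higher energies of accelerator beams the cross section grows by many orders of magnitude, which is part of why accelerator experiments are feasible at all.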

This result further strengthens OPERA’s case for a definitive confirmation of νμ to ντ oscillations.

Recent MALBEK Result on the ArXiv

About a week ago, the Majorana collaboration released a new MALBEK conference proceeding presenting a preliminary WIMP dark matter limit curve. MALBEK uses p-type point contact germanium detectors to search for new physics. In particular, the MALBEK detector is basically identical to the germanium detectors used by the CoGeNT collaboration. A few years ago, CoGeNT famously released a result that appeared to display hints of the annually modulating event rate expected from a galactic WIMP dark matter halo. With the same type of detector in a different location, MALBEK could potentially hope to confirm or reject the existence of the proposed CoGeNT signal.

There’s nothing groundbreaking here, but MALBEK has obtained results that reject the CoGeNT result with certain analysis choices but not with others. They use a wavelet-based pulse shape discrimination variable to remove surface events but don’t have a very good way yet to quantify the efficiency for bulk nuclear recoils or the contamination from background.

Juan Collar (from CoGeNT) has already responded. He criticizes the rather opaque nature of the MALBEK pulse shape analysis. MALBEK resorted to these methods because it was found that a simple variable such as the pulse rise time does not separate surface and bulk events at energies below 2 keV. CoGeNT is able to separate these events due to a much lower amount of electronics noise.

At any rate, the best results in the low mass region from CDMS and LUX already seem to rule out the CoGeNT dark matter hypothesis by a fairly wide margin. CDMS even uses germanium, so one can’t argue that maybe the Ge cross section is enhanced compared to the standard assumptions for the Xe cross section.

New Video on T2K Long Baseline Neutrino Oscillation Experiment

In case you were wondering what I work on, KEK (the Japanese high energy physics lab) released a video on the T2K experiment last week. I think it gives laypeople a pretty good general explanation of the experiment and the physics it’s studying.

Sadly, I don’t think I appear anywhere in the video (maybe in one of the group pictures), but I know some of the people who do.

The short explanation is that there are 3 flavors (types) of neutrinos (electron, muon, and tau neutrinos), which have very small but non-zero mass. As they travel, the neutrinos can change from one flavor to another. We create a beam composed mostly of muon neutrinos (or antineutrinos) at the J-PARC facility in Tokai, Ibaraki and measure neutrino interactions a few hundred meters from the target and again a few hundred kilometers away at the Super-Kamiokande detector in the Kamioka mine near Toyama. This lets us study how the neutrinos change flavor. This is generally known as neutrino oscillations and is one of several active neutrino research topics in high energy and nuclear physics.
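
For those who want one equation, the simplest two-flavor approximation (a standard textbook formula, not specific to T2K) gives the probability that a muon neutrino of energy E (in GeV) is still a muon neutrino after traveling a distance L (in km) as

P(\nu_\mu \to \nu_\mu) = 1 - \sin^2(2\theta)\,\sin^2\!\left(\frac{1.27\,\Delta m^2\,L}{E}\right),

where θ is a mixing angle and Δm² is a difference of squared neutrino masses in eV². The dependence on L/E is why the far detector sits at a specific distance from the source.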

DoE and NSF Announce Support for Next-Gen Dark Matter Searches

Yesterday, the Department of Energy and National Science Foundation announced the major dark matter searches that they will be supporting for the coming years. The experiments to be supported are LZ (LUX-ZEPLIN), SuperCDMS (Super Cryogenic Dark Matter Search), and ADMX (Axion Dark Matter eXperiment). Additional funding will be available for smaller R&D efforts.

My take on this is that the funding agencies have gone with the most conservative approach, supporting proven technologies and experiments based in North America.

LZ is a planned multiton dual-phase xenon TPC. It’s basically a scaled-up version of detectors like LUX and XENON-100, which have been getting some of the best WIMP search results in recent years. Currently, LUX is the most sensitive direct detection experiment for spin-independent interactions across a wide range of WIMP masses. These experiments use liquid xenon as their target material and pull electrons left from ionization into a gaseous region, where a high electric field causes electroluminescence that can be measured by photodetectors. The prompt scintillation from the recoil can also be measured by those photodetectors, giving two energy channels to use for position and energy reconstruction and particle ID. LZ will be a continuation of the noble gas TPC program, and is expected to be constructed at the Homestake Mine in South Dakota.
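
As a cartoon of how the two channels feed into particle ID (the variable is the standard ionization-to-scintillation ratio, but the cut value below is invented for illustration, not any experiment’s actual calibration):

    import math

    def is_nuclear_recoil_like(s1, s2, log_ratio_cut=2.0):
        """Toy particle ID for a dual-phase xenon TPC.

        Nuclear recoils (WIMP candidates) produce less ionization (S2)
        relative to scintillation (S1) than electron recoils (most
        backgrounds), so low S2/S1 events are candidate signal events.
        The cut value here is made up for illustration.
        """
        return math.log10(s2 / s1) < log_ratio_cut

    print(is_nuclear_recoil_like(s1=20.0, s2=1000.0))  # log10(50) ~ 1.7 -> True
    print(is_nuclear_recoil_like(s1=20.0, s2=5000.0))  # log10(250) ~ 2.4 -> False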

SuperCDMS is a scaled-up version of CDMS, an experiment using cryogenic germanium detectors that are sensitive to both ionization and thermal excitation. Germanium detectors have less mass than the TPCs but potentially have a much lower energy threshold, allowing for better sensitivity to low mass WIMPs. SuperCDMS is planned to be constructed at SNOLAB in Ontario, which has a long history of operating large experiments.

I’m not entirely sure what these choices mean for other experiments such as XENON-1ton, COUPP, DarkSide, and DEAP/CLEAN. These experiments will not be getting US funding, but many of them have significant support from other countries. They’ll have to get enough non-US support to keep going. If US collaborators are unable to secure funding, it’s quite likely that some of these experiments will end up merging with others or shutting down completely due to a lack of personnel.

DoE and NSF will continue to support ADMX, which searches for axion dark matter and is basically the only player in the axion DM field right now. This is not surprising, as it’s an easy way for the US to host a world-leading experiment. Finally, while there will continue to be funding for R&D efforts, the specific groups and projects have not been announced, so it will be interesting to see what technologies are developed over the next few years.