A high-precision, and very important, experiment (christened E989) to measure a magnetic property of the fundamental particle called the muon got under way in February at Fermilab, the high-energy particle accelerator laboratory in Illinois, United States (Figure 1). The importance of this experiment arises from the fact that the present measured value of the muon’s magnetic strength, or its “magnetic moment”, which determines its behaviour in a magnetic field, is significantly higher than the theoretical prediction of the Standard Model (SM) of particle physics, the highly successful framework with which scientists by and large understand the universe today.
The currently accepted best measurement is due to an experiment (E821) carried out at the turn of the century at the Brookhaven National Laboratory (BNL) in New York, U.S., to the precision level possible then. This was already an improvement by a factor of 14 over the 1970s’ measurement of the quantity at CERN (the European Organization for Nuclear Research). The experiment achieved 540 ppb (parts per billion) accuracy in its measurement, while the accuracy achieved in the SM theoretical calculations was about 420 ppb. The BNL experiments were performed between 1997 and 2001, and the final corrected results were published during 2004-06, according to which the experimental value was higher than the SM theoretical prediction by about 2.5 ppm (parts per million). In statistical terminology, this is equivalent to a “3.5 sigma” discrepancy, which, in lay language, implies that there was only about a one in 4,300 chance that the difference was due to a statistical fluctuation or a fluke.
Physicists perceive this variance between theory and experiment to be a pointer to new physics—involving particles as yet not seen—that lies beyond the SM ( Frontline , May 25, 2001). It must be emphasised, however, that, in terms of statistical significance, the discrepancy is not yet enough for physicists to regard it as “proof” of the existence of new physics but only as strong evidence. It will count as proof only when the discrepancy reaches a “5 sigma” level—equivalent to a one in 3.5 million chance of it being a random fluctuation—or more, because in particle physics many discrepancies between theory and experiment at around the 3 sigma level have simply disappeared with improved statistics and more accurate measurements.
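The sigma-to-odds conversions used in such statements follow from the Gaussian tail probability. A minimal sketch in Python, assuming the one-sided convention standard in particle physics (the function name is illustrative, not from any particular library):

```python
import math

def sigma_to_odds(n_sigma):
    """One-sided Gaussian tail probability of an n-sigma excess,
    returned as 'one in N' odds: N = 1 / P(x > n_sigma)."""
    p = 0.5 * math.erfc(n_sigma / math.sqrt(2))
    return 1.0 / p

print(f"3.5 sigma: about one in {sigma_to_odds(3.5):,.0f}")
print(f"5.0 sigma: about one in {sigma_to_odds(5.0):,.0f}")
```

Running this gives roughly one in 4,300 for 3.5 sigma and roughly one in 3.5 million for 5 sigma, matching the usual particle physics shorthand.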
So, until Fermilab produces conclusive proof, the muon magnetic moment data will remain consistent with the SM even though the departure is significantly large (Figure 2). The BNL experiment was essentially statistics limited. Using 21 times more data and measurements four times more precise than the BNL experiment’s (140 ppb accuracy compared with 540 ppb), the new experiment is expected to either confirm or negate the BNL finding at the 5 sigma level or better. The first set of improved data is expected in February 2019.
The Fermilab experiment actually uses the same 14.2-metre-diameter, 700-tonne superconducting ring magnet that was used in the BNL experiment. The muons in the experiment will have roughly the same energy of 3.1 gigaelectronvolts (GeV), and the magnetic field will have the same value of 1.45 tesla (about 30,000 times the earth’s magnetic field), but the goal is to obtain much better results than the BNL experiment did.
Actually, the high-sensitivity upgrade that was needed to achieve the stated goal of the Fermilab experiment was, for some technical reason, not possible at the BNL itself. So, this mammoth ringed superconducting coil (the magnet with its central iron yoke removed), whose circular shape has to be maintained to within 0.7 centimetres and flatness to within 0.3 cm, was lugged from Brookhaven to Fermilab on a barge and by road (Figure 3). This journey happened five years ago between June 23 and July 26, covering about 5,500 kilometres over sea and land.
SM calculations by three research groups
Equally important, if not more so, is the development in June 2018 of highly refined SM calculations of the muon’s magnetic moment by three different research groups (one published and the other two accepted for publication). These calculations minimise the uncertainties arising from a class of processes that contribute to the muon’s magnetic moment and show that the discrepancy between the measured and predicted values persists.
Although the SM is the most successful theory of fundamental particles and forces to date, it is generally seen to be imperfect because of phenomena that it cannot explain and experimental data that are at odds with its predictions at statistically significant levels. The former include describing gravity in its framework, accommodating massive neutrinos, identifying the constituents of dark matter and dark energy, and explaining the observed matter-antimatter asymmetry in the universe. The latter include the proton radius puzzle ( Frontline , August 13, 2010), an observed excess of certain decays of particles called B mesons over theoretical predictions and, most importantly, the measured muon magnetic moment, a puzzle that has remained with us for nearly two decades now.
In fact, the muon magnetic moment anomaly, as the problem is termed, has emerged as a key testing ground for identifying the correct theoretical framework for new physics, one that will be consistent with the large body of existing data that conform to the SM and yet addresses its shortcomings and will be able to make verifiable non-SM predictions as one moves up the energy scale in future experiments. For example, supersymmetry (SUSY), which posits that there are additional symmetries in the theory beyond what the SM already has, remains a key contender even though the highest energy runs (up to 13 teraelectronvolts) of the Large Hadron Collider (LHC) at CERN—where evidence for the new particles that SUSY requires was expected to be seen—have not revealed any so far. SUSY proponents hope that these may show up with the next high-luminosity upgrade of the LHC, which is currently in the works at CERN.
Amidst these developments, there was a bombshell of sorts just before the Fermilab experiment was to start: the posting of three papers by Japanese researchers Takahiro Morishima and Hirohiko Shimizu of Nagoya University and Toshifumi Futamase of Kyoto University on arXiv.org, the online repository for research papers in the prepublication stage (e-preprints). In these papers, the authors claimed that if general relativistic effects arising from the earth’s gravitational field were taken into account by incorporating the local space-time curvature in the calculations, the coupling of the magnetic moment with the electromagnetic field became gravity-dependent. This gravitationally induced anomalous contribution, they said, exactly cancelled the observed discrepancy between SM predictions and experiment.
This would have been bad news for people who believe that there is new physics out there waiting to be revealed and that muon magnetic moment experiments would lead them to it. And what end would the new Fermilab experiment then serve? But before the claim could create a flutter in the physics community, it was refuted by Matt Visser of Victoria University of Wellington, New Zealand, and Hrvoje Nikolic of the Rudjer Boskovic Institute, Zagreb, Croatia, who identified a basic flaw in the Japanese group’s arguments, which has put the matter to rest.
In the SM framework, the universe is made up of two kinds of fundamental particles: leptons and quarks, with six leptons and six quarks organised in three families of two particles each. The theory also includes three fundamental forces of interaction among the particles whose effects are observable at currently attainable energy scales (up to hundreds of GeVs). These are the familiar electromagnetism, the weak nuclear force (which causes radioactivity) and the strong nuclear force (which holds the nucleus together). The familiar electron belongs to the category of leptons. The other two charged leptons, the muon and the tau, are the electron’s heavier cousins—the former about 200 times and the latter about 3,500 times more massive—but otherwise behave identically. Quarks are the fundamental constituents of particles that are collectively called hadrons, which include the familiar neutron and the proton that make up the atomic nucleus.
The three forces are described in the SM through certain mediating or carrier particles. When particles exchange these carrier particles between them, they experience the corresponding forces. The electromagnetic force arises owing to the exchange of the massless photon, the weak nuclear force due to the exchange of a triplet of massive particles called W+, W- and Z, and the strong nuclear force is caused by an octet of massless particles called gluons.
All elementary particles have an intrinsic quantum mechanical attribute called “spin”. This property has no analogue in classical physics, but each particle can be imagined to be spinning about its axis like a top. This intrinsic spin can take only discrete values in multiples of half (in some units). Thus the spin value or the spin angular momentum can be 0, ½, 1, and so on. The value of spin for leptons and quarks is ½, and they are called fermions. The force mediators, on the other hand, have an integral spin of 1, and they are called bosons. Supersymmetry hypothesises the existence of a higher (mathematical) symmetry between the spin-½ particles (the fermions) and spin-1 particles (the bosons) and predicts that every particle in the SM has a heavier supersymmetric partner, but in the other spin category.
Every charged particle with non-zero spin behaves like a miniature dipole bar magnet, with its (north-south) axis aligned with the spin axis. Its magnetic moment is a measure of the strength with which it couples to a magnetic field. A convenient parameter that physicists use to study the magnetic properties of particles is the dimensionless g-factor, essentially the ratio of the particle’s magnetic moment to its spin angular momentum, expressed in the natural units set by the particle’s charge and mass.
At the simplest quantum theoretical level (à la the Dirac equation), the value of the g-factor for point-like spin-½ particles, such as the electron and the muon, is exactly 2. In the idealised situation of such a structureless point particle, therefore, g-2 = 0. But the real-world situation is not ideal, and there are anomalous contributions, arising from higher-level relativistic quantum effects called “radiative corrections”, that cause the g-value to deviate from 2, that is, g-2 ≠ 0. Half of this deviation, or (g-2)/2, is called the anomalous magnetic moment, a, of a given particle.
At the level of first approximation, say up to parts-per-thousand accuracy, the anomalous magnetic moment of the proton, a(p), is about 0.18 compared with about 0.001 for both a(e) of the electron and a(mu) of the muon. The large anomaly for the proton is due to its substructure of quarks and gluons, whereas both the electron and the muon are almost point-like, though not quite. More precise calculations that include quantum corrections to the g-factor at higher levels of approximation make the values of a(e) and a(mu) differ significantly in the higher order decimal places. This is because the contributions from heavy virtual particles scale as the square of the lepton mass, which means that the roughly 200 times heavier muon is about 40,000 times more sensitive to such quantum corrections than the electron. The electron’s anomalous magnetic moment has been measured to sub-ppb precision, and the agreement with the SM prediction is excellent.
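The mass-squared scaling quoted above can be checked against the actual lepton masses. A minimal sketch, using standard rounded mass values in MeV (the round numbers in the text, 200 and 40,000, are approximations of these ratios):

```python
# Lepton masses in MeV/c^2 (standard rounded values)
M_E, M_MU, M_TAU = 0.511, 105.66, 1776.9

# Sensitivity of the anomalous magnetic moment to heavy virtual
# particles scales as the square of the lepton mass.
muon_gain = (M_MU / M_E) ** 2      # muon vs electron sensitivity
tau_vs_muon = (M_TAU / M_MU) ** 2  # further gain for the tau over the muon

print(f"muon vs electron: {muon_gain:,.0f}x")   # ~43,000, rounded to 40,000 in the text
print(f"tau vs muon:      {tau_vs_muon:,.0f}x") # ~280, roughly the 300-fold quoted later
```

The exact ratio (muon/electron mass ≈ 206.8) gives a factor of about 43,000 rather than 40,000, but the order of magnitude is what matters for the argument.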
By measuring the departure of the g-factor from 2 for the muon, the earlier BNL experiment and the new Fermilab experiment essentially measure the amount of anomalous contributions to the muon magnetic moment, which is why they are called “g-2 experiments”. The basic principle of these g-2 experiments is the following. As school physics tells us, when a charged particle like an electron or a muon moves in a uniform magnetic field that is perpendicular to its direction of motion, it follows a well-defined circular orbit. In both the BNL and Fermilab g-2 experiments, a 3.1 GeV polarised beam of positively charged muons, with their spins aligned along the direction of motion, is injected into the circular 14.2 m superconducting magnet with a uniform vertical magnetic field. As the muons move in their circular orbits in the horizontal plane, they are strictly confined and stored in a doughnut-shaped ring using an applied electric field.
Now, since muons have a quantum mechanical spin, which endows them with internal magnetism, the magnetic field exerts a torque on them to make their spins align along the direction of the field, just as a compass needle tends to align itself along the direction of the earth’s magnetic field. If the muons carried a magnetic moment but no angular momentum, this is what would happen. But the muons are spinning and, therefore, have an associated spin angular momentum that prevents this from happening. Instead, the muon’s spin wobbles, or precesses, about the magnetic field axis, just as the angular momentum of a spinning top—whose spin axis is not exactly vertical—prevents it from toppling and makes it precess instead. The exact rate of precession of the muon spin can be calculated.
If the muon were a strictly point-like relativistic particle with g = 2, its spin precession frequency would exactly match its orbital frequency in the magnetic field. But g ≠ 2, and this causes the spin to precess a little faster than the muon circulates. Now, positively charged muons radioactively decay into a positron and two neutrinos. The mismatch between the orbital frequency and the precession frequency, which is a direct measure of g-2, is determined as follows. A measurement of the emitted positron energy gives information about the instantaneous direction of the muon spin because the positrons fly off preferentially along the muon spin direction at the instant of decay. A detector system records the time and energy of the detected positrons. A plot of the number of decay events versus time looks like any other exponential radioactive decay curve except that here it has a wiggle superimposed on it because of the spin wobbling. This spin wobble frequency is measured with great accuracy to yield a high-precision value of g-2 and hence of the anomalous magnetic moment.
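The size of this frequency mismatch can be estimated from the numbers given in the article. A back-of-the-envelope sketch, assuming the standard g-2 relation for the anomalous precession rate, omega_a = a × eB/m (which, at the muon energy used, is independent of relativistic time dilation), and the known value a(mu) ≈ 0.00116592:

```python
import math

# Physical constants (SI units)
E_CHARGE = 1.602176634e-19   # elementary charge, C
M_MU     = 1.883531627e-28   # muon mass, kg
A_MU     = 0.00116592        # muon anomalous magnetic moment, (g-2)/2
B_FIELD  = 1.45              # storage-ring magnetic field, tesla

# Anomalous precession: the rate at which the spin direction pulls
# ahead of the momentum direction as the muon circulates.
omega_a = A_MU * E_CHARGE * B_FIELD / M_MU   # angular frequency, rad/s
f_a = omega_a / (2 * math.pi)                # frequency, Hz

print(f"anomalous precession frequency: {f_a / 1e3:.0f} kHz")
```

This gives a wobble frequency of roughly 230 kHz, i.e. the spin gains one full turn on the momentum direction every few microseconds, which is the quantity extracted from the wiggles in the decay curve.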
The non-zero value of g-2 essentially arises from the interaction of a given particle with the cloud of “virtual particles” enveloping it, which emerge fleetingly from the vacuum due to quantum effects. The Heisenberg uncertainty principle, which characterises all quantum phenomena, enables this to happen. For example, it allows charged particles to constantly emit and reabsorb photons, resulting in a fluctuating “virtual” electromagnetic field associated with these particles. At a higher order of this virtual process, the emitted photon can transmute into a virtual electron-positron (or quark-antiquark) pair before it is reabsorbed. The virtual pair recombines back into a photon within an ever-so-fleeting instant, and the photon is then reabsorbed.
One can imagine such virtual processes into higher orders. A higher order, therefore, means virtual processes involving loops of particles, and each higher order can be thought of as addition of loops of virtual particles (Figure 4 shows diagrammatic representations—called Feynman diagrams—of some of the contributions to g from higher order virtual quantum processes).
The uncertainty principle allows for the apparent violation of energy conservation in such virtual transmutation processes that characterise quantum loops. So, what an external magnetic field sees when an electron or a muon passes through it is the bare electromagnetic field arising from the intrinsic electric charge and spin of the particle, together with the fluctuating field due to the combined effects of these virtual quantum processes. These virtual processes modify the zeroth-order values of particle properties, such as the magnetic moment, and hence the particle’s behaviour in a magnetic field.
Now, if there are particles that have not yet been seen and are thus not described by the SM, such as supersymmetric particles and other exotic hypothetical particles, they too will be involved in these virtual quantum processes even if they are too heavy to be produced at currently accessible accelerator energies. They would flicker into existence and disappear thanks to the Heisenberg principle, but just long enough to contribute to the muon’s magnetic moment. Conservation of energy in the real world, which prevents such particles from being observed, does not apply to these virtual processes. Because of the greater muon mass, its g-factor will be about 40,000 times more sensitive than the electron’s to these unknown additional contributions as well.
The belief is that the observed discrepancy in g-2 is due to positive contributions from these unknown processes arising from unknown physics, which could not have been included in the calculations. Of course, if one could do experiments with the tau lepton, which is 3,500 times heavier than the electron, there would be roughly 300-fold larger contributions compared with the muon and correspondingly greater sensitivity. However, the tau is unstable and decays too quickly—in tens of trillionths of a second compared with the muon’s lifetime of millionths of a second—to be used in meaningful experiments. It is for this reason that measuring g-2 of the muon has emerged as an ideal test bed for new physics.
In the context of the SM, which unifies the electromagnetic, the weak and the strong forces, virtual (quantum loop) processes arising from the different sectors of the model will all contribute to this virtual fluctuating field of the particle. The theoretical calculation, say, of the magnetic moment will correspondingly involve summing up the contributions from each sector, and these calculations are tricky, complex and tedious (Figure 4). The amazing thing about g-2 is that it is not only precisely calculable within the SM framework but can also be measured very precisely. As mentioned earlier, while the SM calculations have been carried out at 420 ppb accuracy, the Fermilab g-2 experiment is expected to achieve 140 ppb accuracy, which is like measuring the length of a football field with an error margin only one-tenth the thickness of a human hair.
The electromagnetic effects (which include virtual processes involving photons and charged particles) are calculated using quantum electrodynamics (QED), a theory tested to ppb accuracy; the weak force effects (which include virtual processes involving the W and Z bosons and the Higgs boson) are calculated using the (Glashow-Salam-Weinberg) unified electroweak theory; and the strong force (hadronic sector) effects (which include virtual processes involving quarks and gluons) are calculated using quantum chromodynamics (QCD). The calculation of the hadronic sector contribution is somewhat messy because QCD is not perturbatively solvable at the low-energy scale characterised by the muon mass (about 100 megaelectronvolts). To achieve the 420 ppb accuracy in theoretical calculations, QED contributions have been calculated to include 5 quantum loops, which means calculating contributions from a mind-boggling 12,672 Feynman diagrams, and electroweak sector contributions have been calculated up to the 2-loop level.
Much of the theoretical uncertainty actually lies in the calculation of contributions from the hadronic sector, which is not directly calculable. To get around the problem, physicists have essentially adopted two methods for calculating the hadronic contribution. The first uses data on hadron production from electron-positron (e-e+) collisions in e-e+ colliders. These hadron-producing e-e+ processes are used as a proxy to derive the contributions of virtual hadrons in muon magnetic moment calculations. So, as the e-e+ collider experiments have steadily improved in precision, there are ongoing efforts to improve the calculation of hadronic contributions with better and better data inputs from e-e+ processes. The other method is to do calculations in the “lattice-QCD” framework, in which particles (quarks and gluons) are placed on the nodes of a discrete lattice-like space-time.
Of the three new calculations mentioned earlier in the article, the published one (by A. Keshavarzi of the University of Liverpool and colleagues in Physical Review D) is based on the first approach, and the two awaiting publication in Physical Review Letters (one by Sz. Borsanyi and others and the second by T. Blum and associates) are based on the latter. Writing a “Viewpoint” essay on the American Physical Society’s website, physics.aps.org, B. Lee Roberts, a member of the Fermilab g-2 experiment team from Boston University, Massachusetts, observed that while the new lattice-QCD calculations were quite consistent with the earlier calculations using the phenomenological approach, their error margins were quite large. On the other hand, the calculation of Keshavarzi and others, which uses the most recent data from e-e+ colliders, represented the most precise evaluation of the hadronic contribution, Roberts said. In fact, their prediction deviates from the experimental value by 3.7 sigma, which strongly reaffirms the long-standing discrepancy between SM predictions and experiment.
As Roberts pointed out, while the central value of the SM predictions has remained stable since 2003, the uncertainty in these calculations has steadily decreased. This implies that the statistical significance of the discrepancy has continued to increase. This is perhaps an indication that the imminent high-precision data from Fermilab will only firmly reiterate that the discrepancy is real and that the muon is showing the way to new physics. On the other hand, if the measurement is consistent with theory and there is a null result, it will allow physicists to narrow the search for new physics, since it will rule out some models currently in vogue, such as the SUSY scenarios invoked to explain the anomaly, which would no longer be viable.