Interactions with André Petermann

The origin of this conceptual revolution was the work in which these two theoretical physicists discovered that all quantities such as the gauge couplings (αi) and the masses (mj) must “run” with q², the squared invariant four-momentum of a process (Stueckelberg and Petermann 1951). It took many years to realize that this “running” not only allows the existence of a grand unification and opens the way to supersymmetry, but also finally produces the need for a non-point-like description of physics processes – the relativistic quantum-string theory – that should produce the much-needed quantization of gravity.

It is interesting to recall the reasons that this paper attracted so much attention. The radiative corrections to any electromagnetic process had been found to be logarithmically divergent. Fortunately, all divergences could be grouped into two classes: one had the property of a mass; the other had the property of an electric charge. If these divergent integrals were substituted with the experimentally measured mass and charge of the electron, then all theoretical predictions could be made “finite”. This procedure was called “mass” and “charge” renormalization.

Stueckelberg and Petermann discovered that if the mass and the charge are made finite, then they must run with energy. However, the freedom remains to choose the renormalization subtraction points. Petermann and Stueckelberg proposed that this freedom had to obey the rules of an invariance group, which they called the “renormalization group” (Stueckelberg and Petermann 1953). This is the origin of what we now call the renormalization-group equations, which – as mentioned – imply that all gauge couplings and masses must run with energy. It was remarkable, many years later, to find that the three gauge couplings could converge, even if not exactly, towards the same value. This means that all gauge forces could have the same origin; in other words, grand unification. A new ingredient in the unification was the supersymmetry that my old friend Bruno Zumino was proposing with Julius Wess. Bruno told me that he was working with a young fellow, Sergio Ferrara, to construct non-Abelian Lagrangian theories simultaneously invariant under supergauge transformations, without destroying asymptotic freedom. During a night-time discussion with André in 1977, in the experimental hall where we were searching for quarks at the Intersecting Storage Rings, I told him that two gifts were in front of us: asymptotic freedom and supersymmetry. The first was essential for the experiment being implemented, the second for making the convergence of the gauge couplings “perfect” in our work on the unification. As we will see later, this was the first time that we realized how to make the unification “perfect”.

The muon g-2

The second occasion for me to get to know André came in 1960, when I was engaged in measuring the anomalous magnetic moment (g–2) of the muon. He had made the most accurate theoretical prediction, but there was no high-precision measurement of this quantity because technical problems remained to be solved. For example, a magnet had to be built that could produce a set of high-precision polynomial magnetic fields throughout as long a path as possible. This is how the biggest (6 m long) “flat magnet” came to be built at CERN, with the invention of a new technology now in use the world over. André worked only at night and, because he was interested in the experimental difficulties, he spent nights with me working in the SC Experimental Hall. It was a great help for me to interact with the theorist who had made the most accurate theoretical prediction for the anomalous magnetic moment of a particle 200 times heavier than the electron. Surely the muon had to reveal a difference in a fundamental property such as its g-value – otherwise, why is its mass 200 times greater than that of the electron? (Even now, five decades later, no one knows why.)

When the experiment at CERN proved that, at the level of 2.5 parts per million for the g-value, the muon behaves as a perfect electromagnetic object, the focus of the problem changed: why are there so many muons around? The answer lay in the incredible value of the mass difference between the muon and its parent, the π. Could another “heavy electron” – a “third lepton” – exist with a mass in the range of giga-electron-volts? Had a search ever been done for this third “lepton”? The answer was no; only strongly interacting particles had been studied. This is how the search for a new heavy lepton, called HL, was implemented at CERN, with the Proton AntiProton into LEpton Pairs (PAPLEP) project, where the production process was proton–antiproton annihilation. André and I discussed these topics in the CERN Experimental Hall during the night shifts he spent with me.

The results of the PAPLEP experiment gave an unexpected value for the (time-like) electromagnetic form factor of the proton, with the consequence that the measured rate was a factor of 500 below the point-like cross-section for PAPLEP. This is how, during another series of night discussions with André, we decided that the “ideal” production process for a third “lepton” was e⁺e⁻ annihilation. However, there was no such collider at CERN. The only one being built was at Frascati, by Bruno Touschek, who was a good friend of Bruno Ferretti and another physicist who preferred to work at night. I had the great privilege of knowing Touschek when I was in Rome. He also became a strong supporter of the search for a “third lepton” with the new e⁺e⁻ collider, ADONE. Unfortunately, the top energy of ADONE was 3 GeV and the only result that we could achieve was a limit of 1 GeV on the mass of the much-desired “third lepton”.

Towards supersymmetry

Another topic discussed with André has its roots in the famous work with Stueckelberg – the running with energy of the fundamental couplings of the three interactions: electromagnetic, weak and strong. The crucial point here came at the European Physical Society (EPS) conferences in York (1978) and Geneva (1979). In my closing lecture at EPS-Geneva, I said: “Unification of all forces needs first a supersymmetry. This can be broken later, thus generating the sequence of the various forces of nature as we observe them.” This statement was based on work with André in which, in 1977, we studied – as mentioned before – the renormalization-group running of the couplings and introduced a new degree of freedom: supersymmetry. The result was that the convergence of the three couplings improved a great deal. This work was not published, but it was known to a few and it led to the Erice Schools Superworld I, Superworld II and Superworld III.

This is how we arrived at 1991, when it was announced that the search for supersymmetry had to wait until the multi-tera-electron-volt energy threshold became available. At the time, a group of 50 young physicists was engaged with me on the search for the lightest supersymmetric particle in the L3 experiment at CERN’s Large Electron–Positron (LEP) collider. If the new theoretical “predictions” were true, then there was no point in spending so much effort in looking for supersymmetry-breaking in the LEP energy region. Reading the relevant papers, André and I realized that no one had ever considered the evolution of the gaugino mass (EGM). During many nights of work we improved the unpublished result of 1977 mentioned above: the effect of the EGM was to bring down the energy threshold for supersymmetry-breaking by nearly three orders of magnitude. Thanks to this series of works I could assure my collaborators that the “theoretical” predictions of the energy level where supersymmetry-breaking could occur were perfectly compatible with LEP energies (and now with LHC energies).

Finally, in the field of scientific culture, I would like to pay tribute to André Petermann for having been a strong supporter of the establishment of the Ettore Majorana Centre for Scientific Culture in Erice. In the old days, before anyone knew of Ettore Majorana, André was one of the few people who knew about Majorana neutrinos and who understood that relativistic invariance gives no special privilege to spin-½ particles – such as the privilege of having antiparticles – since all spin values share it. In all of my projects André was a great help, encouraging me to go on no matter what arguments the opposition presented, arguments that he often found to be far from rigorous.

Saul Perlmutter: from light into darkness

Paradoxically, work on “standard candles” led to the discovery that the universe is much darker than anyone thought. Arnaud Marsollier caught up with Saul Perlmutter recently to find out more about this Nobel breakthrough.

Saul Perlmutter admits that measuring an acceleration of the expansion of the universe – work for which he was awarded the 2011 Nobel Prize in Physics together with Brian Schmidt and Adam Riess – came as a complete surprise. Indeed, it is exactly the opposite of what Perlmutter’s team was trying to measure: the decelerating expansion of the universe. “My very first reaction was the reaction of any physicist in such a situation: I wondered which part of the chain of the analysis needed a new calibration,” he recalls. After the team had checked and rechecked over several weeks, Perlmutter, who is based at Lawrence Berkeley National Laboratory and the University of California, Berkeley, still wondered what could be wrong: “If we were going to present this, then we would have to make sure that everybody understood each of the checks.” A few months later, in the autumn of 1997, the team began to make its result public, inviting scrutiny from the broader cosmology community.

Despite great astonishment, acceptance of the result was swift. “Maybe in science’s history, it’s the fastest acceptance of a big surprise,” says Perlmutter. He remembers how, at a colloquium that he presented in November 1997, cosmologist Joel Primack stood up and, instead of talking to Perlmutter, addressed the audience, declaring: “You may not realize this, but this is a very big problem. This is an outstanding result you should be worried about.” Of course, some colleagues were sceptical at first. “There must be something wrong, it is just too crazy to have such a small cosmological constant,” said cosmologist Rocky Kolb at a conference in early 1998.

According to Perlmutter, one of the main reasons for the quick acceptance by the community of the accelerating expansion of the universe is that two teams reported the same result at almost the same time: Perlmutter’s Supernova Cosmology Project and the High-z Supernova Search Team of Schmidt and Riess. Thus, there was no need to wait a long time for confirmation from another team. “It was known that the two teams were furious competitors and that each of them would be very glad to prove the other one wrong,” he adds. By the spring of 1998, a symposium was organized at Fermilab that gathered many cosmologists and particle physicists specifically to look at these results. At the end of the meeting, after subjecting the two teams to hard questioning, some three quarters of the people in the room raised their hands in a vote to say that they believed the results.

What could be responsible for such an acceleration of the expanding universe? Dark energy, a hypothetical “repulsive energy” present throughout the universe, was the prime suspect. The concept of dark energy was also welcomed because it solves some delicate theoretical problems. “There were questions in cosmology that did not work so well, but with a cosmological constant they are solved,” explains Perlmutter. Albert Einstein had at first included a cosmological constant in his equations of general relativity. The aim was to introduce a counterpart to gravity in order to have a model describing a static universe. However, with evidence for the expansion of the universe and the Big Bang theory, the cosmological constant had been abandoned by most cosmologists. According to George Gamow, even Einstein thought that it was his “biggest blunder” (Gamow 1970). Today, with the discovery of the acceleration of the expansion of the universe, the cosmological constant “is back”.

Since the discovery, other kinds of measurements – for example on the cosmic microwave background radiation (CMB), first by the MAXIMA and BOOMERANG balloon experiments, and then by the Wilkinson Microwave Anisotropy Probe satellite – have proved consistent with, and even strengthened, the idea of an accelerating expansion of the universe. However, it all leads to a big question: what could be the nature of dark energy? In the 20th century, physicists were already busy with dark matter, the mysterious invisible matter that can only be inferred through observations of its gravitational effects on other structures in the universe. Although they still do not know what dark matter is, physicists are increasingly confident that they are close to finding out, with many different kinds of experiments that can shed light on it, from telescopes to underground experiments to the LHC. In the case of dark energy, however, the community is far from agreeing on a consistent explanation.

When asked what dark energy could be, Perlmutter’s eyes light up and his broad smile shows how excited he is by this challenging question. “Theorists have been doing a very good job and we have a whole landscape of possibilities. Over the past 12 years there was an average of one paper a day from the theorists. This is remarkable,” he says. Indeed, this question has now become really important as it seems that physicists know about a mere 5% of the whole mass-energy of the universe, the rest being in the form of dark matter or, in the case of more than 70%, the enigmatic, repulsive stuff known as dark energy or a vacuum energy density.

Including a cosmological constant in Einstein’s equations of general relativity is a simple way to explain the acceleration of the expansion of the universe. However, there are other possibilities. For example, a decaying scalar field of the kind that could have caused the first acceleration at the beginning of the universe, or the existence of extra dimensions, could save the standard cosmological model. “We might even have to modify Einstein’s general relativity,” Perlmutter says. Indeed, all that is known is that the expansion of the universe is accelerating; there is no clue as to why. The ball is in the court of the experimentalists, who will have to provide theorists with more data and refined measurements to show precisely how the expansion rate changes over time. New observations by different means will be crucial, as they could show the way forward and decide between the different available theoretical models.

“We have improved the supernova technique and we know what we need to make a measurement that is 20 times more accurate,” he says. There are also two other precision techniques currently being developed to probe dark energy, either from space or from the ground. One uses baryon acoustic oscillations, which can be seen as “standard rulers” in the same way that supernovae are used as standard candles. These oscillations leave imprints on the structure of the universe at all ages. By studying these imprints relative to the CMB, the earliest “picture of the universe” available, it is possible to measure the rate at which the expansion of the universe is accelerating. The second technique is based on gravitational lensing, the deflection of light by massive structures, which allows cosmologists to study the history of the clumping of matter in the universe, with the attraction of gravity contesting with the accelerating expansion. “We think we can use all of these techniques together,” says Perlmutter. Among the projects he mentions are the US-led ground-based BigBOSS experiment, the Large Synoptic Survey Telescope and ESA’s Euclid satellite, all of which are under preparation.

However, the answer to this obscure mystery – or at least part of it – could come from elsewhere. The full results from ESA’s Planck satellite, for instance, are eagerly awaited because they should provide unprecedented precision on measurements of the CMB. “The Planck satellite is an ingredient in all of these analyses,” explains Perlmutter. In addition, cosmology and particle physics are increasingly linked. In particular, the LHC could bring some input into the story quite soon. “It is an exciting time for physics,” he says. “If we just get one of these breakthroughs through the LHC, it would help a lot. We are really hoping that we will see the Higgs and maybe we will see some supersymmetric particles. If we are able to pin down the nature of dark matter, that can help a lot as well.” Not that Perlmutter thinks that the mystery of dark energy is related to dark matter, considering that they are two separate sectors of physics, but as he says, “until you find out, it is still possible”.

LHCb looks forward to electroweak physics

LHCb is one of the four large experiments at the LHC. It was designed primarily to probe beyond the Standard Model by investigating CP violation and searching for the effects of new physics in precision measurements of decays involving heavy quarks, b quarks in particular. At the LHC, pairs of particles (B and B̄ mesons) containing these quarks are produced mainly in the direction of the colliding protons, that is, in the same forward or backward cone about the beam line. For this reason, LHCb was built as a single-arm forward spectrometer that covers production angles close to the beam line with full particle detection and tracking capability – closer even than the general-purpose experiments, ATLAS and CMS. This gives LHCb the opportunity to study the Standard Model in regions that are not easily accessible to ATLAS and CMS. In particular, the experiment has an active and rapidly developing programme of electroweak physics that is beginning to test the Standard Model in several unexplored regions.

Closer to the beam

Particle production at collider experiments is usually described in terms of pseudorapidity, defined as η = –ln tan(θ/2), where θ is the angle of the particle relative to the beam axis. The particles tend to be produced in the forward direction, that is, crowded into small values of θ, while in terms of η they are spread more uniformly. The inverse relationship means that the closer a particle is to the beam line, the larger its pseudorapidity. LHCb’s forward spectrometer is fully instrumented in the range 2 < η < 5, a portion of which (2 < η < 2.5) is also covered by ATLAS and CMS. However, the forward region at η > 2.5 – roughly between 10° and 0.5° to the beam – is unique to LHCb, thanks to its full complement of particle detection.
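
As a quick numerical illustration, the relation between η and θ can be sketched in a few lines of Python (the function names here are illustrative):

```python
import math

def pseudorapidity(theta):
    """Pseudorapidity eta = -ln tan(theta/2), with theta in radians."""
    return -math.log(math.tan(theta / 2.0))

def theta_from_eta(eta):
    """Invert the relation: polar angle (radians) for a given eta."""
    return 2.0 * math.atan(math.exp(-eta))

# LHCb's instrumented range 2 < eta < 5, expressed as angles to the beam:
for eta in (2.0, 2.5, 5.0):
    print(f"eta = {eta}: theta = {math.degrees(theta_from_eta(eta)):.2f} deg")
# eta = 2.5 is ~9.4 deg and eta = 5 is ~0.8 deg, consistent with the rough
# "between 10 deg and 0.5 deg" range quoted above.
```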

LHCb can explore electroweak physics through the production of W and Z bosons, as well as virtual photons. The experiment can trigger on and reconstruct muons with momenta as low as pμ > 5 GeV and transverse momenta pT > 1 GeV, giving access to low values of the muon-pair invariant mass, mμμ > 2.5 GeV. Specialist triggers can even explore invariant masses below 2.5 GeV in environments of low multiplicity. Coupled with the forward geometry, this reconstruction capability opens up a large, previously unmeasured kinematic region.

Figure 1 shows the kinematic regions that LHCb probes in terms of x, the longitudinal fraction of the incoming proton’s momentum that is carried by the interacting parton (quark or gluon), and Q², the square of the four-momentum exchanged in the hard scatter. Because of the forward geometry, the momenta of the two interacting partons are highly asymmetric in the particle-production processes detected at LHCb. This means that LHCb can simultaneously probe not only a region at high x that has been explored by other experiments but also a new, unexplored region at small values of x. The high rapidity range and low transverse-momentum trigger thresholds for muons allow potential exploration of Q² down to 6.25 GeV² and x down to 10⁻⁶, thus extending the region that was accessible at HERA, the electron–proton collider at DESY.
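
A rough numerical sketch shows how forward production connects to these x values, assuming the standard leading-order two-parton kinematics x1,2 = (M/√s)e^±y (a textbook relation; the numbers below are illustrative):

```python
import math

def parton_x(mass, rapidity, sqrt_s=7000.0):
    """Leading-order 2->1 kinematics: x_{1,2} = (M / sqrt(s)) * exp(+-y).
    mass and sqrt_s in GeV; returns (x1, x2) for the two partons."""
    common = mass / sqrt_s
    return common * math.exp(rapidity), common * math.exp(-rapidity)

# A Z boson (M ~ 91.2 GeV) produced at rapidity y = 4 in the forward region:
x1, x2 = parton_x(91.2, 4.0)
print(f"x1 = {x1:.2f}, x2 = {x2:.1e}")   # one parton at high x, one at x ~ 2e-4

# A low-mass Drell-Yan pair (m = 5 GeV) at y = 4.5 reaches even lower:
x1, x2 = parton_x(5.0, 4.5)
print(f"x2 = {x2:.0e}")                  # x ~ 8e-6, towards the 1e-5 - 1e-6 frontier
```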

The aim is to probe and constrain the parton-density functions (PDFs) – basically, the probability density for finding a parton with longitudinal momentum fraction x at momentum transfer Q² – in the available kinematic regions. The PDFs provide important input to theoretical predictions of cross-sections at the LHC and at present they dominate the uncertainties in the theoretical calculations, which now include terms up to next-to-next-to-leading order (NNLO).

Using data collected in 2010, the LHCb collaboration measured the production cross-sections of W and Z bosons in proton–proton collisions at a centre-of-mass energy of 7 TeV, based on an analysis of about 36 pb⁻¹ of data (LHCb collaboration 2011a). Although only a small fraction of W and Z bosons enter the acceptance of the experiment (typically 10–15%), the large production cross-sections ensure that the statistical error on these measurements is small. The results are consistent with NNLO predictions that use a variety of models for the PDFs. With greater statistics, the measurements will begin to probe differences between these models.

The uncertainty in luminosity dominates the precision to which cross-sections can be determined, so the collaboration also measures ratios of W and Z production, which are insensitive to this uncertainty, as well as the charge asymmetry for W production, AW = (σW+ – σW–)/(σW+ + σW–). Figure 2 shows the results for AW overlaid with equivalent measurements by ATLAS and CMS. It illustrates how the kinematic region explored by LHCb is complementary to that of the general-purpose detectors and extends the range that can be tested at the LHC. It is also apparent that LHCb’s acceptance probes the region where the asymmetry is changing rapidly, so the measurements are particularly sensitive to the parameters of the various PDF models.
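
In code, the asymmetry definition reads as follows; the cross-section values are invented for illustration and are not LHCb measurements:

```python
def w_charge_asymmetry(sigma_w_plus, sigma_w_minus):
    """A_W = (sigma(W+) - sigma(W-)) / (sigma(W+) + sigma(W-))."""
    return (sigma_w_plus - sigma_w_minus) / (sigma_w_plus + sigma_w_minus)

# Illustrative values only: forward W+ production exceeds W- because the
# proton carries more u than d valence quarks.
print(w_charge_asymmetry(1000.0, 600.0))  # 0.25
```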

Low-momentum muons

The LHCb collaboration also plans to increase the probing power of the cross-section measurements by improving the uncertainty in the luminosity itself. Work is ongoing to measure the exclusive production of pairs of muons, a QED process that should ultimately yield a more precise indirect measure of integrated luminosity. Although instrumented in the forward region, LHCb has some tracking coverage in the backward region –4 < η < –1.5, because the proton–proton collision point lies a little way inside the main tracking detector. The measurement exploits this acceptance, LHCb’s ability to trigger on muons with low momentum and the low pile-up environment of collisions at LHCb, which allows the identification of these low-multiplicity, exclusively produced events. First measurements based on 2010 data show that the measurement is feasible (LHCb collaboration 2011b). Updated measurements based on the 2011 data set are underway.

In high-energy hadron–hadron scattering, the production of Z and W bosons, which decay into lepton pairs, occurs as a Drell-Yan process in which a quark in one hadron interacts with an antiquark in the other hadron to produce a W, a Z or a virtual photon, which then produces a lepton pair of opposite charges. With its ability to trigger on and identify muons with low transverse momentum, LHCb can measure the production of muon pairs from Drell-Yan production down to invariant masses approaching 5 GeV. As figure 1 shows, these measurements probe values of x around 10⁻⁵ and can be used to improve knowledge of the behaviour of gluons inside the proton, building on the knowledge gained at HERA.

These and other production studies are being updated for the upcoming 20th International Workshop on Deep-Inelastic Scattering, which takes place in Bonn on 26–30 March. The first measurements using electron final-states will also be available soon, as will those on the production of Z bosons in association with jets. The latter will open the way to more direct probes of the PDFs once the jets can be tagged by flavour (for example, a measurement of the production of a W boson together with a charm jet will allow constraints to be placed on the behaviour of the strange quark inside the proton).

The forward acceptance of LHCb also provides unexpected advantages for other measurements. The further forward in pseudorapidity that final states are produced, the more likely they are to arise from interactions between a valence quark in one proton and an antiquark in the “sea” of the other proton. This is in contrast to the ATLAS and CMS experiments, which see predominantly sea–sea collisions. The measurement of the forward–backward asymmetry of Z bosons, which is sensitive to the electroweak mixing angle, sin²θW, benefits from this ability to define a “forward” incoming-quark direction. Studies show that LHCb can identify this correctly in more than 90% of events that have boson rapidities above 3 (McNulty 2011). PDF uncertainties are also reduced in this region. This gives the LHCb experiment the potential to reach the precision of a typical measurement of sin²θW at the Large Electron–Positron collider, even with the data set of 1 fb⁻¹ already recorded.

Studies of the production of the top quark could also benefit from LHCb’s detection system. Although the production rate for top inside LHCb is small at 7 TeV, at 14 TeV the rate should be large enough to make measurements viable. At this centre-of-mass energy, top pairs are produced by quark–antiquark annihilation twice as often inside the forward region of LHCb’s acceptance as they are in the central region. A measurement of the tt̄ asymmetry with LHCb could give a direct and comparable cross-check of the recent result from Fermilab’s Tevatron.

Electroweak physics at LHCb may not have been part of the original programme, but the future prospects are bright.

Experiment recreates ‘seeds’ of the universe’s magnetic fields

How did magnetic fields arise in the universe? An experiment using a high-power laser to create plasma instabilities may have glimpsed the processes that created magnetic fields during the period of galaxy formation.

Magnetic fields pervade the cosmos. Measurements of synchrotron emission at radio frequencies from cosmic rays, and of Faraday rotation, reveal that magnetic fields exist in galaxy clusters on the megaparsec scale, with strengths that vary from a few nanogauss to a few microgauss. Intergalactic magnetic fields weave through clusters of galaxies, forming even larger-scale structures. In these clusters the temperatures can often be greater than 10⁸ K, making them strong X-ray emitters. It is possible that the energy to heat the plasma comes from the magnetic field through some plasma instability. In general, wherever intergalactic hot matter is detected, magnetic fields with strengths greater than 10⁻⁹ G are also observed – with weaker magnetic fields tending to occur outside galaxy clusters. The magnetic field therefore appears to play a role in the structure of the universe.

The only way to explain the observed magnetization is through a magnetic dynamo mechanism, in which it is necessary to invoke a “seed” field – but the origin of this seed field remains a puzzle. Prior to galaxy formation, density inhomogeneities would drive violent motions in the universe, forming shock waves that would generate vorticity on all scales. In 1997 Russell Kulsrud suggested that the “Biermann battery effect” could create seed magnetic fields as small as 10⁻²¹ G that would then be amplified by the protogalaxy dynamo. In this effect, proposed by astrophysicist Ludwig Biermann in 1950, electric fields can arise in a plasma as the electrons and the heavier protons respond differently to external pressure and tend to separate. The Biermann battery acts to create seed magnetic fields whenever the pressure and density gradients are not parallel.

Now, an international team of scientists has performed an experiment to recreate conditions similar to those in the pregalactic epoch, where shocks and turbulent motions form. They used a high-power laser at the Laboratoire pour l’Utilisation des Lasers Intenses in Paris to explode a rod of carbon surrounded by helium gas in a field-free environment. Magnetic induction coils monitored the magnetic fields created in the resulting shock waves. The team found that the explosion generated strong shock waves around which strong electric currents and magnetic fields formed through the Biermann battery effect, with fields as high as 10–30 gauss existing for 1–2 μs at 3 cm from the blast. When scaled through 22 orders of magnitude, the measurements matched the predynamo magnetic seeds predicted by theory prior to galaxy formation.
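
A back-of-envelope check of that scaling, with illustrative numbers only:

```python
# Illustrative check: a laboratory field of order 10 gauss, scaled down by
# the 22 orders of magnitude quoted above, lands at the ~1e-21 G Biermann
# seed field suggested for the protogalactic plasma.
lab_field_gauss = 10.0
orders_of_magnitude = 22
seed_gauss = lab_field_gauss * 10.0 ** (-orders_of_magnitude)
print(f"{seed_gauss:.0e} G")  # 1e-21 G
```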

Looking at the top for new physics

Last year, one of the properties of top-quark production, the tt̄ charge asymmetry, attracted much interest with the publication of measurements by the CDF and DØ collaborations at Fermilab’s Tevatron (T Aaltonen et al. 2011, V M Abazov et al. 2011). They reported results that were 2σ above the predicted values. This deviation can be explained by several theories that go beyond the Standard Model by introducing new particles that contribute to top-quark production. The CMS collaboration has now measured this top-quark property for the first time at the LHC – and finds a different result.

In the Standard Model, a difference in the angular distributions of top quarks and antiquarks (commonly referred to as the charge asymmetry) in tt̄ production through quark–antiquark annihilation appears in QCD calculations at next-to-leading order. It leads to more top quarks being produced at small angles to the beam pipe, while top antiquarks are produced more centrally (figure 1). As a consequence, the pseudorapidity distribution of top quarks is broader than that of top antiquarks, which makes the difference of the respective pseudorapidities, Δ|η| = |ηt| – |ηt̄|, a suitable observable for measuring the charge asymmetry (figure 2). The Standard Model predicts a small asymmetry of AC = 0.0136 ± 0.0008, which translates into an excess of about 1% of events with Δ|η| > 0 compared with events with Δ|η| < 0 (Kuhn and Rodrigo 2011). New sources of physics could enhance this asymmetry.
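
As a numerical illustration of what such an asymmetry means in terms of raw event counts (the counts below are invented):

```python
def charge_asymmetry(n_forward, n_central):
    """A_C = (N(d|eta| > 0) - N(d|eta| < 0)) / (N(d|eta| > 0) + N(d|eta| < 0))."""
    return (n_forward - n_central) / (n_forward + n_central)

# Invented counts: out of 10000 events, a ~1% excess with d|eta| > 0
# reproduces an asymmetry of the SM-predicted size.
print(charge_asymmetry(5068, 4932))  # 0.0136
```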

CMS has measured the tt̄ charge asymmetry using data corresponding to an integrated luminosity of 1 fb⁻¹ (CMS collaboration 2011). A total of 12 757 events were selected in the lepton+jets channel, where one top quark decays into a b quark, a charged lepton (electron or muon) and the corresponding neutrino, while the other top quark decays into three quarks. The background contribution to this dataset is about 20%. The measurement of the charge asymmetry is based on the full reconstruction of the four-momenta of the top quarks, which have to be reconstructed from the observed leptons, jets and missing transverse energy. The dependence of the selection efficiency on the Δ|η| value, as well as the smearing of the momenta of the top-quark decay products because of finite detector resolution, are accounted for when calculating the final result.

The measured value of AC = –0.017 ± 0.032 (stat.) +0.025/–0.036 (syst.) is consistent with the Standard Model prediction and does not provide any indication of a new-physics contribution. CMS also measured the uncorrected charge asymmetry as a function of the invariant mass of the top-quark pair, mtt̄. Previous measurements by the CDF collaboration had found an asymmetry that was more than 3σ above the predicted value for large values of mtt̄. However, the current analysis by CMS dampens the excitement that the CDF result caused because it reveals no hints of a deviation from the Standard Model predictions.

The hunt for long-lived exotic beasts

The hunt for exotic massive long-lived particles is an important element in the ATLAS collaboration’s programme of searches. The signatures associated with such long-lived objects are particularly striking and experimentally challenging. At the LHC they could appear as slow-moving and highly ionizing objects that could slip into the next bunch-crossing, saturate the read-out electronics and confound the event reconstruction software. An alternative approach to the direct detection of moving long-lived particles is to search for those that stop in the detector and subsequently decay. This is the method used in a recent search by the ATLAS collaboration.

The new search looks for metastable R-hadrons that would be formed from gluinos and light quarks (ATLAS collaboration 2011a). If produced, some R-hadrons would stop in the dense calorimeter material, following electromagnetic and hadronic interactions. Within the scenario of split-supersymmetry, an R-hadron could decay to a final state of jets and a neutralino. During 2010, the experiment used jet triggers to record candidate decays in empty bunch crossings, when no proton–proton collisions were intended. With the subsequent analysis, which required estimations of cosmic and beam-related backgrounds along with the uncertainties on R-hadron stopping rates, ATLAS has set upper limits on the pair-production cross-section for gluinos with lifetimes in the range 10⁻⁵–10³ s. From this, the collaboration has obtained a lower mass limit for the gluino of around 340 GeV at the 95% CL (see figure). Although the search was inspired by split-supersymmetry, the results are generally applicable to any heavy object decaying to jets.

This complex work complements other, more conventional, searches for long-lived particles that interact or decay in the ATLAS detector. These results allow stringent limits to be set on topical models of new physics. Moreover, the collaboration is performing experimentally driven searches up to the limits of the detector’s capability to detect long-lived objects. For example, a search based on early collision data sought exotic particles with large electric charge (up to 17e).

With more data and a continually improving knowledge of the detector response, the ATLAS collaboration is aiming at a set of comprehensive searches for long-lived objects, which possess a range of colour, electric and magnetic charges, and appear as stable objects or decay to a variety of final states.

ALICE unveils mysteries of the J/ψ

J/ψ suppression

The J/ψ meson, a bound state of a charm (c) and an anticharm (c̄) quark, is unique in the long list of particles that physicists have discovered over the past 50 years. Found almost simultaneously in 1974 – at Brookhaven, in proton–nucleus collisions, and at SLAC, in e⁺e⁻ collisions – this particle is the only one with two names, given to it by the two teams. With a mass greater than 3 GeV it was by far the heaviest known particle at the time and it opened a new field in particle physics, namely the study of “heavy” quarks.

The charm quark and its heavier partners, the bottom and top quarks (the latter discovered more than 20 years later, in 1995), have proved to be a source of both inspiration and problems for particle physicists. By now, thousands of experimental and theoretical papers have been published on these quarks and the production, decay and spectroscopy of particles containing heavy quarks have been the focus of intense and fruitful investigations.

However, despite a history of almost 40 years, the production of the J/ψ itself still represents a puzzle for QCD, the standard theory of strong interactions between quarks and gluons. On the one hand, the creation of a pair of quarks as “heavy” as charm (mc ≈ 1.3 GeV/c²) in a gluon–gluon or quark–antiquark interaction is a process that is “hard” enough to be treated in a perturbative way and is therefore well understood by theory. On the other hand, the binding of the pair is essentially a “soft” process – the relative velocity of the two quarks in a J/ψ is “only” about 0.5c – and this proves to be much more difficult to model.

J/ψ production

About 15 years ago, the results obtained at Fermilab’s Tevatron collider first showed a clear inconsistency with the theoretical approach adopted at the time to model J/ψ production, the so-called colour-singlet model. This unsatisfactory situation led to the formulation of the more refined approach of nonrelativistic QCD (NRQCD), which brought better agreement with data. However, other quantities such as the polarization of the produced J/ψ, i.e. the extent to which the intrinsic angular momentum of the particle is aligned with respect to its momentum, were poorly reproduced. This uncomfortable situation also arose partly because of controversial experimental results from the Tevatron, where the CDF experiment’s results on polarization from Run 1 disagreed with those from Run 2. Considerable hope is therefore placed on the results that the LHC can obtain for this observable (more on this later).

Nevertheless, despite these unresolved mysteries surrounding its production, the J/ψ has an important “application” in high-energy nuclear physics and more precisely in the branch that studies the formation of the state of (nuclear) matter where quarks and gluons are no longer confined into hadrons: the quark–gluon plasma (QGP). If such a state is created, it can be thought of as a hot “soup” of coloured quarks and gluons, where colour is the “charge” of the strong interaction. In the usual world, quarks and gluons are confined within hadrons and colour cannot fly over large distances. However, in certain situations, as when ultrarelativistic heavy-ion collisions take place, a QGP state could be formed and studied. Indeed, such studies form the bulk of the physics programme of the ALICE experiment at the LHC.

The J/ψ is composed of a heavy quark–antiquark pair, with the two objects orbiting at a relative distance of about 0.5 fm, held together by the strong colour interaction. However, if such a state were placed inside a QGP, its binding could be screened by the huge number of colour charges (quarks and gluons) that make up the QGP and roam freely around it. This causes the binding of the quark and antiquark in the J/ψ to become weaker, so that ultimately the pair disintegrates and the J/ψ disappears – i.e. it is “suppressed”. Theory has shown that the probability of dissociation depends on the temperature of the QGP, so the observation of a suppression of the J/ψ can be seen as a way to place a “thermometer” in the medium itself.

Such a screening of the colour interaction, and the consequent J/ψ suppression, was first predicted by Helmut Satz and Tetsuo Matsui in 1986 and was thoroughly investigated over the following years in experiments with heavy-ion collisions. In particular, Pb–Pb interactions were studied at CERN’s Super Proton Synchrotron (SPS) at a centre-of-mass energy, √s, of around 17 GeV per nucleon pair, and then Au–Au collisions were studied at √s = 200 GeV at Brookhaven’s Relativistic Heavy-Ion Collider (RHIC).

The J/ψ and its suppression can be seen as a thermometer in the medium created in the collision

As predicted by the theory, a suppression of the J/ψ yield was observed with respect to what would be expected from a mere superposition of production from elementary nucleon–nucleon collisions. However, the experiments also made some puzzling observations. In particular, the size of the suppression (about 60–70% for central, i.e. head-on, nucleus–nucleus collisions) was found to be approximately the same at the SPS and RHIC, despite the jump in the centre-of-mass energy of more than one order of magnitude, which would suggest higher QGP temperatures at RHIC. Ingenious explanations were suggested but a clear-cut resolution of this puzzle proved impossible.

At the LHC, however, extremely interesting developments are expected. In particular, a much higher number of charm–anticharm pairs is produced in the nuclear interaction, thanks to the unprecedented centre-of-mass energies. As a consequence, even a suppression of the J/ψ yield in the hot QGP phase could be more than counterbalanced by a statistical combination of charm–anticharm pairs occurring when the system, after expansion and cooling, finally crosses the temperature boundary between the QGP and a hot gas of particles. If the density of heavy-quark pairs is large enough, this regeneration process may even lead to an enhancement of the J/ψ yield – or at least to a much weaker suppression with respect to the experiments at lower energies. The observation of the fate of the J/ψ in nuclear collisions at the LHC constitutes one of the goals of the ALICE experiment and was among its main priorities during the first run of the LHC with lead beams in November/December 2010.

The ALICE experiment is particularly suited to observing a J/ψ regeneration process. For simple kinematic reasons, regeneration can be more easily observed for charm quarks with low transverse momentum. In contrast to the other LHC experiments, both of the detector systems where J/ψ detection takes place – the central barrel (where the J/ψ→e⁺e⁻ decay is studied) and the forward muon spectrometer (for J/ψ→μ⁺μ⁻) – can detect J/ψ particles down to zero transverse momentum.

As the luminosity of the LHC was still low during its first nucleus–nucleus run, the overall J/ψ statistics collected in 2010 were not huge, of the order of 2000 signal events. Nevertheless, it was possible to study the J/ψ yield as a function of the centrality of the collisions in five intervals from peripheral (grazing) to central (head-on) interactions.

Clearly, suppression or enhancement of a signal must be established with respect to a reference process. For such a study, the most appropriate reference is the J/ψ yield in elementary proton–proton collisions at the same energy as in the nucleus–nucleus data-taking. However, in the first proton run of the LHC, the centre-of-mass energy of 7 TeV was more than twice the energy of 2.76 TeV per nucleon–nucleon collision in the Pb–Pb run. To provide an unbiased reference, the LHC was therefore run for a few days at the beginning of 2011 with lower-energy protons, and J/ψ production was studied at the same centre-of-mass energy as in the Pb–Pb interactions.

The two parameters

The Pb–Pb and p–p results are compared using a standard quantity, the nuclear modification factor RAA. This is essentially the ratio of the J/ψ yield in Pb–Pb collisions, normalized to the average number of nucleon–nucleon collisions that take place in the interaction of the two nuclei, to the proton–proton yield. Values of RAA smaller than 1 therefore indicate a suppression of the J/ψ yield, while values larger than 1 represent an enhancement.
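
The definition can be sketched in a few lines; the numbers below are illustrative placeholders, not ALICE data:

```python
def r_aa(yield_pbpb, n_coll, yield_pp):
    """Nuclear modification factor: Pb-Pb yield per nucleon-nucleon
    collision, divided by the p-p yield at the same energy."""
    return (yield_pbpb / n_coll) / yield_pp

# Illustrative numbers only: R_AA < 1 means suppression, R_AA > 1 enhancement.
print(r_aa(yield_pbpb=1200.0, n_coll=400.0, yield_pp=6.0))  # 0.5 -> suppressed
```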

The results from the first ALICE run are rather striking when compared with the observations from lower energies (figure 1). While a similar suppression is observed at LHC energies for peripheral collisions, when moving towards more head-on collisions – as quantified by the increasing number of nucleons in the lead nuclei participating in the interaction – the suppression no longer increases. Therefore, despite the higher temperatures attained in the nuclear collisions at the LHC, relatively more J/ψ mesons are detected by the ALICE experiment in Pb–Pb with respect to p–p than at lower energies. Such an effect is likely to be related to a regeneration process occurring at the temperature boundary between the QGP and a hot gas of hadrons (T ≈ 160 MeV).

The picture that arises from these observations is consistent with the formation, in Pb–Pb collisions at the LHC, of a deconfined system (QGP) that can suppress the J/ψ meson, followed by a hadronic system in which a fraction of the charm–anticharm pairs coalesce and ultimately give a J/ψ yield larger than that observed at lower energies. This picture should be clarified by the Pb–Pb data that were collected in autumn 2011. Thanks to an integrated luminosity for such studies that was 20 times larger than in 2010, a final answer on the fate of the J/ψ inside the hot QGP produced at the LHC seems to be within reach.

ALICE is also working hard to help solve other puzzles in J/ψ production in proton–proton collisions, in particular by studying, as described above, the degree of polarization. A first result, recently published in Physical Review Letters, shows that J/ψ mesons produced at not too high a transverse momentum are essentially unpolarized, i.e. the angular distribution of the decay muons in the J/ψ→μ⁺μ⁻ process is nearly isotropic (figure 2). Theorists are now working to establish whether such behaviour is compatible with the NRQCD approach, which up to now is the best available tool for understanding the physics related to J/ψ production.

In conclusion, a particle that has been known for almost half a century continues to be a source of inspiration and progress. However, even if particle and nuclear physicists working at the LHC are confident of being able finally to understand its multifaceted aspects, the future often brings the unexpected. So stay tuned and be ready for surprises.

DIRAC observes dimeson atoms and measures their lifetime

The study of nonstandard atoms has a long tradition in particle physics. Such exotic atoms include positronium, muonic atoms, antihydrogen and hadronic atoms. In this last category, mesonic hydrogen in particular has been investigated extensively in different experiments at CERN, PSI and Frascati. Dimeson atoms also belong to this category. These electromagnetically bound mesonic pairs, such as the π⁺π⁻ atom (pionium, A₂π) or the πK atom (AπK), offer the opportunity to study the theory of the strong interaction, QCD, at low energy, i.e. in the confinement region.

This strong interaction leads to a broadening and a shift of atomic levels, and dominates the lifetime of these exotic atoms in their s-states. The ππ interaction at low energy, constrained by the approximate chiral SU(2) symmetry for two flavours (u and d quarks), is the simplest and best understood hadron–hadron process. Since the bound-state physics is well known, a measurement of the A₂π lifetime provides information on hadron properties in the form of scattering lengths – the basic parameters of low-energy ππ scattering.

Moreover, the low-energy interaction between the pion and the next lightest, and strange, meson – the kaon – provides a promising probe for learning about the more general three-flavour SU(3) structure (u, d and s quarks) of hadronic interactions, which is a matter not directly accessible in pion–pion interactions. Hence, data on πK atoms are valuable because they provide insights into the role played by the strange quarks in the QCD vacuum.

The experiment

The mesonic atoms A₂π (AπK) are produced by final-state Coulomb interactions between oppositely charged ππ (πK) pairs that are generated in proton–target reactions (Nemenov 1985). In the DImeson Relativistic Atom Complex (DIRAC) experiment at CERN, they are formed when a 24 GeV/c proton beam from the Proton Synchrotron hits a thin target, typically a 100 μm-thick nickel foil (figure 1). After production, the mesonic atoms travel through the target and some of them are broken up (ionized) as they interact with matter. This produces “atomic pairs”, which are characterized by their small relative momenta, Q < 3 MeV/c, in the centre of mass of the pair. These pairs are detected in the DIRAC apparatus. The remaining atoms mainly annihilate into π⁰π⁰ pairs, which are not detected, or they survive and annihilate later. The number of “atomic pairs” from the break-up of atoms, nA, depends on the annihilation mean free path, which is given by the atom’s lifetime, τ, and its momentum. Thus, the break-up probability, Pbr, is a function of the A₂π’s lifetime, τ.

The interactions between the protons and the target also produce oppositely charged free ππ pairs, both with and without final-state Coulomb interactions, depending on whether or not the pairs are produced close to each other. This gives rise to “Coulomb pairs” and “non-Coulomb pairs”. The latter include meson pairs in which one or both mesons come from the decay of long-lived sources. Furthermore, two mesons from different interactions can contribute as “accidental pairs”. The total number of atoms produced, NA, is proportional to NC, the number of Coulomb pairs with low relative momenta: NA = kNC, where the coefficient k is precisely calculable. DIRAC measures the break-up probability for the A₂π, which is defined as the ratio of the observed number of “atomic pairs” to the number of produced atoms: Pbr(τ) = nA/NA. NA is calculated from the number of “Coulomb pairs”, NC, obtained from fits to the data.
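
This chain of definitions can be sketched as follows; the value of k and the Coulomb-pair count below are illustrative placeholders, not DIRAC’s numbers:

```python
def break_up_probability(n_atomic_pairs, n_coulomb_pairs, k):
    """P_br = n_A / N_A, with the number of produced atoms N_A = k * N_C."""
    n_atoms_produced = k * n_coulomb_pairs
    return n_atomic_pairs / n_atoms_produced

# The atomic-pair count is the one quoted later in the text; k and the
# Coulomb-pair count are assumed values for illustration.
print(f"{break_up_probability(21200, 370000, 0.12):.2f}")  # ~0.48
```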

The purpose of the DIRAC set-up is to record oppositely charged ππ (πK) pairs with small relative momenta, Q. As figure 2 shows, the emerging pairs of charged pions travel in vacuum through the upstream part where co-ordinate and ionization detectors provide initial track data, before they are split by the 2.3 Tm bending magnet into the “positive” (T1) and “negative” (T2) arms. Both arms are equipped with high-precision drift chambers and trigger/time-of-flight detectors, as well as Cherenkov, preshower and muon counters. The relative time resolution between the two arms is around 200 ps.

The momentum reconstruction in the double-arm spectrometer uses the drift chamber information from both arms as well as the measured hits in the upstream co-ordinate detectors. The resolution on the longitudinal (QL) and transverse (QT) components of the relative momentum of the pair, Q, defined with respect to the direction of the total momentum of the pair in the laboratory, is 0.55 MeV/c and 0.1 MeV/c, respectively. A system of fast-trigger processors selects the all-important events with small Q.

Observing and measuring lifetimes

The observation of the A₂π was reported from an experiment at Serpukhov nearly 20 years ago. This was followed at CERN 10 years later with a measurement of the A₂π lifetime by DIRAC (Adeva et al. 2005). Last autumn, DIRAC presented the most recent value for the A₂π lifetime in the ground state, τ = 3.15 × 10⁻¹⁵ s, with a total uncertainty of around 9%, based on the statistics of 21 200 “atomic pairs” collected with the nickel target in 2001–2003 (Adeva et al. 2011). Figure 3 (overleaf) shows the characteristic accumulation of events at low QL from the break-up of the π⁺π⁻ atom: the A₂π signal appears as an excess of pairs over the background spectrum in the low-Q region.

S-wave ππ scattering is isospin-dependent, so this lifetime can be used to calculate a scattering-length difference, |a0–a2|, where a0 and a2 are the S-wave ππ scattering lengths for isospin 0 and 2, respectively. The measured lifetime yields a result of |a0–a2| = 0.253 mπ⁻¹ with around 4% precision, in agreement with the result obtained by the NA48/2 experiment at CERN (Batley et al. 2009). The corresponding theoretical values are 0.265 ± 0.004 mπ⁻¹ for the scattering-length difference (Colangelo et al. 2000) and (2.9 ± 0.1) × 10⁻¹⁵ s for the lifetime (Gasser et al. 2001). These results demonstrate the high precision that can be reached in low-energy hadron interactions, in both experiment and theory.
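
A rough consistency check of these numbers is possible if one assumes the leading-order (Deser-type) relation, in which the decay width scales as |a0–a2|², so that |a0–a2| scales as 1/√τ:

```python
import math

# Assumed leading-order relation: 1/tau ~ |a0 - a2|^2, so |a0 - a2| ~ 1/sqrt(tau).
tau_measured  = 3.15e-15   # s (DIRAC, Adeva et al. 2011)
tau_theory    = 2.9e-15    # s (Gasser et al. 2001)
a_diff_theory = 0.265      # in units of 1/m_pi (Colangelo et al. 2000)

a_diff_from_tau = a_diff_theory * math.sqrt(tau_theory / tau_measured)
print(f"|a0 - a2| ~ {a_diff_from_tau:.3f} / m_pi")  # ~0.254, close to the 0.253 quoted
```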

The first evidence for the observation of the πK atom, AπK, was published by the DIRAC collaboration in 2009 (Adeva et al. 2009). In this case, the mesonic atoms were produced in a 26 μm-thick platinum target and the DIRAC spectrometer had been upgraded for K identification with heavy-gas (C4F10) and aerogel Cherenkov detectors. An enhancement observed at low relative momentum corresponds to the production of 173 ± 54 πK “atomic pairs”. From this first data sample, the collaboration derived a lower limit on the πK atom lifetime of τπK > 0.8 × 10⁻¹⁵ s (90% CL), to be compared with the theoretical prediction of (3.7 ± 0.4) × 10⁻¹⁵ s (Schweizer 2004). The ongoing detailed analysis of a much larger data sample aims first to extract a clear signal for the production of the AπK atom and then to deduce from these data a value for the AπK lifetime.

Future investigations

Pionium is an atom like hydrogen and the properties of its states vary strongly with their quantum numbers. An illustration of this is the 2s→1s two-photon de-excitation in hydrogen (τ2s ≈ 0.1 s), which is many orders of magnitude slower than the 2p→1s radiative transition (τ2p = 1.6 ns). In pionium, the situation is similar but opposite: the decay of the 2s state, A₂π(2s)→2π⁰ (τ2s = 23.2 fs), is roughly three orders of magnitude faster than the 2p→1s radiative transition (τ2p = 11.7 ps). The DIRAC collaboration aims to measure Lamb shifts in pionium by exploiting the properties of these specific states and in 2010 started to study the possibility of observing long-lived A₂π states (Nemenov et al. 2002).

The energy shifts, ΔE(ns–np), for levels with principal quantum number n and orbital quantum number l are another valuable source of information. These shifts contain a dominant strong contribution, ΔEstrong(ns–np), together with minor QED contributions, ΔEQED(ns–np), from vacuum polarization and self-energy effects. The strong s-state energy shift, ΔEstrong(ns–np), is proportional to (2a0 + a2), i.e. it depends on the same scattering lengths, a0 and a2, as the pionium lifetime. As figure 4 shows, for the principal quantum number n = 2, the strong and electromagnetic interactions shift the 2s level below the 2p level by ΔE(2s–2p) = ΔEstrong(2s–2p) + ΔEQED(2s–2p) = –0.47 eV – 0.12 eV = –0.59 eV (Schweizer 2004).

By studying the dependence of the lifetime of long-lived A₂π states (with l ≥ 1) on an applied electric field – the Stark-mixing effect – the DIRAC experiment is in a unique position to investigate the splitting of pionium energy levels. This will allow another combination of pion-scattering lengths to be extracted, so that a0 and a2 can finally be determined individually.

Getting excited about the Higgs?

Tuesday 13 December 2011 is a day that many will remember. There was high anticipation of what the ATLAS and CMS collaborations would have to say about the latest results in the search for the elusive Higgs boson. Only senior management knew what the other collaboration was going to present; for everyone else it was a well-kept surprise.

From 8.30 a.m. onwards, physicists flocked into the main auditorium at CERN and by 10.30 a.m. the place was packed – three and a half hours before the talks even started – and no more people were being allowed in. The atmosphere was almost festive, maybe because the wireless network became saturated so that nobody could work. Similarly eager anticipation could be felt in a separate room where journalists representing the many news agencies and TV channels were able to share in the excitement.

But what was all of the excitement about?

The short answer is that the presentations revealed that if the Higgs boson exists in the manner predicted by the Standard Model, then its mass is most likely between 115.5 and 127 GeV. To be more precise, the CMS collaboration rules out at 95% confidence level a Higgs boson with a mass larger than 127 GeV, while the ATLAS collaboration rules it out for masses below 115.5 GeV and larger than 131 GeV (with a small window of 237–251 GeV in mass not yet excluded by ATLAS). The upper limits on the exclusion region are 468 GeV for ATLAS and 600 GeV for CMS.

The ability to exclude the low-mass region was limited in both experiments by an excess of events around 120 GeV. Such excesses could be just background fluctuations or the first indications of a Higgs signal building up. These results are consistent with what is expected from the statistics accumulated so far, whether the low-mass Higgs exists or not.

The 2012 data campaign, during which the LHC is expected to deliver at least twice as many collisions as in 2011, should put to rest the 40-year quest for the Standard Model Higgs boson via either its discovery or its complete exclusion.

Only a month after the end of proton–proton collisions in 2011, both collaborations showed preliminary results using the full statistics of the year, corresponding to an integrated luminosity of 4.6–4.9 fb⁻¹ – almost twice that shown at the summer conferences. The results unveiled in the presentations in December demonstrate the deep understanding achieved by each collaboration of detector performance and of the numerous backgrounds.

Both spokespeople, Fabiola Gianotti for the ATLAS collaboration and Guido Tonelli for the CMS collaboration, paid tribute in their presentations to the hundreds of physicists – most of them students and young post-docs – who have worked so hard in recent months to improve substantially the understanding of the detectors, in particular under the complex condition of ever increasing pile-up, where as many as 20 interaction vertices were reconstructed in a single event.

As a result of a coin toss by the director-general, Gianotti spoke first. The ATLAS collaboration had concentrated on updating the analyses in the channels that are most sensitive to the low-mass Higgs boson: H→γγ and the “golden channel”, H→ZZ→llll, where l indicates an electron or a muon. An update of the H→WW search with the data collected for the summer conferences was also shown. While in the first two channels the Higgs boson would be seen as a narrow peak on top of a broad background, in the third channel it would appear as a broader excess of events.

Tonelli then took the stage and presented a full array of CMS analyses – all including the full 2011 statistics – starting with the ones sensitive to the highest Higgs masses, H→ZZ→(llqq), (llνν), (llττ), continuing with H→WW, H→ττ, H→bb and finishing with H→γγ and the golden channel H→ZZ→llll.

The energy and angular resolutions of the electromagnetic calorimeters are the key ingredients in the analysis of H→γγ, which is potentially the single most sensitive mode in the low-mass region. The better the resolution, the narrower the peak in the invariant-mass distribution of the two photons and the easier it will be to see the Higgs, if it exists.

Despite having two different detector technologies – ATLAS uses a liquid argon sampling calorimeter, while CMS relies on crystals – the mass resolution obtained in both experiments in the channel with the best resolution is 1.4 GeV. For CMS, good mass resolution is possible thanks in particular to the major progress that has been achieved in understanding the calibration of the crystal calorimeter; in the central area of the calorimeter the performance is now close to nominal. The ATLAS calorimeter provides a similar mass resolution to that of CMS, despite intrinsically worse energy resolution. This is thanks to its capability to measure photon angles.

LHCb sees first evidence for CP violation in charm decays

Mass-difference spectra

The LHCb experiment was initially designed for the study of B physics (the “b” in its name stands for beauty, or b quark). However, the LHC is also a copious source of particles that contain the charm quark, such as the D meson, which also makes the experiment well suited to their study. The rate at which data are selected by the LHCb trigger and written to storage was therefore increased last year by 50%, to 3 kHz, with the extra capacity dedicated to charm. This has now paid off spectacularly, with one of the most interesting (and unexpected) results to come from the LHC so far: evidence of CP violation in charm decays.

CP symmetry, the combination of charge conjugation, C, and parity, P, is known to be violated in B and K decays. It is an important property to study because it is a necessary ingredient for explaining the matter–antimatter asymmetry in the universe. The CP violation observed so far in B and K decays is consistent with the predictions of the Standard Model but is far too small to explain the observed matter–antimatter asymmetry. The prediction for D mesons in the Standard Model is that they should show little CP violation, at the level of 10⁻³ or less, but this may be enhanced by new physics.

CP violation can be observed as a difference in the rate of D⁰ decays to a given final state, compared with the rate of the antiparticle D̄⁰ decays to the same final state. Because the effect being looked for is tiny, the LHCb collaboration performed its search by measuring the difference in the CP asymmetry for two final states, D⁰ → K⁻K⁺ and D⁰ → π⁻π⁺, which is denoted by ΔACP. In this way, systematic uncertainties in the production and detection of particles compared with antiparticles should cancel, while the CP asymmetry, which is expected to have opposite sign for the two modes, should remain visible.
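
A small numerical sketch shows why the difference is robust: an instrumental asymmetry common to both final states cancels in ΔACP (the numbers below are invented):

```python
def delta_a_cp(a_raw_kk, a_raw_pipi):
    """Difference of raw asymmetries; biases common to both modes cancel."""
    return a_raw_kk - a_raw_pipi

# Invented numbers: a +1.0% production/detection bias affects both final
# states equally, while the genuine CP asymmetry enters with opposite signs.
a_raw_kk   = 0.010 + (-0.004)   # bias + CP asymmetry in D0 -> K-K+
a_raw_pipi = 0.010 + (+0.004)   # bias + CP asymmetry in D0 -> pi-pi+
print(f"{delta_a_cp(a_raw_kk, a_raw_pipi):+.3f}")  # -0.008: the bias drops out
```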

D*± → D⁰π± decays are used as the source of the neutral D-meson sample because the charge of the pion tags the produced particle as a D⁰ or D̄⁰ (see figure). The experiment has collected more than a million of these decays, a total more than 10 times greater than in measurements by experiments at B-factories. It is also larger than the data sample used to obtain the previously most precise result, from the CDF experiment at Fermilab’s Tevatron, and benefits from the clean selection made possible by LHCb’s ring-imaging Cherenkov detectors.

LHCb measures the asymmetry to be ΔACP = (–0.82 ± 0.21 ± 0.11)%, where the first error is statistical and the second systematic. The significance of the measured deviation from zero is 3.5σ, giving the first evidence for CP violation in the charm sector, at a level that is higher than was expected. Establishing whether this result is consistent with the Standard Model, or the first hint of new physics, will require the analysis of additional data and improved theoretical understanding. LHCb has already collected almost a factor of two more data, with more to come this year, so this exciting first indication should be clarified soon.
