
Common baryon source found in proton collisions

Figure 1

High-energy hadronic collisions, such as those delivered by the LHC, result in the production of a large number of particles. Particle pairs produced close together in both coordinate and momentum space are subject to final-state effects, such as quantum statistics, Coulomb forces and, in the case of hadrons, strong interactions. Femtoscopy uses the correlation of such pairs in momentum space to gain insights into the interaction potential and the spatial extent of an effective particle-emitting source.
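The quantitative link behind this technique, not spelled out in the text but standard in femtoscopy (the Koonin–Pratt relation), expresses the measured two-particle correlation as the emission source folded with the pair wave function:

\[
C(k^*) = \int \mathrm{d}^3 r^* \, S(r^*) \, \bigl| \psi(\vec{k}^*, \vec{r}^*) \bigr|^2 ,
\]

so that if the interaction (and hence ψ) is known the source S can be extracted, and vice versa.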

Abundantly produced pion pairs are used to assess the size and evolution of the high-density, strongly interacting quark–gluon plasma formed in heavy-ion collisions. Recently, high-multiplicity pp collisions at the LHC have raised the possibility of observing collective effects similar to those seen in heavy-ion collisions, motivating detailed investigations of the particle source in such systems as well. A universal description of the emission source for all baryon species, independent of their specific quark composition, would open new possibilities for studying the baryon–baryon interaction and would impose strong constraints on particle-production models.

The ALICE collaboration has recently used p–p and p–Λ pairs to perform the first study of the particle-emitting source for baryons produced in pp collisions. The chosen data sample isolates the 1.7 permille highest-multiplicity collisions in the 13 TeV data set, yielding events with 30 to 40 charged particles reconstructed, on average, per unit of rapidity. The yields of protons and Λ baryons are dominated by contributions from short-lived resonances, which account for about two thirds of all produced particles. A basic thermal model (the statistical hadronisation model) was used to estimate the number and composition of these resonances, indicating that the average lifetime of those feeding into protons (1.7 fm) is significantly shorter than of those feeding into Λ baryons (4.7 fm); this would have led to a substantial broadening of the source shape if not properly accounted for.

An explicit treatment of the effect of short-lived resonances was developed by assuming that all primordial particles and resonances are emitted from a common core source with a Gaussian shape. The core source was then folded with the exponential tails introduced by the resonance decays. The resulting root-mean-square width of the Gaussian core decreases from 1.3 fm to 0.85 fm as the pair’s transverse mass (mT) increases from 1.1 to 2.2 GeV, for both p–p and p–Λ pairs (see figure). The transverse mass of a particle is its total energy in a coordinate system in which its velocity along the beam axis is zero. The two systems exhibit a common scaling of the source size, indicating a common emission source for all baryons. The observed scaling of the source size with mT is very similar to that observed in heavy-ion collisions, where the effect is attributed to the collective evolution of the system.
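For reference, the standard definition consistent with the description above is

\[
m_\mathrm{T} = \sqrt{m^2 + p_\mathrm{T}^2},
\]

where m is the particle’s mass and pT its momentum transverse to the beam; this indeed equals the particle’s total energy in the frame in which its longitudinal velocity vanishes. For a pair, the analogous quantity is built from the pair kinematics.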

This result is a milestone in the field of correlation studies, as it directly relates to important topics in physics. The common source size observed for p–p and p–Λ pairs implies that the spatio-temporal properties of the hadronisation process are independent of the particle species. This observation can be exploited by coalescence models studying the production of light nuclei, such as deuterons or ³He, in hadronic collisions. Moreover, the femtoscopy formalism relates the emission source to the interaction potential between pairs of particles, enabling the study of the strong nuclear force between hadrons, such as p–K, p–Ξ, p–Ω and Λ–Λ, with unprecedented precision.

CLOUD clarifies cause of urban smog

Urban flow patterns

Urban particle pollution ranks fifth among risk factors for mortality worldwide, and is a growing problem in many built-up areas. In a result that could help shape policies for reducing such pollution, the CLOUD collaboration at CERN has uncovered a new mechanism that drives winter smog episodes in cities.

Winter urban smog episodes occur when new particles form in polluted air trapped below a temperature inversion: warm air above the inversion inhibits convection, causing pollution to build up near the ground. However, how additional aerosol particles form and grow in this highly polluted air has puzzled researchers because they should be rapidly lost through scavenging by pre-existing aerosol particles. CLOUD, which uses an ultraclean cloud chamber situated in a beamline at CERN’s Proton Synchrotron to study the formation of aerosol particles and their effect on clouds and climate, has found that ammonia and nitric acid can provide the answer.

Deriving in cities mainly from vehicle emissions, ammonia and nitric acid were previously thought to play a passive role in particle formation, simply exchanging with ammonium nitrate in the particles. However, the new CLOUD study finds that small inhomogeneities in the concentrations of ammonia and nitric acid can drive the growth rates of newly formed particles up to more than 100 times faster than seen before, but only in short spurts that have previously escaped detection. These ultrafast growth rates are sufficient to rapidly transform the newly formed particles to larger sizes, where they are less prone to being lost through scavenging, leading to a dense smog episode with a high number of particles.

“Although the emission of nitrogen oxides is regulated, ammonia emissions are not and may even be increasing with the latest catalytic converters used in gasoline and diesel vehicles,” explains CLOUD spokesperson Jasper Kirkby. “Our study shows that regulating ammonia emissions from vehicles could contribute to reducing urban smog.”

Lofty thinking

Jasper Kirkby

What, in a nutshell, is CLOUD?

It’s basically a cloud chamber, but not a conventional one as used in particle physics. We realistically simulate selected atmospheric environments in an ultraclean chamber and study the formation of aerosol particles from trace vapours, and how they grow to become the seeds for cloud droplets. We can precisely control all the conditions found throughout the atmosphere such as gas concentrations, temperature, ultraviolet illumination and “cosmic ray” intensity with a beam from CERN’s Proton Synchrotron (PS). The aerosol processes we study in CLOUD are poorly known yet climatically important because they create the seeds for more than 50% of global cloud droplets.

We have 22 institutes and the crème de la crème of European and US atmospheric and aerosol scientists. It’s a fabulous mixture of physicists and chemists, and the skills we’ve learned from particle physics in terms of cooperating and pooling resources have been incredibly important for the success of CLOUD. It’s the CERN model, the CERN culture that we’ve conveyed to another discipline. We implemented the best of CERN’s know-how in ultra-clean materials and built the cleanest atmospheric chamber in the world.

How did CLOUD get off the ground?

The idea came to me in 1997 during a lecture at CERN given by Nigel Calder, a former editor of New Scientist magazine, who pointed out a new result from satellite data about possible links between cosmic rays and cloud formation. That Christmas, while we visited relatives in Paris, I read a lot of related papers and came up with the idea to test the cosmic ray–cloud link at CERN with an experiment I named CLOUD. I did not want to ride into another field telling those guys how to do their stuff, so I wrote a note of my ideas and started to make contact with the atmospheric community in Europe and build support from lab directors in particle physics. I managed to assemble a dream team to propose the experiment to CERN. The hard part was convincing CERN that they should do this crazy experiment. We proposed it in 2000 and it was finally approved in 2006, which I think is a record for CERN to approve an experiment. There were some people in the climate community who were against the idea that cosmic rays could influence clouds. But we persevered and, once approved, things went very fast. We started taking data in 2009 and have been in discovery mode ever since.

Do you consider yourself a particle physicist or an atmospheric scientist?

An experimental physicist! My training and my love is particle physics, but judging by the papers I write and review, I am now an atmospheric scientist. It was not difficult to make this transition. It was a case of going back to my undergraduate physics and high-school chemistry and learning on the job. It’s also very rewarding. We do experiments, like we all do at CERN, on a 24/7 basis, but with CLOUD I can calculate things in my notebook and see the science that we are doing, so we know immediately what the new stuff is and we can adapt our experiments continuously during our run.

On the other hand, in particle physics the detectors are running all the time but we really don’t know what is in the data without years of very careful analysis afterwards, so there is this decoupling of the result from the actual measurement. Also, in CLOUD we don’t need a separate discipline to tell us about the underlying theory or beauty of what we are doing. In CLOUD you’re the theorist and the experimentalist at the same time – like it was in the early days of particle physics.

How would you compare the Standard Model to state-of-the-art climate models?

It’s night and day. The Standard Model (SM) is such a well-formed and quantitatively precise theory that we can see incredibly subtle signals in detectors against a background of something that is extremely well understood. Climate models, on the other hand, are trying to simulate a very complex system about what’s happening on Earth’s surface, involving energy exchanges between the atmosphere, the oceans, the biosphere, the cryosphere … and the influence of human beings. The models involve many parameters that are poorly understood, so modellers have to make plausible yet uncertain choices. As a result, there is much more flexibility in climate models, whereas there is almost none in the SM. Unfortunately, this flexibility means that the predictive power of such models is much weaker than it is in particle physics.

The CLOUD detector

There are skills such as the handling of data, statistics and software optimisation where particle physics is probably the leading science in the world, so I would love to see CERN sponsor a workshop where the two communities could exchange ideas and perhaps even begin to collaborate. This is what CLOUD has done. It’s politically correct to talk about the power of interdisciplinary research, but it’s very difficult in practical terms – especially when it comes to funding because experiments often fall into the cracks between funding agencies.

How has CLOUD’s focus evolved during a decade of running?

CLOUD was designed to explore whether variations of cosmic rays in the atmosphere affect clouds and climate, and that’s still a major goal. What I didn’t realise at the beginning is how important aerosol–particle formation is for climate and health, and just how much is not yet understood. The largest uncertainty facing predictions of global warming is not due to a lack of understanding about greenhouse gases, but about how much aerosols and clouds have increased since pre-industrial times from human activities. Aerosol changes have offset some of the warming from greenhouse gases but we don’t know by how much – it could have offset almost nothing, or as much as one half of the warming effect. Consequently, when we project forwards, we don’t know how much Earth will warm later this century to better than a factor of three.

Many of our experiments are now aimed at reducing the aerosol uncertainties in anthropogenic climate change. Since all CLOUD experiments are performed under different ionisation conditions, we are also able to quantify the effect of cosmic rays on the process under study. A third major focus concerns the formation of smog under polluted urban conditions.

What have CLOUD’s biggest contributions been?

We have made several major discoveries and it’s hard to rank them. Our latest result (CLOUD clarifies cause of urban smog) on the role of ammonia and nitric acid in urban environments is very important for human health. We have found that ammonia and nitric acid can drive the growth rates of newly formed particles up to more than 100 times faster than seen before, but only in short spurts that have previously escaped detection. This can explain the puzzling observation of bursts of new particles that form and grow under highly polluted urban conditions, producing winter smog episodes. An earlier CLOUD result, also in Nature, showed that a few parts-per-trillion of amine vapours lead to extremely rapid formation of sulphuric acid particles, limited only by the kinetic collision rate. We had a huge fight with one of the referees of this paper, who claimed that it couldn’t be atmospherically important because no-one had previously observed it. Finally, a paper appeared in Science last year showing that sulphuric acid–amine nucleation is the key process driving new particle formation in Chinese megacities.

In CLOUD you’re the theorist and the experimentalist at the same time – like it was in the early days of particle physics

A big result from the point of view of climate change came in 2016 when we showed that trees alone are capable of producing abundant particles and thus cloud seeds. Prior to that it was thought that sulphuric acid was essential to form aerosol particles. Since sulphuric acid was five times lower in the pre-industrial atmosphere, climate models assumed that clouds were fewer and thinner back then. This is important because the pre-industrial era is the baseline aerosol state from which we assess anthropogenic impacts. The fact that biogenic vapours make lots of aerosols and cloud droplets reduces the contrast in cloud coverage (and thus the amount of cooling offset) between then and now. The formation rate of these pure biogenic particles is enhanced by up to a factor 100 by galactic cosmic rays, so the pristine pre-industrial atmosphere was more sensitive to cosmic rays than today’s polluted atmosphere.

There was an important result the very first week we turned on CLOUD, when we saw that sulphuric acid does not nucleate on its own but requires ammonia. Before CLOUD started, people were measuring particles but they weren’t able to measure the molecular composition, so many experiments were being fooled by unknown contaminants.

Have CLOUD results impacted climate policy?

The global climate models that inform the Intergovernmental Panel on Climate Change (IPCC) have begun to incorporate CLOUD aerosol parameterisations, and they are impacting estimates of Earth’s climate sensitivity. The IPCC assessments are hugely impressive works of the highest scientific quality. Yet, there is something of a disconnect between what climate modellers do and what we do in the experimental and observational world. The modellers tend to work in national centres and connect with experiments through the latter’s publications, at the end of the chain. I would like to see much closer linkage between the models and the measurements, as we do in particle physics where there is a fluid connection between theory, experiment and modelling. We do this already in CLOUD, where we have several institutes who are primarily working on regional and global aerosol-cloud models.

What’s next on CLOUD’s horizon?

The East Hall at the PS is being completely rebuilt during CERN’s current long shutdown, but the CLOUD chamber itself is pretty much the only item that is untouched. When the East Area is rebuilt there will be a new beamline and a new experimental zone for CLOUD. We think we have a 10-year programme ahead to address the questions we want to and to settle the cosmic ray–cloud–climate question. That will take me up to just over 80 years old!

Will humanity succeed in preventing catastrophic climate change?

I am an optimist, so I believe there is always a way out of everything. It’s very understandable that people want to freeze the exact temperature of Earth as it is now, because we don’t want to see a flood or desert in our back garden. But I’m afraid that’s not how Earth is, even without the anthropogenic influence. Earth has gone through much larger natural climate oscillations, even on the recent timescale of Homo sapiens. That being said, I think Earth’s climate is fundamentally stable. Oceans cover two thirds of Earth’s surface and their latent heat of vaporisation is a huge stabiliser of climate – they have never evaporated nor completely frozen over. Also, only around 2% of CO2 is in the atmosphere and most of the rest is dissolved in the oceans, so eventually, over the course of several centuries, CO2 in the atmosphere will equilibrate at near pre-industrial levels. The current warming is an important change – and some argue it could produce a climate tipping point – but Earth has gone through larger changes in the past and life has continued. So we should not be too pessimistic about Earth’s future. And we shouldn’t conflate pollution and climate change. Reducing pollution is an absolute no-brainer, but environmental pollution is a separate issue from climate change and should be treated as such.

New Perspectives on Einstein’s E = mc²

New Perspectives on Einstein’s E = mc² mixes historical notes with theoretical aspects of the Lorentz group that impact relativity and quantum mechanics. The title is a little perplexing, however, as one can hardly expect nowadays to discover new perspectives on an equation such as E = mc². The book’s true aim is to convey to a broader audience the formal work done by the authors on group theory. Therefore, a better-suited title may have been “Group theoretical perspectives on relativity”, or even, more poetically, “When Wigner met Einstein”.

The first third of the book is an essay on Einstein’s life, with historical notes on topics discussed in the subsequent chapters, which are more mathematical and draw heavily on publications by the authors – a well-established writing team who have co-authored many papers relating to group theory. The initial part is easy to read and includes entertaining stories, such as Einstein’s mistakes when filing his US tax declaration. Einstein, according to this story, was calculating his taxes erroneously, but the US tax authorities were kind enough not to raise the issue. The reader has to be warned, however, that the authors, professors at the University of Maryland and New York University, have a tendency to make questionable statements about certain aspects of the development of physics that may not be backed up by the relevant literature, and may even contradict known facts. They repeatedly interpret the development of physical theories in terms of a Hegelian synthesis of a thesis and an antithesis, without citing any sources in support, which seems, in most cases, to be a somewhat arbitrary a posteriori assessment.

There is a sharp change of style in the second part of the book, which requires training in physics or maths at advanced undergraduate level. These chapters begin with a discussion of the Lorentz group. The interest then quickly shifts to Wigner’s “little groups”, which are subgroups of the Lorentz group with the property of leaving the momentum of a system invariant. Armed with this mathematical machinery, the authors proceed to Dirac spinors and give a Lorentz-invariant formulation of the harmonic oscillator that is eventually applied to the parton model. The last chapter is devoted to a short discussion on optical applications of the concepts advanced previously. Unfortunately, the book finishes abruptly at this point, without a much-needed final chapter to summarise the material and discuss future work, which, the previous chapters imply, should be plentiful.

Young Suh Kim and Marilyn Noz’s book may struggle to find its audience. The contrast between the lay and expert parts of this short book, and the very specialised topics it explores, do not make it suitable for a university course, though sections could be incorporated as additional material. It may well serve, however, as an interesting pastime for mathematically inclined audiences who will certainly appreciate the formalism and clarity of the presentation of the mathematics.

Researchers grapple with XENON1T excess

An intriguing low-energy excess of background events recorded by the world’s most sensitive WIMP dark-matter experiment has sparked a series of preprints speculating on its underlying cause. On 17 June, the XENON collaboration, which searches for excess nuclear recoils in the XENON1T detector, a one-tonne liquid-xenon time-projection chamber (TPC) located underground at Gran Sasso National Laboratory in Italy, reported an unexpected excess in electronic recoils at energies of a few keV, just above its detection threshold. Though acknowledging that the excess could be due to a difficult-to-constrain tritium background, the collaboration says solar axions and solar neutrinos with a Majorana nature, both of which would signal physics beyond the Standard Model, are credible explanations for the approximately 3σ effect.

Who needs the WIMP if we can have the axion?

Elena Aprile

“Thanks to our unprecedented low event rate in electronic recoils background, and thanks to our large exposure, both in detector mass and time, we could afford to look for signatures of rare and new phenomena expected at the lowest energies where one usually finds lots of background,” says XENON spokesperson Elena Aprile, of Columbia University in New York. “I am especially intrigued by the possibility to detect axions produced in the Sun,” she says. “Who needs the WIMP if we can have the axion?”

The XENON collaboration has been in pursuit of WIMPs, a leading cold-dark-matter candidate, since 2005 with a programme of 10 kg, 100 kg and now 1 tonne liquid-xenon TPCs. Particles scattering in the liquid xenon create both scintillation light and ionisation electrons; the latter drift upwards in an electric field towards a gaseous phase where electroluminescence amplifies the charge signal into a light signal. Photomultiplier tubes record both the initial scintillation light and the later electroluminescence, allowing each interaction to be located in three dimensions, and the relative magnitudes of the two signals allow nuclear and electronic recoils to be differentiated. XENON1T derives its world-leading limit on WIMPs – the strictest 90% confidence limit being a cross-section of 4.1×10⁻⁴⁷ cm² for WIMPs with a mass of 30 GeV – from the very low rate of nuclear recoils observed from February 2017 to February 2018.

XENON1T low-energy electronic recoils

A surprise was in store, however, in the same data set, which also revealed 285 electronic recoils at the lower end of XENON1T’s energy acceptance, from 1 to 7 keV, over an expected background of 232±15. The sole background-modelling explanation for the excess that the collaboration has not been able to rule out is a minute concentration of tritium in the liquid xenon. With a half-life of 12.3 years and a low decay energy of 18.6 keV, an unexpected contribution of tritium decays is favoured over XENON1T’s baseline background model at approximately 3σ. “We can measure extremely tiny amounts of various potential background sources, but unfortunately, we are not sensitive to a handful of tritium atoms per kilogram,” explains deputy XENON1T spokesperson Manfred Lindner, of the Max Planck Institute for Nuclear Physics in Heidelberg. Cryogenic distillation, plus running the liquid xenon through a getter, is expected to remove any tritium to below the relevant level, he says, but this needs to be cross-checked. The question is whether a minute amount of tritium could somehow remain in the liquid xenon, or whether some makes it from the detector materials into the liquefied xenon in the detector. “I personally think that the observed excess could equally well be a new background or new physics. About 3σ implies of course a certain statistical chance for a fluctuation, but I find it intriguing to have this excess not at some random place, but towards the lower end of the spectrum. This is interesting since many new-physics scenarios generically lead to a 1/E or 1/E² enhancement which would be cut off by our detection threshold.”
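As a rough, counting-only cross-check (ignoring the spectral-shape information that the collaboration’s actual fits use), the size of the excess is

\[
\frac{285 - 232}{\sqrt{232 + 15^{2}}} \approx 2.5\,\sigma ,
\]

broadly consistent with the approximately 3σ significances quoted for the fitted hypotheses.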

Solar axions

One solution proposed by the collaboration is solar axions. Axions are a consequence of a new U(1) symmetry proposed in 1977 to explain the immeasurably small degree of CP violation in quantum chromodynamics – the so-called strong CP problem – and are also a dark-matter candidate. Though XENON1T is not expected to be sensitive to dark-matter axions, should axions exist they would be produced in the Sun at energies consistent with the XENON1T excess. According to this hypothesis, the axions would be detected via the “axioelectric” effect, an axion analogue of the photoelectric effect. Though a good fit phenomenologically, and, like tritium, favoured over the background-only hypothesis at approximately 3σ, the solar-axion explanation is disfavoured by astrophysical constraints. For example, it would lead to significant extra energy loss in stars.

Axion helioscopes such as the CERN Axion Solar Telescope (CAST) experiment, which directs a prototype LHC dipole magnet at the Sun and could convert solar axions into X-ray photons, will help in testing the hypothesis. “It is not impossible to have an axion model that shows up in XENON but not in CAST,” says deputy spokesperson Igor Garcia Irastorza of the University of Zaragoza, “but CAST already constrains part of the axion interpretation of the XENON signal.” Its successor, the International Axion Observatory (IAXO), which is set to begin data taking in 2024, will have improved sensitivity. “If the XENON1T signal is indeed an axion, IAXO will find it within the first hours of running,” says Garcia Irastorza.

A second new-physics explanation cited for XENON1T’s low-energy excess is an enhanced rate of solar neutrinos interacting in the detector. In the Standard Model, neutrinos have a negligibly small magnetic moment. Should they be Majorana rather than Dirac fermions – that is, identical to their antiparticles – their magnetic moment would be larger, and proportional to their mass, though still not detectable. New physics beyond the Standard Model could, however, enhance the magnetic moment further. This leads to a larger interaction cross section at low energies and an excess of low-energy electron recoils. XENON1T fits indicate that solar Majorana neutrinos with an enhanced magnetic moment are also favoured over the background-only hypothesis at the level of 3σ.
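The low-energy enhancement can be made explicit with the standard expression for the magnetic-moment contribution to neutrino–electron scattering (quoted here for context; T is the electron recoil energy and μν is given in units of the Bohr magneton μB):

\[
\frac{\mathrm{d}\sigma_{\mu}}{\mathrm{d}T} = \frac{\pi \alpha^{2}}{m_{e}^{2}} \left(\frac{\mu_{\nu}}{\mu_{B}}\right)^{2} \left( \frac{1}{T} - \frac{1}{E_{\nu}} \right),
\]

which grows as 1/T towards low recoil energies, precisely where the XENON1T excess sits.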

The absorption of dark photons could explain the observed excess.

Joachim Kopp

The community has quickly chimed in with additional ideas, with around 40 papers appearing on the arXiv preprint server since the result was released. One possibility is a heavy dark-matter particle that annihilates or decays to a second, much lighter, “boosted dark-matter” particle which could scatter on electrons via some new interaction, notes CERN theorist Joachim Kopp. Another class of dark-matter model that has been proposed, he says, is “inelastic dark matter”, where dark-matter particles down-scatter in the detector into another dark-matter state just a few keV below the original one, with the liberated energy then seen in the detector. “An explanation I like a lot is in terms of dark photons,” he says. “The Standard Model would be augmented by a new U(1) gauge symmetry whose corresponding gauge boson, the dark photon, would mix with the Standard-Model photon. Dark photons could be abundant in the Universe, possibly even making up all the dark matter. Their absorption in the XENON1T detector could explain the observed excess.”

“The strongest asset we have is our new detector, XENONnT,” says Aprile. Despite COVID-19, the collaboration is on track to take first data before the end of 2020, she says. XENONnT will boast three times the fiducial volume of XENON1T and a factor six reduction in backgrounds, and should be able to verify or refute the signal within a few months of data taking. “An important question is if the signal has an annual modulation of about 7% correlated to the distance of the sun,” notes Lindner. “This would be a strong hint that it could be connected to new physics with solar neutrinos or solar axions.”

LHC physics shines amid COVID-19 crisis

The eighth Large Hadron Collider Physics (LHCP) conference, originally scheduled to be held in Paris, was held as a fully online conference from 25 to 30 May. To enable broad participation, the organisers waived the registration fee, and, with the help of technical support from CERN, hosted about 1,300 registered participants from 56 countries, with attendees actively engaging via Zoom webinars. Even a poster session was possible, with 50 junior attendees from all over the world presenting their work via meeting rooms and video recordings. The organisers must be complimented for organising a pioneering virtual conference that succeeded in bringing the LHC community together, in larger and more diverse numbers than at previous editions.

LHCP2020 presentations covered a wide assortment of topics and several new results with significantly greater sensitivity than previously possible. These included both precision measurements, with excellent potential to uncover discrepancies that can only be explained by physics beyond the Standard Model (SM), and direct searches for new particles using innovative techniques and advanced analysis methods.

The first observation of the combined production of three massive vector bosons was reported by CMS

The first observation of the combined production of three massive vector bosons (VVV with V = W or Z) was reported by the CMS experiment. In the nearly 40 years since the discovery of the W and Z bosons, their properties have been measured very precisely, including via “diboson” measurements of the simultaneous production of two vector bosons. However, “triboson” production of three massive vector bosons had so far eluded observation, as the cross sections are small and the background contributions rather large. Such measurements are crucial to undertake, both to test the underlying theory and to probe non-standard interactions. For example, if new physics beyond the SM is present at high mass scales not far above 1 TeV, then cross-section measurements for triboson final states might deviate from SM predictions. The CMS experiment took advantage of the large Run 2 dataset and machine-learning techniques to search for these rare processes. Leveraging the relatively background-free leptonic final states, CMS collaborators were able to combine searches for different decay modes and different types of triboson production (WWW, WWZ, WZZ and ZZZ) to achieve the first observation of combined heavy triboson production (with an observed significance of 5.7 standard deviations), and at the same time evidence for WWW and WWZ production, with observed significances of 3.3 and 3.4 standard deviations, respectively. While the results obtained so far agree with SM predictions, more data are needed for individual measurements of the WZZ and ZZZ processes.

Four-top-quark production

The first evidence for four-top-quark production was announced by ATLAS. The top-quark discovery in 1995 launched a rich programme of top-quark studies that includes precision measurements of its properties as well as the observation of single-top-quark production. In particular, since the large mass of the top quark is a result of its interaction with the Higgs field, studies of rare processes such as the simultaneous production of four top quarks can provide insights into properties of the Higgs boson. Within the SM, this process is extremely rare, occurring just once for every 70 thousand pairs of top quarks created at the LHC; on the other hand, numerous extensions of the SM predict exotic particles that couple to top quarks and lead to significantly higher production rates. The ATLAS experiment performed this challenging measurement on the full Run-2 dataset, applying sophisticated techniques and machine-learning methods to the multilepton final state to obtain strong evidence for this process. The observed signal significance was found to be 4.3 standard deviations, in excess of the expected sensitivity of 2.4, assuming SM four-top-quark-production properties. While the measured value of the cross section was found to be consistent with the SM prediction within 1.7 standard deviations, the data collected during Run 3 will shed further light on this rare process.
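The quoted rarity can be checked against approximate SM cross sections at 13 TeV (values assumed here, not given in the text): with σ(tt̄) ≈ 0.8 nb and σ(tt̄tt̄) ≈ 12 fb,

\[
\frac{\sigma(t\bar{t})}{\sigma(t\bar{t}t\bar{t})} \approx \frac{8\times10^{5}~\mathrm{fb}}{12~\mathrm{fb}} \approx 7\times10^{4},
\]

i.e. roughly one four-top event per 70 thousand top-quark pairs.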

The LHCb collaboration presented, with unprecedented precision, measurements of two properties of the mysterious X(3872) particle. Originally discovered by the Belle experiment in 2003 as a narrow state in the J/ψπ+π− mass spectrum of B+→J/ψπ+π−K+ decays, this particle has puzzled particle physicists ever since. The nature of the state is still unclear and several hypotheses have been proposed, such as its being an exotic tetraquark (a system of four quarks bound together), a conventional two-quark hadron, or a molecular state consisting of two D mesons. LHCb collaborators reported the most precise mass measurement yet and measured, for the first time, and with a significance of five standard deviations, the width of the resonance (see LHCb interrogates X(3872) line-shape). Though the results favour its interpretation as a quasi-bound D0D*0 molecule, more data and additional analyses are needed to rule out other hypotheses.

Antideuterons could be produced during the annihilation or decay of neutralinos or sneutrinos

The ALICE collaboration presented a first measurement of the inelastic cross section of low-energy antideuterons, using p–Pb collisions at a centre-of-mass energy per nucleon–nucleon pair of 5.02 TeV. Low-energy antideuterons (composed of an antiproton and an antineutron) are predicted by some models to be a promising probe for indirect dark-matter searches. In particular, antideuterons could be produced during the annihilation or decay of neutralinos or sneutrinos, which are hypothetical dark-matter particles, while contributions from cosmic-ray interactions in the low-energy range below 1–2 GeV per nucleon are expected to be small. ALICE collaborators used a novel technique that utilises the detector material as an absorber to measure the production and annihilation rates of low-energy antideuterons. The results from this measurement can be used in models of antideuteron propagation through the interstellar medium when interpreting dark-matter searches, including intriguing results from the AMS experiment. Future analyses with higher-statistics data will improve the modelling as well as extend these studies to heavier antinuclei.

The above are just a few of the many excellent results that were presented at LHCP2020. The extraordinary performance of the LHC coupled with progress reported by the theory community, and the excellent data collected by the experiments, has inspired LHC physicists to continue with their rich harvest of physics results despite the current world crisis. Results presented at the conference showed that huge progress has been made on several fronts, and that Run 3 and the High-Luminosity LHC upgrade programme will enable further exploration of particle physics at the energy frontier.

Funky physics at KIT

The FUNK experimental area, where the black-painted floor can be seen with the PMT-camera pillar at the centre and the mirror on the left. A black-cotton curtain encloses the whole area during running. Credit: KIT.

A new experiment at Karlsruhe Institute of Technology (KIT) called FUNK – Finding U(1)s of a Novel Kind – has reported its first results in the search for ultralight dark matter. Using a large spherical mirror as an electromagnetic dark-matter antenna, the FUNK team has set an improved limit on the existence of hidden photons as candidates for dark matter with masses in the eV range.

Despite overwhelming astronomical evidence for the existence of dark matter, direct searches for dark-matter particles at colliders and in dedicated nuclear-recoil experiments have so far come up empty-handed. With these searches being mostly sensitive to heavy dark-matter particles, namely weakly interacting massive particles (WIMPs), the search for alternative light dark-matter candidates is gaining momentum. Hidden photons, a cold, ultralight dark-matter candidate, arise in extensions of the Standard Model that contain a new U(1) gauge symmetry and are expected to couple very weakly to charged particles via kinetic mixing with regular photons. Laboratory experiments that are sensitive to such hidden or dark photons include helioscopes such as the CAST experiment at CERN, and “light-shining-through-a-wall” experiments such as ALPS at DESY.

FUNK exploits a novel “dish-antenna” method first proposed in 2012, whereby a hidden photon crossing a metallic spherical mirror surface would cause faint electromagnetic waves to be emitted almost perpendicularly to the mirror surface, and be focused on the radius point. The experiment was conceived in 2013 at a workshop at DESY when it was realised that there was a perfectly suited mirror — a prototype for the Pierre Auger Observatory with a surface area of 14 m² – in the basement of KIT. Various photodetectors placed at the radius point allow FUNK to search for a signal in different wavelength ranges, corresponding to different hidden-photon masses. The dark-matter nature of a possible signal can then be verified by observing small daily and seasonal movements of the spot around the radius point as Earth moves through the dark-matter field. The broadband dish-antenna technique is able to scan hidden photons over a large parameter space.

The mass range of viable hidden-photon dark matter is huge

Joerg Jaeckel

Completed in 2018, the experiment took data during last year in several month-long runs using low-noise PMTs. In the mass range 2.5–7 eV, the data exclude a hidden-photon coupling stronger than 10⁻¹² in kinetic mixing. “This is competitive with limits derived from astrophysical results and partially exceeds those from other existing direct-detection experiments,” says FUNK principal investigator Ralph Engel of KIT. So far two other experiments of this type have reported search results for hidden photons in this energy range – the dish-antenna experiment at the University of Tokyo and the SHUKET experiment at Paris-Saclay – though FUNK’s factor-of-ten larger mirror surface brings greater experimental sensitivity, says the team. Other experiments, such as NA64 at CERN, which employs missing-energy techniques, are setting stringent bounds on the strength of dark-photon couplings for masses in the MeV range and above.

“The mass range of viable hidden-photon dark matter is huge,” says FUNK collaborator Joerg Jaeckel of Heidelberg University. “For this reason, techniques which can scan over a large parameter space are especially useful even if they cannot explore couplings as small as is possible with some other dedicated methods. A future exploitation of the setup in other wavelength ranges is possible, and FUNK therefore carries an enormous physics potential.”

100 TeV photons test Lorentz invariance

Over the past decades, photon emission from astronomical objects has been measured across 20 orders of magnitude in energy, from radio waves up to TeV gamma rays. This has not only led to many astronomical discoveries but, thanks to the extreme distances and energies involved, has also allowed researchers to test some of the fundamental tenets of physics. For example, the 2017 joint measurement of gravitational waves and gamma rays from a binary neutron-star merger made it possible to determine the speed of gravity to a precision of better than one part in 10¹⁶ relative to the speed of light. Now, the High-Altitude Water Cherenkov (HAWC) collaboration has pushed the energy of gamma-ray observations into new territory, placing constraints on Lorentz-invariance violation (LIV) that are up to two orders of magnitude tighter than before.

Models incorporating LIV allow for modifications to the standard energy–momentum relationship dictated by special relativity, predicting phenomenological effects such as photon decay and photon splitting. Even if the probability for a photon to decay through such effects is small, the large distances involved in astrophysical measurements in principle allow experiments to detect it. The most striking implication would be the existence of a cutoff in the energy spectrum above which photons would decay while travelling towards Earth. Simply detecting gamma-ray photons above the expected cutoff would therefore place strong constraints on LIV.
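A common phenomenological parametrisation of such effects (one convention among several, shown here for illustration) modifies the photon dispersion relation as

\[
E_{\gamma}^{2} = p_{\gamma}^{2} \left[ 1 \pm \left( \frac{p_{\gamma}}{E_{\mathrm{LIV}}^{(n)}} \right)^{\!n} \right], \qquad n = 1, 2,
\]

where the superluminal sign choice opens otherwise forbidden channels such as photon decay, γ → e⁺e⁻, above a threshold energy set by the scale E_LIV.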

HAWC

Increasing the energy limit of the photons with which we observe the universe is, however, challenging. Since the flux of a typical source, such as a neutron star, decreases rapidly (by approximately two orders of magnitude for each order of magnitude increase in energy), ever larger detectors are needed to probe higher energies. Photons with energies of hundreds of GeV can still be directly detected using satellite-based detectors equipped with tracking and calorimetry. However, these instruments, such as the US–European Fermi-LAT detector and the Chinese–European DAMPE detector, require a mass of several tonnes, making them expensive and complex to launch. To reach even higher energies, ground-based detectors, which detect gamma rays through the showers they induce in Earth’s atmosphere, are preferred. While they can be scaled up in size more easily than space-based detectors, the indirect detection and the large background from cosmic rays make such measurements difficult.

It is likely that LIV will be further constrained in the near future, as a range of new high-energy gamma-ray detectors are developed

Recently, significant improvements have been made in ground-based detector technology and data analysis. The Japanese–Chinese Tibet air-shower gamma-ray experiment ASγ, an air-shower array built at an altitude of 4 km in Yangbajing, added underground water-Cherenkov muon detectors to allow hadronic air showers to be differentiated from photon-induced ones via the difference in muon content. By additionally improving the data-analysis techniques to more accurately remove the isotropic all-sky background from the data, in 2019 the ASγ team managed to observe a source, in this case the Crab Nebula, at energies above 100 TeV for the first time. This ground-breaking measurement was soon followed by measurements of nine different sources above 56 TeV by the HAWC observatory, located at 4 km altitude in the mountains near Puebla, Mexico.

These new measurements of astrophysical sources, which are likely all powered by pulsars, could not only help answer the question of where the highest-energy (PeV and above) cosmic rays are produced, but also allow new constraints to be placed on LIV. The spectra of the four sources studied by the collaboration did not show any sign of a cutoff, allowing the HAWC team to exclude LIV energy scales below 2.2×10³¹ eV — an improvement of one to two orders of magnitude over previous limits.

It is likely that LIV will be further constrained in the near future, as a range of new high-energy gamma-ray detectors are developed. Perhaps the most powerful of these is the Large High Altitude Air Shower Observatory (LHAASO), located in the mountains of Sichuan province in China. Construction of the detector array is ongoing, while the first stage of the array commenced data taking in 2018. Once finished, LHAASO will be close to two orders of magnitude more sensitive than HAWC at 100 TeV and capable of pushing the measured photon energies into the PeV range. Additionally, the limit of direct-detection measurements will be pushed beyond that of Fermi-LAT and DAMPE by the Chinese–European High Energy cosmic Radiation Detector (HERD), a 1.8-tonne calorimeter surrounded by a tracker, scheduled for launch in 2025 and foreseen to be able to directly detect photons up to 100 TeV.

LHCb interrogates X(3872) line-shape

Figure 1

In 2003, the Belle collaboration reported the discovery of a mysterious new hadron, the X(3872), in the decay B+→X(3872)K+. Their analysis suggested an extremely small width, consistent with zero, and a mass remarkably close to the sum of the masses of the D0 and D*0 mesons. The particle’s existence was later confirmed by the CDF, D0, and BaBar experiments. LHCb first reported studies of the X(3872) in the data sample taken in 2010, and later unambiguously determined its quantum numbers to be 1++, leading the Particle Data Group to change the name of the particle to χc1(3872).

The nature of this state is still unclear. Until now, only an upper limit on the width of the χc1(3872) of 1.2 MeV has been available. No conventional hadron is expected to have such a narrow width in this part of the otherwise very well understood charmonium spectrum. Among the possible explanations are that it is a tetraquark, a molecular state, a hybrid state where the gluon field contributes to its quantum numbers, or a glueball without any valence quarks at all. A mixture of these explanations is also possible.

Two new measurements

As reported at the LHCP conference this week, the LHCb collaboration has now published two new measurements of the width of the χc1(3872), based on minimally overlapping data sets. The first uses Run 1 data corresponding to an integrated luminosity of 3 fb⁻¹, in which (15.5±0.4)×10³ χc1(3872) particles were selected inclusively from the decays of hadrons containing b quarks. The second analysis selected (4.23±0.07)×10³ fully reconstructed B+→χc1(3872)K+ decays from the full Run 1–2 data set, which corresponds to an integrated luminosity of 9 fb⁻¹. In both cases, the χc1(3872) particles were reconstructed through decays to the final state J/ψπ+π−. For the first time the measured Breit-Wigner width was found to be non-zero, with a value close to the previous upper limit from Belle (see figure).
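For context, the (non-relativistic) Breit–Wigner line-shape referred to here describes the signal intensity as a function of the J/ψπ+π− invariant mass m as

\[
\bigl| \mathcal{A}(m) \bigr|^{2} \propto \frac{1}{(m - m_{0})^{2} + \Gamma_{0}^{2}/4},
\]

with peak mass m0 and width Γ0; it is this Γ0 that the new analyses find to be non-zero.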

Combining the two analyses, the mass of the χc1(3872) was found to be 3871.64±0.06 MeV – just 70±120 keV below the D0D*0 threshold. The proximity of the χc1(3872) to this threshold calls into question the use of a simple fit to the well-known Breit-Wigner function to measure the width, as this approach neglects potential distortions of the line-shape. Conversely, a precise measurement of the line-shape could help elucidate the nature of the χc1(3872). This has led LHCb to explore a more sophisticated Flatté parametrisation and to report a measurement of the χc1(3872) line-shape with this model, including the pole positions of the complex amplitude. The results favour the interpretation of the state as a quasi-bound D0D*0 molecule, but other possibilities cannot yet be ruled out. Further studies are ongoing. Physicists from other collaborations are also keenly interested in the nature of the χc1(3872), and the very recent observation by CMS of the decay process Bs0→χc1(3872)φ offers another laboratory for studying its properties.

LEP-era universality discrepancy unravelled

Figure 1

The family of charged leptons is composed of the electron, muon (μ) and tau lepton (τ). According to the Standard Model (SM), these particles only differ in their mass: the muon is heavier than the electron and the tau is heavier than the muon. A remarkable feature of the SM is that each flavour is equally likely to interact with a W boson. This is known as lepton flavour universality.

In a new ATLAS measurement reported this week at the LHCP conference, a novel technique using events with top-quark pairs has been exploited to test the ratio of the probabilities for tau leptons and muons to be produced in W-boson decays, R(τ/μ). In the SM, R(τ/μ) is expected to be unity, but a longstanding tension with this prediction has existed since the LEP era in the 1990s, when, from a combination of the four experiments, R(τ/μ) was measured to be 1.070 ± 0.026, deviating from the SM expectation by 2.7σ. This strongly motivated new measurements with higher precision. If the LEP result were confirmed, it would correspond to an unambiguous discovery of physics beyond the SM.

Tag and probe

To conclusively prove either that the LEP discrepancy is real or that it was just a statistical fluctuation, a precision of at least 1–2% is required — something previously not thought possible at a hadron collider like the LHC, where inclusive W bosons, albeit produced abundantly, suffer from large backgrounds and kinematic biases due to the online selection in the trigger. The key to achieving this is to obtain a sample of muons and tau leptons from W boson decays that is as insensitive as possible to the details of the trigger and object reconstruction used to select them. ATLAS has achieved this by exploiting both the LHC’s large sample of over 100 million top-quark pairs produced in the latest run, and the fact that top quarks decay almost exclusively to a W boson and a b quark. In a tag-and-probe approach, one W boson is used to select the events and the other is used, independently of the first, to measure the fractions of decays to tau-leptons and muons.

The analysis focuses on tau-lepton decays to a muon, rather than hadronic tau decays, which are more complicated to reconstruct, thus reducing the systematic uncertainties associated with the object reconstruction. The precise muon reconstruction of the ATLAS detector is used to exploit the tau lepton’s lifetime and the lower momentum of its decay products, separating muons from tau-lepton decays from muons produced directly in W decays (so-called prompt muons). Specifically, the absolute distance of closest approach of muon tracks in the plane perpendicular to the beam line, |d0μ| (figure 1), and the transverse momentum of the muons, pTμ, are used to isolate these contributions. These variables, in particular |d0μ|, are calibrated using a pure sample of prompt muons from Z→μμ data.

The extraction of R(τ/μ) is performed using a fit to |d0μ| and pTμ, in which several systematic uncertainties cancel because they are correlated between the prompt-μ and τ→μ contributions. These include, for example, uncertainties related to jet reconstruction, flavour tagging and trigger efficiencies. As a result, the measurement achieves very high precision, surpassing that of the previous LEP measurement.
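To make the idea concrete, here is a minimal toy sketch (not the ATLAS analysis: the |d0| shapes, resolutions, fractions and the single-variable fit below are invented assumptions for illustration only) of a binned template fit that extracts the τ→μ fraction from the |d0μ| distribution:

# Toy binned template fit: estimate the fraction of muons from tau decays
# using the shape of the transverse impact parameter |d0| (in mm).
# All shapes, resolutions and fractions are invented for illustration.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)

# Toy "data": prompt muons are dominated by a ~10 micron track resolution,
# while muons from tau decays pick up an extra displacement from the tau
# lifetime (modelled here as an exponential tail).
n_events = 100_000
true_tau_fraction = 0.30
n_tau = int(true_tau_fraction * n_events)
prompt_d0 = np.abs(rng.normal(0.0, 0.010, n_events - n_tau))
tau_d0 = np.abs(rng.normal(0.0, 0.010, n_tau) + rng.exponential(0.035, n_tau))

data = np.concatenate([prompt_d0, tau_d0])
bins = np.linspace(0.0, 0.3, 61)
data_hist, _ = np.histogram(data, bins=bins)
bin_width = np.diff(bins)

# High-statistics templates (in the real measurement these come from
# simulation, calibrated with Z->mumu control data).
prompt_tmpl, _ = np.histogram(np.abs(rng.normal(0.0, 0.010, 1_000_000)),
                              bins=bins, density=True)
tau_tmpl, _ = np.histogram(np.abs(rng.normal(0.0, 0.010, 1_000_000)
                                  + rng.exponential(0.035, 1_000_000)),
                           bins=bins, density=True)

def nll(params):
    """Binned Poisson negative log-likelihood for a two-template model."""
    n_total, f_tau = params
    mu = n_total * bin_width * ((1.0 - f_tau) * prompt_tmpl + f_tau * tau_tmpl)
    mu = np.clip(mu, 1e-9, None)  # protect against log(0)
    return np.sum(mu - data_hist * np.log(mu))

fit = minimize(nll, x0=[data_hist.sum(), 0.2], method="Nelder-Mead")
print(f"fitted tau->mu fraction: {fit.x[1]:.3f} (true value {true_tau_fraction})")

The real measurement instead fits |d0μ| and pTμ simultaneously, with templates derived from simulation and calibrated on Z→μμ data, as described above.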

Figure 2

The measured value is R(τ/μ) = 0.992 ± 0.013 [ ± 0.007 (stat) ± 0.011 (syst) ], forming the most precise measurement of this ratio, with an uncertainty half the size of that from the combination of LEP results (figure 2). It is in agreement with the Standard Model expectation and suggests that the previous LEP discrepancy may be due to a fluctuation.
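As a quick consistency check, the statistical and systematic components combined in quadrature (assuming they are independent) reproduce the quoted total uncertainty:

\[
\sqrt{0.007^{2} + 0.011^{2}} \approx 0.013 .
\]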

Though surviving this latest test, the principle of lepton flavour universality will not quite be out of the woods until the anomalies in B-meson decays recorded by the LHCb experiment (CERN Courier May/June 2020 p10) have also been definitively probed.
