The bumpy ride to the bump

Welding a dipole-magnet interconnect

19 September 2008: the LHC was without beam because of a transformer problem. The hardware commissioning team were finishing off powering tests of the main dipole magnet circuit in sector 3–4 when, at 11:18, an electrical fault resulted in considerable physical damage, the release of helium, and debris in a long section of the machine. In the control room, the alarms came swamping in. The cryogenics team grappled to make sense of what their systems were telling them, and there was frantic effort to interpret the data from the LHC’s quench protection system. I called LHC project leader Lyn Evans: “looks like we’ve got a serious problem here”.

Up to this point, 2008 had been non-stop but things were looking good. First circulating beam had been established nine days earlier in a blaze of publicity. Beam commissioning had started in earnest, and the rate of progress was catching some of us by surprise.

It is hard to describe how much of a body blow the sector 3–4 incident was to the community. In the following days, as the extent of the damage became clearer, I remember talking to Glyn Kirby of the magnet team and being aghast when he observed that “it’s going to take at least a year to fix”. He was, of course, right.

What followed was a truly remarkable effort by everyone involved. A total of 53 cryomagnets (39 dipoles and 14 quadrupoles) covering most of the affected 700 m-long zone were removed and brought to the surface for inspection, cleaning and repair or reuse. Most of the removed magnets were replaced by spares. All magnets, whatever their origin, had to undergo full functional tests before being installed.

CERN Control Centre on 20 November 2009

Soot in the vacuum pipes, which had been found to extend beyond the zone of removed magnets, was cleared out using endoscopy and mechanical cleaning. The complete length of the beam pipes was inspected for contamination by flakes of multilayer insulation, which were removed by vacuum cleaning. About 100 plug-in modules installed in the magnet interconnects were replaced. 

Following an in-depth analysis of the root causes of the incident, and an understanding of the risks posed by the joints in the magnet interconnects, a new worst-case Maximum Credible Incident was adopted and a wide range of recommendations and mitigation measures were proposed and implemented. These included a major upgrade of the quench protection system, new helium pressure-release ports, and new longitudinal restraints for selected magnets. 

One major consequence of the 19 September incident was the decision to run at a lower-than-design energy until full consolidation of the joints had been performed – hence the adoption of an operational beam energy of 3.5 TeV for Run 1. Away from the immediate recovery, other accelerator teams took the opportunity to consolidate and improve controls, hardware systems, instrumentation, software and operational procedures. As CMS technical coordinator Austin Ball famously noted, come the 2009 restart, CMS, at least, was in an “unprecedented state of readiness”. 

Take two

Beam was circulated again on 20 November 2009. Progress thereafter was rapid. Collisions with stable-beam conditions were quickly established at 450 + 450 GeV, and a ramp to the maximum beam energy at the time (1.18 TeV, compared to the Tevatron’s 0.98 TeV) was successfully performed on 30 November. The first ramps were a lot of fun – there’s a lot going on behind the scenes, including compensation of significant field dynamics in the superconducting dipoles. Cue much relief when beam made it up the ramp for the first time. All beam-based systems were at least partially commissioned and LHC operations started a long process to master the control of a hugely complex machine. Following continued deployment of the upgraded quench protection system during the 2009 year-end technical stop, commissioning with beam started again in the new year. Progress was good, with first colliding beams at 3.5 + 3.5 TeV being established under the watchful eye of the media on 30 March 2010. With scheduled collisions delayed by two unsuccessful ramps, it was a gut-knotting experience in the control room. Nonetheless, we finally got there about three hours late. “Stable Beams” was declared, the odd beer was had, and we were off. 

Essentially 2010 was then devoted to commissioning and establishing confidence in operational procedures and the machine protection system, before starting to increase the number of bunches in the beam. In June the decision was taken to go for bunches with nominal population (~1.2 × 10¹¹ protons), which involved another extended commissioning period. Up to this point, in deference to machine-protection concerns, only around one fifth of the nominal bunch population had been used. To further increase the number of bunches, the move to a bunch separation of 150 ns was made and the crossing-angle bumps spanning the experiments’ insertion regions were deployed. After a carefully phased increase in total intensity, the proton run finished with beams of 368 bunches of around 1.2 × 10¹¹ protons per bunch, and a peak luminosity of 2.1 × 10³² cm⁻²s⁻¹.
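The end-of-2010 figure can be roughly cross-checked with the standard collider-luminosity formula L = f_rev n_b N² / (4π σ²) for round beams. The sketch below is illustrative only: the β* ≈ 3.5 m and normalised emittance ≈ 2.5 μm used here are typical late-2010 values assumed for the estimate (they are not stated in the text), and the crossing-angle reduction factor is neglected.

```python
import math

# Parameters from the text
f_rev = 11245.0          # LHC revolution frequency [Hz]
n_b = 368                # number of colliding bunches
N = 1.2e11               # protons per bunch

# Assumed, representative late-2010 values (not from the text)
beta_star = 3.5          # beta function at the interaction point [m]
eps_n = 2.5e-6           # normalised transverse emittance [m rad]
E, m_p = 3500.0, 0.938   # beam energy and proton mass [GeV]

gamma = E / m_p                         # relativistic gamma factor
eps_geo = eps_n / gamma                 # geometric emittance [m rad]
sigma = math.sqrt(eps_geo * beta_star)  # rms beam size at the IP [m]

# Round-beam luminosity, crossing-angle factor neglected
L = f_rev * n_b * N**2 / (4 * math.pi * sigma**2)  # [m^-2 s^-1]
L_cm = L * 1e-4                                    # [cm^-2 s^-1]
print(f"estimated peak luminosity ~ {L_cm:.1e} cm^-2 s^-1")
```

With these assumptions the estimate comes out close to the quoted 2.1 × 10³² cm⁻²s⁻¹, which is a useful sanity check on the quoted beam parameters.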

LHC operators on 30 November 2009

Looking back, 2010 was a profoundly important year for a chastened and cautious accelerator sector. The energy stored in the magnets had demonstrated its destructive power, and it was clear from the start that the beam was to be treated with the utmost respect; safe exploitation of the machine was necessarily an underlying principle for all that followed. The LHC became magnetically and optically well understood (judged by the standards at the time – impressively surpassed in later years), and was stunningly magnetically reproducible. The collimation system’s performance was revelatory, accomplishing its dual role of cleaning and protection impeccably throughout the full cycle. The injectors did a great job throughout, reliably providing high-intensity bunches with unexpectedly low transverse emittances.

2010 finished with a switch from protons to lead-ion operations for the first time. Diligent preparation and the experience gained with protons allowed rapid execution of the ion commissioning programme, and Stable Beams for physics was declared on 7 November.

Homing in 

The beam energy remained at 3.5 TeV in 2011, with the bunch spacing switched from 75 to 50 ns. A staged ramp in the number of bunches then took place up to a maximum of 1380 bunches, and performance was further increased by reducing the transverse size of the beams delivered by the injectors and by gently increasing the bunch population. The result was a peak luminosity of 2.4 × 10³³ cm⁻²s⁻¹ and some healthy delivery rates that topped 90 pb⁻¹ in 24 hours. The next step-up in peak luminosity followed a reduction in the β* parameter in ATLAS and CMS from 1.5 to 1 m (the transverse beam size at the interaction point is directly related to the value of β*). Along with further gentle increases in bunch population, this produced a peak luminosity of 3.8 × 10³³ cm⁻²s⁻¹ – well beyond expectations at the start of the year. Coupled with a concerted effort to improve availability, the machine went on to deliver a total of around 5.6 fb⁻¹ for the year to both ATLAS and CMS.
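The β* relation mentioned above can be made concrete: the rms beam size at the interaction point is σ* = √(εβ*), where ε is the geometric emittance, and for head-on round beams the luminosity scales as 1/β*. A minimal sketch of the 1.5 m → 1 m squeeze, with an illustrative emittance value assumed (the text does not give one):

```python
import math

eps_n = 2.0e-6          # normalised emittance [m rad] (illustrative assumption)
gamma = 3500.0 / 0.938  # relativistic gamma at 3.5 TeV
eps_geo = eps_n / gamma # geometric emittance shrinks with energy

for beta_star in (1.5, 1.0):   # beta* values quoted in the text [m]
    sigma = math.sqrt(eps_geo * beta_star)
    print(f"beta* = {beta_star} m -> sigma* = {sigma * 1e6:.0f} um")

# All else equal, L ~ 1/beta*, so the squeeze alone buys a factor 1.5;
# the remainder of the 2.4e33 -> 3.8e33 step came from the gentle
# increases in bunch population mentioned in the text.
gain = 1.5 / 1.0
```

The point of the sketch is the scaling, not the absolute numbers: halving β* at fixed emittance shrinks the beam spot by √2 in each transverse plane and raises the luminosity proportionally.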

Some of the first events recorded by ATLAS and CMS

Meanwhile, excitement was building in the experiments. A colloquium at the end of 2011 showed a strengthening significance of an excess at around 125 GeV. The possible discovery of the Higgs boson in 2012 was recognised, and corresponding LHC running scenarios were discussed in depth – first at the Evian workshop (where we heard the plea from CMS spokesperson Guido Tonelli to “gimme 20” [inverse femtobarns]) and then crystallised at the 2012 Chamonix workshop, where CERN Director-General Rolf Heuer stated that, as a top priority, the LHC machine must produce enough integrated luminosity to allow the ATLAS and CMS experiments an independent discovery of the Higgs before the start of long shutdown 1 (LS1). Soon after the workshop, Council president Michel Spiro sent a message to CERN’s member states: “After a brilliant year in 2011, 2012 should be historic, with either the discovery of the Standard Model Higgs boson or its exclusion.”

An important decision concerned the energy. A detailed risk evaluation concluded that the probability of a splice burn-out at 4 TeV per beam in 2012 was equal to, or less than, the probability that had been estimated in 2011 for 3.5 TeV per beam. The decision to run at 4 TeV helped in a number of ways: higher cross-sections for Higgs-boson production, reduced emittance and the possibility for a further reduction of β*.

Discovery year 

And so 2012 was to be a production year at an increased beam energy of 4 TeV. The choice was made to continue to exploit 50 ns bunch spacing, which offered the advantages of less electron cloud and higher bunch charge compared with 25 ns, and to run with 1380 bunches. Based on the experience of 2011, it was also decided to operate with tight collimator settings, enabling a more aggressive squeeze to β* = 0.6 m. The injectors continued to provide exceptional quality beam and routinely delivered 1.7 × 10¹¹ protons per bunch. The peak luminosity quickly rose to its maximum for the year, followed by determined and long-running attempts to improve peak performance. Beam instabilities, although never debilitating, were a recurring problem and there were phases when they cut into operational efficiency. Nonetheless, by the middle of the year another 6 fb⁻¹ had been delivered to both ATLAS and CMS. Combined with the 2011 dataset, this paved the way for the announcement of the Higgs-boson discovery.


2012 was a very long operational year and included the extension of the proton–proton run until December to allow the experiments to maximise their 4 TeV data before LS1. Integrated-luminosity rates were healthy at around 1 fb⁻¹ per week, and the total for the year came in at about 23 fb⁻¹ to both ATLAS and CMS. Run 1 finished with four weeks of proton–lead operations at the start of 2013.

It is impossible to do justice to the commitment and effort that went into establishing, and then maintaining, the complex operational performance of the LHC that underpinned the Higgs-boson discovery: RF, power converters, collimation, injection and beam-dump systems, vacuum, transverse feedback, machine protection, cryogenics, magnets, quench detection and protection, accelerator physics, beam instrumentation, beam-based feedbacks, controls, databases, software, survey, technical infrastructure, handling engineering, access, radiation protection plus material science, mechanical engineering, laboratory facilities … and the coordination of all that! 


Stepping into the spotlight

François Englert and Peter Higgs

With the boson confirmed, speculation inevitably grew about the 2012 Nobel Prize in Physics. The prize is traditionally announced on the Tuesday of the first full week in October, at about midday in Stockholm. As it approaches, a highly selective epidemic breaks out: Nobelitis, a state of nervous tension among scientists who crave Nobel recognition. Some of the larger egos will have previously had their craving satisfied, only perhaps to come down with another fear: will I ever be counted as one with Einstein? Others have only a temporary remission, before suffering a renewed outbreak the following year.

Three people at most can share a Nobel, and at least six had ideas like Higgs’s in the halcyon days of 1964 when this story began. Adding to the conundrum, the discovery of the boson involved teams of thousands of physicists from all around the world, drawn together in a huge cooperative venture at CERN, using a machine that is itself a triumph of engineering. 

The 2012 Nobel Prize in Physics was announced on Tuesday 9 October and went to Serge Haroche and David Wineland for taking the first steps towards a quantum computer. Two days later, I went to Edinburgh to give a colloquium and met Higgs for a coffee beforehand. I asked him how he felt now that the moment had passed, at least for this year. “I’m enjoying the peace and quiet. My phone hasn’t rung for two days,” he remarked. 

That the sensational discovery of 2012 was indeed of Higgs’s boson was, by the summer of 2013, beyond dispute. That Higgs was in line for a Nobel prize also seemed highly likely. Higgs himself, however, knew from experience that in the Stockholm stakes, nothing is guaranteed. 

Back in 1982, at dawn on 5 October in the Midwest and the eastern US, preparations were in hand for champagne celebrations in three departments at two universities. At Cornell, the physics department hoped they would be honouring Kenneth Wilson, while over in the chemistry department their prospect was Michael Fisher. In Chicago, the physicists’ hero was to be Leo Kadanoff. Two years earlier the trio had shared the Wolf Prize, the scientific analogue of the Golden Globes to the Nobel’s Oscars, for their work on critical phenomena connected with phase transitions, fuelling speculation that a Nobel would soon follow. At the appointed hour in Stockholm, the chair of the awards committee announced that the award was to Wilson alone. The hurt was especially keen in the case of Michael Fisher, whose experience and teaching about phase transitions, illuminating the subtle changes in states of matter such as melting ice and the emergence of magnetism, had inspired Wilson, five years his junior. The omission of Kadanoff and Fisher was a sensation at the time and has remained one of the intrigues of Nobel lore.

Fisher’s agony was no secret to Peter Higgs. As undergraduates they had been like brothers and remained close friends for more than 60 years. Indeed, Fisher’s influence was not far away in July 1964, for it was while examining how some ideas from statistical mechanics could be applied to particle physics that Higgs had the insight that would become the capstone to the theory of particles and forces half a century later. For this he was to share the 2004 Wolf Prize with Robert Brout (who sadly died in 2011) and François Englert – just as Fisher, Kadanoff and Wilson had shared this prize in 1980. Then as October approached in 2013 Higgs became a hot favourite at least to share the Nobel Prize in Physics, and the bookmakers would only take bets at extreme odds-on. 

Time to escape 

In 2013, 8 October was the day when the Nobel decision would be announced. Higgs’s experiences the year before had helped him to prepare: “I decided not to be at home when the announcement was made with the press at my door; I was going to be somewhere else.” His first plan was to disappear into the Scottish Highlands by train, but he decided it was too complicated, and that he could hide equally well in Edinburgh. “All I would have to do is go down to Leith early enough. I knew the announcement would be around noon so I would leave home soon after 11, giving myself a safe margin, and have an early lunch in Leith about noon.” 

ATLAS and CMS physicists in Building 40 on 8 October 2013

Richard Kenway, the Tait Professor of Mathematical Physics at Edinburgh and one of the university’s vice principals, confirmed the tale. “That was what we were all told, and he completely convinced us. Right up to the actual moment when we were sitting waiting for the [Nobel] announcement, we thought he had disappeared off somewhere into the Highlands.” Some newspapers got the fake news from the department, and one reporter even went up into the Highlands to look for him.

As scientists and journalists across the world were glued to the live broadcast, the Nobel committee was still struggling to reach the famously reclusive physicist. The announcement of his long-awaited crown was delayed by about half an hour until they decided they could wait no longer. Meanwhile, Peter Higgs sat at his favourite table in The Vintage, a seafood bar in Henderson Street, Leith, drinking a pint of real ale and considering the menu. As the committee announced that it had given the prize to François Englert and Peter Higgs “for the theoretical discovery of a mechanism that contributes to our understanding of the origin of mass of subatomic particles, and which recently was confirmed through the discovery of the predicted fundamental particle, by the ATLAS and CMS experiments at CERN’s Large Hadron Collider”, phones started going off in the Edinburgh physics department. 

Higgs finished his lunch. It seemed a little early to head home, so he decided to look in at an art exhibition. At about three o’clock he was walking along Heriot Row in Edinburgh, heading for his flat nearby, when a car pulled up near the Queen Street Gardens. “A lady in her 60s, the widow of a high-court judge, got out and came across the road in a very excited state to say, ‘My daughter phoned from London to tell me about the award’, and I said, ‘What award?’ I was joking of course, but that’s when she confirmed that I had won the prize. I continued home and managed to get in my front door with no more damage than one photographer lying in wait.” It was only later that afternoon that he finally learned from the radio news that the award was to himself and Englert. 

Suited and booted 

On arrival in Stockholm in December 2013, after a stressful two-day transit in London, Higgs learned that one of the first appointments was to visit the official tailor. The costume was to be formal morning dress in the mid-19th-century style of Alfred Nobel’s time, including elegant shoes adorned with buckles. As Higgs recalled, “Getting into the shirt alone takes considerable skill. It was almost a problem in topology.” The demonstration at the tailor’s was hopeless. Higgs was tense and couldn’t remember the instructions. On the day of the ceremony, fortunately, “I managed somehow.” Then there were the shoes. The first pair were too small, but when he tried bigger ones, they wouldn’t fit comfortably either. He explained, “The problem is that the 19th-century dress shoes do not fit the shape of one’s foot; they were rather pointy.” On the day of the ceremony both physics laureates had a crisis with their shoes. “Englert called my room: ‘I can’t wear these shoes. Can we agree to wear our own?’ So we did. We were due to be the first on the stage and it must have been obvious to everyone in the front row that we were not wearing the formal shoes.” 


On the afternoon of 10 December, nearly 2000 guests filled the Stockholm Concert Hall to see 12 laureates receive their awards from King Carl XVI Gustaf of Sweden. They had been guided through the choreography of the occasion earlier, but on the day itself, performing before the throng in the hall, there would be first-night nerves for this once-in-a-lifetime theatre. Winners of the physics prize would be called to receive their awards first, while the others watched and could see what to expect when they were named. The scenery, props and supporting cast were already in place. These included former winners dressed in tail suits and proudly wearing the gold button stud that signifies their membership of this unique club. Among them were Carlo Rubbia, discoverer of the W and Z particles, who instigated the experimental quest for the boson and won the prize in 1984; Gerard ’t Hooft, who built on Higgs’s work to complete the theoretical description of the weak nuclear force and won in 1999; and 2004 winner Frank Wilczek, who had built on his own prize-winning work to identify the two main pathways by which the Higgs boson had been discovered.

Peter Higgs in July 2012

After a 10-minute oration by the chair of the Nobel Foundation and a musical interlude, Lars Brink, chairman of the Nobel Committee for Physics, managed to achieve one of the most daunting challenges in science pedagogy, successfully addressing both the general public in the hall and the assembled academics, including laureates from other areas of science. The significance of what we were celebrating was beyond doubt: “With discovery of the Higgs boson in 2012, the Standard Model of physics was complete. It has been proved that nature follows precisely that law that Brout, Englert and Higgs created. This is a fantastic triumph for science,” Brink announced. He also introduced a third name, that of Englert’s collaborator, Robert Brout. In so doing, he made an explicit acknowledgement that Brout in spirit completed a trinity of winners. 

Brink continued with his summary history of how their work and that of others established the Standard Model of particle physics. Seventeen months earlier the experiments at the LHC had confirmed that the boson is real. What had been suspected for decades was now confirmed forever. The final piece in the Standard Model of particle physics had been found. The edifice was robust. Why this particular edifice is the one that forms our material universe is a question for the future. Brink now made the formal invitation for first Englert and then Higgs to step forward to receive their share of the award.

Higgs, resplendent in his formal suit, and comfortable in his own shoes, rose from his seat and prepared to walk to centre-stage. Forty-eight years since he set out on what would be akin to an ascent of Everest, Higgs had effectively conquered the Hillary step – the final challenge before reaching the peak – on 4 July 2012 when the existence of his boson was confirmed. Now, all that remained while he took nine steps to reach the summit was to remember the choreography: stop at the Nobel Foundation insignia on the carpet; shake the king’s hand with your right hand while accepting the Nobel prize and diploma with the other. Then bow three times, first to the king, then to the bust of Alfred Nobel at the rear of the stage, and finally to the audience in the hall.

Higgs successfully completed the choreography and accepted his award. As a fanfare of trumpets sounded, the audience burst into applause. Higgs returned to his seat. The chairman of the chemistry committee took the lectern to introduce the winners of the chemistry prize. To his relief, Higgs was no longer in the spotlight.

All in a name 

The saga of Higgs’s boson had begun with a classic image – a lone genius unlocking the secrets of nature through the power of human thought. The fundamental nature of Higgs’s breakthrough had been immediately clear to him. However, no one, least of all Higgs, could have anticipated that it would take nearly half a century and several false starts to get from his idea to a machine capable of finding the particle. Nor did anyone envision that this single “good idea” would turn a shy and private man into a reluctant celebrity, accosted by strangers in the supermarket. Some even suggested that the reason why the public became so enamoured with Higgs was the solid ordinariness of his name, one syllable long, unpretentious, a symbol of worthy Anglo-Saxon labour. 

Elusive: How Peter Higgs Solved the Mystery of Mass

In 2021, nine years after the discovery, we were reminiscing about the occasion when, to my surprise, Higgs suddenly remarked that it had “ruined my life”. To know nature through mathematics, to see your theory confirmed, to win the plaudits of your peers and join the exclusive club of Nobel laureates: how could all this equate with ruin? To be sure I had not misunderstood, I asked again the next time we spoke. He explained: “My relatively peaceful existence was ending. I don’t enjoy this sort of publicity. My style is to work in isolation, and occasionally have a bright idea.”   

  • This is an edited extract from Elusive: How Peter Higgs Solved the Mystery of Mass, by Frank Close, published on 14 June (Basic Books, US) and 7 July (Allen Lane, UK)

Electroweak baryogenesis

Simulation of Higgs-bubble nucleation

Precision measurements of the Higgs boson open the possibility to explore the moment in cosmological history when electroweak symmetry broke and elementary particles acquired mass. Ten years after the Higgs-boson discovery, it remains a possibility that the electroweak phase transition happened as a rather violent process, with a large departure from thermal equilibrium, via Higgs-bubble nucleations and collisions. This is a fascinating scenario for three reasons: it provides a framework for explaining the matter–antimatter asymmetry of the universe; it predicts the existence of at least one new weak-scale scalar field and thus is testable at colliders; and it would leave a unique signature of gravitational waves detectable by the future space-based interferometer LISA.

One major failure of the Standard Model (SM) is its inability to explain the baryon-to-photon ratio in the universe: η ≈ 6 × 10⁻¹⁰. Measurements of this ratio from two independent approaches – anisotropies in the cosmic microwave background and the abundances of light primordial elements – are in beautiful agreement. In a symmetric universe, however, the prediction for η is a billion times smaller; big-bang nucleosynthesis could not have occurred and structures could not have formed. This results from strong annihilations between nucleons and antinucleons, which deplete their number densities very efficiently. Only in a universe with a primordial asymmetry between nucleons and antinucleons can these annihilations be prevented. There are many different models to explain such “baryogenesis”. Interestingly, however, the Higgs boson plays a key role in essentially all of them.

Accidental symmetry

It is worth recalling how baryon number B gets violated by purely SM physics. B is an “accidental” global symmetry in the SM. There are no B-violating couplings in the SM Lagrangian. But the chiral nature of electroweak interactions, combined with the non-trivial topology of the SU(2) gauge theory, results in non-perturbative, B-violating processes. Technically, these are induced by extended gauge-field configurations called sphalerons, whose energy is proportional to the value of the Brout–Englert–Higgs (BEH) field. The possibility of producing these configurations is totally suppressed at zero temperature, such that B is an extremely good symmetry today. However, at high temperature, and in particular at 100 GeV or so, when the electroweak symmetry is unbroken, baryon number is violated copiously, as there is no energy cost. Since both baryons and antibaryons are created by sphalerons, charge–parity (CP) violation is needed. Indeed, as enunciated by Sakharov in 1967, a theory of baryogenesis requires three main ingredients: B violation, CP violation and a departure from equilibrium, otherwise the baryon number will relax to zero.

The conclusion is that baryogenesis must take place either from a mechanism occurring before the electroweak phase transition (necessitating new sources of B violation beyond the SM) or from a mechanism in which B violation relies exclusively on SM sphalerons and which occurs precisely at the electroweak phase transition (provided that it is sufficiently out-of-equilibrium and CP-violating). The most emblematic example in the first category is leptogenesis, where a lepton asymmetry is produced from the decay of heavy right-handed neutrinos and “reprocessed” into a baryon asymmetry by sphalerons. This is a popular mechanism motivated by the mystery of the origin of neutrino masses, but is difficult to test experimentally. The second category, electroweak baryogenesis, involves electroweak-scale physics only and is therefore testable at the LHC.

Electroweak baryogenesis requires a first-order electroweak phase transition to provide a large departure from thermal equilibrium, otherwise the baryon asymmetry is washed out. A prime example of this type of phase transition is boiling water, where bubbles of gas expand into the liquid phase. During a first-order electroweak phase transition, symmetric and broken phases coexist until bubbles percolate and the whole universe is converted into the broken phase (see “Bubble nucleation” image). Inside the bubble, the BEH field has a non-zero vacuum expectation value; outside the bubble, the electroweak symmetry is unbroken. As the wall is passing, chiral fermions in the plasma scatter off the Higgs at the phase interface. If some of these interactions are CP-violating, a chiral asymmetry will develop inside and in front of the bubble wall. The resulting excess of left-handed fermions in front of the bubble wall can be converted into a net baryon number by the sphalerons, which are unsuppressed in the symmetric phase in front of the bubble. Once inside the bubble, this baryon number is preserved as sphalerons are frozen there. In this picture, the baryon asymmetry is determined by solving a diffusion system of coupled differential equations.

New scalar required

The nature of the electroweak phase transition in the SM is well known: for a 125 GeV Higgs boson, it is a smooth crossover with no departure from thermal equilibrium. This prevents the possibility of electroweak baryogenesis. It is, however, easy to modify this prediction to produce a first-order transition by adding an electroweak-scale singlet scalar field that couples to the Higgs boson, as predicted in many SM extensions. Notably, this is a general feature of composite-Higgs models, where the Higgs boson emerges as a “pseudo Nambu–Goldstone” boson of a new strongly-interacting sector. 

Stochastic gravitational-wave background

An important consequence of such models is that the BEH field is generated only at the TeV scale; there is no field at temperatures above that. In the minimal composite Higgs model, the dynamics of the electroweak phase transition can be entirely controlled by an additional scalar Higgs-like field, the dilaton, which has experimental signatures very similar to the SM Higgs boson. In addition, we expect modifications of the Higgs boson’s couplings (to gauge bosons and to itself) induced by its mixing with this new scalar. LHC Run 3 thus has excellent prospects to fully test the possibility of a first-order electroweak phase transition in the minimal composite Higgs model.

The properties of the additional particle required to modify the electroweak phase transition also suggest new sources of CP violation, which is welcome as CP-violating SM processes are not sufficient to explain the baryon asymmetry. In particular, this would generate non-zero electric dipole moments (EDMs). The most recent bounds on the electron EDM from the ACME experiment in the US placed stringent constraints on a large number of electroweak baryogenesis models, in particular two-Higgs-doublet models. This is forcing theorists to consider new paths such as dynamical Yukawa couplings in composite Higgs models, a higher temperature for the electroweak phase transition, or the use of dark particles as the new source of CP violation. Here, there is a tension. To evade the stringent EDM bounds, the new scalar has to be heavy. But if it is too heavy, it reheats the universe too much at the end of the electroweak phase transition and washes out the just-produced baryon asymmetry. During the next decade, precise measurements of the Higgs boson at the LHC will enable a definitive test of the electroweak baryogenesis paradigm. 

Gravitational waves 

There is a further striking consequence of a first-order electroweak phase transition: fluid velocities in the vicinity of colliding bubbles generate gravitational waves (GWs). Today, these would appear as a stochastic background that is homogeneous, isotropic, Gaussian and unpolarised – the superposition of GWs generated by an enormous number of causally-independent sources, arriving at random times and from random directions. It would appear as noise in GW detectors with a frequency (in the mHz region) corresponding to the typical inverse bubble size, redshifted to today (see “Primordial peak” figure). There has been a burst of activity in the past few years to evaluate the chances of detecting such a peaked spectrum at the future space interferometer LISA, opening the fascinating possibility of learning about Higgs physics from GWs. 
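The mHz estimate quoted above can be sketched with the standard redshift formula used in the phase-transition gravitational-wave literature, f₀ ≈ 1.65 × 10⁻⁵ Hz × (β/H*) × (T*/100 GeV) × (g*/100)^(1/6), where β/H* is the inverse bubble size in Hubble units at the transition. The parameter values below are illustrative assumptions, not numbers from the text:

```python
# Redshifted peak frequency of gravitational waves from a first-order
# electroweak phase transition (standard order-of-magnitude estimate;
# all parameter values here are illustrative assumptions)
T_star = 100.0        # transition temperature [GeV]
g_star = 100.0        # relativistic degrees of freedom at T_star
beta_over_H = 100.0   # inverse bubble size in units of the Hubble rate

f0 = 1.65e-5 * beta_over_H * (T_star / 100.0) * (g_star / 100.0) ** (1 / 6)
print(f"peak frequency today ~ {f0 * 1e3:.2f} mHz")
```

For electroweak-scale temperatures and bubbles roughly a hundredth of the Hubble radius, the peak frequency lands in the mHz band, which is precisely where LISA will be most sensitive.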

The results from the LHC so far have pushed theorists to question traditional assumptions about where new physics beyond the SM could lie. Electroweak baryogenesis relies on rather conservative and minimal assumptions, but more radical approaches are now being considered, such as the intriguing possibility of a cosmological interplay between the Higgs boson and a very light and very weakly-coupled axion-like particle. Through complementarity of studies in theory, collider experiments, EDMs, GWs and cosmology, probing the electroweak phase transition will keep us busy for the next two decades. There are exciting times ahead.

Synergy at the Higgs frontier

Sally Dawson

What impact did the discovery of the Higgs boson have on your work? 

It was huge because before then it was possible that maybe there was no Higgs. You could have some kind of dynamical symmetry breaking, or maybe a heavy Higgs, at 400 GeV say, which would be extremely interesting but completely different. So once you knew that the Higgs was at the same mass scale as the W and the Z, our thinking changed because that comes out of only a certain kind of model. And of course once you had it, everyone, including myself, was motivated to calculate everything we could. 

I am working on how you tease out new physics from the Higgs boson. It’s the idea that even if we don’t see new particles at the LHC, precision measurements of the Higgs couplings are going to tell us something about what is happening at very high energy scales. I’m using what’s called an effective field theory approach, which is the standard these days for trying to find out what we can learn from combining Higgs measurements with other types of measurements, such as gauge-boson pair production and top-quark physics. 

Aside from the early formal work, what was the role of Standard Model calculations in the discovery of the Higgs boson?

You had to know what you were looking for, because there’s so many events at the LHC. Otherwise, it would be like looking for a needle in a haystack. The Higgs was discovered, for example, by its decay to two photons and there are millions of two-photon events at the LHC that have nothing to do with the Higgs. Theory told you how to look for this particle, and I think it was really important that a trail was set out to follow. This involves calculating how often you make a Higgs boson and what the background might look like. It wasn’t until the late 1980s that people began taking this seriously. It was really the Superconducting Super Collider that started us thinking about how to observe a Higgs at a hadron collider. And then there were the LEP and Tevatron programmes that actively searched for the Higgs boson. 

To what order in perturbation theory were those initial calculations performed?

For the initial searches you didn’t need the complicated calculations because you weren’t looking for precision measurements such as those required at the Z-pole, for example. You really just needed the basic rate and background information. We weren’t inspired to do higher order calculations until later in the game. When I was a postdoc at Berkeley in 1986, that’s when I really started to calculate things about the Higgs. But there was a long gap between the time when the Brout–Englert–Higgs mechanism was proposed and when people really started doing some hard calculations. There’s the famous paper in 1976 by Ellis, Gaillard and Nanopoulos that calculated how the Higgs might be observed, but in essence it said: why bother looking for this thing, we don’t know where it is! So people were thinking we could see the Higgs in kaon decays, if it was very light, and in other ways, and were looking at the problem in a global kind of way. 

Was this what drove your involvement with The Higgs Hunter’s Guide in 1990?

We were further along in terms of calculating things precisely by then, and I suppose there was a bit of a generation gap. It was a wonderful collaboration to produce the guide. We still went through the idea of how you would find the Higgs at different energy scales because we still had no idea where it was. The calculations went into high gear around that time, which was well before the Higgs was discovered. Partly it was the motivation that we were pretty sure we would see it at the LHC. But partly it was developments in theory which meant we could calculate things that we never would have imagined would be possible 30 years earlier. The capability of theorists to calculate has grown exponentially. 

What have these improvements been?

It’s what they call the next-to-next-to-leading order (NNLO) revolution – a new frontier in perturbative QCD where diagrams with two extra emissions of real partons, or two extra loops of virtual ones, are accounted for. These were new mathematical techniques for evaluating the integrals that come into the quantum field theory, so not just turning the crank computationally but really an intellectual advance in understanding the structure of these calculations. It started with Bern, Dixon and Kosower, who understood the needed amplitudes in a formal way. This enabled all sorts of calculations, and now we have N3LO calculations for certain Higgs-boson production modes. 

What is driving greater precision on Higgs calculations today?

Actually it’s really exciting because at the high-luminosity LHC (HL-LHC), experimentalists will be limited in their understanding of the Higgs boson by theory – the theory and experimental uncertainties will be roughly the same. This is truly impressive. You might think that these higher order corrections, which have quite small errors, are enough but they need to be even smaller to match the expected experimental precision. As theorists we have to keep going and do even better, which from my point of view is wonderful. It’s the synergy between experiment and theory that is the real story. We’re co-dependent. Even now, theory is not so different from ATLAS and CMS in terms of precision. Theory errors are hard things to pin down because you never really know what they are. Unlike an absolute statistical uncertainty, they’re always an estimate. 

How do the calculations look for measurements beyond the LHC? 

It’s a very different situation at e+e− colliders compared to hadron colliders. The LHC runs with protons containing gluons, so that’s why you need the higher order corrections. At a future e+e− collider, you need higher-order corrections but they are much more straightforward because you don’t have parton distribution functions to worry about. We know how to do the calculations needed for an e+e− Future Circular Collider, for example, but there is not a huge community of people working on them. That’s because they are really hard: you can’t just sit down and do them as a hobby, they really need a lot of skills. 

You are currently leading the Higgs properties working group of the current Snowmass planning exercise. What has been the gist of discussions? 

This is really exciting because our job has essentially been to put together the pieces of the puzzle after the European strategy update in 2020. That process did a very careful job of looking at the future Higgs programme, but there have been developments in our understanding since then. For example, the muon collider might be able to measure the Higgs couplings to muons very precisely, and there has been some good work on how to measure the couplings to strange quarks, which is very hard to do. 

The Higgs Hunter’s Guide

I would like to see an e+e− collider built somewhere, anywhere. In point of fact, when you look at the proposals they’re roughly the same in terms of Higgs physics. This was clear from the European strategy report and will be clear from the upcoming Snowmass report. Personally, I don’t much care whether there is a precision of 1% or 1.5% on some coupling. I care that you can get down to that order of magnitude, and that e+e− machines will significantly improve on the precision of HL-LHC measurements. The electroweak programme of large circular e+e− colliders is extremely interesting. At the Z-pole you get some very precise measurements of Standard Model quantities that feed into the whole theory because everything is connected. And at the WW threshold you get very precise measurements in the effective field theory of things that connect the Higgs and WW pairs. As a theorist, it doesn’t make sense to think of the Higgs in a vacuum. The Higgs is part of this whole electroweak programme. 

What are the prospects for finding new physics via the Higgs?

The fact that we haven’t seen anything unexpected yet is probably because we haven’t probed enough. I’m absolutely convinced we are going to see something, I just don’t know what (or where) it is. So I can’t believe in the alternative “nightmare” scenario of a Standard-Model Higgs and nothing else because there are just so many things we don’t know. You can make pretty strong arguments that we haven’t yet reached the precision where we would expect to see something new in precision measurements. It’s a case of hard work.  

What’s next in the meantime?

The next big thing is measuring two Higgs bosons at a time. That’s what theorists are super excited about because we haven’t yet seen the production of two Higgses and that’s a fundamental prediction of our theory. If we don’t see it, and it’s extremely difficult to do so experimentally, it tells us something about the underlying model. It’s a matter of getting the statistics. If we actually saw it, then we would do more calculations. For the trilinear Higgs coupling we now have a complete calculation at next-to-leading order, which is a real tour de force. The calculations are sufficient for a discovery, and because it’s so rare it’s unlikely we will be doing precision measurements, so it is probably okay for the foreseeable future. For the quartic coupling there are some studies that suggest you might see it at a 100 TeV hadron collider.

With all the Standard Model particles in the bag, does theory take more of a back seat from here? 

The hope is that we will see something that doesn’t fit our theory, which is of course what we’re really looking for. We are not making these measurements at ever higher precisions for the sake of it. We care about measuring something we don’t expect, as an indicator of new physics. The Higgs is the only tool we have at the moment. It’s the only way we know how to go.

You have to be able to explain ‘why’

Sean Carroll

On 4 July 2012, Sean Carroll was at CERN to witness the momentous announcements by ATLAS and CMS – but not in his usual capacity as a physicist. He was there as an accredited member of the media, sharing an overflow room with journalists to get first-hand footage for the final chapter of his book. The Particle at the End of the Universe ended up being the first big title on the discovery and went on to win the 2013 Royal Society Science Books Prize. “It got reviewed everywhere, so I am really grateful to the Higgs boson and CERN!”

Carroll’s publisher sensed an opportunity for a timely, expert-authored title in 2011, as excitement in ATLAS and CMS grew. He initially said “No” – it wasn’t his research area, and he preferred to present a particular point of view, as he did in his first popular work From Eternity to Here: The Quest for the Ultimate Theory of Time. “With the Higgs boson, there is no disagreement,” he says. “Everyone knows what the boson is, what it does and why it is important.” After some negotiation, he received an offer he couldn’t refuse. The book also delved into the LHC, the experiments and how it all works, with a dash of quantum field theory and particle physics more generally. “We were hoping the book would come out by the time they announced the discovery, but on the other hand at least I got to include the discovery in the book, and was there to see it.”

Show me the money

Books are not very lucrative, he says. “Back in the 1980s and 1990s, when the success of Hawking’s A Brief History of Time awoke the interest of publishers, if you had a good idea for a physics book you could make a million dollars. But it is very hard to earn enough to make a living. It takes roughly a year, or more depending on how much you have to learn, and depends on luck, the book and the person writing it.” His next project is a series of three books aimed at explaining physics to the general reader. The first, The Biggest Ideas in the Universe: Space, Time and Motion, due out in September, covers Newtonian mechanics and relativity; the second covers quantum mechanics and quantum field theory; and the third, complexity, emergence and large-scale phenomena. 

Meanwhile, Carroll’s podcast Mindscape, in which he invites experts from different fields to discuss a range of topics, has produced 200 episodes since it launched in 2018 and attracts around 100,000 listeners weekly. “I thought that it was a very fascinating idea, basically your personal radio show, but I quickly learned that I didn’t have that many things to say all by myself,” he explains. “Then I realised it would give me an excuse to talk to a lot of interesting people and stretch my brain a lot, and that worked out really well.” 

Reaching out

As someone who fell in love with science at a young age and enjoyed speaking and writing, Carroll has clearly found his ideal career. But stepping outside the confines of research is not without its downsides. “Overall, I think it has been negative actually, as it’s hard for some scientists to think that somebody is both writing books and giving talks, and also doing research at the same time. There is a prejudice that if you are a really good researcher then that’s all you do, and anything else is a waste of time. But whatever it does to my career, it has been good in many ways, and I think for the field, because I have reached people who wouldn’t know about physics otherwise.”

We need to take seriously the responsibility to tell people what it is that we have learned about the universe, and why it’s exciting to explore further

Moreover, he says, scientists are obligated to communicate the results of their work. “When it comes to asking the public for lots of money you have to be able to explain why it’s needed, and if they understand some of the physics and they have been excited by other discoveries they are much more likely to appreciate that,” he says, citing the case of the Superconducting Super Collider. “When we were trying to build the SSC, physicists were trying their best to explain why we needed it and it didn’t work. Big editorials in the New York Times clearly revealed that people did not understand the reasons why this was interesting, and furthermore thought that the kind of physics we do does not have any immediate or technological benefit. But they are all also curious like we are. And while we don’t all have to become pop-science writers or podcasters (just like I am not going to turn up on TikTok or do a demo in the street), as a field we really need to take seriously the responsibility to tell people what it is that we have learned about the universe, and why it’s exciting to explore further.”

Accelerating aerosol production

A simulation of aerosol-particle formation

The CLOUD collaboration at CERN has uncovered a new mechanism accelerating the formation of aerosol particles in the upper troposphere, with potential implications for air-pollution regulations. The results, published in Nature on 18 May, show that an unexpected synergy between nitric acid, sulphuric acid and ammonia leads to the formation of aerosols at significantly faster rates than those from any two of the three components. The mechanism may represent a major source of cloud and ice seed particles in certain regions of the globe, says the team.

Aerosol particles are known to generally cool the climate by reflecting sunlight back into space and by seeding cloud droplets. But the vapours driving their formation are not well understood. The CLOUD (Cosmics Leaving Outdoor Droplets) facility at CERN’s East Area replicates the atmosphere in an ultraclean chamber to study, under precisely-controlled atmospheric conditions, the formation of aerosol particles from trace vapours and how they grow to become the seeds for clouds.

Three is key

Building on earlier findings that ammonia and nitric acid can accelerate the growth rates of newly formed particles, the CLOUD team introduced mixtures of sulphuric acid, nitric acid and ammonia vapours to the chamber and observed the rates at which particles formed. They found that the three vapours together form new particles 10–1000 times faster than a sulphuric acid–ammonia mixture, which previous CLOUD measurements suggested was the dominant source of upper tropospheric particles. Once the three-component particles form, they grow rapidly from the condensation of nitric acid and ammonia alone to sizes where they seed clouds. 

Moreover, the team found these particles to be highly efficient at seeding ice crystals, comparable to desert dust particles, which are thought to be the most widespread and effective ice seeds in the atmosphere. When a supercooled cloud droplet freezes, the resulting ice particle will grow at the expense of any unfrozen droplets nearby, making ice a major factor in the microphysical properties of clouds and precipitation. Around three-quarters of global precipitation is estimated to originate from ice particles.

Feeding their measurements into global aerosol models that include vertical transport of ammonia by deep convective clouds, the CLOUD researchers found that although the particles form locally in ammonia-rich regions of the upper troposphere, such as over the Asian monsoon regions, they travel from Asia to North America in just three days via the subtropical jet stream, potentially influencing Earth’s climate on an intercontinental scale (see “Enhancement” figure). The importance of the new synergistic mechanism depends on the availability of ammonia in the upper troposphere, which originates mainly from livestock and fertiliser emissions. Atmospheric concentrations of all three compounds are much higher today than in the pre-industrial era.

“Our results will improve the reliability of global climate models in accounting for aerosol formation in the upper troposphere and in predicting how the climate will change in the future,” says CLOUD spokesperson Jasper Kirkby. “Once again, CLOUD is finding that anthropogenic ammonia has a major influence on atmospheric aerosol particles, and our studies are informing policies for future air-pollution regulations.”

Our results will improve the reliability of global climate models

Working at the intersection between atmospheric science and particle physics, CLOUD has published several important results since it started operations in 2009. These include new mechanisms responsible for driving winter smog episodes in cities and for potentially accelerating the loss of Arctic sea ice, in addition to studies of the impact of cosmic rays on clouds and climate (CERN Courier July/August 2020 p48). 

“When CLOUD started operation, the prevailing understanding was that sulphuric acid vapour alone could account for almost all observations of new-particle formation in the atmosphere,” says Kirkby. “Our first experiments showed that it was around one million times too slow, and CLOUD went on to discover that additional vapours – especially biogenic vapours from trees – form particles together with stabilisers like ammonia, amines or ions from cosmic rays. CLOUD has now established a mechanistic understanding of aerosol particle formation for global climate models – but our work isn’t finished yet.”

Top quark weighs in with unparalleled precision

A top-quark pair at the LHC

The CMS collaboration has substantially improved on its measurement of the top-quark mass. The latest result, 171.77 ± 0.38 GeV, presented at CERN on 5 April, represents a precision of about 0.22% – compared to the 0.36% obtained in 2018 with the same data. The gain comes from new analysis methods and improved procedures that treat the uncertainties in the measurement consistently and simultaneously.

As the top quark is the heaviest elementary particle, precise knowledge of its mass is of paramount importance to test the internal consistency of the Standard Model. Combined with accurate knowledge of the masses of the W and Higgs bosons, the top-quark mass is no longer a free parameter but a clear prediction of the Standard Model. Since the top-quark mass dominates higher-order corrections to the Higgs-boson mass, a precise measurement of the top mass also places strong constraints on the stability of the electroweak vacuum (see The Higgs and the fate of the universe). 

Since its discovery at Fermilab in 1995, the mass of the top quark has been measured with increasing precision using the invariant mass of different combinations of its decay products. Measurements by the Tevatron experiments resulted in a combined value of 174.30 ± 0.65 GeV, while the ATLAS and CMS collaborations measured 172.69 ± 0.48 GeV and 172.44 ± 0.48 GeV, respectively, from the combination of their most precise results from LHC Run 1 recorded at a centre-of-mass energy of 8 TeV. The latter measurement achieved a relative precision of about 0.28%. In 2019, the CMS collaboration also experimentally investigated the running of the top quark mass – a prediction of QCD that causes the mass to vary as a function of energy – for the first time at the LHC. 
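The “running” of the top-quark mass mentioned above can be illustrated with a leading-order sketch. The expressions below are the textbook one-loop MS-bar formulas with illustrative inputs; they are not the machinery used in the CMS analysis:

```python
import math

# Toy one-loop QCD running of a quark mass in the MS-bar scheme:
# m(mu) = m(mu0) * [alpha_s(mu) / alpha_s(mu0)]**(gamma0 / (2*beta0)),
# with gamma0 = 8 (= 6*CF, CF = 4/3) and beta0 = 11 - 2*nf/3.
# nf = 6 is used throughout for simplicity, and the 163 GeV starting
# value is only an illustrative MS-bar-like top mass.

def alpha_s(mu_gev, nf=6, alpha_mz=0.118, mz=91.19):
    """One-loop strong coupling evolved from alpha_s(MZ)."""
    beta0 = 11.0 - 2.0 * nf / 3.0
    return alpha_mz / (1.0 + alpha_mz * beta0 / (2.0 * math.pi) * math.log(mu_gev / mz))

def run_mass(m0_gev, mu0_gev, mu_gev, nf=6):
    """Evolve a quark mass from scale mu0 to mu at one loop."""
    beta0 = 11.0 - 2.0 * nf / 3.0
    gamma0 = 8.0  # one-loop mass anomalous dimension coefficient
    return m0_gev * (alpha_s(mu_gev, nf) / alpha_s(mu0_gev, nf)) ** (gamma0 / (2.0 * beta0))

# The mass decreases as the scale increases, the effect CMS probed at the LHC
print(run_mass(163.0, 163.0, 1000.0))
```

Because the strong coupling falls with energy, the running mass evaluated at 1 TeV comes out noticeably below its value at the top-mass scale, which is the qualitative behaviour the CMS measurement tested.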

The LHC produces top quarks predominantly in top–antitop pairs via gluon fusion, which then decay almost exclusively to a bottom quark and a W boson. Each tt̄ event is classified by the subsequent decay of the W bosons. The latest CMS analysis uses semileptonic events – where one W decays into jets and the other into a lepton and a neutrino – selected from 36 fb–1 of Run 2 data collected at a centre-of-mass energy of 13 TeV. Five kinematic variables, as opposed to up to three in previous analyses, were used to extract the top-quark mass. While the extra information in the fit improved the precision of the measurement in a novel and unconventional way, it made the analysis significantly more complicated. In addition, the measurement required an extremely precise calibration of the CMS data and an in-depth understanding of the remaining experimental and theoretical uncertainties and their interdependencies. 

The final result, 171.77 ± 0.38 GeV, which includes a statistical uncertainty of 0.04 GeV, is a considerable improvement on all previously published top-quark mass measurements and supersedes the previously published measurement in this channel using the same data set. 

“The cutting-edge statistical treatment of uncertainties and the use of more information have vastly improved this new measurement from CMS,” says Hartmut Stadie of the University of Hamburg, who contributed to the result. “Another big step is expected when the new approach is applied to the more extensive dataset recorded in 2017 and 2018.”

Gérard Bachy 1942–2022

Gérard Bachy

Gérard Bachy arrived at CERN in 1967, straight after graduating from ETH Zurich, and spent his entire 35-year career there. He started off as a mechanical engineer with the Big European Bubble Chamber, where he was in charge of the design and manufacture of the expansion system. In 1972 he joined the team of John Adams that was building CERN’s new flagship facility, the Super Proton Synchrotron (SPS), taking on responsibility for its coordination and installation. The first protons were injected into the SPS on 3 May 1976. Gérard was then approached by Giorgio Brianti, deputy head of the SPS division, to set up a section in charge of the underground-area infrastructure and installation of the experiments. He formed a motivated team where new ideas thrived and were put into practice – including a bicycle-driven system for moving detector components weighing several dozen tonnes using air cushions. 

In 1981, when the huge Large Electron–Positron (LEP) collider project was taking shape, Gérard and his team were brought in by director-in-charge Emilio Picasso. They were soon merged with the engineering group to become the LEP–IM group, which went on to play a key role in the realisation of LEP. More innovations were in store to solve the many challenges associated with this project: modular access shafts; a monorail to facilitate the installation of various components; highly precise planning, logistics and others. The project moved fast, culminating in the start-up of LEP on 14 July 1989.

The engineering for the accelerators was spread across the various CERN divisions, which hampered efficiency. In 1990, Director-General Carlo Rubbia entrusted Gérard with bringing all the different activities together under one umbrella, and the mechanical technologies division was born. Over the next five years, the focus was on modernising the facilities, infrastructures and working methods, first for the LEP200 project and then for the LHC preparations. Gérard fostered the development of the engineering and equipment data-management service, encouraged the creation of quality assurance plans and promoted a project-management culture.

In 1996, Hans Hoffmann, the technical coordinator for ATLAS, appointed Gérard as project engineer in his technical coordination and integration team. Gérard’s experience was to have a big impact on important technical choices, such as the “large wheel” concept for the ATLAS muon spectrometer. He retired in June 2001 to be able to devote more time to his other great passions, sailing and travel. 

Gérard Bachy was a brilliant engineer and a charismatic leader. He played an undisputed role at the top level of engineering at CERN and acted as a mentor for many of us.

Jean-Charles Chollet 1938–2021

Jean-Charles Chollet

Experimental particle physicist Jean-Charles (Charlie) Chollet passed away on 24 August 2021. He had spent his whole scientific career at CERN, working as a member of the Orsay Laboratoire de l’Accélérateur Linéaire. His work was always in the area of precision measurements involving subtle analyses.

Charlie started at the CERN Proton Synchrotron with his thesis, defended in 1969 under the supervision of Jean-Marc Gaillard, on the observation of the interference between KL and KS in the π0π0 decay mode. He then contributed to the WA2 experiment at the Super Proton Synchrotron (SPS) studying leptonic decays of hyperons, where he took care of one of the most difficult components of the detector, the DISC Cherenkov counter, which led to the impressive achievement of separating ~200 GeV/c Σ and Ξ hyperons thanks to a combination of subtle optics and a complex system of photodetection. He then participated in the UA2 experiment at the SPS pp̅ collider, where he was in charge of the pre-shower detector calibration and performance. 

Later he engaged in the preparation of the ATLAS experiment at the LHC, where he performed several studies, notably on the pileup background properties and their expected impact on the design of the liquid-argon calorimeter electronics. He also participated in test-beam analysis of early “accordion calorimeters”, prototypes of this same calorimeter. He ended his career at the NA48 experiment, which was measuring the direct CP violation parameter ε′/ε in neutral kaon decays and where he made an important contribution with the analysis of kaon scattering in the collimator. From small inconsistencies in the data, he managed to find and understand the source of this background, thereby allowing it to be precisely taken into account in the measurement.

He was a great sportsman, especially sailing, skiing and cycling. Those who worked with Jean-Charles Chollet will always remember the pleasure of his company, his dry sense of humour and the depth and refinement of his work, which was always presented with the utmost modesty.

Dead-cone effect exposed by ALICE

A charm quark in a parton shower

More than 30 years after it was predicted, a phenomenon in quantum chromodynamics (QCD) called the dead-cone effect has been directly observed by the ALICE collaboration. The result, reported in Nature on 18 May, not only confirms a fundamental feature of the theory of the strong force, but enables a direct experimental observation of the non-zero mass of the charm quark in the partonic phase.

In QCD, the dead-cone effect predicts a suppression of gluon bremsstrahlung from a quark within a cone centred on the quark’s flight direction. This cone has an angular size mq/E, where mq is the mass of the quark and E is its energy. The effect arises due to the conservation of angular momentum during the gluon emission and is significant for low-energy heavy-flavour quarks. 
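The θ ≈ mq/E scaling can be made concrete with a quick numerical sketch. The charm mass and energies below are illustrative values, not numbers from the ALICE analysis:

```python
# The dead-cone half-angle theta ~ m_q / E quoted in the text, evaluated
# for a charm quark at a few illustrative energies. Masses and energies
# are in GeV; the small-angle approximation is assumed throughout.

M_CHARM = 1.27  # illustrative charm-quark mass in GeV

def dead_cone_angle(m_q_gev, e_gev):
    """Characteristic dead-cone opening angle in radians (small-angle form)."""
    return m_q_gev / e_gev

for e in (5.0, 10.0, 20.0):
    print(f"E = {e:5.1f} GeV -> theta ~ {dead_cone_angle(M_CHARM, e):.3f} rad")
```

The angle shrinks as the quark energy grows, which is why the suppression is most visible for low-energy heavy-flavour quarks, as the text notes.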

The dead cone has been indirectly observed at particle colliders. A direct observation from the parton shower’s radiation pattern has remained challenging, however, because it relies on the determination of the emission angle of the gluon, as well as the emitting heavy-flavour quark’s energy, at each emission vertex in the parton shower (see “Showering” figure). This requires a dynamic reconstruction of the cascading quarks and gluons in the shower from experimentally accessible hadrons, which had not been possible until now. In addition, the dead-cone region can be obscured and filled by other sources such as the decay products of heavy-flavour hadrons, which must be removed during the measurement.

To observe the dead-cone effect directly, ALICE used jets tagged with a reconstructed D0-meson in a 25 nb–1 sample of pp collisions at a centre-of-mass energy of 13 TeV collected between 2016 and 2018. The D0-mesons were reconstructed with transverse momenta between 2 and 36 GeV/c through their decay into a kaon and pion pair. Jet-finding was then performed on the events with the “anti-kT” algorithm, and jets with the reconstructed D0-meson amongst their constituents were tagged. The team used recursive jet-clustering techniques to reconstruct the gluon emissions from the radiating charm quark by following the branch containing the D0-meson at each de-clustering step, which is equivalent to following the emitting charm quark through the shower. A similar procedure was carried out on a flavour-untagged sample of jets, which contain primarily gluon and light-quark emissions and form a baseline where the dead-cone effect is absent.
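The declustering procedure described above (following the D0-tagged branch at each step and recording the radiator energy and emission angle) can be sketched with a toy clustering tree. Everything here, including the tree encoding, the numbers and the boolean flag standing in for the D0-meson tag, is invented for illustration; the real analysis operates on full jet-clustering histories:

```python
# Toy recursive declustering: walk a jet's clustering tree, at each step
# following the branch that carries the heavy-flavour tag (a boolean flag
# standing in for the D0-meson) and recording the radiator energy and the
# emission angle. The tree format and all numbers are invented.

def decluster(node, splittings=None):
    """node = ('leaf', energy_gev, tagged) or
              ('split', angle_rad, child_a, child_b)."""
    if splittings is None:
        splittings = []
    if node[0] == 'leaf':
        return splittings
    _, angle, a, b = node
    tagged, other = (a, b) if contains_tag(a) else (b, a)
    # the tagged branch is the radiator; the other branch is the emission
    splittings.append((energy(tagged), angle))
    return decluster(tagged, splittings)

def contains_tag(node):
    if node[0] == 'leaf':
        return node[2]
    return contains_tag(node[2]) or contains_tag(node[3])

def energy(node):
    if node[0] == 'leaf':
        return node[1]
    return energy(node[2]) + energy(node[3])

# A made-up two-emission shower history: the tag sits on the hardest prong.
jet = ('split', 0.40,
       ('split', 0.15,
        ('leaf', 12.0, True),   # tagged D0-carrying prong
        ('leaf', 3.0, False)),
       ('leaf', 5.0, False))
print(decluster(jet))  # [(15.0, 0.4), (12.0, 0.15)]
```

Each recorded (energy, angle) pair corresponds to one reconstructed emission vertex; binning emissions in angle for tagged versus untagged jets is what reveals the small-angle suppression.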

Comparisons between the gluon emissions from charm quarks and from light quarks and gluons directly reveal the dead-cone effect through a suppression of gluon emissions from the charm quark at small angles, compared to the emissions from light quarks and gluons. Since QCD predicts a mass-dependence of the dead cones, the result also directly exposes the mass of the charm quark, which is otherwise inaccessible due to confinement. ALICE’s successful technique to directly observe a parton shower’s dead cone may therefore offer a way to measure quark masses.

The upgraded ALICE detector in LHC Run 3 will enable an extension of the measurement to jets tagged with a B+ meson. This will allow the reconstruction of gluon emissions from beauty quarks which, due to their larger mass, are expected to have a larger dead cone than charm quarks. Comparisons between the angular distribution of gluon emissions from beauty quarks and those from charm quarks will isolate mass-dependent effects in the shower and remove the contribution from effects pertaining to the differences between quark and gluon fragmentation, bringing deeper insights into the intriguing workings of the strong force.
