
Through the Higgs portal

Referring to the field equation of general relativity Rμν – ½ Rgμν = κTμν , Einstein is reported to have said that the left-hand side, constructed from space–time curvature, is “a palace of gold”; while the right-hand side, which parameterises the energy and momentum of matter, is by comparison “a hovel of wood”. Present-day physics has arrived at much more concrete ideas about the right-hand side than were available to Einstein. It is fair to say that some of it has come to look quite palatial, and fully worthy to stand alongside the left-hand side. These are the terms that involve field kinetic energy and gauge bosons, as described by the Standard Model (SM). Their form follows logically, within the framework of relativistic quantum field theory, directly from the principles of local gauge symmetry and relativity. Mathematically, they also speak the same geometric language as the right-hand side. The gauge bosons are avatars of curvature in “internal spaces”, similar to how gravitons are the avatars of space–time curvature. Internal spaces parameterise ways in which fields can vary – and thus, in effect, move – independently of ordinary motion in space–time. In this picture, the strong, weak and electromagnetic interactions arise from the influence of internal space curvature on internal space motion, similar to how gravity arises from the influence of space–time curvature on space–time motion.

The Higgs particle is the only portal connecting normal matter to such phantom fields

The other contributions to Tμν, all of which involve the Higgs particle, do not yet reach that standard. We can aspire to do better! They are of three kinds. First, there are the many Yukawa-like terms from which quark and lepton masses and mixings arise. Then there is the Higgs self-coupling and finally a term representing its mass. These contributions to Tμν contain almost two dozen dimensionless coupling parameters that present-day theory does not enable us to calculate or even much constrain. It is therefore important to investigate experimentally, through quantitative studies of Higgs-particle properties and interactions, whether this ramshackle structure describes nature accurately. 

Higgs potential

The Higgs boson is special among the elementary particles. As the quantum of a condensate that fills all space, it is metaphorically “a fragment of vacuum”. Speaking more precisely, the Higgs particle has no spin, no electric or colour charge and, at the level of the strong and electromagnetic interactions, normal charge conjugation and parity. Thus, it can be emitted singly and without angular-momentum barriers, and it can decay directly into channels free of coloured and electromagnetically charged particles, which might otherwise be difficult to access. For these and other, more technical, reasons, the Higgs particle has the potential to reveal new physical phenomena of several kinds.

A unique aspect of the Higgs mass term is especially promising for revealing possible shortcomings in the SM. In quantum field theory, an important property of an interaction is the “mass dimension” of the operator that implements it – a number that in an important sense indicates its complexity. Scalar and gauge fields have mass dimension 1, as do space–time derivatives, whereas fermion fields have mass dimension 3/2. More complicated operators are built up by multiplying these, and the mass dimension of a product is the sum of the mass dimensions of its factors. Interactions associated with operators whose mass dimension is greater than 4 are problematic because they lead to violent quantum fluctuations and mathematical divergences. Whereas all the other terms in the SM Lagrangian arise from operators of mass dimension 4, the Higgs mass term has mass dimension 2. Thus it is uniquely open to augmentation by couplings to hypothetical new SU(3) × SU(2) × U(1) singlet scalar fields, because the mass dimension of the augmented interaction can be 3 or 4 – i.e. still “safe”. The Higgs particle is the only portal connecting normal matter to such phantom fields.
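Schematically, the counting works as follows (here φ denotes the SM Higgs doublet and S a hypothetical singlet scalar; the coupling names are illustrative):

```latex
% A Yukawa interaction: dimension 3/2 + 3/2 + 1 = 4 -- "safe"
\mathcal{L}_{\text{Yukawa}} \sim y\,\bar{\psi}\psi\,\phi
% The Higgs mass term: dimension 1 + 1 = 2, uniquely open to augmentation
\mathcal{L}_{\text{mass}} \sim \mu^{2}\,\phi^{\dagger}\phi
% Portal couplings to a singlet scalar S: dimensions 1+2 = 3 and 2+2 = 4 -- still "safe"
\mathcal{L}_{\text{portal}} \sim a\,S\,\phi^{\dagger}\phi \;+\; \lambda_{S}\,S^{2}\,\phi^{\dagger}\phi
```

Because any other SM operator already has dimension 4, attaching even a single extra field to it pushes the product above 4; only the dimension-2 operator φ†φ leaves room.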

Dark matter map

Why is this an interesting observation? There are three main reasons: two broadly theoretical, one pragmatic. First of all, the particles that are generally considered part of the SM carry a variety of charge assignments under the gauge groups SU(3) × SU(2) × U(1) that govern the strong and electroweak interactions. For example, the left-handed up quark is charged under all three groups, while the right-handed electron carries only U(1) hypercharge. Thus it is not only logically possible, but reasonably plausible, that there could be particles that are neutral under all three groups. Such phantom particles might easily escape detection, since they do not participate in the strong or electroweak interactions. Indeed, there are several examples of well-motivated candidate particles of that kind. Axions are one. Since they are automatically “dark” in the appropriate sense, phantom particles could contribute to the astronomical dark matter, and might even dominate it, as model-builders have not failed to notice. Also, many models of unification bring in scalar fields belonging to representations of a unifying gauge group that contains SU(3) × SU(2) × U(1) singlets, as do models with supersymmetry. Only phantom scalars are directly accessible through the Higgs portal, but phantoms of higher spin, including right-handed neutrinos, could cascade from real or virtual scalars.

Mysterious values

Second, the empirical value of the Higgs mass term is somewhat mysterious and even problematic, given that quantum corrections should push it to a value many orders of magnitude higher. This is the notorious “hierarchy problem” (see Naturalness after the Higgs). Given this situation, it seems appropriate to explore the possibility that part (or all) of the effective mass term of the SM Higgs particle arises from more fundamental couplings upon condensation of SU(3) × SU(2) × U(1) singlet scalar fields, i.e. the emergence of a non-zero space-filling field, as occurs in the Brout–Englert–Higgs mechanism.

The portal idea leads to concrete proposals for directions of experimental exploration

Third, the portal idea leads to concrete proposals for directions of experimental exploration. These are of two basic kinds: one involves the observed strength of conventional Higgs couplings, the other the kinematics of Higgs production and decay. Couplings of the Higgs field to singlets that condense will lead to mixing, altering the numerical relationships among the Higgs-particle couplings and the masses of gauge bosons and fermions from their minimal SM values. Also, the Higgs-field couplings to gauge bosons and fermions will be divided among two or more mass eigenstates. Since existing data indicate that deviations from the minimal model are small, the coupling of normal matter to the “mostly but not entirely” singlet pieces could be quite small, perhaps leading to very long lifetimes (as well as small production rates) for those particles. Whether or not the phantom particles contribute significantly to cosmological dark matter, they will appear as missing energy or momentum accompanying Higgs-particle decay or, through bremsstrahlung-like processes, when they are produced.
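In a minimal sketch with a single condensing singlet s, the mixing takes a familiar two-state form (the mixing angle θ is illustrative, not a measured quantity):

```latex
% Mass eigenstates are superpositions of the doublet excitation h and the singlet s
h_{1} = \cos\theta\, h + \sin\theta\, s, \qquad
h_{2} = -\sin\theta\, h + \cos\theta\, s
% SM couplings are shared between the eigenstates:
g_{h_{1}X} = \cos\theta\, g^{\text{SM}}_{hX}, \qquad
g_{h_{2}X} = \sin\theta\, g^{\text{SM}}_{hX}
```

The small observed deviations from minimal-model predictions then translate into a small sin θ, which is precisely what makes the mostly singlet state h₂ feebly coupled, rarely produced and potentially long-lived.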

We introduced the term “Higgs portal” to describe this circle of ideas in 2006, triggering a flurry of theoretical discussion. Now that the portal is open for business, and with larger data samples in store at the LHC, we can think more concretely about exploring it experimentally.

The thrill of the chase

Fabiola Gianotti and Joe Incandela during their 4 July 2012 presentations

At around 10:30 a.m. on 4 July 2012, two remarkable feats of theoretical and experimental physics reached an apex in the CERN auditorium. One was the work of a few individuals using the most rudimentary of materials, the other a global endeavour involving thousands of people and the world’s most powerful collider. Forty-eight years after it was predicted, the CMS and ATLAS collaborations presented conclusive evidence for the existence of a new elementary particle, the Higgs boson, the cornerstone of the electroweak Standard Model. 

“It took us several years to recover,” says CMS experimentalist Chiara Mariotti, who was co-convener of the collaboration’s Higgs group at the time. “For me there was a strong sense of ‘Higgs blues’ afterwards! On the other hand, the excitement was also productive. Immediately after the discovery we managed to invent a new method to measure the Higgs width, with a precision more than 200 times better than what we were thinking – a real breakthrough.”

Theoretically, the path to the Higgs boson had been paved by the early 1970s, building on foundations laid by the pioneers of quantum field theory and superconductivity. When Robert Brout and François Englert, and independently Peter Higgs, published their similarly titled papers on broken symmetry and the mass of gauge bosons in 1964, nobody took much notice. One of Higgs’s manuscripts was even rejected by an editor based at CERN. The profound consequences of the Brout–Englert–Higgs (BEH) mechanism – that the universe is pervaded by a scalar field responsible for breaking electroweak symmetry and giving elementary particles their mass (see “The Higgs, the universe and everything” panel) – only caught wider attention after further Nobel-calibre feats by Steven Weinberg, who incorporated the BEH mechanism into electroweak theory developed also by Abdus Salam and Sheldon Glashow, and by Gerard ’t Hooft and Martinus Veltman, who proved that the unified theory was mathematically consistent and capable of making testable predictions (see A triumph for theory). 

Over to CERN 

The first bridge linking the BEH mechanism to the real world was sketched out in CERN’s theory corridors in the form of a 50-page-long phenomenological profile of the Higgs boson by John Ellis, Mary Gaillard and Dimitri Nanopoulos published in 1976. The discovery of neutral currents in 1973 by Gargamelle at CERN, and of the charm quark at Brookhaven and SLAC in 1974, had confirmed that the Standard Model was on the right track. Despite their conviction that something like the Higgs boson had to exist, however, Ellis et al. ended their paper on a cautionary, somewhat tongue-in-cheek note: “We apologise to experimentalists for having no idea what is the mass of the Higgs boson… and for not being sure of its couplings to other particles, except that they are probably all very small. For these reasons we do not want to encourage big experimental searches for the Higgs boson, but we do feel that people performing experiments vulnerable to the Higgs boson should know how it may turn up”. 

As it turned out, discovering and measuring the electroweak bosons would drive three major projects at CERN spanning three decades: the SPS proton–antiproton collider, LEP and the LHC. Following Carlo Rubbia and Simon van der Meer’s ingenious modification of the SPS to collide protons and antiprotons, greatly increasing the available energy, the UA1 and UA2 collaborations confirmed the existence of the W boson on 25 January 1983. The discovery of the slightly heavier Z boson came a few months later. The discoveries made the case for the Higgs boson stronger, since all three bosons hail from the same scalar field (see panel). 

The Higgs, the universe and everything

The “Mexican hat”

The Higgs boson is the excitation of a featureless condensate that fills all space – a complex scalar field whose potential energy has a shape resembling a Mexican hat. The universe is pictured as being born in a symmetric state at the top of the hat: the electromagnetic and weak forces were one, and particles moved at the speed of light. A fraction of a nanosecond later, the universe transitioned to a less symmetric but more stable configuration in the rim of the hat, giving the field a vacuum expectation value of 246 GeV.
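The hat shape can be made concrete with the standard quartic potential (a sketch in one common parameterisation):

```latex
% The "Mexican hat": unstable at the symmetric point phi = 0 (top of the hat),
% minimised on the circle |phi| = v/sqrt(2) (the rim)
V(\phi) = -\mu^{2}\,\phi^{\dagger}\phi + \lambda\,(\phi^{\dagger}\phi)^{2},
\qquad
v = \sqrt{\frac{\mu^{2}}{\lambda}} \approx 246~\text{GeV}
```

Rolling from the top of the hat to the rim is the phase transition described above; oscillations “up and down” the rim correspond to the Higgs boson itself.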

During this electroweak symmetry-breaking process, three of the BEH field’s components were absorbed to generate polarisation states, and thus masses, for the W and Z bosons; the other component, corresponding to a degree of freedom “up and down” the rim of the hat, is the Higgs boson (see “Lifting the lid” image). The masses of the fermions are generated via Yukawa couplings to the BEH field, implying that mass is not an intrinsic property of elementary particles.

The roots of the BEH mechanism lie in the phenomenon of spontaneous symmetry breaking, which is inherent in superconductivity and superfluidity. In 1960, Yoichiro Nambu and then Jeffrey Goldstone introduced spontaneous symmetry breaking into particle physics, paving the way for taming the weak interaction using gauge theory, like electromagnetism before it. Four years later, Robert Brout and François Englert and, independently, Peter Higgs, showed that a mathematical obstacle called the Goldstone theorem, which implied the existence of unobserved massless particles, is a blessing rather than a curse for gauge theories: the degrees of freedom responsible for the troublesome massless states generate masses for the heavy gauge bosons that mediate the short-range weak interaction (see A triumph for theory).

LEP, along with the higher-energy Tevatron collider at Fermilab, offered Higgs hunters their first serious chance of a sighting. Dedicated analysis groups formed in the experiments. For a decade they saw nothing. Then, on 14 June 2000, LEP’s final year of scheduled running, ALEPH reported a Higgs candidate at around 114–115 GeV, followed soon by a second and third event. LEP was granted a one-month extension. On 16 October, L3 announced a candidate. By 3 November ALEPH had notched up a 2.9σ excess. A request to extend LEP by one year was made, but there was deadlock at CERN. Five days later, Director-General Luciano Maiani announced that LEP had closed for the last time, so as not to delay the LHC. In addition to determining the properties of the W and Z bosons in detail and confirming the existence of electroweak radiative corrections, LEP had planted a flag at the energy below which the Higgs would not be found.

Muscling a discovery 

In 1977, CERN Director-General John Adams had the foresight to make the LEP tunnel large enough to accommodate a TeV hadron collider capable of probing the scale of electroweak symmetry breaking. Spurred on by the W and Z discoveries, finding or ruling out the Higgs boson became the central goal of the LHC, greatly influencing the designs of the ATLAS and CMS detectors during the 1990s. Tens of millions of people worldwide watched as the first proton beams were threaded through the machine on 10 September 2008. While the LHC had other goals, the quest for the Higgs boson and the origin of mass resonated with non-experts and brought particle physics to the world. 

It was a bumpy start (see The bumpy ride to the bump), but high-energy LHC data began to flood in on 10 March 2010. By the time of the European Physical Society high-energy physics conference in Grenoble in July 2011, ATLAS and CMS were ready to offer a peek at their results. Practically, the search for the Higgs came down to a process of excluding mass ranges in which no signal had been seen. ATLAS and CMS had shrunk the allowed range and found a number of events hinting at a Higgs boson with a mass of about 142 GeV. “We both saw a bump at the same place, and we had champagne after the talks,” recalls Kyle Cranmer, co-coordinator of the ATLAS Higgs combination group at the time. “We weren’t confident then, but we were optimistic.” Fermilab’s Tevatron collider was also sensitive to a Higgs in the upper mass range, and its CDF and D0 experiments pioneered many of the analysis methods that were used in ATLAS and CMS. Just four years earlier, they had hinted at a possible signal at 160 GeV, only for it to disappear with further data. Was the US machine about to make a last-gasp discovery and scoop the LHC?

2011 results from ATLAS and CMS

The media were hot on the sigma trail. On 13 December 2011, the LHC experiments updated their findings: ATLAS constrained the Higgs to lie in the range 116–130 GeV, and CMS to lie in the range 115–127 GeV. For some, a light Higgs boson was in the bag. Others were hesitant. “There was a three-sigma excess when combining all the channels, but there were also less significant excesses in other mass regions,” recalls Mariotti. “I maybe also wanted not to believe it, in order not to be biased when analysing the data in 2012. And maybe because somehow if the Higgs was not there, it would have been really thrilling, much more challenging for us all.”

The following year, with the LHC running at a slightly higher energy, the collaborations knew that they would soon be able to say something definitive about the low-mass excess of events. From that moment, CMS decided not to look at the data and instead to redesign its analyses on simulated events “blinded”. On the evening of 14 June, all the analysis groups met separately to “open the box”. The next day, they shared their results with the collaboration. The two-photon and four-lepton channels had a beautiful peak at the same place. “It was like a very strong punch in the stomach,” says Mariotti. “From that moment it was difficult to sleep, and it was hard not to smile!”

The quest for the Higgs boson and the origin of mass resonated with non-experts and brought particle physics to the world

Members of both collaborations were under strict internal embargoes concerning the details. ATLAS unblinded its di-photon results late on 31 May, revealing a roughly 2σ excess. By 19 June it had grown to 3.3σ. The four-lepton group saw a similar excess. “My student Sven Kreiss was the first person in ATLAS to combine the channels and see the curve cross the 5σ threshold,” says Cranmer. “That was on 24 June, and it started to sink in that we had really found it. But it was still not clear what we would claim or how we would phrase things.” Amazingly, he says, he was not aware of the CMS results. “I was also not going out of my way to find out. I was relishing the moment, the excitement, and the last days of uncertainty. I also had more important things to do in preparation for the talk.” 

With the rumour mill in overdrive, a seminar at CERN was called for 4 July, also the first day of the ICHEP conference in Melbourne. Peter Higgs and François Englert, and Carl Hagen and Gerald Guralnik (who, with Tom Kibble, also arrived at the mass-generating mechanism), were to be there. The collaborations were focused only on their presentations. It had to be a masterpiece, says Mariotti. The day before, the CMS and ATLAS Higgs conveners met for coffee. They revealed nothing. “It was really hard not to know. We knew we had it, but somehow if ATLAS did not have it or had it but at a different mass, it all would have been a big disillusion.”

ICHEP 2012 and François Englert with Peter Higgs

Many at CERN decided to spend the night of 3 July in front of the auditorium so as not to miss the historic moment. CMS spokesperson Joe Incandela was first to guide the audience through the checks and balances behind the final plots. Fabiola Gianotti followed for ATLAS. When it was clear that both had seen a 5σ excess of events at around 125 GeV, the room erupted. Was it really the Higgs? All that was certain was that the particle was a boson, with a mass where the Standard Model expected it. Seizing the moment, and the microphone, Director-General Rolf Heuer announced: “As a layman, I would now say ‘I think we have it’, do you agree?” It was a spontaneous decision, he says. “For a short period between the unblindings and the seminar, I was one of the few people in the world, just with research director Sergio Bertolucci, in fact, who was aware of both results. We would not have announced a discovery had one experiment not come close to that threshold.”

The summer of 2012 produced innumerable fantastic memories, says Marumi Kado, ATLAS Higgs-group co-convener at the time and now a deputy spokesperson. “The working spirit in the group was exceptional. Each unblinding, each combination of the channels was an incredible event. Of course, the 4 July seminar was among the greatest.” In CMS, says Mariotti, there was a “party-mood” for months. “Every person thought, correctly, that they had played a role in the discovery, which is important, otherwise very large experiments cannot be done.” 

The path from here 

Ten years later, ATLAS and CMS measurements have shown the Higgs boson to be consistent with the minimal version required by the Standard Model. Its couplings to the gauge bosons and the heaviest three fermions (top, bottom and tau) have been confirmed, evidence that it couples to a second-generation fermion (the muon) obtained, and first studies of Higgs–charm and Higgs–Higgs couplings reported (see The Higgs boson under the microscope). However, data from Run 3, the High-Luminosity LHC and a possible Higgs factory to follow the LHC are needed to fully test the Standard-Model BEH mechanism (see The Higgs after LHC).

Every person thought, correctly, that they had played a role in the discovery, which is important, otherwise very large experiments cannot be done

Events on 4 July 2012 brought one scientific adventure to a close, but opened another, fascinating chapter in particle physics with fewer theoretical signposts. What is clear is that precision measurements of the Higgs boson open a new window to explore several pressing mysteries. The field from which the Higgs boson hails governs a critical phase transition that might be linked to the cosmic matter–antimatter asymmetry (see Electroweak baryogenesis); as an elementary scalar, it offers a unique “portal” to dark or hidden sectors which might include dark matter (see Through the Higgs portal); as the arbiter of mass, it could hold clues to the puzzling hierarchy of fermion masses (see The origin of particle masses); and its interactions govern the ultimate stability of the universe (see The Higgs and the fate of the universe). The very existence of a light Higgs boson in the absence of new particles to stabilise its mass is paradoxical (see Naturalness after the Higgs). Like the discovery of the accelerating universe, Nima Arkani-Hamed told the Courier in 2019, it is profoundly “new” physics: “Both discoveries are easily accommodated in our equations, but theoretical attempts to compute the vacuum energy and the scale of the Higgs mass pose gigantic, and perhaps interrelated, theoretical challenges. While we continue to scratch our heads as theorists, the most important path forward for experimentalists is completely clear: measure the hell out of these crazy phenomena!”

A triumph for theory

Increasingly complex electroweak processes

Often in physics, experimentalists observe phenomena that theorists had not been able to predict. When the muon was discovered, theoreticians were confused; a particle had been predicted, but not this one. Isidor Rabi came out with his famous outcry: “Who ordered that?” The J/ψ is another special case. A particle was discovered with properties so different from those expected that the first guesses as to what it was were largely mistaken. Soon it became evident that it was a predicted particle after all, but it so happened that its features were more exotic than was foreseen. This was an experimental discovery requiring new twists in the theory, which we now understand very well. The Higgs particle also has a long and interesting history, but from my perspective, it was to become a triumph for theory.

From the 1940s, long before any indications were seen in experiments, there were fundamental problems in all theories of the weak interaction. Then we learned from very detailed and beautiful measurements that the weak force seemed to have a vector minus axial-vector (V–A) structure. This implied that, just as in Yukawa’s theory for the strong nuclear force, the weak force can also be seen as resulting from an exchange of particles. But here, these particles had to be the energy quanta of vector and axial-vector fields, so they must have spin one, with positive and negative parities mixed up. They also must be very heavy. This implied that, certainly in the 1960s, experiments would not be able to detect these intermediate particles directly. But in theory, we should be able to calculate accurately the effects of the weak interaction in terms of just a few parameters, as could be done with the electromagnetic force.

Electromagnetism was known to be renormalisable – that is, by carefully redefining and rearranging the mass and interaction parameters, all observable effects would become calculable and predictable, avoiding meaningless infinities. But now we had a difficulty: the weak exchange particles differed from the electromagnetic ones (the photons) because they had mass. The mass was standing in the way when you tried to do what was well understood in electromagnetism. How exactly a correct formalism should be set up was not known, and the relationship between renormalisability and gauge invariance was not understood at all. Indeed, today we can say that the first hints were already there by 1954, when C N Yang and Robert Mills wrote a beautiful paper in which they generalised the principle of local gauge invariance to include gauge transformations that affect the nature of the particles involved. In its most basic form, their theory described photons with electric charge.

Thesis topic

In 1969 I began my graduate studies under the guidance of Martinus J G Veltman. He explained to me the problem he was working on: if photons were to have mass, then renormalisation would not work the same way. Specifically, the theory would fail to obey unitarity, a quantum mechanical rule that guarantees probabilities are conserved. I was given various options for my thesis topic, but they were not as fundamental as the issues he was investigating. “I want to work with you on the problem you are looking at now,” I said. Veltman replied that he had been working on his problem for almost a decade; I would need lots of time to learn about his results. “First, read this,” he said, and he gave me the Yang–Mills paper. “Why?” I asked. He said, “I don’t know, but it looks important.”

Making history

That, I could agree with. This was a splendid idea. Why can’t you renormalise this? I had convinced myself that it should be possible, in principle. The Yang–Mills theory was a relativistic quantised field theory. But Veltman explained that, in such a theory, you must first learn what the Feynman rules are. These are the prescriptions that you have to follow to get the amplitudes generated by the theory. You can read off whether the amplitudes are unitary, obey dispersion relations, and check that everything works out as expected.

Many people thought that renormalisation – even quantum field theory – was suspect. They had difficulties following Veltman’s manipulations with Feynman diagrams, which required integrations that do not converge. To many investigators, he seemed to be sweeping the difficulties with the infinities under the rug. Nature must be more clever than this! Yang–Mills seemed to be a divine theory with little to do with reality, so physicists were trying all sorts of totally different approaches, such as S-matrix theory and Regge trajectories. Veltman decided to ignore all that.

Solid-state inspiration

Earlier in the decade, some investigators had been inspired by results from solid-state physics. Inside solids, vibrating atoms and electrons were described by nonrelativistic quantum field theories, and those were conceptually easier to understand. Philip Anderson had learned to understand the phenomenon of superconductivity as a process of spontaneous symmetry breaking; photons would obtain a mass, and this would lead to a remarkable rearrangement of the electrons as charge carriers that would no longer generate any resistance to electric currents. Several authors realised that this procedure might apply to the weak force. In the summer of 1964, Peter Higgs submitted a manuscript to Physical Review Letters, where he noted that the mechanism of making photons massive should also apply to relativistic particle systems. But there was a problem. Jeffrey Goldstone had sound mathematical arguments to expect the emergence of massless scalar particles as soon as a continuous symmetry breaks down spontaneously. Higgs put forward that this theorem should not apply to spontaneously broken local symmetries, but critics were unconvinced.

The journal sent Higgs’s manuscript out to be peer reviewed. The reviewer did not see what the paper would add to our understanding. “If this idea has anything to do with the real world, would there be any possibility to check it experimentally?” The correct question would have been what the paper would imply for the renormalisation procedure, but this question was in nobody’s mind. Anyway, Higgs gave a clear and accurate answer: “Yes, there is a consequence: this theory not only explains where the photon mass comes from, but it also predicts a new particle, a scalar particle (a particle with spin zero), which unlike all other particles, forms an incomplete representation of the local gauge symmetry.” In the meantime, other papers appeared about the photon mass-generation process, not only by François Englert and Robert Brout in Brussels, but also by Tom Kibble, Gerald Guralnik and Carl Hagen in London. And Sheldon Glashow, Abdus Salam and Steven Weinberg were formulating their first ideas (all independently) about using local gauge invariance to create models for the weak interaction. 

I started to study everything from the ground up

At the time spontaneous symmetry breaking was being incorporated into quantum field theory, the significance of renormalisation and the predicted scalar particles were hardly mentioned. Certainly, researchers were not able to predict the mass of such particles. Personally, although I had heard about these ideas, I also wasn’t sure I understood what they were saying. I had my own ways of learning how to understand things, so I started to study everything from the ground up. 

If you work with quantum mechanics, and you start from a relativistic classical field theory, to which you add the Copenhagen procedure to turn that into quantum mechanics, then you should get a unitary theory. The renormalisation procedure amounts to transforming all expressions that threaten to become infinite due to divergence of the integrals, to apply only to unobservable quantities of particles and fields, such as their “bare mass” and “bare charge”. If you understand how to get such things under control, then your theory should become a renormalised description of massive particles. But there were complications.

The infinities that require a renormalisation procedure to tame them originate from uncontrolled behaviour at very tiny distances, where the effective energies are large and consequently the effects of mass terms for the particles should become insignificant. This revealed that you first have to renormalise the theory without any masses in it, where also the spontaneous breakdown of the local symmetry becomes insignificant. You had to get the particle book-keeping right. A massless photon has only two observable field components (they can be left- or right-rotating), whereas a massive particle with the same spin can rotate in three different ways. One degree of freedom did not match. This was why an extra field was needed. If you wanted massive photons with electric charges +, 0 or –, you would need a scalar field with four components; one of these would represent the total field strength, and would behave as an extra, neutral, spin-0 particle – the observable particle that Higgs had talked about – but the others would turn the number of spinning degrees of freedom of the three other bosons from two to three each (see “Dynamical” figure).
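Explicitly, the book-keeping balances: three massless vector bosons plus a four-component scalar carry the same number of degrees of freedom as three massive vector bosons plus one neutral scalar particle.

```latex
\underbrace{3 \times 2}_{\text{massless vectors}} \;+\; \underbrace{4}_{\text{scalar components}}
\;=\; 10 \;=\;
\underbrace{3 \times 3}_{\text{massive vectors}} \;+\; \underbrace{1}_{\text{neutral spin-0 particle}}
```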

One question

In 1970 Veltman sent me to a summer school organised by Maurice Lévy in a new science institute at Cargèse on the French island of Corsica. The subject would be the study of the Gell-Mann–Lévy model for pions and nucleons, in particular its renormalisation and the role of spontaneous symmetry breaking. Would renormalisation be possible in this model, and would it affect its symmetry? The model was very different from what I had just started to study: Yang–Mills theory with spontaneous breaking of its symmetry. There were quite a few reputable lecturers besides Lévy himself: Benjamin Lee and Kurt Symanzik had specialised in renormalisation. Shy as I was, I asked only one question of Lee, and the same of Symanzik: does your analysis apply to the Yang–Mills case?

Both gave me the same answer: if you are Veltman’s student, ask him. But I had, and Veltman did not believe that these topics were related. I thought that I had a better answer, and I fantasised that I was the only person on the planet who knew how to do it right. It was not obvious at all; I had two German roommates at the hotel where I had been put, who tried to convince me that renormalisation of Feynman graphs where lines cross each other would be unfathomably complicated.

Spin-1 particles

Veltman had not only set up detailed, fully running machinery to handle the renormalisation of all sorts of models, but he had also designed a futuristic computer program to do the enormous amount of algebra required to handle the numerous Feynman diagrams that appear to be relevant for even the most basic computations. I knew he had those programs ready and running. He was now busy with some final checks: if his present attempts to check the unitarity of his renormalised model still failed, we should seriously consider giving this up. Yang–Mills theories for the weak interactions would not work as required.

But Veltman had not thought of putting a spin-zero, neutral particle in his model, certainly not if it wasn’t even in a complete representation of the gauge symmetry. Why should anyone add that? After returning from Cargèse I went to lunch with Veltman, during which I tried to persuade him. Walking back to our institute, he finally said, “Now look, what I need is not an abstract mathematical idea, what I want is a model, with a Lagrangian, from which I can read off the Feynman diagrams to check it with my program…”. “But that Lagrangian I can give you,” I said. Next, he walked straight into a tree! A few days after I had given him the Lagrangian, he came to me, quite excited. “Something strange,” he said, “your theory isn’t right because it still isn’t unitary, but I see that at several places, if the numbers had been a trifle different, it could have worked out.” Had he copied those factors ¼ and ½ that I had in my Lagrangian, I wondered? I knew they looked odd, but they originated from the fact that the Higgs field has isospin ½ while all other fields have isospin one.

No, Veltman had thought that those factors came from a sloppy notation I must have been using. “Try again,” I asked. He did, and everything fell into place. Most of all, we had discovered something important. This was the beginning of an intensive but short collaboration. My first publication “Renormalization of massless Yang–Mills fields”, published in October 1971, concerned the renormalisation of the Yang–Mills theory without the mass terms. The second publication that year, “Renormalizable Lagrangians for massive Yang–Mills fields,” where it was explained how the masses had to be added, had a substantial impact. 

There was an important problem left wide open, however: even if you had the correct Feynman diagrams, the process of cancelling out the infinities could still leave finite, non-vanishing terms that ruin the whole idea. These so-called “anomalies” must also cancel out. We found a trick called dimensional regularisation, which would guarantee that anomalies cancel except in the case where particles spin preferentially in one direction. Fortunately, as charged leptons tend to rotate in opposite directions compared to quarks, it was discovered that the effects of the quarks would cancel those of the leptons.

The fourth component

Within only a few years, a complete picture of the fundamental interactions became visible, where experiment and theory showed a remarkable agreement. It was a fully renormalisable model where all quarks and all leptons were represented as “families” that were only complete if each quark species had a leptonic counterpart. There was an “electroweak force”, where electromagnetism and the weak force interfere to generate the force patterns observed in experiments, and the strong force was tamed at almost the same time. Thus the electroweak theory and quantum chromodynamics were joined into what is now known as the Standard Model.

Be patient, we are almost there, we have three of the four components of this particle’s field

This theory agreed beautifully with observations, but it did not predict the mass of the neutral, spin-0 Higgs particle. Much later, when the W and the Z bosons were well-established, the Higgs was still not detected. I tried to reassure my colleagues: be patient, we are almost there, we have three of the four components of this particle’s field. The fourth will come soon.

As the theoretical calculations and the experimental measurements became more accurate during the 1990s and 2000s, it became possible to derive the most likely mass value from indirect Higgs-particle effects that had been observed, such as those concerning the top-quark mass. On 4 July 2012 a new boson was directly detected close to where the Standard Model said the Higgs would be. After these first experimental successes, it was of utmost importance to check whether this was really the object we had been expecting. This has kept experimentalists busy for the past 10 years, and will continue to do so for the foreseeable future.

The discovery of the Higgs particle is a triumph for high technology and basic science, as well as accurate theoretical analyses. Efforts spanning more than half a century paid off in the summer of 2012, and a new era of understanding the particles, their masses and interactions began.

The origin of particle masses

For thousands of years, humans have asked “what are the building blocks of nature?” To those not familiar with the wonders of relativistic quantum mechanics, the question might seem equivalent to asking “what are the smallest particles known?” However, we know that the size of atoms is quantised, and has negligible dependence on the size of nuclei. In fact, atomic size is essentially inversely proportional to the mass of the electron. It is therefore the electron mass, together with the rules of quantum mechanics, that controls the inner structure of all the elements. Furthermore, the masses and sizes of nuclei, protons and neutrons cannot simply be obtained by “adding up” smaller degrees of freedom; they are rather dictated by the coupling constant of the strong force, which below a certain energy scale, ΛQCD, becomes so large that the force between two particles becomes approximately independent of their distance, inducing confinement.
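
The inverse proportionality can be made concrete through the Bohr radius, a0 = ħ/(me c α). The short check below uses rounded reference values and is illustrative, not a precision calculation:

```python
# Atomic size is set by the electron mass: Bohr radius a0 = hbar / (m_e * c * alpha).
# Rounded reference values (illustrative only).
hbar_c = 197.327      # hbar * c in MeV * fm
m_e = 0.511           # electron mass in MeV (as m_e c^2)
alpha = 1 / 137.036   # fine-structure constant

a0_fm = hbar_c / (m_e * alpha)   # Bohr radius in femtometres
a0_m = a0_fm * 1e-15             # ~5.29e-11 m

print(f"Bohr radius ~ {a0_m:.3e} m")

# Doubling the electron mass would halve the size of every atom:
assert abs((hbar_c / (2 * m_e * alpha)) / a0_fm - 0.5) < 1e-12
```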

The above description suggests that “all” that is required to understand the basic structure of matter is to understand the origin of the electron mass and to study quantum chromodynamics. But this misses the bigger picture revealed by the Standard Model (SM). Protons, neutrons and other light, long-lived hadrons – as well as the pions whose exchange binds nuclei – are constructed from the ultra-light u and d quarks, and perhaps also s quarks. This reveals the profound importance of the values of the fermion masses: increasing the u and d mass difference by less than 10 MeV (that is, about 1% of the proton mass), for instance, would make hydrogen and its isotopes unstable, thereby preventing the formation of almost all the elements in the early universe. Indeed, there are only certain regions in the vast quark-mass and ΛQCD parameter space that enable the universe as we know it to form.

Artistic representation of the Higgs boson

Having established that the structure of the masses of the elementary particles is an existential issue, what does this have to do with the discovery of the Higgs boson? While the Brout–Englert–Higgs (BEH) field carries a cosmological background value called the vacuum expectation value (VEV), which is associated with the spontaneous breaking of the electroweak symmetry, the VEV is not necessarily the source of the actual value and/or the pattern of fermion masses. The reason is that, in addition to baryonic charge (or number), all the elementary charged particles carry “chiral charge” – they are either left- or right-handed – which is conserved in the absence of the BEH field. What is fascinating about the BEH mechanism is that, with the appropriate choice of coupling, the product of the field and its coupling strength to the fermions effectively becomes a source of chiral charge, allowing the fermions to interact with it; the VEV is merely the constant of proportionality that induces the masses of the fermions (and of the weak-force mediators). This is a very minimal setup! In other known symmetry-breaking frameworks – for instance models based on technicolour/QCD-like dynamics or on superconductivity, where the electromagnetic symmetry inside a superconductor is broken by a condensate of electron pairs known as Cooper pairs – there is no direct link to the generation of fermion masses.

Standard Model couplings

The BEH mechanism might be minimal, but it still involves many parameters. The origin of fundamental masses requires switching on nine trilinear couplings, organised into three generations of fundamental particles: three involving the u-type left- and right-handed quarks (u, c, t), three involving the d-type left- and right-handed quarks (d, s, b) and three involving the left- and right-handed charged leptons (e, µ, τ). Each coupling is associated with a linear “Yukawa” coupling of the Higgs boson to fermions, which implies that all the charged fermions acquire a mass proportional to the VEV of the BEH field. In other words, there is a linear relation between the Yukawa coupling and the fermion mass. Strikingly, the observed fermion masses encoded in the Yukawa couplings span some five orders of magnitude, with all of them being extremely small except for some members of the third generation – leading to the fermion mass-hierarchy puzzle.
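
The linear relation is, at tree level, m_f = y_f v/√2 with v ≈ 246 GeV. The snippet below inverts it with rounded PDG-style masses (illustrative numbers only) to show the span of the Yukawa couplings:

```python
import math

# Tree-level SM relation m_f = y_f * v / sqrt(2), with v ~ 246 GeV.
# Rounded charged-fermion masses in GeV (illustrative values).
v = 246.0
masses = {"e": 0.000511, "mu": 0.1057, "tau": 1.777,
          "u": 0.0022, "c": 1.27, "b": 4.18, "t": 172.7}

yukawas = {f: math.sqrt(2) * m / v for f, m in masses.items()}
for f, y in sorted(yukawas.items(), key=lambda kv: kv[1]):
    print(f"y_{f} ~ {y:.2e}")

# The couplings span roughly five orders of magnitude,
# with only the top Yukawa of order one:
span = yukawas["t"] / yukawas["e"]
assert 1e5 < span < 1e6
```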

Relationship between the fundamental masses and their Yukawa couplings to the BEH field

The coupling between the Higgs boson and the fermions can be pictured as a new force – one that is radically different to the SM gauge forces. Given that this force only acts between two particles that are closer than around 10⁻¹⁸ m – i.e. about 1000 times smaller than the proton radius – it plays no role in any macroscopic setup. The Higgs–Yukawa couplings do, however, conceal two interesting aspects related to our existence. The first is that increasing the VEV by a factor of a few would increase the neutron–proton mass splitting to the point where all nuclei are unstable. The second, pointed out by Giuseppe Degrassi and co-workers in 2013, is that the top-quark Yukawa interaction is close to its maximal size: increasing it by as little as 10% would push the VEV to fantastically large values, rendering our current universe unstable (see The Higgs and the fate of the universe).
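
The quoted range is simply the reduced Compton wavelength of the Higgs boson, ħ/(m_H c). A quick check with rounded values (illustrative only):

```python
# Range of a force mediated by a massive particle: hbar / (m c).
# Rounded values; illustrative only.
hbar_c = 197.327e-3   # hbar * c in GeV * fm
m_H = 125.0           # Higgs-boson mass in GeV

range_fm = hbar_c / m_H        # range in femtometres
range_m = range_fm * 1e-15     # ~1.6e-18 m

print(f"Higgs-force range ~ {range_m:.1e} m")
```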

Massive alternatives

The minimal BEH mechanism is not the only way to understand the fermion mass hierarchy. This is illustrated by two radically different options. In the first, proposed in 2008 by Gian Giudice and Oleg Lebedev, the Yukawa couplings are assumed to depend on the BEH field, therefore avoiding hierarchies in the Yukawa couplings. The idea postulates a variation of chiral symmetry (in which the lighter the fermion the more chiral charge it carries) that forbids lighter particles from coupling to the Higgs linearly, but instead generates their masses through appropriate powers of the VEV (see “In line” figure, blue curve). The other extreme possibility, discussed more recently by the present author and colleagues, is where the masses of the light fermions instead come from their interaction with a subdominant additional source of electroweak symmetry breaking, similar to the technicolour framework. This new source replaces the Higgs boson’s role as the carrier of the light-generation chiral charge, causing the light fermion–Higgs couplings to vanish (see figure, red curve). Both cases would provide an alternative understanding of the mass-hierarchy puzzle and would imply new physics.

The conclusion is that measuring the fermion–Higgs couplings at higher levels of precision will significantly improve our understanding of the origin of masses in nature. It took a few years after the Higgs-boson discovery, until around 2018, for ATLAS and CMS to establish that the standard BEH mechanism is behind the third-generation fermion masses. This is a legacy result from the LHC experiments that is sometimes overlooked by our community (CERN Courier September/October 2020 p41). While significant, however, it told us little about the origin of the matter in the universe, which is almost exclusively made out of first-generation fermions with extremely small couplings to the Higgs boson. So far, we only have indirect information, via Higgs-boson couplings to the gauge bosons, about the origin of mass of the first and second generations. But breakthroughs are imminent. In the past two years, ATLAS and CMS have found signs that the Higgs boson contributes to both the muon and charm-quark masses of the second generation, which would exclude models leading to both the blue and red curves in the figure. Measuring the much smaller electron Yukawa coupling is only possible at a future collider, whereas for the u and d quarks there is no clear experimental pathway.

Experimental novelties

A recent, unexpected way to tackle the mystery of fermion masses involves dark matter, specifically a class of models in which the dark-matter particle is ultra-light and its field value oscillates with time. Such particles would couple to fermions in a way that echoes the Higgs–Yukawa coupling, though with an extremely low interaction strength, and lead to a variation of the masses of the fundamental fermions with time. This feeble effect cannot be searched for at colliders, but it can be probed with quantum sensors such as atomic clocks, or future nuclear clocks, that reach sensitivities of one part in 10¹⁹ or better. These tabletop experiments are most sensitive to variations in the electron mass.

It is now a priority to directly test the mass-generating mechanism of the first two generations

The discovery of the Higgs boson has opened a new window on the origin of masses, and consequently on the structure of the basic blocks of nature, with profound links to our existence. ATLAS and CMS have made several breakthroughs, including the observation that the third-generation masses originate from the SM minimal BEH mechanism, and evidence for the couplings of part of the second generation. It is now a priority to directly test the mass-generating mechanism of the first two generations, and to determine all the Higgs couplings at higher precision, in search of possible chinks in the SM armour.

The Higgs and the fate of the universe

Transition after electroweak symmetry breaking

A vacuum is ordinarily pictured as an empty region containing no particles, atoms or molecules of matter, as in outer space. To a particle physicist, however, it is better defined as the lowest energy state that can be attained when no physical particles are present. Even in empty space there are fields that are invisible to the naked eye but nevertheless influence the behaviour of matter, while quantum mechanics ensures that, even if particles are not physically present, they continually fluctuate spontaneously in and out of existence. 

In the Standard Model (SM), in addition to the familiar gravitational and electromagnetic fields, there is the Brout–Englert–Higgs (BEH) field that is responsible for particle masses. It is usually supposed to have a constant value throughout the universe, namely the value that it takes at the bottom of its “Mexican hat” potential (see “New depths” figure). However, as was first pointed out by several groups in 1979, and revisited by many theorists subsequently, the shape of the Mexican hat is subject to quantum effects that change its shape. For example, the BEH field has self-interactions that tend to curl the brim of the hat upwards, but there are additional quantum effects that tend to curl the brim downwards, due to the interactions with the fundamental particles to which the BEH field gives mass. The most important of these is the heaviest matter particle: the top quark.

Push and pull

The upward push of the Higgs boson’s self-interaction and the downward pressure of the top quark are very sensitive to their masses, and also to the strong interactions, which modify the effect of the top quark. Experiments at the LHC have already determined the mass of the Higgs boson with a precision approaching 0.1%, and CMS recently measured the mass of the top quark with an accuracy of almost 0.2%, while the strong coupling strength is known to better than 1%. The latest calculations of the quantum effects of the Higgs boson and the top quark indicate that the brim of the Mexican hat turns down when the BEH field exceeds its value today by 10 orders of magnitude, implying that the current value is not the lowest energy and hence not the true vacuum of the SM. A consequence is that the current BEH value is not stable, because quantum fluctuations would inevitably cause it to decay into a lower-energy state. The universe as we know it would be doomed (see “On the cusp” figure).
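
The competition between the Higgs self-interaction's upward push and the top quark's downward pull can be sketched with the standard one-loop renormalisation-group equations of the SM. The toy script below is my own illustration, with rounded input couplings assumed at the top-quark mass; a serious analysis requires higher-loop corrections and careful matching, so the exact turn-down scale it prints should not be taken literally:

```python
import math

# Toy one-loop running of the SM Higgs quartic coupling lambda.
# Standard one-loop beta functions; rounded starting values at mu = m_t
# (assumed here: lambda=0.126, y_t=0.94, g3=1.16, g2=0.65, g1=0.36).

def betas(lam, yt, g3, g2, g1):
    k = 1.0 / (16 * math.pi ** 2)
    b_lam = k * (24 * lam**2 - 6 * yt**4 + 12 * lam * yt**2
                 - 3 * lam * (3 * g2**2 + g1**2)
                 + 0.375 * (2 * g2**4 + (g2**2 + g1**2) ** 2))
    b_yt = k * yt * (4.5 * yt**2 - 8 * g3**2 - 2.25 * g2**2 - (17 / 12) * g1**2)
    return b_lam, b_yt, -k * 7 * g3**3, -k * (19 / 6) * g2**3, k * (41 / 6) * g1**3

def instability_scale():
    """Return the scale (GeV) where lambda first turns negative, or None."""
    lam, yt, g3, g2, g1 = 0.126, 0.94, 1.16, 0.65, 0.36
    t, t_end, dt = math.log(173.0), math.log(1e19), 1e-3
    while t < t_end:
        b_lam, b_yt, b_g3, b_g2, b_g1 = betas(lam, yt, g3, g2, g1)
        lam += b_lam * dt; yt += b_yt * dt          # Euler step in ln(mu)
        g3 += b_g3 * dt; g2 += b_g2 * dt; g1 += b_g1 * dt
        t += dt
        if lam < 0:
            return math.exp(t)
    return None

print(f"lambda turns negative near mu ~ {instability_scale():.1e} GeV")
```

The qualitative lesson survives refinement: the top-quark term drives λ negative well below the Planck scale, which is the "brim turning down" described above.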

However, there is no immediate need to panic. First, the universe is metastable with an estimated lifetime before it decays that is many, many orders of magnitude longer than its age so far. Second, one could perhaps cling to the increasingly forlorn hope that the prediction of a lower-energy state of the SM vacuum is somehow mistaken. Perhaps an experimental measurement going into the calculation has an unaccounted uncertainty, or perhaps there is some ingredient that is missing from the theoretical calculation of the shape of the Mexican hat? 

Absolute stability, metastability and instability of the SM vacuum

If you simply take the calculation at face value and humbly accept the eventual demise of the universe as we know it, however, a further problem arises. Since quantum and thermal fluctuations in the BEH field were probably much larger when the universe was young and much hotter than today, the overwhelming majority of the universe would have been driven into the lower-energy state. Only an infinitesimal fraction would be in the metastable state we find ourselves in today, where the value of the BEH field is relatively small. Of course, one could argue anthropically that this good luck was inevitable, as we could not live in any other “vacuum” state. 

To me, this argument reeks of special pleading. Instead, my instinct is to argue that some physics beyond the SM must appear below the turn-down scale and stabilise the vacuum that we live in. This argument is not specific about the type of new physics or the scale at which it appears. One extension of the SM that fits the bill is supersymmetry, but the stability argument offers no guarantee that this or any other extension of the SM is within reach of current experiments.

It used to be said that the nightmare scenario for the LHC would be to discover the Higgs boson and nothing else. However, the measured masses of the Higgs boson and the top quark may be hinting that there must be physics beyond the SM that stabilises the vacuum. Let us take heart from this argument, and keep looking for new physics, even if there is no guarantee of immediate success.

The Higgs boson under the microscope

On 4 July 2012, the ATLAS and CMS collaborations jointly announced their independent discoveries of a new particle directly related to the Brout–Englert–Higgs field that gives mass to the other elementary particles of the Standard Model (SM). The LHC and its two general-purpose experiments were designed and built, among other things, with the aim of detecting or ruling out the SM Higgs boson. Within three years of the LHC startup, the two experiments detected a signal consistent with a Higgs boson with a mass of about 125 GeV, in full agreement with indications from precision measurements carried out at the electron–positron colliders LEP and SLC, and at the Tevatron proton–antiproton collider.

Higgs encounters

The discovery was made mainly by detecting decays of the new particle into two photons or two Z bosons (each of which decays into a pair of electrons or muons), for which the invariant mass can be reconstructed with high resolution. The search for the Higgs boson was also performed in other channels, and all results were found to be consistent with the SM expectations. A peculiar feature of the Higgs boson is that it has zero spin. At the time of the discovery, it was already excluded that the particle was a standard vector boson: a spin-1 particle cannot decay into two photons, leaving only spin-0 or spin-2 as the allowed possibilities.

Ten years ago, the vast majority of high-energy physicists were convinced that a Higgs boson had been detected. The only remaining question was whether it was the boson predicted by the SM or part of an extended Higgs sector.

Basic identity  

The mass of the Higgs boson is the only parameter of the Higgs sector that is not predicted by the SM. A high-precision measurement of the mass is therefore crucial because, once it is known, all the couplings and production cross sections can be predicted in the SM and then compared with experimental measurements. The mass measurement is carried out using the H → γγ and H → ZZ* → 4ℓ channels, with a combined ATLAS and CMS measurement based on Run 1 data obtaining a value of 125.09 ± 0.24 GeV. Results with a precision at the level of one part per thousand have since been obtained by ATLAS and CMS using partial datasets from Run 2.

The width of the Higgs boson, unlike its mass, is well predicted at approximately 4 MeV. Since this is much smaller than the ATLAS and CMS detector resolutions, a precise direct measurement can only be carried out at future electron–positron colliders. At the LHC it is possible to constrain the width indirectly by studying the production of di-boson pairs (ZZ or WW) via the exchange of off-shell Higgs bosons: under some reasonable assumptions, the off-shell cross section at high mass, relative to the on-shell cross section, increases in proportion to the width. A recent result from CMS constrains the Higgs-boson width to be between 0.0061 and 2.0 times the SM prediction at 95% confidence level. Finding the width to be smaller than the SM prediction would mean that some of the couplings are smaller than predicted, while a larger measured width could reflect additional decay channels beyond the SM, or larger branching fractions than those predicted by the SM.
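
The logic of the off-shell constraint can be reduced to a toy scaling argument (my own sketch, not the experimental analysis): on the resonance peak the rate scales as couplings⁴/Γ_H, while far off-shell the width dependence drops out, so the off-shell/on-shell ratio tracks Γ_H directly.

```python
import math

# Toy scaling behind the off-shell width constraint (illustrative only):
# on-peak rate ~ g^4 / width, off-shell rate ~ g^4, so their ratio ~ width.

def on_peak_rate(g, width):
    return g**4 / width

def off_shell_rate(g):
    return g**4

def off_over_on(g, width):
    return off_shell_rate(g) / on_peak_rate(g, width)

sm_width = 4.1e-3  # SM prediction, roughly 4 MeV expressed in GeV

# Doubling the width doubles the off-shell/on-shell ratio at fixed couplings,
# which is why the ratio measurement constrains the width:
assert math.isclose(off_over_on(1.0, 2 * sm_width),
                    2 * off_over_on(1.0, sm_width))
```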

This is the first strong suggestion that the Higgs boson also couples to fermions from generations other than the third

The spin and charge-parity (CP) properties of the Higgs boson are other key quantities. The SM predicts that the Higgs boson is a scalar (spin-0 and positive CP) particle, but in extended Higgs models it could be a superposition of positive and negative CP states, for example. The spin and CP properties can be probed using angular distributions of the Higgs-boson decay products, and several decay channels were exploited by ATLAS and CMS: H → γγ, ZZ, WW and ττ. All results to date indicate consistency with the SM and exclude most other models at more than 3σ confidence level, including all models with spin different from zero.

Couplings to others 

One of the main tools for characterising the Higgs boson is the measurement of its production processes and decays. Thanks to growing datasets, improved analysis techniques, more accurate theoretical tools and better modeling of background processes, ATLAS and CMS have made remarkable progress in this crucial programme over the past decade. 

Using Run 1 data recorded between 2010 and 2012, the gluon-fusion and vector-boson fusion production processes were established, as were the decays to pairs of bosons (γγ, WW* and ZZ*) and to a τ-lepton pair from the combination of ATLAS and CMS data. With Run 2 data (2015–2018), both ATLAS and CMS observed the decay to a pair of b quarks. Although it is the preferred decay mode of the Higgs boson, this channel suffers from larger backgrounds and is mainly accessible in the associated production of the Higgs boson with a vector boson. The rarer production mode of the Higgs boson in association with a t-quark pair was also observed using a combination of different decay modes, providing direct proof of the Yukawa coupling between the Higgs boson and top quark. The existence of the Yukawa couplings between the Higgs boson and third-generation fermions (t, b, τ) is thus established.

Mass spectra

The collaborations also investigated the coupling of the Higgs boson to the second-generation fermions, in particular the muon. With the full Run 2 dataset, CMS reported evidence at the level of 3σ over the background-only hypothesis that the Higgs boson decays into μ⁺μ⁻, while ATLAS supported this finding with a 2σ excess. This is the first strong suggestion that the Higgs boson also couples to fermions from generations other than the third, again in accordance with the SM. Research is also ongoing to constrain the Higgs’s coupling to charm quarks via the decay H → cc. This is a much more difficult channel but, thanks to improved detectors and analysis methods, including extensive use of machine learning, ATLAS and CMS recently achieved a sensitivity beyond expectations and excluded a branching fraction of H → cc larger than O(10) times the SM prediction. The possibility that the Higgs boson’s coupling to charm is at least as large as the coupling to bottom quarks is excluded by a recent ATLAS analysis at 95% confidence level.

The accuracy of the production cross-section times decay branching-fraction measurements in the bosonic decay channels (diphoton, ZZ and WW) with the full Run 2 dataset is around 10%, allowing measurements in more restricted kinematical regions that can be sensitive to physics beyond the SM. In all probed phase-space regions, the measured cross sections are compatible with the SM expectations (data used for some of the measurements are shown in the “Mass spectra” figure).

Ten years after the discovery of a new elementary boson, considerable progress has been made toward understanding this particle

The combination of all measurements in the different production and decay processes can be used to further constrain the measured couplings between the Higgs boson and the other particles. The production cross section for vector-boson-fusion production, for example, is directly proportional to the square of the coupling strengths between the Higgs boson and W or Z bosons. A modification of these couplings will also affect the rate at which the Higgs boson decays to various final states. Assuming no contribution beyond the SM to Higgs decays and that only SM particles contribute to Higgs-boson vertices involving loops, couplings to t, b and τ are currently determined with uncertainties of around 10%, and couplings to W and Z bosons with uncertainties of about 5%. 

The relation between the mass of a particle and its coupling to the Higgs boson is as expected from the SM, in which the particle masses originate from their coupling to the Brout–Englert–Higgs field (see “Couplings” figure). These measurements thus set bounds on specific new-physics models that predict deviations of the Higgs-boson couplings from the SM. The impact of new physics at a high energy scale is also probed in effective-field-theory frameworks, introducing all possible operators that describe couplings of the Higgs boson to SM particles. No deviations from predictions are observed. 

New physics 

The Higgs boson is the only known elementary particle with spin 0. However, an extended Higgs sector is a minimal extension of the SM and is predicted by many theories, such as those based on supersymmetry. These extensions predict several neutral or charged spin-0 particles: one is the observed 125 GeV Higgs boson; the others would preferentially couple to heavier SM particles. Searches for heavier scalar (or pseudo-scalar) particles have been carried out in a variety of final states, but no evidence for such particles has been found. For example, the search for heavy scalar or pseudo-scalar particles decaying to a pair of τ leptons excludes masses up to 1–1.5 TeV. The extended Higgs sector can also include lighter scalar or pseudo-scalar particles into which the observed Higgs boson could decay. A wide range of final states has been investigated but no evidence found, setting stringent constraints on the corresponding Higgs-boson decay branching fractions.

Couplings

The Higgs sector could also play a role in linking the SM to new physics that explains the presence of dark matter in the form of new neutral, weakly interacting particles. If their mass is less than half that of the Higgs boson, the Higgs boson could decay to a pair of these neutral particles. Since the particles would be invisible in the detector, this process can be detected by observing missing transverse momentum from the Higgs boson recoiling against visible particles. The most sensitive processes are those in which the Higgs boson is produced in association with other particles: vector-boson fusion, and associated production with a vector boson or with a top-quark pair. No evidence of such decays has been found, setting upper limits on the invisible decay branching fraction of the Higgs boson at the level of 10%, and providing constraints complementary to those from direct dark-matter detection experiments.

Self-interaction

In addition to its couplings to other bosons and to fermions, the structure of the Brout–Englert–Higgs potential predicts a self-coupling of the Higgs boson that is related to electroweak symmetry breaking (see Electroweak baryogenesis). By studying Higgs-boson pair production at the LHC, it is possible to directly probe this self-coupling. 

The two main challenges of this measurement are the tiny cross section for Higgs-boson pair production (about 1000 times smaller than the production of a single Higgs boson) and the interference between processes that involve the self-coupling and those that do not. Final states with a favourable combination of the expected signal yield and signal-over-background ratio are exploited. The most sensitive channels are those with one Higgs boson decaying to a b-quark pair and the other decaying either to a pair of photons, τ leptons or b quarks. Upper limits of approximately three times the predicted cross section have been obtained with the Run 2 dataset. These searches can also be used to set constraints on the Higgs boson self-coupling relative to its SM value. 

The sensitivities achieved for Higgs-boson pair production searches with the Run 2 dataset are significantly better than expected before the start of Run 2, thanks to several improvements in object reconstruction and analysis techniques. These searches are mostly limited by the size of the dataset and thus will improve further with the Run 3 and much larger High-Luminosity LHC (HL-LHC) datasets.

Going further

Ten years after the discovery of a new elementary boson, considerable progress has been made toward understanding this particle. All measurements so far point to properties that are very consistent with the SM Higgs boson. All main production and decay modes have been observed by ATLAS and CMS, and the couplings to vector bosons and third-generation fermions are probed with 5 to 10% accuracy, confirming the pattern expected from the Brout–Englert–Higgs mechanism for electroweak symmetry breaking and the generation of the masses of elementary particles. Still, there is ample room for improvement in the forthcoming Run 3 and HL-LHC phases, to reduce the uncertainty in the coupling measurements down to a few per cent, to establish couplings to second-generation fermions (muons) and to investigate the Higgs-boson self-coupling. Improved measurements will also significantly expand the sensitivity to a possible extended Higgs sector or new dark sector. 

To reach the ultimate accuracy in the measurements of all Higgs-boson properties (including its self-coupling), to remove the assumptions in the determination of the Higgs couplings at the LHC, and to considerably extend the search for new physics in the Higgs sector, new colliders – such as an e⁺e⁻ collider and a future hadron collider – will be required.

Naturalness after the Higgs

Artwork from Peter Higgs’ Nobel diploma

When Victor Weisskopf sat down in the early 1930s to compute the energy of a solitary electron, he had no way of knowing that he’d ultimately discover what is now known as the electroweak hierarchy problem. Revisiting a familiar puzzle from classical electrodynamics – that the energy stored in an electron’s own electric field diverges as the radius of the electron is taken to zero (equivalently, as the energy cutoff of the theory is taken to infinity) – in Dirac’s recently proposed theory of relativistic quantum mechanics, he made a remarkable discovery: the contribution from a new particle in Dirac’s theory, the positron, cancelled the divergence from the electron itself and left a quantum correction to the self-energy that was only logarithmically sensitive to the cutoff. 

The same cancellation occurred in any theory of charged fermions. But when Weisskopf considered the case of charged scalar particles in 1939, the problem returned. To avoid the need for finely tuned cancellations between this quantum correction and other contributions to a scalar’s self-energy, he posited that the cutoff energy for scalars should be close to their observed self-energy, heralding the appearance of new features that would change the calculation and render the outcome “natural”. 
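In schematic one-loop form (standard estimates, not quoted in the article), the contrast Weisskopf found reads:

```latex
\delta m_e \;\sim\; \frac{3\alpha}{4\pi}\, m_e \ln\!\frac{\Lambda^2}{m_e^2}
\quad \text{(fermion: at most logarithmic in the cutoff)},
\qquad
\delta m_S^2 \;\sim\; \frac{\lambda}{16\pi^2}\,\Lambda^2
\quad \text{(scalar: quadratic in the cutoff)} .
```

Naturalness in Weisskopf’s sense then demands δm_S² ≲ m_S², i.e. that the cutoff Λ – the scale at which new features appear – lies not far above the scalar’s mass.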

Nearly 30 years would pass before Weisskopf’s prediction about scalars was put to the test. The charged pion, a pseudoscalar, suffered the very same divergent self-energy that he had computed. As the neutral pion is free from this divergence, Weisskopf’s logic suggested that the theory of charged and neutral pions should change at around 800 MeV, the cutoff scale suggested by the observed difference in their self-energies. Lo and behold, the rho meson appeared at 775 MeV. Repeating the self-energy calculation with the rho meson included, the divergence in the charged pion’s self-energy disappeared. 
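The estimate behind this can be reconstructed with standard back-of-the-envelope numbers (not given in the article): attributing the charged–neutral pion mass-squared difference to the electromagnetic self-energy gives

```latex
m_{\pi^\pm}^2 - m_{\pi^0}^2 \;\simeq\; \frac{3\alpha}{4\pi}\,\Lambda^2
\;\approx\; 1260\ \mathrm{MeV}^2
\quad \Longrightarrow \quad
\Lambda \;\approx\; 850\ \mathrm{MeV},
```

strikingly close to m_ρ = 775 MeV, the new state that cuts off the divergence.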

This same logic would predict something new. It had been known for some time that the relative self-energy between the neutral kaons KL and KS diverged due to contributions from the weak interactions in a theory containing only the known up, down and strange quarks. Matching the observed difference suggested that the theory should change at around 3 GeV. In 1974, repeating the calculation with the recently proposed charm quark included, Mary K Gaillard and Ben Lee discovered that the self-energy difference became finite, allowing them to predict that the charm quark should lie below 1.5 GeV. Its discovery at 1.2 GeV later that year promoted Weisskopf’s reasoning from an encouraging consistency check to a means of predicting new physics.

Higgs, we have a problem

Around the same time, Ken Wilson recognised that the coupling between the Higgs boson and other particles of the Standard Model (SM) leads to yet another divergent self-energy, for which the logic of naturalness implied new physics at around the TeV scale. Thus the electroweak hierarchy problem was born – not as a new puzzle unique to the Higgs, but rather the latest application of Weisskopf’s wildly successful logic (albeit one for which the answer is not yet known). 

History suggested two possibilities. As a scalar, the Higgs could only benefit from the sort of cancellation observed among fermions if there is a symmetry relating bosons and fermions, namely supersymmetry. Alternatively, it could be a light product of compositeness, just as the pions and kaons are light bound states of the strong interactions. These solutions to the hierarchy problem came to dominate expectations for physics beyond the SM, with a sharp target – the TeV scale – motivating successive generations of collider experiments. Indeed, when the physics case for the LHC was first developed in the mid-1980s, it was thought that new particles associated with supersymmetry or compositeness would be much easier to discover than the Higgs itself. But while the Higgs was discovered, no signs of supersymmetry or compositeness were to be found.

In the meantime, other naturalness problems were brewing. The vacuum energy – Einstein’s infamous cosmological constant – suffers a divergence of its own, and even the finite contributions from the SM are many orders of magnitude larger than the observed value. Although natural expectations for the cosmological constant fail, an entirely different line of reasoning seems to succeed in its place. To observe a small cosmological constant requires observers, and observers can presumably arise only if gravitationally bound structures are able to form. As Steven Weinberg and others observed in the 1980s, such anthropic reasoning leads to a prediction that is remarkably close to the value ultimately measured in 1998. To have predictive power, this requires a multitude of possible universes across which the cosmological constant varies; only the ones with sufficiently small values of the cosmological constant produce observers to bear witness.

The electroweak hierarchy problem

An analogous argument might apply to the electroweak hierarchy problem: the nuclear binding energy is no longer sufficient to stabilise the neutron within typical nuclei if the Higgs vacuum expectation value (VEV) is increased well above its observed value. If the Higgs VEV varies across a landscape of possible universes while its couplings to fermions are kept fixed, only universes with sufficiently small values of the Higgs VEV would lead to complex atoms and, presumably, observers. Although anthropic reasoning for the hierarchy problem requires stronger assumptions than for the cosmological-constant problem, its compatibility with null results at the LHC is enough to raise questions about the robustness of natural reasoning. 

Amidst all of this, another proposed scalar particle entered the picture. The observed homogeneity and isotropy of the universe point to a period of exponential expansion of spacetime in the early universe driven by the inflaton. While the inflaton may avoid naturalness problems of its own, the expansion of spacetime and the quantum fluctuations of fields during inflation lead to qualitatively new effects that are driving new approaches to the hierarchy problem at the intersection of particle physics, cosmology and gravitation.

Perhaps the most prominent of these new approaches came, surprisingly enough, from a failed solution to the cosmological constant problem. Around the same time as the first anthropic arguments for the cosmological constant were taking form, Laurence Abbott proposed to “relax” the cosmological constant from a naturally large value by the evolution of a scalar field in the early universe. Abbott envisioned the scalar evolving along a sloping, bumpy potential, much like a marble rolling down a wavy marble run. As it did so, this scalar would decrease the total value of the cosmological constant until it reached the last bump before the cosmological constant turned negative. Although the universe would crunch away into nothingness if the scalar evolved to negative values of the cosmological constant, it could remain poised at the last bump for far longer than the age of the observed universe. 

Despite the many differences among the new approaches, they share a common tendency to leave imprints on the Higgs boson

While this fails for the cosmological constant (the resulting metastable universe is largely devoid of matter), analogous logic succeeds for the hierarchy problem. As Peter Graham, David Kaplan and Surjeet Rajendran pointed out in 2015, a scalar evolving down a potential in the early universe can also be used to relax the Higgs mass from naturally large values. Of course, it needs to stop close to the observed mass. But something interesting happens when the Higgs mass-squared passes from positive values to negative values: the Higgs acquires a VEV, which gives mass to quarks, which induces bumps in the potential of a particular type of scalar known as an axion (proposed to explain the unreasonably good conservation of CP symmetry by the strong interactions). So if the relaxing scalar is like an axion – a relaxion, you might say – then it will encounter bumps in its potential when it relaxes the Higgs mass to small values. If the relaxion is rolling during an inflationary period, the expansion of spacetime can provide the “friction” necessary for the relaxion to stop when it hits these bumps and set the observed value of the weak scale. The effective coupling between the relaxion and the Higgs that induces bumps in the relaxion potential is large enough to generate a variety of experimental signals associated with a new, light scalar particle that mixes with the Higgs.

The success of the relaxion hypothesis in solving the hierarchy problem hinges on an array of other questions involving gravity. Whether the relaxion potential can remain sufficiently smooth over the vast trans-Planckian distances in field space required to set the value of the weak scale is an open question, one that is intimately connected to the fate of global symmetries in a theory of quantum gravity (itself the target of active study in what is known as the Swampland programme).

Models abound 

In the meantime, the recognition that cosmology might play a role in solving the hierarchy problem has given rise to a plethora of new ideas. For instance, in Raffaele D’Agnolo and Daniele Teresi’s recent paradigm of “sliding naturalness”, the Higgs is coupled to a new scalar whose potential features two minima. In the true minimum, the cosmological constant is large and negative, and the universe would crunch away into oblivion if it ended up in this vacuum. In the second, local minimum, the cosmological constant is safely positive (and can be made compatible with the small observed value of the cosmological constant by Weinberg’s anthropic selection). The Higgs couples to this scalar in such a way that a large value of the Higgs VEV destabilises the “safe” minimum. During the inflationary epoch, only universes with suitably small values of the Higgs VEV can grow and expand, while those with large values of the Higgs VEV crunch away. A second scalar coupled analogously to the Higgs can explain why the VEV is small but non-zero. Depending on how these scalars are coupled to the Higgs, experimental signatures range from the same sort of axion-like signals arising from the relaxion, to extra Higgs bosons at the LHC.

Alternatively, in the paradigm of “Nnaturalness” proposed by Nima Arkani-Hamed and others, the multitude of SMs over which the Higgs mass varies occur in one universe, rather than many. The fact that the universe is predominantly composed of one copy of the SM with a small Higgs mass can be explained if inflation ends and reheats the universe through the decay of a single particle. If this particle is sufficiently light, it will preferentially reheat the copy of the SM with the smallest non-zero value of the Higgs VEV, even if it couples symmetrically to each copy. The sub-dominant energy density deposited in other copies of the SM leaves its mark in the form of dark radiation susceptible to detection by the Simons Observatory or upcoming CMB-S4 facility. 

Finally, Gian Giudice, Matthew McCullough and Tevong You have recently shown that inflation can help to understand the electroweak hierarchy problem by analogy with self-organised criticality. Just as adding individual grains of sand to a sandpile induces avalanches over diverse length scales – a hallmark of critical behaviour, obtained without tuning parameters – so too can inflation drive scalar fields close to critical points in their potential. This may help to understand why the observed Higgs mass lies so close to the boundary between the unbroken and broken phases of electroweak symmetry without fine tuning.

Going the distance 

Underlying Weisskopf’s natural reasoning is a long-standing assumption about relativistic theories of quantum mechanics: physics at short distances (the ultraviolet, or UV) is decoupled from physics at long distances (the infrared, or IR), making it challenging to apply a theory involving a large energy scale to a much smaller one without fine tuning. This suggests that loopholes may be found in theories that mix the UV and the IR, as is known to occur in quantum gravity. 

While the connection between this type of UV/IR mixing and the mass of the Higgs remains tenuous, there are encouraging signs of progress. For instance, Panagiotis Charalambous, Sergei Dubovsky and Mikhail Ivanov recently used it to solve a naturalness problem involving so-called “Love numbers” that characterise the tidal response of black holes. The surprising influence of quantum gravity on the parameter space of effective field theories implied by the Swampland programme also has a flavour of UV/IR mixing to it. And UV/IR mixing may even provide a new way to understand the apparent violation of naturalness by the cosmological constant.

We have come a long way since Weisskopf first set out to understand the self-energy of the electron. The electroweak hierarchy problem is not the first of its kind, but rather the one that remains unresolved. The absence of supersymmetry or compositeness at the TeV scale beckons us to search for new solutions to the hierarchy problem, rather than turning our backs on it. In the decade since the discovery of the Higgs, this search has given rise to a plethora of novel approaches, building new bridges between particle physics, cosmology and gravity along the way. Despite the many differences among these new approaches, they share a common tendency to leave imprints on the Higgs boson. And so, as ever, we must look to experiment to show the way. 

 

The bumpy ride to the bump

Welding a dipole-magnet interconnect

19 September 2008: the LHC was without beam because of a transformer problem. The hardware commissioning team were finishing off powering tests of the main dipole magnet circuit in sector 3–4 when, at 11:18, an electrical fault resulted in considerable physical damage, the release of helium, and debris in a long section of the machine. In the control room, the alarms came swamping in. The cryogenics team grappled to make sense of what their systems were telling them, and there was frantic effort to interpret the data from the LHC’s quench protection system. I called LHC project leader Lyn Evans: “looks like we’ve got a serious problem here”.

Up to this point, 2008 had been non-stop but things were looking good. First circulating beam had been established nine days earlier in a blaze of publicity. Beam commissioning had started in earnest, and the rate of progress was catching some of us by surprise.

It is hard to describe how much of a body blow the sector 3–4 incident was to the community. In the following days, as the extent of the damage became clearer, I remember talking to Glyn Kirby of the magnet team and being aghast when he observed that “it’s going to take at least a year to fix”. He was, of course, right.

What followed was a truly remarkable effort by everyone involved. A total of 53 cryomagnets (39 dipoles and 14 quadrupoles) covering most of the affected 700 m-long zone were removed and brought to the surface for inspection, cleaning and repair or reuse. Most of the removed magnets were replaced by spares. All magnets, whatever their origin, had to undergo full functional tests before being installed.

CERN Control Centre on 20 November 2009

Soot in the vacuum pipes, which had been found to extend beyond the zone of removed magnets, was cleared out using endoscopy and mechanical cleaning. The complete length of the beam pipes was inspected for contamination by flakes of multilayer insulation, which were removed by vacuum cleaning. About 100 plug-in modules installed in the magnet interconnects were replaced. 

Following an in-depth analysis of the root causes of the incident, and an understanding of the risks posed by the joints in the magnet interconnects, a new worst-case Maximum Credible Incident was adopted and a wide range of recommendations and mitigation measures were proposed and implemented. These included a major upgrade of the quench protection system, new helium pressure-release ports, and new longitudinal restraints for selected magnets. 

One major consequence of the 19 September incident was the decision to run at a lower-than-design energy until full consolidation of the joints had been performed – hence the adoption of an operational beam energy of 3.5 TeV for Run 1. Away from the immediate recovery, other accelerator teams took the opportunity to consolidate and improve controls, hardware systems, instrumentation, software and operational procedures. As CMS technical coordinator Austin Ball famously noted, come the 2009 restart, CMS, at least, was in an “unprecedented state of readiness”. 

Take two

Beam was circulated again on 20 November 2009. Progress thereafter was rapid. Collisions with stable-beam conditions were quickly established at 450 + 450 GeV, and a ramp to the maximum beam energy at the time (1.18 TeV, compared to the Tevatron’s 0.98 TeV) was successfully performed on 30 November. The first ramps were a lot of fun – there’s a lot going on behind the scenes, including compensation of significant field dynamics in the superconducting dipoles. Cue much relief when beam made it up the ramp for the first time. All beam-based systems were at least partially commissioned and LHC operations started a long process to master the control of a hugely complex machine. Following continued deployment of the upgraded quench protection system during the 2009 year-end technical stop, commissioning with beam started again in the new year. Progress was good, with first colliding beams at 3.5 + 3.5 TeV being established under the watchful eye of the media on 30 March 2010. With scheduled collisions delayed by two unsuccessful ramps, it was a gut-knotting experience in the control room. Nonetheless, we finally got there about three hours late. “Stable Beams” was declared, the odd beer was had, and we were off. 

2010 was then essentially devoted to commissioning and establishing confidence in operational procedures and the machine-protection system, before starting to increase the number of bunches in the beam. In June the decision was taken to go for bunches with nominal population (~1.2 × 10¹¹ protons), which involved another extended commissioning period. Up to this point, in deference to machine-protection concerns, only around one fifth of the nominal bunch population had been used. To further increase the number of bunches, the move to a bunch separation of 150 ns was made and the crossing-angle bumps spanning the experiments’ insertion regions were deployed. After a carefully phased increase in total intensity, the proton run finished with beams of 368 bunches of around 1.2 × 10¹¹ protons per bunch, and a peak luminosity of 2.1 × 10³² cm⁻²s⁻¹.
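Those end-of-2010 figures hang together, as a rough estimate shows. The sketch below uses the textbook luminosity formula for round, head-on beams; the β* of 3.5 m and normalised emittance of ~2.5 μm are illustrative assumptions (not quoted in the text), and the crossing-angle reduction factor is ignored.

```python
# Rough peak-luminosity estimate for the end of the 2010 proton run.
# beta* and the emittance are assumed illustrative values; round beams,
# head-on collisions, no crossing-angle reduction factor.
import math

f_rev   = 11245          # LHC revolution frequency [Hz]
n_b     = 368            # colliding bunches
N       = 1.2e11         # protons per bunch
gamma   = 3.5e3 / 0.938  # Lorentz factor at 3.5 TeV
eps_n   = 2.5e-6         # normalised transverse emittance [m rad] (assumed)
beta_st = 3.5            # beta* at the interaction point [m] (assumed)

sigma = math.sqrt(eps_n * beta_st / gamma)         # beam size at the IP [m]
L = f_rev * n_b * N**2 / (4 * math.pi * sigma**2)  # luminosity [m^-2 s^-1]
print(f"sigma* ~ {sigma*1e6:.0f} um, L ~ {L*1e-4:.1e} cm^-2 s^-1")
```

With these assumptions the estimate lands at about 2 × 10³² cm⁻²s⁻¹, consistent with the quoted peak.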

LHC operators on 30 November 2009

Looking back, 2010 was a profoundly important year for a chastened and cautious accelerator sector. The energy stored in the magnets had demonstrated its destructive power, and it was clear from the start that the beam was to be treated with the utmost respect; safe exploitation of the machine was necessarily an underlying principle for all that followed. The LHC became magnetically and optically well understood (judged by the standards of the time – impressively surpassed in later years), and was stunningly magnetically reproducible. The performance of the collimation system was revelatory, accomplishing its dual role of cleaning and protection impeccably throughout the full cycle. The injectors did a great job throughout, reliably providing high-intensity bunches with unexpectedly low transverse emittances.

2010 finished with a switch from protons to operations with lead ions for the first time. Diligent preparation and the experience gained with protons allowed a rapid execution of the ion commissioning programme and Stable Beams for physics was declared on 7 November. 

Homing in 

The beam energy remained at 3.5 TeV in 2011, with the bunch spacing switched from 75 to 50 ns. A staged ramp in the number of bunches then took place up to a maximum of 1380 bunches, and performance was further increased by reducing the transverse size of the beams delivered by the injectors and by gently increasing the bunch population. The result was a peak luminosity of 2.4 × 10³³ cm⁻²s⁻¹ and some healthy delivery rates that topped 90 pb⁻¹ in 24 hours. The next step-up in peak luminosity followed a reduction in the β* parameter in ATLAS and CMS from 1.5 to 1 m (the transverse beam size at the interaction point is directly related to the value of β*). Along with further gentle increases in bunch population, this produced a peak luminosity of 3.8 × 10³³ cm⁻²s⁻¹ – well beyond expectations at the start of the year. Coupled with a concerted effort to improve availability, the machine went on to deliver a total of around 5.6 fb⁻¹ for the year to both ATLAS and CMS. 
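The scaling behind the β* squeeze can be sketched with textbook relations (ε here is the transverse emittance): the beam size at the interaction point and the resulting luminosity scale as

```latex
\sigma^* = \sqrt{\varepsilon\,\beta^*},
\qquad
\mathcal{L} \;\propto\; \frac{1}{\sigma^*_x\,\sigma^*_y} \;\propto\; \frac{1}{\beta^*},
```

so reducing β* from 1.5 m to 1 m at fixed emittance buys a factor of 1.5 in peak luminosity, before the further gains from increased bunch population.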

Some of the first events recorded by ATLAS and CMS

Meanwhile, excitement was building in the experiments. A colloquium at the end of 2011 showed a strengthening significance of an excess at around 125 GeV. The possible discovery of the Higgs boson in 2012 was recognised, and corresponding LHC running scenarios were discussed in depth – first at the Evian workshop (where we heard the plea from CMS spokesperson Guido Tonelli to “gimme 20” [inverse femtobarns]) and then crystallised at the 2012 Chamonix workshop, where CERN Director-General Rolf Heuer stated that, as a top priority, the LHC machine must produce enough integrated luminosity to allow the ATLAS and CMS experiments an independent discovery of the Higgs before the start of long shutdown 1 (LS1). Soon after the workshop, Council president Michel Spiro sent a message to CERN’s member states: “After a brilliant year in 2011, 2012 should be historic, with either the discovery of the Standard Model Higgs boson or its exclusion.”

An important decision concerned the energy. A detailed risk evaluation concluded that the probability of a splice burn-out at 4 TeV per beam in 2012 was equal to, or less than, the probability that had been estimated in 2011 for 3.5 TeV per beam. The decision to run at 4 TeV helped in a number of ways: higher cross-sections for Higgs-boson production, reduced emittance and the possibility for a further reduction of β*.

Discovery year 

And so 2012 was to be a production year at an increased beam energy of 4 TeV. The choice was made to continue to exploit 50 ns bunch spacing, which offered the advantages of less electron cloud and higher bunch charge compared with 25 ns, and to run with 1380 bunches. Based on the experience of 2011, it was also decided to operate with tight collimator settings, enabling a more aggressive squeeze to β* = 0.6 m. The injectors continued to provide exceptional-quality beam and routinely delivered 1.7 × 10¹¹ protons per bunch. The peak luminosity quickly rose to its maximum for the year, followed by determined and long-running attempts to improve peak performance. Beam instabilities, although never debilitating, were a recurring problem and there were phases when they cut into operational efficiency. Nonetheless, by the middle of the year another 6 fb⁻¹ had been delivered to both ATLAS and CMS. Combined with the 2011 dataset, this paved the way for the announcement of the Higgs-boson discovery. 

After a brilliant year in 2011, 2012 should be historic, with either the discovery of the Standard Model Higgs boson or its exclusion

2012 was a very long operational year and included the extension of the proton–proton run until December to allow the experiments to maximise their 4 TeV data before LS1. Integrated-luminosity rates were healthy at around 1 fb⁻¹ per week, and the total for the year came in at about 23 fb⁻¹ to both ATLAS and CMS. Run 1 finished with four weeks of proton–lead operations at the start of 2013.

It is impossible to do justice to the commitment and effort that went into establishing, and then maintaining, the complex operational performance of the LHC that underpinned the Higgs-boson discovery: RF, power converters, collimation, injection and beam-dump systems, vacuum, transverse feedback, machine protection, cryogenics, magnets, quench detection and protection, accelerator physics, beam instrumentation, beam-based feedbacks, controls, databases, software, survey, technical infrastructure, handling engineering, access, radiation protection plus material science, mechanical engineering, laboratory facilities … and the coordination of all that! 


Stepping into the spotlight

François Englert and Peter Higgs

With the boson confirmed, speculation inevitably grew about the 2012 Nobel Prize in Physics. The prize is traditionally announced on the Tuesday of the first full week in October, at about midday in Stockholm. As it approaches, a highly selective epidemic breaks out: Nobelitis, a state of nervous tension among scientists who crave Nobel recognition. Some of the larger egos will have previously had their craving satisfied, only perhaps to come down with another fear: will I ever be counted as one with Einstein? Others have only a temporary remission, before suffering a renewed outbreak the following year.

Three people at most can share a Nobel, and at least six had ideas like Higgs’s in the halcyon days of 1964 when this story began. Adding to the conundrum, the discovery of the boson involved teams of thousands of physicists from all around the world, drawn together in a huge cooperative venture at CERN, using a machine that is itself a triumph of engineering. 

The 2012 Nobel Prize in Physics was announced on Tuesday 9 October and went to Serge Haroche and David Wineland for taking the first steps towards a quantum computer. Two days later, I went to Edinburgh to give a colloquium and met Higgs for a coffee beforehand. I asked him how he felt now that the moment had passed, at least for this year. “I’m enjoying the peace and quiet. My phone hasn’t rung for two days,” he remarked. 

That the sensational discovery of 2012 was indeed of Higgs’s boson was, by the summer of 2013, beyond dispute. That Higgs was in line for a Nobel prize also seemed highly likely. Higgs himself, however, knew from experience that in the Stockholm stakes, nothing is guaranteed. 

Back in 1982, at dawn on 5 October in the Midwest and the eastern US, preparations were in hand for champagne celebrations in three departments at two universities. At Cornell, the physics department hoped they would be honouring Kenneth Wilson, while over in the chemistry department their prospect was Michael Fisher. In Chicago, the physicists’ hero was to be Leo Kadanoff. Two years earlier the trio had shared the Wolf Prize, the scientific analogue of the Golden Globes to the Nobel’s Oscars, for their work on critical phenomena connected with phase transitions, fuelling speculation that a Nobel would soon follow. At the appointed hour in Stockholm, the chair of the awards committee announced that the award was to Wilson alone. The hurt was especially keen in the case of Michael Fisher, whose experience and teaching about phase transitions, illuminating the subtle changes in states of matter such as melting ice and the emergence of magnetism, had inspired Wilson, five years his junior. The omission of Kadanoff and Fisher was a sensation at the time and has remained one of the intrigues of Nobel lore.

Fisher’s agony was no secret to Peter Higgs. As undergraduates they had been like brothers and remained close friends for more than 60 years. Indeed, Fisher’s influence was not far away in July 1964, for it was while examining how some ideas from statistical mechanics could be applied to particle physics that Higgs had the insight that would become the capstone to the theory of particles and forces half a century later. For this he was to share the 2004 Wolf Prize with Robert Brout (who sadly died in 2011) and François Englert – just as Fisher, Kadanoff and Wilson had shared this prize in 1980. Then, as October approached in 2013, Higgs became a hot favourite at least to share the Nobel Prize in Physics, and the bookmakers would only take bets at extreme odds-on. 

Time to escape 

In 2013, 8 October was the day when the Nobel decision would be announced. Higgs’s experiences the year before had helped him to prepare: “I decided not to be at home when the announcement was made with the press at my door; I was going to be somewhere else.” His first plan was to disappear into the Scottish Highlands by train, but he decided it was too complicated, and that he could hide equally well in Edinburgh. “All I would have to do is go down to Leith early enough. I knew the announcement would be around noon so I would leave home soon after 11, giving myself a safe margin, and have an early lunch in Leith about noon.” 

ATLAS and CMS physicists in Building 40 on 8 October 2013

Richard Kenway, the Tait Professor of Mathematical Physics at Edinburgh and one of the university’s vice principals, confirmed the tale. “That was what we were all told, and he completely convinced us. Right up to the actual moment when we were sitting waiting for the [Nobel] announcement, we thought he had disappeared off somewhere into the Highlands.” Some newspapers got the fake news from the department, and one reporter even went up into the Highlands to look for him.

As scientists and journalists across the world were glued to the live broadcast, the Nobel committee was still struggling to reach the famously reclusive physicist. The announcement of his long-awaited crown was delayed by about half an hour until they decided they could wait no longer. Meanwhile, Peter Higgs sat at his favourite table in The Vintage, a seafood bar in Henderson Street, Leith, drinking a pint of real ale and considering the menu. As the committee announced that it had given the prize to François Englert and Peter Higgs “for the theoretical discovery of a mechanism that contributes to our understanding of the origin of mass of subatomic particles, and which recently was confirmed through the discovery of the predicted fundamental particle, by the ATLAS and CMS experiments at CERN’s Large Hadron Collider”, phones started going off in the Edinburgh physics department. 

Higgs finished his lunch. It seemed a little early to head home, so he decided to look in at an art exhibition. At about three o’clock he was walking along Heriot Row in Edinburgh, heading for his flat nearby, when a car pulled up near the Queen Street Gardens. “A lady in her 60s, the widow of a high-court judge, got out and came across the road in a very excited state to say, ‘My daughter phoned from London to tell me about the award’, and I said, ‘What award?’ I was joking of course, but that’s when she confirmed that I had won the prize. I continued home and managed to get in my front door with no more damage than one photographer lying in wait.” It was only later that afternoon that he finally learned from the radio news that the award was to himself and Englert. 

Suited and booted 

On arrival in Stockholm in December 2013, after a stressful two-day transit in London, Higgs learned that one of the first appointments was to visit the official tailor. The costume was to be formal morning dress in the mid-19th-century style of Alfred Nobel’s time, including elegant shoes adorned with buckles. As Higgs recalled, “Getting into the shirt alone takes considerable skill. It was almost a problem in topology.” The demonstration at the tailor’s was hopeless. Higgs was tense and couldn’t remember the instructions. On the day of the ceremony, fortunately, “I managed somehow.” Then there were the shoes. The first pair were too small, but when he tried bigger ones, they wouldn’t fit comfortably either. He explained, “The problem is that the 19th-century dress shoes do not fit the shape of one’s foot; they were rather pointy.” On the day of the ceremony both physics laureates had a crisis with their shoes. “Englert called my room: ‘I can’t wear these shoes. Can we agree to wear our own?’ So we did. We were due to be the first on the stage and it must have been obvious to everyone in the front row that we were not wearing the formal shoes.” 

Robert Brout in spirit completed a trinity of winners

On the afternoon of 10 December, nearly 2000 guests filled the Stockholm Concert Hall to see 12 laureates receive their awards from King Carl XVI Gustaf of Sweden. They had been guided through the choreography of the occasion earlier, but on the day itself, performing before the throng in the hall, there would be first-night nerves for this once-in-a-lifetime theatre. Winners of the physics prize would be called to receive their awards first, while the others watched and could see what to expect when they were named. The scenery, props and supporting cast were already in place. These included former winners dressed in tail suits and proudly wearing the gold button stud that signifies their membership of this unique club. Among them were Carlo Rubbia, discoverer of the W and Z particles, who instigated the experimental quest for the boson and won the prize in 1984; Gerard ’t Hooft, who built on Higgs’s work to complete the theoretical description of the weak nuclear force and won in 1999; and 2004 winner Frank Wilczek, who had built on his own prize-winning work to identify the two main pathways by which the Higgs boson had been discovered.

Peter Higgs in July 2012

After a 10-minute oration by the chair of the Nobel Foundation and a musical interlude, Lars Brink, chairman of the Nobel Committee for Physics, managed to achieve one of the most daunting challenges in science pedagogy, successfully addressing both the general public in the hall and the assembled academics, including laureates from other areas of science. The significance of what we were celebrating was beyond doubt: “With discovery of the Higgs boson in 2012, the Standard Model of physics was complete. It has been proved that nature follows precisely that law that Brout, Englert and Higgs created. This is a fantastic triumph for science,” Brink announced. He also introduced a third name, that of Englert’s collaborator, Robert Brout. In so doing, he made an explicit acknowledgement that Brout in spirit completed a trinity of winners. 

Brink continued with his summary history of how their work and that of others established the Standard Model of particle physics. Seventeen months earlier the experiments at the LHC had confirmed that the boson is real. What had been suspected for decades was now confirmed forever. The final piece in the Standard Model of particle physics had been found. The edifice was robust. Why this particular edifice is the one that forms our material universe is a question for the future. Brink now made the formal invitation for first Englert and then Higgs to step forward to receive their share of the award.

Higgs, resplendent in his formal suit, and comfortable in his own shoes, rose from his seat and prepared to walk to centre-stage. Forty-eight years since he set out on what would be akin to an ascent of Everest, Higgs had effectively conquered the Hillary step – the final challenge before reaching the peak – on 4 July 2012 when the existence of his boson was confirmed. Now, all that remained while he took nine steps to reach the summit was to remember the choreography: stop at the Nobel Foundation insignia on the carpet; shake the king’s hand with your right hand while accepting the Nobel prize and diploma with the other. Then bow three times, first to the king, then to the bust of Alfred Nobel at the rear of the stage, and finally to the audience in the hall.

Higgs successfully completed the choreography and accepted his award. As a fanfare of trumpets sounded, the audience burst into applause. Higgs returned to his seat. The chairman of the chemistry committee took the lectern to introduce the winners of the chemistry prize. To his relief, Higgs was no longer in the spotlight.

All in a name 

The saga of Higgs’s boson had begun with a classic image – a lone genius unlocking the secrets of nature through the power of human thought. The fundamental nature of Higgs’s breakthrough had been immediately clear to him. However, no one, least of all Higgs, could have anticipated that it would take nearly half a century and several false starts to get from his idea to a machine capable of finding the particle. Nor did anyone envision that this single “good idea” would turn a shy and private man into a reluctant celebrity, accosted by strangers in the supermarket. Some even suggested that the reason why the public became so enamoured with Higgs was the solid ordinariness of his name, one syllable long, unpretentious, a symbol of worthy Anglo-Saxon labour. 

Elusive: How Peter Higgs Solved the Mystery of Mass

In 2021, nine years after the discovery, we were reminiscing about the occasion when, to my surprise, Higgs suddenly remarked that it had “ruined my life”. To know nature through mathematics, to see your theory confirmed, to win the plaudits of your peers and join the exclusive club of Nobel laureates: how could all this equate with ruin? To be sure I had not misunderstood, I asked again the next time we spoke. He explained: “My relatively peaceful existence was ending. I don’t enjoy this sort of publicity. My style is to work in isolation, and occasionally have a bright idea.”   

  • This is an edited extract from Elusive: How Peter Higgs Solved the Mystery of Mass, by Frank Close, published on 14 June (Basic Books, US) and 7 July (Allen Lane, UK)

Electroweak baryogenesis

Simulation of Higgs-bubble nucleation

Precision measurements of the Higgs boson open the possibility to explore the moment in cosmological history when electroweak symmetry broke and elementary particles acquired mass. Ten years after the Higgs-boson discovery, it remains a possibility that the electroweak phase transition happened as a rather violent process, with a large departure from thermal equilibrium, via the nucleation and collision of Higgs bubbles. This is a fascinating scenario for three reasons: it provides a framework for explaining the matter–antimatter asymmetry of the universe; it predicts the existence of at least one new weak-scale scalar field and thus is testable at colliders; and it would leave a unique signature of gravitational waves detectable by the future space-based interferometer LISA.

One major failure of the Standard Model (SM) is its inability to explain the baryon-to-photon ratio in the universe: η ≈ 6 × 10⁻¹⁰. Measurements of this ratio from two independent approaches – anisotropies in the cosmic microwave background and the abundances of light primordial elements – are in beautiful agreement. In a symmetric universe, however, the prediction for η is a billion times smaller; big-bang nucleosynthesis could not have occurred and structures could not have formed. This results from strong annihilations between nucleons and antinucleons, which deplete their number densities very efficiently. Only in a universe with a primordial asymmetry between nucleons and antinucleons can these annihilations be prevented. There are many different models to explain such “baryogenesis”. Interestingly, however, the Higgs boson plays a key role in essentially all of them. 
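The agreement between the two determinations can be illustrated with a back-of-the-envelope conversion. The numbers below (the Planck baryon-density parameter and the standard conversion factor) are assumed values for illustration, not quantities taken from this article.

```python
# Illustrative check: the CMB-inferred baryon density Omega_b h^2 ~ 0.0224
# (Planck 2018) converts to a baryon-to-photon ratio via the standard
# relation eta = 2.74e-8 * Omega_b h^2, landing near the quoted 6e-10.
omega_b_h2 = 0.0224          # baryon density parameter (assumed value)
eta = 2.74e-8 * omega_b_h2   # standard conversion factor
print(f"eta ~ {eta:.2e}")    # of order 6e-10
```

The same value of η, inferred independently from primordial deuterium and helium abundances, is what makes the agreement “beautiful”.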

Accidental symmetry

It is worth recalling how baryon number B gets violated by purely SM physics. B is an “accidental” global symmetry in the SM: there are no B-violating couplings in the SM Lagrangian. But the chiral nature of electroweak interactions, combined with the non-trivial topology of the SU(2) gauge theory, results in non-perturbative, B-violating processes. Technically, these are induced by extended gauge-field configurations called sphalerons, whose energy is proportional to the value of the Brout–Englert–Higgs (BEH) field. The production of these configurations is exponentially suppressed at zero temperature, such that B is an extremely good symmetry today. However, at high temperature, and in particular at 100 GeV or so, when the electroweak symmetry is unbroken, baryon number is violated at a high rate, as there is no energy cost. Since both baryons and antibaryons are created by sphalerons, charge–parity (CP) violation is needed. Indeed, as enunciated by Sakharov in 1967, a theory of baryogenesis requires three main ingredients: B violation, CP violation and a departure from equilibrium, otherwise the baryon number will relax to zero. 
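The scale of the suppression can be sketched numerically. Taking the zero-temperature sphaleron energy to be roughly 9 TeV (an approximate textbook value, proportional to the BEH field value v ≈ 246 GeV; the numbers here are illustrative, not from this article), the Boltzmann factor exp(−E_sph/T) shows why B violation is negligible in the broken phase but rapid where the symmetry is restored:

```python
import math

# Illustrative estimate: in the broken phase the sphaleron rate carries
# a Boltzmann factor exp(-E_sph/T), with E_sph ~ 9 TeV at T = 0.
# As the BEH field value -> 0 in the symmetric phase, the energy cost
# disappears and the suppression goes away.
E_sph = 9000.0  # GeV, approximate zero-temperature sphaleron energy

for T in (100.0, 1000.0, 9000.0):  # temperatures in GeV
    suppression = math.exp(-E_sph / T)
    print(f"T = {T:7.0f} GeV  ->  exp(-E_sph/T) ~ {suppression:.1e}")
```

Even at T = 100 GeV the broken-phase factor is astronomically small, which is why B is such a good symmetry today.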

The conclusion is that baryogenesis must take place either through a mechanism occurring before the electroweak phase transition (necessitating new sources of B violation beyond the SM), or through a mechanism, occurring precisely at the electroweak phase transition, in which B violation relies exclusively on SM sphalerons (provided the transition is sufficiently out-of-equilibrium and CP-violating). The most emblematic example in the first category is leptogenesis, where a lepton asymmetry is produced from the decay of heavy right-handed neutrinos and “reprocessed” into a baryon asymmetry by sphalerons. This is a popular mechanism motivated by the mystery of the origin of neutrino masses, but is difficult to test experimentally. The second category, electroweak baryogenesis, involves electroweak-scale physics only and is therefore testable at the LHC.

Electroweak baryogenesis requires a first-order electroweak phase transition to provide a large departure from thermal equilibrium, otherwise the baryon asymmetry is washed out. A prime example of this type of phase transition is boiling water, where bubbles of gas expand into the liquid phase. During a first-order electroweak phase transition, symmetric and broken phases coexist until bubbles percolate and the whole universe is converted into the broken phase (see “Bubble nucleation” image). Inside the bubble, the BEH field has a non-zero vacuum expectation value; outside the bubble, the electroweak symmetry is unbroken. As the wall is passing, chiral fermions in the plasma scatter off the Higgs at the phase interface. If some of these interactions are CP-violating, a chiral asymmetry will develop inside and in front of the bubble wall. The resulting excess of left-handed fermions in front of the bubble wall can be converted into a net baryon number by the sphalerons, which are unsuppressed in the symmetric phase in front of the bubble. Once inside the bubble, this baryon number is preserved as sphalerons are frozen there. In this picture, the baryon asymmetry is determined by solving a diffusion system of coupled differential equations.
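The diffusion of the chiral asymmetry ahead of the wall can be caricatured very simply. In this toy picture (a sketch, not the actual coupled transport equations referred to above; all numbers are made up for illustration), an asymmetry injected at the wall spreads into the symmetric phase over a diffusion length D/v_w set by the diffusion constant D and the wall velocity v_w, decaying roughly exponentially with distance:

```python
import math

# Toy sketch of the diffusion tail ahead of the bubble wall
# (illustrative only): a chiral asymmetry sourced at the wall (z = 0)
# spreads a distance ~ D / v_w into the symmetric phase, where
# unsuppressed sphalerons can convert it into baryon number.
D = 1.0            # diffusion constant (arbitrary units, assumed)
v_w = 0.1          # wall velocity as a fraction of c (assumed)
L_diff = D / v_w   # diffusion length ahead of the wall

for z in (0.0, 5.0, 10.0, 20.0):  # distance ahead of the wall
    profile = math.exp(-z / L_diff)
    print(f"z = {z:5.1f}  ->  relative asymmetry ~ {profile:.2f}")
```

A slower wall (smaller v_w) leaves the asymmetry exposed to the symmetric-phase sphalerons for longer, which is one reason the wall velocity is a key parameter in the full diffusion-equation treatment.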

New scalar required

The nature of the electroweak phase transition in the SM is well known: for a 125 GeV Higgs boson, it is a smooth crossover with no departure from thermal equilibrium. This prevents the possibility of electroweak baryogenesis. It is, however, easy to modify this prediction to produce a first-order transition by adding an electroweak-scale singlet scalar field that couples to the Higgs boson, as predicted in many SM extensions. Notably, this is a general feature of composite-Higgs models, where the Higgs boson emerges as a “pseudo Nambu–Goldstone” boson of a new strongly-interacting sector. 
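The role of the added scalar can be illustrated with the standard textbook parameterisation of a finite-temperature effective potential, V(h, T) = D(T² − T₀²)h² − ETh³ + (λ/4)h⁴, where the cubic term creates the barrier between the symmetric and broken minima that makes the transition first order. The couplings below are purely illustrative, not a fit to any specific model:

```python
# Schematic finite-temperature potential (standard textbook form):
#   V(h, T) = D*(T^2 - T0^2)*h^2 - E*T*h^3 + (lam/4)*h^4
# The cubic -E*T*h^3 term produces a barrier between h = 0 and the
# broken minimum; a singlet coupled to the Higgs effectively enhances
# this barrier. All couplings below are made-up illustrative numbers.
D_c, E_c, lam, T0 = 0.1, 0.02, 0.1, 100.0

def V(h, T):
    return D_c * (T**2 - T0**2) * h**2 - E_c * T * h**3 + 0.25 * lam * h**4

# At the critical temperature the two minima are degenerate; for this
# form the standard result is T_c^2 = T0^2 / (1 - E^2/(lam*D)).
T_c = T0 / (1.0 - E_c**2 / (lam * D_c)) ** 0.5
h_broken = 2.0 * E_c * T_c / lam  # broken minimum at T_c

print(f"T_c ~ {T_c:.1f}, broken minimum at h ~ {h_broken:.1f}")
print(f"V(0, T_c) = {V(0.0, T_c):.3f}, V(h_broken, T_c) = {V(h_broken, T_c):.3f}")
print(f"barrier height at h_broken/2: {V(0.5 * h_broken, T_c):.1f} > 0")
```

At T_c the two minima are degenerate and separated by a positive barrier; as the universe cools below T_c, bubbles of the broken phase can nucleate, which is precisely the out-of-equilibrium situation electroweak baryogenesis needs.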

Stochastic gravitational-wave background

An important consequence of such models is that the BEH field is generated only at the TeV scale; at higher temperatures there is no such field. In the minimal composite Higgs model, the dynamics of the electroweak phase transition can be entirely controlled by an additional scalar Higgs-like field, the dilaton, which has experimental signatures very similar to the SM Higgs boson. In addition, we expect modifications of the Higgs boson’s couplings (to gauge bosons and to itself) induced by its mixing with this new scalar. LHC Run 3 thus has excellent prospects to fully test the possibility of a first-order electroweak phase transition in the minimal composite Higgs model.

The properties of the additional particle required to modify the electroweak phase transition also suggest new sources of CP violation, which is welcome as CP-violating SM processes are not sufficient to explain the baryon asymmetry. In particular, this would generate non-zero electric dipole moments (EDMs). The most recent bounds on the electron EDM from the ACME experiment in the US placed stringent constraints on a large number of electroweak baryogenesis models, in particular two-Higgs-doublet models. This is forcing theorists to consider new paths such as dynamical Yukawa couplings in composite Higgs models, a higher temperature for the electroweak phase transition, or the use of dark particles as the new source of CP violation. Here, there is a tension. To evade the stringent EDM bounds, the new scalar has to be heavy. But if it is too heavy, it reheats the universe too much at the end of the electroweak phase transition and washes out the just-produced baryon asymmetry. During the next decade, precise measurements of the Higgs boson at the LHC will enable a definitive test of the electroweak baryogenesis paradigm. 

Gravitational waves 

There is a further striking consequence of a first-order electroweak phase transition: fluid velocities in the vicinity of colliding bubbles generate gravitational waves (GWs). Today, these would appear as a stochastic background that is homogeneous, isotropic, Gaussian and unpolarised – the superposition of GWs generated by an enormous number of causally-independent sources, arriving at random times and from random directions. It would appear as noise in GW detectors with a frequency (in the mHz region) corresponding to the typical inverse bubble size, redshifted to today (see “Primordial peak” figure). There has been a burst of activity in the past few years to evaluate the chances of detecting such a peaked spectrum at the future space interferometer LISA, opening the fascinating possibility of learning about Higgs physics from GWs. 
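The mHz estimate quoted above follows from redshifting the emission frequency to today. Using the standard estimate employed in LISA-oriented studies (the prefactor and the parameter choices below are illustrative assumptions, not values from this article), a transition at T ~ 100 GeV with a typical inverse bubble size of order a hundred Hubble rates lands squarely in the LISA band:

```python
# Redshifted peak frequency of the phase-transition GW background,
# using the standard estimate
#   f_0 ~ 1.65e-5 Hz * (beta/H_*) * (T_*/100 GeV) * (g_*/100)^(1/6).
# All parameter values below are illustrative assumptions.
beta_over_H = 100.0   # inverse transition duration in Hubble units
T_star = 100.0        # transition temperature in GeV
g_star = 100.0        # relativistic degrees of freedom at T_star

f0 = 1.65e-5 * beta_over_H * (T_star / 100.0) * (g_star / 100.0) ** (1.0 / 6.0)
print(f"peak frequency today ~ {f0 * 1e3:.2f} mHz")  # in LISA's mHz band
```

Faster transitions (larger β/H) produce smaller bubbles, pushing the peak to higher frequency but also reducing the signal amplitude, which is why the transition strength and duration together determine LISA's discovery reach.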

The results from the LHC so far have pushed theorists to question traditional assumptions about where new physics beyond the SM could lie. Electroweak baryogenesis relies on rather conservative and minimal assumptions, but more radical approaches are now being considered, such as the intriguing possibility of a cosmological interplay between the Higgs boson and a very light and very weakly-coupled axion-like particle. Through complementarity of studies in theory, collider experiments, EDMs, GWs and cosmology, probing the electroweak phase transition will keep us busy for the next two decades. There are exciting times ahead.
