The Higgs boson under the microscope

On 4 July 2012, the ATLAS and CMS collaborations jointly announced their independent discoveries of a new particle directly related to the Brout–Englert–Higgs field that gives mass to all other particles in the Standard Model (SM). The LHC and its two general-purpose experiments were designed and built, among other things, with the aim of detecting or ruling out the SM Higgs boson. Within three years of the LHC startup, the two experiments detected a signal consistent with a Higgs boson with a mass of about 125 GeV, in line with indications from precision measurements carried out at the electron–positron colliders LEP and SLC, and at the Tevatron proton–antiproton collider.

Higgs encounters

The discovery was made mainly by detecting decays of the new particle into two photons or two Z bosons (each of which decays into a pair of electrons or muons), for which the invariant mass can be reconstructed with high resolution. The search for the Higgs boson was also performed in other channels, and all results were found to be consistent with the SM expectations. A peculiar feature of the Higgs boson is that it has zero spin. At the time of the discovery, it was already excluded that the particle was a standard vector boson: a spin-1 particle cannot decay into two photons, leaving only spin-0 or spin-2 as the allowed possibilities.
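The high-resolution channels rest on the standard two-body invariant-mass formula for massless decay products, m² = 2E₁E₂(1 − cos θ). A minimal sketch (the photon energies below are hypothetical, chosen only so the pair reconstructs near 125 GeV):

```python
import math

def diphoton_mass(e1, e2, opening_angle):
    """Invariant mass (GeV) of two massless photons with energies e1, e2
    (GeV) and opening angle theta (radians): m^2 = 2*e1*e2*(1 - cos theta)."""
    return math.sqrt(2.0 * e1 * e2 * (1.0 - math.cos(opening_angle)))

# Hypothetical photon energies, chosen so the pair reconstructs to
# roughly the Higgs-boson mass at a 90-degree opening angle.
m_gg = diphoton_mass(70.0, 112.0, math.pi / 2)  # ~125 GeV
```

A narrow peak in the distribution of this quantity over many events, sitting on a smooth background, is what both experiments observed.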

Ten years ago, the vast majority of high-energy physicists were convinced that a Higgs boson had been detected. The only remaining question was whether it was the boson predicted by the SM or part of an extended Higgs sector.

Basic identity  

The mass of the Higgs boson is the only parameter of the Higgs sector that is not predicted by the SM. A high-precision measurement of the mass is therefore crucial because, once it is known, all the couplings and production cross sections can be predicted in the SM and then compared with experimental measurements. The mass measurement is carried out using the H → γγ and H → ZZ* → 4ℓ channels, with a combined ATLAS and CMS measurement based on Run 1 data obtaining a value of 125.09 ± 0.24 GeV. More precise results, with a precision at the level of one part per thousand, have been obtained by ATLAS and CMS using partial datasets from Run 2.
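Combining channels and experiments amounts, in the simplest approximation, to an inverse-variance weighted average. A sketch of that machinery (the per-channel numbers below are hypothetical illustrations, not the actual ATLAS/CMS inputs):

```python
def combine(measurements):
    """Inverse-variance weighted average of independent measurements,
    each given as (value, uncertainty); returns (mean, uncertainty)."""
    weights = [1.0 / sigma**2 for _, sigma in measurements]
    mean = sum(w * v for w, (v, _) in zip(weights, measurements)) / sum(weights)
    return mean, sum(weights) ** -0.5

# Hypothetical per-channel mass values (GeV) -- illustration only.
m, dm = combine([(125.4, 0.40), (124.8, 0.35)])
# The combined uncertainty is smaller than either input uncertainty.
```

The real combinations also account for correlated systematic uncertainties between channels, which this sketch ignores.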

The width of the Higgs boson, unlike its mass, is well predicted at approximately 4 MeV. Since this is much smaller than the ATLAS and CMS detector resolutions, a precise direct measurement can only be carried out at future electron–positron colliders. At the LHC it is possible to constrain the width indirectly by studying the production of diboson pairs (ZZ or WW) via the exchange of off-shell Higgs bosons: under some reasonable assumptions, the off-shell cross section at high mass, relative to the on-shell cross section, increases proportionally to the width. A recent result from CMS constrains the Higgs-boson width to be between 0.0061 and 2.0 times the SM prediction at 95% confidence level. Finding the width to be smaller than the SM prediction would mean that some of the couplings are smaller than predicted, while a larger measured width could reflect additional decay channels beyond the SM, or branching fractions larger than those predicted by the SM.
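Because the off-shell/on-shell ratio scales linearly with the total width, the quoted interval translates directly into a width in MeV. A sketch, taking Γ_SM = 4.1 MeV as the reference value (an assumed number consistent with the "approximately 4 MeV" above):

```python
GAMMA_SM_MEV = 4.1  # assumed SM Higgs-boson width used as reference (~4 MeV)

def width_from_ratio(r, gamma_sm=GAMMA_SM_MEV):
    """Convert a measured off-shell/on-shell ratio r (relative to the SM
    expectation) into a total width, assuming the linear scaling above."""
    return r * gamma_sm

# CMS 95% CL interval: 0.0061 to 2.0 times the SM prediction.
lo_mev = width_from_ratio(0.0061)  # ~0.025 MeV
hi_mev = width_from_ratio(2.0)     # 8.2 MeV
```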

This is the first strong suggestion that the Higgs boson also couples to fermions from generations other than the third

The spin and charge-parity (CP) properties of the Higgs boson are other key quantities. The SM predicts that the Higgs boson is a scalar (spin-0 and positive CP) particle, but in extended Higgs models it could be a superposition of positive and negative CP states, for example. The spin and CP properties can be probed using angular distributions of the Higgs-boson decay products, and several decay channels were exploited by ATLAS and CMS: H → γγ, ZZ, WW and ττ. All results to date indicate consistency with the SM and exclude most other models at more than 3σ confidence level, including all models with spin different from zero.

Couplings to others 

One of the main tools for characterising the Higgs boson is the measurement of its production processes and decays. Thanks to growing datasets, improved analysis techniques, more accurate theoretical tools and better modeling of background processes, ATLAS and CMS have made remarkable progress in this crucial programme over the past decade. 

Using Run 1 data recorded between 2010 and 2012, the gluon-fusion and vector-boson-fusion production processes were established, as were the decays to pairs of bosons (γγ, WW* and ZZ*) and to a τ-lepton pair from the combination of ATLAS and CMS data. With Run 2 data (2015–2018), both ATLAS and CMS observed the decay to a pair of b quarks. Although this is the preferred decay mode of the Higgs boson, the channel suffers from larger backgrounds and is mainly accessible in the associated production of the Higgs boson with a vector boson. The rarer production mode of the Higgs boson in association with a t-quark pair was also observed using a combination of different decay modes, providing direct evidence of the Yukawa coupling between the Higgs boson and the top quark. The existence of the Yukawa couplings between the Higgs boson and third-generation fermions (t, b, τ) is thus established.

Mass spectra

The collaborations also investigated the coupling of the Higgs boson to second-generation fermions, in particular the muon. With the full Run 2 dataset, CMS reported evidence at the level of 3σ over the background-only hypothesis that the Higgs boson decays into μ⁺μ⁻, while ATLAS supported this finding with a 2σ excess. This is the first strong suggestion that the Higgs boson also couples to fermions from generations other than the third, again in accordance with the SM. Research is also ongoing to constrain the Higgs boson’s coupling to charm quarks via the decay H → cc̄. This is a much more difficult channel but, thanks to improved detectors and analysis methods, including extensive use of machine learning, ATLAS and CMS recently achieved a sensitivity beyond expectations and excluded an H → cc̄ branching fraction larger than O(10) times the SM prediction. The possibility that the Higgs boson’s coupling to charm is at least as large as the coupling to bottom quarks is excluded by a recent ATLAS analysis at 95% confidence level.

The accuracy of the production cross-section times decay branching-fraction measurements in the bosonic decay channels (diphoton, ZZ and WW) with the full Run 2 dataset is around 10%, allowing measurements in more restricted kinematic regions that can be sensitive to physics beyond the SM. In all probed phase-space regions, the measured cross sections are compatible with the SM expectations (data used for some of the measurements are shown in the “Mass spectra” figure).

Ten years after the discovery of a new elementary boson, considerable progress has been made toward understanding this particle

The combination of all measurements in the different production and decay processes can be used to further constrain the measured couplings between the Higgs boson and the other particles. The production cross section for vector-boson-fusion production, for example, is directly proportional to the square of the coupling strengths between the Higgs boson and W or Z bosons. A modification of these couplings will also affect the rate at which the Higgs boson decays to various final states. Assuming no contribution beyond the SM to Higgs decays and that only SM particles contribute to Higgs-boson vertices involving loops, couplings to t, b and τ are currently determined with uncertainties of around 10%, and couplings to W and Z bosons with uncertainties of about 5%. 
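The quadratic dependence described above is commonly parametrised in the coupling-modifier ("kappa") framework used by ATLAS and CMS, in which each coupling is rescaled by a factor κ relative to its SM value. A minimal sketch (the 5% coupling shift below is a hypothetical value, not a measurement):

```python
def signal_strength(kappa_prod, kappa_decay, kappa_width=1.0):
    """Rate relative to the SM in the coupling-modifier framework:
    cross sections and partial widths scale as coupling squared, while
    the total-width modifier rescales the branching fraction, giving
    mu = kappa_prod**2 * kappa_decay**2 / kappa_width**2."""
    return (kappa_prod**2 * kappa_decay**2) / kappa_width**2

# A vector-boson-fusion, H -> WW rate probes the HVV coupling to the
# fourth power: a hypothetical 5% coupling shift moves the rate by ~22%.
mu = signal_strength(1.05, 1.05)  # = 1.05**4
```

This fourth-power sensitivity is why percent-level rate measurements translate into the few-percent coupling uncertainties quoted in the text.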

The relation between the mass of a particle and its coupling to the Higgs boson is as expected from the SM, in which the particle masses originate from their coupling to the Brout–Englert–Higgs field (see “Couplings” figure). These measurements thus set bounds on specific new-physics models that predict deviations of the Higgs-boson couplings from the SM. The impact of new physics at a high energy scale is also probed in effective-field-theory frameworks, introducing all possible operators that describe couplings of the Higgs boson to SM particles. No deviations from predictions are observed. 

New physics 

The Higgs boson is the only known elementary particle with spin 0. However, an extended Higgs sector is a minimal extension of the SM and is predicted by many theories, such as those based on supersymmetry. These extensions predict several neutral or charged spin-0 particles: one is the observed 125 GeV Higgs boson; the others would preferentially couple to heavier SM particles. Searches for heavier scalar (or pseudoscalar) particles have been carried out in a variety of final states, but no evidence for such particles has been found. For example, the search for heavy scalar or pseudoscalar particles decaying to a pair of τ leptons excludes masses up to 1–1.5 TeV. The extended Higgs sector can also include lighter scalar or pseudoscalar particles into which the observed Higgs boson could decay. A wide range of final states has been investigated but no evidence found, setting stringent constraints on the corresponding Higgs-boson decay branching fractions.

Couplings

The Higgs sector could also play a role in linking the SM to new physics that explains the presence of dark matter in the form of new neutral, weakly interacting particles. If their mass is less than half that of the Higgs boson, the Higgs boson could decay to a pair of these neutral particles. Since the particles would be invisible in the detector, this process can be detected by observing missing transverse momentum from the Higgs boson recoiling against visible particles. The most sensitive processes are those in which the Higgs boson is produced in association with other particles: vector-boson fusion, and associated production with a vector boson or with a top-quark pair. No evidence of such decays has been found, setting upper limits on the invisible decay branching fraction of the Higgs boson at the level of 10%, and providing constraints complementary to those from direct dark-matter detection experiments.

Self-interaction

In addition to its couplings to other bosons and to fermions, the structure of the Brout–Englert–Higgs potential predicts a self-coupling of the Higgs boson that is related to electroweak symmetry breaking (see Electroweak baryogenesis). By studying Higgs-boson pair production at the LHC, it is possible to directly probe this self-coupling. 

The two main challenges of this measurement are the tiny cross section for Higgs-boson pair production (about 1000 times smaller than that for a single Higgs boson) and the interference between processes that involve the self-coupling and those that do not. Final states with a favourable combination of expected signal yield and signal-to-background ratio are exploited. The most sensitive channels are those with one Higgs boson decaying to a b-quark pair and the other decaying to a pair of photons, τ leptons or b quarks. Upper limits of approximately three times the predicted cross section have been obtained with the Run 2 dataset. These searches can also be used to set constraints on the Higgs-boson self-coupling relative to its SM value.

The sensitivities achieved for Higgs-boson pair production searches with the Run 2 dataset are significantly better than expected before the start of Run 2, thanks to several improvements in object reconstruction and analysis techniques. These searches are mostly limited by the size of the dataset and thus will improve further with the Run 3 and much larger High-Luminosity LHC (HL-LHC) datasets.

Going further

Ten years after the discovery of a new elementary boson, considerable progress has been made toward understanding this particle. All measurements so far point to properties that are very consistent with the SM Higgs boson. All main production and decay modes have been observed by ATLAS and CMS, and the couplings to vector bosons and third-generation fermions are probed with 5 to 10% accuracy, confirming the pattern expected from the Brout–Englert–Higgs mechanism for electroweak symmetry breaking and the generation of the masses of elementary particles. Still, there is ample room for improvement in the forthcoming Run 3 and HL-LHC phases, to reduce the uncertainty in the coupling measurements down to a few per cent, to establish couplings to second-generation fermions (muons) and to investigate the Higgs-boson self-coupling. Improved measurements will also significantly expand the sensitivity to a possible extended Higgs sector or new dark sector. 

To reach the ultimate accuracy in the measurements of all Higgs-boson properties (including its self-coupling), to remove the assumptions in the determination of the Higgs couplings at the LHC, and to considerably extend the search for new physics in the Higgs sector, new colliders – such as an e⁺e⁻ collider and a future hadron collider – will be required.

Naturalness after the Higgs

Artwork from Peter Higgs’ Nobel diploma

When Victor Weisskopf sat down in the early 1930s to compute the energy of a solitary electron, he had no way of knowing that he’d ultimately discover what is now known as the electroweak hierarchy problem. Revisiting a familiar puzzle from classical electrodynamics – that the energy stored in an electron’s own electric field diverges as the radius of the electron is taken to zero (equivalently, as the energy cutoff of the theory is taken to infinity) – in Dirac’s recently proposed theory of relativistic quantum mechanics, he made a remarkable discovery: the contribution from a new particle in Dirac’s theory, the positron, cancelled the divergence from the electron itself and left a quantum correction to the self-energy that was only logarithmically sensitive to the cutoff. 

The same cancellation occurred in any theory of charged fermions. But when Weisskopf considered the case for charged scalar particles in 1939, the problem returned. To avoid the need for finely-tuned cancellations between this quantum correction and other contributions to a scalar’s self-energy, he posited that the cutoff energy for scalars should be close to their observed self-energy, heralding the appearance of new features that would change the calculation and render the outcome “natural”. 

Nearly 30 years would pass before Weisskopf’s prediction about scalars was put to the test. The charged pion, a pseudoscalar, suffered the very same divergent self-energy that he had computed. As the neutral pion is free from this divergence, Weisskopf’s logic suggested that the theory of charged and neutral pions should change at around 800 MeV, the cutoff scale suggested by the observed difference in their self-energies. Lo and behold, the rho meson appeared at 775 MeV. Repeating the self-energy calculation with the rho meson included, the divergence in the charged pion’s self-energy disappeared. 

This same logic would predict something new. It had been known for some time that the relative self-energy between the neutral kaons KL and KS diverged due to contributions from the weak interactions in a theory containing only the known up, down and strange quarks. Matching the observed difference suggested that the theory should change at around 3 GeV. Repeating the calculation with the addition of the recently proposed charm quark in 1974, Mary K Gaillard and Ben Lee discovered that the self-energy difference became finite, which allowed them to predict that the charm quark should lie below 1.5 GeV. The discovery at 1.2 GeV later that year promoted Weisskopf’s reasoning from an encouraging consistency check to a means of predicting new physics.

Higgs, we have a problem

Around the same time, Ken Wilson recognised that the coupling between the Higgs boson and other particles of the Standard Model (SM) leads to yet another divergent self-energy, for which the logic of naturalness implied new physics at around the TeV scale. Thus the electroweak hierarchy problem was born – not as a new puzzle unique to the Higgs, but rather the latest application of Weisskopf’s wildly successful logic (albeit one for which the answer is not yet known). 

History suggested two possibilities. As a scalar, the Higgs could only benefit from the sort of cancellation observed among fermions if there is a symmetry relating bosons and fermions, namely supersymmetry. Alternatively, it could be a light product of compositeness, just as the pions and kaons are light bound states of the strong interactions. These solutions to the hierarchy problem came to dominate expectations for physics beyond the SM, with a sharp target – the TeV scale – motivating successive generations of collider experiments. Indeed, when the physics case for the LHC was first developed in the mid-1980s, it was thought that new particles associated with supersymmetry or compositeness would be much easier to discover than the Higgs itself. But while the Higgs was discovered, no signs of supersymmetry or compositeness were to be found.

In the meantime, other naturalness problems were brewing. The vacuum energy – Einstein’s infamous cosmological constant – suffers a divergence of its own, and even the finite contributions from the SM are many orders of magnitude larger than the observed value. Although natural expectations for the cosmological constant fail, an entirely different line of reasoning seems to succeed in its place. To observe a small cosmological constant requires observers, and observers can presumably arise only if gravitationally-bound structures are able to form. As Steven Weinberg and others observed in the 1980s, such anthropic reasoning leads to a prediction that is remarkably close to the value ultimately measured in 1998. To have predictive power, this requires a multitude of possible universes across which the cosmological constant varies; only the ones with sufficiently small values of the cosmological constant produce observers to bear witness.

The electroweak hierarchy problem

An analogous argument might apply to the electroweak hierarchy problem: the nuclear binding energy is no longer sufficient to stabilise the neutron within typical nuclei if the Higgs vacuum expectation value (VEV) is increased well above its observed value. If the Higgs VEV varies across a landscape of possible universes while its couplings to fermions are kept fixed, only universes with sufficiently small values of the Higgs VEV would lead to complex atoms and, presumably, observers. Although anthropic reasoning for the hierarchy problem requires stronger assumptions than for the cosmological-constant problem, its compatibility with null results at the LHC is enough to raise questions about the robustness of natural reasoning. 

Amidst all of this, another proposed scalar particle entered the picture. The observed homogeneity and isotropy of the universe point to a period of exponential expansion of spacetime in the early universe driven by the inflaton. While the inflaton may avoid naturalness problems of its own, the expansion of spacetime and the quantum fluctuations of fields during inflation lead to qualitatively new effects that are driving new approaches to the hierarchy problem at the intersection of particle physics, cosmology and gravitation.

Perhaps the most prominent of these new approaches came, surprisingly enough, from a failed solution to the cosmological constant problem. Around the same time as the first anthropic arguments for the cosmological constant were taking form, Laurence Abbott proposed to “relax” the cosmological constant from a naturally large value by the evolution of a scalar field in the early universe. Abbott envisioned the scalar evolving along a sloping, bumpy potential, much like a marble rolling down a wavy marble run. As it did so, this scalar would decrease the total value of the cosmological constant until it reached the last bump before the cosmological constant turned negative. Although the universe would crunch away into nothingness if the scalar evolved to negative values of the cosmological constant, it could remain poised at the last bump for far longer than the age of the observed universe.

Despite the many differences among the new approaches, they share a common tendency to leave imprints on the Higgs boson

While this fails for the cosmological constant (the resulting metastable universe is largely devoid of matter), analogous logic succeeds for the hierarchy problem. As Peter Graham, David Kaplan and Surjeet Rajendran pointed out in 2015, a scalar evolving down a potential in the early universe can also be used to relax the Higgs mass from naturally large values. Of course, it needs to stop close to the observed mass. But something interesting happens when the Higgs mass-squared passes from positive values to negative values: the Higgs acquires a VEV, which gives mass to quarks, which induces bumps in the potential of a particular type of scalar known as an axion (proposed to explain the unreasonably good conservation of CP symmetry by the strong interactions). So if the relaxing scalar is like an axion – a relaxion, you might say – then it will encounter bumps in its potential when it relaxes the Higgs mass to small values. If the relaxion is rolling during an inflationary period, the expansion of spacetime can provide the “friction” necessary for the relaxion to stop when it hits these bumps and set the observed value of the weak scale. The effective coupling between the relaxion and the Higgs that induces bumps in the relaxion potential is large enough to generate a variety of experimental signals associated with a new, light scalar particle that mixes with the Higgs.

The success of the relaxion hypothesis in solving the hierarchy problem hinges on an array of other questions involving gravity. Whether the relaxion potential can remain sufficiently smooth over the vast trans-Planckian distances in field space required to set the value of the weak scale is an open question, one that is intimately connected to the fate of global symmetries in a theory of quantum gravity (itself the target of active study in what is known as the Swampland programme).

Models abound 

In the meantime, the recognition that cosmology might play a role in solving the hierarchy problem has given rise to a plethora of new ideas. For instance, in Raffaele D’Agnolo and Daniele Teresi’s recent paradigm of “sliding naturalness”, the Higgs is coupled to a new scalar whose potential features two minima. In the true minimum, the cosmological constant is large and negative, and the universe would crunch away into oblivion if it ended up in this vacuum. In the second, local minimum, the cosmological constant is safely positive (and can be made compatible with the small observed value of the cosmological constant by Weinberg’s anthropic selection). The Higgs couples to this scalar in such a way that a large value of the Higgs VEV destabilises the “safe” minimum. During the inflationary epoch, only universes with suitably small values of the Higgs VEV can grow and expand, while those with large values of the Higgs VEV crunch away. A second scalar coupled analogously to the Higgs can explain why the VEV is small but non-zero. Depending on how these scalars are coupled to the Higgs, experimental signatures range from the same sort of axion-like signals arising from the relaxion, to extra Higgs bosons at the LHC.

Alternatively, in the paradigm of “Nnaturalness” proposed by Nima Arkani-Hamed and others, the multitude of SMs over which the Higgs mass varies occur in one universe, rather than many. The fact that the universe is predominantly composed of one copy of the SM with a small Higgs mass can be explained if inflation ends and reheats the universe through the decay of a single particle. If this particle is sufficiently light, it will preferentially reheat the copy of the SM with the smallest non-zero value of the Higgs VEV, even if it couples symmetrically to each copy. The sub-dominant energy density deposited in other copies of the SM leaves its mark in the form of dark radiation susceptible to detection by the Simons Observatory or upcoming CMB-S4 facility. 

Finally, Gian Giudice, Matthew McCullough and Tevong You have recently shown that inflation can help to understand the electroweak hierarchy problem by analogy with self-organised criticality. Just as adding individual grains of sand to a sandpile induces avalanches over diverse length scales – a hallmark of critical behaviour, obtained without tuning parameters – so too can inflation drive scalar fields close to critical points in their potential. This may help to understand why the observed Higgs mass lies so close to the boundary between the unbroken and broken phases of electroweak symmetry without fine tuning.

Going the distance 

Underlying Weisskopf’s natural reasoning is a long-standing assumption about relativistic theories of quantum mechanics: physics at short distances (the ultraviolet, or UV) is decoupled from physics at long distances (the infrared, or IR), making it challenging to apply a theory involving a large energy scale to a much smaller one without fine tuning. This suggests that loopholes may be found in theories that mix the UV and the IR, as is known to occur in quantum gravity. 

While the connection between this type of UV/IR mixing and the mass of the Higgs remains tenuous, there are encouraging signs of progress. For instance, Panagiotis Charalambous, Sergei Dubovsky and Mikhail Ivanov recently used it to solve a naturalness problem involving so-called “Love numbers” that characterise the tidal response of black holes. The surprising influence of quantum gravity on the parameter space of effective field theories implied by the Swampland programme also has a flavour of UV/IR mixing to it. And UV/IR mixing may even provide a new way to understand the apparent violation of naturalness by the cosmological constant.

We have come a long way since Weisskopf first set out to understand the self-energy of the electron. The electroweak hierarchy problem is not the first of its kind, but rather the one that remains unresolved. The absence of supersymmetry or compositeness at the TeV scale beckons us to search for new solutions to the hierarchy problem, rather than turning our backs on it. In the decade since the discovery of the Higgs, this search has given rise to a plethora of novel approaches, building new bridges between particle physics, cosmology and gravity along the way. Despite the many differences among these new approaches, they share a common tendency to leave imprints on the Higgs boson. And so, as ever, we must look to experiment to show the way. 

 

Electroweak baryogenesis

Simulation of Higgs-bubble nucleation

Precision measurements of the Higgs boson open the possibility to explore the moment in cosmological history when electroweak symmetry broke and elementary particles acquired mass. Ten years after the Higgs-boson discovery, it remains a possibility that the electroweak phase transition happened as a rather violent process, with a large departure from thermal equilibrium, via Higgs-bubble nucleations and collisions. This is a fascinating scenario for three reasons: it provides a framework for explaining the matter–antimatter asymmetry of the universe; it predicts the existence of at least one new weak-scale scalar field and thus is testable at colliders; and it would leave a unique signature of gravitational waves detectable by the future space-based interferometer LISA.

One major failure of the Standard Model (SM) is its inability to explain the baryon-to-photon ratio in the universe: η ≈ 6 × 10⁻¹⁰. Measurements of this ratio from two independent approaches – anisotropies in the cosmic microwave background and the abundances of light primordial elements – are in beautiful agreement. In a symmetric universe, however, the prediction for η is a billion times smaller; big-bang nucleosynthesis could not have occurred and structures could not have formed. This results from strong annihilations between nucleons and antinucleons, which deplete their number densities very efficiently. Only in a universe with a primordial asymmetry between nucleons and antinucleons can these annihilations be prevented. There are many different models to explain such “baryogenesis”. Interestingly, however, the Higgs boson plays a key role in essentially all of them.

Accidental symmetry

It is worth recalling how baryon number B gets violated by purely SM physics. B is an “accidental” global symmetry in the SM: there are no B-violating couplings in the SM Lagrangian. But the chiral nature of electroweak interactions, combined with the non-trivial topology of the SU(2) gauge theory, results in non-perturbative, B-violating processes. Technically, these are induced by extended gauge-field configurations called sphalerons, whose energy is proportional to the value of the Brout–Englert–Higgs (BEH) field. The production of these configurations is exponentially suppressed at zero temperature, such that B is an extremely good symmetry today. At high temperature, however – in particular at 100 GeV or so, when the electroweak symmetry is unbroken – baryon number is violated copiously as there is no energy cost. Since both baryons and antibaryons are created by sphalerons, charge–parity (CP) violation is needed. Indeed, as enunciated by Sakharov in 1967, a theory of baryogenesis requires three main ingredients: B violation, CP violation and a departure from equilibrium, otherwise the baryon number will relax to zero.

The conclusion is that baryogenesis must take place either through a mechanism occurring before the electroweak phase transition (necessitating new sources of B violation beyond the SM) or through a mechanism in which B violation relies exclusively on SM sphalerons and occurs precisely at the electroweak phase transition (provided that it is sufficiently out-of-equilibrium and CP-violating). The most emblematic example in the first category is leptogenesis, where a lepton asymmetry is produced from the decay of heavy right-handed neutrinos and “reprocessed” into a baryon asymmetry by sphalerons. This is a popular mechanism motivated by the mystery of the origin of neutrino masses, but is difficult to test experimentally. The second category, electroweak baryogenesis, involves electroweak-scale physics only and is therefore testable at the LHC.

Electroweak baryogenesis requires a first-order electroweak phase transition to provide a large departure from thermal equilibrium, otherwise the baryon asymmetry is washed out. A prime example of this type of phase transition is boiling water, where bubbles of gas expand into the liquid phase. During a first-order electroweak phase transition, symmetric and broken phases coexist until bubbles percolate and the whole universe is converted into the broken phase (see “Bubble nucleation” image). Inside the bubble, the BEH field has a non-zero vacuum expectation value; outside the bubble, the electroweak symmetry is unbroken. As the wall passes, chiral fermions in the plasma scatter off the Higgs at the phase interface. If some of these interactions are CP-violating, a chiral asymmetry will develop inside and in front of the bubble wall. The resulting excess of left-handed fermions in front of the bubble wall can be converted into a net baryon number by the sphalerons, which are unsuppressed in the symmetric phase in front of the bubble. Once inside the bubble, this baryon number is preserved as sphalerons are frozen there. In this picture, the baryon asymmetry is determined by solving a system of coupled diffusion equations.

New scalar required

The nature of the electroweak phase transition in the SM is well known: for a 125 GeV Higgs boson, it is a smooth crossover with no departure from thermal equilibrium. This prevents the possibility of electroweak baryogenesis. It is, however, easy to modify this prediction to produce a first-order transition by adding an electroweak-scale singlet scalar field that couples to the Higgs boson, as predicted in many SM extensions. Notably, this is a general feature of composite-Higgs models, where the Higgs boson emerges as a “pseudo Nambu–Goldstone” boson of a new strongly-interacting sector. 

Stochastic gravitational-wave background

An important consequence of such models is that the BEH field is generated only at the TeV scale; there is no field at temperatures above that. In the minimal composite Higgs model, the dynamics of the electroweak phase transition can be entirely controlled by an additional scalar Higgs-like field, the dilaton, which has experimental signatures very similar to the SM Higgs boson. In addition, we expect modifications of the Higgs boson’s couplings (to gauge bosons and to itself) induced by its mixing with this new scalar. LHC Run 3 thus has excellent prospects to fully test the possibility of a first-order electroweak phase transition in the minimal composite Higgs model.

The properties of the additional particle required to modify the electroweak phase transition also suggest new sources of CP violation, which is welcome as CP-violating SM processes are not sufficient to explain the baryon asymmetry. In particular, this would generate non-zero electric dipole moments (EDMs). The most recent bounds on the electron EDM from the ACME experiment in the US placed stringent constraints on a large number of electroweak baryogenesis models, in particular two-Higgs-doublet models. This is forcing theorists to consider new paths such as dynamical Yukawa couplings in composite Higgs models, a higher temperature for the electroweak phase transition, or the use of dark particles as the new source of CP violation. Here, there is a tension. To evade the stringent EDM bounds, the new scalar has to be heavy. But if it is too heavy, it reheats the universe too much at the end of the electroweak phase transition and washes out the just-produced baryon asymmetry. During the next decade, precise measurements of the Higgs boson at the LHC will enable a definitive test of the electroweak baryogenesis paradigm. 

Gravitational waves 

There is a further striking consequence of a first-order electroweak phase transition: fluid velocities in the vicinity of colliding bubbles generate gravitational waves (GWs). Today, these would appear as a stochastic background that is homogeneous, isotropic, Gaussian and unpolarised – the superposition of GWs generated by an enormous number of causally-independent sources, arriving at random times and from random directions. It would appear as noise in GW detectors with a frequency (in the mHz region) corresponding to the typical inverse bubble size, redshifted to today (see “Primordial peak” figure). There has been a burst of activity in the past few years to evaluate the chances of detecting such a peaked spectrum at the future space interferometer LISA, opening the fascinating possibility of learning about Higgs physics from GWs. 
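The redshift of the signal from the electroweak epoch to today can be estimated with the standard scaling used in the LISA literature; the coefficient and the illustrative parameter values below are assumptions for the sketch, not numbers from the article.

```python
# Rough estimate of today's peak frequency of GWs produced at the
# electroweak phase transition, using the standard redshift relation
#   f_0 ~ 1.65e-5 Hz * (f_*/H_*) * (T_*/100 GeV) * (g_*/100)^(1/6),
# where f_*/H_* is the emission frequency in units of the Hubble rate
# at the transition. All inputs are illustrative assumptions.

def gw_peak_frequency_today(f_star_over_H, T_star_GeV=100.0, g_star=100.0):
    """Peak frequency today, in Hz."""
    return 1.65e-5 * f_star_over_H * (T_star_GeV / 100.0) * (g_star / 100.0) ** (1 / 6)

# Bubbles ~1/100 of the horizon size give f_*/H_* ~ 100, so a transition
# at T_* ~ 100 GeV redshifts into the millihertz band probed by LISA.
f0 = gw_peak_frequency_today(100.0)
```

With these assumed inputs the peak lands near 1.7 mHz, consistent with the mHz region quoted in the text.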

The results from the LHC so far have pushed theorists to question traditional assumptions about where new physics beyond the SM could lie. Electroweak baryogenesis relies on rather conservative and minimal assumptions, but more radical approaches are now being considered, such as the intriguing possibility of a cosmological interplay between the Higgs boson and a very light and very weakly-coupled axion-like particle. Through complementarity of studies in theory, collider experiments, EDMs, GWs and cosmology, probing the electroweak phase transition will keep us busy for the next two decades. There are exciting times ahead.

Synergy at the Higgs frontier

Sally Dawson

What impact did the discovery of the Higgs boson have on your work? 

It was huge because before then it was possible that maybe there was no Higgs. You could have some kind of dynamical symmetry breaking, or maybe a heavy Higgs, at 400 GeV say, which would be extremely interesting but completely different. So once you knew that the Higgs was at the same mass scale as the W and the Z, our thinking changed because that comes out of only a certain kind of model. And of course once you had it, everyone, including myself, was motivated to calculate everything we could. 

I am working on how you tease out new physics from the Higgs boson. It’s the idea that even if we don’t see new particles at the LHC, precision measurements of the Higgs couplings are going to tell us something about what is happening at very high energy scales. I’m using what’s called an effective field theory approach, which is the standard these days for trying to find out what we can learn from combining Higgs measurements with other types of measurements, such as gauge-boson pair production and top-quark physics. 

Aside from the early formal work, what was the role of Standard Model calculations in the discovery of the Higgs boson?

You had to know what you were looking for, because there are so many events at the LHC. Otherwise, it would be like looking for a needle in a haystack. The Higgs was discovered, for example, by its decay to two photons and there are millions of two-photon events at the LHC that have nothing to do with the Higgs. Theory told you how to look for this particle, and I think it was really important that a trail was set out to follow. This involves calculating how often you make a Higgs boson and what the background might look like. It wasn’t until the late 1980s that people began taking this seriously. It was really the Superconducting Super Collider that started us thinking about how to observe a Higgs at a hadron collider. And then there were the LEP and Tevatron programmes that actively searched for the Higgs boson. 

To what order in perturbation theory were those initial calculations performed?

For the initial searches you didn’t need the complicated calculations because you weren’t looking for precision measurements such as those required at the Z-pole, for example. You really just needed the basic rate and background information. We weren’t inspired to do higher order calculations until later in the game. When I was a postdoc at Berkeley in 1986, that’s when I really started to calculate things about the Higgs. But there was a long gap between the time when the Brout–Englert–Higgs mechanism was proposed and when people really started doing some hard calculations. There’s the famous paper in 1976 by Ellis, Gaillard and Nanopoulos that calculated how the Higgs might be observed, but in essence it said: why bother looking for this thing, we don’t know where it is! So people were thinking we could see the Higgs in kaon decays, if it was very light, and in other ways, and were looking at the problem in a global kind of way. 

Was this what drove your involvement with The Higgs Hunter’s Guide in 1990?

We were further along in terms of calculating things precisely by then, and I suppose there was a bit of a generation gap. It was a wonderful collaboration to produce the guide. We still went through the idea of how you would find the Higgs at different energy scales because we still had no idea where it was. The calculations went into high gear around that time, which was well before the Higgs was discovered. Partly it was the motivation that we were pretty sure we would see it at the LHC. But partly it was developments in theory which meant we could calculate things that we never would have imagined were possible 30 years earlier. The capability of theorists to calculate has grown exponentially. 

What have these improvements been?

It’s what they call the next-to-next-to-leading order (NNLO) revolution – a new frontier in perturbative QCD where diagrams with two additional emissions of real partons, or extra loops of virtual partons, are accounted for. This required new mathematical techniques for evaluating the integrals that arise in quantum field theory, so not just turning the crank computationally but really an intellectual advance in understanding the structure of these calculations. It started with Bern, Dixon and Kosower, who understood the needed amplitudes in a formal way. This enabled all sorts of calculations, and now we have N3LO calculations for certain Higgs-boson production modes. 

What is driving greater precision on Higgs calculations today?

Actually it’s really exciting because at the high-luminosity LHC (HL-LHC), experimentalists will be limited in their understanding of the Higgs boson by theory – the theory and experimental uncertainties will be roughly the same. This is truly impressive. You might think that these higher order corrections, which have quite small errors, are enough but they need to be even smaller to match the expected experimental precision. As theorists we have to keep going and do even better, which from my point of view is wonderful. It’s the synergy between experiment and theory that is the real story. We’re co-dependent. Even now, theory is not so different from ATLAS and CMS in terms of precision. Theory errors are hard things to pin down because you never really know what they are. Unlike an absolute statistical uncertainty, they’re always an estimate. 

How do the calculations look for measurements beyond the LHC? 

It’s a very different situation at e+e– colliders compared to hadron colliders. The LHC runs with protons containing gluons, so that’s why you need the higher order corrections. At a future e+e– collider, you need higher-order corrections but they are much more straightforward because you don’t have parton distribution functions to worry about. We know how to do the calculations needed for an e+e– Future Circular Collider, for example, but there is not a huge community of people working on them. That’s because they are really hard: you can’t just sit down and do them as a hobby, they really need a lot of skills. 

You are currently leading the Higgs properties working group of the current Snowmass planning exercise. What has been the gist of discussions? 

This is really exciting because our job has essentially been to put together the pieces of the puzzle after the European strategy update in 2020. That process did a very careful job of looking at the future Higgs programme, but there have been developments in our understanding since then. For example, the muon collider might be able to measure the Higgs couplings to muons very precisely, and there has been some good work on how to measure the couplings to strange quarks, which is very hard to do. 

The Higgs Hunter’s Guide

I would like to see an e+e– collider built somewhere, anywhere. In point of fact, when you look at the proposals they’re roughly the same in terms of Higgs physics. This was clear from the European strategy report and will be clear from the upcoming Snowmass report. Personally, I don’t much care whether there is a precision of 1% or 1.5% on some coupling. I care that you can get down to that order of magnitude, and that e+e– machines will significantly improve on the precision of HL-LHC measurements. The electroweak programme of large circular e+e– colliders is extremely interesting. At the Z-pole you get some very precise measurements of Standard Model quantities that feed into the whole theory because everything is connected. And at the WW threshold you get very precise measurements in the effective field theory of things that connect the Higgs and WW pairs. As a theorist, it doesn’t make sense to think of the Higgs in a vacuum. The Higgs is part of this whole electroweak programme. 

What are the prospects for finding new physics via the Higgs?

The fact that we haven’t seen anything unexpected yet is probably because we haven’t probed enough. I’m absolutely convinced we are going to see something, I just don’t know what (or where) it is. So I can’t believe in the alternative “nightmare” scenario of a Standard-Model Higgs and nothing else because there are just so many things we don’t know. You can make pretty strong arguments that we haven’t yet reached the precision where we would expect to see something new in precision measurements. It’s a case of hard work.  

What’s next in the meantime?

The next big thing is measuring two Higgs bosons at a time. That’s what theorists are super excited about because we haven’t yet seen the production of two Higgses and that’s a fundamental prediction of our theory. If we don’t see it, and it’s extremely difficult to do so experimentally, it tells us something about the underlying model. It’s a matter of getting the statistics. If we actually saw it, then we would do more calculations. For the trilinear Higgs coupling we now have a complete calculation at next-to-leading order, which is a real tour de force. The calculations are sufficient for a discovery, and because it’s so rare it’s unlikely we will be doing precision measurements, so it is probably okay for the foreseeable future. For the quartic coupling there are some studies that suggest you might see it at a 100 TeV hadron collider.

With all the Standard Model particles in the bag, does theory take more of a back seat from here? 

The hope is that we will see something that doesn’t fit our theory, which is of course what we’re really looking for. We are not making these measurements at ever higher precisions for the sake of it. We care about measuring something we don’t expect, as an indicator of new physics. The Higgs is the only tool we have at the moment. It’s the only way we know how to go.

Top quark weighs in with unparalleled precision

A top-quark pair at the LHC

The CMS collaboration has substantially improved on its measurement of the top-quark mass. The latest result, 171.77 ± 0.38 GeV, presented at CERN on 5 April, represents a precision of about 0.22% – compared to the 0.36% obtained in 2018 with the same data. The gain comes from new analysis methods and improved procedures that treat the uncertainties in the measurement consistently and simultaneously.

As the top quark is the heaviest elementary particle, precise knowledge of its mass is of paramount importance for testing the internal consistency of the Standard Model. Combined with accurate knowledge of the masses of the W and Higgs bosons, the top-quark mass is no longer a free parameter but a clear prediction of the Standard Model that can be confronted with direct measurement. Since the top-quark mass dominates higher-order corrections to the Higgs-boson mass, a precise measurement of the top mass also places strong constraints on the stability of the electroweak vacuum (see The Higgs and the fate of the universe). 

Since its discovery at Fermilab in 1995, the mass of the top quark has been measured with increasing precision using the invariant mass of different combinations of its decay products. Measurements by the Tevatron experiments resulted in a combined value of 174.30 ± 0.65 GeV, while the ATLAS and CMS collaborations measured 172.69 ± 0.48 GeV and 172.44 ± 0.48 GeV, respectively, from the combination of their most precise results from LHC Run 1 recorded at a centre-of-mass energy of 8 TeV. The latter measurement achieved a relative precision of about 0.28%. In 2019, the CMS collaboration also experimentally investigated the running of the top quark mass – a prediction of QCD that causes the mass to vary as a function of energy – for the first time at the LHC. 

The LHC produces top quarks predominantly in top quark–antiquark pairs via gluon fusion; each top quark decays almost exclusively to a bottom quark and a W boson. Each tt event is classified by the subsequent decays of the two W bosons. The latest CMS analysis uses semileptonic events – where one W decays into jets and the other into a lepton and a neutrino – selected from 36 fb–1 of Run 2 data collected at a centre-of-mass energy of 13 TeV. Five kinematical variables, as opposed to up to three in previous analyses, were used to extract the top-quark mass. While the extra information in the fit improved the precision of the measurement in a novel and unconventional way, it made the analysis significantly more complicated. In addition, the measurement required an extremely precise calibration of the CMS data and an in-depth understanding of the remaining experimental and theoretical uncertainties and their interdependencies. 

The final result, 171.77 ± 0.38 GeV, which includes 0.04 GeV statistical uncertainty, is a considerable improvement compared to all previously published top-quark mass measurements and supersedes the previously published measurement in this channel using the same data set. 

“The cutting-edge statistical treatment of uncertainties and the use of more information have vastly improved this new measurement from CMS,” says Hartmut Stadie of the University of Hamburg, who contributed to the result. “Another big step is expected when the new approach is applied to the more extensive dataset recorded in 2017 and 2018.”

Dead-cone effect exposed by ALICE

A charm quark in a parton shower

More than 30 years after it was predicted, a phenomenon in quantum chromodynamics (QCD) called the dead-cone effect has been directly observed by the ALICE collaboration. The result, reported in Nature on 18 May, not only confirms a fundamental feature of the theory of the strong force, but enables a direct experimental observation of the non-zero mass of the charm quark in the partonic phase.

In QCD, the dead-cone effect predicts a suppression of gluon bremsstrahlung from a quark within a cone centred on the quark’s flight direction. This cone has an angular size mq/E, where mq is the mass of the quark and E is its energy. The effect arises due to the conservation of angular momentum during the gluon emission and is significant for low-energy heavy-flavour quarks. 
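The scale of the suppressed region is easy to estimate from the small-angle relation quoted above; the quark masses and energy used below are indicative values chosen for illustration.

```python
# Illustrative estimate of the dead-cone opening angle, theta ~ m_q / E,
# in the small-angle approximation. Masses and energies in GeV; the
# numerical inputs are indicative values, not measured quantities.

def dead_cone_angle(m_q, energy):
    """Approximate dead-cone half-angle (radians) for a quark of mass
    m_q and energy `energy`, both in GeV."""
    return m_q / energy

# A charm quark (m ~ 1.27 GeV) at E = 10 GeV: emissions suppressed
# within roughly 0.13 rad of the flight direction.
theta_charm = dead_cone_angle(1.27, 10.0)

# A beauty quark (m ~ 4.18 GeV) at the same energy has a wider dead cone,
# which is why B-tagged jets are the natural next target (see below in
# the article's discussion of Run 3).
theta_beauty = dead_cone_angle(4.18, 10.0)
```

The effect shrinks as the quark's energy grows, which is why it is most visible for low-energy heavy-flavour quarks.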

The dead cone has been indirectly observed at particle colliders. A direct observation from the parton shower’s radiation pattern has remained challenging, however, because it relies on the determination of the emission angle of the gluon, as well as the emitting heavy-flavour quark’s energy, at each emission vertex in the parton shower (see “Showering” figure). This requires a dynamic reconstruction of the cascading quarks and gluons in the shower from experimentally accessible hadrons, which had not been possible until now. In addition, the dead-cone region can be obscured and filled by other sources such as the decay products of heavy-flavour hadrons, which must be removed during the measurement.

To observe the dead-cone effect directly, ALICE used jets tagged with a reconstructed D0-meson in a 25 nb–1 sample of pp collisions at a centre-of-mass energy of 13 TeV collected between 2016 and 2018. The D0-mesons were reconstructed with transverse momenta between 2 and 36 GeV/c through their decay into a kaon and pion pair. Jet-finding was then performed on the events with the “anti-kT” algorithm, and jets with the reconstructed D0-meson amongst their constituents were tagged. The team used recursive jet-clustering techniques to reconstruct the gluon emissions from the radiating charm quark by following the branch containing the D0-meson at each de-clustering step, which is equivalent to following the emitting charm quark through the shower. A similar procedure was carried out on a flavour-untagged sample of jets, which contain primarily gluon and light-quark emissions and form a baseline where the dead-cone effect is absent.
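The de-clustering procedure can be sketched as a walk down the jet's clustering tree: at each splitting, follow the prong containing the D0 and record the other prong as a gluon emission. The data structures and function names below are hypothetical simplifications, not the ALICE analysis code.

```python
# Schematic recursive de-clustering of a D0-tagged jet. A jet is either a
# leaf particle, represented as a dict {'has_d0': bool, 'e': energy}, or a
# splitting, represented as (prong_a, prong_b, opening_angle). These
# structures are hypothetical stand-ins for a real clustering history.

def contains_d0(jet):
    """True if the D0 meson is among this (sub)jet's constituents."""
    if isinstance(jet, dict):
        return jet['has_d0']
    return contains_d0(jet[0]) or contains_d0(jet[1])

def energy(jet):
    """Total energy of the (sub)jet, approximating the radiator's energy."""
    if isinstance(jet, dict):
        return jet['e']
    return energy(jet[0]) + energy(jet[1])

def decluster(jet):
    """Walk down the tree along the D0 branch, yielding one
    (emission_angle, radiator_energy) pair per de-clustering step."""
    while isinstance(jet, tuple):
        a, b, theta = jet
        leader = a if contains_d0(a) else b  # follow the charm-quark branch
        yield theta, energy(leader)
        jet = leader
```

Counting emissions in bins of angle and radiator energy, and comparing with the same procedure on untagged jets, is what exposes the small-angle suppression.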

Comparisons between the gluon emissions from charm quarks and from light quarks and gluons directly reveal the dead-cone effect through a suppression of gluon emissions from the charm quark at small angles, compared to the emissions from light quarks and gluons. Since QCD predicts a mass-dependence of the dead cones, the result also directly exposes the mass of the charm quark, which is otherwise inaccessible due to confinement. ALICE’s successful technique to directly observe a parton shower’s dead cone may therefore offer a way to measure quark masses.

The upgraded ALICE detector in LHC Run 3 will enable an extension of the measurement to jets tagged with a B+ meson. This will allow the reconstruction of gluon emissions from beauty quarks which, due to their larger mass, are expected to have a larger dead cone than charm quarks. Comparisons between the angular distribution of gluon emissions from beauty quarks and those from charm quarks will isolate mass-dependent effects in the shower and remove the contribution from effects pertaining to the differences between quark and gluon fragmentation, bringing deeper insights into the intriguing workings of the strong force.

Probing new physics with the Higgs boson

ATLAS figure 1

Due to its connection to the process of electroweak symmetry breaking, the Higgs boson plays a special role in the Standard Model (SM). Its properties, such as its mass and its couplings to fermions and bosons, have been measured with increasing precision. For these reasons, the Higgs boson has become an ideal tool to conduct new-physics searches. Prominent examples are direct searches for new heavy particles decaying into Higgs bosons or searches for exotic decays of the Higgs boson. Such phenomena have been predicted in many extensions of the SM motivated by long-standing open questions, including the hierarchy problem, dark matter and electroweak baryogenesis. Examples of new particles that couple to the Higgs boson are heavy vector bosons (as in models with Higgs compositeness or warped extra dimensions) and additional scalar particles (as in supersymmetric models or axion models).

Searches for resonances

The ATLAS collaboration recently released results of a search for a new heavy particle decaying into a Higgs and a W boson. The search was performed by probing for a localised excess in the invariant mass distribution of the ℓνbb final state. As no such excess was found, upper limits at 95% confidence level were set on the production cross-section times branching ratio of the new heavy resonance (figure 1). The results were also interpreted in the context of the heavy vector triplet (HVT) model, which extends the SM gauge group by an additional SU(2) group, to constrain the coupling strengths of heavy vector bosons to SM particles. In two HVT benchmark models, W′ masses below 2.95 and 3.15 TeV are excluded.

ATLAS figure 2

Rare or exotic decays are excellent candidates to search for weakly coupled new physics. The Higgs boson is particularly sensitive to such new physics owing to its narrow total width, which is three orders of magnitude smaller than that of the W and Z bosons and the top quark. Several searches for exotic decays of the Higgs boson have been carried out by ATLAS, and they may be broadly classified as those scenarios where the possible new daughter particle decays promptly to SM particles, and those where it would be long-lived or stable.

A recent search from ATLAS targeted exotic decays of the Higgs boson into a final state with four electrons or muons, which benefits from a very clean experimental signature. Although a signal was not observed, the search put stringent constraints on decays to new light scalar bosons – particularly in the low mass range of a few GeV – and to new vector bosons, dubbed dark Z bosons or dark photons, in the mass range up to a few tens of GeV. Depending on the new-physics model, this search can exclude branching ratios of the Higgs boson to new particles as low as O(10^–5).

Invisibles

Another interesting possibility is the case where the Higgs boson decays to particles that are invisible in the detector, such as dark-matter candidates. To select such events, different strategies are pursued depending on the particles produced in association with the Higgs boson. The most powerful channel for such a search is the vector-boson fusion production process, where two energetic jets from quarks are produced with large angular separation alongside the invisibly decaying Higgs boson (figure 2). Another sensitive channel is the associated production of a Higgs boson with a Z boson that decays to a pair of leptons. Improvements in background predictions have made it possible to reach a sensitivity down to 10% on the branching ratio of invisible Higgs-boson decays, while the corresponding observed limit amounts to 15%.

These searches will greatly benefit from the large datasets expected in Run 3 and later High-Luminosity LHC runs, and will enable searches for even more feeble couplings of new particles to the Higgs boson.

Upsilon suppression in heavy-ion collisions

CMS figure 1

The bound states of a heavy quark and its antiquark, called quarkonia, have long been regarded as ideal probes to study the quark–gluon plasma (QGP) formed in high-energy heavy-ion collisions. The golden signature is the suppression of their production yield in lead–lead (PbPb) collisions with respect to extrapolations from proton–proton (pp) collisions, caused by modifications of the binding potential in the QGP. The suppression of the different quarkonium states is expected to depend on their binding energies. Quarkonia can also be produced by recombination processes. The ϒ states (bound states of b quarks and antiquarks) are much less affected by recombination effects than charmonium states, given the very small probability that b quarks are produced. A comparison of their suppression patterns is particularly informative because of the different binding energies of the ϒ(1S), ϒ(2S) and ϒ(3S) states.

The suppression of quarkonium production is quantified via the nuclear modification factor RAA, defined as the ratio between the yield in nucleus–nucleus (AA) collisions and the yield extrapolated from pp data. Previous measurements of RAA for the ϒ mesons by experiments at RHIC and the LHC revealed a significant suppression of the ϒ(1S) state and a larger suppression for the ϒ(2S) state. However, these experiments could only set upper limits for the ϒ(3S) state due to its very low production yield. The CMS experiment recently changed this situation by presenting the first observation of the ϒ(3S) meson in heavy-ion collisions. The ϒ mesons are detected using their decay to two muons. The analysis used the large PbPb data sample collected in 2018 and extracted the ϒ(3S) signals from the large background of muon pairs by using a boosted decision tree algorithm.
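The definition of RAA can be made concrete with a short sketch; scaling the pp yield by the mean number of binary nucleon–nucleon collisions is the standard way to extrapolate to the nucleus–nucleus case, and all the numbers below are invented for illustration.

```python
# Schematic nuclear modification factor: the yield measured in
# nucleus-nucleus (AA) collisions divided by the pp-based expectation,
# i.e. the pp yield scaled by the mean number of binary nucleon-nucleon
# collisions <N_coll>. All numerical inputs are made-up examples.

def nuclear_modification_factor(yield_aa, yield_pp, n_coll):
    """R_AA = N_AA / (<N_coll> * N_pp). R_AA < 1 indicates suppression
    relative to the scaled pp expectation; R_AA = 1 indicates none."""
    return yield_aa / (n_coll * yield_pp)

# An upsilon yield well below the scaled pp expectation -> suppression.
r_aa = nuclear_modification_factor(yield_aa=30.0, yield_pp=1.0, n_coll=100.0)
```

A more strongly bound state such as the ϒ(1S) is expected to show an RAA closer to one than the loosely bound ϒ(3S), which is the pattern the CMS measurement maps out.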

The new RAA results are shown together with the previously published ϒ(1S) values as a function of the average number of nucleons participating in the PbPb collisions, ⟨Npart⟩ (figure 1). Collisions with larger ⟨Npart⟩ show a bigger overlap between the two nuclei, producing a larger and hotter QGP. As previously observed, the degree of suppression increases from peripheral to central collisions, i.e. as ⟨Npart⟩ increases, indicating a more substantial dissociation effect at higher QGP temperatures. The new ϒ(3S) suppression measurement completes the picture of suppression patterns for five different quarkonium states, which was started 35 years ago at the CERN SPS with the J/ψ and ψ(2S) results of NA38. The stage is set for a deeper understanding of deconfinement in the QGP.

X-ray polarisation probes extreme physics

Accretion disk around magnetar 4U 0142+61

X-ray astronomy has been around for more than 50 years and remains responsible for a wealth of discoveries. Astronomical breakthroughs have been the result of detailed measurements of the X-ray arrival time, direction and energy. But the fourth measurable parameter of X-rays, their polarisation, remains largely unexplored. Following the first rough measurements of a handful of objects in the 1970s by Martin Weisskopf and co-workers, there was a hiatus in X-ray polarimetry due to the complexity of the detection mechanism. In recent years, in parallel with the emergence of gamma-ray polarimetry, interest in the field has returned. Indeed, after some initial measurements using the Chinese–Italian PolarLight Cubesat launched in October 2018, X-ray polarimetry has reached full maturity with the launch of the first large-scale dedicated observatory in December 2021: the Imaging X-ray Polarimetry Explorer (IXPE), a joint project by NASA and the Italian Space Agency, led by Weisskopf.

The IXPE mission uses gas pixel detectors to measure the polarisation for a range of astronomical sources in the 2-8 keV energy range. Incoming X-rays are absorbed in a gas which results in the emission of a photoelectron, the azimuthal emission direction of which is correlated with the polarisation vector of the incoming photon. Tracking the path of the electron therefore allows the polarisation to be inferred. Accurately measuring the emission direction of the low-energy photoelectron, especially in a space-based detector, has been one of the main IXPE challenges and required decades of detector development. 


IXPE has already observed a range of sources. Its first public results, posted on arXiv on 18 May, concern a magnetar, a highly magnetic neutron star, called 4U 0142+61, which rotates around its axis in about 8 s and has a magnetic field of 10^10 T. IXPE’s first ever measurement of polarised emission from a magnetar in the X-ray region shows this extreme object to have an energy-integrated polarisation degree of 12%; the thermal (2–4 keV) component shows a similar level, about 12%, while the polarisation degree reaches 41% for emission at higher energies (5.5–8 keV). The polarisation angles of the two emission components are orthogonal. 

The results appear to agree best with a model where the thermal emission stems from a condensed iron atmosphere: the higher energy emission would be a result of some thermal photons being up-scattered to higher energies when interacting with charged particles following the magnetic field lines. However, since other models link the emission to a gaseous atmosphere heated by a constant bombardment of particles, measurements of additional magnetars are needed.

Fundamental physics

Apart from providing novel insights into neutron-star properties, time-resolved studies of the emission during the rotation period hint at more fundamental physics at play. The spectral profile of 4U 0142+61 was found to be rather constant during the rotation, indicating that the emission does not come from hot-spots, such as the poles, but rather from a large area on the surface. As the magnetic field over such a large area would, however, be expected to vary significantly, so would the polarisation angle of the emitted X-rays. As a result, the net polarisation seen on Earth would largely be blurred out, resulting in a much lower polarisation degree than is observed. 

An intriguing explanation for this, note the authors, is vacuum birefringence – an effect predicted to be important in the presence of extreme magnetic fields, but which has never been observed. Although the polarisation angle of the magnetar's emission varies with the emission location, it is altered as the photons travel through the strong magnetic field, where virtual electron–positron pairs affect their propagation. Only when the magnetic field becomes weak enough, at around 100 times the radius of the star, does the polarisation angle get frozen. Since this angle is aligned with the magnetic field, which is much smoother at that distance, the emission travelling towards Earth becomes realigned, allowing for a net polarisation.

Although the polarisation degrees measured by IXPE are not high enough to definitively prove vacuum birefringence, the results give a clear hint. Furthermore, the measurements of 4U 0142+61 are only the first of many performed by the IXPE team. Throughout the coming months, detailed measurements of galactic objects such as the Crab Nebula, as well as extra-galactic sources, are predicted to be released. Among these objects there will be other magnetars, the X-ray emission from which will soon bring further understanding of these extreme objects and potentially confirm the existence of vacuum birefringence.

Higgs Hunting

The origin of electroweak symmetry breaking is one of the central topics of research in fundamental physics. The discovery of a Higgs boson at CERN on 4 July 2012, following a hunt that spanned several decades and multiple colliders, changed the landscape of these investigations and provided key evidence for the Brout–Englert–Higgs mechanism of mass generation through the spontaneous breaking of electroweak symmetry.

Almost ten years later, the hunt continues on several fronts, in particular for:

  • New physics through precision studies of the properties of the Higgs boson: in particular its mass, spin and couplings to other Standard Model particles.
  • New production and decay modes, in particular in processes involving multiple Higgs bosons which provide key insight into the shape of the Higgs potential.
  • New Higgs-like states and signals for physics beyond the Standard Model.

The 12th workshop of the Higgs Hunting series organised on 12–14 September 2022 will present an overview of these topics, focusing in particular on new developments in the LHC Run-2 analyses, detailed studies of Higgs boson properties and possible deviations from Standard Model predictions. Highlights will also include a first look at LHC Run-3 analyses, prospects from studies at future colliders, and recent theoretical developments.
