
Searching for Higgs compositeness

The mass range excluded in the search for the pair production of vector-like top quarks

Since the discovery of the Higgs boson at the LHC in 2012, physicists have a more complete understanding of the Standard Model (SM) and the origin of elementary particle mass. However, theoretical questions such as why the Higgs boson is so light remain. An attractive candidate explanation postulates that the Higgs boson is not a fundamental particle, but instead is a composite state of a new, strongly-interacting sector – similar to the pion in ordinary strong interactions. In such composite-Higgs scenarios, new partners of the top and bottom quarks of the SM could be produced and observed at the LHC. 

If they exist, VLQs could be very heavy, with masses at the TeV scale, and could be produced either singly or in pairs at the LHC.

Ordinary SM quarks come in left-handed and right-handed varieties, which behave differently in weak interactions. The hypothetical new quark partners, however, behave the same way in weak interactions, whether they are left- or right-handed. Composite-Higgs models, and several other theories beyond the SM, predict the existence of such “vector-like quarks” (VLQs). Searching for them is therefore an exciting opportunity for the LHC experiments. 

If they exist, VLQs could be very heavy, with masses at the TeV scale, and could be produced either singly or in pairs at the LHC. Furthermore, VLQs could decay into regular top or bottom quarks in combination with a W, Z or Higgs boson. This rich phenomenology warrants a varied range of complementary searches to provide optimal coverage. 

The ATLAS collaboration has recently carried out two VLQ searches based on the full Run-2 dataset (139 fb⁻¹) at 13 TeV. The first analysis targets pair-production of VLQs, focusing on the possibility that most VLQs decay to a Z boson and a top quark. To help identify likely signal events, leptonically decaying Z bosons were tagged in events with pairs of electrons or muons. To maximise the discriminating power between the VLQ signal and the SM background, machine-learning techniques using a deep neural network were employed to identify the hadronic decays of top quarks, Z, W or Higgs bosons, and categorise events into 19 distinct regions.

The second analysis targets the single production of VLQs. While the rate of VLQ pair production through ordinary strong interactions depends only on their mass, their single production also depends on their coupling to SM electroweak bosons. As a result, depending on the model under consideration, VLQs heavier than approximately 1 TeV might be produced predominantly singly, and a measurement would therefore provide unique insight into this coupling strength.

The analysis was optimised for VLQ decays to top quarks in combination with either a Higgs or a Z boson. Events with a single lepton and multiple jets were selected, and tagging algorithms were used to identify the boosted leptonic and hadronic decays of top quarks, and the hadronic decays of Higgs and Z bosons. The presence of a forward jet, characteristic of the single VLQ production mode, was used (along with the multiplicity of jets, b-jets and reconstructed boosted objects) to categorise the analysed events into 24 regions.

The largest excluded mass for the single production of a vector-like top quark for a range of models

The observations from both analyses are consistent with SM predictions, which allows ATLAS to set the strongest constraints to date on VLQ production. Together, the pair- and single-production analyses exclude VLQs with masses up to 1.6 TeV (see figure 1) and 2.0 TeV (see figure 2), respectively, depending on the assumed model. These two analyses are part of a broader suite of searches for VLQs underway in ATLAS. The combination of these searches will provide the greatest potential for the discovery of VLQs, and ATLAS therefore looks forward to the upcoming Run 3 data.

Powering for a sustainable future

The TT2 transfer line carries beams from the Proton Synchrotron to the majority of CERN’s facilities

Just over 60 years ago, physicists and engineers at CERN were hard at work trying to tune the world’s first strong-focusing proton synchrotron, the PS. By exploiting the strong-focusing principle, it produced higher-energy beams within a smaller aperture and at a lower construction cost than, for example, the CERN synchrocyclotron. Little could physicists in 1959 have imagined the maze of technical galleries and tunnels that would stem from the PS ring only a few years later.

The first significant expansion to CERN’s accelerator complex was prompted by the 1962 discovery of the muon neutrino at the competing Alternating Gradient Synchrotron at Brookhaven National Laboratory in the US. Soon afterwards, CERN embarked on an ambitious programme starting with a new east experimental area, the PS booster and the first hadron collider – the Intersecting Storage Rings (ISR). A major challenge during this expansion was transferring the beam to targets, experiments and the ISR, which required that CERN build transfer lines that could handle different particles, different extraction energy levels and various duty cycles (see “In service” figure).

The CERN facilities and experiments whose transfer lines have been renovated during Long Shutdown 2

Transfer lines transport particle beams from one machine to another using powerful magnets. Once fully accelerated, a beam is given an ultra-fast “kick” off its trajectory by a kicker magnet and then guided away from the ring by one or more septum magnets. A series of focusing and defocusing quadrupole magnets keeps the beams contained in the vacuum pipe, while bending magnets direct them to their new destination (a target or a subsequent accelerator ring).
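To illustrate how alternating focusing and defocusing elements keep a beam contained, here is a minimal thin-lens sketch of a single FODO cell; the focal length and drift length are illustrative assumptions, not parameters of any actual CERN beamline.

```python
# Thin-lens sketch of a FODO cell: a focusing and a defocusing quadrupole
# separated by drifts. If the one-cell transfer matrix has |trace| < 2,
# particle motion through a long chain of such cells remains bounded.
import numpy as np

f = 5.0   # quadrupole focal length in metres (illustrative value)
L = 4.0   # drift length between quadrupoles in metres (illustrative value)

drift = np.array([[1.0, L], [0.0, 1.0]])
qf = np.array([[1.0, 0.0], [-1.0 / f, 1.0]])   # focusing quadrupole (thin lens)
qd = np.array([[1.0, 0.0], [+1.0 / f, 1.0]])   # defocusing quadrupole (thin lens)

cell = drift @ qd @ drift @ qf                 # one FODO period
print("Trace of the cell matrix:", np.trace(cell))
print("Stable (|trace| < 2)?", abs(np.trace(cell)) < 2.0)
```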

Making the connection

The first transfer lines linking two different CERN accelerators were TT1 and TT2, which were originally built for the ISR. The need to handle different particle energies and even different particle charges required continuous adjustment of the magnetic field at every extraction, typically once per second in the PS. One of the early challenges faced was a memory effect in the steel yokes of the magnets: alternating among different field values leaves a remnant field that changes the field density depending on the order of cycles played out before. Initially, complex solutions with secondary field-resetting coils were used. Later, magnetic reset was achieved by applying a predefined field excitation that brings the magnet to a reproducible state prior to the next physics cycle.

Solving the magnetic hysteresis problem was not the only hurdle that engineers faced. Handling rapid injections and extractions through the magnets was also a major challenge for the electronics of the time. The very first powering concept used motor–generator sets with adjustable speeds to modulate the electric current, and consequently the field density, in the transfer-line magnets. Each transfer line had its own noisy generation plant that required a control room with specialised personnel (see “Early days” images). Modifying the mission profile of a magnet to test new physics operations was a cumbersome and tedious exercise.

Early days of CERN

Towards the end of the 1960s, the electrical motors in the west PS hall were replaced by the first semiconductor-based thyristor rectifiers, which transformed the 50 Hz alternating grid voltage into a current in the beamline magnets regulated to roughly 100 parts per million. They also occupied a fraction of the space, had lower power losses and were able to operate unsupervised. All of a sudden, transporting different particles with variable energies became possible at the touch of a knob. The timing could not have been better, as CERN prepared itself for the Super Proton Synchrotron (SPS) era, which would see yet more transfer lines added to its accelerator complex.

By the early 1980s the ISR had completed its mission, and the TT1 transfer line was decommissioned together with the storage rings. However, the phenomenal versatility of TT2 has allowed it to continue to extract particles for experiments. Today, virtually all user beams, except those for the East Area and ISOLDE, pass through the 300 m-long line. It delivers low-energy 3 GeV beams to “Dump 2” for machine development, 14 GeV beams to the SPS for various experiments in the North Area, 20 GeV beams towards the n_TOF facility, 26 GeV beams to the Antiproton Decelerator, and to the SPS – where protons are accelerated to 450 GeV before being injected into the LHC. While beams traverse TT2 in just over a microsecond, other beamlines, such as those in the East Area, spill particles out of the PS continuously for 450 ms towards the CLOUD experiment and other facilities – a process known as slow extraction.

Energy economy 

Transfer lines are heavy users of electrical power, since typically their magnets are powered for long periods compared to the time it takes a beam to pass. During their last year of operation in 2017, for example, the East Area transfer lines accounted for 12% of all energy consumption by CERN’s PS/PSB injector complex. The reason for this inefficiency was the non-stop powering of the few dozen magnets used in each transfer line for the necessary focusing, steering and trajectory-correction functions. This old powering system, combined with a solid-yoke magnet structure, did not permit extraction of the magnetic field energy between beam operations. 

CERN is looking at testing and implementing new systems that lower its environmental impact today and into the far future

For reference, a typical bending magnet absorbs the same energy as a high-performance car accelerating from 0 to 100 km/h, and must do so in a period of 0.5 s every 1.2 s for beams from the PS. To supply and recover all this energy between successive beam operations, powerful converters are required along with laminated steel magnet yokes, all of which became possible with the recent East Area renovation project. 
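As a rough cross-check of that comparison, here is a back-of-the-envelope estimate assuming a car mass of about 1.5 tonnes (a figure not given in the article):

```python
# Kinetic energy of a ~1500 kg car at 100 km/h, and the average power needed
# to deliver that much energy to a magnet within 0.5 s.
car_mass_kg = 1500.0          # assumed mass, for illustration only
speed_m_per_s = 100.0 / 3.6   # 100 km/h expressed in m/s

energy_j = 0.5 * car_mass_kg * speed_m_per_s**2   # roughly 0.6 MJ
average_power_w = energy_j / 0.5                  # delivered in 0.5 s -> ~1.2 MW

print(f"Energy per cycle ~ {energy_j / 1e6:.2f} MJ")
print(f"Charging power   ~ {average_power_w / 1e6:.1f} MW, repeated every 1.2 s")
```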

Energy economy was the primary motivation for CERN to adopt the “Sirius” family of regenerative power converters for TT2 and, subsequently, the East Area and Booster transfer lines. While transfer lines typically absorb and return all the magnetic field energy from and to the power grid, the new Sirius power converter allows a more energy-efficient approach by recovering the magnetic field energy locally into electrolytic capacitors for re-use in the next physics cycle. Electrolytic capacitors are the only energy-storage technology that can withstand the approximately 200 million beam transports that a Sirius converter is expected to deliver during its lifetime, and the system employs between 15 and 420 such wine-bottle-sized units according to the magnet size and beam energy to be supplied (see “Transformational” image).

Sirius power converters and their electrolytic capacitors

Sirius is also equipped with a front-end unit that can control the energy flow from the grid to match what is required to compensate the thermal losses in the system. By estimating in real time how much of the total energy can be recycled, Sirius has enabled the newly renovated East Area to be powered using only two large-distribution transformers rather than the seven transformers used in the past for the old 1960s thyristor rectifiers. To control the energy flow in the magnets, Sirius uses powerful silicon-based semiconductors that switch on and off 13,000 times per second. By adjusting the “on” time of the switches the average current in and out of the energy-storing units can be controlled with precision, while the high switching frequency allows rapid corrections of the generated voltage and current across the magnet.
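A minimal sketch of that duty-cycle principle is shown below; the DC-link voltage and duty cycle are illustrative assumptions, not Sirius specifications.

```python
# Pulse-width modulation: switching at 13 kHz and varying the "on" time within
# each period sets the average voltage, and hence current, applied to the magnet.
switching_frequency_hz = 13_000
period_s = 1.0 / switching_frequency_hz      # about 77 microseconds

dc_link_voltage_v = 400.0                    # assumed value, illustrative only
duty_cycle = 0.35                            # fraction of each period spent "on"

on_time_s = duty_cycle * period_s
average_voltage_v = duty_cycle * dc_link_voltage_v

print(f"On-time: {on_time_s * 1e6:.1f} us of each {period_s * 1e6:.1f} us period")
print(f"Average voltage across the magnet: {average_voltage_v:.0f} V")
```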

The Sirius converters entered operation gradually from September 2020, and at present a total of 500 million magnetic cycles have been completed. Recent measurements made on the first circuits commissioned in the East Area demonstrated an energy consumption 95% lower than the original 1960s figures. But above all, the primary role of Sirius is to provide current and hence magnetic field in transfer-line magnets to a precision of 10 parts per million, which enables excellent reproducibility for the beams coming down the lines. The most recent measurements demonstrated a stability better than 10 ppm during a 24-hour interval.

Unusual engineering model 

CERN employs a rather unusual engineering model compared to those in industry. For Sirius, a team of experts and technicians from the electrical power converters group designed, prototyped and validated the power converter before issuing international tenders to procure the subsystems, assembly and testing. Engineers therefore have the opportunity to work with their counterparts in member-state industries, often helping them develop new manufacturing methods and skills. Sirius, for example, helped a magnetics-component manufacturer in Germany achieve a record precision in their manufacturing process and to improve their certification procedures for medium-power reactors. Another key partner acquired new knowledge in the manufacturing and testing of stainless-steel water-cooling circuits, enabling the firm to expand its project portfolio.

Thanks to the CERN procurement process, Sirius components are built by a multitude of suppliers across Europe. For some, it was their first time working with CERN. For example, the converter-assembly contract was the first major (CHF 12 million) contract won by Romanian industry after the country’s accession to CERN five years ago. Other significant contributions were made by German, Dutch, French, UK, Danish and Swedish industries. Recent work by the CERN knowledge transfer group resulted in a contract with a Spanish firm that licensed the Sirius design for production for other laboratories, with the profits invested in R&D for future converter families.

Energy recycling tends to yield more impressive energy savings in fast-cycling accelerators and transfer lines, such as those in the PS complex. However, CERN is planning to deploy similar technologies in other experimental facilities, such as the North Area, which will undergo a major makeover in the coming years. The codename for this new converter project is Polaris – a scalable converter family that can coast through the long extraction plateaus used in the SPS (see “Physics cycles” figure). The primary goal of the renovation, beyond better energy efficiency, is to restore reliability and provide a 10-fold improvement in the precision of the magnetic-field regulation.

Thermal loss versus recoverable energy used by a typical magnet in different CERN accelerator facilities.

Development efforts in the power-converters group do not stop here. The electrification of transport and the net-zero carbon-emission targets of many governments are also driving innovation in power electronics, which CERN might take advantage of. For example, wide-bandgap semiconductors exhibit higher reverse-blocking capability and faster switching transitions that could allow switching at rates above 40,000 Hz, helping to reduce size and losses, and potentially eliminating altogether the audible noise emitted by power conversion.

Another massive opportunity concerns energy storage, with CERN looking closely at the technologies driven by the battery mega-factories that are being built around the world. As part of our mission to provide the next generation of sustainable scientific facilities, as outlined in CERN’s recently released second environment report, we are looking at testing and implementing new systems to lower our environmental impact today and into the far future. 

Beauty enhances precision of CKM angle γ

Beauty and charm measurements

The CP-violating angle γ of the Cabibbo–Kobayashi–Maskawa (CKM) quark-mixing matrix is a benchmark of the Standard Model, since it can be determined from tree-level beauty decays in an entirely data-driven way with negligible theoretical uncertainty. Comparisons between direct and indirect measurements of γ therefore provide a potent test for new physics. Before LHCb began taking data, γ was one of the least precisely known angles of the CKM unitarity triangle, but that is no longer the case.
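For reference, the angle in question is conventionally defined in terms of CKM matrix elements (the standard convention, not a detail spelled out in the article) as

\[
\gamma \equiv \arg\!\left(-\frac{V_{ud}\,V_{ub}^{*}}{V_{cd}\,V_{cb}^{*}}\right),
\]

one of the three interior angles of the unitarity triangle built from the first and third columns of the CKM matrix.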

A new result from LHCb marks an important change in strategy, including not only results from beauty decays sensitive to γ but also exploiting the sensitivity to CP violation and mixing in charm-meson (D0) decays. Mixing in the D0–D̄0 system proceeds via flavour-changing neutral currents, which may also be affected by contributions from new heavy particles. The process is described by two parameters: the mass difference, x, and the width difference, y, between the two neutral-charm mass eigenstates (see figure 1).
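In the convention used in most of the charm-mixing literature (assumed here; the article does not quote the definitions), the two parameters are

\[
x = \frac{m_{1}-m_{2}}{\Gamma}, \qquad y = \frac{\Gamma_{1}-\Gamma_{2}}{2\Gamma},
\]

where the subscripts label the two neutral-charm mass eigenstates and \(\Gamma = (\Gamma_{1}+\Gamma_{2})/2\) is their average decay width.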

The latest combination takes the results of more than 20 LHCb beauty and charm measurements to determine γ = (65.4 +3.8 −4.2)°, the most precise measurement from a single experiment (see figure 2). Furthermore, various charm-mixing parameters were determined by combining, for the first time, both the beauty and charm datasets, which yields x = 0.400% and y = 0.630%. The latter is a factor of two more precise than the current world average, an improvement entirely due to the new methodology, which harnesses additional sensitivity to the charm sector from beauty decays.

This demonstrates that LHCb has already achieved better precision than its original design goals. When the redesigned LHCb detector restarts operations in 2022, the target of sub-degree precision on γ, and the chance to observe CP violation in charm mixing, will come ever closer.

A systematic approach to systematics

Whenever we perform an analysis of our data, whether measuring a physical quantity of interest or testing some hypothesis, it is necessary to assess the accuracy of our result. Statistical uncertainties arise from the limited precision with which we can measure anything, or from the natural Poisson fluctuations involved in counting independent events. They have the property that repeating a measurement, or accumulating more data, improves the precision.
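A minimal numerical illustration of that scaling, assuming a simple Poisson counting experiment (the event counts are invented for the example):

```python
# Relative statistical uncertainty of a Poisson counting experiment: a count of
# N events fluctuates by about sqrt(N), so the relative uncertainty falls as
# 1/sqrt(N) as more data are accumulated.
import math

for n_events in (100, 10_000, 1_000_000):
    relative_uncertainty = 1.0 / math.sqrt(n_events)
    print(f"N = {n_events:>9}: relative statistical uncertainty = {relative_uncertainty:.3%}")
```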

Systematic uncertainties, on the other hand, arise from many sources and may not cause a spread in results when experiments are repeated, but merely shift them away from the true value. Accumulating more data usually does not reduce the magnitude of a systematic effect. As a result, estimating systematic uncertainties typically requires much more effort than for statistical ones, and more personal judgement and skill are involved. Furthermore, statistical uncertainties in different analyses are usually independent; this is often not so for systematics.

The November event saw the largest number of statisticians at any PHYSTAT meeting

In particle-physics analyses, many systematics are related to detector and analysis effects. Examples include trigger efficiency; jet energy scale and resolution; identification of different particle types; and the strength of backgrounds and their distributions. There are also theoretical uncertainties which, as well as affecting predicted values for comparison with measured ones, can influence the experimental variables extracted from the data. Another systematic comes from the intensity of accelerator beams (the integrated luminosity at the LHC, for example), which is likely to be correlated for the various measurements made using the same beams.

At the LHC, it is in analyses with large amounts of data where systematics are likely to be most relevant. For example, a measurement of the mass of the W boson published by the ATLAS collaboration in 2018, based on a sample of 14 million W-boson decays, had a statistical uncertainty of 7 MeV but a systematic uncertainty of 18 MeV.
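Assuming the two components are independent and combined in quadrature (the usual convention, not stated explicitly here), the systematic term clearly dominates the total uncertainty:

```python
# Combine the quoted statistical and systematic uncertainties in quadrature.
import math

stat_mev = 7.0    # statistical uncertainty on the W-boson mass (MeV)
syst_mev = 18.0   # systematic uncertainty on the W-boson mass (MeV)

total_mev = math.hypot(stat_mev, syst_mev)   # sqrt(7**2 + 18**2), about 19.3 MeV
print(f"Total uncertainty ~ {total_mev:.1f} MeV, dominated by the systematic term")
```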

PHYSTAT-Systematics

Two big issues for systematics are how the magnitudes of the different sources are estimated, and how they are then incorporated in the analysis. The PHYSTAT-Systematics meeting concentrated on the latter, as it was thought that this was more likely to benefit from the presence of statisticians – a powerful feature of the PHYSTAT series, which started at CERN in 2000.

The 20 talks fell into three categories. The first comprised those devoted to analyses in different particle-physics areas: the LHC experiments; neutrino-oscillation experiments; dark-matter searches; and flavour physics. A large amount of relevant information was discussed, with interesting differences between the separate sub-fields of particle physics. For example, in dark-matter searches, upper limits are sometimes set using Yellin’s Maximum Gap method when the expected background is low, or by using Power Constrained Limits, whereas these tend not to be used in other contexts.

The second group followed themes: theoretical systematics; unfolding; mis-modelling; an appeal for experiments to publish their likelihood functions; and some of the many aspects that arise in using machine learning (where the machine-learning process itself can result in a systematic, and the increased precision of a result should not be at the expense of accuracy).

Finally, there was a series of talks and responses by statisticians. The November event saw the largest number of statisticians at any PHYSTAT meeting, and the efforts that they made to understand our intricate analyses and the statistical procedures that we use were much appreciated. It was valuable to have insights from a different viewpoint on the largely experimental talks. David van Dyk, for instance, emphasised the conceptual and practical differences between simply using the result of a subsidiary experiment’s estimate of a systematic to assess its effect on a result, and using the combined likelihood function for the main and the subsidiary measurements. Also, in response to talks about flavour physics and neutrino-oscillation experiments, attention was drawn to the growing impact in cosmology of non-parametric, likelihood-free (simulation-based likelihoods) and Bayesian methods. Likelihood-free methods came up again in response to a modelling talk based on LHC-experiment analyses, and the role of risk estimation was emphasised by statisticians. Such suggestions for alternative statistical strategies open the door to further discussions about the merits of new ideas in particular contexts.

A novel feature of this remote meeting was that the summary talks were held a week later, to give speakers Nick Wardle and Sara Algeri more time. In her presentation, Algeri, a statistician, called for improved interaction between physicists and statisticians in dealing with these interesting issues.

Overall, the meeting was a good step on the path towards having a systematic approach to systematics. Systematics is an immense topic, and it was clear that one meeting spread over four afternoons was not going to solve all the issues. Ongoing PHYSTAT activities are therefore planned, and the organisers welcome further suggestions.

Scrutinising the Higgs sector

The 11th Higgs Hunting workshop took place remotely between 20 and 22 September 2021, with more than 300 registered participants engaging in lively discussions about the most recent results in the Higgs sector. ATLAS and CMS presented results based on the full LHC Run-2 dataset (up to 140 fb⁻¹) recorded at 13 TeV. While all results remain compatible with Standard Model expectations, the precision of the measurements benefited from significant reductions in statistical uncertainties, more than three times smaller with the 13 TeV data than in previous LHC results at 7 and 8 TeV. This also brought into sharp relief the role of systematic uncertainties, which in some cases are becoming dominant.

The status of theory improvements and phenomenological interpretations, such as those from effective field theory, was also presented. Highlights included the Higgs pair-production process, which is particularly challenging at the LHC due to its low rate. ATLAS and CMS showed greatly improved sensitivity in various final states, thanks to improvements in analysis techniques. Also shown were results on the scattering of weak vector bosons, a process that is strongly related to the Higgs sector, highlighting large improvements from both the larger datasets and the higher collision energy available in Run 2.

Several searches for phenomena beyond the Standard Model – in particular for additional Higgs bosons – were presented. No significant excesses have yet been found.

The historical talk “The LHC timeline: a personal recollection (1980–2012)” was given by Luciano Maiani, former CERN Director-General, and concluding talks were given by Laura Reina (Florida) and Paolo Meridiani (Rome). A further highlight was the theory talk from Nathaniel Craig, who discussed the progress being made in addressing six open questions. Does the Higgs boson have a size? Does it interact with itself? Does it mediate a Yukawa force? Does it fulfil the naturalness strategy? Does it preserve causality? And does it realise electroweak symmetry?

The next Higgs Hunting workshop will be held in Orsay and Paris from 12 to 14 September 2022.

2021 IOP Awards

The UK Institute of Physics has announced its 2021 awards, recognising several high-energy and nuclear physicists across three categories.

David Deutsch and Ian Chapman

In the gold-medal category David Deutsch of the University of Oxford has been awarded the Isaac Newton Prize “for founding the discipline named quantum computation and establishing quantum computation’s fundamental idea, now known as the ‘qubit’ or quantum bit.” In the same category, Ian Chapman received the Richard Glazebrook Prize “for outstanding leadership of the UK Atomic Energy Authority and the world’s foremost fusion research and technology facility, the Joint European Torus, and the progress it has delivered in plasma physics, deuterium-tritium experiments, robotics, and new materials”.

Silver medal

Among this year’s silver-medal recipients, experimentalist Mark Lancaster of the University of Manchester earned the James Chadwick Prize “for distinguished, precise measurements in particle physics, particularly of the W boson mass and the muon’s anomalous magnetic moment”. Michael Bentley (University of York) received the Ernest Rutherford Prize for his contributions to the understanding of fundamental symmetries in atomic nuclei, while Jerome Gauntlett (Imperial College London) received the John William Strutt Lord Rayleigh Prize for applications of string theory to quantum field theory, black holes, condensed matter physics and geometry.

Bronze medals collage

Finally, in the bronze medal category for early-career researchers, the Daphne Jackson Prize for exceptional contributions to physics education goes to accelerator physicist Chris Edmonds (University of Liverpool) in recognition of his work in improving access for the visually impaired, for example via the Tactile Collider project. And the Mary Somerville Prize for exceptional contributions to public engagement in physics goes to XinRan Liu (University of Edinburgh) for his promotion of UK research and innovation to both national and international audiences.

Lyn Evans and Tim Palmer

Acknowledging physicists who have contributed to the field generally, 2021 honorary Institute of Physics fellowships were granted to Lyn Evans (for sustained and distinguished contributions to, and leadership in, the design, construction and operation of particle accelerator systems, and in particular the LHC) and climate physicist Tim Palmer, a proponent of building a ‘CERN for climate change’, for his pioneering work exploring the nonlinear dynamics and predictability of the climate system.

The quantum frontier: cold atoms in space

The quantum frontier

Cold atoms offer exciting prospects for high-precision measurements based on emerging quantum technologies. Terrestrial cold-atom experiments are already widespread, exploring both fundamental phenomena such as quantum phase transitions and applications such as ultra-precise timekeeping. The final quantum frontier is to deploy such systems in space, where the lack of environmental disturbances enables high levels of precision.

This was the subject of a workshop supported by the CERN Quantum Technology Initiative, which attracted more than 300 participants online from 23 to 24 September. Following a 2019 workshop triggered by the Voyage 2050 call of the European Space Agency (ESA) for ideas for future experiments in space, the main goal of this workshop was to begin drafting a roadmap for cold atoms in space.

The workshop opened with a presentation by Mike Cruise (University of Birmingham) on ESA’s vision for cold atom R&D for space: considerable efforts will be required to achieve the technical readiness level needed for space missions, but they hold great promise for both fundamental science and practical applications. Several of the cold-atom teams that contributed white papers to the Voyage 2050 call also presented their proposals.

Atomic clocks

Next came a session on atomic clocks, covering their potential for refining the definitions of SI units, such as the second, and for distributing this new time standard worldwide, as well as potential applications of atomic clocks to geodesy. Next-generation space-based atomic-clock projects for these and other applications are ongoing in China, the US (Deep Space Atomic Clock) and Europe.

This was followed by a session on Earth observation, featuring the prospects for improved gravimetry using atom interferometry and talks on the programmes of ESA and the European Union. Quantum space gravimetry could contribute to studies of climate change, for example, by measuring the densities of water and ice very accurately and with improved geographical precision.

Cold-atom experiments in space offer great opportunities to probe the foundations of physics

For fundamental physics, prospects for space-borne cold-atom experiments include studies of wavefunction collapse and Bell correlations in quantum mechanics, probes of the equivalence principle by experiments such as STE-QUEST, and searches for dark matter.

The proposed AEDGE atom interferometer will search for ultralight dark matter and gravitational waves in the deci-Hertz range, where LIGO/Virgo/KAGRA and the future LISA space observatory are relatively insensitive, and will probe models of dark energy. AEDGE gravitational-wave measurements could be sensitive to first-order phase transitions in the early universe, as occur in many extensions of the Standard Model, as well as to cosmic strings, which could be relics of symmetries broken at higher energies than those accessible to colliders.

These examples show that cold-atom experiments in space offer great opportunities to probe the foundations of physics as well as make frontier measurements in astrophysics and cosmology.

Several pathfinder experiments are underway. These include projects for terrestrial atom interferometers on scales from 10 m to 1 km, such as the MAGIS project at Fermilab and the AION project in the UK, which both use strontium, and the MIGA project in France and proposed European infrastructure ELGAR, which both use rubidium. Meanwhile, a future stage of AION could be situated in an access shaft at CERN – a possibility that is currently under study, and which could help pave the way towards AEDGE. Pioneering experiments using Bose-Einstein condensates on research rockets and the International Space Station were also presented.

A strong feature of the workshop was a series of breakout sessions to enable discussions among members of the various participating communities (atomic clocks, Earth observation and fundamental science), as well as a group considering general perspectives, which were summarised in a final session. Reports from the breakout sessions will be integrated into a draft roadmap for the development and deployment of cold atoms in space. This will be set out in a white paper to appear by the end of the year and presented to ESA and other European space and funding agencies.

Space readiness

Achieving space readiness for cold-atom experiments will require significant research and development. Nevertheless, the scale of participation in the workshop and the high level of engagement testifies to the enthusiasm in the cold-atom community and prospective user communities for deploying cold atoms in space. The readiness of the different communities to collaborate in drafting a joint roadmap for the pursuit of common technological and scientific goals was striking.

Beate Heinemann appointed director at DESY

Beate Heinemann

Experimental particle physicist Beate Heinemann has been announced as the new director of DESY’s High Energy Physics division, effective from 1 February. Succeeding interim director Ties Behnke, who held the position since January 2021 when Joachim Mnich joined CERN as director for research and computing, she is the first female director in DESY’s 60-year history.

After completing a PhD at the University of Hamburg in 1999, based on data from the H1 experiment at DESY’s former electron-proton collider HERA, Heinemann did a postdoc at the University of Liverpool, UK, working on the CDF experiment at Fermilab. She became a lecturer at Liverpool in 2003, a professor at UC Berkeley in 2006 and a scientist at Lawrence Berkeley National Laboratory.

In 2007 Heinemann joined the ATLAS collaboration, in which she helped with the installation, commissioning and data-quality assessment of the pixel detector, and took on other roles including that of data-preparation coordinator during the LHC startup phase. She was deputy spokesperson of the ATLAS collaboration from 2013 to 2017, and since 2016 has been a senior scientist at DESY and a W3 professor at Albert-Ludwigs-Universität Freiburg. She was also a member of the Physics Preparatory Group for the 2020 update of the European strategy for particle physics, and since 2017 she has been a member of the CERN Scientific Policy Committee.

Born in Hamburg, Heinemann is looking forward to the many exciting challenges, both scientifically and socially, ahead: “It is very important that we retain and further expand our pioneering role as a centre for fundamental research for the study of matter. In the next few years, the course will be set for the successor project to the LHC, whose technology and location have not yet been chosen. DESY must be actively involved in the preparation of this project in order to maintain and expand its pioneering role,” she explains. “Another topic that is very close to my heart, both personally and through my new office, is diversity. DESY should remain a cosmopolitan, diverse laboratory, and there is still room for improvement in many areas, for example the number of women in management positions.”

Hadron colliders in perspective

From visionary engineer Rolf Widerøe’s 1943 patent for colliding beams, to the high-luminosity LHC and its possible successor, the 14 October symposium “50 Years of Hadron Colliders at CERN” offered a feast of physics and history to mark the 50th anniversary of the Intersecting Storage Rings (ISR). Negotiating the ISR’s steep learning curve in the 1970s, the ingenious conversion of the Super Proton Synchrotron (SPS) into a proton–antiproton collider (SppS) in the 1980s, and the dramatic approval and switch-on of the LHC in the 1990s and 2000s chart a scientific and technological adventure story, told by its central characters in CERN’s main auditorium.

Former CERN Director-General (DG) Chris Llewellyn Smith swiftly did away with notions that the ISR was built without a physics goal. Viki Weisskopf (DG at the time) was well aware of the quark model, he said, and urged that the ISR be built to discover quarks. “The basic structure of high-energy collisions was discovered at the ISR, but you don’t get credit for it because it is so obvious now,” said Llewellyn Smith. Summarising the ISR physics programme, Ugo Amaldi, former DELPHI spokesperson and a pioneer of accelerators for hadron therapy, listed the observation of charmed-hadron production in hadronic interactions, studies of the Drell–Yan process, and measurements of the proton structure function as ISR highlights. He also recalled the frustration at CERN in late 1974 when the J/ψ meson was discovered at Brookhaven and SLAC, remarking that history would have changed dramatically had the ISR detectors also enabled coverage at high transverse momentum.

A beautiful machine

Amaldi sketched the ISR’s story in three chapters: a brilliant start followed by a somewhat difficult time, then a very active and interesting programme. Former CERN director for accelerators and technology Steve Myers offered a first-hand account, packed with original hand-drawn plots, of the battles faced and the huge amount learned in getting the first hadron collider up and running. “The ISR was a beautiful machine for accelerator physics, but sadly is forgotten in particle physics,” he said. “One of the reasons is that we didn’t have beam diagnostics, on account of the beam being a coasting beam rather than a bunched beam, which made it really hard to control things during physics operation.” Stochastic cooling, a “huge surprise”, was the ISR’s most important legacy, he said, paving the way for the SppS and beyond.

Former LHC project director Lyn Evans took the baton, describing how the confluence of electroweak theory, the SPS as collider and stochastic cooling led to rapid progress. It started with the Initial Cooling Experiment in 1977–1978, then the Antiproton Accumulator. It would take about 20 hours to produce a bunch dense enough for injection into the SppS, recalled Evans, and several other tricks to battle past the 26 GeV transition, “where lots of horrible things happened”. At 04:15 on 10 July 1981, with just him and Carlo Rubbia in the control room, first collisions at 270 GeV at the SppS were declared.

Poignantly, Evans ended his presentation “The SPS and LHC machines” there. “The LHC speaks for itself really,” he said. “It is a fantastic machine. The road to it has been a long and very bumpy one. It took 18 years between the approval of the LHC and the discovery of the Higgs. But we got there in the end.”

Discovery machines

The parallel world of hadron-collider experiments was brought to life by Felicitas Pauss, former CERN head of international relations, who recounted her time as a member of the UA1 collaboration at the SppS during the thrilling period of the W and Z discoveries. Jumping to the present day, early-career researchers from the ALICE, ATLAS, CMS and LHCb collaborations brought participants up to date with the progress at the LHC in testing the Standard Model and the rich physics prospects at Run 3 and the HL-LHC.

Few presentations at the symposium did not mention Carlo Rubbia, who instigated the conversion of the SPS into a hadron collider and was the prime mover of the LHC, particularly, noted Evans, during the period when the US Superconducting Super Collider was under construction. His opening talk presented a commanding overview of colliders, their many associated Nobel prizes and their applications in wider society.

During a brief Q&A at the end of his talk, Rubbia reiterated his support for a muon collider operating as a Higgs factory in the LHC tunnel: “The amount of construction is small, the resources are reasonable, and in my view it is the next thing we should do, as quickly as possible, in order to make sure that the Higgs is really what we think it is.”

It seems in hindsight that the LHC was inevitable, but it was anything but

Christopher Llewellyn Smith

In a lively and candid presentation about how the LHC got approved, Llewellyn Smith also addressed the question of the next collider, noting it will require the unanimous support of the global particle-physics community, a “reasonable” budget envelope and public support. “It seems in hindsight that the LHC was inevitable, but it was anything but,” he said. “I think going to the highest energy is the right way forward for CERN, but no government is going to fund a mega project to reduce error bars – we need to define the physics case.”

Following a whirlwind “view from the US”, in which Young-Kee Kim of the University of Chicago described the Tevatron and RHIC programmes and collated congratulatory messages from the US Department of Energy and others, CERN DG Fabiola Gianotti rounded off proceedings with a look at the future of the LHC and beyond. She updated participants on the significant upgrade work taking place for the HL-LHC and on the status of the Future Circular Collider feasibility study, a high-priority recommendation of the 2020 update of the European strategy for particle physics which is due to be completed in 2025. “The extraordinary success of the LHC is the result of the vision, creativity and perseverance of the worldwide high-energy physics community and more than 30 years of hard work,” the DG stated. “Such a success demonstrates the strength of the community and it’s a necessary milestone for future, even more ambitious, projects.”

Videos from the one-off symposium, capturing the rich interactions between the people who made hadron colliders a reality, are available online.

Harnessing the LHC network

Harnessing the LHC network

On 15 November, around 260 physicists gathered at CERN (90 in person) to participate in the 2021 LHC Career Networking event, which is aimed at physicists, engineers and others who are considering leaving academia for a career in industry, non-governmental organisations and government. It was the fifth event in a series that was initially limited to attendance only by members of LHC experiments but which, in light of its strong resonance within the community, is now open to all.

Former members of the LHC experiments were invited to share their experiences of working in fields ranging from project management at the Ellen MacArthur Foundation to consultancies such as McKinsey and pharmaceutical companies such as Boehringer Ingelheim. They spoke movingly of the difficulties of leaving academia and research, the introspection required to discover the path that was right for them, and the sense of satisfaction and happiness they felt in their new roles.

Adjusting to new environments

Following a supportive welcome from Joachim Mnich, CERN director of research and computing, and Marianna Mazzilli, a member of the ALICE collaboration and chair of the organising committee, the first speaker to take to the stage in the main auditorium was Florian Kruse. Florian was a physicist on the LHCb experiment who, upon leaving CERN, decided to set up his own data-science and AI company called Point 8 – a nod to the many years he spent commuting to the LHCb pit at LHC Point 8. His company has grown from three to 20 staff members, some ex-CERN, and continues to expand.

Setting the tone for the evening, he talked about what to expect when interacting with industry, how people view CERN physicists and where and how adjustments have to be made to adapt to a new environment – advising participants to “recalibrate your imposter syndrome” and “adjust to other audiences”.

Julia Hunt, a former CMS experimentalist, shared a personal insight into her journey out of academia, revealing that she fortuitously came across sailor Ellen MacArthur’s TED talk and soon landed the job of project manager at the Ellen MacArthur Foundation.

The field of data science has welcomed numerous former CERN physicists, among them ex-ATLAS members Max Baak and Till Eifert, former CMS and ALICE member Torsten Dahms, ex-CMS member Iasonas Topsis-Giotis and ex-ALICE member Elena Bruna. Max gave a mini-course in bond trading at ING bank, while Iasonas put a positive spin on his long search for a job by saying that each interview or application taught him essential lessons for the next application, eventually landing him a job as a manager at professional services company Ernst & Young in Belgium. In a talk titled “19 years in physics… and then?”, Torsten shared the sleepless nights he endured when deliberating whether to continue in a field that had him relocate himself and his family five times in 15 years, ultimately turning down a tenure-track position in 2019.

Elena, who despite having a permanent position left the field in 2018 to become a data scientist at Boehringer Ingelheim, highlighted the differences between physics (where data structures are usually designed in advance and data are largely available) and data science (where the value of data is not always known a priori, and the data tend to be messier), and indicated areas to highlight on a data-science CV. These include keeping it to a maximum of two pages and emphasising skills and tools, including big-data analysis, machine-learning techniques, Monte Carlo simulations and working in international teams. The topic of CVs came up repeatedly, a key message being that physicists must modify the language used in academic applications because people “outside” just don’t understand our terminology.

Two networking breaks, held in person and accompanied by beer, wine and pizza for those who were present and via Zoom breakout rooms for remote participants, were alive with questions and discussion. Former ATLAS member Till Eifert was surrounded by physicists eager to learn more about his role as a specialist consultant with McKinsey in Geneva, speaking passionately about the renewable energy, cancer diagnostics and decarbonisation projects he has worked on. Head of CERN Alumni relations Rachel Bray and her team were on hand to answer a multitude of questions about the CERN Alumni programme.

70-85% of jobs come through networking

Anthony Nardini

Emphasising the power of such events, speaker Anthony Nardini from entrepreneurial company On Deck cited a 2017 Payscale survey which found that 70–85% of jobs come through networking. Following up from the event on Twitter, he offered takeaways for all career “pivoters”: craft and prioritise your guiding principles, such as industry, job function, company stage and mission; create a daily information-gathering practice so that you are reading the same newsletters, articles and Twitter feeds as those in your target roles; identify and contact “pathblazers” in your target organisations who understand your background; and do the work to pitch how your unique skillset can help a startup to grow.

All the speakers gave their time and contact details for follow-up questions and advice. The overall message was that, while the transition out of academia can be hard, CERN’s brand recognition in certain fields helps enormously. Use your connections and have confidence!
