
Educational accelerator open to the public

What better way to communicate accelerator physics to the public than using a functioning particle accelerator? From January, visitors to CERN’s Science Gateway were able to witness a beam of protons being accelerated and focused before their very eyes. Its designers believe it to be the first working proton accelerator to be exhibited in a museum.

“ELISA gives people who visit CERN a chance to really see how the LHC works,” says Science Gateway’s project leader Patrick Geeraert. “This gives visitors a unique experience: they can actually see a proton beam in real time. It then means they can begin to conceptualise the experiments we do at CERN.”

The model accelerator is inspired by a component of LINAC 4 – the first stage in the chain of accelerators used to prepare beams of protons for experiments at the LHC. Hydrogen is injected into a low-pressure chamber and ionised, and a one-metre-long RF cavity accelerates the protons to 2 MeV before they pass through a thin vacuum-sealed window into the open air. There, the protons ionise gas molecules, producing visible light that, in dim lighting, lets visitors follow the beam’s progress (see “Accelerating education” figure).

ELISA – the Experimental Linac for Surface Analysis – will also be used to analyse the composition of cultural artefacts, geological samples and objects brought in by members of the public. This is an established application of low-energy proton accelerators: for example, a particle accelerator is hidden 15 m below the famous glass pyramids of the Louvre in Paris, though it is almost 40 m long and not freely accessible to the public.

“The proton-beam technique is very effective because it has higher sensitivity and lower backgrounds than electron beams,” explains applied physicist and lead designer Serge Mathot. “You can also perform the analysis in the ambient air, instead of in a vacuum, making it more flexible and better suited to fragile objects.”

For ELISA’s first experiment, researchers from the Australian Nuclear Science and Technology Organisation and from Oxford’s Ashmolean Museum have proposed a joint research project to optimise ELISA’s analysis of paint samples designed to mimic ancient cave art. The ultimate goal is to work towards a portable accelerator that can be taken to regions of the world that don’t have access to proton beams.

Game on for physicists

Raphael Granier de Cassagnac and Exographer

“Confucius famously may or may not have said: ‘When I hear, I forget. When I see, I remember. When I do, I understand.’ And computer-game mechanics can be inspired directly by science. Study it well, and you can invent game mechanics that allow you to engage with and learn about your own reality in a way you can’t when simply watching films or reading books.”

So says Raphael Granier de Cassagnac, a research director at France’s CNRS and Ecole Polytechnique, as well as a member of the CMS collaboration at the LHC. Granier de Cassagnac is also the creative director of Exographer, a science-fiction computer game that draws on concepts from particle physics and is available on Steam, Switch, PlayStation 5 and Xbox.

“To some extent, it’s not too different from working at a place like CMS, which is also a super complicated object,” explains Granier de Cassagnac. Developing a game often requires graphic artists, sound designers, programmers and science advisors. To keep a detector like CMS running, you need engineers, computer scientists, accelerator physicists and funding agencies. And that’s to name just a few. Even if you are not the primary game designer or principal investigator, understanding the fundamentals is crucial to keep the project running efficiently.

Root skills

Most physicists already have some familiarity with structured programming and data handling, which eases the transition into game development. Just as tools like ROOT and Geant4 serve as libraries for analysing particle collisions, game engines such as Unreal, Unity or Godot provide a foundation for building games. Prebuilt functionalities are used to refine the game mechanics.

“Physicists are trained to have an analytical mind, which helps when it comes to organising a game’s software,” explains Granier de Cassagnac. “The engine is merely one big library, and you never have to code anything super complicated, you just need to know how to use the building blocks you have and code in smaller sections to optimise the engine itself.”

While coding is an essential skill for game production, it is not enough to create a compelling game. Game design demands storytelling, character development and world-building. Structure, coherence and the ability to guide an audience through complex information are also required.

“Some games are character-driven, others focus more on the adventure or world-building,” says Granier de Cassagnac. “I’ve always enjoyed reading science fiction and playing role-playing games like Dungeons and Dragons, so writing for me came naturally.”

Entrepreneurship and collaboration are also key skills, as it is increasingly rare for developers to create games independently. Universities and startup incubators can provide valuable support through funding and mentorship. Incubators can help connect entrepreneurs with industry experts, and bridge the gap between scientific research and commercial viability.

“Managing a creative studio and a company, as well as selling the game, was entirely new for me,” recalls Granier de Cassagnac. “While working at CMS, we always had long deadlines and low pressure. Physicists are usually not prepared for the speed of the industry at all. Specialised offices in most universities can help with valorisation – taking scientific research and putting it on the market. You cannot forget that your academic institutions are still part of your support network.”

Though challenging to break into, opportunity abounds for those willing to upskill

The industry is fiercely competitive, with more games being released than players can consume, but a well-crafted game with a unique vision can still break through. A common mistake made by first-time developers is releasing their game too early. No matter how innovative the concept or engaging the mechanics, a game riddled with bugs frustrates players and damages its reputation. Even with strong marketing, a rushed release can lead to negative reviews and refunds – sometimes sinking a project entirely.

“In this industry, time is money and money is time,” explains Granier de Cassagnac. But though challenging to break into, opportunity abounds for those willing to upskill, with the gaming industry worth almost $200 billion a year and reaching more than three billion players worldwide by Granier de Cassagnac’s estimation. The most important aspects for making a successful game are originality, creativity, marketing and knowing the engine, he says.

“Learning must always be part of the process; without it we cannot improve,” adds Granier de Cassagnac, referring to his own upskilling for the company’s next project, which will be even more ambitious in its scientific coverage. “In the next game we want to explore the world as we know it, from the Big Bang to the rise of technology. We want to tell the story of humankind.”

The beauty of falling

The Beauty of Falling

A theory of massive gravity is one in which the graviton, the particle that is believed to mediate the force of gravity, has a small mass. This contrasts with general relativity, our current best theory of gravity, which predicts that the graviton is exactly massless. In 2011, Claudia de Rham (Imperial College London), Gregory Gabadadze (New York University) and Andrew Tolley (Imperial College London) revitalised interest in massive gravity by uncovering the structure of the best possible (in a technical sense) theory of massive gravity, now known as the dRGT theory, after these authors.

Claudia de Rham has now written a popular book on the physics of gravity. The Beauty of Falling is an enjoyable and relatively quick read: a first-hand and personal glimpse into the life of a theoretical physicist and the process of discovery.

De Rham begins by setting the stage with the breakthroughs that led to our current paradigm of gravity. The Michelson–Morley experiment and special relativity, Einstein’s description of gravity as geometry leading to general relativity and its early experimental triumphs, black holes and cosmology are all described in accessible terms using familiar analogies. De Rham grips the reader by weaving in a deeply personal account of her own life and upbringing, illustrating what inspired her to study these ideas and pursue a career in theoretical physics. She has led an interesting life, from growing up in various parts of the world, to learning to dive and fly, to training as an astronaut and coming within a hair’s breadth of becoming one. Her account of the training and selection process for European Space Agency astronauts is fascinating, and worth the read in its own right.

Moving closer to the present day, de Rham discusses the detection of gravitational waves at gravitational-wave observatories such as LIGO, the direct imaging of black holes by the Event Horizon Telescope, and the evidence for dark matter and the accelerating expansion of the universe with its concomitant cosmological constant problem. As de Rham explains, this latter discovery underlies much of the interest in massive gravity; there remains the lingering possibility that general relativity may need to be modified to account for the observed accelerated expansion.

In the second part of the book, de Rham warns us that we are departing from the realm of well tested and established physics, and entering the world of more uncertain ideas. A pet peeve of mine is popular accounts that fail to clearly make this distinction, a temptation to which this book does not succumb. 

Here, the book offers something that is hard to find: a first-hand account of the process of thought and discovery in theoretical physics. When reading the latest outrageously overhyped clickbait headlines coming out of the world of fundamental physics, it is easy to get the wrong impression about what theoretical physicists do. This part of the book illustrates how ideas come about: by asking questions of established theories and tugging on their loose threads, we uncover new mathematical structures and, in the process, gain a deeper understanding of the structures we have.

Massive gravity, the focus of this part of the book, is a prime example: by starting with a basic question, “does the graviton have to be massless?”, a new structure was revealed. This structure may or may not have any direct relevance to gravity in the real world, but even if it does not, our study of it has significantly enhanced our understanding of the structure of general relativity. And, as has occurred countless times before with intriguing mathematical structures, it may ultimately prove useful for something completely different and unforeseen – something that its originators did not have even remotely in mind. Here, de Rham offers invaluable insights both into uncovering a new theoretical structure and what happens next, as the results are challenged and built upon by others in the community.

CMS peers inside heavy-quark jets

CMS figure 1

Ever since quarks and gluons were discovered, scientists have been gathering clues about their nature and behaviour. When quarks and gluons – collectively called partons – are produced at particle colliders, they shower to form jets – sprays of composite particles called hadrons. The study of jets has been indispensable for understanding quantum chromodynamics (QCD) and for describing the final state using parton-shower models. Recently, particular focus has been on jet substructure, which provides further input to the modelling of parton showers.

Jets initiated by heavy charm quarks (c-jets) or bottom quarks (b-jets) provide insight into the role of the quark mass as an additional energy scale in QCD calculations. Heavy-flavour jets are not only used to test QCD predictions; they are also a key part of the study of other particles, such as the top quark and the Higgs boson. Understanding the internal structure of heavy-quark jets is thus crucial for both the identification of these heavier objects and the interpretation of QCD properties. One such property is the presence of a “dead cone” around the heavy quark, in which collinear gluon emissions along the quark’s direction of motion are suppressed.
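The angular size of this suppressed region is commonly estimated from the quark kinematics as (a standard leading-order estimate, not a number quoted in the CMS analysis):

\[ \theta_{\text{dc}} \simeq \frac{m_Q}{E_Q}, \]

where \(m_Q\) and \(E_Q\) are the mass and energy of the heavy quark.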

CMS has shed light on the role of the quark mass in the parton shower with two new results focusing on c- and b-jets, respectively. Heavy-flavour hadrons in these jets are typically long-lived, and decay at a small but measurable distance from the primary interaction vertex. In c-jets, the D0 meson is reconstructed in the K∓π± decay channel by combining pairs of charged hadrons that do not appear to come from the primary interaction vertex. In the case of b-jets, a novel technique is employed. Instead of reconstructing the b hadron in a given decay channel, its charged decay daughters are identified using a multivariate analysis. In both cases, the decay daughters are replaced by the mother hadron in the jet constituents.

CMS has shed light on the role of the quark mass in the parton shower

Jets are reconstructed by clustering particles in a pairwise manner, leading to a clustering tree that mimics the parton shower process. Substructure techniques are then employed to decompose the jet into two subjets, which correspond to the heavy quark and a gluon being emitted from it. Two such algorithms are soft drop and late-kT. They select the first and last emission in the jet clustering tree, respectively, capturing different aspects of the QCD shower. Looking at the angle between the two subjets (see figure 1), denoted Rg for soft drop and θ for late-kT, demonstrates the dead-cone effect, as the small-angle emissions of b-jets (left) and c-jets (right) are suppressed compared to the inclusive jet case. The effect is captured better by the late-kT algorithm than by soft drop in the case of c-jets.
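For context, soft drop keeps the first splitting in the declustering sequence that passes the standard grooming condition (z_cut and β are the usual grooming parameters and R_0 the jet radius; the specific values chosen by CMS are not quoted here):

\[ z = \frac{\min(p_{T,1},\,p_{T,2})}{p_{T,1}+p_{T,2}} \;>\; z_{\text{cut}}\left(\frac{R_g}{R_0}\right)^{\beta}, \]

where \(p_{T,1}\) and \(p_{T,2}\) are the transverse momenta of the two subjets separated by the angle \(R_g\).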

These measurements serve to refine the tuning of Monte Carlo event generators relating to the heavy-quark mass and strong coupling. Identifying the onset of the dead cone in the vacuum also opens up possibilities for substructure studies in heavy-ion collisions, where emissions induced by the strongly interacting quark–gluon plasma can be isolated.

Salam’s dream visits the Himalayas

After winning the Nobel Prize in Physics in 1979, Abdus Salam wanted to bring world-class physics research opportunities to South Asia. This was the beginning of the BCSPIN programme, encompassing Bangladesh, China, Sri Lanka, Pakistan, India and Nepal. The goal was to provide scientists in South and Southeast Asia with new opportunities to learn from leading experts about developments in particle physics, astroparticle physics and cosmology. Together with Jogesh Pati, Yu Lu and Qaisar Shafi, Salam initiated the programme in 1989. This first edition was hosted by Nepal. Vietnam joined in 2009 and BCSPIN became BCVSPIN. Over the years, the conference has been held as far afield as Mexico.

The most recent edition attracted more than 100 participants to the historic Hotel Shanker in Kathmandu, Nepal, from 9 to 13 December 2024. The conference aimed to facilitate interactions between researchers from BCVSPIN countries and the broader international community, covering topics such as collider physics, cosmology, gravitational waves, dark matter, neutrino physics, particle astrophysics, physics beyond the Standard Model and machine learning. Participants ranged from renowned professors from across the globe to aspiring students.

Speaking of aspiring students, the main event was preceded by the BCVSPIN-2024 Masterclass in Particle Physics and Workshop in Machine Learning, hosted at Tribhuvan University from 4 to 6 December. The workshop provided 34 undergraduate and graduate students from around Nepal with a comprehensive introduction to particle physics, high-energy physics (HEP) experiments and machine learning. In addition to lectures, the workshop engaged students in hands-on sessions, allowing them to experience real research by exploring core concepts and applying machine-learning techniques to data from the ATLAS experiment. The students’ enthusiasm was palpable as they delved into the intricacies of particle physics and machine learning. The interactive sessions were particularly engaging, with students eagerly participating in discussions and practical exercises. Highlights included a special talk on artificial intelligence (AI) and a career development session focused on crafting CVs, applications and research statements. These sessions ensured participants were equipped with both academic insights and practical guidance. The impact on students was profound, as they gained valuable skills and networking opportunities, preparing them for future careers in HEP.

The BCVSPIN conference officially started the following Monday. In the spirit of BCVSPIN, the first plenary session featured an insightful talk on the status and prospects of HEP in Nepal, providing valuable insights for both locals and newcomers to the initiative. Then, the latest and the near-future physics highlights of experiments such as ATLAS, ALICE, CMS, as well as Belle, DUNE and IceCube, were showcased. From physics performance such as ATLAS nailing b-tagging with graph neural networks, to the most elaborate measurement of the W boson mass by CMS, not to mention ProtoDUNE’s runs exceeding expectations, the audience were offered comprehensive reviews of the recent breakthroughs on the experimental side. The younger physicists willing to continue or start hardware efforts surely appreciated the overview and schedule of the different upgrade programmes. The theory talks covered, among others, dark-matter models, our dear friend the neutrino and the interactions between the two. A special talk on AI invited the audience to reflect on what AI really is and how – in the midst of the ongoing revolution – it impacts the fields of physics and physicists themselves. Overviews of long-term future endeavours such as the Electron–Ion Collider and the Future Circular Collider concluded the programme.

BCVSPIN offers younger scientists precious connections with physicists from the international community

A special highlight of the conference was a public lecture “Oscillating Neutrinos” by the 2015 Nobel Laureate Takaaki Kajita. The event was held near the historical landmark of Patan Durbar Square, in the packed auditorium of the Rato Bangala School. This centre of excellence is known for its innovative teaching methods and quality instruction. More than half the room was filled with excited students from schools and universities, eager to listen to the keynote speaker. After a very pedagogical introduction explaining the “problem of solar neutrinos”, Kajita shared his insights on the discovery of neutrino oscillations and its implications for our understanding of the universe. His presentation included historical photographs of the experiments in Kamioka, Japan, as well as his participation at BCVSPIN in 1994. After encouraging the students to become scientists and answering as many questions as time allowed, he was swept up in a crowd of passionate Nepali youth, thrilled to be in the presence of such a renowned physicist.

The BCVSPIN initiative has changed the landscape of HEP in South and Southeast Asia. With participation made affordable for students, it is a stepping stone for the younger generation of scientists, offering them precious connections with physicists from the international community.

CDF addresses W-mass doubt

The CDF II experiment

It’s tough to be a lone dissenting voice, but the CDF collaboration is sticking to its guns. Ongoing cross-checks at the Tevatron experiment reinforce its 2022 measurement of the mass of the W boson, which stands seven standard deviations above the Standard Model (SM) prediction. All other measurements are statistically compatible with the SM, though slightly higher, including the most recent by the CMS collaboration at the LHC, which almost matched CDF’s stated precision of 9.4 MeV (CERN Courier November/December 2024 p7).

With CMS’s measurement came fresh scrutiny for the CDF collaboration, which had established one of the most interesting anomalies in fundamental science – a higher-than-expected W mass might reveal the presence of undiscovered heavy virtual particles. Particular scrutiny focused on the quoted momentum resolution of the CDF detector, which the collaboration claims exceeds the precision of any other collider detector by more than a factor of two. A new analysis by CDF verifies the stated accuracy of 25 parts per million by constraining possible biases using a large sample of cosmic-ray muons.
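To put that figure in context, a momentum-scale bias propagates roughly linearly to the extracted W mass, so a back-of-the-envelope estimate (not a number from the CDF paper) gives

\[ \Delta M_W \approx M_W \times \frac{\Delta p}{p} \approx 80{,}400~\text{MeV} \times 25\times10^{-6} \approx 2~\text{MeV}, \]

well below the 9.4 MeV total uncertainty quoted above.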

“The publication lays out the ‘warts and all’ of the tracking aspect and explains why the CDF measurement should be taken seriously despite being in disagreement with both the SM and silicon-tracker-based LHC measurements,” says spokesperson David Toback of Texas A&M University. “The paper should be seen as required reading for anyone who truly wants to understand, without bias, the path forward for these incredibly difficult analyses.”

The 2022 W-mass measurement exclusively used information from CDF’s drift chamber – a descendant of the multiwire proportional chamber invented at CERN by Georges Charpak in 1968 – and discarded information from its inner silicon vertex detector as it offered only marginal improvements to momentum resolution. The new analysis by CDF collaborator Ashutosh Kotwal of Duke University studies possible geometrical defects in the experiment’s drift chamber that could introduce unsuspected biases in the measured momenta of the electrons and muons emitted in the decays of W bosons.

“Silicon trackers have replaced wire-based technology in many parts of modern particle detectors, but the drift chamber continues to hold its own as the technology of choice when high accuracy is required over large tracking volumes for extended time periods in harsh collider environments,” opines Kotwal. “The new analysis demonstrates the efficiency and stability of the CDF drift chamber and its insensitivity to radiation damage.”

The CDF II detector operated at Fermilab’s Tevatron collider from 1999 to 2011. Its cylindrical drift chamber was coaxial with the colliding proton and antiproton beams and immersed in an axial 1.4 T magnetic field, in which charged particles follow helical trajectories; a helical fit to the measured hits yields the track parameters.
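In such a geometry the transverse momentum follows from the radius of curvature via the standard relation

\[ p_T~[\text{GeV}/c] \approx 0.3\, B~[\text{T}]\, R~[\text{m}]. \]

As a worked number for illustration (not taken from the paper), a 40 GeV/c lepton from a W decay in the 1.4 T field follows a circle of radius of roughly 95 m, so the curvature measured over the chamber’s roughly metre-scale lever arm is tiny, which is why parts-per-million control of the chamber geometry matters.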

Boost for compact fast radio bursts

Fast radio bursts (FRBs) are short but powerful bursts of radio waves that are believed to be emitted by dense astrophysical objects such as neutron stars or black holes. They were discovered by Duncan Lorimer and his student David Narkevic in 2007 while studying archival data from the Parkes radio telescope in Australia. Since then, more than a thousand FRBs have been detected, both within and beyond the Milky Way. These bursts usually last only a few milliseconds but can release enormous amounts of energy: an FRB detected in 2022 gave off more energy in a millisecond than the Sun does in 30 years. The exact mechanism underlying their creation, however, remains a mystery.
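Taking that comparison at face value, and assuming the Sun’s luminosity of about 3.8 × 10²⁶ W, the implied energy scale is roughly

\[ E \sim 3.8\times10^{26}~\text{W} \times 30~\text{yr} \approx 3.6\times10^{35}~\text{J}, \]

released within about a millisecond, corresponding to a peak luminosity of order 10³⁸ W.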

Inhomogeneities caused by the presence of gas and dust in the interstellar medium scatter the radio waves coming from an FRB. This creates a stochastic interference pattern on the signal, called scintillation – a phenomenon akin to the twinkling of stars. In a recent study, astronomer Kenzie Nimmo and her colleagues used scintillation data from FRB 20221022A to constrain the size of its emission region. FRB 20221022A is a 2.5 millisecond burst from a galaxy about 200 million light-years away. It was detected on 22 October 2022 by the Canadian Hydrogen Intensity Mapping Experiment Fast Radio Burst project (CHIME/FRB).

The CHIME telescope is currently the world’s leading FRB detector, discovering an average of three new FRBs every day. It consists of four stationary semi-cylindrical paraboloidal reflectors, each 20 m wide and 100 m long with a focal length of 5 m (see “Right on CHIME” figure). The 256 dual-polarisation feeds suspended along each reflector give it a field of view of more than 200 square degrees, and it receives radio waves in the frequency range of 400 to 800 MHz. With a wide bandwidth, high sensitivity and a high-performance correlator to pinpoint where in the sky signals are coming from, CHIME is an excellent instrument for the detection of FRBs.

Two main classes of models compete to explain the emission mechanisms of FRBs. Near-field models hypothesise that emission occurs in close proximity to the turbulent magnetosphere of a central engine, while far-away models hypothesise that emission occurs in relativistic shocks that propagate out to large radial distances. Nimmo and her team measured two distinct scintillation scales in the frequency spectrum of FRB 20221022A: one originating from its host galaxy or local environment, and another from a scattering site within the Milky Way. By using these scattering sites as astrophysical lenses, they were able to constrain the size of the FRB’s emission region to better than 30,000 km. This emission size contradicts expectations from far-away models and is instead more consistent with an emission process occurring within or just beyond the magnetosphere of a central compact object – the first clear evidence for the near-field class of models.

Additionally, FRB 20221022A’s detection paper notes a striking change in the burst’s polarisation angle – an “S-shaped” swing covering about 130° – over a mere 2.5 milliseconds. Its authors interpret this as the emission beam physically sweeping across our line of sight, much like a lighthouse beam passing by an observer, and conclude that it hints at a magnetospheric origin of the emission, as highly magnetised regions can twist or shape how radio waves are emitted. The scintillation studies by Nimmo et al. independently support this conclusion, narrowing the possible sources and mechanisms that power FRBs. Moreover, they highlight the potential of the scintillation technique to explore the emission mechanisms in FRBs and understand their environments.

The field of FRB physics looks set to grow by leaps and bounds. CHIME can already identify host galaxies for FRBs, but an “outrigger” programme using similar detectors geographically displaced from the main telescope at the Dominion Radio Astrophysical Observatory near Penticton, British Columbia, aims to strengthen its localisation capabilities to a precision of tens of milliarcseconds. CHIME recently finished deploying its third outrigger telescope in northern California.

Charm jets lose less energy

ALICE figure 1

Collisions between lead ions at the LHC generate the hottest and densest system ever created in the laboratory. Under these extreme conditions, quarks and gluons are no longer confined inside hadrons but instead form a quark–gluon plasma (QGP). Being heavier than the more abundantly produced light quarks, charm quarks play a special role in probing the plasma since they are created in the collision before the plasma is formed and interact with the plasma as they traverse the collision zone. Charm jets, which are clusters of particles originating from charm quarks, have been investigated for the first time by the ALICE collaboration in Pb–Pb collisions at the LHC, using D0 mesons (which contain a charm quark) as tags.

The primary interest lies in measuring the extent of energy loss experienced by different types of particles as they traverse the plasma, referred to as “in-medium energy loss”. This energy loss depends on the particle type and mass, and differs between quarks and gluons. Due to their larger mass, charm quarks at low transverse momentum do not reach the speed of light and lose substantially less energy than light quarks through both collisional and radiative processes, as gluon radiation by massive quarks is suppressed: the so-called “dead-cone effect”. Additionally, gluons, which carry a larger colour charge than quarks, experience greater energy loss in the QGP, as quantified by the Casimir factors CA = 3 for gluons and CF = 4/3 for quarks. This mass and colour-charge dependence makes the charm quark an ideal probe of QGP properties, and ALICE is well suited to study it.
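In the simplest leading-order picture, radiative energy loss scales with the colour charge of the parton, giving the often-quoted expectation (an expectation, not a measured number):

\[ \frac{\Delta E_g}{\Delta E_q} \approx \frac{C_A}{C_F} = \frac{3}{4/3} = \frac{9}{4}. \]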

The production yield of charm jets tagged with fully reconstructed D0 mesons (D0 → K−π+) in central Pb–Pb collisions at a centre-of-mass energy of 5.02 TeV per nucleon pair during LHC Run 2 was measured by ALICE. The results are reported in terms of the nuclear modification factor (RAA), which is the ratio of the particle production rate in Pb–Pb collisions to that in proton–proton collisions, scaled by the number of binary nucleon–nucleon collisions. A measured nuclear modification factor of unity would indicate the absence of final-state effects.
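Written out, that definition reads

\[ R_{AA}(p_T) = \frac{1}{\langle N_{\text{coll}}\rangle}\,\frac{\mathrm{d}N_{\text{PbPb}}/\mathrm{d}p_T}{\mathrm{d}N_{pp}/\mathrm{d}p_T}, \]

where \(\langle N_{\text{coll}}\rangle\) is the average number of binary nucleon–nucleon collisions.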

The results, shown in figure 1, reveal a clear suppression (RAA < 1) for both charm jets and inclusive jets (which mainly originate from light quarks and gluons) due to energy loss. Importantly, the charm jets exhibit less suppression than the inclusive jets within the transverse momentum range of 20 to 50 GeV, consistent with the expected mass and colour-charge dependence.

The measured results are compared with theoretical model calculations that include mass effects in the in-medium energy loss. Among the different models, LIDO incorporates both the dead-cone effect and the colour-charge effects, which are essential for describing the energy-loss mechanisms. Consequently, it shows reasonable agreement with experimental data, reproducing the observed hierarchy between charm jets and inclusive jets.

The present finding provides a hint of the flavour-dependent energy loss in the QGP, suggesting that charm jets lose less energy than inclusive jets. This highlights the quark-mass and colour-charge dependence of the in-medium energy-loss mechanisms.

Chamonix looks to CERN’s future

The Chamonix Workshop 2025, held from 27 to 30 January, brought together CERN’s accelerator and experimental communities to reflect on achievements, address challenges and chart a course for the future. As the discussions made clear, CERN is at a pivotal moment. The past decade has seen transformative developments across the accelerator complex, while the present holds significant potential and opportunity.

The workshop opened with a review of accelerator operations, supported by input from December’s Joint Accelerator Performance Workshop. Maintaining current performance levels requires an extraordinary effort across all the facilities. Performance data from the ongoing Run 3 shows steady improvements in availability and beam delivery. These results are driven by dedicated efforts from system experts, operations teams and accelerator physicists, all working to ensure excellent performance and high availability across the complex.

Electron clouds parting

Attention is now turning to Run 4 and the High-Luminosity LHC (HL-LHC) era. Several challenges have been identified, including the demand for high-intensity beams, radiofrequency (RF) power limitations and electron-cloud effects. In the latter case, synchrotron-radiation photons strike the beam-pipe walls, releasing electrons which are then accelerated by proton bunches, triggering a cascading electron-cloud buildup. Measures to address these issues will be implemented during Long Shutdown 3 (LS3), ensuring CERN’s accelerators continue to meet the demands of its diverse physics community.

LS3 will be a crucial period for CERN. In addition to the deployment of the HL-LHC and major upgrades to the ATLAS and CMS experiments, it will see a widespread programme of consolidation, maintenance and improvements across the accelerator complex to secure future exploitation over the coming decades.

Progress on the HL-LHC upgrade was reviewed in detail, with a focus on key systems – magnets, cryogenics and beam instrumentation – and on the construction of critical components such as crab cavities. The next two years will be decisive, with significant system testing scheduled to ensure that these technologies meet ambitious performance targets.

Planning for LS3 is already well advanced. Coordination between all stakeholders has been key to aligning complex interdependencies, and the experienced teams are making strong progress in shaping a resource-loaded plan. The scale of LS3 will require meticulous coordination, but it also represents a unique opportunity to build a more robust and adaptable accelerator complex for the future. Looking beyond LS3, CERN’s unique accelerator complex is well positioned to support an increasingly diverse physics programme. This diversity is one of CERN’s greatest strengths, offering complementary opportunities across a wide range of fields.

The high demand for beam time at ISOLDE, n_TOF, AD-ELENA and the North and East Areas underscores the need for a well-balanced approach that supports a broad range of physics. The discussions highlighted the importance of balancing these demands while ensuring that the full potential of the accelerator complex is realised.

Future opportunities such as those highlighted by the Physics Beyond Colliders study will be shaped by discussions being held as part of the update of the European Strategy for Particle Physics (ESPP). Defining the next generation of physics programmes entails striking a careful balance between continuity and innovation, and the accelerator community will play a central role in setting the priorities.

A forward-looking session at the workshop focused on the Future Circular Collider (FCC) Feasibility Study and the next steps. The physics case was presented alongside updates on territorial implementation and civil-engineering investigations and plans. How the FCC-ee injector complex would fit into the broader strategic picture was examined in detail, along with the goals and deliverables of the pre-technical design report (pre-TDR) phase that is planned to follow the Feasibility Study’s conclusion.

While the FCC remains a central focus, other future projects were also discussed in the context of the ESPP update. These include mature linear-collider proposals, the potential of a muon collider and plasma wakefield acceleration. Development of key technologies, such as high-field magnets and superconducting RF systems, will underpin the realisation of future accelerator-based facilities.

The next steps – preparing for Run 4, implementing the LS3 upgrade programmes and laying the groundwork for future projects – are ambitious but essential. CERN’s future will be shaped by how well we seize these opportunities.

The shared expertise and dedication of CERN’s personnel, combined with a clear strategic vision, provide a solid foundation for success. The path ahead is challenging, but with careful planning, collaboration and innovation, CERN’s accelerator complex will remain at the heart of discovery for decades to come.

The triggering of tomorrow

The third edition of Triggering Discoveries in High Energy Physics (TDHEP) attracted 55 participants to Slovakia’s High Tatras mountains from 9 to 13 December 2024. The workshop is the only conference dedicated to triggering in high-energy physics, and follows previous editions in Jammu, India in 2013 and Puebla, Mexico in 2018. Given the upcoming High-Luminosity LHC (HL-LHC) upgrade, discussions focused on how trigger systems can be enhanced to manage high data rates while preserving physics sensitivity.

Triggering systems play a crucial role in filtering the vast amounts of data generated by modern collider experiments. A good trigger design selects features in the event sample that greatly enrich the proportion of the desired physics processes in the recorded data. The key considerations are timing and selectivity. Timing has long been at the core of experiment design – detectors must capture data at the appropriate time to record an event. Selectivity has been a feature of triggering for almost as long. Recording an event makes demands on running time and data-acquisition bandwidth, both of which are limited.

Evolving architecture

Thanks to detector upgrades and major changes in the cost and availability of fast data links and storage, the past 10 years have seen an evolution in LHC triggers away from hardware-based decisions using coarse-grained information.

Detector upgrades mean higher granularity and better time resolution, improving the precision of the trigger algorithms and the ability to resolve the problem of having multiple events in a single LHC bunch crossing (“pileup”). Such upgrades allow more precise initial-level hardware triggering, bringing the event rate down to a level where events can be reconstructed for further selection via high-level trigger (HLT) systems.

To take advantage of modern computer architecture more fully, HLTs use both graphics processing units (GPUs) and central processing units (CPUs) to process events. In ALICE and LHCb this leads to essentially triggerless access to all events, while in ATLAS and CMS hardware selections are still important. All HLTs now use machine learning (ML) algorithms, with the ATLAS and CMS experiments even considering their use at the first hardware level.

ATLAS and CMS are primarily designed to search for new physics. At the end of Run 3, upgrades to both experiments will significantly enhance granularity and time resolution to handle the high-luminosity environment of the HL-LHC, which will deliver up to 200 interactions per LHC bunch crossing. Both experiments achieved efficient triggering in Run 3, but higher luminosities, difficult-to-distinguish physics signatures, upgraded detectors and increasingly ambitious physics goals call for advanced new techniques. The step change will be significant. At the HL-LHC, the first-level hardware trigger rate will increase from the current 100 kHz to 1 MHz in ATLAS and 760 kHz in CMS. The price to pay is increasing the latency – the time delay between input and output – to 10 µs in ATLAS and 12.5 µs in CMS.

The proposed trigger systems for ATLAS and CMS are predominantly FPGA-based, employing highly parallelised processing to crunch huge data streams efficiently in real time. Both will be two-level triggers: a hardware trigger followed by a software-based HLT. The ATLAS hardware trigger will utilise full-granularity calorimeter and muon signals in the global-trigger-event processor, using advanced ML techniques for real-time event selection. In addition to calorimeter and muon data, CMS will introduce a global track trigger, enabling real-time tracking at the first trigger level. All information will be integrated within the global-correlator trigger, which will extensively utilise ML to enhance event selection and background suppression.

Substantial upgrades

The other two big LHC experiments already implemented substantial trigger upgrades at the beginning of Run 3. The ALICE experiment is dedicated to studying the quark–gluon plasma – a state of matter in which quarks and gluons are not confined in hadrons. The detector was upgraded significantly for Run 3, including the trigger and data-acquisition systems. The ALICE continuous readout can cope with 50 kHz for lead–lead (PbPb) collisions and several MHz for proton–proton (pp) collisions. In PbPb collisions the full data is continuously recorded and stored for offline analysis, while for pp collisions the data is filtered.

Unlike in Run 2, where the hardware trigger reduced the data rate to several kHz, Run 3 uses an online software trigger that is a natural part of the common online–offline computing framework. The raw data from detectors is streamed continuously and processed in real time using high-performance FPGAs and GPUs. ML plays a crucial role in the heavy-flavour software trigger, which is one of the main physics interests. Boosted decision trees are used to identify displaced vertices from heavy quark decays. The full chain from saving raw data in a 100 PB buffer to selecting events of interest and removing the original raw data takes about three weeks and was fully employed last year.
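As a purely illustrative sketch of the kind of selection a boosted decision tree can perform on displaced-vertex quantities, the toy below trains a classifier on hypothetical features (decay-length significance, impact parameters and candidate mass); ALICE’s production trigger runs in its own online framework, not scikit-learn, and the feature names and numbers here are invented for illustration.

```python
# Illustrative sketch only: a boosted decision tree separating displaced
# heavy-flavour decay vertices from prompt background on toy features.
# Not the ALICE production code.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5_000

def make_sample(displaced: bool) -> np.ndarray:
    """Generate toy candidates; displaced (signal) vertices are more spread out."""
    scale = 3.0 if displaced else 0.5
    return np.column_stack([
        rng.exponential(scale, n),          # decay-length significance
        rng.normal(0, 0.01 * scale, n),     # impact parameter, track 1 (cm)
        rng.normal(0, 0.01 * scale, n),     # impact parameter, track 2 (cm)
        rng.normal(1.865, 0.02, n),         # candidate invariant mass (GeV)
    ])

X = np.vstack([make_sample(True), make_sample(False)])
y = np.concatenate([np.ones(n), np.zeros(n)])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3)
bdt.fit(X_train, y_train)
print(f"toy selection accuracy: {bdt.score(X_test, y_test):.3f}")
```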

The third edition of TDHEP suggests that innovation in this field is only set to accelerate

The LHCb experiment focuses on precision measurements in heavy-flavour physics; a typical example is measuring the probability of a particle decaying into a certain decay channel. In Run 2 the hardware trigger tended to saturate in many hadronic channels as the instantaneous luminosity increased. To solve this issue for Run 3, a high-level software trigger was developed that can handle 30 MHz event readout with a 4 TB/s data flow. A GPU-based partial event reconstruction and primary selection of displaced tracks and vertices (HLT1) reduces the output data rate to 1 MHz. The calibration and detector alignment (embedded into the trigger system) are calculated during data taking just after HLT1 and feed the full event reconstruction (HLT2), which reduces the output rate to 20 kHz. This represents 10 GB/s written to disk for later analysis.
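Dividing the quoted rates gives a rough feel for the event sizes involved (a back-of-the-envelope check using only the numbers above):

\[ \frac{4~\text{TB/s}}{30~\text{MHz}} \approx 130~\text{kB per event at the input}, \qquad \frac{10~\text{GB/s}}{20~\text{kHz}} = 500~\text{kB per event written to disk}. \]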

Away from the LHC, trigger requirements differ considerably. Contributions from other areas covered heavy-ion physics at Brookhaven National Laboratory’s Relativistic Heavy Ion Collider (RHIC), fixed-target physics at CERN and future experiments at the Facility for Antiproton and Ion Research at GSI Darmstadt and Brookhaven’s Electron–Ion Collider (EIC). NA62 at CERN and STAR at RHIC both use conventional trigger strategies to arrive at their final event samples. The forthcoming CBM experiment at FAIR and the ePIC experiment at the EIC deal with high intensities but aim for “triggerless” operation.

Requirements were reported to be even more diverse in astroparticle physics. The Pierre Auger Observatory combines local and global trigger decisions at three levels to manage the problem of trigger distribution and data collection over 3000 km² of fluorescence and Cherenkov detectors.

These diverse requirements will lead to new approaches being taken, and evolution as the experiments are finalised. The third edition of TDHEP suggests that innovation in this field is only set to accelerate.
