The Future Circular Collider (FCC) offers a multi-stage facility – beginning with an e+e– Higgs and electroweak factory (FCC-ee), followed by an energy-frontier hadron collider (FCC-hh) in the same 91 km tunnel – that would operate until at least the end of the century. Following the recommendation of the 2020 update of the European strategy for particle physics, CERN, together with its international partners, has launched a feasibility study that is due to be completed in 2025. FCC Week 2023, which took place in London from 5 to 9 June and attracted about 500 people, offered an excellent opportunity to strengthen the collaboration, discuss the technological and scientific opportunities, and plan the submission of the mid-term review of the FCC feasibility study to the CERN Council later this year.
The FCC study, supported by the European Union’s FCCIS project, aims to build an ecosystem of science and technology involving fundamental research, computing, engineering and skills for the next generation. It was therefore encouraging that around 40% of FCC Week participants were aged under 40.
Working together
In his welcome speech, Mark Thomson (UK STFC executive chair) stressed the importance of a Higgs factory as the next tool in exploring the universe at a fundamental level. Indeed, one of the no-lose theorems of the FCC programme, pointed out by Gavin Salam (University of Oxford), is that it will shed light on the Higgs’ self-interaction, which governs the shape of the Brout–Englert–Higgs potential. In her plenary address, Fabiola Gianotti (CERN Director-General) confirmed that the current schedule for the completion of the FCC feasibility study is on track, and stressed that the FCC is the only facility commensurate with the present size of CERN’s community, providing up to four experimental points, concluding “we need to work together to make it happen”.
Designing a new accelerator infrastructure poses a number of challenges, from civil engineering and geodesy to the development of accelerator technologies and detector concepts to meet the physics goals. One of the major achievements of the feasibility study so far is the development of a new FCC layout and placement scenario, thanks to close collaboration with CERN’s host states and external consultants. As Johannes Gutleber (CERN) reported, the baseline scenario has been communicated to the affected communes in the surrounding area and work has begun to analyse environmental aspects at the surface-site locations. Synergies with the local communities will be strengthened during the next two years, while an authorisation process has been launched to start geophysical investigations next year.
Essential for constructing the FCC tunnel is a robust 3D geological model, for which further input from subsurface investigations into areas of geological uncertainty is needed. On the civil-engineering side, two further challenges include alignment and geodesy for the new tunnel. Results from these investigations will be collected and fed into the civil-engineering cost and schedule update of the project. Efforts are also focusing on optimising cavern sizes, tunnel widenings and shaft diameters based on more refined requirements from users.
Transfer lines have been optimised such that existing tunnels can be reused as much as possible and to ensure compatibility between the lepton and hadron FCC phases. Taking CERN’s full experimental programme into account, the option of using the SPS as a pre-booster for FCC-ee will be consolidated and its cost compared with that of a high-energy linac option.
A new generation of young researchers will need to take the reins to ensure FCC gets delivered and exploit the physics opportunities offered by this visionary research infrastructure
At the heart of the FCC study are sustainability and environmental impact. Profiting from an R&D programme on high-efficiency klystrons initially launched for the proposed Compact Linear Collider, the goal is to increase the FCC-ee klystron efficiency from 57% (as demonstrated in the first prototypes) to 80% – resulting in an energy saving of 300 GWh per year without considering the impact that this development could have beyond particle physics. Other accelerator components where work is ongoing to minimise energy consumption include low-loss magnets, SRF cavities and high-efficiency cryogenic compressors.
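To get a feel for how an efficiency gain of this size turns into an energy saving, the back-of-the-envelope sketch below computes the wall-plug power needed to deliver a fixed RF power at 57% and at 80% efficiency. The assumed RF power and annual operating hours are illustrative placeholders, not FCC-ee design figures, but they land in the same few-hundred-GWh range quoted above.

```python
# Back-of-the-envelope sketch: grid-power saving from raising klystron
# efficiency from 57% to 80%. RF power and running time are illustrative
# assumptions, not FCC-ee design values.

rf_power_mw = 100          # assumed total RF power delivered to the beams (MW)
hours_per_year = 5000      # assumed annual operating hours

def grid_power(rf_power_mw: float, efficiency: float) -> float:
    """Wall-plug power (MW) needed to deliver rf_power_mw at a given efficiency."""
    return rf_power_mw / efficiency

saving_mw = grid_power(rf_power_mw, 0.57) - grid_power(rf_power_mw, 0.80)
saving_gwh = saving_mw * hours_per_year / 1000   # MW x hours -> GWh

print(f"Saving: {saving_mw:.0f} MW of grid power, roughly {saving_gwh:.0f} GWh per year")
```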
The FCC collaboration is also exploring ways in which to reuse large volumes of excavated materials, including the potential for carbon capture. This effort, which builds on the results of the EU-funded “Mining the Future” competition launched in 2020, aims to re-use the excavated material locally for agriculture and reforestation while minimising global nuisances such as transport. Other discussions during FCC Week focused on the development of a renewable energy supply for FCC-ee.
If approved, a new generation of young researchers will need to take the reins to ensure FCC gets delivered and exploit the physics opportunities offered by this visionary research infrastructure. A dedicated early-career researcher session at FCC Week gave participants the chance to discuss their hopes, fears and experiences so far with the FCC project. A well-attended public event “Giant Experiments, Cosmic Questions” held at the Royal Society and hosted by the BBC’s Robin Ince also reflected the enthusiasm of non-physicists for fundamental exploration.
The highly positive atmosphere of FCC Week 2023 projected a strong sense of momentum within the community. The coming months will keep the FCC team extremely busy, with several new institutes expected to join the collaboration and with the scheduled submission of the feasibility-study mid-term review advancing fast ahead of its completion in 2025.
Michael Turner (MT): In June 2022, the US Department of Energy (DOE) and National Science Foundation (NSF) asked the US National Academy of Sciences to convene a committee to provide a long-term (30 years or more) vision for elementary particle physics in the US and to deliver its report in mid 2024. EPP-2024 follows three previous National Academy studies, the last one in 2006 being notable for its composition (more than half of the members were “outsiders”) and the fact that it both set a vision and priorities. EPP-2024 is an 18-member committee, co-chaired by Maria and myself, and comprises mostly particle physicists from across the breadth of the field. It includes two Nobel Prize winners, eight National Academy members and CERN Director-General Fabiola Gianotti. It will recommend a long-term vision, but will not set priorities.
How does EPP-2024 relate to the current “P5” prioritisation process in the US?
MT: The field is in the process of the third P5 (Particle Physics Project Prioritization Panel) exercise, following previous cycles in 2008 and 2014. The DOE and the NSF asked the 30-member P5 committee (chaired by Hitoshi Murayama of UC Berkeley) to provide a prioritised, 10-year budget plan in the context of a 20-year globally-aware strategy by October 2023. By way of contrast, EPP-2024 will assess where the field is today, describe its ambitions and the tools and workforce necessary to achieve those ambitions, all without discussing budgets, specific projects or priorities.
Both P5 and EPP-2024 have benefitted from the community-based activity, Snowmass 2021, sponsored by the American Physical Society, which brought together more than 1000 particle physicists to set their priorities and vision for the future in a report published in January 2023. Together, EPP-2024 and P5 will provide both a long-term vision and a shorter-term detailed plan for particle physics in the US that will maintain a vibrant US programme within the larger context of a field that is very international.
What took EPP-2024 to CERN earlier this year?
Maria Spiropulu (MS): CERN, from its inception, has been structured as an international organisation; pan-European surely, but structurally internationally ready. In 2018 I was in the Indian Treaty room of the White House when the then CERN Director-General Rolf Heuer proclaimed CERN as the biggest US laboratory not on US soil. Indeed, in the past decade the ties between US particle physics and CERN have become stronger – in particular via the LHC and HL-LHC and also the neutrino programme – and ever more critical for the future of the field at large, so it was only natural to visit CERN and to discuss with the community in our EPP Town Hall, the early-career contingent and others. It was a very productive visit and we were impressed with what we saw and learned. The early-career scientists were fully engaged and there was a long and lively discussion focused both on the long-term science goals of the field, the planning process in Europe and in the US, the role of the US at CERN and CERN’s role in the US, as well as the involvement of early-career researchers in the process. As the field evolves and innovative approaches from other domains are employed to address persistent science questions and challenges, we see our workforce as a major output of the field, feeding back both to our research programme and to society writ large.
The questions we are asking now are big questions that require tenacity, resources, innovation and collaboration. Every technology advance and invention we can use to push the frontiers of knowledge we do. Of course, we need to investigate whether we can break these questions into shorter-timescale undertakings, perhaps less demanding in scale and resources, and with even higher levels of innovation, and then put the pieces together. Ultimately it is the will and determination of those who engage in the field that will draft the path forward.
How would you define particle physics today?
MT: There is broad agreement that the mission of particle physics is the quest for a fundamental understanding of matter, energy, space and time. That ambitious mission not only involves identifying the building blocks of matter and energy, and the interactions between them, but also understanding how space, time and the universe originated. As evidenced by the diversity of participants at Snowmass – astronomers and physicists of all kinds – the enterprise encompasses a broad range of activities. Those being prioritised by P5 range from experiments at particle accelerators and underground laboratories to telescopes of all kinds and a host of table-top experiments.
Long ago when I was an undergraduate at Caltech working with experimentalist Barry Barish (now a gravitational-wave astronomer), particle physics comprised experimenters who worked at accelerators and theorists who sought to explain and understand their results. While these two activities remain the core of the field, there is a “cloud” of activities that are also very important to the mission of particle physics. And for good reason: almost all the evidence for physics beyond the Standard Model involves the universe at large: dark matter, dark energy, baryogenesis and inflation. Neutrino masses were discovered in experiments that involved astrophysical sources (e.g. the Sun and cosmic-ray produced atmospheric neutrinos), and many of the big ideas in theoretical particle physics involve connecting quarks and the cosmos. Although some of the researchers involved in such cloud activities are particle physicists who have moved out of the core, the primary research of most isn’t directly associated with the mission of particle physics.
We stand on the tall shoulders of the Standard Model of particle physics – and general relativity – with a programme in place that includes the LHC, neutrino experiments, dark-matter and dark-energy experiments, CMB-polarisation measurements, precision tests and searches for rare processes and powerful theoretical ideas – not to mention all the ideas for future facilities. I believe that we are on the cusp of a major transformation in our understanding of the fundamentals of the physical world at least as exciting as the November 1974 revolution that brought us the Standard Model.
How can particle physics maintain its societal relevance next to more applied domains?
MS: To be sure, the edifice of science is ever more relevant to human civilisation and most of society’s functions. Particle physics and associated fields capture human imagination and curiosity in terms of questions that they grapple with – questions that no one else would take up, at least not experimentally. All science domains, technology-needs and products are important to our 21st-century workings. Particle physics is not more or less important, in fact it consumes and optimises and adapts the advances of most other domains toward very ambitious objectives of building an understanding of our universe. I would also argue that because we are the melting pot of so much input and tools from other seemingly unrelated science and technology domains, the field offers a very fertile and attractive ground for training a workforce able to tackle intellectually and technologically ambitious puzzles. It can be seen as overly demanding – and this is where mentorship, guidance and clarity of opportunities play a crucial role.
How does EPP-2024 take into account international aspects of the field?
MS: This is exemplified by a committee membership that includes the CERN Director-General, and also by the multiple testimonies and panels focusing on international collaboration, including the framework, the optimisation of science and societal outcomes, and the training of an outstanding workforce. We have collected information from distinguished panels and experts in Europe, Asia and the US that have traditionally led the field, and we study how smaller economies and nations participate and contribute successfully and to the benefit of their nations and the international discovery science goals at large. We also interrogate the role of our science in diplomacy and in scientific exchanges that may overcome geopolitical tensions. International big projects are not a walk in the park; in our field they have proven to be necessary, so we put deliberate emphasis on making them work towards achieving ambitious goals that are otherwise intractable.
What has the EPP-2024 committee learned so far – any surprises?
MT: For me, a relative outsider to particle physics, several things have stood out. First, the breadth of the enterprise today: cosmology has become fully integrated into particle physics, and new connections have been made to AMO physics (quantum sensors, trapped atoms and molecules, atomic interferometry), gravitational physics (gravitational waves and precision tests of gravity theory), and nuclear physics (neutrino masses and properties). Not only have dark-matter searches for WIMPs and axions become “big science”, but there is exploration of a host of new candidates that has spurred the invention of novel detection schemes.
I believe that we are on the cusp of a major transformation in our understanding at least as exciting as the November 1974 revolution
In the US, particle physics has become a big tent that encompasses tabletop experiments to look for a small electric dipole moment of the electron, large galaxy surveys, cosmic microwave background experiments, long-baseline neutrino experiments, and of course collider experiments to explore the energy frontier. It is difficult to draw a box around a field called elementary particle physics.
On the science side, much has changed since the last National Academy report in 2006, which identified discovering the Higgs boson and exploring the anticipated world of supersymmetry as its big vision. The aspirations of the field are much loftier today, from understanding the emergence of space and time to the deep connections between gravity and quantum mechanics. At the same time, however, the path forward is less clear than it was in 2006.
The 14th International Particle Accelerator Conference (IPAC23) took place from 7 to 12 May in Venice, Italy. The fully in-person event had record attendance with 1660 registered participants (including 273 students) from 37 countries, illustrating the need for real-life interactions in the global accelerator landscape after the COVID-19 pandemic. IPAC is not only a scientific meeting but also a global marketplace for accelerators, as demonstrated by the 311 participants from 121 companies present.
Following inspiring opening speeches by Antonio Zoccoli (INFN president) and Alfonso Franciosi (Elettra president) about the important role of particle accelerators in Italy, the scientific programme got under way. It included 87 talks and over 1500 posters covering all particles (electrons, positrons, protons, ions, muons, neutrons, …), all types of accelerators (storage rings, linacs, cyclotrons, plasma accelerators, …), all use-cases (particle physics, photon science, neutron science, medical and industrial applications, material physics, biological and chemical, …) and institutes involved across the world. The extensive programme offered such a wide perspective of excellence and ambition that it is only possible to highlight a short subset of what was presented.
Starting proceedings was a report by Malika Meddahi (CERN) on the successful LHC Injectors Upgrade (LIU) project. This project, with its predominantly female leadership team, was executed on budget and on schedule. It provides the LHC with beams of increased brightness as required by the ongoing luminosity upgrade, as later reported by CERN’s Oliver Brüning. The focus then shifted to advanced X-ray light sources. Emanuel Karantzoulis (Elettra) presented Elettra 2.0 – a new ultra-low emittance light source under construction in Trieste. Axel Brachmann (SLAC) updated participants on the status of LCLS-II, the world’s first CW X-ray free-electron laser (XFEL). While beam commissioning is somewhat delayed, the superconducting RF accelerator structures perform beyond specification and the facility is in excellent condition. The week’s programme included an impressive overview by Dong Wang (Shanghai Advanced Research Institute) on the future of XFELs, for which user demand has led to an enormous investment aiming in particular at “high average power”, which will be used to serve many more experiments including those for highly non-linear QED. Gianluca Geloni (European XFEL) showed that user operation for the world’s presently most powerful XFEL has been successfully enhanced with self-seeding. Massimo Ferrario (INFN) described the promise of a novel, high-tech plasma-based FEL being explored by the European EuPRAXIA project.
Jörg Blaurock (FAIR/GSI) presented the status of the €3.3 billion FAIR project. Major obstacles have been overcome and the completed tunnel and many accelerator components are now being prepared for installation, starting in 2024. The European Spallation Source in Sweden is advancing well and the proton linac is approaching full beam commissioning, as presented by Ryoichi Miyamoto (ESS) and Andrea Pisent (INFN). Yuan He from China (IMP, CAS) presented opportunities in accelerator-driven nuclear power, both in safety and in reusing nuclear fuels, and impressed participants with the news on a Chinese facility that is progressing well in terms of up-time and reliability. This theme was also addressed by Ulrich Dorda (Belgian Nuclear Research Centre) who presented the status of the Multi-purpose Hybrid Research Reactor for High-tech Applications (MYRRHA) project. Another impressive moment of the programme was Andrey Zelinsky’s (NSC in Ukraine) presentation on the Ukraine Neutron Source facility at the National Science Center “Kharkov Institute of Physics & Technology” (NSC KIPT). Construction, system checks and integration tests for this new facility have been completed and beam commissioning is being prepared under extremely difficult circumstances, as a result of Russia’s invasion.
Technological highlights included a report by Claire Antoine (CEA) on R&D into thin-film superconducting RF cavities and their potential game-changing role in sustainability. Sustainability was a major discussion topic throughout IPAC23, and several speakers presented the role of accelerators for the development of fusion reactors. The final talk of the conference by Beate Heinemann (DESY) showed that without accelerators, much knowledge in particle physics would still be missing and she argued for new accelerator facilities at the energy frontier to allow further discoveries.
The prize session saw Xingchen Xu (Fermilab), Mikhail Krasilnikov (DESY/Zeuthen) and Katsunobu Oide (KEK) receive the 2023 EPS-AG accelerator prizes. In addition, the Bruno Touschek prize was awarded to Matthew Signorelli (Cornell University), while two student poster prizes went to Sunar Ezgi (Goethe Universität Frankfurt) and Jonathan Christie (University of Liverpool).
IPAC23 included for the first time in Europe an equal opportunity session, which featured talks from Maria Masullo (INFN) and Louise Carvalho (CERN) on gender and STEM, pointing to the need to change the narrative and to move “from talk to targets”. The 300 participants in the session learnt about ways to improve gender balance but also about such important topics as neurodiversity. The very well attended industrial session of IPAC23 brought together projects and industry in a mixed presentation and round-table format.
For the organisers, IPAC23 has been a remarkable and truly rewarding effort, seeing the many delegates, industry colleagues and students from all over the world coming together for a lively, peaceful and collaborative conference. The many outstanding posters and talks promise a bright future for the field of particle accelerators.
The GBAR experiment at CERN has joined the select club of experiments that have succeeded in synthesising antihydrogen atoms. Located at the Antiproton Decelerator (AD), GBAR aims to test Einstein’s equivalence principle by measuring the acceleration of an antihydrogen atom in Earth’s gravitational field and comparing it with that of normal hydrogen.
Producing and slowing down an antiatom enough to see it in free fall is no mean feat. To achieve this, the AD’s 5.3 MeV antiprotons are decelerated and cooled in the ELENA ring and a packet of a few million 100 keV antiprotons is sent to GBAR every two minutes. A pulsed drift tube further decelerates the packet to an adjustable energy of a few keV. In parallel, a linear particle accelerator sends 9 MeV electrons onto a tungsten target, producing positrons, which are accumulated in a series of electromagnetic traps. Just before the antiproton packet arrives, the positrons are sent to a layer of nanoporous silica, from which about one in five positrons emerges as a positronium atom. When the antiproton packet crosses the resulting cloud of positronium atoms, a charge exchange can take place, with the positronium giving up its positron to the antiproton, forming antihydrogen.
At the end of 2022, during an operation that lasted several days, the GBAR collaboration detected some 20 antihydrogen atoms produced in this way, validating the “in-flight” production method for the first time. The collaboration will now improve the production of antihydrogen atoms to enable precision measurements, for example, of its spectroscopic properties.
The first antihydrogen atoms were produced at CERN’s LEAR facility in 1995, but at an energy too high for any measurement to be made. Following this early success, CERN’s Antiproton Accumulator (used for the discovery of the W and Z bosons in 1983) was repurposed as a decelerator, becoming the AD, which is unique worldwide in providing low-energy antiprotons to antimatter experiments. After the demonstration of storing antiprotons by the ATRAP and ATHENA experiments, ALPHA, a successor of ATHENA, was the first experiment to merge trapped antiprotons and positrons and to trap the resulting antihydrogen atoms. Since then, ATRAP and ASACUSA have also achieved these two milestones, and AEgIS has produced pulses of antiatoms. GBAR now joins this elite club, having produced 6 keV antihydrogen atoms in-flight.
GBAR is also not alone in its aim of testing Einstein’s equivalence principle with atomic antimatter. ALPHA and AEgIS are also working towards this goal using complementary approaches.
The construction of the world’s largest optical telescope, the Extremely Large Telescope (ELT), has reached its mid-point, stated the European Southern Observatory (ESO) on 11 July. Originally planned to see first light in the early 2020s, the telescope will now start operations in 2028 due to delays inherent in building such a large and complex instrument, as well as the COVID-19 pandemic.
The base and frame of the ELT’s dome structure on Cerro Armazones in the Chilean Atacama Desert have now been set. Meanwhile at European sites, the mirrors of the ELT’s five-mirror optical system are being manufactured. More than 70% of the supports and blanks for the main mirror – which at 39 m across will be the biggest primary mirror ever built – are complete, and mirrors two and three have been cast and are now being polished.
Along with six laser guiding sources that will act as reference stars, mirrors four and five form part of a sophisticated adaptive-optics system to correct for atmospheric disturbances. The ELT will observe the universe in the near-infrared and visible regions to track down Earth-like exoplanets, investigate faint objects in the solar system and study the first stars and galaxies. It will also explore black holes, the dark universe and test fundamental constants (CERN Courier November/December 2019 p25).
Within astronomy and cosmology, the idea that the universe is continuously expanding is a cornerstone of the standard cosmological model. For example, when measuring the distance of astronomical objects one often uses their redshift, which is induced by their velocity with respect to us due to the expansion. The expansion itself has, however, never been directly measured, i.e. no measurement exists that shows the increasing redshift with time of a single object. Although not far beyond the current capabilities of astrophysics, such a measurement is unlikely to be performed soon. Rather, evidence for it is based on correlations within populations of astrophysical objects. However, not all studies agree with this standard assumption.
One population study that supports the standard model concerns type Ia supernovae, specifically the observed correlation between their duration and distance. Such a correlation is predicted to be the result of time dilation induced by the higher velocity of more distant objects. Supporting this picture, gamma-ray bursts occurring at larger distances appear, on average, to last longer than those that occur nearby. However, similar studies of quasars have thus far not shown any dependence of their variability timescale on distance, thereby contradicting special relativity and leading to an array of alternative hypotheses.
Detailed studies
Quasars are active galaxies containing a supermassive black hole surrounded by a relativistic accretion disk. Due to their brightness they can be observed at redshifts up to about z = 8, at which the expected (1 + z) time dilation implies variability occurring nine times slower than in nearby objects. As previous studies did not observe such time dilation, alternative theories proposed included those that cast doubt on the extragalactic nature of quasars. A new, detailed study now removes the need for such theories.
These results do not provide hints of new physics but rather resolve one of the main problems with the standard cosmological model
In order to observe time dilation one requires a standard clock. Supernovae are ideal for this purpose because these explosions are all nearly identical, allowing their duration to be used to measure time dilation. For quasars the issue is more complicated as the variability of their brightness appears almost random. However, the variability can be modelled using a so-called damped random walk (DRW), a random process combined with an exponential damping component. This complex model does not allow the brightness of a quasar to be predicted, but it contains a characteristic timescale in the exponent that should scale with redshift due to time dilation.
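To make the model concrete, the sketch below simulates a toy DRW light curve as an Ornstein–Uhlenbeck-type process: exponential damping towards a mean magnitude plus Gaussian kicks, with the observed damping timescale stretched by the expected (1 + z) factor. All parameter values are illustrative and are not taken from the study.

```python
import numpy as np

# Toy damped random walk (DRW) light curve: exponential damping towards the
# mean plus Gaussian random-walk kicks. Parameter values are illustrative.

rng = np.random.default_rng(42)

def simulate_drw(n_steps: int, dt: float, tau: float, sigma: float,
                 mean_mag: float = 19.0) -> np.ndarray:
    """Generate a DRW light curve with damping timescale tau (same units as dt)."""
    mags = np.empty(n_steps)
    mags[0] = mean_mag
    for i in range(1, n_steps):
        drift = -(mags[i - 1] - mean_mag) * dt / tau        # damping towards the mean
        kick = sigma * np.sqrt(dt) * rng.standard_normal()  # stochastic term
        mags[i] = mags[i - 1] + drift + kick
    return mags

tau_rest = 200.0                   # rest-frame damping timescale in days (illustrative)
z = 3.0
tau_observed = tau_rest * (1 + z)  # timescale stretched by cosmological time dilation

light_curve = simulate_drw(n_steps=5000, dt=1.0, tau=tau_observed, sigma=0.02)
print(f"Observed damping timescale ~ {tau_observed:.0f} days for z = {z}")
```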
This idea has now been tested by Geraint Lewis and Brendon Brewer of the universities of Sydney and Auckland, respectively. The pair studied 190 quasars with redshifts up to z = 4, observed over a 20-year period by the Sloan Digital Sky Survey and Pan-STARRS1, and applied a Bayesian analysis to look for a correlation between the DRW parameters and redshift. The data were found to be best matched by a universe in which the DRW timescale scales as (1 + z)^n with n = 1.28 ± 0.29, making it compatible with n = 1, the value expected from standard physics. This contradicts previous measurements, something the authors attribute to the smaller quasar samples used in previous studies. The complex nature of quasars and the large variability in their population require long observations of a similar population to make the time dilation effect visible.
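The scaling test itself can be illustrated with a toy fit: generate synthetic per-quasar timescales that follow (1 + z)^n and recover the exponent with an ordinary least-squares fit in log space. This is only a schematic stand-in for the full Bayesian analysis used by Lewis and Brewer; the sample size is the only number borrowed from the study.

```python
import numpy as np

# Toy test of tau_obs proportional to (1+z)^n using synthetic data and a
# least-squares fit in log space (not the authors' Bayesian treatment).

rng = np.random.default_rng(1)

n_true, tau_rest = 1.0, 200.0                   # days; illustrative rest-frame timescale
z = rng.uniform(0.5, 4.0, size=190)             # 190 quasars, as in the study
scatter = rng.normal(0.0, 0.15, size=z.size)    # object-to-object scatter in log space
log_tau_obs = np.log(tau_rest) + n_true * np.log(1 + z) + scatter

# Fit log(tau_obs) = log(tau_rest) + n * log(1 + z)
design = np.vstack([np.ones_like(z), np.log(1 + z)]).T
coeffs, *_ = np.linalg.lstsq(design, log_tau_obs, rcond=None)
print(f"Fitted exponent n = {coeffs[1]:.2f} (expect ~1 for standard time dilation)")
```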
These new results, which were made possible due to the large amounts of data becoming available from large observatories, do not provide hints of new physics but rather resolve one of the main problems with the standard cosmological model.
At around 1 a.m. on 17 July, the LHC beams were dumped after only nine minutes in collision due to a radiofrequency interlock caused by an electrical perturbation. Approximately 300 milliseconds after the beams were cleanly dumped, several superconducting magnets lost their superconducting state, or quenched. Among them were the inner-triplet magnets located to the left of Point 8, which focus the beams for the LHCb experiment. While occasional quenches of some LHC magnets are to be expected, the large forces resulting from this particular event led to a leak of helium from the cryogenic circuit into the insulation vacuum, rapidly degrading the vacuum and prompting a series of interventions with implications for the 2023 Run 3 schedule.
The leak occurred between the LHC’s cryogenic circuit, which contains the liquid helium, and the insulation vacuum that separates the cold magnet from the warm outer vessel (the cryostat) – a crucial barrier for preventing heat transfer from the surrounding LHC tunnel to the interior of the cryostat. As a result of the leak, the insulation vacuum filled with helium gas, cooling down the cryostat and causing condensation to form and freeze on the outside.
By 24 July the CERN teams had traced the leak to a crack in one of more than 2500 bellows that compensate for thermal expansion and contraction on the cryogenic distribution lines. Measuring just 1.6 mm, the crack is thought to have been caused by a sudden increase in pressure when the magnet quench protection system (QPS) kicked in. Following the electrical perturbation, the QPS had dutifully triggered the quench heaters (which are designed to bring the whole magnet out of the superconducting state in a controlled and homogeneous manner) of the magnets concerned, generating a heat wave as expected.
It is the first time that such a breach event has occurred; the teamwork between many working groups, including safety, accelerator operations, vacuum, cryogenics, magnets, survey, beam instrumentation, machine protection, electrical quality assurance as well as material and mechanical engineering, made a quick assessment and action plan possible. On 25 July the affected bellows was removed. A new bellows was installed on 28 July, the affected modules were closed, and the insulation vacuum was pumped.
The electrical perturbation turned out to be caused by an uprooted tree falling on power lines in the nearby Swiss municipality of Morges. In early August, as the Courier went to press, the repairs were finished and the implications for Run 3 physics were being assessed. The choice is between preparing the machine for a short proton–proton phase to recover some of the missed run time, or sticking to the planned heavy-ion run at the end of the year, since in 2022 there was no full heavy-ion run. The favoured scenario, presented to the LHC machine committee on 26 July, is the latter.
On 25 July, during its 42nd meeting, the Council of SESAME unanimously approved Iraq’s request to become an associate member. Iraq will now become a prospective member of SESAME as a stepping stone to full membership.
“My visit to SESAME on 8 June 2023 has convinced me that Iraq will stand to greatly benefit from membership, and that this would be the right moment for it to become a member,” stated Naeem Alaboodi, minister of higher education and scientific research and head of the Iraqi Atomic Energy Commission, in his letter to Rolf Heuer, president of the SESAME Council. “However, before doing so it would like to better familiarise itself with the governance, procedures and activity of this centre, and feels that the best way of doing this would be by first taking on associate membership.”
SESAME (Synchrotron-light for Experimental Science and Applications in the Middle East), based in Allan, Jordan, was founded on the CERN model and established under the umbrella of UNESCO. It opened its doors to users in 2017, offering third-generation X-ray beamlines for a range of disciplines, with the aim to be the first international Middle-Eastern research institution enabling scientists to collaborate peacefully for the generation of knowledge (CERN Courier January/February 2023 p28). SESAME has eight full members (Cyprus, Egypt, Iran, Israel, Jordan, Pakistan, Palestine and Turkey) and 17 observers, including CERN. One of SESAME’s main focuses is archaeological heritage. This will be the topic of the first Iraqi user study, which involves two Iraqi institutes collaborating in a project of the Natural History Museum in the UK.
Iraq has been following progress at SESAME for some time. As an associate member Iraq will enjoy access to SESAME’s facilities for its national priority projects and more opportunities for international collaboration. “Iraq’s formal association with SESAME will be very useful for Iraqi scientists to gain the required scientific knowledge in many different areas of science and applications using synchrotron radiation,” said Hua Liu, deputy director-general of the International Atomic Energy Agency, which has been actively encouraging its member states located in the region to seek membership of SESAME.
“The Council and all the members of SESAME are delighted by Iraq’s decision,” added Heuer. “We look forward to further countries of the region joining the SESAME family. With more beamlines available in the future, we hope that user groups from different countries will be working together on projects and we will see more transnational collaboration.”
The Large Hadron Collider (LHC) roared back to life on 5 July 2022, when proton–proton collisions at a record centre-of-mass energy of 13.6 TeV resumed for Run 3. To enable the ALICE collaboration to benefit from the increased instantaneous luminosity of this and future LHC runs, the ALICE experiment underwent a major upgrade during Long Shutdown 2 (2019–2022) that will substantially improve track reconstruction in terms of spatial precision and tracking efficiency, in particular for low-momentum particles. The upgrade will also enable an increased interaction rate of up to 50 kHz for lead–lead (PbPb) collisions in continuous readout mode, which will allow ALICE to collect a data sample more than 10 times larger than the combined Run 1 and Run 2 samples.
ALICE is a unique experiment at the LHC devoted to the study of extreme nuclear matter. It comprises a central barrel (the largest data producer) and a forward muon “arm”. The central barrel relies mainly on four subdetectors for particle tracking: the new inner tracking system (ITS), which is a seven-layer, 12.5 gigapixel monolithic silicon tracker (CERN Courier July/August 2021 p29); an upgraded time projection chamber (TPC) with GEM-based readout for continuous operation; a transition radiation detector; and a time-of-flight detector. The muon arm is composed of three tracking devices: a newly installed muon forward tracker (a silicon tracker based on monolithic active pixel sensors), revamped muon chambers and a muon identifier.
Due to the increased data volume in the upgraded ALICE detector, storing all the raw data produced during Run 3 is impossible. One of the major ALICE upgrades in preparation for the latest run was therefore the design and deployment of a completely new computing model: the O2 project, which merges online (synchronous) and offline (asynchronous) data processing into a single software framework. In addition to an upgrade of the experiment’s computing farms for data readout and processing, this necessitates efficient online compression and the use of graphics processing units (GPUs) to speed up processing.
Pioneering parallelism
As their name implies, GPUs were originally designed to accelerate computer-graphics rendering, especially in 3D gaming. While they continue to be utilised for such workloads, GPUs have become general-purpose vector processors for use in a variety of settings. Their intrinsic ability to perform several tasks simultaneously gives them a much higher compute throughput than traditional CPUs and enables them to be optimised for data processing rather than, say, data caching. GPUs thus reduce the cost and energy consumption of associated computing farms: without them, about eight times as many servers of the same type and other resources would be required to handle the ALICE TPC online processing of PbPb collision data at a 50 kHz interaction rate.
Since 2010, when the high-level trigger online computer farm (HLT) entered operation, the ALICE detector has pioneered the use of GPUs for data compression and processing in high-energy physics. The HLT had direct access to the detector readout hardware and was crucial to compress data obtained from heavy-ion collisions. In addition, the HLT software framework was advanced enough to perform online data reconstruction. The experience gained during its operation in LHC Run 1 and 2 was essential for the design and development of the current O2 software and hardware systems.
For data readout and processing during Run 3, the ALICE detector front-end electronics are connected via radiation-tolerant gigabit-transceiver links to custom field programmable gate arrays (see “Data flow” figure). The latter, hosted in the first-level processor (FLP) farm nodes, perform continuous readout and zero-suppression (the removal of data without physics signal). In the case of the ALICE TPC, zero-suppression reduces the data rate from a prohibitive 3.3 TB/s at the front end to 900 GB/s for 50 kHz minimum-bias PbPb operations. This data stream is then pushed by the FLP readout farm to the event processing nodes (EPN) using data-distribution software running on both farms.
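The principle of zero-suppression can be sketched in a few lines: digitised samples below a noise threshold are discarded, and only the addresses and values of the remaining samples are kept. The toy below is purely conceptual; the threshold, occupancy and encoding are invented for the example and do not reflect the actual ALICE FPGA firmware.

```python
import numpy as np

# Conceptual zero-suppression sketch: drop ADC samples below a threshold and
# keep only (address, value) pairs. Threshold and data layout are illustrative.

rng = np.random.default_rng(0)

adc = rng.poisson(1.0, size=100_000).astype(np.uint16)          # mostly noise
signal_idx = rng.integers(0, adc.size, 500)                     # sparse signal hits
adc[signal_idx] += rng.integers(20, 200, 500).astype(np.uint16)

THRESHOLD = 5

def zero_suppress(samples: np.ndarray, threshold: int):
    """Return addresses and values of samples above threshold."""
    keep = samples > threshold
    return np.flatnonzero(keep).astype(np.uint32), samples[keep]

addresses, values = zero_suppress(adc, THRESHOLD)
raw_bytes = adc.nbytes
zs_bytes = addresses.nbytes + values.nbytes
print(f"Kept {values.size} of {adc.size} samples; "
      f"{raw_bytes} -> {zs_bytes} bytes ({raw_bytes / zs_bytes:.0f}x smaller)")
```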
Located in three containers on the surface close to the ALICE site, the EPN farm currently comprises 350 servers, each equipped with eight AMD GPUs with 32 GB of RAM each, two 32-core AMD CPUs and 512 GB of memory. The EPN farm is optimised for the fastest possible TPC track reconstruction, which constitutes the bulk of the synchronous processing, and provides most of its computing power in the form of GPU processing. As data flow from the front end into the farms and cannot be buffered, the EPN computing capacity must be sufficient for the highest data rates expected during Run 3.
Having pioneered the use of GPUs in high-energy physics for more than a decade, ALICE now employs GPUs heavily to speed up online and offline processing
Due to the continuous readout approach at the ALICE experiment, processing does not occur on a particular “event” triggered by some characteristic pattern in detector signals. Instead, all data is read out and stored during a predefined time slot in a time frame (TF) data structure. The TF length is usually chosen as a multiple of one LHC orbit (corresponding to about 90 microseconds). However, since a whole TF must always fit into the GPU’s memory, the collaboration chose to use 32 GB GPU memory to grant enough flexibility in operating with different TF lengths. In addition, an optimisation effort was put in place to reuse GPU memory in consecutive processing steps. During the proton run in 2022 the system was stressed by increasing the proton collision rates beyond those needed in order to maximise the integrated luminosity for physics analyses. In this scenario the TF length was chosen to be 128 LHC orbits. Such high-rate tests aimed to reproduce occupancies similar to the expected rates of PbPb collisions. The experience of ALICE demonstrated that the EPN processing could sustain rates nearly twice the nominal design value (600 GB/s) originally foreseen for PbPb collisions. Using high-rate proton collisions at 2.6 MHz the readout reached 1.24 TB/s, which was fully absorbed and processed on the EPNs. However, due to fluctuations in centrality and luminosity, the number of TPC hits (and thus the required memory size) varies to a small extent, demanding a certain safety margin.
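A little arithmetic ties these numbers together. Taking the roughly 90 microsecond orbit time and, as a reference rate, the 900 GB/s of zero-suppressed data quoted for 50 kHz PbPb operation, the sketch below estimates how much raw input accumulates in a time frame of 32 or 128 orbits; the rest of the 32 GB of GPU memory is then available for the intermediate reconstruction objects handled by the memory-reuse scheme.

```python
# Rough time-frame (TF) bookkeeping using figures quoted in the text:
# ~90 microseconds per LHC orbit and ~900 GB/s of zero-suppressed data
# (the reference rate for 50 kHz PbPb). Purely illustrative arithmetic.

ORBIT_S = 90e-6            # duration of one LHC orbit in seconds
DATA_RATE_GBPS = 900.0     # zero-suppressed input rate in GB/s

def tf_size_gb(n_orbits: int, rate_gbps: float = DATA_RATE_GBPS) -> float:
    """Approximate amount of data accumulated during one time frame."""
    return rate_gbps * n_orbits * ORBIT_S

for n_orbits in (32, 128):
    duration_ms = n_orbits * ORBIT_S * 1e3
    print(f"TF of {n_orbits:>3} orbits ~ {duration_ms:.1f} ms "
          f"~ {tf_size_gb(n_orbits):.1f} GB of incoming data")
```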
Flexible compression
At the incoming raw-data rates during Run 3, it is impossible to store the data – even temporarily. Hence, the outgoing data is compressed in real time to a manageable size on the EPN farm. During this network transfer, event building is carried out by the data distribution suite, which collects all the partial TFs sent by the detectors and schedules the building of the complete TF. At the end of the transfer, each EPN node receives and then processes a full TF containing data from all ALICE detectors.
The detector generating by far the largest data volume is the TPC, contributing more than 90% to the total data size. The EPN farm compresses this to a manageable rate of around 100 GB/s (depending on the interaction rate), which is then stored to the disk buffer. The TPC compression is particularly elaborate, employing several steps including a track-model compression to reduce the cluster entropy before the entropy encoding. Evaluating the TPC space-charge distortion during data taking is also the most computing-intensive aspect of online calibrations, requiring global track reconstruction for several detectors. At the increased Run 3 interaction rate, processing on the order of one percent of the events is sufficient for the calibration.
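Why a track-model step pays off before entropy encoding can be seen from a toy comparison: cluster coordinates expressed as residuals with respect to a fitted track are far more narrowly distributed than the raw coordinates, so their empirical entropy, and hence their size after entropy coding, is much smaller. The distributions below are invented for illustration and are not the ALICE compression scheme.

```python
import numpy as np

# Toy illustration of track-model compression: residuals w.r.t. a fitted track
# have much lower entropy than raw coordinates, so they entropy-encode better.
# Both distributions are invented for the example.

rng = np.random.default_rng(7)

def shannon_entropy_bits(values: np.ndarray) -> float:
    """Empirical Shannon entropy per symbol, in bits."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

raw_coords = rng.integers(0, 1 << 12, size=100_000)               # ~uniform 12-bit positions
residuals = np.round(rng.normal(0, 3, size=100_000)).astype(int)  # narrow residuals

print(f"Raw 12-bit coordinates: {shannon_entropy_bits(raw_coords):.1f} bits/symbol")
print(f"Track-model residuals:  {shannon_entropy_bits(residuals):.1f} bits/symbol")
```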
During data taking, the EPN system operates synchronously and the TPC reconstruction fully loads the GPUs. With the EPN farm providing 90% of its compute performance via GPUs, it is also desirable to maximise the GPU utilisation in the asynchronous phase. Since the relative contribution of the TPC processing to the overall workload is much smaller in the asynchronous phase, GPU idle times would be high and processing would be CPU-limited if the TPC part only ran on the GPUs. To use the GPUs maximally, the central-barrel asynchronous reconstruction software is being implemented with native GPU support. Currently, around 60% of the workload can run on a GPU, yielding a speedup factor of about 2.25 compared to CPU-only processing. With the full adaptation of the central-barrel tracking software to the GPU, it is estimated that 80% of the reconstruction workload could be processed on GPUs.
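The link between the offloaded fraction and the achievable gain follows an Amdahl's-law-style bound, sketched below: with 60% of the workload on GPUs, even an infinitely fast GPU portion caps the overall speedup at 2.5, close to the 2.25 quoted above, while 80% raises the cap to 5. The per-kernel GPU speedup of 15 used in the example is an assumption chosen only for illustration.

```python
# Amdahl's-law-style sketch (an illustration, not an ALICE benchmark):
# if a fraction f of the reconstruction runs on GPUs that execute that part
# s times faster, the overall speedup of the job is bounded by 1 / (1 - f).

def overall_speedup(f: float, s: float) -> float:
    """Speedup of the whole job when a fraction f is accelerated by a factor s."""
    return 1.0 / ((1.0 - f) + f / s)

for f in (0.6, 0.8, 0.9):
    bound = overall_speedup(f, float("inf"))
    example = overall_speedup(f, 15.0)   # assumed per-kernel GPU speedup
    print(f"f = {f:.0%}: upper bound {bound:.1f}x, "
          f"with a 15x faster GPU part ~ {example:.2f}x")
```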
In contrast to synchronous processing, asynchronous processing includes the reconstruction of data from all detectors, and all events instead of only a subset; physics analysis-ready objects produced from asynchronous processing are then made available on the computing Grid. As a result, the processing workload for all detectors, except the TPC, is significantly higher in the asynchronous phase. For the TPC, clustering and data compression are not necessary during asynchronous processing, while the tracking runs on a smaller input data set because some of the detector hits were removed during data compression. Consequently, TPC processing is faster in the asynchronous phase than in the synchronous phase. Overall, the TPC contributes significantly to asynchronous processing, but is not dominant. The asynchronous reconstruction will be divided between the EPN farm and the Grid sites. While the final distribution scheme is still to be decided, the plan is to split reconstruction between the online computing farm, the Tier 0 and the Tier 1 sites. During the LHC shutdown periods, the EPN farm nodes will almost entirely be used for asynchronous processing.
Great shape
In 2021, during the first pilot-beam collisions at injection energy, synchronous processing was successfully commissioned. In 2022 it was used during nominal LHC operations, where ALICE performed online processing of pp collisions at a 2.6 MHz inelastic interaction rate. At lower interaction rates (both for pp and PbPb collisions), ALICE ran additional processing tasks on free EPN resources, for instance online TPC charged-particle energy-loss determination, which would not be possible at the full 50 kHz PbPb collision rate. The particle-identification performance is demonstrated in the figure “Particle ID”, in which no additional selections on the tracks or detector calibrations were applied.
Another performance metric used to assess the quality of the online TPC reconstruction is the charged-particle tracking efficiency. The efficiency for reconstructing tracks from PbPb collisions at a centre-of-mass energy of 5.52 TeV per nucleon pair ranges from 94 to 100% for pT > 0.1 GeV/c. Here the fake-track rate is negligible; however, the clone rate increases significantly for low-pT primary tracks due to incomplete track merging of very low-momentum particles that curl in the ALICE solenoidal field and leave and enter the TPC multiple times.
The effective use of GPU resources provides extremely efficient processing. Additionally, GPUs deliver improved data quality and better compute-cost efficiency – aspects that have not been overlooked by the other LHC experiments. To manage their data rates in real time, LHCb developed the Allen project, a first-level trigger processed entirely on GPUs that reduces the data rate prior to the alignment, calibration and final reconstruction steps by a factor of 30–60. With this approach, 4 TB/s are processed in real time, with around 10 GB/s of the most interesting collisions selected for physics analysis.
At the beginning of Run 3, the CMS collaboration deployed a new HLT farm comprising 400 CPUs and 400 GPUs. With respect to a traditional solution using only CPUs, this configuration reduced the processing time of the high-level trigger by 40%, improved the data-processing throughput by 80% and reduced the power consumption of the farm by 30%. ATLAS uses GPUs extensively for physics analyses, especially for machine-learning applications. Focus has also been placed on data processing, anticipating that in the coming years much of it can be offloaded to GPUs. For all four LHC experiments, the future use of GPUs is crucial to reduce cost, size and power consumption at the higher luminosities of the LHC.
Having pioneered the use of GPUs in high-energy physics for more than a decade, ALICE now employs GPUs heavily to speed up online and offline processing. Today, 99% of synchronous processing is performed on GPUs, dominated by the largest contributor, the TPC.
More code
On the other hand, only about 60% of asynchronous processing (i.e. offline data processing on the EPN farm, here for 650 kHz pp collisions) currently runs on GPUs. In asynchronous processing, even though the TPC remains an important contributor to the compute load, several other subdetectors also contribute significantly. There is therefore an ongoing effort to port considerably more code to the GPUs, which will increase the fraction of GPU-accelerated code to beyond 80% for full barrel tracking. Eventually ALICE aims to run 90% of the whole asynchronous processing on GPUs.
In November 2022 the upgraded ALICE detectors and central systems saw PbPb collisions for the first time during a two-day pilot run at a collision rate of about 50 Hz. High-rate PbPb processing was validated by injecting Monte Carlo data into the readout farm and running the whole data-processing chain on 230 EPN nodes. Because the TPC data volumes are somewhat larger than initially expected, this stress test is now being repeated on 350 EPN nodes with the final, further optimised TPC firmware, to provide the required 20% compute margin with respect to the foreseen 50 kHz PbPb operations in October 2023. Together with the upgraded detector components, the ALICE experiment has never been in better shape to probe extreme nuclear matter during the current and future LHC runs.
The discovery of the W boson at CERN in 1983 can well be considered the birth of precision electroweak physics. Measurements of the W boson’s couplings and mass have become ever more precise, progressively weaving in knowledge of other particle properties through quantum corrections. Just over a decade ago, the combination of several Standard Model (SM) parameters with measurements of the W-boson mass led to a prediction of a relatively low Higgs-boson mass, of order 100 GeV, prior to its discovery. The discovery of the Higgs boson in 2012 with a mass of about 125 GeV was hailed as a triumph of the SM. Last year, however, an unexpectedly high value of the W-boson mass measured by the CDF experiment threw a spanner into the works. One might say the 40-year-old W boson encountered a midlife crisis.
The mass of the W boson, mW, is important because the SM predicts its value to high precision, in contrast with the masses of the fermions or the Higgs boson. The mass of each fermion is determined by the strength of its interaction with the Brout–Englert–Higgs field, but this strength is currently only known to an accuracy of approximately 10% at best; future measurements from the High-Luminosity LHC and a future e+e– collider are required to achieve percent-level accuracy. Meanwhile, mW is predicted with an accuracy better than 0.01%. At tree level, this mass depends only on the mass of the Z boson and the weak and electromagnetic couplings. The first measurements of mW by the UA1 and UA2 experiments at the SppS collider at CERN were in remarkable agreement with this prediction, within the large uncertainties. Further measurements at the Tevatron at Fermilab and the Large Electron Positron collider (LEP) at CERN achieved sufficient precision to probe the presence of higher-order electroweak corrections, such as from a loop containing top and bottom quarks.
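For reference, the tree-level dependence mentioned here is the standard on-shell relation between the W- and Z-boson masses, the electromagnetic coupling and the Fermi constant, with the higher-order corrections discussed below conventionally folded into a quantity Δr (Δr = 0 at tree level; conventions for how it enters vary):

```latex
% On-shell relation between m_W, m_Z, alpha and G_F; Delta r vanishes at
% tree level and absorbs the higher-order electroweak corrections.
m_W^2\left(1-\frac{m_W^2}{m_Z^2}\right)=\frac{\pi\alpha}{\sqrt{2}\,G_F}\,\bigl(1+\Delta r\bigr)
```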
Increasing sophistication
Measurements of mW at the four LEP experiments were performed in collisions producing two W bosons. Hadron colliders, by contrast, can produce a single W-boson resonance, simplifying the measurement when utilising the decay to an electron or muon and an associated neutrino. However, this simplification is countered by the complication of the breakup of the hadrons, along with multiple simultaneous hadron–hadron interactions. Measurements at the Tevatron and LHC have required increasing sophistication to model the production and decay of the W boson, as well as the final-state lepton’s interactions in the detectors. The average time between the available datasets and the resulting published measurement has increased from two years for the first CDF measurement in 1991 to more than 10 years for the most recent CDF measurement announced last year (CERN Courier May/June 2022 p9). The latter benefitted from a factor of four more W bosons than the previous measurement, but suffered from a higher number of additional simultaneous interactions. The challenge of modelling these interactions while also increasing the measurement precision required many years of detailed study. The end result, mW = 80433.5 ± 9.4 MeV, differs from the SM prediction of mW = 80357 ± 6 MeV by approximately seven standard deviations (see “Out of order” figure).
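The quoted tension can be checked in one line, assuming the measurement and prediction uncertainties are independent and add in quadrature (a simplification that ignores any correlation between them):

```python
from math import hypot

# Quick check of the quoted ~7 sigma tension, adding uncertainties in quadrature.
m_cdf, err_cdf = 80433.5, 9.4   # MeV, CDF measurement
m_sm,  err_sm  = 80357.0, 6.0   # MeV, SM prediction

significance = (m_cdf - m_sm) / hypot(err_cdf, err_sm)
print(f"Tension ~ {significance:.1f} standard deviations")
```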
The SM calculation of mW includes corrections from single loops involving fermions or the Higgs boson, as well as from two-loop processes that also include gluons. The splitting of the W boson into a top- and bottom-quark loop produces the largest correction to the mass: for every 1 GeV increase in top-quark mass the predicted W mass increases by a little over 6 MeV. Measurements of the top-quark mass at the Tevatron and LHC have reached a precision of a few hundred MeV, thus contributing an uncertainty on mW of only a couple of MeV. The calculated mW depends only logarithmically on the Higgs-boson mass mH, and given the accuracy of the LHC mH measurements, it contributes negligibly to the uncertainty on mW. The tree-level dependence of mW on the Z-boson mass and on the electromagnetic coupling strength contribute an additional couple of MeV each to the uncertainty. The robust prediction of the SM allows an incisive test through mW measurements, and it would appear to fail in the face of the recent CDF measurement.
Since the release of the CDF result last year, physicists have held extensive and detailed discussions, with a recurring focus on the measurement’s compatibility with the SM prediction and with the measurements of other experiments. Further discussions and workshops have reviewed the suite of Tevatron and LHC measurements, hypothesising effects that could have led to a bias in one or more of the results. These potential effects are subtle, as fundamentally the W-boson signature is strikingly unique and simple: a single charged electron or muon with no observable particle balancing its momentum. Any source of bias would have to lie in a higher-order theoretical or experimental effect, and the analysts have studied and quantified these in great detail.
Progress
In the spring of this year ATLAS contributed an update to the story. The collaboration re-analysed its data from 2011 to apply a comprehensive statistical fit using a profile likelihood, as well as the latest global knowledge of parton distribution functions (PDFs) – which describe the momentum distributions of quarks and gluons inside the proton. The preliminary result (mW = 80360 ± 16 MeV) reduces both the uncertainty and the central value relative to its previous result published in 2017, further increasing the tension between the ATLAS result and that of CDF.
Meanwhile, the Tevatron+LHC W-mass combination working group has carried out a detailed investigation of higher-order theoretical effects affecting hadron-collider measurements, and provided a combined mass value using the latest published measurement from each experiment and from LEP. These studies, due to be presented at the European Physical Society High-Energy Physics conference in Hamburg in late August, give a comprehensive and quantitative overview of W-boson mass measurements and their compatibilities. While no significant issues have been identified in the measurement procedures and results, the studies shed significant light on their details and differences.
LHC versus Tevatron
Two important aspects of the Tevatron and LHC measurements are the modelling of the momentum distribution of each parton in the colliding hadrons, and the angular distribution of the W boson’s decay products. The higher energy of the LHC increases the importance of the momentum distributions of gluons and of quarks from the second generation, though these can be constrained using the large samples of W and Z bosons. In addition, the combination of results from centrally produced W bosons at ATLAS with more forward W-boson production at LHCb reduces uncertainties from the PDFs. At the Tevatron, proton–antiproton collisions produced a large majority of W bosons via the valence up and down (anti)quarks inside the (anti)proton, and these are also constrained by measurements at the Tevatron. For the W-boson decay, the calculation is common to the LHC and the Tevatron, and precise measurements of the decay distributions by ATLAS are able to distinguish several calculations used in the experiments.
In any combination of measurements, the primary focus is on the uncertainty correlations. In the case of mW, many uncertainties are constrained in situ and are therefore uncorrelated. The most significant source of correlated uncertainty is the PDFs. In order to evaluate these correlations, the combination working group generated large samples of events and produced simplified models of the CDF, DØ and ATLAS detectors. Several sets of PDFs were studied to determine their compatibility with broader W- and Z-boson measurements at hadron colliders. For each of these sets the correlations and combined mW values were determined, opening a panorama view of the impact of PDFs on the measurement (see “Measuring up” figure).
The mass of the W boson is important because the SM predicts its value to high precision, in contrast with the masses of the fermions or the Higgs boson
The first conclusion from this study is that the compatibility of all PDF sets with W- and Z-boson measurements is generally low: the most compatible PDF set, CT18 from the CTEQ collaboration, gives a probability of only 1.5% that the suite of measurements are consistent with the predictions. Using this PDF set for the W-boson mass combination gives an even lower compatibility of 0.5%. When the CDF result is removed, the compatibility of the combined mW value is good (91%), and when comparing this “N-1” combined value to the CDF value for the CT18 set, the difference is 3.6σ. The results are considered unlikely to be compatible, though the possibility cannot be excluded in the absence of an identified bias. If the CDF measurement is removed, the combination yields a mass of mW = 80369.2 ± 13.3 MeV for the CT18 set, while including all measurements results in a mass of mW = 80394.6 ± 11.5 MeV. The former value is consistent with the SM prediction, while the latter value is 2.6σ higher.
Two scenarios
The results of the preliminary combination clearly separate two possible scenarios. In the first, the mW measurements are unbiased and differ due to large fluctuations and the PDF dependence of the W- and Z-boson data. In the second, a bias in one or more of the measurements produces the low compatibility of the measured values. Future measurements will clarify the likelihood of the first scenario, while further studies could identify effect(s) that point to the second scenario. In either case the next milestone will take time due to the exquisite precision that has now been reached, and to the challenges in maintaining analysis teams for the long timescales required to produce a measurement. The W boson’s midlife crisis continues, but with time and effort the golden years will come. We can all look forward to that.