Harnessing the CERN model

Paul Lecoq in China in 1982

CERN’s international relationships are central to its work, and a perfect example of nations coming together for the purpose of peaceful research, regardless of external politics. Through working in China during the 1980s and the Soviet Union/Russia in the early 1990s, physicist Paul Lecoq’s long career is testament to CERN’s influence and standing.

Originally interested in astrophysics, Lecoq completed a PhD in nuclear physics in Montreal in 1972. After finishing his military service, during which he taught nuclear physics at the French Navy School, he came across an advertisement for a fellowship position at CERN. It was the start of a 47-year-long journey with the organisation. “I thought, why not?” Lecoq recalls. “CERN was not my initial target, but I thought it would be a very good place to go. Also, I liked skiing and mountains.”

Royal treatment

During his third year as a fellow, a staff position opened for the upcoming European Hybrid Spectrometer (EHS), which would test CERN’s potential for collaboration beyond its core member states. “The idea was to make a complex multi-detector system, which would be a multi-institute collaboration, with each institute having the responsibility to build one detector,” says Lecoq. One of these institutes was based in Japan, allowing the exchange of personnel. Lecoq was one of the first to benefit from this agreement and, thanks to CERN’s already substantial image, he was very well-received. “At the time, people were travelling much less than now, and Japan was more isolated. I was welcomed by the president of the university and had a very nice reception almost every day.” It was an early sign of things to come for Lecoq.

During the lifetime of the EHS, a “supergroup” of CERN staff was formed whose main role was to support partners across the world while also building part of the experiment. By the time the Large Electron–Positron Collider (LEP) came to fruition, it was clear that it would also benefit from this successful approach. At that time, Sam Ting had been asked by then Director-General Herwig Schopper to propose an experiment for LEP, which would become the L3 experiment, and with the EHS coming to an end, says Lecoq, it was natural that the EHS supergroup was transferred to Ting. Through friends working in materials science, Lecoq caught wind of a new scintillator crystal (bismuth germanate, BGO) being proposed for L3 – an idea that would see him link up with Ting and spend much of the next few years in China.

BGO crystals had not yet been used in particle physics, and existed only in a few small samples, but L3 needed more than 1 m³ of coverage. After procuring and testing the first crystal samples, Lecoq presented his findings at an L3 collaboration meeting. “At the end of the meeting, Ting pointed his finger in my direction and asked if I was free on Saturday. I responded, ‘yes sir’. Then he turned to his secretary and said, ‘book a flight ticket to Shanghai – this guy is coming with me!’”

This is something unique about CERN, where you can meet fantastic people that can completely change your life

Unknown to Lecoq before his arrival in China, Ting had already laid the groundwork for developing the technology for the mass production of BGO crystals there, and wanted Lecoq to oversee this production. BGO was soon recognised as a crystal that could be produced in large quantities in a reliable and cost-effective way, and it has since been used in a generation of PET scanners. Lecoq was impressed by the authority Ting held in China. “The second day we were in China, we, well Ting, had been invited by the mayor of Shanghai for a dinner to discuss the opportunity for the experiment.” The mayor was Jiang Zemin, who only a few years later became China’s president. “I have been very lucky to have several opportunities like this in my career. This is something unique about CERN, where you can meet fantastic people that can completely change your life. It was also an interesting period when China was slowly opening up to the world – on my first trip everyone was in Mao suits, and in the next three to five years I could see a tremendous change that was so impressive.”

Lecoq’s journeyman career did not stop there. With LEP finishing towards the turn of the millennium and LHC preparations in full swing, his expertise was needed for the production of lead tungstate (PWO) crystals for CMS’s electromagnetic calorimeter. This time, however, Russia was the base of operations, and the 1.2 m³ of BGO crystal for L3 became more than 10 m³ of PWO for CMS. As with his spell in China, Lecoq was in Russia during a politically uncertain time, with his arrival shortly following the fall of the Berlin Wall. “There was no system anymore. But there was still very strong intellectual activity, with scientists at an incredible level, and there was still a lot of production infrastructure for military interest.”

It was interesting not only at the scientific level, but on a human level too

At the time, lithium niobate, a crystal very similar to PWO, was being exploited for radar communication and missile guidance, says Lecoq, and the country had a valuable (but unknown to the public) production infrastructure in place. With the disarray at the end of the Cold War, the European Commission set up a system, along with Canada, Japan and the US, called the International Science and Technology Center (ISTC), whose role was to transfer the Soviet Union’s military industry into civil application. Lecoq was able to meet with the ISTC and gain €7 million in funding to support PWO crystal production for CMS. Again, he stresses, this only happened due to the stature of CERN. “I could not have done that if I had been working only as a French scientist. CERN has the diplomatic contact with the European Commission and different governments, and that made it a lot easier.” Lecoq was responsible for choosing where the crystal production would take place. “These top-level scientists working in the military areas felt isolated, especially in a country that was in a period of collapse, so they were more than happy not only to have an opportunity to do their job under better conditions, but also to have the contacts. It was interesting not only at the scientific level, but on a human level too.”

Crystal clear

Back at CERN, Lecoq realised that introducing a new scintillating crystal, optimising its performance to the harsh operating conditions of the LHC, and developing mass-production technologies to produce large amounts of crystal in a reliable and cost-effective way, was a formidable challenge that could not be dealt with only by particle physicists. Therefore, in 1991, he decided to establish the Crystal Clear multidisciplinary collaboration, gathering experts in material science, crystal growth, luminescence, solid-state physics and beyond. Here again, he says, the attractiveness of CERN as an internationally recognised research centre was a great help to convince institutes all over the world, some not connected to particle physics at all, to join the collaboration. Crystal Clear is still running today, celebrating its 30th anniversary.

Through developing international connections in unexpected places, Lecoq’s career has helped build sustained relationships for CERN in some of the world’s largest and most scientifically fruitful nations. Now retired, he is a distinguished professor at the Polytechnic University of Valencia, where he has set up a public–private partnership laboratory for metamaterial-based scintillators and photodetectors, to aid a new generation of ionising-radiation detectors for medical imaging and other applications. Even now, he is able to flex the muscles of the CERN model by keeping in close contact with the organisation.

“My career at CERN has been extremely rich. I have changed so much in the countries I’ve worked with and the scientific aspect, too. It could only have been possible at CERN.”

Making a difference

Suzie Sheehy

How did you end up as an accelerator physicist?

Somewhat accidentally, because I didn’t even know that being a researcher in physics was a thing you could be until my second year of university. It was around then that I realised that someone like me could ask questions that didn’t have answers. That hooked my interest. My first project was in nuclear physics, and it involved using a particle accelerator for an experiment. I then attended the CERN summer student programme, working on ATLAS, which was my first proper exposure to the technology of particle physics. When it came to the time to do my PhD in around 2006, I had the choice to either stay in Melbourne to do particle physics, or go to Oxford, which had a strong accelerator programme. When I learned they were designing accelerators for cancer treatment, it blew my mind! I took the leap and decided to move to the other side of the world.

What did you do as a postdoc? 

I was lucky enough to get an 1851 Royal Commission Fellowship, which allowed me to start an independent research programme. It was a bit of a baptism of fire, as I had been working on medical machines but then moved to high-intensity proton accelerators. I was looking at fixed-field alternating-gradient accelerators and their application to things like accelerator-driven reactors. After a while I found myself spending a lot of time building sophisticated simulations, and was getting a bit bored of computing. So I started a couple of collaborations with some teams in Japan – one of which was using ion traps to mimic the dynamics of particle beams at very high intensity. What I find really interesting is how beams behave at a fundamental level, and I am currently working on upgrading a small experiment called IBEX to test a new type of optics called nonlinear integrable optics, which is a focus of Fermilab at the moment.

And now you’re back in the medical arena?

Yes – a few years ago I started working with people from CERN and the UK on compact medical accelerators for low- and middle-income countries. Then in 2019 I felt the pull to return to Australia to grow accelerator physics there. They have accelerators and facilities but didn’t have a strong academic accelerator community, so I am building up a group at Melbourne University that has a medical applications focus, but also looks at other areas. After 20 years of pushing for a proton therapy centre here, the first one is now being built. 

How and when did your career in science communication take off?

I was doing things like stage shows for primary-school children when I was a first-year undergraduate. I have always seen it as part of the process of being a scientist. Before my PhD I worked in a science museum and, while at Oxford, I started an outreach programme called Accelerate! that took live science shows to some 30,000 students in its first two years and is still running. From there, it sort of branched out. I did more public lectures, but also a little bit of TV, radio and some writing.

Sheehy presenting

Any advice for physicists who want to get into communication?

You need to build a portfolio, demonstrate a range of different styles and delivery modes, and use language that people understand. The other thing that really helped me was working with professional organisations such as the Royal Institution in London. It does take a lot of time to do both your research and academic job well, and also do the communication well. A lot of my communication is about my research field – so luckily they enrich each other. I think my communication has the potential to have a much bigger societal impact than my research, so I am very serious about it. The first time someone pointed a video camera at me I was terrified. Now I can say what I want to say. We shouldn’t underestimate how much the public wants to hear from real working scientists, so keeping a very strong research base maintains my authenticity and credibility.

What is your work/life balance like? 

I am not a fan of the term “work/life balance” as it tends to imply that one is necessarily in conflict with the other. I think it’s important to set up a kind of work/life integration that supports well-being while allowing you to do the work you want to do. When I was invited back to Melbourne to build an accelerator group, I’d just started a new research group in Oxford. I stepped down my teaching and we agreed that I would take periods of sabbatical to spend time in Melbourne until I finished my experiment. I have been so incredibly grateful to everyone on both sides for their understanding. Key to that has been learning how other people’s expectations affect you and finding a way to filter them out and drive your own goals. Working in two completely different time zones, it would be easy to work ridiculously long days, so I have had to learn to protect my health. The hardest thing, and I think a lot of early/mid-career researchers will relate to this, is that academia is an infinite job: you will never do enough for someone to tell you that you have done enough. The pressure always feels like it’s increasing, especially when you are a post-doc or on tenure track, or in the process of establishing a new group or lab. You have to learn how to take care of your mental health and well-being so that you don’t burn out. With everything else that’s going on in the world right now, this is even more important. 

You are active in trying to raise the profile of women in physics. What does this involve on a practical level?

There has been a lot of focus for many years on getting more women into subjects like physics. My view is that whenever I meet young people they’re interested already. In many countries the gender balance at undergraduate level is similar. So what’s happening instead is that we are pushing women and minorities out. My focus, within my sphere of influence, is to make sure that the culture that I am perpetuating and the values that I hold within my research groups are well defined and communicated.

I kind of pulled back from active engagement in panel sessions and things like that a number of years ago, because I realised that the most important way I can contribute is by being the best scientist that I can be. The fact that I happen to have a public profile is great in that it makes people aware that people like me exist. One of the things that has helped me the most is to build a really great community of peers of other women in physics. I think for the first seven or eight years of my career, when imposter syndrome was strong and I questioned if I fitted in, I realised that I didn’t have a single direct female colleague. With most people in my field being men, it’s likely that when choosing a speaker, for example, the first person we think of is male. Taking time to be well-networked with women in the field is incredibly important in that regard. Today, I find that creating the right environment means that people will seek out my research group because they hear it’s a nice place to be. Students today are much savvier with this stuff – they can tell toxic professors a mile away. I am trying to show them that there is a way of doing research that doesn’t involve the horrible sides to it. Research is hard enough already, so why make it harder? 

Tell us about your debut book, The Matter of Everything

It’s published by Bloomsbury (UK/Commonwealth) and Knopf (US) and is due out in early 2022. Its subtitle is “The 12 experiments that made the modern world”, starting with the cathode-ray tube and going all the way through to the LHC and what might come next. It’s told from the perspective of an experimental physicist. What isn’t always captured in popular physics books is how science is actually done, but it’s very human to feel like you’re failing in the lab. I also delve into what first interested me in accelerators, specifically the things that have emerged unexpectedly from these research areas. People think that Apple invented everything in the iPhone, but if it wasn’t for curiosity-driven physics experiments then it wouldn’t be possible. On a personal note, as I went through these stories in the field, often in the biographies and the acknowledgments, I would end up going down these rabbit holes of women whose careers were cut short because they got married and had to quit their job. It’s been lovely to have the opportunity to learn that these women were there, and it wasn’t just white men. 

You have to learn how to take care of your mental health and well-being so that you don’t burn out

Do you have a preference as to which collider should come next after the LHC? 

I think it should be one of the linear ones. The size of future circular colliders and the timescales involved are quite overwhelming, and you have to wonder if the politics might change throughout the project. A linear machine such as the ILC is more ready to go, if the money and will were there. But I also think there is value in the diversity of the technology. The scaling of SLAC’s linear electron machine, for example, really pushed the industrialisation of that accelerator technology – which is part of the reason why we have 3 GHz electron accelerators now in every hospital. There will be other implications to what we build, other than physics results – even though the decisions will be made on the physics.

What do you say to students considering a career in particle physics? 

I will answer that from the perspective of the accelerator field, which is very exciting. If you look historically, new technologies have always driven new discoveries. The accelerator field is going through an interesting “technology discovery phase”, for example with laser-driven plasma accelerators, so there will be huge changes to what we are doing in 10–15 years’ time that could blow the decisions surrounding future colliders out of the water. This happened in the 1960s in the era of proton accelerators, when suddenly there was a new technology that meant you could build machines with a much higher energy using smaller magnets, and the people who took that risk were the ones who ended up pushing the field forward. I sometimes feel experimental and theoretical physicists are slightly disconnected from what’s going on in accelerator physics now. When making future decisions, people should attend accelerator conferences… it may influence their choices.

Physics flies high at SINP

The main building of SINP MSU

The Skobeltsyn Institute of Nuclear Physics (SINP) was established at Lomonosov Moscow State University (MSU) on 1 February 1946, in pursuance of a decree of the government of the USSR. SINP MSU was created as a new type of institute, in which the principles of integrating higher education and fundamental science were prioritised. Its initiator and first director was Soviet physicist Dmitri Vladimirovich Skobeltsyn, who was known for his pioneering use of the cloud chamber to study the Compton effect in 1923 – aiding the discovery of the positron less than a decade later.

It is no coincidence that SINP MSU was established in the immediate aftermath of the Second World War, following the first use of nuclear weapons in conflict. The institute was created on the basis that it would train personnel specialising in nuclear science and technology, after the country realised that there was a shortage of specialists in the field. Thanks to strong leadership from Skobeltsyn and one of his former pupils, Sergei Nikolaevich Vernov, SINP MSU quickly gained recognition in the country. As early as 1949, the government designated it a leading research institute. By this time a 72 cm cyclotron was already in operation, the first at a higher-education institute in the USSR.

Skobeltsyn and Vernov continued with their high ambitions as they expanded the facility to the Lenin Hills, along with other scientific departments of MSU. Proposed in 1949 and opened in 1953, the new Moscow building was approved to host a set of accelerators and a special installation for studying extensive air showers (EASs). The first accelerator built there was a 120 cm cyclotron, and its first outstanding scientific achievement was the discovery by A F Tulinov of the so-called “shadow effect” in nuclear reactions on single crystals, which makes it possible to study nuclear reactions at ultra-short time intervals. Significant scientific successes were associated with the commissioning of a unique installation, the EAS-MSU, at the end of the 1950s for the study of ultra-high-energy cosmic rays. Several results were obtained through a new method for studying EASs in the region of 10¹⁵–10¹⁷ eV, leading to the discovery of the famous “knee” in the energy spectrum of primary cosmic rays.

The space race 

The year 1949 marked SINP MSU’s entrance into astrophysics and, in particular, satellite technology. The USSR’s launch of Sputnik 1, Earth’s first artificial satellite, in 1957 gave Vernov, an enthusiastic experimentalist who had previously researched cosmic rays in the Earth’s stratosphere, the opportunity to study cosmic rays beyond the atmosphere. This led to the installation of a Geiger counter on the Sputnik 2 satellite and a scintillation counter on Sputnik 3, to enable radiation experiments. Vernov’s experiments on Sputnik 2 enabled the first detection of the outer radiation belt. However, this was not confirmed until 1958 by the US’s Explorer 1, which carried an instrument designed and built by James Van Allen. Sputnik 3 confirmed the existence of an inner radiation belt, having received information from Australia and South America, as well as from sea-based stations.

Soyuz carrier rocket

Vernov, who succeeded Skobeltsyn as SINP director from 1960 to 1982, later worked on the “Electron” and “Proton” series of satellites, which studied the radiation-belt structure, energy spectra and temporal variations associated with geomagnetic activity. This led to pioneering results on the spectrum and composition of galactic cosmic rays, and to the first model of radiation distribution in near-Earth space in the USSR.

SINP MSU has carried on Vernov’s cosmic legacy by continuing to develop equipment for satellites. Since 2005 the institute has developed its own space programme through the university satellites Tatiana-Universitetsky and Tatiana-2, as well as the Vernov satellite. These satellites led to the discovery of new phenomena, such as ultraviolet flashes from the atmosphere. In 2016 a tracking system for ultraviolet rays was installed on board the Lomonosov satellite (see “Vernov’s legacy” image), developed at SINP MSU under the guidance of former director Mikhail Igorevich Panasyuk. This allowed fluorescence light radiated by EASs of ultra-high-energy cosmic rays to be measured for the first time, along with prompt-emission observations of gamma-ray bursts at multiple wavelengths. The Lomonosov mission as a whole was led by the current rector of MSU, Victor Sadovnichy.

High-energy exploration 

In 1968, with strong endorsement from Vernov and the director of a new Russian accelerator centre in Protvino, Anatoly Alekseyevich Logunov (who went on to be MSU rector from 1977 to 1991), a department of high-energy physics was established under the leadership of V G Shevchenko at SINP MSU, and the following year it was decided that a high-energy laboratory would be established at MSU. Throughout the years to follow, collaborations with laboratories in the USSR and across the world, including CERN, Fermilab, DESY and the Joint Institute for Nuclear Research (JINR), led the department to be at the forefront of the field.

At the end of the 1970s a centre was created at SINP MSU for bubble-chamber film analysis. At the time it was one of the largest automated complexes in the country for processing and analysing information from large tracking detectors. In collaboration with other institutes worldwide, staff at the institute studied soft hadronic processes in the energy range 12–350 GeV at a number of large facilities, including the Mirabelle hydrogen bubble chamber and the European Hybrid Spectrometer.

Extensive and unique experimental data have been obtained on the characteristics of multiple hadron production, including fragmentation distributions. Throughout the years, exclusive reaction channels, angular and momentum correlations of secondary particles, resonance production processes and annihilation processes were also investigated. These results have made it possible to reliably test the predictions of phenomenological models, including the dual-parton model and the quark–gluon string model, based on the fundamental theoretical scheme of dual-topological unitarisation.

For the first time in Russia, an integrated system for the development, design, mass production and testing of large solid-state silicon and microstrip detectors has been created, with SINP MSU playing the leading role alongside a number of scientific and technical enterprises. On this basis, at the turn of the millennium a hadron–electron separator was built for the ZEUS experiment at HERA, DESY.

Rolf Heuer visit to Lomonosov Moscow State University

The institute delved into theoretical studies in 1983, with the establishment of the laboratory of symbolic computations in high-energy physics and, in 1990, the department of theoretical high-energy physics. One of its most striking achievements was the creation of the CompHEP software package, which has received global recognition for its ability to automate calculations of collisions between elementary particles and their decays within the framework of gauge theories. This is freely available and allows physicists (even those with little computer experience) to calculate cross sections and construct various distributions for collision processes within the Standard Model and its extensions. Members of the department later went on to make a significant contribution to the creation of a Tier-2 Grid computer segment in Russia for processing and storing data from the LHC detectors.

Over the past 35 years, accelerator research at SINP MSU has moved from the development of large accelerator complexes for fundamental research to the creation and production of applied accelerators for security systems, industry and medicine.

Teaching legacy

Throughout its 75 years, SINP MSU has also nurtured thousands of students. In 1961 a new branch of SINP MSU, the department for nuclear research, was established in Dubna. It became the basis for training students from the MSU physics faculty in nuclear physics using the capabilities of the largest international scientific centre in Russia – JINR. The department, which is still going strong today, teaches with a hands-on approach, with students attending lectures by leading JINR scientists and taking part in practical training held at the JINR laboratories.

The institute is currently participating in the upgrade of the LHC detectors (CMS, ATLAS, LHCb) for the HL-LHC project, as well as in projects within the Physics Beyond Colliders initiative (e.g. NA64, SHiP). These actions are under the umbrella of a 2019 cooperation agreement between CERN and Russia concerning high-energy physics and other domains of mutual interest. Looking even further ahead, SINP MSU scientists are also working on the development of research programmes for future collider projects such as the FCC, CLIC and ILC. Furthermore, the institute is involved in the upcoming NICA Complex in Russia, which plans to finish construction in 2022.

After 75 years, the institute is still as relevant as ever, and whatever the next chapter of particle physics will be, SINP MSU will be involved.

Intercepting the beams

The SPS internal beam dump

Imagine standing in the LHC tunnel when the machine is operating. Proton beams are circulating around the 27 km ring more than 11,000 times per second, colliding at four points to generate showers of particles that are recorded by ATLAS, CMS, ALICE, LHCb and other detectors. After a few hours of operation, the colliding beams need to be disposed of to allow a new physics fill. Operators in the CERN control centre instruct beam-transfer equipment to shunt the circulating beams into external trajectories that transport them away from the cryogenic superconducting magnets. Each beam exits the ring and travels for 600 metres in a straight line before reaching a compact cavern housing a large steel cylinder roughly 9 m long and 70 cm in diameter, containing about 4.4 tonnes of graphitic material. Huge forces are generated in the impact. If you could witness the event up close, you would hear a massive “bang” – like a bell – generated by the sudden expansion and successive contraction of the steel shell.
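For readers inclined to check the figures, a back-of-the-envelope sketch reproduces them. It assumes ultra-relativistic protons travelling at essentially the speed of light and uses the rounded 27 km circumference quoted above:

```python
# Back-of-the-envelope check of the numbers quoted above, assuming
# ultra-relativistic protons moving at ~c (an excellent approximation
# at LHC energies) and a rounded 27 km ring circumference.

C_LIGHT = 299_792_458.0   # speed of light, m/s
RING = 27_000.0           # LHC circumference (rounded), m
DUMP_LINE = 600.0         # length of the extraction line to the dump, m

revs_per_second = C_LIGHT / RING      # revolution frequency, Hz
transit_time = DUMP_LINE / C_LIGHT    # time from ring exit to dump block, s

print(f"revolutions per second: {revs_per_second:,.0f}")
print(f"transit time to dump:   {transit_time * 1e6:.0f} microseconds")
```

The result, just over 11,000 revolutions per second, matches the figure in the text; the 600 m flight to the dump takes only about 2 μs.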

What you would have witnessed is a beam-intercepting system in action. Experiencing a beam dump in person is, of course, not possible, due to the large amount of radiation generated in the impact, which is one of the reasons why access to high-energy accelerators is strictly forbidden during operation.

Beam-intercepting systems are essential devices designed to absorb the energy and power of a particle beam. Generally, they are classified in three categories depending on their use: particle-producing devices, such as targets; systems for beam cleaning and control, such as collimators or scrapers; and those with safety functions, such as beam dumps or beam stoppers. During the current Long Shutdown 2 (LS2), several major projects have been undertaken to upgrade some of the hundreds of beam-intercepting systems across CERN’s accelerator complex, in particular to prepare the laboratory for the high-luminosity LHC era.

Withstanding stress

Beam-intercepting devices have to withstand enormous mechanical and thermally induced stresses. In the case of the LHC beam dump, for example, upgrades of the LHC injectors will deliver beams whose stored kinetic energy at top energy will reach 560 MJ during LHC Run 3, roughly corresponding to the energy required to melt 2.7 tonnes of copper. Released in a period of just 86 μs, this corresponds to a peak power of 6.3 TW or, put differently, 8.6 billion horsepower.
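The peak-power figure follows from dividing the stored energy by the extraction duration. A short sketch with the rounded values quoted above lands slightly above the quoted 6.3 TW, which evidently uses more precise fill parameters:

```python
# Peak power of the LHC beam dump: stored beam energy divided by the
# duration of the extraction. These are the rounded figures from the
# text, so the result differs slightly from the quoted 6.3 TW.

STORED_ENERGY = 560e6   # J (560 MJ per beam, Run 3)
DUMP_DURATION = 86e-6   # s (the 86 microseconds quoted above)
HP = 745.7              # W per mechanical horsepower

peak_power = STORED_ENERGY / DUMP_DURATION
print(f"peak power: {peak_power / 1e12:.1f} TW")
print(f"          = {peak_power / HP / 1e9:.1f} billion horsepower")
```

Either way, the order of magnitude is striking: terawatts for tens of microseconds, delivered into a few cubic metres of graphite.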

The upgraded LHC beam dump

In general, the energy deposited in beam-intercepting devices scales with the beam energy and intensity, and with the density of the absorbing material, while the peak energy density rises as the beam-spot size shrinks. From the point of view of the materials, this energy is transformed into heat. In a beam dump, for example, the collision volume (which is usually much smaller than the beam-intercepting device itself) is heated to temperatures of 1500 °C or more. This heat causes the small volume to try to expand but, because the surrounding material is much cooler, there is no room for expansion. Instead, the hot volume pushes against the colder surroundings, risking rupture of the structure. Because of the high energy of the beams in CERN’s accelerators, reaching sufficient attenuation requires devices that are, in some cases, several metres long.
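A simple adiabatic estimate shows how such energy densities translate into temperature: the deposition is so fast that heat has no time to flow away, so the temperature rise is the deposited energy density divided by the material’s heat capacity per unit volume. The graphite properties below are rough room-temperature handbook values, used purely for illustration:

```python
# Adiabatic temperature rise: delta_T = energy density / (rho * c_p).
# Graphite properties are approximate room-temperature handbook values;
# in reality c_p rises strongly with temperature, lowering the result.

ENERGY_DENSITY = 2.0e3   # J/cm^3 (a few kJ/cm^3, as quoted above)
RHO = 1.8                # g/cm^3, isostatic graphite (approximate)
CP = 0.71                # J/(g K), graphite near room temperature

delta_T = ENERGY_DENSITY / (RHO * CP)
print(f"adiabatic temperature rise: ~{delta_T:.0f} K")
```

A deposition of 2 kJ/cm³ thus heats graphite by roughly 1500 K in a single shot, consistent with the temperatures mentioned above.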

Beam-intercepting devices must be able to withstand routine operation and also accident scenarios, where they serve to protect more delicate equipment such as cryomagnets. Amongst the many challenges that need to be faced are operation under ultra-high-vacuum conditions, and maintaining integrity and functionality when enduring energy densities up to several kJ/cm³ or power densities up to several MW/cm³. For physics applications, optimisation processes have led to the use of low-strength materials, such as pure lead for the generation of neutrons at the n_TOF facility or iridium and tantalum for the generation of antiprotons at the Antiproton Decelerator (AD) facility.

Preparing for HL-LHC 

The LHC Injectors Upgrade (LIU) project, launched in 2010 and with its hardware installed during LS2, will allow beams with a higher intensity and a smaller spot size to be injected into the LHC. This is a precondition for the full exploitation of the High-Luminosity LHC (HL-LHC), which will enable a large increase in the integrated luminosity collected by the experiments. To protect sensitive equipment in the accelerator chain, the project required a series of new devices in the injector complex from the PS Booster to the SPS, including new beam-intercepting devices. One example is the new SPS internal beam dump, the so-called TIDVG (Target Internal Dump Vertical Graphite), which was installed in straight section five of the SPS during 2020 (see "Structural integrity" image). The main challenge for this device was the need to dissipate a large amount of power rapidly and efficiently, to avoid temperatures that the beam-dump materials cannot tolerate.

Dispersion-suppressor collimators being installed and checked

The TIDVG is used to dispose of the SPS circulating beam whenever necessary, for example in case of an emergency during LHC beam setup, filling or machine-development periods, and to absorb the part of the beam dedicated to fixed-target experiments that remains after the slow-extraction process. To reduce the energy density deposited in the dump core's absorbing material (and hence minimise the associated thermo-mechanical stresses), the beam is diluted by kicker magnets, producing a sinusoidal pattern on the front face of the first absorbing block. The dump is designed to absorb all beam energies in the SPS, from 14 GeV (injection from the PS) to 450 GeV. 

The LHC Injectors Upgrade Project will allow beams with a higher intensity and a smaller spot size to be injected into the LHC

With respect to the pre-LS2 device, the beam power to be absorbed by the dump will be four times higher, with an average power of 300 kW. To reduce the local energy deposition whilst maintaining the total required beam absorption, the length of the new dump has been increased by 70 cm, making it 5 m long. The dump blocks are arranged so that the density of the absorbing materials increases as the beam passes through the device: 4.4 m of isostatic graphite, 20 cm of a molybdenum alloy and 40 cm of pure tungsten. This ensures that the stresses associated with the resulting thermal gradients are kept within acceptable values. The core of the component, which receives the highest thermal load, is cooled directly by a dedicated copper-alloy jacket surrounding the blocks, which can only release their heat through contact with the jacket; to maximise the thermal conductivity at the interfaces between the stainless-steel cooling pipes and the copper alloy, these materials are diffusion-bonded by means of hot isostatic pressing. The entire core is embedded in an air-cooled, seamless, 15 mm-thick stainless-steel hollow cylinder. Because of the high activation of the dump expected after operation, in addition to the first cast-iron shielding, the assembly is surrounded by a massive, multi-layered external shield comprising an inner layer of 50 cm of concrete, followed by 1 m of cast iron and an external layer of 40 cm of marble. Marble is used on the three sides accessible to personnel to minimise the residual dose rate in the vicinity after short cool-down times. 

Collimator system upgrades

Beam collimators and masks are essential components of accelerator systems. They act as intermediate absorbers and diluters of the beam in case of beam losses, minimising the thermal energy received by components such as superconducting magnets (which would otherwise quench) or delicate materials in the LHC experiments. The collimators' other function is to clean up the beam halo by removing particles that drift away from the nominal orbit. Collimators generally consist of two jaws – moveable blocks of robust materials – that close around the beam to clean it of stray particles. More than 100 of these vital devices are placed at critical locations around the LHC.

Upgraded LHC external dumps

The jaw materials can withstand the extreme temperatures and stresses resulting from deposited energy densities up to 6 kJ/cm3, while maintaining – at least for the LHC collimators – good electrical conductivity to reduce the impedance contribution to the machine. Several developments were incorporated in the SPS-to-LHC transfer-line collimators built in the framework of the LIU project, as well as in the LHC collimators for the HL-LHC. For the former, dedicated and extremely robust 3D carbon-composite materials were developed at CERN in collaboration with European industry, while for the latter, dedicated molybdenum carbide–graphite composites were developed, again with European firms. In total, more than 30 new collimators were built and installed in the SPS and LHC during LS2 (see "New collimators" image). 

LHC beam-dump upgrades

Several challenges associated with the LHC beam-dump system had to be overcome, especially on the dump block itself: it needs to be ready at any time to accept protons, from injection at 450 GeV up to top energy (6.5 TeV, rising to 7 TeV in the future); it must be reliable (~200 dump events per year); and it must accept fast-extracted beams, given that the entire LHC ring is emptied in just 86 μs. At 560 MJ, the projected stored beam energy during Run 3 will also be 75% higher than it was during Run 2. 

Welding of the upstream cover and proton window

The dump core (around 8 m long) consists of a sandwich of graphitic materials of sufficiently low density to limit the temperature rise – and therefore the resulting thermally induced stresses – in the material (see "End of the line" image). The graphite is contained in a 12 mm-thick special stainless-steel grade (see "Dump upgrades" image) and the assembly is surrounded by shielding blocks. Roughly 75% (~430 MJ) of the energy, deposited via electromagnetic showers and the ionisation losses of hadrons and muons, ends up in the graphite, while around 5% (~25 MJ) is deposited in the thin steel vessel and the remainder in the shielding assembly. Despite the very low density (1.1 g/cm3) employed in the middle section of the core, temperatures up to 1000 °C were reached during Run 2; from Run 3, temperatures up to 1500 °C are expected. These temperatures would be much higher still were the beam not "painted" across the face of the dump by dilution kickers situated hundreds of metres upstream. The dump must also guarantee its structural integrity in the event of failures of these dilution systems. 

Although the steel vessel absorbs just 5% of the deposited energy, the short timescales involved lead to a semi-instantaneous temperature rise of more than 150 °C, generating accelerations up to 2000 g and forces of several hundred tonnes. Following the operational experience of LHC Run 1 and Run 2, several upgrades were implemented on the dump during LS2, including extensive instrumentation to provide operational feedback throughout Run 3, until 2025. In the later HL-LHC era, the dump will have to absorb up to 50% more energy per dump than during Run 3 (up to 750 MJ per dump), presenting one of numerous beam-interception challenges to be faced.

Fixed-target challenges 

Beyond the LHC, challenging conditions are also encountered for antiproton production at CERN's Antiproton Decelerator (AD), which serves several antimatter experiments. In this case, high-density materials are required to make sources as point-like as possible, improving the capture capabilities of the downstream magnetic-horn focusing system. Energy densities up to 7 kJ/cm3 and temperatures up to 2500 °C are reached in refractory materials such as iridium, tantalum and tungsten. Such intense energy densities, and the large gradients resulting from the very small transverse beam size, generate large thermal stresses and damage the target material; this damage must be minimised to maintain the reliability of the AD's physics programme. To this end, a new air-cooled antiproton production target will be installed in the antiproton target area this year. Similar challenges are faced when producing neutrons for the n_TOF facility: in this case a new nitrogen-cooled pure-lead spallation target weighing roughly 1.5 tonnes will be commissioned this year, ready to produce neutrons spanning 11 orders of magnitude in energy, from 25 meV to several GeV (see "Neutron production target" image). 

Preparation for irradiation of graphite and copper alloy

Reliability is a key aspect of the construction of beam-intercepting devices, not just because machine operation depends strongly on them, but because replacing them is difficult owing to their residual radioactivity after operation. But how do we know that new devices will fulfil their function once installed in the machine? CERN's HiRadMat facility, which allows single-pulse testing with a high-intensity proton beam from the SPS, is one solution. Extremely high energy densities can be reached in test materials and in complex systems, allowing experimental teams to investigate, in a controlled manner, the behaviour of materials or complete mechanical systems when impacted by proton (or ion) beams. During the past few years the facility has been heavily used both by CERN and by external teams from laboratories such as STFC, Fermilab, KEK and GSI, testing materials from graphite to copper and iridium across the whole spectrum of densities (see "Material integrity test" image). To correctly predict the behaviour of materials impacted by protons and other charged particles, a full understanding of their thermo-physical properties is mandatory. Critical properties include the coefficient of thermal expansion, heat capacity, thermal and electrical conductivity, Young's modulus and yield strength, as well as their temperature dependence. 

Dealing with radiation damage is becoming increasingly important as facilities move to higher beam intensities and energies, presenting potential show-stoppers for some beam-intercepting devices. To better understand and predict the radiation response of materials, the RaDIATE collaboration was founded in 2012, bringing together the high-energy physics, nuclear and related communities. The collaboration's research includes determining the effect of high-energy proton irradiation on the mechanical properties of candidate target and beam-window materials, and developing our understanding via micro-structural studies. The goal is to enable accurate lifetime predictions for materials subjected to beam impact, to design robust components for high-intensity beams, and to develop new materials with extended lifetimes. CERN is a partner in this collaboration, alongside Fermilab, STFC/UKRI, Oak Ridge, KEK, Pacific Northwest National Laboratory and other institutions and laboratories worldwide.

Future projects 

High-energy physics laboratories across the world are pursuing new energy and/or intensity frontiers, with both hadron and lepton machines. In all cases, whether for collider physics or fixed-target, neutrino or beam-dump experiments, beam-intercepting devices are at the heart of accelerator operations. For the proposed 100 km-circumference Future Circular Collider (FCC), several challenges have already been identified. Owing to the small emittances and high luminosities involved in a first electron–positron FCC phase, the positron source, with its target and capture system, will require dedicated R&D and testing, as will the two lepton dumps. FCC's proton–proton phase, further in the future, will draw on lessons from HL-LHC operation, but it will also operate at uncharted energy densities for beam-intercepting devices, both for the beam-cleaning and shaping collimators and for the beam dumps.

Installation of the tantalum-clad pure tungsten block

The recently launched muon-collider initiative, meanwhile, will require a target system capable of providing copious amounts of muons, generated either by proton beams or by electrons impacting on a target, depending on the scheme under consideration. In the former case, beams of several MW would collide with a production target, which will have to produce muons of the required momenta very efficiently while being sufficiently reliable to operate without failure for long periods. The muon-collider target and front-end systems will also require magnets and shielding located very close to the production target, which will have to cope with the radiation load and heat deposition. These challenges will be tackled extensively in the next few years, from both a physics and an engineering perspective.

Successful beam-intercepting devices require extensive knowledge and skills

As one of the front-runner projects in the Physics Beyond Colliders initiative, the proposed Beam Dump Facility at CERN would require the construction of a general-purpose high-intensity and high-energy fixed-target complex, initially foreseen to be exploited by the Search for Hidden Particles (SHiP) experiment. At the heart of the installation resides a target/dump assembly that can safely absorb the full high-intensity 400 GeV/c SPS beam, while maximising the production of charm and beauty mesons and using high-Z materials, such as pure tungsten and molybdenum alloy, to reduce muon background for the downstream experiment. The nature of the beam pulse induces very high temperature excursions between pulses (up to 100 °C), leading to considerable thermally induced stresses and long-term fatigue considerations. The high average power deposited on target (305 kW) also creates a challenge for heat removal. A prototype target was built and tested at the end of 2018, at one tenth of the nominal power but able to reach the equivalent energy densities and thermal stresses (see “Beam-dump facility” image).

Human efforts

The development, construction and operation of successful beam-intercepting devices require extensive knowledge and skills, ranging from mechanical and nuclear engineering, to physics, vacuum technologies and advanced production techniques. Technicians also constitute the backbone of the design, assembly and installation of such equipment. International exchange with experts in the field and with laboratories facing similar challenges is essential, as is cross-disciplinary collaboration, for example with the aerospace, nuclear and advanced-materials sectors. In addition, universities provide key students and personnel capable of mastering and developing these techniques, both at CERN and in the laboratories and industries of CERN's member states. This intense multidisciplinary effort is vital for tackling the challenges of current and future high-energy and high-intensity facilities, as well as for developing systems with broader societal impact, for example X-ray synchrotrons, medical linacs and the production of radioisotopes for nuclear medicine. 

Gao takes position at Brookhaven

Haiyan Gao

Experimental nuclear physicist Haiyan Gao has been appointed associate laboratory director for nuclear and particle physics at Brookhaven National Laboratory (BNL), beginning 1 June. Gao, whose research interests include the structure of the nucleon, searches for exotic QCD states and searches for new physics in electroweak interactions, is currently a professor of physics at Duke University, and has previously held positions at Argonne National Laboratory and the Massachusetts Institute of Technology (MIT). At BNL she replaces Dmitri Denisov, who has held the position on an interim basis since Berndt Mueller's departure last year.

While at Duke, Gao was the founding vice chancellor for academic affairs at the new Duke Kunshan University, based in Kunshan, China — a Chinese-American academic partnership between Duke University and Wuhan University established in 2013.

I am very excited by the opportunity and the impact I will be able to make in collaboration with many people at the lab

The appointment comes at a vital time for BNL, with preparations taking place for the Electron-Ion Collider, which expects first physics in the next decade. The unique facility will, for the first time, be able to systematically explore and map out the dynamical system that is the ordinary QCD bound state. On the appointment, Gao states: “The nuclear & particle physics directorate is well-known internationally in accelerator science, high-energy physics, and nuclear physics. I am very excited by the opportunity and the impact I will be able to make in collaboration with many people at the Lab.”

New excited beauty-strange baryon observed

Figure 1

Beauty baryons are a subject of great interest at the LHC, offering unique insights into the nature of the strong interaction and the mechanisms by which hadrons are formed. While the ground states Λb0, Σb±, Ξb−, Ξb0 and Ωb− were observed at the Tevatron at Fermilab and the SPS at CERN, the LHC's higher energy and orders-of-magnitude larger integrated luminosity have allowed the discovery of more than a dozen excited beauty-baryon states among the 59 new hadrons observed at the LHC so far (see LHCb observes four new tetraquarks).

Many hadrons containing one c or b quark are quite similar: interchanging the heavy-quark flavours does not significantly change the physics predicted by effective models that assume "heavy-quark symmetry". The well-established charm baryons and their excitations therefore provide excellent input for theories modelling the less well understood spectrum of beauty baryons. A number of the lightest excited b baryons, such as the Λb(5912)0 and Λb(5920)0 and several excited Ξb and Ωb states, have been observed and are consistent with their charm partners. By contrast, heavier excitations such as the Λb(6072)0 and the Ξb(6227) isodoublet (particles that differ only by an up or down quark) cannot yet be readily associated with charm partners.

New particles

The first particle observed by the CMS experiment, in 2012, was the beauty-strange baryon Ξb(5945)0 (CERN Courier June 2012 p6). It is consistent with being the beauty partner of the Ξc(2645)+, with spin-parity 3/2+, while the Ξb(5955)− and Ξb(5935)− states observed by LHCb are its isospin partner and the beauty partner of the Ξc0, respectively. The charm sector also suggests the existence of prominent heavier isodoublets, called Ξb**: the lightest orbital Ξb excitations, with orbital angular momentum between a light diquark (a pairing of an s quark with either a d or a u quark) and the heavy b quark. The isodoublet with spin-parity 1/2− decays into Ξb π± and the one with 3/2− into Ξb* π±.

The CMS collaboration has now observed such a baryon, the Ξb(6100)−, via the decay sequence Ξb(6100)− → Ξb(5945)0 π−, with Ξb(5945)0 → Ξb− π+, yielding a Ξb− π+ π− final state. The new state's measured mass is 6100.3 ± 0.6 MeV, and the upper limit on its natural width is 1.9 MeV at 95% confidence level. The Ξb− ground state was reconstructed in two channels: J/ψ Ξ− and J/ψ Λ K−. The latter channel also includes partially reconstructed J/ψ Σ0 K− decays (where the photon from the Σ0 → Λγ decay is too soft to be reconstructed).

If the Ξb(6100) baryon were only 13 MeV heavier, it would be above the Λb0 K− mass threshold

The observation of this baryon and the measurement of its properties help to distinguish between the theoretical models predicting the excited beauty-baryon states. It is curious to note that if the Ξb(6100)− baryon were only 13 MeV heavier – a tiny 0.2% change – it would lie above the Λb0 K− mass threshold and could decay to this final state. The Ξb(6100)− might also shed light on the nature of previous discoveries: if it is the 3/2− member of the lightest orbital-excitation isodoublet, then the Ξb(6227) isodoublet recently found by the LHCb collaboration could be the 3/2− orbital excitation of the Ξb or Ξb* baryons. 
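The quoted 13 MeV margin follows directly from the known masses (an illustrative check; the ground-state masses below are PDG values, rounded):

```python
# How close Xi_b(6100)- sits to the Lambda_b0 K- decay threshold (MeV).
m_xib6100  = 6100.3    # mass measured by CMS
m_lambdab0 = 5619.6    # Lambda_b0 mass (PDG, rounded)
m_kminus   = 493.7     # K- mass (PDG, rounded)

threshold = m_lambdab0 + m_kminus       # lowest mass allowing the decay
margin = threshold - m_xib6100
print(f"{margin:.0f} MeV below threshold "
      f"({margin / m_xib6100:.1%} of the mass)")
```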

Light neutral mesons probed to high pT

Neutral-pion (π0) and eta-meson (η) production cross sections at midrapidity have recently been measured up to unprecedentedly high transverse momenta (pT) in proton–proton (pp) and proton–lead (p–Pb) collisions at √sNN = 8 and 8.16 TeV, respectively. The mesons were reconstructed in the two-photon decay channel, for pT from 0.5 and 1 GeV up to 200 and 50 GeV for π0 and η mesons, respectively. The high momentum reach of the π0 measurement was achieved by identifying two-photon showers reconstructed as a single energy deposit in the ALICE electromagnetic calorimeter.

In pp collisions, measurements of identified-hadron spectra are used to constrain perturbative quantum chromodynamics (pQCD) predictions. At large momentum transfer (Q2), pQCD relies on factorising the computable short-distance parton-scattering processes – quark–quark, quark–gluon and gluon–gluon scattering – from the long-distance properties of QCD that require experimental input. The latter are modelled by parton distribution functions (PDFs), which describe the fractional-momentum (x) distributions of quarks and gluons within the proton, and by fragmentation functions, which describe the distribution of the momentum fraction that a hadron of a given species carries away from its parent quark or gluon.
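Schematically, this collinear-factorisation ansatz for producing a hadron h can be written as:

```latex
\mathrm{d}\sigma_{pp \to h + X} \;=\; \sum_{a,b,c}
  f_a(x_1, Q^2)\, f_b(x_2, Q^2)
  \;\otimes\; \mathrm{d}\hat{\sigma}_{ab \to c}
  \;\otimes\; D_{c \to h}(z, Q^2),
```

where the f are the PDFs of the colliding protons, dσ̂ is the perturbatively computable parton-level cross section and D is the fragmentation function for the hadron species h carrying momentum fraction z.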

In p–Pb collisions, nuclear effects are expected to modify particle production significantly compared to pp collisions, in particular at small parton fractional momentum x. The modification at low pT (~1 GeV), usually attributed to nuclear shadowing (CERN Courier March/April 2021 p19), can be parameterised by nuclear parton distribution functions (nPDFs). However, since high parton densities are reached at the LHC, the Colour Glass Condensate (CGC) framework, which predicts strong particle suppression due to saturation of the parton phase space in nuclei, is also applicable at low pT (x values as small as ~5 × 10–4). Above momenta of about 10 GeV/c, measurements in p–Pb collisions are also sensitive to the energy loss of the outgoing partons in nuclear matter.

The nuclear modification factor (RpPb), shown in the lower panel of the figure, was measured as the ratio of the cross sections in p–Pb and pp collisions, normalised by the atomic mass number. Below 10 GeV, RpPb is found to be smaller than unity, while above 10 GeV it is consistent with unity. The measurement is described by calculations over the full transverse-momentum range and provides further constraints on the nPDF parameterisations for pT below about 5 GeV. The direct comparison of the neutral-pion cross section in pp collisions at 8 TeV with the pQCD calculations shown in the upper panel of the figure reveals differences in the low-to-intermediate pT range; these, however, cancel in RpPb, since similar differences are also present for the p–Pb cross section. High-precision measurements using the large dataset from pp collisions at 13 TeV are ongoing and will provide further constraints on pQCD calculations.
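The nuclear modification factor used here is simply the per-nucleon cross-section ratio. A minimal sketch (the per-bin cross-section values below are hypothetical, purely for illustration; the real analysis of course handles binning, efficiencies and uncertainties far more carefully):

```python
# Nuclear modification factor: R_pPb = (dsigma_pPb/dpT) / (A * dsigma_pp/dpT),
# with A = 208 nucleons for lead.
A_PB = 208

def r_ppb(dsigma_ppb, dsigma_pp, a=A_PB):
    """Per-nucleon ratio of p-Pb to pp differential cross sections."""
    return dsigma_ppb / (a * dsigma_pp)

# Hypothetical per-bin values: suppression at low pT, unity at high pT.
print(r_ppb(145.6, 1.0))   # < 1 -> suppression (e.g. shadowing/saturation)
print(r_ppb(208.0, 1.0))   # = 1 -> no nuclear modification
```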

NeuTel as vibrant as ever

The IceCube observatory

The XIX International Workshop on Neutrino Telescopes (NeuTel) attracted 1000 physicists online from 18 to 26 February, under the organisation of INFN Sezione di Padova and the Department of Physics and Astronomy of the University of Padova.

The opening session featured presentations by Sheldon Lee Glashow, on the past and future of neutrino science, Carlo Rubbia, on searches for neutrino anomalies, and Barry Barish, on the present and future of gravitational-wave detection. The session was a propitious moment for IceCube principal investigator Francis Halzen to give a "heads-up" on the first observation, in the South-Pole detector, of a so-called Glashow resonance – the interaction of an electron antineutrino with an atomic electron to produce a real W boson, as the eponymous theorist predicted back in 1960. According to Glashow's calculation, the energy at which the resonance occurs depends on the mass of the W boson, which was discovered in 1983 by Rubbia and his team. 
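The resonance energy follows from requiring the antineutrino–electron centre-of-mass energy to equal the W mass; for a target electron at rest, Eν = mW²/(2me). A quick check with standard mass values:

```python
# Glashow resonance: anti-nu_e + e- -> W-.  For a target electron at rest,
# s = 2 * m_e * E_nu = m_W^2, so E_nu = m_W^2 / (2 * m_e).
m_w = 80.379e9   # W-boson mass, eV
m_e = 0.511e6    # electron mass, eV

e_nu = m_w**2 / (2 * m_e)
print(f"resonance energy ~ {e_nu/1e15:.1f} PeV")   # ~6.3 PeV
```

This PeV-scale energy is why only a cubic-kilometre instrument like IceCube can hope to see such events.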

The first edition of NeuTel saw the birth of the idea of instrumenting a large volume of Antarctic ice

The first edition of NeuTel saw the birth of the idea of instrumenting a large volume of Antarctic ice to capture high-energy neutrinos – a "Deo volente" (God willing) detector, as Halzen and collaborators then dubbed it. Thirty-three years later, as the detection of a Glashow resonance demonstrates, it is possible to calibrate the absolute energy scale of these gigantic cosmic-particle instruments precisely, and several independent proofs of the existence of high-energy cosmic neutrinos have been achieved, including first confirmations by ANTARES and Baikal-GVD.

Astrophysical models describing the connections between cosmic neutrinos, photons and cosmic rays were discussed in depth, with special emphasis on blazars, starburst galaxies and tidal-disruption events. Perspectives for future global multi-messenger observations and campaigns, including gravitational waves and networks of neutrino instruments spanning a broad range of energies, were illustrated, with core-collapse supernovae anticipated as the most promising sources. The future of astroparticle physics relies upon very large infrastructures and collaborative efforts on a planetary scale, and next-generation neutrino telescopes might follow different strategic paths. Extremely large volumes, equipped with cosmic-ray-background veto techniques and complementary radio-sensitive installations, might be the key to achieving high statistics and high-precision measurements over a large energy range, at the cost of limited sky coverage. Alternatively, a network of intermediate-scale installations like KM3NeT, distributed over the planet and based on existing or future infrastructures, might be better suited to population studies of transient phenomena. Efforts are currently under way along both paths, with a newborn project, P-ONE, exploiting existing Canadian deep-underwater research infrastructure to operate strings of photomultipliers.

T2K and NOvA did not update last summer's leptonic CP-violation results. The tension between their measurements produces counter-intuitive values when a combined fit is attempted, as discussed by Antonio Marrone of the University of Bari. The most striking example concerns the neutrino mass hierarchy: each experiment's own fit favours a normal hierarchy, but their combination, owing to the tension in the value of the CP phase, favours an inverted hierarchy.

The founder of the Borexino experiment, Gianpaolo Bellini, discussed its results, including the latest exciting measurements of the CNO cycle in the Sun. DUNE, Hyper-K and JUNO presented progress towards the realisation of these leading projects, and speakers discussed their potential for new-physics searches, astrophysical investigations and neutrino-oscillation sensitivity. The latest results of the reactor-neutrino experiment Neutrino-4, which about a year ago claimed 3.2σ evidence for an oscillation anomaly that could be induced by sterile neutrinos, were discussed in a dedicated session; both ICARUS and KATRIN presented their sensitivity to this signal with two completely different setups.

Marc Kamionkowski (Johns Hopkins University) and Silvia Galli (Institut d’Astrophysique de Paris) both provided an update on the “Hubble tension”: an approximately 4σ difference between the Hubble constant determined from angular temperature fluctuations in the cosmic microwave background (probing the expansion rate when the universe was approximately 380,000 years old) and that obtained from the recession velocities of supernovae (which provide its current value). The Hubble tension could hint at new physics modifying the thermal history of our universe, such as massive neutrinos influencing the early-time determination of the Hubble parameter.

Lectures on Accelerator Physics

Lectures on Accelerator Physics

Alex Chao, one of the leading practitioners in the field, has written an introductory textbook on accelerator physics. It is a lucid and insightful presentation of the principles behind the workings of modern accelerators, touching on a multitude of aspects, from elegant mathematical concepts and fundamental electromagnetism to charged-particle optics and the stability of charged particle beams. At the same time, numerous practical examples illustrate key concepts employed in the most advanced machines currently in operation, from high-energy colliders to free-electron lasers. 

The author is careful to keep the text rigorous yet not overloaded with formal derivations, and exhibits a keen sense for finding simple, convincing arguments to introduce the basic physics. A large number of homework problems (most of them with solutions) serve the stated aim of stimulating thinking; their variety is the fruit of extensive teaching experience. The book assumes only a basic understanding of special relativity and electromagnetism, while readers of Chinese will enjoy the occasional remarks in that language, mainly philosophical in nature (translated in most cases). The present reviewer could not help wondering about the missed punchlines. 

The discussion on “symplecticity” and Liouville’s theorem lets physics ideas stand out against the background of mathematics

Beginners and advanced students alike will find pleasure in striking derivations of basic properties of simple physical systems by dimensional analysis. Students will also find the presentation on the use of phase-space (coordinate-momentum space) concepts in classical mechanics capable of clearing the fog in their heads. In particular, an insightful presentation of transverse and longitudinal phase-space manipulation techniques provides modern-day examples of advanced designs. Furthermore, an important discussion on “symplecticity” and Liouville’s theorem – ideas that yield powerful constraints on the evolution of dynamical systems – lets physics ideas stand out against the background of formal mathematics. The discussion should help students avoid imagining typical unphysical ideas such as beams focused to infinitesimally small dimensions: the infamous “death rays” first dreamt up in the 1920s and 1930s. The treatment of the stability criteria for linear and non-linear systems, in the latter case introducing the notion of dynamical aperture (the stable region of phase space in a circular accelerator), serves as a concrete illustration of these deep and beautiful concepts of classical mechanics.

The physics of synchrotron radiation and its detailed effects on the dynamics of charged-particle beams provide the essentials for understanding the properties of lepton and future very-high-energy hadron colliders. Lectures on Accelerator Physics also describes the necessary fundamentals of accelerator-based synchrotron light sources, reaching as far as the physics principles of free-electron lasers and diffraction-limited storage rings.

A chapter on collective instability introduces some of the most important effects related to the stability of beams as multi-particle systems. A number of essential effects, including head–tail instability and the Landau damping mechanism, which play a crucial role in the operation of present and future particle accelerators and colliders, are explained with great elegance. The beginner, armed with the insights gained from these lectures, is well advised to turn to Chao’s classic 1993 text Physics of Collective Beam Instabilities in High Energy Accelerators for a more in-depth treatment of these phenomena.

This book is a veritable “All you wanted to know about accelerator physics but were afraid to ask”. It is a compilation of ideas, and can be used as a less dry companion to yet another classic compilation, in this case of formulas: the Handbook of Accelerator Physics and Engineering, edited by Chao and Maury Tigner.

LHC reinterpreters think long-term

The ATLAS, CMS and LHCb collaborations perform precise measurements of Standard Model (SM) processes and direct searches for physics beyond the Standard Model (BSM) in a vast variety of channels. Despite the multitude of BSM scenarios tested this way by the experiments, these still constitute only a small subset of the possible theories and parameter combinations to which the experiments are sensitive. The (re)interpretation of the LHC results in order to fully understand their implications for new physics has therefore become a very active field, with close theory–experiment interaction and with new computational tools and related infrastructure being developed.

From 15 to 19 February, almost 300 theorists and experimental physicists gathered for a week-long online workshop to discuss the latest developments. The topics covered ranged from advances in public software packages for reinterpretation to the provision of detailed analysis information by the experiments, from phenomenological studies to global fits, and from long-term preservation to public data.

Open likelihoods

One of the leading questions throughout the workshop was that of public likelihoods. The statistical model of an experimental analysis provides its complete mathematical description; it is essential information for determining the compatibility of the observations with theoretical predictions. In his keynote talk “Open science needs open likelihoods”, Harrison Prosper (Florida State University) explained why it is in our scientific interest to make the publication of full likelihoods routine and straightforward. The ATLAS collaboration has recently made an important step in this direction by releasing full likelihoods in a JSON format, which provides background estimates, changes under systematic variations, and observed data counts at the same fidelity as used in the experiment, as presented by Eric Schanet (LMU Munich). Matthew Feickert (University of Illinois) and colleagues gave a detailed tutorial on how to use these likelihoods with the pyhf Python package. Two public reinterpretation tools, MadAnalysis5, presented by Jack Araz (IPPP Durham), and SModelS, presented by Andre Lessa (UFABC Santo Andre), can already make use of pyhf and JSON likelihoods, and others are to follow. An alternative approach to the plain-text JSON serialisation is to encode the experimental likelihood functions in deep neural networks, as discussed by Andrea Coccaro (INFN Genova), who presented the DNNLikelihood framework. Several more contributions from CMS, LHCb and from theorists addressed the question of how to present and use likelihood information, and this will certainly stay an active topic at future workshops.
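To give a flavour of what such a published likelihood looks like, the sketch below builds a minimal workspace in the HistFactory-style JSON format and round-trips it through serialisation, as one would before handing it to a tool such as pyhf. The channel name, bin counts and uncertainties are purely illustrative and are not taken from any real analysis.

```python
import json

# Minimal sketch of a JSON likelihood workspace (illustrative numbers only).
workspace = {
    "channels": [
        {
            "name": "signal_region",
            "samples": [
                {
                    "name": "signal",
                    "data": [5.0, 3.0],  # expected signal counts per bin
                    "modifiers": [
                        # the signal-strength parameter of interest
                        {"name": "mu", "type": "normfactor", "data": None}
                    ],
                },
                {
                    "name": "background",
                    "data": [50.0, 48.0],  # background estimate per bin
                    "modifiers": [
                        # per-bin systematic variation on the background
                        {"name": "bkg_uncert", "type": "shapesys",
                         "data": [5.0, 4.8]}
                    ],
                },
            ],
        }
    ],
    "observations": [{"name": "signal_region", "data": [53.0, 51.0]}],
    "measurements": [
        {"name": "fit", "config": {"poi": "mu", "parameters": []}}
    ],
    "version": "1.0.0",
}

# Serialise as it would be published, then read it back,
# e.g. before passing it to a statistical tool such as pyhf.
serialised = json.dumps(workspace)
loaded = json.loads(serialised)
print(len(loaded["channels"]), loaded["measurements"][0]["config"]["poi"])
# → 1 mu
```

Because the full statistical model travels as plain text, it can be archived on HEPData and reused decades later without the original experiment software.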

A novelty for the Reinterpretation workshop was that the discussion was extended to experiences and best practices beyond the LHC, to see how experiments in other fields address the need for publicly released data and reusable results. This included presentations on dark-matter direct detection, the high-intensity frontier, and neutrino oscillation experiments. Supporting Prosper’s call for data reusability 40 years into the future – “for science 2061” – Eligio Lisi (INFN Bari) pointed out the challenges of reinterpreting the 1998 Super-Kamiokande data, initially published within the then-sufficient two-flavour neutrino-oscillation paradigm, in terms of contemporary three-flavour descriptions and beyond. On the astrophysics side, the LIGO and Virgo collaborations actively pursue an open-science programme. Here, Agata Trovato (APC Paris) presented the Gravitational Wave Open Science Center, giving details on the available data, on their format and on the tools to access them. An open-data policy also exists at the LHC, spearheaded by the CMS collaboration, and Edgar Carrera Jarrin (USF Quito) shared experiences from the first CMS open-data workshop.

The question of making research data findable, accessible, interoperable and reusable (“FAIR” in short) is a burning one throughout modern science. In a keynote talk, the head of the GO FAIR Foundation, Barend Mons, explained the FAIR Guiding Principles together with the technical and social aspects of FAIR data management and data reuse, using the example of COVID-19 disease modelling. There is much to be learned here for our field. 

The wrap-up session revolved around the question of how to implement the recommendations of the Reinterpretation workshop in a more systematic way. An important aspect here is the proper recognition, within the collaborations as well as the community at large, of the additional work required to this end. More rigorous citation of HEPData entries by theorists may help in this regard. Moreover, a “Reinterpretation: Auxiliary Material Presentation” (RAMP) seminar series will be launched to give more visibility and explicit recognition to the efforts of preparing and providing extensive material for reinterpretation. The first RAMP meetings took place on 9 and 23 April.
