
Mountain observatory nets PeV gamma rays

The universe seen with photons > 100 TeV

Recent years have seen rapid growth in high-energy gamma-ray astronomy, with the first measurement of TeV photons from gamma-ray bursts by the MAGIC telescope and the first detection of gamma rays with energies above 100 TeV by the HAWC observatory.

Now, the Large High Altitude Air Shower Observatory (LHAASO) in China has increased the energy scale at which the universe has been observed by a further order of magnitude. The recent LHAASO detection provides the first clear evidence of galactic “pevatrons”: sources in the Milky Way capable of accelerating protons and electrons to PeV energies. Although PeV cosmic rays are known to exist, the magnetic fields pervading the universe perturb their directions and therefore do not allow their origins to be traced. The gamma rays produced by such cosmic rays, on the other hand, point directly back to their sources.

Wide field of view
LHAASO is located in the mountains of the Sichuan province of China and offers a wide field of view to study both high-energy cosmic rays and gamma rays. Once completed, the observatory will contain a water Cherenkov detector with a total area of about 78,000 m2, 18 wide-field-of-view Cherenkov telescopes and a 1 km2 array of more than 5000 scintillator-based electromagnetic detectors (EDs). Finally, more than 1000 underground water Cherenkov tanks (the MDs) are placed over the grid to detect muons.

The latter two detectors, of which only half were finished during data-taking for this study, are used to directly detect the showers produced when high-energy particles interact with the Earth’s atmosphere. The EDs detect the shower profile and incoming angle, using charge and timing information of the detector array, while the MDs are used to distinguish hadronic showers from the electromagnetic showers produced by high-energy gamma rays. Thanks to both its large size and the MDs, LHAASO will ultimately be two orders of magnitude more sensitive at 100 TeV than the HAWC facility in Mexico, the previous most sensitive detector of this type.

The measurements reported by the Chinese-led international LHAASO collaboration reveal a total of 12 sources located across the galactic plane (see image above). This distribution is expected: gamma rays at such energies have a high cross-section for pair production with the cosmic microwave background, so the universe starts to become opaque at energies exceeding tens to hundreds of TeV, leaving only sources within our galaxy visible. Of the 12 presented sources, only the Crab nebula can be directly confirmed. This substantiates pulsar-wind nebulae as sources in which electrons are accelerated beyond PeV energies, in turn producing the gamma rays through inverse Compton scattering.
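For orientation, the onset of this opacity can be estimated from the pair-production threshold for a head-on collision with a CMB photon of typical energy ε ≈ 6 × 10⁻⁴ eV (a back-of-the-envelope estimate, not a number from the LHAASO analysis):

```latex
E_\gamma^{\rm th} \;\simeq\; \frac{(m_e c^2)^2}{\epsilon}
\;\approx\; \frac{(0.511\ \mathrm{MeV})^2}{6\times10^{-4}\ \mathrm{eV}}
\;\approx\; 4\times10^{14}\ \mathrm{eV} \;\approx\; 400\ \mathrm{TeV}.
```

Because the CMB spectrum extends to photon energies well above its typical value and extragalactic path lengths are enormous, attenuation already becomes significant at roughly 100 TeV.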

Of specific interest is the source responsible for the photon with the highest energy, 1.4 PeV

The origin of the other photons remains unknown, as the observed emission regions contain several possible sources. The sizes of the emission regions exceed the angular resolution of LHAASO, however, indicating that emission takes place over large scales. Of specific interest is the source responsible for the photon with the highest energy, 1.4 PeV. This came from a region containing both a supernova remnant and a star-forming cluster, both of which are prime theoretical candidates for hadronic pevatrons.

Tip of the iceberg
More detailed spectrometry as well as morphological measurements, in which the differences in emission intensity throughout the sources are measured, could allow the sources of > 100 TeV gamma rays to be identified in the next one or two years, say the authors. Furthermore, as the current 12 sources were visible using only one year of data from half the detector, it is clear that LHAASO is only seeing the tip of the iceberg when it comes to high-energy gamma rays.

KEK tackles neutron-lifetime puzzle

The apparatus in which neutrons from J-PARC were clocked

More than a century after its discovery, the proton remains a source of intrigue, its charge radius and spin posing puzzles that are the focus of intense study. But what of its mortal sibling, the neutron? In recent years, discrepancies between measurements of the neutron lifetime using different methods have come to constitute a puzzle with potential implications for cosmology and particle physics. The neutron lifetime determines the ratio of protons to neutrons at the beginning of big-bang nucleosynthesis and thus affects the yields of light elements, and it is also used to determine the CKM matrix element Vud in the Standard Model.

The neutron-lifetime puzzle stems from measurements using two techniques. The “bottle” method counts the number of surviving ultra-cold neutrons contained in a trap after a certain period, while the “beam” method uses the decay probability of the neutron obtained from the ratio of the decay rate to an incident neutron flux. Back in the 1990s, the methods were too imprecise for differences between the results to be a concern. Today, however, the average neutron lifetimes measured using the bottle and beam methods, 879.4 ± 0.4 s and 888.0 ± 2.0 s respectively, stand 8.6 s (or about 4σ) apart.
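The quoted significance follows directly from combining the two stated uncertainties in quadrature (a simple cross-check, not a calculation from the article):

```latex
\frac{\Delta\tau_n}{\sigma_{\rm comb}}
= \frac{888.0\ \mathrm{s} - 879.4\ \mathrm{s}}{\sqrt{(0.4\ \mathrm{s})^2 + (2.0\ \mathrm{s})^2}}
= \frac{8.6\ \mathrm{s}}{2.0\ \mathrm{s}} \approx 4.2\,\sigma .
```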

We think it will take two years to obtain a competitive result from our experiment

Kenji Mishima

In an attempt to shed light on the issue, a team at Japan’s KEK laboratory, in collaboration with Japanese universities, has developed a new experimental setup. Similar to the beam method, it compares the decay rate to the reaction rate of neutrons in a pulsed beam from the Japan Proton Accelerator Research Complex (J-PARC). The decay rate and the reaction rate are determined by simultaneously detecting electrons from neutron decay and protons from the neutron-capture reaction 3He(n,p)3H in a 1 m-long time-projection chamber containing diluted 3He, removing some of the systematic uncertainties that affect previous beam measurements. The experiment is still in its early stages, and while the first results have been released – τn = 898 ± 10 (stat) +15/–18 (syst) s – the uncertainty is currently too large to draw conclusions.
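Schematically, the lifetime in such a setup follows from the ratio of the two simultaneously recorded signals. In the notation below (illustrative, not that of the collaboration), Sβ is the number of detected decay electrons, SHe the number of capture protons, ρ the 3He number density and σ0v0 the velocity-independent product of capture cross-section and neutron velocity; detection efficiencies and backgrounds are ignored:

```latex
\frac{S_\beta}{S_{\mathrm{He}}} \;=\; \frac{1/\tau_n}{\rho\,\sigma_0 v_0}
\quad\Longrightarrow\quad
\tau_n \;=\; \frac{1}{\rho\,\sigma_0 v_0}\,\frac{S_{\mathrm{He}}}{S_\beta}.
```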

“In the current situation, it is important to verify the puzzle by experiments in which different systematic errors dominate,” says Kenji Mishima of KEK, adding that further improvements in the statistical and systematic uncertainties are underway. “We think it will take two years to obtain a competitive result from our experiment.”

Several new-physics scenarios have been proposed as solutions to the neutron-lifetime puzzle. These include exotic decay modes with a branching ratio of about 1% involving undetectable particles such as “mirror neutrons” or dark-sector particles.

Tracking the rise of pixel detectors

Pixel detectors have their roots in photography. Up until 50 years ago, every camera contained a roll of film on which images were photochemically recorded with each exposure, after which the completed roll was sent to be “developed” to finally produce eagerly awaited prints a week or so later. For decades, film also played a big part in particle tracking, with nuclear emulsions, cloud chambers and bubble chambers. The silicon chip, first unveiled to the world in 1961, was to change this picture forever.

During the past 40 years, silicon sensors have transformed particle tracking in high-energy physics experiments

By the 1970s, new designs of silicon chips were invented that consisted of a 2D array of charge-collection sites or “picture elements” (pixels) below the surface of the silicon. During the exposure time, an image focused on the surface generated electron–hole pairs via the photoelectric effect in the underlying silicon, with the electrons collected as signal information in the pixels. These chips came in two forms: the charge-coupled device (CCD) and the monolithic active pixel sensor (MAPS) – more commonly known commercially as the CMOS image sensor (CIS). Willard Boyle and George Smith of Bell Labs in the US were awarded the Nobel Prize for Physics in 2009 for inventing the CCD. 

Central and forward pixel detector

In a CCD, the charge signals are sequentially transferred to a single on-chip output circuit by applying voltage pulses to the overlying electrode array that defines the pixel structure. At the output circuit the charge is converted to a voltage signal to enable the chip to interface with external circuitry. In the case of the MAPS, each pixel has its own charge-integrating detection circuitry and a voltage signal is again sequentially read out from each by on-chip switching or “scanning” circuitry. Both architectures followed rapid development paths, and within a couple of decades had completely displaced photographic film in cameras. 
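As a toy illustration of the two readout schemes just described (a simplified sketch of the data flow only, not the actual device logic), the following contrasts the CCD’s serial charge transfer through a single output node with the MAPS-style row-by-row readout in which charge never leaves its pixel:

```python
import random

random.seed(0)
ROWS, COLS = 4, 4
image = [[random.randint(0, 9) for _ in range(COLS)] for _ in range(ROWS)]  # toy charges

def ccd_readout(charges):
    """Serial CCD readout: the bottom row is tipped into the readout register,
    which is then clocked one charge packet at a time through a single output node."""
    rows = [row[:] for row in charges]
    stream = []
    while rows:
        register = rows.pop()               # bottom row enters the readout register
        while register:
            stream.append(register.pop())   # each packet reaches the one output circuit
    return stream                           # every pixel funnels through the same node

def maps_readout(charges):
    """Rolling-shutter MAPS readout: rows are addressed in turn and each pixel's
    in-pixel circuit drives its voltage onto a column line; the charge never moves."""
    return [row[:] for row in charges]

print("CCD serial stream:", ccd_readout(image))
print("MAPS row frames  :", maps_readout(image))
```

The serial funnel in the first function is also why CCD readout is slow and why charge-transfer losses accumulate with radiation damage, points returned to below.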

For the consumer camera market, CCDs had the initial lead, which passed to MAPS by about 1995. For scientific imaging, CCDs are preferred for most astronomical applications (most recently the 3.2 Gpixel optical camera for the Vera Rubin Observatory), while MAPS are the preferred option for fast imaging such as super-resolution microscopy, cryoelectron microscopy and pioneering studies of protein dynamics at X-ray free-electron lasers. Recent CMOS imagers with very small, low-capacitance pixels achieve sufficiently low noise to detect single electrons. A third member of the family is the hybrid pixel detector, which is MAPS-like in that the signals are read out by scanning circuitry, but in which the charges are generated in a separate silicon layer that is connected, pixel by pixel, to a readout integrated circuit (ROIC). 

During the past 40 years, these devices (along with their silicon-microstrip counterparts, to be described in a later issue) have transformed particle tracking in high-energy physics experiments. The evolution of these device types is intertwined to such an extent that any attempt at historical accuracy, or who really invented what, would be beyond the capacity of this author, for which I humbly apologise. Space constraints have also led to a focus on the detectors themselves, while ignoring the exciting work in ROIC development, cooling systems, mechanical supports, not to mention the advanced software for device simulation, the simulation of physics performance, and so forth. 

CCD design inspiration

The early developments in CCD detectors were disregarded by the particle-detector community. This is because gaseous drift chambers, with a precision of around 100 μm, were thought to be adequate for all tracking applications. However, the 1974 prediction by Gaillard, Lee and Rosner that particles containing charm quarks “might have lifetimes measurable in emulsions”, followed by the discovery of charm in 1975, set the world of particle-physics instrumentation ablaze. Many groups with large budgets tried to develop or upgrade existing types of detectors to meet the challenge: bubble chambers became holographic; drift chambers and streamer chambers were pressurised; silicon microstrips became finer-pitched, etc. 

Pixel architectures

A CCD, MAPS and hybrid chip

Illustrations of a CCD (left), MAPS (middle) and hybrid chip (right). The first two typically contain 1 k × 1 k pixels, up to 4 k × 4 k or beyond by “stitching”, with an active layer thickness (depleted) of about 20 µm and a highly doped bulk layer back-thinned to around 100 µm, enabling a low-mass tracker, even potentially bent into cylinders round the beampipe. 

The CCD (where I is the imaging area, R the readout register, TG the transfer gate, CD the collection diode, and S, D, G the source, drain and gate of the sense transistor) is pixellised in the I direction by conducting gates. Signal charges are shifted in this direction by manipulating the gate voltages so that the image is shifted down, one row at a time. Charges from the bottom row are tipped into the linear readout register, within which they are transferred, all together in the orthogonal direction, towards the output node. As each signal charge reaches the output node, it modulates the voltage on the gate of the output transistor; this is sensed, and transmitted off-chip as an analog signal. 

In a MAPS chip, pixellisation is implemented by orthogonal channel stops and signal charges are sensed in-pixel by a tiny front-end transistor. Within a depth of about 1 µm below the surface, each pixel contains complex CMOS electronics. The simplest readout is “rolling shutter”, in which peripheral logic along the chip edge addresses rows in turn, and analogue signals are transmitted by column lines to peripheral logic at the bottom of the imaging area. Unlike in a CCD, the signal charges never move from their “parent” pixel. 

In the hybrid chip, like a MAPS, signals are read out by scanning circuitry. However, the charges are generated in a separate silicon layer that is connected, pixel by pixel, to a readout integrated circuit. Bump-bonding interconnection technology is used to keep up with pixel miniaturisation. 

The ACCMOR Collaboration (Amsterdam, CERN, Cracow, Munich, Oxford, RAL) had built a powerful multi-particle spectrometer, operating at CERN’s Super Proton Synchrotron, to search for hadronic production of the recently discovered charm particles, and make the first measurements of their lifetimes. We in the RAL group picked up the idea of CCDs from astronomers at the University of Cambridge, who were beginning to see deeper into space than was possible with photographic film (see left figure in “Pixel architectures” panel). The brilliant CCD developers in David Burt’s team at the EEV Company in Chelmsford (now Teledyne e2v) suggested designs that we could try for particle detection, notably to use epitaxial silicon wafers with an active-layer thickness of about 20 μm. At a collaboration meeting in Cracow in 1978, we demonstrated via simulations that just two postage-stamp-sized CCDs, placed 1 and 2 cm beyond a thin target, could cover the whole spectrometer aperture and might be able to deliver high-quality topological reconstruction of the decays of charm particles with expected lifetimes of around 10⁻¹³ s.

We still had to demonstrate that these detectors could be made efficient for particle detection. With a small telescope comprising three CCDs in the T6 beam from CERN’s Proton Synchrotron we established a hit efficiency of more than 99%, a track measurement precision of 4.5 μm in x and y, and two-track resolution of 40 μm. Nothing like this had been seen before in an electronic detector. Downstream of us, in the same week, a Yale group led by Bill Willis obtained signals from a small liquid-argon calorimeter. A bottle of champagne was shared! 

It was then a simple step to add two CCDs to the ACCMOR spectrometer and start looking for charm particles. During 1984, on the initial shift, we found our first candidate (see “First charm” figure), which, after adding the information from the downstream microstrips, drift chambers (with two large-aperture magnets for momentum measurement), plus a beautiful assembly of Cherenkov hodoscopes from the Munich group, proved to be a D⁺ → K⁻π⁺π⁺ event.

Vertex detector

It was more challenging to develop a CCD-based vertex detector for the SLAC Large Detector (SLD) at the SLAC Linear Collider (SLC), which became operational in 1989. The level of background radiation required a 25 mm-radius beam pipe, and the physics demanded large solid-angle coverage, as in all general-purpose collider detectors. The physics case for SLD had been boosted by the discovery in 1983 that the lifetime of particles containing b quarks was longer than for charm, in contrast to the theoretical expectation of being much shorter. So the case for deploying high-quality vertex detectors at SLC and LEP, which were under construction to study Z0 decays, was indeed compelling (see “Vertexing” figure). All four LEP experiments employed a silicon-microstrip vertex detector.

Early in the silicon vertex-detector programme, e2V perfected the art of “stitching” reticles limited to an area of 2 × 2 cm2, to make large CCDs (8 × 1.6 cm2 for SLD). This enabled us to make a high-performance vertex detector that operated from 1996 until SLD shut down in 1998, and which delivered a cornucopia of heavy-flavour physics from Z0 decays (see “Pioneering pixels” figure). During this time, the LEP beam pipe, limited by background to 54 mm radius, permitted its experiments’ microstrip-based vertex detectors to do pioneering b physics. But it had reduced capability for the more elusive charm, which was shorter lived and left fewer decay tracks. 

Between LEP with its much higher luminosity and SLD with its small beam pipe, state-of-the-art vertex detector and highly polarised electron beam, the study of Z0 decays yielded rich physics. Highlights included very detailed studies of an enormous sample of gluon jets from Z0 → b b̄ g events, with cleanly tagged b jets, at LEP, and Ac, the parity-violation parameter in the coupling of the Z0 to c quarks, at SLD. However, the most exciting discovery of that era was the top quark at Fermilab, in which the SVX microstrip detector of the CDF experiment played an essential part (see “Top detector” figure). This triggered a paradigm shift. Before then, vertex detectors were an “optional extra” in experiments; afterwards, they became obligatory in every energy-frontier detector system.

Hybrid devices

While CCDs pioneered the use of silicon pixels for precision tracking, their use was restricted by two serious limitations: poor radiation tolerance and long readout time (tens of ms due to the need to transfer the charge signals pixel by pixel through a single output circuit). There was clearly a need for pixel detectors in more demanding environments, and this led to the development of hybrid pixel detectors. The idea was simple: reduce the strip length of well-developed microstrip technology to equal its width, and you had your pixel sensor. However, microstrip detectors were read out at one end by ASIC (application-specific integrated circuit) chips having their channel pitch matched to that of the strips. For hybrid pixels, the ASIC readout required a front-end circuit for each pixel, resulting in modules with the sensor chip facing the readout chip, with electrical connections made by metal bump-bonds (see right figure in “Pixel architectures” panel). The use of relatively thick sensor layers (compared to CCDs) compensated for the higher node capacitance associated with the hybrid front-end circuit.

The first charm decay

Although the idea was simple, its implementation involved a long and challenging programme of engineering at the cutting edge of technology. This had begun by about 1988, when Erik Heijne and colleagues in the CERN microelectronics group had the idea to fit full nuclear-pulse processing electronics in every pixel of the readout chip, with additional circuitry such as digitisation, local memory and pattern recognition on the chip periphery. With a 3 μm feature size, they were obliged to begin with relatively large pixels (75 × 500 μm), and only about 80 transistors per pixel. They initiated the RD19 collaboration, which eventually grew to 150 participants, with many pioneering developments over a decade, leading to successful detectors in at least three experiments: WA97 in the Omega Spectrometer; NA57; and forward tracking in DELPHI. As the RD19 programme developed, the steady reduction in feature size permitted the use of in-pixel discriminators and fast shapers that enhanced the noise performance, even at high rates. This would be essential for operation of large hybrid pixel systems in harsh environments, such as ATLAS and CMS at the LHC. RD19 initiated a programme of radiation hardness by design (enclosed-gate transistors, guard rings, etc), which was further developed and broadly disseminated by the CERN microelectronics group. These design techniques are now used universally across the LHC detector systems. There is still much to be learned, and advances to a smaller feature size bring new opportunities but also surprises and challenges. 

The advantages of the hybrid approach include the ability to choose almost any commercial CMOS process and combine it with the sensor best adapted to the application. This can deliver optimal speed of parallel processing, and radiation hardness as good as can be engineered in the two component chips. The disadvantages include a complex and expensive assembly procedure, high power dissipation due to large node capacitance, and more material than is desirable for a tracking system. Thanks to the sustained efforts of many experts, an impressive collection of hybrid pixel tracking detectors has been brought to completion in a number of detector facilities. As vertex detectors, their greatest triumph has been in the inferno at the heart of ATLAS and CMS where, for example, they were key to the recent measurement of the branching ratio for H → b b̄.

Facing up to the challenge

The high-luminosity upgrade to the LHC (HL-LHC) is placing severe demands on ATLAS and CMS, none more so than developing even more powerful hybrid vertex detectors to accommodate a “pileup” level of 200 events per bunch crossing. For the sensors, a 3D variant invented by Sherwood Parker has adequate radiation hardness, and may provide a more secure option than the traditional planar pixels, but this question is still open. 3D pixels have already proved themselves in ATLAS, for the insertable B layer (IBL), where the signal charge is drifted transversally within the pixel to a narrow column of n-type silicon that runs through the thickness of the sensor. But for HL-LHC, the innermost pixels need to be at least five times smaller in area than the IBL, putting extreme pressure on the readout chip. The RD53 collaboration led by CERN has worked for years on the development of an ASIC using 65 nm feature size, which enables the huge amount of radiation-resistant electronics to fit within the pixel area, reaching the limit of 50 × 50 μm2. Assembling these delicate modules, and dealing with the thermal stresses associated with the power dissipation in the warm ASICs mechanically coupled to the cold sensor chips, is still a challenge. These pixel tracking systems (comprising five layers of barrel and forward trackers) will amount to about 6 Gpixels – seven times larger than before. Beyond the fifth layer, conditions are sufficiently relaxed that microstrip tracking will still be adequate. 

SLD vertex detector, ATLAS pixel detector and simulated tracks

The latest experiment to upgrade from strips to pixels is LHCb, which has an impressive track record of b and charm physics. Its adventurous Vertex Locator (VELO) detector has 26 disks along the beamline, equipped with orthogonally oriented r and ϕ microstrips, starting from inside the beampipe about 8 mm from the LHC beam axis. LHCb has collected the world’s largest sample of charmed hadrons, and with the VELO has made a number of world-leading measurements, including the discovery of CP violation in charm. LHCb is now statistics-limited for many rare decays and will ramp up its event samples with a major upgrade implemented in two stages (see State-of-the-art tracking for high luminosities).

For the first upgrade, due to begin operation early next year, the luminosity will increase by a factor of up to five, and the additional pattern recognition challenge will be addressed by a new pixel detector incorporating 55 μm pixels and installed even closer (5.1 mm) to the beam axis. The pixel detector uses evaporative CO2 microchannel cooling to allow operation under vacuum. LHCb will double its efficiency by removing the hardware trigger and reading out the data at the beam-crossing frequency of 40 MHz. The new “VeloPix” readout chip will achieve this with readout speeds of up to 20 Gb/s, and the software trigger will select heavy-flavour events based on full event reconstruction. For the second upgrade, due to begin in about 2032, the luminosity will be increased by a further factor of 7.5, allowing LHCb to eventually accumulate 10 times its current statistics. Under these conditions, there will be, on average, 40 interactions per beam crossing, which the collaboration plans to resolve by enhanced timing precision (around 20 ps) in the VELO pixels. The upgrade will require both an enhanced sensor and readout chip. This is an adventurous long-term R&D programme, and LHCb retain a fallback option with timing layers downstream of the VELO, if required. 

Monolithic active pixels

Being monolithic, the architecture of MAPS is very similar to that of CCDs (see middle figure in “Pixel architectures” panel). The fundamental difference is that in a CCD, the signal charge is transported physically through some centimetres of silicon to a single charge-sensing circuit in the corner of the chip, while in a MAPS the communication between the signal charge and the outside world is via in-pixel electronics, with metal tracks to the edge of the chip. The MAPS architecture looked very promising from the beginning, as a route to solving the problems of both CCDs and hybrid pixels. With respect to CCDs, the radiation tolerance could be greatly increased by sensing the signal charge within its own pixel, instead of transporting it over thousands of pixels. The readout speed could also be dramatically increased by in-pixel amplitude discrimination, followed by sparse readout of only the hit pixels. With respect to hybrid pixel modules, the expense and complications of bump-bonded assemblies could be eliminated, and the tiny node capacitance opened the possibility of much thinner active layers than were needed with hybrids.
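A minimal sketch of the sparse readout idea mentioned above, assuming a simple per-pixel threshold (the frame and threshold values are purely illustrative): only the hit pixels and their addresses ever leave the chip.

```python
def sparse_readout(frame, threshold):
    """In-pixel amplitude discrimination followed by sparse readout:
    only pixels whose signal exceeds the threshold are reported,
    as (row, column, value) hits, instead of the full frame."""
    return [(r, c, v)
            for r, row in enumerate(frame)
            for c, v in enumerate(row)
            if v > threshold]

# A mostly empty frame, as is typical in a tracker: the zero-suppressed
# hit list is far smaller than the full pixel matrix.
frame = [[0, 0, 12, 0],
         [0, 0, 0, 0],
         [3, 0, 0, 0],
         [0, 0, 0, 47]]
print(sparse_readout(frame, threshold=5))   # -> [(0, 2, 12), (3, 3, 47)]
```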

A slice through the imaging region of a stacked Sony CMOS image sensor

MAPS have emerged as an attractive option for a number of future tracking systems. They offer small pixels where needed (notably for inner-layer vertex detectors) and thin layers throughout the detector volume, thereby minimising multiple scattering and photon conversion, both in barrels and endcaps. Excess material in the forward region of tracking systems such as time-projection and drift chambers, with their heavy endplate structures, has in the past led to poor track-reconstruction efficiency, loss of tracks due to secondary interactions, and excess photon conversions. In colliders at the energy frontier (whether pp or e⁺e⁻), however, interesting events for physics are often multi-jet, so there are nearly always one or more jets in the forward region.

The first MAPS devices contained little more than a collection diode, a front-end transistor operated as a source follower, a reset transistor and addressing logic. They needed only a relaxed charge-collection time, so diffusive collection sufficed. Sherwood Parker’s group demonstrated their capability for particle tracking in 1991, with devices processed in the Center for Integrated Systems at Stanford, operating in a Fermilab test beam. In the decades since, advances in the density of CMOS digital electronics have enabled designers to pack more and more electronics into each pixel. For fast operation, the active volume below the collection diode needs to be depleted, including in the corners of the pixels, to avoid loss of tracking efficiency.

The Strasbourg group led by Marc Winter has a long and distinguished record of MAPS development. As well as highly appreciated telescopes in test beams at DESY for general use, the group supplied its MIMOSA-28 devices for the first MAPS-based vertex detector: a 356 Mpixel two-layer barrel system for the STAR experiment at Brookhaven’s Relativistic Heavy Ion Collider. Operational for a three-year physics run starting in 2014, this detector enhanced the capability to look into the quark–gluon plasma, the extremely hot form of matter that characterised the birth of the universe. 

Advances in the density of CMOS digital electronics have enabled designers to pack more and more electronics into each pixel

An ingenious MAPS variant developed by the Semiconductor Laboratory of the Max Planck Society – the Depleted P-channel FET (DEPFET) – is also serving as a high-performance vertex detector in the Belle II detector at SuperKEKB in Japan, part of which is already operating. In the DEPFET, the signal charge drifts to a “virtual gate” located in a buried channel deeper than the current flowing in the sense transistor. As Belle II pushes to even higher luminosity, it is not yet clear which technology will deliver the required radiation hardness. 

The small collection electrode of the standard MAPS pixel presents a challenge in terms of radiation hardness, since it is not easy to preserve full depletion after high levels of bulk damage. An important initiative to overcome this was initiated in 2007 by Ivan Perić of KIT, in which the collection electrode is expanded to cover most of the pixel area, below the level of the CMOS electronics, so the charge-collection path is much reduced. Impressive further developments have been made by groups at Bonn University and elsewhere. This approach has achieved high radiation resistance with the ATLASpix prototypes, for instance. However, the standard MAPS approach with small collection electrode may be tunable to achieve the required radiation resistance, while preserving the advantages of superior noise performance due to the much lower sensor capacitance. Both approaches have strong backing from talented design groups, but the eventual outcome is unclear. 

Advanced MAPS

Advanced MAPS devices were proposed for detectors at the International Linear Collider (ILC). In 2008 Konstantin Stefanov of the Open University suggested that MAPS chips could provide an overall tracking system of about 30 Gpixels with performance far beyond the baseline options at the time, which were silicon microstrips and a gaseous time-projection chamber. This development was shelved due to delays to the ILC, but the dream has become a reality in the MAPS-based tracking system for the ALICE detector at the LHC, which builds on the impressive ALPIDE chip development by Walter Snoeys and his collaborators. The ALICE ITS-2 system, with 12.5 Gpixels, sets the record for any pixel system (see ALICE tracks new territories). This beautiful tracker has operated smoothly on cosmic rays and is now being installed in the overall ALICE detector. The group is already pushing to upgrade the three central layers using wafer-scale stitching and curved sensors to significantly reduce the material budget. At the 2021 International Workshop on Future Linear Colliders held in March, the SiD concept group announced that they will switch to a MAPS-based tracking system. R&D for vertexing at the ILC is also being revived, including the possibility of CCDs making a comeback with advanced designs from the KEK group led by Yasuhiro Sugimoto.

Bert Gonzalez with the SVX microstrip vertex detector

The most ambitious goal for MAPS-based detectors is for the inner-layer barrels at ATLAS and CMS, during the second phase of the HL-LHC era, where smaller pixels would provide important advantages for physics. At the start of high-luminosity operation, these layers will be equipped with hybrid pixels of 25 × 100 μm2 and 150 μm active thickness, the pixel area being limited by the readout chip, which is based on a 65 nm technology node. Encouraging work led by the CERN ATLAS and microelectronics groups and the Bonn group is underway, and could result in a MAPS option of 25 × 25 μm2, requiring an active-layer thickness of only about 20 μm, using a 28 nm technology node. The improvement in tracking precision could be accompanied by a substantial reduction in power dissipation. The four-times greater pixel density would be more than offset by the reduction in operating voltage, plus the much smaller node capacitance. This route could provide greatly enhanced vertex detector performance at a time when the hybrid detectors will be coming to the end of their lives due to radiation damage. However, this is not yet guaranteed, and an evolution to stacked devices may be necessary. A great advantage of moving to monolithic or stacked devices is that the complex processes are then in the hands of commercial foundries that routinely turn out thousands of 12 inch wafers per week. 

High-speed and stacked

During HL-LHC operations there is a need for ultra-fast tracking devices to ameliorate the pileup problems in ATLAS, CMS and LHCb. Designs with a timing precision of tens of picoseconds are advancing rapidly – initially low-gain avalanche diodes, pioneered by groups from Torino, Barcelona and UCSC, followed by other ultra-fast silicon pixel devices. There is a growing list of applications for these devices. For example, ATLAS will have a layer adjacent to the electromagnetic calorimeter in the forward region, where the pileup problems will be severe, and where coarse granularity (~1 mm pixels) is sufficient. LHCb is more ambitious for its stage-two upgrade, as already mentioned. There are several experiments in which such detectors have potential for particle identification, notably π/K separation by time-of-flight up to a momentum limit that depends on the scale of the tracking system, typically 8 GeV/c.
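To see where a momentum limit of this order comes from, consider the time-of-flight difference between a pion and a kaon of momentum p over a flight path L (a rough relativistic estimate with an assumed 10 m lever arm; the numbers are illustrative, not taken from any experiment):

```latex
\Delta t \;\simeq\; \frac{L}{c}\,\frac{m_K^2 - m_\pi^2}{2p^2}
\;\approx\; \frac{10\ \mathrm{m}}{c}\times\frac{(0.494)^2 - (0.140)^2}{2\times 8^2}
\;\approx\; 60\ \mathrm{ps},
```

with masses and momentum in GeV. A resolution of around 20 ps therefore gives roughly a 3σ π/K separation at 8 GeV/c over this flight path, and a smaller tracking system pushes the limit to lower momenta.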

Monolithic and hybrid pixel detectors answer many of the needs for particle tracking systems now and in the future. But there remain challenges, for example the innermost layers at ATLAS and CMS. In order to deliver the required vertexing capability for efficient, cleanly separated b and charm identification, we need pixels of dimensions about 25 × 25 μm, four times below the current goals for HL-LHC. They should also be thinner, down to say 20 μm, to preserve precision for oblique tracks. 

A Fermilab/BNL stacked pixel detector

Solutions to these problems, and similar challenges in the much bigger market of X-ray imaging, are coming into view with stacked devices, in which layers of CMOS-processed silicon are stacked and interconnected. The processing technique, in which wafers are bonded face-to-face, with electrical contacts made by direct-bond interconnects and through-silicon vias, is now a mature technology and is in the hands of leading companies such as Sony and Samsung. The CMOS imaging chips for phone cameras must be one of the most spectacular examples of modern engineering (see “Up close” figure). 

Commercial CMOS image sensor development is a major growth area, with approximately 3000 patents per year. In future these developers, advancing to smaller-node chips, will add artificial intelligence, for example to take a number of frames of fast-moving subjects and deliver the best one to the user. Imagers under development for the automotive industry include those that will operate in the short-wavelength infrared region, where silicon is still sensitive. In this region, rain and fog are transparent, so a driverless car equipped with the technology will be able to travel effortlessly in the worst weather conditions. 

While we developers of pixel imagers for science have not kept up with the evolution of stacked devices, several academic groups have over the past 15 years taken brave initiatives in this direction, most impressively a Fermilab/BNL collaboration led by Ron Lipton, Ray Yarema and Grzegorz Deptuch. This work was done before the technical requirements could be serviced by a single technology node, so they had to work with a variety of pioneering companies in concert with excellent in-house facilities. Their achievements culminated in three working prototypes, two for particle tracking and one for X-ray imaging, namely a beautiful three-tier stack comprising a thick sensor (for efficient X-ray detection), an analogue tier and a digital tier (see “Stacking for physics” figure). 

Technology nodes

12 inch silicon wafers

The relatively recent term “technology node” embraces a number of aspects of commercial integrated circuit (IC) production. First and foremost is the feature size, which originally meant the minimum line width that could be produced by photolithography, for example the length of a transistor gate. With the introduction of novel transistor designs (notably the FinFET), this term has been generalised to indicate the functional density of transistors that is achievable. At the start of the silicon-tracker story, in the late 1970s, the feature size was about 3 µm. The current state-of-the-art is 5 nm, and the downward Moore’s law trend is continuing steadily, although such narrow lines would of course be far beyond the reach of photolithography.

There are other aspects of ICs that are included in the description of any technology node. One is whether they support stitching, which means the production of larger chips by step-and-repeat of reticles, enabling the production of single devices of sizes 10 × 10 cm2 and beyond, in principle up to the wafer scale (which these days is a diameter of 200 or 300 mm, evolving soon to 450 mm). Another is whether they support wafer stacking, which is the production of multi-layer sandwiches of thinned devices using various interconnect technologies such as through-silicon vias and direct-bond interconnects. A third aspect is whether they can be used for imaging devices, which implies optimised control of dark current and noise.

For particle tracking, the most advanced technology nodes are unaffordable (the development cost of a single 5 nm ASIC is typically about $500 million, so it needs a large market). However, other features that are desirable and becoming essential for our needs (imaging capability, stitching and stacking) are widely available and less expensive. For example, Global Foundries, which produces 3.5 million wafers per annum, offers these capabilities at their 32 and 14 nm nodes.

For the HL-LHC inner layers, one could imagine a stacked chip comprising a thin sensor layer (with excellent noise performance enabled by an on-chip front-end circuit for each pixel), followed by one or more logic layers. Depending on the technology node, one should be able to fit all the logic (building on the functionality of the RD53 chip) in one or two layers of 25 × 25 μm pixels. The overall thickness could be 20 μm for the imaging layer, and 6 μm per logic layer, with a bottom layer sufficiently thick (~100 μm) to give the necessary mechanical stability to the relatively large stitched chips. The resulting device would still be thin enough for a high-quality vertex detector, and the thin planar sensor-layer pixels including front-end electronics would be amenable to full depletion up to the 10-year HL-LHC radiation dose.
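Summing the layer thicknesses quoted above (assuming two logic layers) gives a feel for how little material such a stack represents:

```latex
t_{\rm total} \;\approx\; 20\ \mu\mathrm{m} + 2\times 6\ \mu\mathrm{m} + 100\ \mu\mathrm{m}
\;\approx\; 130\ \mu\mathrm{m},
```

or roughly 0.14% of a radiation length in silicon (X0 ≈ 9.4 cm), comfortably in the regime of a low-mass vertex-detector layer.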

There are groups in Japan (at KEK led by Yasuo Arai, and at RIKEN led by Takaki Hatsui) that have excellent track records for developing silicon-on-insulator devices for particle tracking and for X-ray detection, respectively. The RIKEN group is now believed to be collaborating with Sony to develop stacked devices for X-ray imaging. Given Sony’s impressive achievements in visible-light imaging, this promises to be extremely interesting. There are many applications (for example at ITER) where radiation-resistant X-ray imaging will be of crucial importance, so this is an area in which stacked devices may well own the future. 

Outlook 

The story of frontier pixel detectors is a bit like that of an art form – say cubism. With well-defined beginnings 50 years ago, it has blossomed into a vast array of beautiful creations. The international community of designers see few boundaries to their art, being sustained by the availability of stitched devices to cover large-area tracking systems, and moving into the third dimension to create the most advanced pixels, which are obligatory for some exciting physics goals. 

Face-to-face wafer bonding is now a commercially mature technology

Just like the attribute of vision in the natural world, which started as a microscopic light-sensitive spot on the surface of a unicellular protozoan, and eventually reached one of its many pinnacles in the eye of an eagle, with its amazing “stacked” data processing behind the retina, silicon pixel devices are guaranteed to continue evolving to meet the diverse needs of science and technology. Will they one day be swept away, like photographic film or bubble chambers? This seems unthinkable at present, but history shows there’s always room for a new idea. 

‘A CERN for climate change’

Climate models

In the early 1950s, particle accelerators were national-level activities. It soon became obvious that to advance the field further demanded machines beyond the capabilities of single countries. CERN marked a phase transition in this respect, enabling physicists to cooperate around the development of one big facility. Climate science stands to similarly benefit from a change in its topology.

Modern climate models were developed in the 1960s, but there weren’t any clear applications or policy objectives at that time. Today we need hard numbers about how the climate is changing, and an ability to seamlessly link these changes to applications – a planetary information system for assessing hazards, planning food security, aiding global commerce, guiding infrastructural investments, and much more. National centres for climate modelling exist in many countries. But we need a centre “on steroids”: a dedicated exascale computing facility organised on a similar basis to CERN that would allow the necessary leap in realism.

Quantifying climate

To be computationally manageable, existing climate models solve equations for quantities that are first aggregated over large spatial and temporal scales. This blurs their relationship to physical laws, to phenomena we can measure, and to the impacts of a changing climate on infrastructure. Clouds, for example, are creatures of circulation, particularly vertical air currents. Existing models attempt to infer what these air currents would be given information about much larger scale 2D motion fields. There is a necessary degree of abstraction, which leads to less useful results. We don’t know if air is going up or down an individual mountain, for instance, because we don’t have individual mountains in the model, at best mountain ranges. 

Tim Palmer

In addition to more physical models, we also need a much better quantification of model uncertainty. At present this is estimated by comparing solutions across many low-resolution models, or by perturbing parameters of a given low-resolution model. The particle-physics analogy might be that everyone runs their own low-energy accelerators hoping that coordinated experiments will provide high-energy insights. Concentrating efforts on a few high-resolution climate models, where uncertainty is encoded through stochastic mathematics, is a high-energy effort. It would result in better and more useful models, and open the door to cooperative efforts to systematically explore the structural stability of the climate system and its implications for future climate projections.

Working out climate-science’s version of the Standard Model thus provides the intellectual underpinnings for a “CERN for climate change”. One can and should argue about the exact form such a centre should take, whether it be a single facility or a federation of campuses, and on the relative weight it gives to particular questions. What is important is that it creates a framework for European climate, computer and computational scientists to cooperate, also with application communities, in ways that deliver the maximum benefit for society.

Building momentum

A number of us have been arguing for such a facility for more than a decade. The idea seems to be catching on, less for the eloquence of our arguments, more for the promise of exascale computing. A facility to accelerate climate research in developing and developed countries alike has emerged as a core element of one of 12 briefing documents prepared by the Royal Society in advance of the United Nations Climate Change Conference, COP26, in November. This briefing flanks the European Union’s “Destination Earth” project, which is part of its Green Deal programme – a €1 billion effort over 10 years that envisions the development of improved high-resolution models with better quantified uncertainty. If not anchored in a sustainable organisational concept, however, this risks throwing money to the wind.

Bjorn Stevens

Giving a concrete form to such a facility still faces internal hurdles, possibly similar to those faced by CERN in its early days. For example, there are concerns that it will take away funding from existing centres. We believe, and CERN’s own experience shows, that the opposite is more likely true. A “CERN for climate change” would advance the frontiers of the science, freeing researchers to turn their attention to new questions, rather than maintaining old models, and provide an engine for European innovation that extends far beyond climate change.

Exploring the Hubble tension

Licia Verde

Did you always want to be a cosmologist?

One day, around the time I started properly reading, somebody gave me a book about the sky, and I found it fascinating to think about what’s beyond the clouds and beyond where the planes and the birds fly. I didn’t know that you could actually make a living doing this kind of thing. At that age, you don’t know what a cosmologist is, unless you happen to meet one and ask what they do. You are just fascinated by questions like “how does it work?” and “how do you know?”.

Was there a point at which you decided to focus on theory?

Not really, and I still think I’m somewhat in-between, in the sense that I like to interpret data and am plugged-in to observational collaborations. I try to make connections to what the data mean in light of theory. You could say that I am a theoretical experimentalist. I made a point to actually go and serve at a telescope a couple of times, but you wouldn’t want to trust me in handling all of the nitty-gritty detail, or to move the instrument around. 

What are your research interests?

I have several different research projects, spanning large-scale structure, dark energy, inflation and the cosmic microwave background. But there is a common philosophy: I like to ask how much can we learn about the universe in a way that is as robust as possible, where robust means as close as possible to the truth, even if we have to accept large error bars. In cosmology, everything we interpret is always in light of a theory, and theories are always at some level “spherical cows” – they are approximations. So, imagine we are missing something: how do I know I am missing it? It sounds vague, but I think the field of cosmology is ready to ask these questions because we are swimming in data, drowning in data, or soon will be, and the statistical error bars are shrinking. 

This explains your current interest in the Hubble constant. What do you define as the Hubble tension? 

Yes, indeed. When I was a PhD student, knowing the Hubble constant at the 40–50% level was great. Now, we are declaring a crisis in cosmology because there is a discrepancy at the very-few-percent level. The Hubble tension is certainly one of the most intriguing problems in cosmology today. Local measurements of the current expansion rate of the universe, for example based on supernovae as standard candles, which do not rely heavily on assumptions about cosmological models, give values that cluster around 73 km s⁻¹ Mpc⁻¹. Then there is another, indirect route to measuring what we believe is the same quantity, but only within a model – the lambda-cold-dark-matter (ΛCDM) model – which involves looking at the baby universe via the cosmic microwave background (CMB). When we look at the CMB, we don’t measure recession velocities; rather, we interpret a parameter within the model as the expansion rate of the universe. The ΛCDM model is extremely successful, but the value of the Hubble constant using this method comes out at around 67 km s⁻¹ Mpc⁻¹, and the discrepancy with local measurements is now 4σ or more.
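For a rough sense of scale, taking representative published uncertainties of about ±1.3 km s⁻¹ Mpc⁻¹ for the local value and ±0.5 km s⁻¹ Mpc⁻¹ for the CMB-inferred one (illustrative numbers, not quoted in this interview), the tension works out as:

```latex
\frac{H_0^{\rm local} - H_0^{\rm CMB}}{\sqrt{\sigma_{\rm local}^2 + \sigma_{\rm CMB}^2}}
\;\approx\; \frac{(73 - 67)\ \mathrm{km\,s^{-1}\,Mpc^{-1}}}{\sqrt{1.3^2 + 0.5^2}\ \mathrm{km\,s^{-1}\,Mpc^{-1}}}
\;\approx\; 4\,\sigma .
```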

What are the implications if this tension cannot be explained by systematic errors or some other misunderstanding of the data?

The Hubble constant is the only cosmological parameter in the ΛCDM universe that can be measured both directly locally and from classical cosmological observations such as the CMB, baryon acoustic oscillations, supernovae and big-bang nucleosynthesis. It’s also easy to understand what it is, and the error bars are becoming small enough that it is really becoming make-or-break for the ΛCDM model. The Hubble tension made everybody wake up. But before we throw the model out of the window, we need something more.

How much faith do you put in the ΛCDM model compared to, say, the Standard Model of particle physics?

It is a model that has only six parameters, most of them constrained at the percent level, and it explains most of the observations that we have of the universe. In the case of Λ, which quantifies what we call dark energy, we have many orders of magnitude between theory and experiment to understand, and for dark matter we are yet to find a candidate particle. Otherwise, it does connect to fundamental physics and has been extremely successful. For 20 years we have been riding a wave of confirmation of the ΛCDM model, so we need to ask ourselves: if we are going to throw it out, what do we substitute it with? The first thing is to take small steps away from the model, say by adding one parameter. For a while, you could say that maybe there is something like an effective neutrino species that might fix it, but a solution like this doesn’t quite fit the CMB data any more. I think the community may be split 50/50 between being almost ready to throw the model out and continuing to work with it, because we have nothing better to use.

It is really becoming make-or-break for the ΛCDM model

Could it be that general relativity (GR) needs to be modified? 

Perhaps, but where do we modify it? People have tried to tweak GR at early times, but it messes around with the observations and creates a bigger problem than we already have. So, let’s say we modify it at intermediate times – we still need it to describe the shape of the expansion history of the universe, which is close to ΛCDM. Or we could modify it locally. We’ve tested GR at the solar-system scale, and the accuracy of GPS is a vivid illustration of its effectiveness at a planetary scale. So, we’d need to modify it very close to where we are, and I don’t know if there are modifications on the market that pass all of the observational tests. It could also be that the cosmological constant changes value as the universe evolves, in which case the form of the expansion history would not be the one of ΛCDM. There is some wiggle room here, but changing Λ within the error bars is not enough to fix the mismatch. Basically, there is such a good agreement between the ΛCDM model and the observations that you can only tinker so much. We’ve tried to put “epicycles” everywhere we could, and so far we haven’t found anything that actually fixes it.

What about possible sources of experimental error?

Systematics are always unknowns that may be there, but the level of sophistication of the analyses suggests that if there was something major then it would have come up. People do a lot of internal consistency checks, so it is becoming increasingly unlikely that the tension is only due to dumb systematics. The big change over the past two years or so is that you typically now have different data sets that give you the same answer. It doesn’t mean that both can’t be wrong, but it becomes increasingly unlikely. For a while people were saying maybe there is a problem with the CMB data, but now we have taken those data out of the equation completely and there are different lines of evidence that give a local value hovering around 73 km s⁻¹ Mpc⁻¹, although it’s true that the truly independent ones are in the range 70–73 km s⁻¹ Mpc⁻¹. A lot of the data for local measurements have been made public, and although it’s not a very glamorous job to take someone else’s data and re-do the analysis, it’s very important.

Is there a way to categorise the very large number of models vying to explain the Hubble tension?

Values of the Hubble constant

Until very recently, the proposed models could be grouped into early-time versus late-time solutions. But if this is really the case, then the tension should show up in other observables, specifically the matter density and age of the universe, because it’s a very constrained system. Perhaps there is some global solution, so a little change here and a little in the middle, and a little there … and everything would come together. But that would be rather unsatisfactory because you can’t point your finger at what the problem was. Or maybe it’s something very, very local – then it is not a question of cosmology, but of whether the value of the Hubble constant we measure here is a global value at all. I don’t know how to choose between these possibilities, but the way the observations are going makes me wonder if I should start thinking in that direction. I am trying to be as model-agnostic as possible. Firstly, there are many other people who are thinking in terms of models, and they are doing a wonderful job. Secondly, I don’t want to be biased. Instead I am trying to see if I can think one step removed from a particular model or parameterisation, which is very difficult.

What are the prospects for more precise measurements?

For the CMB, we have the CMB-S4 proposal and the Simons Array. These experiments won’t make a huge difference to the precision of the primary temperature-fluctuation measurements, but they will be useful for disentangling possible solutions that have been proposed because they will focus on the polarisation of the CMB photons. As for the local measurements, the Dark Energy Spectroscopic Instrument, which started observations in May, will measure baryon acoustic oscillations at the level of galaxies to further nail down the expansion history of the low-redshift universe. However, it will not help at the level of local measurements, which are being pursued instead by the SH0ES collaboration. There is also another programme in Chicago focusing on the so-called tip of the red-giant-branch technique, with more results to come out. Observations of multiple images from strong gravitational lensing are another promising avenue that is being very actively pursued, and, if we are lucky, gravitational waves with optical counterparts will bring in another important piece of the puzzle.

If we are lucky, gravitational waves with optical counterparts will bring in another important piece of the puzzle

How do we measure the Hubble constant from gravitational waves?

It’s a beautiful measurement, as you can get a distance measurement without having to build a cosmic distance ladder, which is the case with the other local measurements that build distances via Cepheids, supernovae, etc. The recession velocity of the GW source comes from the optical counterpart and its redshift. The detection of the GW170817 event enabled researchers to estimate the Hubble constant to be 70 km s⁻¹ Mpc⁻¹, for example, but the uncertainties using this novel method are still very large, in the region of 10%. A particular source of uncertainty comes from the orientation of the gravitational-wave source with respect to Earth, but this will come down as the number of events increases. So this route provides a completely different window on the Hubble tension. Gravitational waves have been dubbed, rather poetically, “standard sirens”. When these determinations of the Hubble constant become competitive with existing measurements really depends on how many events are out there. Upgrades to LIGO and Virgo, plus next-generation gravitational-wave observatories, will help in this regard, but what if the measurements end up clustering between or beyond the late- and early-time measurements? Then we really have to scratch our heads!
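Schematically, for a nearby event the Hubble constant follows directly from the two independently measured quantities (a low-redshift approximation, ignoring peculiar velocities):

```latex
H_0 \;\simeq\; \frac{v_{\rm H}}{d_L} \;\simeq\; \frac{c\,z}{d_L},
```

where dL is the luminosity distance inferred from the gravitational-wave amplitude – the “standard siren” – and z is the redshift of the optical counterpart’s host galaxy.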

How can results from particle physics help? 

Principally, if we learn something about dark matter, it could force us to reconsider our entire way of fitting the observations, perhaps in a way that we haven’t thought of, because dark matter may be hot rather than cold, or something else that interacts in completely different ways. Neutrinos are another possibility. There are models where neutrinos don’t behave as they do in the Standard Model yet still fit the CMB observations. Before the Hubble tension came along, the hope was to say that we have this wonderful model of cosmology that fits really well and implies that we live in a maximally boring universe. Then we could have used that to eventually make the connection to particle physics, say, by constraining neutrino masses or the temperature of dark matter. But if we don’t live in a maximally boring universe, we have to be careful about playing this game, because the universe could be much, much more interesting than we assumed.

Gerd-Jürgen Beyer 1940–2021

Gerd Beyer

Gerd Beyer, who passed away on 20 January aged 81, played a major role in the development of biomedical research, both at CERN’s ISOLDE facility and at many other laboratories. He will be remembered as a tireless worker in the field of nuclear and applied nuclear physics combined with new radiochemical methods.

Gerd was born in Berlin in 1940 and studied radiochemistry at the Technical University of Dresden (TUD). He then joined the Joint Institute for Nuclear Research (JINR) in Dubna, where he developed advanced production methods of rare short-lived radioisotopes for use in nuclear spectroscopy. At the Central Institute for Nuclear Research in Rossendorf, he became proficient in the use of the U-120 cyclotron and the RFR research reactor to produce medical radioisotopes, and in the development of the associated radiopharmaceuticals. He completed his Dr. habil. at TUD on the production of radionuclides by means of rapid radiochemical methods in combination with mass separation.

In 1971 Gerd was invited to ISOLDE, joining Helge Ravn to prepare extremely pure samples of rare long-lived nuclei for studies of their electron-capture decay, in view of their potential for determining neutrino masses. Back in Rossendorf, he continued to develop radiopharmaceuticals and to introduce them into nuclear medicine in the former East Germany and the Eastern Bloc countries. He developed a number of new methods for labelling and synthesising radiopharmaceuticals, in particular solving the rather difficult problem of efficiently separating fission-produced 99Mo from large samples of low-enriched uranium. This brought him into many collaborations all over the world, with a view to transferring his know-how to other laboratories. As head of cyclotron radiopharmaceuticals, he took the initiative to introduce a PET-scanner programme in the German Democratic Republic (GDR), based on the Rossendorf positron camera, using gas detectors derived from pioneering work at CERN.

During his visits to CERN, Gerd spotted the potential of the ISOLDE mass-separation technique to allow the introduction and use of better-suited but hitherto unavailable nuclides.

In 1985, in close collaboration with ISOLDE, he began to prepare for the future use of large facilities to produce such radionuclides. He reactivated ISOLDE’s contacts with the University Hospital of Geneva (HUG), starting a collaboration on the use of exotic positron-emitting nuclides for PET imaging, which resulted in the development of new radiopharmaceuticals based on radionuclides of the rare earths and actinides.

Shortly after the fall of the GDR, Gerd lost his job at Rossendorf and had to start a new career elsewhere. Via a CERN scientific associateship, he became a guest professor at HUG and, later, head of its radiochemistry group, with responsibility for setting up and operating a new cyclotron. This allowed him to continue his work on developing new approaches to labelling monoclonal antibodies and peptides with exotic lanthanide positron emitters produced at ISOLDE, determining their in vivo stability and demonstrating their promising imaging properties. Gerd was also the first to demonstrate the promising therapeutic properties of the alpha emitter 149Tb.

When he retired from HUG, Gerd co-proposed that CERN build a new radiochemical laboratory in connection with ISOLDE. Here, the large knowledge base on target and mass-separator techniques for the production and handling of radionuclides could be used to make samples of these high-purity nuclides available for use in a broader biomedical research programme. Years later, Gerd’s initial idea was eventually realised with the creation of the CERN-MEDICIS facility.

Gerd was a first-rate experimental scientist, highly skilled in the laboratory, and he stayed professionally active to the very end. As a guest professor, a member of numerous professional societies and a holder of many consultancy positions, he spared no effort in sharing and transferring his know-how, recently to the young generation of scientists at MEDICIS. 

During Gerd’s outstanding career, his work on the production of radiopharmaceuticals saved innumerable lives. His R&D towards new radiopharmaceuticals and, in particular, his pioneering work on 149Tb for targeted alpha therapy, is opening up new perspectives for efficient cancer treatment. It is therefore particularly tragic that the development of efficient antiviral drugs came too late to support Gerd in his brave fight against COVID-19.

Hadron formation differs outside of jets

Figure 1

The production of different types of hadrons provides insights into one of the most fundamental transitions in nature – the “hadronisation” of highly energetic partons into hadrons with confined colour charge. To understand how this transition takes place we have to rely on measurements, and measurement-driven modelling. This is because the strong interaction processes that govern hadronisation are characterised by a scale given by the typical size of hadrons – about 1 fm – and cannot be calculated with perturbative techniques. The ALICE collaboration has recently performed a novel study of hadronisation by comparing the production of strange neutral baryons and mesons inside and outside of charged-particle jets.

One of the ways to contrast baryon and meson production is to analyse the ratio of their momentum distributions. This has been done in most collision systems, but the comparison is particularly interesting in heavy-ion collisions, where a large baryon-to-meson enhancement is often referred to as the “baryon anomaly”. A characteristic maximum at intermediate transverse momenta (1–5 GeV) is found in all systems, but in Pb–Pb collisions the ratio is strongly increased, to the extent that it exceeds unity, implying the production of more baryons than mesons. The rise of the ratio has been attributed either to hadron formation through the recombination of two or three quarks, or to the migration of the heavier baryons to higher momenta by the strong all-particle “radial” flow associated with the production and expansion of a quark–gluon plasma.
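As a rough illustration of how such a ratio is built (a generic sketch, not ALICE analysis code; the mock spectra and binning are assumptions), one histograms the baryon and meson transverse-momentum spectra and divides them bin by bin.

```python
# Generic sketch of a baryon-to-meson ratio versus transverse momentum.
# The mock pT spectra and the binning are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
pt_baryon = rng.exponential(scale=1.2, size=50_000)    # mock Lambda pT values (GeV)
pt_meson  = rng.exponential(scale=1.0, size=100_000)   # mock K0S pT values (GeV)

bins = np.linspace(0.0, 10.0, 21)                      # 0.5 GeV-wide pT bins
n_baryon, _ = np.histogram(pt_baryon, bins=bins)
n_meson, _  = np.histogram(pt_meson, bins=bins)

# Divide bin by bin, skipping empty meson bins to avoid division by zero.
ratio = np.divide(n_baryon, n_meson,
                  out=np.zeros(len(n_baryon)), where=n_meson > 0)
centres = 0.5 * (bins[:-1] + bins[1:])
for c, r in zip(centres, ratio):
    print(f"pT = {c:5.2f} GeV : baryon/meson = {r:.2f}")
```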

A recent result adds an extra twist to the study of strange baryons and mesons

The ALICE collaboration has studied baryon-to-meson ratios extensively. A recent result adds an extra twist to the study of strange baryons and mesons by studying the ratios in two parts of the events separately – inside jets and in the portion of the event perpendicular to a jet cone. This allows physicists to look “under the peak” to reveal more about its origin. The latest study focuses on the neutral and weakly decaying Λ baryon and K0S meson – particles often known collectively as V0s due to their decay products forming a “V” within the detector. The ALICE detector can reconstruct these decaying particles reliably, even at high momenta, via an invariant-mass analysis using the charged-particle tracks seen in the detectors.
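The invariant-mass technique itself is straightforward: for two daughter tracks with measured momenta and assumed particle masses, the candidate mass follows from m^2 = (E1 + E2)^2 – |p1 + p2|^2. The sketch below is a generic illustration of that formula (not ALICE reconstruction software), with hypothetical daughter momenta.

```python
# Generic sketch of a V0 invariant-mass calculation from two daughter tracks.
# Not ALICE software; masses in GeV, momenta as (px, py, pz) in GeV.
import numpy as np

M_PION = 0.13957    # charged-pion mass (GeV)
M_PROTON = 0.93827  # proton mass (GeV)

def invariant_mass(p1, m1, p2, m2):
    """m^2 = (E1 + E2)^2 - |p1 + p2|^2 under the mass hypotheses m1, m2."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    e1 = np.sqrt(p1 @ p1 + m1 ** 2)
    e2 = np.sqrt(p2 @ p2 + m2 ** 2)
    p_tot = p1 + p2
    return np.sqrt((e1 + e2) ** 2 - p_tot @ p_tot)

# Hypothetical daughter momenta of one V0 candidate (GeV):
pos_track = (0.9, 0.1, 0.3)
neg_track = (0.4, -0.05, 0.1)

m_k0s = invariant_mass(pos_track, M_PION, neg_track, M_PION)      # K0S -> pi+ pi-
m_lam = invariant_mass(pos_track, M_PROTON, neg_track, M_PION)    # Lambda -> p pi-
print(f"K0S hypothesis: {m_k0s:.3f} GeV, Lambda hypothesis: {m_lam:.3f} GeV")
```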

The particles associated with the jets show the typical ratio known from the high-momentum tail of the inclusive baryon-to-meson distribution – essentially no enhancement – and similar values were found in both pp and p–Pb collisions, consistent with simulations of hard pp collisions using PYTHIA 8 (see figure 1). By contrast, the particles found away from jets do indeed show a baryon-to-meson enhancement that qualitatively resembles the observations in Pb–Pb collisions. The new study clarifies that the strong rise of the ratio is associated with the soft part of the events (regions where no jet with pT above 10 GeV is produced) and provides the first quantitative guidance for modelling the baryon-to-meson enhancement with an additional important constraint – the absence of a jet. Moreover, the finding that the “within-jet” ratio is similar in pp and p–Pb collisions, while the “out-of-jet” ratio shows larger values in p–Pb than in pp collisions, gives even more to ponder about the possible origin of the effect in relation to an expanding strongly interacting system. Future measurements involving multi-strange baryons may shed further light on this question.
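The event-partitioning logic can be illustrated in a few lines: particles within an angular distance R of the reconstructed jet axis are tagged as “in-jet”, while a cone of the same size rotated by 90° in azimuth samples the jet-free part of the event. This is a generic sketch under assumed values (the cone radius R = 0.4 and the example particles are illustrative), not the collaboration’s actual selection.

```python
# Sketch of an in-jet versus perpendicular-cone selection in (eta, phi).
# The cone radius and the example particles are illustrative assumptions.
import numpy as np

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance in the (eta, phi) plane, with phi wrapped to [-pi, pi)."""
    dphi = (phi1 - phi2 + np.pi) % (2 * np.pi) - np.pi
    return np.hypot(eta1 - eta2, dphi)

def classify(particles, jet_eta, jet_phi, radius=0.4):
    """Split (eta, phi) particles into the jet cone and a cone rotated by
    90 degrees in azimuth, used as the jet-free reference region."""
    perp_phi = jet_phi + np.pi / 2
    in_jet, perp_cone = [], []
    for eta, phi in particles:
        if delta_r(eta, phi, jet_eta, jet_phi) < radius:
            in_jet.append((eta, phi))
        elif delta_r(eta, phi, jet_eta, perp_phi) < radius:
            perp_cone.append((eta, phi))
    return in_jet, perp_cone

# Purely illustrative usage: one jet axis and three particles.
particles = [(0.10, 0.20), (0.05, 1.80), (1.50, -2.00)]
in_jet, perp = classify(particles, jet_eta=0.0, jet_phi=0.3)
print(len(in_jet), "in-jet,", len(perp), "in the perpendicular cone")
```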

Poland marks 30 years at CERN

Rising up

When CERN was established in the 1950s, with the aim of bringing European countries together to collaborate in scientific research after the Second World War, countries from both Eastern and Western Europe were invited to join. At the time, the only eastern country to take up the call was Yugoslavia. Poland’s accession to CERN membership in 1991 was therefore a particularly significant moment in the organisation’s history, because Poland was the first country from behind the former Iron Curtain to join CERN. Its example was soon followed by a range of Eastern European countries throughout the 1990s.

At the origin of Polish participation at CERN was the vision of three world-class physicists: Marian Danysz and Jerzy Pniewski from Warsaw, and Marian Mięsowicz from Kraków, who made their first contacts with CERN in the early 1960s. The major domains of Polish expertise around that time encompassed the analysis of bubble-chamber data (especially those related to high-multiplicity interactions), the properties of strange hadrons, charm production, and the construction of gaseous detectors.

In 1963, Poland gained observer status at the CERN Council – the first country from Eastern Europe to do so. During the subsequent 25 years, almost out of nothing, a critical mass of national scientific groups collaborating with CERN on an everyday basis was established. By the late 1980s, the CERN community recognised that Poles deserved full access to CERN. With the feedback and support of their numerous brilliant pupils, Danysz, Pniewski and Mięsowicz had accomplished a goal which had seemed impossible. Today, Poland’s red and white flag graces the membership rosters of all four major Large Hadron Collider (LHC) experiments and beyond.

Poland30 Wired in

Entering the fray
Poland joined CERN two years after the start-up of the Large Electron Positron Collider (LEP), the forerunner to the LHC. Having already made strong contributions to the construction of LEP’s DELPHI experiment, in particular its silicon vertex detector, electromagnetic calorimeter and RICH detectors, Polish researchers quickly became involved in DELPHI data analyses, including studies of the properties of beauty baryons and searches for supersymmetric particles.

Poland’s accession to CERN membership 30 years ago was the very first case of the return of our nation to European structures

With the advent of the LHC era, Poles became members of all four major LHC-experiment collaborations. In ALICE we are proud of our broad contribution to the study of the quark–gluon plasma using HBT interferometry and electromagnetic probes, and of our participation in the design of, and software development for, the ALICE time projection chamber. Polish contributions to the ATLAS collaboration encompass not only numerous software and hardware activities (the latter concerning the inner detector and trigger), but also data analyses, notably searches for new physics in the Higgs sector, studies of soft and elastic hadron interactions and a central role in the heavy-ion programme. Involvement in CMS has revolved around the experiment’s muon-detection system, studies of Higgs-boson production and its decays to tau leptons, W+W interactions and searches for exotic, in particular long-lived, particles. This activity is complemented by software development and coordination of physics analysis for the TOTEM experiment. Last but not least, Polish groups in LHCb have taken on important hardware responsibilities for various subdetectors (including the VELO, RICH and high-level trigger), together with studies of b → s transitions, measurements of the angle γ of the CKM matrix and searches for CPT violation, to name but a few.

Beyond colliders
The scope of our research at CERN was never limited to LEP and the LHC. In particular, Polish researchers comprise almost one third of collaborators on the fixed-target experiment NA61/SHINE, where they are involved across the experiment’s strong-interactions programme. Indeed, since the late 1970s, Poles have actively participated in the whole series of deep-inelastic scattering experiments at CERN: EMC, NMC, SMC, COMPASS and recently AMBER. Devoted to studies of different aspects of the partonic structure of the nucleon, these experiments have resulted in spectacular discoveries, including the EMC effect, nuclear shadowing, the proton “spin puzzle”, and 3D imaging of the nucleon.

Poland30 Hands on

Polish researchers have also contributed with great success to studies at CERN’s ISOLDE facility. One of the most important achievements was to establish the coexistence of various nuclear shapes, including octupole shapes, at low excitation energy in radon, radium and mercury nuclei, using the Coulomb-excitation technique. Polish involvement in CERN neutrino experiments started with the BEBC bubble chamber, followed by the CERN Dortmund Heidelberg Saclay Warsaw (CDHSW) experiment and, more recently, participation in the ICARUS experiment and the T2K near detector as part of the CERN Neutrino Platform. In parallel, we take part in preparations for future CERN projects, including the proposed Future Circular Collider and Compact Linear Collider. In terms of theoretical research, Polish researchers are renowned for the phenomenological description of strong interactions and also play a crucial role in the development of Monte Carlo software packages. In computing more generally, Poland was the regional leader in implementing the grid-computing platform.

The past three decades have brought a few-fold increase in the number of Polish engineers and technicians involved in accelerator science. Polish experts contributed significantly to the construction of the LHC, and subsequently to services such as the electrical quality assurance of its superconducting circuits during consecutive long shutdowns. Detector R&D is also a strong activity of Polish engineers and technicians, for example via membership of CERN’s RD51 collaboration, which exists to advance the development and application of micropattern gas detectors. These activities take place in close cooperation with national industry, concentrated around cryogenic applications. Growing local expertise in accelerator science also led to the establishment of Poland’s first hadron-therapy centre, located at the Institute of Nuclear Physics PAN in Kraków.

Poland@CERN 2019 saw over 20 companies and institutions represented by around 60 participants take part in more than 120 networking meetings

Collaboration between CERN and Polish industry was initiated by Maciej Chorowski, and there are numerous examples. One is the purchase of vacuum vessels manufactured by CHEMAR in Kielce and RAFAKO in Racibórz, and parts of cryostats from METALCHEM in Kościan. Industrial supplies for CERN were also provided by KrioSystem in Wrocław and Turbotech in Płock, including elements of cryostats for testing prototype superconducting magnets for the LHC. CERN also operates devices manufactured by the ZPAS company in Wolibórz, while the Polish company ZEC Service has been awarded CMS Gold awards for the delivery and assembly of cooling installations. Creotech Instruments – a company established by a physicist and two engineers who met at CERN – is a regular manufacturer of electronics for CERN and enjoys a strong collaboration with CERN’s engineering teams. Polish companies also transfer technology from CERN to industry, such as TECHTRA in Wrocław, which obtained a licence from CERN for the production and commercialisation of GEM (gas electron multiplier) foils. Deliveries to CERN are also carried out, inter alia, by FORMAT, Softcom and Zakład Produkcji Doświadczalnej CEBEA from Bochnia. At the most recent exhibition of Polish industry at CERN, Poland@CERN 2019, over 20 companies and institutions represented by around 60 participants took part in more than 120 networking meetings.

Poland30 Schools out

Societal impact
CERN membership has so far enabled around 550 Polish teachers to visit the lab, each returning to their schools with enhanced knowledge and enthusiasm to pass on to younger generations. Poland ranks sixth in Europe in terms of participation in particle-physics masterclasses, and at least 10 PhD theses based on CERN research are defended in Poland annually. Over the past 30 years, CERN has also become a second home for some 560 technical, doctoral or administrative students and 180 summer students, while Polish nationals have taken up approximately 150 staff positions and 320 fellowships.

Some have taken important positions at CERN. Agnieszka Zalewska was chair of the CERN Council from 2013 to 2015, Ewa Rondio was a member of CERN’s directorate in 2009–2010 and Michał Turała chaired the electronics-and-computing-for-physics division in 1995–1998. Several of our colleagues have also been elected to CERN bodies such as the Scientific Policy Committee. Our national community at CERN is well integrated, and likes to spend time together outside working hours, in particular on mountain hikes and at summer picnics.

Poland’s accession to CERN membership 30 years ago was the very first case of the return of our nation to European structures, preceding the European Union and NATO. Poland joined the European Synchrotron Radiation Facility in 2004, the Institut Laue-Langevin in 2006 and the European Space Agency in 2012. It was also a founding member of the European Spallation Source and the Facility for Antiproton and Ion Research, and is a partner of the European X-ray Free-Electron Laser.

Today, six research institutes and 11 university departments located in eight major Polish cities are focused on high-energy physics. Among the domestic projects that have benefitted from CERN technology transfer are the Jagiellonian PET detector, which is exploring the use of inexpensive plastic scintillators for whole-body PET imaging, and the development of electron linacs for radiotherapy and cargo scanning at the National Centre for Nuclear Research in Świerk, near Warsaw.

During the past few years, thanks to closer alignment between participation in CERN experiments and the national roadmap for research infrastructures, the long-term funding scheme for Poland’s CERN membership has been stabilised. This fact, together with the highlights described here, allows us to expect that in the future CERN will be even more “Polish”.

Herbert Lengeler 1931–2021

Herbert Lengeler

Experimental physicist Herbert Lengeler, who made great contributions to the development of superconducting radiofrequency (SRF) cavities, passed away peacefully on 26 January, just three weeks short of his 90th birthday.

Herbert was born in 1931 in the German-speaking region of Eastern Belgium. He studied mathematics and engineering at the Université Catholique de Louvain in Belgium, and experimental physics at RWTH Aachen University in Germany. He worked there as a scientific assistant and completed his PhD in 1963 on the construction of a propane bubble chamber, going on to perform experiments with this instrument on electron-shower production at the 200 MeV electron synchrotron of the University of Bonn.

In 1964 Herbert was appointed as a CERN staff member in the track chamber and accelerator research divisions. He was involved in the construction, testing and operation of an RF particle separator for a bubble chamber. In 1967 he joined a collaboration between CERN and IHEP in Serpukhov, in the Soviet Union, within which he led the construction of an RF particle separator for both IHEP and the French bubble chamber Mirabelle, which was installed at the same institution.

In 1971 the value of SRF separators for improved continuous-wave particle beams was recognised. This necessitated the use of SRF systems with high fields and low RF losses. Since a development programme for SRF had just been initiated at the Karlsruhe Institute of Technology in Germany, Herbert joined the research centre on behalf of CERN. In the following pioneering period up to 1978, he led the development of full-niobium SRF cavities operated at liquid-helium temperatures, with all required auxiliary systems.

The success of the SRF separator led to ambitious plans for upgrading the energy of LEP at CERN, which were initiated in 1981. A first SRF cavity with its auxiliaries (RF couplers, frequency tuner, cryostat) was installed and successfully tested in 1983 in the PETRA collider at DESY in Hamburg. Following this, in 1987, an SRF cavity with all auxiliaries and a new helium refrigerator was installed and tested at CERN’s SPS. In parallel, Herbert orchestrated the development of niobium sputtering on copper cavities as a cheaper alternative to bulk niobium. Gradually, additional SRF cavities were installed in the LEP collider, resulting in a doubling of its beam energy by the end of its running period in 2000.

From 1989 onwards, Herbert gradually retired from the LEP upgrade programme and devoted more time to other activities at CERN, such as consultancy for SRF activities at KEK, DESY and Jefferson Lab. In 1993 he was appointed project leader for the next-generation neutron source for Europe, the European Spallation Source, a position he held until his retirement from the project and CERN in 1996.

Herbert was always interested in communicating his experience to younger people. From 1989 to 2001 he frequently gave lectures on accelerator physics and technology as an honorary professor at the Technical University of Darmstadt in Germany. In 1998 he was awarded an honorary doctorate from the Russian Academy of Sciences for his contribution to the CERN–IHEP collaboration.

Herbert was an enthusiastic musician. He had been married since 1959 to Rosmarie Müllender-Lengeler, and the couple had four children and 10 grandchildren.

Luc Pape 1939–2021

Luc Pape

Our colleague and friend Luc Pape passed away on 9 April after a brief illness. Luc’s long and rich career covered all aspects of our field, from the early days of bubble-chamber physics in the 1960s and 1970s, to the analysis of CMS data at the LHC.

In the former, Luc contributed to the development of subtle methods of track reconstruction, measurement and event analysis. He participated in important breakthroughs, such as the first evidence for scaling violation in 1978 in neutrino interactions in BEBC and early studies of the structure of the weak neutral current. Luc developed software to allow the identification of produced muons by linking the extrapolated bubble-chamber tracks to the signals of the external BEBC muon identifier.

Luc’s very strong mathematical background was instrumental in these developments. He acquired a deep expertise in software and stayed at the cutting edge of this field. He also exploited clever techniques and rigorous methods, which he adapted in later work. At the end of the bubble-chamber era, Luc was among the experts studying the computing environment of future experiments. He was also one of the people involved in the origin of the Physics Analysis Workstation (PAW) tool.

After this, Luc joined the DELPHI collaboration. Analysing the computing needs of the LEP experiments, he was among the first to realise the necessity of moving from shared central computing to distributed farms for large experiments. He thus conceived, pushed and, with motivated collaborators, built and exploited the DELPHI farm (DELFARM), allowing physicists to rapidly analyse DELPHI data and produce data-summary (DST) files for the whole collaboration. Using his strong expertise in most available software tools, Luc progressively improved track analysis, quality checking and event viewing. DELPHI users will remember TANAGRA (track analysis and graphics package), the backbone of the DELANA (DELPHI analysis) program, and DELGRA for event visualisation.

Luc’s passion for physics never faded. Open-minded, but with a predilection for supersymmetry (SUSY), whose subtle phenomenology he mastered brilliantly, he became the very active leader of the DELPHI SUSY group, and then of the full LEP SUSY group.

After retiring from CERN in 2004, he enjoyed the hospitality of the ETH Zurich group in CMS, to which he brought his expertise on SUSY. Collaborating closely with many young physicists, he introduced into CMS the “stransverse mass” method for SUSY searches, and pioneered several leptonic and hadronic SUSY analyses. He first convened the CMS SUSY/BSM group (2003–2006), then the SUSY physics analysis group (2007–2008), preparing various topological searches to be performed with the first LHC collisions. Responsible for SUSY in the Particle Data Group from 2000 to 2012, he helped define SUSY benchmark scenarios within reach of hadron colliders, present and future. Comforted by the discovery of a light scalar boson in 2012 (a necessary feature of, but not proof of, SUSY), he continued exploring novel analysis methods and strategies to interpret any potential evidence for SUSY particles.

We will remember Luc for the exceptional combination of a genuine enthusiasm for physics, an outstanding competence and rigour in analysis, even in quite technical matters, and a deep concern for young colleagues, with whom he interacted beautifully. Luc also had strong interests in other domains, including cosmology, African ethnicities and arts, and Mesopotamian civilisations. With his wife, he undertook some quite demanding Himalayan treks.

We have lost a most remarkable and complete physicist, a man of great integrity, devoid of personal ambition, a rich personality, interested by many aspects of life, and a very dear friend.
